Article

The Principles of Science


... The inclusion of geology and evolutionary biology in nineteenth-century science complicated this matter because, like astronomy, they were largely observational sciences that did not lend themselves to experiment, and it was experiment more than anything else that characterised the modern approach to science, at least in English-speaking countries. Later writers such as Herschel [116], Whewell [117] and Jevons [118] were even more pluralistic in their approaches than the writers of the seventeenth and eighteenth centuries. They gave voluminous accounts of the various elements of scientific methods, but still they avoided giving a single algorithmic description of "the method." ...
... The function of the lungs is to bring in the pneuma, which mixes with the blood and makes it become arterial; the blood is then warmed up by the innate heat of the heart, and this mixture is pushed into the arteries to the body [118]. The best understanding of human anatomy and physiology is gained from dissection of human bodies. This had been done by the anatomists of Alexandria in the 3rd century B.C. but had become taboo by the time of Galen in the 2nd century A.D., which partly explains Galen's misunderstandings of blood circulation, although Galen had dissected many animals including Barbary Apes. ...
... William Stanley Jevons (1835–1882), a student of de Morgan, wrote Principles of Science (1874) [118]. In this he discussed deduction and induction, describing induction as the reverse of deduction. ...
Preprint
Full-text available
In the Sociology of Scientific Knowledge, it is asserted that science is merely another belief system, and should not be accorded any credibility above other belief systems. This assertion shows a complete misunderstanding of how both science and philosophy work. Not only science but all logic-based philosophies become pointless under the belief system hypothesis. Science, formerly known as natural philosophy, is not a set of facts or beliefs, but rather a method for scrutinising ideas. In this it is far closer to a philosophical tool set than to an ideology. Popper’s view, widely endorsed by scientists, is that science requires disprovable propositions which can be evaluated using available evidence. Science is therefore not a system of belief, but a system of disbelief, which is a very different thing indeed. This paper reviews the origins of the Sociology of Scientific Knowledge, discusses the numerous flaws in its fundamental premises and revisits the views of Michael Polanyi and Karl Popper who have been falsely cited as supporters of these premises. Two appendices are included for reference: one on philosophies of science and one on history of scientific methods. A third appendix on ethics and science has been published separately.
... In this and many other cases we can convert the propositions into affirmative ones which will yield a conclusion by substitution without any difficulty. ([20]: 63) [Fig. 6: Everything is x or y] Jevons states that a negative proposition can be transformed into a positive one. For instance, if we are told that "Some x are not y," we may convert it to "Some x are not-y." ...
... We surveyed various techniques used in the golden age of logic diagrams to treat negative terms. Figure 20 offers a summary of these solutions: (a) Euler (1768) keeps the negative term outside the circle that stands for the positive one, (b) Jevons [20] substitutes a positive term for the negative term and treats it accordingly, (c) Keynes (1894) encloses the universe and, thus, restricts the outer region that stands for the negative term, (d) Peirce (1896) reshapes the diagram to convey the sign of the term, and (e) Venn [44] returns to Euler's original plan but modifies the mode of representation. ...
Chapter
Full-text available
In the common use of logic diagrams, the positive term is conveniently located inside the circle while its negative counterpart is left outside. This practice, already found in Euler’s original scheme, leads to trouble when one wishes to express the non-existence of the outer region or to tackle logic problems involving negative terms. In this chapter, we discuss various techniques introduced by Euler’s followers to overcome this difficulty: some logicians modified the data of the problem at hand, others amended the diagrams, and another group changed the mode of representation. We also consider how modern diagrammatic systems represent negation.
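The substitution trick mentioned in the excerpts above, treating "not-y" as a positive term in its own right, can be made concrete with Jevons's logical alphabet. The following Python sketch is illustrative only: the two-term universe, the universal premise, and the helper names are assumptions for the example, not material from the chapter.

```python
from itertools import product

# Jevons's "logical alphabet": every combination of the terms and their
# complements.  For terms x and y: xy, x(not-y), (not-x)y, (not-x)(not-y).
TERMS = ["x", "y"]
alphabet = [dict(zip(TERMS, values)) for values in product([True, False], repeat=len(TERMS))]

def eliminate(combinations, premise):
    """Strike out the combinations that contradict a premise (Jevons's elimination step)."""
    return [c for c in combinations if premise(c)]

# "All x are not-y": the negative term not-y is treated exactly like a positive term,
# i.e. as the predicate "not c['y']".
all_x_are_not_y = lambda c: (not c["x"]) or (not c["y"])

remaining = eliminate(alphabet, all_x_are_not_y)
for c in remaining:
    print({t: ("" if v else "not-") + t for t, v in c.items()})
# The surviving combinations contain no x that is y -- the conclusion one reads off
# after substituting the positive-looking term "not-y".
```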
... Discussing the second point of difference between the symbolists and the conceptualists, Johnson identifies the "conditional" proposition "if any subject is S, that subject is P" with the categorical proposition "any subject (or all subjects) which is (are) S is (are) P," claiming this form is the same as the form "every S is P." The conceptualists reject this point of view [22, 16–20]. ...
... He called the third class Equivalents, by which he meant that the classes and relations are only viewed differently. The final class, Logically inferrible, he named for the category where the classes and relations are different but involve the same knowledge of the possible combinations as in the third category [20, 119, 120]. ...
Article
Full-text available
I trace the development of implication as an inference operator using as the starting point the ideas primarily of R. Whately, W. Hamilton, and in added material, also J. S. Mill, before discussing the work of the transitional logicians, A. De Morgan and G. Boole on these topics. Although not appreciated until the first third of the twentieth century, Boole’s fundamental law of thought, x2 = x, initiated an analysis of how the algebra of logic differs from ordinary algebra, and subsequently, gave rise to a new inference rule, resolution. In an added section, I provide a roadmap of this development and then discuss the relevant views of four prominent but underappreciated nineteenth-century British logicians, W.E. Johnson, J.N. Keynes, E.E.C. Jones, and H. MacColl, as well as those of the more influential logicians, W.S. Jevons and J. Venn, closing with a section on “implication as inference” where I explore some key ideas of B. Russell and sketch the work of D. Hilbert, P. Hertz, and G. Gentzen who together are responsible for the development of the modern ideas leading to the mechanization of inference schemes.
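Boole's fundamental law mentioned in the abstract has a one-line algebraic consequence; the derivation below is the standard one, stated here for convenience rather than quoted from the article.

```latex
% Boole's "fundamental law of thought" and its algebraic consequence:
\[
x^2 = x \;\Longrightarrow\; x - x^2 = 0 \;\Longrightarrow\; x(1 - x) = 0 .
\]
% In ordinary algebra this forces x \in \{0, 1\}; read logically, with 1 - x as the
% complement "not-x", it says that nothing can both possess and lack the attribute x,
% which is where the algebra of logic parts company with ordinary algebra.
```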
... More than a century ago, Stanley Jevons pointed out that when the defining attributes of a word are expanded (for example, "war" becomes "foreign war"), its empirical depth is narrowed. That is, more focused definitions generally refer to a smaller number of phenomena (Jevons, 1958 [1877]). Below is the graph of the relation between intension and extension. ...
Article
Full-text available
Abstract: The main objective of this work is to present a proposal for the unification of the concept of corruption. To this end, the article is divided into three parts. In the first part we make an inventory of the concepts found among the leading scholars of the subject, together with a critical analysis of those concepts. In the second part we present the new methodology of concepts, set out the criteria for constructing a concept, explain the "Min-Max" strategy, and outline the multidimensionality of concepts at three levels. In the third part we create a new concept of corruption, present its schematic structure, and, finally, test the quality of the concept by means of the methodology presented. We conclude that the new concept has a more precise scope and is more operationalizable.
... Thorough examination and comparison are, and have been, an integral part of all sciences throughout history, and therefore also of the social sciences, in which comparative studies have made a major contribution to developing them into a standalone scientific domain. This has been established as a scientific method since Jevons published his book on the principles of science in 1877 (Jevons, 1877). ...
Thesis
Full-text available
Training courses for the Pikler and the Montessori approaches are in high demand. They are often regarded as two different approaches, although they have a great deal in common. This has already been partially documented by Födinger, Steinschulte Angelika, and by Ongari, Fresco, and Cocever. Almost no studies have examined the principles that sensory integration and psychomotor therapy have in common. Even Aguinaga Hinojosa's study on children with ASD does not compare the two approaches, though she was able to show that children with ASD need both methods. This is one of the reasons for this scholarly work, which shows that combined approaches are more effective. The objective was to show that the four approaches, Montessori and Pikler education as well as Sensory Integration and Psychomotor Therapy, are equally effective and share many methods because their efficacy is built on the same neuroscientific foundation. The four approaches, developed by different personalities in different countries at different times and for different circumstances, children, and issues, appear different because they are applied separately from each other and from other methods, all of which appear to be effective. It could be shown that the four approaches are fully consistent in their principles, methods, and the findings of recent scientific research. It can therefore be concluded that combining the four methods would increase efficacy in education and therapy, as the four approaches together greatly widen the applicability and the areas in which children's development and learning can be supported. Understanding all of this should prompt a rethinking of the education of childcare workers, teachers, and therapists. This thesis therefore ends with recommendations for practitioners as well as for future research. Acknowledgment: I am grateful for the opportunity to complete the "Doctor of Philosophy by Thesis" program at the IIC University of Technology in Phnom Penh, Kingdom of Cambodia. I enjoyed the program, am happy for all the opportunities and support offered, and I hope to continue working with the IIC University of Technology in Phnom Penh.
... Since Jevons's initial contributions (1874; 1884), economic fluctuations have been studied in terms of the relationship between solar cycles, climatic cycles, and cycles of the agricultural sector, and their link with the cycle of commercial credit, accompanied by social, political and economic factors, with an average duration of 10.8 years (in the context of the nineteenth century), as well as their incidence in different (tropical or sub-tropical) regions (Morgan, 1990). However, Jevons's theory was rejected and ridiculed by modern economists and interpreted as a spurious (false) relationship or a meaningless correlation (Yule, 1926). ...
... Hence A = C″ [13]. ...
Chapter
Full-text available
All too often, high school and even university students graduate with only a partial or oversimplified understanding of what the scientific method is and how to employ it. The long-running Discovery Channel television show MythBusters has attracted the attention of political leaders and prominent universities for having the potential to address this problem and help young people learn to think critically. MythBusters communicates many aspects of the scientific method not usually covered in the classroom: the use of experimental controls, the use of logical reasoning, the importance of objectivity, the operational definitions, the small-scale testing, the interpretation of results, and the importance of repeatability of results. In this content analysis, episodes from the show's 10-year history were methodically examined for aspects of the scientific method.
Book
Reasoning from inconclusive evidence, or 'induction', is central to science and any applications we make of it. For that reason alone it demands the attention of philosophers of science. This element explores the prospects of using probability theory to provide an inductive logic: a framework for representing evidential support. Constraints on the ideal evaluation of hypotheses suggest that the overall standing of a hypothesis is represented by its probability in light of the total evidence, and incremental support, or confirmation, indicated by the hypothesis having a higher probability conditional on some evidence than it does unconditionally. This proposal is shown to have the capacity to reconstruct many canons of the scientific method and inductive inference. Along the way, significant objections are discussed, such as the challenge of inductive scepticism, and the objection that the probabilistic approach makes evidential support arbitrary.
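The two notions of support described in this abstract are usually written as follows; this is the standard Bayesian formulation of the view, not a quotation from the element.

```latex
% Overall standing of a hypothesis H on total evidence E: its posterior probability P(H | E).
% Incremental support (confirmation): E confirms H exactly when conditioning raises H's probability,
\[
E \text{ confirms } H \;\iff\; P(H \mid E) > P(H),
\qquad
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} \quad (P(E) > 0).
\]
```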
Article
Full-text available
It is well known that John Venn introduced an ingenious diagrammatic scheme in his 1880 paper “On the Diagrammatic and Mechanical Representation of Propositions and Reasonings.” It is less known that Venn also described there two plans for logical machines, inspired by his diagrams. These machines were said to be analogous to the machine that William S. Jevons constructed a few years earlier. However, Venn had “no high estimate . . . of the interest or importance” of such machines. He argued that such devices perform only a small part of the process required to solve logical problems. Consequently, the help that is offered is very slight. Given that Venn’s machine is founded on his diagrams, one may wonder what Venn’s discussion of logic machines teaches us about his diagrams. The paper argues that Venn failed to notice that his diagrams are vulnerable to the same criticisms that he raised against logic machines.
Chapter
E. E. Constance Jones (1848–1922) is best known for her distinction between connotation and denotation, which predates Frege's analogous distinction between Sinn and Bedeutung. Yet, this focus both misidentifies and limits her significance. While the distinction is important, I argue that the emphasis should be on its role in her law of significant assertion: "Any subject of predication is an identity of denotation in diversity of intension." This law is her central contribution to philosophical logic, not the distinction itself. In the paper, I situate Jones's distinction in the context of this law and reconstruct some of the central debates surrounding it. These all concern her endorsement of Hermann Lotze's identity theory of predication. Although Jones seeks to improve on Lotze by showing how the identity theory need not take true identity statements to be trivial, there is some question as to whether identity plays an essential role in the resulting theory. I will argue that this worry is unfounded: whether or not it is objectionable on other grounds, Jones's theory preserves the core of Lotze's view. There is also a question, first voiced by W. E. Johnson, whether Jones's version of the theory—or indeed any version—generates a regress. This challenge, as I argue, is not so easily dismissed.
Article
There is a sense in which a philosophic theory can be confirmed. We may ask what its effects were on the development of scientific theory—did it clarify ideas and help open up new areas of research, or did it constrain the work of science? In this essay, we shall try to judge the significance of dialectical materialism from this standpoint. We shall be concerned with the bearing of this philosophy on scientific work, especially in the Soviet Union. Now dialectical materialism is not a "philosophy" in the same sense which the word has when applied to, let us say, logical empiricism. The difference is evident when a writer like Haldane asks, for instance, how far scientific discoveries have verified the principles of dialectical materialism. We could not ask in a similar way whether scientific discoveries have confirmed the principles of logical empiricism. For the latter affirms no factual propositions concerning the world; it enunciates solely the principle of verifiability, and there is clearly no scientific discovery which could possibly refute the requirement of verification. If it makes sense to ask how far scientific discoveries verify dialectical materialism, then the latter must obviously contain factual propositions concerning nature which should be stated in a way to permit of confirmation or rejection.
Chapter
It is known that early modern logic tackled complex problems involving a high number of terms. For the purpose, John Venn, and several of his followers, designed algorithms for the construction of complex logic diagrams. This task proved difficult because diagrams tend to lose their visual advantage beyond five or six terms. In this paper, we discuss Charles S. Peirce’s work to overcome this difficulty. In particular, we reconstruct his algorithm for the purpose and compare it with those of his contemporaries Venn and Lewis Carroll. Keywords: Peirce, Venn diagram, Carroll diagram, Complex diagrams
Article
Full-text available
Logician Lewis Carroll published in 1897 a logic of Classes in the symbolic tradition that was growing in his time. Through a comparison of the different editions of this work, this paper discusses some key difficulties that this logician faced in the shaping of his logic. We review consecutively problems and insights related to the formation of classes, the processes of Classification and Division, the relation between Classes and Individuals, the notions of Existence and Imaginariness, the Normal Form of Propositions, and finally the business of Logic.
Preprint
Full-text available
This doctoral dissertation deals with the subject of single-photon technology applications. Particular emphasis was placed on the use of single-photon sources in quantum communication, metrology, and further development of quantum computers. At the beginning of this thesis, the development history of cryptography is described, starting from steganography and ending with quantum key distribution protocols. Later, the key elements of the quantum network, such as end nodes and transmission channels, are presented. Various types of sources and single-photon detectors are discussed. Methods for amplifying the signal and correcting errors that occur during transmission are also mentioned. In the further part of this thesis, two examples of single-photon sources are described, along with their experimental implementation. These sources are based on the nonlinear spontaneous parametric down-conversion (SPDC) process, which has also been described in detail. Parameters that specify the quality of a single-photon source (such as conversion efficiency, phase-matching conditions, singleness and photonic density matrix) are also characterized, both mathematically and experimentally. Furthermore, this doctoral dissertation consists of definitions and elaborated descriptions of quantum optical phenomena, which should be useful for those who are not yet familiar with the subject of single-photon sources and their applications or just start their work in this field.
Article
Full-text available
The use of the symbol ∨ for disjunction in formal logic is ubiquitous. Where did it come from? The paper details the evolution of the symbol ∨ in its historical and logical context. Some sources say that disjunction in its use as connecting propositions or formulas was introduced by Peano; others suggest that it originated as an abbreviation of the Latin word for "or," vel. We show that the origin of the symbol ∨ for disjunction can be traced to Whitehead and Russell's pre-Principia work in formal logic. Because of Principia's influence, its notation was widely adopted by philosophers working in logic (the logical empiricists in the 1920s and 1930s, especially Carnap and early Quine). Hilbert's adoption of ∨ in his Grundzüge der theoretischen Logik guaranteed its widespread use by mathematical logicians. The origins of other logical symbols are also discussed.
Article
This essay offers a history of the development of philosophy of economics from the 1830s until today, with a personal perspective on the developments of the last four decades. It argues that changes in methodology have largely followed changes in practice, although practice and preaching are now in greater accord than earlier. The essay looks forward to fruitful collaboration particularly with respect to causal inference and normative appraisal.
Book
"Explanation, understanding and inference" presents a view of scientific explanation, called "inferentialist", and demonstrates the advantages of this view compared to alternative models and analyses of explanation, discussed in the philosophy of science in the last 70 years. In brief, the inferentialist view boils down to the claim that the qualities of an explanation depend on the inferences that it allows us to make. This statement stands on two premises: (a) the primary function of explanation is to bring us understanding of the object being explained, or to deepen the existing understanding; (b) understanding is manifested in the inferences we make about the object of our understanding and its relations with other objects. Hence, one explanation is good, i.e. it successfully performs the function of bringing us understanding, if it allows us to draw inferences that were not available to us before we had this explanation. The contents of the book include a preface, 11 chapters (divided into 3 parts) and an afterword.
Chapter
We begin with some questions. What constitutes a scientific discovery? How do we tell when a discovery has been made and whom to credit? Is making a discovery (always) the same as solving a problem? Is it an individual psychological event (an aha! experience), or something more articulated such as a logical argument or a mathematical derivation? May discovery require a long, intricate social process? Could it be an experimental demonstration? How do we tell exactly what has been discovered, given that old discoveries are often recharacterized in very different ways by succeeding generations? What kinds of items can be discovered, and how? Is the discovery of a theory accomplished in much the same way as the discovery of a new comet, or is "discovery" an inhomogeneous domain of items or activities calling for quite diverse accounts? Must a discovery be both new and true? How is discovery related to (other?) forms of innovation, such as invention and social construction? Can there be a logic or method of discovery? Just one? Many? What could such a procedure be? How is it possible that an (a priori?) logic or method available now has so much future knowledge already packed into it? How could a logic of discovery itself be discovered? How general in scope must a method of discovery be? Must it apply to all sciences, independently of the subject matter (as we might expect of a "logic"), or might it apply only to problems of a certain type or depend on substantive scientific claims? How, if at all, is their discovery related to the justification of scientific claims? Is the manner in which scientists make discoveries at all similar to the way in which they test them? Is this justificatory "checkout" procedure really part of the larger discovery process rather than distinct from it? Can discoveries be explained rationally, or do they always contain irrational or nonrational elements, such as inspiration or blind luck? Are historians, sociologists, and psychologists better equipped than philosophers to explain scientific creativity? Can a methodology of discovery help to explain the explosion of scientific and technological progress since 1600? If there is no logic of discovery, and if discovery is irrelevant to justification, then why include the subject of discovery in the domain of philosophy (epistemology or methodology of science) at all? What could philosophers have to say about it? Are there historical patterns of discovery, for example, that tell us something about the rationality, if not the logic (in the strict sense), of the growth of scientific knowledge?
Article
This article analyses the relationship between the concept of single aspect similarity and proposed measures of similarity. More precisely, it compares eleven measures of similarity in terms of how well they satisfy a list of desiderata, chosen to capture common intuitions concerning the properties of similarity and the relations between similarity and dissimilarity. Three types of measures are discussed: similarity as commonality, similarity as a function of dissimilarity, and similarity as a joint function of commonality and difference. Relative to the desiderata, it is found that a measure of the second type fares the best. However, rather than recommend this measure alone as a measure of similarity, it is suggested that there are at least three separate concepts of single aspect similarity, corresponding to the three types of measures. In light of this proposal, three of the eleven measures (and variants of these) are deemed acceptable.
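The three families of measures can be illustrated on feature sets. The concrete formulas below (shared-feature count, a decreasing function of the symmetric difference, and the Jaccard ratio) are representative stand-ins chosen for the sketch, not the eleven measures the article actually compares.

```python
# Illustrative single-aspect similarity measures over feature sets A and B.
def commonality(A, B):
    """Similarity as commonality: number of shared features."""
    return len(A & B)

def from_dissimilarity(A, B):
    """Similarity as a decreasing function of dissimilarity (symmetric difference)."""
    return 1.0 / (1.0 + len(A ^ B))

def joint(A, B):
    """Similarity as a joint function of commonality and difference (Jaccard ratio)."""
    union = A | B
    return len(A & B) / len(union) if union else 1.0

A, B = {"red", "round", "small"}, {"red", "round", "large"}
print(commonality(A, B), from_dissimilarity(A, B), joint(A, B))  # 2, 0.333..., 0.5
```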
Article
Full-text available
A task frequently encountered in digital circuit design is the solution of a two-valued Boolean equation of the form h(X, Y, Z) = 1, where h: B_2^(k+m+n) → B_2 and X, Y, and Z are binary vectors of lengths k, m, and n, representing inputs, intermediary values, and outputs, respectively. The resultant of the suppression of the variables Y from this equation can be written in the form g(X, Z) = 1, where g: B_2^(k+n) → B_2. Typically, one needs to solve for Z in terms of X, and hence it is unavoidable to resort to 'big' Boolean algebras, which are finite (atomic) Boolean algebras larger than the two-valued Boolean algebra. This is done by reinterpreting the aforementioned g(X, Z) as g(Z): B_(2^K)^n → B_(2^K), where B_(2^K) is the free Boolean algebra FB(X_1, X_2, ..., X_k), which has K = 2^k atoms and 2^K elements. This paper describes how to unify many digital specifications into a single Boolean equation, suppress unwanted intermediary variables Y, and solve the equation g(Z) = 1 for outputs Z (in terms of inputs X) in the absence of any information about Y. The paper uses a novel method for obtaining the parametric general solutions of the 'big' Boolean equation g(Z) = 1. The parameters used do not belong to B_(2^K) but to the two-valued Boolean algebra B_2, also known as the switching algebra or propositional algebra. To achieve this, we have to use distinct independent parameters for each asserted atom in the Boole-Shannon expansion of g(Z). The concepts and methods introduced herein are demonstrated via several detailed examples, which cover the most prominent types among basic problems of digital circuit design.
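A brute-force sketch of the suppression step described above: eliminating the intermediary variables Y from h(X, Y, Z) = 1 amounts to an existential quantification over Y, after which the admissible outputs Z for a given input X can be enumerated. The toy specification h and the vector sizes below are invented for illustration; the paper's symbolic 'big'-Boolean-algebra machinery and its parametric solutions are not reproduced here.

```python
from itertools import product

def suppress(h, m):
    """Resultant of suppressing the m intermediary variables Y from h(X, Y, Z) = 1:
    g(X, Z) is true when some Y makes h true (existential quantification over Y)."""
    def g(X, Z):
        return any(h(X, Y, Z) for Y in product((0, 1), repeat=m))
    return g

def solutions_for_outputs(g, X, n):
    """All output vectors Z of length n satisfying g(X, Z) = 1 for the given inputs X."""
    return [Z for Z in product((0, 1), repeat=n) if g(X, Z)]

# Toy specification with k = 2 inputs, m = 1 intermediary, n = 2 outputs (illustrative only):
# h = 1 when y equals x1 AND x2, z1 equals y, and z2 equals NOT y.
def h(X, Y, Z):
    (x1, x2), (y,), (z1, z2) = X, Y, Z
    return int(y == (x1 & x2) and z1 == y and z2 == 1 - y)

g = suppress(h, m=1)
print(solutions_for_outputs(g, X=(1, 1), n=2))  # [(1, 0)]
print(solutions_for_outputs(g, X=(1, 0), n=2))  # [(0, 1)]
```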
Article
Areas of endemism represent territories (no matter the size) of non-random overlap in the geographic distribution of two or more taxa, reflecting a common spatial history of these taxa. The common spatial history is a result of different processes that connect areas of endemism to evolutionary theory. Numerous and diverse definitions of areas of endemism have been proposed. All of them have used as the conceptual foundation of the definition a certain degree of non-random congruence of geographic distribution amongst at least two taxa. ‘Certain degree’ means that geographic congruence does not demand complete agreement on the boundaries of those taxa's distributions at all possible scales of mapping. The words ‘certain degree’ mask the polythetic nature of areas of endemism. The polythetic characterization of areas of endemism implies that each locality of the study area has a large number of a set of species. Each species of this set is present in many of those localities and, generally, none of those species is present in every locality of the area. The converse will be a monothetic nature of areas of endemism where a taxon or group of taxa is present in all the localities of the study area. We propose here that the expansion of the definition of areas of endemism, including their polythetic characterization, will improve understanding of large biogeographic areas such as realms, regions, provinces, and districts, and will increase the scientific content (e.g., predictive capability and explanatory power) of areas of endemism.
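A rough sketch of the monothetic/polythetic contrast on a presence/absence matrix (localities mapped to sets of taxa). The definition deliberately leaves "many" and "a large number" vague, so the numeric threshold below is an arbitrary illustration, not part of the proposal.

```python
def monothetic(matrix):
    """Monothetic: some taxon is present in every locality of the study area."""
    localities = list(matrix.values())
    taxa = set().union(*localities) if localities else set()
    return any(all(t in loc for loc in localities) for t in taxa)

def polythetic(matrix, min_share=0.5):
    """Polythetic (rough reading): each locality holds many taxa of a shared set, each
    such taxon occurs in many localities, and none occurs in every locality.
    min_share is an arbitrary illustrative threshold for 'many'."""
    localities = list(matrix.values())
    n = len(localities)
    taxa = set().union(*localities)
    widespread = {t for t in taxa if sum(t in loc for loc in localities) >= min_share * n}
    each_locality_rich = all(len(loc & widespread) >= min_share * len(widespread)
                             for loc in localities)
    return each_locality_rich and not monothetic(matrix)

area = {"loc1": {"a", "b"}, "loc2": {"a", "c"}, "loc3": {"b", "c"}}
print(monothetic(area), polythetic(area))  # False True
```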
Article
Full-text available
There is something extreme about Ludwig von Mises’s methodological apriorism, namely, his epistemological justification of the a priori element(s) of economic theory. His critics have long recognized and attacked the extremeness of Mises’s epistemology of a priori knowledge. However, several of his defenders have neglected what is (and what has long been recognized by his critics to be) extreme about Mises’s apriorism. Thus, the argument is directed less against Mises than against those contributions to the secondary literature that assert his methodological moderation while overlooking what the most prominent critics have found extreme about Mises’s apriorism. Defending Mises as a merely moderate apriorist because he held only a narrow part of the foundation of economics to be a priori is a straw-man defense against criticisms of his apriorism as epistemologically extreme.
Chapter
Probability logic is generally understood today as a logic which assigns to propositions not just two truth-values but a whole series of such values, variously called probabilities of truth, degrees of confirmation, degrees of likelihood, etc. [1]. As distinguished from classical mathematical logic which operates with two truth-values, probability logic has to do with a range of such values, which is, in principle, unlimited. It is for this reason a branch of many-valued logic. However, while the other systems of many-valued logic have to do with a set of discrete truth-values, probability logic deals with a continuous scale of values.
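One standard way to see the "continuous scale of values" at work is through the classical bounds on compound propositions (going back to Boole); the inequalities below are a textbook illustration, not a passage from the chapter.

```latex
% If P(A) = p and P(B) = q, the probability of the compound propositions is only
% constrained to lie in an interval, not fixed to a single truth-value:
\[
\max(0,\, p + q - 1) \;\le\; P(A \wedge B) \;\le\; \min(p, q),
\qquad
\max(p, q) \;\le\; P(A \vee B) \;\le\; \min(1,\, p + q).
\]
```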
Chapter
Security always involves making judgements in which the possibilities of harm to someone's interests are assessed with respect to possible actions that can be taken to reduce the likelihood of that harm occurring, and/or to reduce the intensity of the harm when it occurs. Where the beneficiary of the reduction in risk is also the party footing the bill (in direct terms such as paying for the installation of locks or the wages of security guards, or indirect in terms of reduced opportunities), a fairly standard approach to cost–benefit analyses can be deployed. As other chapters in this volume discuss, though, it is rarely that simple, and security decisions very often place burdens on those who have no voice in the decision-making processes. In this chapter, we first take a step back from questions of security per se and present an abstract philosophical framework called 'ethics' which provides tools for designing systems of decision-making to take into account a broader range of goals than just improving the immediate security of the decision-maker. This framework is then instantiated with two sets of real-world security issues. These are used to bring the principles to life using real-world examples (or thought experiments drawn from one or more real-world examples but simplified and sharpened to bring out the ethical issues in sharp relief) and demonstrate not only the complexity of the ethical dilemmas posed in security, but also the benefits of using ethical analysis approaches in making security policy decisions, and in particular from having ethical principles which inform both policy and practical security decisions.
Chapter
This chapter presents the methodological principles that have been applied in modeling economic events. The neoclassical framework is compared with that of Econophysics, and the parallels and conflicts between them are identified. The axioms of modeling economic phenomena are then defined in a new way that is based on the mental character of human beings. Traditionally, microeconomic modeling has been based on the decision-making of human beings, and we show that this is analogous to the behavior of a steelyard in physics. On this basis, the principle of modeling in economics that is applied throughout the book is defined.
Chapter
A unit is a concrete magnitude selected as a standard by reference to which other magnitudes of the same kind may be compared. A derived unit is a unit determined with reference to some other unit. Thus the unit of area may be derived from the unit of length by being defined as the area of the square erected on the unit of length. The unit of speed may be derived from the unit of length and the unit of time, by being defined as that speed at which the unit of length is traversed in the unit of time. In relation to the derived units of area and speed, the units of length and time would then be fundamental—'fundamental' being a term correlative to 'derived'.
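A minimal sketch of the fundamental/derived distinction in code, with each unit recorded as powers of the fundamental units of length and time; the class and unit names are illustrative assumptions, not drawn from the chapter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Unit:
    """A unit expressed as powers of the fundamental units of length and time."""
    name: str
    length: int = 0   # exponent of the unit of length
    time: int = 0     # exponent of the unit of time

METRE = Unit("metre", length=1)
SECOND = Unit("second", time=1)

def derive(name, numerator, denominator=None):
    """A derived unit is determined with reference to other units (here, as a ratio)."""
    d = denominator or Unit("one")
    return Unit(name, numerator.length - d.length, numerator.time - d.time)

SQUARE_METRE = Unit("square metre", length=2)                  # area: square on the unit of length
METRE_PER_SECOND = derive("metre per second", METRE, SECOND)   # unit length traversed in unit time
print(METRE_PER_SECOND)  # Unit(name='metre per second', length=1, time=-1)
```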
Chapter
Induction, in its most general form, is the making of inferences from the observed to the unobserved. Thus, inferences from the past to the future, from a sample to the population, from data to an hypothesis, and from observed effects to unobserved causes are all aspects of induction, as are arguments from analogy. A successful account of induction is required for a satisfactory theory of causality, scientific laws, and predictive applications of economic theory. But induction is a dangerous thing, and especially so for those who lean towards empiricism, the view that only experience can serve as the grounds for genuine knowledge. Because induction, by its very nature, goes beyond the observed, its use is inevitably difficult to justify for the empiricist. In addition, inductive inferences differ from deductive inferences in three crucial respects. First, the conclusion of an inductive inference does not follow with certainty from the premises, but only with some degree of probability. Second, whereas valid deductive inferences retain their validity when extra information is added to the premises, inductive inferences may be seriously weakened. Third, whereas there is widespread agreement upon the correct characterization of deductive validity, there is widespread disagreement about what constitutes a correct inductive argument, and indeed whether induction is a legitimate part of science at all.
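The second contrast, the lack of monotonicity, can be seen in a toy Bayesian example: evidence that initially supports a hypothesis can be undercut when further information is added, whereas a valid deduction remains valid however the premises are supplemented. The numbers below are invented purely to exhibit the effect.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule for a single hypothesis H against its negation."""
    num = prior * likelihood_h
    return num / (num + (1 - prior) * likelihood_not_h)

prior = 0.5
# Evidence E1 favours H ...
p_after_e1 = posterior(prior, likelihood_h=0.8, likelihood_not_h=0.2)
# ... but further evidence E2 (given E1) tells against H, weakening the inference.
p_after_e1_e2 = posterior(p_after_e1, likelihood_h=0.1, likelihood_not_h=0.6)
print(round(p_after_e1, 3), round(p_after_e1_e2, 3))  # 0.8 then 0.4
```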
Conference Paper
Full-text available
We commonly represent a class with a curve enclosing individuals that share an attribute. Individuals that are not predicated with that attribute are left outside. The status of this outer class has long been a matter of dispute in logic. In modern notations, negative terms are simply expressed by labeling the spaces that they cover. In this note, we discuss an unusual (and previously unpublished) method designed by Peirce in 1896 to handle negative terms: to indicate the position of the terms by the shape of the curve rather than by labeling the spaces.
Chapter
Charles Peirce’s development of his diagrammatic logic, his entitative and existential graphs, was significantly influenced by his affinity for tree graphs that were being used in chemistry. In his development of systems of natural deduction and sequent calculi, Gerhard Gentzen made use of the tree (tableau) method. In presenting the historical sources of both these tools, we draw on unpublished manuscripts from the Peirce Edition Project at the University of Indianapolis, where for many years the first author was a member of the research staff.
Chapter
The latter half of the nineteenth century was a heroic period for formal logic. The new mathematical tools for analysing the art of reasoning liberated logicians from their traditional chains. This event gave birth to the modern research programme of mathematical logic, which has ever since been able to keep up a high rate of progress with a wealth of novel techniques, results, and applications.
Chapter
I have chosen to speak about the use of analogy in scientific argument, and I ought to begin by clearing away a number of potential misconceptions: I cannot completely ignore the historical origins of the word ‘analogy’, but the time at my disposal is much too valuable to be spent in tracing its Aristotelian pedigree in detail. On the other hand, I have no wish to do what so many contemporary philosophers do, that is to say, treat the word as synonymous with the word ‘model.’ What I shall say has much to do with the notion of scientific model, but my perspective will be rather different from that of a historian searching for the various uses of models in science. I repeat that my concern is with analogical argument. As you will see in due course, the sorts of argument that count as analogical are usually reckoned to be rather weak, and even rather dangerous. There is no point in my trying to persuade you that things are otherwise. Mine will not be a history of ‘positive science’, in Comte’s sense, but a history of tentative science, and of certain methods of conjecture. Analogy is the basis for much scientific conjecture, but even conjecture is an art, which can be done well, done rationally, that is, even though it might prove in the end to have yielded a false conclusion.
Chapter
Induction, in its most general form, is the making of inferences from the observed to the unobserved. Thus, inferences from the past to the future, from a sample to the population, from data to an hypothesis, and from observed effects to unobserved causes are all aspects of induction, as are arguments from analogy. A successful account of induction is required for a satisfactory theory of causality, scientific laws and predictive applications of economic theory. But induction is a dangerous thing, and especially so for those who lean towards empiricism, the view that only experience can serve as the grounds for genuine knowledge. Because induction, by its very nature, goes beyond the observed, its use is inevitably difficult to justify for the empiricist. In addition, inductive inferences differ from deductive inferences in three crucial respects. First, the conclusion of an inductive inference does not follow with certainty from the premises, but only with some degree of probability. Second, whereas valid deductive inferences retain their validity when extra information is added to the premises, inductive inferences may be seriously weakened. Third, whereas there is widespread agreement upon the correct characterization of deductive validity, there is widespread disagreement about what constitutes a correct inductive argument, and indeed whether induction is a legitimate part of science at all.
Chapter
A general theory for obtaining limits within which an unknown parameter can be asserted to lie with known probability was developed by Fisher and Neyman in the 1930s. The basic device through which such limits are obtained consists in "solving" a probability statement which involves an unknown parameter θ for this parameter. The resulting statement concerning θ is then equivalent to the original statement and thus holds with the same known probability.
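A textbook instance of the inversion described here (not drawn from the chapter itself): for a normal observation X with known standard deviation σ,

```latex
% Start from a probability statement involving the unknown parameter \theta ...
\[
P\!\left(-1.96 \le \frac{X - \theta}{\sigma} \le 1.96\right) = 0.95 ,
\]
% ... and "solve" it for \theta to obtain limits that hold with the same known probability:
\[
P\!\left(X - 1.96\,\sigma \;\le\; \theta \;\le\; X + 1.96\,\sigma\right) = 0.95 .
\]
```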
Chapter
Stanley Jevons wrote these words more than a century ago in The Principles of Science (Jevons, 1958, p. 1). Yet even today, The Principles of Science is a guidepost for defining what science is and how it is conducted.
Chapter
Forecasts are answers to questions of the form ‘What will happen to x if p is the case?’, ‘When will x happen if p obtains?’, and the like. In science such answers are called predictions and are contrived with the help of theories and data: scientific prediction is, in effect, an application of scientific theory. Prediction enters our picture of science on three counts: (i) it anticipates fresh knowledge and therefore (ii) it is a test of theory and (iii) a guide to action. In this chapter we shall be concerned with the purely cognitive function of prediction, i.e. with foresight. The methodological aspect of prediction (its test function) will concern us in Ch. 15, and the practical side (planning) in the next chapter.
Chapter
Logic is the analysis of linguistic statements with respect to the formal correctness of the concepts, judgements, and inferences they contain. It follows from this definition that language and thought existed before logic; indeed, every mentally healthy person, even one untrained in logic, is capable of expressing formally correct thoughts. The formal correctness of thinking is an important precondition for a person's being able to fulfil the tasks life imposes. The laws of logic are by no means alien to the psychological regularities of our thinking; they relate to them, to use a comparison of A. Pauler's, as the laws of breathing relate to the activity of the lungs.
Chapter
Having discussed in the preceding chapters the general psychological and logical relations of knowledge, we now turn to the actual working methods of science, examining the ways and means by which it arrives at its results. With the help of psychology and logic alone we can at best build up the analytical and formal sciences, mathematics and logic; all empirical sciences, by contrast, presuppose a special technique, more or less specifically adapted to each field of knowledge, which makes it possible to grasp the reality underlying them; it is the fundamentals of this methodology that we now wish to set out.
Chapter
By 'psychologism' we understand the explanation of human achievements and actions in terms of the mental motives from which they arise, on the presupposition that these actions and achievements are simply identical with the psychological processes involved in their genesis, and that these achievements are nothing more than psychical phenomena.
Chapter
The place of Leon Walras in the history of Western economic thought would appear honorable and secure. One of the earliest to proclaim his stature was Joseph Schumpeter (1954, p. 827): “So far as pure theory is concerned, Walras is in my opinion the greatest of all economists. His system of economic equilibrium, uniting as it does, the quality of ‘revolutionary’ creativeness with the quality of classic synthesis, is the only work by an economist that will stand in comparison with theoretical physics.” In the interim, this conviction has become institutionalized to such an extent that the recipient of the 1983 Nobel Prize in economics could assert that, “Walras wrote one of the greatest classics, if not the greatest, in our science” (Debreu, 1984, p. 268).
Chapter
In order to understand reason's irrationality, it is necessary to review the field upon which reason acts. Nature, the world, or the matrix is revealed to the mind through pressures which the mind cannot overcome. The matrix is the total field: the becoming-being-steered, both flux and law in union. It is composed of groupings of gestalten, internally related — each gestalt a particular patternment of material. Gestalten are related, forming large and natural groupings — configurations; these configurations group themselves together in still larger unities forming thereby the body of the universe, the matrix. The world is, and is marked by embodiments of coherent structures in internal relation. Nature, then, is everything in concatenation. Nature is not simply a manifold existing independently of law; nor is it only pure formula and invisible law. These are creations of certain aspects of reason.
Chapter
In daily life, the terms ‘specification’ and ‘design’ are intuitively understood, and refer to other intuitive notions, such as ‘product’, ‘process’ and ‘artefact’. Through practice, these terms have acquired more precise meaning in the classical disciplines of engineering, but their casual usage still causes much uncertainty and confusion in many fields. Special concern arises in software-based systems which are increasingly entrusted with essential tasks in industry, finance, public administration and control of safety-sensitive plant. It is proving hard to specify and design such systems adequately, and to define for them standards which could assure their quality and safety.
Chapter
Full-text available
According to Volney Stefflre (personal communication) theories of category formation can be divided into three general kinds: (a) realist theories, which argue that “people categorize the world the way they do because that’s the way the world is”; (b) innatist theories, which argue that “people categorize the world the way they do because that’s the way people are”; and (c) social construction theories, which argue that people categorize the world the way they do because they have participated in social practices, institutions, and other forms of symbolic action (e.g., language) that presuppose or in some way make salient those categorizations. The “constructive” part of a social construction theory is the idea that equally rational, competent, and informed observers are, in some sense, free (of external realist and internal innate constraints) to constitute for themselves different realities; and the cognate idea, articulated by Goodman (1968, 1972, 1978), that there are as many realities as there are ways “it” can be constituted or described (also see Nagel 1979, pp. 211–213). The “social” part of a social construction theory is the idea that categories are vicariously received, not individually invented; and the cognate idea that the way one divides up the world into categories is, in some sense, tradition-bound, and thus transmitted, communicated and “passed on” through symbolic action.
Chapter
It was already fairly late in the development of economic theory that aggregation of individual economic relations into macro relations was recognized as a serious problem. This is illustrated in this paper by pointing to the more or less incidental way in which aggregation problems are mentioned and solved in the older literature till the famous Econometrica debate in the late forties. In particular, we try to disentangle the notions of consistency and representativity.
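The consistency issue the paper traces can be stated in one line with linear micro relations; the consumption-function example below is a standard illustration, not taken from the paper.

```latex
% Micro relations c_i = a_i + b_i y_i aggregate to
\[
C = \sum_i c_i = \sum_i a_i + \sum_i b_i y_i .
\]
% Only if all marginal propensities are equal, b_i = b, does a macro relation of the same
% form hold exactly in the aggregates alone:
\[
C = A + b\,Y, \qquad A = \sum_i a_i,\quad Y = \sum_i y_i ;
\]
% otherwise any "representative" coefficient depends on the distribution of the y_i.
```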
Article
Re-examining Language Testing explores ideas that form the foundations of language testing and assessment. The discussion is framed within the philosophical and social beliefs that have forged the practices endemic in language education and policy today. From historical and cultural perspectives, Glenn Fulcher considers the evolution of language assessment, and contrasting claims made about the nature of language and human communication, how we acquire knowledge of language abilities, and the ethics of test use. The book investigates why societies use tests, and the values that have driven changes in practice over time. The discussion is presented within an argument that an Enlightenment inspired view of human nature and advancement is most suited to a progressive, tolerant, and principled theory of language testing and validation. Covering key topics such as measurement, validity, accountability and values, Re-examining Language Testing provides a unique and innovative analysis of the ideas and social forces that shape the practice of language testing. It is an essential read for advanced undergraduate and postgraduate students of Applied Linguistics and Education. Professionals working in language testing and language teachers will also find this book invaluable.
Article
Full-text available
What role have theoretical methods initially developed in mathematics and physics played in the progress of financial economics? What is the relationship between financial economics and econophysics? What is the relevance of the “classical ergodicity hypothesis” to modern portfolio theory? This paper addresses these questions by reviewing the etymology and history of the classical ergodicity hypothesis in 19th century statistical mechanics. An explanation of classical ergodicity is provided that establishes a connection to the fundamental empirical problem of using nonexperimental data to verify theoretical propositions in modern portfolio theory. The role of the ergodicity assumption in the ex post/ex ante quandary confronting modern portfolio theory is also examined.
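A small simulation of the distinction the paper turns on: for a multiplicative growth process the ensemble (ex ante) expectation and the time (ex post) average of the growth factor need not agree, so the process fails to be ergodic in the relevant sense. The growth factors below are invented for illustration.

```python
import math
import random

random.seed(0)
UP, DOWN = 1.5, 0.6      # equally likely growth factors; arithmetic mean 1.05 > 1
T, N = 100_000, 100_000  # horizon for the time average, ensemble size

# Ensemble (ex ante) expectation of the one-period growth factor.
ensemble_avg = sum(random.choice((UP, DOWN)) for _ in range(N)) / N

# Time (ex post) average growth factor along one long trajectory (geometric mean).
time_avg = math.exp(sum(math.log(random.choice((UP, DOWN))) for _ in range(T)) / T)

print(round(ensemble_avg, 3))  # close to 1.05
print(round(time_avg, 3))      # close to sqrt(1.5 * 0.6) ≈ 0.949
```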