... Occam's Razor, a principle that was described by Aristotle (Charlesworth, 1956), states that one should prefer the simplest hypothesis, or model, that does the job. A review of 32 studies found 97 comparisons between simple and complex methods. None found that complexity improved forecast accuracy. ...
Purpose: Commentary on the M4-Competition and its findings, to assess the contribution of data models, such as those from machine learning methods, to improving forecast accuracy.
Methods: (1) Use prior knowledge on the relative accuracy of forecasts from validated forecasting methods to assess the M4 findings. (2) Use prior knowledge on forecasting principles and the scientific method to assess whether data models can be expected to improve accuracy relative to forecasts from previously validated methods under any conditions.
Findings: Prior knowledge from experimental research is supported by the M4 findings that simple validated methods provided forecasts that are: (1) typically more accurate than those from complex and costly methods; (2) considerably more accurate than those from data models.
Limitations: Conclusions were limited because prior knowledge did not provide hypotheses complete enough to permit experimental tests of which methods, and which individual models, would be most accurate under which conditions.
Implications: Data models should not be used for forecasting under any conditions. Forecasters interested in situations where much relevant data are available should use knowledge models.
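The accuracy comparisons referred to in the abstract above were made on held-out data using simple benchmark methods and scale-free error measures; point-forecast accuracy in the M4-Competition was reported chiefly with sMAPE and MASE. As a minimal sketch, assuming a hypothetical monthly series and a seasonal-naive benchmark, the following shows how such a benchmark forecast and its sMAPE could be computed; it is illustrative only and not taken from the commentary itself.

```python
import numpy as np

def seasonal_naive(history, horizon, season=12):
    """Forecast each future point with the value observed one season earlier."""
    history = np.asarray(history, dtype=float)
    reps = int(np.ceil(horizon / season))
    return np.tile(history[-season:], reps)[:horizon]

def smape(actual, forecast):
    """Symmetric MAPE (in percent), one of the point-forecast measures used in M4."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual)
                           / (np.abs(actual) + np.abs(forecast)))

# Hypothetical monthly series: three years of history, one year held out.
rng = np.random.default_rng(0)
t = np.arange(48)
series = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 48)
history, actual = series[:36], series[36:]

forecast = seasonal_naive(history, horizon=12, season=12)
print(f"sMAPE of seasonal-naive benchmark: {smape(actual, forecast):.2f}%")
```

Any candidate method, simple or complex, can be scored with the same measure on the same held-out data, which is the basis of the relative-accuracy comparisons discussed above.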
... This does not necessarily imply that nature itself is parsimonious, or that most parsimonious theories are true. Aristotle (350 B.C.E.) articulated a different view of the principle of parsimony, that "nature operates in the shortest way possible" and "the more limited, if adequate, is always preferable" (Charlesworth 1956). This postulates that nature itself is parsimonious, using the principle in an ontological rather than epistemological manner. ...
We review Hennigian phylogenetics and compare it with Maximum parsimony, Maximum likelihood, and Bayesian likelihood approaches. All methods use the principle of parsimony in some form. Hennigian-based approaches are justified ontologically by the Darwinian concepts of phylogenetic conservatism and cohesion of homologies, embodied in Hennig's Auxiliary Principle, and applied by outgroup comparisons. Parsimony is used as an epistemological tool, applied a posteriori to choose the most robust hypothesis when there are conflicting data. Quantitative methods use parsimony as an ontological criterion: Maximum parsimony analysis uses unweighted parsimony, Maximum likelihood weights equally all characters that explain the data, and Bayesian likelihood relies on weighting each character partition that explains the data. Different results most often stem from insufficient data, in which case each quantitative method treats ambiguities differently. All quantitative methods produce networks. The networks can be converted into trees by rooting them. If the rooting is done in accordance with Hennig's Auxiliary Principle, using outgroup comparisons, the resulting tree can then be interpreted as a phylogenetic hypothesis. As the size of the data set increases, likelihood methods select models that allow an increasingly greater number of a priori possibilities, converging on the Hennigian perspective that nothing is prohibited a priori. Thus, all methods produce similar results, regardless of data type, especially when their networks are rooted using outgroups. Appeals to Popperian philosophy cannot justify any kind of phylogenetic analysis, because they argue from effect to cause rather than from cause to effect. Nor can particular methods be justified on the basis of statistical consistency, because all may be consistent or inconsistent depending on the data. If analyses using different types of data and/or different methods of phylogeny reconstruction do not produce the same results, more data are needed.
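The unweighted parsimony criterion mentioned above for Maximum parsimony analysis can be scored on a candidate rooted binary tree with Fitch's small-parsimony algorithm: a post-order pass that keeps the intersection of the two child state sets when it is non-empty, and otherwise keeps the union and counts one change. Below is a minimal sketch under a hypothetical four-taxon tree and a single binary character; it scores a given tree only and does not perform tree search.

```python
def fitch_length(tree, states):
    """Minimum number of state changes for one character on a rooted binary tree.

    tree   : nested tuples of taxon names, e.g. (("A", "B"), ("C", "D"))
    states : dict mapping taxon name -> observed character state
    """
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):              # leaf: its observed state
            return {states[node]}
        left, right = (post_order(child) for child in node)
        common = left & right
        if common:                             # children agree: no change needed
            return common
        changes += 1                           # children conflict: count one change
        return left | right

    post_order(tree)
    return changes

# Hypothetical example: one binary character on a four-taxon tree.
tree = (("A", "B"), ("C", "D"))
character = {"A": "0", "B": "0", "C": "1", "D": "1"}
print(fitch_length(tree, character))   # -> 1 (a single 0 -> 1 change suffices)
```

Summing this length over all characters gives the tree's parsimony score; the preferred tree under this criterion is the one with the smallest total.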
... The trend is at odds with the common belief among scientists that scientists should strive for simplicity. The preference for simplicity in science can be traced back to Aristotle (Charlesworth, 1956), and is commonly identified with the 14th Century formulation, Occam's razor. Indeed "since that time, it has been accepted as a methodological rule in many sciences to try to develop simple models" (Jensen, 2001, p. 282). ...
This article introduces the Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods—including those in this special issue—found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
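The 27 percent figure above summarizes relative error: how much larger the out-of-sample error of a complex method is than that of a simple method on the same forecasting task. A minimal sketch of that arithmetic follows, using hypothetical mean absolute errors rather than values from any of the reviewed papers.

```python
def error_increase_pct(simple_error, complex_error):
    """Percentage by which the complex method's error exceeds the simple method's."""
    return 100.0 * (complex_error - simple_error) / simple_error

# Hypothetical mean absolute errors from a single comparison.
simple_mae, complex_mae = 8.0, 10.2
print(f"Complexity increased error by "
      f"{error_increase_pct(simple_mae, complex_mae):.1f}%")   # -> 27.5% here
```

Averaging such percentages across the comparisons in a paper, and then across papers, yields summary figures of the kind reported above.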
... "There is, perhaps, no beguilement more insidious and dangerous than an elaborate and elegant mathematical process built upon unfortified premises." Chamberlin (1899, p. 890) The call for simplicity in science goes back at least to Aristotle, but the 14 th century formulation, Occam's razor, is more familiar (Charlesworth, 1956). The use of complex methods reduces the ability of potential users and other researchers to understand what was done and therefore to detect mistakes and assess uncertainty. ...
... This does not necessarily imply that nature itself is parsimonious. Aristotle (350 BCE) articulated an ontological basis for the principle of parsimony, the postulate that "nature operates in the shortest way possible" and "the more limited, if adequate, is always preferable" (Charlesworth 1956). This sense of the principle postulates that nature is itself parsimonious in some manner. ...
This chapter reviews Hennigian, maximum likelihood, and Bayesian approaches to quantitative phylogenetic analysis and discusses their strengths and weaknesses and protocols for assessing the relative robustness of one’s results. Hennigian approaches are justified by the Darwinian concepts of phylogenetic conservatism and the cohesion of homologies, embodied in Hennig’s Auxiliary Principle, and applied using outgroup comparisons. They use parsimony as an epistemological tool. Maximum likelihood and Bayesian likelihood approaches are based on an ontological use of parsimony, choosing the simplest model possible to explain the data. All methods identify the same core of unambiguous data in any given data set, producing highly similar results. Disagreements most often stem from insufficient numbers of unambiguous characters in one or more of the data types. If analyses based on different types of data or using different methods of phylogeny reconstruction, or some combination of both, do not produce the same results, more data are needed. New developments in the application of phylogenetic methods in paleoanthropology have resulted in major advances in the understanding of morphological character development, modes of speciation, and the recent evolutionary history of the human species.
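In likelihood and Bayesian phylogenetics, "choosing the simplest model possible to explain the data" is commonly operationalized with an information criterion such as AIC, which penalizes additional free parameters. The sketch below is illustrative only: the candidate substitution models (JC69, HKY85, GTR) and their typical free-parameter counts are standard, but the maximized log-likelihoods are hypothetical placeholders, and the chapter itself does not necessarily use AIC.

```python
# AIC = 2k - 2*lnL; the model with the lowest AIC is preferred.
# Parameter counts exclude branch lengths (assumed shared across models);
# the log-likelihood values are hypothetical placeholders.
candidates = {
    # model : (free substitution/frequency parameters, maximized log-likelihood)
    "JC69":  (0, -5230.4),
    "HKY85": (4, -5121.7),
    "GTR":   (8, -5118.9),
}

def aic(k, lnL):
    return 2 * k - 2 * lnL

scores = {name: aic(k, lnL) for name, (k, lnL) in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:6s} AIC = {score:.1f}")
print("Selected model:", min(scores, key=scores.get))
```

In this made-up case the extra parameters of GTR do not improve the likelihood enough to offset the penalty, so the simpler HKY85 is selected; with larger data sets the likelihood advantage of richer models tends to grow faster than the fixed parameter penalty, so more parameter-rich models are increasingly favored.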
... Charlesworth (1956) argues that the claim for simplicity commonly ascribed to William of Ockham ("Ockham's razor") had already been articulated by Aristotle. ...
According to Kuhn, science and progress are strongly interrelated. In this paper, we define criteria of progress for design theories. A broad analysis of the literature on information systems design science reveals that there is no consensus on the criteria of progress for design theories. We therefore analyze different concepts of progress for natural science theories. Based on well-founded criteria stemming from the philosophy of science and referring to natural science theories, we develop a set of criteria of progress for design theories. In summary, our analysis results in six criteria of progress for design theories: A design theory is partially progressive compared to another if it is ceteris paribus (1) more useful, (2) internally more consistent, (3) externally more consistent, (4) more general, (5) simpler, or (6) more fruitful of further research. Although the measurement of these criteria is not the focus of this paper, the problem of measurement cannot be totally neglected. We therefore discuss different methods for measuring the criteria based on different concepts of truth: the correspondence theory of truth, the coherence theory of truth, and the consensus theory of truth. We finally show the applicability of the criteria with an example.
A pervasive aspect of human communication and sociality is argumentation: the practice of making and criticizing reasons in the context of doubt and disagreement. Argumentation underpins and shapes the decision-making, problem-solving, and conflict management which are fundamental to human relationships. However, argumentation is predominantly conceptualized as two parties arguing pro and con positions with each other in one place. This dyadic bias undermines the capacity to engage argumentation in complex communication in contemporary, digital society. This book offers an ambitious alternative course of inquiry for the analysis, evaluation, and design of argumentation as polylogue: various actors arguing over many positions across multiple places. Taking up key aspects of the twentieth-century revival of argumentation as a communicative, situated practice, the polylogue framework engages a wider range of discourses, messages, interactions, technologies, and institutions necessary for adequately engaging the contemporary entanglement of argumentation and complex communication in human activities.
Models are of great importance in science and engineering. Theories are almost invariably expressed using mathematical models. All engineering calculations are performed using models. Modeling is the process of generating a model as a conceptual representation of some phenomenon. Important issues related to the development and application of models in reinforced concrete (RC) are discussed in this chapter.
Simplicity was recently described in the philosophy of science as “perhaps the most controversial theoretical virtue” (Schindler 2018). It has also been argued that contrary to the standard view, simplicity is not merely a pragmatic virtue but also an epistemic one. Virtue epistemologists are also interested in epistemic virtues, but simplicity is usually absent in their discussions. This paper adduces several contemporary approaches to simplicity showing that in philosophy and in psychology it can be considered either as a virtue or as a vice. The paper then refers to the thought of Thomas Aquinas as an example of a premodern understanding of the virtue simplicity and suggests that his notion of simplicity as a virtue might be developed today within virtue epistemology into an interesting complement to this “perhaps the most controversial theoretical virtue” (Schindler 2018) known from contemporary philosophy.
The new generations of physicians are, to a large extent, unaware of the complex philosophical and biological concepts that created the bases of modern medicine. Building on the Hellenistic tradition of the four humors and their qualities, Galen (AD 129 to c. 216) provided a persuasive scheme of the structure and function of the cardiorespiratory system, which lasted, without serious contest, for 1,300 yr. Galen combined teleological concepts with careful clinical observation to defend a coherent and integrated system in which the fire-heart-flaming at the center of the body-interacts with lungs' air-pneuma to create life. Remarkably, however, he achieved these goals, despite failing to grasp the concept of systemic and pulmonary blood circulations, understand the source and destiny of venous and arterial blood, recognize the lung as the organ responsible for gas exchange, comprehend the actual events taking place in the left ventricle, and identify the source of internal heat. In this article, we outline the alternative theories Galen put forward to explain these complex phenomena. We then discuss how the final consequences of Galen's flawed anatomical and physiological conceptions do not differ substantially from those obtained if one applies modern concepts. Recognition of this state of affairs may explain why the ancient practitioner could achieve relative success, without harming the patient, to understand and treat a multitude of symptoms and illnesses.
Problem: The scientific method is unrivaled for generating useful knowledge, yet papers published in scientific journals frequently violate the scientific method.
Methods: A definition of the scientific method was developed from the writings of pioneers of the scientific method including Aristotle, Newton, and Franklin. The definition was used as the basis of a checklist of eight criteria necessary for compliance with the scientific method. The extent to which research papers follow the scientific method was assessed by reviewing the literature on the practices of researchers whose papers are published in scientific journals. Findings of the review were used to develop an evidence-based checklist of 20 operational guidelines to help researchers comply with the scientific method.
Findings: The natural desire to have one’s beliefs and hypotheses confirmed can tempt funders to pay for supportive research and researchers to violate scientific principles. As a result, advocacy has come to dominate publications in scientific journals, and has led funders, universities, and journals to evaluate researchers’ work using criteria that are unrelated to the discovery of useful scientific findings. The current procedure for mandatory journal review has led to censorship of useful scientific findings. We suggest alternatives, such as accepting all papers that conform with the eight criteria of the scientific method.
Originality: This paper provides the first comprehensive and operational evidence-based checklists for assessing compliance with the scientific method and for guiding researchers on how to comply.
Usefulness: The “Criteria for Compliance with the Scientific Method” checklist could be used by journals to certify papers. Funders could insist that research projects comply with the scientific method. Universities and research institutes could hire and promote researchers whose research complies. Courts could use it to assess the quality of evidence. Governments could base policies on evidence from papers that comply, and citizens could use the checklist to evaluate evidence on public policy. Finally, scientists could ensure that their own research complies with science by designing their projects using the “Guidelines for Scientists” checklist.
Keywords: advocacy; checklists; data models; experiment; incentives; knowledge models; multiple reasonable hypotheses; objectivity; regression analysis; regulation; replication; statistical significance
An original instrument for thermo-mechanical polymer testing has been developed. This article describes the process of data acquisition, preprocessing, and classification into 11 main polymer groups. The following polymer groups are used: polystyrene, acrylonitrile–butadiene–styrene, polycarbonate, low density polyethylene, polypropylene, high density polyethylene, polyamide 4.6, polyamide 6, polyamide 6/6, polybutylene terephthalate, and polyethylene terephthalate. Three pattern recognition techniques of increasing complexity are applied in order to assess their suitability for the automated categorisation of polymer samples: k-nearest neighbours, various combinations of Q- and D-statistics (sometimes referred to as Soft Independent Modelling of Class Analogy, SIMCA), and back-propagation neural networks. It is found that all three methods categorise the materials into the correct polymer groups irrespective of their complexity. Methods based on the correlation structure in the data prove more beneficial than methods based on distance due to particular characteristics in the data. Best results are obtained using an adequate combination of two coefficients: one based on correlation and another based on distance.
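The finding above, that correlation-based measures outperformed plain distance-based ones for these data, can be illustrated with the two distance functions themselves: a curve with the same shape as a reference but a different scale and offset is far away in Euclidean terms yet essentially identical under correlation distance. A minimal sketch with hypothetical curves, not the instrument's actual measurements, follows.

```python
import numpy as np
from scipy.spatial.distance import correlation, euclidean

t = np.linspace(0.0, 1.0, 50)
reference = np.sin(2 * np.pi * t)        # hypothetical curve for one polymer group
same_shape = 3.0 * reference + 0.5       # same shape, different scale and offset
other_shape = np.exp(-3.0 * t)           # a different group's curve shape

for name, curve in [("rescaled same shape", same_shape),
                    ("different shape", other_shape)]:
    print(f"{name:20s} euclidean = {euclidean(reference, curve):6.2f}  "
          f"correlation = {correlation(reference, curve):5.3f}")
```

In this toy example the Euclidean metric ranks the rescaled copy of the same shape as farther from the reference than the genuinely different shape, while the correlation distance ranks them the other way round, which is the behaviour that favours correlation-based classifiers when samples vary in overall scale.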
We review Hennigian, maximum likelihood, and different Bayesian approaches to quantitative phylogenetic analysis and discuss their strengths and weaknesses. We also discuss various protocols for assessing the relative robustness of one's results. Hennigian approaches are justified by the Darwinian concepts of phylogenetic conservatism and the cohesion of homologies, embodied in Hennig's Auxiliary Principle, and applied using outgroup comparisons. They use parsimony as an epistemological tool. Maximum likelihood and Bayesian likelihood approaches are based on an ontological use of parsimony, choosing the simplest model possible to explain the data. All methods identify the same core of unambiguous data in any given data set, producing highly similar results. Disagreements most often stem from insufficient numbers of unambiguous characters in one or more of the data types. Appeals to Popperian philosophy cannot justify any kind of phylogenetic analysis, because they argue from effect to cause rather than cause to effect. Nor can any approach be justified by statistical consistency, because all may be consistent or inconsistent depending on the data being analyzed. If analyses based on different types of data or using different methods of phylogeny reconstruction, or some combination of both, do not produce the same results, more data are needed.
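Rooting by outgroup comparison, as applied in the Hennigian approaches above, amounts to placing the root on the branch that joins the designated outgroup to the rest of the network and reading all remaining branches as directed away from it. A minimal sketch with a hypothetical five-taxon unrooted topology follows; real implementations also carry branch lengths and support values, which are omitted here.

```python
def root_with_outgroup(adjacency, outgroup):
    """Root an unrooted tree (adjacency map) on the branch leading to the outgroup.

    Returns a nested dict {node: [rooted subtrees...]}, with leaves as plain strings.
    Assumes the outgroup is a single leaf with exactly one neighbour.
    """
    def subtree(node, parent):
        children = [subtree(n, node) for n in adjacency[node] if n != parent]
        return node if not children else {node: children}

    attachment = adjacency[outgroup][0]          # the outgroup's single neighbour
    # The new root lies on the outgroup branch: outgroup on one side, the rest on the other.
    return {"root": [outgroup, subtree(attachment, outgroup)]}

# Hypothetical unrooted topology ((A,B),X,(C,D)) with internal nodes n1, n2, n3,
# and X designated as the outgroup.
unrooted = {
    "A": ["n1"], "B": ["n1"], "C": ["n3"], "D": ["n3"], "X": ["n2"],
    "n1": ["A", "B", "n2"],
    "n2": ["n1", "X", "n3"],
    "n3": ["C", "D", "n2"],
}
print(root_with_outgroup(unrooted, outgroup="X"))
# -> the (A,B) and (C,D) clades hang from the root opposite the outgroup X
```

The rooted result can then be read as a phylogenetic hypothesis for the ingroup, with character-state polarity determined by the outgroup.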