History and Philosophy of Science - Science topic
Explore the latest questions and answers in History and Philosophy of Science, and find History and Philosophy of Science experts.
Questions related to History and Philosophy of Science
In quantum mechanics we learn the famous Uncertainty Principle, which is probably the most important result in this branch.
We also learn in general relativity that space and time hold together as spacetime.
The measurement problem in QM comes from the Uncertainty Principle, and vice versa. Why is it not present in GR, not necessarily in the same form, but as an analogue?
Thanks
Why are numbers and shapes so exact? ‘One’, ‘two’, ‘point’, ‘line’, etc. are all exact. But irrational numbers are not so. The operations on these notions are also intended to be exact. If notions like ‘one’, ‘two’, ‘point’, ‘line’, etc. are defined to be so exact, then it is not by virtue of the exactness of these substantive notions, but instead, due to their being defined so, that they are exact, and mathematics is exact.
But on the other hand, since they are adjectival ('being a unity', 'being two unities', 'being a non-extended shape', etc.), their application-objects are all processes that can obtain these adjectives only in groups. These are pure adjectives, not properties, which are composed of many adjectives.
A quality cannot be exact, but may be defined to be exact. It is in terms of the exactness attributed to these notions by definition that the adjectives 'one', 'two', 'point', 'line', etc. are exact. This is why the impossibility of fixing these (and other) substantive notions as exact escapes our attention.
If in fact these quantitative qualities are inexact due to their pertaining to groups of processual things, then there is justification for the inexactness of irrational numbers, transcendental numbers, etc. too. If numbers and shapes are in fact inexact, then not only irrational and other inexact numbers but all mathematical structures should remain inexact except for their having been defined as exact.
Thus, mathematical structures, in all their detail, are a species of qualities, namely, quantitative qualities. Mathematics is exact only because its fundamental bricks are defined to be so. Hence, mathematics is an as-if exact science, as-if real science. Caution is advised while using it in the sciences as if mathematics were absolutely applicable, as if it were exact.
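As a concrete illustration of the inexactness of any finite representation of an irrational number, here is a minimal Python sketch using arbitrary-precision decimals (the precision values chosen are arbitrary): every finite decimal rendering of the square root of 2 is inexact, since its square never equals 2 exactly.

# Any finite decimal approximation of the irrational number sqrt(2)
# is inexact: its square never equals 2 exactly.
from decimal import Decimal, getcontext

getcontext().prec = 60          # work at high precision throughout
root2 = Decimal(2).sqrt()       # a 60-digit approximation of sqrt(2)

for places in (4, 14, 28):
    approx = root2.quantize(Decimal(10) ** -places)  # round to `places` decimals
    print(approx, "squared:", approx * approx)       # close to, but never exactly, 2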
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
CRITERIA TO DIFFERENTIATE BETWEEN
VIRTUALS AND EXISTENTS IN SCIENCE
Raphael Neelamkavil, Ph. D., Dr. phil.
Existents are in Extension (each having a finite number of finite-content parts) and in Change (existents, always with parts, whose parts always exert finite impacts on others, including finite impacts on some parts within). Can an existence without parts and without exertion of impacts be thought of? Anything that is not in Extension-Change is non-existent.
The Extension-Change kind of existence is what we call Causation, and therefore, every existent is a causal Process in all parts. This is nothing but the Universal Law of Causality. That is, no more do we need to prove causation scientifically. This Law is a pre-scientific and hence physical-ontological Law, meant also for biological existents.
No quantum physics, statistical physics, or quantum cosmology can now declare that certain processes in nature are non-causal or acausal, after having admitted that these processes are in existence!
That is, existents at any level of formation are fully physical, possess at least a minimum of causal connection with others in their environment, and are not merely virtual (nor fully modular / non-local / non-emergent / self-emergent / sui generis in a totally isolated manner). Therefore, any existent must have causal connections with its finitely reachable environment and within its inner parts.
Physical-ontologically real generalities must be about, or pertinent to, existents in groups, i.e., as parts of a type / natural kind. These generalities are not existents, but pure ontological universals in natural kinds.
Space and time are just the measurement-based epistemic notions or versions of the more generally physical-ontological Extension and Change respectively. The latter two are generalities of all existent processes, because nothing can exist without these two Categories.
Hence, space and time are not physical-ontological, not real about, nor pertinent to, existents. In short, physical science, working only on measuremental space-time, cannot verify newly discovered energy wavicles and matter particles by use of the physical "properties" ascribed to them. The reasons are the following.
We can speak not merely of existents but also of their "qualities / universals" and of non-existent "beings" and "properties". All of them are denotables. Thus, a denotable has reference to something that either (1) has a physical body (physically existent processes), or (2) is inherent in groups of physical processes but is not itself a physical body (pure universal qualities of all descriptions), or (3) is non-real, non-existent, and hence just a mere notion (e.g., a non-physical possible world with wings, or one with all characteristics, i.e., Extension and Change, absolutely different from the existent physical world).
Denotables of type (1) belong to existent realities, namely, physical processes. They are of matter-energy in content, because Extension-Change determine them to be so. To denotables of type (1) belong also theoretically necessary realities, which are composed theoretically of methodical procedures using properties of existents, which, as a rule, (a) may be proved to be existing (i.e., existent unobservables) or (b) may not be proved to be existing (non-existent unobservables, which are just virtual objects) but are necessary for theory (e.g., potential energy).
To type (2) belong those universals that are never proved to exist but belong to all existents of a group as the general qualities of the members. These are termed ontological universals. The denotables of (1b) are the sub-types that are either fully virtual or partially virtual but are necessary for theory. Both are theoretically useful, but are often mistaken for existents. Denotables of type (3) are nothing, vacuous. These are pure imaginations without any success in being proved to be in existence.
The difference between non-existent, real, virtual, and existent denotables is this:
Non-existents have no real properties and generate no ontological commitment to existence via Extension and Change. Real virtuals have the properties that theoretically belong to the denotables that fill lacunae in theory, but they lack the Categorial characteristics, namely Extension and Change. Existent denotables (a) have these Categories (characteristics), (b) generate ontological commitment to existence, and (c) also possess properties, which are conglomerations of many ontological universals. All ontological universals are subject to Extension and Change.
Hence, virtuals are versions of reality different from those that have been proved to be actual existents. They are in general called unobservables. Some of them are non-existent. When they are proved to exist, they become observables or partial observables and are removed from membership among the virtuals. Some partial observables may yet be considered as not yet proved to be existent; these continue to be called unobservable virtuals. Some of them never attain the status of existent observables or existent partial observables; they belong to the group of purely vacuous notions, type (3) above.
Theories yield unobservables (electrons, neutrinos, gravitons, the Higgs boson, vacuum energy, dark energy, spinors, strings, superstrings …). Some of them may be proved to exist through detectable properties.
Note that properties are not physical-ontological (metaphysical) characteristics; the latter I call ontological universals, the two most important of which are the Categories Extension-Change. Instead of being ontological universals, properties are concatenations of ontological universals.
Virtual unobservables fill the lacunae in theoretical explanations, and most of them do not get proved as existent. Nevertheless, they will continue to be useful virtual worlds for theory from the viewpoint of explanation in a state of affairs where there are no ways of explanation using existent unobservables.
As is clear now, the tool for discovering new unobservables is not the physical properties of which the physical and social sciences speak so much, but instead the physical-ontological Categories of Extension and Change.
Mere virtuals are non-existent as such, but are taken as solutions to the lacunae in rational imagination. The sciences and many philosophies of the sciences seem not to differentiate between their denotables in the above manner.
I have spoken of universals here, which may be distasteful to physicists, to scientists of other disciplines, and even to some philosophers. Please note that I have spoken only of the generalities that we are used to speaking of regarding existent types of things. I have not brought out here my whole theory about the kinds of universals.
My claim in the present discussion is only that properties too are just physical virtuals if the unobservables behind them (say, vacuum energy, dark energy, etc.) are not fully steeped in physical existence in terms of EXTENSION and CHANGE through experimentally acceptable proofs of existence.
Do we have a science that has succeeded in accepting this challenge? Can the scientists of the future accept these criteria for their discoveries?
SCIENTIFIC METAPHYSICAL CATEGORIES
BEYOND HEIDEGGER
ENHANCING PHYSICS
Raphael Neelamkavil, Ph. D., Dr. phil.
1. Introduction beyond Heidegger
I begin my cosmologically metaphysical critique of the foundations of Heidegger's work with a statement of concern. Anyone who attempts to read this work without first reading my arguments in the book Physics without Metaphysics?, (1) without being in favour of a new science-compatible metaphysics and concept of To Be, and (2) without a critical attitude to Heidegger, is liable to misunderstand my arguments here as misinformed, denigrative, or even trivial. But I undertake this critique in search of very general means of constructing a metaphysics capable of giving constant guidance and enhancement to scientific practice.
Contemporary mathematics, physics, cosmology, biology, and the human sciences have, after so much growth, taken a shape in which we cannot think philosophically without admitting the existence (termed "To Be") of all that exists, the cosmos and its parts. The general concept of existence is always of "something-s" that are processually out there, however far-fetched our concepts of the various parts of the cosmos, or of the whole, may be. "The existence of the totality (Reality-in-total) as the whole something whatever" and "particular existence in the minimally acceptable state of being something/s whatever that is not the whole totality" are absolutely trans-subjective and thus objectual presuppositions behind all thought.
Today we do not have to theoretically moot any idea of the non-existence of the cosmos and its parts as whatever they are. This is self-evident. That is, basing philosophical thinking about the existence-wise metaphysical presuppositions of all that is subjective and objective upon the allegedly subjective origin of thought processes and concepts should be universally unacceptable.
Therefore, I think we should get behind Heidegger's seemingly metaphysical words, all based on the human stage on which Being is thought, by chipping off from his prohibitively poetical and mystifying language its rhetorically Reality-adumbrating shades, in order to get at the senses and implications of his Fundamental Ontology as Being-historical Thinking. It suffices here to admit that the history of Being is neither the general concept of the history of the thought of Being nor the history of the thought of Being itself.
Moreover, it is not a necessity for philosophy that the Humean-Kantian stress on the subject-aspect of thought be carried forward to such an extent that whatever is thought has merely subjectively metaphysical Ideal presuppositions. Not all presuppositions must somehow be taken to possess a merely subjective character.
There are, of course, presuppositions with some conceptual character. But to the extent some of them are absolute, they are to be taken as absolutely non-subjective. These presuppositions are applicable without exception to all that is, e.g. To Be and all Categories that may be attributed to all that exist. HENCE, SUBJECTIVE PRESUPPOSITIONS ARE NOT A SUBSTITUTE FOR CONCEPTUAL PRESUPPOSITIONS.
This fact should be borne in mind while doing philosophy; without it, no philosophy and no science are possible. The weight of the subject-aspect continues to hold of thought insofar as we go into non-absolute details of metaphysical presuppositions and empirical details, and not when we think only of the metaphysical Ideals of all existents in themselves.
It is true that there is no complete chipping off of the merely subjective or anthropological aspect of the Heideggerian theory. Nor is there an analysis without already interpreting something. The guiding differentiation here should be that between "the subjective" and "the conceptual". The conceptual is not merely subjective but also objective. It is objective due to the pattern of inheritance it carries from the objectual.
Such a hermeneutic is basic to all understanding, speculation, feeling, and sensing. The linguistic and otherwise symbolic expression of concepts and their concatenations is to be termed denotative universals and their concatenations.
At the purely conceptual level we have connotation: purely conceptual universals and their concatenations. These are not merely a production of the mind, but arise primarily through the involvement of data generated from a small selection of phenomena from physical processes, which come from a highly selected group of levels of objectual processes, which in turn belong to the things themselves.
At the level of the phenomena, levels of objectual processes, and the things themselves there are universals, which we shall term ontological universals and their conglomerations. These conglomerations are termed so because they have the objectual content at the highest level available within the processes of sensing, feeling, understanding, speculation, etc.
2. Conclusions on Heidegger Proper
The above should not be taken to mean (1) that we cannot base thought fully on the Metaphysical Ideals of "To Be" and "the state of existents as somethings", or (2) that we cannot get sufficiently deep into the fundamental implications of his work by side-lining the purely subjective versions of the fundamental metaphysical concepts. This claim is most true of the concept of To Be.
To Be is the simultaneously processual-verbal and nomic-nominal aspect of Reality-in-total, and not merely that of any specific being, phenomenon, or concept. For Heidegger, To Be (Being) is somehow a private property of Dasein, the Being-thinking being. To Be which is the most proper subject matter of Einaic Ontology (metaphysics based completely on the trans-thought fact of the Einai, “To Be” of Reality-in-total) is not the Being that Dasein thinks or the Being that is given in Dasein, because To Be belongs to Reality-in-total together and in all its parts.
Even in Heidegger’s later phase highlighted best by his Contributions to Philosophy: From Enowning, his concept of To Be as belonging to the Dasein which is the authentically Being-thinking human being has not changed substantially. Even here he continues to project positively the history of Being-thinking human being as the authentic Being-historical process and as the essence of the history of all that can be thought of.
Against the above metaphysical backdrop of essentially anthropocentric definitions, I write this critique based on cosmological-metaphysical necessities in philosophy, and indirectly evaluate what I consider as the major ontological imperfection in Heidegger’s thought from the viewpoint of the Categorial demands of the history of metaphysics, various provincial ontologies and scientific ontology, and of the way in which I conceive the jolts and peaks in such history.
Along with the purely meta-metaphysical To Be, (1) I present the metaphysically abstract notions of Extension (= compositeness: i.e., having parts) and Change (= impacts by composites: i.e., part-to-part projection of impact elements) as the irreducibly metaphysical Categories of all existents, and (2) argue that Extension-Change existence, in its non-abstract togetherness in existents, is nothing but Universal Causation (= everything is Extension-Change-wise existent; i.e., if not universally causal, existence is vacuous).
These are metaphysical principles whose primordiality Heidegger and most philosophers to date have not recognized. Most of them tend to attribute to existence universal causality, partial causality, or absolutely no causality. In short, Universal Causation, even in the allegedly non-causal aspects of cosmology, quantum physics, philosophy of mind, and the human sciences, is to be taken as a priori and as co-implied by existence (To Be), because anything existent is extended and changing...! No more should the sciences or philosophy doubt Universal Causality. Herein consists the merit of Einaic Ontology as a universally acceptable metaphysics behind all the sciences, not merely the human sciences.
To Be is the highest Transcendental Ideal; Reality-in-total is the highest Transcendent Ideal; and Reality-in-general is the highest Transcendental-Transcendent Ideal of generalized theoretical concatenation of ontological universals in consciousness. These are meta-metaphysical in shape. They are not at all classificational (categorizing) of anything in this world or in thought.
Although Heidegger has not given a Categorial scheme of all existents or Categorial Ideals for all metaphysics and thinking, he is one of the few twentieth century thinkers of ontological consequence, after Aristotle (in favour of an abstract concept of Being) and Kant (against treating the concept of Being as an attribute), to have dealt extensively with a very special concept of Being and our already interpretive ability to get at To Be.
I present here in gist the difference between the Dasein-Interpreted concept of Being and the ontologically most widely committed, Einaic Ontological, nomic-nominal, and processual-verbal concept of To Be, which should be metaphysically the highest out-there presupposition of all thought and existence. This is the relevance of metaphysics as a trans-science.
Dear Readers,
This is the latest news from my ongoing work to help bring the arts into better communication with the sciences. Have you studied Poe's texts that may be considered proto-science fiction (before the term existed)? Please share your work here as well.
Wolf Forrest will speak at our Hard-Sci SF Zoom group on Jan. 6th and we will then publish that talk on YouTube. DETAILS AS ATTACHMENT.
MORE on Hard-Science SF Zoom project:
We have a Zoom group called The Hard-Science Science Fiction Group. If anybody wishes to be part of this group project or "lab", please let me know via message. I will add you to the list, because I don't always remember to post new talks or plays here. (Too busy.)
We also are involved with the Planet Zoom players, who record Zoom play adaptations of classic SF and then publish these on YouTube. (Many good works were never adapted to film or TV.)
Please add your thoughts on this. Sometimes we in the US miss what other countries and linguistic groups think of Poe.

Einstein is one of the greatest and most admired physicists of all time. Einstein's general theory of relativity is one of the most beautiful theories in physics. However, every theory in physics has its limitations, and that should also be expected for Einstein's theory of gravity: A possible problem on small length scales is signaled by 90 years of unwavering resistance of general relativity to quantization, and a possible problem on the largest length scales is indicated by the present search for "dark energy" to explain the accelerated expansion of the universe within general relativity.
Why, then, is the curvature of spacetime so generally accepted as an ultimate truth, as the decisive origin of gravitation, both by physicists and by philosophers? To me this seems a fashionable but unreflected metaphysical assumption.
Are there alternative theories of gravity? There are plenty of alternatives. As a consequence of the equivalence of inertial and gravitational mass, they typically involve geometry. The most natural option seems to be a gauge field theory of the Yang-Mills type with Lorentz symmetry group, which offers a unified description of all fundamental interactions and a most promising route to quantization.
I feel that metaphysical assumptions should always be justified and questioned (rather than unreflected and fashionable). How can such a healthy attitude be awakened in the context of the curvature of spacetime?
Research areas: Theoretical Physics, Philosophy of Science, Gravitation, General Relativity, Metaphysics
Studying the various philosophers, even the contemporary thinkers, is a matter of study and analysis. Whatever our stage of development, such study and analysis can only educate us in the strict sense. Thinking for ourselves is also part of the process, and it should receive greater weight once the educative phase has gone on long enough.
Now what about forgetting for some time the contributions of the many philosophers of our time or of the past, especially the kind whom we all mention habitually, and then theorizing philosophically for ourselves without constant references to their works and notions, as doctoral students do?
Why do I suggest this? Such dependence on the works of the stalwarts and of the specialists on them may veil our abilities to see many things for ourselves. Thus, we can avoid becoming philosophical technicians and even the slightly better case of becoming philosophical technologists or philosophical experts.
I believe that synthesizing upon some good insights from the many thinkers and from the many disciplines would require also the inevitable conceptual foundations that we would be able to discover beyond these notions.
Suppose each of us looks for such foundations and then shares them on a platform. If the discussion is on these new foundations, something may emerge in each of us as what we could term genuine foundations. These need not remain forever, because philosophy and science should grow out of whatever we and others have done. But, as a result of the effort, we will have effected a better synthesis through such personal efforts than we would have without seeking foundations.
I think the conceptual foundations on which the concept of synthetic philosophy works may thus gain a lot. I for one consider the whole history of analytic and linguistic philosophy as lacking such rigour. You all may differ from what each one of us suggests. That is the manner in which deeper foundations can be sought. I am on such a journey.
I believe that in the journey to find deeper and more general foundations than those available, we will already have created a manner of doing philosophy independently, and if done in conjunction with the sciences, we will have a new manner of doing the philosophy of science. Fell trees from their roots, and we have the place to plant a new tree.
Let me suggest a question. In all these 2.5 millennia of Western philosophy, we have not found the question of the implications of existence (to exist, To Be) being discussed. Plato and Aristotle tried it, and thereafter we do not see much on the implications of To Be. Now, if some implications of To Be are found, these could be a strong foundation for philosophy of any kind. I hope we cannot find such implications of Non-existence for doing philosophy or science. The definitions of the implications of To Be will change in the course of time, but some core might continue to remain, if we do something validly deep and general enough.
Let me suggest an interesting manner in which many philosophers evaluate their peers. (This may also be applicable in all other fields.) This is here brought to a historical context, not merely theoretical. This I do in order to make the example very clear.
Suppose you (say, A) speak of space, time, entities, matter-energy, etc. in a special context. A peer (say, B) gets hold of the text and starts criticizing A's notions of space, time, entities, matter-energy, etc. B starts from the concepts of space and time. He says: Kant and almost all thinkers thereafter have placed space and time merely as epistemic categories. This has been done in the context of phenomena. If you (A) hold the epistemic variety of notions of space and time, then they are phenomenal. In that case, you should have studied in the text what phenomena meant in Kant and analyzed the scientific and philosophical consequences of those concepts.
B continues: If you wanted to make space and time metaphysical concepts, then you are speaking of the noumena. For Kant these are unknowables. Hence, you need first to show that the noumena are knowables. In that case you are justified in suggesting epistemic / epistemological concepts of space and time. If not, you need to take recourse to other relevant philosophers or scientific disciplines to demonstrate the metaphysical meaning of the space and time that you have introduced. And so on.
Absolute dependence upon the traditions and unpreparedness to think differently from past or present thinkers is what is exhibited here. Not that B is not intelligent enough; B is. But the preparedness to think differently for years and decades comes not merely from the desire to think differently, but from the desire to SOLVE ALL THE PROBLEMS OF THE WORLD TOGETHER. We know we are being overambitious. If we demonstrate such an attitude in our behaviour towards others, then it is due to an intellectual sense of preponderance. But if we remain receptive to all new inputs from all others and all the sciences, we will continue to be enabled to persevere in methodological overambitiousness.
The peer had already decided how the author should write. It seems the author should have written a separate book, or a separate part of the book, on each sub-theme within the title...! Or should he have cited all sorts of authors on all possible sub-themes of his book in order to be approved by the peer?
Yet another systematically dominative and other-debilitating manner of peers is this: say I submit a book to a publisher. The publisher sends it to the peer/s. Without even taking time for a good reading of the text, the peer suggests some opinions to the publisher, which the publisher relates to the author in a day or two: your work may be very good, but its title is too broad; an author cannot do justice to the whole breadth of the subject matter!
Have you heard or read psychologists, neuroscientists, medical doctors, etc. discussing some symptoms and their causes? A book in psychology says: 'According to the bio-psycho-social approach in psychopathology, one mental disturbance CAN have many causes.' But a person trained in and enthusiastic about philosophy (also the philosophy of the sciences) would wonder why there should not be many causes, at least some of which one could seek to find...! Discovering 'only the immediate, exact, and unique cause' is not their work, because reason itself tells us that nothing in this world has one exact cause.
This directs our attention to a basic nature of philosophy: not that a philosopher should only generalize, but that a philosopher should study any specific thing only in terms of the most generalizable notions. Here 'generality' does not directly indicate only abstraction. It demonstrates the viewpoint that philosophy always takes. Hence, speaking only of the linguistic formulation of notions and arguments, or formulating arguments only about life-related events in order to prove general principles that belong to the whole of Reality, etc., is not philosophical. The philosophically trained reader can recognize which recent trends in philosophy I have in mind here.
I may be talking strange things here, especially for those trained mainly in analytic philosophy and the philosophy of science in a narrow manner. If you do not find such suggestions interesting, just ignore this intervention. I continue to work on this. I do have some success. Each of us has our own manner of approaching the problems.
I am aware that I may be laughed at. Since I have left the teaching profession, I do not lose much. Moreover, getting great publishers is out of reach for me, but that too does not amount to much consequence if one eventually succeeds in doing something solid.
The Nobel Prize Summit 2023: Truth, Trust and Hope started today, 24 May 2023. The summit encourages participation. Thus, I have sent an open letter and eagerly anticipate a response. Please comment on whether the points I have made are adequate.
Open Letter to The Nobel Committee for Physics
Is There a Nobel Prize for Metaphysics?
Dear Nobel Committee for Physics,
Among the differences between an established religion, such as Roman Catholicism, and science, is the presence of a hierarchical organization in the former for defending its creed and conducting its affairs. The head of the religious institution ultimately bears responsibility for the veracity of its claims and strategic policies. This accountability was evident in historical figures like John Wycliffe, Jan Hus, and Martin Luther, who held the papacy responsible for wrong doctrines, such as the indulgence scandal during the late Middle Ages. In that context, challenging such doctrines, albeit with the anticipated risk of being burned at the stake, involved posting opposing theses on the doors of churches.
In contrast, the scientific endeavour lacks a tangible temple, and no definitive organization exists to be held accountable for possible misconduct. Science is a collective effort by scientists and scientific institutes to discover new facts within and beyond our current understanding. While scientists may occasionally flirt with science fiction, they ultimately make significant leaps in understanding the universe. However, problems arise when a branch of science is held and defended as a sacred dogma, disregarding principles such as falsifiability. This mentality can lead to a rule of pseudo-scientific oppression, similar to historical instances like the Galileo or Lysenko affairs. Within this realm, there is little chance of liberating science from science fiction. Any criticism is met with ridicule, damnation, and exclusion, reminiscent of the attitudes displayed by arrogant religious establishments during the medieval period. Unfortunately, it seems that the scientific establishment has not learned from these lessons and has failed to provide a process for dealing with these unfortunate and embarrassing scenarios. On the contrary, it is preoccupied with praising and celebrating its achievements while stubbornly closing its ears to sincere critical voices.
Allow me to illustrate my concerns through the lens of relativistic physics, a subject that has captured my interest. Initially, I was filled with excitement, recognizing the great challenges and intellectual richness that lay before me. However, as I delved deeper, I encountered several perplexing issues with no satisfactory answers provided by physicists. While the majority accepts relativity as it stands, what if one does not accept the various inherent paradoxes and seeks a deeper insight?
Gradually, I discovered that certain scientific steps are not taken correctly in this branch of science. For example, we place our trust in scientists to conduct proper analyses of experiments. Yet, I stumbled upon evidence suggesting that this trust may have been misplaced in the case of a renowned experiment that played a pivotal role in heralding relativistic physics. If this claim is indeed valid, it represents a grave concern and a significant scandal for the scientific community. To clarify my points, I wrote reports and raised my concerns. Fortunately, there are still venues outside established institutions where critical perspectives are not yet suppressed. However, the reactions I received ranged from silence to condescending remarks infused with irritation. I was met with statements like "everything has been proven many times over, what are you talking about?" or "go and find your mistake yourself." Instead of responding to my pointed questions and concerns, a professor even suggested that I should broaden my knowledge by studying various other subjects.
While we may excuse the inability of poor, uneducated peasants in the Middle Ages to scrutinize the veracity of the Church's doctrine against the Latin Bible, there is no excuse for professors of physics and mathematics to be unwilling to re-evaluate the analysis of an experiment and either refute the criticism or acknowledge an error. It raises suspicions about the reliability of science itself if, for over 125 years, the famous Michelson-Morley experiment has not been subjected to rigorous and accurate analysis.
Furthermore, I am deeply concerned that the problem has been exacerbated by certain physicists rediscovering the power and benefits of metaphysics. They have proudly replaced real experiments with thought experiments conducted with thought-equipment. Consequently, theoretical physicists find themselves compelled to shut the door on genuine scientific criticism of their enigmatic activities. Simply put, the acceptance of experiment-free science has been the root cause of all these wrongdoings.
To demonstrate the consequences of this damaging trend, I will briefly mention two more complications among many others:
1. Scientists commonly represent time with the letter 't', assuming it has dimension T, and confidently perform mathematical calculations based on this assumption. However, when it comes to relativistic physics, time is represented as 'ct' with dimension L, and any brave individual questioning this inconsistency is shunned from scientific circles and excluded from canonical publications.
2. Even after approximately 120 years, eminent physicist and Nobel Prize laureate Richard Feynman, along with various professors in highly regarded physics departments, has failed to mathematically prove what Einstein claimed in his 1905 paper. They merely copy from one another, seemingly engaged in a damage-limitation exercise, producing so-called approximate results. I invite you to refer to the linked document for a detailed explanation:
I am now submitting this letter to the Nobel Committee for Physics, confident that the committee, having awarded Nobel Prizes related to relativistic physics, possesses convincing scientific answers to the specific dilemmas mentioned herein.
Yours sincerely,
Ziaedin Shafiei
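Point 1 of the letter above concerns dimensional bookkeeping, which can be made mechanical. A minimal Python sketch, representing a physical dimension as a pair of exponents of length L and time T (a toy representation; the helper names are hypothetical):

# Represent a dimension as (length_exponent, time_exponent).
# t has dimension T = (0, 1); c has dimension L/T = (1, -1).
def dim_mul(a, b):
    """Dimension of a product: exponents add."""
    return (a[0] + b[0], a[1] + b[1])

T_DIM = (0, 1)    # time t
C_DIM = (1, -1)   # speed of light c

CT_DIM = dim_mul(C_DIM, T_DIM)
print("dimension of t :", T_DIM)    # (0, 1)  -> T
print("dimension of ct:", CT_DIM)   # (1, 0)  -> L

# ct indeed has dimension L: in relativity the time coordinate is commonly
# taken as x0 = ct, so that all four spacetime coordinates share dimension L.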
Anything exists non-vacuously as in Extension (having parts, each of which again is extended) and in Change (extended existents impacting some other extended existents). Anything without these two characteristics cannot exist. THESE ARE THE TWO, EXHAUSTIVE, IMPLICATIONS OF "TO BE" AND "EXISTENTS".
If not in Change, how can something exist in Extension (= in the state of Extension) alone? And if not in Extension, how can something exist in the state of Change alone? These are impossible. ((The traditional interpretations of Parmenides and Heraclitus as emphasizing merely one of these is unacceptable.)) Hence, Extension-Change are two fundamentally physical-ontological Categories of all existence.
But Extension-Change-wise existence is what we understand as Causality: extended existents and their parts exert impacts on other extended existents. Every part of existents does it. This is not the meaning of Change alone, but in Extension-Change! That is, if everything exists, everything is in Causation. This is the principle of Universal Causality...! All counterfactual imaginations need not yield really existent worlds of this kind.
Even the allegedly "non-causal" quantum-mechanical constituent processes are mathematically and statistically circumscribed, measuremental concepts derived from the results of experiments upon Extended-Changing existents; and ipso facto the realities behind these statistical measurements are in Extension-Change if they are physically existent.
Space is the measured shape of Extension; time is that of Change. Therefore, space and time are merely epistemic categories. How then can statistical causality be causality at all? Bayesians should now re-interpret their results in terms of Universal Causality, as the merely measuremental extent of our determination of the exact causes of some events.
No part of an existent is non-extended and non-changing. One unit of cause and effect may be called a process. Every existent and its parts are fully processual -- in the sense that every part of it is further constituted by sub-processes. None of these sub-processes is infinitesimal. Each is near-infinitesimal in Extension and Change.
Thus, Extension and Change are the very exhaustive meanings of To Be, and hence I call them the highest Categories of metaphysics, physical ontology, the sciences, etc. Science and philosophy must obey these two Categories if they deal with existent processes, and not merely of imaginary counterfactual worlds.
In short, everything existent is causal. Hence Universal Causality is the highest pre-scientific Law, second only to Existence / To Be. To Be is not a law; it is the very reason for the existence of anything...!
Natural laws are merely derivative from Universal Causality. If any natural law disobeys Universal Causality, it is not a scientific law. Since Extension-Change-wise existence is the same as Universal Causality, scientific laws are derived from Universal Causality, and not vice versa.
Today the sciences attempt to derive causality from the various scientific laws! This is merely because for millennia we have been fooled about this fundamental meaning of Causality. We were told that causality can be proved only empirically. The folly here has been that what is specific is universalized: the Fallacy of Conceptual / Theoretical Wholes and Parts. The search for the causes of a few events has been misinterpreted as capable of defining the search for the causal or non-causal nature of all...! IS THIS NOT ENOUGH PROOF OF THE INFANCY IN WHICH THE FOUNDATIONS OF SCIENTIFIC AND PHILOSOPHICAL PRINCIPLES FIND THEMSELVES?
The relevance of metaphysics / physical ontology for the sciences is clear from the above. The lack of such a metaphysical foundation has marred the effectiveness of the sciences, and of course of philosophy as such. RECOLLECT THE ERA IN THE TWENTIETH CENTURY WHEN CAUSALITY WAS CONVERTED TO CAUSAL EXPLANATIONS.........
Existents have some Activity and Stability. This is a fully physical fact. These two categories may be shown to be subservient to Extension-Change. Pure vacuum (non-existence) is the absence of Activity and Stability. Thus, entities are irreducibly active-stable processes in Extension-Change. Physical entities / processes possess finite Activity and Stability. Activity and Stability together belong to Extension; and Activity and Stability together belong to Change too.
That is, Stability is not merely about space; and Activity is not merely about time. But the traditions (in both the sciences and philosophy) still seem to hold so. We consider Activity and Stability as sub-categories, because they are based on Extension-Change, which together add up to Universal Causality; and each unit of cause and effect is a process.
These are not Categories belonging merely to imaginary counterfactual situations. The Categories of Extension-Change and their sub-formulations are all about existents. There can be counterfactuals that signify cases appertaining to existent processes. But separating these cases from useless logical talk is nearly impossible in the linguistic-analytically and denotatively active definitions of reference in logic, philosophy, the philosophy of science, and the sciences. THE FAD NATURE OF THE PHILOSOPHIES OF FREGE, WITTGENSTEIN, THE VIENNA CIRCLE, AND THEIR FOLLOWERS TODAY FOLLOWS FROM THIS.
Today physics and the various sciences do something like this in that they indulge in particularistically defined terms and procedures, blindly thinking that these can directly denotatively represent the physical processes under inquiry.
Concerning mathematical applications too this is the majority attitude among scientists. Hence, without a very general physical ontology of Categories that are applicable to all existent processes, all sciences are in gross handicap. THIS IS A GENERAL INDICATION FOR THE DIRECTION OF QUALITATIVE GROWTH IN THE SCIENCES AND PHILOSOPHY.
The best examples are mathematical continuity and discreteness being attributed to physical processes IN THE MATHEMATICALLY INSTRUMENTALIZED SCIENCES. Mathematical continuity and discreteness should be anathema in the sciences. Existent processes are continuous and discrete only in their Causality.
This is nothing but Extension-Change-wise discrete causal continuity. At any time, causality is present in anything, hence there is causal continuity. But this is different from mathematical continuity and discreteness.
One of the central themes in the philosophy of formal sciences (or mathematics) is the debate between realism (sometimes misnamed Platonism) and nominalism (also called "anti-realism"), which has different versions.
In my opinion, what is decisive in this regard is the position adopted on the question of whether the objects postulated by the theories of the formal sciences (such as the arithmetic of natural numbers) have some mode of existence independent of the language that we humans use to refer to them; that is, independent of linguistic representations and theories. The affirmative answer assumes that things like numbers or the golden ratio are genuine discoveries, while the negative one holds that numbers are not discoveries but human inventions; they are not entities but mere referents of a language whose postulation has been useful for various purposes.
However, it does not occur to me how an anti-realist or nominalist position can respond to these two realist arguments in philosophy of mathematics: first, if numbers have no existence independently of language, how can one explain the metaphysical difference, which we call numerical, at a time before the existence of humans in which at t0 there was in a certain space-time region what we call two dinosaurs and then at t1 what we call three dinosaurs? That seems to be a real metaphysical difference in the sense in which we use the word "numerical", and it does not even require human language, which suggests that number, quantities, etc., seem to be included in the very idea of an individual entity.
Secondly, if the so-called golden ratio (also represented as the golden number and related to the Fibonacci sequence) is a human invention, how can it be explained that this relationship exists in various manifestations of nature such as the shell of certain mollusks, the florets of sunflowers, waves, the structure of galaxies, the spiral of DNA, etc.?
That seems to be a discovery and not an invention: a genuine mathematical discovery. And if it is, it seems something like a universal of which those examples are particular cases, perhaps in a Platonic-like sense, which suggests that mathematical entities express characteristics of the spatio-temporal world. However, this form of mathematical realism does not seem compatible with the version that maintains that the entities that mathematical theories talk about exist outside of spacetime. That is to say, if mathematical objects bear to physical and natural objects the relationship that the golden ratio bears to those mentioned above, then it seems that there must be a true geometry and that, ultimately, mathematical entities are not as far out of spacetime as has been suggested. After all, not everything that exists in spacetime has to be material, as the social sciences, which refer to norms, values, and attitudes that are not material, well know.
(I apologize for using a translator. Thank you.)
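On the Fibonacci point: the ratio of consecutive Fibonacci numbers converges to the golden ratio phi = (1 + sqrt(5)) / 2. A minimal Python sketch of that convergence (the printed indices are arbitrary choices):

import math

phi = (1 + math.sqrt(5)) / 2         # the golden ratio, ~1.6180339887

a, b = 1, 1                          # F(1), F(2)
for n in range(2, 25):
    a, b = b, a + b                  # after this step: a = F(n), b = F(n+1)
    if n in (5, 10, 24):
        print(f"F({n+1})/F({n}) = {b/a:.10f}   error = {abs(b/a - phi):.2e}")
print(f"phi          = {phi:.10f}")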
Is there a historical map of academic disciplines? What is the trend of change in academic disciplines (the number, nature, and labels of disciplines)?
I will be thankful if someone can introduce any article, book, handbook, or report about a historical map of disciplines and the history of academic disciplines.
What kinds of scientific research dominate in the field of the philosophy of science and research?
Please provide your suggestions for a question, problem, or research thesis on the issues of the philosophy of science and research.
Please reply.
I invite you to the discussion
Thank you very much
Best wishes

1) There is some tradition in the philosophy of mathematics starting in the late 19th century and culminating in the crisis of foundations at the beginning of the 20th century. Names here are Zermelo, Frege, Whitehead and Russell, Cantor, Brouwer, Hilbert, Gödel, Cavaillès, and some more. At that time mathematics was already focused on itself, separated from general rationalist philosophy and epistemology, from a philosophy of the cosmos and the spirit.
2) Stepping backwards in time we have the great “rationalist” philosophers of the 17th, 18th, 19th century: Descartes, Leibniz, Malebranche, Spinoza, Hegel proposing a global view of the universe in which the subject, trying to understand his situation, is immersed.
3) Still making a big step backwards in time, we have the philosophers of the late antiquity and the beginning of our era (Greek philosophy, Neoplatonist schools, oriental philosophies). These should not be left out from our considerations.
4) Returning to the late 20th century, we see the foundation of category theory (Eilenberg, Lawvere, Grothendieck, Mac Lane, …) appear inside mathematics; it is in some sense a transversal theory within mathematics. Among its basic principles are the notions of object, arrow, and functor, on which are then founded adjunctions, (co-)limits, monads, and more evolved concepts.
Do you think these principles have significance a) for science, b) for the rationalist philosophies we described before, and ultimately c) for more general philosophies of the cosmos?
Examples: the existence of an adjunction between two functors could have a meaning in physics, for example. The existence of a natural-numbers object, known from topos theory, could have philosophical consequences (cf. Immanuel Kant, Antinomien der reinen Vernunft). A small illustration of the basic notions follows below.
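To make the basic notions tangible, here is a minimal Python sketch assuming nothing beyond the standard library (the function names are hypothetical illustrations, not an established library): the list functor with its functor laws, and a natural-numbers object presented through its universal property (the unique fold determined by a zero and a successor step).

# 1. The list functor: objects are types, arrows are functions;
#    fmap lifts an arrow f: a -> b to an arrow [a] -> [b].
def fmap(f):
    return lambda xs: [f(x) for x in xs]

def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x + 1
g = lambda x: 2 * x
xs = [1, 2, 3]

# Functor laws: identity and composition are preserved.
assert fmap(lambda x: x)(xs) == xs
assert fmap(compose(g, f))(xs) == compose(fmap(g), fmap(f))(xs)

# 2. A natural-numbers object, universal-property style: for any zero z and
#    successor-step s there is a unique arrow fold(z, s) with
#    fold(z, s)(0) == z and fold(z, s)(n + 1) == s(fold(z, s)(n)).
def fold(z, s):
    def h(n):
        acc = z
        for _ in range(n):
            acc = s(acc)
        return acc
    return h

double = fold(0, lambda k: k + 2)    # the unique arrow sending n to 2n
assert [double(n) for n in range(5)] == [0, 2, 4, 6, 8]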
Is a "Quantization of Time" theory possible?
According to science, time is a physical parameter, but according to some philosophies it is an illusion. How can we define time? Can we quantize illusions?
Hawking's Legacy
Black hole thermodynamics and the Zeroth Law [1,2].
(a) black hole temperature: T_H = hc³/(16π²GkM)
The LHS is intensive but the RHS is not intensive; therefore a violation of thermodynamics [1,2].
(b) black hole entropy: S = πkc³A/(2hG)
The LHS is extensive but the RHS is neither intensive nor extensive; therefore a violation of thermodynamics [1,2].
(c) Black holes do not exist [1-3].
Hawking leaves nothing of value to science.
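For concreteness, formula (a) can be evaluated numerically to exhibit the 1/M dependence at issue in claim (a). A minimal Python sketch with SI constants (the solar-mass value is assumed for illustration):

import math

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
k = 1.380649e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30      # solar mass in kg (assumed illustrative value)

def hawking_temperature(M):
    """T_H = hc^3 / (16 pi^2 G k M), in kelvin, for a black hole of mass M (kg)."""
    return h * c**3 / (16 * math.pi**2 * G * k * M)

print(hawking_temperature(M_SUN))       # ~6.2e-8 K for one solar mass
print(hawking_temperature(10 * M_SUN))  # ten times the mass -> one tenth the temperature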
REFERENCES
[1] Robitaille, P.-M., Hawking Radiation: A Violation of the Zeroth Law of Thermodynamics, American Physical Society (ABSTRACT), March, 2018, http://meetings.aps.org/Meeting/NES18/Session/D01.3
[2] Robitaille, P.-M., Hawking Radiation: A Violation of the Zeroth Law of Thermodynamics, American Physical Society (SLIDE PRESENTATION), March, 2018, http://vixra.org/pdf/1803.0264v1.pdf
[3] Crothers, S.J., A Critical Analysis of LIGO's Recent Detection of Gravitational Waves Caused by Merging Black Holes, Hadronic Journal, n.3, Vol. 39, 2016, pp.271-302, http://vixra.org/pdf/1603.0127v5.pdf
The Copernican Principle: https://en.wikipedia.org/wiki/Copernican_principle
How can anyone absolutely determine, by inconclusively accepting, "that since we observe so many stars and planets, WE AND EARTH are nothing SPECIAL"? I submit that this is a false, pseudo-philosophical, pseudo-scientific, childish assumption and an ill-based axiom, made 500 years ago and surprisingly adopted by modern science, although all scientific evidence today proves otherwise.
We are not SPECIAL?! Really?... How can any serious scientist say that today? If we are not special, then what is the scientific evidence of other life existing in the universe?
Our location in the universe is nothing SPECIAL!... Really?... Why then has it been verified by two separate satellite science experiments so far that the "axis of evil" is real in the cosmic microwave background (CMB) image of the universe? https://en.wikipedia.org/wiki/Axis_of_evil_(cosmology)
First there was philosophy (φίλος, philos, friend; σοφία, sophia, wisdom), thus the friend of knowledge and the seeker of truth and knowledge everywhere, and then came science. This is a long-forgotten truth, and many scientists today turn against philosophy and into mockery of it.
THE TRUTH, HOWEVER, IS THAT PHILOSOPHY IS THE MOTHER OF ALL SCIENCES THERE ARE TODAY, WHETHER THEY LIKE IT OR NOT. YOU CANNOT DENY IT, OR ELSE YOUR LOGIC IS FLAWED...
The main reason why many scientists today deny philosophy like Hell and do not regard it as a science is its branch of religion and theology, although it is the mother of all sciences and a scientific discipline today.
Is not whatever field of knowledge and information man studies exhaustively and systematically, using his higher mental powers and logical tools for the betterment of human civilization, CALLED SCIENCE?! Is it not? Is this not the definition of science and its disciplines?
You see, there is out there a stupid belief (propaganda?) that you cannot have faith without science (and in many cases they mean their science). So yes, there is a cabal and a trend out there today to discourage any scientist from being a believer in GOD almighty. In order to be "a good scientist you must be an atheist", and they promote this unacceptable position whenever they can through the media... we see this every day.
However, these people and GOD-faith deniers are in error, not really wise people, therefore not really philosophers, and thus, ironically, they behave unscientifically.
Greek philosophers 2,000 years ago knew that you cannot separate faith from logic; the one drives the other. They are complementary.
Denying faith is the same thing as closing the door to logic.
So, for example, what is the difference, really, between telling someone that GOD snapped his fingers and everything came out, and saying that suddenly, out of nowhere, a big bang created everything?... They just replaced the word God with Big Bang! You can't deny that....
So ATHEIST scientists are actually, ironically, faith believers and religious THEMSELVES... THEIR RELIGION! They ultimately believe in something (e.g. the Big Bang theory) they cannot prove, and are therefore also driven by faith like the rest.
So there are not really any atheists.... no one is!
So far no problem... However, the problem arises, and has got much worse in the last decade, when these alleged "atheist scientists", through media and other means, attack fellow scientists who believe in GOD, and try to enforce and push THEIR BELIEFS down the throat of the general public.
So this is NOT called atheism any more (not believing in a higher entity - God deniers) BUT is USUALLY CALLED RACISM AND DISCRIMINATION!...
THIS HAS TO STOP NOW... AND THESE SCIENTISTS MUST STOP PUBLICLY MAKING A MOCKERY OF THEMSELVES AND OF SCIENCE BY BEHAVING MORE LIKE PRIESTS THAN SCIENTISTS!!
Science disciplines should really concentrate on their study and leave the religious matters of fellow scientists alone, neither interfering nor spreading propaganda.
The only science discipline allowed to study faith is philosophy and its immediate branches.
Let us scientists treat one another as normal people: let us not openly criticize and act against each other's faith or lack of faith, nor restrict their freedom, and thus their faith, which means their logic.
Thank you for letting me express this philosophical question and for pointing out a dangerous modern phenomenon in science that, potentially and in the long run, harms scientific progress and can turn science into dogma.
Emmanouil.
I have posted a comment on André Orléan's "open letter" to the French Minister of Education (see my own first answer below). The letter and the comments on its background explain what is happening in France in the field of economics education. In the comment, I have mentioned what happened in Japan. An e-mail I received this morning tells me that a similar dispute is being repeated at University College London.
At the bottom of all the arguments lies the problem of how to interpret the status of neoclassical economics. Neoclassical economics now occupies the mainstream position and is trying to monopolize economics education and academic posts, whereas various heterodox economists are resisting this current, claiming the necessity of pluralism in economics education and research.
I have mentioned cases from three countries. There must be many similar stories in almost all countries. It would be wonderful if we could know what is happening in other countries. So my question is:
What is happening in your country?
Why or why not?
Some philosophers maintain that science is morally neutral, while other philosophers maintain that science produces morality.
Or at least, use the sentence "waves above waves". If you can provide the source, that would be great.
Einstein's geometrodynamics considers a 4-D spacetime geometry whose curvature is governed by mass. But the FLRW universe considers a 3-D space of curvature k (positive, zero, or negative) with time as an orthogonal coordinate. Hence it seems that standard cosmology, based on the FLRW spacetime, has strayed from the stated essence of general relativity.
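For reference, a standard textbook form of the FLRW line element (standard notation: a(t) the scale factor, k the curvature constant), which makes the split described above explicit:

$$ds^2 = -c^2\,dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right]$$

Time enters only through the orthogonal $-c^2\,dt^2$ term and the scale factor, while the curvature k is carried entirely by the spatial 3-geometry.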
Between the end of the 19th century and the beginning of the 20th, there was a French teacher in Macau; he (or she) was teaching art at the Academy of Natural Science (格致书院). Does anybody know who this was?
Thanks a lot!
Schrödinger's self-adjoint operator H is crucial for the current quantum model of the hydrogen atom. It essentially specifies the stationary states and their energies. Then there is the Schrödinger unitary evolution equation, which tells how states change with time, and in this evolution equation the same operator H appears. Thus, H provides the "motionless" states, H gives the energies of these motionless states, and H is inserted into a unitary law of movement.
But this unitary evolution fails to explain or predict the physical transitions that occur between stationary states. Therefore, to fill the gap, the probabilistic interpretation of states was introduced. We then have two very different evolution laws: one is the deterministic unitary equation, and the other consists of random jumps between stationary states. The jumps openly violate the unitary evolution, and the unitary evolution does not allow the jumps. Yet both are simultaneously accepted by quantum theory, creating a most uncomfortable state of affairs.
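To make the tension concrete, here are the two laws in standard notation (nothing here beyond textbook quantum mechanics). The stationary states and energies come from the eigenvalue problem

$$H\,\psi_n = E_n\,\psi_n,$$

while the unitary evolution is

$$i\hbar\,\frac{\partial \psi}{\partial t} = H\,\psi, \qquad \psi(t) = e^{-iHt/\hbar}\,\psi(0).$$

If $\psi(0)=\psi_n$, then $\psi(t)=e^{-iE_n t/\hbar}\,\psi_n$: the state remains the same stationary state for all time, so the unitary law by itself never produces a jump between levels.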
And what if the quantum evolution equation is plainly wrong? Perhaps there are alternative ways to use H.
Imagine a model, or theory, where the stationary states and energies remain the very same ones specified by H, but with a continuous evolution different from the unitary one, in which an initial stationary state evolves deterministically into a final stationary state, with energy being continuously absorbed and radiated between the stationary energy levels. In this natural theory there is no use, nor need, for a probabilistic interpretation. The natural model for hydrogen, comprising a space of states, an energy observable and an evolution equation, is explained in
My question is: with this natural theory of atoms already elaborated, what are the chances of its acceptance by mainstream physics?
Professional scientists, in particular physicists and chemists, are well versed in the history of science, and modern communication hastens the diffusion of knowledge. Nevertheless, important scientific changes seem to require a lengthy process, including the disappearance of most leaders, as Max Planck noted: "They are not convinced, they die".
Scientists seem particularly conservative and incapable of admitting that their viewpoints are mistaken, as was the case long ago with the flat Earth, geocentrism, phlogiston, and other scientific misconceptions.
[I had heard of the Know-Nothing Party, but apparently the internet tells me that that was a disclaimer used by members of what became the American Party, which was anti-immigration in the mid-nineteenth century ... another area of discussion, though proponents today may often fall into the category for discussion here as well, but that is still a bit out-of-scope for this discussion.]
For historians and other history buffs out there, and those interested in current events: what do you see as the path that has been taken to arrive at popular anti-intellectual, anti-science views in politics? The rejection by some members of the US House of Representatives of corrections to (US) census undercounts - the rejection of sampling statistics - comes to mind, in addition to the usual comments on climate change.
And are there any similar anti-intellectualism movements to be found in history anywhere in the world, including ancient history, which anyone would care to share? Can you draw any parallels?
Reasoned comments and historical evidence are requested. I do not intend to make further comments but instead wish to hear what applicable history lessons you may find interesting regarding this topic.
Thank you.
Does experimental science have limits as a discipline? Much actual knowledge is not the consequence of repeated experiments. Are the sources of science limited to experimentation? Can other disciplines - history, unique experiences, philosophy, etc. - be more important for man?
In my studies many years ago, I came across the very influential thinker Alexander Bain. Most of his ideas are obsolete today, I know, but he was still an extremely influential person. I skimmed through his autobiography once, but I could not find any study of him by a modern scholar that would place him in historical perspective. I thought this was odd, considering who he was.
Does anyone know if there are any standard works on Bain? Nothing came up on Amazon.
Language, as an expression of the various kinds of 'knowledge', is subject to continuous transformations. I'd like to focus in particular on one of them in the field of scientific research.
Since science cannot critically verify its own assumptions, it is up to history, epistemology, philosophy and the analysis of language to deepen the horizons of pre-understanding of each scientific proposition. In particular, this is an understanding of reality based on the assumptions and tradition of antecedent interpretations, which precedes the direct experience of reality itself.
Popper was very attentive to the instrumental aspect of science (and therefore also to language), interested not in things in themselves but in those of their aspects that are verifiable through measurements. He therefore advised against interpreting theories as descriptions, or against using their results in practical applications. He recalled that, as "knowledge", science is nothing but a set of conjectures, highly informative guesses about the world which, although not verifiable (i.e. such that their truth could be demonstrated), can be subjected to strict critical controls.
This is evident from various texts, and Popper emphasized these ideas in 'The Logic of Scientific Discovery': "Science is not a system of certain assertions, or of assertions established once and for all, nor is it a system that progresses steadily towards a definitive state. Our science is not knowledge (episteme): it can never claim to have reached the truth, nor even a substitute for truth, such as probability...."
We do not know; we can only guess. Our attempts to know are guided by an unscientific, metaphysical belief in laws, in the regularities that we can uncover, discover.
It is a kind of approach not exempt from ethical questions, because the operation has fluid boundaries. The borders can be crossed, leading to the possibility of manipulation and abuse of power against the very identity and autonomy of the persons involved.
Like Bacon, we could describe our contemporary science - the method of reasoning that men today routinely apply to Nature - as consisting of hasty, premature advances and of prejudices. "But, once advanced, none of our anticipations is defended dogmatically. Our research method is not to defend them, to prove how right we were; on the contrary, we try to subvert them, using all the tools of our logical, mathematical and technical 'baggage'."
Hence the maximal caution: "The old scientific ideal of episteme, of absolutely certain and demonstrable knowledge, has proved an idol.
The demand for scientific objectivity makes it inevitable that every assertion of science remains necessarily and forever tentative. The wrong view of science betrays itself in its craving to be right; for it is not the possession of knowledge, of irrefutable truth, that makes the man of science, but the persistent and critical search for truth."
[In this regard I consulted the following texts: H. R. Schlette, Philosophie, Theologie, Ideologie. Erläuterung der Differenzen, Cologne, 1968 (Italian transl. Morcelliana, Brescia, 1970, pp. 56, 78); G. Gismondi, "The critique of ideology in the discourse on the foundations of science", Relata Technica, 4 (1972), 145-156; Id., Criticism and Ethics in Scientific Research, Marietti, Torino, 1978.]
Hermeneutics, then, applied to language, to human action and to ethics, allows us to articulate text and action. An action may be narrated because it is human life itself that deserves to be narrated; the narrative presents possible paths that the individual highlights, excluding others. Story and action also confirm the inter-subjective dimension of human beings. The story thoroughly presents the three moments of ethical reflection: describing, telling and prescribing.
In his 1963 book "Little Science, Big Science", Derek de Solla Price shows that science as a whole has been growing exponentially for 400 years. He hypothesises this to be the first part of a logistic curve. If his predictions were right, the growth of science should have started to decline by now. Are there recent measurements that can be compared to his 1963 estimates? And... was he right?
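As a minimal sketch (Python; the parameter values are made up for illustration, not Price's data) of why the hypothesis is hard to test from early data alone: far below its midpoint, a logistic curve is numerically indistinguishable from an exponential, so only measurements near the inflection point can confirm or refute the predicted slowdown.

```python
import numpy as np

# Logistic: N(t) = K / (1 + exp(-r (t - t0))); for t << t0 this is
# approximately K exp(r (t - t0)), i.e. pure exponential growth.
def logistic(t, K=1.0, r=0.05, t0=400.0):
    return K / (1.0 + np.exp(-r * (t - t0)))

def exponential(t, K=1.0, r=0.05, t0=400.0):
    return K * np.exp(r * (t - t0))

for t in (0.0, 100.0, 200.0):            # "early" times, well before t0
    print(t, logistic(t), exponential(t))
# The two columns agree to several significant figures: early growth
# data cannot distinguish Price's logistic from endless exponential.
```

So an empirical answer to "was he right?" needs recent measurements of, e.g., numbers of papers or journals, compared against the curve's predicted inflection.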
Through many discussions on ResearchGate, I have come to recognize that the majority of economists are still deeply influenced by Friedmanian methodology. One piece of evidence is the fact that they take little care over the economic consistency and relevance of a model. They spend enormous time and effort on "empirical studies" and discuss the results, but they rarely question whether the basic theory on which their model rests is sensible. This ubiquitous tendency has grave effects on economics: neglect of theory and indulgence in empirics. I wonder why people do not discuss this state of economics. Economic science should restore a more suitable balance between theory and empirics.
It is clear that we should distinguish two levels of Friedmanian methodology:
(1) Friedman's methodology and thought as written in the texts, more specifically in his article "The Methodology of Positive Economics" (the first essay in Essays in Positive Economics, 1953).
(2) The methodology that is believed to be Friedman's thought.
Apparently, (2) is much more important for this question. I have seen dozens of papers that examine Friedmanian methodology on the basis of his text. Many of them find that the widely spread understanding does not correctly reflect Friedman's original message. They may be right, but what matters here is the widely spread belief held in the name of Milton Friedman.
Verificationism (according to Wikipedia) is an epistemological and philosophical position that considers a criterion of verification necessary and sufficient for the acceptance or validation of a hypothesis, a theory, or a single statement or proposition. Essentially, verificationism says that a statement added to a scientific theory which cannot be verified is not necessarily false, but basically meaningless, because it cannot be demonstrated against the empirical evidence of the facts. There could in fact be multiple inherently logical statements for the explanation/interpretation of a certain phenomenon, of which, however, in principle only one is by definition true.
Nonsense does not mean false; it means only that the statement's truth value cannot be decided, so that such a proposition can have no claim to be cognitive or foundational in a scientific theory. A proposition is defined as any statement that may be assigned a truth value (in classical logic, true or false). A statement to which this value cannot be attributed is therefore devoid of verifiability and so, for this kind of epistemology, devoid of any sense, and is finally to be eliminated as mere opinion or metaphysical proposition. Verificationism is usually associated with the logical positivism of the Vienna Circle, in particular with one of its greatest exponents, Moritz Schlick, whose basic theses can be summarized as follows:
• Propositions with sense are those that can be verified empirically.
• Science, through the scientific method, is the cognitive activity par excellence, since it bases the truth of its propositions on this verificationist criterion.
• The propositions of metaphysics are meaningless, as they are based on illusory and unverifiable concepts. The propositions of metaphysics, says Carnap, express at most feelings or needs.
• The valid propositions are, as the English empiricist Hume had claimed, the analytical ones, which express relationships between ideas (like mathematical propositions), and the propositions that express facts (such as the propositions of physics). Mathematics, like logic, does not express anything about the world; it need not be empirically verifiable, but must serve to concatenate the verifiable and meaningful propositions among themselves, giving them the character of generality that contingent propositions lack.
• The purpose of philosophy is to perform a critique of knowledge in order to eliminate all nonsensical propositions that claim to be cognitive. The philosopher must be able to perform on language both a semantic analysis (the relationship between reality and language) and a syntactic analysis (the relation of signs as they are linked together).
Verificationism has as its structural basis the finding of a connection between statements and experience, that is, the sensations that give meaning to those statements. This connection is called verification.
The epistemological attitude that gives rise to verificationism can be found throughout the history of philosophy and science: as early as Greek philosophy, in Thomas Aquinas and William of Occam, and in English empiricism, positivism, and the empirio-criticism of Avenarius and Mach.
According to English empiricism (whose leading exponents may be considered Locke, Berkeley and Hume), the only source of knowledge is experience.
As Berkeley says, in fact, "the objects of human knowledge are either ideas really impressed on the senses, or ideas formed with the help of memory and imagination by compounding or dividing those perceived by the senses." So there is no way of formulating sentences or judgments other than from the data of experience, and the only way to verify their truth value is, again, to use experience. Judgments based on data that cannot be verified through experience thus make no sense and are to be rejected as unscientific.
A position that takes the consequences of empiricism seriously is Hume's version: considering that only experience can provide the truth value of a proposition, he rejects all propositions that claim universal validity. A law becomes true only if verified; but once it has been verified through experience, nothing can guarantee that the same experience will occur whenever similar conditions present themselves. The verification of an empirical proposition is always contingent, never necessary. It is therefore difficult, for Hume, to give a definitive foundation to science itself in the traditional sense, i.e. as a body of knowledge that is certain and necessary.
The sciences, says the positivist Comte, must seek the immutable laws of nature, which as such hold regardless of any contingent experience that shows them to the senses or that should occur whenever the law so provides.
Some positivists (proponents of the 'strong' principle of verification) note, however, that the principle of verifiability makes some metaphysical judgments significant, such as "The soul is immortal". Indeed, there is a method of verification: simply "wait a while and die". To prevent statements of this type from being equipped with sense, a stronger version of the principle of verifiability was elaborated. It states that a judgment has meaning only if it can be shown definitively true or false; i.e. there must be an experience that can exhibit this truth value.
This version is called strong because it excludes any knowledge that is not empirical or logical, and therefore excludes that sense can be given to any expression that is not the result of empirical knowledge or of logical deduction from empirical propositions. This version of verificationism would be criticized by some less radical positivists, such as Neurath and Carnap, for the simple fact that, if verification is necessary to give sense to a proposition, then even the principle of verifiability itself must be verified, and this is not possible.
Numerous propositions of common use, whose meaning seems clear from the terms we use, are unverified: for instance, statements about the past or the future, such as "Churchill sneezed 47 times in 1949" or "Tomorrow it will rain". These propositions can in principle be verified, since a method for their verification can be given; under the 'weak version' of the principle of verifiability they are therefore equipped with meaning, but under the 'strong version' they are only nonsense.
Assertions about the Absolute, and assertions of a metaphysical nature in general, are to be rejected, at least as propositions to which the positive verificationist method can be applied, even though this does not exclude their existence: trying to deny a metaphysical proposition has the same meaning as trying to prove it. Metaphysical propositions are therefore omitted, unrebutted.
Comte rejects so-called absolute empiricism, which states that any proposition not established by the facts is to be rejected as senseless and therefore cannot be taken as a scientific proposition.
Special mention must be made of mathematics: for Comte it is not a science but a language, and therefore the basis of every positive science. Mathematics, like logic, as the logical empiricists would later say, has the purpose of showing the connections between propositions in order to preserve their truth values, not to produce new ones. The propositions of mathematics are 'a priori' truths; as such they cannot be verified, and therefore they say nothing about the world, but tell us how the world must be spoken of after it has been experienced.
Perhaps the best-known critique of the principle of verifiability is provided by Popper. Though its main critic, he never abandons the convictions set out in the positivist manifesto, nor the idea that science has a rational and deductive structure, though one describable in ways other than those contemplated by Schlick. In particular, the principle of verification, in both its weak and strong versions, is abolished and replaced by that of falsifiability. This principle is in fact an admission of the impossibility for science to arrive at statements that can claim to be verified as they stand, and also a condemnation of the principle of induction when it claims to provide a basis for the formulation of necessary laws. Popper says that billions of checks are not enough to determine whether a given theory is certain; a single falsification is enough to show that it is not true. Carnap's criterion of controllability becomes the possibility for a statement to be subjected to falsification, and the structure of science, as Hume had already stated, is such that it does not confirm a hypothesis; at most it falsifies it. The experiments to which the laws of science are subjected are useful when they try to falsify those laws and the predictions derived from them, not when they try to verify them.
Criticisms burying verificationism come from so-called post-positivist epistemology, whose leading exponents are Kuhn, Lakatos and Feyerabend. In varying degrees, all three claim that a fact cannot be verified because bare facts do not even exist; they can only be represented within a theory already considered scientific. Therefore, there is no distinction between observational terms and theoretical terms, and even the concepts considered basic to science do not possess the same meaning when conceived within two different theories (think, for example, of the concept of mass for Newton and for Einstein). According to post-positivism, science itself is not even empirical, because its data are not empirically verifiable and there is no criterion of significance; that is, it is not possible to separate a scientific statement from one that concerns other human activities.
Finally, we may follow the position of Professor Franco Giudice, for whom, in "Testability and Meaning" (1936-1937), Rudolf Carnap recognizes that absolute verification in science is almost impossible. The criterion of significance must therefore change: the principle of verification, by which the meaning of a proposition is determined by the conditions of its verification (a proposition is significant if, and only if, there is an empirical method for deciding whether it is true or false; otherwise it is an insignificant pseudo-proposition), must be replaced by the concept of confirmation. A proposition is significant if, and only if, it is confirmable, and the verifiability of propositions consists only of gradually increasing confirmations. Thus, the acceptance or rejection of a proposition depends on the conventional decision to consider a given degree of confirmation as sufficient or insufficient.
Should hypotheses always be based on a theory? I will provide an example here without variable names. I am reading a paper where the authors argue that X (an action) should be related to Y (an emotion). To support this argument, the authors suggest that when individuals engage in X, they are more likely to feel a sense of absorption and thus should experience Y. There is no theory here to support the relationship between X and Y. They are also not proposing absorption as a mediator; they are just using this variable to explain why X should lead to Y. Would this argument be stronger if I used a theory to support the relationship between X and Y? Can someone refer me to a research paper that emphasizes the need for theory-driven hypotheses? Thanks!
I am quite surprised that everybody says Galileo was the first to describe the relativity of motion scientifically, contrary to the fact that at least Copernicus did it earlier, and in quite explicit form:
"Every observed change of place is caused by a motion of either the observed
object or the observer or, of course, by an unequal displacement of each. For when things move with equal speed in the same direction, the motion is not perceived, as between the observed object and the observer."
NICHOLAS COPERNICUS OF TORUŃ, THE REVOLUTIONS OF THE HEAVENLY SPHERES, 1543.
I am also surprised, from time to time, by statements that it was Galileo who proposed the heliocentric system.
It's an interesting aspect of the distortion of historical facts. Any thoughts, or other examples of similar injustice? Why does it take place?
This refers to the recent experiments of Radin et al.:
1) D. Radin, L. Michel, K. Galdamez, P. Wendland, R. Rickenbach and A. Delorme, Physics Essays, 25, 2, 157 (2012).
2) D. Radin, L. Michel, J. Johnston and A. Delorme, Physics Essays, 26, 4, 553 (2013).
These experiments show that observers can affect the outcome of a double-slit experiment, as evidenced by a definite change in the interference pattern.
This requires urgent attention from the scientific community, especially physicists.
If these observed effects are real, then we must have a scientific theory that can account for them.
I'm interested in comparing Indigenous research methods with those of other ancient cultures. Indigenous research methods are relatively well documented for Australian Aboriginals, New Zealand Maori and North American Indians. I was hoping to locate examples of other non-Western (non-Eurocentric) research methods used by cultures such as China, Africa, South America, India, etc. For example, what methodology did the Chinese use to develop their knowledge of Chinese medicine? I realise these methods may not have been documented, or may be documented in a non-English language. Any leads would be helpful at this stage.
While scientific cosmology rarely occurs in the work of Karl Popper, it is nevertheless a subject that interested him. The problem now is whether the falsifiability criterion can be used for cosmological theories.
For instance, there are certain ideas in cosmology which have never been refuted, but where the same methods are used over and over despite their lack of observational support: for instance, the multiverse idea (often used in string theory) and also the Wheeler-DeWitt equation (often used in quantum cosmology).
So do you think Popperian falsifiability can be applied to the science of cosmology too? Your comments are welcome.
My objective is to create and accumulate physical evidence, and to demonstrate irrefutable physical evidence, in order to prove that the existing definitions for software components and CBSE/CBSD are fundamentally flawed. Today, no computer science textbook introducing software components and CBSD (component-based design for software products) presents the assumptions (i.e. first principles) that resulted in such flawed definitions of software components and CBSD.
In real science, anything without irrefutable proof is an assumption. What are the undocumented scientific assumptions (or first principles) at the root of computer science that resulted in fundamentally flawed definitions for so-called software components and CBD (component-based design) of software products? Each of the definitions for each kind of so-called software component has no basis in reality but stands in clear contradiction to the facts we know about the physical functional components used in the CBD of physical products. What are the undocumented assumptions that led researchers to define the properties of software components without giving any consideration to the reality and facts we all know about physical functional components and the CBD of physical products?
Except for computer science and software engineering textbooks introducing software components and CBSD, I believe the first chapter of a textbook in any other scientific discipline discusses the first principles at the root of that discipline. Each of the discipline's definitions and concepts is derived by relying on the first principles and observations (including empirical results) and by applying sound rational reasoning. For example, any textbook on basic science for school children starts by teaching that "Copernicus discovered that the Sun is at the center". This is one of the first principles at the root of our scientific knowledge, so if it were wrong, a large portion of our scientific knowledge would end up invalid.
I have asked countless experts why we need a different and new description (i.e. definitions and/or lists of properties) for software components and CBSD, when the new description, properties and observations are in clear contradiction to the facts, concepts and observations we know about physical functional components and the CBD of large physical products (having at least a dozen physical functional components). I was given many excuses/answers, such as: software is different/unique, or it is impossible to invent software components equivalent to physical functional components.
All such excuses are mere undocumented assumptions. It is impossible to find any evidence that anyone ever validated these assumptions. Such assumptions must be documented, but no textbook or paper on software components even mentions the baseless assumptions relied on to conclude that each kind of useful part is a kind of software component (for example, that reusable software parts are a kind of software component). CBD for software is then defined as the use of such fake components. But using highly reusable ingredient parts (e.g. plastic, steel, cement, alloy, or silicon in wafers) is not CBD. If anyone asks 10 different experts for a definition/description of software components, he gets 10 different answers (without any basis in the reality we know about physical components). Only God has more mysterious descriptions, as if no one alive had ever seen a physical functional component.
The existing descriptions and definitions of so-called CBSD and so-called software components were invented out of thin air (based on wishful thinking), relying on such undocumented myths. Today many experts defend the definitions by treating those undocumented myths as inalienable truths of nature, not much different from how researchers defended epicycles by relying on the assumption 'the Earth is static' up until 500 years ago. Also, most of the concepts of CBSD and software components created during the past 50 years were derived by relying on these fundamentally flawed definitions of software components/CBSD (where the definitions, properties and descriptions are rooted in undocumented and unsubstantiated assumptions).
Is there any proof that it is impossible to invent real software components, equivalent to physical functional components, for achieving real CBSD (CBD for software products), where real CBSD is equivalent to the CBD of large physical products (having at least a dozen physical functional components)? There exists no proof that such assumptions are accurate, so it is wrong to rely on them. It is a fundamental error if such assumptions (i.e. first principles) are not documented.
I strongly believe such assumptions must be documented in the first chapters of each of the respective scientific disciplines, because doing so keeps the assumptions on the radar of our collective consciousness and compels future researchers to validate them (i.e. the first principles), for example when technology makes sufficient progress for validating the assumptions.
I am not saying it was wrong to make such assumptions/definitions for software components 50 years ago. But it is a huge error not to document the assumptions relied upon in making such different and new definitions (ignoring reality and known facts). Such assumptions may have been acceptable and true 50 years ago (when computer science and software engineering were in their infancy and assembly language and FORTRAN were leading-edge languages), but are they still valid? If each of the first principles (i.e. assumptions) is a proven fact, who proved it, and where can I find the proof? Such information must be presented in the first chapters.
In real science, anything without irrefutable proof is an assumption. Are such undocumented, unsubstantiated assumptions facts? Don't computer science textbooks on software components need to document proof for such assumptions before relying on these speculative, unsubstantiated assumptions to define the nature and properties of software components? All the definitions and concepts for software components and CBSD could be wrong if the undocumented and unsubstantiated assumptions turn out to contain huge errors.
My objective is to provide physical evidence (i) to prove that it is possible to discover accurate descriptions of physical functional components and of the CBD of large physical products (having at least a dozen physical functional components), and (ii) to prove that, once the accurate descriptions are discovered, it is not hard to invent real software components (satisfying the accurate description of physical functional components) for achieving real CBSD (satisfying the accurate description of the CBD of physical products).
It is almost impossible to expose an error at the root of a deeply entrenched paradigm such as CBSE/CBSD (evolving for 50 years) or the geocentric paradigm (which evolved for 1000 years). For example, the assumption "the Earth is static" was considered an inalienable truth (not only of nature but also of God/the Bible) for thousands of years, but it ended up being a flaw that sidetracked the research efforts of countless researchers in the basic sciences into a scientific crisis. Now we know that no meaningful scientific progress would have been possible if that error had not been exposed. The only possible way to expose such an error is to show physical evidence, even if most experts refuse to see it, by finding the few experts who are willing to examine the physical evidence with an open mind.
I have a lot of physical evidence and am now in the process of building a team of engineers, and the necessary tools, for building software applications by assembling real software components to achieve real CBSD (e.g. achieving the CBD-structure http://real-software-components.com/CBD/CBD-structure.html by using the CBD-process http://real-software-components.com/CBD/CBD-process.html). When our tools and team are ready, we should be able to build any GUI application by assembling real software components.
In real science, anything without irrefutable proof is an assumption. Any real scientific discipline must document each of the assumptions (i.e. first principles) at its root before relying on them to derive concepts, definitions and observations (which can be perceived as accurate only if the assumptions are proven to be true): https://www.researchgate.net/publication/273897031_In_real_science_anything_not_having_proof_is_an_assumption_and_such_assumptions_must_be_documented_before_relying_on_them_to_create_definitionsconcepts
I have tried to write papers and give presentations to educate people about this error, but none of them worked. I learned the hard way that this kind of complex paradigm shift can't happen in a couple of hours of presentation or by reading 15-to-20-page papers. The only way left for me to expose the flawed first principles at the root of a deeply entrenched paradigm is to find experts willing to see physical evidence, and to show them that evidence: https://www.researchgate.net/publication/273897524_What_kind_of_physical_evidence_is_needed__How_can_I_provide_such_physical_evidence_to_expose_undocumented_and_flawed_assumptions_at_the_root_of_definitions_for_CBSDcomponents
So I am planning to work with willing customers to build their applications, which gives us a few weeks, or even a couple of months, to work with them to build their software by identifying 'self-contained features and functionality' that can be designed as replaceable components to achieve real CBSD (see the code sketch below).
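As an illustration only, here is a minimal sketch in Python of the 'replaceable component' idea described above; the names (SpellChecker, Editor) are hypothetical and are not the author's actual CBD tooling or notation. The point is that the application depends only on a component interface, so a concrete component can be unplugged and replaced without modifying the application.

```python
from abc import ABC, abstractmethod

class SpellChecker(ABC):
    """Contract for a self-contained, replaceable component."""
    @abstractmethod
    def misspelled(self, text: str) -> list:
        """Return the words not found in the component's dictionary."""

class SimpleSpellChecker(SpellChecker):
    def __init__(self, words):
        self.words = set(words)

    def misspelled(self, text: str) -> list:
        return [w for w in text.split() if w.lower() not in self.words]

class Editor:
    """The application: it knows only the SpellChecker interface,
    so any conforming component can be plugged in or swapped out."""
    def __init__(self, checker: SpellChecker):
        self.checker = checker

    def report(self, text: str) -> str:
        bad = self.checker.misspelled(text)
        return "OK" if not bad else "Misspelled: " + ", ".join(bad)

editor = Editor(SimpleSpellChecker(["the", "cat", "sat"]))
print(editor.report("the cat zat"))  # -> Misspelled: zat
```

Whether this kind of interface-level replaceability is 'equivalent to physical functional components' in the author's sense is exactly what the question is about.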
How can I find experts or companies willing to work with us and see the physical evidence, for example by allowing us to work with them to implement their applications as a CBD-structure? What kind of physical evidence would be compelling to anyone willing to give us a chance (at no cost to them, since we can work for free to provide compelling physical evidence)? I have failed so many times in this complex effort, so I am not sure what could work. Could this work?
Best Regards,
Raju
I am looking for information on the history of the development of statistical significance formulae: the mathematical calculations, and why they were chosen.
I would also like to learn the same about effect size.
Thanks!
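This is not the history asked for, but as a point of orientation, here is a minimal sketch (plain NumPy; the function and variable names are my own, chosen for illustration) of the two kinds of quantity in question: the classic two-sample t statistic and Cohen's d effect size, both built from the same pooled variance.

```python
import numpy as np

def t_and_cohens_d(a, b):
    """Equal-variance two-sample t statistic and Cohen's d."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled (unbiased) variance across the two samples.
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    diff = a.mean() - b.mean()
    t = diff / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))  # significance test statistic
    d = diff / np.sqrt(sp2)                          # standardized effect size
    return t, d

t, d = t_and_cohens_d([5.1, 4.9, 5.3, 5.0], [4.2, 4.5, 4.1, 4.4])
print(f"t = {t:.2f}, Cohen's d = {d:.2f}")
```

The contrast is the historically interesting part: t grows with sample size (through the 1/na + 1/nb term) while d does not, which is one reason effect size was later promoted as a complement to significance testing.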
It is known that physics is an empirical science, in the sense that all propositions should be verified by experiments. But Bertrand Russell once remarked that the principle of verifiability itself cannot be verified; therefore it cannot be considered a principle of science.
In a 1917 paper, Russell suggested sense-data as a way around the problem of verifiability in physical science (http://selfpace.uconn.edu/class/ana/RussellRelationSenseData.pdf), but later he changed his mind; see http://www.mcps.umn.edu/philosophy/12_8savage.pdf
So what do you think? Is there a role for sense-data in the epistemology of modern physics?
Section II of “The Fixation of Belief” [2] opens dramatically with a one-premise argument, Peirce’s truth-preservation argument (PTPA), concluding that truth-preservation is necessary and sufficient for validity: he uses ‘good’ interchangeably with ‘valid’. He premises an epistemic function and concludes an ontic nature.
The object of reasoning is determining from what we know something not known.
Consequently, reasoning is good if it gives true conclusions from true premises, and not otherwise.
Assuming Peirce’s premise for purposes of discussion, it becomes clear that PTPA is a formal fallacy: reasoning that concludes one of its known premises is truth-preserving without “determining” something not known. It is conceivable that Peirce’s conclusion be false with his premise true [1, pp. 19ff].
The above invalidation of PTPA overlooks epistemically important points that independently invalidate PTPA: nothing in the conclusion is about reasoning producing knowledge of the conclusion from premises known to be true. In fact, nothing is about premises known to be true, nothing is about conclusions known to be true, and nothing is about reasoning being knowledge-preserving.
The following is an emended form of PTPA.
One object of reasoning is determining from what we know something not known.
Consequently, reasoning is good if it gives knowledge of true conclusions not among the premises from premises known to be true, and not otherwise.
PTPA has other flaws. For example, besides being a formal non sequitur, PTPA is also a petitio principii [1, pp. 34ff]. Peirce’s premise not only isn’t known to be true (which would be enough to establish question-begging); it is false: reasoning also determines consequences of premises not known to be true [1, pp. 17f].
[1] JOHN CORCORAN, Argumentations and logic, Argumentation, vol. 3 (1989), pp. 17–43.
[2] CHARLES SANDERS PEIRCE, The fixation of belief, Popular Science Monthly, vol. 12 (1877), pp. 1–15.
Q1 Did Peirce ever retract PTPA?
Q2 Has PTPA been discussed in the literature?
Q3 Did Peirce ever recognize consequence-preservation as a desideratum of reasoning?
Q4 Did Peirce ever recognize knowledge-preservation as a desideratum of reasoning?
Q5 Did Peirce ever retract the premise or the conclusion of PTPA?
In The Nature of the Physical World, Eddington wrote:
The principle of indeterminacy. Thus far we have shown that modern physics is drifting away from the postulate that the future is predetermined, ignoring rather than deliberately rejecting it. With the discovery of the Principle of Indeterminacy its attitude has become definitely hostile.
Let us take the simplest case in which we think we can predict the future. Suppose we have a particle with known position and velocity at the present instant. Assuming that nothing interferes with it we can predict the position at a subsequent instant. ... It is just this simple prediction which the principle of indeterminacy expressly forbids. It states that we cannot know accurately both the velocity and the position of a particle at the present instant.
--end quotation
According to Eddington, then, we cannot predict the future of a particular particle beyond a level of accuracy related to the Planck constant (in QM we can predict only the statistics of results for similar particles). The outcome for a particular particle will fall within a range of possibilities, and this range can be predicted. But the specific outcome for a particular particle is, we might say, sub-causal, and not subject to prediction. So, is universal causality (the claim that every event has a cause, and that when the same cause is repeated the same result will follow) shown to be false, as Eddington holds?
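For reference, the quantitative bound Eddington is paraphrasing is the Heisenberg relation (standard modern form):

$$\Delta x\,\Delta p \ \ge\ \frac{\hbar}{2}$$

so the product of the position and momentum uncertainties of a particle can never fall below a constant of the order of Planck's constant, no matter how the measurement is refined.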
It is true that mathematics was done through argumentation and discourse, or rhetoric, in ancient times. The books of Euclid's Elements (thirteen in all) contain no symbols for describing the behavior of quantities, apart from the labels on geometric objects. The symbols of arithmetic (=, +, -, ×, ÷) were created only between the 15th and 17th centuries, which most people find hard to believe - you read that right. The plus and minus signs appeared in print in the late 15th century, the equals sign "=" in 1557, the multiplication symbol "×" in 1631, and the division sign "÷" in 1659. It runs contrary to most people's beliefs how recent the creation of these symbols is.
It is because of this lack of symbols that mathematics did not develop as fast as it has since symbols were introduced, making representation, the writing of expressions, and algebraic manipulation handy, enjoyable and easy.
These things paved the way for the progress of mathematics into a galaxy - a galaxy of mathematics. What is your take on this issue, and what do you know of the chronology of symbol creation and the advances mathematics has made because of it?
(See Joseph Mazur, "Notation, notation, notation: a brief history of mathematical symbols", theguardian.com.)
The British astrophysicist A. S. Eddington, interpreting QM, wrote (1928): "It has become doubtful whether it will ever be possible to construct a physical world solely out of the knowable - the guiding principle of our macroscopic theories. ...It seems more likely that we must be content to admit a mixture of the knowable and the unknowable. ...This means a denial of determinism, because the data required for a prediction of the future will include the unknowable elements of the past. I think it was Heisenberg who said, 'The question whether from a complete knowledge of the past we can predict the future, does not arise because a complete knowledge of the past involves a self-contradiction.' "
Does the uncertainty principle imply, then, that particular elements of the world are unknowable - that some things are knowable and others not, as Eddington has it? More generally, do results in physics tell us something substantial about epistemology, the theory of knowledge? Does epistemology thus have an empirical basis, or empirical conditions it must adequately meet?
Many scientists differentiate the hard physical sciences from philosophy; some even say "that's not science, it's philosophy". Are they missing the point in a big way?
I recently published my book "The Origin of Science", which can be downloaded at https://www.researchgate.net/profile/Louis_Liebenberg/publications/ . I am interested in alternative theories on the origin of science, and in how this debate can lead to a better understanding of how our ability for scientific reasoning evolved.
Is it reasonable to use these terms?
A number of papers were published a long time ago but still receive many citations.
If it is possible, then is it predictable?
Can citations be a suitable measure for judging the useful age of a paper?
Which papers have a longer useful lifetime, or a later expiry date?
Thanks for your inputs.
Back in my 2nd semester, I still remember those bored faces trying to hide their yawning during lectures on the History of Science. But I found the subject unexpectedly interesting: learning the manner of approach of ancient philosophers and naturalists was quite exciting. It seems to me that most students neglect this valuable subject because their minds are preoccupied with the notion that most of the things taught in this course - the ancients' manner of thinking about the cosmos and the earth, their perspective on health and medicine - are by now apparent to everybody. But what they miss is what they ought to learn: the hard work, the practices, the modes of approach, the determination and dedication of those days when everything seemed mysterious, when there was no such thing as 'apparent'.
So what more can we learn from our forefathers? And how can this subject be popularized, especially among youngsters?
Is there a relationship between history of science and philosophy of science?