Questions related to Rationality
EPISTEMOLOGY OF EVER PUSHING THE DEFINITIONS
OF SYSTEMIC CATEGORIES AND AXIOMS
Raphael Neelamkavil, Ph.D., Dr. phil.
We discuss here the continuous, never-ending dimensionality of truth in philosophy, science, philosophical cosmology, etc. (in my context, also in Gravitational Coalescence Cosmology, GCC). The present work on a new philosophical cosmology is based on general-ontologically validated epistemological truth-probabilism. This spells out the human tendency to articulate general- and physical-ontological foundations (axiomatic Categorial Laws of metaphysics) that will never be fixed forever and will be ever better defined; they are therefore clearly and continuously dimensional concepts, each continuing inexhaustibly along the very dimension of the notion or principle under consideration.
Theoretical foundations that can follow such continuous dimensionality do not, taken together in their implications, indicate our possession of any truth in its alleged correspondence to the totality of all processes (Reality-in-total) ontologically committed to. They indicate, rather, that progress is being made in adequately capturing, or corresponding to, the ideal continuous dimension of what is being sought in human intellectual, technological, and cultural accomplishments – thanks to the logical, epistemological, and ontological implications of Kurt Gödel’s mathematical and logical achievements. [For the achievements of Gödel, see Torkel Franzén 2004: 1-11; see also Richard Tieszen 2011. For a detailed cosmological, epistemological, and ontological treatment, see my books Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 2015, and Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 2018.]
Progress is being made not merely in the sciences, the arts, human institutions, etc.; it is concretely taking place in philosophy too. The cumulative effect of progress in philosophy is not as easily visible as in many other disciplines, because philosophy is to some extent philosopher-based and system-based.
The problem of Gödel’s incompleteness theorem stems from the incompleteness of systems that build themselves up with consistency from primitive notions and axioms: “So every formal system of arithmetic cannot derive the assertion of its own consistency, provided that it is consistent.” [Joseph Vidal-Rosset 2006: 56] But the reason for this innate incompleteness is the natural rigidity in the definitions of the primitive notions (Categories) and axioms. Such rigidity stems from the finitely symbolic nature of representations derived from the denotative function of denotatively defined universals / concepts.
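For readers who want the formal content being invoked, the two theorems may be stated schematically in their standard textbook (Rosser-strengthened) form; this is a reminder, not the author's own formulation:

```latex
\textbf{First incompleteness theorem.} For any consistent, effectively
axiomatized theory $T$ extending elementary arithmetic, there is a
sentence $G_T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T .
\]
\textbf{Second incompleteness theorem} (the form quoted from
Vidal-Rosset above): if $T$ is consistent, then
\[
  T \nvdash \mathrm{Con}(T),
\]
where $\mathrm{Con}(T)$ is the arithmetized statement of $T$'s own
consistency.
```

The second theorem is exactly the claim quoted above: a consistent formal system of arithmetic cannot derive the assertion of its own consistency.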
Here the system does not sufficiently recognize connotative universals in consciousness, which are the ideal reflections of the ontological universalities / commonalities in the processes being studied. In that case, the issue stems from still deeper realms: the ever-abiding dubitability of any sort of denotative definitions of the primitive notions and axioms from which systems start off. This is true of all sciences. That is, we need great flexibility in the definitions of primitive notions and axioms. This flexibility is what I have called “pushing Categories and axioms”. In which case, why not consider all sciences and philosophies as parts of one generalized science facilitating such flexibility?
I do not suggest that the general patterns in human thought or philosophy hold within themselves realizations merely of the implications of Gödel’s theorems [Torkel Franzén 2005: 77ff, 137ff] without the possibility of betterment of theories and systems. Truth can be conceived and defined in any rigorous axiomatic system, where foundational incompleteness is systemically built in from the possibility, after Gödel, of improving completeness, provided the system can follow the method of indefinitely pushing back the ontological and logical limits of the definitions of (1) the axioms and sub-axioms as such, into more fundamental axioms or into more adequate definitions of the same axioms, and (2) the meanings of the primitive notions, by reason of their definitions. I shall call this solution the method of “pushing Categories and axioms” into more fundamental realms in their definitions. This is the epistemological-methodological foundation of systemic science, namely, the science of all sciences.
This manner of procedure is the most fundamental epistemological ingredient of progress in systems, and it is what happens in history when systems are overhauled or overwhelmed, in parts or as wholes. Without such pushing of the definitional limits of the basic Categories (primitive notions, metaphysical Laws) and of the axioms already created in any system, there is no foundation-building in systems of any kind, especially now that we have proofs of this necessity in the logical, epistemological, and ontological implications of the work of Gödel.
This fact will (1) positively relativize the concept of philosophical, mathematical, and scientific truth, and (2) negatively highlight the tendency of human intellectual, technological, cultural, political, and religious institutions to fossilize truths. Not relativistic truth-probabilism, but clear, adequate, and applicable systemism with ever higher truth-probabilities, is to be the foundation of all human thought, including mathematics and logic. This is the justification for the creation of the systemic, axiomatic foundations of the science of all sciences. This would also satisfy postmodern philosophies, with their Socratic effect upon philosophies and sciences, and permit philosophy to find surer but ever more flexible paths.
I define: Logic is the science of the best intersubjectively rational consequence of ever higher truth-probability in statements. Epistemology is the science of the justifications for the fact and manner of achieving rationally explicable consequence, in a spirally broadening and deepening manner, serving to achieve ever better approximations of the epistemological ideal of Reality-in-general. (Einaic) Ontology is the rationally consequent science of the totality of existents, its parts, and their sine qua nons, in terms of the To Be (Einai) of Reality-in-total and/or the to be (einai) of its parts (reality-in-particular), serving to achieve ever better approximations of the epistemological ideal of Reality-in-total.
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
PHYSICAL AND EXACT SCIENCES AND AXIOMATIC PHILOSOPHY:
Raphael Neelamkavil, Ph.D., Dr. phil.
1. WHY SHOULD PHYSICS AND COSMOLOGY BE GROUNDED?
I am surprised each time some physicists tell me that either the electromagnetic (EM) or the gravitational (G) form of energy, or both, do not exist – that EM and G are “existent” neither like nor unlike material bodies – but that EM and G are to be treated or expressed as mathematical waves or particles propagated from material objects, which of course exist for all the sciences.
Some of them put in all their energies to show that both EM and G are mere mathematical objects, fields, etc., and not physically existent objects or fields of energy emissions that become propagations from material bodies. If they are propagations from material bodies, then their nature too would have to be similar to that of material bodies! This is something that the mathematical realists of theoretical physics and cosmology cannot bear!
This is similar in effect to Newton and his followers honestly and religiously thinking that at least gravitation, and perhaps also other energies, are just miraculously non-bodily actions at a distance, without any propagation particles / wavicles. But I admit that I explained certain things in the first paragraph above as if I myself were a Newtonian. This was deliberate.
Even in the 21st century, we must be sharply aware that, for more than 120 years now, the General Theory of Relativity, with its various versions and especially its merely mathematical interpretations, has succeeded in casting and maintaining a terrifying veil of mathematical miracles over the minds of many scientists – miracles such as mere spacetime curvature being the meaning of gravitation and of all other sorts of fields. The mathematics did not need existence, and hence gravitation did not exist! But the same persons did not create a theory whereby the mathematics does not need the existence of the material world, and hence the material world does not exist!
A similar veil has been installed by quantum physics on the minds of many physicists and their audiences. We do not discuss it here. I have constructed, in four published books, a systemic manner of understanding these problems in cosmology and quantum physics. I do not claim perfection in any of my attempts; hence I keep perfecting my efforts in the course of time, and hope to achieve some improvement. The following is a very short attempt to summarize one important point in this effort, concerning physics, cosmology, and the philosophy of physics and of cosmology.
There exists a tradition of lapping up whatever physicists may say about their observable and unobservable constructs, based on their own manner of using mathematics. The mathematics used is never transparent. Hence, the reader or the audience may not have the ability to make judgements based on the minimum physical ontology expected of physicists. I believe that this should stop forever, at least in the minds of physicists. Moreover, physicists are not to behave like magicians. Their readers and audiences should not practice religious faithfulness to them; nor should physicists expect it from them.
2. ONTOLOGICALLY QUALITATIVE NATURE OF INVARIANTS
When the search is for the foundations of any science, it is in fact a search for the invariant aspects of all the realities of that science, and not merely for the invariant aspects of some parts of those realities (object-set/s), methods, conclusions, etc. The latter does not suffice for a science to maximize success, because any exclusive search for the foundations of a specific object-set, or of the discourse of that specific object-set, will further require foundations in the totality of all specific object-sets and their discourse.
We find ourselves in a tradition that believes that proportionality quantities are to be taken as the invariables in physics. But I have instead reduced to universal qualities the quantitative-structural aspect of all sciences, which is represented in mathematics as the ontological quantities dealt with in science. The real invariants of physics are not the ontological quantities, or the proportionalities of certain quantities, treated in physics.
The latter, being only the constant quantities, are one kind of ontological quality, namely, (1) the quantitatively expressible qualities of processes; e.g., ‘quantity’, ‘one’, ‘addition’, etc. are explicable, respectively, as the qualities ‘being a specific quantity’, ‘being a unity’, ‘togetherness of two or more units’, etc. The other kind is (2) the ontological qualities of processes in general (say, malleability, toughness, colour, redness, etc.), which cannot directly be expressed as ontological quantities of processes. This shows that pure ontological qualities are a more general notion than ontological quantities and include the latter.
Explaining ontological qualities in terms of physical quantities cannot be done directly by fundamental physical quantities, but only by physical properties that involve fundamental physical quantities. Properties are a mix mainly of ontological qualities, and of course include ontological quantities, of which some are the fundamental physical quantities. Hence, the invariants must be qualities that are generative of, and apply to, both quantities and non-quantities. These invariants, then, are fully qualitative.
If the invariants apply to all physical processes, these invariants are qualities ontologically universal to all of them in the specified group. Out of these are constructed properties, by mixing many qualitative and quantitatively qualitative universals. Clearly, universals applying to all existents are the real invariants of all Reality – a matter to be discussed later.
Since universals are all qualitative, and some of them are quantitative as qualities, ontological qualities are broader in scope than the mathematical; for the moment mathematics uses quantities, the use is not of quantities devoid of qualities, but of the quantitative variety of general / universal qualities.
Qualities can also behave as some of the primitive notions that underlie all of physics and the other sciences – but this will not exhaust the most necessary foundations of physics and the other sciences, because these sciences require the general qualities of all existents, and not merely those of mathematics. These are the axiomatically formulable Categorial notions of philosophy, which is thus a general science.
In short, quantitative proportionalities as invariants are very partial with respect to existent processes and their totality. Naturally, philosophy too needs general qualities, and not merely quantitative qualities, on which to base the discipline.
3. DIFFERENCES IN FOUNDATIONS: EXACT AND NATURAL SCIENCES AND PHILOSOPHY
We see many theories in physics, mathematics, etc. becoming extremely axiomatic and rigorous. They call themselves, or attempt to be, as quantitative as possible. But are adequate comparisons between mathematics, the physical sciences, the biological sciences, the human sciences, and philosophy, and an adequate adaptation of the axiomatic method, possible by creating a system of all exact, physical, and human sciences that depends only on the quantitatively qualitative proportionalities and calls them invariables?
They cannot do well enough to explain Reality-in-total, because Reality-in-total primarily involves all sorts of ontological universals that are purely qualitative, of which some are the most fundamental, proportionality-type, quantitative invariables of all physical existents in their specificity and in their totality in their natural kinds. As the inquiry comes to Reality-in-total, ontological qualitative universals must come into the picture. Hence, merely quantitative (mathematical) explanations do not exhaust the explanation of Reality-in-total.
Existence as individuals and existence in groups are not differentiable and systematizable in terms of quantitatively qualitative universals alone; both qualitative and quantitatively qualitative universals are necessary for this. Both together are general qualities pertaining to existents in their processual aspect, not merely in their separation from each other. Therefore, the primitive notions (traditionally called Categories) of Reality-in-total must be ontological qualitative universals involving both the qualitative and the quantitative aspects. The most basic of the universals that pertain properly to Reality-in-total are now to be found.
Can the primitive notions (Categories) and axioms of the said sciences converge so that the axioms of a system of Reality take shape from a set of the highest possible ontological Categories as simple sentential formulations of the Categories which directly imply existents? This must be deemed necessary for philosophy, natural sciences, and human sciences, because these deal with existents, unlike the formal sciences that deal only with the qualitatively quantitative form of arguments.
Thus, in the case of mathematics and logic there can be various sorts of quantitative and qualitative primitive notions (categories) and then axioms that use the primitive notions in a manner that adds some essential, pre-defined, operations. But the sciences and philosophy need also the existence of their object-processes. For this reason, the primitive axioms can be simple sentential formulations involving the Categories and nothing else. This is in order to avoid indirect existence statements and to involve existence in terms exclusively of the Categories.
Further, the sciences together could possess just one set of sufficiently common primitive notions of all knowledge, from which also the respective primitive notions and axioms of mathematics, logic, physical and human sciences, and philosophy may be derived. I support this view because the physical-ontological Categories involving the existence of Reality and realities, in my opinion, must be most general and fully exhaustive of the notion of To Be (existence) in a qualitatively universal manner that is applicable to all existents in their individual processual and total processual senses.
Today the nexus, or interface, of the sciences and philosophies is in a crisis: a dichotomy between truth and reality. Most scientists, philosophers, and common people rush after “truths”. But who, in scientific and philosophical practice, wants to draw out, to their possible limits, the consequences of the fact that we can at most have ever better truths, and not final truths as such?
Finalized truths as such may be arrived at only in cases where there is a natural and inevitable availability of an absolute right to use the logical Laws of Identity, Contradiction, and Excluded Middle, especially in order to decide between concepts related to the existence and non-existence of anything out there.
Practically, very few may be seen generalizing upon, and extrapolating from, this metaphysical and logical state of affairs beyond its epistemological consequences. In the name of practicality, ever fewer academics today want to connect ever broader truths, compatible with Reality-in-total, by drawing from the available and imaginable commonalities of both.
The only thinkable way to accentuate the process of access to ever broader truths compatible with Reality-in-total is to look for the truest possible of all truths, with foundations in existence (nominal) / existing (gerund) / To Be (verbal). The truest are those propositions to which the Laws of Identity, Contradiction, and Excluded Middle can best be applied. The truest are generalizable and extendable not merely epistemologically, but also metaphysically, physical-ontologically, mathematically, biologically, human-scientifically, etc.
The agents that permit generalization and extrapolation are the axioms, which are the tautologically sentential formulations of the most fundamental of all notions (Categories) and imply nothing but the Categories of all that exist – and that, too, with respect to the existence of Reality-in-total. These purely physical-ontological implications of existence are what I analyze further in the present work. One may wonder how these purely metaphysical, physical-ontological axioms and their Categories can be applicable to sciences other than physics and philosophy.
My justification is as follows: Take for example the case of the commonality of foundations of mathematics, logic, the sciences, philosophy, and language. The notions that may be taken as the primitive notions of mathematics were born not from a non-existent virtual world but instead from the human capacity of spatial, temporal, quantitatively qualitative, and purely qualitative imagination.
I have already been working to show that qualitative quantitativeness – the qualitative having to do with the ontological universals of existents, expressed in terms of adjectives; the quantitative comprising notions based on spatial and temporal imagination, where it should be kept in mind that space-time are epistemically measuremental – may be seen to be present, in its elements, in mathematics, logic, the sciences, philosophy, and language.
The agents I use for this are: ‘ontological universals’, ‘connotative universals’, and ‘denotative universals’. In my opinion, the physical-ontological basis of these must and can be established in terms merely of the Categories of Extension-Change, which you find being discussed briefly here.
Pitiably, most scientists and philosophers forget that following the exhaustively physical-ontological implications of To Be in the foundations of science and philosophy is the best way to approach Reality well enough in order to derive the best possible of truths and their probable derivatives. Most of them forget that we need to rush after Reality, not merely after truths and truths about specific processes.
4. SYSTEMIC FOUNDATIONS VS. EXISTENCE/TS, NON-EXISTENCE/TS
4.1. Basis of Axiomatizing Science and Philosophy
The problem of axiomatizing philosophy, and/or philosophy of science, and/or all the sciences together is that we need to somehow bring in the elemental aspects of existence and existents, and absorb the elemental aspects of non-existence and non-existent objects that pertain to existents. Here it should be mentioned that axiomatizing mathematics and logic does not serve the axiomatization of philosophy, and/or philosophy of science, and/or all the sciences together. So far in the history of philosophy and science we have done just this, plus attempts to axiomatize the sciences separately or together by ignoring the elemental aspects of non-existence and non-existent objects that pertain to existents.
Existence (To Be) is not a condition for the possibility of existence of Reality-in-total or specific processual objects, but instead, To Be is the primary condition for all thought, feeling, sensation, dreaming, etc. All other conditions are secondary to this. If To Be is necessary as the condition for the possibility of any philosophy and science as discourse, we need to be axiomatic in philosophy and science about (1) existence (To Be, which is of all that exist) and/or (2) the direct and exhaustive implications of existence.
It is impossible to define existence without using words that involve existence. But it is possible to discover the exhaustive implications of To Be in order to use them in all discourse. Therefore, towards the end of this short document, I shall name what could be the inevitable primitive notions that are exhaustive of To Be and that may be used to create axioms for both philosophy and science together.
To put it differently, I attempt here to base all philosophy and science on the concept of existence of Reality-in-total as whatever it is, by deriving from the concept of the existence of all that exist the only possible (i.e., the exhaustive) implications of To Be.
Of course, the basic logical notions of identity and contradiction will have to be used here, though with less danger than when we use them in statements on other, less fundamental notions. I would justify their use here as rational inevitabilities in the foundations, not as inevitabilities in the details that issue later. The inevitabilities in the later details need never be realized as inevitabilities, because To Be implies some fundamental notions which will take care of this.
That is, the various ways in which the principles of identity and contradiction should be seen as inexact and inappropriate may be discovered in the fields of derivation beyond the provinces of the fundamental Categorial implications of To Be. This latter part of the claims is not to be discussed here, because it involves much more than logic – in fact, a new conception of logic, which I would term systemic logic.
Let me come to the matter that I promised in the name of the foundations of ‘Axiomatic Philosophy and Science’. First of all, to exist is not to be merely nothing. In this statement I have made use of the Laws of Identity, Non-Contradiction, and Excluded Middle at one go, in that whatever is must be whatever it is, and not its opposite, which is nothing but nothing, nor a middle point between the two extremes.
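The three laws used at one go in that statement can be written out schematically; these are the standard propositional and first-order forms, given only as a reminder of what is being invoked:

```latex
\begin{align*}
\text{Identity:} \quad & \forall x\,(x = x)\\
\text{Non-Contradiction:} \quad & \neg\,(P \land \neg P)\\
\text{Excluded Middle:} \quad & P \lor \neg P
\end{align*}
```

Applied to existence: an existent is what it is (Identity), it cannot both be and not be (Non-Contradiction), and there is no third status between being and mere nothing (Excluded Middle).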
Therefore, existence must always be non-vacuous. That is, the primary logical implication of To Be is the non-non-being of whatever exists. But such a logical implication is insufficient for the sciences and philosophy, because there we deal with existents. Hence, let us set aside the logical implication as a truism. The existential implications of To Be are what we need.
I have so far not found any philosopher or scientist who has derived these implications. But let us try, even if the result obtained may be claimed by many ancients and others as theirs. In fact, theirs were not metaphysical / physical-ontological versions. Their epistemic versions of the same have been very useful, but have served largely to misguide both philosophy and science into giving “truth/s” undue importance in place of “Reality”. My claim about the exhaustive physical(-ontological) implications of To Be that I derive here is that they do not incur this fallacy.
To Be is not a thing. It is, as agreed at the start, the very condition for the possibility of discourse: philosophy, science, literature, art … and, in general, of experience. The To Be of existents is thus not a pre-condition for To Be – instead, it is itself the source of all conditions of discourse, not of existence.
4.2. Extension, Change, Universal Causality
If To Be is non-vacuous, it means that all existents are something non-vacuously real. Something-s need not be what we stipulate them to be, by name or by qualifications. But the purely general implication is that existents are something-s. This is already part of philosophical activity, but not of the sciences. We need to concretize this implication at the first tier of concrete implications. Only thereafter are the sciences possible.
To be something is to be non-vacuous, i.e., to be in non-vacuous extendedness. However much you may attempt to show that Extension does not follow from the notions of To Be, something, etc., the greater will be the extent of your failure. You will go on using the Laws of Identity, Contradiction, and Excluded Middle, and never reach any conclusion useful for the sciences. Then you will have to keep your mouth and mind shut. I prefer for myself meaningful discourse in science and philosophy – when I meditate, I shall attempt to keep my mind and lips as “shut” as possible.
As said above, Extension is one of the primary physical-ontological implications of To Be. Nothing exists without being extended, without being in Extension. Extended something-s are not just there in Extension: whatever is in Extension has parts. Thus, having parts is one of the primary implications of being something in existence. I term it alternatively also Compositionality.
It is the very implication of being something that something-s are in Change. The deepest and most inevitable form of the implication of Change is this: nothing that exists with parts can have the status of being an existent something without its parts making an impact on at least a few others. This is the meaning of Change: impact-formation by extended parts. Any existent has parts existing in the state of impact-formation upon other parts and upon themselves.
Hence, Change is the only other implication of To Be, not second to but equally important as Extension. I also call it, alternatively, Impact-Formation. The notion of motion or mobility does not carry the full weight of the meaning of Change.
There cannot be any other implication as directly derivable from To Be as Extension and Change are. In other words, all other implications can be found to be sub-implications of Extension-Change, i.e., to involve only Extension-Change. Showing that they involve only Extension-Change suffices to show their status as sub-implications of Extension-Change.
Existence in Extension-Change belongs to anything existent, and is hence ubiquitous: to be met with in any existent. This is nothing but existence in the ubiquitously extended form of continuance in ubiquitous impact-formation. What else is this but Universal Causality?
If you say that causation is a mere principle of science – as most philosophers and scientists have so far thought – I reject this view. From the above paragraphs I conclude that Causation is metaphysically (physical-ontologically) secondary only to existence. Everybody admits today that we and the universe exist. But then we must all admit that every part of our body-mind and every existent in the world is causal, because we are non-vacuously existent in Extension-Change.
This means that something has been fundamentally wrong about Causality in philosophy and science. We need to begin doing philosophy and science based fully on To Be and its implications, namely, Extension-Change-wise continuance, which is nothing but being in Universal Causation. It is universal because everything is existent. Universal Causality is the combined shape of Extension-Change. Causation is the process of the happening of Extension-Change-wise continuance in existence. Causality is the state of being in Extension-Change-wise continuance in existence.
4.3. Now, What Are Space and Time?
Note that what we measurementally, and thus epistemically, call space is metaphysically to be termed Extension. Space is the measuremental aspect of the primary quality of all existents, namely, of Extension. That is, space is the quantity of measurement of Extension, of the measurements of the extended nature of existents. In this sense, space is an epistemic quality.
Further, note also that what we call time is the measuremental aspect of the other primary quality of all existents, namely, of Change. If there is no impact-formation by the parts of existents, there is no measurement called time. Hence, time is the epistemic quality of the measurements of Change, which is the impact-formation tendency of all existents.
Immanuel Kant termed space the condition for the possibility of sensibility, and Edmund Husserl called it one of the fundamental essences of thought. Space and time in Kant are epistemic, since they are just epistemic conditions of possibility; and essences in Husserl are epistemic, clearly, as they are based on the continuous act of epochḗ.
Nothing can exist in epistemic space-time. That is, language and mind tend falsely to convert space and time into something that together conditions existents. Thus, humans tend to believe that our measuremental concepts and derivative results are all really and exactly essential to existent something-s, and not merely to our manner of knowing, feeling, sensing, etc.
This is the source of scientific and philosophical misconceptions that have resulted in the reification of the conclusions and concepts of thought and feeling. Thus, this is also the source of conceptual insufficiencies in philosophical and scientific theories. Scientism and scientific and mathematical instrumentalism justify these human tendencies in the name of pragmatism about science and thought.
The reification of certain statistical conclusions as probabilities, and the metaphysicization of probable events as the only possible events, are not merely due to the above sort of reification. They are also due to the equivocation of probability with possibility, and to the reification of our scientific and statistical conclusions about probabilities as real possibilities. Humans tend to forget that a given amount of probability is exactly and properly the measure of the extent of the human capacity (and, by implication, of the human incapacity), at a given instance and at a given measuremental moment of history, to use instruments to get at all the existents that are the causes of a given process.
As we know, To Be is not a Category / Quality. It is the very condition that is the same as the existence of something-s as whatever they are. This is a tautology: To Be is To Be. If To Be is a metaphysical notion, the physical-ontologically and scientifically relevant metaphysical implications of To Be are Extension-Change. These are the highest and only highest Categories of all philosophy and science. Universal Causality is the notion of combination of Extension-Change. It is not an indirectly derived notion.
If scientists tend to relegate such notions as philosophical, they are trying to be practical in a silly manner. Even scientific results need the hand of proper and best possible formulations of notions and theoretical principles. Theoretical principles (say, of causation, conservation, gravitation, matter, mass, energy, etc., which may clearly be formulated in terms of Extension-Change-wise existence and existents) must be formulated in the most systemic manner possible.
I would call Extension, Change, and the combination-term Universal Causality not merely the highest metaphysical Categories. They are the very primitive terms which, in addition to terms like ‘existent’, ‘matter-energy’, etc., are necessary for an axiomatic formulation of the foundations of the sciences. Hence, we need to formulate both philosophy and science axiomatically.
Universal Causality may hereafter also be taken as an axiom in philosophy and the sciences. An axiom is a formulated basic principle. In that case, why not formulate also the primitive notions (Categories) of Extension and Change as axioms? In short, the difference between mathematical-logical axiomatic foundations and physical-philosophical axiomatic foundations is that in the former set primitive notions are not axioms, and in the latter primitive notions may be formulated as axioms.
In the light of the above discussion, it becomes clear that Einstein’s postulation of gravitation and matter-energy as space-time curvatures is at the most a formulation of these notions in terms of the mathematical necessity to use space-time (epistemic) measurements and theorize based on them in theoretical physics.
Einstein was immersed in the neo-positivism and logical positivism of his time. Hence, he could not reason beyond the use, by mathematics, of quantitative notions as concrete measurements. Scientists and philosophers who still follow Einstein on this sort of misguided reification of epistemic space and time are taking refuge not in Einstein but in his theoretical frailties. Even today most scientists and philosophers are unaware that quantities are in fact quantitatively characterized pure qualities – and not properties that are combinations of qualitative and quantitatively qualitative notions.
Minkowski formulated the mathematics of space-time and thus reduced space-time into a sort of ether in which physical processes take place gravitationally. Einstein put gravitation into this language and mistook this language (the language of mathematical space-time) to be the very matter-energy processes that curve according to gravitational processes. For the mathematics this was no great error, because it worked. This is why some physicists even today consider gravitation and/or all energy forms as ether, as if without this stuff in the background material bodies would not be able to move around in the cosmos! A part of the cosmos is thus being converted into a background conditioner!
Only formal functioning has so far been found necessary in mathematics. Derivation from the metaphysical sources of existents and non-existents has not so far been found necessary in mathematics. But, note here also this: for more than 100 years physicists and philosophers of physics lapped up this substitution of the language of mathematics for the actual, physically existent, processes, which otherwise should have been treated also metaphysically, and if possible, in a manner that is systemically comprehensive of the sources of all sciences.
The implications of existence, non-existence, existents, and non-existents too can help to make the mathematical adaptations work pragmatically. Hence, clearly it does not suffice that only the mathematical formalism attained so far be used in physics and the sciences. The project of science, philosophy, mathematics, and logic must grow out of their limits and become parts of a systemic science with foundations in the implications of existence, non-existence, existents, and non-existents.
I have been attempting to explain in these pages a limited realm of what I otherwise have been attempting to realize. I show only that there are two physical-ontological Categories and some derived axioms (out of these many axioms, only one is discussed here, i.e., Universal Causality), using which we need to formulate not merely philosophy but also physics and other sciences.
But I suggest also that the existence-related and non-existents-related mathematical objects too must be formulated using some primitive terms and axioms that are compatible with the philosophical and physical primitive terms and axioms that may facilitate a systemic approach to all sciences.
4.4. Why Then Is Science Successful?
The awarding of the 2022 Nobel Prize in Physics for pioneering quantum information science to Alain Aspect, John F. Clauser, and Anton Zeilinger does not, therefore, mean that all of quantum physics and its assumptions and results are ‘the realities’ behind the ‘truths’ formulated. Instead, it means only that the truths they have formulated are relatively more technology-productive within the context of the other truths and technologies that surround them in physics. Quantum informatics works at a level of effects where we involve only those movements and processes that result in the relevant discoveries, general truths, and the derivative technology.
Similarly, the successes of engineering, informatics, medical processing technology, and the medical science that (as of today) are based on these need not be a proof for the alleged “absolute truth status” of the theories based on Newtonian physics, of molecular and atomic level chemistry and biology, etc. These sciences use only certain contextual levels of interaction in the physical world.
Recollect here the ways in which occidental philosophers dating at least from Parmenides and Heraclitus and extending up until today have been mistaking space and time for (1) two metaphysical categories, or (2) mere existents, or (3) illusions.
Oriental philosophies, especially Hindu and Buddhist, have been the best examples of rejecting space-time as metaphysical and as equivalent to permanent substances, in a manner that made some Occidental thinkers look down on them or reject them altogether. In the course of conceptualization that is typical of humans, creating further theoretical impasses is to be avoided as best we can. Such an ideal requires the help of Extension, Change, and Universal Causality.
In the foregoing paragraphs I have only hinted at the necessity of axiomatic philosophy and science. I have only suggested some basic notions in this systemic science. I do also use these notions and some axioms developed from them to formulate a new philosophy of mathematics. I have already published some books based on these and have been developing other such works. I hope to get feedback from earnest minds that do not avoid directly facing the questions and the risk of attempting a reply to them.
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
Irrational numbers are uncountable while rational numbers are countable. Archimedes' theorem says there exists a rational number between any two irrational numbers, so there must be as many rational numbers as irrational numbers. So rational numbers must be uncountable like the irrational numbers, or irrational numbers must be countable like the rational numbers.
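For context, the standard resolution of this puzzle is that density does not imply equal cardinality: the rationals are dense in the reals and still countable. A minimal sketch, using the Calkin-Wilf sequence (one standard way to list every positive rational exactly once, which is precisely what countability means):

```python
from fractions import Fraction
from math import floor

def calkin_wilf(n):
    """Return the first n terms of the Calkin-Wilf sequence.
    Each positive rational appears exactly once in this sequence,
    so it witnesses the countability of the rationals."""
    q = Fraction(1)
    out = []
    for _ in range(n):
        out.append(q)
        q = 1 / (2 * floor(q) - q + 1)   # Newman's recurrence
    return out

# the enumeration begins 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
assert calkin_wilf(7) == [Fraction(1), Fraction(1, 2), Fraction(2),
                          Fraction(1, 3), Fraction(3, 2),
                          Fraction(2, 3), Fraction(3)]
```

The flaw in the argument above is the step from "a rational lies between any two irrationals" to "there are as many rationals as irrationals": betweenness gives no one-to-one pairing between the two sets.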
Suicide, a fatal and tragic act, leaves no one indifferent. It touches on the sacredness of life and therefore on the deepest convictions and beliefs. Philosophical reflection has been prolific on the subject, dealing with the rationality and morality of suicide. The question also has a societal component in relation to the debate on the "right to die with dignity".
All contributions on the topic are welcome.
Picture: Staged seppuku with ritual attire and kaishaku, 1897 https://en.wikipedia.org/wiki/Seppuku
How far can Artificial Intelligence simulate and replicate human capabilities? Can it extend to human abilities such as discovery and inspiration?
Is the scientific approach capable of answering this question at present, or should we employ a rational reasoning approach? What would that rational reasoning approach be?
Can reason and rational questions be carried beyond Physics and Cosmology, where they are perhaps based on generalizations of existing results in Physics and Cosmology?
Is it ethical to question the credibility and plausibility of a person's experience if they are diagnosed with a severe mental illness such as paranoid schizophrenia? To what extent does one draw the line between rational and irrational when appraising a person's experience of distress, and is it wise to rely solely on a rationalist-empiricist framework to attempt to derive meaning from the person's experience?
What are the available and possible tools to face the thrust the world is witnessing today? Is it satisfying to merely describe and study the details and their accumulation? Or does reality impose several facts that point in the same direction as the new ones? The era of rationality, to which humanity has been dedicated since the 17th century, was, according to Toynbee, principally based upon thinking power (Richard Paul, 2007, p. 137). According to Toynbee, the British historian, man should be keen on developing it and promoting it through programs and plans that enable it to become part of the societal and cultural system, not merely a slogan that advocates an emerging idea or trend.
The congruent number problem has been a fascinating topic in number theory for centuries, and it continues to inspire research and exploration today. The problem asks whether a given positive integer can be the area of a right-angled triangle with rational sides. While this problem has been extensively studied, it is not yet fully understood, and mathematicians continue to search for new insights and solutions.
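By definition, n is congruent when some right triangle with all-rational sides has area n. A small sketch with exact rational arithmetic, checking the classical witness triangles for 6, 5, and 7 (the (3, 4, 5) triangle and two well-known rational triangles):

```python
from fractions import Fraction as F

def right_triangle_area(a, b, c):
    """Area of the right triangle with legs a, b and hypotenuse c,
    after verifying the Pythagorean relation exactly."""
    assert a * a + b * b == c * c, "not a right triangle"
    return a * b / 2

# 6 is congruent: the (3, 4, 5) triangle has area 6
assert right_triangle_area(F(3), F(4), F(5)) == 6
# 5 is congruent: legs 3/2 and 20/3, hypotenuse 41/6
assert right_triangle_area(F(3, 2), F(20, 3), F(41, 6)) == 5
# 7 is congruent: legs 24/5 and 35/12, hypotenuse 337/60
assert right_triangle_area(F(24, 5), F(35, 12), F(337, 60)) == 7
```

The hard direction, of course, is proving that no such triangle exists for a non-congruent number such as 1, 2, or 3.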
In recent years, there has been increasing interest in generalizing the congruent number problem to other mathematical objects. Some examples of such generalizations include the elliptic curve congruent number problem, which asks for the existence of rational points on certain elliptic curves related to congruent numbers, and the theta-congruent number problem as a variant, which considers the possibility of finding fixed-angled triangles with rational sides.
However, it is worth noting that not all generalizations of the congruent number problem are equally fruitful or meaningful. For example, one might consider generalizing the problem to arbitrary objects, but such a generalization would likely be too broad to be useful in practice.
Therefore, the natural question arises: what is the most fruitful and meaningful generalization of the congruent number problem to other mathematical objects? Any ideas are welcome.
Here are some articles:
M. Fujiwara, θ-congruent numbers, in: Number Theory, Eger, 1996, de Gruyter, Berlin, 1998,pp. 235–241.
New generalizations of congruent numbers
A GENERALIZATION OF THE CONGRUENT NUMBER PROBLEM
Is the Arabic book about the congruent number problem cited correctly in the references? If anyone has any idea where I can find the Arabic version, it will be helpful. The link to the book is https://www.qdl.qa/العربية/archive/81055/vdc_100025652531.0x000005.
I will present a family of elliptic curves in the same spirit as the congruent number elliptic curves.
This family exhibits similar patterns as the congruent number elliptic curves, including the property that the integer is still "congruent" if we take its square-free part, and there is evidence for a connection between congruence and positive rank (as seen in the congruent cases of $n=5,6,7$).
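The standard link behind the "same spirit" here: n is congruent exactly when the elliptic curve y² = x³ − n²x has a rational point with y ≠ 0. A quick exact check of two classical points, as an illustration:

```python
from fractions import Fraction as F

def on_congruent_curve(n, x, y):
    """Check exactly whether (x, y) lies on y^2 = x^3 - n^2 x."""
    return y * y == x * x * x - n * n * x

# n = 5: the point (-4, 6) lies on y^2 = x^3 - 25x
assert on_congruent_curve(F(5), F(-4), F(6))
# n = 6: the point (-3, 9) lies on y^2 = x^3 - 36x
assert on_congruent_curve(F(6), F(-3), F(9))
```

Points with y ≠ 0 such as these have infinite order, which is why "congruent" corresponds to positive rank in the cases mentioned.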
It is common to affirm that "One can never perform any measurement whose result is an irrational number."
By contraposition, this is equivalent to saying that anything that can be measured or produced is a rational number.
But the irrational number √2 can be produced to infinite length in finite steps, as 2×sin(45 degrees). It also exists like that in nature, as the diagonal of a unit square.
There is no logical mystery in these two apparently opposing views. Nature is not Boolean; a new logic is needed.
In this new logic, the statements 'p' and 'not p' can coexist. In the US, Peirce already said it. In Russia, the Setun computer used it.
This opens quantum mechanics to be logical, and sheds new light into quantum computation.
One can no longer expect that a mythical quantum "analog computer" will magically solve things by annealing. Nature is also solving problems algebraically, where there is no such limitation.
Gödel’s undecidability is Boolean, and does not apply. The LEM (Law of the Excluded Middle) falls.
What is your qualified opinion?
Our answer has been YES. Gödel's uncertainty is valid for the B set. The LEM is also valid in the B set. In the B set, numbers are either 0 or 1. And 0^n=0, while 1^n=1, so arithmetic is fast and easy. Digital computers only use the B set, and yet can calculate everything. Gödel's uncertainty is valid.
We, humans, can use the Q* set for fast and easy mental calculations. A negative times a negative is a positive. Gödel's uncertainty is not valid.
Quantum computing uses the set Q, to allow calculus with discontinuous functions--as functions must be in the digital world. We see that world in the XXI century. Gödel's uncertainty is not valid.
By the Curry-Howard relationship, this deprecates Gödel's uncertainties.
So it must finally be accepted under experiments -- not theory or opinions.
No longer individually distinguishable, the digits in each prime number are a "lump" and belong together, in a collective effect beyond digits or names. Peter Shor said this first, in 1994. This is important for quantum computing.
What is your qualified opinion?
(We are not attempting to define nature or evolution. We are just pointing out an illusion. Mathematical results can be absolutely exact.)
Our answer is NO. Think of it: Pythagorean Triples would NOT exist if numbers are arbitrary as values. Given a and b, c is fixed or it doesn't exist.
Given 2 and 3, what is c?
Prime numbers seem not arbitrary either. Some people consider prime numbers as just some feature of Z, which does not exist for composite numbers. And, they think, there are no primes or coprimes in Z_p, the p-adic numbers, except for some numbers which end in 0; there are no negative and positive numbers; there are no even or odd numbers (i.e., they may point to the number 19_31: is it even or odd?).
Instead, let's be humble and observe nature. A prime number in any place of the universe must be a prime number. Here on Earth and in the star Betelgeuse. It is not a feature defined by a human.
Dedekind (1888) was incorrect, and mathematical real numbers are an illusion that cannot be calculated (Gisin, Gerck).
That is why a number is a semiotic quantity. Numbers can be thought of as a 1:1 mapping between a symbol and a value. Digits become a “name”, a reference, and it is clear that one can use different “names” for the same number as a value.
So, the number 1 can have a name as "1", "2/2", "3/3" and infinitely many more, but is always 1 in value.
Equal rational numbers need not have the same name, as in "2/3=2/3".
They can also obey the rule that their cross products are equal in value, so that "2/3=4/6".
That way, equivalence extends equality in a consistent way, even though the numbers are neither equal nor divisible. This is possible because numbers are semiotic quantities, and is essential to understand quantum computing.
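The name/value distinction above can be written down directly; a minimal sketch contrasting the "same name" test with the cross-multiplication test for fractions p1/q1 and p2/q2:

```python
def same_name(p1, q1, p2, q2):
    """Equality of names: identical numerator and denominator."""
    return p1 == p2 and q1 == q2

def same_value(p1, q1, p2, q2):
    """Equality of values: cross products agree (p1*q2 == p2*q1),
    with no division or reduction needed."""
    return p1 * q2 == p2 * q1

assert not same_name(2, 3, 4, 6)   # "2/3" and "4/6" are different names
assert same_value(2, 3, 4, 6)      # 2*6 == 4*3, so equal in value
```

The cross-product rule is exactly how exact rational arithmetic libraries compare fractions without reducing them first.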
Numbers are not arbitrary as values, which can allow us to calculate prime numbers using periodicity.
What is your qualified opinion?
At the moment, antibiotics are the most effective tools against infectious diseases. Yet the spread of antimicrobial resistance and the lack of recently developed antimicrobial medications pose a serious threat to both human and animal health (Cheng et al., 2016). The most effective methods for combating antimicrobial resistance involve the rational use of antibiotics.
Antimicrobial Activity and Resistance: Influencing Factors - PMC. (2017, June 13). NCBI. Retrieved February 24, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5468421/
- Is the construction of the Cantor set on the segment [0, 1], on the segment [2, 7] and on the segment [0, π] equivalent?
- How do the points of the Cantor set on the number line relate to natural (in particular, prime), rational, irrational, transcendental and, finally, hyperreal numbers?
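On the first question, the three constructions are affinely equivalent: the map t ↦ a + (b−a)·t carries the middle-third construction on [0, 1] step by step onto the one on [a, b]. A small exact-arithmetic sketch checks this for [2, 7] (for [0, π] the same map works, though the endpoints are then no longer rational):

```python
from fractions import Fraction as F

def cantor_level(a, b, depth):
    """Closed intervals remaining after `depth` middle-third
    removals applied to the segment [a, b]."""
    intervals = [(F(a), F(b))]
    for _ in range(depth):
        nxt = []
        for lo, hi in intervals:
            third = (hi - lo) / 3
            nxt.append((lo, lo + third))    # keep the left third
            nxt.append((hi - third, hi))    # keep the right third
        intervals = nxt
    return intervals

# affine map carrying [0, 1] onto [2, 7]
phi = lambda t: 2 + 5 * t
mapped = [(phi(lo), phi(hi)) for lo, hi in cantor_level(0, 1, 3)]
assert mapped == cantor_level(2, 7, 3)   # constructions agree level by level
```

So topologically the three Cantor sets are the same; what changes with the segment is only which real numbers (rational, irrational, transcendental) end up in the set.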
Could any expert try to examine the new interesting methodology for multi-objective optimization?
A brand new conception of preferable probability and its evaluation were created; the book, entitled "Probability-based multi-objective optimization for material selection", was published by Springer. It opens a new way for multi-objective orthogonal experimental design, uniform experimental design, response surface design, robust design, etc.
It is a rational approach without personal or other subjective coefficients, and is available at https://link.springer.com/book/9789811933509.
My article demonstrates that the image of the line of real positive numbers greater than the origin 2, under the function f(B), is only a set of irrationals. Hence the function is not continuous at all. However, what about the transcendence of the elements of this set of irrationals?
Here is the link to the article:
Rationalism distinguishes between empirical knowledge, i.e., knowledge that arises through experience, and a priori knowledge, i.e., knowledge that is prior to experience and that arises through reason. Empirical knowledge depends upon our senses, senses that, the rationalist wastes no time to demonstrate, are unreliable. Here the rationalist appeals to common sense deceptions and perceptual illusions.
Empiricism denies the rationalist distinction between empirical and a priori knowledge. All knowledge, the empiricist argues, arises through, and is reducible to, sense perception. Thus, there is no knowledge that arises through reason alone. Thus, the empiricist credo is that where there is (or can be) no experience, there is (and can be) no knowledge.
Thanks in advance.
Our answer is YES. Irrationals, since the ancient Greeks, have had a "murky" reputation. We cannot measure physically any irrational, as one would require infinite precision, and time. One would soon exhaust all the atoms in the universe, and still not be able to count one irrational.
The set of all irrationals does not even have a name, because there seems to be no test that could indicate if a member belongs to the set or not. All we seem to know is it is not a rational number -- but what is it?
The situation is clarified in our book Quickest Calculus, available at lowest price in paper, for class use. See https://www.amazon.com/dp/B0BHMPMMTY/
There, instead of going into complicated values of elliptic curves and infinite irrationals, algebra allows us to talk about "x".
No approximating rational numbers need to be used, nor Hurwitz Theorem.
Thus, one can "tame" irrationals by algebra, with 0 (zero) error. For example, we know the value of pi: it is 2×arcsin(1) exactly, and we can calculate it approximately using the Hurwitz Theorem.
GENERALIZATION: Any irrational number is some function f(x), where x belongs to the sets Z, or Q -- well-defined, isolated, and surrounded by a region of "nothingness". The set of all such numbers we call "E", for Exact. It is an infinite set.
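The "taming by algebra" idea can be illustrated with exact arithmetic in the field Q(√2): store each element as a pair of rationals (a, b) standing for a + b·√2, and √2 itself is handled with zero error. This is only a sketch of the general idea, not the book's construction:

```python
from fractions import Fraction as F

class QSqrt2:
    """Exact element a + b*sqrt(2) of the field Q(sqrt 2)."""
    def __init__(self, a, b=0):
        self.a, self.b = F(a), F(b)
    def __mul__(self, other):
        # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return self.a == other.a and self.b == other.b

sqrt2 = QSqrt2(0, 1)               # the irrational sqrt(2), stored exactly
assert sqrt2 * sqrt2 == QSqrt2(2)  # (sqrt 2)^2 == 2, with zero error
```

No decimal approximation of √2 ever appears: the symbol is manipulated algebraically, which is the sense in which such irrationals are "well-defined and isolated".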
What is your qualified opinion?
Irrigation water is one of the important inputs for agriculture, particularly for dry-season cropping. However, rational use of water to achieve maximum productivity is essential. What are the various options to achieve the optimum benefit and sustainable use?
The realistic definition of transition probability in physics is well defined and constrains the probability to rational numbers. The abstract definition of probability in mathematics is also well defined, but it allows probability to be an arbitrary real number in [0,1], whether rational or irrational. The difference is enormous and the conclusions diverge widely. The question is: which one do we follow, and when?
We can provide two specific published examples among many others:
a- Numerical resolution of the 3D PDE of heat diffusion as a function of time, in its most general case, using the physical definition of probability.
b- Solving statistical numerical integration for an arbitrary number of free nodes using the physical definition of probability. The trapezoidal rule and the FDM-based Simpson rule would be just special cases.
Can we call a smart robot a Rational Artificial Being (RAB)? Can robots be considered Rational Beings? Smart robots are designed and programmed as intelligent artificial agents (or beings) that have the capacity to make certain decisions like human beings. Human beings are the only entities considered rationally intelligent, having a unique blend of a sense of conscience, emotions, and feelings, and so are deemed rational agents.
But it is also true that there has been enough progress in the field of Artificial Intelligence over the past few decades. Robots are now designed and programmed as highly intelligent entities that often outsmart their human counterparts in some selected activities.
Now whether it would be rational for us to call robots "rational beings" or rational artificial beings could be a question of interest, for they function on software programmed to mimic largely human behaviors which are considered as rational.
I am passaging my bacterial strain (DH5 alpha) without the selection marker, and it has been 12 passages, but it is still holding onto the plasmid and doesn't lose it, which, rationally speaking, it should have. Any suggestions?
I am searching for the main theories or frameworks that could help to explain consumer behaviours when they face sustainability constraints in their choices. It is fact that rational choices play a central role in decision making on consumption, but what theories or frameworks would help to raise broader awareness on the danger of pure rational choices in consumption? What strategies other than research-action could make consumers change their minds when perceiving the collective and individual effects of their behaviours?
I was told that Gary Becker's rational choice theory was NOT the basis for the theories in environmental criminology & situational crime prevention. Is that correct?
If not, which rational choice theory was the reference?
Specifically, let C(z) denote the field of rational functions over the field of complex numbers.
Is there an analog to the Schur lemma over this field?
Is there an analog to the Jordan Canonical Form over this field?
Does anyone know how to estimate peak discharge using the rational formula from an IDF (intensity-duration-frequency) curve for various return periods?
Please guide me in this matter...
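For what it's worth, the rational method itself is the one-line formula Q = C·i·A, where i is the design rainfall intensity read off the IDF curve at the chosen return period for a duration equal to the time of concentration. A hedged metric sketch (the 1/360 factor is the usual SI unit conversion; the C, i, A values below are purely illustrative, not from any real catchment):

```python
def peak_discharge_m3s(C, i_mm_per_hr, A_ha):
    """Rational method, metric form: Q [m^3/s] = C * i * A / 360,
    with i in mm/hr and A in hectares. C is the dimensionless
    runoff coefficient for the catchment surface."""
    return C * i_mm_per_hr * A_ha / 360.0

# illustrative values: C = 0.6, i = 90 mm/hr (read from the IDF curve
# at the design return period and time of concentration), A = 12 ha
Q = peak_discharge_m3s(0.6, 90.0, 12.0)
assert abs(Q - 1.8) < 1e-9   # 0.6 * 90 * 12 / 360 = 1.8 m^3/s
```

Repeating the lookup of i for each return period's IDF curve gives the family of peak discharges; note the method is generally recommended only for small catchments.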
I want to image liquid samples (an emulsion). To this end, I should use Cryo-SEM analysis, but we don't have Cryo-SEM, FFEM, or Cryo-TEM in our country. In this regard, I want to flash-freeze my liquid sample with liquid nitrogen and then image it with an SEM instrument. Is this procedure rational?
Can you offer a more reasonable method for imaging my liquid samples?
I'm currently investigating the relationship between response accuracy and latency when predicting a participant's performance. In many previous publications, response latencies from incorrect responses were excluded from the analysis. However, I've yet to find a rationale given. I do understand, that you want to reduce the influence of response times that stem from a guessed response. However, in many tasks, the participants have a 50% chance of guessing the item correctly, therefore only excluding the responses from incorrect trials removes only half of the problem. Furthermore, important information is lost (e.g. is the accuracy maybe lower for quicker responses?).
I wanted to ask specifically if there are any sources out there, that I could read for a rationale of this procedure, as there are never any references...
Thank you in advance,
A tunable clock source will consist of a PLL circuit like the Si5319, configured by a microcontroller. The input frequency is fixed, e.g. 100 MHz. The user selects an output frequency with a resolution of, say, 1 Hz. The output frequency will always be lower than the input frequency.
The problem: The two registers of the PLL circuit which determine the ratio "output frequency/input frequency" are only 23 bits wide, i.e. the upper limit of both numerator and denominator is 8,388,607. As a consequence, when the user sets the frequency to x, the rational number x/10^8 has to be reduced or approximated.
If the greatest common divisor (GCD) of x and 10^8 is >= 12, then the solution is obvious. If not, the task is to find the element in the Farey sequence F_8388607 that is closest to x/10^8. This can be done by descending from the root along the left half of the Stern-Brocot tree. However, this tree, with all elements beyond F_8388607 pruned away, is far from balanced, resulting in a maximum number of descending steps in excess of 4 million; no problem on a desktop computer but a bit slow on an ordinary microcontroller.
F_8388607 has about 21×10^12 elements, so a balanced binary tree with these elements as leaves would have a depth of about 45. But since such a tree cannot be stored in the memory of a microcontroller, the numerator and denominator of the searched Farey element have to be calculated somehow during the descent. This task is basically simple in the Stern-Brocot tree but I don't know of any solution in any other tree.
Do you know of a fast algorithm for this problem, maybe working along entirely different lines?
Many thanks in advance for any suggestions!
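One standard fast alternative to descending the Stern-Brocot tree node by node is to take whole continued-fraction steps: each partial quotient collapses a long run of identical left/right turns into one division, so the descent needs only O(log N) arithmetic operations instead of millions. A sketch (the function name and structure are mine, not from any poster):

```python
from fractions import Fraction
from math import gcd

def best_rational(p, q, max_den):
    """Fraction closest to p/q with denominator <= max_den, found via
    continued-fraction convergents plus one final semiconvergent.
    Each loop iteration consumes one partial quotient, i.e. a whole
    run of Stern-Brocot steps at once."""
    g = gcd(p, q)
    p, q = p // g, q // g
    if q <= max_den:
        return Fraction(p, q)
    h0, k0, h1, k1 = 0, 1, 1, 0          # previous two convergents h/k
    a, b = p, q
    while True:
        d = a // b                        # next partial quotient
        h2, k2 = d * h1 + h0, d * k1 + k0
        if k2 > max_den:
            break                         # would exceed the register width
        h0, k0, h1, k1 = h1, k1, h2, k2
        a, b = b, a - d * b
    # best semiconvergent still within the denominator bound
    t = (max_den - k0) // k1
    cand = Fraction(h0 + t * h1, k0 + t * k1)
    conv = Fraction(h1, k1)
    target = Fraction(p, q)
    return cand if abs(cand - target) < abs(conv - target) else conv

# e.g. approximating 3.14159265 with denominator <= 1000 recovers 355/113
assert best_rational(314159265, 10**8, 1000) == Fraction(355, 113)
```

For the PLL case one would call best_rational(x, 10**8, 8388607); the loop runs once per partial quotient, so at most a few dozen iterations of integer arithmetic even on a small microcontroller.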
There are many target materials which can be used as the X-ray source in a powder XRD system, so why do most XRD systems use Cu metal as the source? What is the rationale behind this preference?
Today I happened to see a colleague extract protein from the sample, and I want to learn how to do protein quantification, but she told me there is no need to do it as it is not accurate and it would waste the sample; all she needs to do is run the b-actin to make sure the loading is close. Is that true? I'd appreciate it.
There are a number of criteria for determining whether a polynomial with integral coefficients is irreducible over rational numbers (the traditional ones being Eisenstein criterion and irreducibility over a prime finite field).
I was wondering whether the decision problem "Given an arbitrary polynomial with integral coefficients, is it irreducible over the rational numbers or not?" is decidable or undecidable.
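For concreteness, the criteria mentioned are easy to mechanize. Here is a small sketch of the Eisenstein criterion (coefficients listed from the constant term upward; note this is only a sufficient condition, not a full irreducibility decision). As for the decision problem itself: irreducibility over Q is decidable — Kronecker's classical method and the modern LLL-based algorithms factor integer polynomials in finite (even polynomial) time.

```python
def eisenstein(coeffs, p):
    """Eisenstein's criterion at the prime p for a polynomial
    a_0 + a_1 x + ... + a_n x^n (coeffs listed low to high):
    p does not divide a_n, p divides every other a_i, and p^2
    does not divide a_0. If it holds, the polynomial is
    irreducible over Q."""
    *low, lead = coeffs
    return (lead % p != 0
            and all(c % p == 0 for c in low)
            and low[0] % (p * p) != 0)

# x^2 - 2 is irreducible over Q: Eisenstein applies at p = 2
assert eisenstein([-2, 0, 1], 2)
# x^2 - 1 = (x - 1)(x + 1): Eisenstein at 2 fails, as it must
assert not eisenstein([-1, 0, 1], 2)
```

A polynomial can be irreducible while Eisenstein fails at every prime, which is why the full decision procedure needs the heavier factorization algorithms.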
Please suggest Q1/Q2 journals that have a high acceptance ratio and deal with synthesis and characterization work in the materials science, composite materials, and manufacturing domains.
In my study on Awassi sheep, it was concluded that adding turmeric to the ration led to diverting the path of fat accumulation from the parts of the carcass to the broad tail, with significant weight increases in the treated lambs.
Suppose we have 2 types of silver nanoparticles: one has a highly cationic surface charge and the other has a smaller size (4-5 nm). Which would show more antimicrobial activity?
Kindly suggest a rational answer.
- Hello. I am struggling with a problem. I can measure two ratios of three independent normal random variables with their means and variances (not zero): Z1=V1/V0, Z2=V2/V0, V0~N(m0,s0), V1~N(m1,s1), V2~N(m2,s2). These are measurements of the speeds of the vehicle. Now I should estimate the means and the variances of these ratios. We can see it is a Cauchy-type distribution with no mean and variance, but it has analogs in the form of location and scale. Are there mathematical relations between mean and location, variance and scale? Can we approximate a Cauchy by a Normal? I have heard that if we limit the estimated value we can obtain the mean and variance.
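When the denominator's mean is many standard deviations from zero (as with a measured vehicle speed), the ratio is approximately normal in practice, and robust location/scale estimates (median, interquartile range) behave well even though the exact ratio distribution has no mean or variance. A hedged simulation sketch with made-up parameters (m0/s0 = 10 here; the normal approximation degrades as this falls):

```python
import random
import statistics

random.seed(0)

m0, s0 = 10.0, 1.0        # denominator V0 ~ N(10, 1): mean far from zero
m1, s1 = 5.0, 1.0         # numerator   V1 ~ N(5, 1)

# sample the ratio Z1 = V1 / V0
z = [random.gauss(m1, s1) / random.gauss(m0, s0) for _ in range(10_000)]

# robust location estimate: the median, close to m1/m0 = 0.5
med = statistics.median(z)
assert abs(med - m1 / m0) < 0.05
```

Using the median as "mean" and a scaled interquartile range as "standard deviation" sidesteps the undefined moments, which is the usual practical answer to the location/scale question.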
Recently, Jamali et al., at Ziv Williams' lab at Harvard, published an intriguing paper in Nature:
regarding the cellular basis of theory of mind which I believe is one of the first hand evidence to prove mentalization at cell and circuit level. The methodology was based on single cell electrophysiology which seems interesting yet tricky as it may spark this philosophical dilemma of systems neuroscience that complex behaviors such as theory of mind may be originated from synchronized population activity in downstream path that may not be represented or evoked in individual neuronal activity in dmPFC. However, Jamali et al. could rationally and intriguingly conclude and investigate such a complex behavior at a single cell level. Interestingly, they indicated that dmPFC neurons can predict whether contents of one's beliefs in the big picture would be true or false.
- While the study of theory of mind at the cellular level seemed almost impossible before this, and studies have shifted to the cellular and circuit level with this landmark publication, what do you think future research on theory of mind should be? What is the big question and hypothesis if we want to use multi-modal neuroimaging approaches?
- What methodology and approaches would better decipher impaired theory of mind in psychiatric diseases such as schizophrenia and autism spectrum disorder?
These are just a couple of questions that may emerge but feel free to discuss and contribute to this topic from any aspects that you would think would give us a better view of the underlying mechanisms of theory of mind.
In my opinion, AHP has many drawbacks that make it very difficult to use in MCDM problems, even though it is the most used method. In my opinion, this happens because many practitioners ignore the reality behind the heralded ease of use.
I propose an honest discussion from the technical point of view, naturally, supported by evidence, common sense, and rationality, not by words.
For that, we need the participants in this discussion, to work with an open mind, without prejudice and accusing or defending the method with reasons, not based on what other people say, or what the advertisement declares.
Just a suggestion, consider:
* The rationality of pair-wise comparison
* The rationality to give a value of the importance of one criterion over another
* The rationality in assuming that what the DM thinks is applicable to reality, and to pretend that the real world is transitive
* The rationality of determining weights without considering the alternatives they have to evaluate, and the justification for considering them constant, when they may be not
* The rationality in considering that the criterion with the highest weight is the most important
* Why can AHP only work with independent criteria?
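To ground the first bullets, here is a minimal sketch of the standard Saaty eigenvector procedure on a hypothetical 3x3 comparison matrix, including the consistency ratio that AHP uses to police the pairwise judgments (the matrix values are invented for illustration):

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix on the 1-9 Saaty scale.
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
])

# Priority weights = principal right eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
CI = (lam_max - n) / (n - 1)
RI = 0.58                # Saaty's random index for n = 3
CR = CI / RI             # CR < 0.10 is conventionally deemed acceptable
```

Note that CR only measures the internal coherence of the matrix; it says nothing about whether the judgments match reality or whether the real world is transitive, which is exactly the point raised above.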
Hello fellow researchers. I am using a horizontal small-scale mill (WiseMix) with a 250 ml HDPE bottle as the mixing container. I have tungsten carbide balls of 3 mm and 5 mm, and I am going to press circular samples in a 6 mm/11 mm die. I would love to know the ball-to-powder ratio for small samples of around 3-4 g. I am unable to determine how much volume to fill, or the ratio of ball diameters and quantities.
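Not an authoritative prescription, but a back-of-the-envelope sketch: assuming a common ball-to-powder mass ratio of 10:1 (values from roughly 5:1 to 20:1 appear in the milling literature) and a WC ball density of about 15.6 g/cm³, the ball charge for a 3-4 g sample can be estimated as follows; the 50/50 mass split between the two ball sizes is an arbitrary assumption:

```python
import math

powder_mass_g = 3.5      # a 3-4 g sample
bpr = 10.0               # assumed ball-to-powder mass ratio (10:1)
rho_wc = 15.6            # tungsten carbide density, g/cm^3

ball_mass_needed = bpr * powder_mass_g          # total ball charge, grams

def ball_mass(d_mm):
    """Mass of one WC ball of diameter d_mm (sphere volume * density)."""
    r_cm = d_mm / 20.0                          # mm diameter -> cm radius
    return rho_wc * (4.0 / 3.0) * math.pi * r_cm ** 3

# Split the charge 50/50 by mass between the 3 mm and 5 mm balls.
n3 = round(0.5 * ball_mass_needed / ball_mass(3.0))
n5 = round(0.5 * ball_mass_needed / ball_mass(5.0))

# Solid volume of the ball charge, for a rough jar-fill estimate.
ball_volume_ml = ball_mass_needed / rho_wc
```

For 3.5 g of powder this gives a 35 g charge (roughly 79 three-mm and 17 five-mm balls, about 2.2 ml of solid WC), a tiny fraction of a 250 ml bottle; for charges this small, a smaller jar is often recommended so that the balls and powder actually interact.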
If you want to prove activation of the intracellular Wnt/β-catenin pathway, would an ELISA measuring the level of intracellular phospho-β-catenin (or the ratio of phospho- to total β-catenin) be enough, or would you suggest additional tests?
Hi, may I know how to perform a meta-analysis of single-arm observational studies (i.e., studies containing only one group, either control or intervention)?
I have the mean, sample size, and SD for all studies and would like to perform a meta-analysis.
Which software should I use for single-arm studies?
How should I deal with missing data in the studies? I am not sure whether we can ignore the missing data without any rationale.
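Not a substitute for proper software (R's metafor package handles this case), but a minimal fixed-effect sketch of how single-arm means are pooled by inverse-variance weighting; the study numbers below are invented for illustration, and a random-effects model would usually be preferred when studies are heterogeneous:

```python
import math

# (mean, SD, n) for each single-arm study -- hypothetical values.
studies = [(10.0, 2.0, 25), (12.0, 3.0, 16), (11.0, 2.5, 36)]

weights, weighted_means = [], []
for mean, sd, n in studies:
    se = sd / math.sqrt(n)   # standard error of the study mean
    w = 1.0 / se ** 2        # inverse-variance weight
    weights.append(w)
    weighted_means.append(w * mean)

pooled_mean = sum(weighted_means) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci95 = (pooled_mean - 1.96 * pooled_se, pooled_mean + 1.96 * pooled_se)
```

On missing data: ignoring it without a stated rationale is generally not acceptable; common options are contacting the authors, imputing the SD from similar studies, or running a sensitivity analysis with and without the affected studies.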
In which direction should one move when calculating or interpreting different parameters in extracts?
In my research, the synthesized catalysts were composed of Cu and Ni on an SBA-15 support at different ratios: 1:1, 2:1, and 3:1. The XRD pattern shows the Cu-Ni alloy peak only for the 2:1 Cu-Ni catalyst, while the others do not. In principle, the Cu-Ni alloy peak should be present for all of the catalysts; perhaps the peak intensity increases at a higher Cu:Ni ratio.
How can I discuss this with my professor?
Thanks for your help.
Dear members, I am focused on a particular problem based on a series in the complex variable s whose coefficients contain summations of triangular numbers, tetrahedral numbers, pentatope numbers, and other sequences. That is, the summation involves such coefficients and the Stieltjes constants as well.
The formulation of such numbers (triangular, tetrahedral, etc.) is well known, but I am more interested in knowing whether Galois theory, Galois groups, or related branches could provide theorems for transforming expressions with polynomials into rational functions or other expressions.
I am pursuing a philosophical interpretation of the geometry of such numbers for my particular problem, which is related to the formulation of the Riemann zeta function, where they appear. This would be key to defining coefficients and resolving some terms in this context.
A theory of the Zeta transform and the Mellin transform related to the topics I am describing would also be great (not the introductory definitions I can find on the Internet). I would like to be able to connect triangular, tetrahedral, and other figurate numbers and their sequences with the variable s in advanced algebra, so that I can learn more about this.
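For reference, the sequences named above are binomial coefficients (a standard identity, stated here only as a pointer, not as part of the original problem):

```latex
T_n = \binom{n+1}{2}, \qquad
\mathrm{Te}_n = \binom{n+2}{3}, \qquad
P_n = \binom{n+3}{4}
```

Each of these is a polynomial in $n$, which is why questions about rewriting polynomial expressions as rational functions arise naturally when such numbers appear as coefficients of a series in $s$.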
I have the mean normalized gene copy number for both the Bax and Bcl2 genes. How can I calculate the Bax/Bcl2 ratio for each treatment? And is there a reference range for the calculated ratios? Thanks in advance.
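A minimal sketch of the calculation, with invented numbers: the ratio is simply the normalized Bax value divided by the normalized Bcl2 value, computed per treatment. As for a reference range, a ratio above 1 is often read as a pro-apoptotic shift, but there is no universal cutoff; it depends on the assay and normalization, so comparisons are usually made against the experiment's own control:

```python
# Mean normalized copy numbers per treatment -- hypothetical values.
treatments = {
    "control": {"bax": 1.00, "bcl2": 1.00},
    "treated": {"bax": 2.40, "bcl2": 0.80},
}

ratios = {name: v["bax"] / v["bcl2"] for name, v in treatments.items()}
# Fold change of the ratio relative to the control:
fold = {name: r / ratios["control"] for name, r in ratios.items()}
```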
I aim to propose a mindset/method as a supporting tool in an emerging field. What could the process of proposing it be?
1) Literature review (LR): previous "almost"-similar projects have used this mindset and benefited from it
2) decomposing the current field's aspects/sides
3) "debating" how these components can benefit from this mindset
4) "concluding" that this mindset can be a supporting tool
Is this a rational and reliable process?
If the field's experts disagree, how should I convince them?
Maize (Zea mays L.), a crop of great nutritional and dietary importance, does not have good yield and productivity (2.84 t/ha in Nepal and 5.82 t/ha worldwide in 2019) (FAOSTAT, 2021), as the plant is tall and does not bear any tillers. The demand for maize as food and feed concentrate will increase in geometric progression in the future. I have not heard of breeding studies aimed at making maize a tillering type of cereal crop. So I want to ask: what research exists on the aforementioned subject?
Furthermore, if there is a signal that the virus receives, would it be possible to synthetically mimic this signal and thus make the virus release its DNA randomly, hence making it unable to attach to a host in order to replicate?
I am still very new to this concept. I know that not every virus uses transduction to invade a host cell, but I wonder if there is a way to target this specific characteristic of certain viruses and make it their weakness.
For basic context, I should add that it is clear to me that viruses bind to specific receptors on the host cell and transfer genetic information, for example via endocytosis. My question aims to understand what triggers the release of genetic information if viruses themselves cannot actually interact with or react to their surroundings.
The only rational conclusion I have been able to reach by thought alone is that the host cell itself initiates the interaction and almost sucks the genetic material out of the capsid. But as much as I would like to believe this, I find it a rather unsatisfactory conclusion, as I myself cannot rationally explain the steps of this hypothesis.
I would greatly appreciate any assistance.
This is because the preliminary superovulation results with Stimufol (400-560 mcg) among unflushed cows did not yield any embryos.
Ten cows each were treated with graded doses of Stimufol (400, 440, 480, 520, and 560 mcg).
Is it possible to induce superovulation in up to 60 cows and yet recover not a single embryo?
1. They were fed a normal maintenance ration supplemented with Digitaria hay and Brachiaria hay.
2. They had normal cyclicity and genitalia on ultrasound examination and heat monitoring.
3. They consisted of equal numbers of heifers and adult cows.
4. They are indigenous tropical breeds.
5. The P-36 protocol was used.
6. Follicular development was monitored with ultrasound from the injection of the first dose of FSH to the day of oestrus.
I want to choose, or design, a mechanism in a model where all agents belong to the same firm. They interact with a mechanism, which also belongs to the firm, over a finite number of consecutive rounds. At each round, agents are granted some numeraire by the mechanism, and then agents non-truthfully reveal their preferences to the mechanism. The mechanism's goal is to produce and provide some excludable public goods to the agents, according to the rules of the mechanism and the preferences revealed by the agents, such that the provision maximizes the aggregate revealed preferences.
However, in contrast with most of the literature I have found on the subject, I think that the goal of the rational agents should be to maximize their utility, instead of maximizing the sum (utility + numeraire), because the economy is private to the firm and the numeraire is useless if not spent inside the firm.
I would like to know if this goal seems relevant to you, and if so, whether you know of some work that considers such goals? Thank you!
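To make the contrast between the two objectives concrete, here is a toy sketch of a single round (all names, numbers, and the provision rule are invented for illustration; the mechanism simply provides the good with the highest aggregate reported value and charges each agent a flat price from its grant):

```python
GOODS = ["gym", "cafeteria", "library"]

def run_round(true_values, reports, grant, price):
    """One round: provide the good maximizing the sum of reported values,
    then score each agent under the two candidate objectives."""
    provided = max(GOODS, key=lambda g: sum(r[g] for r in reports))
    outcomes = []
    for v in true_values:
        outcomes.append({
            "utility_only": v[provided],                   # objective proposed above
            "quasilinear": v[provided] + (grant - price),  # utility + leftover numeraire
        })
    return provided, outcomes
```

With truthful reports, two agents valuing {gym: 3, cafeteria: 1, library: 0} and {gym: 0, cafeteria: 5, library: 1}, a grant of 10, and a price of 4, the cafeteria is provided; the first agent scores 1 under the utility-only objective and 7 under the quasilinear one, and only the latter is sensitive to the unspent numeraire.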
I want to publish my article but have no idea how to publish it, or which journal is suitable for publication. What are the main headings? My topic is the effects of the current ratio and interest coverage ratio on the net profit margin of Pakistan Stock Exchange-listed companies in various sectors of Pakistan (cement, chemical, textile).
We wanted to conduct a meta-analysis, together with a group of experts in the field, on the effect of a prebiotic additive on the growth performance of commercial broilers. My intention is to calculate the effect sizes from the ration means of "negative control versus prebiotic" in these articles. However, when I search the literature, I see that almost all of the work in this area reports only a pooled SEM for all groups along with their means, instead of reporting the mean ± standard error (or standard deviation) for each group separately. I attached a table from an article to better explain what I mean. In such a table, is there any way to calculate the effect sizes for "prebiotic additive vs. control group" in a sample containing three or more groups with a pooled SEM, assuming n = 10 for each group?
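One standard back-calculation (valid under the assumptions that group sizes are equal and the pooled SEM was computed as sqrt(MSE/n) from the ANOVA) is SD = pooled SEM × sqrt(n), after which an ordinary standardized mean difference can be formed; the means and SEM below are invented for illustration:

```python
import math

n = 10                          # replicates per group, as assumed in the question
pooled_sem = 0.05               # the single pooled SEM reported for all groups
sd = pooled_sem * math.sqrt(n)  # common per-group SD, since SEM = sqrt(MSE / n)

mean_control, mean_prebiotic = 1.80, 1.72   # e.g. feed conversion ratio

# Cohen's d with the common SD, plus Hedges' small-sample correction.
d = (mean_prebiotic - mean_control) / sd
j = 1 - 3.0 / (4 * (2 * n - 2) - 1)         # correction factor J
g = j * d
```

With three or more treatment groups sharing one pooled SEM, each "prebiotic vs. negative control" contrast can be computed this way, though the contrasts are then not independent, since they share the control group and the MSE; multilevel or robust-variance models are the usual remedy.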
What is the difference between accountability and rationality? I am pleased to invite all respected RG colleagues to share their valuable suggestions and comments in this regard.
NO. No one on Earth can claim to "own the truth" -- not even the natural sciences. And mathematics has no anchor in Nature.
With physics, the elusive truth becomes the object itself, which physics trusts using the scientific method, as fairly as humanly possible and as objectively (friend and foe) as possible.
With mathematics, on the other hand, one must trust using only logic, and the most amazing thing has been how much Nature as seen by physics (the Wirklichkeit) follows the logic as seen by mathematics (without necessarily using Wirklichkeit) -- and vice versa. This implies that something is true in Wirklichkeit iff (if and only if) it is logical.
Also, any true rebuffing of a "fake controversy" (i.e., fake because it was created by the reader willingly or not, and not in the data itself) risks coming across as sharply negative. Thus, rebuffing of truth-deniers leads to ...affirming truth-deniers. The semantic principle is: before facing the night, one should not counter the darkness but create light. When faced with a "stone thrown by an enemy" one should see it as a construction stone offered by a colleague.
But everyone helps. The noise defines the signal. The signal is what the noise is not. To further put the question in perspective, in terms of fault-tolerant design and CS, consensus (a.k.a. "Byzantine agreement") is a design protocol to bring processors to agreement on a bit despite a fraction of bad processors behaving so as to disrupt the outcome. The disruption is modeled as noise and can come from any source -- attackers or faults, even hardware faults.
Arguing, in turn, would risk creating a fat target for bad faith or for merely misleading references, exaggerations, and pseudo-works, as we see rampant on RG, where even porous publications are cited as if they were valid.
Finally, arguing may bring in the ego, which is not rational and may tend to strengthen the position of a truth-denier. Following Pascal, people tend to be convinced better by arguments they find themselves, from the angle that they see (and there are many angles to every question). Pascal thought that the best way to defeat the erroneous views of others was not to face them head-on but to slip in through the backdoor of their beliefs. And trust is highest as self-trust -- everyone tends to trust themselves better and faster than to trust someone else.
What is your qualified opinion? This question considered various options and offers NO as the best answer. Here, to be clear, "truth-denial" is to be understood with respect to one's own "truth", which can be another's "falsity", or not. An impasse is created: how best to solve it?
This appears to be pure sabotage. Someone has thrown out my abstracts (so far I have discovered two: one about scarce rationality, and one about institutions in evolutionary economics) and replaced them with irrelevant and, to my mind, nonsensical notes from a symposium about Say's law. How could this happen? And how can I correct it?
Thanks for your answer and best regards,
Pavel Pelikan, prof. The University of Economics, Prague
It is usually (always?) said that the h → 0 limit of a quantum system is classical. I recently realized that this is wrong. The spectrum of the h → 0 limit has the density of the rationals, while that of the classical system is the reals: Cantor's ℵ₀ versus the cardinality of the continuum, 2^ℵ₀ (his ℵ₁ if the continuum hypothesis is assumed).
This could be of relevance in discussions of reversibility and related paradoxes.
Perhaps someone could bring this to the notice of someone who works on such problems?
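As a concrete illustration of the cardinality claim, using the textbook harmonic oscillator (this is standard material, added only to make the comparison explicit):

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \dots
```

For every $\hbar > 0$ the spectrum $\{E_n\}$ is countable (cardinality $\aleph_0$); letting $\hbar \to 0$ shrinks the spacing $\hbar\omega$ to zero but never changes the countability, whereas the classical oscillator admits every energy $E \in [0,\infty)$, a set of cardinality $2^{\aleph_0}$.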
We have a categorization task that loads heavily onto procedural learning systems, but we're looking to see whether a manipulation we introduced recruits any additional neural regions. We've done a univariate analysis and now want to do an exploratory, whole-brain searchlight analysis.
One thing we can't decide on is what searchlight radius to use. We've scoured high and low and can't seem to find a single article comparing searchlight radii, so we're not sure what to do! Some papers use multiple sizes, some use just one, but the choice is rarely given a rationale. We found the Kriegeskorte article in which 4 mm is suggested as appropriate; however, the literature in our field seems to use much larger radii (8+ mm), and one paper in particular suggested that 4 mm may only be appropriate for low-level visual processing, but not for high-level conceptual information.
Any references or advice on how to pick an appropriate searchlight size would be greatly appreciated.
With much thanks,
I have tried to search for topics in mathematics that include the word "rational", and I found many references using "rational" in group theory, probability, number theory, algebraic geometry, topology, chaos theory, and so on. Now I am confused, and I have asked myself many times: why does "rational" occur so often across the topics of mathematics, and probably in informatics and physics as well? Why is this word so interesting in mathematics? Based on the references linked below, why do we so often try to make things rational in mathematics? What is special about the word "rational" in mathematics?
**List of Linked reference include word " Rational"**:
[Regularization of Rational Group Actions](https://arxiv.org/abs/1808.08729)
[Rational Points on Rational Curves](https://arxiv.org/abs/1911.12551)
[Automatic sets of rational numbers](https://arxiv.org/abs/1110.2382)
[Rational homology 3-spheres and simply connected definite bounding](https://arxiv.org/abs/1808.09135)
[Rational Homotopy Theory](https://link.springer.com/book/10.1007/978-1-4613-0105-9)
[A Rational Informatics-enabled approach to the Standardised Naming of Contours and Volumes in Radiation Oncology Planning](https://www.academia.edu/7430350/A_Rational_Informatics-enabled_approach_to_the_Standardised_Naming_of_Contours_and_Volumes_in_Radiation_Oncology_Planning)