Questions related to Reasoning
The Rac proteins are a subfamily of the Rho GTPases. The Rho GTPases are a subfamily of the Ras GTPases. Does the name Rac bear any significance regarding the protein's identity, discovery, or function? Is it an acronym? Was there any rhyme or reason for naming this subfamily Rac?
Artificial intelligence (AI) might be the most important technology we ever develop. Ensuring it is safe and used beneficially is one of the best ways we can safeguard the long-term future. Ensuring AI is used to benefit everyone is already a challenge, and it's critical we get it right. As AI becomes more powerful, so does its scope for affecting our economy, politics, and culture. This has the potential to be either extremely good, or extremely bad. On the one hand, AI could help us make advances in science and technology that allow us to tackle the world's most important problems. On the other hand, powerful but out-of-control AI systems ("misaligned AI") could result in disaster for humanity. Given the stakes, working towards beneficial AI is a high-priority cause that we recommend supporting, especially if you care about safeguarding the long-term future.
Why is ensuring beneficial AI important?
We think ensuring beneficial AI is important for three reasons:
- AI is a technology that is likely to cause a transformative change in our society — and poses some risk of ending it.
- Relative to the enormous scale of this risk, not enough work is being done to ensure AI is developed safely and in the interests of everyone.
- There are things we can do today to make it more likely that AI is beneficial.
What is the potential scale of AI's impact?
In 2019, AI companies attracted roughly $40 billion USD in disclosed investments. This number has increased over the last decade, and is likely to continue growing.
Already the results of these investments are impressive: in 2014, AI could only generate blurry, colourless, generic faces, but within just a few years its synthetic faces became indistinguishable from photographs.
I'm working in an industrial field trying to improve some products, among them Giemsa stain. The problem is that our solution cannot stain the chromosomes of peripheral blood cells, while control slides from other companies' solutions do. What could be the reason for the lack of staining with my solution?
I received a liquid starter culture of Thalassiosira weissflogii microalgae. The starter culture grows indoors, but it does not grow on agar plates using f/2+Si medium. I am seeking reasons for its failure to grow on the petri dishes and possible solutions to this issue.
Qubit gives this error after reading the second standard. It asks you to read the standards once more, or you can close and continue reading the samples, but I am not sure whether we can use the results. I could not find a detailed explanation in the user manual. I would be very happy to hear about your experience.
Dear community, I am working on my thesis data, and my data analysis shows that there is no relationship between the IV and DV; only an indirect relationship exists in the presence of two mediators (serial mediation), whereas the literature supports a positive relationship between the IV and DV.
The R-square value for this relationship is very low: 0.052.
* All the measurement-model criteria, i.e., loadings, Cronbach's alpha, AVE, HTMT, and VIF values, meet the thresholds.
* All other relationships are significant (IV-mediator-DV).
Now my questions are:
- What could be the possible reason for the insignificant relationship between the IV and DV?
- How should I proceed with this?
- Why is the R-square value for the main DV so low, whereas for the mediators it is 0.45 and 0.52?
- If I proceed as is, would it create any problem at the time of my PhD defence?
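For intuition on the pattern described above, a small synthetic sketch (illustrative data only, not the poster's dataset): in a serial-mediation chain IV -> M1 -> M2 -> DV with no direct IV -> DV path, the R-square of the DV on the IV alone can be very small even when every link in the chain is strong, because the IV-DV association is attenuated through each mediator.

```python
import numpy as np

# Synthetic serial-mediation chain: IV -> M1 -> M2 -> DV, no direct
# IV -> DV term. All coefficients and sample sizes are illustrative.
rng = np.random.default_rng(42)
n = 300
iv = rng.standard_normal(n)
m1 = 0.6 * iv + rng.standard_normal(n)   # IV -> M1
m2 = 0.6 * m1 + rng.standard_normal(n)   # M1 -> M2
dv = 0.6 * m2 + rng.standard_normal(n)   # M2 -> DV (no direct IV term)

def r_squared(y, predictors):
    """R^2 from an OLS fit of y on the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

print("R^2 of DV on IV alone:       ", round(r_squared(dv, [iv]), 3))
print("R^2 of DV on IV, M1, and M2: ", round(r_squared(dv, [iv, m1, m2]), 3))
```

The first R-square comes out small (comparable in magnitude to the 0.052 reported), while the model including the mediators explains far more variance, which is exactly the full-mediation pattern described.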
I have employed a PtM/C (M = transition metal) alloy catalyst for the ORR in acid. Although I obtained a good limiting current, the E1/2 is disappointing. I would appreciate suggestions regarding the reasons for the poor E1/2 even though the limiting current is much better. My understanding is that the poor kinetic current is due to insufficient exposed active sites on the catalyst; what other factors could be responsible for this issue? Thanks
I am conducting CV analysis of an rGO-based electrode using a CHI 650E workstation with platinum foil as the counter electrode, an Ag/AgCl reference electrode, and 1 M H2SO4 electrolyte. How can I decide the potential window that can be applied for the CV analysis? One sample analysis is attached for reference. Also, for many samples I am getting a flat line at 1 mA. What is the reason for this?
I want to know which modeling strategies and which software are best for defining the parameters governing the behavior of reinforced 3D-printed concrete.
Secondly, which software's constitutive models are good enough to capture the structural response under various loading conditions reasonably well, e.g., incorporation of reinforcement, the anisotropy of 3D-printed concrete, cracking in different directions, crack width, etc.?
The Hypergeometrical Universe Theory has been censored since 2006. A notable censor is Paul Ginsparg, who keeps a blacklist at the Los Alamos Archives.
I presented my work in several papers (they need a refresh and contain a notable error - the SDSS data analysis) and in Quora, where I tried to educate people and expose my ideas to criticism.
The theory is called a theory of everything because it is fundamental (the least parameterized) and affects all fields of Physics.
I proposed a new model for Matter where particles are polymers of the Fundamental Dilator and derived Natural Laws from first principles.
The basis for the derivation of Natural Laws is three-fold:
a) The Universe is a Lightspeed Expanding Hyperspherical Hypersurface.
b) Particles are polymers of the Fundamental Dilator (FD). FD is a coherence between stationary states of deformation of space. The involved states are the well-characterized electron and proton. This has four implications:
b.1) It explains the plethora of particles with only two states instead of the field that creates them. This is a simplification when one considers Quantum Field Theory. Masses are given by just 3D deformation volume times an energy density, thus eliminating the need for the Higgs Model.
b.2) The FDs shapeshift and spin in a 4D Spatial Manifold, thus eliminating the need for assigning an "intrinsic degree of freedom" to spin. The shapeshifting defines a global ABSOLUTE FREQUENCY for the waves carrying interaction. In HU, the Quantum Lagrangian Principle states simply that FDs will move in a 4D spatial manifold such that they never do any work (or receive work from the spatial deformation). This is an actual Lagrangian Principle (no work done by constraints). It is called Quantum because it is the reason for Quantum Mechanics in material systems. I explicitly said "material systems" because space itself is a quantum system, and it is governed by the Heisenberg Principle.
b.3) HU contains Absolute Time and a preferential reference frame; hence, HU clashes with Relativity, and that is OK since HU succeeds everywhere GR and SR succeeded and also where GR failed (explaining the evolution of the Universe). The existence of an Absolute Time means that one can define an Absolute Frequency of Interaction and apply a Fourier-like model for interaction among particles. Neutrinos are particles that carry torsions for 3D rotations. They are coherences between states distinct from the electron and proton and thus have a distinct frequency. That is the reason for their ghostly nature.
b.4) HU's Fundamental Dilator is based upon a Stroboscopic Principle - one cannot observe phases of the FD that are perpendicular to our 3D Universe (3D hyperspherical hypersurface).
c) The third basis is the Quantum Lagrangian Principle. It governs dynamics and replaces Newton's Laws of Dynamics while introducing Quantum Mechanics.
HU derived an epoch-dependent Law of Gravitation where G is inversely proportional to the 4D radius of the Universe. Dirac tried to achieve the same using Numerology. An argument based on numerology was not convincing enough. HU derived the laws of Nature. That should be enough.
Epoch-dependent G is necessary to make HU consistent with the Supernova Cosmology Project data. HU's Cosmic Distance Ladder is simple:
d(z) = R_0 * z/(1+z)
For that to make sense, HU introduces a new model for the Photon (it becomes a waving on the top of another wave - the dilaton field).
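The distance-ladder formula above can be evaluated in a few lines. This is a minimal numerical sketch only; the value plugged in for R_0 (the current 4D radius of the Universe) is a placeholder assumption, not a figure taken from the theory's papers.

```python
# Sketch of the cosmic distance ladder above: d(z) = R_0 * z / (1 + z).
def hu_distance(z, r0=14.0e9):  # r0 in light-years, illustrative value only
    return r0 * z / (1.0 + z)

# d(z) rises toward, but never exceeds, R_0 as z grows:
for z in (0.1, 1.0, 10.0, 100.0):
    print(z, hu_distance(z))
```

Note that under this formula d(z) saturates at R_0 for large z, so all observable distances stay below the assumed 4D radius.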
Epoch-dependent G also solves the problem of "Impossibly Early Galaxy Formation." I will provide a new model for galaxy formation in my next paper.
Of course, HU also provides a new taxonomy for particle physics (replacing the standard model of particle physics) and provides the path to non-perturbative Quantum Chromodynamics, eliminating the need to sum perturbation terms from Feynman Path Integrals.
The new model for the creation of the Universe is called the Big Pop Cosmogenesis. It is possible because HU's Universe contains only space, deformed space, and time. That is the simplest possible model.
Copycats and plagiarizers abound. You might have heard about similar ideas. The problem they have is that stealing just part of my ideas doesn't make a theory.
Below is a list of links:
There is a theory that reproduces Einstein's successes and avoids Einstein's failures and that is Quantum Mechanical... It is called The Hypergeometrical Universe Theory (HU).
Recasting Newton's Laws of Dynamics in the Space Stress Strain Paradigm
Here, I created a map for the observable and unobservable Universe and located Earth on it:
Here is how I created the map of the Hyperspherical Universe from the knowledge obtained by the Planck Satellite:
3D galaxy density map of the current universe:
Here is how I challenged Einstein's theory:
Here is my take on the Dark Stuff:
HU explaining JWST observations:
Here, I explained why the Universe has four spatial dimensions by calculating the probability of universes of different dimensionalities.
The Big Pop Cosmogenesis - replacement to the Big Bang
Big Pop Article
In how many ways can Dr. Marco Pereira prove Einstein wrong
The MAIN REASONS Einstein's Theory is wrong: a) Einstein missed an extra spatial dimension, b) The theory should use ABSOLUTE VELOCITY instead of calling everything Relative.
The first way Dr. Marco Pereira proves Einstein wrong.
The second way Dr. Marco Pereira proves Einstein wrong.
The third way Dr. Marco Pereira proves Einstein wrong.
How to get GR success without GR
For HIGH IQ people
Smarandache, F. (2007). Hadron Models and Related New Energy Issues.
Smarandache, F., & Christianto, V. (2007). Quantization in Astrophysics, Brownian Motion, and Supersymmetry.
Pereira, M. (2017). The Hypergeometrical Universe: Cosmogenesis, Cosmology and Standard Model. World Scientific News, 82, 1–96.
Pereira, M. (2018). The Case for a Fourth Spatial Dimension and the Hyperspherical Force. World Scientific News, 98, 127–139.
Pereira, M. (2018). The Hypergeometrical Force: The Coma Cluster without Dark Matter. World Scientific News, 101, 222–228.
Pereira, M. (2019). The Optical Path of Ancient Photons and the Supernova Project. World Scientific News, 130, 195–215.
Pereira, M. (2017). The Hypergeometrical Universe: Cosmogenesis, Cosmology and Standard Model. Global Journal of Science Frontier Research, 17(5).
Pereira, M. A. (2010). The Hypergeometrical Universe: Cosmology and Standard Model. AIP Conference Proceedings, 1316(1).
Pereira, M. The Big Pop Cosmogenesis - Equation of State (this article).
I am fabricating a small-diameter vascular graft using electrospinning. I am facing a problem: spikes form in the collected fiber after a certain thickness. Can anyone help me find the reason for this and offer suggestions for overcoming it?
During the match that took place between YouTubers from around the world, only male members were playing on the field, even though there are many female YouTubers with large followings. Why did this happen for one gender and not the other?
Mouse intestinal organoids were growing fine until passage 3. I passaged them last Friday and left for the weekend; on Monday I found that they were dying, and I am not sure of the reason. I am using the Intestinal Epithelial Organoid Culture protocol with IntestiCult™ Organoid Growth Medium (Mouse). Instead of a Matrigel dome, I am using a Cultrex UltiMatrix Reduced Growth Factor Basement Membrane Extract dome. Any suggestions would be greatly appreciated.
Here are some images.
I did a reaction, basically a thiol-Michael addition, in which I tried to conjugate an -SH group to the double bond of an acrylate group. The reaction was aided by TCEP/TEA as catalyst. After the reaction, the acrylate signals disappeared and new signals appeared, which I suspected were due to the addition reaction. However, the new peaks integrate to about three times more protons than I can account for. Can anyone help me with what could possibly be the cause of this? Thank you for your anticipated assistance.
I know the AMC resonant frequency at 0° phase. I have read many papers; in some, the AMC S11 is near 0 dB, and in others it is below -10 dB. I want to know which is correct.
My view is that it should be near 0 dB, because the AMC is meant to reflect the wave in phase. That is the reason the antenna gain rises!
Is an AMC only for increasing gain?
By reason of the application of the Lorentz factor, with (1 - v^2/c^2)^(1/2) in the denominator of equations, luminal and other comparable energy propagations take on one and the same velocity. This is the relativity effect (better, the comparative effect) between the v of objects and c, the speed of light. That is, it is presupposed here that c is the object of comparison for determining the speed effect of velocity difference across a duration.
It is against the criterion-velocity c itself that c becomes unsurpassable! Hence, I am of the opinion that the supposed source-independence is nothing but an effect of OUR APPARATUS-WISE OBSERVATION LIMIT AND OUR FIXING OF THE CRITERION OF OBSERVATION AS THE OBSERVED VELOCITY OF LIGHT.
In this circumstance, it is useless to claim that (1) luminal and some other energy propagations with velocity c are source-independent, and (2) these wavicles have zero rest mass, since the supposed source-independence has not been proved theoretically or experimentally without using c as the criterion velocity. The supposed source-independence is merely an effect of c-based comparison.
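The formal behavior being criticized above can be seen numerically: with the factor (1 - v^2/c^2)^(1/2) in the denominators, the standard relativistic velocity-addition rule never yields a composite speed above c, whatever sub-luminal inputs are combined. A minimal sketch of that standard rule (not an endorsement of either side of the argument):

```python
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u, v, c=C):
    """Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1.0 + u * v / c**2)

w = add_velocities(0.9 * C, 0.9 * C)
print(w / C)  # stays below 1 no matter how close u and v get to c
```

This is exactly the sense in which c, once taken as the criterion velocity, becomes unsurpassable within the formalism.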
Against this background, it is possible to be assured that photons and other similar c-wavicles are extended particles -- varying their size throughout the course of motion in the spiral manner. Hence the acceptability of the term 'wavicle'. Moreover, each mathematical point of the spiral motion is to be conceived not as two-, but as three-dimensional, and any point of motion added to it justifies its fourth dimension. Let us call motion as change.
These four dimensions are measuremental, hence the terms 'space' (three-dimensional) and 'time' (one-dimensional). This is also an argument countering the opinion that in physics and cosmology (and other sciences) time is not attested!
The measurements of the 3-space and the measurements of the 1-time are not in the wavicles or in the things being measured. The measurements are cognitive characteristics of the act of measuring.
IN FACT, THE EXTENSION OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'SPACE', AND THE CHANGE OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'TIME'. Hence, the physically out-there-to-find characteristics of the wavicles and objects are EXTENSION AND CHANGE.
Extension is the quality of all existing objects by which they have parts. This is not space. Change is the quality by which they have motion, i.e., impact generation on other similar wavicles and/or objects. This is not time. Nothing has space and time; nothing is in space and time. Everything is in Extension-Change.
Any wavicle or other object existing in Extension-Change is nothing but impact generation by physically existent parts. This is what we term CAUSATION. CAUSALITY is the relation of parts of physical existents by which some are termed cause/s and the others are termed effect/s. IN FACT, THE FIRST ASPECT OF THE PHYSICALLY ACTIVE PARTS, WHICH BEGINS THE IMPACT, IS THE CAUSE; AND THE SECOND ASPECT IS THE EFFECT. Cause and effect are, together, one unit of continuous process.
Since energy wavicles are extended, they have parts. Hence, there can be other, more minute, parts of physical objects, which can define superluminal velocities. Here, the criterion of measurement of velocity cannot be c. That is all...! Hence, superluminal velocities are a must by reason of the very meaning of physical existence.
THE NOTION OF PHYSICAL EXISTENCE ('TO BE') IS COMPLETELY EXHAUSTED BY THE NOTIONS OF EXTENSION AND CHANGE. Hence, I call Extension and Change the highest physical-ontological Categories. A metaphysics (physical ontology) of the cosmos is thus feasible. I have been constructing one such. My book-length publications have been efforts in this direction.
I invite your contributions by way of critiques and comments -- not ferocious, but friendly, because I do not claim that I am the last word in any science, including philosophy of physics.
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
ESSENTIAL REASON IN PHYSICISTS’ USE OF LOGIC:
IN OTHER SCIENCES TOO!
Raphael Neelamkavil, Ph.D., Dr. phil.
1. The Logic of Physics
Physics students begin with meso-world experiments and theories. Naturally, at a young age, they become convinced that the logic they follow at that level is identical with the ideal of scientific method. Convictions about scientific temper may further confirm them in this. This has far-reaching consequences for the concept of science and of the logic of science.
But, unquestionably, the logic behind such an application of the scientific method is only one manner of realizing (1) the ideal of scientific method, namely, observe, hypothesize, verify, theorize, attempt to falsify for experimental and theoretical advancements, etc., and (2) the more general ideal of reason.
But does any teacher or professor of physics (or of other sciences) instruct their students on the advantages of thinking and experimenting with the above-mentioned fundamental fact of all scientific practice in mind, or make them capable of realizing its significance in the course of time? I think not.
This is why physicists (and for that matter all scientists) fail at empowering their students and themselves in favour of the growth of science, thought, and life. The logic being followed in the above-said mode of practice of scientific method, naturally, becomes for the students the genuine form of logic, instead of being an instantiation of the ideal of logic as reason. This seems to be the case in most of the practices and instruction of all sciences till today. A change of the origin, justification, and significance of the use of logic in physics from the very start of instruction in the sciences is the solution for this problem. The change must be in the foundations.
All humans equate (1) this sort of logic of each science, and even logic as such, with (2) reason as such. Reason as such, in fact, is more generic than any kind of logic. Practically no professor (of physics as well as of other sciences) presents the version of logic of their science as an instantiation of reason, which may be accessed ever better as the science eventually grows into something more elaborate and complex. A physicist gets more and more skilled at reasoning only as and when she/he wants to grow continuously into a genuine physicist.
As the same students enter the study of recent developments in physics like quantum physics, relativity, nano-physics (Greek nanos, "dwarf"; in physics, at the scale of 10^-9 m), atto-physics (10^-18), etc., they forget to make room for the strong mathematical effects that arise from the conceptual and processual paradoxes due to the epistemological and physical-ontological difference between the object sizes and the sizes of ourselves / our instruments. The best examples are the Uncertainty Principle, the Statistical Interpretation of QM, Quantum Cosmology, etc.
They tend to believe that some of these and similar physics may defy our (meso-physical) logic – but by this mistakenly intending that all forms of reasoning would have to fail if such instances of advanced physics are accepted in all of physics. As a result, again, their logic tends to continue to be of the same level as has been taken while they did elementary levels of physics.
Does this not mean that the ad hoc, make-believe interpretations of the logic of the foundations of QM, Quantum Cosmology, etc. are the culprits that naturally make the logic of traditional physics inadequate as the best representative of the logic of nature? In short, in order to find a common platform, the logic of traditional and of recent branches of physics must improve so as to be adequate to nature's logic.
Why do I not suggest that the hitherto logic of physics be substituted by quantum logic, relativity logic, thermodynamic logic, nano-logic, atto-logic, or whatever other logic of any recent branch of physics that may be imagined? One would substitute logic in this manner only if one is overwhelmed by what purportedly is the logic of the new branches of physics. But, in the first place, I wonder why logic should be equated directly with reason. The attempt should always be to bring the logic of physics in as much correspondence with the logic of nature, so that reason in general can get closer to the latter. This must be the case not merely with physicists, but also with scientists from other disciplines and even from philosophy, mathematics, and logic itself.
Therefore, my questions are: What is the foundational reason that physicists should follow and should not lose at any occasion? Does this, how does this, and should this get transformed into forms of logic founded on a more general sort of physical reason? Wherein does such reason consist and where does it exist? Can there be a form of logic in which the logical laws depend not merely on the size of objects or the epistemological level available at the given object sizes, but instead, on the universal characteristics of all that exist? Or, should various logics be used at various occasions, like in the case of the suggested quantum logic, counterfactual logic, etc.?
Just like logic is not to be taken as a bad guide by citing the examples of the many logicians, scientists, and “logical” human beings doing logic non-ideally, I believe that there is a kernel of reason behind physics, justified solely on the most basic and universal characteristics of physical existents. These universals cannot belong solely to physics, but instead, to all the sciences, because they belong to all existents.
This kernel of reason in physics is to be insisted upon at every act of physics, even if many physicists (and other scientists and philosophers) may not ensure that kernel in their work. I shall discuss these possibly highest universals and connect them to logic meant as reason, when I elaborate on: 3. The Ontology of Physics (in a forthcoming discussion in RG)
The matter on which physicists do logical work is existent matter-energy in its fundamental implications and the derivative implications from the fundamental ones. This is to be kept in mind while doing any logically acceptable work in physics, because existent matter-energy corpora in processuality delineate all possible forms of use of logic in physics, which logic is properly to be termed nature's reason.
Moreover, conclusions are not drawn up by one subject (person) in physics for use by the same subject alone. Hence, we have the following two points to note in the use of logic in physics and the sciences: (1) the intersubjectively awaited necessity of human reason in its delineation in logical methods should be upheld at least by a well-informed community, and (2) the need for such reason behind approved physics should then be spread universally with an open mind that permits and requires further scientific advancements.
These will make future generations further question the genuineness of such logic / specific realization of reason, and constantly encourage attempts to falsify theories or their parts so that physics can bring up more genuine instantiations of human reason. But is such human reason based on the reason active in nature?
Although the above arguments and the following definition of logic in physics might look queer or at least new and unclear for many physicists, for many other scientists, for many mathematicians, and even for many logicians, I define here logic for use in physics as the fundamental aspect of reason that physics should uphold constantly in every argument and conclusion due from it:
Logic in physics is (1) the methodological science (2) of approaching the best intersubjectively rational and structural consequences (3) in what may be termed thought (not in emotions) (4) in clear terms of ever higher truth-probability achievable in statements and conclusions (5) in languages of all kinds (ordinary language, mathematics, computer algorithms, etc.) (6) based on the probabilistically methodological use, (7) namely, of the rules of all sensible logics that exemplify the Laws of Identity, Non-contradiction, and Excluded Middle, (8) which in turn must pertain to the direct and exhaustive physical implications of “to exist”.
Here I have not defined logic in physics very simply as “the discipline of the rules of thought”, “the discipline of the methodological approach to truths”, etc., for obvious reasons clarified by the history of the various definitions of logic.
But here comes up another question: Is the reason pertaining to physical nature the same as the most ideal form of human reason? From within the business of physics, how to connect the reason of physical nature with that of humans? I may suggest some answers from the epistemological and ontological aspects. But I would appreciate your responses in this regard too.
2. The Epistemology of Physics (in a forthcoming discussion in RG)
3. The Ontology of Physics (in a forthcoming discussion in RG)
Hello, I'm working with iPSCs, culturing them under feeder-free conditions on Matrigel. I'm using Corning Matrigel for Organoid Culture, as that was the only Matrigel that could be shipped in a reasonable amount of time. But now I'm having some trouble with cell attachment.
The colonies are rolling up after a few days. I'm not sure if it's because of the Organoid Matrigel or because of ROCKi removal (because I notice the detachment after I remove ROCKi as well) or a mix of both.
Has anyone faced this particular issue before or noted any differences in monolayer culture with the two different Matrigels? From what I can understand from the product documentation, both Matrigels are EHS tumor ECM with the only difference being the Organoid one being optimized for organoid culture. But the components are ultimately the same.
What challenges occur in India when the summer monsoon is irregular, and what are the reasons for breaks in the monsoon in different regions?
It's a dumb question maybe, but I'm not sure how to proceed.
Suppose you have two different models which generate molecules (in the form of coordinates of atoms). The molecules generated by the models are not the same.
Next, you have a method to evaluate the energy of a molecule (given atomic coordinates, it outputs a number). I can also optimize atomic coordinates with this method.
The question is: how do I compare these two generative models in terms of energy?
My guess is that I can run a minimization for each configuration and evaluate dE = E_final - E_initial. But how can I compare/aggregate dE between different molecules? My guess is that, for a crude estimate, one can divide these quantities by the total charge of the nuclei in the molecule (which equals the number of electrons for neutral molecules). Is this reasonable, or do better ways exist?
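The dE-per-electron idea above can be sketched concretely. In this toy version a pairwise Lennard-Jones energy stands in for whatever real energy evaluator is available, the "molecules" and nuclear charges are placeholders, and the normalization is exactly the crude per-electron estimate described:

```python
import numpy as np
from scipy.optimize import minimize

def toy_energy(flat_coords, n_atoms):
    """Toy pairwise Lennard-Jones energy; a stand-in for a real evaluator."""
    x = flat_coords.reshape(n_atoms, 3)
    e = 0.0
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            r = np.linalg.norm(x[i] - x[j])
            e += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return e

def normalized_relaxation(coords, atomic_numbers):
    """dE = E_final - E_initial from a local minimization, divided by the
    total nuclear charge (= electron count for a neutral molecule)."""
    n = len(coords)
    e_init = toy_energy(coords.ravel(), n)
    res = minimize(toy_energy, coords.ravel(), args=(n,))
    return (res.fun - e_init) / sum(atomic_numbers)

# Two "molecules" from two hypothetical generative models:
mol_a = np.array([[0.0, 0.0, 0.0], [1.3, 0.0, 0.0], [0.0, 1.3, 0.0]])
mol_b = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0]])
z = [6, 6, 6]  # placeholder nuclear charges
print(normalized_relaxation(mol_a, z), normalized_relaxation(mol_b, z))
```

The normalized dE is always non-positive (minimization can only lower the energy), and the model whose samples sit closer to local minima scores closer to zero. Averaging this quantity over many generated molecules gives one size-adjusted figure per model, though medians or distributions may be more robust to outliers.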
There are still a huge number of direct and indirect subsidies for fossil fuels at all levels. The main reason is to sustain national industries and improve their competitiveness. In addition, it is deemed that low-income families should be supported with their energy expenses. For these reasons, I think, governments are very reluctant to withdraw subsidies. Maybe an EU directive could put all the countries' subsidies at a similar level and improve fair competition in the EU market, releasing large financial amounts for renewables.
We could start with a harmonisation of VAT in all member states.
PRIMES LDS (PRIMES Laser Diagnostic Software) is a well-known tool for M² measurements of laser beams. Is anyone working with this software? I want to know the reason for taking multiple planes to measure M². When taking a final result, which plane should be looked into? If we are measuring a beam close to single-mode, should each plane give a Gaussian intensity distribution?
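On why multiple planes are needed: ISO 11146-style M² determination fits a hyperbolic caustic w(z) through beam-width measurements at many z positions, ideally sampling both inside and beyond the Rayleigh range; no single plane determines M² by itself. A hedged sketch with synthetic data (the wavelength, widths, and noise level are placeholder assumptions, not PRIMES defaults):

```python
import numpy as np
from scipy.optimize import curve_fit

WAVELENGTH = 1.064e-6  # metres; placeholder value

def caustic(z, w0, z0, m2):
    """Beam radius along propagation: hyperbola parameterized by w0, z0, M^2."""
    zr = np.pi * w0**2 / (m2 * WAVELENGTH)  # effective Rayleigh length
    return w0 * np.sqrt(1.0 + ((z - z0) / zr) ** 2)

# Synthetic width measurements at 11 planes, with 1% noise:
z = np.linspace(-0.1, 0.1, 11)
rng = np.random.default_rng(1)
w_meas = caustic(z, 100e-6, 0.0, 1.3) * (1 + 0.01 * rng.standard_normal(z.size))

popt, _ = curve_fit(caustic, z, w_meas, p0=[80e-6, 0.0, 1.0])
w0_fit, z0_fit, m2_fit = popt
print(f"fitted M^2 = {m2_fit:.2f}")
```

The fitted M² comes from the whole caustic, not from any one plane, which is why the software records many planes before reporting a final value.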
PLANCK ERA / QUANTUM ERA and “DISAPPEARANCE” OF PHYSICAL CAUSALITY: FALLACIES FROM “OMNIPOTENCE” OF MATHEMATICS
Raphael Neelamkavil, Ph.D. (Quantum Causality),
Dr. phil. (Gravitational Coalescence Cosmology)
Cosmologists and quantum cosmologists seem to be almost unanimous (though, happily, a bit less unanimously today) that, at the so-called Planck era / quantum era of the original state of the big bang (of our universe, or of an infinite number of such universes existent in an infinite-eternal multiverse, whichever the case may be), where all forces are supposed to be unified or quasi-unified (but always stated without any solid proof), (1) either causality did not exist and will never exist, (2) or any kind of causality is indistinguishable from the normal course of physical existents.
Is this sort of cosmological theorizing acceptable, where (1) the unification is supposed but is not necessarily physical-ontologically presupposable, and (2) causality and non-causality are taken in the mood of dilemma? This sort of theorizing is, of course, based on some facts that most physicists and other scientists agree on without much effort to search for causes of approval or disapproval.
But the adequacy of such reasons for this conclusion is questionable. The manner of concluding to non-causality or indistinguishability of causality and non-causality at spots in the universe or multiverse, where all forces are supposed to be unified or quasi-unified, is questionable too. The main reason is the lack of physical-ontological clarity regarding the status of causality and the status of unification of the forces.
In my opinion, this is based on the inevitable fact that whatever the mathematics automatically prescribes for such situations can be absolute only if all the parameters, quantities, etc. that have entered the equations are absolute. This necessary condition has not been met in the physics that goes into the mathematical formulation of the said theory.
Even the measurements humanity has so far made of the speed of light are not exact and absolute. The fantastic cosmological conclusion, namely a volatile decision for or against causality and a supposed verity of the supposition that all forces are unified therein, does not possess an adequate mathematical reason, and certainly not a sufficient physical one.
The reason I gave is not strictly and purely mathematical, physical, or just generally philosophical. It is strictly physical-ontological and mathematical-philosophical. Things physical-ontological are not "meta-"physical in the sense of being beyond the physical. Instead, they treat of the preconditions for there being physics and mathematics. Because they are preconditions, not respecting them leads to grave theoretical problems in mathematics, science, and philosophy.
Hence, in my opinion, fundamentally mathematical-ontological and physical-ontological presuppositions and reasons are more rationally acceptable as foundations of mathematics and physics than all that we now have, in the name of foundations, as strictly mathematical and physical. I state the obvious here for the sake of clarity: I presuppose that physical ontology consists of the necessary presuppositions of anything dealt with in physics, astrophysics, cosmology, and the other purely physical sciences, and of course of mathematics and logic as applied to existent physical things / processes.
The main reason given for the so-called non-causality, or the indistinguishability between causality and non-causality, at certain cosmological or physical spots seems to be that space and time could exist only with the big bang (or whatever might be imagined in its place), whether just under 14 billion years ago, or two or three times as long ago, or whenever.
First, my questions on this assumption stem from an antagonism I have toward cosmologists who lap up the opinion expressed by St. Augustine centuries ago: that if space and time "exist" only if and from the time when the universe exists, then the question of space and time before the expansion of the universe is meaningless. These cosmologists presume that the expansion of the universe began from a state of nullity, and hence that nothing could have existed before the beginning of the expansion. But what if the universe existed from eternity, like a primeval stuff without any change, and then suddenly began to explode? From this basic premise they conclude that time, an "existent" now, would not have existed before the expansion! What clarity about the concept of existence! Evidently, this is due to the gaping absence of regard for the physical-ontological presuppositions behind physical existence.
Secondly, as is evident, some of them think that space and time are things that exist beyond or behind all the physical processes that exist. Thus, some identify space even with the ether. If we have so far only been able to measure physical processes, why call these measurements measures of space and time? Why not call them just what they are, and accept that they are termed space and time merely for convenience? After all, the names we give to things do not themselves exist; and we have never seen space and time at all.
Thirdly, is it such a difficult thing for scientists to accept the lack of evidence for any sort of "existence" of space and time as background entities? Einstein spoke not of the curvature of an existent spacetime, but of mathematical calculations, within a theory of the measurementally spatiotemporal aspect of existent physical processes, showing us that this measurementally spatiotemporal aspect of the physical processes, including existent energy-carrier gravitational wavicles, is curving within the mathematical calculations.
Now, if the curvature is of existent processes (including existent energy-carrier gravitational wavicles), then, at the so-called primeval spot in each existent universe (even within each member of an infinite-eternal multiverse containing an infinite number of finite-content universes like ours) where all forces are supposed to be unified or quasi-unified, there cannot be a suspension of causation, because nothing existent can be compressed or rarefied into absolute nullity and continue to exist.
This demonstrates that, even in highly condensed or rarefied states, no existent is nothing. It continues to exist in its Extended and Changing nature. Whatever is in Extension-Change-wise existence is nothing but causal existence, constantly causing finite impacts.
Why, then, do some cosmologists and theoretical physicists insist that gravitons do not exist, that space and time are entities, that gravitation is mere spacetime curvature, that causality disappears at certain spots in the cosmos (and in quantum-physical contexts), etc.? Why not then also say that material bodies are merely spacetime curvature and cannot exist? Is this not due to undue trust in the science-automating powers of mathematics, which can only describe processes in a manner conducive to its own foundations, and cannot tell us whether there is causation or not? I believe that only slavishly mathematically automated minds can accept such claims.
Examples of situations where causality is supposed to disappear are plentiful in physics. More than a century of non-causal interpretations of the Uncertainty Principle, the Double Slit Experiment, the EPR Paradox, the Black Hole Singularity, the Vacuum Creation of Universes, etc. offers clear examples of physicists and cosmologists falling prey to the supposed omnipotence of mathematics and to their unquestioning faith in its powers.
It is useless, in defence of mathematics and physics, to cite here the extreme clarity and effectiveness of mathematical applications in instruments in the space sciences, technology, medicine, and other fields. Did I ever question these precisions and achievements? But do the clarity and effectiveness of mathematics mean that mathematics is absolute? If its defenders can admit that it is not absolute, then let them tell us where it is relative and less than absolute. Otherwise, they are mere believers in a product of the human mind, as if mathematics were given by a miraculously active, almighty space and time.
All physicists need to recognize that all languages, including mathematics, are constructions of minds, though with foundations in the reality out there. Nothing can present physical processes to us absolutely well. Mathematics as applied in physics (or the other sciences) is an exact science of certain conceptually generalizable frames of physical processes. This awareness might help physicists to de-absolutize mathematical applications in physics.
Fourthly, the above has another important dimension. Physics, or for that matter any other science, cannot have at its foundations concepts that belong merely to that specific science. I shall give an example of how some physicists think physics needs only physical concepts at its foundations. To the question of what motion is, one may define it allegedly merely in terms of time, as "the orientation of the wave function over time". In fact, such a person has already presupposed quantum physics, which is clear from the mention of the wave function, which in turn naturally presupposes the previous physics that gave rise to quantum physics.
This sort of presupposing the specific science itself in order to define its foundational concepts is what happens when concepts from within the specific science, rather than clearly physical-ontological notions, come into play at the foundations of that science. Space and time are measuremental, hence cognitive and epistemic. They are not physical-ontological notions. Hence, they cannot be at the foundations of physics or of any other science. They are derivative notions.
It is for this reason that I have posited Extension and Change as the primary foundational notions. As I have already shown in many of my previous papers and books, these are the only two exhaustive implications of the concept of the To Be of Reality-in-total, the totality of whatever exists.
What are the bankruptcy cases of large social enterprises around the world?
1. What are the social, environmental, and management reasons for the bankruptcies?
2. What is the current situation and future direction of these social entrepreneurs after the bankruptcy of their large social enterprises?
3. After the bankruptcy of a social enterprise, how can the original recipients of its social solutions (aid recipients) receive continuous assistance? Which institutions will continue this assistance?
4. What is the difference between social enterprise bankruptcy and general enterprise bankruptcy?
5. Does social enterprise bankruptcy receive high or low tolerance from society? Why?
To explain the importance of moderation, the reasons for disagreement, and the difference between dilution and extremism in religion.
Why are climates colder at the poles and warmer at the equator, and why is equatorial ocean water warmer than polar ocean water?
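A quick way to see the geometric part of the answer: at equinox, the noon solar flux received per unit of horizontal surface scales roughly with the cosine of the latitude, because the same beam of sunlight is spread over a larger area near the poles. A minimal sketch (the helper name is hypothetical, and the atmosphere, albedo, and ocean circulation are ignored):

```python
import math

def relative_noon_insolation(latitude_deg):
    """Noon flux on a horizontal surface relative to the equator at equinox."""
    # Clamp to zero past the terminator; otherwise flux ~ cos(latitude).
    return max(0.0, math.cos(math.radians(latitude_deg)))

for lat in (0, 30, 60, 85):
    print(lat, round(relative_noon_insolation(lat), 2))
```

At 60° latitude the surface receives only half the equatorial flux, and at 85° under a tenth, which is the first-order reason both air and surface ocean water are colder toward the poles.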
Why do larger molecules have stronger intermolecular forces, and are intermolecular forces of attraction the reason why water cannot evaporate quickly?
Recently, I set up an MFC experiment, and the OCV fell to zero after holding a certain value for 8 minutes. Is this because no biofilm has formed? Thank you in advance for your answers.
What is the reason that sodium reacts with water more vigorously than lithium, and what is the relationship between the intermolecular forces in a liquid and its vapor pressure?
In TGA analysis, a plastic sample shows a 0.5 to 0.8% weight gain at the initial temperature. Please let me know the reason behind this, if anyone knows. Thank you.
Generally, in a fracture test, pre-cracking takes some time, depending on the material; but in my case, the pre-cracking of an Al alloy sample with a square notch completes very quickly, in about 10 minutes. Why is this happening? Is it due to the square-shaped notch, or is there some other reason?
I have a query about my SEM images: when I analyzed them, they showed some cracks. I am stuck on why these appear, what the reason behind them is, and what the chemistry behind them is. I request everyone to please give me an answer as soon as possible; thank you for your attention.
Greetings, professors, students, and those interested in mathematics. We know that, according to a theorem in geometry, every straight-line (polygonal) shape can be converted into a triangle with the same area, equivalent to the original shape. My question today is whether there is a method by which any curvilinear shape can be converted into another shape with the same area. Is there a principle or theorem about this? Is this possible at all? If the answer is yes, has a way ever been found to do it? If it is impossible, what is the reason? Thanks.
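For the polygonal case the question alludes to (the Bolyai-Gerwien theorem, which dissects any polygon into any other polygon of the same area, in particular a triangle), the "same area" claim can at least be verified numerically with the shoelace formula. A minimal sketch, with hypothetical example shapes:

```python
def shoelace_area(pts):
    """Area of a simple polygon given as a list of (x, y) vertices in order."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]  # wrap around to close the polygon
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, area 1
triangle = [(0, 0), (2, 0), (0, 1)]          # base 2, height 1, area 1
print(shoelace_area(square), shoelace_area(triangle))  # both 1.0
```

For curvilinear shapes the area is an integral rather than a finite sum, which is why the dissection-based theorem does not carry over directly (compare the Tarski circle-squaring problem).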
Men are certainly not offended by sex. It seems to me that, one way or another, the silent female population has its say.
The overwhelming majority of women answered that sex meant a great deal to them, and the reason almost always given was because it was a wonderful form of intimacy and closeness with another human being. (Shere Hite)
Is it because it is considered to be due to genetic mutations, so people think little can be done about it once it occurs, and they therefore focus more on therapy interventions?
What is the reason for adding an intron between the homologous arms in the donor vector designed for a CRISPR/Cas9 gene-editing knockout experiment? Why put the puro gene in the intron?
Please help me.
In my solar cell results, when I increase the active-layer thickness, Jsc is lowered (to about half). However, the EQE-derived current is similar. Does anyone have an opinion on this? (A low diffusion length in the active layer, traps, or low carrier density could be a reason, etc.)
It is a small protein of 5.7 kDa. After weight adjustments accounting for the acidic amino acids and the 3XFLAG tag, it should be 11.6 kDa.
But on a 16% tricine gel, it appears between 15 kDa and 25 kDa (almost 20 kDa).
What could be the reason?
I have checked it in cell extract and also after FLAG purification.
I was working on a gene construct synthesized as a clone in a pET29 vector. Primers were prepared and optimized with the gene at a Tm of 58 °C. Once the primers were optimized, I carried out transformation into the expression vector and checked by colony PCR with the same set of primers. Some time later, I needed to run PCR on the same gene again for TA cloning and repeated the BL21 transformation, but issues occurred: the primers that had previously been optimized no longer worked on the same gene at the same Tm. After much troubleshooting, I decided to check whether the problem lay in my gene construct. I ran my commercially synthesized cloned gene on an agarose gel in both intact and digested form, and no gene band was visible. Is there any chance that my clone was destroyed by nucleases? What can be the reason for this? It would be a great help if you could guide me.
Since I received the new orders of this nichrome wire, sold by Phymep but fabricated by A-M Systems, it has been totally impossible for me (and for others) to build the twisted electrodes as usual without them breaking all the time. Later on, I learned that the reason was a change in their fabrication, in particular the coating. Thus, I cannot use this new wire at all and need the same wire as before to build my electrodes. Would you know of another company that fabricates it?
Hello. After affinity purification (6x His, pET24) of a protein from inclusion bodies using 50 mM Tris pH 8, 300 mM NaCl, 250 mM imidazole, and 8 M urea, I subjected it to dialysis against 50 mM Tris pH 7.6 and 150 mM NaCl. Within 1 to 2 hours of dialysis at 4 °C, I saw some whitish precipitate in the bag. I took samples at 0 h, 1 h, and 2 h post dialysis, shown in the picture added below; there appears to be a loss or dilution of protein. The protein is around 16.5 kDa.
What can be the reason, and what can I do to stop this from happening? Kindly help.
I want to understand the XRD characterization of EDTA, which shows different diffraction peaks.
Hello, I tried to prepare a 10 g/L MnCl2 solution, but it precipitated. What could be the possible reasons?
Asking for clinical data or personal data without proper authorization is considered a punishable offence under the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, in India. Similarly, sharing clinical data or personal information without consent can also lead to legal consequences. The penalties for violations under these sections may include imprisonment, fines, or both, depending on the nature and severity of the offence.
These actions pose significant risks as they may open the door to potential misuse of sensitive data, such as blackmail or other unlawful activities. Once involved in such data-sharing activities, individuals may find themselves inadvertently involved in further crimes, either as victims or perpetrators.
Being cautious and mindful of data privacy and legal regulations is crucial. Unauthorized data sharing can have severe repercussions, both legally and ethically. Always seek consent and follow established guidelines when dealing with personal or clinical data to protect oneself and others from potential harm or involvement in criminal activities. Under the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, which fall under the Information Technology Act, 2000, the unauthorized collection, use, or sharing of sensitive personal data or information is prohibited in India. Violating these rules can lead to legal consequences and specific actions related to personal data may be considered criminal offences.
The IT Act 2000 and its amendments in 2008 provide the legal framework for dealing with various cybercrimes, including offences related to the misuse of personal data. Some of the relevant sections of the IT Act that pertain to the handling of personal data include:
- Section 43A: This section deals with compensation for failure to protect data, where a company or person holding sensitive personal data is liable to pay damages if they fail to implement reasonable security measures and thereby cause wrongful loss or gain to any person.
- Section 66C: This section relates to identity theft and makes it a criminal offence to fraudulently or dishonestly use the electronic signature, password, or any other unique identification feature of any other person.
- Section 72A: This section deals with the disclosure of information in breach of a lawful contract, and it makes it a criminal offence to disclose, without consent, any personal information that is reasonably expected to be held in confidence.
It's important to note that the specific consequences and penalties for violating the IT Act can vary depending on the severity of the offence. Penalties may include imprisonment, fines, or both.
To protect personal data and comply with the IT Act, individuals and organizations should take appropriate measures to safeguard sensitive information, seek explicit consent for data collection and use, and adhere to data protection and privacy guidelines.
I inserted a restriction site (5 nucleotides) between the RBS and the start codon (these 5 nucleotides are upstream of the multiple cloning site).
After confirming my cloning, we started protein expression, but our protein does not express.
After studying this problem, we learned that we should not insert any nucleotides between the start codon and the RBS (based on the vector datasheet).
Do these 5 nucleotides really affect mRNA translation or secondary structure, and is this the reason for the lack of protein expression?
I am extracting protein from red seaweed using sodium phosphate buffer, pH 7.0, at different concentrations: 0.1 M, 0.2 M, 0.3 M, 0.4 M, and 0.5 M. After extraction, I found that the protein percentage increased from 0.1 M to 0.3 M but decreased from 0.4 M to 0.5 M. What can be the reason behind this?
Is there a special reason why the term "superficial cervical ansa" (ansa cervicalis superficialis) was removed from official anatomical terminology?
I wanted to know whether the rapid evolution of the human brain (e.g. size increase, neocortex expansion, and potentially certain genes that evolved) is (or is part of) the reason why we humans are currently the species most susceptible to Alzheimer's disease.
From the literature, piperine is soluble in ethanol, methanol, and acetone, but in practice this is not happening. What can be the reason?
Problem-solving has caused confusion for grade 7 learners and irresolution in teachers for many years. The rationalization is that most grade 7 learners have difficulties in reading, and this is the main reason why the learners cannot comprehend and solve problems in Mathematics.
Most of the time, researchers perform a bivariate analysis of one dependent variable against several independent variables and set the criterion at a p-value of 0.25 to retrieve candidate variables for the multivariable analysis. Is there a hard rule for setting this p-value? If yes, what are the criteria?
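There is no hard rule; the 0.25 cutoff is a heuristic (often attributed to Hosmer and Lemeshow's applied regression texts), chosen deliberately liberal so that weak confounders are not screened out before the multivariable model is fitted. A minimal sketch of the two-stage idea, with hypothetical variable names and simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
y = rng.normal(size=n)                       # simulated outcome
candidates = {
    "age": 0.5 * y + rng.normal(size=n),     # genuinely related to outcome
    "noise": rng.normal(size=n),             # unrelated variable
}

SCREEN_ALPHA = 0.25   # liberal screening threshold, not a significance test
selected = []
for name, x in candidates.items():
    r, p = stats.pearsonr(x, y)              # bivariate association test
    if p < SCREEN_ALPHA:
        selected.append(name)

print(selected)  # survivors go into the multivariable model
```

The choice of threshold trades off omitting confounders (too strict) against an overfitted model (too liberal), which is why many authors treat 0.20-0.25 as a starting point rather than a rule.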
I packed a Superdex 75 pg column myself. The conductance baseline was flat at the beginning of equilibration and remained flat until the sample had eluted to 50%, but over the last 50% it jittered up and down and was unstable.
I have some nonlinear PDEs that I wish to solve numerically. My initial stab at the solution seems to be very naive. I discretised the PDE using finite differences, and this leaves me with a set of nonlinear algebraic equations at each time step. To my simplistic mind, I can solve these using the Newton-Raphson method.
I tried this method and I can't get the solution to converge for some reason. Was my idea wrong from the outset?
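The approach itself (implicit finite differences plus Newton-Raphson on the nonlinear algebraic system at each time step) is sound; non-convergence usually traces to too large a time step, an inconsistent Jacobian, or a poor initial guess. A minimal sketch for a hypothetical model problem u_t = u_xx - u^3 on [0, 1] with homogeneous Dirichlet boundaries, backward Euler in time, an analytic Jacobian, and Newton warm-started from the previous time level:

```python
import numpy as np

N, dt, steps = 51, 1e-3, 100
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
m = N - 2  # interior nodes only (Dirichlet BCs eliminate the endpoints)

# Standard second-difference operator on the interior nodes.
D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
      + np.diag(np.ones(m - 1), -1)) / h**2

def residual(u_new, u_old):
    # Backward Euler: F(u_new) = u_new - u_old - dt*(u_xx - u^3) = 0
    return u_new - u_old - dt * (D2 @ u_new - u_new**3)

def jacobian(u_new):
    # Exact derivative of F with respect to u_new.
    return np.eye(m) - dt * (D2 - np.diag(3.0 * u_new**2))

u = np.sin(np.pi * x)[1:-1]  # initial condition
for _ in range(steps):
    u_old, u_new = u, u.copy()          # warm start from previous level
    for _ in range(20):                 # Newton iterations
        F = residual(u_new, u_old)
        if np.linalg.norm(F, np.inf) < 1e-10:
            break
        u_new -= np.linalg.solve(jacobian(u_new), F)
    u = u_new

print(np.max(np.abs(u)))  # amplitude decays under diffusion plus cubic damping
```

Two practical checks when Newton stalls: verify the Jacobian against a finite-difference approximation of the residual, and halve the time step so the warm start lies inside Newton's convergence basin.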