Science topics: Physics
Science topic

Physics - Science topic

Physics related research discussions
Questions related to Physics
  • asked a question related to Physics
Question
4 answers
Hello. What physical quantity do you think would be interesting to compare my car's static torsional stiffness values (kN·m/degree) with? The volume of the car? Its mass? Its length? Or something else? I am looking for correlations. If you have any suggestions for interesting plots, please also tell me why. Thank you.
Relevant answer
Answer
Torsional stiffness is the ratio of the applied torque to the angle of deformation, where the torque is F·L, F being the force caused by the applied mass and L the distance from the point of application of the force to the axis of rotation. The value of torsional stiffness is of little use unless it is compared with something, usually the roll stiffness of the suspension. Since the vehicle has two axles, each can have a different roll stiffness, influenced by the linear and nonlinear elements of the suspension system.
Please refer to this paper:
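For reference, a minimal worked form of the ratio described above, using the symbols F, L and the twist angle θ from the answer (the normalizations suggested in the last sentence are illustrative, not taken from the cited paper):
$$K_t=\frac{T}{\theta}=\frac{F\,L}{\theta}\qquad\left[\mathrm{kN\cdot m/degree}\right]$$
A dimensionless ratio such as $K_t/(K_{roll,front}+K_{roll,rear})$, or $K_t$ normalized by wheelbase or by kerb mass, gives quantities that can be plotted against each other across different cars when looking for correlations.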
  • asked a question related to Physics
Question
8 answers
A project I am working on involves evaluating stigma temperature in outdoor conditions (solar radiation up to 800-900 W/m² and air temperature varying between 10 and 30 °C).
I am using three different instruments:
1. Thermal Camera that can be attached to a cell phone (thermal expert Q1)
2. Type T thermocouples, 32 AWG (0.008 inches, or about 0.2 mm, in diameter)
3. IR thermometer
The instruments were calibrated with a certified digital thermometer.
When all three methods are pooled together, we notice that the IR camera and the thermocouples give nearly consistent results, while the IR thermometer reads almost systematically cooler than the other two methods (by about 1.5 °C). This is odd and difficult to explain. Also, the IR thermometer values always make the stigma cooler than the air, which would not make much physical sense, as stigmas have no cooling mechanism that we know of. Consequently, I am wondering whether anybody has experience with any of these three instruments and could help me understand what the issue might be, and, most importantly, which instrument is actually best for measuring temperature.
Thank you for your time,
Relevant answer
Answer
I am sharing with you our latest research publication on apple cultivars:
  • asked a question related to Physics
Question
3 answers
This question is dedicated only to sharing important research of OTHER RESEARCHERS (not our own) about complex systems, self-organization, emergence, self-repair, self-assembly, and other exciting phenomena observed in complex systems.
Please keep in mind that each piece of research should promote complex systems and help others to understand them in the context of any scientific field. We can educate each other in this way.
Experiments, simulations, and theoretical results are equally important.
Links to videos and animations will help everyone to understand the given phenomenon under study quickly and efficiently.
Relevant answer
Answer
Viscoelastic microfluidics: progress and challenges.
Zhou, J. and Papautsky, I.
Microsyst Nanoeng 6, 113 (2020).
Abstract:
The manipulation of cells and particles suspended in viscoelastic fluids in microchannels has drawn increasing attention, in part due to the ability for single-stream three-dimensional focusing in simple channel geometries. Improvement in the understanding of non-Newtonian effects on particle dynamics has led to expanding exploration of focusing and sorting particles and cells using viscoelastic microfluidics. Multiple factors, such as the driving forces arising from fluid elasticity and inertia, the effect of fluid rheology, the physical properties of particles and cells, and channel geometry, actively interact and compete together to govern the intricate migration behavior of particles and cells in microchannels. Here, we review the viscoelastic fluid physics and the hydrodynamic forces in such flows and identify three pairs of competing forces/effects that collectively govern viscoelastic migration. We discuss migration dynamics, focusing positions, numerical simulations, and recent progress in viscoelastic microfluidic applications as well as the remaining challenges. Finally, we hope that an improved understanding of viscoelastic flows in microfluidics can lead to increased sophistication of microfluidic platforms in clinical diagnostics and biomedical research.
###
Without a proper understanding of viscoelastic liquids, which are omnipresent in biology, our description of living systems remains incomplete.
Just one example: the flow of blood through micro-vessels relies on the viscoelastic properties of this liquid. Advances in this area can help us better understand the mechanisms of blood clotting, organ infarctions, and many other health issues.
  • asked a question related to Physics
Question
5 answers
I know that δ(f(x))=∑δ(x−xi)/|f′(xi)|, where the sum runs over the simple zeros xi of f. What will the expression be if "f" is a function of two variables, i.e. δ(f(x,y))=?
Relevant answer
Answer
K. Kassner You are right.
The method that I proposed may not work for all functions f(x,y), especially if they are continuous and do not have isolated zeros. In that case, one might try to separate the integrals or use a coordinate transformation as you suggested in your comment. For example, if we use polar coordinates $(r,\theta)$, then we have
$$\delta(f(r,\theta))=\frac{1}{r}\delta(r-r(\theta))$$
where $r(\theta)$ is the zero of f(r,$\theta$) as a function of r for a fixed $\theta$. This can be seen by using the Jacobian of the transformation and the property of the delta function.
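For completeness, a hedged statement of the general two-variable identity, assuming the zero set of f is a smooth curve on which the gradient does not vanish:
$$\iint g(x,y)\,\delta\big(f(x,y)\big)\,dx\,dy=\int_{\{f=0\}}\frac{g(x,y)}{|\nabla f(x,y)|}\,ds$$
so δ(f(x,y)) acts as a line delta concentrated on the curve f = 0 and weighted by 1/|∇f|; the one-variable formula δ(f(x)) = ∑ δ(x−xi)/|f′(xi)| is recovered in the special case of isolated zeros.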
  • asked a question related to Physics
Question
20 answers
The Nobel Prize Summit 2023: Truth, Trust and Hope started today, 24 May 2023. The summit encourages participation. Thus, I have sent an open letter and eagerly anticipate a response. Please comment on whether the points I have made are adequate.
Open Letter to The Nobel Committee for Physics
Is There a Nobel Prize for Metaphysics?
Dear Nobel Committee for Physics,
Among the differences between an established religion, such as Roman Catholicism, and science, is the presence of a hierarchical organization in the former for defending its creed and conducting its affairs. The head of the religious institution ultimately bears responsibility for the veracity of its claims and strategic policies. This accountability was evident in historical figures like John Wycliffe, Jan Hus, and Martin Luther, who held the papacy responsible for wrong doctrines, such as the indulgence scandal during the late Middle Ages. In that context, challenging such doctrines, albeit with the anticipated risk of being burned at the stake, involved posting opposing theses on the doors of churches.
In contrast, the scientific endeavour lacks a tangible temple, and no definitive organization exists to be held accountable for possible misconducts. Science is a collective effort by scientists and scientific institutes to discover new facts within and beyond our current understanding. While scientists may occasionally flirt with science fiction, they ultimately make significant leaps in understanding the universe. However, problems arise when a branch of science is held and defended as a sacred dogma, disregarding principles such as falsifiability. This mentality can lead to a rule of pseudo-scientific oppression, similar to historical instances like the Galileo or Lysenko affairs. Within this realm, there is little chance of liberating science from science fiction. Any criticism is met with ridicule, damnation, and exclusion, reminiscent of the attitudes displayed by arrogant religious establishments during the medieval period. Unfortunately, it seems that the scientific establishment has not learned from these lessons and has failed to provide a process for dealing with these unfortunate and embarrassing scenarios. On the contrary, it is preoccupied with praising and celebrating its achievements while stubbornly closing its ears to sincere critical voices.
Allow me to illustrate my concerns through the lens of relativistic physics, a subject that has captured my interest. Initially, I was filled with excitement, recognizing the great challenges and intellectual richness that lay before me. However, as I delved deeper, I encountered several perplexing issues with no satisfactory answers provided by physicists. While the majority accepts relativity as it stands, what if one does not accept the various inherent paradoxes and seeks a deeper insight?
Gradually, I discovered that certain scientific steps are not taken correctly in this branch of science. For example, we place our trust in scientists to conduct proper analyses of experiments. Yet, I stumbled upon evidence suggesting that this trust may have been misplaced in the case of a renowned experiment that played a pivotal role in heralding relativistic physics. If this claim is indeed valid, it represents a grave concern and a significant scandal for the scientific community. To clarify my points, I wrote reports and raised my concerns. Fortunately, there are still venues outside established institutions where critical perspectives are not yet suppressed. However, the reactions I received ranged from silence to condescending remarks infused with irritation. I was met with statements like "everything has been proven many times over, what are you talking about?" or "go and find your mistake yourself." Instead of responding to my pointed questions and concerns, a professor even suggested that I should broaden my knowledge by studying various other subjects.
While we may excuse the inability of poor, uneducated peasants in the Middle Ages to scrutinize the veracity of the Church's doctrine against the Latin Bible, there is no excuse for professors of physics and mathematics to be unwilling to re-evaluate the analysis of an experiment and either refute the criticism or acknowledge an error. It raises suspicions about the reliability of science itself if, for over 125 years, the famous Michelson-Morley experiment has not been subjected to rigorous and accurate analysis.
Furthermore, I am deeply concerned that the problem has been exacerbated by certain physicists rediscovering the power and benefits of metaphysics. They have proudly replaced real experiments with thought experiments conducted with thought-equipment. Consequently, theoretical physicists find themselves compelled to shut the door on genuine scientific criticism of their enigmatic activities. Simply put, the acceptance of experiment-free science has been the root cause of all these wrongdoings.
To demonstrate the consequences of this damaging trend, I will briefly mention two more complications among many others:
1. Scientists commonly represent time with the letter 't', assuming it has dimension T, and confidently perform mathematical calculations based on this assumption. However, when it comes to relativistic physics, time is represented as 'ct' with dimension L, and any brave individual questioning this inconsistency is shunned from scientific circles and excluded from canonical publications.
2. Even after approximately 120 years, eminent physicist and Nobel Prize laureate Richard Feynman, along with various professors in highly regarded physics departments, have failed to mathematically prove what Einstein claimed in his 1905 paper. They merely copy from one another, seemingly engaged in a damage limitation exercise, producing so-called approximate results. I invite you to refer to the linked document for a detailed explanation:
I am now submitting this letter to the Nobel Committee for Physics, confident that the committee, having awarded Nobel Prizes related to relativistic physics, possesses convincing scientific answers to the specific dilemmas mentioned herein.
Yours sincerely,
Ziaedin Shafiei
Relevant answer
Answer
I looked at the link you gave which was
In that link I found the statement:
Einstein claimed that “If a unit electric point charge is in motion in an electromagnetic field, the force acting upon it is equal to the electric force which is present at the locality of the charge, and which we ascertain by transformation of the field to a system of co-ordinates at rest relatively to the electrical charge.”
I also get from the above link that you have a disagreement with the above statement. I think the confusion here is about which observer is defining the force. The electromagnetic field as transformed to coordinates at rest relative to the charge is the field needed to predict the force as seen by an observer at rest with the charge (an electric force but no magnetic force because the charge is not moving). Field transformations to other coordinate systems are needed to predict the force as seen by observers moving relative to the charge. This means that different observers (having different motions relative to each other) can see different forces even if all coordinate systems are inertial. This is in contrast to Newtonian mechanics in which the same force is seen in all inertial coordinate systems.
Newtonian mechanics is wrong when applied to electromagnetic forces so we need to include things like field energy or field momentum (outside the scope of Newtonian mechanics) to obtain conservation laws. However, I think that your complaint is not that Newtonian mechanics should be used when it isn't, but rather that special relativity is wrong.
Special relativity does have limitations (when general relativity becomes an issue) but for its intended applications (i.e., when general relativity is not needed) it has done a great job of producing all of today's modern technology derived from it. In particular, the treatment of electromagnetic forces in the context of special relativity is one of the most thoroughly studied of all topics in physics. If there was a real incompatibility between special relativity and electromagnetism, we would have known about that a long time ago. We would have known about it during the days when special relativity was first introduced and had a lot of opposition, and a lot of people searched very hard to find inconsistencies with the theory. The theory survived attacks by brilliant people searching for problems with the theory, and it will survive attacks by people that perceive it to be wrong because of their own lack of understanding.
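For concreteness, the standard transformation of the fields to the charge's instantaneous rest frame (primed quantities; v is the charge's velocity, γ the Lorentz factor, SI units) reads
$$\mathbf{E}'_{\parallel}=\mathbf{E}_{\parallel},\qquad\mathbf{E}'_{\perp}=\gamma\,(\mathbf{E}+\mathbf{v}\times\mathbf{B})_{\perp},\qquad\mathbf{F}'=q\,\mathbf{E}'$$
so the force in the charge's rest frame is purely electric, as the quoted 1905 statement says, while observers in other frames infer different electric-plus-magnetic forces from the correspondingly transformed fields.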
  • asked a question related to Physics
Question
4 answers
A pendulum bob oscillates between potential energy maxima at the top of its swing through kinetic energy maxima at the bottom of its swing. The potential energy is given by the mass times the height of the arc times gravitational acceleration and the kinetic energy maximum is given by the mass times the maximum velocity times the average velocity. Energy may be treated as a scalar in many phenomena, but here both energies are products of two vectors, fall direction and gravitation for potential energy and momentum times average velocity for kinetic energy. This nature of the pendulum is important.
The potential energy of the pendulum is equal to the work it can do under gravitational acceleration until its string is vertical. That work accelerates the pendulum bob to its maximum velocity. The kinetic energy of the pendulum is equal to the work it can do against gravitation to bring the pendulum bob to the top of its arc. It is posited that the shape of the atoms and molecules in the bob is the cause of its acceleration in the refraction gradient of a gravitational field, like a light path bending when passing a star. When the bob is free to move, their asymmetrical oscillations change the position of the bob as the process of falling. The response of the atoms and molecules in the bob to motion is to adjust their shape in order to remain in harmony with themselves, that is they shorten to enable complete oscillations despite translation. These shape changes together transfer potential energy to kinetic energy in the bob which is then available for the work of lifting the bob by reversing the changes in shape while decelerating. The key to this speculation is that reversible refraction or translation compensation actually move the location of internal oscillations in matter.
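For reference, the standard bookkeeping for an ideal pendulum of length ℓ released from angle θ0, which also shows why "maximum velocity times average velocity" in the question reproduces the usual kinetic-energy expression (taking the average as v_max/2):
$$h=\ell\,(1-\cos\theta_0),\qquad E_p=m\,g\,h,\qquad v_{max}=\sqrt{2\,g\,h},\qquad E_k^{max}=\tfrac{1}{2}\,m\,v_{max}^2=m\,v_{max}\cdot\tfrac{v_{max}}{2}=m\,g\,h$$
The two maxima are numerically equal; the question above is about what physical mechanism carries the energy back and forth between them.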
Relevant answer
Answer
Howdy Igor Bayak,
Thank you for the opportunity to read your recent work, and also for looking into some of my writings. These references look interesting, and half-a-century ago, when I was gathering a sense of the world as revealed in equations, I would have read them more deeply. On perusal it seems you have a good, interesting approach to description and calculation of electrons, and further thought or interaction revealing your insights in discussion may lead to a deeper look.
For the moment, however, I do not find an answer to my question here, namely "What are kinetic and potential energy in oscillations?" That requires explanation which is fundamentally different from description or calculation. The Principle of Least Action also is not explanation (actually extreme action) but merely a reliable technique for calculations, despite the fact that the meaning and expression of "least action" have changed over the years. One must be careful and aware.
I offered what I would call an explanation in the question text. Do you have an opinion on the processes described there? Could they explain kinetic and potential energy in a pendulum, including the transfer of energy between them?
Happy Trails, Len
  • asked a question related to Physics
Question
9 answers
By reason of the application of the Lorentz Factor [(1 - (v squared / c squared)) raised to the power of 1/2] in the denominator of equations, luminal and other comparable energy propagations take on one and the same velocity. This is the relativity-effect (better, comparative effect) between v of objects, compared to c of the speed of light. That is, it is presupposed here that c is the object of comparison for determining the speed effect of velocity difference across a duration.
It is against the criterion-velocity itself c that c becomes unsurpassable! Hence, I am of the opinion that the supposed source-independence is nothing but an effect of OUR APPARATUS-WISE OBSERVATION LIMIT AND OUR FIXING OF THE CRITERION OF OBSERVATION AS THE OBSERVED VELOCITY OF LIGHT.
In this circumstance, it is useless to claim that (1) luminal and some other energy propagations with velocity c are source-independent, and (2) these wavicles have zero rest mass, since the supposed source-independence has not been proved theoretically or experimentally without using c as the criterion velocity. The supposed source-independence is merely an effect of c-based comparison.
Against this background, it is possible to be assured that photons and other similar c-wavicles are extended particles -- varying their size throughout the course of motion in the spiral manner. Hence the acceptability of the term 'wavicle'. Moreover, each mathematical point of the spiral motion is to be conceived not as two-, but as three-dimensional, and any point of motion added to it justifies its fourth dimension. Let us call motion as change.
These four dimensions are measuremental, hence the terms 'space' (three-dimensional) and 'time' (one-dimensional). This is also an argument countering the opinion that in physics and cosmology (and other sciences) time is not attested!
The measurements of the 3-space and measurements of the 1-time are not in the wavicles and in the things being measured. The measurements are the cognitive characteristics of the measurements.
IN FACT, THE EXTENSION OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'SPACE', AND THE CHANGE OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'TIME'. Hence, the physically out-there-to-find characteristics of the wavicles and objects are EXTENSION AND CHANGE.
Extension is the quality of all existing objects by which they have parts. This is not space. Change is the quality by which they have motion, i.e., impact generation on other similar wavicles and/or objects. This is not time. Nothing has space and time; nothing is in space and time. Everything is in Extension-Change.
Any wavicle or other object existing in Extension-Change is nothing but impact generation by physically existent parts. This is what we term CAUSATION. CAUSALITY is the relation of parts of physical existents by which some are termed cause/s and the others are termed effect/s. IN FACT, THE FIRST ASPECT OF THE PHYSICALLY ACTIVE PARTS, WHICH BEGINS THE IMPACT, IS THE CAUSE; AND THE SECOND ASPECT IS THE EFFECT. Cause and effect are, together, one unit of continuous process.
Since energy wavicles are extended, they have parts. Hence, there can be other, more minute, parts of physical objects, which can define superluminal velocities. Here, the criterion of measurement of velocity cannot be c. That is all...! Hence, superluminal velocities are a must by reason of the very meaning of physical existence.
THE NOTION OF PHYSICAL EXISTENCE ('TO BE') IS COMPLETELY EXHAUSTED BY THE NOTIONS OF EXTENSION AND CHANGE. Hence, I call Extension and Change the highest physical-ontological Categories. A metaphysics (physical ontology) of the cosmos is thus feasible. I have been constructing one such. My book-length publications have been efforts in this direction.
I invite your contributions by way of critiques and comments -- not ferocious, but friendly, because I do not claim that I am the last word in any science, including philosophy of physics.
Relevant answer
Answer
  • asked a question related to Physics
Question
33 answers
In relativity (GTR, STR) we hear of masslessness. What is the meaning of it with respect to really (not merely measurementally) existent particles / waves?
I am of the opinion that, while propagating, naturally, wavicles have mass, and there is no situation where they are absolutely at rest or at rest mass.
Relevant answer
Answer
Are they, these virtual things, to be so easily applied in various explanations AND computations in different arenas of today's physics?
  • asked a question related to Physics
Question
8 answers
Ontological arguments in philosophy have been used over the centuries to support a variety of ideas, most commonly serving as purported proofs of the existence of God. Although these arguments have often been dismissed by philosophers, ontological thinking nevertheless offers a powerful tool in the arsenal of the serious thinker.
My question relates to the fundamental limits of knowledge that might necessarily arise if we live in a universe that 'is its own reason for being'.
I am interested in this question because, if it can be shown that there must be a fundamental 'blind spot' in a formal and quantitative approach to the understanding of an ontological universe, then, although there may be no way to work out what it is, we may nevertheless be in a privileged position to know what it is!
For over two thousand years we have been bending our minds into Gordian Knots attempting to show that what we suspect to be true may, in fact, not be!
Perhaps it is time to admit that consciousness is transparent to physics - and to attempt to answer the question why?
Relevant answer
Answer
“…If we live in an 'ontological universe' - then would we expect there to be something fundamental and yet fundamentally transparent to physics?…..”
- the question above is a typical mainstream philosophical question; and, since in the mainstream philosophy and sciences, including physics, all really fundamental phenomena/notions, first of all in this case “Matter”– and so everything in Matter, i.e. “particles”, “fields”, etc., “Consciousness”, “Space”, “Time”, “Energy”, “Information”, are fundamentally completely transcendent/uncertain/irrational,
- so in every case, when the mainstream addresses to some really fundamental problem, the result completely obligatorily logically is nothing else than some transcendent mental construction.
Including that really is in the question introduction , which uses the standard mainstream notion “ontological arguments” - despite that it relates to the really fundamental problem “what is ontology of Matter”, i.e. what is scientifically the observed by humans a huge system of some elements, and why it is as it is?,
- and , since any really scientific answers to the question above in the mainstream are fundamentally impossible, really the “ontological arguments” fundamentally aren’t some real just “arguments”, that are principally only some transcendent – including frankly transcendent religious – wordings.
Including in the mainstream “an 'ontological universe'” fundamentally cannot be, and so isn’t, anything that would be “fundamental and yet fundamentally transparent to physics”.
The ontologies of the fundamental phenomena/notions above can be, and are, really scientifically explained only in framework of the philosophical 2007 Shevchenko-Tokarevsky’s “The Information as Absolute” conception, recent version of the basic paper see
- where the phenomena/notions above are rigorously scientifically defined,
- and concretely relating to physics in the Shevchenko-Tokarevsky’s informational physical model , two main papers are
- where, correspondingly, more 30 really fundamental physical problems are solved or essentially clarified. More see the links above, to read SS posts in the threads
Cheers
  • asked a question related to Physics
Question
5 answers
Animations are known to be a fast and very efficient way of disseminating knowledge, insights, and understanding of complex systems. Through animations, quite complicated research can be easily shared among all scientific disciplines.
When I started with complex-systems descriptions of Dynamic Recrystallization in metals almost 30 years ago, it became obvious almost instantly that animations carry huge expressive power.
This recently led to the development of the GoL-N24 open-source Python software, which makes it possible to create animations effortlessly. The user just defines the input parameters and the rest is done automatically. Share your software too.
This question is dedicated to all such animations, and to the open-source software producing them, in the area of complex systems.
Everyone is welcome to share their own research in the form of animations with the relevant description.
Relevant answer
Answer
This observation of second-order emergents occurring spontaneously from almost every randomly generated initial condition is one of the key observations of an emergent, self-organizing system, which gives a very reliable emergent output in the form of unexpected convergence towards a predefined shape and function.
  • asked a question related to Physics
Question
2 answers
I am trying to plot and analyze the difference (or similarity) between the paths of two spherical pendulums over time. I have Cartesian (X/Y/Z) coordinates from an accelerometer/gyroscope attached to a weight on a string.
If I want to compare the paths of two pendulums, such as a spherical pendulum with 5 pounds of weight and another with 15 pounds of weight, how can I analyze this? I hope to determine how closely the paths match over time.
Thanks in advance.
Relevant answer
Answer
To compare the path of two spherical pendulums in 3D space, you can follow these steps:
First, define the equations of motion for each pendulum. These equations will describe the position and orientation of each pendulum as a function of time.
Solve these equations of motion numerically using a computer program or mathematical software such as MATLAB or Python. This will give you a set of data points that describe the trajectory of each pendulum over time.
Plot the trajectory of each pendulum in 3D space. You can use a 3D plotting software such as MATLAB's "plot3" or Python's "matplotlib" library to create these plots.
Compare the trajectories of the two pendulums visually by overlaying the plots. You can adjust the colors and styles of the plots to make them easier to distinguish.
Analyze the trajectories of the two pendulums by comparing their shapes, amplitudes, frequencies, and other characteristics. You can also use statistical techniques such as correlation analysis to quantify the similarity between the two trajectories.
Draw conclusions based on your analysis. Depending on your research question, you may be interested in identifying similarities or differences between the two pendulums, or in understanding the factors that influence their motion.
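As a concrete, minimal sketch of steps 4 and 5 above (the function names, the synthetic sine-wave data standing in for the 5 lb and 15 lb recordings, and the choice of RMSE plus Pearson correlation are illustrative assumptions, not a prescribed method), assuming each recording is already an (N, 3) NumPy array in a common coordinate frame with its own time vector:

import numpy as np

def resample(t, xyz, t_common):
    # Linearly interpolate an (N, 3) trajectory onto a common time base.
    return np.column_stack([np.interp(t_common, t, xyz[:, k]) for k in range(3)])

def compare_trajectories(t1, xyz1, t2, xyz2, n_samples=1000):
    # Return per-axis RMSE and Pearson correlation between two 3D paths.
    t_start = max(t1.min(), t2.min())
    t_end = min(t1.max(), t2.max())
    t_common = np.linspace(t_start, t_end, n_samples)
    a = resample(t1, xyz1, t_common)
    b = resample(t2, xyz2, t_common)
    rmse = np.sqrt(np.mean((a - b) ** 2, axis=0))                        # one value per axis
    corr = np.array([np.corrcoef(a[:, k], b[:, k])[0, 1] for k in range(3)])
    return rmse, corr

# Synthetic stand-ins for the 5 lb and 15 lb pendulum recordings:
t = np.linspace(0.0, 10.0, 500)
light = np.column_stack([np.sin(2.0 * t), np.cos(2.0 * t), 0.1 * np.sin(4.0 * t)])
heavy = np.column_stack([np.sin(1.9 * t), np.cos(1.9 * t), 0.1 * np.sin(3.8 * t)])
rmse, corr = compare_trajectories(t, light, t, heavy)
print("RMSE per axis:", rmse, "  correlation per axis:", corr)

Resampling onto a common time base is the design choice that makes point-by-point comparison meaningful when the two recordings were logged at different rates; dynamic time warping or a frequency-domain comparison would be alternatives if the two pendulums drift out of phase.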
  • asked a question related to Physics
Question
1 answer
Dear Researcher,
Global Climate Models (GCMs) of Coupled Model Intercomparison Project Phase 6 (CMIP6) are numerical models that represent the various physical systems of the Earth's climate with respect to the surface of the land, oceans, atmosphere, and cryosphere, and these are employed to provide likely changes in future climate projections. I wanted to know what are the initial and lateral boundary conditions used while developing the models.
Sincerely,
Aman Srivastava
Relevant answer
Answer
Global Climate Models (GCMs) are complex numerical models used to simulate the Earth's climate system. The latest generation of GCMs used in the Coupled Model Intercomparison Project phase 6 (CMIP6) includes a wide range of models with varying resolutions, parameterizations, and boundary conditions. The specific initial and lateral boundary conditions used by each model depend on its design and configuration, but here are some general guidelines:
Initial Conditions:
The initial conditions of a GCM refer to the state of the climate system at the beginning of a simulation. The initial conditions typically include the atmospheric and oceanic conditions such as temperature, humidity, pressure, and winds, as well as the initial state of sea ice, land surface properties, and greenhouse gas concentrations.
For CMIP6-based GCMs, the initial conditions are usually taken from a pre-industrial control simulation, where the model is run for several hundred years without any external forcing, to allow the climate to reach a stable state.
Some models may also use observational data such as ocean temperatures and sea ice concentration to initialize their simulations.
Lateral Boundary Conditions:
The lateral boundary conditions of a GCM refer to the conditions at the edges of the model domain, where the model interacts with the outside world.
For CMIP6-based GCMs, the lateral boundary conditions are usually prescribed using outputs from other models, such as reanalysis data or output from previous versions of the same model.
In some cases, the boundary conditions may be nudged towards observed values to improve the realism of the simulation.
The specific boundary conditions used by each model depend on its design and configuration, but in general, they are chosen to ensure that the model produces realistic simulations of the global climate system.
  • asked a question related to Physics
Question
13 answers
I have defined matter and energy as follows (elsewhere), but is it possible to define them independent of each other?
Meanings of ‘Matter’ and ‘Energy’ Contrasted: By ‘matter’ I mean whatever exists as the venue of finite activities of existents and itself finitely active in all parts. Matter is whatever is interconvertible with existent energy.
Existent ‘energy’ is conceived as those propagative wavicles which, in a given world of existents, function as the fastest existent media of communication of some effects between any two or more chunks of matter or of energy (i.e., of motions / changes).
Existent matter and existent energy are inter-convertible, and hence both should finally be amenable to a common definition: whatever exists with, in all parts, finite activity and stability.
  • asked a question related to Physics
Question
32 answers
1. Does consciousness exist?
2. If so, what is Consciousness and what are its nature and mechanisms?
3. I personally think consciousness is the subjective [and metaphysical] being that (if it exists) feels and experiences the cognitive procedures (at least the explicit ones). I think that at some ambiguous abstract and fuzzy border (on an inward metaphysical continuum), cognition ends and consciousness begins. Or maybe cognition does not end, but consciousness is added to it. I don't know if my opinion is correct. What are potential overlaps and differences between consciousness and cognition?
4. Do Freudian "Unconscious mind" or "Subconscious mind" [or their modern counterpart, the hidden observer] have a place in consciousness models? I personally believe these items as well are a part of that "subjective being" (which experiences cognitive procedures); therefore they as well are a part of consciousness. However, in this case we would have unconscious consciousness, which sounds (at least superficially) self-contradictory. But numerous practices indicate the existence of such more hidden layers to consciousness. What do you think about something like an "unconscious consciousness"?
5. What is the nature of Altered States of Consciousness?
Relevant answer
Answer
Jerry waese
Thank you very much. I have my own views, and along this line I have expressed them in my publications, which have been appreciated by many; beyond that I have no comment.
I respect you for your contribution.
Thanks
  • asked a question related to Physics
Question
5 answers
I have an older version of the X'Pert HighScore software; what should I do for PDF-2 and PDF-4? Please advise, or share a link to the latest (free) version of X'Pert HighScore or to free PDF-2/PDF-4 databases.
Relevant answer
Answer
Both Origin software and X'Pert HighScore software can be used to determine the hkl values of a crystal structure. Here are the steps to follow for each software:
Using Origin software:
1. Open the data file in Origin software and select the graph containing the diffraction pattern.
2. Click on the "Peak Analyzer" button in the toolbar.
3. In the Peak Analyzer window, select the peak of interest by clicking on it.
4. In the "Peak Information" tab, note down the "2Theta" value and the "Counts" value for the peak.
5. Use Bragg's law (2d sin θ = nλ) to calculate the d-spacing value for the peak. Here, n = 1 for first-order diffraction and λ is the wavelength of the X-rays used in the experiment.
6. Use the d-spacing value to index the peak, i.e., to assign hkl values; for a cubic lattice, for example, 1/d² = (h² + k² + l²)/a², where a is the lattice parameter.
Using X'Pert HighScore software:
1. Open the data file in X'Pert HighScore software and select the diffraction pattern.
2. Click on the "Peak Fitting" button in the toolbar.
3. In the Peak Fitting window, select the peak of interest by clicking on it.
4. In the "Peak Information" tab, note down the "2Theta" value and the "d-spacing" value for the peak.
5. Note the wavelength of the X-rays used in the experiment; Bragg's law (2d sin θ = nλ) relates it to the 2θ and d-spacing values.
6. Use the d-spacing value to index the peak, i.e., to assign hkl values; for a cubic lattice, 1/d² = (h² + k² + l²)/a², where a is the lattice parameter (see the sketch after this list).
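As a minimal sketch of steps 5 and 6 above (the Cu Kα wavelength, the cubic lattice parameter, and the example 2θ value are assumed illustrative numbers, not taken from the question):

import numpy as np

WAVELENGTH = 1.5406   # Angstrom, Cu K-alpha1 (assumed example value)
A_CUBIC = 4.05        # Angstrom, assumed example cubic lattice parameter

def d_spacing(two_theta_deg, wavelength=WAVELENGTH, n=1):
    # Bragg's law: n * lambda = 2 * d * sin(theta), with 2-theta given in degrees.
    theta = np.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * np.sin(theta))

def index_cubic(d, a=A_CUBIC, hkl_max=5, rel_tol=0.02):
    # For a cubic cell, 1/d^2 = (h^2 + k^2 + l^2) / a^2; return matching (h, k, l).
    target = (a / d) ** 2          # equals h^2 + k^2 + l^2 for a matching reflection
    hits = []
    for h in range(hkl_max + 1):
        for k in range(h + 1):
            for l in range(k + 1):
                s = h * h + k * k + l * l
                if s > 0 and abs(s - target) / target < rel_tol:
                    hits.append((h, k, l))
    return hits

d = d_spacing(38.47)   # example 2-theta value (degrees)
print("d = %.4f Angstrom, candidate hkl: %s" % (d, index_cubic(d)))

For non-cubic cells the quadratic form relating d to (h, k, l) changes accordingly, and in practice a search/match against a PDF-2 or PDF-4 database, as discussed in the question, handles the indexing automatically.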
  • asked a question related to Physics
Question
8 answers
Similarly, are there books and articles on examples of generalization in physics?
Relevant answer
Answer
You ask: "Similarly, are there books and articles on examples of generalization in physics?"
Yes there is at least one article.
Section 2.12 onwards analyzes the generalization ability in general, and Section 2.27 specifically deals with generalization in relation to mathematical physics.
Best Regards, André
  • asked a question related to Physics
Question
2 answers
Our response is YES. Quantum computing has arrived, as an expression of that.
Numbers do obey a physical law. Peter Shor of the Massachusetts Institute of Technology was the first to say it, in 1994 [cf. 1], in modern times. It is a wormhole, connecting physics with mathematics, and it existed even before the Earth existed.
So-called "pure" mathematics is, after all, governed by objective laws. The Max Planck Institute of Quantum Optics (MPQ) showed the mathematical basis by recognizing the differentiation of discontinuous functions [1, 2, 3], in 1982.
This denies any type of square-root of a negative number [4] -- a.k.a. an imaginary number -- rational or continuous.
Complex numbers, of any type, are not objective and are not part of a quantum description, as said first by Erwin Schrödinger (1926) --
yet,
cryogenic behemoth quantum machines (see figure) consider a "complex qubit" -- two objective impossibilities. They are just poor physics and expensive analog experiments in these pioneering times.
Quantum computing is ... natural. Atoms do it all the time, and the human brain (based on +4 quantum properties of numbers).
Each point, in a quantum reality, is a point ... not continuous. So, reality is grainy, everywhere. Ontically.
To imagine a continuous point is to imagine a "mathematical paint" without atoms. Take a good microscope ... atoms appear!
The atoms, an objective reality, imply a graininess. This quantum description includes at least, necessarily (Einstein, 1917), three logical states -- with stimulated emission, absorption, and emission. Further states are possible, as in measured superradiance.
Mathematical complex numbers or mathematical real-numbers do not describe objective reality. They are continuous, without atoms. Poor math and poor physics.
It is easy to see that multiplication or division "infests" the real part with the imaginary part, and in calculating modulus -- e.g., in the polar representation as well as in the (x,y) rectangular representation. The Euler identity is a fiction, as it trigonometrically mixes types ... avoid it. The FFT will no longer have to use it, and FT=FFT.
The complex number system "infests" the real part with the imaginary part, even for Gaussian numbers, and this is well-known in third-degree polynomials.
Complex numbers, of any type, must be deprecated, they do not represent an objective sense. They should not "infest" quantum computing.
Quantum computing is better without complex numbers. Software makes R,C=Q --> B={0,1}.
What is your qualified opinion?
REFERENCES
[1] DOI /2227-7390/11/1/68
[3] Physical Review A: Atomic, Molecular, and Optical Physics 26(1), June 1982.
Relevant answer
Answer
Can numbers obey a physical law?
==============================
The situation is exactly the opposite - physical laws obey numbers. Otherwise, you will have to believe some pseudo-scientific statements that somehow caught my eye that a few hundred million years ago the number pi was exactly 3. Like, the Earth was closer to the Sun ... You are saying something similar here. (But, not about Betelgeuse...)
Speaking of prime numbers. In the picture, the spider has 8 legs. If you do not notice 3 legs, then yes! - Exactly 5!
  • asked a question related to Physics
Question
11 answers
The really important breakthrough in theoretical physics is that the Schrödinger Time Dependent Equation (STDE) is wrong, that it is well understood why is it wrong, and that it should be replaced by the correct Deterministic Time Dependent Equation (DTDE). Unitary theory and its descendants, be they based on unitary representations or on probabilistic electrodynamics, will have to go away. This of course runs against the claims about string and similar theories made in the video. But our claims are a dense, constructive criticism with many consequences. Taken into account if you are concerned about the present and the near future of Theoretical Physics.
Wave mechanics with a fully deterministic behavior of waves is the much needed and sought --sometimes purposely but more often unconsciously-- replacement of Quantism that will allow the reconstruction of atomic and particle physics. A rewind back to 1926 is the unavoidable starting point to participate in the refreshing new future of Physics. Many graphical tools currently exists that allow the direct visualization of three dimensional waves, in particular of orbitals. The same tools will clearly render the precise movement and processes of the waves under the truthful deterministic physical laws. Seeing is believing. Unfortunately there is a large, well financed and well entrenched quantum establishment that stubbornly resists these new developments and possibilities.
When confronted with the news they do not celebrate, nor try to renew themselves overcoming their quantum prejudices. Instead the minds of the quantum establishment refuse to think. They negate themselves the privilege of reasoning and blindly assume denial, or simply panic. The net result is that they block any attempt to spread the results. Accessing funds to recruit and direct fresh talents in the new direction is even harder than spreading information and publishing.
Painfully, this resistance is understandable. For these Quantists are intelligent scientists (yes, they are very intelligent persons) that instinctively perceive as a menace the news that debunk the Wave-Particle duality, the Uncertainty Principle, the Probabilistic Interpretation of wave functions and the other quantum paraphernalia. Their misguided lifelong labor, dedication and efforts --of themselves and of their quantum elders, tutors, and guides-- instantly becomes senseless. I feel sorry for such painful human situation but truth must always prevail. For details on the DTDE see our article
Hopefully young physicists will soon take the lead and a rational wave mechanics will send the dubious and troublesome Quantism to its crate, since long waiting in the warehouse of the history of science.
With cordial regards,
Daniel Crespin
Relevant answer
Answer
Well, the Schrödinger Time-Dependent Equation is a fully deterministic wave equation: given an initial condition, namely the wave at t=0, one can deduce the wave at an arbitrary time. What is not deterministic is quantum mechanics itself, because we cannot measure the wave at any given time. It is fair to say that some people have never accepted that impossibility; e.g., Einstein never did.
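For reference, the standard statement of that determinism for a time-independent Hamiltonian:
$$i\hbar\,\frac{\partial\psi}{\partial t}=\hat{H}\,\psi\qquad\Longrightarrow\qquad\psi(t)=e^{-i\hat{H}t/\hbar}\,\psi(0)$$
The evolution is unitary and fully determined by ψ(0); the probabilistic element enters only through the measurement postulate (the Born rule |ψ|²), not through the time evolution itself.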
  • asked a question related to Physics
Question
8 answers
Why do atoms not repel each other when the electrons are outside the nucleus? I think the inverse-square law of force shouldn't allow atoms to come close and form molecules, but in reality atoms do come close and form molecules. How?
Relevant answer
Answer
Because atoms are electrically neutral.
  • asked a question related to Physics
Question
3 answers
This is because the term "consciousness" is typically presumed to mean that which our selves "internally" experience. Something as large as that is an elaborate composition courtesy of evolution. What makes consciousness unique is feeling. Basic feeling is fundamental, preceding minds.
Relevant answer
Answer
Is it possible to have unconscious emotions?
Unconscious emotions are of central importance to psychoanalysis. They do, however, raise conceptual problems. The most pertinent concern is the intuition, shared by Freud, that consciousness is essential to emotion, which makes the idea of unconscious emotion seem paradoxical unless there are physical feelings that make up the consciousness process within the unconscious. see
The crux of the matter is that philosophers and psychologists have defined consciousness as a state of self-awareness, especially in the medical profession. However, this needs to be changed to grasp the roots of consciousness as a science.
  • asked a question related to Physics
Question
1 answer
Two of the most interesting human spacecraft of our time, the SpaceX Starship and the NASA Gateway lunar space station, are soon to join.
Relevant answer
Answer
New Program Manager taking over Gateway.
  • asked a question related to Physics
Question
19 answers
It would be very interesting to obtain a database of responses to this question:
What are the links between Algebra & Number Theory and Physics ?
Therefore, I hope to get your answers and points of view. You can also share documents and titles related to the topic of this question.
I recently read a very interesting preprint by the mathematician and physicist Matilde Marcolli: Number Theory in Physics. In this very interesting preprint, she gives several interesting relations between Number Theory and theoretical physics. You can find the preprint on her profile.
Relevant answer
Answer
Hi Michel,
Good question!!!
What about this?
My best wishes....
  • asked a question related to Physics
Question
74 answers
This question discusses the YES answer. We don't need the √-1.
The complex numbers, using rational numbers (i.e., the Gauss set G) or mathematical real-numbers (the set R), are artificial. Can they be avoided?
Math cannot be in one's head, as [1] explains.
To realize the YES answer, one must advance over current knowledge, which may sound strange. But every path in a complex space must begin and end in a rational number -- anything that can be measured, or produced, must be a rational number. Complex numbers are not needed, physically, as a number. But, in algebra, they are useful.
The YES answer can improve the efficiency in using numbers in calculations, although it is less advantageous in algebra calculations, like in the well-known Gauss identity.
For example, in the FFT [2], there is no need to compute complex functions, or trigonometric functions.
This may lead to further improvement in computation time over the FFT, already providing orders of magnitude improvement in computation time over FT with mathematical real-numbers. Both the FT and the FFT are revealed to be equivalent -- see [2].
I detail this in [3] for comments. Maybe one can build a faster FFT (or, FFFT)?
The answer may also consider further advances into quantum computing?
[2] Preprint: FT = FFT
Relevant answer
Answer
The form z=a+ib is called the rectangular coordinate form of a complex number, that humans have fancied to exist for more than 500 years.
We are showing that is an illusion, see [1].
Quantum mechanics does not, contrary to popular belief, include anything imaginary. All results and probabilities are rational numbers, as we used and published (see ResearchGate) since 1978, see [1].
Everything that is measured or can be constructed is then a rational number, a member of the set Q.
This connects in a 1:1 mapping (isomorphism) to the set Z. From there, one can take out negative numbers and 0, and through an easy isomorphism, connect to the set N and to the set B^n, where B={0,1}.
We reach the domain of digital computers in B={0,1}. That is all a digital computer needs to process -- the set B={0,1}, addition, and encoding, see [1].
The numbers satisfy 0^n=0 and 1^n=1. There is no need to calculate trigonometric functions, analysis (calculus), or other functions. Mathematics can end in middle school. We can all follow computers!
REFERENCES
[1] Search online.
  • asked a question related to Physics
Question
8 answers
Irrational numbers are uncomputable with probability one. In that sense, numerically, they do not belong to nature. Animals cannot calculate them, nor can humans, nor machines.
But algebra can deal with irrational numbers. Algebra deals with unknowns and indeterminates, exactly.
This would mean that a simple bee or fish can do algebra? No, this means, given the simple expression of their brains, that a higher entity is able to command them to do algebra. The same for humans and machines. We must be able also to do quantum computing, and beyond, also that way.
Thus, no one (animals, humans, extraterrestrials in the NASA search, and machines) is limited by their expressions, and all obey a higher entity, commanding through a network from the top down -- which entity we call God, and Jesus called Father.
This means that God holds all the dice. That also means that we can learn by mimicking nature. Even a wasp can teach us the medicinal properties of a passion fruit flower to lower aggression. Animals, no surprise, can self-medicate, knowing no biology or chemistry.
There is, then, no “personal” sense of algebra. It just is a combination of arithmetic operations. There is no “algebra in my sense” -- there is only one sense, the one mathematical sense that has made sense physically, for ages. I do not feel free to change it, and did not.
But we can reveal new facets of it. In that, we have already revealed several exact algebraic expressions for irrational numbers. Of course, the task is not even enumerable, but it is worth compiling, for the weary traveler. Any suggestions are welcome.
Relevant answer
Answer
@Ed Gerck
Irrational numbers are uncomputable with probability one
====================================================
My deepest apologies, but I have read your Answer dated December 14, 2022 in the FLT thread https://www.researchgate.net/post/Are-there-other-pieces-of-information-about-Victory-Road-to-FLT#view=641367549777ccc70c026256/234 .
There was a link to your own thread given by you. This thread gives your erroneous statement from the very beginning, namely: "Irrational numbers are uncomputable with probability one".
Please agree, Dear Professor Ed G., that any irrational number is calculated with 100% accuracy with a probability of 1 for any number of orders p-1, if you write down p orders. Thus, if you write for the root of 2 one order before point and three orders after point, you will have sqrt(2)=1.414..., i.e., you can consider that you have written 4 orders. At the same time, the accuracy of 100% with a probability of 1 is provided for 3 orders, i.e. 1.41, etc., for any number of orders...
As for some kind of "full notation" of all orders, as you would like to see it, such a representation of irrational numbers is not possible.
If you point out my mistake to me, I will be grateful.
Greetings,
SPK
  • asked a question related to Physics
Question
1 answer
For those interested: A revamp of the Internet is under way to cover shortcomings interfering with expansion into the Solar System and beyond.
While extremely rugged, the current Internet technologies have a few assumptions built in, including short traversal times at light speed and a relatively stable population of nodes to hop through to the destination, which are true on Earth. The present Internet fails otherwise.
Mars is minutes away at light speed, with a very spotty supply of nodes to there.
A new technology, DTN (Delay/Disruption-Tolerant Networking) utilizes a new protocol, Bundle Protocol, that is overlaid on top of existing space networking protocols or IP protocols. This has been tested in the ISS and is actively being placed in other spacecraft, by NASA and partner agencies. This is part of what is being tested at the Moon first for deployment to Mars.
Bundle Protocol was architected by Vint Cerf, a father of TCP/IP, and others.
Relevant answer
Answer
Dear Karl Sipfle,
thank you for explaining the importance of Delay Tolerant Networking (DTN) in such an interesting way. Thanks to the ideas underlying the concept of DTN, the Interplanetary Internet can be built. For more about this see:
The following IETF documents affect the Interplanetary Internet: RFC 5050, RFC 9171, RFC 9172, RFC 9173 and RFC 9174. These are available at the address: https://www.rfc-editor.org/rfc-index.html
Best regards
Anatol Badach
RFC 5050: Bundle Protocol Specification, Nov 2007
RFC 9171: Bundle Protocol Version 7, Jan 2022
RFC 9172: Bundle Protocol Security (BPSec), Jan 2022
RFC 9173: Default Security Contexts for Bundle Protocol Security (BPSec), Jan 2022
RFC 9174: Delay-Tolerant Networking TCP Convergence-Layer Protocol Version 4, Jan 2022
  • asked a question related to Physics
Question
18 answers
One finds action in photon emission and particle paths. What is it when its magnitude is expressed by a Planck Constant for emission versus when it may be expressed as a stationary integral value (least – saddle – greatest) along a path? The units match. Action is recognized as a valuable concept and I would like to appreciate its nature in Nature. (Struggling against the “energy is quantized” error has distracted me from the character of the above inquiry in the past.)
Brief aside: Max Planck and Albert Einstein emphasized energy as discrete amounts for their blackbody radiation and photoelectric studies, but they always added "at a specific frequency"! Energy without that secondary condition is not quantized! I emphasize this because it has been frustrating for decades and it interferes with the awareness that it is action that is quantized! Now, granted that it is irrelevant to “grind out useful results” activity, which also is valuable, it is relevant to comprehending the nature of Nature, thus this post.
The existence of The Planck Constant has been a mystery since Max Planck found it necessary to make emissions discrete in order to formulate blackbody radiation mathematically. He assumed discrete emission energy values for each frequency that made the action of radiated energy at each frequency equal to the Planck Constant value. (This can be said better – please, feel free to fix it.) Action had been being used to find the equations of motion for almost two centuries by then. Is a stationary integral of action along a path equal to an integral number of Planck Constants? Is the underlying nature in these several instances of mathematical physics the same? What is that nature; how can this be? If the natures are different, how is each?
Happy Trails, Len
P.S. My English gets weird and succinct sometimes trying to escape standard ruts in meanings: how is each? is a question that directs one to explain, i.e., to describe the processes as they occur – causes, interactions, events, etc., I hope.
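One hedged way to state the point about action being the quantized quantity: for a single emitted quantum,
$$E=h\,\nu\qquad\Longrightarrow\qquad E\,T=\frac{E}{\nu}=h$$
i.e. the energy multiplied by one period of the oscillation (a quantity with the dimensions of action) equals Planck's constant, whereas the energy by itself takes a continuum of values as ν varies.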
Relevant answer
Answer
My last comment was meant to be some kind of an “introduction” of the idea that we humans can only reflect what is “inside” the system (the universe). Or to choose the opposite point of view: the description of the nature of reality is always available and “it is showing itself” through the way everyone express their intuition about the subject.
The consequence is that humans have ideas about reality – e.g. a period in the evolution of the universe when there was no matter around – that can be expressed in various ways. Some will use the conceptual framework of classic physics, some the mathematical approach, others modern field theory, etc. In science we are quarrelling about the “correct description” and it seems that the correct description is a description that is independent from human’s personal and cultural (inclusive religious) preferences. But this idea is a bit “too nice”.
In practice we experience that a lot of people make a mess of their opinion too. They sometimes connect different points of view within one description (creating paradoxes), and the consequence is that it is impossible to agree with other people about the subject. But this is not intentional, even if people have the opinion that they were "forced" by their own ego. So if we think about the possibility of reflecting "the nature of reality" through the mysterious input of the whole universe, it "smells" a bit unfair too.
Why do some people have less trouble coming to a balanced opinion than other, less fortunate people? In a culture that is ruled by the "game of competition" this is a frustrating topic for many scientists, just because the only plausible answer is that "it is like it is". On the other hand, if we know that we are some kind of "radio" receiving the "broadcast of the universe", we can examine our own reproduction and consider whether it is a convincing reproduction or whether we are influenced by something else that is affecting the interpretation.
Science is full of different points of view and it is difficult to believe that we personally are “in balance” in proportion to everyone else. So I read your last comment with more than one point of view to understand the connections with other ideas. I think it is in line with the general ideas about these topics.
With kind regards, Sydney
  • asked a question related to Physics
Question
6 answers
The topic considered here is the Klein-Gordon equation governing some scalar field amplitude, with the field amplitude defined by the property of being a solution of this equation. The original Klein-Gordon equation does not contain any gauge potentials, but a modified version of the equation (also called the Klein-Gordon equation in some books for reasons that I do not understand) does contain a gauge potential. This gauge potential is often represented in the literature by the symbol Ai (a four-component vector).
Textbooks show that if a suitable transformation is applied to the field amplitude to produce a transformed field amplitude, and another suitable transformation is applied to the gauge potential to produce a transformed gauge potential, the Lagrangian is the same function of the transformed quantities as it is of the original quantities. With these transformations collectively called a gauge transformation, we say that the Lagrangian is invariant under a gauge transformation. This statement has the appearance of being justification for the use of Noether’s theorem to derive a conservation law. However, it seems to me that this appearance is an illusion.
If the field amplitude and gauge potential are both transformed, then they are both treated the same way as each other in Noether’s theorem. In particular, the theorem requires both to be solutions of their respective Lagrange equations. The Lagrange equation for the field amplitude is the Klein-Gordon equation (the version that includes the gauge potential). The textbook that I am studying does not discuss this, but I worked out the Lagrange equations for the gauge potential and determined that the solution is not in general zero (zero is needed to make the Klein-Gordon equation with gauge potential reduce to the original equation). The field amplitude is required in textbooks to be a solution to its Lagrange equation (the Klein-Gordon equation). However, the textbook that I am studying has not explained to me that the gauge potential is required to be a solution of its Lagrange equations. If this requirement is not imposed, I don’t see how any conclusions can be reached via Noether’s theorem.
Is there a way to justify the use of Noether’s theorem without requiring the gauge potential to satisfy its Lagrange equation? Or, is the gauge potential required to satisfy that equation without my textbook telling me about that?
Relevant answer
Answer
"Noether's thorem simply states that, if the equations of motion for the scalars are invariant under a continuous group of transformations, then there exists a conserved current. That's all."
If you review the derivation of Noether's theorem you will find another requirement. The varied functions must be extremals, i.e., satisfy Lagrange's equations, i.e., the equations of motion. Transformation properties alone, with no other requirements, will make the Lagrangian an invariant. But to obtain a conserved current we need not only invariance of the Lagrangian but also that the varied functions satisfy Lagrange's equations.
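For reference, the global-phase case can be written out explicitly (a schematic textbook form; metric signature and sign conventions vary between books):

```latex
% Gauge-coupled complex scalar field (sketch)
\mathcal{L} = (D_\mu\phi)^*(D^\mu\phi) - m^2\,\phi^*\phi ,
\qquad D_\mu = \partial_\mu + i q A_\mu .

% Global phase transformation with constant \alpha, A_\mu held fixed:
%   \phi \to e^{-iq\alpha}\phi
% Noether's theorem then gives the current
j^\mu = i q\left[\phi^*\,(D^\mu\phi) - (D^\mu\phi)^*\,\phi\right] ,

% and \partial_\mu j^\mu = 0 follows once \phi satisfies its own
% Euler-Lagrange (gauged Klein-Gordon) equation
(D_\mu D^\mu + m^2)\,\phi = 0 .
% In this global case A_\mu is an external field that is not varied,
% so its own equations of motion are not invoked.
```

In this reading the "varied function" that must be an extremal is the scalar field only, which is consistent with the point made above.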
  • asked a question related to Physics
Question
31 answers
For a plate capacitor the force on the plates can be calculated by computing the change of field energy caused by an infinitesimal displacement. This looks like a calculation from first principles. However, thinking about the displacement of a charge in a homogeneous electric field seems to yield no force at all...
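For concreteness, the constant-charge energy-method calculation being referred to can be sketched as follows (a standard textbook argument, with plate area A, separation x and charge ±Q):

```latex
% Field energy stored in the gap at fixed charge Q:
U(x) = \frac{\varepsilon_0 E^2}{2}\,A\,x = \frac{Q^2\,x}{2\varepsilon_0 A},
\qquad E = \frac{Q}{\varepsilon_0 A} .

% Force from an infinitesimal displacement of one plate:
|F| = \left|\frac{\mathrm{d}U}{\mathrm{d}x}\right|_{Q}
    = \frac{Q^2}{2\varepsilon_0 A}
    = Q\cdot\frac{E}{2} .
```

The factor E/2 is the field produced by the other plate alone, which is why naively moving a charge in the full (homogeneous) gap field E does not reproduce the force.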
Relevant answer
Answer
sorry to say, but I do not understand anything of your theory. Furthermore I see no explanation of the force on a charge in an E field.
Best regards
Jörn
  • asked a question related to Physics
Question
4 answers
Start with a purely classical case to define vocabulary. A charged marble (marble instead of a point particle to avoid some singularities) is exposed to an external electromagnetic (E&M) field. "External" means that the field is created by all charges and currents in the universe except the marble. The marble is small enough for the external field to be regarded as uniform within the marble's interior. The external field causes the marble to accelerate and that acceleration causes the marble to create its own E&M field. The recoil of the marble from the momentum carried by its own field is the self force. (One piece of the charged marble exerts an E&M force on another piece and, contrary to Newton's assumption of equal but opposite reactions, these forces do not cancel with each other if the emitted radiation carries away energy and momentum.) The self force can be neglected if the energy carried by the marble's field is negligible compared to the work done by the external field on the marble. Stated another way, the self force can be neglected if and only if the energy carried by the marble's field is negligible compared to the change in the marble's energy. Also, an analysis that neglects self force is one in which the total force on the marble is taken to be the force produced by external fields alone. The key points from this paragraph are the last two sentences repeated below:
(A) An analysis that neglects self force is one in which the total force on the marble is taken to be the force produced by external fields alone.
(B) The self force can be neglected if and only if the energy carried by the marble's field is negligible compared to the change in the marble's energy.
Now consider the semi-classical quantum mechanical (QM) treatment. The marble is now a particle and is treated by QM (Schrodinger's equation) but its environment is an E&M field treated as a classical field (Maxwell's equations). Schrodinger's equation is the QM analog for the equation of force on the particle and, at least in the textbooks I studied from, the E&M field is taken to be the external field. Therefore, from Item (A) above, I do not expect this analysis to predict a self force. However, my expectation is inconsistent with a conclusion from this analysis. The conclusion, regarding induced emission, is that the energy of a photon emitted by the particle is equal to all of the energy lost by the particle. We conclude from Item (B) above that the self force is profoundly significant.
My problem is that the analysis starts with assumptions (the field is entirely external in Schrodinger's equation) that should exclude a self force, and then reaches a conclusion (change in particle energy is carried by its own emitted photon) that implies a self force. Is there a way to reconcile this apparent contradiction?
  • asked a question related to Physics
Question
5 answers
Please spread the word: Folding at Home (https://foldingathome.org/) is an extremely powerful supercomputer composed of thousands of home computers around the world. It tries to simulate protein folding to fight diseases. We can increase its power even further by simply running its small program on our computers and donating the spare (already unused and wasted) capacity of our computers to its supercomputation.
After all, a great part of our work (which is surfing the web, writing texts and stuff, communicating, etc.) never needs more than a tiny percent of the huge capacity of our modern CPUs and GPUs. So it would be very helpful if we could donate the rest of their capacity [that is currently going to waste] to such "distributed supercomputer" projects and help find cures for diseases.
The program runs at a very low priority in the background and uses some of the capacity of our computers. By default, it is set to use the least amount of EXCESS (already wasted) computational power. It is very easy to use. But if someone is interested in tweaking it, it can be configured too via both simple and advanced modes. For example, the program can be set to run only when the computer is idle (as the default mode) or even while working. It can be configured to work intensively or very mildly (as the default mode). The CPU or GPU can each be disabled or set to work only when the operating system is idle, independent of the other.
Please spread the word; for example, start by sharing this very post with your contacts.
Also give them feedback and suggestions to improve their software. Or directly contribute to their project.
Folding at Home's Forum: https://foldingforum.org/index.php
Folding at Home's GitHub: https://github.com/FoldingAtHome
Additionally, see other distributed supercomputers used for fighting disease:
Relevant answer
Answer
Vahid Rakhshan I will definitely spread the word about this amazing initiative. It's great to know that we can contribute to such a noble cause by simply utilizing our excess computer power. Thank you for bringing this opportunity to my attention. Let's join hands in making a difference in the fight against diseases.
  • asked a question related to Physics
Question
26 answers
Dear colleagues. This is not a matter about mathematical questions, fields and the like that I do not understand, but about the following:
As a researcher in philosophy of science, I have read more than once - from qualified sources - and repeated that, unlike Newtonian mechanics, which assumes that macroscopic physical space is absolute, has three dimensions and is separated from absolute time, for general relativity space is a four-dimensional spacetime, and that time is relative to the position of the observer (due to the influence of gravity).
Now I find that wrong, having heard that, for the theory, time and the perception of time are different things. Specifically, that in the famous Einsteinian example (a mental or imaginary experiment) of twins, the one who is longer-lived when they meet again has perceived a greater passage of time. And if what has been different is the perception of time, and not time, then that would mean that objectively both have always been at the same point on the "arrow of time".
And it would mean that I have confused time, as an objective or "objective" dimension of spacetime, with one's perception of it. That is, if there were no observer, spacetime would still have its "time" dimension.
It follows that it is false that for general relativity time is relative (because it is a dimension of spacetime, which is not relative). Now, if this is so, how can the theory predict the - albeit hypothetical - existence of wormholes?
There is something I fail to understand: does the theory of relativity really differentiate time from the perception that an observer may have of it, and the example of twins refers to the latter?
If spacetime is only one - there are not several independent spacetimes - and it has objective existence, including its "time" dimension , how is it possible to travel - theoretically, according to the theory - through a wormhole to another part of it that has a different temporality (what we call past or future)?
Since it does not make sense to me to interpret that one would not travel to the future but to the perception of the future. And I rule out that Einstein has confused time with the perception of it.
Thank you.
Relevant answer
Answer
Buenos dias Sergio,
questioning the relationship between "objective" ("real") physical time (as in, e.g., the Einsteinian concept of space-time) versus individually perceived time (i.e., time as perceived by conscious agents such as organisms) is, indeed, intriguing. Let me just raise a few thoughts here in addition to what has already been pointed out:
Whether space-time is, indeed, "real" and fundamental is questioned by scholars such as cognitive scientist Donald Hoffman (University of California at Irvine), arguing that space-time may merely be a "headset" through which we perceive and interact with a more fundamental reality. This line of argument - in my view - essentially constitutes a modern-day incarnation of Plato's classic cave analogy.
Irrespective of whether you buy into such "headset"/"matrix" arguments, physicists are on the constant lookout for structures and processes that may physically, indeed, prove more fundamental than spacetime. Here, you may follow, for instance, the work of physicist Nima Arkani-Hamed (Institute for Advanced Study at Princeton). Whether, at the end of the day, "time" will turn out here as something "objective" and/or absolute, as an emergent property of deeper structures, whether it will relativistically stay deeply intertwined with space or be "torn away" from it on a deeper, yet unknown level of physics/reality, no one knows.
For the time being, though, I do think it is important to carefully distinguish between, on the one hand, physical measurements and their interpretation in models and theories, in which we use time as part of four-dimensional space-time very successfully, and, on the other hand, the intricacies and complexities of an as yet hardly understood (but surely very limited!) human consciousness and perception of "time".
So, is time simply an illusion or a fundamental trait of some form of "reality"? We just don't know...
PS: You may also take interest in this discussion we had last fall: https://www.researchgate.net/post/Has_time_existed_forever .
Best,
Julius
  • asked a question related to Physics
Question
16 answers
Our answer is YES. A new question (at https://www.researchgate.net/post/If_RQ_what_are_the_consequences/1) has been answered affirmatively, confirming the YES answer in this question, with wider evidence in +12 areas.
This question continued the same question from 3 years ago, with the same name, considering new published evidence and results. The previous text of the question may be useful and is available here:
We now can provably include DDF [1] -- the differentiation of discontinuous functions. This is not shaky, but advances knowledge. The quantum principle of Niels Bohr in physics, "all states at once", meets mathematics and quantum computing.
Without infinitesimals or epsilon-deltas, DDF is possible, allowing quantum computing [1] between discrete states, and a faster FFT [2]. The Problem of Closure was made clear in [1].
Although Weyl's training was on these mythical aspects, the infinitesimal transformation and Lie algebra [4], he saw an application of groups in the many-electron atom, which must have a finite number of equations. The discrete Weyl-Heisenberg group comes from these discrete observations and does not use infinitesimal transformations at all, having finite-dimensional representations. Similarly, this is the same as someone traditionally trained in infinitesimal calculus who starts to use rational numbers in calculus, with DDF [1]. The same previous training applies in both fields, from a "continuous" field to a discrete, quantum field. In that sense, R~Q*; the results are the same formulas -- but now, absolutely accurate.
New results have been made public [1-3], confirming the advantages of the YES answer, since this question was first asked 3 years ago. All computation is revealed to be exact in modular arithmetic, there is NO concept of approximation, no "environmental noise" when using it.
As a consequence of the facts in [1], no one can formalize the field of non-standard analysis in the use of infinitesimals in a consistent and complete way, or Cauchy epsilon-deltas, against [1], although these may have been claimed and chalk spilled.
Some branches of mathematics will have to change. New results are promised in quantum mechanics and quantum computing.
This question is closed, affirming the YES answer.
REFERENCES
[2]
Preprint FT = FFT
[3]
Relevant answer
Answer
This question follows a new standard in RG, where every opinion is respected, and yet research can be developed.
This is explained in:
  • asked a question related to Physics
Question
3 answers
Does transverse and longitudinal plasmons fall under localized surface plasmons? What is the significant difference between them? At what level will this affect the fabricated silver nanoparticle based electronic devices? Is surface plasmon propagation different from transverse and longitudinal plasmons?
Relevant answer
Answer
Transverse plasmonic resonance involves the oscillation of free charges under a homogeneous external electric field in the plane perpendicular to the direction of the electric field. Longitudinal plasmonic resonance involves the oscillation of free charges under a homogeneous external electric field along the direction of the electric field.
  • asked a question related to Physics
Question
7 answers
Using BoltzTraP and Quantum ESPRESSO I was able to calculate the electronic part of the thermal conductivity, but I am still struggling with the phononic part of the thermal conductivity.
I tried ShengBTE, but that demands a good computational facility, and right now I do not have such a workstation. Kindly suggest some other tool that can be useful for me in this regard.
Thanks,
Dr Abhinav Nag
Relevant answer
Answer
@Abhinav Nag
The modified Debye-Callaway model can be used to calculate the lattice thermal conductivity. See, for example, DOI: 10.1016/j.jpcs.2022.111196
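As a rough illustration of what such a Callaway-type calculation involves, here is a minimal numerical sketch (a simplified single-mode form without the second correction term; the sound velocity, Debye temperature and scattering prefactors below are placeholder values, not material data):

```python
import numpy as np
from scipy.integrate import quad

kB, hbar = 1.380649e-23, 1.054571817e-34  # J/K, J*s

def kappa_lattice(T, v=5000.0, theta_D=400.0, L=1e-6, A=1e-43, B=1e-19):
    """Simplified Debye-Callaway lattice thermal conductivity (W/m/K).
    v: sound velocity (m/s), theta_D: Debye temperature (K), L: grain size (m),
    A, B: point-defect and Umklapp scattering prefactors -- all placeholders."""
    def integrand(x):                      # x = hbar*omega / (kB*T)
        omega = x * kB * T / hbar
        # Matthiessen's rule: boundary + point-defect + Umklapp scattering
        inv_tau = v / L + A * omega**4 + B * omega**2 * T * np.exp(-theta_D / (3 * T))
        return (1.0 / inv_tau) * x**4 * np.exp(x) / (np.exp(x) - 1.0) ** 2
    integral, _ = quad(integrand, 1e-8, theta_D / T)
    return (kB / (2 * np.pi**2 * v)) * (kB * T / hbar) ** 3 * integral

print(kappa_lattice(300.0))   # order-of-magnitude result in W/(m*K)
```

The scattering prefactors are the quantities one would fit or compute from first principles; the paper cited above should be followed for the full modified model.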
  • asked a question related to Physics
Question
71 answers
Finding a definition for time has challenged thinkers and philosophers. The direction of the arrow of time is questioned because many physical laws seem to be symmetrical in the forward and backward direction of time.
We can show that the arrow of time must be in the forward direction by considering light. The speed of light is always positive and distance is always positive so the direction of time must always be positive. We could define one second as the time it takes for light to travel approximately 300,000 km. Note that we have shown the arrow of time to be in a positive direction without reference to entropy.
So we are defining time in terms of distance and velocity. Philosophers might argue that we then have to define distance and velocity but these perhaps are less challenging to define than time.
So let's try to define time. Objects that exist within the universe have a state of movement and the elapsed times that we observe result from the object being in a different position due to its velocity.
This definition works well considering a pendulum clock and an atomic clock. We can apply this definition to the rotation of the Earth and think of the elapsed time of one day as being the time for one complete rotation of the Earth.
The concept of time has been confused within physics by the ideas of quantum theory which imply the possibility of the backward direction of time and also by special relativity which implies that you cannot define a standard time throughout the universe. These problems are resolved when you consider light as a wave in the medium of space and this wave travels in the space rest frame.
Richard
Relevant answer
Answer
Time is life.
  • asked a question related to Physics
Question
69 answers
Our answer is YES. This question captured the reason of change: to help us improve. We, and mathematics, need to consider that reality is quantum [1-2], ontologically.
This affects both the microscopic (e.g., atoms) and the macroscopic (e.g., collective effects, like superconductivity, waves, and lasers).
Reality is thus not continuous, incremental, or happenstance.
That is why everything blocks, goes against, a change -- until it occurs, suddenly, taking everyone to a new and better level. This is History. It is not a surprise ... We are in a long evolution ...
As a consequence, tri-state, e.g., does not have to be used in hardware, just in design. Intel Corporation can realize this, and become more competitive. This is due to many factors, including 1^n = 1, and 0^n = 0, favoring Boolean sets in calculations.
This question is now CLOSED. Focusing on the discrete Weyl-Heisenberg group, as motivated by SN, this question has been expanded in a new question, where it was answered with YES in +12 areas:
[2]
Relevant answer
Answer
QM can have values unknown, but not uncertain. Likewise, RG questions. Please stay on topic, per question. Do not be uncertain yourself.
Opinions do not matter, every opinion is right and should be, therefore, not discussed.
But, facts? Mass is defined (not a choice or opinion) as the ratio of two absolutes: E/c^2. Then, mass is rest mass. There is no other mass.
This is consistent, which is the most that anyone can aspire. Not agreement, which depends on opinion. Science is not done by voting.
Everyone can, in our planet, reach consistency -- and the common basis is experiment, a fact. We know of other planets, and there consistency may be uncertain -- or ambivalent, and even obscure. A particle, there, may be defined, both, as the minimum amount of matter of a type, or the most amount of quantum particles of a type.
We can entertain such worlds in our minds, more or less formed by bodies of matter, and have fun with the consequences using physics. But, and here is my opinion (not lacking but not imposing objectivity), we all -- one day -- will be led to abandon matter. What will we find? That life goes on. The quantum jump exists. Nature is quantum.
  • asked a question related to Physics
Question
13 answers
Some researchers say that the type of surface electrical charge is related to the pH value of the reaction medium and thus to the adsorption and removal process: when the pH value increases, the overall surface electrical charge on the adsorbent becomes negative and adsorption decreases, while if the pH value decreases, the surface charge becomes positive and adsorption increases.
Malkoc, E.;Nuhoglu, Y. and Abali,Y. (2006). “Cr (VI) Adsorption by Waste Acorn of Quercus ithaburensis in Fixed Beds: Prediction of Breakthrough Curves,” Chemical Engineering Journal, 119(1): pp. 61-68.
Relevant answer
Answer
At lower pH, adsorption is set back by the H+ ion (proton) factor, and at higher pH adsorption is hampered because the metal ions start to precipitate as metal hydroxides or oxides. So a pH value between about 4 and 6 looks good, and you should study the optimum pH for the removal of the particular metal, such as Cr.
In contrast, finding the breakthrough time for a column study is not as easy as for a batch study. Here you need to run many trials and also fit models such as the Thomas, Adams-Bohart, and Yoon-Nelson models to obtain a more accurate breakthrough curve (BTC); a minimal sketch of such a fit is given below.
Thank you.
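To illustrate the kind of breakthrough-curve fitting mentioned above, here is a minimal sketch of fitting the Thomas model to hypothetical column data (the concentrations, flow rate and data points are made up for illustration only):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative column conditions (assumed, not from this thread)
C0 = 50.0   # feed concentration, mg/L
m  = 1.0    # adsorbent mass, g
Q  = 0.01   # flow rate, L/min

def thomas(t, k_th, q0):
    """Thomas model: C/C0 = 1 / (1 + exp(k_th*q0*m/Q - k_th*C0*t))."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

# Hypothetical breakthrough data: time (min) vs C/C0
t_data = np.array([30, 60, 90, 120, 150, 180, 210, 240], dtype=float)
ratio  = np.array([0.02, 0.05, 0.15, 0.35, 0.60, 0.80, 0.92, 0.97])

(k_th, q0), _ = curve_fit(thomas, t_data, ratio, p0=[1e-3, 70.0])
print(f"k_Th ~ {k_th:.2e} L/(mg*min), q0 ~ {q0:.1f} mg/g")
```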
  • asked a question related to Physics
Question
3 answers
Greetings,
When I try to remotely access the Scopus database by logging in with my institution ID, it keeps bringing me back to the Scopus preview. I have tried clearing the cache, reinstalling the browser, using a different internet connection, etc. But none of it works. As you can see in the image, it keeps showing the Scopus preview.
Please help..
Relevant answer
Answer
To reach the Scopus document search module, you should use academic IPs. If your institute has been listed in the Scopus database, you have permission to search documents in Scopus. It is not free of charge, and your university should pay its share to Scopus to provide this service for its academic researchers.
  • asked a question related to Physics
Question
4 answers
If a string vibrates at 256 cycles per second, then counting 256 cycles is the measure of 1 second. The number is real because it measures time, and the number is arbitrary because it does not have to be 1 second that is used.
This establishes that the pitch is a point with the real number topology, right?
Relevant answer
Answer
To Mr Gerck
But pitch is continuous because between any two pitch values is an interval and between two intervals is a pitch value, so we have the real line minus finitely many points.
I am confused by the statement that real numbers are non-computable. Real numbers are analytic.
My idea here is that music theory must be a theory of real numbers rather than frequencies. I think you are saying there is no theory of Q.
  • asked a question related to Physics
Question
16 answers
Material presence is essential for propagation of sound. Does it mean that sound waves can travel interstellar distances at longer wavelengths due to the presence of celestial bodies in the universe?
Relevant answer
Answer
Huge energy bursts start out at very high speed from giant objects, and because they cover long distances almost instantly they immediately come under gravitational influence; this indirectly supports travel over interstellar distances, but not in all cases, and not without the presence of any medium. And this only describes how radio bursts cover such large distances. Because there is no uniform distribution of mass and energy in all directions and at all distances in the universe, any such possibility cancels itself. The only question left here is how sound is affected by gravity and vice versa.
  • asked a question related to Physics
Question
3 answers
The exposure dose rate at a distance of 1 m from a soil sample contaminated with 137Cs is 80 µR/s. Considering the source as a point source, estimate the specific activity of 137Cs contained in the soil if the mass of the sample is 0.4 kg. How can i calculate it?
Relevant answer
Answer
If the 'specific' in the question refers to the mass of the sample (it might, it should...), then we want to know the specific activity (i.e., activity per kg) that, when you have only 400 g of the material, leads to the dose rate you state.
So - we have 80 µR/s from 0.4 kg.
Q: What dose rate would we get from 1 kg of the soil?
(knowing that twice as much soil leads to twice the dose rate at a given distance)
A: 80/0.4 = 200 µR/s per kg
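One way to finish the estimate is to convert the exposure rate to an activity with the Cs-137 exposure-rate constant; the sketch below assumes a commonly tabulated approximate value of about 0.33 R·m²/(h·Ci), so the numbers are rough:

```python
# Rough estimate of specific activity from an exposure rate (point-source approximation)
GAMMA_CS137 = 0.33        # exposure-rate constant, R*m^2/(h*Ci) -- approximate tabulated value
dose_rate_uR_s = 80.0     # given exposure rate at 1 m
distance_m = 1.0
mass_kg = 0.4

dose_rate_R_h = dose_rate_uR_s * 1e-6 * 3600.0             # ~0.29 R/h
activity_Ci = dose_rate_R_h * distance_m**2 / GAMMA_CS137  # ~0.9 Ci
activity_Bq = activity_Ci * 3.7e10

print(f"activity ~ {activity_Ci:.2f} Ci ({activity_Bq:.2e} Bq)")
print(f"specific activity ~ {activity_Bq / mass_kg:.2e} Bq/kg")
```

Scaling to "per kg" at the end (or using the 200 µR/s per kg figure above) gives the specific activity asked for.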
  • asked a question related to Physics
Question
6 answers
As we all know, classical physics covers massive objects, while quantum physics deals with objects at smaller scales. Could a new assumption satisfying both classical and quantum theories arise in the future?
Relevant answer
Answer
If one considers classical physics as the limiting case of large quantum numbers, one does not need any new assumptions.
  • asked a question related to Physics
Question
2 answers
Hello Everyone,
I am able to successfully run scf.in using pw.x, but when proceeding to the calculations using thermo_pw.x the following error occurs:
Error in routine c_bands (1):
too many bands are not converged
I have already tried increasing ecut and ecutrho, decreasing conv_thr, reducing mixing_beta, reducing the k-points, and changing the pseudopotential,
but none of these fixed the issue.
Someone who has faced this error in thermo_pw please guide,
Thanks,
Dr. Abhinav Nag
Relevant answer
Answer
I must thank you, Roberto Sir, as I have learned many things about Quantum ESPRESSO through your answers, which helped me a lot in my PhD.
I was able to crack the problem by changing the pseudopotential. The problem appeared with the LDA-type pseudopotential, but when I used PBE pseudopotentials it worked fine.
  • asked a question related to Physics
Question
51 answers
Which software is best for making high-quality graphs? Origin or Excel? Thank you
Relevant answer
Answer
Origin.
  • asked a question related to Physics
Question
2 answers
I am going to make a setup for generating and manipulating time bin qubits. So, I want to know what is the easiest or most common experimental setup for generating time bin qubits?
Please share your comments and references with me.
thanks
Relevant answer
Answer
The pump beam λ is split by a variable beam splitter (BS) into the two modes 1 and 2. The splitting ratio is adjusted by changing the distance between the two fibers using a micrometer screw. Each mode enters a non-linear periodically poled lithium niobate waveguide (ppLN), creating photon pairs via spontaneous parametric down-conversion. Cascaded dense wavelength division multiplexers (DWDM) separate and spectrally filter the down-converted photon pairs. Modes 1 and 2 (1' and 2') define a path-encoded qubit; this leads to the two-qubit path-entangled state. Delay lines and polarization controllers (PC) are used to adjust the arrival time and polarization of each mode. 50/50 beam splitters (BS_A, BS_B) and phases (φ_A, φ_B), combined with single-photon detection, realize the projective measurement onto the path-entangled states.
The experimental setup is described in the publication below:
"Scalable fiber integrated source for higher-dimensional path-entangled photonic quNits", Optics Express 20, 16145 (2012), DOI: 10.1364/OE.20.016145 (also available on arXiv).
  • asked a question related to Physics
Question
11 answers
How long does it take for a journal indexed in the "Emerging Sources Citation Index" to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?
Relevant answer
Answer
Clarivate announced that starting with 2023 ESCI-indexed journals will also be assigned an impact factor. See: https://clarivate.com/blog/clarivate-announces-changes-to-the-2023-journal-citation-reports-release/
  • asked a question related to Physics
Question
25 answers
Dear fellow mathematicians,
Using a computational engine such as Wolfram Alpha, I am able to obtain a numerical expression. However, I need a symbol expression. How can I do that?
I need the expression of the coefficients of this series.
x^2*csc(x)*csch(x)
where csc: cosecant (1/sin), and csch: hyperbolic cosecant.
Thank you for your help.
Relevant answer
Answer
An alternative answer to this question is contained in Theorem 2.1 in the following paper:
Xue-Yan Chen, Lan Wu, Dongkyu Lim, and Feng Qi, Two identities and closed-form formulas for the Bernoulli numbers in terms of central factorial numbers of the second kind, Demonstratio Mathematica 55 (2022), no. 1, 822--830; available online at https://doi.org/10.1515/dema-2022-0166.
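If a symbolic rather than numerical expansion is wanted, a computer algebra system can also produce the coefficients directly; for instance, a minimal SymPy sketch (the expansion order 14 is arbitrary):

```python
from sympy import symbols, csc, csch, series

x = symbols('x')
expr = x**2 * csc(x) * csch(x)

# Symbolic expansion about x = 0; the coefficients come out as exact rationals
print(series(expr, x, 0, 14))
```

Only even powers appear in the expansion, and the rational coefficients can then be compared with closed forms such as the one in the paper cited above.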
  • asked a question related to Physics
Question
4 answers
I'm repeatedly getting negative open circuit potentials (OCP) vs. the Ag/AgCl reference electrode for some electrodes during OCP vs. time measurements using an electrochemical workstation. What is the interpretation of a negative open circuit potential? Moreover, I have also noticed that it becomes more negative under illumination. What is the reason behind this? Are there any references? Please help.
Relevant answer
Answer
Dear Dr. Ayan Sarkar ,
as I said in a similar question, long-term change of corrosion potential (open-circuit potential) reflects a change in a corrosion system because the change in corrosion potential depends on the change in one or both of the anodic and cathodic reactions. For example, an increase in corrosion potential can be attributed to a decrease in the anodic reaction with the growth of a passive film or the increase in the cathodic reaction with an increase in dissolved oxygen. A decrease in corrosion potential can be attributed to an increase in the anodic reaction or a decrease in the cathodic reaction. The monitoring of corrosion potential is therefore often carried out (ISO 16429, 2004; JIS T 6002). For the test solution, saline, phosphate buffer saline, Ringer solution, culture medium, serum and artificial saliva are typically used. The corrosion potential of the specimen can be monitored against a reference electrode using an electrometer with high input impedance (1011 Ω ~ 1014 Ω) or a potentiostat.
For more details, please see the source: Monitoring of corrosion potential by S. Hiromoto, in Metals for Biomedical devices, 2010.
The most widely used electrochemical method of determining the corrosion rate is the Stern-Geary method which allows to evaluate the corrosion current (i corr), an essential parameter from which to derive the corrosion rate of the material in that particular environment.
My best regards, Pierluigi Traverso.
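As a small numerical illustration of the Stern-Geary relation mentioned above (the Tafel slopes and polarization resistance below are assumed, illustrative values only):

```python
# Stern-Geary estimate of the corrosion current density
b_a = 0.060     # anodic Tafel slope, V/decade (assumed)
b_c = 0.120     # cathodic Tafel slope, V/decade (assumed)
R_p = 5.0e3     # polarization resistance, ohm*cm^2 (assumed)

B = (b_a * b_c) / (2.303 * (b_a + b_c))   # Stern-Geary coefficient, V
i_corr = B / R_p                          # corrosion current density, A/cm^2

print(f"B ~ {B*1e3:.1f} mV, i_corr ~ {i_corr:.2e} A/cm^2")
```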
  • asked a question related to Physics
Question
59 answers
Dear Sirs,
In the below I give some very dubious speculations and recent theoretical articles about the question. Maybe they promote some discussion.
1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also a part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter described by some more general laws? Can a physical law, as information, transform into energy and mass?
2.) Besides the above logical approach, one can come to the same question another way. Let us consider a transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information, or some physical laws, disappear. For example, a free particle has a momentum but does not have a position. The magnetic moment of a nucleus has a projection on the external magnetic field direction, but the transverse projection does not exist. So we cannot say that the nuclear magnetic moment moves around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.
One can hypothesize that if information is equivalent to some very small mass or energy (e.g., as shown in the next item), then it may be that some information or physical laws are lost, e.g., for an electron having extremely low mass. This conjecture agrees with the fact that objects having mass much greater than the proton's are described by classical Newtonian physics.
But one can raise an objection to the above view: a photon has no rest mass and, e.g., the neutrino rest mass is extremely small. Despite this, they have spin and momentum like an electron. This spin and momentum information is not lost. Moreover, the photon energy for long EM waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.
But there is possibly a solution to the above problem. A photon moves at light speed (the neutrino speed is very near light speed); that is why the physical information cannot be detached and carried away from the photon (information propagates at most at the speed of light).
3.) Searching the internet I have found recent articles by Melvin M. Vopson
which propose the mass-energy-information equivalence principle and its experimental verification. As far as I know, this experimental verification has not yet been done.
I would be grateful to hear your view on this subject.
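For orientation, the order of magnitude involved in the mass-energy-information idea can be checked with Landauer's bound (minimal energy per erased bit) and E = mc²; the temperature below is just an assumed room-temperature value:

```python
# Order-of-magnitude check: minimal energy per bit and its mass equivalent
import math

kB = 1.380649e-23    # J/K
c  = 2.99792458e8    # m/s
T  = 300.0           # K (assumed room temperature)

E_bit = kB * T * math.log(2)   # Landauer limit, ~2.9e-21 J
m_bit = E_bit / c**2           # ~3e-38 kg per bit

print(f"E per bit ~ {E_bit:.2e} J, mass equivalent ~ {m_bit:.2e} kg")
```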
Relevant answer
Answer
With respect to human societies and production methods, dear Anatoly A Khripov , we are witnessing the informatization of the economy. In this sense, this informatization changes the material conditions of the production process itself.
However, it is difficult to assess, if information is a new production factor or if the traditional production factors become more information-intense.
Consequently, my viewpoint from the physics of social systems (natural science of human society and mind) discerns that information converts (reorganizes) matter, energy and mass, in terms of economic production.
———-
Thermodynamic entropy involves matter and energy, Shannon entropy is entirely mathematical, on one level purely immaterial information, though information cannot exist without "negative" thermodynamic entropy.
It is true that information is neither matter nor energy, which are conserved constants of nature (the first law of thermodynamics). But information needs matter to be embodied in an "information structure." And it needs ("free") energy to be communicated over Shannon's information channels.
Boltzmann entropy is intrinsically related to "negative entropy." Without pockets of negative entropy in the universe (and out-of-equilibrium free-energy flows), there would be no "information structures" anywhere.
Pockets of negative entropy are involved in the creation of everything interesting in the universe. It is a cosmic creation process without a creator.
—————
Without the physical world, Ideas will not exist. ― Joey Lawsin
Even when money seemed to be material treasure, heavy in pockets and ships' holds and bank vaults, it always was information. Coins and notes, shekels and cowries were all just short-lived technologies for tokenizing information about who owns what. ― James Gleick, The Information: A History, a Theory, a Flood
  • asked a question related to Physics
Question
3 answers
How can we calculate the number of dimensions in a discrete space if we only have a complete scheme of all its points and possible transitions between them (or data about the adjacency of points)? Such a scheme can be very confusing and far from the clear two- or three-dimensional space we know. We can observe it, but it is stochastic and there are no regularities, fractals or the like in its organization. We only have access to an array of points and transitions between them.
Such computations can be resource-intensive, so I am especially looking for algorithms that can quickly approximate the dimensionality of the space based on the available data about the points of the space and their adjacencies.
I would be glad if you could help me navigate in dimensions of spaces in my computer model :-)
Relevant answer
Answer
Anil Kumar Jain The description of discrete spaces is found in physical works, e.g. "Discrete spacetime, quantum walks and relativistic wave equations" by Leonard Mlodinow and Todd A. Brun, https://arxiv.org/abs/1802.03910. But I have not seen any attempt to quantify the dimensionality of such spaces. This is exactly what I am looking for.
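One simple heuristic that is sometimes used for such data is to estimate a "growth" dimension from how the number of nodes within graph distance r scales with r, N(r) ~ r^d. A minimal sketch (assuming the graph is roughly homogeneous on the probed scales, and using networkx):

```python
import networkx as nx
import numpy as np

def growth_dimension(G, r_max=6):
    """Estimate an effective dimension from the scaling of ball volume N(r) ~ r^d,
    averaged over all source nodes (assumes rough homogeneity of the graph)."""
    radii = np.arange(1, r_max + 1)
    volumes = np.zeros(len(radii))
    for source in G.nodes:
        dist = nx.single_source_shortest_path_length(G, source, cutoff=r_max)
        for i, r in enumerate(radii):
            volumes[i] += sum(1 for d in dist.values() if d <= r)
    volumes /= G.number_of_nodes()
    # slope of log N(r) vs log r approximates the dimension
    slope, _ = np.polyfit(np.log(radii), np.log(volumes), 1)
    return slope

# Quick sanity check on a 2D grid graph: the estimate should come out near 2
G = nx.grid_2d_graph(40, 40)
print(growth_dimension(G))
```

For large stochastic graphs one would sample a subset of source nodes instead of iterating over all of them, and compare several r ranges, since the estimate can drift with scale.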
  • asked a question related to Physics
Question
21 answers
Have these particles been observed in predicted places?
For example, have scientists ever noticed the creation of energy and pair particles from nothing in the Large Electron–Positron Collider, the Large Hadron Collider at CERN, the Tevatron at Fermilab, or other particle accelerators since the late 1930s? The answer is no. In fact, no report of observing such particles by the highly sensitive sensors used in all accelerators has been mentioned.
Moreover, according to one interpretation of the uncertainty principle, abundant charged and uncharged virtual particles should continuously whiz inside the storage rings of all particle accelerators. Scientists and engineers make sure that they maintain ultra-high vacuum at close to absolute zero temperature in the travelling path of the accelerating particles, otherwise even residual gas molecules deflect, attach to, or ionize any particle they encounter; but there has not been any concern or any report of undesirable collisions with so-called virtual particles in any accelerator.
It would have been absolutely useless to create ultra-high vacuum, a pressure of about 10^-14 bar, throughout the travel path of the particles if vacuum chambers were seething with particle/antiparticle or matter/antimatter pairs. If there were such a phenomenon, there would have been significant background effects as a result of the collision and scattering of the beam of accelerating particles off the supposed bubbling of virtual particles created in vacuum. This process is readily available for examination, in comparison to the totally out-of-reach Hawking radiation, which is considered to be a real phenomenon that will be eating away the supposed black holes of the universe in the very distant future.
for related issues/argument see
Relevant answer
Answer
It pleases me to see this discussion, realising there are more critical thinkers out there. Let me try to add a simply phrased contribution.
In my opinion, Physics has gone down the rabbit hole of sub-atomic particles and that part of physics has become what some call “phantasy physics”. Complex maths is used as smoke and mirrors to silence critical physicists who are convinced that theory must be founded in reality and that empirical evidence is necessary.
Concepts such as ”Big bang”, black holes, dark matter etc are actually hypotheses that try to explain why the outcomes of measurements are not in accordance with the calculations made on the basis of Einsteins theories of relativity. Unfortunately, and perhaps through the journalistic popularisation of science, these concepts have been taken as reality, such as “scientists have discovered dark matter, or anti-matter”. No, they have not. What they discovered was that the measured light or matter in the universe or a part of the universe was not as much as had been predicted by calculations based on a theory. Usually in science, that would lead to a refining of the theory. Here it did not, perhaps because Einstein has been placed on such a high pedestal that his theories are seen as the alpha and omega of physics that may not be questioned or touched, as that is considered sacrilege.
The solution was the hypothesis of Cookie Monsters, things out there that ate light or matter = black holes and dark matter. Anyone who dares questions these methodological steps is intimidated and attacked with complicated terminology and complex mathematics. Most physicists are afraid of looking stupid and therefore shut up. Decades ago the physics professor who was my head supervisor (experimental physics) said to his students that if you could not explain your work in ordinary household language, then you did not really understand it yourself. He considered complicated language and naming theories and authors as a cover up for not grasping the essentials.
A reason for looking at yet another species of virtual particles is that research proposals in this field receive funding because physicists all over the world are doing it. It is the reigning paradigm and it will take a ground swell of opposition to move on to the next phase in science after the 50-odd years of the present, now stagnant, paradigm.
  • asked a question related to Physics
Question
51 answers
Consider the two propositions of the Kalam cosmological argument:
1. Everything that begins to exist has a cause.
2. The universe began to exist.
Both are based on assuming full knowledge of whatever exists in the world, which is obviously not totally true. Even big bang cosmology relies on a primordial seed whose origin and characteristics science has no idea of.
The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.
Your comments are much appreciated.
Relevant answer
Answer
Good deductive arguments have two properties: (1) validity and (2) soundness. Validity is entirely a formal property: it says that IF the premises are true then so is the conclusion; soundness says that not only is the argument valid, but its premises ARE true. Whether the premises are indeed true may be a matter of empirical discovery or of previous deductions or definitions (including deductions or definitions in mathematics). Sometimes it's just interesting to see what else a certain assumption commits one to and deduction can answer that question and sometimes also give us a good reason for rejecting that assumption (that is the rationale for reductio ad absurdum arguments, aka indirect proofs). It helps to keep in mind that the alleged shortcoming of deduction is not an indictment of its formal nature but a matter of the "garbage in, garbage out" principle.
  • asked a question related to Physics
Question
6 answers
Dear all,
after a quite long project, I coded up a python 3D, relativistic, GPU based PIC solver, which is not too bad at doing some stuff (calculating 10000 time steps with up to 1 million cells (after which I run out of process memory) in just a few hours).
Since I really want to make it publicly available on GitHub, I also thought about writing a paper on it. Do you think this is a work worthy of being published? And if so, what journal should I aim for?
Cheers
Sergey
Relevant answer
Answer
Hi! Once again, thank you for the reply! I have never published before, that's why I was asking :D
  • asked a question related to Physics
Question
37 answers
When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: "Why energy instead of some other quantity". When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (that I supposedly had already learned before studying this topic) but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity." Until somebody figures out how to do that, we have to accept the postulate as a postulate instead of a derived conclusion. The question that I am asking 5 decades later is, has somebody figured it out yet? I'm not an expert on statistical mechanics so I hope that answers can be simple enough to be understood by people that are not experts.
Relevant answer
Answer
In a simple and introductory way, there is a book by Prof. F. Reif of the Berkeley course, i.e., Vol 5: "Statistical Physics" by F. Reif.
In chapter 7, section 7.4, p. 281 of the 1965 edition by McGraw-Hill, he discusses in an introductory way what he calls "the basic five statements of statistical thermodynamics", which are based on some statistical postulates that he also discusses in section 3.3, p. 111; there are three postulates, inside boxes, Eqs. 17, 18, & 19, and among those is the one you refer to.
I prefer you read from same Prof. Reif book, what he has to say about your interesting question.
Kind Regards.
  • asked a question related to Physics
Question
12 answers
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and observed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need to have sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several second per image frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
Relevant answer
Answer
Gerhard Martens Thanks! I guess that is my problem solved.... Thanks for your input and suggestions.... :-D
  • asked a question related to Physics
Question
4 answers
During AFM imaging, the tip does the raster scanning in xy-axes and deflects in z-axis due to the topographical changes on the surface being imaged. The height adjustments made by the piezo at every point on the surface during the scanning is recorded to reconstruct a 3D topographical image. How does the laser beam remain on the tip while the tip moves all over the surface? Isn't the optics static inside the scanner that is responsible for directing the laser beam onto the cantilever or does it move in sync with the tip? How is it that only the z-signal is affected due to the topography but the xy-signal of the QPD not affected by the movement of the tip?
or in other words, why is the QPD signal affected only due to the bending and twisting of the cantilever and not due to its translation?
Relevant answer
Answer
Indeed, in the case of a tip-scanning AFM the incident laser beam should follow the tip scanning motion, to record throughout the deflection signal for the same spot on the cantilever backside. This can be achieved by integrating the laser diode with a kind of tube (with its long axis parallel to the z-axis) that carries the cantilever holder at its lower end and is kind of hinged at its upper end. The scan piezos would act on the entire tube, incl the laser diode, in a plane between the tube's upper and lower ends. Whether or not your AFM system works exactly the same way I cannot tell for sure though.
  • asked a question related to Physics
Question
51 answers
1) Can the existence of an aether be compatible with local Lorentz invariance?
2) Can classical rigid bodies in translation be studied in this framework?
By changing the synchronization condition of the clocks of inertial frames, the answer to 1) and 2) seems to be affirmative. This synchronization clearly violates global Lorentz symmetry but it preserves Lorentz symmetry in the vicinity of each point of flat spacetime.
Christian Corda showed in 2019 that this effect of clock synchronization is a necessary condition to explain the Mössbauer rotor experiment (Honorable Mention at the Gravity Research Foundation 2018). In fact, it can be easily shown that it is a necessary condition to apply the Lorentz transformation to any experiment involving high-velocity particles traveling between two distant points (including the linear Sagnac effect).
---------------
We may consider the time of a clock placed at an arbitrary coordinate x to be t and the time of a clock placed at an arbitrary coordinate xP to be tP. Let the offset (t – tP) between the two clocks be:
1) (t – tP) = v (x - xP)/c2
where (t-tP) is the so-called Sagnac correction. If we call g to the Lorentz factor for v and we insert 1) into the time-like component of the Lorentz transformation T = g (t - vx/c2) we get:
2) T = g (tP - vxP/c2)
On the other hand, if we assume that the origins coincide x = X = 0 at time tP = 0 we may write down the space-like component of the Lorentz transformation as:
3) X = g(x - vtP)
Assuming that both clocks are placed at the same point x = xP , inserting x =xP , X = XP , T = TP into 2)3) yields:
4) XP = g (xP - vtP)
5) TP = g (tP - vxP/c2)
which is the local Lorentz transformation for an event happening at point P. On the other hand , if the distance between x and xP is different from 0 and xP is placed at the origin of coordinates, we may insert xP = 0 into 2)3) to get:
6) X = g (x - vtP)
7) T = g tP
which is a change of coordinates that:
- Is compatible with GPS simultaneity.
- Is compatible with the Sagnac effect. This effect can be explained in a very straightforward manner without the need to use GR or the Langevin coordinates.
- Is compatible with the existence of relativistic extended rigid bodies in translation using the classical definition of rigidity instead of Born's definition.
- Can be applied to solve the 2 problems of the preprint below.
- Is compatible with all experimental corroborations of SR: aberration of light, the Ives-Stilwell experiment, the Hafele-Keating experiment, ...
Thus, we may conclude that, considering the synchronization condition 1):
a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a unique single clock.
b) The Lorentz invariance is broken out when we use two clocks to measure time intervals for long displacements (eqs. 6-7).
c) We need to consider the frame with respect to which we must define the velocity v of the synchronization condition (eq 1). This frame has v = 0 and it plays the role of an absolute preferred frame.
a)b)c) suggest that the Thomas precession is a local effect that cannot manifest for long displacements.
More information in:
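The algebra leading from 1) to 2) can be checked symbolically; a minimal SymPy sketch (variable names chosen here for illustration):

```python
from sympy import symbols, simplify

t_P, x, x_P, v, c, g = symbols('t_P x x_P v c gamma', positive=True)

# Synchronization offset, eq. 1):  t = t_P + v (x - x_P) / c^2
t = t_P + v * (x - x_P) / c**2

# Time-like Lorentz component with eq. 1) inserted:  T = g (t - v x / c^2)
T = g * (t - v * x / c**2)

# Expected result, eq. 2):  T = g (t_P - v x_P / c^2)
print(simplify(T - g * (t_P - v * x_P / c**2)))   # -> 0
```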
Relevant answer
Answer
There is a difference between Lorentz transformations and scale transformations.
Special relativity satisfies Lorentzian symmetry due to the constancy of the speed of light and the special relativity principle.
However, in reality, the aether makes the speed of light locally invariant, so the Lorentz transformation is not necessary.
  • asked a question related to Physics
Question
10 answers
Hello everyone,
I did a nanoindentation experiment:
1 photoresist with 3 different layer thicknesses.
My results show that the photoresist is harder when it has a thicker layer.
I can't find the reason in the literature.
Can anyone please explain to me why it is like that?
Is there any literature on this?
best regards
chiko
Relevant answer
Answer
The nano layer thickness is a very, very small layer; otherwise it cannot be used with the resistivity method, and it has VES limitations.
Best regards.
P. Hakaew
  • asked a question related to Physics
Question
48 answers
The above question emerges from a parallel session [1] on the basis of two examples:
1. Experimental data [2] that apparently indicate the validity of Mach's Principle stay out of the discussion after the mainstream consensus declares Mach to be out; see also the appended PDF files.
2. The negative outcome of gravitational wave experiments [3] apparently does not affect the main-stream acceptance of claimed discoveries.
Relevant answer
Answer
Stam Nicolis: "Mainstream theorosts"
Mainstream theorists, I would say, are those who, based on mainstream consensus, raise public funds (from taxpayers) for large-scale experiments (Big Science) and organize spectacular media campaigns that essentially affirm the mainstream consensus. It is a self-sustaining system that inhibits progress in science. When experimental results do not fit, they are made to fit or simply ignored, as can currently be observed with "gravitational wave astronomy." https://www.researchgate.net/project/Discussion-on-recently-claimed-simultaneous-discovery-of-black-hole-mergers-and-gravitational-waves
  • asked a question related to Physics
Question
3 answers
I am using a Seek thermal camera to track cooked food, as in this video
As I observed, the food temperature obtained was quite good (i.e., close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's temperature reading suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
As I suspected the background temperature of the hot pan could be the reason for the error, I made a hole in a piece of paper and put it in front of the camera's lens, to ensure that the camera could only see the food through the hole and no longer the hot pan. The food temperature obtained by the camera was then correct again, but it went wrong again if I took the paper out.
I would appreciate any explanation of this phenomenon, and a possible solution, from either physics or optics.
Relevant answer
Answer
Thanks for all comments.
a short update: we talked to the manufacturer, and they confirm the phenomenon as Ang Feng explained. We are going through possible solutions
  • asked a question related to Physics
Question
3 answers
Eighty years after Chadwick discovered the neutron, physicists today still cannot agree on how long the neutron lives. Measurements of the neutron lifetime have achieved the 0.1% level of precision (~ 1 s). However, results from several recent experiments are up to 7 s lower than the (pre2010) particle data group (PDG) value. Experiments using the trap technique yield lifetime results lower than those using the beam technique. The PDG urges the community to resolve this discrepancy, now 6.5 sigma.
I think the reason is that the "trapped p" method did not count the number of protons in the decay reaction (n → p + e + νe + γ). As a result, the number of decayed neutrons obtained was low. This affected the measurement of the neutron lifetime. Do you agree with me?
Relevant answer
Answer
If you don't believe me, you can search the literature to find the Mampe paper; they report lifetime measurements with different waiting times. The shorter the waiting time, the longer the lifetime.
For a storage time between 112-225 seconds the lifetime is 891 seconds; for a storage interval of 225-450 seconds the lifetime is 888.5 seconds; for storage times above 900 seconds the lifetime is 887.0 seconds. See the attached screenshot.