
# Physics - Science topic

Physics-related research discussions

Questions related to Physics

Hello. What physical quantity do you think would be interesting to compare my car's static torsional stiffness values (kN·m/degree) against? The volume of the car? Its mass? Its length? Or something else? I am looking for correlations. If you have any interesting suggestions for graphics, please share them and explain why. Thank you.

A project I am working on is the evaluation of stigma temperature in outdoor conditions (solar radiation up to 800-900 W/m² and air temperature varying between 10 and 30 °C).

I am utilizing three different instruments:

1. Thermal Camera that can be attached to a cell phone (thermal expert Q1)

2. Type T thermocouples, 32 AWG (0.008 inches, or about 0.20 mm, in diameter)

3. IR thermometer

The instruments were calibrated with a certified digital thermometer.

When all three methods are pooled together, we notice that the IR camera and the thermocouples give nearly consistent results, while the IR thermometer reads almost systematically cooler than the other two methods (by about 1.5 °C). This is odd and difficult to explain. Moreover, the IR thermometer's values always make the stigma cooler than the air, which would not make much physical sense, as stigmas have no cooling mechanisms to our knowledge. Consequently, I am wondering if anybody has experience with any of these three instruments and could help me understand what the issue might be, and, most importantly, which instrument is actually best for measuring temperature.

Thank you for your time,
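One mundane explanation worth ruling out for a systematically low IR-thermometer reading is an emissivity setting that does not match the stigma surface; a rough correction (ignoring reflected ambient radiation) follows from the Stefan-Boltzmann law. The function and numbers below are an illustrative sketch, not a calibration procedure:

```python
# Hedged sketch: effect of an emissivity mismatch on a radiometric reading,
# from L = eps * sigma * T^4, ignoring reflected ambient radiation.
# The emissivity values below are illustrative assumptions, not measured ones.

def corrected_temperature_k(t_measured_k, eps_instrument, eps_actual):
    """Re-interpret a radiometric temperature taken with the wrong emissivity.

    The instrument infers T from radiance L = eps * sigma * T^4, so the same
    radiance re-read with the actual emissivity gives
    T_actual = T_measured * (eps_instrument / eps_actual) ** 0.25.
    """
    return t_measured_k * (eps_instrument / eps_actual) ** 0.25

# Example: a surface near 25 C read with the instrument set to 0.95
# when the actual emissivity is 0.98 (both values assumed for illustration)
t_meas = 273.15 + 25.0
t_corr = corrected_temperature_k(t_meas, eps_instrument=0.95, eps_actual=0.98)
print(round(t_corr - 273.15, 2))  # a bias of a couple of degrees Celsius
```

Even a few percent of emissivity error shifts the reading by an amount comparable to the ~1.5 °C offset described above, so comparing the IR thermometer's emissivity setting against the camera's may be worthwhile.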

This question is dedicated solely to sharing important research by OTHER RESEARCHERS (not our own) on complex systems, self-organization, emergence, self-repair, self-assembly, and other exciting phenomena observed in complex systems.

Please keep in mind that each contribution should promote complex systems and help others understand them in the context of any scientific field. We can educate each other this way.

Experiments, simulations, and theoretical results are equally important.

Links to videos and animations will help everyone to understand the given phenomenon under study quickly and efficiently.

I know that δ(f(x)) = ∑ᵢ δ(x−xᵢ)/|f′(xᵢ)|, where the sum runs over the simple zeros xᵢ of f. What will the expression be if f is a function of two variables, i.e., δ(f(x,y)) = ?
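For reference, the one-variable identity (with the absolute value that is often dropped) and the standard two-variable generalization, in which the delta concentrates on the zero level set of f (assuming ∇f ≠ 0 there):

```latex
% One variable: sum over the simple zeros x_i of f
\delta\bigl(f(x)\bigr) = \sum_i \frac{\delta(x - x_i)}{\lvert f'(x_i)\rvert}

% Two variables: a line integral over the curve C = \{(x,y) : f(x,y) = 0\}
\int \delta\bigl(f(x,y)\bigr)\, g(x,y)\, dx\, dy
  = \int_{C} \frac{g(x,y)}{\lvert \nabla f(x,y) \rvert}\, ds
```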

The Nobel Prize Summit 2023: Truth, Trust and Hope started today, 24 May 2023. The summit encourages participation. I have therefore sent an open letter and eagerly await their response. Please comment on whether the points I have made are adequate.

Open Letter to The Nobel Committee for Physics

Is There a Nobel Prize for Metaphysics?

Dear Nobel Committee for Physics,

Among the differences between an established religion, such as Roman Catholicism, and science, is the presence of a hierarchical organization in the former for defending its creed and conducting its affairs. The head of the religious institution ultimately bears responsibility for the veracity of its claims and strategic policies. This accountability was evident in historical figures like John Wycliffe, Jan Hus, and Martin Luther, who held the papacy responsible for wrong doctrines, such as the indulgence scandal during the late Middle Ages. In that context, challenging such doctrines, albeit with the anticipated risk of being burned at the stake, involved posting opposing theses on the doors of churches.

In contrast, the scientific endeavour lacks a tangible temple, and no definitive organization exists to be held accountable for possible misconduct. Science is a collective effort by scientists and scientific institutes to discover new facts within and beyond our current understanding. While scientists may occasionally flirt with science fiction, they ultimately make significant leaps in understanding the universe. However, problems arise when a branch of science is held and defended as a sacred dogma, disregarding principles such as falsifiability. This mentality can lead to a rule of pseudo-scientific oppression, similar to historical instances like the Galileo or Lysenko affairs. Within this realm, there is little chance of liberating science from science fiction. Any criticism is met with ridicule, damnation, and exclusion, reminiscent of the attitudes displayed by arrogant religious establishments during the medieval period. Unfortunately, it seems that the scientific establishment has not learned from these lessons and has failed to provide a process for dealing with these unfortunate and embarrassing scenarios. On the contrary, it is preoccupied with praising and celebrating its achievements while stubbornly closing its ears to sincere critical voices.

Allow me to illustrate my concerns through the lens of relativistic physics, a subject that has captured my interest. Initially, I was filled with excitement, recognizing the great challenges and intellectual richness that lay before me. However, as I delved deeper, I encountered several perplexing issues with no satisfactory answers provided by physicists. While the majority accepts relativity as it stands, what if one does not accept the various inherent paradoxes and seeks a deeper insight?

Gradually, I discovered that certain scientific steps are not taken correctly in this branch of science. For example, we place our trust in scientists to conduct proper analyses of experiments. Yet, I stumbled upon evidence suggesting that this trust may have been misplaced in the case of a renowned experiment that played a pivotal role in heralding relativistic physics. If this claim is indeed valid, it represents a grave concern and a significant scandal for the scientific community. To clarify my points, I wrote reports and raised my concerns. Fortunately, there are still venues outside established institutions where critical perspectives are not yet suppressed. However, the reactions I received ranged from silence to condescending remarks infused with irritation. I was met with statements like "everything has been proven many times over, what are you talking about?" or "go and find your mistake yourself." Instead of responding to my pointed questions and concerns, a professor even suggested that I should broaden my knowledge by studying various other subjects.

While we may excuse the inability of poor, uneducated peasants in the Middle Ages to scrutinize the veracity of the Church's doctrine against the Latin Bible, there is no excuse for professors of physics and mathematics to be unwilling to re-evaluate the analysis of an experiment and either refute the criticism or acknowledge an error. It raises suspicions about the reliability of science itself if, for over 125 years, the famous Michelson-Morley experiment has not been subjected to rigorous and accurate analysis.
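For reference, the classical prediction that the Michelson-Morley apparatus was built to detect can be reproduced in a few lines; the expected fringe shift on rotating the interferometer is ΔN ≈ 2Lv²/(λc²). The parameters below (≈11 m effective arm, 500 nm light, 30 km/s orbital speed) are the commonly quoted 1887 values, used here only as an illustration, not as a re-analysis:

```python
# Hedged sketch: classical (ether-theory) fringe shift expected in the
# Michelson-Morley experiment, Delta_N ~ 2 L v^2 / (lambda c^2).
# Parameters are the commonly quoted 1887 values, for illustration only.

C = 299_792_458.0  # speed of light in vacuum, m/s

def fringe_shift(arm_length_m, wavelength_m, wind_speed_m_s):
    """Expected shift in fringes when the interferometer is rotated 90 degrees."""
    return 2.0 * arm_length_m * wind_speed_m_s**2 / (wavelength_m * C**2)

dn = fringe_shift(11.0, 500e-9, 30_000.0)
print(round(dn, 2))  # ~0.44 fringe predicted; the reported bound was far smaller
```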

Furthermore, I am deeply concerned that the problem has been exacerbated by certain physicists rediscovering the power and benefits of metaphysics. They have proudly replaced real experiments with thought experiments conducted with thought-equipment. Consequently, theoretical physicists find themselves compelled to shut the door on genuine scientific criticism of their enigmatic activities. Simply put, the acceptance of experiment-free science has been the root cause of all these wrongdoings.

To demonstrate the consequences of this damaging trend, I will briefly mention two more complications among many others:

1. Scientists commonly represent time with the letter *t*, assuming it has dimension **T**, and confidently perform mathematical calculations based on this assumption. However, when it comes to relativistic physics, time is represented as *ct* with dimension **L**, and any brave individual questioning this inconsistency is shunned from scientific circles and excluded from canonical publications.

2. Even after approximately 120 years, eminent physicist and Nobel Prize laureate Richard Feynman, along with various professors in highly regarded physics departments, have failed to mathematically prove what Einstein claimed in his 1905 paper. They merely copy from one another, seemingly engaged in a damage-limitation exercise, producing so-called approximate results. I invite you to refer to the linked document for a detailed explanation:

I am now submitting this letter to the Nobel Committee for Physics, confident that the committee, having awarded Nobel Prizes related to relativistic physics, possesses convincing scientific answers to the specific dilemmas mentioned herein.

Yours sincerely,

Ziaedin Shafiei

A pendulum bob oscillates between potential energy maxima at the top of its swing and kinetic energy maxima at the bottom. The maximum potential energy is the mass times the height of the arc times the gravitational acceleration, and the maximum kinetic energy is the mass times the maximum velocity times the average velocity (half the maximum). Energy may be treated as a scalar in many phenomena, but here both energies are products of two vectors: fall direction and gravitation for potential energy, and momentum times average velocity for kinetic energy. This nature of the pendulum is important.

The potential energy of the pendulum is equal to the work it can do under gravitational acceleration until its string is vertical. That work accelerates the pendulum bob to its maximum velocity. The kinetic energy of the pendulum is equal to the work it can do against gravitation to bring the pendulum bob to the top of its arc.
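The bookkeeping described above can be checked numerically: with h the height of the arc, the maximum potential energy is mgh, the bob reaches v_max = sqrt(2gh) at the bottom, and "mass times maximum velocity times average velocity" with v_avg = v_max/2 reproduces ½mv_max², equal to mgh. A minimal sketch:

```python
import math

# Minimal check that the pendulum's maximum potential and kinetic energies
# agree, using PE = m g h and v_max = sqrt(2 g h).

G = 9.81  # gravitational acceleration, m/s^2

def pendulum_energies(mass_kg, arc_height_m):
    pe_max = mass_kg * G * arc_height_m
    v_max = math.sqrt(2.0 * G * arc_height_m)
    # "mass times maximum velocity times average velocity", with v_avg = v_max / 2
    ke_max = mass_kg * v_max * (v_max / 2.0)
    return pe_max, ke_max, v_max

pe, ke, v = pendulum_energies(0.5, 0.2)
print(round(pe, 4), round(ke, 4))  # both 0.981 J for this example
```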

*It is posited that the shape of the atoms and molecules in the bob is the cause of its acceleration in the refraction gradient of a gravitational field, like a light path bending when passing a star.*

*When the bob is free to move, their asymmetrical oscillations change the position of the bob as the process of falling.*

*The response of the atoms and molecules in the bob to motion is to adjust their shape in order to remain in harmony with themselves; that is, they shorten to enable complete oscillations despite translation.*

*These shape changes together transfer potential energy to kinetic energy in the bob, which is then available for the work of lifting the bob by reversing the changes in shape while decelerating.*

**The key to this speculation is that reversible refraction or translation compensation actually move the location of internal oscillations in matter.**

By reason of the application of the Lorentz factor [(1 − v²/c²)^(1/2)] in the denominator of equations, luminal and other comparable energy propagations take on one and the same velocity. This is the relativity effect (better, comparative effect) between the v of objects, compared to the c of the speed of light. That is, it is presupposed here that c is the object of comparison for determining the speed effect of velocity difference across a duration.

It is against the criterion-velocity itself c that c becomes unsurpassable! Hence, I am of the opinion that the supposed source-independence is nothing but an effect of OUR APPARATUS-WISE OBSERVATION LIMIT AND OUR FIXING OF THE CRITERION OF OBSERVATION AS THE OBSERVED VELOCITY OF LIGHT.

In this circumstance, it is useless to claim that (1) luminal and some other energy propagations with velocity c are source-independent, and (2) these wavicles have zero rest mass, since the supposed source-independence has not been proved theoretically or experimentally without using c as the criterion velocity. The supposed source-independence is merely an effect of c-based comparison.

Against this background, it is possible to be assured that photons and other similar c-wavicles are extended particles -- varying their size throughout the course of motion in a spiral manner. Hence the acceptability of the term 'wavicle'. Moreover, each mathematical point of the spiral motion is to be conceived not as two- but as three-dimensional, and any point of motion added to it justifies its fourth dimension. Let us call motion 'change'.

These four dimensions are measuremental, hence the terms 'space' (three-dimensional) and 'time' (one-dimensional). This is also an argument countering the opinion that in physics and cosmology (and other sciences) time is not attested!

The measurements of the 3-space and the measurements of the 1-time are not in the wavicles or in the things being measured. The measurements are cognitive characteristics of the measuring process.

IN FACT, THE EXTENSION OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'SPACE', AND THE CHANGE OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'TIME'. Hence, the physically out-there-to-find characteristics of the wavicles and objects are EXTENSION AND CHANGE.

Extension is the quality of all existing objects by which they have parts. This is not space. Change is the quality by which they have motion, i.e., impact generation on other similar wavicles and/or objects. This is not time. Nothing has space and time; nothing is in space and time. Everything is in Extension-Change.

Any wavicle or other object existing in Extension-Change is nothing but impact generation by physically existent parts. This is what we term CAUSATION. CAUSALITY is the relation of parts of physical existents by which some are termed cause/s and the others are termed effect/s. IN FACT, THE FIRST ASPECT OF THE PHYSICALLY ACTIVE PARTS, WHICH BEGINS THE IMPACT, IS THE CAUSE; AND THE SECOND ASPECT IS THE EFFECT. Cause and effect are, together, one unit of continuous process.

Since energy wavicles are extended, they have parts. Hence, there can be other, more minute, parts of physical objects, which can define superluminal velocities. Here, the criterion of measurement of velocity cannot be c. That is all...! Hence, superluminal velocities are a must by reason of the very meaning of physical existence.

THE NOTION OF PHYSICAL EXISTENCE ('TO BE') IS COMPLETELY EXHAUSTED BY THE NOTIONS OF EXTENSION AND CHANGE. Hence, I call Extension and Change the highest physical-ontological Categories. A metaphysics (physical ontology) of the cosmos is thus feasible. I have been constructing one such. My book-length publications have been efforts in this direction.

I invite your contributions by way of critiques and comments -- not ferocious, but friendly, because I do not claim that I am the last word in any science, including philosophy of physics.

In relativity (GTR, STR) we hear of masslessness. What does it mean with respect to really (not merely measurementally) existent particles / waves?

I am of the opinion that wavicles naturally have mass while propagating, and that there is no situation in which they are absolutely at rest or possess only a rest mass.

Ontological arguments in philosophy have been used over the centuries to support a variety of ideas, most commonly serving as purported proofs of the existence of God. Although these arguments have often been dismissed by philosophers, ontological thinking nevertheless offers a powerful tool in the arsenal of the serious thinker.

My question relates to the fundamental limits of knowledge that might necessarily arise if we live in a universe that 'is its own reason for being'.

I am interested in this question because if it can be shown that there must be a fundamental 'blind spot' in a formal and quantitative approach to the understanding of an ontological universe, then, although there may be no way to work out what it is, we may nevertheless be in a privileged position to know what it is!

For over two thousand years we have been bending our minds into Gordian Knots attempting to show that what we suspect to be true may, in fact, not be!

Perhaps it is time to admit that consciousness is transparent to physics - and to attempt to answer the question why?

Animations are known to be a fast and very efficient way of disseminating knowledge, insight, and understanding of complex systems. Through animations, quite complicated research can easily be shared across all scientific disciplines.

When I started using complex-systems descriptions of dynamic recrystallization in metals almost 30 years ago, it became obvious almost instantly that animations carry enormous expressive power.

This recently led to the development of GoL-N24, open-source Python software that makes it possible to create animations effortlessly: the user just defines the input parameters and the rest is done automatically. Share your software too.

This question is dedicated to all such animations, and to the open-source software that produces them, in the area of complex systems.

Everyone is welcome to share their own research in the form of animations with a relevant description.

I am trying to plot and analyze the differences (or similarities) between the paths of two spherical pendulums over time. I have Cartesian (X/Y/Z) coordinates from an accelerometer/gyroscope attached to a weight on a string.

If I want to compare the paths of two pendulums, such as a spherical pendulum with 5 pounds of weight and another with 15 pounds, how can I analyze this? I hope to determine how closely the paths match over time.

Thanks in advance.
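One straightforward way to quantify how closely two recorded paths match is to interpolate both onto a common time base and compute the root-mean-square separation; dynamic time warping is a reasonable alternative if the two pendulums drift out of phase. A minimal NumPy sketch, assuming each track is an array of rows [t, x, y, z] (the layout is hypothetical):

```python
import numpy as np

# Hedged sketch: compare two 3-D pendulum trajectories by resampling both
# onto a shared time grid and computing the root-mean-square separation.
# Input arrays are assumed to be shaped (n_samples, 4): columns [t, x, y, z].

def path_rmse(track_a, track_b, n_points=500):
    t0 = max(track_a[0, 0], track_b[0, 0])   # overlapping time window
    t1 = min(track_a[-1, 0], track_b[-1, 0])
    t = np.linspace(t0, t1, n_points)        # common time base

    def resample(track):
        return np.column_stack(
            [np.interp(t, track[:, 0], track[:, k]) for k in (1, 2, 3)]
        )

    a, b = resample(track_a), resample(track_b)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Tiny synthetic example: a track compared with itself gives zero RMSE
t = np.linspace(0.0, 10.0, 200)
circle = np.column_stack([t, np.cos(t), np.sin(t), np.zeros_like(t)])
print(path_rmse(circle, circle))  # 0.0
```

The RMSE is sensitive to phase differences between the two swings; if the heavier pendulum runs at a slightly different period, aligning the signals first (or using dynamic time warping) will give a fairer shape comparison.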

Dear Researcher,

Global Climate Models (GCMs) of the Coupled Model Intercomparison Project Phase 6 (CMIP6) are numerical models that represent the various physical systems of the Earth's climate (the land surface, oceans, atmosphere, and cryosphere) and are employed to project likely future climate changes. I would like to know which initial and lateral boundary conditions are used when developing these models.

Sincerely,

Aman Srivastava

I have defined matter and energy as follows (elsewhere), but is it possible to define them independent of each other?

*Meanings of 'Matter' and 'Energy' Contrasted:* By 'matter' I mean whatever exists as the venue of finite activities of existents and is itself finitely active in all parts. Matter is whatever is interconvertible with existent energy.

Existent 'energy' is conceived as those propagative wavicles which, in a given world of existents, function as the fastest existent media of communication of *some* effects between any two or more chunks of matter or of energy (i.e., of motions / changes).

Existent matter and existent energy are inter-convertible, and hence both should finally be amenable to a common definition: whatever exists with finite activity and stability in all parts.

1. Does consciousness exist?

2. If so, what is Consciousness and what are its nature and mechanisms?

3. I personally think consciousness is the subjective [and metaphysical] being that (if it exists) feels and experiences the cognitive procedures (at least the explicit ones). I think that at some ambiguous, abstract, and fuzzy border (on an inward metaphysical continuum), cognition ends and consciousness begins. Or maybe cognition does not end, but consciousness is added to it. I don't know whether my opinion is correct. What are the potential overlaps and differences between consciousness and cognition?

4. Do the Freudian "unconscious mind" or "subconscious mind" [or their modern counterpart, the hidden observer] have a place in consciousness models? I personally believe these items as well are a part of that "subjective being" (which experiences cognitive procedures); therefore they too are a part of consciousness. However, in this case we would have an unconscious consciousness, which sounds (at least superficially) self-contradictory. But numerous practices indicate the existence of such more hidden layers of consciousness. What do you think about something like an "unconscious consciousness"?

5. What is the nature of Altered States of Consciousness?

I have an older version of the X'Pert HighScore software; what should I do to obtain PDF-2 and PDF-4? Please advise, or share a link to the latest version of X'Pert HighScore (free), or to free PDF-2 / PDF-4 databases.

Similarly, are there books and articles on examples of generalization in physics?

Our response is YES. Quantum computing has arrived, as an expression of that.

Numbers do obey a physical law. Peter Shor of the Massachusetts Institute of Technology was the first to say it in modern times, in 1994 [cf. 1]. It is a wormhole connecting physics with mathematics, and it existed even before the Earth did.

So-called "pure" mathematics is, after all, governed by objective laws. The Max Planck Institute of Quantum Optics (MPQ) showed the mathematical basis by recognizing the differentiation of discontinuous functions [1, 2, 3], in 1982.

This denies any type of square-root of a negative number [4] -- a.k.a. an imaginary number -- rational or continuous.

Complex numbers, of any type, are not objective and are not part of a quantum description, as said first by Erwin Schrödinger (1926) -- yet cryogenic behemoth quantum machines (see figure) consider a "complex qubit" -- two objective impossibilities. They are just poor physics and expensive analog experiments in these pioneering times.

Quantum computing is ... natural. Atoms do it all the time, and so does the human brain (based on the +4 quantum properties of numbers).

Each point, in a quantum reality, is a point ... not continuous. So, reality is grainy, everywhere. Ontically.

To imagine a continuous point is to imagine a "mathematical paint" without atoms. Take a good microscope ... atoms appear!

The atoms, an objective reality, imply a graininess. This quantum description includes at least, necessarily (Einstein, 1917), three logical states -- with stimulated emission, absorption, and emission. Further states are possible, as in measured superradiance.
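For the record, the three processes named here are the ones related by Einstein's 1917 rate argument; the standard coefficient relations, for a two-level system with degeneracies g₁, g₂ in a radiation field of spectral density u(ν), are:

```latex
% Rates for a two-level system: spontaneous emission, stimulated emission, absorption
\frac{dN_2}{dt} = -A_{21} N_2 - B_{21}\, u(\nu)\, N_2 + B_{12}\, u(\nu)\, N_1

% Detailed balance with the Planck spectrum fixes the coefficients:
g_1 B_{12} = g_2 B_{21}, \qquad
\frac{A_{21}}{B_{21}} = \frac{8\pi h \nu^3}{c^3}
```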

Mathematical complex numbers or mathematical real-numbers do not describe objective reality. They are continuous, without atoms. Poor math and poor physics.

It is easy to see that multiplication or division "infests" the real part with the imaginary part, and in calculating modulus -- e.g., in the polar representation as well as in the (x,y) rectangular representation. The Euler identity is a fiction, as it trigonometrically mixes types ... avoid it. The FFT will no longer have to use it, and FT=FFT.

The complex number system "infests" the real part with the imaginary part, even for Gaussian numbers, and this is well-known in third-degree polynomials.

Complex numbers, of any type, must be deprecated, they do not represent an objective sense. They should not "infest" quantum computing.

Quantum computing is better without complex numbers: software makes R, C = Q --> B = {0, 1}.

What is your qualified opinion?

REFERENCES

[1] DOI /2227-7390/11/1/68

[3] June 1982, Physical Review A: Atomic, Molecular, and Optical Physics 26:1(1).

The really important breakthrough in theoretical physics is that the Schrödinger Time Dependent Equation (STDE) is wrong, that it is well understood why it is wrong, and that it should be replaced by the correct Deterministic Time Dependent Equation (DTDE). Unitary theory and its descendants, be they based on unitary representations or on probabilistic electrodynamics, will have to go away. This of course runs against the claims about string and similar theories made in the video. But our claims are a dense, constructive criticism with many consequences. Take them into account if you are concerned about the present and the near future of theoretical physics.

Wave mechanics with a fully deterministic behavior of waves is the much needed and sought --sometimes purposely but more often unconsciously-- replacement of Quantism that will allow the reconstruction of atomic and particle physics. A rewind back to 1926 is the unavoidable starting point for participating in the refreshing new future of physics. Many graphical tools currently exist that allow direct visualization of three-dimensional waves, in particular of orbitals. The same tools will clearly render the precise movement and processes of the waves under the truthful deterministic physical laws. Seeing is believing. Unfortunately, there is a large, well-financed, and well-entrenched quantum establishment that stubbornly resists these new developments and possibilities.

When confronted with the news they do not celebrate, nor try to renew themselves overcoming their quantum prejudices. Instead the minds of the quantum establishment refuse to think. They negate themselves the privilege of reasoning and blindly assume denial, or simply panic. The net result is that they block any attempt to spread the results. Accessing funds to recruit and direct fresh talents in the new direction is even harder than spreading information and publishing.

Painfully, this resistance is understandable. For these Quantists are intelligent scientists (yes, they are very intelligent persons) who instinctively perceive as a menace the news that debunks the wave-particle duality, the uncertainty principle, the probabilistic interpretation of wave functions, and the other quantum paraphernalia. Their misguided lifelong labor, dedication, and efforts -- their own and those of their quantum elders, tutors, and guides -- instantly become senseless. I feel sorry for such a painful human situation, but truth must always prevail. For details on the DTDE see our article

Hopefully, young physicists will soon take the lead, and a rational wave mechanics will send the dubious and troublesome Quantism to its crate, long waiting in the warehouse of the history of science.

With cordial regards,

Daniel Crespin

Why do atoms not repel each other when the electrons are on the outside of the nucleus? I think the inverse-square law of force should not allow atoms to come close and form molecules, yet in reality atoms do come close and form molecules. How?

This is because the term "consciousness" is typically presumed to mean that which our selves "internally" experience. Something so large as that is an elaborate composition, courtesy of evolution. What makes consciousness unique is feeling. Basic feeling is fundamental, preceding minds.

Two of the most interesting human spacecraft of our time, the SpaceX Starship and the NASA Gateway lunar space station are soon to join.

It would be very interesting to obtain a database of responses on this question :

What are the links between Algebra & Number Theory and Physics ?

Therefore, I hope to get your answers and points of view. You can also share documents and titles related to the topic of this question.

I recently read a very interesting preprint by the mathematician and physicist Matilde Marcolli: Number Theory in Physics. In this very interesting preprint, she gives several interesting relations between number theory and theoretical physics. You can find the preprint on her profile.

This question discusses the YES answer. We don't need the **√-1**. The complex numbers, using rational numbers (i.e., the Gauss set G) or mathematical real-numbers (the set R), are artificial. Can they be avoided?

Math cannot be in one's head, as [1] explains.

To realize the YES answer, one must advance over current knowledge, and it may sound strange. But every path in a complex space must begin and end in a rational number -- anything that can be measured, or produced, must be a rational number. Complex numbers **are not** needed, physically, as a number. But, in algebra, they **are** useful.

The YES answer can improve the efficiency of using numbers in calculations, although it **is** less advantageous in algebraic calculations, like the well-known Gauss identity. For example, in the FFT [2], there is no need to compute complex functions, or trigonometric functions.

This may lead to further improvement in computation time over the FFT, which already provides orders-of-magnitude improvement in computation time over the FT with mathematical real-numbers. Both the FT and the FFT are revealed to be equivalent -- see [2].

I detail this in [3] for comments. Maybe one can build a faster FFT (or, FFFT)?

Might the answer also lead to further advances in quantum computing?

[2]

Preprint FT = FFT

[3]

Preprint The quantum set Q*

Irrational numbers are uncomputable with probability one. In that sense, numerically, they do not belong to nature. Animals cannot calculate them, nor can humans or machines.

But algebra can deal with irrational numbers. Algebra deals with unknowns and indeterminates, exactly.

Would this mean that a simple bee or fish can do algebra? No; this means, given the simple expression of their brains, that a higher entity is able to command them to do algebra. The same goes for humans and machines. We must also be able to do quantum computing, and beyond, in that way.

Thus, no one (animals, humans, extraterrestrials in the NASA search, and machines) is limited by their expressions, and all obey a higher entity, commanding through a network from the top down -- which entity we call God, and Jesus called Father.

This means that God holds all the dice. That also means that we can learn by mimicking nature. Even a wasp can teach us the medicinal properties of a passion fruit flower to lower aggression. Animals, no surprise, can self-medicate, knowing no biology or chemistry.

There is, then, no "personal" sense of algebra. It just is a combination of arithmetic operations. There is no "algebra in my sense" -- there is only one sense, the one mathematical sense that has made sense physically, for ages. I do not feel free to change it, and did not.

But we can reveal new facets of it. In that, we have already revealed several exact algebraic expressions for irrational numbers. Of course, the task is not even enumerable, but it is worth compiling, for the weary traveler. Any suggestions are welcome.

For those interested: A revamp of the Internet is under way to cover shortcomings interfering with expansion into the Solar System and beyond.

While extremely rugged, the current Internet technologies have a few assumptions built in, including short traversal times at light speed and a relatively stable population of nodes to hop through to the destination, which hold true on Earth. The present Internet fails otherwise.

Mars is minutes away at light speed, with a very spotty supply of nodes on the way there.

A new technology, DTN (Delay/Disruption-Tolerant Networking), utilizes a new protocol, the Bundle Protocol, which is overlaid on top of existing space networking protocols or IP protocols. It has been tested on the ISS and is actively being placed in other spacecraft by NASA and partner agencies. This is part of what is being tested at the Moon first, for later deployment to Mars.

Bundle Protocol was architected by Vint Cerf, a father of TCP/IP, and others.
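The "minutes away" point is easy to make concrete: one-way light time is simply distance divided by c, and the Earth-Mars distance ranges from roughly 54.6 million km at closest approach to about 401 million km near superior conjunction. A small sketch (distances are approximate published extremes, used for illustration):

```python
# Hedged sketch: one-way light time between Earth and Mars, distance / c.
# Distances are approximate published extremes, for illustration only.

C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_light_time_min(distance_m):
    return distance_m / C / 60.0

closest = one_way_light_time_min(54.6e9)   # ~3 minutes one way
farthest = one_way_light_time_min(401e9)   # ~22 minutes one way
print(round(closest, 1), round(farthest, 1))
```

A TCP handshake at those delays would take tens of minutes per round trip, which is why a store-and-forward overlay like the Bundle Protocol is needed.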

One finds action in photon emission and particle paths. What is it when its magnitude is expressed by a Planck Constant for emission versus when it may be expressed as a stationary integral value (least – saddle – greatest) along a path? The units match. Action is recognized as a valuable concept and I would like to appreciate its nature in Nature. (Struggling against the “energy is quantized” error has distracted me from the character of the above inquiry in the past.)

Brief aside: Max Planck and Albert Einstein emphasized energy as discrete amounts for their blackbody radiation and photoelectric studies, but they always added *at a specific frequency!* Energy without that secondary condition is not quantized! I emphasize this because it has been frustrating for decades and it interferes with the awareness that it is **action** that is quantized! Now, granted that this is irrelevant to "grind out useful results" activity, which also is valuable, it is relevant to comprehending the nature of Nature, thus this post.

The existence of the Planck Constant has been a mystery since Max Planck found it necessary to make emissions discrete in order to formulate blackbody radiation mathematically. He assumed discrete emission energy values **for each frequency** that made the action of the radiated energy **at each frequency** equal to the Planck Constant value. (This can be said better -- please, feel free to fix it.) Action had been being used to find the equations of motion for almost two centuries by then.

Is a stationary integral of action along a path equal to an integral number of Planck Constants? Is the underlying nature in these several instances of mathematical physics the same? What is that nature; how can this be? If the natures are different, how is each?

Happy Trails, Len

P.S. My English gets weird and succinct sometimes trying to escape standard ruts in meanings:

*How is each?* is a question that directs one to explain, i.e., to describe the processes as they occur: causes, interactions, events, etc., I hope.

The topic considered here is the Klein-Gordon equation governing some scalar field amplitude, with the field amplitude defined by the property of being a solution of this equation. The original Klein-Gordon equation does not contain any gauge potentials, but a modified version of the equation (also called the Klein-Gordon equation in some books, for reasons that I do not understand) does contain a gauge potential. This gauge potential is often represented in the literature by the symbol

*A* (a four-component vector). Textbooks show that if a suitable transformation is applied to the field amplitude to produce a transformed field amplitude, and another suitable transformation is applied to the gauge potential to produce a transformed gauge potential, the Lagrangian is the same function of the transformed quantities as it is of the original quantities. With these transformations collectively called a gauge transformation, we say that the Lagrangian is invariant under a gauge transformation. This statement has the appearance of being justification for the use of Noether's theorem to derive a conservation law. However, it seems to me that this appearance is an illusion. If the field amplitude and gauge potential are both transformed, then they are both treated the same way as each other in Noether's theorem. In particular, the theorem requires both to be solutions of their respective Lagrange equations. The Lagrange equation for the field amplitude is the Klein-Gordon equation (the version that includes the gauge potential). The textbook that I am studying does not discuss this, but I worked out the Lagrange equations for the gauge potential and determined that the solution is not in general zero (zero is needed to make the Klein-Gordon equation with gauge potential reduce to the original equation). The field amplitude is required in textbooks to be a solution of its Lagrange equation (the Klein-Gordon equation). However, the textbook that I am studying has not explained to me that the gauge potential is required to be a solution of its Lagrange equations. If this requirement is not imposed, I don't see how any conclusions can be reached via Noether's theorem. Is there a way to justify the use of Noether's theorem without requiring the gauge potential to satisfy its Lagrange equation?
Or, is the gauge potential required to satisfy that equation without my textbook telling me about it?

For a plate capacitor, the force on the plates can be calculated from the change of field energy caused by an infinitesimal displacement. This looks like a very first principle. However, thinking about a displacement of a charge in a homogeneous electric field seems to imply no force at all...

Start with a purely classical case to define vocabulary. A charged marble (marble instead of a point particle to avoid some singularities) is exposed to an external electromagnetic (E&M) field. "External" means that the field is created by all charges and currents in the universe except the marble. The marble is small enough for the external field to be regarded as uniform within the marble's interior. The external field causes the marble to accelerate and that acceleration causes the marble to create its own E&M field. The recoil of the marble from the momentum carried by its own field is the self force. (One piece of the charged marble exerts an E&M force on another piece and, contrary to Newton's assumption of equal but opposite reactions, these forces do not cancel with each other if the emitted radiation carries away energy and momentum.) The self force can be neglected if the energy carried by the marble's field is negligible compared to the work done by the external field on the marble. Stated another way, the self force can be neglected if and only if the energy carried by the marble's field is negligible compared to the change in the marble's energy. Also, an analysis that neglects self force is one in which the total force on the marble is taken to be the force produced by external fields alone. The key points from this paragraph are the last two sentences repeated below:

(A) An analysis that neglects self force is one in which the total force on the marble is taken to be the force produced by external fields alone.

(B) The self force can be neglected if and only if the energy carried by the marble's field is negligible compared to the change in the marble's energy.

Now consider the semi-classical quantum mechanical (QM) treatment. The marble is now a particle and is treated by QM (Schrodinger's equation) but its environment is an E&M field treated as a classical field (Maxwell's equations). Schrodinger's equation is the QM analog for the equation of force on the particle and, at least in the textbooks I studied from, the E&M field is taken to be the external field. Therefore, from Item (A) above, I do not expect this analysis to predict a self force. However, my expectation is inconsistent with a conclusion from this analysis. The conclusion, regarding induced emission, is that the energy of a photon emitted by the particle is equal to all of the energy lost by the particle. We conclude from Item (B) above that the self force is profoundly significant.

My problem is that the analysis starts with assumptions (the field is entirely external in Schrodinger's equation) that should exclude a self force, and then reaches a conclusion (change in particle energy is carried by its own emitted photon) that implies a self force. Is there a way to reconcile this apparent contradiction?

**Please spread the word:**

*Folding at Home* (https://foldingathome.org/) is an extremely powerful supercomputer composed of thousands of home computers around the world. It tries to simulate protein folding to **fight diseases**. We can **increase its power even further** by simply running its small program on our computers and donating the spare (already unused and wasted) capacity of our computers to its supercomputation.

After all, a great part of our work (surfing the web, writing texts, communicating, etc.) never needs more than a tiny percent of the huge capacity of our modern CPUs and GPUs.

**So it would be very helpful if we could donate the rest of their capacity [that is currently going to waste] to such "distributed supercomputer" projects and help find cures for diseases.** The program runs at a very low priority in the background and uses some of the capacity of our computers.

**By default, it is set to use the least amount of EXCESS (already wasted) computational power.** It is very easy to use. But if someone is interested in tweaking it, it can be configured via both simple and advanced modes. For example, the program can be set to run only when the computer is idle (the default mode) or even while working. It can be configured to work intensively or very mildly (the default mode). The CPU or GPU can each be disabled or set to work only when the operating system is idle, independent of the other.

Please **spread the word**; for example, **start by sharing this very post with your contacts.** Also give the developers feedback and suggestions to improve their software, or directly contribute to their project.

**Folding at Home**: https://foldingathome.org/

Folding at Home's **Forum**: https://foldingforum.org/index.php

Folding at Home's **GitHub**: https://github.com/FoldingAtHome

Additionally, see other distributed supercomputers used for fighting disease:

**Rosetta at Home**: https://boinc.bakerlab.org/

**GPUGRID**: https://www.gpugrid.net/

Dear colleagues. This is not a matter about mathematical questions, fields and the like that I do not understand, but about the following:

As a researcher in philosophy of science, I have read more than once - from qualified sources - and repeated that, unlike Newtonian mechanics, which assumes that macroscopic physical space is absolute, has three dimensions and is separated from absolute time, for general relativity space is a four-dimensional spacetime, and that time is relative to the position of the observer (due to the influence of gravity).

Now I find that this may be wrong, having heard that, for the theory, time and the perception of time are different things. Specifically, that in the famous Einsteinian example (a mental or imaginary experiment) of the twins, the one who is longer-lived when they meet again has perceived a greater passage of time. And if what has been different is the perception of time, and not time itself, then that would mean that objectively both have always been at the same point on the "arrow of time".

And it would mean that I have confused time, as an objective or "objective" dimension of spacetime, with one's perception of it. That is, if there were no observer, spacetime would still have its "time" dimension.

It follows that it is false that for general relativity time is relative (because it is a dimension of spacetime, which is not relative). Now, if this is so, how can the theory predict the - albeit hypothetical - existence of wormholes?

There is something I fail to understand: does the theory of relativity really differentiate time from the perception that an observer may have of it, and the example of twins refers to the latter?

If spacetime is only one - there are not several independent spacetimes - and it has objective existence, including its "time" dimension , how is it possible to travel - theoretically, according to the theory - through a wormhole to another part of it that has a different temporality (what we call past or future)?

Since it does not make sense to me to interpret that one would not travel to the future but to the perception of the future. And I rule out that Einstein has confused time with the perception of it.

Thank you.

Our answer is YES. A new question (at https://www.researchgate.net/post/If_RQ_what_are_the_consequences/1) has been answered affirmatively, confirming the YES answer in this question, with wider evidence in +12 areas.

This question continued the same question from 3 years ago, with the same name, considering new published evidence and results. The previous text of the question may be useful and is available here:

We now can provably include DDF [1] -- the differentiation of discontinuous functions. This is not shaky, but advances knowledge. The quantum principle of Niels Bohr in physics, "all states at once", meets mathematics and quantum computing.

Without infinitesimals or epsilon-deltas, DDF is possible, allowing quantum computing [1] between discrete states, and a faster FFT [2]. The Problem of Closure was made clear in [1].

Although Weyl's training was on these mythical aspects, the infinitesimal transformation and Lie algebra [4], he saw an application of groups in the many-electron atom, which must have a finite number of equations. The discrete Weyl-Heisenberg group comes from these discrete observations and does not use infinitesimal transformations at all, with finite-dimensional representations. Similarly, this is the same as someone trained in traditional infinitesimal calculus starting to use rational numbers in calculus, with DDF [1]. The similar previous training applies in both fields, from a "continuous" field to a discrete, quantum field. In that sense, R~Q*; the results are the same formulas -- but now, absolutely accurate.

New results have been made public [1-3], confirming the advantages of the YES answer, since this question was first asked 3 years ago. All computation is revealed to be exact in modular arithmetic, there is NO concept of approximation, no "environmental noise" when using it.

As a consequence of the facts in [1], no one can formalize the field of non-standard analysis in the use of infinitesimals in a consistent and complete way, or Cauchy epsilon-deltas, against [1], although these may have been claimed and chalk spilled.

Some branches of mathematics will have to change. New results are promised in quantum mechanics and quantum computing.

This question is closed, affirming the YES answer.

REFERENCES

[2]

Preprint FT = FFT

[3]

Preprint The quantum set Q*

Do transverse and longitudinal plasmons fall under localized surface plasmons? What is the significant difference between them? At what level will this affect fabricated silver-nanoparticle-based electronic devices? Is surface plasmon propagation different from transverse and longitudinal plasmons?

Using BoltzTraP and Quantum ESPRESSO I was able to calculate the electronic part of the thermal conductivity, but I am still struggling with the phononic part of the thermal conductivity.

I tried ShengBTE, but that demands a good computational facility, and right now I do not have such a workstation. Kindly suggest some other tool that could be useful for me in this regard.

Thanks,

Dr Abhinav Nag

Finding a definition for time has challenged thinkers and philosophers. The direction of the arrow of time is questioned because many physical laws seem to be symmetrical in the forward and backward direction of time.

We can show that the arrow of time must be in the forward direction by considering light. The speed of light is always positive and distance is always positive so the direction of time must always be positive. We could define one second as the time it takes for light to travel approximately 300,000 km. Note that we have shown the arrow of time to be in a positive direction without reference to entropy.

So we are defining time in terms of distance and velocity. Philosophers might argue that we then have to define distance and velocity but these perhaps are less challenging to define than time.

So let's try to define time. Objects that exist within the universe have a state of movement and the elapsed times that we observe result from the object being in a different position due to its velocity.

This definition works well considering a pendulum clock and an atomic clock. We can apply this definition to the rotation of the Earth and think of the elapsed time of one day as being the time for one complete rotation of the Earth.

The concept of time has been confused within physics by the ideas of quantum theory which imply the possibility of the backward direction of time and also by special relativity which implies that you cannot define a standard time throughout the universe. These problems are resolved when you consider light as a wave in the medium of space and this wave travels in the space rest frame.

Preprint Space Rest Frame (March 2022)

Richard

Our answer is YES. This question captured the reason of change: to help us improve. We, and mathematics, need to consider that reality is quantum [1-2], ontologically.

This affects both the microscopic (e.g., atoms) and the macroscopic (e.g., collective effects, like superconductivity, waves, and lasers).

Reality is thus not continuous, incremental, or happenstance.

That is why everything blocks, goes against, a change -- until it occurs, suddenly, taking everyone to a new and better level. This is History. It is not a surprise ... We are in a long evolution ...

As a consequence, tri-state, e.g., does not have to be used in hardware, just in design. Intel Corporation can realize this, and become more competitive. This is due to many factors, including 1^n = 1, and 0^n = 0, favoring Boolean sets in calculations.

This question is now CLOSED. Focusing on the discrete Weyl-Heisenberg group, as motivated by SN, this question has been expanded in a new question, where it was answered with YES in +12 areas:

[2]

Preprint The quantum set Q*

Some researchers say that the pH value of the reaction medium affects the type of surface electrical charge and thus the adsorption and removal process: when the pH value increases, the overall surface electrical charge on the adsorbent becomes negative and adsorption decreases, while if the pH value decreases, the surface electrical charge becomes positive and adsorption increases.

Malkoc, E.; Nuhoglu, Y. and Abali, Y. (2006). "Cr (VI) Adsorption by Waste Acorn of Quercus ithaburensis in Fixed Beds: Prediction of Breakthrough Curves," Chemical Engineering Journal, 119(1): pp. 61-68.

Greetings,

When I try to remotely access the Scopus database by logging in with my institution ID, it keeps bringing me back to the Scopus preview. I have tried clearing the cache, reinstalling the browser, using another internet connection, etc., but none of it works. As you can see in the image, it keeps showing the Scopus preview.

Please help.

If a string vibrates at 256 cycles per second, then counting 256 cycles is the measure of 1 second. The number is real because it measures time, and the number is arbitrary because it does not have to be 1 second that is used.

This establishes that the pitch is a point with the real number topology, right?

Material presence is essential for the propagation of sound. Does this mean that sound waves can travel interstellar distances at longer wavelengths due to the presence of celestial bodies in the universe?

The exposure dose rate at a distance of 1 m from a soil sample contaminated with 137Cs is 80 µR/s. Considering the source as a point source, estimate the specific activity of 137Cs contained in the soil if the mass of the sample is 0.4 kg. How can i calculate it?
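The standard point-source estimate uses the gamma (ionization) constant Γ of ¹³⁷Cs: the activity follows from A = P·r²/Γ, and the specific activity is A divided by the sample mass. A minimal sketch in Python, assuming the commonly tabulated value Γ ≈ 3.24 R·cm²/(h·mCi); verify that constant against the reference your course uses:

```python
# Point-source estimate of Cs-137 specific activity from exposure dose rate.
# GAMMA is the ionization (gamma) constant of Cs-137; 3.24 R*cm^2/(h*mCi)
# is a commonly tabulated value -- check it against your own reference.
GAMMA = 3.24                       # R*cm^2 / (h*mCi)

dose_rate_R_per_h = 80e-6 * 3600   # 80 uR/s converted to 0.288 R/h
r_cm = 100.0                       # 1 m expressed in cm
mass_kg = 0.4                      # sample mass

activity_mCi = dose_rate_R_per_h * r_cm**2 / GAMMA   # A = P * r^2 / Gamma
activity_Bq = activity_mCi * 3.7e7                   # 1 mCi = 3.7e7 Bq
specific_activity_Bq_per_kg = activity_Bq / mass_kg  # a = A / m

print(f"A ~ {activity_Bq:.2e} Bq")
print(f"a ~ {specific_activity_Bq_per_kg:.2e} Bq/kg")
```

With these numbers the estimate comes out to roughly 3.3×10¹⁰ Bq (about 0.9 Ci) and about 8×10¹⁰ Bq/kg; the exact figure depends on the Γ value adopted.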

As we all know, classical physics holds sway over massive objects, while quantum physics deals with objects at the smallest scales. Could a new theory satisfying both the classical and quantum descriptions emerge in the future?

Hello Everyone,

I am able to successfully run scf.in using pw.x, but while proceeding with the calculations using thermo_pw.x the following error occurs:

**Error in routine c_bands (1):**

**too many bands are not converged**

I have already tried increasing ecut and ecutrho, decreasing conv_thr, reducing mixing_beta, reducing the k-points, and changing the pseudopotential,

but none of them are helpful to fix the issue.

Someone who has faced this error in **thermo_pw**, please guide.

Thanks,

Dr. Abhinav Nag

Which software is best for making high-quality graphs? Origin or Excel? Thank you

I am going to make a setup for generating and manipulating time bin qubits. So, I want to know what is the easiest or most common experimental setup for generating time bin qubits?

Please share your comments and references with me.

thanks

How long does it take for a journal indexed in the "Emerging Sources Citation Index" to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?

Dear fellow mathematicians,

Using a computational engine such as Wolfram Alpha, I am able to obtain a numerical expression. However, I need a symbolic expression. How can I do that?

I need the expression of the coefficients of this series.

x^2*csc(x)*csch(x)

where csc: cosecant (1/sin), and csch: hyperbolic cosecant.

Thank you for your help.
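If you have Python available, SymPy can produce the symbolic Taylor coefficients of this function directly; a small sketch (Wolfram Alpha/Mathematica users could use `SeriesCoefficient` analogously):

```python
import sympy as sp

x = sp.symbols('x')
# x^2 * csc(x) * csch(x) = x^2 / (sin(x) * sinh(x))
expr = x**2 * sp.csc(x) * sp.csch(x)

# Taylor expansion about x = 0, up to (but excluding) x**10
s = sp.series(expr, x, 0, 10).removeO()

# Extract the exact symbolic coefficient of each power of x
coeffs = {n: s.coeff(x, n) for n in range(10)}
print(coeffs[0], coeffs[4])
```

The expansion begins 1 + x⁴/90 + …; the odd powers and the x² and x⁶ coefficients vanish, so only every fourth power contributes at low order.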

I'm getting repeatedly negative open circuit potentials (OCP) vs. an Ag/AgCl reference electrode for some electrodes during OCP vs. time measurements using an electrochemical workstation. What is the interpretation of a negative open circuit potential? Moreover, I have also noticed that it becomes more negative on illumination. What is the reason behind this? Are there some references? Please help.

Dear Sirs,

In the below I give some very dubious speculations and recent theoretical articles about the question. Maybe they promote some discussion.

1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also a part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter described by some more general laws? Can a physical law, as information, transform into energy and mass?

2.) Besides the above logical approach, one can come to the same question in another way. Let us consider a transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information, or some physical laws, disappear. For example, a free particle has a momentum but it does not have a position. The magnetic moment of a nucleus has a projection on the external magnetic field direction, but the transverse projection does not exist. So we cannot say that the nuclear magnetic moment moves around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.

One can hypothesize that if information is equivalent to some very small mass or energy (e.g., as shown in the next item), then it may be that some information or physical laws are lost, e.g., for an electron having extremely low mass. This conjecture agrees with the fact that objects having mass much greater than the proton's are described by classical Newtonian physics.

But one can raise an objection to the above view: a photon has no rest mass and, e.g., the neutrino rest mass is extremely small. Despite this, they have a spin and momentum like an electron. This spin and momentum information is not lost. Moreover, the photon energy for long EM waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.

But there is possibly a solution to the above problem. A photon moves at light speed (neutrino speed is very near light speed); that is why the physical information cannot be detached and go away from the photon (information propagates at most at light speed).

3.) Searching the internet I have found recent articles by Melvin M. Vopson

which propose a mass-energy-information equivalence principle and its experimental verification. As far as I know, this experimental verification has not yet been done.

I would be grateful to hear your view on this subject.

How can we calculate the number of dimensions in a discrete space if we only have a complete scheme of all its points and possible transitions between them (or data about the adjacency of points)? Such a scheme can be very confusing and far from the clear two- or three-dimensional space we know. We can observe it, but it is stochastic and there are no regularities, fractals or the like in its organization. We only have access to an array of points and transitions between them.

Such computations can be resource-intensive, so I am especially looking for algorithms that can quickly approximate the dimensionality of the space based on the available data about the points of the space and their adjacencies.

I would be glad if you could help me navigate in dimensions of spaces in my computer model :-)
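One cheap approximation is the "ball-growth" dimension: run a breadth-first search from sample points and fit the slope of log N(r) versus log r, where N(r) is the number of points within r hops; in a d-dimensional lattice-like space N(r) grows roughly like r^d. A sketch in plain Python (function names are mine; for a noisy stochastic graph, average the estimate over many source points and restrict the fit to intermediate radii):

```python
from collections import deque
import math

def ball_sizes(adj, source, rmax):
    """BFS from source; return N(r) = number of nodes within r hops, r = 1..rmax."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if dist[u] >= rmax:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, rmax + 1)]

def estimate_dimension(adj, source, rmax=10):
    """Least-squares slope of log N(r) vs log r approximates the dimension."""
    sizes = ball_sizes(adj, source, rmax)
    xs = [math.log(r) for r in range(1, rmax + 1)]
    ys = [math.log(n) for n in sizes]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

On a large 2D grid graph this returns a value approaching 2 (boundary effects and the small-r terms bias it slightly low), and it runs in O(edges) per source point, so it stays cheap even on large adjacency data.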

Have these particles been observed in predicted places?

For example, have scientists ever noticed the creation of energy and pair particles from nothing in the Large Electron–Positron Collider, the Large Hadron Collider at CERN, the Tevatron at Fermilab, or other particle accelerators since the late 1930s? The answer is no. In fact, no report of observing such particles by the highly sensitive sensors used in accelerators has been mentioned.

Moreover, according to one interpretation of the uncertainty principle, abundant charged and uncharged virtual particles should continuously whiz inside the storage rings of all particle accelerators. Scientists and engineers make sure that they maintain ultra-high vacuum at close to absolute zero temperature in the travelling path of the accelerating particles; otherwise even residual gas molecules deflect, attach to, or ionize any particle they encounter. But there has not been any concern or any report of undesirable collisions with so-called virtual particles in any accelerator.

It would have been absolutely useless to create ultra-high vacuum, a pressure of about 10^-14 bar, throughout the travel path of the particles if the vacuum chambers were seething with particle/antiparticle or matter/antimatter pairs. If there were such a phenomenon, there would have been significant background effects as a result of the collision and scattering of the beam of accelerating particles from the supposed bubbling of virtual particles created in vacuum. This process is readily available for examination, in comparison to the totally out-of-reach Hawking radiation, which is considered to be a real phenomenon that will be eating away the supposed black holes of the universe in the very long-term future.

for related issues/argument see

Consider the two propositions of the Kalam cosmological argument:

1. Everything that begins to exist has a cause.

2. The universe began to exist.

Both are based on assuming full knowledge of whatever exists in the world, which is obviously not entirely true. Even Big Bang cosmology relies on a primordial seed whose origin and characteristics science knows nothing about.

The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.

Your comments are much appreciated.

Dear all,

after a quite long project, I have coded up a Python 3D relativistic GPU-based PIC solver, which is not too bad at some tasks (calculating 10,000 time steps with up to 1 million cells, after which I run out of process memory, in just a few hours).

Since I really want to make it publicly available on GitHub, I also thought about writing a paper on it. Do you think this work is worthy of being published? And if so, what journal should I aim for?

Cheers

Sergey

When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: "Why energy instead of some other quantity". When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (that I supposedly had already learned before studying this topic) but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity." Until somebody figures out how to do that, we have to accept the postulate as a postulate instead of a derived conclusion. The question that I am asking 5 decades later is, has somebody figured it out yet? I'm not an expert on statistical mechanics so I hope that answers can be simple enough to be understood by people that are not experts.

My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.

The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.

The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.

The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and observed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need to have sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several seconds per image frame.

All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.

I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.

I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.

I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.

All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...

What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
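On the alignment question: for two beams crossing at a small angle θ, the fringe spacing is λ / (2·sin(θ/2)) ≈ λ/θ, so alignment is critical. A quick estimate, assuming a simple two-plane-wave interference model at 1310 nm (the angles below are illustrative):

```python
import math

WAVELENGTH = 1310e-9  # m, the 1310 nm DFB source

def fringe_spacing(angle_deg):
    """Fringe spacing (m) for two plane waves crossing at angle_deg."""
    theta = math.radians(angle_deg)
    return WAVELENGTH / (2 * math.sin(theta / 2))

for a in (0.01, 0.1, 1.0):
    print(f"{a:5.2f} deg -> fringe spacing {fringe_spacing(a) * 1e6:9.1f} um")
```

At 1° of misalignment the fringes are only ~75 µm apart, very likely below what the camera resolves on the screen, while at 0.01° they are millimetre-scale. So the two beams generally need to overlap within a small fraction of a degree. Matched path lengths (well within the coherence length, not limiting here with a DFB source) and matched polarization are also required.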


During AFM imaging, the tip does the raster scanning in xy-axes and deflects in z-axis due to the topographical changes on the surface being imaged. The height adjustments made by the piezo at every point on the surface during the scanning is recorded to reconstruct a 3D topographical image. How does the laser beam remain on the tip while the tip moves all over the surface? Isn't the optics static inside the scanner that is responsible for directing the laser beam onto the cantilever or does it move in sync with the tip? How is it that only the z-signal is affected due to the topography but the xy-signal of the QPD not affected by the movement of the tip?

or in other words, why is the QPD signal affected only due to the bending and twisting of the cantilever and not due to its translation?

1) Can the existence of an aether be compatible with local Lorentz invariance?

2) Can classical rigid bodies in translation be studied in this framework?

By changing the synchronization condition of the clocks of inertial frames, the answer to 1) and 2) seems to be affirmative. This synchronization clearly violates global Lorentz symmetry, but it preserves Lorentz symmetry in the vicinity of each point of flat spacetime.

Christian Corda showed in 2019 that this effect of clock synchronization is a necessary condition to explain the Mössbauer rotor experiment (Honorable Mention at the Gravity Research Foundation 2018). In fact, it can be easily shown that it is a necessary condition to apply the Lorentz transformation to any experiment involving high velocity particles traveling along two distant points (including the linear Sagnac effect) .

---------------

We may consider the time of a clock placed at an arbitrary coordinate x to be t, and the time of a clock placed at an arbitrary coordinate x_P to be t_P. Let the offset (t - t_P) between the two clocks be:

1) (t - t_P) = v (x - x_P)/c^2

where (t - t_P) is the so-called Sagnac correction. If we call g the Lorentz factor for v and we insert 1) into the time-like component of the Lorentz transformation, T = g (t - vx/c^2), we get:

2) T = g (t_P - v x_P/c^2)

On the other hand, if we assume that the origins coincide, x = X = 0 at time t_P = 0, we may write down the space-like component of the Lorentz transformation as:

3) X = g (x - v t_P)

Assuming that both clocks are placed at the same point x = x_P, inserting x = x_P, X = X_P, T = T_P into 2) and 3) yields:

4) X_P = g (x_P - v t_P)

5) T_P = g (t_P - v x_P/c^2)

which is the local Lorentz transformation for an event happening at point P. On the other hand, if the distance between x and x_P is different from 0 and x_P is placed at the origin of coordinates, we may insert x_P = 0 into 2) and 3) to get:

6) X = g (x - v t_P)

7) T = g t_P

which is a change of coordinates that:

- Is compatible with GPS simultaneity.

- Is compatible with the Sagnac effect. This effect can be explained in a very straightforward manner without the need to use GR or the Langevin coordinates.

- Is compatible with the existence of relativistic extended rigid bodies in translation, using the classical definition of rigidity instead of Born's definition.

- Can be applied to solve the two problems of the preprint below.

- Is compatible with all experimental corroborations of SR: aberration of light, the Ives-Stilwell experiment, the Hafele-Keating experiment, ...

Thus, we may conclude that, considering the synchronization condition 1):

a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a single clock.

b) Lorentz invariance is broken when we use two clocks to measure time intervals over long displacements (eqs. 6-7).

c) We need to consider the frame with respect to which the velocity v of the synchronization condition (eq. 1) is defined. This frame has v = 0 and plays the role of an absolute preferred frame.

a), b) and c) suggest that the Thomas precession is a local effect that cannot manifest over long displacements.
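Since the derivation of eqs. 1)-2) is purely algebraic, it can be checked numerically. A minimal sketch (the values of v, x_P, t_P and the test positions x are arbitrary, chosen only for illustration), verifying that inserting the synchronization condition 1) into the time-like Lorentz component yields eq. 2) independently of x:

```python
c = 1.0
v = 0.6                                # arbitrary subluminal velocity
g = 1.0 / (1.0 - v**2 / c**2) ** 0.5   # Lorentz factor for v

x_P, t_P = 2.0, 1.5          # clock at P: position and reading
for x in (2.0, 5.0, 100.0):  # second clock at several positions
    # synchronization condition 1): offset between the two clocks
    t = t_P + v * (x - x_P) / c**2
    # time-like Lorentz component evaluated with the offset clock
    T = g * (t - v * x / c**2)
    # eq. 2): the same T expressed through the P clock alone
    assert abs(T - g * (t_P - v * x_P / c**2)) < 1e-12
print("eq. 2) holds for every x tested")
```

The x-dependence cancels exactly because the offset v(x - x_P)/c^2 in 1) compensates the vx/c^2 term of the transformation, which is the algebraic content of step 1) to 2) above.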

More information in:

Hello everyone,

I did a nanoindentation experiment:

one photoresist with 3 different layer thicknesses.

My results show that the photoresist is harder when the layer is thicker.

I can't find the reason in the literature.

Can anyone please explain to me why it is like that?

Is there any literature on this?

Best regards,

chiko

The above question emerges from a parallel session [1] on the basis of two examples:

1. Experimental data [2] that apparently indicate the validity of *Mach's Principle* stay out of the discussion after the main-stream consensus declared Mach to be out; see also the appended PDF files.

2. The negative outcome of gravitational-wave experiments [3] apparently does not affect the main-stream acceptance of claimed discoveries.

I am using a Seek thermal camera to track cooked food, as in this video.

As I observed, the temperature of the food obtained was quite good (i.e. close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's temperature reading suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.

As I suspected that the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens so that the camera could only see the food through the hole, and not the hot pan any more. The food temperature obtained by the camera was correct again, but it became wrong again when I took the paper out.

I would appreciate any explanation of this phenomenon, and any solution, from either physics or optics.
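One candidate explanation is reflected background radiation: an IR camera infers temperature from the total radiance it receives, which is the object's own emission plus the share of the background radiation the object reflects. A minimal grey-body sketch of this effect (the emissivity of 0.9 and all temperatures are assumed values, not measurements):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def apparent_temp_K(t_obj_K, t_bg_K, emissivity):
    """Temperature a camera would report if it assumed a black body:
    emitted radiance plus the reflected share of the background."""
    radiance = (emissivity * SIGMA * t_obj_K**4
                + (1.0 - emissivity) * SIGMA * t_bg_K**4)
    return (radiance / SIGMA) ** 0.25

food, room, pan = 298.0, 298.0, 503.0   # ~25 C food and room, ~230 C pan, in K
print(apparent_temp_K(food, room, 0.9) - 273.15)  # background = room: reads ~25 C
print(apparent_temp_K(food, pan, 0.9) - 273.15)   # hot pan behind: well above 25 C
```

On this toy model the hot background pushes the apparent reading far above the true 25 °C, which is consistent with your paper screen fixing the reading: the paper blocks the pan's radiation from reaching the scene and the lens.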

Eighty years after Chadwick discovered the neutron, physicists today still cannot agree on how long the neutron lives. Measurements of the neutron lifetime have achieved the 0.1% level of precision (~1 s). However, results from several recent experiments are up to 7 s lower than the (pre-2010) Particle Data Group (PDG) value. Experiments using the trap technique yield lifetime results lower than those using the beam technique. The PDG urges the community to resolve this discrepancy, now at 6.5 sigma.

*I think the reason is that the “trapped p” method did not count the full number of protons in the decay reaction (n → p + e + ν̄e + γ). As a result, the number of decay neutrons obtained was low, which affected the measurement of the neutron lifetime. Do you agree with me?*
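The sensitivity of a proton-counting measurement to exactly this kind of undercount can be sketched with one line of algebra: for observation times t much shorter than τ, N_p ≈ N_n · t/τ, so τ is inferred as roughly N_n · t / N_p, and any missed protons lengthen the inferred lifetime. A toy calculation (all numbers illustrative, not from any real experiment):

```python
import math

tau_true = 879.0   # s, assumed "true" lifetime for this toy model
t_obs = 1.0        # s, observation time
N_n = 1_000_000    # neutrons observed

# decay protons actually produced during t_obs
N_p = N_n * (1.0 - math.exp(-t_obs / tau_true))

results = {}
for eff in (1.00, 0.99):  # proton counting efficiency
    # invert the exponential decay law using the *counted* protons
    results[eff] = -t_obs / math.log(1.0 - eff * N_p / N_n)
    print(f"efficiency {eff:.2f}: inferred lifetime {results[eff]:.1f} s")
# a 1% proton undercount lengthens the inferred lifetime by roughly 9 s
```

A ~1% proton loss already shifts τ by about the size of the beam-trap discrepancy, which is why proton counting efficiency is central to the systematic debate. Note, though, that it is the beam experiments that count protons, and (as stated above) they give the *longer* lifetime, so an undercount would act in the direction of the beam results, not the trap ones.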