Physics - Science topic
Questions related to Physics
Our answer is YES.
This question continues the same question from 3 years ago, with the same name, considering newly published evidence and results. The previous text of the question may be useful and is available here:
We now can provably include DDF [1] -- the differentiation of discontinuous functions. This is not shaky, but advances knowledge. The quantum principle of Niels Bohr in physics, "all states at once", meets mathematics and quantum computing.
Without infinitesimals or epsilon-deltas, DDF is possible, allowing quantum computing [1] between discrete states, and a faster FFT [2]. The Problem of Closure was made clear in [1].
Although Weyl's training was on these mythical aspects, the infinitesimal transformation and Lie algebra [4], he saw an application of groups in the many-electron atom, which must have a finite number of equations. The discrete Weyl-Heisenberg group comes from these discrete observations, does not use infinitesimal transformations at all, and has finite-dimensional representations. Similarly, this is the same as someone trained in traditional infinitesimal calculus starting to use rational numbers in calculus, with DDF [1]. The same previous training applies in both fields, from a "continuous" field to a discrete, quantum field. In that sense, R~Q*; the results are the same formulas -- but now, absolutely accurate.
New results have been made public [1-3], confirming the advantages of the YES answer since this question was first asked 3 years ago. All computation is revealed to be exact in modular arithmetic; there is NO concept of approximation, no "environmental noise" when using it.
As a consequence of the facts in [1], no one can formalize the field of non-standard analysis with infinitesimals, or Cauchy epsilon-deltas, in a consistent and complete way against [1], although such claims may have been made and much chalk spilled.
Some branches of mathematics will have to change. New results are promised in quantum mechanics and quantum computing.
What is your qualified opinion?
REFERENCES
[2]
Preprint FT ~ FFT
[3]
Preprint The quantum set Q*
What are the perspectives, challenges and proposals for Science Education and Physics Teacher Training in Brazil?
When analyzing the education of today's young people, one can observe the strong presence of common sense in discussions both inside and outside the classroom, showing that there is a strong clash between science and common sense in schools and in society as a whole. In this sense, what can be done so that there is in fact a true scientific education, in view of the training of physics teachers in Brazil? What should be done? What theoretical framework should be followed? What methodologies can be employed? Would the lack of laboratories in basic education be a factor to consider? And what about textbook choices?
Do transverse and longitudinal plasmons fall under localized surface plasmons? What is the significant difference between them? To what extent will this affect fabricated silver-nanoparticle-based electronic devices? Is surface plasmon propagation different from transverse and longitudinal plasmons?
This question discusses the YES answer. We don't need the √-1.
The complex numbers, using rational numbers (i.e., the Gauss set G) or mathematical real-numbers (the set R), are artificial. Can they be avoided?
Math cannot be only in one's head, as [1] explains.
To realize the YES answer, one must advance over current knowledge, and it may sound strange. But every path in a complex space must begin and end in a rational number -- anything that can be measured, or produced, must be a rational number. Complex numbers are not needed, physically, as numbers. But in algebra, they are useful.
The YES answer can improve the efficiency of using numbers in calculations, although it is less advantageous in algebraic calculations, like the well-known Gauss identity.
For example, in the FFT [2], there is no need to compute complex functions, or trigonometric functions.
This may lead to further improvement in computation time over the FFT, which already provides orders of magnitude improvement in computation time over the FT with mathematical real numbers. Both the FT and the FFT are revealed to be equivalent -- see [2].
I detail this in [3] for comments. Maybe one can build a faster FFT (or, FFFT)?
Might the answer also enable further advances in quantum computing?
[2]
Preprint FT ~ FFT
[3]
Preprint The quantum set Q*
Finding a definition for time has challenged thinkers and philosophers. The direction of the arrow of time is questioned because many physical laws seem to be symmetrical in the forward and backward direction of time.
We can show that the arrow of time must be in the forward direction by considering light. The speed of light is always positive and distance is always positive so the direction of time must always be positive. We could define one second as the time it takes for light to travel approximately 300,000 km. Note that we have shown the arrow of time to be in a positive direction without reference to entropy.
So we are defining time in terms of distance and velocity. Philosophers might argue that we then have to define distance and velocity but these perhaps are less challenging to define than time.
So let's try to define time. Objects that exist within the universe have a state of movement, and the elapsed times that we observe result from an object being in a different position due to its velocity.
This definition works well considering a pendulum clock and an atomic clock. We can apply this definition to the rotation of the Earth and think of the elapsed time of one day as being the time for one complete rotation of the Earth.
The concept of time has been confused within physics by the ideas of quantum theory, which imply the possibility of a backward direction of time, and also by special relativity, which implies that you cannot define a standard time throughout the universe. These problems are resolved when you consider light as a wave in the medium of space, with this wave traveling in the space rest frame.
Preprint Space Rest Frame (March 2022)
Richard
Our answer is YES. This question captures the reason for change: to help us improve. We, and mathematics, need to consider that reality is quantum [1-2], ontologically.
This affects both the microscopic (e.g., atoms) and the macroscopic (e.g., collective effects, like superconductivity, waves, and lasers).
Reality is thus not continuous, incremental, or happenstance.
That is why everything blocks, goes against, a change -- until it occurs, suddenly, taking everyone to a new and better level. This is History. It is not a surprise ... We are in a long evolution ...
As a consequence, tri-state, e.g., does not have to be used in hardware, just in design. Intel Corporation can realize this, and become more competitive. This is due to many factors, including 1^n = 1, and 0^n = 0, favoring Boolean sets in calculations.
[2]
Preprint The quantum set Q*
Some researchers say the pH value of the reaction medium affects the type of surface electrical charge and thus the adsorption and removal process: when the pH value increases, the overall surface electrical charge on the adsorbents becomes negative and the adsorption process decreases, while if the pH value decreases, the surface electrical charge becomes positive and the adsorption process increases.
Malkoc, E., Nuhoglu, Y. and Abali, Y. (2006). "Cr(VI) Adsorption by Waste Acorn of Quercus ithaburensis in Fixed Beds: Prediction of Breakthrough Curves," Chemical Engineering Journal, 119(1): 61-68.
Greetings,
When I try to remotely access the Scopus database by logging in with my institution ID, it keeps bringing me back to the Scopus preview. I have tried clearing the cache, reinstalling the browser, using a different internet connection, etc., but none of it works. As you can see in the image, it keeps showing the Scopus preview.
Please help.
If a string vibrates at 256 cycles per second, then counting 256 cycles is the measure of 1 second. The number is real because it measures time, and the number is arbitrary because it does not have to be 1 second that is used.
This establishes that the pitch is a point with the real number topology, right?
Material presence is essential for the propagation of sound. Does this mean that sound waves can travel interstellar distances at longer wavelengths due to the presence of celestial bodies in the universe?
The exposure dose rate at a distance of 1 m from a soil sample contaminated with 137Cs is 80 µR/s. Considering the source as a point source, estimate the specific activity of the 137Cs contained in the soil if the mass of the sample is 0.4 kg. How can I calculate it?
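A minimal sketch of the standard point-source estimate, assuming the textbook relation X = Γ·A/r² with an exposure-rate constant for 137Cs of Γ ≈ 3.24 R·cm²/(h·mCi) (a value assumed here; check it against your own reference tables):

```python
# Point-source estimate: X = Gamma * A / r^2  =>  A = X * r^2 / Gamma.
# Gamma for Cs-137 is assumed as ~3.24 R*cm^2/(h*mCi); verify against
# your course tables before relying on the number.
X = 80e-6 * 3600      # 80 uR/s converted to R/h
r = 100.0             # 1 m expressed in cm
gamma = 3.24          # exposure-rate constant, R*cm^2/(h*mCi) (assumed)
m = 0.4               # sample mass, kg

A_mCi = X * r**2 / gamma          # activity in mCi
A_Bq = A_mCi * 3.7e7              # 1 mCi = 3.7e7 Bq
print(f"activity ≈ {A_Bq:.2e} Bq, specific activity ≈ {A_Bq / m:.2e} Bq/kg")
```

With these inputs the estimate lands near 8e10 Bq/kg, which mainly shows how sensitive the result is to the assumed Γ.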
Irrational numbers are uncomputable with probability one. In that numerical sense, they do not belong to nature: animals cannot calculate them, nor can humans or machines.
But algebra can deal with irrational numbers. Algebra deals with unknowns and indeterminates, exactly.
This would mean that a simple bee or fish can do algebra? No, this means, given the simple expression of their brains, that a higher entity is able to command them to do algebra. The same for humans and machines. We must be able also to do quantum computing, and beyond, also that way.
Thus, no one (animals, humans, extraterrestrials in the NASA search, and machines) is limited by their expressions, and all obey a higher entity, commanding through a network from the top down -- which entity we call God, and Jesus called Father.
This means that God holds all the dice. That also means that we can learn by mimicking nature. Even a wasp can teach us the medicinal properties of a passion fruit flower to lower aggression. Animals, no surprise, can self-medicate, knowing no biology or chemistry.
There is, then, no "personal" sense of algebra. It just is a combination of arithmetic operations. There is no "algebra in my sense" -- there is only one sense, the one mathematical sense that has made sense physically, for ages. I do not feel free to change it, and did not.
But we can reveal new facets of it. In that, we have already revealed several exact algebraic expressions for irrational numbers. Of course, the task is not even enumerable, but it is worth compiling, for the weary traveler. Any suggestions are welcome.
As we all know, classical physics governs massive objects, while quantum physics deals with objects at smaller scales. Could a new framework satisfying both classical and quantum theories emerge in the future?
Hello Everyone,
I am able to successfully run scf.in using pw.x, but while proceeding with the calculations using thermo_pw.x the following error occurs:
Error in routine c_bands (1):
too many bands are not converged
I have already tried increasing ecut and ecutrho, decreasing conv_thr, reducing mixing_beta, reducing k-points, and changing the pseudopotential,
but none of them fixes the issue.
Someone who has faced this error in thermo_pw, please guide me.
Thanks,
Dr. Abhinav Nag
Which software is best for making high-quality graphs? Origin or Excel? Thank you
I am going to build a setup for generating and manipulating time-bin qubits. I want to know: what is the easiest or most common experimental setup for generating time-bin qubits?
Please share your comments and references with me.
thanks
How long does it take for a journal indexed in the "Emerging Sources Citation Index" to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?
I am trying to plot and analyze the difference (or similarity) between the paths of two spherical pendulums over time. I have Cartesian (X/Y/Z) coordinates from an accelerometer/gyroscope attached to a weight on a string.
If I want to compare the paths of two pendulums, such as a spherical pendulum with 5 pounds of weight and another with 15 pounds, how can I analyze this? I hope to determine how closely the paths match over time.
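One simple approach, sketched below under the assumption that each recording gives time stamps plus (x, y, z) positions: resample both paths onto a common time base and compute the RMS distance between corresponding points (dynamic time warping is an alternative if the two pendulums drift out of phase).

```python
import numpy as np

def rms_path_distance(t1, xyz1, t2, xyz2, n=1000):
    """RMS distance between two 3D paths over their overlapping time span.

    t1, t2 : increasing time stamps (s); xyz1, xyz2 : (N, 3) position arrays.
    """
    t0, t_end = max(t1[0], t2[0]), min(t1[-1], t2[-1])
    t = np.linspace(t0, t_end, n)
    # Linearly interpolate each axis onto the shared time base.
    p1 = np.column_stack([np.interp(t, t1, xyz1[:, k]) for k in range(3)])
    p2 = np.column_stack([np.interp(t, t2, xyz2[:, k]) for k in range(3)])
    return np.sqrt(np.mean(np.sum((p1 - p2) ** 2, axis=1)))
```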
Thanks in advance.
Dear fellow mathematicians,
Using a computational engine such as Wolfram Alpha, I am able to obtain a numerical expression. However, I need a symbolic expression. How can I do that?
I need the expression of the coefficients of this series.
x^2*csc(x)*csch(x)
where csc is the cosecant (1/sin) and csch the hyperbolic cosecant.
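For a symbolic answer, a computer algebra system can produce the Taylor coefficients directly. A minimal SymPy sketch (the expected output noted below is what the expansion should give; verify on your own machine):

```python
import sympy as sp

x = sp.symbols('x')
expr = x**2 * sp.csc(x) * sp.csch(x)   # x^2 / (sin(x) * sinh(x))
print(sp.series(expr, x, 0, 12))
# Expected: 1 + x**4/90 + 13*x**8/113400 + O(x**12)
# Only powers x^(4k) appear, since sin(x)*sinh(x) contains only
# terms of degree 4k + 2.
```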
Thank you for your help.
Using BoltzTraP and Quantum ESPRESSO I was able to calculate the electronic part of the thermal conductivity, but I am still struggling with the phononic part.
I tried ShengBTE, but that demands a good computational facility, and right now I do not have such a workstation. Kindly suggest some other tool that could be useful in this regard.
Thanks,
Dr Abhinav Nag
I'm getting repeatedly negative open-circuit potentials (OCP) vs. an Ag/AgCl reference electrode for some electrodes during OCP vs. time measurements using an electrochemical workstation. What is the interpretation of a negative open-circuit potential? Moreover, I have noticed that it gets more negative on illumination. What is the reason behind this? Are there any references? Please help.
Dear Sirs,
Below I give some very dubious speculations and recent theoretical articles about the question. Maybe they will promote some discussion.
1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also a part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter described by some more general laws? Can a physical law, as information, transform into energy and mass?
2.) Besides the above logical approach, one can come to the same question another way. Let us consider the transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information, or some physical laws, disappear. For example, a free particle has a momentum but not a position. The magnetic moment of a nucleus has a projection on the external magnetic field direction, but the transverse projection does not exist. So we cannot say that a nuclear magnetic moment moves around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.
One can hypothesize that if information is equivalent to some very small mass or energy (e.g., as shown in the next item), then it may be that some information or physical laws are lost, e.g., for an electron having extremely low mass. This conjecture agrees with the fact that objects with mass much greater than the proton's are described by classical Newtonian physics.
But one can raise an objection to the above view: a photon has no rest mass and, e.g., the rest mass of the neutrino is extremely small. Despite this, they have spin and momentum like an electron, and this spin and momentum information is not lost. Moreover, the photon energy for long EM waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.
But there is possibly a solution to the above problem. A photon moves at light speed (neutrino speed is very near light speed), which is why the physical information cannot be detached and escape from the photon (information propagates at light speed).
3.) Searching the internet, I have found recent articles by Melvin M. Vopson which propose a mass-energy-information equivalence principle and its experimental verification.
As far as I know, this experimental verification has not yet been done.
I would be grateful to hear your view on this subject.
How can we calculate the number of dimensions of a discrete space if we only have a complete scheme of all its points and the possible transitions between them (i.e., data about the adjacency of points)? Such a scheme can be very confusing and far from the clear two- or three-dimensional space we know. We can observe it, but it is stochastic and there are no regularities, fractals, or the like in its organization. We only have access to an array of points and the transitions between them.
Such computations can be resource-intensive, so I am especially looking for algorithms that can quickly approximate the dimensionality of the space based on the available data about the points of the space and their adjacencies.
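One quick, approximate approach, offered as a sketch: estimate an effective "growth dimension" from how the volume of BFS balls grows with radius, |B(r)| ~ r^d, and average the fitted exponent over several randomly chosen source nodes. The adjacency structure is assumed here to be a dict mapping each node to its neighbours.

```python
import numpy as np
from collections import deque

def ball_volumes(adj, src, rmax):
    """Cumulative sizes |B(r)| of BFS balls around src, for r = 0..rmax."""
    dist = {src: 0}
    counts = [1] + [0] * rmax
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == rmax:
            continue                     # stop expanding past radius rmax
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                counts[dist[v]] += 1
                q.append(v)
    return np.cumsum(counts)

def growth_dimension(adj, src, rmax=10):
    """Fit |B(r)| ~ r^d on a log-log scale; the slope is the estimate."""
    vol = ball_volumes(adj, src, rmax)
    r = np.arange(1, rmax + 1)
    slope, _ = np.polyfit(np.log(r), np.log(vol[1:]), 1)
    return slope
```

Keep rmax well below the graph diameter, since the fit flattens once a ball swallows most of the graph; averaging over many source nodes smooths out local irregularities.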
I would be glad if you could help me navigate in dimensions of spaces in my computer model :-)
Have these particles been observed in predicted places?
For example, have scientists ever noticed the creation of energy and pair particles from nothing in the Large Electron-Positron Collider, the Large Hadron Collider at CERN, the Tevatron at Fermilab, or other particle accelerators since the late 1930s? The answer is no. In fact, no report of observing such particles by the highly sensitive sensors used in all accelerators has been mentioned.
Moreover, according to one interpretation of the uncertainty principle, abundant charged and uncharged virtual particles should continuously whiz inside the storage rings of all particle accelerators. Scientists and engineers make sure that they maintain ultra-high vacuum at close to absolute zero temperature in the travelling path of the accelerating particles, because otherwise even residual gas molecules deflect, attach to, or ionize any particle they encounter; but there has not been any concern or any report of undesirable collisions with so-called virtual particles in any accelerator.
It would have been absolutely useless to create ultra-high vacuum, a pressure of about 10^-14 bar, throughout the travel path of the particles if the vacuum chambers were seething with particle/antiparticle or matter/antimatter pairs. If there were such a phenomenon, there would have been significant background effects as a result of the collision and scattering of the beam of accelerating particles off the supposed bubbling of virtual particles created in vacuum. This process is readily available for examination, in comparison to the totally out-of-reach Hawking radiation, which is considered to be a real phenomenon that will be eating away the supposed black holes of the universe in the very distant future.
For related issues and arguments, see
Consider the two propositions of the Kalam cosmological argument:
1. Everything that begins to exist has a cause.
2. The universe began to exist.
Both are based on assuming full knowledge of whatever exists in the world, which is obviously not totally true. Even big bang cosmology relies on a primordial seed whose origin and characteristics science has no idea of.
The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.
Your comments are much appreciated.
Dear all,
after a quite long project, I have coded up a Python 3D relativistic GPU-based PIC solver, which is not too bad at doing some stuff (calculating 10,000 time steps with up to 1 million cells, after which I run out of process memory, in just a few hours).
Since I really want to make it publicly available on GitHub, I have also thought about writing a paper on it. Do you think this work is worthy of being published? And if so, what journal should I aim for?
Cheers
Sergey
When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: "Why energy instead of some other quantity". When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (that I supposedly had already learned before studying this topic) but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity." Until somebody figures out how to do that, we have to accept the postulate as a postulate instead of a derived conclusion. The question that I am asking 5 decades later is, has somebody figured it out yet? I'm not an expert on statistical mechanics so I hope that answers can be simple enough to be understood by people that are not experts.
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9 km at 1310 nm (see attached calculation worksheet), so it should be possible to observe interference fringes with my setup.
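For reference, a minimal sketch of the usual coherence-length estimate (one common convention for a Lorentzian lineshape; the ~10 kHz linewidth below is an assumed value chosen to reproduce the 9 km figure, not a datasheet number):

```python
import math

c = 3.0e8          # speed of light, m/s
dnu = 10e3         # assumed laser linewidth, Hz (check the DFB datasheet)
L_c = c / (math.pi * dnu)   # Lorentzian-lineshape convention
print(f"L_c ≈ {L_c / 1000:.1f} km")   # ≈ 9.5 km for a 10 kHz linewidth
```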
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310 nm light and viewed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several seconds per frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light from the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
During AFM imaging, the tip does the raster scanning in the xy-axes and deflects in the z-axis due to the topographical changes on the surface being imaged. The height adjustments made by the piezo at every point on the surface during the scanning are recorded to reconstruct a 3D topographical image. How does the laser beam remain on the tip while the tip moves all over the surface? Isn't the optics responsible for directing the laser beam onto the cantilever static inside the scanner, or does it move in sync with the tip? How is it that only the z-signal is affected by the topography while the xy-signal of the QPD is not affected by the movement of the tip?
Or, in other words, why is the QPD signal affected only by the bending and twisting of the cantilever and not by its translation?
I'm searching for a good collaborator or a research group that might want to tackle an interesting problem involving the relationship between quantum dots generating nanoparticle clusters and their DNA/proteins corral. This relationship is encapsulated by geometric proximity, that is I'm looking for someone who might know how quantum mechanics impacts something like these nanoparticles, such as how close a nanoparticle is to another nanoparticle or a protein and whether sized clusters form. Ping me if you're in the bio sciences, computational biology, chemistry, biology or physical sciences and think you might be able to shed some light on the above.
I do recognise that there’s a well-known problem (hard though it is) of establishing how consciousness emerges or can be accounted for in physical processes. But I can’t at all agree that there’s a naturalistic, absolute hard problem of consciousness, because it’s an incoherent concept.
Nobody (at least nobody with a clue) supposes that neurophysiology can explain a qualitative difference in the way you and I experience the content of my music mix playing quietly in the background, or see the light reflect off a rainbow, or any of the other ways in which our qualitative experience differs from that of other living organisms. To suppose that just because you don't know the mechanisms of the experience in your own head you may deny its existence in somebody else's is bizarre and reductionist.
Construct an imaginary metaphor of a magical, wizardly, thing-maker consciousness and you haven't explained the qualitative data there either. It is still the question of how consciousness comes into the world, whether any magical things happen or whether there is anybody there at all. To suppose a separate, inexplicable, mysterious, magic ingredient does no explanatory good, does not solve the hard problem, and does not explain the evidence. All such arguments for a separate consciousness-substance, be it magic nonsense or a magic substance, reduce the hard problem of explaining the thisness of consciousness to the very same hard problem of explaining how consciousness arises in the first place.
If you identify the hard problem entirely with the mechanism through which the feeling-of-redness arises, or "the feeling of the future in an invariant past", or anything else you allude to, then you have just traded one way of asking a very simple question for the wrong approach. The question is: how do the millions of biological chunks and sub-systems interact with one another and integrate information over time and space? The senses of sight, sound, touch and smell all raise a "hard problem" of projection and of categories beyond the reliable inputs, because the signals from a single kind of examination allow different people in the know to interpret an external reality quite differently. But the "hard problem" isn't WHY we can process those signals at all, or make sense of the signals that come out the other end; that is just the default condition of our very real neurological apparatus. The "humanness" of that experience is an entirely benign, apparent phenomenon, just as water's polar nature is an entirely benign, apparent property.
For me the cardinal point is to reckon with how we perceive our own subjective experience via multi-sensory data input, both direct and indirect, in our waking experience. And at the very least you have to be wrong, or qualified immensely, if you think it is not merely the interaction between the general anatomy, organisation, information processing and output of your brain and all subjective processes that makes personal conclusions then appear as relevant claims about reality.
P.S. I don't think evolution throws up any magical consciousness either, in its petri-dish experiments or in the novel subjectivities it comes up with sometimes. So I'd like to challenge that viewpoint, particularly in terms of our understanding of the nuances.
1) Can the existence of an aether be compatible with local Lorentz invariance?
2) Can classical rigid bodies in translation be studied in this framework?
By changing the synchronization condition of the clocks of inertial frames, the answer to 1) and 2) seems to be affirmative. This synchronization clearly violates global Lorentz symmetry, but it preserves Lorentz symmetry in the vicinity of each point of flat spacetime.
Christian Corda showed in 2019 that this effect of clock synchronization is a necessary condition to explain the Mössbauer rotor experiment (Honorable Mention at the Gravity Research Foundation 2018). In fact, it can easily be shown that it is a necessary condition for applying the Lorentz transformation to any experiment involving high-velocity particles traveling between two distant points (including the linear Sagnac effect).
---------------
We may consider the time of a clock placed at an arbitrary coordinate x to be t, and the time of a clock placed at an arbitrary coordinate xP to be tP. Let the offset (t − tP) between the two clocks be:
1) (t − tP) = v(x − xP)/c²
where (t − tP) is the so-called Sagnac correction. If we write γ for the Lorentz factor of v and insert 1) into the time-like component of the Lorentz transformation, T = γ(t − vx/c²), we get:
2) T = γ(tP − vxP/c²)
On the other hand, if we assume that the origins coincide, x = X = 0 at time tP = 0, we may write down the space-like component of the Lorentz transformation as:
3) X = γ(x − vtP)
Assuming that both clocks are placed at the same point x = xP, inserting x = xP, X = XP, T = TP into 2) and 3) yields:
4) XP = γ(xP − vtP)
5) TP = γ(tP − vxP/c²)
which is the local Lorentz transformation for an event happening at point P. On the other hand, if the distance between x and xP is different from 0 and xP is placed at the origin of coordinates, we may insert xP = 0 into 2) and 3) to get:
6) X = γ(x − vtP)
7) T = γtP
which is a change of coordinates that:
- Is compatible with GPS simultaneity.
- Is compatible with the Sagnac effect. This effect can be explained in a very straightforward manner without the need of using GR or the Langevin coordinates.
- Is compatible with the existence of relativistic extended rigid bodies in translation, using the classical definition of rigidity instead of Born's definition.
- Can be applied to solve the 2 problems of the preprint below.
- Is compatible with all experimental corroborations of SR: aberration of light, Ives-Stilwell experiment, Hafele-Keating experiment, ...
Thus, we may conclude that, considering the synchronization condition 1):
a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a unique single clock.
b) The Lorentz invariance is broken when we use two clocks to measure time intervals over long displacements (eqs. 6-7).
c) We need to consider the frame with respect to which we must define the velocity v of the synchronization condition (eq 1). This frame has v = 0 and it plays the role of an absolute preferred frame.
a), b) and c) suggest that the Thomas precession is a local effect that cannot manifest over long displacements.
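For readers who want to check the algebra above, a small SymPy sketch: substituting the synchronization offset (1) into the time-like Lorentz component should reproduce eq. (2).

```python
import sympy as sp

t_P, x, x_P, v, c = sp.symbols('t_P x x_P v c', positive=True)
g = 1 / sp.sqrt(1 - v**2 / c**2)        # Lorentz factor γ

t = t_P + v * (x - x_P) / c**2          # eq. (1) solved for t
T = g * (t - v * x / c**2)              # time-like Lorentz component
print(sp.simplify(T - g * (t_P - v * x_P / c**2)))   # prints 0, i.e. eq. (2)
```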
More information in:
The above question emerges from a parallel session [1] on the basis of two examples:
1. Experimental data [2] that apparently indicate the validity of Mach's Principle stay out of the discussion once the mainstream consensus declares Mach to be out; see also the appended PDF files.
2. The negative outcome of gravitational wave experiments [3] apparently does not affect the mainstream acceptance of claimed discoveries.
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the temperature of the food obtained was quite good (i.e., close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's temperature reading suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
As I suspected the background temperature of the hot pan could be the reason for the error, I made a hole in a piece of paper and put it in front of the camera's lens, to ensure that the camera could only see the food through the hole and not the hot pan any more. The food temperature obtained by the camera was then correct again, but it goes wrong again if I take the paper out.
I would appreciate any explanation of this phenomenon, and a solution, from either physics or optics.
Eighty years after Chadwick discovered the neutron, physicists today still cannot agree on how long the neutron lives. Measurements of the neutron lifetime have achieved the 0.1% level of precision (~1 s). However, results from several recent experiments are up to 7 s lower than the (pre-2010) Particle Data Group (PDG) value. Experiments using the trap technique yield lifetime results lower than those using the beam technique. The PDG urges the community to resolve this discrepancy, now at 6.5 sigma.
I think the reason is that the "trapped p" method did not count the number of protons in the decay reaction (n → p + e + ν̄e + γ). As a result, the number of decayed neutrons obtained was low. This affected the measurement of the neutron lifetime. Do you agree with me?
Hello everyone,
I did a nanoindentation experiment:
one photoresist with 3 different layer thicknesses.
My results show that the photoresist is harder when the layer is thicker.
I can't find the reason in the literature.
Can anyone please explain why it is like that?
Is there any literature on this?
best regards
chiko
Dear all
Hope you are doing well!
What are the best books in Materials Science and Engineering (basic and advanced)? Moreover, what are the best skills (or materials-related topics) that materials scientists have to develop and acquire?
Thanks in advance
^_^
I use a Fujikura CT-30 cleaver for PCF cleaving for supercontinuum generation. Initially it seemed to work fine, as I could get high coupling efficiency (70-80%) into the 3.2 µm core of the PCF. However, after some time (several hours) I notice that the coupling efficiency decreases drastically, and when I inspect the PCF end face with an IR scope, I can see a bright shine on the PCF end facet, which may be an indication that the end face is damaged. I also want to mention that the setup is well protected from dust, so there is no chance of dust contaminating the fiber facet.
Please suggest what should be done to get an optimal cleave; shall I use a different cleaver (please suggest one), or are there other things to consider?
Thanks
If so, experimental results and related theory might also be helpful ...
Forgive some of my ignorance in the math of thermodynamics and heat exchange; my background is heavier in chemistry and I could use some help.
The project is to keep about 70 L of water in an aquarium at 17 °C when the ambient temperature in the room is 22 °C. The original build had the following setup:
(Top to Bottom):
1. 80x80x38 mm fan running at 5700 RPM and 76 CFM
2. 80x80x20 mm copper-fin heatsink (0.5 mm fin thickness, 40 fins, 3.5 mm base thickness)
3. 2x TEC1-12706, hot side towards the heatsink, cold side down towards the water block (Imax: 6.4 A, Umax: 15.4 V, Qmax (dT=0): 63 W, dTmax = 68 °C)
4. 40x80x12 mm water block centered under the heatsink (surrounded by 20 mm styrofoam on the sides and 10 mm styrofoam at the back)
5. ~26 mm thick styrofoam
6. Wood base
• All power is supplied by an AC/DC converter (12V 20A 240W)
• Power to the system is managed by a W1209 Temperature Control Module (Relay)
• Water flow is achieved by a 4L/min water pump (slowest I can find)
This setup is only cooling the water to 18 °C at night, and it slowly creeps up to 18.7 °C across the day, so I know the setup is not keeping up with the heat load (also worth noting that the output temperature is about 1.5-2 °C cooler than the input temperature to the water block). My hypothesis is that the water does not have enough time in the water block for good thermal exchange, or that the cooler is not creating enough of a dT in the water block to absorb the amount of heat needed in that cycle time. The fact that the aluminum water block has a 5x lower specific heat than water is what makes me think either more contact time or a greater dT is needed.
My thought was to swap out the water block for a 40x200x12 mm water block and increase the number of Peltier coolers from 2 to 5, going with the TEC1-12715 (Imax: 15.6 A, Umax: 15.4 V, Qmax (dT=0): 150 W, dTmax = 68 °C).
This is where I am lost in the weeds and need help; I am lacking the intellectual horsepower for this. Will using the 5 in parallel do the trick without maxing out the converter? Or will using 5 in series still produce the needed cooling effect with the lower dT associated with the lower amperage? Or is there another setup someone can recommend? I am open to feedback and direction; thank you in advance.
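A rough power-budget check, sketched below from the datasheet maxima quoted above (real operating points depend on drive level and dT, so treat this only as a feasibility screen):

```python
supply_v, supply_a = 12.0, 20.0     # AC/DC converter rating
n_tec, imax = 5, 15.6               # proposed module count, TEC1-12715 Imax (A)

parallel_a = n_tec * imax           # all modules wired across the full 12 V
series_v = supply_v / n_tec         # 12 V divided across 5 modules in series

print(f"supply limit: {supply_a:.0f} A ({supply_v * supply_a:.0f} W)")
print(f"parallel: up to ≈ {parallel_a:.0f} A demand -> far over the 20 A limit")
print(f"series: ≈ {series_v:.1f} V per module -> a small fraction of Umax, "
      f"so each module pumps only a little heat")
```

On these numbers, neither extreme wiring works on a 240 W supply: parallel can demand roughly 78 A, while series starves each module at about 2.4 V.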
In a hypothetical situation I have two wires: one's cross-section is a circle, the other's a star. Both have the same cross-sectional area and the same length. What are the differences in their electrical properties?
Have any experiments been done looking into this?
Also, what would happen if a wire had a conical shape along its length?
Having worked on the spacetime wave theory for some time, and having recently published a preprint paper on the Space Rest Frame, I realised the full implications, which are quite shocking in a way.
The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:
The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:
Preprint Space Rest Frame (Dec 2021)
This then implies that the proton, which is a looped wave in spacetime of three wavelengths, is actually a looped wave taking place in the space rest frame, and that we are moving at somewhere between 150 km/s and 350 km/s relative to that frame of reference.
This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.
I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/s. Of course this doesn't happen, because there is zero resistance to uniform motion through space and momentum is conserved.
It still seems quite a shocking realisation.
Richard
For those that have the seventh printing of Goldstein's "Classical Mechanics" so I don't have to write any equations here. The Lagrangian for electromagnetic fields (expressed in terms of scalar and vector potentials) for a given charge density and current density that creates the fields is the spatial volume integral of the Lagrangian density listed in Goldstein's book as Eq. (11-65) (page 366 in my edition of the book). Goldstein then considers the case (page 369 in my edition of the book) in which the charges and currents are carried by point charges. The charge density (for example) is taken to be a Dirac delta function of the spatial coordinates. This is utilized in the evaluation of one of the integrals used to construct the Lagrangian. This integral is the spatial volume integral of charge density multiplied by the scalar potential. What is giving me trouble is as follows.
In the discussion below, a "particle" refers to an object that is small in some sense but has a greater-than-zero size. It becomes a point as a limiting case as the size shrinks to zero. In order for the charge density of a particle, regardless of how small the particle is, to be represented by a delta function in the volume integral of charge density multiplied by potential, it is necessary for the potential to be nearly constant over distances equal to the particle size. This is true (when the particle is sufficiently small) for external potentials evaluated at the location of the particle of interest, where the external potential as seen by the particle of interest is defined to be the potential created by all particles except the particle of interest. However, total potential, which includes the potential created by the particle of interest, is not slowly varying over the dimensions of the particle of interest regardless of how small the particle is. The charge density cannot be represented by a delta function in the integral of charge density times potential, when the potential is total potential, regardless of how small the particle is. If we imagine the particles to be charged marbles (greater than zero size and having finite charge densities) the potential that should be multiplying the charge density in the integral is total potential. As the marble size shrinks to zero the potential is still total potential and the marble charge density cannot be represented by a delta function. Yet textbooks do use this representation, as if the potential is external potential instead of total potential. How do we justify replacing total potential with external potential in this integral?
I won't be surprised if the answers get into the issues of self forces (the forces producing the recoil of a particle from its own emitted electromagnetic radiation). I am happy with using the simple textbook approach and ignoring self forces if some justification can be given for replacing total potential with external potential. But without that justification being given, I don't see how the textbooks reach the conclusions they reach with or without self forces being ignored.
The 2023 ranking is available through the following link:
QS ranking is relatively familiar in scientific circles. It ranks universities based on the following criteria:
1- Academic Reputation
2- Employer Reputation
3- Citations per Faculty
4- Faculty Student Ratio
5- International Students Ratio
6- International Faculty Ratio
7- International Research Network
8- Employment Outcomes
- Are these parameters enough to measure the superiority of a university?
- What other factors should also be taken into account?
Please share your personal experience with these criteria.
Hello,
I would like to know how to measure a solid's surface temperature with fluid on it. The fluid will react with the solid surface and generate heat, so the temperature between the solid and the fluid is the crucial data I need. Here I can think of only two options:
1. Thermocouple: use a FLAT surface thermocouple and attach it to the surface of the solid to measure the data. For example, I can use Thin Leaf-Type Thermocouples for Layered Surfaces (omega.com) or Cement-On Polyimide Fast Response Surface Thermocouples (omega.com).
Pros: fast response, high accuracy
Cons: cannot guarantee that the measured data accurately represents the surface temperature
2. Infrared temperature sensor:
Pros: directly measure the surface temperature, high accuracy
Cons: slow response, the data might be affected by the fluid
Is there any other way to do the measurement or any suggestions?
Thank you very much in advance to anyone who answers this question.
Lee's disc apparatus is designed to find the thermal conductivity of bad conductors. But I have a doubt, since soil has the following properties:
1. It consists of irregularly shaped aggregates
2. Non-uniform distribution of particles
3. Presence of voids
Can we use Lee's disc method to find the thermal conductivity of soil?
I am interested to know the opinion of experts in this field.
LIGO and cooperating institutions evidently determine the distance r of their hypothetical gravitational wave sources on the basis of a 1/r dependence of the related spatial strain; see page 9 of the reference below. Fall-off by 1/r in fact applies in the case of the gravitational potential Vg = -GM/r of a single source. Shouldn't any additional effect of a binary system with internal separation s, just for geometrical reasons, be additionally reduced by s/r?
How should we represent our observations of a physical process, and further investigate it by conducting experiments or numerical models? What are the basics one needs to focus on? Technically, how should one think? The first thing is understanding: you should be there! If we are modeling a flow, we have to be the flow; if representing, say, a ball, you have to be the ball, to better understand it! What else?
Complex systems are becoming one of the most useful tools in the description of observed natural phenomena across all scientific disciplines. You are welcome to share with us hot topics from your own area of research.
Nowadays, no one can encompass all scientific disciplines. Hence, it would be useful to all of us to know hot topics from various scientific fields.
Discussion about various methods and approaches applied to describe emergent behavior, self-organization, self-repair, multiscale phenomena, and other phenomena observed in complex systems are highly encouraged.
In my previous question I suggested using the ResearchGate platform to launch large-scale spatio-temporal comparative research.
The following is a description of one of the problems of pressing importance for the humanitarian and educational sectors.
For the last several decades there has been a gradual loss of quality in education at all levels. We can observe that our universities are progressively turning into entertainment institutions, where student parties, musical and sport activities are valued higher than studying in a library or working on painstaking calculations.
In 1998 Vladimir Arnold (1937-2010), one of the greatest mathematicians of our time, in his article "Mathematical Innumeracy Scarier Than Inquisition Fires" (newspaper "Izvestia", Moscow), stated that the power players didn't need all people to be able to think and analyze, only "cogs in machines" serving their interests and business processes. He also wrote that American students didn't know how to sum simple fractions: most of them add the numerators and denominators of one fraction to those of the other, so that, by their understanding, 1/2 + 1/3 equals 2/5. Vladimir Arnold pointed out that with this kind of education students can't think, prove and reason; they are easy to turn into a crowd, easily manipulated by cunning politicians, because they don't usually understand the causes and effects of political acts. I would add, for my part, that this process is quite understandable and expected, because computers, the internet, and the consumer-society lifestyle (with its continuous rush for more and newer commodities, which we are induced to regard as healthy behavior) have wiped out young people's skills in elementary logic and their eagerness to study hard. And this is exactly what consumer economics and its bosses, the owners of international businesses and local magnates, need.
I recall a funny incident that happened in Kharkov (Ukraine). A biology student was asked what "two squared" was. He answered that it was the number 2 inscribed in a square.
The level and the scale of the educational and intellectual decline described can easily be measured with the help of the ResearchGate platform. It would be appropriate to test students' logical abilities, instead of the guess-the-answer tests which have taken over all the universities within the framework of the Bologna Process, with its victorious march across the territories of the former Soviet states. Many people remember that the Soviet education system was one of the best in the world. I therefore suggest the following tests:
1. In Nikolai Bogdanov-Belsky's (1868-1945) painting "Oral Accounting at Rachinsky's People's School" (1895) one can see boys in a village school at a mental arithmetic lesson. Their teacher, Sergei Rachinsky (1833-1902), the school headmaster and also a professor at Moscow University in the 1860s, offered the children the following exercise to calculate mentally (http://commons.wikimedia.org/wiki/File:BogdanovBelsky_UstnySchet.jpg?uselang=ru):
(10×10 + 11×11 + 12×12 + 13×13 + 14×14) / 365 = ?
(There is no provision here on ResearchGate to write squares of numbers; that is why I have written them as multiplications.)
Nineteenth-century peasant children in bast shoes ("lapti") were able to solve such a task mentally. This year, in September, this very exercise was given to senior high school pupils and first-year university students majoring in Physics and Technology in Kyiv (the capital of Ukraine), and no one could solve it.
2. An exercise of the famous mathematician Johann Carl Friedrich Gauss (1777-1855): calculate mentally the sum of the first one hundred positive integers:
1+2+3+4+…+100 = ?
3. Albrecht Dürer’s (1471-1528) magic square (http://en.wikipedia.org/wiki/Magic_square)
The German Renaissance artist was amazed by the mathematical properties of the magic square, which were first described in Europe in Spanish (1280s) and Italian (14th century) manuscripts. He used the image of the square as a detail in his engraving Melencolia I (1514), and included the numbers 15 and 14 in his magic square:
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
Ask your students to find regularities in this magic square. In case this exercise seems hard, you can offer them the Lo Shu square (c. 2200 BC), a simpler magic square of order three (the minimal non-trivial case):
4 9 2
3 5 7
8 1 6
4. Summing up of simple fractions.
According to Vladimir Arnold's popular articles, in the era of computers and the Internet this test is becoming an absolute obstacle for more and more students all over the world. Any exercises of the following type will be appropriate here:
3/7 + 7/3 = ? and 5/6 + 7/15 = ?
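For convenience, a quick exact-arithmetic check of the numeric tests above (answers included, so keep it away from the students):

```python
from fractions import Fraction

rachinsky = Fraction(sum(n * n for n in range(10, 15)), 365)  # test 1
gauss = sum(range(1, 101))                                    # test 2
f1 = Fraction(3, 7) + Fraction(7, 3)                          # test 4
f2 = Fraction(5, 6) + Fraction(7, 15)
print(rachinsky, gauss, f1, f2)   # 2, 5050, 58/21, 13/10
```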
I think these four tests will be enough. All of them test logical skills, unlike the tests created under the Bologna Process.
Dear colleagues, professors and teachers,
You can offer these tasks to the students at your colleges and universities and share the results here, on the ResearchGate platform, so that we can all see the landscape of the wretchedness and misery resulting from neoliberal economics and globalization.
Time is what permits things to happen. However, as a physical quantity, time must emerge as a consequence of some physical law (?). But how could time emerge as a consequence of something if "consequence", "causation", implies the existence of time?
A long copper plate is moved at a speed v along its length as suggested in the attachment. A magnetic field exists perpendicular to the plate in a cylindrical region cutting the plate in a circular region. A and B are two fixed conducting brushes which maintain contact with the plate as the plate slides past them. These brushes are connected by a conducting wire.
Is there a current in the wire? In which direction?
My understanding of the significance of Bell's inequality in quantum mechanics (QM) is as follows. The assumption of hidden variables implies an inequality called Bell's inequality. This inequality is violated not only by conventional QM theory but also by experimental data designed to test the prediction (the experimental data agree with conventional QM theory). This implies that the hidden variable assumption is wrong. But from reading Bell's paper it looks to me that the assumption proven wrong is hidden variables (without saying local or otherwise), while people smarter than me say that the assumption proven wrong is local hidden variables. I don't understand why it is only local hidden variables, instead of just hidden variables, that was proven wrong. Can somebody explain this?
I am trying to plot 4 traces on one polar plot using the "hold on" command; the function I am using is "polar2". When plotting the 2nd or 3rd trace, it seems to create a new axis (maybe), and the trace extends beyond the plot.
Is there any other way to plot 2-3 traces in the same polar plot without using the "hold on" command in Matlab?
If you have substance 1 with density x and boiling point y, and substance 2 with density z and boiling point a, and we make a 90:10 mixture of 1 and 2, what are the resulting density and boiling point?
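For the density part, a common first approximation, offered only as a sketch: if the two liquids mix ideally (volumes are additive, which real mixtures often violate), then with mass fractions w1 and w2 the mixture density obeys 1/ρ_mix = w1/ρ1 + w2/ρ2. The boiling point is not a weighted average at all; for ideal solutions it follows from Raoult's law, and for real ones it has to be measured or modeled.

```python
def ideal_mix_density(w1, rho1, rho2):
    """Density of an ideal binary mixture, given mass fraction w1 of component 1.

    Assumes volumes are additive (ideal mixing) - an approximation.
    """
    w2 = 1.0 - w1
    return 1.0 / (w1 / rho1 + w2 / rho2)

# Hypothetical example: a 90/10 mixture by mass of liquids with
# densities 1.00 and 0.79 g/mL.
print(f"{ideal_mix_density(0.9, 1.00, 0.79):.3f} g/mL")   # ≈ 0.974 g/mL
```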
Over the last few months, I have come across several posts on social media where scientists/researchers and even universities are flaunting their ranking as per the AD Scientific Index https://www.adscientificindex.com/.
When I clicked on the website, I was surprised to discover that they charge a fee (~24-30 USD) to add an individual researcher's information.
So I started wondering if it's another scam of ‘predatory’ rankings.
What's your opinion in this regard?
As you know, a peristaltic pump has a constant flow direction but a varying flow rate. I was wondering if it is possible to produce a constant (steady) flow rate using a peristaltic pump.
A thin, circular disc of radius R is made up of a conducting material. A charge Q is given to it, which spreads on the two surfaces.
Will the surface charge density be uniform? If not, where will it be minimum?
Imagine a row of golf balls in a straight line with a distance of one metre between each golf ball. This we call row A. Then there is a second row of golf balls (row B) placed right next to the golf balls in row A. We can think of the row A of golf balls as marking of distance measurements within the inertial frame of reference corresponding to row A (frame A). Similarly the golf balls in row B mark the distance measurements in frame B. Both rows are lined up in the x direction.
Now all the golf balls in row B simultaneously start to accelerate in the x direction until they reach a steady velocity v, at which point the golf balls in row B stop accelerating. It is clear that the golf balls in row B will all pass the individual golf balls of row A at exactly the same instant when viewed from frame A. It must also be the case that the golf balls in the rows pass each other simultaneously when viewed from frame B.
So we can see that the distance measurements in the frame of B are the same as the distance measurements in row A. The row of golf balls is in the x direction so this suggests that the coordinate transformation between frame A and frame B should be x - vt.
This contradicts the Lorentz transformation equation for the x direction which is part of the standard SR theory.
If we were to replace the golf balls in row B with measuring rods of length one metre, then in order to match the observations of the Michelson-Morley experiment we would conclude that measuring rods must in general experience length contraction relative to a unique frame of reference. So this thought experiment suggests that we need to maintain distances as invariant between moving frames of reference, while noting that moving objects experience length contraction.
This also implies the existence of a unique frame of reference against which the velocity v is measured.
Preprint Space Rest Frame (March 2022)
I would be interested to see if the thought experiment can be explained within standard Special Relativity while retaining the Lorentz transformation equations.
Richard
The human dynasty is in its millennial era. We identified fire from the friction of stones, and now we are interacting with nanorobots. Once it was a dream to fly, but today all the Premier League, La Liga and Serie A players travel by airplane at least twice a week, thanks to the unprecedented growth of human science. BUT ONE THING IS STILL MISSING FROM THE GLITTERING PROFILE OF THE HUMAN DYNASTY.
Although we have gravitation theory, Maxwell's theory of electromagnetism, Max Planck's quantum mechanics, Einstein's relativity theory and, most recently, Stephen Hawking's big bang concepts... why can we still not travel back and forth in our lives?
Any possibilities in future?
If not,
why, in terms of mathematics, physics and theology?
How much do the existence of advanced laboratories, appropriate financial budgets, and other support for research affect the quality and quantity of a researcher's work?
The formula for sin(a)sin(b) is a very well-known high-school formula. But is there a more general version for the product of m sine functions?
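There is a closed form: writing each sine as complex exponentials turns the product into a signed sum over all sign choices of the angles. A sketch of the identity (easy to check against the m = 2 case):

```latex
% sin a = (e^{ia} - e^{-ia}) / (2i), hence
\prod_{k=1}^{m} \sin a_k
  = \frac{1}{(2i)^{m}} \sum_{\varepsilon \in \{\pm 1\}^{m}}
    \Big( \prod_{k=1}^{m} \varepsilon_k \Big)\,
    e^{\, i (\varepsilon_1 a_1 + \cdots + \varepsilon_m a_m)} .
% For m = 2 this collapses to the high-school identity
% \sin a \sin b = \tfrac{1}{2}\left[ \cos(a - b) - \cos(a + b) \right].
```

Grouping each term with its complex conjugate turns the sum into cosines of ε1·a1 + ... + εm·am when m is even and sines when m is odd, which is the usual product-to-sum generalization.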