Science topics: Physics

Physics - Science topic

Physics-related research discussions
Questions related to Physics
  • asked a question related to Physics
Question
1 answer
Two of the most interesting human spacecraft of our time, the SpaceX Starship and the NASA Gateway lunar space station, are soon to come together.
Relevant answer
Answer
New Program Manager taking over Gateway.
  • asked a question related to Physics
Question
50 answers
This question discusses the YES answer: we don't need √-1.
The complex numbers, whether built from rational numbers (the Gauss set G) or from the mathematical real numbers (the set R), are artificial. Can they be avoided?
Math cannot be in one's head, as [1] explains.
To realize the YES answer, one must advance beyond current knowledge, which may sound strange. But every path in a complex space must begin and end in a rational number -- anything that can be measured or produced must be a rational number. Complex numbers are not needed physically, as numbers; in algebra, however, they are useful.
The YES answer can improve the efficiency of using numbers in calculations, although it is less advantageous in algebraic calculations, as in the well-known Gauss identity.
For example, in the FFT [2] there is no need to compute complex functions or trigonometric functions.
This may lead to further improvement in computation time over the FFT, which already provides orders-of-magnitude improvement in computation time over the FT with mathematical real numbers. The FT and the FFT are revealed to be equivalent -- see [2].
I detail this in [3] for comments. Maybe one can build a faster FFT (an FFFT)?
The answer may also bear on further advances in quantum computing.
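As a neutral, standard illustration (independent of the preprint's claims) that the DFT of real data can be evaluated with purely real arithmetic, here is a minimal sketch; the function name is mine:

```python
import math

def real_dft(x):
    """DFT of a real sequence using only real arithmetic.
    Returns (re, im) lists of coefficients instead of complex numbers."""
    n = len(x)
    re, im = [], []
    for k in range(n):
        s_re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        s_im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        re.append(s_re)
        im.append(s_im)
    return re, im

# A pure cosine at bin 1 concentrates its energy at k = 1 (and k = n-1):
x = [math.cos(2 * math.pi * t / 8) for t in range(8)]
re, im = real_dft(x)
```

This O(n^2) form is not fast; it only shows that the arithmetic itself can stay real-valued, which is also how standard real-input FFT variants operate internally.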
[2] Preprint: FT = FFT
Relevant answer
Answer
The form z = a + ib is called the rectangular coordinate form of a complex number, which humans have fancied to exist for more than 500 years.
We are showing that this is an illusion; see [1].
Quantum mechanics does not, contrary to popular belief, include anything imaginary. All results and probabilities are rational numbers, as we have used and published (see ResearchGate) since 1978; see [1].
Everything that is measured or can be constructed is then a rational number, a member of the set Q.
This connects in a 1:1 mapping (isomorphism) to the set Z. From there, one can take out the negative numbers and 0 and, through an easy isomorphism, connect to the set N and to the set B^n, where B = {0, 1}.
We reach the domain of digital computers in B = {0, 1}. That is all a digital computer needs to process -- the set B = {0, 1}, addition, and encoding; see [1].
The numbers satisfy 0^n = 0 and 1^n = 1. There is no need to calculate trigonometric functions, analysis (calculus), or other functions. Mathematics can end in middle school. We can all follow computers!
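The chain of encodings described above (Z to N to B^n) can be sketched concretely; this is the standard zig-zag bijection and fixed-width binary encoding, with function names of my own choosing:

```python
def z_to_n(z):
    """Standard zig-zag bijection Z -> N: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * z if z >= 0 else -2 * z - 1

def n_to_bits(n, width):
    """Encode a natural number as a fixed-width word over B = {0, 1}."""
    return format(n, f"0{width}b")

# Every integer reaches the binary domain B^n of digital computers:
print(n_to_bits(z_to_n(-3), 4))  # prints 0101
```

Each step is invertible, which is what makes the reduction from integers down to bit strings lossless.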
REFERENCES
[1] Search online.
  • asked a question related to Physics
Question
6 answers
Irrational numbers are uncomputable with probability one. In that numerical sense, they do not belong to nature: animals cannot calculate them, nor can humans or machines.
But algebra can deal with irrational numbers. Algebra deals with unknowns and indeterminates, exactly.
Would this mean that a simple bee or fish can do algebra? No; it means, given the simple expression of their brains, that a higher entity is able to command them to do algebra. The same goes for humans and machines. We must also be able to do quantum computing, and beyond, in that way.
Thus, no one (animals, humans, extraterrestrials in the NASA search, and machines) is limited by their expressions, and all obey a higher entity, commanding through a network from the top down -- which entity we call God, and Jesus called Father.
This means that God holds all the dice. That also means that we can learn by mimicking nature. Even a wasp can teach us the medicinal properties of a passion fruit flower to lower aggression. Animals, no surprise, can self-medicate, knowing no biology or chemistry.
There is, then, no “personal” sense of algebra. It is just a combination of arithmetic operations. There is no “algebra in my sense” -- there is only one sense, the one mathematical sense that has made sense physically for ages. I do not feel free to change it, and did not.
But we can reveal new facets of it. In that, we have already revealed several exact algebraic expressions for irrational numbers. Of course, the task is not even enumerable, but it is worth compiling, for the weary traveler. Any suggestions are welcome.
Relevant answer
Answer
@Ed Gerck
Irrational numbers are uncomputable with probability one
==========
My deepest apologies, but I have read your Answer dated December 14, 2022 in the FLT thread https://www.researchgate.net/post/Are-there-other-pieces-of-information-about-Victory-Road-to-FLT#view=641367549777ccc70c026256/234 .
There was a link to your own thread given by you. This thread gives your erroneous statement from the very beginning, namely: "Irrational numbers are uncomputable with probability one".
Please agree, Dear Professor Ed G., that any irrational number is calculated with 100% accuracy, with probability 1, for any p-1 digits if you write down p digits. Thus, if you write the root of 2 with one digit before the point and three digits after it, you have sqrt(2) = 1.414..., i.e., you can consider that you have written 4 digits. At the same time, 100% accuracy with probability 1 is ensured for 3 digits, i.e., 1.41, and so on, for any number of digits...
Speaking of some kind of “full notation” with all the digits, as you would like to see it: such a representation of irrational numbers is not possible.
If you point out my mistake to me, I will be grateful.
Greetings,
SPK
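The digit-by-digit point in this exchange can be illustrated with exact integer arithmetic: the first k decimal digits of sqrt(2) are computable deterministically, even though the full expansion never terminates. A minimal sketch (names are mine):

```python
from math import isqrt  # exact integer square root, Python 3.8+

def sqrt2_digits(k):
    """Integer whose decimal digits are the first digits of sqrt(2),
    with k digits after the decimal point, via exact integer sqrt:
    isqrt(2 * 10**(2k)) = floor(sqrt(2) * 10**k)."""
    return isqrt(2 * 10 ** (2 * k))

print(sqrt2_digits(3))  # prints 1414, i.e. sqrt(2) ~ 1.414
```

Every prefix is exact; only a hypothetical "full notation" of all infinitely many digits is out of reach, which is the point under dispute above.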
  • asked a question related to Physics
Question
1 answer
For those interested: A revamp of the Internet is under way to cover shortcomings interfering with expansion into the Solar System and beyond.
While extremely rugged, a few assumptions are built into the current Internet technologies, including short traversal times at light speeds and a relatively stable population of nodes to hop through to the destination, which are true on Earth. The present Internet fails otherwise.
Mars is minutes away at light speed, with a very spotty supply of relay nodes along the way.
A new technology, DTN (Delay/Disruption-Tolerant Networking), utilizes a new protocol, the Bundle Protocol, which is overlaid on top of existing space-networking protocols or IP protocols. This has been tested on the ISS and is actively being placed in other spacecraft by NASA and partner agencies. It is part of what is being tested at the Moon first, for deployment to Mars.
Bundle Protocol was architected by Vint Cerf, a father of TCP/IP, and others.
Relevant answer
Answer
Dear Karl Sipfle,
thank you for explaining the importance of Delay-Tolerant Networking (DTN) in such an interesting way. Thanks to the ideas underlying the concept of DTN, the Interplanetary Internet can be built.
For more about this, see the following IETF documents concerning the Interplanetary Internet: RFC 5050, RFC 9171, RFC 9172, RFC 9173, and RFC 9174. These are available at: https://www.rfc-editor.org/rfc-index.html
Best regards
Anatol Badach
RFC 5050: Bundle Protocol Specification, Nov 2007
RFC 9171: Bundle Protocol Version 7, Jan 2022
RFC 9172: Bundle Protocol Security (BPSec), Jan 2022
RFC 9173: Default Security Contexts for Bundle Protocol Security (BPSec), Jan 2022
RFC 9174: Delay-Tolerant Networking TCP Convergence-Layer Protocol Version 4, Jan 2022
  • asked a question related to Physics
Question
18 answers
One finds action in photon emission and particle paths. What is it when its magnitude is expressed by a Planck Constant for emission versus when it may be expressed as a stationary integral value (least – saddle – greatest) along a path? The units match. Action is recognized as a valuable concept and I would like to appreciate its nature in Nature. (Struggling against the “energy is quantized” error has distracted me from the character of the above inquiry in the past.)
Brief aside: Max Planck and Albert Einstein emphasized energy as discrete amounts for their blackbody radiation and photoelectric studies, but they always added at a specific frequency! Energy without that secondary condition is not quantized! I emphasize this because it has been frustrating for decades and it interferes with the awareness that it is action that is quantized! Now, granted that it is irrelevant to “grind out useful results” activity, which also is valuable, it is relevant to comprehending the nature of Nature, thus this post.
The existence of the Planck Constant has been a mystery since Max Planck found it necessary to make emissions discrete in order to formulate blackbody radiation mathematically. He assumed, for each frequency, discrete emission energies such that the action of the energy radiated at that frequency equals the Planck Constant. Action had been used to find the equations of motion for almost two centuries by then. Is a stationary integral of action along a path equal to an integral number of Planck Constants? Is the underlying nature in these several instances of mathematical physics the same? What is that nature; how can this be? If the natures are different, how is each?
Happy Trails, Len
P.S. My English gets weird and succinct sometimes trying to escape standard ruts in meanings: how is each? is a question that directs one to explain, i.e., to describe the processes as they occur – causes, interactions, events, etc., I hope.
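The distinction drawn above -- energy quantized only at a given frequency, action quantized outright -- can be checked arithmetically with the exact SI value of h (a trivial sketch; the function name is mine):

```python
h = 6.62607015e-34  # Planck constant, J*s (exact by the 2019 SI definition)

def photon_energy(f_hz):
    """Energy of one quantum at frequency f: E = h * f."""
    return h * f_hz

# Different frequencies give different quantum energies...
e_red = photon_energy(4.3e14)   # red light
e_blue = photon_energy(7.5e14)  # blue light
# ...but the action per cycle, E / f, is the same h in every case:
assert abs(e_red / 4.3e14 - h) < 1e-45
assert abs(e_blue / 7.5e14 - h) < 1e-45
```

The secondary condition "at a specific frequency" is exactly what the division by f removes: E varies, E/f does not.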
Relevant answer
Answer
My last comment was meant as some kind of “introduction” to the idea that we humans can only reflect what is “inside” the system (the universe). Or, to choose the opposite point of view: the description of the nature of reality is always available and “shows itself” through the way everyone expresses their intuition about the subject.
The consequence is that humans have ideas about reality -- e.g., a period in the evolution of the universe when there was no matter around -- that can be expressed in various ways. Some will use the conceptual framework of classical physics, some the mathematical approach, others modern field theory, etc. In science we quarrel about the “correct description”, and it seems that the correct description is one that is independent of personal and cultural (including religious) preferences. But this idea is a bit “too nice”.
In practice we find that many people make a mess of their opinions too. They sometimes mix different points of view within one description (creating paradoxes), with the consequence that it becomes impossible to agree with other people about the subject. But this is not intentional, even if people have the opinion that they were “forced” by their own ego. So if we consider the possibility of reflecting “the nature of reality” through the mysterious input of the whole universe, it “smells” a bit unfair too.
Why do some people have so much less trouble coming to a balanced opinion than other, less fortunate people? In a culture ruled by the “game of competition” this is a frustrating topic for many scientists, just because the only plausible answer is that “it is like it is”. On the other hand, if we know that we are some kind of “radio” receiving the “broadcast of the universe”, we can examine our own reproduction and ask whether it is a convincing reproduction or whether something else is affecting the interpretation.
Science is full of different points of view and it is difficult to believe that we personally are “in balance” in proportion to everyone else. So I read your last comment with more than one point of view to understand the connections with other ideas. I think it is in line with the general ideas about these topics.
With kind regards, Sydney
  • asked a question related to Physics
Question
29 answers
1. Does consciousness exist?
2. If so, what is Consciousness and what are its nature and mechanisms?
3. I personally think consciousness is the subjective [and metaphysical] being that (if exists) feels and experiences the cognitive procedures (at least the explicit ones). I think that at some ambiguous abstract and fuzzy border (on an inward metaphysical continuum), cognition ends and consciousness begins. Or maybe cognition does not end, but consciousness is added to it. I don't know if my opinion is correct. What are potential overlaps and differences between consciousness and cognition?
4. Do Freudian "Unconscious mind" or "Subconscious mind" [or their modern counterpart, the hidden observer] have a place in consciousness models? I personally believe these items as well are a part of that "subjective being" (which experiences cognitive procedures); therefore they as well are a part of consciousness. However, in this case we would have unconscious consciousness, which sounds (at least superficially) self-contradictory. But numerous practices indicate the existence of such more hidden layers to consciousness. What do you think about something like an "unconscious consciousness"?
5. What is the nature of Altered States of Consciousness?
Relevant answer
Answer
Vahid Rakhshan subliminal is not meant to replace unconscious mind or subconscious.
Subliminal is just very fast brain processing of sensory and other signals so as to prepare them as useful mental contents. It's a speed thing. Mostly I think of subliminal as sensory completion -- things more easily done with tiny brain networks than in the sensory organs, things that may use input from other aspects of the context or from mental contents; but the subliminal processing itself is not part of consciousness or memory, while its results are part of the sensory feed to memory and perception.
There is no subconscious mind nor any unconscious perception, all perception is at the same level -> memory reactivation from mental contents.
The frontal cortex is certainly very interesting. All the other cortices have a single dominant sensory or motor (expressive) function, but the frontal cortex senses what the other cortices are doing. In a sense, it is like a retinal image of the mind itself, interlinking this totality factor with the rest of memory.
In another thread you brought up the term feelings - usually that term relates to emotions or body feelings, but I like to use it with regard to the frontal cortex that assesses the overall feelings of all the other zones of consciousness.
By the way the C-T loops are hard wired and they regularly fire at alpha or theta rhythm upon any incoming sensory signals as Bottom Up resonance, and upon any perception signals as Top Down resonance. In both cases the resonance enables memory formation including the signals. Top Down resonance can be choked off by hypothalamic suppression in the thalamus, and we often experience that every time when we sniff the air, which clears the mind of persisting perception just enough to allow the smell of something to flood our consciousness. It is very important to be able to do it without sniffing and the frontal cortex activates the hypothalamic suppression of ongoing perception while learning new things so we can get a very clear picture of the new thing without prejudice.
I am not sure what other loops you are talking about, but certainly there are cognitive loops - obsessive loops - but that is about the mental contents shuffling around in circles - much slower than the hard-wired 1/10th of a second C-T loops that I mention repeatedly (I am a looper).
  • asked a question related to Physics
Question
6 answers
The topic considered here is the Klein-Gordon equation governing some scalar field amplitude, with the field amplitude defined by the property of being a solution of this equation. The original Klein-Gordon equation does not contain any gauge potentials, but a modified version of the equation (also called the Klein-Gordon equation in some books, for reasons that I do not understand) does contain a gauge potential. This gauge potential is often represented in the literature by the symbol Ai (a four-component vector).
Textbooks show that if a suitable transformation is applied to the field amplitude to produce a transformed field amplitude, and another suitable transformation is applied to the gauge potential to produce a transformed gauge potential, then the Lagrangian is the same function of the transformed quantities as it is of the original quantities. With these transformations collectively called a gauge transformation, we say that the Lagrangian is invariant under a gauge transformation. This statement has the appearance of being justification for the use of Noether’s theorem to derive a conservation law. However, it seems to me that this appearance is an illusion.
If the field amplitude and gauge potential are both transformed, then they are both treated the same way as each other in Noether’s theorem. In particular, the theorem requires both to be solutions of their respective Lagrange equations. The Lagrange equation for the field amplitude is the Klein-Gordon equation (the version that includes the gauge potential). The textbook that I am studying does not discuss this, but I worked out the Lagrange equations for the gauge potential and determined that the solution is not in general zero (zero is needed to make the Klein-Gordon equation with gauge potential reduce to the original equation). The field amplitude is required in textbooks to be a solution of its Lagrange equation (the Klein-Gordon equation).
However, the textbook that I am studying has not explained to me that the gauge potential is required to be a solution of its Lagrange equations. If this requirement is not imposed, I don’t see how any conclusions can be reached via Noether’s theorem. Is there a way to justify the use of Noether’s theorem without requiring the gauge potential to satisfy its Lagrange equation? Or, is the gauge potential required to satisfy that equation without my textbook telling me about that?
Relevant answer
Answer
"Noether's thorem simply states that, if the equations of motion for the scalars are invariant under a continuous group of transformations, then there exists a conserved current. That's all."
If you review the derivation of Noether's theorem you will find another requirement. The varied functions must be extremals, i.e., satisfy Lagrange's equations, i.e., the equations of motion. Transformation properties alone, with no other requirements, will make the Lagrangian an invariant. But to obtain a conserved current we need not only invariance of the Lagrangian but also that the varied functions satisfy Lagrange's equations.
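For concreteness, the gauge transformation under discussion can be written out in the standard textbook form for the charged scalar field; sign conventions vary by book, so this is one common choice, not necessarily the questioner's textbook:

```latex
% Lagrangian of the complex scalar field coupled to a gauge potential
\mathcal{L} = (D_\mu \phi)^{*}(D^\mu \phi) - m^2 \phi^{*}\phi,
\qquad D_\mu = \partial_\mu + i e A_\mu .
% Gauge transformation leaving \mathcal{L} invariant:
\phi \;\to\; e^{-ie\alpha(x)}\,\phi,
\qquad A_\mu \;\to\; A_\mu + \partial_\mu \alpha(x).
% Noether current, conserved when \phi satisfies its Lagrange equation:
j^\mu = i e \left( \phi^{*} D^\mu \phi - \phi\,(D^\mu \phi)^{*} \right),
\qquad \partial_\mu j^\mu = 0 .
```

The last line makes the point of the answer explicit: the conservation statement holds on solutions of the field's equation of motion, not from the transformation properties alone.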
  • asked a question related to Physics
Question
31 answers
For a plate capacitor the force on the plates can be calculated by calculating the change of field energy caused by an infinitesimal displacement. Looks like a very first principle. However, thinking about a displacement of a charge in a homogeneous electric field seems to explain no force at all...
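The virtual-displacement method described in the question can be checked numerically for an ideal parallel-plate capacitor held at fixed charge; a minimal sketch under that idealization (all names and parameter values below are mine):

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def field_energy(q, area, gap):
    """Stored energy of an ideal plate capacitor at fixed charge:
    U = Q^2 * d / (2 * eps0 * A)."""
    return q**2 * gap / (2 * EPS0 * area)

def force_numeric(q, area, gap, dx=1e-9):
    """Magnitude of the plate force from the virtual-displacement method,
    F = dU/dx at constant charge (energy grows as the gap widens)."""
    return (field_energy(q, area, gap + dx) - field_energy(q, area, gap)) / dx

q, area, gap = 1e-8, 1e-2, 1e-3          # 10 nC, 100 cm^2, 1 mm
f_num = force_numeric(q, area, gap)
f_exact = q**2 / (2 * EPS0 * area)       # closed form, independent of the gap
```

Because U is linear in the gap at fixed charge, the finite difference reproduces the closed form Q^2/(2*eps0*A) exactly; the factor 1/2 (versus the naive Q*E) reflects that each plate sits in the field of the other plate only, which is one way to phrase the puzzle in the question.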
Relevant answer
Answer
sorry to say, but I do not understand anything of your theory. Furthermore, I see no explanation of the force on a charge in an E field.
Best regards
Jörn
  • asked a question related to Physics
Question
4 answers
Start with a purely classical case to define vocabulary. A charged marble (marble instead of a point particle to avoid some singularities) is exposed to an external electromagnetic (E&M) field. "External" means that the field is created by all charges and currents in the universe except the marble. The marble is small enough for the external field to be regarded as uniform within the marble's interior. The external field causes the marble to accelerate and that acceleration causes the marble to create its own E&M field. The recoil of the marble from the momentum carried by its own field is the self force. (One piece of the charged marble exerts an E&M force on another piece and, contrary to Newton's assumption of equal but opposite reactions, these forces do not cancel with each other if the emitted radiation carries away energy and momentum.) The self force can be neglected if the energy carried by the marble's field is negligible compared to the work done by the external field on the marble. Stated another way, the self force can be neglected if and only if the energy carried by the marble's field is negligible compared to the change in the marble's energy. Also, an analysis that neglects self force is one in which the total force on the marble is taken to be the force produced by external fields alone. The key points from this paragraph are the last two sentences repeated below:
(A) An analysis that neglects self force is one in which the total force on the marble is taken to be the force produced by external fields alone.
(B) The self force can be neglected if and only if the energy carried by the marble's field is negligible compared to the change in the marble's energy.
Now consider the semi-classical quantum mechanical (QM) treatment. The marble is now a particle and is treated by QM (Schrodinger's equation) but its environment is an E&M field treated as a classical field (Maxwell's equations). Schrodinger's equation is the QM analog for the equation of force on the particle and, at least in the textbooks I studied from, the E&M field is taken to be the external field. Therefore, from Item (A) above, I do not expect this analysis to predict a self force. However, my expectation is inconsistent with a conclusion from this analysis. The conclusion, regarding induced emission, is that the energy of a photon emitted by the particle is equal to all of the energy lost by the particle. We conclude from Item (B) above that the self force is profoundly significant.
My problem is that the analysis starts with assumptions (the field is entirely external in Schrodinger's equation) that should exclude a self force, and then reaches a conclusion (change in particle energy is carried by its own emitted photon) that implies a self force. Is there a way to reconcile this apparent contradiction?
  • asked a question related to Physics
Question
17 answers
It would be very interesting to obtain a database of responses on this question :
What are the links between Algebra & Number Theory and Physics ?
Therefore, I hope to get your answers and points of view. You can also share documents and titles related to the topic of this question.
I recently read a very interesting preprint by the mathematician and physicist Matilde Marcolli: Number Theory in Physics. In this very interesting preprint, she gives several interesting relations between Number Theory and theoretical physics. You can find this preprint on her profile.
Relevant answer
Answer
Hi Mohamed,
I have a long-standing interest in this subject. Good luck.
  • asked a question related to Physics
Question
5 answers
Please spread the word: Folding at Home (https://foldingathome.org/) is an extremely powerful supercomputer composed of thousands of home computers around the world. It tries to simulate protein folding to Fight diseases. We can increase its power even further by simply running its small program on our computers and donating the spare (already unused and wasted) capacity of our computers to their supercomputation.
After all, a great part of our work (which is surfing the web, writing texts and stuff, communicating, etc.) never needs more than a tiny percent of the huge capacity of our modern CPUs and GPUs. So it would be very helpful if we could donate the rest of their capacity [that is currently going to waste] to such "distributed supercomputer" projects and help find cures for diseases.
The program runs at a very low priority in the background and uses some of the capacity of our computers. By default, it is set to use the least amount of EXCESS (already wasted) computational power. It is very easy to use. But if someone is interested in tweaking it, it can be configured too via both simple and advanced modes. For example, the program can be set to run only when the computer is idle (as the default mode) or even while working. It can be configured to work intensively or very mildly (as the default mode). The CPU or GPU can each be disabled or set to work only when the operating system is idle, independent of the other.
Please spread the word; for example, start by sharing this very post with your contacts.
Also give them feedback and suggestions to improve their software. Or directly contribute to their project.
Folding at Home's Forum: https://foldingforum.org/index.php
Folding at Home's GitHub: https://github.com/FoldingAtHome
Additionally, see other distributed supercomputers used for fighting disease:
Relevant answer
Answer
Vahid Rakhshan I will definitely spread the word about this amazing initiative. It's great to know that we can contribute to such a noble cause by simply utilizing our excess computer power. Thank you for bringing this opportunity to my attention. Let's join hands in making a difference in the fight against diseases.
  • asked a question related to Physics
Question
20 answers
Dear colleagues. This is not a matter of mathematical questions, fields, and the like, which I do not understand, but of the following:
As a researcher in philosophy of science, I have read more than once - from qualified sources - and repeated that, unlike Newtonian mechanics, which assumes that macroscopic physical space is absolute, has three dimensions and is separated from absolute time, for general relativity space is a four-dimensional spacetime, and that time is relative to the position of the observer (due to the influence of gravity).
Now I find that wrong, having heard that, for the theory, time and the perception of time are different things. Specifically, that in the famous Einsteinian example (a mental or imaginary experiment) of twins, the one who is longer-lived when they meet again has perceived a greater passage of time. And if what has been different is the perception of time, and not time, then that would mean that objectively both have always been at the same point on the "arrow of time".
And it would mean that I have confused time, as an objective or "objective" dimension of spacetime, with one's perception of it. That is, if there were no observer, spacetime would still have its "time" dimension.
It follows that it is false that for general relativity time is relative (because it is a dimension of spacetime, which is not relative). Now, if this is so, how can the theory predict the - albeit hypothetical - existence of wormholes?
There is something I fail to understand: does the theory of relativity really differentiate time from the perception that an observer may have of it, and the example of twins refers to the latter?
If spacetime is only one - there are not several independent spacetimes - and it has objective existence, including its "time" dimension, how is it possible to travel - theoretically, according to the theory - through a wormhole to another part of it that has a different temporality (what we call past or future)?
Since it does not make sense to me to interpret that one would not travel to the future but to the perception of the future. And I rule out that Einstein has confused time with the perception of it.
Thank you.
Relevant answer
Answer
Buenos dias Sergio,
questioning the relationship between "objective" ("real") physical time (as in, e.g., the Einsteinian concept of space-time) versus individually perceived time (i.e., time as perceived by conscious agents such as organisms) is, indeed, intriguing. Let me just raise a few thoughts here in addition to what has already been pointed out:
Whether space-time is, indeed, "real" and fundamental is questioned by scholars such as cognitive scientist Donald Hoffman (University of California at Irvine), arguing that space-time may merely be a "headset" through which we perceive and interact with a more fundamental reality. This line of argument - in my view - essentially constitutes a modern-day incarnation of Plato's classic cave analogy.
Irrespective of whether you buy into such "headset"/"matrix" arguments, physicists are constantly on the lookout for structures and processes that may indeed prove physically more fundamental than spacetime. Here you may follow, for instance, the work of physicist Nima Arkani-Hamed (Institute for Advanced Study, Princeton). Whether, at the end of the day, "time" will turn out to be something "objective" and/or absolute, or an emergent property of deeper structures, and whether it will relativistically stay deeply intertwined with space or be "torn away" from it at a deeper, yet unknown level of physics/reality, no one knows.
For the time being, though, I do think it is important to distinguish carefully between, on the one hand, physical measurements and their interpretation in models and theories, in which we use time as part of four-dimensional space-time very successfully, and, on the other, the intricacies and complexities of an as yet hardly understood (but surely very limited!) human consciousness and perception of "time".
So, is time simply an illusion or a fundamental trait of some form of "reality"? We just don't know...
PS: You may also take interest in this discussion we had last fall: https://www.researchgate.net/post/Has_time_existed_forever .
Best,
Julius
  • asked a question related to Physics
Question
16 answers
Our answer is YES. A new question (at https://www.researchgate.net/post/If_RQ_what_are_the_consequences/1) has been answered affirmatively, confirming the YES answer to this question, with wider evidence in 12+ areas.
This question continues the same question from 3 years ago, with the same name, considering newly published evidence and results. The previous text of the question may be useful and is available here:
We now can provably include DDF [1] -- the differentiation of discontinuous functions. This is not shaky, but advances knowledge. The quantum principle of Niels Bohr in physics, "all states at once", meets mathematics and quantum computing.
Without infinitesimals or epsilon-deltas, DDF is possible, allowing quantum computing [1] between discrete states, and a faster FFT [2]. The Problem of Closure was made clear in [1].
Although Weyl's training was in these mythical aspects -- the infinitesimal transformation and Lie algebra [4] -- he saw an application of groups in the many-electron atom, which must have a finite number of equations. The discrete Weyl-Heisenberg group comes from these discrete observations and does not use infinitesimal transformations at all, having finite-dimensional representations. Similarly, this is like someone trained in traditional infinitesimal calculus starting to use rational numbers in calculus, with DDF [1]. The same prior training applies in both fields, from a "continuous" field to a discrete, quantum field. In that sense, R~Q*; the results are the same formulas -- but now absolutely accurate.
New results have been made public [1-3], confirming the advantages of the YES answer, since this question was first asked 3 years ago. All computation is revealed to be exact in modular arithmetic, there is NO concept of approximation, no "environmental noise" when using it.
As a consequence of the facts in [1], no one can formalize the field of non-standard analysis in the use of infinitesimals in a consistent and complete way, or Cauchy epsilon-deltas, against [1], although these may have been claimed and chalk spilled.
Some branches of mathematics will have to change. New results are promised in quantum mechanics and quantum computing.
This question is closed, affirming the YES answer.
REFERENCES
[2] Preprint: FT = FFT
[3]
Relevant answer
Answer
This question follows a new standard in RG, where every opinion is respected, and yet research can be developed.
This is explained in:
  • asked a question related to Physics
Question
3 answers
Do transverse and longitudinal plasmons fall under localized surface plasmons? What is the significant difference between them? To what extent will this affect fabricated silver-nanoparticle-based electronic devices? Is surface plasmon propagation different from transverse and longitudinal plasmons?
Relevant answer
Answer
In an anisotropic particle such as a nanorod, transverse plasmonic resonance involves the oscillation of the free charges along the particle's short axis (perpendicular to its long axis), while longitudinal plasmonic resonance involves the oscillation of the free charges along the long axis. Both are localized surface plasmon modes, each driven by the component of the external electric field along the respective axis.
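A rough quantitative feel for where a localized mode sits can be had from the dipolar (Fröhlich) condition Re ε(ω) = -2ε_m for a small sphere, with a lossless Drude model for the metal. This is only a sketch: the plasma energy ~9.0 eV for silver is an assumed round value, and interband transitions (ignored here) shift real silver nanospheres to roughly 400 nm.

```python
import math

def frohlich_resonance_ev(wp_ev, eps_medium=1.0):
    """Photon energy (eV) where a lossless Drude metal, eps(w) = 1 - (wp/w)**2,
    satisfies the small-sphere dipole condition eps(w) = -2*eps_medium."""
    return wp_ev / math.sqrt(1.0 + 2.0 * eps_medium)

wp_silver_ev = 9.0                              # assumed bulk plasma energy of Ag, eV
e_res = frohlich_resonance_ev(wp_silver_ev)     # sphere in vacuum
wavelength_nm = 1239.84 / e_res                 # hc = 1239.84 eV*nm
print(f"resonance ~ {e_res:.2f} eV (~{wavelength_nm:.0f} nm)")
```

Embedding the particle in a denser medium (larger eps_medium) red-shifts the mode; for rods, the longitudinal mode red-shifts further as the depolarization factor replaces the factor 2, which is why the two resonances separate.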
  • asked a question related to Physics
Question
6 answers
Using BoltzTraP and Quantum ESPRESSO I was able to calculate the electronic part of the thermal conductivity, but I am still struggling with the phononic part.
I tried ShengBTE, but it demands good computational facilities and right now I do not have such a workstation. Kindly suggest some other tool that could be useful in this regard.
Thanks,
Dr Abhinav Nag
Relevant answer
Answer
@Abhinav Nag
The modified Debye-Callaway model can be used to calculate the thermal lattice conductivity. See, for example, DOI: 10.1016/j.jpcs.2022.111196
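The Debye-Callaway route suggested above is cheap enough to run on any machine. Below is a minimal sketch of the leading Callaway term (the normal-process correction term is omitted), with boundary, point-defect, and Umklapp scattering rates. All material parameters here are illustrative assumptions, not fitted to any real solid; in practice they are adjusted to experiment, as in the cited paper.

```python
import math

kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J*s

# Assumed, illustrative parameters (replace with fitted values for a real material)
v   = 5000.0    # sound velocity, m/s
thD = 400.0     # Debye temperature, K
L   = 1e-6      # grain size for boundary scattering, m
A   = 1e-45     # point-defect (Rayleigh) scattering strength, s^3
B   = 1e-19     # Umklapp prefactor, s/K

def kappa_lattice(T, n=2000):
    """Leading Debye-Callaway lattice thermal conductivity term, W/(m K)."""
    xmax = thD / T
    pref = kB / (2 * math.pi**2 * v) * (kB * T / hbar) ** 3

    def integrand(x):
        if x == 0.0:
            return 0.0                          # integrand vanishes smoothly at x=0
        w = x * kB * T / hbar                   # phonon angular frequency
        inv_tau = v / L + A * w**4 + B * w**2 * T * math.exp(-thD / (3 * T))
        ex = math.exp(x)
        return (1.0 / inv_tau) * x**4 * ex / (ex - 1.0) ** 2

    # composite Simpson rule on [0, xmax]; n must be even
    h = xmax / n
    s = integrand(0.0) + integrand(xmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return pref * s * h / 3.0

for T in (100, 300, 600):
    print(T, "K ->", round(kappa_lattice(T), 1), "W/(m K)")
```

With Umklapp scattering dominant above the Debye temperature, the computed kappa falls roughly as 1/T, the expected high-temperature trend.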
  • asked a question related to Physics
Question
31 answers
Finding a definition for time has challenged thinkers and philosophers. The direction of the arrow of time is questioned because many physical laws seem to be symmetrical in the forward and backward direction of time.
We can show that the arrow of time must be in the forward direction by considering light. The speed of light is always positive and distance is always positive so the direction of time must always be positive. We could define one second as the time it takes for light to travel approximately 300,000 km. Note that we have shown the arrow of time to be in a positive direction without reference to entropy.
So we are defining time in terms of distance and velocity. Philosophers might argue that we then have to define distance and velocity but these perhaps are less challenging to define than time.
So let's try to define time. Objects that exist within the universe have a state of movement, and the elapsed times that we observe result from an object being in a different position due to its velocity.
This definition works well considering a pendulum clock and an atomic clock. We can apply this definition to the rotation of the Earth and think of the elapsed time of one day as being the time for one complete rotation of the Earth.
The concept of time has been confused within physics by the ideas of quantum theory which imply the possibility of the backward direction of time and also by special relativity which implies that you cannot define a standard time throughout the universe. These problems are resolved when you consider light as a wave in the medium of space and this wave travels in the space rest frame.
Richard
Relevant answer
Answer
Time is life.
  • asked a question related to Physics
Question
69 answers
Our answer is YES. This question captured the reason of change: to help us improve. We, and mathematics, need to consider that reality is quantum [1-2], ontologically.
This affects both the microscopic (e.g., atoms) and the macroscopic (e.g., collective effects, like superconductivity, waves, and lasers).
Reality is thus not continuous, incremental, or happenstance.
That is why everything blocks, goes against, a change -- until it occurs, suddenly, taking everyone to a new and better level. This is History. It is not a surprise ... We are in a long evolution ...
As a consequence, tri-state, e.g., does not have to be used in hardware, just in design. Intel Corporation can realize this, and become more competitive. This is due to many factors, including 1^n = 1, and 0^n = 0, favoring Boolean sets in calculations.
This question is now CLOSED. Focusing on the discrete Weyl-Heisenberg group, as motivated by SN, this question has been expanded in a new question, where it was answered with YES in +12 areas:
[2]
Relevant answer
Answer
QM can have values unknown, but not uncertain. Likewise, RG questions. Please stay on topic, per question. Do not be uncertain yourself.
Opinions do not matter, every opinion is right and should be, therefore, not discussed.
But, facts? Mass is defined (not a choice or opinion) as the ratio of two absolutes: E/c^2. Then, mass is rest mass. There is no other mass.
This is consistent, which is the most that anyone can aspire. Not agreement, which depends on opinion. Science is not done by voting.
Everyone can, in our planet, reach consistency -- and the common basis is experiment, a fact. We know of other planets, and there consistency may be uncertain -- or ambivalent, and even obscure. A particle, there, may be defined, both, as the minimum amount of matter of a type, or the most amount of quantum particles of a type.
We can entertain such worlds in our minds, more or less formed by bodies of matter, and have fun with the consequences using physics. But, and here is my opinion (not lacking, but not imposing, objectivity), we all -- one day -- will be led to abandon matter. What will we find? That life goes on. The quantum jump exists. Nature is quantum.
  • asked a question related to Physics
Question
11 answers
Some researchers say that the pH value of the reaction medium affects the type of surface electrical charge on the adsorbent, and thus the adsorption and removal process: when the pH value increases, the overall surface charge on the adsorbent becomes negative and adsorption decreases, while when the pH value decreases, the surface charge becomes positive and adsorption increases.
Malkoc, E.; Nuhoglu, Y.; Abali, Y. (2006). “Cr(VI) Adsorption by Waste Acorn of Quercus ithaburensis in Fixed Beds: Prediction of Breakthrough Curves,” Chemical Engineering Journal, 119(1), pp. 61-68.
Relevant answer
Answer
At lower pH, adsorption is held back by competition from H+ ions (the proton effect), and at higher pH adsorption is hampered because the metal ions start to precipitate as metal hydroxides or oxides. So a pH value of roughly 4-6 looks good, and you should study the optimum pH for the removal of your particular metal, such as Cr.
By contrast, finding the breakthrough time in a column study is not as easy as in a batch study. Here you need to conduct many trials and also model the data with the Thomas, Adams-Bohart, and Yoon-Nelson models to obtain a much more accurate breakthrough curve (BTC).
Thank you.
  • asked a question related to Physics
Question
2 answers
Greeting,
When I try to remotely access the Scopus database by logging in with my institution ID, it keeps bringing me back to the Scopus preview. I have tried clearing the cache, reinstalling the browser, using another internet connection, etc., but none of it works. As you can see in the image, the Scopus preview keeps appearing.
Please help..
Relevant answer
Answer
To reach the Scopus document search module, you should use academic IPs. If your institute is listed in the Scopus database, you have permission to search documents in Scopus. It is not free of charge; your university must pay its share to Scopus to provide this service to its academic researchers.
  • asked a question related to Physics
Question
4 answers
If a string vibrates at 256 cycles per second, then counting 256 cycles is the measure of 1 second. The number is real because it measures time, and the number is arbitrary because it does not have to be 1 second that is used.
This establishes that the pitch is a point with the real number topology, right?
Relevant answer
Answer
To Mr Gerck
But pitch is continuous because between any two pitch values is an interval and between two intervals is a pitch value, so we have the real line minus finitely many points.
I am confused by the statement that real numbers are non-computable. Real numbers are analytic.
My idea here is that music theory must be a theory of real numbers rather than frequencies. I think you are saying there is no theory of Q.
  • asked a question related to Physics
Question
16 answers
Material presence is essential for propagation of sound. Does it mean that sound waves can travel interstellar distances at longer wavelengths due to the presence of celestial bodies in the universe?
Relevant answer
Answer
Huge energy bursts leave giant objects at very high speed and, because they cover long distances almost instantly, they immediately come under gravitational influence; this indirectly helps them travel interstellar distances, but not in all cases, and never without some medium. That, however, concerns how radio bursts cover large distances. Because there is no uniform distribution of mass and energy in all directions over all distances in the universe, any such possibility cancels itself out. The only question that remains is how sound is affected by gravity, and vice versa.
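A quick order-of-magnitude check shows why ordinary sound cannot propagate between the stars: a pressure wave needs a wavelength much longer than the molecular mean free path, and in the interstellar medium that path is astronomically long. The density and cross-section below are assumed round values for a warm interstellar medium, for illustration only.

```python
# Sound needs a collisional medium: wavelength >> molecular mean free path.
# Assumed, order-of-magnitude values for the warm interstellar medium:
n_ism = 1e6        # particle number density, m^-3
sigma = 1e-19      # collision cross-section, m^2 (~atomic size)

mfp = 1.0 / (n_ism * sigma)    # mean free path, m
light_year = 9.46e15           # m

print(f"mean free path ~ {mfp:.1e} m (~{mfp / light_year:.1e} light-years)")
# A "sound wave" would need a wavelength far beyond ~1e13 m, and even then
# collisions are far too rare to sustain a coherent pressure wave.
```

With these numbers the mean free path is about 10^13 m, roughly a thousandth of a light-year, so celestial bodies do not form anything like a continuous acoustic medium over interstellar distances.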
  • asked a question related to Physics
Question
3 answers
The exposure dose rate at a distance of 1 m from a soil sample contaminated with 137Cs is 80 µR/s. Considering the source as a point source, estimate the specific activity of 137Cs contained in the soil if the mass of the sample is 0.4 kg. How can i calculate it?
Relevant answer
Answer
If the 'specific' in the question refers to the mass of the sample (it might, it should...), then we want to know the specific activity (ie, activity per kg) that, when you've only 400g of the stuff, leads to the activity you state.
So - we have 80 µR/s from 0.4kg.
Q: What activity might we get from 1kg of the soil?
(knowing that twice as much soil leads to twice the dose rate at a given distance)
A: 80/0.4 = 200 µR/s/kg
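The scaling above gives an exposure rate per kilogram; to express the answer as a specific activity in becquerels per kilogram one also needs the exposure-rate constant of 137Cs. A minimal sketch follows, treating the commonly tabulated value Γ ≈ 0.33 R·m²/(h·Ci) and the ideal point-source geometry as assumptions:

```python
# Specific activity of a 137Cs point source from the exposure rate at 1 m.
GAMMA_R_M2_PER_H_CI = 0.33     # assumed exposure-rate constant of 137Cs, R*m^2/(h*Ci)
CI_TO_BQ = 3.7e10              # Bq per Ci

dose_rate_uR_s = 80.0          # measured exposure dose rate at d = 1 m
d = 1.0                        # source-to-detector distance, m
mass_kg = 0.4                  # sample mass

dose_rate_R_h = dose_rate_uR_s * 1e-6 * 3600.0            # 0.288 R/h
activity_Ci = dose_rate_R_h * d**2 / GAMMA_R_M2_PER_H_CI  # invert X = Gamma*A/d^2
activity_Bq = activity_Ci * CI_TO_BQ
specific_Bq_kg = activity_Bq / mass_kg

print(f"activity ~ {activity_Bq:.2e} Bq, specific activity ~ {specific_Bq_kg:.2e} Bq/kg")
```

With these assumptions the sample activity comes out near 3.2e10 Bq, i.e., a specific activity of roughly 8e10 Bq/kg; the exact figure depends on which tabulated Γ one adopts.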
  • asked a question related to Physics
Question
6 answers
As we all know, classical physics holds sway over massive objects, while quantum physics is about objects at a smaller scale. Could a new assumption satisfying both classical and quantum theories appear in the future?
Relevant answer
Answer
If one considers classical physics as the limiting case of large quantum numbers, one does not need any new assumptions.
  • asked a question related to Physics
Question
2 answers
Hello Everyone,
I am able to successfully run scf.in using pw.x, but while proceeding to the calculations done using thermo_pw.x the following error occurs:
Error in routine c_bands (1):
too many bands are not converged
I have already tried increasing ecut, ecutrho, decreasing conv_thr, reducing mixing beta, reducing k points and pseudopotential too.
but none of them are helpful to fix the issue.
Someone who has faced this error in thermo_pw please guide,
Thanks,
Dr. Abhinav Nag
Relevant answer
Answer
I must thank you, Roberto Sir, as I have learned many things about Quantum ESPRESSO through your answers, which helped me a lot in my PhD.
I was able to crack the problem by changing the pseudopotential. The problem appeared with an LDA-type pseudopotential, but when I used PBE pseudopotentials it worked fine.
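Besides switching pseudopotentials, c_bands convergence failures are often worked around by making the iterative diagonalization more robust in the &ELECTRONS namelist. The fragment below is only an illustrative starting point (the values are typical first guesses, not universal settings):

```
&ELECTRONS
    diagonalization  = 'cg'     ! slower but more robust than the default 'david'
    diago_full_acc   = .true.
    mixing_beta      = 0.3
    electron_maxstep = 200
/
```

Increasing nbnd in &SYSTEM, so that a few extra empty bands absorb the hardest-to-converge states, is another common remedy.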
  • asked a question related to Physics
Question
51 answers
Which software is best for making high-quality graphs? Origin or Excel? Thank you
Relevant answer
Answer
Origin
  • asked a question related to Physics
Question
2 answers
I am going to make a setup for generating and manipulating time bin qubits. So, I want to know what is the easiest or most common experimental setup for generating time bin qubits?
Please share your comments and references with me.
thanks
Relevant answer
Answer
The pump beam λ is split by a variable beam splitter (BS) into the two modes 1 and 2. The splitting ratio is adjusted by changing the distance between the two fibers using a micrometer screw. Each mode enters a non-linear periodically poled lithium niobate waveguide (ppLN), creating photon pairs via spontaneous parametric down-conversion. Cascaded dense wavelength division multiplexers (DWDM) separate and spectrally filter the down-converted photon pairs. Modes 1 and 2 (1' and 2') define a path-encoded qubit, leading to a two-qubit path-entangled state. Delay lines and polarization controllers (PC) are used to adjust the arrival time and polarization of each mode. b.) 50/50 beam splitters (BS_A, BS_B) and phases (φ_A, φ_B). Combined with single-photon detection, a projective measurement onto the path-entangled states is realized.
The experimental setup is described in the publication: "Scalable fiber integrated source for higher-dimensional path-entangled photonic quNits," Optics Express, April 2012, DOI: 10.1364/OE.20.016145 (also on arXiv).
  • asked a question related to Physics
Question
11 answers
How long does it take for a journal indexed in the "Emerging Sources Citation Index" to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?
Relevant answer
Answer
Clarivate announced that starting with 2023 ESCI-indexed journals will also be assigned an impact factor. See: https://clarivate.com/blog/clarivate-announces-changes-to-the-2023-journal-citation-reports-release/
  • asked a question related to Physics
Question
1 answer
I am trying to plot and analyze the difference (or similarity) between the paths of two spherical pendulums over time. I have Cartesian (X/Y/Z) coordinates from an accelerometer/gyroscope attached to a weight on a string.
If I want to compare the paths of two pendulums, such as a spherical pendulum with 5 pounds of weight and another with 15 pounds of weight, how can I analyze this? I hope to determine how closely the paths match over time.
Thanks in advance.
Relevant answer
Answer
Do you suspect that their behaviour will depart from damped simple harmonic motion?
Making one bob three times more massive than the other will (broadly) make its decay time constant longer by a comparable amount - the details will depend on the drag model you use for the bob (smooth surface, laminar flow; or rough and turbulent?)
If you've got the data, I'd fit an exponential decay to their amplitudes and show that their characteristic timescale varies by something like their masses.
<I assume these are pendulums in air at 1bar and 300K>
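The exponential-amplitude fit suggested above can be done with a log-linear least-squares regression, using only the standard library. The sketch below uses synthetic amplitude data with assumed decay constants (40 s and 120 s) standing in for the light and heavy bobs; with real data you would feed in the measured oscillation peak amplitudes instead.

```python
import math
import random

def fit_exp_decay(ts, amps):
    """Least-squares fit of A(t) = A0 * exp(-t/tau), via linear regression
    on log(A). Returns (A0, tau)."""
    ys = [math.log(a) for a in amps]
    n = len(ts)
    mx = sum(ts) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(ts, ys))
             / sum((x - mx) ** 2 for x in ts))
    return math.exp(my - slope * mx), -1.0 / slope

# Synthetic peak amplitudes for two bobs with different decay constants
random.seed(1)
ts = [i * 2.0 for i in range(30)]                # one amplitude sample every 2 s
light = [0.50 * math.exp(-t / 40.0) * (1 + 0.01 * random.gauss(0, 1)) for t in ts]
heavy = [0.50 * math.exp(-t / 120.0) * (1 + 0.01 * random.gauss(0, 1)) for t in ts]

_, tau_light = fit_exp_decay(ts, light)
_, tau_heavy = fit_exp_decay(ts, heavy)
print(f"tau_light ~ {tau_light:.1f} s, tau_heavy ~ {tau_heavy:.1f} s, "
      f"ratio ~ {tau_heavy / tau_light:.2f}")
```

If the heavier bob's fitted time constant comes out roughly three times the lighter one's, that is consistent with the mass-ratio argument above; large deviations would point to a different drag regime.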
  • asked a question related to Physics
Question
25 answers
Dear fellow mathematicians,
Using a computational engine such as Wolfram Alpha, I am able to obtain a numerical expression. However, I need a symbol expression. How can I do that?
I need the expression of the coefficients of this series.
x^2*csc(x)*csch(x)
where csc: cosecant (1/sin), and csch: hyperbolic cosecant.
Thank you for your help.
Relevant answer
Answer
An alternative answer to this question is contained in Theorem 2.1 in the following paper:
Xue-Yan Chen, Lan Wu, Dongkyu Lim, and Feng Qi, Two identities and closed-form formulas for the Bernoulli numbers in terms of central factorial numbers of the second kind, Demonstratio Mathematica 55 (2022), no. 1, 822--830; available online at https://doi.org/10.1515/dema-2022-0166.
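A standard-library way to obtain the exact rational coefficients is to build the Taylor series of sin(x)·sinh(x) with `fractions.Fraction` and then invert the power series term by term; since x²·csc(x)·csch(x) = x²/(sin(x)·sinh(x)), this yields the coefficients symbolically (as exact fractions) rather than as floating-point numbers. A sketch:

```python
from fractions import Fraction
from math import factorial

N = 12  # highest power of x to keep

# Exact Taylor coefficients of sin(x) and sinh(x) about 0
sin_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 else Fraction(0)
         for k in range(N + 1)]
sinh_c = [Fraction(1, factorial(k)) if k % 2 else Fraction(0)
          for k in range(N + 1)]

# Product series p(x) = sin(x)*sinh(x) = x^2 - x^6/90 + ...
p = [sum(sin_c[i] * sinh_c[k - i] for i in range(k + 1)) for k in range(N + 1)]

# f(x) = x^2 / p(x): divide out the x^2 factor, then invert the series
q = p[2:]              # q[0] = 1, so a reciprocal power series exists
inv = [Fraction(1)]    # inv[k] = coefficient of x^k in x^2*csc(x)*csch(x)
for k in range(1, len(q)):
    inv.append(-sum(q[i] * inv[k - i] for i in range(1, k + 1)))

print(inv[:9])   # → 1 + x**4/90 + ... (only powers of x^4 survive)
```

Raising N extends the expansion to any order, always with exact rational coefficients; the closed-form identities in the cited paper can then be checked against these values.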
  • asked a question related to Physics
Question
4 answers
I'm getting repeatedly negative open circuit potentials (OCP) vs. the Ag/AgCl reference electrode for some electrodes during OCP vs. time measurements using an electrochemical workstation. What is the interpretation of a negative open circuit potential? Moreover, I have also noticed that it becomes more negative under illumination. What is the reason behind this? Are there some references? Please help.
Relevant answer
Answer
Dear Dr. Ayan Sarkar ,
as I said in a similar question, long-term change of corrosion potential (open-circuit potential) reflects a change in a corrosion system because the change in corrosion potential depends on the change in one or both of the anodic and cathodic reactions. For example, an increase in corrosion potential can be attributed to a decrease in the anodic reaction with the growth of a passive film or the increase in the cathodic reaction with an increase in dissolved oxygen. A decrease in corrosion potential can be attributed to an increase in the anodic reaction or a decrease in the cathodic reaction. The monitoring of corrosion potential is therefore often carried out (ISO 16429, 2004; JIS T 6002). For the test solution, saline, phosphate buffer saline, Ringer solution, culture medium, serum and artificial saliva are typically used. The corrosion potential of the specimen can be monitored against a reference electrode using an electrometer with high input impedance (10^11 Ω to 10^14 Ω) or a potentiostat.
For more details, please see the source: Monitoring of corrosion potential by S. Hiromoto, in Metals for Biomedical devices, 2010.
The most widely used electrochemical method of determining the corrosion rate is the Stern-Geary method which allows to evaluate the corrosion current (i corr), an essential parameter from which to derive the corrosion rate of the material in that particular environment.
My best regards, Pierluigi Traverso.
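The Stern-Geary method mentioned above reduces to a one-line formula: i_corr = B/Rp, with the Stern-Geary coefficient B built from the anodic and cathodic Tafel slopes. A minimal sketch, where the Tafel slopes and polarization resistance are illustrative values, not data from any specific measurement:

```python
def stern_geary_icorr(beta_a, beta_c, rp):
    """Corrosion current density from the Stern-Geary equation.
    beta_a, beta_c: anodic/cathodic Tafel slopes (V/decade);
    rp: polarization resistance (ohm*cm^2). Returns i_corr in A/cm^2."""
    b = beta_a * beta_c / (2.303 * (beta_a + beta_c))   # Stern-Geary coefficient, V
    return b / rp

# Illustrative values: 120 mV/decade slopes, Rp = 1 kOhm*cm^2
i_corr = stern_geary_icorr(0.12, 0.12, 1.0e3)
print(f"i_corr ~ {i_corr * 1e6:.1f} uA/cm^2")
```

The corrosion rate (e.g., in mm/year) then follows from i_corr via Faraday's law, given the alloy's equivalent weight and density.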
  • asked a question related to Physics
Question
59 answers
Dear Sirs,
In the below I give some very dubious speculations and recent theoretical articles about the question. Maybe they promote some discussion.
1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also a part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter described by some more general laws? May a physical law, as information, transform into energy and mass?
2.) Besides the above logical approach, one can come to the same question in another way. Let us consider the transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information, or some physical laws, disappear. For example, a free particle has a momentum but it does not have a position. The magnetic moment of a nucleus has a projection on the external magnetic field direction, but the transverse projection does not exist. So we cannot say that a nuclear magnetic moment is moving around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.
One can hypothesize that if information is equivalent to some very small mass or energy (e.g., as shown in the next item), then it may be that some information or physical laws are lost, e.g., for an electron, which has an extremely low mass. This conjecture agrees with the fact that objects having mass much larger than the proton's are described by classical Newtonian physics.
But one can raise an objection to the above view: a photon has no rest mass and, e.g., the rest mass of the neutrino is extremely small. Despite this, they have a spin and momentum like an electron. This spin and momentum information is not lost. Moreover, the photon energy for long EM waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.
But there is possibly a solution to the above problem. A photon moves at light speed (the neutrino speed is very near light speed), which is why the physical information cannot be detached and go away from the photon (information travels at most at light speed).
3.) Searching the internet I have found recent articles by Melvin M. Vopson
which propose the mass-energy-information equivalence principle and its experimental verification. As far as I know, this experimental verification has not yet been done.
I would be grateful to hear your view on this subject.
Relevant answer
Answer
With respect to human societies and production methods, dear Anatoly A Khripov , we are witnessing the informatization of the economy. In this sense, this informatization changes the material conditions of the production process itself.
However, it is difficult to assess, if information is a new production factor or if the traditional production factors become more information-intense.
Consequently, my viewpoint from the physics of social systems (natural science of human society and mind) discerns that information converts (reorganizes) matter, energy and mass, in terms of economic production.
———-
Thermodynamic entropy involves matter and energy, Shannon entropy is entirely mathematical, on one level purely immaterial information, though information cannot exist without "negative" thermodynamic entropy.
It is true that information is neither matter nor energy, which are conserved constants of nature (the first law of thermodynamics). But information needs matter to be embodied in an "information structure." And it needs ("free") energy to be communicated over Shannon's information channels.
Boltzmann entropy is intrinsically related to "negative entropy." Without pockets of negative entropy in the universe (and out-of-equilibrium free-energy flows), there would be no "information structures" anywhere.
Pockets of negative entropy are involved in the creation of everything interesting in the universe. It is a cosmic creation process without a creator.
—————
Without the physical world, Ideas will not exist. ― Joey Lawsin
Even when money seemed to be material treasure, heavy in pockets and ships' holds and bank vaults, it always was information. Coins and notes, shekels and cowries were all just short-lived technologies for tokenizing information about who owns what. ― James Gleick, The Information: A History, a Theory, a Flood
  • asked a question related to Physics
Question
3 answers
How can we calculate the number of dimensions in a discrete space if we only have a complete scheme of all its points and possible transitions between them (or data about the adjacency of points)? Such a scheme can be very confusing and far from the clear two- or three-dimensional space we know. We can observe it, but it is stochastic and there are no regularities, fractals or the like in its organization. We only have access to an array of points and transitions between them.
Such computations can be resource-intensive, so I am especially looking for algorithms that can quickly approximate the dimensionality of the space based on the available data about the points of the space and their adjacencies.
I would be glad if you could help me navigate in dimensions of spaces in my computer model :-)
Relevant answer
Answer
Anil Kumar Jain The description of discrete spaces is found in physical works, e.g. "Discrete spacetime, quantum walks and relativistic wave equations" by Leonard Mlodinow and Todd A. Brun, https://arxiv.org/abs/1802.03910. But I have not seen any attempt to quantify the dimensionality of such spaces. This is exactly what I am looking for.
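One cheap approximation for this is the ball-growth (scaling) dimension: in a d-dimensional lattice-like graph, the number of nodes within graph distance r of a point grows as r^d, so d can be estimated from log2(|B(2r)|/|B(r)|) using BFS ball sizes. A sketch, with a square grid as a sanity check (the doubling radius r = 10 is an assumed choice; for stochastic graphs, averaging over many start nodes and radii is advisable, and the result is a heuristic, not an exact invariant):

```python
from collections import deque
import math

def ball_sizes(adj, start, rmax):
    """BFS from start; sizes[r] = number of nodes within graph distance r."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        if dist[u] >= rmax:
            continue                      # do not expand beyond radius rmax
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    sizes = [0] * (rmax + 1)
    for d in dist.values():
        sizes[d] += 1
    for r in range(1, rmax + 1):          # make counts cumulative
        sizes[r] += sizes[r - 1]
    return sizes

def growth_dimension(adj, start, r):
    """Estimate d from |B(2r)| / |B(r)| ~ 2^d (valid away from boundaries)."""
    s = ball_sizes(adj, start, 2 * r)
    return math.log(s[2 * r] / s[r]) / math.log(2.0)

# Sanity check on a 41x41 square grid, which should look 2-dimensional
n = 41
adj = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]
       for i in range(n) for j in range(n)}
print(round(growth_dimension(adj, (20, 20), 10), 2))
```

Because each estimate is one BFS, this scales to large adjacency data; finite-size and boundary effects bias the estimate at small r, so probing several radii is worthwhile.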
  • asked a question related to Physics
Question
21 answers
Have these particles been observed in predicted places?
For example, have scientists ever noticed the creation of energy and
pair particles from nothing in the Large Electron–Positron Collider,
Large Hadron Collider at CERN, Tevatron at Fermilab or other
particle accelerators since late 1930? The answer is no. In fact, no
report of observing such particles by highly sensitive sensors used in
all accelerators has been mentioned.
Moreover, according to one interpretation of uncertainty
principle, abundant charged and uncharged virtual particles should
continuously whiz inside the storage rings of all particle accelerators.
Scientists and engineers make sure that they maintain ultra-high
vacuum at close to absolute zero temperature, in the travelling path
of the accelerating particles otherwise even residual gas molecules
deflect, attach to, or ionize any particle they encounter but there has
not been any concern or any report of undesirable collisions with so
called virtual particles in any accelerator.
It would have been absolutely useless to create ultrahigh vacuum,
pressure of about 10-14 bar, throughout the travel path of the particles
if vacuum chambers were seething with particle/antiparticle or
matter/antimatter. If there was such a phenomenon there would have
been significant background effects as a result of the collision and
scattering of the beam of accelerating particles from the supposed
bubbling of virtual particles created in vacuum. This process is
readily available for examination in comparison to totally out of
reach Hawking’s radiation which is considered to be a real
phenomenon that will be eating away supposed black holes of the
universe in a very long future.
for related issues/argument see
Relevant answer
Answer
It pleases me to see this discussion, realising there are more critical thinkers out there. Let me try to add a simply phrased contribution.
In my opinion, Physics has gone down the rabbit hole of sub-atomic particles and that part of physics has become what some call “phantasy physics”. Complex maths is used as smoke and mirrors to silence critical physicists who are convinced that theory must be founded in reality and that empirical evidence is necessary.
Concepts such as ”Big bang”, black holes, dark matter etc are actually hypotheses that try to explain why the outcomes of measurements are not in accordance with the calculations made on the basis of Einsteins theories of relativity. Unfortunately, and perhaps through the journalistic popularisation of science, these concepts have been taken as reality, such as “scientists have discovered dark matter, or anti-matter”. No, they have not. What they discovered was that the measured light or matter in the universe or a part of the universe was not as much as had been predicted by calculations based on a theory. Usually in science, that would lead to a refining of the theory. Here it did not, perhaps because Einstein has been placed on such a high pedestal that his theories are seen as the alpha and omega of physics that may not be questioned or touched, as that is considered sacrilege.
The solution was the hypothesis of Cookie Monsters, things out there that ate light or matter = black holes and dark matter. Anyone who dares questions these methodological steps is intimidated and attacked with complicated terminology and complex mathematics. Most physicists are afraid of looking stupid and therefore shut up. Decades ago the physics professor who was my head supervisor (experimental physics) said to his students that if you could not explain your work in ordinary household language, then you did not really understand it yourself. He considered complicated language and naming theories and authors as a cover up for not grasping the essentials.
A reason for looking at yet another species of virtual particles is that research proposals in this field receive funding because physicists all over the world are doing it. It is the reigning paradigm and it will take a ground swell of opposition to move on to the next phase in science after the 50-odd years of the present, now stagnant, paradigm.
  • asked a question related to Physics
Question
41 answers
Consider the two propositions of the Kalam cosmological argument:
1. Everything that begins to exist has a cause.
2. The universe began to exist.
Both are based on assuming full knowledge of whatever exists in the world which is obviously not totally true. Even big bang cosmology relies on a primordial seed which science has no idea of its origin or characteristics.
The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.
Your comments are much appreciated.
Relevant answer
Answer
Good deductive arguments have two properties: (1) validity and (2) soundness. Validity is entirely a formal property: it says that IF the premises are true then so is the conclusion; soundness says that not only is the argument valid, but its premises ARE true. Whether the premises are indeed true may be a matter of empirical discovery or of previous deductions or definitions (including deductions or definitions in mathematics). Sometimes it's just interesting to see what else a certain assumption commits one to and deduction can answer that question and sometimes also give us a good reason for rejecting that assumption (that is the rationale for reductio ad absurdum arguments, aka indirect proofs). It helps to keep in mind that the alleged shortcoming of deduction is not an indictment of its formal nature but a matter of the "garbage in, garbage out" principle.
  • asked a question related to Physics
Question
6 answers
Dear all,
after a quite long project, I coded up a python 3D, relativistic, GPU based PIC solver, which is not too bad at doing some stuff (calculating 10000 time steps with up to 1 million cells (after which I run out of process memory) in just a few hours).
Since I really want to make it publicly available on GitHub, I also thought about writing a paper on it. Do you think this is a work worthy of being published? And if so, what journal should I aim for?
Cheers
Sergey
Relevant answer
Answer
Hi! Once again, thank you for the reply! I have never published before, that's why I was asking :D
  • asked a question related to Physics
Question
37 answers
When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: "Why energy instead of some other quantity". When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (that I supposedly had already learned before studying this topic) but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity." Until somebody figures out how to do that, we have to accept the postulate as a postulate instead of a derived conclusion. The question that I am asking 5 decades later is, has somebody figured it out yet? I'm not an expert on statistical mechanics so I hope that answers can be simple enough to be understood by people that are not experts.
Relevant answer
Answer
In a simple and introductory way, there is a book by Prof. F. Reif of the Berkeley course, i.e., Vol 5: "Statistical Physics" by F. Reif.
In chapter 7, section 7.4, p. 281 of the 1965 edition by McGraw-Hill, he discusses in an introductory way what he calls "the basic five statements of statistical thermodynamics," which are based on some statistical postulates that he also talks about in section 3.3, p. 111. There are three postulates, inside boxes, Eqs. 17, 18, & 19, and among them is the one you refer to.
I suggest you read, from Prof. Reif's book itself, what he has to say about your interesting question.
Kind Regards.
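The equal-probability postulate can at least be illustrated (not derived) numerically: take two Einstein solids sharing a fixed number of energy quanta, count the microstates of each partition exactly, and assume every joint microstate of the same total energy is equally likely. The most probable partition then divides the energy in proportion to the number of oscillators, i.e., equal temperatures emerge from nothing but counting. The solid sizes below are arbitrary illustrative choices.

```python
from math import comb

def multiplicity(n_osc, q):
    """Number of microstates of an Einstein solid: n_osc oscillators, q quanta
    (stars-and-bars count)."""
    return comb(q + n_osc - 1, q)

def most_probable_split(na, nb, q):
    """Most probable number of quanta in solid A when A and B share q quanta,
    assuming every joint microstate at fixed total energy is equally likely."""
    return max(range(q + 1),
               key=lambda qa: multiplicity(na, qa) * multiplicity(nb, q - qa))

na, nb, q = 300, 100, 100
qa = most_probable_split(na, nb, q)
print(qa)   # the energy splits in proportion to oscillator numbers, q*na/(na+nb)
```

This shows what the postulate buys us once it is assumed, but it does not answer why energy (rather than some other conserved quantity) is the right variable to condition on, which is exactly the open question above.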
  • asked a question related to Physics
Question
12 answers
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and viewed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several seconds per image frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
Relevant answer
Answer
Gerhard Martens Thanks! I guess that is my problem solved.... Thanks for your input and suggestions.... :-D
  • asked a question related to Physics
Question
4 answers
During AFM imaging, the tip does the raster scanning in the xy-axes and deflects in the z-axis due to the topographical changes on the surface being imaged. The height adjustment made by the piezo at every point on the surface during the scanning is recorded to reconstruct a 3D topographical image. How does the laser beam remain on the tip while the tip moves all over the surface? Aren't the optics that direct the laser beam onto the cantilever static inside the scanner, or do they move in sync with the tip? How is it that only the z-signal is affected by the topography, while the xy-signal of the QPD is not affected by the movement of the tip?
Or, in other words, why is the QPD signal affected only by the bending and twisting of the cantilever and not by its translation?
Relevant answer
Answer
Indeed, in the case of a tip-scanning AFM the incident laser beam should follow the tip's scanning motion, so as to record the deflection signal for the same spot on the cantilever backside throughout. This can be achieved by integrating the laser diode into a kind of tube (with its long axis parallel to the z-axis) that carries the cantilever holder at its lower end and is hinged at its upper end. The scan piezos then act on the entire tube, including the laser diode, in a plane between the tube's upper and lower ends. Whether or not your AFM system works exactly this way I cannot tell for sure, though.
  • asked a question related to Physics
Question
1 answer
I'm searching for a good collaborator or a research group that might want to tackle an interesting problem involving the relationship between quantum dots generating nanoparticle clusters and their DNA/protein corral. This relationship is encapsulated by geometric proximity; that is, I'm looking for someone who might know how quantum mechanics impacts such nanoparticles, for example how close a nanoparticle is to another nanoparticle or to a protein, and whether sized clusters form. Ping me if you're in the biosciences, computational biology, chemistry, or physical sciences and think you might be able to shed some light on the above.
Relevant answer
Answer
Navjot Singh This might surprise you but I recommend you analyse the problem without using quantum theory. If you take a look at the preprint linked below you will see a different approach to the analysis of molecular bonds:
This is based on the Spacetime Wave theory and shows how a stable bond is formed when the electrostatic and electromagnetic forces are in balance.
Richard
  • asked a question related to Physics
Question
6 answers
I do recognise that there’s a well-known problem (hard though it is) of establishing how consciousness emerges or can be accounted for in physical processes. But I can’t at all agree that there’s a naturalistic, absolute hard problem of consciousness, because it’s an incoherent concept.
Nobody (at least nobody with a clue) supposes that neurophysiology can explain a qualitative difference in the way you and I experience the content of my music mix playing quietly in the background, or see the light reflect off a rainbow, or any of the other ways in which our qualitative experience differs from that of other living organisms. To deny that such mechanisms exist in somebody else's head just because you don't know the mechanisms of the experience in your own is bizarre and reductionist.
Construct an imaginary metaphor of a magical, wizardly, thing-maker consciousness and you haven't explained the qualitative data there either. The question of how consciousness comes into the world remains, whether any magical things happen or whether there's anybody there at all. To suppose a separate, inexplicable, mysterious, magic ingredient does no explanatory good, doesn't solve the hard problem, and doesn't explain the evidence. All such arguments for a separate consciousness-substance, be it magic nonsense or a magic substance, merely reduce the hard problem of explaining the thisness of consciousness to the very same hard problem of explaining how consciousness arises in the first place.
If you identify the hard problem entirely with the mechanism through which the feeling-of-redness arises, or "the feeling of the future in an invariant past", or anything else you allude to, then you have merely traded one way of asking the question for another. The question is: how do the millions of biological components and sub-systems interact with one another and integrate information over time and space? The senses of sight, sound, touch and smell all raise a "hard problem" of projection and of categories beyond the reliable input, because the signals produced by one and the same kind of examination will let different well-informed people interpret an external reality quite differently. But the "hard problem" isn't WHY we can process those signals at all, or make sense of the signals that come out the other end. That's just the default condition of our very real neurological system. The "humanness" of that experience is likewise an entirely benign, apparent phenomenon, just as water's polar nature is an entirely benign, apparent property.
For me the cardinal point is to reckon with how we perceive our own subjective experience via multi-sensory data input, both direct and indirect, in our waking experience. And at the very least you have to be wrong, or qualified immensely, if you think it's not merely the interaction between the general anatomy, organisation, information processing and output of your brain and all subjective processes, such that personal conclusions then magically appear as relevant claims about reality.
P.S. I don't think evolution throws up any magical consciousness either, whether in its petri-dish experiments or in the novel forms of subjectivity it sometimes comes up with. So I'd like to challenge that viewpoint, particularly in terms of our understanding of the nuances.
Relevant answer
Answer
Navjot Singh I define consciousness as the subjective experience that we each have arising from the operation of the brain.
In the paper titled "The Conscious Brain", I have identified the importance of understanding how we control our focus of attention.
The brain is a particular combination of biology, chemistry and physics, and it is a lack of understanding of fundamental physics that has held us back.
Neuroscience has revealed brain activity in the form of the network of neurons, but we have to understand the effect of the electromagnetic wave activity generated by the brain on the operation of the brain as a whole.
Richard
  • asked a question related to Physics
Question
51 answers
1) Can the existence of an aether be compatible with local Lorentz invariance?
2) Can classical rigid bodies in translation be studied in this framework?
By changing the synchronization condition of the clocks of inertial frames, the answer to 1) and 2) seems to be affirmative. This synchronization clearly violates global Lorentz symmetry, but it preserves Lorentz symmetry in the vicinity of each point of flat spacetime.
Christian Corda showed in 2019 that this effect of clock synchronization is a necessary condition to explain the Mössbauer rotor experiment (Honorable Mention at the Gravity Research Foundation 2018). In fact, it can easily be shown that it is a necessary condition for applying the Lorentz transformation to any experiment involving high-velocity particles traveling between two distant points (including the linear Sagnac effect).
---------------
We may consider the time of a clock placed at an arbitrary coordinate x to be t, and the time of a clock placed at an arbitrary coordinate xP to be tP. Let the offset (t – tP) between the two clocks be:
1) (t – tP) = v (x – xP)/c^2
where (t – tP) is the so-called Sagnac correction. If we call g the Lorentz factor for v and insert 1) into the time-like component of the Lorentz transformation T = g (t – vx/c^2), we get:
2) T = g (tP – vxP/c^2)
On the other hand, if we assume that the origins coincide, x = X = 0 at time tP = 0, we may write down the space-like component of the Lorentz transformation as:
3) X = g (x – vtP)
Assuming that both clocks are placed at the same point x = xP, inserting x = xP, X = XP, T = TP into 2) and 3) yields:
4) XP = g (xP – vtP)
5) TP = g (tP – vxP/c^2)
which is the local Lorentz transformation for an event happening at point P. On the other hand, if the distance between x and xP is different from 0 and xP is placed at the origin of coordinates, we may insert xP = 0 into 2) and 3) to get:
6) X = g (x – vtP)
7) T = g tP
which is a change of coordinates that:
- Is compatible with GPS simultaneity.
- Is compatible with the Sagnac effect. This effect can be explained in a very straightforward manner without the need to use GR or the Langevin coordinates.
- Is compatible with the existence of relativistic extended rigid bodies in translation, using the classical definition of rigidity instead of Born's definition.
- Can be applied to solve the 2 problems of the preprint below.
- Is compatible with all experimental corroborations of SR: aberration of light, the Ives-Stilwell experiment, the Hafele-Keating experiment, ...
Thus, we may conclude that, considering the synchronization condition 1):
a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a unique single clock.
b) Lorentz invariance is broken when we use two clocks to measure time intervals over long displacements (eqs. 6-7).
c) We need to consider the frame with respect to which we must define the velocity v of the synchronization condition (eq 1). This frame has v = 0 and it plays the role of an absolute preferred frame.
a), b) and c) suggest that the Thomas precession is a local effect that cannot manifest over long displacements.
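The algebraic step from 1) to 2) above can be verified numerically; a quick sketch with arbitrary values (units chosen so that c = 1):

```python
import math

# Arbitrary numeric values (units with c = 1) to verify that inserting the
# synchronization offset, eq. (1), into T = g*(t - v*x/c^2) yields eq. (2).
c, v = 1.0, 0.6
x, xP, tP = 3.7, 1.2, 0.9

g = 1.0 / math.sqrt(1.0 - v**2 / c**2)   # Lorentz factor
t = tP + v * (x - xP) / c**2             # eq. (1): clock offset
T = g * (t - v * x / c**2)               # time-like Lorentz component
T_eq2 = g * (tP - v * xP / c**2)         # eq. (2)

assert abs(T - T_eq2) < 1e-12
print("eq. (2) reproduced:", T)
```

This only checks the substitution itself, of course, not the physical interpretation attached to it.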
More information in:
Relevant answer
Answer
There is a difference between Lorentz transformations and scale transformations.
Special relativity satisfies Lorentzian symmetry due to the constancy of the speed of light and the special relativity principle.
However, in reality, the aether makes the speed of light locally invariant, so the Lorentz transformation is not necessary.
  • asked a question related to Physics
Question
10 answers
Hello everyone,
I did a nanoindentation experiment:
1 photoresist with 3 different layer thicknesses.
My results show that the photoresist is harder when the layer is thicker.
I can't find the reason in the literature.
Can anyone please explain why it is like that?
Is there any literature on this?
Best regards
chiko
Relevant answer
Answer
The nano layer is very, very thin; otherwise it cannot be measured by the resistivity method, and it has VES limitations.
Best regards.
P. Hakaew
  • asked a question related to Physics
Question
48 answers
The above question emerges from a parallel session [1] on the basis of two examples:
1. Experimental data [2] that apparently indicate the validity of Mach’s Principle remain out of the discussion because the mainstream consensus holds Mach to be out; see also the appended PDF files.
2. The negative outcome of gravitational wave experiments [3] apparently does not affect the main-stream acceptance of claimed discoveries.
Relevant answer
Answer
Stam Nicolis: "Mainstream theorists"
Mainstream theorists, I would say, are those who, based on mainstream consensus, raise public funds (from taxpayers) for large-scale experiments (Big Science) and organize spectacular media campaigns that essentially affirm the mainstream consensus. It is a self-sustaining system that inhibits progress in science. When experimental results do not fit, they are made to fit or simply ignored, as can currently be observed with "gravitational wave astronomy." https://www.researchgate.net/project/Discussion-on-recently-claimed-simultaneous-discovery-of-black-hole-mergers-and-gravitational-waves
  • asked a question related to Physics
Question
3 answers
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the temperature of the food obtained was quite good (i.e., close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's temperature suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
As I suspected the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens, to ensure that the camera could only see the food through the hole and not the hot pan any more. The food temperature obtained by the camera was then correct again, but it went wrong again when I took the paper out.
I would appreciate any explanation of this phenomenon, and a solution, from either physics or optics.
Relevant answer
Answer
Thanks for all the comments.
A short update: we talked to the manufacturer, and they confirmed the phenomenon as Ang Feng explained. We are going through possible solutions.
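For anyone hitting the same issue: a common physical explanation is that the food's surface partially reflects the hot pan's thermal radiation, raising the apparent (radiometric) temperature the camera reports. A rough gray-body sketch; whether this matches the explanation referred to above I cannot confirm, and both the emissivity value and the total-radiation Stefan-Boltzmann treatment are simplifying assumptions (a real LWIR camera integrates over a limited band):

```python
# Rough gray-body estimate of the apparent temperature a thermal camera reads
# when a surface of emissivity eps reflects hot background radiation.
# eps = 0.9 is an ASSUMED value; total-radiation treatment is a simplification.

def apparent_temp_k(t_obj_k: float, t_bg_k: float, eps: float = 0.9) -> float:
    """Radiometric apparent temperature: camera sees eps*T_obj^4 + (1-eps)*T_bg^4."""
    return (eps * t_obj_k**4 + (1.0 - eps) * t_bg_k**4) ** 0.25

t_app = apparent_temp_k(25 + 273.15, 230 + 273.15)
print(f"apparent temperature ≈ {t_app - 273.15:.0f} °C")  # well above the true 25 °C
```

This also explains why the paper mask fixed the reading: it removed the hot pan from the reflected background, so the (1 − eps) term no longer mattered.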
  • asked a question related to Physics
Question
3 answers
Eighty years after Chadwick discovered the neutron, physicists today still cannot agree on how long the neutron lives. Measurements of the neutron lifetime have achieved the 0.1% level of precision (~1 s). However, results from several recent experiments are up to 7 s lower than the (pre-2010) Particle Data Group (PDG) value. Experiments using the trap technique yield lifetime results lower than those using the beam technique. The PDG urges the community to resolve this discrepancy, now 6.5 sigma.
I think the reason is that the "trapped p" method did not count the number of protons in the decay reaction (n → p + e + νe + γ). As a result, the number of decay neutrons obtained was low. This affected the measurement of the neutron lifetime. Do you agree with me?
Relevant answer
Answer
If you don't believe me, you can search the literature to find the Mampe paper; they have lifetime measurements with different waiting times. The shorter the waiting time, the longer the lifetime.
For storage times between 112-225 seconds, the lifetime is 891 seconds; for a storage interval of 225-450 seconds, the lifetime is 888.5 seconds; for storage times above 900 seconds, the lifetime is 887.0 seconds. See the attached screenshot.
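To illustrate the trend in those numbers, one can do a crude linear extrapolation of lifetime against inverse storage time. This is purely illustrative, not the analysis used in the paper; the interval midpoints, especially the 1000 s assigned to the ">900 s" bin, are my assumptions:

```python
# Crude least-squares fit of quoted lifetime vs. 1/(storage time).
# Midpoints (168.5, 337.5, 1000 s) are ASSUMED; the ">900 s" bin in
# particular has no well-defined midpoint. Illustrative only.
mid_s = [168.5, 337.5, 1000.0]   # assumed storage-time midpoints, s
tau_s = [891.0, 888.5, 887.0]    # lifetimes quoted above, s

x = [1.0 / m for m in mid_s]
n = len(x)
xb, yb = sum(x) / n, sum(tau_s) / n
slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, tau_s)) / \
        sum((xi - xb) ** 2 for xi in x)
intercept = yb - slope * xb      # extrapolated lifetime as storage time -> infinity

print(f"slope = {slope:.0f} s^2, extrapolated lifetime ≈ {intercept:.1f} s")
```

The positive slope reproduces the stated trend (shorter waiting time, longer apparent lifetime), and the intercept gives a rough storage-loss-free extrapolation.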
  • asked a question related to Physics
Question
30 answers
Dear all
Hope you are doing well!
What are the best books in Materials Science and Engineering (Basics and Advanced)? Moreover, what are the best skills (or materials topic related) that materials scientists have to develop and to acquire?
Thanks in advance
^_^
Relevant answer
Answer
Dear all, following a list of interesting books. My Regards
- Fundamentals of Materials Science and Engineering: An Integrated Approach, William D. Callister, David G. Rethwisch, 5th Edt (2015).
- Materials Science and Engineering: An Introduction, 10e WileyPLUS NextGen Card with Loose-Leaf Print Companion Set, Callister Jr., William D., Rethwisch, David G. 10th Edt (2018).
- The Science and Engineering of Materials, Donald R. Askeland, Wendelin J. Wright. 7th Edt (2014).
- Materials Science and Engineering: A First Course, V. Raghavan, (2004).
- Foundations of Materials Science and Engineering, William Smith, Javed Hashemi, 6th Edt (2019).
  • asked a question related to Physics
Question
5 answers
I use a Fujikura CT-30 cleaver for PCF cleaving for supercontinuum generation. Initially, it seemed to work fine, as I could get high coupling efficiency (70-80%) into the 3.2 µm core of the PCF. However, after some time (several hours) I noticed that the coupling efficiency decreased drastically, and when I inspect the PCF end face with an IR scope, I can see a bright shine on the PCF end facet, which may be an indication that the end face is damaged. Also, I want to mention that the setup is well protected from dust, and there is no chance of dust contaminating the fiber facet.
Please suggest what should be done to get an optimal cleave. Shall I use a different cleaver (please suggest one), or are there other things to consider?
Thanks
Relevant answer
Answer
Supercontinuum generation uses short pulses with high peak power, which can lead to soliton fission or fusion.
  • asked a question related to Physics
Question
28 answers
Relevant answer
Answer
Dear all,
thanks for your kind replies and comments !
As with other discussions that refer to experimental results, it is clear in the present one that responses generally do not address the cited experimental results and procedures, but instead rely on mainstream-conforming theoretical arguments.
Indeed, Stam Nicolis, citing "experimental" results from the LIGO labs, concludes that both gravitational and electromagnetic waves travel at the speed of light. However, the validity of the LIGO results is still disputed in view of certain fundamental flaws in the experimental setup (see the reference below), yet it is simply taken for granted without further discussion by the public, given the general acceptance of the spectacular discoveries, including Nobel Prizes.
I would indeed be very grateful for any comments on the Keith experiment quoted above, especially since I believe that Julius Riese and László Attila Horváth are right when they mention that the gravitational speed could be faster than the speed of light.
  • asked a question related to Physics
Question
2 answers
Forgive some of my ignorance of the math for thermodynamics and heat exchange; my background is heavier in chemistry and I could use some help.
The project is to keep about 70L of water in an aquarium at 17C when the ambient temperature is 22C in the room. The original project built had the following set up:
(Top to Bottom):
1. 80x80x38mm fan running at 5700 RPMs and 76CFM
2. 80x80x20mm copper fin heatsink (0.5mm fin thickness and 40 fins with a 3.5mm bottom thickness)
3. 2-TEC1-12706 hot side towards heatsink, cold side down towards water block (Imax: 6.4A, Umax: 15.4V, Qmax: (dT=0) 63W, dTmax=68C)
4. 40x80x12mm water block centered under the heatsink (surrounded on the sides with 20mm styrofoam and 10mm styrofoam at the back)
5. ~26mm thick styrofoam
6. Wood base
• All power is supplied by an AC/DC converter (12V 20A 240W)
• Power to the system is managed by a W1209 Temperature Control Module (Relay)
• Water flow is achieved by a 4L/min water pump (slowest I can find)
This setup is only cooling the water to 18 °C at night, and the temperature slowly creeps up to 18.7 °C across the day, so I know this setup is not keeping up with the heat load (also worth noting that the output temp is about 1.5-2 °C cooler than the input temp to the water block). My hypothesis is that the water does not have enough time in the water block for good thermal exchange, or that the cooler is not creating enough of a dT in the water block to absorb the heat needed in that cycle time. The fact that the aluminum water block has a 5x lower specific heat than water is what makes me think either more contact time or a greater dT is needed.
My thoughts were to swap out the water block for an 40x200x12mm water block and increase the number of peltier coolers from 2->5 and going with the TEC1-12715 (Imax: 15.6A, Umax: 15.4V, Qmax: (dT=0) 150W, dTmax=68C).
This is where I am lost in the weeds and need help; I am lacking the intellectual horsepower for this. Will using the 5 in parallel do the trick without maxing out the converter? Or will using 5 in series still produce the needed cooling effect, given the lower dT associated with the lower current? Or is there another setup someone can recommend? I am open to feedback and direction; thank you in advance.
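A back-of-envelope check of the series-vs-parallel question, treating each TEC as roughly ohmic (I ≈ Imax·V/Umax). This ignores the Seebeck back-EMF, so it somewhat overestimates current, and all figures are rough assumptions rather than a proper TEC model:

```python
# Rough series-vs-parallel current estimate for 5x TEC1-12715 on a 12 V / 20 A supply.
# Ohmic approximation I ≈ Imax * V / Umax ignores Seebeck back-EMF (overestimates I).
I_MAX, U_MAX = 15.6, 15.4       # TEC1-12715 figures quoted above (A, V)
V_SUPPLY, I_SUPPLY_MAX = 12.0, 20.0
N = 5

def tec_current(v: float) -> float:
    """Approximate module current at drive voltage v (ohmic assumption)."""
    return I_MAX * v / U_MAX

i_parallel = N * tec_current(V_SUPPLY)   # each module sees the full 12 V
i_series = tec_current(V_SUPPLY / N)     # each module sees only 12/5 = 2.4 V

print(f"parallel: ~{i_parallel:.0f} A total (supply limit {I_SUPPLY_MAX:.0f} A)")
print(f"series:   ~{i_series:.1f} A, but only {V_SUPPLY/N:.1f} V per module")
```

Under these assumptions, five modules in parallel would draw roughly three times what the 20 A supply can deliver, while five in series would leave each module at ~2.4 V with very little heat pumping; so neither extreme looks workable without a bigger supply or fewer/different modules.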
Relevant answer
Answer
Have you considered evaporative cooling for your aquarium? As long as the relative humidity in the room is not pushing 100%, you can achieve cooling using this technique. This link will show you how it is done:
  • asked a question related to Physics
Question
5 answers
In a hypothetical situation I have two wires: one's cross-section is a circle, the other's a star. Both have the same cross-sectional area and the same length. What are the differences in electrical properties?
Are there any experiments done looking into this?
Also, what would happen if a wire had a conical shape along its length?
Relevant answer
Answer
The electrical properties are frequency dependent and are dependent on the electromagnetic field profile around the wire. For example, in a high frequency situation, the electric current will travel close to the surface of the wire. Consider a coaxial cable: if the center conductor is round (circular) and the shield is circular and collinear, the electric field will be evenly distributed around the center conductor and the current (tangential magnetic field) will also be evenly distributed. Hence, the resistance per unit length of the wire will be 1/(2*pi*a*delta*sigma), where a=radius of wire, delta=skin depth, sigma=wire material conductivity.
Now, suppose we have a "star" shaped wire. The electric field (and longitudinal current) will be concentrated at the points of the star. The effective area of the current flow will be reduced in this case and the wire will have a higher resistance than the smooth round wire.
If you have access to electromagnetic field simulator software, why not try some numerical experiments?
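The high-frequency resistance formula quoted above can be evaluated directly; a minimal sketch for a round copper wire (the 0.5 mm radius and 1 MHz frequency are arbitrary example values):

```python
import math

# Skin depth and HF resistance per unit length for a round wire, using
# R' = 1/(2*pi*a*delta*sigma) from the answer above.
MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
SIGMA_CU = 5.8e7       # copper conductivity, S/m

def skin_depth(f_hz: float, sigma: float = SIGMA_CU) -> float:
    """Skin depth delta = sqrt(2 / (omega * mu0 * sigma)), in metres."""
    return math.sqrt(2.0 / (2.0 * math.pi * f_hz * MU0 * sigma))

def r_per_m(a_m: float, f_hz: float, sigma: float = SIGMA_CU) -> float:
    """HF resistance per unit length, ohm/m, for wire radius a_m."""
    return 1.0 / (2.0 * math.pi * a_m * skin_depth(f_hz, sigma) * sigma)

d = skin_depth(1e6)
print(f"skin depth at 1 MHz ≈ {d*1e6:.0f} µm")                 # ≈ 66 µm for copper
print(f"R' for a 0.5 mm radius wire ≈ {r_per_m(5e-4, 1e6)*1000:.0f} mΩ/m")
```

For the star-shaped wire there is no such closed form: the current crowding at the points has to come from a field solver, which is exactly why the numerical experiments suggested above are the right next step.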
  • asked a question related to Physics
Question
5 answers
Having worked on the spacetime wave theory for some time and recently published a preprint paper on the Space Rest Frame I realised the full implications which are quite shocking in a way.
The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:
The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:
This then implies that the proton which is a looped wave in spacetime of three wavelengths is actually a looped wave taking place in the space rest frame and we are moving at somewhere between 150 km/sec and 350 km/sec relative to that frame of reference.
This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.
I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/sec. Of course this doesn't happen, because there is zero resistance to uniform motion through space and momentum is conserved.
It still seems quite a shocking realisation.
Richard
Relevant answer
Answer
Sydney Ernest Grimm Thank you for your comment and I would like to explain in more detail how the Spacetime Wave theory relates to quantum theory.
If you think about the electron as a looped wave in Spacetime the entire mass/energy of the electron is given by E=hf. Then when an electron changes energy level from an excited state f2 to a lower energy level f1 the emitted wave quantum (photon) is given by h(f2 - f1). It is easy to see how a looped wave can emit a non-looped wave.
Because the path of the electron wave loops many times around the nucleus, and within each wavelength there is a small positive charge followed by a slightly larger negative charge, the wave aligns with successive passes displaced by half a wavelength.
This alignment process means that there are certain possible energy states that can be adopted by the electron. This is the cause of the quantum nature of the electron and also explains the quantum nature of light.
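Setting aside the looped-wave picture, the quantitative relation used above, E = h(f2 − f1) for the emitted quantum, is standard. A quick sketch applying it to the hydrogen 2→1 transition; the 10.2 eV gap is the textbook value, used here only as an example:

```python
# Wavelength of the photon emitted in a transition with E = h*(f2 - f1),
# using the textbook 10.2 eV gap of the hydrogen 2->1 (Lyman-alpha) line.
H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s
EV = 1.602176634e-19 # joules per electron-volt

delta_e = 10.2 * EV          # energy of the emitted quantum, h*(f2 - f1)
f = delta_e / H              # frequency difference f2 - f1
wavelength = C / f
print(f"lambda ≈ {wavelength*1e9:.1f} nm")   # Lyman-alpha, ~121.6 nm
```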
Richard
  • asked a question related to Physics
Question
18 answers
For those that have the seventh printing of Goldstein's "Classical Mechanics" so I don't have to write any equations here. The Lagrangian for electromagnetic fields (expressed in terms of scalar and vector potentials) for a given charge density and current density that creates the fields is the spatial volume integral of the Lagrangian density listed in Goldstein's book as Eq. (11-65) (page 366 in my edition of the book). Goldstein then considers the case (page 369 in my edition of the book) in which the charges and currents are carried by point charges. The charge density (for example) is taken to be a Dirac delta function of the spatial coordinates. This is utilized in the evaluation of one of the integrals used to construct the Lagrangian. This integral is the spatial volume integral of charge density multiplied by the scalar potential. What is giving me trouble is as follows.
In the discussion below, a "particle" refers to an object that is small in some sense but has a greater-than-zero size. It becomes a point as a limiting case as the size shrinks to zero. In order for the charge density of a particle, regardless of how small the particle is, to be represented by a delta function in the volume integral of charge density multiplied by potential, it is necessary for the potential to be nearly constant over distances equal to the particle size. This is true (when the particle is sufficiently small) for external potentials evaluated at the location of the particle of interest, where the external potential as seen by the particle of interest is defined to be the potential created by all particles except the particle of interest. However, total potential, which includes the potential created by the particle of interest, is not slowly varying over the dimensions of the particle of interest regardless of how small the particle is. The charge density cannot be represented by a delta function in the integral of charge density times potential, when the potential is total potential, regardless of how small the particle is. If we imagine the particles to be charged marbles (greater than zero size and having finite charge densities) the potential that should be multiplying the charge density in the integral is total potential. As the marble size shrinks to zero the potential is still total potential and the marble charge density cannot be represented by a delta function. Yet textbooks do use this representation, as if the potential is external potential instead of total potential. How do we justify replacing total potential with external potential in this integral?
I won't be surprised if the answers get into the issues of self forces (the forces producing the recoil of a particle from its own emitted electromagnetic radiation). I am happy with using the simple textbook approach and ignoring self forces if some justification can be given for replacing total potential with external potential. But without that justification being given, I don't see how the textbooks reach the conclusions they reach with or without self forces being ignored.
Relevant answer
Answer
A revision with a more appropriate title is attached. The Conclusion section is specific about the difference between what is in this report and what is in at least some popular textbooks.
  • asked a question related to Physics
Question
4 answers
The 2023 ranking is available through the following link:
QS ranking is relatively familiar in scientific circles. It ranks universities based on the following criteria:
1- Academic Reputation
2- Employer Reputation
3- Citations per Faculty
4- Faculty Student Ratio
5- International Students Ratio
6- International Faculty Ratio
7- International Research Network
8- Employment Outcomes
- Are these parameters enough to measure the superiority of a university?
- What other factors should also be taken into account?
Please share your personal experience with these criteria.
Relevant answer
Answer
Cenk Tan; There are, of course, several websites that rank universities worldwide. However, QS is the most famous of them.
  • asked a question related to Physics
Question
4 answers
Hello,
I would like to know how to measure a solid's surface temperature with fluid on it. The fluid will react with the solid surface and generate heat, so the temperature between the solid and the fluid is the crucial data I need. Here, I can only think of two options:
1. Thermocouple: Use a FLAT surface thermocouple and attach it to the surface of the solid to measure the data. For example, I can use Thin Leaf-Type Thermocouples for Layered Surfaces (omega.com) or Cement-On Polyimide Fast Response Surface Thermocouples (omega.com)
Pros: fast response, high accuracy
Cons: cannot guarantee that the measured data accurately represents the surface temperature
2. Infrared temperature sensor:
Pros: directly measure the surface temperature, high accuracy
Cons: slow response, the data might be affected by the fluid
Is there any other way to do the measurement or any suggestions?
Thank you very much in advance to anyone who answers this question.
Relevant answer
Answer
Well, someone has proposed the use of temperature-sensitive paint. Another alternative is to paint on liquid crystals, but you need extra care when applying liquid crystals to the surface. Another method is thermal imaging.
  • asked a question related to Physics
Question
1 answer
Lee's disc apparatus is designed to find the thermal conductivity of bad conductors. But I have a doubt: soil has the following properties:
1. It consists of irregularly shaped aggregates
2. Non-uniform distribution of particles
3. Presence of voids
Can we use Lee's disc method to find the thermal conductivity of soil?
Relevant answer
Answer
Replace the glass plate in the original Lee's disc kit with a new plate made of the test soil. Run your experiment and take your readings accordingly. It should give accurate results.
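For context, the working equation of Lee's disc method is k = m·c·(dT/dt)·x / (A·ΔT), where m and c refer to the lower metal disc, x is the sample thickness and ΔT the temperature drop across it. A minimal sketch with made-up example numbers; every numerical value below is an assumption for illustration, not measured data:

```python
import math

# Lee's disc working equation: k = m*c*(dT/dt)*x / (A * (T1 - T2)).
# ALL numerical values below are illustrative assumptions.
m = 0.85      # brass disc mass, kg (assumed)
c = 380.0     # brass specific heat, J/(kg*K)
dT_dt = 0.02  # disc cooling rate at the steady temperature, K/s (assumed)
x = 0.005     # sample (soil pellet) thickness, m (assumed)
r = 0.05      # disc radius, m (assumed)
dT = 10.0     # T1 - T2 across the sample, K (assumed)

A = math.pi * r**2            # disc face area, m^2
k = m * c * dT_dt * x / (A * dT)
print(f"k ≈ {k:.2f} W/(m*K)")  # plausible order of magnitude for a bad conductor
```

For soil, the voids and non-uniform packing mean the result is an effective conductivity of the compacted pellet, which will depend strongly on moisture and compaction.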
  • asked a question related to Physics
Question
10 answers
I am interested to know the opinion of experts in this field.
Relevant answer
Answer
Photons are massless and therefore non-localisable (consider any typical solution of Maxwell's equations), i.e. there are none that stay at a fixed and specific point-like location in space. In contrast, the wavefunction of a massive particle can be so localised.
Thus I would say that photons never match the common definition of a particle (because they are not point-like localisable, even in principle). However, since they can be counted, I would, if prevailed upon to suggest a qualitative description, instead describe them as "countable waves".
This is because in QED we quantize inside "mode" solutions of Maxwell's equations (see any quantum optics text, or the paper I cite above), and can describe the quantum state within each mode in terms of combinations of photon number states.
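The "countable waves" picture can be made concrete with a small numerical sketch (the truncated Fock space and its dimension are my assumptions for illustration, not part of the answer above): quantizing one Maxwell mode as a harmonic oscillator gives a number operator a†a whose eigenvalues are exact integers, the photon counts, even though the mode itself is an extended, non-localized wave.

```python
# Sketch: photon-number states of a single quantized mode.
# Assumption: a Fock space truncated at N levels (real QED modes are infinite-dimensional).
import numpy as np

N = 6                                  # truncation of the Fock space
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)           # annihilation operator in the |n> basis
number_op = a.conj().T @ a             # number operator a† a

# Its eigenvalues are the integers 0..N-1: photons are countable.
print(np.round(np.linalg.eigvalsh(number_op)).astype(int))  # [0 1 2 3 4 5]
```

The point of the sketch is only that counting lives in the mode's quantum state, not in any point-like position of the photon.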
  • asked a question related to Physics
Question
69 answers
LIGO and cooperating institutions obviously determine distance r of their hypothetical gravitational wave sources on the basis of a 1/r dependence of related spatial strain, see on page 9 of reference below. Fall-off by 1/r in fact applies in case of gravitational potential Vg = - GM/r of a single source. Shouldn’t any additional effect of a binary system with internal separation s - just for geometrical reasons - additionally reduce by s/r ?
Relevant answer
Answer
"LIGO and cooperating institutions obviously determine distance r of their hypothetical gravitational wave sources on the basis of a 1/r dependence of related spatial strain, see on page 9 of reference below. Fall-off by 1/r in fact applies in case of gravitational potential Vg = - GM/r of a single source."
No. Fall-off for a single point source goes as 1/r^2 for the field strength in the static case. The potential goes as 1/r, but it is the field strength that is measured (and gives the strain).
However, for any time-dependent radiation, the leading-order term of the field strength falls off as 1/r. This is true for dipole as well as for quadrupole radiation. Because of the appearance of time-dependent terms, the derivatives in the field equations produce all terms from 1/r, 1/r^2, ... to 1/r^s, where s would be 2 for (non-existent) monopole radiation, 3 for dipole radiation (electromagnetic, e.g.), 4 for quadrupole radiation (gravitation), and so on.
A consequence of the leading-order term being 1/r is that the energy current goes as 1/r^2 in leading order, and that means that energy can be radiated away. This would not be the case if the leading-order term fell off faster than 1/r.
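The energy argument in this answer can be illustrated numerically. A minimal Python sketch (normalized units and a hypothetical reference strain h0 are my assumptions, not LIGO data): if the amplitude falls off as 1/r, the energy flux falls off as 1/r^2, so the total power through a sphere of radius r is independent of r and energy can be radiated to infinity.

```python
# Check: a 1/r amplitude fall-off gives distance-independent radiated power.
# Hypothetical normalized units; h0 is an assumed strain at reference distance r0.
import math

h0, r0 = 1e-21, 1.0

def power_through_sphere(r):
    amplitude = h0 * r0 / r        # leading-order 1/r fall-off of the field
    flux = amplitude ** 2          # energy flux scales as amplitude squared, ~1/r^2
    return flux * 4 * math.pi * r ** 2   # flux times the sphere's area

p_near = power_through_sphere(1.0)
p_far = power_through_sphere(1000.0)
print(abs(p_near - p_far) / p_near < 1e-12)  # True: same power at any radius
```

Any faster fall-off (e.g. the static 1/r^2 field) would make this product vanish as r grows, which is exactly why only the 1/r term carries energy away.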
  • asked a question related to Physics
Question
4 answers
How should we represent our observations of a physical process, and investigate it further by conducting experiments or building numerical models? What basics does one need to focus on? Technically, how should one think? The first thing is understanding: you should be there! If we are modeling a flow, we have to be the flow; if representing, say, a ball, you have to be the ball, to better understand it! What else?
Relevant answer
Answer
Aditya Kumar Mishra: Replication is tough, though I do agree with the expert comments above that physical replication, like visualization, is a must. One important criterion, I feel, is that to assess applicability one must always specify the requirements, along with exactly which attribute of the previous results is of interest.
  • asked a question related to Physics
Question
11 answers
Complex systems are becoming one of the most useful tools in the description of observed natural phenomena across all scientific disciplines. You are welcome to share with us hot topics from your own area of research.
Nowadays, no one can encompass all scientific disciplines. Hence, it would be useful to all of us to know hot topics from various scientific fields.
Discussion about various methods and approaches applied to describe emergent behavior, self-organization, self-repair, multiscale phenomena, and other phenomena observed in complex systems are highly encouraged.
Relevant answer
Answer
Jiří Kroc: Greetings Prof. Kroc. In neurology the cutting-edge research is on 1) neurodegeneration, 2) neuroprotection, 3) the unification/entanglement between the nervous system and the immune system and 4) disorders of consciousness. thanks, Mustafa.
  • asked a question related to Physics
Question
17 answers
In my previous question I suggested using the ResearchGate platform to launch large-scale spatio-temporal comparative research.
The following is the description of one of the problems of pressing importance for humanitarian and educational sectors.
For the last several decades there has been a gradual loss of quality in education at all levels. We can observe that our universities are progressively turning into entertainment institutions, where student parties, musical and sport activities are valued more highly than studying in a library or working on painstaking calculations.
In 1998 Vladimir Arnold (1937–2010), one of the greatest mathematicians of our times, in his article “Mathematical Innumeracy Scarier Than Inquisition Fires” (newspaper “Izvestia”, Moscow) stated that the power players didn’t need all the people to be able to think and analyze, only “cogs in machines” serving their interests and business processes. He also wrote that American students didn’t know how to sum simple fractions. Most of them add the numerators and denominators of one fraction to those of the other; i.e., in their understanding, 1/2 + 1/3 is equal to 2/5. Vladimir Arnold pointed out that with this kind of education students can’t think, prove and reason; they are easy to turn into a crowd, easily manipulated by cunning politicians, because they don’t usually understand the causes and effects of political acts. I would add, for myself, that this process is quite understandable and expected, because computers, the internet and the consumer-society lifestyle (with its continuous rush for more and newer commodities, which we are induced to regard as healthy behavior) have wiped out young people’s skills in elementary logic and their eagerness to study hard. And this is exactly what consumer economics and its bosses, the owners of international businesses and local magnates, need.
I recall a funny incident that happened in Kharkov (Ukraine). A Biology student was asked what “two squared” was. He answered that it was the number 2 inscribed in a square.
The level and the scale of the educational and intellectual decline described can easily be measured with the help of the ResearchGate platform. It would be appropriate to test students’ logical abilities, instead of the guess-the-answer tests which have taken over all the universities within the framework of the Bologna Process, now marching victoriously across the territories of the former Soviet states. Many people remember that the Soviet education system was one of the best in the world. I have therefore suggested the following tests:
1. In a Nikolai Bogdanov-Belsky (1868–1945) painting, “Oral accounting at Rachinsky's People's school” (1895), one can see boys in a village school at a mental arithmetic lesson. Their teacher, Sergei Rachinsky (1833–1902), the school headmaster and also a professor at Moscow University in the 1860s, offered the children the following exercise to solve mentally (http://commons.wikimedia.org/wiki/File:BogdanovBelsky_UstnySchet.jpg?uselang=ru):
(10 × 10 + 11 × 11 + 12 × 12 + 13 × 13 + 14 × 14) / 365 = ?
(there is no provision here on ResearchGate to write squares of numbers; that's why I have written them as multiplications)
19th-century peasant children in bast shoes (“lapti”) were able to solve such a task mentally. This year, in September, this very exercise was given to senior high school pupils and first-year students of a university majoring in Physics and Technology in Kyiv (the capital of Ukraine), and no one could solve it.
2. Exercise of a famous mathematician Johann Carl Friedrich Gauss (1777–1855): to calculate mentally the sum of the first one hundred positive integers:
1+2+3+4+…+100 = ?
3. Albrecht Dürer’s (1471-1528) magic square (http://en.wikipedia.org/wiki/Magic_square)
The German Renaissance artist was fascinated by the mathematical properties of the magic square, first described in Europe in Spanish (1280s) and Italian (14th-century) manuscripts. He used the image of the square as a detail in his engraving Melencolia I, made in 1514, and included the numbers 15 and 14 in his magic square:
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
Ask your students to find regularities in this magic square. In case this exercise seems hard, you can offer them the Lo Shu square (2200 BC), a simpler magic square of the third order (the minimal non-trivial case):
4 9 2
3 5 7
8 1 6
4. Summing up of simple fractions.
According to Vladimir Arnold’s popular articles, in the era of computers and the Internet this test is becoming an absolute obstacle for more and more students all over the world. Any exercises of the following type will be appropriate for this part:
3/7 + 7/3 = ? and 5/6 + 7/15=?
I think these four tests will be enough. All of them test logical skills, unlike the tests created under the Bologna Process.
Dear colleagues, professors and teachers,
You can offer these tasks to the students at your colleges and universities and share the results here, on the ResearchGate platform, so that we all can see the landscape of the wretchedness and misery resulting from neoliberal economics and globalization.
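For readers who want to check the answers (or grade the tests quickly), all four exercises can be verified with a few lines of Python using only the standard library; the magic-square check below uses Dürer's square, and the `Fraction` class gives exact sums:

```python
from fractions import Fraction

# 1. Rachinsky's mental-arithmetic exercise
print((10**2 + 11**2 + 12**2 + 13**2 + 14**2) / 365)  # 2.0

# 2. Gauss's sum 1 + 2 + ... + 100, and the closed form n(n+1)/2
print(sum(range(1, 101)), 100 * 101 // 2)  # 5050 5050

# 3. Dürer's magic square: every row, column and diagonal sums to 34
d = [[16, 3, 2, 13],
     [5, 10, 11, 8],
     [9, 6, 7, 12],
     [4, 15, 14, 1]]
sums = ([sum(row) for row in d] +
        [sum(col) for col in zip(*d)] +
        [sum(d[i][i] for i in range(4)), sum(d[i][3 - i] for i in range(4))])
print(set(sums))  # {34}

# 4. Summing simple fractions exactly (no 2/5 here!)
print(Fraction(3, 7) + Fraction(7, 3), Fraction(5, 6) + Fraction(7, 15))  # 58/21 13/10
```

The point of the tests, of course, is that students should obtain these results mentally; the script is only for the grader.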
  • asked a question related to Physics
Question
42 answers
Time is what permits things to happen. However, as a physical quantity, time must emerge as a consequence of some physical law (?). But how could time emerge as a consequence of something if "consequence", "causation", implies the existence of time?
Relevant answer
Answer
A lot of the science we termed “metaphysics” long ago is just mathematics nowadays (set theory). So maybe it is more realistic if I change the title of the question: “Is time a mathematical consequence of phenomenological physics?”
Physics as a branch of science is a bit troublesome. It represents the scientific method to search for – and to describe – the mutual relations between all the observable and detectable phenomena in the universe. But since ≈1900 physics is also the continuation of the main aim of philosophy: understanding the nature of reality. Unfortunately the latter is not a “tangible” aim. Actually it must be some kind of a model that represents a mixture between philosophy, mathematics and physics. And last but not least, physics is not leading in this particular scientific process. Physics is providing assistance with the descriptions of the properties of the physical universe.
Time is not a “tangible” property itself because it is a kind of experience. Not only for humans, because everything in the universe experiences time. Time only exists if there is change, and it shows there is continuous change everywhere in the universe. But if everything changes, and everything influences everything at exactly the same moment because our universe is non-local, then change (time) is a basic “mechanism” of the structure of our universe.
The properties of our universe were once limited to the observable and detectable phenomena (classical physics). This is in contrast to philosophy, because from the beginning (≈ 600 B.C.) there was a concept of an underlying “mathematical” structure responsible for the creation of observable/detectable reality.
In physics, classical physics was replaced by “quantum physics” (Planck’s constant as the constant of energy), and quantum physics evolved into quantum field theory. Nowadays the leading theorists are convinced that matter emerges from the basic properties of the universal quantum fields. For three decades theorists have also been trying to incorporate gravity into QFT, and the consequence is that space itself must have a “non-visible” structure, a metric. This is in line with ancient Greek philosophers who already created a comparable concept (Parmenides and his followers).
Einstein’s opinion that time is relative is not really helpful. We can measure the rate of change of a decaying particle at moderate speed and of the same particle at nearly the speed of light. Einstein is shown to be right: accelerating the particle slows down the process of decay. But is this process “time itself”, or is it just the rate of change of a composite phenomenon? One would expect every theorist to be able to draw the conclusion that Planck’s constant, in relation to the constant speed of light (the linear velocity of a free quantum of energy), determines that time is a universal constant (the constant of physical change). This is in line with the expectation that at the smallest scale the complexity in nature is built on simplicity and logic (advocated by e.g. Steven Weinberg).
But in practice physics is about measurements and equations that describe the detected, standardized mutual relations between the measurable phenomena. If there is a theoretical problem in relation to the outcome of certain experiments, theorists frequently start to hypothesize the existence of a new particle, or even a new field, to dissolve the problem. Don’t ask a theorist about the “tangible” existence of Planck’s constant or the mechanism behind the speed of light. It seems that in physics every hypothetical concept is acceptable as long as it doesn’t violate the equations.
In other words, if we want to understand time we have to discuss the structure of the universe.
With kind regards, Sydney
  • asked a question related to Physics
Question
3 answers
A long copper plate is moved at a speed v along its length as suggested in the attachment. A magnetic field exists perpendicular to the plate in a cylindrical region cutting the plate in a circular region. A and B are two fixed conducting brushes which maintain contact with the plate as the plate slides past them. These brushes are connected by a conducting wire.