# Quantum Physics - Science topic

Explore the latest questions and answers in Quantum Physics, and find Quantum Physics experts.

Questions related to Quantum Physics

I am looking for someone to collaborate with regarding my research article...

[This picture is from class lecture slides, unfortunately whose source (and copyright attribution) is unknown to me and also course instructor.]

(n = principal, l = subsidiary/orbital, m = magnetic and s = spin quantum number)

As I went through the study materials, I noticed:

Auger Electron Spectroscopy (AES) peaks are implicitly numbered by the (absolute value of) magnetic plus/minus spin quantum numbers, e.g. one (1/2) for s, two for p (1/2, 3/2), three for d (1/2, 3/2, 5/2), four for f (1/2, 3/2, 5/2, 7/2); thus 1 for K, 1+2 = 3 for L, 1+2+3 = 6 for M, and 1+2+3+4 = 10 variants of Auger emission from N. (I know three shells are involved in an Auger transition, with these indices often left unwritten, e.g. Al KLL, U MNN...)

But for X-ray Photoelectron Spectroscopy (XPS), the peak labels involve the (absolute value of) subsidiary plus spin quantum numbers, that is, 1 for s (1/2), 2 for p (1/2, 3/2), 2 for d (3/2, 5/2), 2 for f (5/2, 7/2).

That means no XPS peaks with d_(1/2), f_(1/2) or f_(3/2) indices should be possible. But why is this so?

One could say that the interaction of the electron's spin magnetic moment with the subsidiary (orbital) magnetic moment (the so-called LS coupling) is more important for photoelectron emission, while spin plus orbital magnetic moment matters more for Auger emission. Why is this so? Whatever selection rules may be involved, why does the difference physically exist?

I know that LS coupling works better for lighter elements and jj coupling for heavier ones, but both involve l and s: all individual l sum to L and all individual s sum to S, and this L and S precess around the net magnetic moment in LS coupling; in jj coupling, the individual l and s combine to form individual j, all j combine to J and precess around the net magnetic moment. Nowhere do I see m in this picture.
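As a sanity check on the counting above, the j = l ± 1/2 doublet structure can be tabulated in a few lines (my own sketch; the rule itself is standard angular-momentum addition, and the function name is mine):

```python
from fractions import Fraction

# For a single electron with orbital quantum number l, the total angular
# momentum can take j = |l - 1/2| and l + 1/2. XPS resolves this
# spin-orbit doublet, so s (l = 0) gives a single peak (j = 1/2) while
# p, d, f give two peaks each. A j = 1/2 label never appears for d or f
# because j >= l - 1/2 (d: 3/2, 5/2; f: 5/2, 7/2).

def xps_j_values(l):
    """Possible j values for one electron with orbital quantum number l."""
    half = Fraction(1, 2)
    return sorted({abs(l - half), l + half})

for l, name in enumerate("spdf"):
    print(name, [str(j) for j in xps_j_values(l)])
```

This reproduces the XPS counting in the question (1, 2, 2, 2 peaks for s, p, d, f) and shows directly why d_(1/2) and f_(1/2) cannot occur.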


Is it a problem of philosophy, language, physics, thermodynamics, statistical mechanics, or brain physiology? Or something else? Or beyond understanding?

A physiological approach is discussed by Joseph LeDoux (in The Deep History of Ourselves, 2020), among other authors. A physics orientation is considered by Deepak Chopra, Sir Roger Penrose, and Brandon Carter in How Consciousness Became the Universe: Quantum Physics (2017). David Rosenthal has written several books of philosophy about consciousness. See also Bedau (1997) and Chalmers (2006). Which is the right conceptual reference frame? Or is more than one required?

I'm searching for a good collaborator or a research group that might want to tackle an interesting problem involving the relationship between quantum dots generating nanoparticle clusters and their DNA/protein corral. This relationship is captured by geometric proximity; that is, I'm looking for someone who might know how quantum mechanics affects such nanoparticles, for example how close one nanoparticle is to another nanoparticle or to a protein, and whether clusters of a given size form. Ping me if you're in the biosciences, computational biology, chemistry, biology or the physical sciences and think you might be able to shed some light on the above.

Literature searches show that there are many papers and books on emerging quantum theory, but it seems that no specific model has ever been proposed. Here is such a proposal: https://www.researchgate.net/publication/361866270

The model describes wave functions as blurred tangles, building on ideas from Dirac and from Battey-Pratt and Racey. The tangles are the skeletons of wave functions and determine all their properties. The preprint tries to be as clear and as pedagogical as possible.

In quantum theory, tangles reproduce spin 1/2, Dirac's equation, antiparticles, entanglement, decoherence and wave function collapse. More interestingly, the deviations from quantum theory that the model predicts imply that only a limited choice of elementary fermions and bosons can arise in nature. Classifying (rational) tangles yields the observed elementary and composed particles, and classifying their deformations yields the known gauge theories.

Given that the text aims to be as understandable and enjoyable as possible, feel free to point out any issue that a reader might have.

I was trying to get an insight into quantum computing (QC) for research purposes. However, every source was filled with technical terms and little explanation. This makes it very difficult for those of us who want to learn QC with no prior knowledge.

Is there any source where beginners can learn QC with zero background knowledge?

Thanks in advance.

If something without extension is next to something else that itself has no extension, it never manages to actually be SPACE. Instead it is the juxtaposition of non-extended singularities, manufactured into a matrix in whose connections singularities are impossible. It makes no sense to me how space can be the juxtaposition of non-extended and non-extendable single locales that generate a system of ways of articulating the spatial relations of all objects made of matter.

How can matter be made of nothing more than frequencies of strings working harmoniously? Quantum mechanics makes no sense to me; it must be wrong as a model, even though its models are enormously precise in some of their predictions.

If I am wrong and it is correct, can anyone please explain to me how something non-extended can be next to something else non-extended so that between the two of them they form a displacement? It's impossible, right? So please explain quantum mechanics to me.

Dear Sirs,

In the below I give some very dubious speculations and recent theoretical articles about the question. Maybe they promote some discussion.

1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also a part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter described by some more general laws? Can a physical law, as information, transform into energy and mass?

2.) Besides the above logical approach, one can come to the same question another way. Let us consider the transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information or some physical laws disappear. For example, a free particle has a momentum but not a position. The magnetic moment of a nucleus has a projection on the external magnetic field direction, but the transverse projection does not exist. So we cannot say that the nuclear magnetic moment moves around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.

One can hypothesize that if information is equivalent to some very small mass or energy (e.g. as shown in the next item), then it may be that some information or physical laws are lost, e.g. for an electron having extremely low mass. This conjecture agrees with the fact that objects with mass much greater than the proton's are described by classical Newtonian physics.

But one can raise an objection to the above view: a photon has no rest mass, and the neutrino rest mass, for example, is extremely small. Despite this, they have spin and momentum just as an electron does. This spin and momentum information is not lost. Moreover, the photon energy for long EM waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.

But there is possibly a solution to the above problem. A photon moves at light speed (the neutrino speed is very near light speed), which is why the physical information cannot be detached and carried away from the photon (information propagates at most at light speed).

3.) Searching the internet, I have found recent articles by Melvin M. Vopson which propose a mass-energy-information equivalence principle and its experimental verification. As far as I know, this experimental verification has not yet been done.

I would be grateful to hear your view on this subject.

Hi there,

I was wondering if anyone here is aware of any quantum physics demo kits that are commercially available.

I have so far found these two:

- Quantenkoffer by qutools - very comprehensive on multiple levels (beginner, intermediate, expert), but very expensive
- phasespacecomputing.com - higher level quantum information experiments, not sure about $$$

If anyone is aware of anything else, please add your information below. Shameless self-promotions are encouraged.

Thanks

Markus

Both the photon and the electron are spinning, polarized particles. They have coexisting wave and particle properties (wave-particle duality). In the double-slit interference experiment, a particle detector can be used to influence the phase angles of the particle waves such that the coherence is disturbed and the interference pattern is diminished or even disappears entirely. However, the disruption of the interference pattern cannot be used to prove the non-existence of the wave property, or to prove that wave and particle properties cannot coexist at the same time under observation (with a detector). Therefore, the "Complementarity Principle" is not true.

Do you have a different opinion? Please share it with us in this forum.

A rigid body with vertical proper length J rises along the Y direction in an inertial frame S(T,X,Y) with constant proper acceleration, so we may write the equation of hyperbolic motion of the body along the Y direction as:

1) J^2 = Y^2 - c^2 T^2

Using Born's definition of rigidity, the proper length J must be invariant under Lorentz transformations between instantaneously comoving inertial frames, where the squared proper length J^2 coincides with the squared line element along the Y direction: Y^2 - c^2 T^2. It is straightforward to see that this is the case only for boosts along the Y direction. If the velocity of the body and its comoving inertial frames has an additional constant component along the X direction, the line element is different, the vertical length J cannot be invariant in the comoving frames, and we get a violation of Born's rigidity.

It might be a stupid question to ask, but is it possible to change the strength of the vacuum fluctuations locally, i.e. create a region in which the local density of virtual particles is lower than elsewhere? Specifically, I am thinking of QED and a certain resonator in which all the supported ground-state wavefunctions have a "hole" (a vanishing wavefunction) in the middle of the resonator (is that even possible, given the completeness constraint on the wavefunctions?). As I understand it, this would imply lower vacuum fluctuations in that "hole".

My physical intuition tells me that such a device is impossible, because vacuum fluctuations are fundamentally linked to the Heisenberg uncertainty principle and because of the homogeneity of spacetime (at least in the framework of flat spacetime, ignoring the Unruh effect etc.). However, I do not yet have a good explanation which rules out the resonator I described above. An inhomogeneity in the vacuum fluctuations could be measured immediately, for example via the lifetimes of elementary particles.

I really look forward to references, opinions and suggestions on this topic.

Heidegger said that philosophy is *thinking*. What else is philosophy? What is the ultimate aim of philosophy? Truth? Certainty? …

Heidegger said that science is *knowledge*. What else is science? What is the ultimate aim of science? Knowledge? Truth? Certainty? …

This question is the first step of a project which is in the start-up phase, aimed at understanding the physical meaning of the exotic dimensions whose existence is proven in the work Linear microbundles.

We refer to Question 10.2 in the work Linear microbundles.

Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality — the participants remained deeply divided about how the theory should be interpreted.

Consider two particles A and B in translation with uniformly accelerated vertical motion in a frame S(X,Y,T), such that the segment AB with length L remains always parallel to the horizontal axis X (X_A = 0, X_B = L). If we assume that the acceleration vector (0, E) is constant and take the height of both particles to be Y_A = Y_B = 0.5 E T^2, the vertical distance between A and B in S is always (see fig. in PR - 2.pdf):

1) Y_B - Y_A = 0

If S moves with constant velocity (v, 0) with respect to another frame s(x,y,t) whose origin coincides with the origin of S at t = T = 0, then inserting the Lorentz transformation for A (Y = y, T = g(t - v x_A/c^2), x_A = v t) into Y_A = 0.5 E T^2, and the Lorentz transformation for B (Y = y, T = g(t - v x_B/c^2), x_B = v t + L/g) into Y_B = 0.5 E T^2, we get that the vertical distance between A and B in s(x,y,t) is:

2) y_B - y_A = 0.5 E (L^2 v^2/c^4 - 2 L v t/(c^2 g))

which shows that at each instant of time t the distance y_B - y_A is different, despite being always constant in S (eq. 1). Since the classical definition of translational motion of two particles requires that the distance between them remain constant, we conclude that in s the two particles cannot be in translational motion, despite being in translational motion in S.

More information in:

Please see the attached file RPVM.pdf. Any comment will be welcome.
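Eq. (2) above can be checked numerically by inserting the Lorentz transformations directly into Y = 0.5 E T^2. A quick sketch (my own, with arbitrary test values, in units where c = 1; g denotes the Lorentz factor, as in the post):

```python
# Numerical check of eq. (2): compute y_B - y_A once via the Lorentz
# transformation and once via the closed-form expression, and compare.
c = 1.0                                   # units with c = 1
v = 0.6
g = 1.0 / (1.0 - v * v / c**2) ** 0.5     # Lorentz factor
E, L, t = 2.0, 3.0, 1.7                   # arbitrary test values

# Positions and times of A and B in frame S, as functions of time t in s:
x_A = v * t
x_B = v * t + L / g
T_A = g * (t - v * x_A / c**2)
T_B = g * (t - v * x_B / c**2)

# Heights via Y = 0.5*E*T^2 (and Y = y for these boosts):
lhs = 0.5 * E * T_B**2 - 0.5 * E * T_A**2
rhs = 0.5 * E * (L**2 * v**2 / c**4 - 2 * L * v * t / (c**2 * g))
print(lhs, rhs)  # the two expressions agree to floating-point precision
```

The check confirms the algebra of eq. (2); whether the *interpretation* (loss of translational motion in s) follows is of course the question under discussion.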

More on this subject at:

You can find the wording in the attached file PR1-v3.pdf. Any comment will be welcome.

More on this topic at:

Hi,

I have a question in the field of computational physics. What is the physical meaning of the memory kernel in the generalized Langevin equation? As I am not a physicist, I have no feeling for this concept and need a simpler description.

Thanks a lot
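One common intuition for the question above: the memory kernel is "friction with memory". A minimal toy sketch (my own illustration with an exponential kernel; `simulate_gle` and its parameters are illustrative, not a standard API):

```python
import numpy as np

# In the generalized Langevin equation
#     m dv/dt = -integral_0^t K(t - s) v(s) ds + R(t)
# the instantaneous friction -gamma*v of the ordinary Langevin equation
# is replaced by a convolution over the velocity history. The memory
# kernel K(t) measures how strongly the bath still "remembers" (and
# pushes back on) the particle's past velocities; as K approaches a
# delta function, ordinary memoryless friction is recovered.

def simulate_gle(m=1.0, gamma=1.0, tau=0.5, v0=1.0, dt=0.01, steps=2000):
    """Deterministic part only (noise R set to zero), with an
    exponential kernel K(t) = (gamma/tau) * exp(-t/tau)."""
    v = np.empty(steps + 1)
    v[0] = v0
    for n in range(1, steps + 1):
        t_now = (n - 1) * dt
        t_hist = np.arange(n) * dt                  # past times 0 .. t_now
        kern = (gamma / tau) * np.exp(-(t_now - t_hist) / tau)
        friction = dt * np.sum(kern * v[:n])        # convolution with history
        v[n] = v[n - 1] - (friction / m) * dt
    return v

v = simulate_gle()
print(v[0], abs(v[-1]))  # the initial velocity is damped away, with memory
```

With a finite memory time tau the decay is delayed and can even oscillate, unlike the plain exponential decay of memoryless friction; that qualitative difference is exactly what the kernel encodes.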

Hi everybody, something has been occupying my mind these days: what is the importance of time-reversal symmetry breaking in condensed matter physics, e.g. in the Haldane model, which is my new research field? I would appreciate anyone who can answer conceptually and precisely, yet in simple words.

Thanks

An electron is in the state l = 0, m_s = 1/2 when the x-component of its magnetic moment is measured. What values may be obtained, and with what probabilities? What about the z-component of its magnetic moment?
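For reference, here is the standard textbook answer sketched in LaTeX (assuming the state is the S_z eigenstate with m_s = +1/2 and that the magnetic moment is related to the spin by μ = -g_s μ_B S/ħ):

```latex
% l = 0, m_s = +1/2: the spin state is the S_z eigenstate, which is an
% equal superposition of the two S_x eigenstates:
\left|{\uparrow}\right\rangle_z
  = \tfrac{1}{\sqrt{2}}\left|{\uparrow}\right\rangle_x
  + \tfrac{1}{\sqrt{2}}\left|{\downarrow}\right\rangle_x
% Measuring S_x therefore yields
S_x = \pm\tfrac{\hbar}{2},
\qquad
P\!\left({+}\tfrac{\hbar}{2}\right) = P\!\left({-}\tfrac{\hbar}{2}\right) = \tfrac{1}{2},
% while a measurement of S_z yields +\hbar/2 with certainty.
% The magnetic-moment components follow from
\boldsymbol{\mu} = -\,g_s \mu_B \,\mathbf{S}/\hbar,
% so \mu_x = \mp g_s \mu_B/2 \approx \mp \mu_B with equal probabilities.
```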

Heisenberg's uncertainty principle was initially proposed for the position-momentum conjugate pair. It states that concurrent precise measurements of the position and momentum of a subatomic particle are not possible. This idea has been extended to another pair of quantities, time and energy, without proper justification. There has therefore been an endless debate on the validity of the uncertainty concept for this second pair, for example:

· Can time be considered as an observable quantity?

· Are these variables dynamically conjugate, both in classical and in quantum mechanics?

· Does this pair exhibit similar principle as the position-momentum?

· The mathematics of the uncertainty of energy-time pair is not well defined as standard deviation of time does not make sense.

Furthermore, if a certain duration of time is necessary for the accurate measurement of a quantity like energy, then we should require the same for momentum. However, in the latter case it has been accepted, throughout the history of the uncertainty principle, that the momentum of a particle can be measured with arbitrary accuracy irrespective of the duration of the measurement.

If momentum should be treated like energy, then it is better to separate Heisenberg's uncertainty principle from the inevitable inaccuracy of measuring some physical quantities within a short interval, which is well understood in science. They seem to be completely different issues that are kept under the same title.

Actually more constraints...

The discovery of the mysterious He-2-4 nucleus and its geometry enabled me to calculate its mass accurately in two parity ways.

Using some logic, I hit upon He-2-4, only to learn its mysterious properties, known but not taught much: (0) Most Symmetric, (1) Most Abundant, (2) Most Stable, and (3) Mother Nucleus to the rest except Hydrogen.

A geometric model satisfying: (1) the Quark model, (2) QCD, (3) Yukawa's strong-force quantum of 200 (or 204) electron masses, (4) the Thomson problem of equidistributing charges, (5) CCP packing of energy or mass in the most efficient way in Nature, which makes sense for a nucleus, and (6) the Vector Equilibrium model, while showing evidence of a gravity gradient through density.

Enabling accurate calculations in two parity steps with 99% accuracy.

Dear Friends,
I feel obliged to share this with you as some of you wanted me to share. This is a very mystical experience to me, as it is not me but some transcendental revelation. Many discoverers have experienced this. When I re-read the paper, I get shudders and goosebumps.

Please read my paper on He-2-4 to understand the geometry of the nucleus, which is the mother nucleus for the rest of the nuclei, along with the father, Hydrogen. The first compound to be synthesized in the Universe was HeH (helium hydride), as has since been found.
It is one of the most-read scientific papers on ResearchGate, with 1,050 reads (along with some more reads on another version of the paper) and 3 recommendations. Reads for such an esoteric subject are rare. It often stands out with "Most Read" status, and from 2017 to 2018 it held this record.

In 2008, I hit upon a connection between the photon and the electron/positron without violating any laws of physics except one experiment, the most famous failed experiment, i.e. Michelson-Morley. Then in 2010, trying to fit my theory to the strong/nuclear force, I hit upon this mysterious nucleus which we are not taught about. It took me until 2017 to get the geometry and the calculations for its mass right.

My theory is nothing but 3 orthogonal fields, shown in most EMF and QM experiments in physics: Electric, Magnetic, and Space are orthogonal fields. Unfortunately, we take the (vacuum of) Space for granted and fail to understand that it is a field with supra-superfluidity (it is like a ghost passing through you without you feeling it, as in the case of millions of neutrinos passing through you unfelt; on the yogic path you could wake up to this field). Some of you have seen my yogic trick of temporarily increasing the size of my fingers or toes by meditation, but the second part of the experiment demonstrates that Space is a dynamic supra-superfluid field.

I was only encouraged to publish the paper when I found I was using the same units as Yukawa, the first Nobel Prize winner for the strong force. His approach used the more complex modified Dirac equations, but mine was a simple argument: that Nature would reuse components previously built in its walk up the stack of reality. By the same argument, the electron and positron are 3D vortexes woven from photons (under space-time constraints).

Like crystal molecules having geometric structure, nuclei have it too. Modern physics cannot give us a simple picture of the nucleus.

The He-2-4 nucleus is the most symmetric, most abundant, and most stable, and it tries to satisfy the Quark and QCD models. It uses Yukawa's unit of strength of 200 electron masses (actually 204), the sphere-packing problem (for packing the most mass per unit volume, which makes sense for a nucleus) for the inner layer, and the Thomson problem (of distributing n charges in space for the minimal energy or entropy needed for stability) for the outer layer. A gradient of gravity is then seen (which justifies treating Space as a field); e.g., gravity decreases with altitude, and lighter, less dense things float up while heavier, denser things sink.

The noble gases are not only stable because of the 8 electrons in their outer orbit (except He, where it is 2); the noble nuclei are also stable, with quanta in units of He-2-4 (which is not taught). The analogy: a skyscraper is not stable just because the top floor is stable; the ground foundation also needs to be stable.

It also satisfies the equipartition theorem and Nature's principle of reuse/recycle, which modern physics does not use. It uses the muon and anti-muon as building blocks, the next higher energy/mass resonance of the electron and positron.

It proposes the use of a Space field, which can be called Dark Energy, the Higgs field, Ether, Prana or Chi. It uses the energy-equivalence principle to see how much electrostatic energy is required to hold the cluster of 12 nodes and the 6 nodes in the next layer.

There are two parity ways in which the mass of the nucleus is calculated to an accuracy of 99% (variation will occur based on the speed of the nucleus and secondary effects). In complex calculations, we match results to all digits along two different paths! The baffling part is that all the complex vector and energy operations along two different paths lead to the same result while satisfying all constraints!

It also demonstrates gravity at work at this fundamental level (which is addressed in another paper).

The best approach to this question would be energy to matter; with the help of a Feynman diagram, the probability for pair production could be calculated. In quantum physics, it would be either 0 or 1. If it turns out to be 0, the answer is no; if it is 1, the answer is yes. This is the most important question at the moment, as it has consequences across the diverse and disparate disciplines with which many of us are associated.

I'm not an expert in quantum mechanics, but I've come across misconceptions presented in the media. Have you ever found misconceptions about quantum mechanics? Could this type of distorted information harm the teaching and learning of such complex concepts?

I would like to work on quantum gravity, but general relativity is not complete, so I first want to work on GRT. I am a beginner in this subject, and GRT fails in a few aspects. Can anyone suggest research papers? Please send me your answers.

Generally, when we calculate the carrier density in 2DEG from SdH oscillations (Field dependence of sheet resistance) and QHE (field dependence of Hall resistance) it should match. In some cases it was found that carrier density calculated using both data differ. What is the reason behind this difference? What is the physics behind the calculation of carrier density from SdH oscillations and Hall resistance data?
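For concreteness, here are the two standard extraction formulas in a few lines (my own sketch with hypothetical numbers; a disagreement between the two densities often signals a second occupied subband or a parallel conduction channel, since SdH counts only the oscillating high-mobility carriers while the Hall slope averages over all of them):

```python
# Standard 2DEG textbook relations:
# - SdH oscillations are periodic in 1/B with frequency B_F (tesla);
#   for a spin-degenerate single subband, n_SdH = 2 * e * B_F / h.
# - The low-field Hall slope gives n_Hall = 1 / (e * dR_xy/dB).

E_CHARGE = 1.602176634e-19   # elementary charge, C
H_PLANCK = 6.62607015e-34    # Planck constant, J s

def n_from_sdh(B_F, spin_degeneracy=2):
    """Sheet density (1/m^2) from the SdH frequency B_F in tesla."""
    return spin_degeneracy * E_CHARGE * B_F / H_PLANCK

def n_from_hall(dRxy_dB):
    """Sheet density (1/m^2) from the low-field Hall slope in ohm/tesla."""
    return 1.0 / (E_CHARGE * dRxy_dB)

n = n_from_sdh(10.0)  # hypothetical sample with B_F = 10 T
print(n, n_from_hall(1.0 / (n * E_CHARGE)))  # single-channel case: equal
```

When the measured `n_from_sdh` falls short of `n_from_hall`, checking for a second SdH frequency (a second subband) is a common first diagnostic.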

It is well known that for a closed-shell molecule (all electrons paired up), the HOMO-LUMO gap is related to its stability. However, I often see the same argument used for open-shell systems (unrestricted calculations).

**In such a case, the authors consider the energies of both the alpha and beta spin orbitals, and the orbital with the highest energy is taken as the SOMO. Similarly, the LUMO is chosen from among both alpha and beta orbitals, and the resulting gap is taken as the SOMO-LUMO gap.** The authors then discuss the stability of the system based on that value.

However, to the best of my understanding, for an unrestricted calculation three SOMO-LUMO gaps can be calculated: (1) the gap between the alpha-spin HOMO and the alpha-spin LUMO, (2) the gap between the beta-spin HOMO and the beta-spin LUMO, and (3) the method I mentioned above (considering both alpha and beta).

So, my questions are the following,

a) **Is it technically correct to relate stability with the SOMO-LUMO gap calculated by method 3 for an open-shell system?** Is it even possible to draw such a relation? How reliable are such a value and such a correlation? If yes, is there any reason why we ignore the other two gaps?

b) Do the gaps obtained from an unrestricted calculation have any practical significance at all? (As argued here: https://joaquinbarroso.com/2018/09/27/the-homo-lumo-gap-in-open-shell-calculations-meaningful-or-meaningless/)

Any insightful response will be welcome. Thank you in advance.
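To make the three definitions concrete, here is a small sketch (my own illustration with hypothetical orbital energies, not taken from any quoted paper):

```python
# Hypothetical orbital energies in eV; function name and numbers are
# illustrative only. For an unrestricted calculation, three different
# "SOMO-LUMO" gaps can be formed from the alpha/beta orbital energies.

def three_gaps(occ_a, virt_a, occ_b, virt_b):
    gap_alpha = min(virt_a) - max(occ_a)   # (1) alpha HOMO -> alpha LUMO
    gap_beta = min(virt_b) - max(occ_b)    # (2) beta HOMO -> beta LUMO
    somo = max(occ_a + occ_b)              # highest occupied of either spin
    lumo = min(virt_a + virt_b)            # lowest virtual of either spin
    gap_mixed = lumo - somo                # (3) spin-mixed "SOMO-LUMO" gap
    return gap_alpha, gap_beta, gap_mixed

# A doublet-like example where the singly occupied alpha orbital is highest:
ga, gb, gm = three_gaps(
    occ_a=[-10.2, -6.1], virt_a=[-1.0, 2.3],
    occ_b=[-10.0],       virt_b=[-2.5, 1.9],
)
print(ga, gb, gm)
```

Note that by construction the spin-mixed gap (3) can never exceed gaps (1) or (2): the SOMO lies at least as high as either spin's HOMO, and the mixed LUMO lies at least as low as either spin's LUMO.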

The Editors of Reports on Progress in Physics have chosen to keep their readers in the dark rather than deal with the physical evidence, as presented in the following Comment they rejected:

We would like to comment on your article “Entanglement: quantum or classical?”, published 26 May 2020, in Reports on Progress in Physics, Volume 83, Number 6, by Dilip Paneru et al., 2020, Rep. Prog. Phys. 83 064001.

That review article was rather misleading because real progress has been made in disproving and rebutting the concept of quantum nonlocality which is the underlying theme of your article mentioned above.

The following references, unambiguously and comprehensively, disprove and rebut the physically meaningless concept of remote quantum collapse or nonlocality of the global wavefunction of “entangled“ states, in general, and in the context of photonic systems, in particular:

1. Robert B. Griffiths, "Nonlocality claims are inconsistent with Hilbert-space quantum mechanics", Phys. Rev. A 101, 022117 – Published 28 February 2020.

2. F. J. Tipler, "Quantum nonlocality does not exist", PNAS 111 (31), 11281-11286, August 5, 2014;

3. A. Vatarescu, “The Scattering and Disappearance of Entangled Photons in a Homogeneous Dielectric Medium”, Rochester Conference on Coherence and Quantum Optics (CQO-11),

4. S. Boughn, “Making Sense of Bell’s Theorem and Quantum Nonlocality”, Found. Phys., 47, 640-657 (2017)

5. A. Khrennikov, “Get Rid of Nonlocality from Quantum Physics “, Entropy, 2019, 21, 806

6. M. Kupczynski, “Closing the Door on Quantum Nonlocality “, Entropy, 2018, 20, 877.

Consequently, the readers of IOP Reports on Progress in Physics should be informed that your article does not present an objective and true picture of the state of knowledge and understanding of the alleged quantum nonlocality.

Does anyone know of applications of multiple sums of a sequence?

I know of the multiple zeta values (which are multiple sums of 1/n^s). These have many applications in quantum physics: QED, QCD, the connection between knot theory and quantum physics, ...

Does anyone know of potential applications of this more general form, a general multiple sum? I have written an article about it and about its applications, including partition identities and polynomial identities. I would like to know whether anyone knows of applications outside mathematics, or additional applications within math.
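To make "multiple sum" concrete, here is a small sketch (my own illustration) that approximates a double zeta value by truncation and checks it against Euler's classical identity ζ(2,1) = ζ(3):

```python
# A multiple zeta value is a nested multiple sum, e.g.
#     zeta(s1, s2) = sum over n1 > n2 >= 1 of 1/(n1^s1 * n2^s2).
# Truncating at N terms gives a numerical approximation.

def zeta(s, N=4000):
    """Ordinary zeta function, truncated at N terms."""
    return sum(n ** -s for n in range(1, N + 1))

def mzv2(s1, s2, N=4000):
    """Double zeta value sum_{N >= n1 > n2 >= 1} n1^-s1 * n2^-s2, truncated."""
    total = 0.0
    inner = 0.0                      # running sum over n2 < n1 of n2^-s2
    for n1 in range(2, N + 1):
        inner += (n1 - 1) ** -s2
        total += inner * n1 ** -s1
    return total

print(mzv2(2, 1), zeta(3))  # Euler: zeta(2,1) = zeta(3) = 1.2020569...
```

The inner running sum avoids recomputing the nested sum from scratch, which is the same telescoping trick that makes these multiple sums tractable numerically.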

If we think we can observe that the same quantum particle can be simultaneously in different places, then perhaps these places are really only one: a clue pointing toward a holographic theory of the Universe?

The Special Issue "Fibonacci and Lucas Numbers and the Golden Ratio in Physics and Biology" is intended to be a repository for this question.

Apart from this opportunity to publish new results, I am very interested in unexpected answers on this subject.

We know that these principles apply to particles. How are they now applied in a magnetic field?...

Explain exactly what the detector does and if the choice of detector affects the outcome.

Einstein and Bohr had some discussions on this. What's the current situation?

How does one reconcile the fundamental principle of quantum physics that information cannot be destroyed with the GR statement that black holes destroy information?

In everyday speech, chaos means disorder, crowds, unpredictability, etc. In philosophy, chaos is used in the sense of proto-matter, primordial space, that is, what came before order was brought into our world. In psychology, the word chaos raises fears that order will disappear and disorder will reign again. Chaos is a new field of science, but also a new way of observing the world. So, for physicists, engineers, economists, doctors, biologists, sociologists, psychologists, psychiatrists etc., chaos is an incentive to re-examine their equations, data, knowledge and beliefs. Chaos has enabled a systematic approach to phenomena and systems of great internal complexity, as well as an understanding of seemingly extremely simple phenomena. Many (but not all) scientists agree that after the theory of relativity and quantum physics, chaos is the third scientific revolution.

Is the theory of chaos the third scientific revolution?

We know that for spin-1/2 particles we can find the **creation** and **annihilation** operators from the **spin** operators by using the **Jordan-Wigner transformation**, and that for spin-1 particles we can utilise the **Holstein-Primakoff transformation** for mapping **bosonic creation** and **annihilation** operators to the **spin** operators. But suppose we need to find the same relationship for spin-3/2 or spin-2 particles, or other higher-spin particles; how can we approach it?

I will be highly grateful if someone kindly clarifies my doubt.
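One standard observation (not from the question itself): the Holstein-Primakoff construction is not limited to spin 1; it maps the spin operators to a single boson for arbitrary spin S, so it already covers spin 3/2 and spin 2. A sketch in LaTeX:

```latex
% Holstein-Primakoff for general spin S (S = 1/2, 1, 3/2, 2, ...):
S^{+} = \hbar\,\sqrt{2S - \hat{n}}\; b, \qquad
S^{-} = \hbar\, b^{\dagger}\sqrt{2S - \hat{n}}, \qquad
S^{z} = \hbar\,(S - \hat{n}), \qquad \hat{n} = b^{\dagger} b,
% valid on the physical subspace n = 0, 1, ..., 2S; the square root is
% typically expanded in powers of 1/S (spin-wave theory).
% Fermionic (Jordan-Wigner-type) representations at higher spin are
% less direct and generally require several fermion flavors per site.
```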

Imagine two fermion wave-packets of Gaussian form whose paths cross one another, see figure. The blue circles represent the two wave-packets at different times. At a certain time the wave-packets coincide.

Assume that the fermions are identically polarized and that the two wave-packets |ψ>_A and |ψ>_B travel with the same group-velocity along the axis *z*, *v*_{A,z} = *v*_{B,z} = *v*_z. In addition, assume that the transversal group-velocities are opposite, *v*_{A,⊥} = -*v*_{B,⊥} = *v*_⊥.

**What happens in the region where the wave-packets coincide?**

Recall that identical fermions cannot occupy the same cell in phase space. Recall also that for Gaussian wave-packets ΔP Δr = ħ, where ΔP is the width of the distribution of the linear momentum. It seems to me that the two wave-packets should "run away" from the region of overlap.

**But how?** Changing the group-velocity of a wave-packet requires a force, and no force is acting here: the two wave-packets propagate in free space.

I am stuck between **quantum mechanics** and **general relativity**. The mind-consuming scientific humor, ranging from continuous and deterministic to probabilistic, seems to have no end. I would appreciate any words that can help me understand at least a bit, with relevance.

Thank you,

Regards,

Ayaz

Do you think then that there is a universality of this particular ratio?

I, for my part, highlighted this ratio in these different areas:

I have tried to study quantum mechanics before but never understood it. After learning the basics of quantum computing and quantum information, including quantum hardware and qubit types, I wish to start studying quantum physics again. Which areas of quantum mechanics do quantum information systems relate to or build on?

The very common experiment in optics used to demonstrate that light behaves as a wave is single-slit diffraction.

If we assume that the thickness of the barrier is 0.1 mm, then the length of the slit along the optical axis is a long route: a green photon would measure it as nearly two hundred times larger than its own size.

Now the question is: how does the photon behave along that long route? Does it behave as a particle or as a wave? If the exit of the slit or pinhole causes the photon to behave as a wave, why wouldn't the entrance do the same? And if we accept that the photon behaves like a wave as it enters the single slit or pinhole, then formally we should apply the Fresnel diffraction equation from the entry of the slit, which leads us nowhere.

In my opinion, wave-particle duality leads us only to a useful approximation; it does not speak about reality, as it cannot explain a class of experiments that have unfortunately been ignored or left behind, such as the glory of the shadow, or the stretching of shadows when they meet each other, and so on.

For sure, wave-particle duality is not the end of science, and surely five hundred years from now people will not view existence the way we do, just as we do not see things the same way our ancestors did. So we should be open-minded, to be able to open new horizons.
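For reference, the standard wave-picture prediction for the far-field pattern can be sketched numerically. This is only a toy sketch of the textbook Fraunhofer result (it says nothing about the photon's path inside the slit); the numbers assumed here are a 532 nm green photon and a 0.1 mm slit width.

```python
import numpy as np

# Fraunhofer single-slit intensity: I(theta) = [sin(beta)/beta]^2,
# with beta = pi * a * sin(theta) / lambda.
lam = 532e-9          # wavelength, m (assumed: green light)
a = 1e-4              # slit width, m (assumed: 0.1 mm)

theta = np.linspace(-0.02, 0.02, 2001)        # observation angle, rad
beta = np.pi * a * np.sin(theta) / lam
I = np.sinc(beta / np.pi) ** 2                # np.sinc(x) = sin(pi*x)/(pi*x)

# The wave picture predicts the first minimum at sin(theta) = lambda / a:
theta_min = lam / a
print(f"first minimum near {theta_min * 1e3:.2f} mrad")   # ~5.32 mrad
```

Whatever one thinks the photon "does" inside the slit, this is the pattern the wave description predicts at the screen.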

Whitehead's "Process and Reality" dates from 1929.

In it, he argues – among many other things – that "It has been a defect in modern philosophies that they throw no light whatever on any scientific principle." (Pt.2, §IV, IV).

Does his own philosophy ("the philosophy of the organism") stand up to that requirement when confronted with contemporary scientific knowledge?

The two versions have reached 1100 Reads.

The first paper, "The Walk of Reality - He-2-4 Nucleus Model", went on to hold Most Read status from Aug 2017 till April 2019. I have periodically shared this in my past posts.

The He-2-4 nucleus is not listed in the Periodic Table, but it is the most important one for understanding Nature and Creation. It is (1) the most symmetric, (2) the most stable, (3) the most abundant, and (4) the mother nucleus to all the rest of the nuclei, except hydrogen. It also satisfies QCD and the quark models. Its composition uses 204 electron-mass quanta, the same unit that Yukawa, the first Nobel Prize winner for the strong force, derived; I came to know this only after I had used the same unit. My argument was that Nature would like to reuse its previously built particles, so I chose muons. It also satisfies Nature's HCP packing of maximum mass/energy per volume, which makes sense for an ideal nucleus.

I arrived at this special nucleus in 2010 using my arguments on 3-field symmetries in 3D space, while trying to satisfy the quark and QCD models.

There is theoretical evidence that this hypothesis is true. In particular, it is strongly supported by the Casimir-type electron stability mechanism suggested by Prof. Hal Puthoff in his fine work: Puthoff H.E., "Casimir vacuum energy and the semiclassical electron", Int. J. Theor. Phys., 46, 2007, p. 3005-3008, as well as by the following works:

Morozov V.B., "On the question of the electromagnetic momentum of a charged body", Phys.-Usp. 54, 371 (2011), doi:10.3367/UFNe.0181.201104c.0389;

Rohrlich F., "Self-Energy and Stability of the Classical Electron", American Journal of Physics, 28(7), 1960, p. 639-643;

Prykarpatsky A.K., Bogolubov N.N. (Jr.), "On the classical Maxwell-Lorentz electrodynamics, the inertia problem and the Feynman proper time paradigm", Ukr. J. Phys., 2016, Vol. 61, No. 3, p. 187-212;

and Medina R., "Radiation reaction of a classical quasi-rigid extended particle", J. Phys. A: Math. Gen. 39 (2006) 3801-3816, doi:10.1088/0305-4470/39/14/021.

The last one is very instructive and also solves the well-known "4/3" problem formulated by Abraham, Lorentz and Dirac more than 100 years ago.

Let's talk: what is our self other than our memories (if all the information we have acquired is just different types of memory)?

I am curious about what level of existence or non-existence (however one qualifies those two terms, or the two listed in the title, 'living matter' and 'non-living matter') can collapse the wave function, so as to help research on understanding the "mesoscopic" divide between the quantum (microscopic) and classical (macroscopic) realms of physics.

Here are some articles I've run across and am using for the last paper of my undergraduate work:

The diffraction of light has been attributed to its wave quality, since it seemed there was no other way to describe that phenomenon through its particle quality; subsequently, light was said to exhibit wave-particle duality.

I want to overlap the Cs atoms with a nanofiber. I am using two cameras, but it is still not clear, so I added an external coil to the AH (anti-Helmholtz) coils to move the atoms flexibly. Now the issue is how to minimize the external magnetic field!

For instance, both nitride NW-QDs and h-BN point-defect-based SPEs seem to exhibit photon entanglement. Which platform will provide stronger entanglement?

I suspect that the purity of the single photon emitted from either source will determine which source offers the best route to entanglement. Or perhaps I am wrong, i.e. neither of these emitters (whether based on InGaN QDs or h-BN point defects) produces entangled states. Could any of you please share your experience?

When deriving the Hartree-Fock method, we minimize the electronic energy with respect to all molecular orbitals, with the constraint of orthonormality of the molecular orbitals, using the method of Lagrange multipliers. Is there a fundamental reason why the molecular orbitals need to be orthogonal? Does it ensure a lower energy compared to a non-orthogonal set of molecular orbitals?

Thank you very much for your help
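One way to see that orthogonality is a convenience rather than a physical restriction: the occupied-orbital subspace (and hence the Slater determinant, up to normalization) is unchanged by any invertible linear mixing of the occupied orbitals, so a non-orthogonal set can always be orthonormalized without changing the energy. A minimal numerical sketch, using random vectors as stand-in "orbitals" and Löwdin symmetric orthogonalization C' = C·S^(-1/2):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((6, 3))      # 3 non-orthogonal "orbitals" in a 6-dim basis

S = C.T @ C                          # overlap matrix, S_ij = <i|j>
w, U = np.linalg.eigh(S)             # S = U diag(w) U^T, w > 0
S_inv_sqrt = U @ np.diag(w ** -0.5) @ U.T

C_orth = C @ S_inv_sqrt              # Lowdin-orthogonalized orbitals
overlap = C_orth.T @ C_orth
print(np.allclose(overlap, np.eye(3)))   # True: orthonormal, same subspace
```

Since both sets span the same subspace, imposing orthonormality costs no energy; it simply makes the Lagrange-multiplier machinery (and the interpretation of orbital energies) much cleaner.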

As you know, different studies show that a shell improves the QY of QDs. I want to know: how can the shell improve the QY of QDs?

Do you think that the recent paper of Aharonov et al. resolves the long-standing problem of particle-wave duality in quantum physics?

Are you aware of more recent works?

"Finally making sense of the double-slit experiment"

Yakir Aharonov, Eliahu Cohen, Fabrizio Colombo, Tomer Landsberger, Irene Sabadini, Daniele C. Struppa, and Jeff Tollaksen PNAS June 20, 2017 114 (25) 6480-6485; first published May 31, 2017 https://doi.org/10.1073/pnas.1704649114

When we consider a two-particle interaction in low-dimensional semiconducting systems, we include the Coulomb interaction between the two particles in the form (q1·q2)/|**r1** - **r2**| in the Hamiltonian. If we solve this Hamiltonian using a variational technique, is it necessary to include the correlation between the two particles in the wavefunction as well? Can anyone give a clear idea about this?

I'm currently buying some anti-CD3 and anti-CD28 to activate T cells. I also notice that some of these clones have been conjugated with fluorophores for the purpose of labeling T cells for flow cytometry.

One thing I'm very interested in is that once the conjugated antibody binds to T cells, it will first activate them, which will definitely affect the original T-cell status (the status you want to check).

It reminds me of quantum physics. If you observe it, you change it.

I'm sure a lot of people never consider the effects of labeling T cells. How can we ensure that a marker antibody won't affect T-cell functions?

Assume that the particle consists of a photon moving circularly in a loop, creating a standing wave obeying the rule that its closed path satisfies 2πR = nλ (1) (a resonator whose length 2πR is n times the photon wavelength). Knowing the particle's mass (experimental value), we can then calculate its radius.

Let's substitute λ = c/ν (2) into equation (1), where ν is the photon frequency. We then get 2πR = nc/ν (3). Now, if we assume that the particle's mass is of EM origin and m = E/c² (4), where E = hν (5) is the energy of the circulating photon described above, we can rewrite (4) as m = hν/c² (6), or ν = mc²/h (7) (h is the Planck constant, of course). Putting (7) into (3) gives 2πR = nc/(mc²/h) (8), or, simplifying, R = nh/(2πmc) (9).

Now, let's consider the proton. Assuming n = 4 in eq. (9) and m = 1.672621637(83)×10⁻²⁷ kg (experimental value), we can calculate the proton's radius to be R = **0.84124 fm**, which agrees with the experimental value of **0.84184 ± 0.00067 fm** (the most accurate experimental value, measured in muonic hydrogen in 2010). You can read more about this theory and mechanism in my paper here:

What do you think?
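The arithmetic in eq. (9) is easy to check numerically. The sketch below simply evaluates R = nh/(2πmc) with n = 4 and the proton mass quoted above; it verifies only the arithmetic, not the physical model itself.

```python
import math

h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
m_p = 1.672621637e-27     # proton mass, kg (value quoted in the post)
n = 4

R = n * h / (2 * math.pi * m_p * c)    # metres
print(f"R = {R * 1e15:.5f} fm")        # ~0.84124 fm
```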

Just uploaded. Physical reality is conceived as being essentially classical and determinate. But due to the limitations of our neurone-based perceptual mechanism, we experience it in terms of three 'perceptual categories': 1) 'classical', where observations don't affect the observed and our knowledge is certain to within experimental error; 2) 'quantum', where they do, and our knowledge is uncertain; 3) a hypothetical, undetectable 'subliminal substrate'. Our overall view of the universe is then inherently incomplete, and apparent quantum indeterminacy is due to this. Anyone interested in discussing?

Exploration of various psychic abilities seemed to have reached a dead end under deterministic Newtonian physics. Now, with quantum physics making much progress, should psychic abilities be given more consideration, so as to explore them further and gain a deeper understanding of them?

What are Ramanujan's modular functions, what are they for, and how do they apply in relativity, quantum physics and string theory? And where can I find a bibliography about them?

OK, I will try to put into words what I would like to understand, in a philosophical and mathematical sense. Assuming that the paradigm "the observer determines (his) reality" is true, as was recently confirmed, I was wondering if there is a physical explanation and/or possibility to switch operators, or transform one operator into another. In the sense that if you have a certain (mental) filter, you see a situation with that bias; you can change your mental filter (if you are lucky) by an act of "transformative insight" and "see the world with other eyes". Now, is there a mathematical representation for this when applying an operator to the wavefunction? I know, of course, that one can rotate coordinates to obtain the eigenvalues in a different basis. Can this somehow be used to re-program an operator? Of course, staying within the Heisenberg limit.
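On the mathematical side, the basis-rotation idea mentioned above does have a standard form: under a unitary U, an operator A transforms as A' = U A U†, which preserves the eigenvalues (the possible measurement outcomes) while changing the eigenbasis (the "filter"). A toy single-qubit sketch, using the Pauli operators as an assumed example:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)              # sigma_z
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard (unitary)

# Conjugation by the Hadamard "re-programs" sigma_z into sigma_x:
sx = H @ sz @ H.conj().T
print(np.allclose(sx, np.array([[0, 1], [1, 0]])))   # True

# The spectrum is invariant: outcomes stay +/-1; only the basis
# in which they are obtained has changed.
print(np.allclose(np.linalg.eigvalsh(sz), np.linalg.eigvalsh(sx)))   # True
```

So switching the "filter" corresponds to a unitary change of measurement basis; what a unitary cannot do is alter the set of possible outcomes or beat the Heisenberg limit.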

nano devices, nano materials, semiconductor physics, quantum physics, quantum mechanics, quantum wire.

What is the relationship between quantum physics and artificial intelligence?

Since J. von Neumann, physicists have stuck to categories of Hilbert spaces to model quantum phenomena. Categories of modules over a ring might represent an alternative if we add axioms (e.g. the existence of particular limits or co-limits) that respond to the experimental requirements.

A very general setting for this purpose would be abelian categories. Have there been attempts to make use of them?


Recently I've started to learn about quantum chemistry programming. I need some advice about books I should read and skills I should learn.

I have some experience working with the C++ language, and I am also familiar with quantum physics basics.

My major goal is to join the developer team of the Gaussian quantum chemistry software, and I need a clear path to reach this goal.

Any suggestions would be greatly appreciated.

We have learned that quantum systems remain in a superposition state unless detected. Can anyone elaborate on this in theoretical terms?

Are there any good, comprehensive review articles on qubits? That is, one that gives a summary of all the possible platforms and compares their strengths and weaknesses: trapped ions, superconducting qubits, nuclear spins, quantum dots, etc.


**Like Crystal Molecules Having Geometric Structure, the Atom Has It Too.**

Modern physics can't give us a simple picture of the atom.
Please read my paper on He-2-4 to understand the geometry of the nucleus, which is the mother nucleus for the rest of the nuclei, along with father H.
It is the most symmetric, the most abundant and the most stable, and it also satisfies the quark model, QCD, Yukawa's unit of strength of 200 electron masses (actually 206), Nature's packing of spheres for the inner layer and the Thomson problem for the outer layer.

It also satisfies the equipartition theorem and Nature's principle of reuse/recycle, which modern physics does not use. It uses the muon and anti-muon as building blocks.

It proposes the use of a space field, which can be called dark energy, the Higgs field, ether, prana or chi.

It uses an energy-equivalence principle to see how much electrostatic energy would be required to hold the cluster of 12 nodes, and 6 nodes in the next layer.
There are two ways the mass of the nucleus is calculated, to an accuracy of 99%. In complex calculations, we match results up to 4 digits!

It also demonstrates gravity at work at this fundamental level, but that paper will come later.

It also explains why the noble gases are stable with their outer 8 electrons, while He-2-4 has only 2: it is because the nucleus is stable, and when the nucleus is not stable, weak-force decay happens.