Science topic
Hamiltonian - Science topic
Explore the latest questions and answers in Hamiltonian, and find Hamiltonian experts.
Questions related to Hamiltonian
Hey everyone, what are the differences in vortex dynamics between cuprates and MgB2?
Can we use the same Hamiltonian to describe the dynamics of vortices in these two types of superconductor?
Dear peers,
I am trying to perform quantum conductance calculations using wannier90. I would like to use the LCR transport setup for defective graphene nanoribbons with externally supplied Hamiltonians (tran_read_ht = TRUE). I have been able to generate the lead and conduction-region Hamiltonians, seedname.htL and seedname.htC, from bulk transport runs on those sections, but how do I construct the lead-conductor Hamiltonian? Tutorials and examples for this are absent from the wannier90 package.
I will be grateful for your help.
Regards,
Yuvam
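Not being a wannier90 developer, I can only offer a sketch of the block structure that the lead-conductor file has to encode in an LCR setup, which may help while official examples are missing. In a principal-layer picture the full Hamiltonian is block tridiagonal, and a common approximation is to take the lead-conductor coupling equal to the lead's inter-layer hopping block, provided the central region is terminated on each side by a pristine, lead-like layer. The file names and array shapes below are hypothetical placeholders, not the wannier90 file format:

import numpy as np

# Hypothetical inputs (placeholders): on-site and hopping blocks already
# extracted from the Wannier Hamiltonians of the lead and the central region.
H_L00 = np.load("lead_onsite.npy")    # (nL, nL) lead principal-layer block
H_L01 = np.load("lead_hopping.npy")   # (nL, nL) lead inter-layer hopping
H_C   = np.load("conductor.npy")      # (nC, nC) central (defective) region

nL, nC = H_L00.shape[0], H_C.shape[0]
assert nC >= nL, "central region must contain at least one lead-like layer"

# Approximation: the left lead couples only to the first nL Wannier orbitals
# of the central region, and the coupling equals the lead inter-layer hopping.
H_LC = np.zeros((nL, nC))
H_LC[:, :nL] = H_L01

# Mirror construction for the conductor/right-lead coupling.
H_CR = np.zeros((nC, nL))
H_CR[-nL:, :] = H_L01

# These coupling blocks are what the lead-conductor input has to contain;
# a supercell calculation spanning the actual lead/defect interface gives a
# more accurate coupling than this bulk-hopping approximation.

The ordering of the Wannier orbitals in the central region must of course match the ordering assumed when slicing the first and last nL orbitals.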
Mach asked [1]: the principles of minimum xxxx, are they nature's purpose?
Born said in his "Physics in My Generation" [2] that while it is understandable that a particle chooses the straightest path to travel at a given moment, we cannot understand how it can quickly compare all possible motions to reach a point and pick the shortest path, a question that makes one feel it is too metaphysical.
Speaking of Hamilton's principle and the principle of the minimum light path, Schrödinger recognized the wonder of this problem [3]: Admittedly, the Hamilton principle does not say exactly that the mass point chooses the quickest way, but it does say something so similar - the analogy with the principle of the shortest travelling time of light is so close, that one was faced with a puzzle. It seemed as if Nature had realized one and the same law twice by entirely different means: first in the case of light, by means of a fairly obvious play of rays; and again in the case of the mass points, which was anything but obvious, unless somehow wave nature were to be attributed to them also. And this, it seemed impossible to do. Because the "mass points" on which the laws of mechanics had really been confirmed experimentally at that time were only the large, visible, sometimes very large bodies, the planets, for which a thing like "wave nature" appeared to be out of the question.
Feynman devoted a chapter to the principle of least action in his Lectures on Physics [4]. It discusses how particle motion in optics, classical mechanics, and quantum mechanics can follow the shortest path. He argues that light "detects" the shortest path by phase superposition, but when a baffle with a slit is placed in the path, the light cannot check all the paths and therefore cannot calculate which path to take, and diffraction occurs. Here Feynman divided the path of light into two parts, before and after diffraction. If we take a single photon as an example, then before diffraction he considered that the photon travels along the normal geometric-optics path, choosing the shortest path. After diffraction occurs, the photon loses its ability to "find" the shortest path and takes different paths to the screen, each with a different probability. This leads to the concept of the probability amplitude in quantum mechanics.
To explain why light and particles can choose the "shortest path", the only logical point of view should be that light and particles do not look for the shortest path, but create it and define it, whether in flat or curved spacetime. Therefore, we should think about what light and particles must be based on, or what they must be, in order to be able to define the shortest paths directly through themselves in accordance with physics.
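For readers who want the analogy stated explicitly, the three principles being compared can be written side by side (standard formulations, not specific to any of the cited authors):

\delta \int n(\mathbf{r})\, ds = 0 \qquad \text{(Fermat: stationary optical path length)}

\delta S = \delta \int_{t_1}^{t_2} L(q,\dot q,t)\, dt = 0 \qquad \text{(Hamilton: stationary action)}

K(b,a) = \sum_{\text{paths}} e^{\, i S[\text{path}]/\hbar} \qquad \text{(Feynman: every path contributes a phase)}

In the last expression the paths near the stationary-action path interfere constructively, which is one way of making precise the idea that the particle does not "search" for the extremal path but that the path emerges from the superposition.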
[1] Mach, E. Popular Scientific Lectures.
[2] Born, M. (1968). Physics in My Generation. Springer.
[3] Schrödinger, E. (1933). "The Fundamental Idea of Wave Mechanics." Nobel Lecture.
[4] Feynman, R. P. (2005). The Feynman Lectures on Physics (Vol. II), Chinese ed.
Keywords: light, Fermat principle of the shortest light time, Hamilton principle, Feynman path integral, Axiomatic
How does one find Hamiltonian paths in the different types of prism and antiprism graphs?
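A brute-force way to enumerate such cases is simple backtracking over the graph. Below is a minimal sketch; it assumes the networkx generators circular_ladder_graph (the n-prism graph) and circulant_graph(2n, [1, 2]) (the n-antiprism graph), which is how these families are usually constructed:

import networkx as nx

def hamiltonian_path(G):
    """Return one Hamiltonian path of G as a list of nodes, or None."""
    n = G.number_of_nodes()

    def extend(path, visited):
        if len(path) == n:
            return path
        for nbr in G.neighbors(path[-1]):
            if nbr not in visited:
                found = extend(path + [nbr], visited | {nbr})
                if found:
                    return found
        return None

    for start in G.nodes:
        found = extend([start], {start})
        if found:
            return found
    return None

for k in range(3, 8):
    prism = nx.circular_ladder_graph(k)            # k-prism graph, 2k vertices
    antiprism = nx.circulant_graph(2 * k, [1, 2])  # k-antiprism graph
    print(k, hamiltonian_path(prism), hamiltonian_path(antiprism))

Both families are in fact Hamiltonian for every k >= 3, so the interesting part is usually counting or classifying the paths rather than proving existence.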
The two theories have epistemologically distinct starting points ("how knowledge is obtained"). I stress epistemological, not methodological, i.e. at the level of the philosophy of knowledge.
GR considers, or tests, the hypothesis that the properties of matter and radiation (mostly radiation, i.e. light) are given, e.g. a constant speed of light or equations of state, and describes in detail the spacetime that results from these; motion then emerges, i.e. geodesics, etc.
QM takes the hypothesis that spacetime is given, i.e. the symmetries of the Hamiltonian, and studies the properties of matter or particles (and their motion). The properties of particles are
emergent, i.e. spin.
According to the incompatibility thesis*, the two approaches yield knowledge from different hypotheses. There is no middle ground, as this would destroy the knowledge-generating process.
*I will elaborate soon in an essay.
My exploration of the string Euler characteristic made me wonder: if the Euler characteristic is related to energy conservation, then why not consider string thermodynamics? Not heat but free energy.
The Gibbs phase rule F = C - P + 2 for the string reads: 1 degree of freedom = 1 component - 2 phases + 2.
The rule is a tautology on one component because one degree of freedom implies two phases, and two phases implies one degree of freedom.
The string energy can only be defined under one degree of freedom.
So experimental evidence unequivocally shows two distinct energy phases: amplitude expansion and amplitude contraction.
Clearly the two phases are determined by the same closed system.
Note the 1 degree of freedom Lagrangian is E = T + U, not E = T - U
Phase I: When the deformed string is released, the baseline potential energy U increases to U + U'(t). Energy conservation is the same as volume preservation, so the shape of the manifold minimizes surface area. This forces the excess potential energy U'(t) into kinetic action T(t) so that U + U'(t) > U + T(t).
Phase II: When all the excess potential energy has been transferred to kinetic energy, the normal curvature of the smooth manifold is restored but with a surface that is moving. Then the kinetic energy T(t) runs down to zero. The baseline potential energy cannot run down.
So the time-invariant standing wave has a covariant derivative which gives the string velocity, and therefore the invariant frequency, too.
This proves that the frequency and amplitude are both determined by the Gibbs free energy change which drives amplitude decay.
It is therefore proven that frequency and amplitude are dependent on the same closed potential system.
I have attached sketches of the string energy cycle at rest, deformed, expansion, and contraction.
If anyone would like to help write these equations better, I would appreciate it. My calculus has limits. I think someone could really do some interesting things here. The field is wide open for discovery and original research (in spite of what they tell me on Stack Exchange).
If you do write the string energy equations, go over and lay them on physics stack exchange for me.
Pavel Grinfeld's Introduction to Tensor Analysis and the Calculus of Moving Surfaces (Springer, 2013) is a wonderful book that contains everything there is to know about the theory of string vibration under one degree of freedom.
We are forced to accept the string has only one degree of freedom because otherwise energy cannot be defined, and so physics does not apply to matter without defined energy.
I can understand the exact mathematics involved, but I am still a bit shaky on tensors and the critical energy equations involve covariant derivatives that I would like to understand better.
I want to distill the parts that are specific to the string perturbation.
The basic point I follow Grinfeld on is that the string shape can be constructed without reference to a coordinate system. The domain is the string line, which has coordinates on the real line. Then we show how it moves.
The surface of the string under energy conservation is a catenoid.
Using the coordinates of points on the catenoid surface parameterized by time we can understand how the surface of the potential energy surface moves after perturbation.
I invite anyone who understands the calculus of moving surfaces (there are other books) to discuss this with me.
The fact that mathematicians and physicists failed to investigate string vibration physics means the field is wide open for original research and new discoveries.
For instance, you won't find the Euler characteristic (or even the manifold) in the literature. The catenoid minimum surface of revolution is known but very obscure.
I am investigating homotopy and cohomology on musical instruments and their free language. I want to characterize the manifold of a musical instrument by the Euler characteristic, Betti number, Lefschetz number and genus.
I think I can get as far as deriving the Euler characteristic for one string under the Gauss-Bonnet theorem. I think it proves the string is Hamiltonian, but I can't formalize that. If the string is a Gauss-Bonnet standing wave then it must be Hamiltonian.
First, four statements I think are true:
1) Genus g is 0 because any cut disconnects the space.
2) The manifold is orientable because the normal vector of the submanifold is orientable.
3) The manifold is Riemannian because the string line is the fundamental form, so every point has coordinates on the real line.
4) Musical instruments and the string contract to a point.
Under the formula for the Euler characteristic C = V - E + F (vertices minus edges plus faces) we can count two singular points, one edge, and one face. C = 2.
Under the formula C = 2 -2g for a closed surface, we have C = 2.
According to the Gauss-Bonnet theorem, the geodesic curvature and the Gaussian curvature of the string can be integrated to give a number which is 2 pi times the Euler characteristic.
We know that Gaussian curvature is 4 pi because it is the Gaussian product of the longitudinal curve (a cycloid) and the curve in transverse section, a circle. So we can say the singular endpoints are located at -pi and pi ( =2 pi) and the circle is 2 pi, too. So the product of the circle and the cycloidal curve is 4 pi. Then it must be true that the Euler characteristic is indeed 2.
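For reference, the standard statement of the Gauss-Bonnet theorem invoked above, for a compact surface M with boundary, is:

\int_{M} K \, dA + \oint_{\partial M} k_g \, ds = 2\pi\, \chi(M)

where K is the Gaussian curvature, k_g the geodesic curvature of the boundary, and \chi(M) the Euler characteristic; for a closed surface the boundary term drops, and \chi = 2 corresponds to a total curvature of 4\pi, which is the consistency check being attempted above.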
Then there is the Euler characteristic of a finite CW complex which applies to music.
This generalizes the Euler characteristic beyond 2-dimensional complexes. It makes C an alternating sum. The dimensionality of the string is 2n, so the Euler characteristic is always 1. That result seems inconsistent. I am hoping it is wrong! Can you help?
So I have the idea that this by itself proves the string is Hamiltonian. It seems like the Euler characteristic captures the totality of the string.
I also think that if the Euler characteristic of the string is 2, then musical instruments have Euler characteristic 2 as well. But I can't formalize that.
I know the guitar is a polyhedron defined by simplicial complexes which have a common vertex. This defines a polyhedron so C = 2.
Is there a theorem about Euler characteristics under union and intersection, for example that the union of manifolds requires the same Euler characteristics?
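One standard result that may be what this question is after (valid for finite CW complexes and, in particular, for simplicial models of the instrument, which is an assumption about how the instrument is modeled) is the inclusion-exclusion property of the Euler characteristic:

\chi(A \cup B) = \chi(A) + \chi(B) - \chi(A \cap B)

So gluing two pieces along a single common vertex (\chi = 1) gives \chi(A \cup B) = \chi(A) + \chi(B) - 1; the union does not in general inherit the Euler characteristic of its pieces.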
It seems to me that if you know about Euler characteristics and Betti numbers you could say something profound about string physics.
This is a simple proof the guitar is Hamiltonian. Then by deconstruction so is string vibration because the string is the smallest open set on guitar.
The time-independent Hamiltonian has the form H(p, q) = c and dH/dt = 0.
All I need is to define p and q.
So p will be the center of harmonic motion, and q will be a potential energy gradient that reads off the differential between any two points.
Consider the set of notes for the guitar tuning known as standard: E A D G B E.
The tuning naturally separates into two vectors in this way: Indexing the tuning notes by counting up from the low E the pitch values are equivalent to p: 0 5 10 15 19 24.
Now taking the intervals between the notes we have a second vector q: 0 5 5 5 5 4.
It is important to notice that tuning vectors p and q are equal, opposite, and inverse, which is expected since the orbit and center have this relation in the Hamiltonian.
For instance, p is the summation of q and q is the differential of p. The points in p and the intervals in q together make a unit interval in R.
Most important, p = 1/q means the tuning is the identity of the guitar. If you know the tuning, you know everything (all movement). You can learn guitar without learning anything but the tuning.
The proof that the vectors are Hamiltonian is this: p is the center of motion in R6, and q is the gradient of the potential field surface in R5, where every vibrational state is represented by a single point.
The coordinates of notes on guitar chord charts given by the gradient function
form a union as a smooth atlas.
Therefore, it must be true the guitar is Hamiltonian. How else could the symplectic manifold be smooth?
Physicists and mathematicians have no choice but to accept that one degree of freedom is better than two. The fact that they cannot see it implies an illness of the public mind that cannot think straight about classical mechanics.
A minion is a low-level official protecting a bureaucracy from challengers.
A Kuhnian minion (after Thomas Kuhn's Structure of Scientific Revolutions) is a low-power scientist who dismisses any challenge to the existing paradigm.
A paradigm is a truth structure that partitions scientific statements as true to the paradigm or false.
Recently, I posted a question on Physics Stack Exchange that serves as a summary of the elastic string paradigm. My question was: “Is it possible there can be a non-Fourier model of string vibration? Is there an exact solution?”
To explain, I asked if they knew the Hamiltonian equation for the string vibration. They did not agree it must exist. I pointed out there are problems with the elastic model of vibration: with its two degrees of freedom, the unsolvable equations of motion can only be approximated by numerical methods. I said elasticity makes superposition the 4th Newtonian law. How can a string vibrate in an infinite number of modes without violating energy conservation?
Here are some comments I got in response:
“What does string is not Fourier mean?” – Qmechanic
“ ‘String modes cannot superimpose!’ Yet, empirically, they do.” – John Doty
“ A string has an infinite number of degrees of freedom, since it can be modeled as a continuous medium. If you manage to force only the first harmonic, the dynamics of the system only involve the first harmonic and it’s a standing wave: this solution does depend on time, being (time dependence in the amplitude of the sine). No 4th Newton’s law. I didn’t get the question about Hamilton equation.
“What do you mean with ‘archaic model’? Can I ask you what’s your background that makes you do this sentence? Physics, Math, Engineering? You postulate nothing here. You have continuum mechanics here. You have PDEs under the assumption of continuum only. You have exact solutions in simple problems, you have numerical methods approximating and solving exact equations. And trust me: this is how the branch of physics used in many engineering fields, from mechanical, to civil, to aerospace engineering.” – basics
I want to show the rigid versus elastic dichotomy goes back to the calculus wars. Quoting here from Euler and Modern Science, published by the Mathematical Association of America:
"We now turn to the most famous disagreement between Euler and d’Alembert … over the particular problem of the theory of elasticity concerning a string whose transverse vibrations are expressed through second-order partial differential equations of a hyperbolic type later called the wave equation. The problem had long been of interest to mathematicians. The first approach worthy of note was proposed by B. Taylor, … A decisive step forward was made by d’Alembert in … the differential equation for the vibrations, its general solution in the form of two “arbitrary functions” arrived at by means original with d’Alembert, and a method of determining these functions from any prescribed initial and boundary conditions.”
[Editorial Note: The boundary conditions were taken to be the string endpoints. The use of the word hyperbolic is, I believe, a clear reference to Taylor’s string. A string with constant curvature can only have one mathematical form, which is the cycloid, which is defined by the hyperbolic cosh x function. The cosh x function is the only class of solutions that are allowed if the string cannot elongate. The Taylor/Euler-d’Alembert dispute was over whether the string is trigonometric or hyperbolic.]
Continuing the quote from Euler and Modern Science:
"The most crucial issue dividing d’Alembert and Euler in connection with the vibrating string problem was the compass of the class of functions admissible as solutions of the wave equation, and the boundary problems of mathematical physics generally, D’Alembert regarded it as essential that the admissible initial conditions obey stringent restrictions or, more explicitly, that the functions giving the initial shape and speed of the string should over the whole length of the string be representable by a single analytical expression … and furthermore be twice continuously differentiable (in our terminology). He considered the method invalid otherwise.
"However, Euler was of a different opinion … maintaining that for the purposes of physics it is essential to relax these restrictions: the class of admissible functions or, equivalently, curves should include any curve that one might imagine traced out by a “free motion of the hand”…Although in such cases the analytic method is inapplicable, Euler proposed a geometric construction for obtain the shape of the string at any instant. …
Bernoulli proposed finding a solution by the method of superimposition of simple trigonometric functions, i.e. using trigonometric series, or, as we would now say, Fourier series. Although Daniel Bernoulli’s idea was extremely fruitful—in other hands--, he proved unable to develop it further.
Another example is Euler's manifold of the musical key and pitch values as a torus. To be fair, Euler did not assert the torus but only drew a network showing that the key and pitch can move independently. This was before Möbius's classification theorem.
My point is it should be clear the musical key and pitch do not have different centers of harmonic motion. But in my experience, the minions will not allow Euler to be challenged by someone like me. Never mind Euler's theory of music was crackpot!
In trigonometry we know that frequency and amplitude are independent because they have independent variables.
Then frequency and amplitude do not have the same equation of motion.
But according to Newtonian determinism, all of the motion of a system is determined by an equation that depends only on the initial state of the string, namely the totality of points on the string and their velocities. The initial velocity is zero.
In a closed system, all of the movement must include both frequency and amplitude. That is, frequency and amplitude have the same equation of motion.
On the elastic string, the false assumption the string wave is trigonometric by itself implies amplitude and frequency have independent equations. Indeed, in the literature when mathematicians and physicists want the standing wave to stand down, they just add another arbitrary real-valued function. The frequency and amplitude are parameterized by sine wave and exponential functions, and each has its own time variable. Frequency and amplitude do not map on to the same interval of time.
But under one degree of freedom the standing wave never stands down because it is a surface defined by the potential energy. The surface being precisely those lines of motion along which energy is conserved.
So please tell why are two equations better than one? Why are two degrees of freedom better than one? Some even say the string has infinite degrees of freedom as if the string is not subject holonomic constraint.
You guys think the frequency is a velocity, but it's not. Frequency is a potential. Constant velocity and constant potential are both measured by a time unit.
Apparently, physicists and mathematicians think the velocity of the string is constant right up to the point in time when the string stops moving, because the frequency is constant. That is, you think dv/dt = df/dt = 0. Then you write a partial differential equation that has the form of a sine wave. But your equation in the form u(x, t) is parameterized by time yet contains coefficients that are not determined by the initial condition of the string. And it is not continuous on the lower limit.
That is to say, the trigonometric string cannot map onto the string at rest. The trigonometric string has no natural vector field.
Furthermore, the assumption of a continuous trig function implies that you are not required to have a lower semi-continuous boundary, without which it is not possible to formulate the law of string motion in terms of a minimum principle. (See Critical Point Theory by Mawhin and Willem)
There is a stumbling block here because it may seem that it is obvious that amplitude is dependent on time, since it occupies an interval of time. In fact, it is independent of time because decay always consumes the same amount of time regardless of amplitude magnitude.
The rate of amplitude decay is constant, d²a/dt² = 0, just like the frequency. They have the same Hamiltonian minimizing functions.
The equation d²a/dt² = 0 is possible mathematically if the exterior derivative of amplitude decay is a tautochrone formed by the cycloidal involution of the cycloidal string manifold.
On a tautochrone, a rolling ball always arrives at the bottom of the curve at the same time regardless of how high the ball is dropped from.
This shows that frequency and amplitude are subject to the same holonomic restraint imposed by energy conservation.
When you give up your false assumption frequency is a velocity and change to frequency is a potential, you should see energy conservation is equivalent to volume preservation according to the principle of Liouville integration.
In the attached diagrams I show that the string manifold and the amplitude decay manifold are both minimal surfaces of revolution and that they have the same submanifold in Liouville integration, except that amplitude is the involution of the cycloid at constant volume. Both manifolds exhibit uniform rectilinear motion. The frequency and amplitude run on the same time interval and clearly are not independent.
The trigonometric law of frequency/amplitude independence is not a natural Newtonian law, it is just an illusion that results from the assumption that frequency itself is sinusoidal.
But potential energy is a real number. You guys are just assuming frequency is real (so continuity seems to demand a trigonometric form).
Finally, if the moving string keeps moving until external force stops it, what force stops the string? Clearly not gravity, friction, or viscosity.
The answer is that the motion of the string is quasi-periodic meaning that perturbation involves only the loss of kinetic energy. Potential and kinetic energy do not alternate like a pendulum. When the string is deformed, the potential increases, but quickly the excess goes to kinetic energy and never returns to potential energy. Amplitude decay is simply the loss of kinetic energy doing work against the inertial mass of the string. Since it must be true that potential and kinetic energy have the same Hamiltonian equation, they cannot be independent.
Fig 1 The string manifold and amplitude decay manifold have the same submanifold
Fig 2 Amplitude Decay Manifold
Fig 3 Path of a Cycloidal Pendulum
Fig 4 Amplitude decay is the cycloidal involution of the Cycloidal Manifold.
Fig 5 Volume-preserving Liouville Integration
Fig 6 Constructing a cycloid geometrically using a horocycle gives the string a constant radius of curvature.
I'm excited to share my latest research, where I build upon the groundbreaking work of Professors James Maynard and Larry Guth on prime counting in almost-short intervals. In this paper, I introduce an enhanced Hamiltonian operator that extends their framework, deepening the connections between quantum mechanics and number theory. My analysis suggests that this operator plays a crucial role in linking these fields and provides strong evidence supporting the Lindelöf Hypothesis—a key component in the broader effort to solve the famous Riemann Hypothesis. This work could be a significant step toward unraveling one of the greatest mysteries in mathematics.
Link to the paper: https://hal.science/hal-04683369
I would greatly appreciate any feedback or comments from the community on this contribution. Your insights and expertise would be invaluable in refining and further developing these ideas.
Please see the attached document for a summary of my proof of rigidity.

Paradox 1 - The Laws of Physics Invalidate Themselves, When They Enter the Singularity Controlled by Themselves.
Paradox 2 - The Collapse of Matter Caused by the Law of Gravity Will Eventually Destroy the Law of Gravity.
The laws of physics dominate the structure and behavior of matter. Different levels of material structure correspond to different laws of physics. According to reductionism, when we require the structure of matter to be reduced, the corresponding laws of physics are also reduced. Different levels of physical laws correspond to different physical equations, many of which have singularities. Higher level equations may enter singularities when forced by strong external conditions, pressure, temperature, etc., resulting in phase transitions such as lattice and magnetic properties being destroyed. Essentially the higher level physics equations have failed and entered the lower level physics equations. Obviously there should exist a lowest level physics equation which cannot be reduced further, it would be the last line of defense after all the higher level equations have failed and it is not allowed to enter the singularity. This equation is the ultimate equation. The equation corresponding to the Hawking-Penrose spacetime singularity [1] should be such an equation.
We can think of the physical equations as a description of a dynamical system because they are all direct or indirect expressions of energy-momentum quantities, and we have no evidence that it is possible to completely detach any physical parameter, macroscopic or microscopic, from the Lagrangian and Hamiltonian.
Gravitational collapse causes black holes, which have singularities [2]. What characterizes a singularity? Any finite parameter before entering a spacetime singularity becomes infinite after entering the singularity. Information becomes infinite, energy-momentum becomes infinite, but all material properties disappear completely. A dynamical equation transitioning from finite to infinite is impossible, because there is no infinite source of dynamics, and the uncertainty principle would also prevent this singularity from being reached*. Therefore, while there must be a singularity according to the singularity theorem, this singularity must be inaccessible, or will not be entered. Before entering this singularity, a sufficiently long period of time must have elapsed, waiting for the conditions that would destroy it, such as the collision of two black holes.
“Most of these singularities, however, can usually be resolved by pointing out that the equations are missing some factor, or noting the physical impossibility of ever reaching the singularity point. In other words, they are probably not 'real'.” [3] We believe this statement is correct. Nature will not destroy by itself the causality it has established.
-----------------------------------------------
Notes
* According to the uncertainty principle, finite energy and momentum cannot be concentrated at a single point in space-time.
-----------------------------------------------
References
[1] Hawking, S. (1966). "Singularities and the geometry of spacetime." The European Physical Journal H 39(4): 413-503.
[2] Hawking, S. W. and R. Penrose (1970). "The singularities of gravitational collapse and cosmology." Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 314(1519): 529-548.
==================================================
Addendum 2023-1-14
Structural Logic Paradox
Russell once wrote a letter to Ludwig Wittgenstein while visiting China (1920 - 1921) in which he said "I am living in a Chinese house built around a courtyard *......" [1]. The phrase would probably mean to a Western reader, "I live in a house built around the back of a yard." Russell was a logician, but there is clearly a logical problem with this expression, since the yard is determined by the house built, not vice versa. The same expression is reflected in a very famous poem, "A Moonlit Night on the Spring River", from the Tang Dynasty (618 - 907) in China. One of the lines is: "We do not know tonight for whom she sheds her ray, But hear the river say to its water adieu." The problem here is that the river exists because of the water, and without the water there would be no river. Therefore, there would be no logic of the river saying goodbye to its water. There are, I believe, many more examples of this kind, and perhaps we can reduce these problems to a structural logic paradox †.
Ignoring the above logical problems will not have any effect on literature, but it should become a serious issue in physics. The biggest obstacle in current physics is that we do not know the structure of elementary particles and black holes. Renormalization is an effective technique, but it offers an alternative result that masks the internal structure and can only be considered a stopgap tool. Hawking and Penrose proved the singularity theorem, but no clear view has been developed on how to treat singularities. It seems to us that this scenario is the same problem as the structural logic described above. Without black holes (and perhaps elementary particles) there would be no singularities, and (virtual) singularities accompany black holes. Since there is a black hole and there is a singularity, how is it that a black hole does not collapse today because of its singularity, yet will collapse tomorrow because of the same singularity? Do yards make houses disappear? Does a river make water disappear? This is the realistic explanation of the "paradox" in the subtitle of this question. The laws of physics do not destroy themselves.
-------------------------------------------------
Notes
* One of the typical architectural patterns in Beijing, China, is the "quadrangle", which is usually a square open space with houses built along the perimeter, and when the houses are built, a courtyard is formed in the center. Thus, before the houses were built, it was the field, not the courtyard. The yard must have been formed after the house was built, even though that center open space did not substantially change before or after the building, but the concept changed.
† I hope some logician or philosopher will point out the impropriety.
-------------------------------------------------
References
[1] Monk, R. (1990). Ludwig Wittgenstein: the duty of genius. London: J. Cape. Morgan, G. (Chinese version @2011)
The notion of a non-Hermitian Hamiltonian is clear in mathematics, but how should it be understood in physics?
It may sound a bit awkward, but is there a Berry curvature for Hamiltonians that do not depend on a parameter? For example, consider the Bloch problem. There is a dual perspective when working with Bloch solids, depending on the choice of eigenvectors: one may choose to work with the periodic cell functions unk (leading to a k-dependent Hamiltonian, H(k)) or choose the eigenstates of the translation operator as well (leading to a non-k-dependent Hamiltonian H).
-Why is someone obliged to use unk so that Berry Physics comes into play?
-What about choosing the "normal" k-dependent eigenstates of the translation op.?
We know that in this case, H would not depend on any parameters, but does this necessarily mean that the Berry curvature vanishes? If the answer is negative, then, the extended Berry curvature formula (the one containing the summation over the states) becomes indeed problematic, simply because \nabla_k H=0. If the answer is positive, then, again, the initial definition is problematic, because it only involves differentiations of the eigenstates, which are obviously non-zero.
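For concreteness, the two standard formulas being contrasted are (with |u_{nk}> the cell-periodic Bloch states and H(k) = e^{-ik·r} H e^{ik·r}):

\Omega_n(\mathbf{k}) = i\, \langle \nabla_{\mathbf{k}} u_{n\mathbf{k}} | \times | \nabla_{\mathbf{k}} u_{n\mathbf{k}} \rangle

\Omega_n(\mathbf{k}) = i \sum_{m \neq n} \frac{\langle u_{n\mathbf{k}} | \nabla_{\mathbf{k}} H(\mathbf{k}) | u_{m\mathbf{k}} \rangle \times \langle u_{m\mathbf{k}} | \nabla_{\mathbf{k}} H(\mathbf{k}) | u_{n\mathbf{k}} \rangle}{(E_{n\mathbf{k}} - E_{m\mathbf{k}})^2}

The two are equivalent only when the derivative is taken of the k-dependent Bloch Hamiltonian H(k) acting on the cell-periodic states; if one works instead with the full, parameter-free H and the bare Bloch eigenstates, the k-dependence (and hence the curvature) is carried entirely by the choice of basis, which is one way of phrasing the dilemma raised above.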
The energy operator iħ∂/∂t and the momentum operator -iħ∇ (or -iħ∂/∂x) play a crucial role in the derivation of the Schrödinger equation, the Klein-Gordon equation, the Dirac equation, and other physics arguments.
The energy and momentum operators are not differential operators in the general sense; rather, they enter the derivation of these equations as definitions of energy and momentum.
However, we do not find any reasonable arguments or justifications for the use of such operators, and even their meaning can only be guessed from their names. They are used without explanation in textbooks.
The clues we found are:
1) In the literature [Brown, L. M., A. Pais and B. Pippard (1995). Twentieth Century Physics (I), Science Press.], "In March 1926, Schrödinger noticed that replacing the classical Hamiltonian function with a quantum mechanical operator, i.e., replacing the momentum p by a partial differentiation of h/2πi with respect to the position coordinates q and acting on the wave function, one also obtains the wave equation."
2) Gordon considered that the energy and momentum operators are the same in relativistic and non-relativistic theory and therefore used them in his relativistic wave equation (Gordon 1926).
3) Dirac also used the energy and momentum operators in the relativistic equation with electron spin (Dirac 1928). Dirac called it the "Schrödinger representation", a self-adjoint differential operator or Hermitian operator (Dick 2012).
Our questions are:
Why can this be used? Why is it possible to represent energy by time differential for wave functions and momentum by spatial differential for wave functions? Has this been historically argued or not?
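One elementary observation, which may be the minimal historical 'justification' being asked about: acting on a de Broglie plane wave, these operators simply read the energy and momentum off the phase.

\psi(x,t) = e^{\, i(px - Et)/\hbar} \quad\Rightarrow\quad i\hbar\,\frac{\partial \psi}{\partial t} = E\,\psi, \qquad -i\hbar\,\frac{\partial \psi}{\partial x} = p\,\psi

Requiring the non-relativistic relation E = p^2/2m + V to hold on superpositions of such waves gives the Schrödinger equation; requiring E^2 = p^2 c^2 + m^2 c^4 instead gives the Klein-Gordon equation, and Dirac's linearization of the latter gives the Dirac equation. Whether this counts as an argument or merely as a successful postulate is, of course, exactly the historical question raised here.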
Keywords: quantum mechanics, quantum field theory, quantum mechanical operators, energy operators, momentum operators, Schrödinger equation, Dirac equation.
I am currently using CASTEP version 22.11, and I encounter the following error when running calculations with spin-orbit coupling:
Error with dsytrf in nlpot_prepare_precon_ks
Current trace stack:
nlpot_prepare_precon_ks
hamiltonian_diagonalise_ks
electronic_approx_minimisation
castep_calc_approx_wvfn
check_elec_ground_state
castep
I am certain that I have correctly set the parameters in the .param file, including setting SPIN_TREATMENT: vector. I have encountered the same issue both on my desktop and high-performance cluster. Could anyone suggest what might be causing this error? Thank you very much in advance.
“According to general theory of relativity, gravitation is not a force but a property of spacetime geometry. A test particle and light move in response to the geometry of the spacetime.”[1] Einstein's interpretation of gravity is purely geometrical, where even a free point particle without any properties and any interactions, moves in a curved spacetime along geodesics, but which are generated by the energy tensor Tµν [2]. Why isn't gravity generated directly by Tµν, but must take a circuitous route and be generated by the geometry of spacetime Gµν‡?
Gµν = (8πG/c⁴)Tµν
This is Einstein's field equation, and the Einstein tensor Gµν describes the spacetime curvature. We know that in classical mechanics and quantum field theory it is the Hamiltonian and Lagrangian quantities that determine motion. Motion is essentially generated by energy-momentum interactions. Why is motion irrelevant to energy-momentum in GR? Einstein had always expected the unification of the electromagnetic and gravitational forces to be realized geometrically [3]*. Is such an expectation an exclusion of energy-momentum interactions from motion? Can the ultimate unification of forces be independent of energy-momentum and manifest itself only as motion in pure spacetime? If not, one of these must be wrong.
--------------------------------------
Supplement: Gravity is still a force
Gravity appears to be a ‘spacetime gravity’, i.e., gravity caused by spacetime metric differences, the same as the gravitational redshift and blueshift [1]. The current four-dimensional spacetime ‘geodesic’ interpretation of gravity is made to match the geometric appearance of spacetime curvature. Time and space are symmetrical, and geodesic motion is not initiated by the ‘arrow of time’ alone, but must be accompanied by equivalent spatial factors. Any interpretation that destroys the equivalence of space and time should be problematic.
[1] "What is Force, a Field? Where is the Force Field? How does it appear? Is the Force Field a Regulating Effect of the Energy-Momentum Field?"
-----------------------------
Notes
* "After his tremendous success in finding an explanation of gravitation in the geometry of space and time, it was natural that he should try to bring other forces along with gravitation into a “unified field theory” based on geometrical principles."
‡ If one thinks that it holds only at Tµν = 0, see the next question NO.37: Is there a contradiction in the Schwarzschild spacetime metric solution?
-----------------------------
References
[1] Grøn, Ø., & Hervik, S. (2007). Einstein's Field Equations. In Einstein's General Theory of Relativity: With Modern Applications in Cosmology (pp. 179-194). Springer New York. https://doi.org/10.1007/978-0-387-69200-5_8
[2] Earman, J., & Glymour, C. (1978). Einstein and Hilbert: Two months in the history of general relativity. Archive for history of exact sciences, 291-308.
[3] Weinberg, S. (2005). Einstein’s Mistakes. Physics Today, 58(11).
I am fighting my way through Axelrod and Hamilton (1981) on the Prisoner's Dilemma.
This is the payoff matrix they present for the PD, but they only present the payoffs for player A. Normally these matrices present the payoffs for both A and B. How do I modify this to present both? I'd like to really understand the math later in the paper.
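In case it helps, the numerical payoffs used by Axelrod and Hamilton are T = 5, R = 3, P = 1, S = 0 (with T > R > P > S and 2R > T + S). Writing each cell as an ordered pair (A's payoff, B's payoff) gives the usual two-player presentation:

\begin{array}{c|cc}
 & \text{B cooperates} & \text{B defects} \\ \hline
\text{A cooperates} & (R,R) = (3,3) & (S,T) = (0,5) \\
\text{A defects}    & (T,S) = (5,0) & (P,P) = (1,1)
\end{array}

Because the game is symmetric, B's payoff in any cell is simply A's payoff in the mirror-image cell, which is why the original paper only lists A's payoffs.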

Depth jump or Drop jump
I would like to share with you the first round of debate on purely terminological issues in a prestigious journal. Although our response constitutes a position, I would like to know the opinion of several specialists.
Best regards to all of you.
Reviewer__
Line 190: You say the Drop Jump was performed "with rebounding." This would be termed a depth jump, not a drop jump (depth jump includes rebound, drop jump does not). Please correct this terminology throughout the manuscript.
Our reply
R/ We thank the reviewer for this excellent comment. This is a very controversial topic, and in the current literature there is some divergence regarding these terms (Bobbert et al., 1987a; Smith et al., 2011). This divergence may be associated with the proximity of the DJ acronym for both types of jumps.
For many years, researchers have confused the two exercises, and currently, many textbooks, authors, and coaches use the terms depth jump and drop jump as synonyms (Hamilton, 2009; Suchomel et al., 2016) or to indicate the same exercise with variations in execution (Sheppard, 2014).
To our knowledge, Drop Jump was a term recognized by Komi and Bosco (Komi and Bosco, 1978) where they acknowledge that the exercises previously performed by Asmussen and Bonde-Petersen (Asmussen and Bonde-Petersen, 1974) were Drop Jumps. I quote. "From the upright position on different lifts and then dropping directly onto the force platform with subsequent jumping upward. This condition is called a drop jump." For its part, the Depth Jump presents different objectives and was proposed by Verkhoshansky (Verkhoshansky and Chernousov, 1974; Y.Verkhoshansky, 2006)
Drop jumps are executed from lower heights, striving for a stiff landing and keeping the leg muscles rigid to minimize leg flexion during landing and ground contact times (Pedley et al., 2017; Ramirez-Campillo et al., 2018). On the other hand, the Depth Jump (as the name suggests) is an exercise that requires a high drop height where the athlete should not land with stiff and extended legs (bounce). On the contrary, the landing must be resilient and elastic, with the optimal depth of knee flexion at the end of the amortization phase to reach a high jump height. In the Depth Jump, no rigid restrictions are imposed on the magnitude of leg flexion or ground contact time, although it is recognized that the exercise must be performed quickly (Verkhoshansky and Chernousov, 1974; Y. Verkhoshansky, 2006). Another widely used term is the Depth Drop, but from a semantic point of view, we understand that it does not require jumping after the drop (no jump). A previously published systematic review attempted to explain these differences in Appendix 1 (Montoro-Bombú et al., 2023).
However, these differences in terminology can be found in numerous studies (Bobbert et al., 1987b; Byrne et al., 2010; Wallace et al., 2010; Smith et al., 2011; Pedley et al., 2017), although, authors were clear in their position (Pedley et al., 2017; Montoro-Bombú et al., 2023). We consider that these terminological differences do not directly influence the quality of the results presented in this study. In this regard, we hope that our position could be considered.
Consider two different theories of string vibration determined by whether the string stretches as it moves.
1) If the string elongates as it bends, then the force acting on a point along the string is determined by the displacement of the point from its center of motion. The force acting upon the string is therefore greatest at the midpoint where the displacement is greatest and force decreases across the string to zero at the endpoints where no displacement can occur. This is the non-uniform theory of string motion. (See the Wikipedia page "String Vibration")
2) But if the string bends but does not elongate, the force acting on the string must be uniform across the string. This is because if the string does not elongate the curvature is constant, and if the curvature is constant then the field strength across the string is uniform. This is the uniform theory of string motion.
There are several reasons to believe the string orbit is uniform: 1) Gauss’s theorem says surfaces bend without elongation so curvature is constant. The string is a surface. 2) In the Hamiltonian formalism there is a tangent-cotangent vector field defined at a point on the string that results from the string as a bilinear form H: R2n x R → R. Since the tangent is perpendicular to the string, the motion of a point cannot be along the string axis. 3) The shape of the string must seek the lowest energy level and by Newtonian determinacy the shape must be a function of the initial state of the string. Therefore, even if tension and length somehow can vary, it must still be true the equations of motion are determined by length and tension at rest. 4) The use of partial differential equations based on the nonuniform theory leads to sine wave functions which have no normal vector and are defined in a plane. There is no way that sine wave functions can make a minimum surface of revolution for the string manifold.
This is an important question, I think, because if the nonuniform theory of string vibration is not correct then an entire field of mathematics and physics is also not correct. I say non-uniformity is nonsense. I do not see any mention in the literature that string curvature is constant. But how can it be understood in any other way?
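For reference, the conventional elastic description that the post argues against is the linear wave equation with fixed endpoints, whose general solution is the Fourier superposition of modes:

\frac{\partial^2 y}{\partial t^2} = \frac{T}{\mu}\, \frac{\partial^2 y}{\partial x^2}, \qquad y(0,t) = y(L,t) = 0

y(x,t) = \sum_{n=1}^{\infty} A_n \sin\!\left(\frac{n\pi x}{L}\right) \cos(\omega_n t + \phi_n), \qquad \omega_n = \frac{n\pi}{L}\sqrt{\frac{T}{\mu}}

Here T is the tension and \mu the linear mass density; this is the "non-uniform" model in the terminology above, and it is the baseline against which the uniform-curvature claim has to be compared.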
I want to perform a calculation for Crystal Orbital Hamilton Population (COHP) Analysis. I need your kind suggestions and help.
The answer to the question will not be as straightforward as for the conventional Hamiltonian in an EM field. As it is difficult to write equations here, I am attaching a link to the question.
Question link: https://physics.stackexchange.com/q/749166/147579
The conservative and dissipative terms of a 3D chaotic system are separated using the Helmholtz theorem [F(x) = Fc(x) + Fd(x)]. How does one find its Hamiltonian energy function (analytically and numerically)?
F(x) = Fc(x) + Fd(x), where F(x) is a 3D chaotic system, Fc(x) is a column vector with conservative field terms and Fd(x) is a column vector with dissipative field terms.
After using Helmholtz theorem it is obtained that
Fc(x)= full column vector;
Fd(x)= column vector with zero first row term.
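In much of the literature on Hamiltonian energy functions of chaotic systems, the energy H is required to satisfy grad(H)·Fc(x) = 0 (the conservative part does no work), with dH/dt = grad(H)·Fd(x) giving the dissipation; I am assuming that is the convention intended here. Under that assumption, one practical route is to posit a polynomial ansatz for H and solve the resulting linear conditions symbolically. A minimal sympy sketch (the Fc below is a placeholder, not your actual system):

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
a, b, c = sp.symbols('a b c', real=True)

# Placeholder conservative part; replace with the Fc(x) from your
# Helmholtz decomposition.
Fc = sp.Matrix([y*z, -x*z, 0])

# Quadratic ansatz for the Hamiltonian energy function H(x, y, z).
H = sp.Rational(1, 2) * (a*x**2 + b*y**2 + c*z**2)
gradH = sp.Matrix([sp.diff(H, v) for v in (x, y, z)])

# The condition grad(H) . Fc = 0 must hold identically in x, y, z.
condition = sp.expand(gradH.dot(Fc))
coeff_eqs = sp.Poly(condition, x, y, z).coeffs()
print(sp.solve(coeff_eqs, [a, b, c], dict=True))   # e.g. [{a: b}] for this placeholder

Numerically, the resulting H can then be checked along a simulated trajectory by verifying that its change matches the time integral of grad(H)·Fd.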
Hello everyone,
I am calculating the eigenenergies of a particle confined in a monolayer of given dielectric constant sandwiched between two layers whose dielectric constant is much lower than that of the monolayer. So I have to add the image-charge effect to the Hamiltonian, and I want to know how to set up its expression.
Thanks for your help.
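As a starting point (a textbook electrostatics result, not specific to your heterostructure), a charge q in a medium ε1 at distance d from a single planar interface with a medium ε2 has an image charge q' and an image-potential self-energy

q' = q\,\frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_1 + \varepsilon_2}, \qquad \Sigma(d) = \frac{q\, q'}{16\pi \varepsilon_0 \varepsilon_1 d}

For a monolayer of dielectric constant ε_w between two barriers of lower dielectric constant ε_b there are two interfaces, so each charge generates an infinite series of images with reflection coefficient k = (ε_w - ε_b)/(ε_w + ε_b), and the corresponding series of terms of this form is what gets added to the Hamiltonian as the self-energy correction (and, for electron-hole problems, as a correction to the screened interaction).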
When considering Casimir effects, vacuum fluctuations, and so on, the notion of temperature can become confusing: it depends on the spacetime and the observer (the Hamiltonian, an accelerated observer, ...), whereas the classical definition is through entropy and energy as T⁻¹ = ∂S/∂E. In quantum mechanics, when we define a system in the state ρ = e^(−βH), the density matrix gives expectation values that are regarded as thermal expectation values; this consideration involves time!
When an observer is inside a defined system, or accelerated, the Hamiltonian changes!
Complex numbers are involved almost everywhere in modern physics, but the understanding of imaginary numbers has been controversial.
In fact there is a process of acceptance of imaginary numbers in physics. For example.
1) Weyl in establishing the Gauge field theory
After the development of quantum mechanics in 1925–26, Vladimir Fock and Fritz London independently pointed out that it was necessary to replace γ by −iħ. "Evidently, Weyl accepted the idea that γ should be imaginary, and in 1929 he published an important paper in which he explicitly defined the concept of gauge transformation in QED and showed that under such a transformation, Maxwell's theory in quantum mechanics is invariant."【Yang, C. N. (2014). "The conceptual origins of Maxwell's equations and gauge theory." Physics Today 67(11): 45.】
【Wu, T. T. and C. N. Yang (1975). "Concept of nonintegrable phase factors and global formulation of gauge fields." Physical Review D 12(12): 3845.】
2) Schrödinger when he established the quantum wave equation
In fact, Schrödinger rejected the concept of imaginary numbers earlier.
【Yang, C. N. (1987). Square root of minus one, complex phases and Erwin Schrödinger.】
【Kwong, C. P. (2009). "The mystery of square root of minus one in quantum mechanics, and its demystification." arXiv preprint arXiv:0912.3996.】
【Karam, R. (2020). "Schrödinger's original struggles with a complex wave function." American Journal of Physics 88(6): 433-438.】
The imaginary number here is also related to the introduction of the energy and momentum operators in quantum mechanics:
Recently @Ed Gerck published an article dedicated to complex numbers:
Our question is: is there a consistent understanding of the concept of imaginary numbers (complex numbers) in current physics? Do we need to discuss imaginary numbers and complex numbers (dual numbers) as two separate concepts?
_______________________________________________________________________
Addendum 2023-06-19
On the question of complex numbers in physics, here are some relevant references collected in recent days.
1) Jordan, T. F. (1975). "Why −i∇ is the momentum." American Journal of Physics 43(12): 1089-1093.
2)Chen, R. L. (1989). "Derivation of the real form of Schrödinger's equation for a nonconservative system and the unique relation between Re (ψ) and Im (ψ)." Journal of mathematical physics 30(1): 83-86.
3) Baylis, W. E., J. Huschilt and J. Wei (1992). "Why i?" American Journal of Physics 60(9): 788-797.
4)Baylis, W. and J. Keselica (2012). "The complex algebra of physical space: a framework for relativity." Advances in Applied Clifford Algebras 22(3): 537-561.
5)Faulkner, S. (2015). "A short note on why the imaginary unit is inherent in physics"; Researchgate
6)Faulkner, S. (2016). "How the imaginary unit is inherent in quantum indeterminacy"; Researchgate
7)Tanguay, P. (2018). "Quantum wave function realism, time, and the imaginary unit i"; Researchgate
8)Huang, C. H., Y.; Song, J. (2020). "General Quantum Theory No Axiom Presumption: I ----Quantum Mechanics and Solutions to Crisises of Origins of Both Wave-Particle Duality and the First Quantization." Preprints.org.
9)Karam, R. (2020). "Why are complex numbers needed in quantum mechanics? Some answers for the introductory level." American Journal of Physics 88(1): 39-45.
Hello everyone,
I am looking for references on the theoretical calculation of nonlinear optical properties of hybrid perovskite quantum wells, and on how to set up the Hamiltonian of an electron in the conduction band confined in a quantum well.
Thank you.
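As a baseline (effective-mass approximation, before adding the excitonic and dielectric-confinement corrections that matter in hybrid perovskites), the Hamiltonian of a conduction-band electron confined along z in a well of width L is

H = -\frac{\hbar^2}{2m^*}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right) + V(z), \qquad V(z) = \begin{cases} 0, & |z| < L/2 \\ V_0, & |z| \ge L/2 \end{cases}

with m* the conduction-band effective mass. In the infinite-barrier limit the confinement energies are E_n = \hbar^2 \pi^2 n^2 / (2 m^* L^2), and the in-plane motion adds the usual \hbar^2 k_\parallel^2 / 2m^* dispersion on top of each subband.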
In teaching, or as a student in physics, oftentimes a difficulty becomes a motivation for new understanding. In this context, what difficulty do you see in using Lagrangian or Hamiltonian methods in physics, also thinking of avoiding difficulties ahead, for example, in teaching or learning Quantum Mechanics?
As a reference, please read the following. "Consider the system of a mass on the end of a spring. We can analyze this, of course, by using F=ma to write down mx'' = −kx. The solutions to this equation are sinusoidal functions, as we well know. We can, however, figure things out by using another method, which doesn’t explicitly use F=ma. In many (in fact, probably most) physical situations, this new [150 years old] method is far superior to using F=ma. You will soon discover this for yourself when you tackle the problems and exercises for this chapter [see instructions below, or search in Google]. We will present our new [150 years old] method by first stating its rules (without any justification) and showing that they somehow end up magically giving the correct answer. We will then give the method proper justification.", in Chapter 6, The Lagrangian Method, Copyright 2007 by David Morin, Harvard University.
Morin continues, "At this point it seems to be personal preference, and all academic, whether you use the Lagrangian method or the F = ma method. The two methods produce the same equations. However, in problems involving more than one variable, it usually turns out to be much easier to write down T and V, as opposed to writing down all the forces. This is because T and V are nice and simple scalars. The forces, on the other hand, are vectors, and it is easy to get confused if they point in various directions. The Lagrangian method has the advantage that once you’ve written down L ≡ T − V, you don’t have to think anymore."
instructions: search in Google, or please write requesting the link.
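For readers meeting this for the first time, here is the mass-on-a-spring example from the quoted chapter worked through the Lagrangian route (standard textbook material, added only to make the comparison concrete):

L = T - V = \tfrac{1}{2} m \dot{x}^2 - \tfrac{1}{2} k x^2, \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = m\ddot{x} + kx = 0 \;\Rightarrow\; m\ddot{x} = -kx

The same information in Hamiltonian form is H = p^2/2m + \tfrac{1}{2}kx^2 with \dot{x} = \partial H/\partial p and \dot{p} = -\partial H/\partial x, which is the formulation that carries over most directly into quantum mechanics and is one pedagogical argument for introducing it early despite the extra abstraction.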
Let's say we have a 2x3 (qubit-qutrit) system and one wishes to work in a 2x2 subspace (using only two of the three qutrit levels). How does one reduce the dimension of, say, the Hamiltonian (or of other operators on the subspace) from 6x6 to 4x4?
This problem is not one of the partial trace, since there we get rid of one of the parties; here I wish to get rid of one level.
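One common way to do this, assuming the dynamics genuinely stays in (or is projected onto) the chosen two qutrit levels, is to build an isometry that keeps those levels and conjugate the operator with it. A minimal numpy sketch:

import numpy as np

keep = [0, 1]                       # the two qutrit levels to retain
S = np.zeros((3, len(keep)))        # 3x2 isometry selecting those levels
for col, lvl in enumerate(keep):
    S[lvl, col] = 1.0

V = np.kron(np.eye(2), S)           # 6x4 isometry on the qubit (x) qutrit space

# Stand-in for the real Hamiltonian: any Hermitian 6x6 matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H6 = (A + A.conj().T) / 2

H4 = V.conj().T @ H6 @ V            # projected 4x4 operator on the 2x2 subspace
print(H4.shape)                     # (4, 4)

Note that this is a projection, not a unitary change of basis: it only gives a faithful effective Hamiltonian if the discarded level is decoupled or adiabatically eliminated; otherwise a Schrieffer-Wolff or similar effective-Hamiltonian treatment of the discarded level is needed.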
I have been asking myself: if one has different analytical representations (analytic continuations of complex functions) of expressions containing the hypergeometric function 2F1(a, b; c; z), derived from a Hamiltonian in quantum physics for 1-D motion of a particle, what is the importance of the multiple sets of eigenvalues E when choosing a specific analytic extension or continuation of a key function appearing in the results, such as the spectral determinant for the Hamiltonian and its eigenvalues, det(E - H) = 0? I am working on this problem and have determined a set of eigenvalues E from a general solution of the time-independent Schrödinger equation for 1-D motion, but I suspect that, depending on the analytic extension or continuation of certain functions in the spectral determinant det(E - H), one can represent multiple spectral values of E (eigenvalues). So the analytic extension can represent multiple values of the energy in quantum problems.
For a system consisting of N atoms, the spin-resolved Hamiltonian (i.e. in terms of spin atomic orbitals) is basically a (2N x 2N) matrix. We can diagonalize this Hamiltonian matrix to obtain 2N spin orbitals (i.e. spin molecular orbitals, as linear combinations of spin atomic orbitals). From there, the spatial orbitals (i.e. spatial molecular orbitals) are needed.
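If the missing step is how to go from spin orbitals to spatial orbitals, one common recipe (a sketch, assuming an orthonormal basis stored in blocked spin order, alpha AOs first and beta AOs second; with a nonorthogonal AO basis the overlap matrix enters both diagonalizations) is to build the one-particle density matrix from the occupied spin orbitals, trace out the spin, and diagonalize the resulting N x N matrix; its eigenvectors are natural spatial orbitals with occupations between 0 and 2.

import numpy as np

def natural_spatial_orbitals(H, n_elec):
    """H: (2N, 2N) Hermitian spin-orbital Hamiltonian, blocked spin order."""
    N = H.shape[0] // 2
    eps, C = np.linalg.eigh(H)            # 2N spin molecular orbitals
    C_occ = C[:, :n_elec]                 # occupy the lowest n_elec of them
    D = C_occ @ C_occ.conj().T            # spin-orbital density matrix
    P = D[:N, :N] + D[N:, N:]             # trace out spin -> (N, N)
    occ, C_nat = np.linalg.eigh(P)        # natural spatial orbitals
    return occ[::-1], C_nat[:, ::-1]      # sorted by decreasing occupation

# Example with a random Hermitian stand-in for the real Hamiltonian:
rng = np.random.default_rng(1)
N = 4
A = rng.normal(size=(2 * N, 2 * N))
occ, C_nat = natural_spatial_orbitals((A + A.T) / 2, n_elec=4)
print(occ)    # sums to n_elec, each value between 0 and 2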
Hello everyone. Does anyone know how I can derive the Hamiltonian of the piezomechanical system represented by the bosonic operators (I mean equation 7 in the photo that I share with you)? Thank you very much if you can help me and point me to a book or article in this field.

Hi.
We will be injecting mice with various drugs intrathecally. We only need to inject 5µL but with the Luer slip tip Hamilton syringes and BD 30G needles, we are using over 100µL of drug per mouse, and that is going to get very expensive very fast. We need something better.
Has anyone used 1/3mL insulin syringes for this sort of thing? I know you can get them for U100 insulin, so half a unit should be 5µL, but does anyone have experience using them?
Thanks in advance
Heidi
Hello everybody,
I am new to topology in condensed matter physics, so excuse me if my question is somehow unusual. In the Haldane model, we go one step (or steps) further and take into account the annihilation and creation of the electron on the next-nearest neighbors when writing the Hamiltonian, rather than using the simple tight-binding model. So my question is: why do we not take into account the annihilation and creation of the electron on the third, fourth, and further neighbors? Is this because those sites are far away, so these hoppings are negligible?
Thanks
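For reference, the Haldane Hamiltonian under discussion keeps only the first two hopping shells plus a sublattice mass term,

H = t_1 \sum_{\langle i,j \rangle} c_i^{\dagger} c_j + t_2 \sum_{\langle\langle i,j \rangle\rangle} e^{\, i \nu_{ij} \phi}\, c_i^{\dagger} c_j + M \sum_i \xi_i\, c_i^{\dagger} c_i, \qquad \nu_{ij} = \pm 1, \;\; \xi_i = \pm 1

Longer-range hoppings are dropped mainly because they fall off quickly with distance and are not needed for the purpose of the model: the complex next-nearest-neighbor hopping is the minimal ingredient that breaks time-reversal symmetry and opens the topological gap. Adding third- or fourth-neighbor terms changes the bands quantitatively but, as long as the gap does not close, it does not change the topological classification.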
It is known that imaginary potentials are a source of particles when included into the Gross-Pitaevskii equation. As far as the dynamics of a Bose gas is concerned, is it possible that these potentials could be a source for chaos? Did anyone investigate this before?
I am currently using Wannier90 to make a site-symmetric tight-binding Hamiltonian. To do this, I need all of my Wannier functions to be atomically centered. I would use site_symmetry = .true., but I cannot get the appropriate *.dmn files from VASP. In trying to get this symmetry, I attempted
num_iter = 0
to keep my functions on the atoms. In doing this, I saw how my initial WF centers were not even on my atoms, despite declaring that within my projections block in the *.win file. I found a forum post about this problem from 2016, but it was never fully resolved
Has anyone run into this problem? Or does anyone know how to fix this?
I have appended the appropriate data from my *win file, as well as the initial state from my *wout file.
Input data
Begin Projections
Sn: s;px;py;pz
S: s;px;py;pz
End Projections
begin unit_cell_cart
4.3303193 0.0000000 0.0000001
0.0000000 4.0765639 0.0000000
0.0000006 0.0000000 29.9986600
end unit_cell_cart
begin atoms_cart
Sn 2.5301712 2.0382820 16.9495545
Sn 0.3650114 0.0000000 13.9993643
S 0.0000909 0.0000000 16.5839284
S 2.1652504 2.0382820 14.3649879
end atoms_cart
Output data
------------------------------------------------------------------------------
Initial State
WF centre and spread 1 ( 2.547126, 2.038268, 16.967523 ) 2.50939867
WF centre and spread 2 ( 2.574011, 2.038383, 17.581330 ) 6.41210763
WF centre and spread 3 ( 2.504668, 2.038283, 17.005189 ) 9.76083682
WF centre and spread 4 ( 2.585412, 1.992948, 16.941401 ) 28.67792149
WF centre and spread 5 ( 0.381966, -0.000014, 13.981389 ) 2.50921043
WF centre and spread 6 ( 0.408842, 0.000101, 13.367488 ) 6.41138460
WF centre and spread 7 ( 0.339537, 0.000001, 13.943767 ) 9.76017540
WF centre and spread 8 ( 0.420172, -0.045365, 14.007884 ) 28.67839428
WF centre and spread 9 ( -0.009523, 0.000005, 16.550864 ) 3.54856232
WF centre and spread 10 ( 0.003042, 0.000030, 16.521522 ) 3.07910933
WF centre and spread 11 ( -0.015094, 0.000000, 16.516322 ) 4.46905309
WF centre and spread 12 ( 0.042570, -0.005514, 16.515811 ) 10.13798770
WF centre and spread 13 ( 2.155636, 2.038287, 14.398071 ) 3.54846432
WF centre and spread 14 ( 2.168198, 2.038312, 14.427348 ) 3.07904394
WF centre and spread 15 ( 2.150072, 2.038282, 14.432612 ) 4.46881459
WF centre and spread 16 ( 2.207729, 2.032774, 14.433204 ) 10.13745989
------------------------------------------------------------------------------
In quantum renormalization group studies, the process starts by dividing the Hamiltonian into block and block-block parts, and then the projection operator is found by using the ground-state functions of the block Hamiltonian. My question is about how we can write the reduced density matrix of the block by taking a partial trace over some of the qubits (if there are more than two particles in a block). How can we choose the qubits that remain after the partial trace, and is there any physical reason for this choice?
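A minimal numpy sketch of the partial trace itself, independent of how the kept qubits are chosen (the choice is usually dictated by which sites couple to the neighboring blocks in the particular RG scheme), may still be useful:

import numpy as np

def partial_trace(rho, keep, dims):
    """Reduced density matrix of rho, keeping the subsystems listed in keep.

    rho  : (D, D) array, D = prod(dims)
    keep : indices of subsystems to keep, e.g. [0] or [0, 2]
    dims : local dimensions, e.g. [2, 2, 2] for a three-qubit block
    """
    n = len(dims)
    keep = sorted(keep)
    traced = [i for i in range(n) if i not in keep]

    rho_t = np.asarray(rho).reshape(dims + dims)
    # Group kept row axes, kept column axes, traced row axes, traced column axes.
    perm = keep + [i + n for i in keep] + traced + [i + n for i in traced]
    rho_t = rho_t.transpose(perm)

    d_keep = int(np.prod([dims[i] for i in keep], dtype=int))
    d_tr = int(np.prod([dims[i] for i in traced], dtype=int))
    rho_t = rho_t.reshape(d_keep, d_keep, d_tr, d_tr)
    return np.trace(rho_t, axis1=2, axis2=3)

# Example: three-qubit block state, trace out the middle qubit.
psi = np.zeros(8); psi[0] = psi[7] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
rho_02 = partial_trace(rho, keep=[0, 2], dims=[2, 2, 2])
print(rho_02.shape, np.trace(rho_02).real)   # (4, 4) 1.0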
I was trying to find the generating function of a canonical transformation. It seems to me that (p1, Q1, q2, P2) are independent variables, so I can construct a generating function that is of the 3rd kind with respect to the first particle and of the 2nd kind with respect to the second particle. So I need to find a generating function of the form $F_{32}(p_1, Q_1, q_2, P_2)$.
To start with, I tried several methods:
- I tried to construct an exact differential out of the given equations
- I tried to use the partial derivatives of the generating function and to integrate them, adding suitable functions of the remaining variables
- I tried to use another pair of variables
Nothing seems to work.
Please tell me what I should do.
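For what it is worth, here is a sketch of the defining relations such a mixed generating function must satisfy (type 3 in the first pair, type 2 in the second), obtained by the usual Legendre transformations; the particular F for your transformation still has to be found from these conditions:

$F(p_1, Q_1, q_2, P_2, t) = F_1(q_1, Q_1, q_2, Q_2, t) - p_1 q_1 + P_2 Q_2$

$q_1 = -\partial F/\partial p_1$, $\quad P_1 = -\partial F/\partial Q_1$, $\quad p_2 = \partial F/\partial q_2$, $\quad Q_2 = \partial F/\partial P_2$, $\quad K = H + \partial F/\partial t$.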

Dear colleagues,
I need to compute the charge transfer integral J(RP), the spatial overlap S(RP), and the site energies of the dimer, H(RR) and H(PP) (two identical molecules, R and P, specifically oriented), formulated as shown in the picture from J. Phys. Chem. B, 2009, v. 113, p. 8813.
Could you specify the keywords of the Gaussian 09 to do this?
Thanks in advance,
Andrey Khroshutin.

If we have an N-qubit system described by the Heisenberg model, how would we calculate the number of degeneracies of the Hamiltonian analytically?
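I cannot offer the analytical count, but as a numerical cross-check one can exactly diagonalize a small chain and group eigenvalues that agree within a tolerance; N, J, and the open boundary below are illustrative choices:

import numpy as np
from functools import reduce

# Exact diagonalization of a small spin-1/2 Heisenberg chain (numerical check,
# not an analytical derivation).
N, J = 6, 1.0
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i):
    """Operator `op` acting on site i of the N-site chain."""
    ops = [I2] * N
    ops[i] = op
    return reduce(np.kron, ops)

H = np.zeros((2 ** N, 2 ** N), dtype=complex)
for i in range(N - 1):                          # open boundary conditions
    for s in (sx, sy, sz):
        H += J * site_op(s, i) @ site_op(s, i + 1)

E = np.linalg.eigvalsh(H)
levels, counts = np.unique(np.round(E, 8), return_counts=True)
for e, g in zip(levels, counts):
    print(f"E = {e:+.6f}   degeneracy = {g}")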
Has anyone here used, or is currently using, a liquid handling system in India? I need some expert advice.
It can be from any company: Beckman, Tecan, Hamilton, any.
Hamiltonian formulation of GR, justification of constraints in LQG, connection formulation
In non-Hermitian quantum mechanics, above a certain threshold a pseudo-Hermitian Hamiltonian's eigenvalue spectrum changes from real to complex, and an abrupt phase transition occurs at that point. At this point a spontaneous breakdown of parity-time (PT) symmetry occurs, but what is the meaning of a spontaneous breakdown of PT symmetry?
Dear professors, researchers,
After developing some models for the Riemann Xi and Zeta functions, I was able to obtain a curious equation based on two parameters that are functions of one final independent parameter which rules the entire equation, together with the complex variable s (or z), the variable of the domain of the Riemann Zeta and Xi functions. The model has not been fully revised yet, because I still need to compute the parameter I mentioned, which demands transcendental equations in some parts (not difficult to compute, but their analysis must be done). Nevertheless, I have noticed similarities with important articles regarding the conjectured Hamiltonian: for example, Bernoulli summations appear in some operators in some references, and hyperbolic trigonometric functions are mentioned in some works, and structures like these seem to appear in my own model, which is purely mathematical (I am not defining physical terms like position, time, or potential functions). Yesterday, after checking old and new articles regarding the Hamiltonian and Polya's conjecture, by Berry and other authors, I noticed that I obtained some components resembling the operator H for which i*H is PT-symmetric with broken PT symmetry (or at least what I understand of it), with the imaginary unit i = sqrt(-1) and the term 1/2 involved, and the possibility of factoring the structures of my model for the expected eigenvalues and potential functions within a physical model. However, the work is not concluded, and I would like to be instructed by physicists or other experts on the concrete mathematical and physical characteristics of a quantized operator like H, how to quantize or describe it from my own results, how to understand the self-adjointness property properly, and whether it is obligatory to look for a Hamiltonian or whether another operator involving the eigenvalues (in this case, the imaginary parts of the non-trivial zeros) could serve within a physical context.
I would like serious contact with physicists and mathematicians who are interested in resolving this part of the mathematical model, since I am convinced that hyperbolic trigonometric functions and Bernoulli numbers are involved in this path!
Carlos
The standard method for carrier transport at the nanoscale is a tight-binding Hamiltonian combined with the NEGF method. But in many journals, we also find the DFT method used for carrier transport. Could anyone explain to me how the DFT+NEGF method is better than the tight-binding Hamiltonian method?
Thanks in advance
I want to discuss the stationary state, in particular the example of a particle in a box (infinite square well). Even though this is a simple example, it contains strange behavior. For the ground state we have a simple stationary wave function with quantized energy E = h^2/(8ma^2) (a is the length of the box). QM tells us that if H is the Hamiltonian operator, then <H> = E and <H^2> = E^2, so sigma^2 = <H^2> - <H>^2 = 0, and each measurement of the energy is certain to return the same value E.
First, the potential energy is zero inside the box by definition, so we have only kinetic energy. But a measurement of momentum in the ground state is not certain: we have a probability density, and yes, its mean kinetic energy equals the quantized energy E of the particle, but in many trials the measured momentum is zero or very close to zero in the ground state! So where does this fixed energy come from every time?
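As a quick numerical illustration of this point (a purely illustrative sketch with hbar = m = 1 and box length a = 1), one can Fourier transform the ground-state wave function and check that the momentum distribution has a finite spread, including weight near p = 0, while <p^2>/2m still reproduces the sharp energy E_1 = pi^2/2:

import numpy as np

# Momentum distribution of the infinite-well ground state (hbar = m = 1).
a = 1.0
x = np.linspace(0.0, a, 2001)
psi = np.sqrt(2.0 / a) * np.sin(np.pi * x / a)       # ground state, E1 = pi^2/(2 a^2)

p = np.linspace(-40.0, 40.0, 801)
phi = np.array([np.trapz(psi * np.exp(-1j * pk * x), x) for pk in p]) / np.sqrt(2 * np.pi)
prob = np.abs(phi) ** 2

print("norm of |phi(p)|^2 :", np.trapz(prob, p))       # approx 1
print("<p>                :", np.trapz(p * prob, p))   # approx 0
print("<p^2>/2            :", np.trapz(p ** 2 * prob, p) / 2.0)  # approx pi^2/2, up to p-grid truncation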
For an external observer the virtual particle has negative energy, but relative to an observer inside the horizon it may have positive energy. So the sign of the energy is frame dependent. How can I see that clearly?
I know that for understanding this situation, we must note that the Hamiltonian is the generator of time translations. But I need more explanations.
thanks.
N.M
I have come up with an algorithm, but I am not sure whether it is correct, and I do not know how to prove it.
How do I minimize a Hamiltonian to find the optimal control?
I need to solve an optimal control problem using Pontryagin's Minimum Principle.
To find u*, I should minimize the Hamiltonian function. But minimizing the Hamiltonian requires knowing the optimal state x* and the optimal costate p*, which
I can obtain only by solving the state and costate ODEs, x*_dot = f(x*, u*) and p*_dot = -H_x.
So, I need to know the optimal state and costate to minimize the Hamiltonian and find the optimal input u*, but I need to know the optimal input u* to solve the ODEs and find the optimal state x* and costate p*.
How can I get out of this loop? Or is this reasoning wrong?
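One standard way out of the loop is an iterative forward-backward sweep: guess u, integrate the state forward, integrate the costate backward, update u from the stationarity condition, and repeat. A minimal sketch for a toy scalar linear-quadratic problem (all numbers are illustrative):

import numpy as np

# Forward-backward sweep for a toy problem:
#   minimise J = int_0^T (x^2 + u^2) dt,  x' = a*x + b*u,  x(0) = x0
# Hamiltonian: H = x^2 + u^2 + p*(a*x + b*u)
#   dH/du = 0  ->  u* = -b*p/2;   costate: p' = -dH/dx = -2*x - a*p,  p(T) = 0
a, b, x0, T = -1.0, 1.0, 1.0, 5.0
N = 500
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

u = np.zeros(N + 1)                                # initial guess for the control
for it in range(200):
    # forward sweep: state with the current control (explicit Euler)
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    # backward sweep: costate with the current state
    p = np.empty(N + 1); p[-1] = 0.0
    for k in range(N, 0, -1):
        p[k - 1] = p[k] - dt * (-2.0 * x[k] - a * p[k])
    # control update from dH/du = 0, with relaxation for stability
    u_new = -b * p / 2.0
    if np.max(np.abs(u_new - u)) < 1e-8:
        u = u_new
        break
    u = 0.5 * u + 0.5 * u_new

J = np.trapz(x ** 2 + u ** 2, t)
print(f"stopped after {it} iterations, cost J = {J:.6f}")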
In the description of the atom, the Hamiltonian is transposed via quantum operators into a generalized Schrödinger equation for the n-body problem of a nucleus with n electrons that have mutual electrostatic interactions. What are the methodologies for solving such a problem? And does the Born-Oppenheimer approximation give analytical solutions?
As an output of the wannier90 calculation, we get an hr.dat file which provides a Hamiltonian with on-site energies and hopping terms in the WF basis. How can I obtain or convert this Hamiltonian into a k-space basis?
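A minimal sketch of the Fourier sum H(k) = sum_R e^{2*pi*i k.R} H(R) / ndeg(R), assuming the standard wannier90 seedname_hr.dat layout (comment line, num_wann, nrpts, degeneracy factors, then one line per matrix element); the file name is a placeholder:

import numpy as np

def read_hr(fname):
    """Parse a wannier90 seedname_hr.dat file (standard layout assumed)."""
    with open(fname) as f:
        lines = f.readlines()
    num_wann = int(lines[1])
    nrpts = int(lines[2])
    ndeg_lines = (nrpts + 14) // 15                  # 15 degeneracy factors per line
    ndeg = np.array(" ".join(lines[3:3 + ndeg_lines]).split(), dtype=int)
    data = np.loadtxt(lines[3 + ndeg_lines:])
    R = data[:, 0:3].reshape(nrpts, num_wann * num_wann, 3)[:, 0, :]
    HmnR = (data[:, 5] + 1j * data[:, 6]).reshape(nrpts, num_wann, num_wann)
    # the first orbital index varies fastest in the file, hence the transpose
    HmnR = HmnR.transpose(0, 2, 1)
    return R, HmnR, ndeg

def H_of_k(kfrac, R, HmnR, ndeg):
    """H(k) = sum_R e^{2*pi*i k.R} H(R) / ndeg(R), k and R in fractional units."""
    phases = np.exp(2j * np.pi * R @ np.asarray(kfrac)) / ndeg
    return np.tensordot(phases, HmnR, axes=1)

# Example: eigenvalues at the Gamma point (file name is a placeholder).
R, HmnR, ndeg = read_hr("wannier90_hr.dat")
print(np.linalg.eigvalsh(H_of_k([0.0, 0.0, 0.0], R, HmnR, ndeg)))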
I was wondering: in common DFT (or Hartree-Fock) algorithms/codes, and in particular in solving the Roothan equation, what is the most computationally expensive part:
1- evaluating all the terms of the Hamiltonian matrix (a.k.a. the Fock matrix)
or
2- solving the eigenvalue problem once the H matrix has been calculated
?
And by how much?
Many thanks to whoever gives useful answers. References (articles or reviews) on this very topic will also be much appreciated.
Imagine that we have a defined Hamiltonian H. This Hamiltonian is going to help us implement single quantum gates. During the evolution under the Hamiltonian H, the chance that the quantum system interacts with the environment becomes very high. We can calculate the fidelity of a specific state psi numerically by solving the Lindblad equation. Here, I am looking for an analytical method to compute the approximate average fidelity of the evolution generated by H.
Generally, we add the spin-orbit interaction as a perturbation term in the system. Which system has this spin-orbit term naturally in its Hamiltonian?
Let $X$ be a Hamiltonian vector field on the plane. Then it either has no closed orbit or it has infinitely many closed orbits. Now, what can be said about higher-dimensional Hamiltonian vector fields? Is there a Hamiltonian vector field on R^4 which has a finite number of periodic orbits?
Please see the following corresponding MO link:
Thank you
Is there a polynomial Hamiltonian vector field with a finite number of periodic orbits?
Please see this MO question:
Thank you
In a random graph, can the paths that cannot be further extended into a Hamiltonian path or a Hamiltonian cycle be roughly separated into two equal parts?
I have an idea for solving the Hamiltonian cycle problem very efficiently for some classes of graphs, for example random graphs. But to prove it, I need to separate into two roughly equal parts the paths of a graph that do not contain the same vertex more than once and that cannot be extended in length to cover all vertices. This separation could, for example, be based on path length, e.g. non-Hamiltonian paths of even and odd length. I also need to know under which assumptions on the graph this works. I would like some proof of this.
Using the Quantum Espresso ab-initio package I have generated a _hr.dat file, which contains information about the hopping parameters. Now, how can I use this file to generate the TB dispersion relation matrix elements?
Dear respected colleagues, I want to solve the Schrödinger equation for the helium atom by the finite-difference method to find the wave functions and energy eigenvalues, including the spin-orbit interaction term.
Please help me to solve the problem.
Thanks and Regards
N Das
The main quantity in the study of dynamical quantum phase transition (DQPT) is Loschmidt echo amplitude defined as
$G(t)=\langle \Psi_{0}|\Psi_{0}(t)\rangle=\langle \Psi_{0}|e^{-iHt}|\Psi_{0}\rangle$.
The rate of return probability is given by
$R(t)=-\lim_{N\to \infty}\frac{1}{N}\log[G(t)]$.
The DQPT is signaled by the singular behavior of $R(t)$ at the critical time $t_{c}$.
Replacing the time t by a complex time $z=t+i\tau$, leads to a complex Loschmidt echo amplitude
$G(z)=\langle \Psi_{0}|e^{-zH}|\Psi_{0}\rangle$.
The zeros of the function $G(z)$ are called Fisher zeros lying on the complex time ($z$) plane. The Fisher zeros form a structure and when they cross the real time axis, the DQPT occurs at real time $t_{c}$.
As it is not always possible to find an analytical formula for the Fisher zeros, I am looking for methods to calculate them numerically. I assume that the Hamiltonian can be represented as a finite-dimensional matrix. A physical system could be a system of non-interacting fermions in one spatial dimension. $|\Psi_{0}\rangle$ is also accessible numerically, and $G(t)$ can be computed numerically for large system sizes.
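A minimal numerical sketch under those assumptions: use the spectral representation $G(z)=\sum_{n}|\langle n|\Psi_{0}\rangle|^{2}e^{-zE_{n}}$, evaluate it on a grid in the complex-z plane, and flag the grid points where |G| nearly vanishes (these candidates can then be refined with a root finder). The random Hamiltonian and state below are placeholders for a concrete model:

import numpy as np

# Candidate Fisher zeros of G(z) = <Psi0| e^{-z H} |Psi0> for a finite matrix H.
rng = np.random.default_rng(0)
dim = 40
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2                          # placeholder Hermitian Hamiltonian
psi0 = rng.normal(size=dim); psi0 /= np.linalg.norm(psi0)

E, V = np.linalg.eigh(H)
w = np.abs(V.T @ psi0) ** 2                # overlaps |<n|Psi0>|^2

def G(z):
    """Loschmidt amplitude continued to the complex-z plane."""
    return np.sum(w * np.exp(-z * E))

re = np.linspace(-2.0, 2.0, 201)           # grid in Re z
im = np.linspace(0.0, 4.0, 201)            # grid in Im z
Z = re[None, :] + 1j * im[:, None]
absG = np.abs(np.vectorize(G)(Z))

# crude zero search: the grid points where |G| is smallest
idx = np.dstack(np.unravel_index(np.argsort(absG, axis=None)[:10], absG.shape))[0]
for i, j in idx:
    print(f"candidate zero (smallest |G| on the grid) near "
          f"z = {re[j]:+.3f} + {im[i]:.3f} i,  |G| = {absG[i, j]:.2e}")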
General Relativity is not enough, is there a direction for improving?
General Relativity is a kind of beautiful and abstract mathematical description of the physical world; it does not reveal the physical nature of the physical world.
General Relativity sprang out of Maxwell's equations; hence, the ideal approximation of a frictionless vacuum was inherited. General Relativity was built on the foundation of mechanics as formulated by Lagrange and Hamilton. There is a certain limitation in both these formulations, since in their standard forms they assume the forces to be conservative; friction and energy dissipation are not handled by these two formulations of mechanics, nor by the abstract theory of General Relativity based on them.
The author discovered a new direction for advancing, please read two articles by clicking the following links and give your kind advice:
I got a question (in a question paper) as follows:
A three-sphere is like a two-sphere. It consists of all points equidistant from a fixed point (the origin) in four dimensional space. Consider a particle free to move on a three sphere. How many conserved quantities does this system possess?
The answer says there are 6 conserved quantities, but how is that possible? Can anyone kindly explain?
Dear Professors,
In many journal papers, the Cross & Hamilton model is used to study the effect of differently shaped factors (cylinder, brick, blades, etc.).
Can we use the Casson model for the effect of differently shaped factors (cylinder, brick, blades, etc.)? Please give me any suggestions.
Thanks & Regards.
Over the years I have repeatedly encountered the problem that NVT dynamics of simple liquids and solids with a Langevin thermostat do not manage to keep the desired temperature. I have observed this behavior for several atomic and molecular liquids with VASP as well as with ASE. Sometimes this can be fixed by reducing the timestep and/or increasing the coupling constant (within reasonable limits), but in some cases, even this does not help.
My current system is pre-equilibrated liquid pentane at T = 300 K at its experimental volume with all masses set to 10 amu and a semi-empirical GFN0-xTB Hamiltonian. I have tried increasing the coupling constant from 0.02 au (suggested value) to 0.05 au and 0.10 au and decreased the time step from 4 fs to 2 fs and 1 fs, but even after 20 ps of simulation time and a pre-equilibration of 10 ps with an even higher coupling constant the average T of the simulation remains at ~290 K instead of the desired 300 K.
I have made similar observations in countless VASP simulations of atomic liquids and solids (DFT Hamiltonian), so I'm starting to think this is a fundamental problem of the Langevin thermostat. If I'm not completely mistaken, that behavior means the thermostat cannot put energy into the simulation fast enough. But where is the energy going? With such a short time step, energy conservation should be really good, but obviously it isn't.
What is my misconception, what am I doing wrong, and how can I fix that behavior?
Any help would be greatly appreciated.
Jan
Dear all,
I'm trying to employ replica-exchange with solute tempering 2 (REST2) in a system, but I have performance issues in the calculations. I'm using openmpi-3.0.0 and gromacs 2019.4 patched with plumed 2.5.5.
To run the simulation, I am using the following configuration,
#PBS -l select=10: ncpus=6
export=OMP_NUM_THREADS=10
mpirun -np 10 gmx_mpi mdrun -s production.tpr -plumed plumed.dat -v -deffnm production -multidir 0 1 2 3 4 5 6 7 8 9 -replex 400 -hrex -dlb no
and I get the following message:
WARNING: On rank 0: oversubscribing the available 48 logical CPU cores per node with 60 threads. This will cause considerable performance loss.
Could anyone help me please?
Hi all,
Currently I am working on NEGF methodology. I have read about the atomistic Hamiltonian calculation method, but I need to know how to generate a single-band effective-mass Hamiltonian for semiconductors with atoms arranged in a honeycomb structure, like hBN or chair germanane, which have two different atoms per unit cell.
For the Hamiltonian H = ħω a†a, how could one evaluate the Green's function in the path-integral approach using coherent states for a fermion (or boson)? In particular, one can evaluate this in phase space using (q, p) as integration variables and calculate the Green's function in terms of the position variable. How does one determine the same quantity in the coherent-state path integral for the above harmonic oscillator (in second-quantized notation)? Most importantly, how could one evaluate the integral step by step for every time slice in coherent states to get the Green's function? Please suggest a reference where this has been worked out.
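For reference, the closed-form result that the time-sliced coherent-state calculation should reproduce for the bosonic oscillator (with the normal-ordered H = ħω a†a, so no zero-point term) follows directly from expanding the coherent states in number states:

$\langle z_f | e^{-iHT/\hbar} | z_i \rangle = \exp\!\left( -\tfrac{1}{2}|z_f|^2 - \tfrac{1}{2}|z_i|^2 + \bar{z}_f\, z_i\, e^{-i\omega T} \right)$

In the time-sliced construction, the integrals over the intermediate coherent-state variables are Gaussian, so they can be done slice by slice and should reproduce this expression in the limit of vanishing slice width.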
I have a parameterised tight-binding Hamiltonian. I also have energies from DFT calculations for a given k-path.
I need to determine the values of the parameters in the TB Hamiltonian.
The method I am attempting is to match the energies at high-symmetry points and around those points. In the literature they talk about least squares fitting but I'm not sure how to do this.
Any help is appreciated
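A minimal least-squares sketch with scipy.optimize.least_squares; the 1D two-parameter band and the synthesised reference data are placeholders only so the example runs. In practice, tb_bands would diagonalise your parameterised H(k) and dft_bands would be your DFT eigenvalues, optionally weighted more heavily at and around the high-symmetry points:

import numpy as np
from scipy.optimize import least_squares

k = np.linspace(0.0, np.pi, 40)                    # toy k-path

def tb_bands(params, k):
    """Toy 1D band: eps0 - 2 t1 cos(k) - 2 t2 cos(2k).  Replace with your model."""
    eps0, t1, t2 = params
    return eps0 - 2 * t1 * np.cos(k) - 2 * t2 * np.cos(2 * k)

rng = np.random.default_rng(1)
true_params = np.array([0.3, 1.0, 0.15])
dft_bands = tb_bands(true_params, k) + 0.01 * rng.normal(size=k.size)   # synthetic "DFT" data

def residuals(params):
    return tb_bands(params, k) - dft_bands         # could be weighted per k-point / band

fit = least_squares(residuals, x0=[0.0, 0.5, 0.0])
print("fitted parameters:", fit.x)                 # should be close to true_params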
How is Hamiltonian Monte Carlo better than the Markov chain Monte Carlo method in Bayesian computations?
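HMC is itself an MCMC method; the usual comparison is against random-walk Metropolis. The key difference is that each HMC proposal is generated by integrating Hamiltonian dynamics with the gradient of the negative log-posterior, which yields distant proposals that are still accepted with high probability. A minimal sketch for a standard-normal target (step size, trajectory length, and target are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def U(q):      return 0.5 * np.sum(q ** 2)        # -log target (standard normal)
def grad_U(q): return q

def hmc_step(q, eps=0.2, L=20):
    p = rng.normal(size=q.shape)                  # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)            # leapfrog integration
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis accept/reject on the total "energy" H = U + K
    dH = (U(q_new) + 0.5 * np.sum(p_new ** 2)) - (U(q) + 0.5 * np.sum(p ** 2))
    return q_new if np.log(rng.uniform()) < -dH else q

q = np.zeros(5)
samples = []
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q.copy())
print("sample mean:", np.mean(samples, axis=0))   # close to 0
print("sample var :", np.var(samples, axis=0))    # close to 1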
It is needed in the preparation of a survey of methods for establishing self-adjointness of operators such as the Hamiltonian operator, the Laplacian operator, etc.
The pitch value set in music and guitar group in tablature are connected by adjunction of tangent-cotangent bundles. Tuning g is the tangent gradient to the flow of pitch on guitar. It determines the directional derivative at every point in tablature. Intonation f is cotangent. It connects every point on guitar to a pitch. Tuning g:(set→ group), which might be 0 5 5 5 4 5 (Standard tuning), is a left adjoint pullback vector used by guitarist as an algorithm to construct tablature by the principle of least action. The right adjoint f:(group→ set), respectively 0 5 10 15 19 24, is a forgetful vector transforming fret number vectors to the codomain pitch number vectors by intonation at a specific pitch level. When the tablature is played, the frequency spectrum observed seems to forget the tablature group, but it can be proven an efficient Kolmogorov algorithm for learning the tuning exists.
The symmetry of (0 5 5 5 4 5) and (0 5 10 15 19 24) is obvious. The second vector is just the summation of the first. The first vector gives the intervals between strings. It points in the direction of steepest pitch ascent. The second vector gives the pitch values of the open strings. When added to the fret vector, 0 5 10 15 19 24 gives the pitch vector.
The tablature is pitch-free and the music is tablature-free. These vectors form a Jacobian matrix on the transformation.
I want to know if a mathematician can see the tangent-cotangent relation of these two vectors. If not, then what is required to convince them?
Does it help to know that addition and multiplication are the same? That the vectors are open subsets of the octave intervals? Do I need to prove a partition exists, or is it obvious?
Is the tensor notation clear?
Is there any mathematician out there that can say something useful about tablature?
When I receive a notification of a citation of my 1975 paper on the duality condition for quantum fields, I take a look at the new reference. But although I can roughly follow the arguments (having not worked in this field for many decades), I don't have a good sense of the context for all this work on modular Hamiltonians, entanglement, conformal/AdS, etc. Where is this research heading? What large questions are these papers hoping to solve?
I and a few others have suggested turbulence plays a role in the subatomic realm.
Hamiltonian formulations become chaotic over long times. Nature does not. Newton speculated in his "Opticks" that because planetary orbits did not seem to decay, the aether he was suggesting was not imparting a non-conservative force on matter. Hence, it seems the Hamiltonian has some applicability.
Entropy has been observed in the universe. Basically, entropy is taking energy from interactions. Entropy should play a role in the subatomic realm. But how?
Taylor described harmonic motion on the string using Newtonian physics as a smooth manifold. This is absolutely unequivocal in Incremental Methods Direct and Indirect. The string field is uniform, tangent-cotangent bundles are (almost) everywhere perpendicular to the string. The curvature of the string is constant because the string always follows the shortest path.
As the first to describe the equation of harmonic motion, Taylor should get credit for the principle of least action but Euler wrote the formal integral.
But Euler said No, the curve of the string can be any continuous curve. To prove this Euler wrote a series of functions, presumably with the string modes on the monochord (kanon, or measuring rod in Greek) in mind. The use of a transcendental series is similar to Fourier harmonic analysis.
Euler and Bernoulli apparently disagreed on whether the number of terms in the series was infinite. They may have thought the series added up to 1, but Cantor showed the series is not coherent because it does not converge.
So the question I have here is whether the string manifold is smooth or merely continuous.
First, there is no addition function on the monochord which allows two modes to add. They cannot add because they have different critical points, and a point cannot be critical and not critical at the same time.
Second, if the string curve is a combination of waves with different frequency, and therefore different energy levels then those waves on the string that have higher energy will simply minimize on the fundamental.
On Research Gate and Stack-exchange (where I am an outlaw banned for life, like the Jesuits opposed to infinitesimals), I have asked perhaps a hundred questions that have never been answered.
I mean, come on! Of course Taylor was correct. It is easy to see the string manifold is smooth because manifolds cannot exist without smooth functions!
I'd like to hear from John M Lee, Pavel Grinfeld, Liviu Nicolaescu, Marco Marzzucchelli, Giuseppe Buttazzo. People who know a smooth manifold when they see one.
Just as Euler's idea became Fourier analysis (useful, but just not in music), Taylor's principle later became the Lagrangian and then the Hamiltonian principle.
The string is fundamental to science, so if physicists and mathematicians do not understand it, what else do they have wrong?
The questions you cannot answer are the best ones.
I have attached Taylor's diagrams showing how he analyzed string motion. Even in Latin the words "cycloid" and "constant curvature" are clear.

Hamilton, M. (1959). The assessment of anxiety states by rating. British Journal of Medical Psychology, 32(1), 50-55.
Hamilton, M. (1969). Diagnosis and rating of anxiety. British Journal of Psychiatry, 3(special issue), 76-79.
Recently, I have been working on PT-symmetric non-Hermitian lattices. A PT-symmetric non-Hermitian Hamiltonian gives a real eigenvalue spectrum in the unbroken-symmetry region, whereas it gives complex eigenvalues in the broken-symmetry region. Can anyone explain why a non-Hermitian term appears in a system? What is the meaning of 'loss and gain' in a system? What is the significance of the imaginary part of an energy eigenvalue?
Currently, I am studying MCMC and its variants, e.g., Hamiltonian MC; however, I am not sure what the best approach is for practically diagnosing the convergence and quality of MCMC samplers. At the moment, I diagnose convergence based on the central limit theorem (CLT). I found that the CLT is not the best approach for diagnosing convergence because, in the Gaussian case, I can use optimization methods that outperform MCMC samplers.
I kindly seek your advice on this matter.
Great thanks!
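One practical alternative to CLT-based checks is to run several independent chains and compute the Gelman-Rubin split R-hat (together with the effective sample size and trace plots). A minimal sketch of split R-hat for a scalar quantity, with synthetic chains only so the example runs:

import numpy as np

def split_rhat(chains):
    """Gelman-Rubin split R-hat; `chains` has shape (n_chains, n_draws) for one scalar."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    half = n // 2
    sub = chains[:, : 2 * half].reshape(2 * m, half)   # split each chain into two halves
    chain_means = sub.mean(axis=1)
    chain_vars = sub.var(axis=1, ddof=1)
    W = chain_vars.mean()                              # within-chain variance
    B = half * chain_means.var(ddof=1)                 # between-chain variance
    var_hat = (half - 1) / half * W + B / half
    return np.sqrt(var_hat / W)                        # close to 1.0 when converged

# toy usage: four well-mixed chains sampling the same normal distribution
rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 1000))
print("split R-hat:", split_rhat(chains))              # should be close to 1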
The first contributions by quantum mechanics (QM) to electromagnetism (EM) were due to the work by Max Planck in postulating the photon, later by Einstein in postulating the stimulated emission of photons and in calculating the Einstein A and B coefficients, predicting the laser -- light emission by stimulated emission of radiation. See https://www.aps.org/publications/apsnews/200508/history.cfm
Macroscopically, both motion of charges and magnetic moments seem, at first, to be responsible for the magnetic field, but the B moment is made up only of motion of charges according to special relativity (SR) ... still, there are no monopoles in nature, while QM adds the magnetic moment of particles.
Einstein first pointed it out, when he proposed SR, albeit with no QM. Using modifications by Minkowski, SR was applied by Einstein to general relativity (GR). We live in at least 4D, said Einstein. Still, GR is not compatible with QM.
But in EM, Maxwell's equations are NOT equivalent to the relativistic equations for the field strength tensor (as some presumed), because they exclude QM, such as in the Aharonov-Bohm effect and the laser.
Here, gravitoelectromagnetism (proposed by Oliver Heaviside and further developed by Olev Jefimenko) was revealed to be incorrect and not covariant with SR (does not show Lorentz covariance), and does not include QM. Their (Heaviside and Jefimenko) ideal of electromagnetic theory therefore falls short by not accepting the rules of SR and QM. This is well-known.
Physically, GR seems right, as it agrees with Minkowski SR. One would need to use not vectors but tensors, as both sides of an equation A = A must transform equally under transformations such as rotation, mirroring, or translation. And one can also use scalars, following the formalism of the Euler-Lagrange equation. Therefore, GR was used although incompatible with QM.
Some people say that "there is no need for the Euler-Lagrange equation in mechanics, because essentially it does not go beyond Newton's laws." Yes, that is what WP says, but it is wrong.
Newton's laws do not include a way to add SR and QM, but the Euler-Lagrange equation does. There are flaws in Newton's laws as well (not in the Euler-Lagrange equation), some documented elsewhere, such as absolute time and "demonologically based" action-reaction.
The magnetic moment of the electron and of a neutral particle are then taken into account, properly, by adding QM and SR in the Euler-Lagrange equation -- not by adding Maxwell equations. EM seems complete with SR and QM, excluding the Maxwell equations.
In all of that one apparent lack of coherency remains -- Why is GR incompatible with QM?
We found that the answer lies not in physics per se, but in the use of conventional mathematics, which predicates a supposed Newtonian "continuity", "infinitesimal", and "infinity" that, however, do not exist in Nature -- and then we modified and extended GR to be compatible with QM, by using proper mathematics.
Constructive mathematics, such as Digital Constructivism, should be used in GR and elsewhere, where the notion of "there exists" is strictly interpreted as "can be constructed". Everywhere is digital, quantum. If the quantum does not seem digital, this is a sign that one has not reached the quantum -- still, one has a mixture. The Curry-Howard correspondence, in mathematics, indicates also that there is no continuity, and no "number" or quantity as infinite, no epsilons and deltas of Cauchy, no infinitesimals.
After a preprint period in academic circles and in RG, for open comments, our answer and proposed solution -- why modified GR is compatible with QM -- is now published, and can be seen free under Kindle Unlimited, or obtained inexpensively on ebook and paperback formats, for example at: https://www.amazon.com/dp/B07ZXRQQJX
What is your qualified opinion?
Imagine that we have interacting point particles that move according to a law like x(i, t+dt) = x(i,t) + f(i,j,t)*dt; velocities can also be included. In the limit dt -> 0 we recover the equations of the well-known continuous case with a Hamiltonian. Can we have a Hamiltonian for small but finite dt?
Hello dear users,
I am trying to simulate lysozyme in water with REST2. The temperature range I have is 300-500 K and the number of replicas is 20. I have attached the mdp file and the data.out file that contains the error message. My simulation fails after the first swap, saying the potential is infinite. I believe my system should be well equilibrated because the .gro file I used to generate the .tpr files for REST2 is from a 360-ns regular MD run at 300 K. I have also taken a look at the replicas that fail and found that those that failed were involved in a Hamiltonian replica-exchange swap. Does anyone have any idea? Thanks in advance!
Regards,
Peiyin Lee
My little knowledge of C-symmetry is this:
All Hamiltonians are crashing on C but will not allow it to crash. Do you agree?
I was trying to calculate the eigenstates of the molecular projected self-consistent Hamiltonian (MPSH) for a two-terminal molecular device in ATK-2016.3. The zero-bias calculation can be done as given in the tutorials, but the finite-bias calculation is not there. I am looking for the bias-dependent calculation. My question is: for the finite-bias MPSH calculation, would I have to optimize the device geometry first in the presence of the applied bias? Please advise.
Thanks in advance
How does one form an effective Hamiltonian for a system, starting from the Hamiltonian of the system plus environment? Is there any pedagogical review available on this topic?
Suppose [C, H] =/= 0, where C is the charge conjugation operator and H is the Hamiltonian. What is the physical meaning? If the above commutator is zero, what is the physical meaning?
B.Rath
Hi,
I have a 4x4 Hamiltonian describing a part of my system. To get a holistic view of the situation, I need to do a charge inversion on the matrix. What is the 4x4 charge inversion operator? And what is the logic behind it (e.g. how is it built up; for example, is it something like $-i(\tau_0 \otimes \sigma_y)$)?
The main question is how one can define the interaction Hamiltonian for entangled particles which are far apart. If it is the same as the SPDC Hamiltonian, then the expectation value of the momentum <P> doesn't depend on the distance between the quantum particles.
In the formulation of the NMR spin Hamiltonian, it is assumed that the direct dipolar spin-spin interaction vanishes in an isotropic medium (e.g. solution). This is a consequence of the direct spin-spin coupling interaction being purely anisotropic, and hence averaging to zero for a freely rotating molecule. My question is whether the same assumption holds in solid-state NMR for a powder sample, where all possible orientations of the molecule with respect to the field may be present as well.
Thanks in advance. Julie.
I have two Hamilton syringes, 5 and 10 microlitres. How do I clean a Hamilton syringe?
I am trying to solve a Hamiltonian for an electron moving in the presence of 10 (Z=1) nuclear centres located in three-dimensional space (the locations are quite random and there is no symmetry in their arrangement). My Hamiltonian has one kinetic-energy term for the single electron and 10 nuclear-attraction terms between that single electron and all the fixed classical nuclear centres. After solving the Hamiltonian using the LCAO method, I obtained wave functions. How do I verify that the answer is correct? I thought of plotting the wave functions (or the probability density) to see the distribution. Is there any software where I can plot the wave function or probability density of the electron and inspect it visually? The problem I am facing is that the wave function is 3D, so I would need 4 dimensions to visualize it. How do I do that? Can someone suggest some software, a reference, or another way of verifying the answers?
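One common route is to evaluate psi on a regular 3D grid, write it as a Gaussian cube file, and inspect isosurfaces in a viewer such as VESTA or VMD; a quicker alternative is to plot 2D slices of |psi|^2 through chosen planes. A minimal matplotlib sketch, where the sum of 1s-type orbitals on random centres is only a placeholder for your LCAO wave function:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
centres = rng.uniform(-4, 4, size=(10, 3))              # 10 hypothetical nuclei

def psi_on_grid(X, Y, Z):
    """Placeholder LCAO-like wave function: equal-weight sum of 1s orbitals."""
    psi = np.zeros_like(X)
    for cx, cy, cz in centres:
        r = np.sqrt((X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2)
        psi += np.exp(-r)
    return psi

grid = np.linspace(-6, 6, 121)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
density = np.abs(psi_on_grid(X, Y, Z)) ** 2

iz = np.argmin(np.abs(grid - 0.0))                       # slice through z = 0
plt.contourf(grid, grid, density[:, :, iz].T, levels=30)
plt.colorbar(label=r"$|\psi|^2$ at z = 0")
plt.xlabel("x"); plt.ylabel("y")
plt.savefig("density_slice.png", dpi=150)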
A graph G is hypohamiltonian if G does not contain a hamiltonian cycle but for any v ∈ V (G) the graph G − v does contain a hamiltonian cycle. Replacing in the preceding sentence “cycle” by “path”, we obtain the definition of a hypotraceable graph.
We call a vertex cubic if it has degree 3, and a graph cubic if all of its vertices are cubic. Consider a graph G. Two edges of G are independent if they have no common vertices. The girth of a graph is the length of its shortest cycle.
Hi, I'm trying to extend my Matlab NEGF solver from crystal structures to molecules. All I need as inputs are:
- the molecular Hamiltonian (H), which models the channel and is considerably more complex than the crystal one
- self-energies that model the interactions between the contacts and the channel
Where can I get them? Do atomistic tools like ATK, GAMESS or Quantum Espresso provide them in a MATRIX FORMAT?
The HP filter is widely used to decompose trend and cyclical components, but Hamilton (2017), "Why You Should Never Use the Hodrick-Prescott Filter", criticized this method. What are possible alternatives to the HP method?
It is well known that for a typical halo orbit around L1 or L2 libration point in circular restricted three-body problem its monodromy matrix has eigenvalues of the following form:
- lambda1 > 1
- lambda2 = 1 / lambda1 < 1
- lambda3 = lambda4 = 1
- lambda5 = lambda6*, |lambda5| = |lambda6| = 1
It is also well known that eigenvectors associated with lambda1 and lambda2 linearly approximate directions along the unstable and stable invariant manifolds, respectively. What about other lambdas?
As I understand it, the complex pair (lambda5 and lambda6) is associated with a two-dimensional invariant subspace in which vectors rotate by the angle rho, where lambda5 = exp(i*rho). Am I right?
What about lambda3 and lambda4? Since the system of equations in the CR3BP is Hamiltonian and autonomous, each periodic orbit has at least 2 eigenvalues equal to +1. So, in our case, the algebraic multiplicity is 2. What about the geometric multiplicity? As I understand it, there is at least one eigenvector associated with +1: the direction along the orbit. Is it true that another independent eigenvector (if any) is directed along the family of halo orbits?
It is well known that there are periodic three-dimensional orbits around the L1 and L2 libration points in the circular restricted three-body problem, called halo orbits. The existence of these orbits is justified numerically: anybody can state a system of nonlinear equations (conditions of symmetry and orthogonality to the xz plane) and solve it numerically to obtain a solution with high precision. But is there any analytical proof that these periodic orbits exist, mathematically?
As I know, existence of the Lyapunov orbits in CR3BP is a consequence of the Lyapunov's centre theorem:
- Meyer, K. R. and Hall, G. R. (1992). Introduction to Hamiltonian Dynamical Systems and the N-Body Problem. Applied Mathematical Sciences, vol. 90. Springer-Verlag, New York.
But why do halo orbits exist? Why is there an energy level at which there is a bifurcation from the planar Lyapunov orbits that gives rise to halo orbits?
Hello everyone,
I have written a small code that performs linear Spin Wave for simple antiferromagnetic Heisenberg Hamiltonian. It takes spin Hamiltonian as an input and performs Holstein-Primakoff, Fourier transformations as well as linearization.
Now, I would like to extend it to further neighbors. Let's say I have a site A and a site B, and I now add A2 and B2 sites to my model system so that I can add, say, a J_2*\vec{S_A}*\vec{S_A2} Heisenberg term to my initial model. Do I have to add J_1*\vec{S_A2}*\vec{S_B2} as well? And what about the boundary conditions then? Also, if I add a third neighbor, I am already working with a decently sized cluster...
Also, in my case, I expect the system to stay bipartite, i.e. to have two magnetic sublattices. But what if I don't know what the magnetic order would be?
Thank you!
Ekaterina
In one-particle, one-dimensional quantum mechanics, if the spectrum of the Hamiltonian is given, can the form of the potential be determined? For instance, can all potentials with spectrum of the form $1/(n+a)^2$ be determined?