# Physics

68
Is there more than one theory of gravity that can demonstrate compatibility with the Einstein equivalence principle?

I am looking for theories that give different observations, i.e. that are measurably different.

• Observations must match Minkowski space in free space. This is a theoretical question; I am not looking for radical theories that would violate Minkowski space.
• Each theory must be able to demonstrate that observations in an accelerated elevator in free space are the same as in an elevator sitting on a planetary surface.
• Each must reduce to the Newtonian limit, but equivalence must be universally valid.  Meaning, the incremental application of equivalence starting in the Newtonian limit may place constraints on the "form" of metrics.
• Please show how multiple elevators over an extent of space are integrated to give the different observations of each theory, i.e. how the same local observations integrate differently.

Updated question 3/15/15.

24
Is Relativity's Equal Footing Treatment of Space and Time Correct in its Entirety?

As is well known, the General Theory of Relativity (GTR) was introduced along with the principle that there exists no preferred system of coordinates, but that any arbitrary system of coordinates would do. In order to be able to use an arbitrary system of coordinates, the GTR introduced the complex mathematical machinery of covariance, which amongst other things requires that the Laws of Physics be expressible in tensor form, that is to say, all of the Laws of Physics, not just those of the GTR.

For some time, I have been at variance with this issue, after noticing something really subtle about this approach. During coordinate transformations in the GTR, I am of the view that it is erroneous not only to transform the time coordinate but also to introduce a time dependence in the space coordinates, as this is tantamount to a physical alteration of the system, when in actual fact a change of coordinates is not a physical alteration of the system under consideration.

I will be happy to hear your view(s) on my thoughts on this matter.

I agree with Mohamed El Naschie. If you were to change the minus sign in the Minkowski metric to a plus sign, giving a metric (+1, +1, +1, +1), this gives you what looks like Euclidean four-space, as though time were just another spatial dimension. But this comes strictly from the mathematical transformation t -> it. In order to formulate Maxwell's equations that are invariant with respect to space-time coordinate transformations in this "new" space, it would be necessary to change the Lorentz transformations somewhat. It is also necessary to flip the sign of the time-derivative term in the Maxwell equation expressing Faraday's law of induction. At first the results look exciting, because you might just believe that there is a 'parallel universe' where particles can exceed the speed of light. But in the end, the solutions for electromagnetic waves are self-damping in a vacuum and Lenz's law operates in reverse. The minus sign in the Minkowski metric is thus extremely important.
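The t -> it substitution referred to above is just the usual Wick rotation of the interval:

```latex
% Minkowski interval with signature (-,+,+,+):
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
% Substituting t \to i\tau gives the Euclidean form discussed above:
ds^2 = c^2\,d\tau^2 + dx^2 + dy^2 + dz^2
```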

7
Can we direct sound like a laser beam, i.e., can we make sound travel in a straight line?
Just as we focus light using lenses, can we have a device to focus sound?

Regarding parametric generation of sound through mechanisms described by nonlinear acoustics, you could check out the literature in underwater acoustics, where such effects are commonly utilized.

32
Is logical independence of the square root of minus one a consequence of arithmetic's incompleteness?

In a formal arithmetical system, axiomatised under the field axioms, the square root of minus one is logically independent of the axioms. This is proved using the Soundness and Completeness Theorems together. This arithmetic is incomplete and is therefore subject to Gödel's Incompleteness Theorems. But can it be said that the logical independence of the square root of minus one is a consequence of incompleteness?

8
How does one calculate interface energy?
If we have two surfaces of different materials, how do we calculate the interface energy in between the surfaces?

Dear Sandhya,

if you are interested in calculating the interface energy between two faces of two different "crystalline" materials, Dupré's formula governs this simple problem:

- first, the surface energy of each surface is needed;

- secondly, their adhesion energy is needed as well.

If you are interested, we developed a nice program to do that, and I could send you some papers about it.
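Dupré's relation above can be sketched numerically; assuming the usual form W_adh = γ1 + γ2 − γ12, the interface energy follows from the two surface energies and the work of adhesion (the numbers below are made up for illustration):

```python
# Dupré's relation: W_adh = gamma_1 + gamma_2 - gamma_12, so the
# interface energy gamma_12 = gamma_1 + gamma_2 - W_adh.
def interface_energy(gamma_1, gamma_2, w_adhesion):
    """Interface energy (J/m^2) from the two surface energies and the
    work of adhesion, all in J/m^2."""
    return gamma_1 + gamma_2 - w_adhesion

# Example with made-up values (J/m^2):
print(interface_energy(1.5, 1.0, 2.0))  # -> 0.5
```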

Hoping this helps,

Dino

3
What is the relation between M and H for different type of antiferromagnetic materials?

What will the M-H curve look like, and how will a high field (at room temperature) affect the M-H curve for these different types of antiferromagnet (A, C and G type)?

What will be the basic difference between the paramagnetic and antiferromagnetic MH curve up to 9 Tesla?

Note: the Néel temperature is well above room temperature (640 K) and the spin-flip field is high (18 T).

At very high fields, antiferromagnets undergo metamagnetic transitions (spin-flip transitions). But these generally become apparent only at fields as high as 12 T.

5
Can you please suggest a free NIR spectra database?

For theoretical evaluation I need the NIR spectra of certain components. Can you please suggest the free online/package NIR spectra database/library?

Dear Vignesh,

The US National Institute of Standards and Technology (NIST) also provides nice databases and online chemistry databases.

You might look at this link:

http://srdata.nist.gov/gateway/gateway?property=IR+spectra

3
Are there any models describing Optics Theory using the String Theory or M theory?

There are many versions of String Theory. They provide more general (and maybe more exact) models of elementary particles, so it is expected that they will also contribute to the theory of optics.

If these models do exist, for what purposes are they used, or could they be used, in practice?

What benefits does optics based on String Theory provide, or what benefits could it provide?

From E = mc2 to E = mc2/22—A Short Account of the Most Famous Equation in Physics and Its Hidden Quantum Entanglement Origin
Affiliation(s)
Department of Physics, University of Alexandria, Alexandria, Egypt.
ABSTRACT
Einstein’s energy-mass formula is shown to consist of two basically quantum components, E(O) = mc2/22 and E(D) = mc2(21/22). We give various arguments and derivations to expose the quantum entanglement physics residing inside the deceptively simple expression E = mc2. The truly surprising aspect of the present work, however, is the realization that all the “physics” involved in deriving the new quantum dissection of Einstein’s famous formula of special relativity is actually a pure mathematical necessity anchored in the phenomenon of volume concentration of convex manifolds in high-dimensional quasi-Banach spaces. Only an endophysical experiment encompassing the entire universe, such as COBE, WMAP, Planck and supernova analysis, could have discovered dark energy and our present dissection of Einstein’s marvellous formula.
KEYWORDS
Special Relativity, Varying Speed of Light, Hardy’s Quantum Entanglement, Dark Energy, Measure Concentration in Banach Space, ‘tHooft Fractal Spacetime, Witten Fractal M-Theory, E-Infinity Theory, Transfinite Cellular Automata, Golden Mean Computer, Endophysics, Finkelstein-Rössler-Primas Theory of Interface
Cite this paper
Naschie, M. (2014) From E = mc2 to E = mc2/22—A Short Account of the Most Famous Equation in Physics and Its Hidden Quantum Entanglement Origin. Journal of Quantum Information Science, 4, 284-291. doi: 10.4236/jqis.2014.44023.

11
Does a prism affect electromagnetic waves passing through it, other than those light waves visible to the human eye?
I am a non-physicist.

Moreover, the refractive index is wavelength dependent and usually decreases smoothly at longer wavelengths according to the Sellmeier equation. This leads to a varying slope dn/dλ as a function of wavelength. This optical property is called dispersion, and a prism is a dispersive element. The optical material is chosen to have low absorption in the operating spectral range.
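As an illustration of the Sellmeier form n²(λ) = 1 + Σ Bᵢλ²/(λ² − Cᵢ), the sketch below uses the commonly quoted coefficients for Schott BK7 glass (λ in micrometres); check them against a datasheet before relying on them:

```python
import numpy as np

# Commonly quoted Sellmeier coefficients for Schott BK7 (wavelength in um).
B = np.array([1.03961212, 0.231792344, 1.01046945])
C = np.array([0.00600069867, 0.0200179144, 103.560653])  # um^2

def n_bk7(lam_um):
    """Refractive index of BK7 at wavelength lam_um (micrometres)."""
    lam2 = lam_um ** 2
    return np.sqrt(1.0 + np.sum(B * lam2 / (lam2 - C)))

# Dispersion: n decreases smoothly toward longer wavelengths.
print(n_bk7(0.4), n_bk7(0.7))
```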

4
In a semiconductor, what is the physical meaning of the envelope function approximation?

In a semiconductor, what is the physical meaning of the envelope function approximation?

23
Is photon -photon scattering possible?

The interactions of photons with matter, such as Compton and Thomson scattering, are well known at higher photon energies. What about scattering events between photons? Do those occur at higher energies, where the photons behave more like particles? If such scattering is possible, the cross-section may be extremely small.

Yes, of course: you can construct loop diagrams in Quantum Electrodynamics (QED) in which photon-photon scattering is possible. At tree level this scattering is not possible, because even though photons are the mediators of electromagnetic interactions, they themselves do not carry any charge. However, in QED a photon can easily couple to an e+e- pair. Using this coupling four times, it is possible to scatter light by light. See for example the notes on Nuclear and Particle Physics available on the web page of the University of Manchester. It is also possible to make a rough estimate of the cross-section in the following way. The amplitude is proportional to g^4, so the cross-section is proportional to g^8; therefore the cross-section is proportional to α^4. Here α is the dimensionless QED coupling, whose value is about 1/137.

http://oer.physics.manchester.ac.uk/NP/Notes/Notes/Notesse30.xht
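A quick numerical check of the α^4 scaling mentioned above (purely an order-of-magnitude illustration):

```python
# The light-by-light amplitude carries four photon-electron vertices,
# so amplitude ~ e^4 ~ alpha^2 and cross-section ~ |amplitude|^2 ~ alpha^4.
alpha = 1 / 137.0                # QED fine-structure constant (dimensionless)
amplitude_scale = alpha ** 2     # amplitude ~ g^4 ~ alpha^2
cross_section_scale = alpha ** 4 # sigma ~ g^8 ~ alpha^4

print(f"alpha^4 ~ {cross_section_scale:.2e}")  # prints "alpha^4 ~ 2.84e-09"
```

This is why the process is so hard to observe: the cross-section is suppressed by roughly nine orders of magnitude relative to the leading electromagnetic scale.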

4
What is instantaneous rotation?

I understand instantaneous strain, but this paper measures instantaneous rotation. What does this represent physically and what are its units?

• Source
##### Article: Rotation of single rigid inclusions embedded in an anisotropic matrix: A theoretical study
[Hide abstract]
ABSTRACT: This paper presents a theoretical analysis of the instantaneous rotation of elliptical rigid inclusions hosted in a foliated matrix under bulk tensile stress. The foliated matrix is modelled with orthotropic elastic rheology, considering two factors, m and n, as measures of anisotropy, defined in terms of μ0, the shear modulus parallel to the foliation plane, and the Young moduli along and across the foliation, respectively. Normalized instantaneous inclusion rotation (ω) is plotted as a function of the bulk tension direction (α) with respect to the long axis of the inclusion, taking into account two parameters: (1) the anisotropic factors m and n, and (2) the inclination of the foliation plane to the long axis of the inclusion (θ). In the case of θ = 0°, ω versus α variations are sinusoidal, showing maximum instantaneous rotation in the positive and negative sense at α = 45° and 135°, respectively, irrespective of the m and n values. The magnitude of the maximum ω increases with decreasing m, i.e. increasing degree of anisotropy in the matrix. On the other hand, decreasing the value of the anisotropic factor n results in decreasing instantaneous rotation. ω increases with the aspect ratio R of the inclusion, assuming an asymptotic value when R is large. This asymptotic value is larger for lower values of m. In the case of θ ≠ 0°, ω versus α variations are asymmetrical, showing maximum instantaneous rotation at varying inclusion orientations for different m. For given m and n, with increasing θ the sense of instantaneous rotation reverses at a critical value of θ.
Journal of Structural Geology 04/2005; 27(4-27):731-743. DOI:10.1016/j.jsg.2004.12.005

Do you mean the value of the angular velocity vector ω(t) at time t? If so, the unit is radians per second (rad/s).

Cartan has a nice treatment of this in terms of the rotation rate of a system of unit vectors that define a system of coordinates rotating with a rigid body, often called "body axes". Denote the unit vectors of this coordinate system by e_i. Then d e_i / dt = {\omega^k}_i e_k, where {\omega^k}_i is a mixed rank-2 tensor. Its covariant components \omega_{ki} are antisymmetric, and are the natural description of "angular velocity". The usual angular velocity vector is not a natural description, being a pseudo-vector constructed from this rank-2 tensor by setting \omega^i = - V^{ijk} \omega_{jk} / 2. Here, the rank-3 tensor V^{ijk} is the contravariant form of the volume tensor, given by V^{ijk} = (1/\sqrt{\det g}) \varepsilon^{ijk}, where \det g is the determinant of the (covariant) metric tensor g_{ij} = e_i \cdot e_j, and \varepsilon^{ijk} is the Levi-Civita permutation symbol, sometimes erroneously called a tensor. (It looks like a tensor if you restrict transformations to rotations, but is in reality an instruction for permutation.)

A brief inspection of the paper you quote (I would have to read it in detail to be certain) suggests that you are dealing with the antisymmetric part of the deformation tensor describing either a flow or the deformation of an elastic solid. The deformation tensor is the Jacobian matrix of the velocity field (in the case of a flow) or of the displacement field (in the case of an elastic deformation). Thus, for a flow, it is the tensor v_{i;j}, where v_i is the covariant velocity field, and ;j indicates the covariant derivative with respect to x^j (the same as a partial derivative in Euclidean coordinates). Then \omega_{ij} = v_{i;j} - v_{j;i}. To say that this is the instantaneous velocity of a pure shear is to say that v_{i;j} has zero contraction (trace), that is, {v^i}_{;i} = 0. In this case, \omega_{ij} represents the instantaneous angular velocity of a frame consisting of three vectors e_i that co-rotate with an infinitesimal fluid element as the fluid flows. If you construct the associated angular velocity vector (formula given above), the direction of the vector is the axis of rotation of this element at time t, while the magnitude of the vector is its angular velocity of rotation about this axis at time t.

These concepts carry over, mutatis mutandis, to the case of elastic deformation of solid media.
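In Euclidean coordinates (g_ij = δ_ij, so det g = 1), the correspondence described above between the antisymmetric rank-2 tensor and the angular-velocity pseudo-vector can be sketched as:

```python
import numpy as np

def levi_civita():
    """The rank-3 Levi-Civita permutation symbol eps[i, j, k]."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return eps

def tensor_from_vector(w):
    """Antisymmetric tensor omega_{jk} = -eps_{jkl} w^l."""
    return -np.einsum('jkl,l->jk', levi_civita(), w)

def vector_from_tensor(omega):
    """Recover the pseudo-vector w^i = -(1/2) eps^{ijk} omega_{jk}."""
    return -0.5 * np.einsum('ijk,jk->i', levi_civita(), omega)

w = np.array([0.0, 0.0, 2.0])          # rotation about z at 2 rad/s
Om = tensor_from_vector(w)             # antisymmetric: Om == -Om.T
print(np.allclose(vector_from_tensor(Om), w))  # -> True (round trip)
```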

5
Which techniques are used for three-dimensional imaging outside the visible light?

Since conventional optics does not always work in the ranges outside visible light (UV, IR), it is very interesting how 3D imaging can be applied in these ranges.

Can somebody give relevant references?

Hi Vladimir. There is no 'blur' in the information structure of vision: no motion blur, no depth of field. There are no pictures in vision, no picture frames, and no frames per second. As there are no pictures to call up, there is no binocular fusion going on. We have developed a new form of illusionary space based on perceptual structure, not on the fundamentals of optics. This is termed Vision-Space, instead of picture space. Its spatial saliency is not on the 1D/2D/3D curve; it is experiential spatial saliency, or ExpD! A presentation list is attached and articles are on my page.

14
Can you recommend other approaches to describe light propagation except ray propagation approach and wave field propagation?

I want to know, if I have additional options to represent light propagation. Mainly to describe imaging process. Relevant references are welcome. Thank you in advance!

This may or may not be of interest. We have developed a new form of illusionary space called Vision-Space: the space that occurs to us within the phenomenon of vision. This includes a field potential that is set out radially when we either make a fixation or centre it on ourselves. The point being that it has nothing to do with the fundamentals of optics. The irritating thing for physics is that this field structure generates proximity cues that relate to the real distances between objects and surfaces in the environment. So the field structure provisions an implicit form of spatial awareness even on a monocular basis (this is quite wrongly referred to as 'peripheral vision'). It has nothing to do with 'depth' perception through occlusion or perspective cues. Vision is actually entirely non-photographically rendered. I attach a list of presentations. Why is this of interest here? Because this field structure appears to be derived from information in noise. We appear to be unfolding a data potential from noise that is emanating from the environment. A form of decoherence at the retina, with the pays element being preserved and streamed? A function for rods at photopic levels, working with ipRGCs on something other than light intensity? Environmental signal in radiance? There are articles on my page. "Having the courage of your perceptions" attempts to cover the physics angle.

6
Is there any software to analyze ftir results?

I have got an FTIR transmission curve but cannot analyze it. Can anyone suggest the best way to analyze it, with either software or manual methods of FTIR curve analysis? The manual analysis should include formulas or tables to justify the results through physics.

Libraries always help you for known compounds. But if you synthesize a new compound, then you should refer to reference/text books.

10
Which important points should be considered in plotting a bifurcation diagram?

When we want to plot a bifurcation diagram for a flow or map, we should consider some important points. Some of them have been mentioned in [1]. For example, we should be careful about coexisting attractors, transient behavior, the proper quality of the diagram (resolution, marker size, …), and dealing with more than one bifurcation parameter.
I want to know if there are additional important points that one may encounter while obtaining a bifurcation diagram.
[1] J. C. Sprott, “A proposed standard for the publication of new chaotic systems” Int. J. Bifurcation Chaos, 21, 2391 (2011).

While plotting a bifurcation diagram, one should know the range of the parameter to be varied as well as the initial conditions. The stability of the fixed points at different parameter values provides the necessary knowledge in this regard.
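The precautions above (discarding transients, sweeping the parameter range, using several initial conditions to catch coexisting attractors) can be sketched with the logistic map; the parameter range and iteration counts below are illustrative choices:

```python
import numpy as np

def bifurcation_points(r_values, n_transient=500, n_keep=100, seeds=(0.2, 0.8)):
    """Return (r, x) samples of the logistic-map attractor x -> r x (1 - x)."""
    out = []
    for r in r_values:
        for x0 in seeds:                    # several initial conditions
            x = x0
            for _ in range(n_transient):    # discard transient behavior
                x = r * x * (1.0 - x)
            for _ in range(n_keep):         # record the attractor
                x = r * x * (1.0 - x)
                out.append((r, x))
    return np.array(out)

pts = bifurcation_points(np.linspace(2.8, 4.0, 200))
# pts[:, 0] is the parameter r and pts[:, 1] the attractor samples; feed them
# to a scatter plot with a small marker size for a clean diagram.
```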

18
Do you think that scientific research is more or less efficient now than in the 1900s?
In the first half of the 1900s (1900-1950), most of modern physics' theories and discoveries were achieved (quantum physics, atomic physics, nuclear physics, high energy physics, astrophysics, solid state physics, solid state electronics, microelectronics, integration...) with far less means, equipment and funding than now.

In the second half of the 1900s (1950-2000), most of the discoveries were more or less an improvement or a limited innovation with respect to what had already been found before 1950. At the same time, the funding being put into research has been increasing drastically.

Do you agree that there is a decreasing efficiency of scientific research worldwide, which started at the end of the 1900s and is even more evident in the 2000s?

Following the previous posts, I can comment from my experience as Director of scientific organizations and Director of Research in an international industrial company. Managing research is difficult because it requires reconciling two often contradictory goals: leaving as much freedom as possible to the people, with sufficient resources, and getting results that can be useful for you or others (the management of patents is a business). One key factor in achieving that is the organization of research around structuring topics, meaning that there is continuity between projects, and the results obtained by one team can be used by others. This is not only a matter of documentation and publishing papers, but also of concepts that are clearly understood by everybody, of models that can be easily adjusted, or, in industry, of models which can forecast the results, saving time and money.

I think that even in fundamental research it is important to supply a framework in which all the workers can do their jobs. However, there is the risk that this framework stifles innovation, and the risk is especially high when the framework is ill-designed. There are hundreds of interpretations of quantum physics, and a lot of time is wasted just trying to understand what it all means...

22
How do we create virtual electrical elements in electronics? Are they really "elements" or circuits?

To build electronic circuits, first of all we need the natural electrical elements resistors, capacitors and inductors. However, in many cases we are not satisfied with the performance of these passive components and try to improve them in an artificial way. For this purpose, in electronics we have been inventing a variety of clever and sophisticated techniques to create artificial (synthetic, virtual) elements. The question is, "How do we make virtual elements?"

Like magicians, in electronics we convert the imperfect passive elements into perfect active "elements" (by applying the virtual ground configuration)... or we transform some element (a capacitor) into its dual (an inductor) by swapping the voltage across and the current through it (gyrators)... or we transmute the passive circuits into their opposite mirror doubles (negative impedance)... or we even create completely new electrical elements (memristors)... Thus, for some reasons, we frequently replace the natural electrical elements by their circuit equivalents - a gyrator, multiplier, memristor, negative resistor (capacitor, inductor...)

It is important to note that all these virtual "elements" (electronic circuits) emulate only particular properties (usually, the time behavior) of the genuine elements... they are not real, they are just an illusion...

Genuine elements. The general property of passive electrical elements is taking (consuming) energy from the input source; resistors dissipate this energy while capacitors and inductors store (accumulate, "steal") it. But how do they do it?

Let's assume the considered passive element is connected in series to the exciting voltage source. What does it do in this case? It subtracts a portion of voltage from the whole input voltage: the resistor "creates" an opposing voltage drop across itself while the capacitor and inductor "create" an opposing voltage (a kind of emf). Resistors do this by throwing out (dissipating) energy while capacitors and inductors do it by taking energy from the excitation source, accumulating it into itself and setting it against the input source. In the first case there is a voltage drop while in the second case there is a voltage (emf).

So, we can emulate these passive elements by replacing them with some other elements producing the same opposing voltage (having the opposite polarity to the input voltage when travelling along the loop). Then, we can modify or even create mirror active (negative) "copies" of these passive elements by replacing them with sources producing the same but now "helping" voltage (having the same polarity as the input voltage when travelling along the loop). This is the main idea of the substitution and inverse substitution theorems, considered in detail by Prof. Lutz von Wangenheim in his work:

* Emulating by (varying) voltage. First, we may replace the original elements by varying voltage sources and this is the most natural way of making emulated capacitors and inductors (as they behave as varying through time voltage sources). Op-amp gyrator, multiplying, memcapacitive and meminductive circuits do it in this way. In these circuits, the op-amp output voltage represents the voltage across the according capacitor or inductor.

In the case of the true negative resistor, the ordinary ohmic resistor is replaced again with a voltage source (exactly as in the case of gyrators and multipliers) but it has the same polarity as the input voltage source so that it adds an additional  voltage to the input voltage. For example, the negative impedance converter with voltage inversion (VNIC) is a dynamic voltage source emulating a negative resistor by adding a voltage that is equal to the voltage drop across a real ohmic resistor.

It is interesting that we can change the properties of the ordinary constant voltage source by  properly varying its voltage (as in the attached picture below).
The emulation by including an additional voltage source is the basis of the Miller theorem (see https://en.wikipedia.org/wiki/Miller_theorem#Applications).

* Emulating by (varying) resistance. A memristor can do the same by replacing the voltage with an equivalent voltage drop across a dynamic, time-dependent resistor. Transistor gyrator and multiplier circuits do it in a similar way.

It is interesting that we can change the properties of the ordinary ohmic resistor by properly varying its resistance. A good example of this technique is the creation of the negative differential resistor:

So, to emulate passive elements (to create virtual elements), we may replace the elements behaving as resistors with (properly varying) resistors, and elements behaving as sources - with (properly varying) sources. But it seems we can do it by swapping these correspondences - replacing the elements behaving as sources with resistors, and elements behaving as resistors with sources... Am I right? Please, discuss.

------------------------

I was inspired to ask this question mainly by the numerous discussions between me and Prof. Lutz von Wangenheim mostly in the questions below...

https://www.researchgate.net/post/Are_electrical_sources_elements_with_static_negative_impedance_If_so_is_there_any_benefit_from_this_concept

https://www.researchgate.net/post/Does_the_op-amp_in_all_the_inverting_circuits_with_negative_feedback_behave_as_a_negative_impedance_element_negative_resistor_capacitor_etc

https://www.researchgate.net/post/Does_the_amplifier_in_negative_feedback_systems_possess_negative_impedance?

...and especially, by his idea about the inverse substitution theorem (IST) proposed by him in the question below:

https://www.researchgate.net/post/Can_we_formulate_Kirchhoffs_laws_for_resistances_KRL_and_conductances_KGL_based_on_KVL_and_KCL/2

Quote: "In basic theory, people talks about the quadripole. But a practitioner will never ask himself if a transistor is a quadripole or not.."

Perhaps he will not ask himself, but he should be aware that he actually is using a quadripole (at the latest, when applying feedback).

99+
What is the biggest scientific coincidence that you know?
For me the two most important are:

1. In the liquid-solid phase transition of water, the solid state is less dense.
2. The dielectric screening in metals is such that the Coulomb interaction among the electrons falls off at a distance of about the Bohr radius.

The first one has many important applications, such as allowing life in rivers during winter. On the other hand, there are also very interesting electric and thermodynamic phase transitions for this material.

As for the second: thanks to such a local electric interaction, it is possible to have almost free electrons at quite high electronic density in matter, and therefore to apply theories as useful as band theory in solids, above all in metals.

Dear Robin,

I have only read the abstract of the paper; could I have the rest of the paper? Thank you.

3
How can one make a battery set-up using activated carbon/iron oxide composite powder?

This composite has good capacitance behavior due to its high surface area. Hence, your suggestions on using this composite in battery applications would be valuable to me.

Dear Rashid & Dear Ceyhun,

Yes, Lithium ion batteries only.

46
Is it possible to neutralize all the positive resistances in a circuit by an equivalent negative resistance?

I have asked this question with two purposes - first, at the request of Barrie Gilbert to terminate the irrelevant discussions in the question below...

https://www.researchgate.net/post/whats_the_real_behavior_of_rc_circuits

... and second, to answer the question of Erik Lindberg asked at the end of this discussion.

In the discussed arrangement (an RC circuit with various leakages), there are three resistances in parallel: R, Rc and Rv, and the equivalent resistance is Re = R||Rc||Rv. My idea is to connect a variable (N-shaped) negative resistor in parallel with Re and begin adjusting its resistance RN. Depending on its value, it will "eat" some part of Re, which will disappear (become infinite) when:

1. RN = Rv (Rv is neutralized) or RN = Rc (Rc is neutralized)

2. RN = Rc||Rv (both Rc and Rv are neutralized)

3. RN = Rc||Rv||R (all the positive resistances are neutralized)

In case 2 (a load canceller), I thought we should obtain a perfect exponential shape... and this should solve the leakage problem. My next idea was that if we continue decreasing this "destroying" negative resistance beyond the point of exact leakage neutralization, it will begin "eating" a part of the positive resistance of the "useful" resistor R... and finally (case 3), it will destroy all of the resistance R. This means that the resistor R then effectively has an infinite resistance... and behaves as an ideal current source... Actually, this is the idea of the Howland current source and its special case here, the Deboo integrator. But while in the classic Deboo integrator the negative resistor (INIC) neutralizes only the positive resistance R, here it neutralizes all the resistances in parallel (the useful R and the harmful leakages).
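The neutralization cases above can be checked numerically; the resistor values below are arbitrary examples:

```python
# A (negative) resistance RN in parallel with the equivalent positive
# resistance Re gives 1/Rtotal = 1/Re + 1/RN, with RN < 0.
def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

R, Rc, Rv = 10e3, 100e3, 200e3   # example values in ohms (assumed)
Re = parallel(R, Rc, Rv)

# Case 2: RN cancels only the leakages Rc||Rv.
RN2 = -parallel(Rc, Rv)
print(parallel(Re, RN2))          # -> ~R (10 kOhm): the leakages are gone

# Case 3: RN cancels everything, so the total conductance is exactly zero
# and the total resistance diverges (an ideal current source);
# calling parallel(Re, -Re) would divide by zero.
RN3 = -Re
```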

So, my question now is, "What happens if we try to neutralize all the positive resistances by an equivalent negative resistance (case 3)?"

My doubt is that, as a result of this 100% neutralization, the circuit will become unstable, and if the negative resistance begins dominating over the equivalent positive resistance, the effective resistance (the result of the neutralization) will become fully negative. At that point, I suppose, the voltage across the capacitor will begin self-increasing in an avalanche-like manner... On the other hand, the reactance of the capacitor C still remains... and it is a kind of positive "resistance" (impedance)... so it turns out the circuit should remain stable...

The same problem exists in the Wien bridge oscillator... and it is solved there by applying a non-linear negative feedback in the INIC... Maybe it is possible to keep the circuit stable in a similar way?

We are talking about a current mirror with increased output resistance, where the regulated cascode is much better than a simple cascode and the output swing is the same.

2
What are the anomalies of water?
Water shows unusual behavior in its density, diffusion, specific heat and compressibility. Why?

Take a look at these books,

Ben Mustapha, Zied, et al. "Automatic classification of water-leaving radiance anomalies from global SeaWiFS imagery: Application to the detection of phytoplankton groups in open ocean waters." Remote Sensing of Environment 146 (2014): 97-112.

Sun, Chang Q. "Structure order, local potentials, and physical anomalies of water ice." arXiv preprint arXiv:1402.3880 (2014).

10
What is the physical significance of bessel's function in acoustics ?

why is it used in acoustical formulation ?

Physical meaning? The Bessel function is the basis function (or eigenfunction) used to represent the solution in the radial direction for your physical problem.
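As a concrete illustration (assuming a clamped circular membrane of radius a, an example value), the radial part of each mode is J_m(k_mn r), with the wavenumbers k_mn fixed by the zeros of J_m at the boundary:

```python
import numpy as np
from scipy.special import jn, jn_zeros

# Radial modes of a clamped circular membrane: J_m(k r) with J_m(k a) = 0.
a = 0.1                  # membrane radius in metres (example value)
m = 0                    # azimuthal order
zeros = jn_zeros(m, 3)   # first three zeros of J_0
k = zeros / a            # radial wavenumbers satisfying the boundary condition

r = np.linspace(0.0, a, 5)
mode1 = jn(m, k[0] * r)  # fundamental radial mode shape
print(mode1)             # J_0 equals 1 at r = 0 and 0 at the boundary r = a
```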

11
What is the shape of the field induced by a temporal current variation in a coil?

My question is prompted by Problem P. 4263 (May 2010) of the webpage www.komal.hu, which contains physics problems from Hungary (in English).

OK; if that is the case, I consider the question answered. Thanks to you all.

7
Which 3D dissipative chaotic flow has the highest Kaplan–Yorke dimension?

Most of the continuous chaotic systems (chaotic flows) like Lorenz, Rössler, Sprott systems (cases B-S) have a Kaplan–Yorke dimension slightly greater than 2:
1. Is there any 3D dissipative chaotic flow with a Kaplan-Yorke dimension near 3?
2. Which 3D dissipative chaotic flow has the highest Kaplan-Yorke dimension?
Regards,

In a dissipative chaotic flow of a 3D system, the highest value of the KY dimension depends on the system. Generally, the KY dimension varies continuously between 2 and 3 in a 3D ODE system.
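For reference, the Kaplan-Yorke dimension is computed from the ordered Lyapunov spectrum; a minimal sketch (the Lorenz exponents below are commonly quoted approximate values):

```python
import numpy as np

def kaplan_yorke(lyap):
    """Kaplan-Yorke dimension D = j + (lam_1 + ... + lam_j) / |lam_{j+1}|,
    where j is the largest index with a non-negative partial sum of the
    exponents sorted in decreasing order."""
    lyap = np.sort(np.asarray(lyap, float))[::-1]
    s = np.cumsum(lyap)
    nonneg = np.where(s >= 0)[0]
    if len(nonneg) == 0:
        return 0.0                 # no expanding directions
    j = nonneg[-1] + 1             # number of exponents in the sum
    if j == len(lyap):
        return float(j)            # not dissipative
    return j + s[j - 1] / abs(lyap[j])

# Lorenz system (classic parameters) has approximately (0.906, 0, -14.572):
print(kaplan_yorke([0.906, 0.0, -14.572]))  # ~ 2.062, slightly above 2
```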

99+
Why do you think that physics, of all the sciences, seems to show the greatest proclivity for attracting crackpots?
Before answering this post, make sure you are not a quack, or you shall be deleted. Please check:
John Baez crackpot guide http://math.ucr.edu/home/baez/crackpot.html
Siegel's Are you a quack? http://insti.physics.sunysb.edu/~siegel/quack.html

Oh bologna!

18
What are the measures used in different countries to stimulate publication activity?
The main problem of Post-Soviet science is its weak "visibility," which leads to weak global competitiveness. The publication activity of the Post-Soviet countries is growing very slowly. In these countries, publication by scientists in journals indexed in the Web of Science and SCOPUS databases is by no means stimulated.
On the SCIMAGO platform, by means of the operator «Compare», I generated graphics on dynamics of publications by Russian and Ukrainian scientists in comparison with the total publication activity in Iran and Turkey (graph).
It is well known that Iran and Turkey implemented stimulating measures aimed at supporting the publication activities of their scientists many years ago. About ten years ago in Turkey a reward of $100 to $300 US was offered per SCI publication, depending on the impact factor of the journal. In Iran, a reward ranging from 300 to 500 Euros per such publication is currently offered by the State University. Besides, they have government grants in support of such publication activity (up to 20,000 Euros for approximately ten publications). This explains why, in 2012, Iran surpassed Russia in total publication activity (graph).
I'm interested in examples of stimulating measures granted by different countries in the form of publication micro-grants. Generalizing these measures would make it possible to adapt them to the conditions of Post-Soviet countries, where in many fields of knowledge there is no established practice of publishing research results in internationally recognized journals.

Institutional-level cash bonus schemes for publishing in approved journals appear to be far more common than is often realised.

The practice is not confined to the Third World, but is documented for individual institutions in European countries such as Austria, Denmark, France, Italy, the Netherlands, and Norway. It is also found elsewhere, in countries as diverse as the United States, Australia, South Korea, Egypt, Israel, and the Philippines. I have been attempting to assemble information about such institutional policies for a year or two, but it is extremely difficult to get a coherent picture.

I suspect that it will be almost impossible to demonstrate the effectiveness of institutional schemes, as they rarely occur in a vacuum. The same factor applies to national-level schemes. They seem effective enough (within narrow parameters), but correlation does not prove causation. Countries often bring in such reward programs as part of a wider policy package. When such schemes are cut, the context may be a general reduction in funding for higher education.

7
Is there any condition for the wave packets of different particles to interact?

Do the wave packets of different particles interact at all? If yes, what are the conditions for such interaction, and what is the mathematical treatment of them?

It is evident that the question you have raised is far from being trivial.

As I stressed, the question is not well posed, as you can see from the answers you are receiving. From the QM point of view, as soon as you define two particles and label them as bosons or fermions, you define an interaction, implicit in the symmetrization or antisymmetrization of the initial wave function describing the evolution of the quantum system. Again, the problem is that when you provide the time evolution of such a system, you define a single system not interacting with itself.

The best

Pino
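The exchange-(anti)symmetrization effect mentioned above can be made explicit with a minimal Python sketch: two overlapping single-particle Gaussian packets (centres and widths are illustrative choices), combined into bosonic and fermionic two-particle amplitudes. Even with no potential term at all, the fermionic amplitude is forced to vanish whenever the two coordinates coincide.

```python
import numpy as np

def gaussian(x, x0, width):
    """Normalized 1D Gaussian packet centred at x0 (illustrative helper)."""
    g = np.exp(-(x - x0) ** 2 / (2 * width ** 2))
    dx = x[1] - x[0]
    return g / np.sqrt(np.sum(g ** 2) * dx)

x = np.linspace(-10, 10, 2001)
phi_a = gaussian(x, -1.0, 1.0)   # packet centred at x = -1
phi_b = gaussian(x, +1.0, 1.0)   # packet centred at x = +1

# Two-particle amplitudes psi(x1, x2) as outer products phi(x1) * phi(x2):
psi_sym  = np.outer(phi_a, phi_b) + np.outer(phi_b, phi_a)  # bosons
psi_anti = np.outer(phi_a, phi_b) - np.outer(phi_b, phi_a)  # fermions

# Exchange "interaction" with no potential: the antisymmetric amplitude
# vanishes identically on the diagonal x1 == x2 (Pauli exclusion) ...
print(np.max(np.abs(np.diag(psi_anti))))   # 0.0
# ... while the symmetric amplitude is enhanced there.
print(np.max(np.abs(np.diag(psi_sym))) > 0.1)  # True
```

This is purely a statistics effect of the (anti)symmetrized initial wave function, which is exactly why it reads as an "implicit interaction" between the two packets.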