Questions related to Physics
I am going to build a setup for generating and manipulating time-bin qubits. So, I want to know: what is the easiest or most common experimental setup for generating time-bin qubits?
Please share your comments and references with me.
How long does it take for a journal indexed in the "Emerging Sources Citation Index" to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?
I am trying to plot and analyze the difference (or similarity) between the paths of two spherical pendulums over time. I have Cartesian (X/Y/Z) coordinates from an accelerometer/gyroscope attached to a weight on a string.
If I want to compare the paths of two pendulums, such as a spherical pendulum with a 5-pound weight and another with a 15-pound weight, how can I analyze this? I hope to determine how closely the paths match over time.
Thanks in advance.
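Since the question is about quantifying path similarity: below is a minimal sketch of two common measures, pointwise RMSD (for time-aligned recordings) and dynamic time warping (tolerant of phase drift between the two pendulums). The trajectories here are synthetic stand-ins, not real sensor data:

```python
import math

def rmsd(path_a, path_b):
    """Pointwise root-mean-square deviation for equal-length, time-aligned paths."""
    assert len(path_a) == len(path_b)
    total = sum((ax - bx)**2 + (ay - by)**2 + (az - bz)**2
                for (ax, ay, az), (bx, by, bz) in zip(path_a, path_b))
    return math.sqrt(total / len(path_a))

def dtw(path_a, path_b):
    """Dynamic time warping distance: compares path shapes even when one
    pendulum lags the other in phase."""
    n, m = len(path_a), len(path_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(path_a[i - 1], path_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Hypothetical sample data: two circular swings, the second slightly phase-shifted.
t = [k * 0.05 for k in range(200)]
path1 = [(math.cos(u), math.sin(u), 0.1 * math.sin(2 * u)) for u in t]
path2 = [(math.cos(u + 0.1), math.sin(u + 0.1), 0.1 * math.sin(2 * u + 0.2)) for u in t]

print(rmsd(path1, path2))  # sensitive to the phase lag
print(dtw(path1, path2))   # smaller penalty: the shapes match up to a time shift
```

RMSD answers "are they in the same place at the same time," while DTW answers "do they trace the same shape"; with different weights the periods will differ, so DTW (or comparing frequency spectra) is usually the more meaningful choice.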
Dear fellow mathematicians,
Using a computational engine such as Wolfram Alpha, I am able to obtain a numerical expression. However, I need a symbolic expression. How can I do that?
I need the expression of the coefficients of this series.
where csc is the cosecant (1/sin) and csch is the hyperbolic cosecant.
Thank you for your help.
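Since the specific series is not reproduced here, the following is only a sketch of how SymPy extracts symbolic (rather than numerical) series coefficients, using csc(x)*csch(x) as a stand-in function; replace `f` with the actual expression:

```python
import sympy as sp

x = sp.symbols('x')

# Stand-in function (the actual series from the question is not shown):
f = sp.csc(x) * sp.csch(x)   # 1/sin(x) * 1/sinh(x)

# Laurent expansion around x = 0; csc*csch has a double pole there.
series = sp.series(f, x, 0, 8)
print(series)

# Exact symbolic coefficient of a chosen power, e.g. x**2:
coeff = sp.simplify(series.removeO().coeff(x, 2))
print(coeff)   # a rational number, not a float
```

`series.removeO().coeff(x, n)` returns the exact rational/symbolic coefficient, which is what Wolfram Alpha's numerical output hides.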
Using BoltzTraP and Quantum ESPRESSO I was able to calculate the electronic part of the thermal conductivity, but I am still struggling with the phononic part.
I tried ShengBTE, but it demands a good computational facility, and right now I do not have such a workstation. Kindly suggest some other tool that could be useful for me in this regard.
Dr Abhinav Nag
I'm repeatedly getting negative open-circuit potentials (OCP) vs. an Ag/AgCl reference electrode for some electrodes during OCP-vs.-time measurements on an electrochemical workstation. What is the interpretation of a negative open-circuit potential? Moreover, I have also noticed that it becomes more negative under illumination. What is the reason behind this? Are there any references? Please help.
Below I offer some very speculative ideas and point to recent theoretical articles on the question. Maybe they will promote some discussion.
1.) One can suppose that every part of our reality should be explained by some physical laws. In particular, general relativity showed that even space and time are curved and governed by physical laws. But the physical laws themselves are also part of reality. Of course, one can say that every physical theory can only approximately describe reality. But let me suppose that there are physical laws in nature which describe the universe with zero error. Then the question arises: are the physical laws (as information) some special kind of matter, described by some more general laws? Might a physical law, as information, transform into energy and mass?
2.) Besides the logical approach above, one can arrive at the same question in another way. Let us consider the transition from the macroscopic world to the atomic scale. It is well known that in quantum mechanics some physical information, or some physical laws, disappear. For example, a free particle has a momentum but no definite position. The magnetic moment of a nucleus has a projection onto the external magnetic field direction, but its transverse projection does not exist. So we cannot say that the nuclear magnetic moment moves around the external magnetic field like a compass arrow in the Earth's magnetic field. A similar consideration can be made for the spin of an elementary particle.
One can hypothesize that if information is equivalent to some very small mass or energy (e.g., as suggested in the next item), then it may be that some information or physical laws are lost, e.g., for an electron with its extremely low mass. This conjecture agrees with the fact that objects with mass much greater than the proton's are described by classical Newtonian physics.
But one can raise an objection to the above view: a photon has no rest mass, and the rest mass of the neutrino, for example, is extremely small. Despite this, they have spin and momentum, just as an electron does; this spin and momentum information is not lost. Moreover, the photon energy for long electromagnetic waves is extremely low, much less than 1 eV, while the electron rest energy is about 0.5 MeV. These facts contradict the conjecture that information transforms into energy or mass.
But there is possibly a solution to this problem. A photon moves at the speed of light (and the neutrino's speed is very close to it), so the physical information cannot detach and escape from the photon, since information propagates at most at the speed of light.
3.) Searching the internet, I found recent articles by Melvin M. Vopson
which propose a mass-energy-information equivalence principle and its experimental verification. As far as I know, this experimental verification has not yet been carried out.
I would be grateful to hear your view on this subject.
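For scale, Vopson's conjecture assigns each stored bit a mass m = k_B·T·ln(2)/c² at temperature T. A quick back-of-the-envelope check (the choice of room temperature is my assumption):

```python
# Order-of-magnitude check of Vopson's mass-energy-information conjecture:
# each stored bit at temperature T would carry mass m = k_B * T * ln(2) / c^2.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
c   = 299792458.0    # speed of light, m/s (exact)
T   = 300.0          # assumed room temperature, K

m_bit = k_B * T * math.log(2) / c**2
print(f"mass per bit at 300 K: {m_bit:.2e} kg")   # ~3.2e-38 kg
```

This tiny value is why the proposed experimental tests are so demanding: it is some nine orders of magnitude below the electron mass.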
How can we calculate the number of dimensions of a discrete space if we only have a complete scheme of all its points and the possible transitions between them (i.e., data about the adjacency of points)? Such a scheme can be very tangled and far from the familiar two- or three-dimensional spaces we know. We can observe it, but it is stochastic, and there are no regularities, fractals, or the like in its organization. We only have access to an array of points and the transitions between them.
Such computations can be resource-intensive, so I am especially looking for algorithms that can quickly approximate the dimensionality of the space based on the available data about the points of the space and their adjacencies.
I would be glad if you could help me navigate in dimensions of spaces in my computer model :-)
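One cheap approximation is the ball-growth dimension of the graph metric: count the nodes N(r) within r hops of a point and fit the slope of log N(r) against log r. A sketch, validated on a cubic lattice where the answer should approach 3 (finite-size effects bias the estimate somewhat low):

```python
from collections import deque
import math

def bfs_distances(adj, start):
    """Hop distance from start to every reachable node (the graph metric)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def estimate_dimension(adj, start, r_min=4, r_max=10):
    """Fit log N(r) ~ d * log r, where N(r) = number of nodes within r hops."""
    dist = bfs_distances(adj, start)
    xs, ys = [], []
    for r in range(r_min, r_max + 1):
        n_r = sum(1 for d in dist.values() if d <= r)
        xs.append(math.log(r))
        ys.append(math.log(n_r))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check on a 21x21x21 cubic lattice (adjacency dict, as in the question).
L = 21
adj = {}
for x in range(L):
    for y in range(L):
        for z in range(L):
            nbrs = []
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                nx, ny, nz = x + dx, y + dy, z + dz
                if 0 <= nx < L and 0 <= ny < L and 0 <= nz < L:
                    nbrs.append((nx, ny, nz))
            adj[(x, y, z)] = nbrs

d = estimate_dimension(adj, (L // 2, L // 2, L // 2))
print(f"estimated dimension: {d:.2f}")  # converges toward 3 slowly, from below
```

For a stochastic graph, averaging the estimate over several random start nodes smooths out local irregularities; the cost is one BFS per sample, which is linear in the number of edges.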
Have these particles been observed in the predicted places?
For example, have scientists ever noticed the creation of energy and particle pairs from nothing in the Large Electron–Positron Collider, the Large Hadron Collider at CERN, the Tevatron at Fermilab, or any other particle accelerator built since the late 1930s? The answer is no. In fact, no report of such particles being observed by the highly sensitive sensors used in all accelerators has ever been published.
Moreover, according to one interpretation of the uncertainty principle, abundant charged and uncharged virtual particles should continuously whiz around inside the storage rings of all particle accelerators. Scientists and engineers take care to maintain ultra-high vacuum, at close to absolute zero temperature, along the travelling path of the accelerating particles, because otherwise even residual gas molecules would deflect, attach to, or ionize any particle they encounter. Yet there has never been any concern about, or any report of, undesirable collisions with so-called virtual particles in any accelerator.
It would have been absolutely useless to create ultra-high vacuum, at a pressure of about 10^-14 bar, throughout the travel path of the particles if the vacuum chambers were seething with particle/antiparticle or matter/antimatter pairs. If there were such a phenomenon, there would have been significant background effects resulting from the collision and scattering of the beam of accelerating particles off the supposed bubbling of virtual particles created in the vacuum. This process would be readily available for examination, in contrast to the totally out-of-reach Hawking radiation, which is considered a real phenomenon that will be eating away the supposed black holes of the universe in the very distant future.
For related issues and arguments, see
Consider the two propositions of the Kalam cosmological argument:
1. Everything that begins to exist has a cause.
2. The universe began to exist.
Both are based on assuming full knowledge of whatever exists in the world, which is obviously not entirely true. Even Big Bang cosmology relies on a primordial seed whose origin and characteristics science knows nothing about.
The attached article proposes that such deductive arguments should not be allowed in philosophy and science, as they are the tell-tale sign that humans wrongly presuppose omniscience.
Your comments are much appreciated.
After quite a long project, I have coded a Python-based 3D relativistic GPU particle-in-cell (PIC) solver, which is not too bad at doing some things (calculating 10,000 time steps with up to 1 million cells, after which I run out of process memory, in just a few hours).
Since I really want to make it publicly available on GitHub, I have also thought about writing a paper on it. Do you think this work is worthy of being published? And if so, what journal should I aim for?
When studying statistical mechanics for the first time (about 5 decades ago) I learned an interesting postulate of equilibrium statistical mechanics which is: "The probability of a system being in a given state is the same for all states having the same energy." But I ask: "Why energy instead of some other quantity". When I was learning this topic I was under the impression that the postulates of equilibrium statistical mechanics should be derivable from more fundamental laws of physics (that I supposedly had already learned before studying this topic) but the problem is that nobody has figured out how to do that derivation yet. If somebody figures out how to derive the postulates from more fundamental laws, we will have an answer to the question "Why energy instead of some other quantity." Until somebody figures out how to do that, we have to accept the postulate as a postulate instead of a derived conclusion. The question that I am asking 5 decades later is, has somebody figured it out yet? I'm not an expert on statistical mechanics so I hope that answers can be simple enough to be understood by people that are not experts.
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.
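For what it's worth, a sketch of the coherence-length arithmetic, assuming a Lorentzian lineshape so that L_c = c/(π·Δν); the linewidths below are illustrative assumptions, not measured values for this diode. A 9 km figure corresponds to a ~10 kHz linewidth, far narrower than the MHz-class linewidth typical of free-running DFB diodes:

```python
import math

c = 299792458.0  # speed of light, m/s

def coherence_length(delta_nu_hz):
    """Coherence length for a Lorentzian lineshape: L_c = c / (pi * delta_nu).
    (The numerical prefactor differs for other lineshapes.)"""
    return c / (math.pi * delta_nu_hz)

# A ~9 km coherence length implies a linewidth around 10 kHz:
print(f"{coherence_length(10e3) / 1e3:.1f} km")   # ~9.5 km
# An assumed, more typical DFB linewidth of 2 MHz gives only tens of metres:
print(f"{coherence_length(2e6):.0f} m")           # ~48 m
```

Either way, interference requires the optical path difference between the two arms to stay well below whichever L_c actually applies, so the arm-length mismatch is worth measuring.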
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and viewed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need sufficient Gain and Exposure (as well as Brightness and Contrast). This slows the frame rate of the video imaging to several seconds per frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
During AFM imaging, the tip does the raster scanning in xy-axes and deflects in z-axis due to the topographical changes on the surface being imaged. The height adjustments made by the piezo at every point on the surface during the scanning is recorded to reconstruct a 3D topographical image. How does the laser beam remain on the tip while the tip moves all over the surface? Isn't the optics static inside the scanner that is responsible for directing the laser beam onto the cantilever or does it move in sync with the tip? How is it that only the z-signal is affected due to the topography but the xy-signal of the QPD not affected by the movement of the tip?
or in other words, why is the QPD signal affected only due to the bending and twisting of the cantilever and not due to its translation?
I'm searching for a good collaborator or research group that might want to tackle an interesting problem involving the relationship between quantum-dot nanoparticle clusters and the DNA/proteins that corral them. This relationship is characterized by geometric proximity; that is, I'm looking for someone who might know how quantum mechanics affects such nanoparticles, for example how close a nanoparticle sits to another nanoparticle or to a protein, and whether clusters of a given size form. Ping me if you're in the biosciences, computational biology, chemistry, biology, or the physical sciences and think you might be able to shed some light on the above.
I do recognise that there’s a well-known problem (hard though it is) of establishing how consciousness emerges or can be accounted for in physical processes. But I can’t at all agree that there’s a naturalistic, absolute hard problem of consciousness, because it’s an incoherent concept.
Nobody (at least nobody with a clue) supposes that neurophysiology can explain a qualitative difference in the way you and I experience the content of my music mix playing quietly in the background, or see the light reflect off a rainbow, or any of the other ways in which our qualitative experience differs from that of other living organisms. To suppose that, just because you don't know the mechanisms of the experience in your own head, you may deny its existence in somebody else's is bizarre and reductionist.
Construct an imaginary metaphor of a magical, wizardly, thing-maker consciousness and you haven't explained the qualitative data there either. The question of how consciousness comes into the world remains, whether any magical things happen or whether there's anybody there at all. To suppose a separate, inexplicable, mysterious, magic ingredient does no explanatory good, does not solve the hard problem, and does not explain the evidence. All such arguments for a separate consciousness-substance, be it magic nonsense or a magic substance, merely reduce the hard problem of explaining the thisness of consciousness (to pick a crazy approach) to the very same hard problem of explaining how consciousness arises in the first place.
If you identify the hard problem entirely with the mechanism through which the feeling-of-redness arises, or "the feeling of the future in an invariant past", or anything else you allude to, then you have plainly just traded one way of asking a very simple question for the wrong approach. The question is: how do the millions of biological chunks and sub-systems interact with one another and integrate information over time and space? The senses of sight, sound, touch and smell all raise a "hard problem" of projection and of categories beyond the reliable enumeration of inputs, because, by a vast over-engineering of the metaphorical arms race (as even you must agree), the response-device signals of a single kind of examination will allow people in the know to interpret an external reality quite differently. But the "hard problem" isn't WHY we can produce those signals at all, or make sense of the signals that come out the other end. That's just the default condition of our very real neurological apparatus. The "humanness" of that experience is likewise an entirely benignly apparent phenomenon, just as water's polar nature is an entirely benignly apparent property.
For me the cardinal point is to reckon with how we perceive our own subjective experience via multi-sensory data input, both direct and indirect, in our waking experience. And, at the very least, you have to be wrong, or qualify the claim immensely, if you think it is not merely the interaction between the general anatomy, organisation, information processing and output of your brain and all subjective processes that makes personal conclusions appear, as if magically, as relevant claims about reality.
P.S. I don't think evolution throws up any magical consciousness either, whether in its petri-dish experiments or in the novel forms of subjectivity it sometimes comes up with. So I'd like to challenge that viewpoint, particularly in terms of our understanding of the nuances.
The above question emerges from a parallel session, on the basis of two examples:
1. Experimental data that apparently indicate the validity of Mach's Principle stay out of the discussion, since the mainstream consensus holds that Mach is out; see also the appended PDF files.
2. The negative outcome of gravitational wave experiments apparently does not affect the mainstream acceptance of the claimed discoveries.
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the temperature reading of the food was quite good (i.e., close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's temperature reading suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
Suspecting that the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens, so that the camera could only see the food through the hole and no longer the hot pan. The food temperature obtained by the camera was then correct again, but it becomes wrong again if I take the paper away.
I would appreciate any explanation of this phenomenon, and any solution, from either physics or optics.
Eighty years after Chadwick discovered the neutron, physicists today still cannot agree on how long the neutron lives. Measurements of the neutron lifetime have achieved the 0.1% level of precision (~1 s). However, results from several recent experiments are up to 7 s lower than the (pre-2010) Particle Data Group (PDG) value. Experiments using the trap technique yield lifetime results lower than those using the beam technique. The PDG urges the community to resolve this discrepancy, now at 6.5 sigma.
I think the reason is that the "trapped p" method did not count the number of protons in the decay reaction (n → p + e + ν̄e + γ). As a result, the number of decayed neutrons obtained was low, and this affected the measurement of the neutron lifetime. Do you agree with me?
Hello everyone,
I did a nanoindentation experiment:
one photoresist with 3 different layer thicknesses.
My results show that the photoresist is harder when the layer is thicker.
I can't find the reason in the literature.
Can anyone please explain to me why this is the case?
Is there any literature on this?
Hope you are doing well!
What are the best books in Materials Science and Engineering (basic and advanced)? Moreover, what are the best skills (and related materials topics) that materials scientists should develop and acquire?
Thanks in advance
I use a Fujikura CT-30 cleaver for PCF cleaving, for use in supercontinuum generation. Initially it seemed to work fine, as I could get high coupling efficiency (70-80%) into the 3.2 µm core of the PCF. However, after some time (several hours) I notice that the coupling efficiency decreases drastically, and when I inspect the PCF end face with an IR scope, I can see a bright shine on the end facet, which may be an indication that the end face is damaged. I also want to mention that the setup is well protected from dust, so there is no chance of dust contaminating the fiber facet.
Please suggest what should be done to get an optimal cleave: shall I use a different cleaver (please suggest one), or are there other things to consider?
If so, experimental results and related theory might also be helpful ...
Following Plato's theory of forms, the most objective reality is represented by idealized, non-physical forms or ideas. The relationship between physical objects and these forms is what gives objects their "essence." He uses the well-known cave analogy in his Republic to illustrate this relationship. These forms are frequently described as models or templates, of which the physical world consists of imperfect projections or copies.
And modern science uses the quantum wave function that can only be explained mathematically and for which physicality is explicitly absent!
Our physical reality is an illusion of the essence's projection!?!
Forgive some of my ignorance of the math for thermodynamics and heat exchange; my background is heavier in chemistry, and I could use some help.
The project is to keep about 70 L of water in an aquarium at 17 °C when the ambient temperature in the room is 22 °C. The original build had the following setup:
(Top to Bottom):
1. 80x80x38mm fan running at 5700 RPMs and 76CFM
2. 80x80x20mm copper fin heatsink (0.5mm fin thickness and 40 fins with a 3.5mm bottom thickness)
3. 2x TEC1-12706, hot side towards the heatsink, cold side down towards the water block (Imax: 6.4 A, Umax: 15.4 V, Qmax (dT=0): 63 W, dTmax: 68 °C)
4. 40x80x12mm water block centered under the heatsink (surrounded on the sides with 20mm styrofoam and 10mm styrofoam at the back)
5. ~26mm thick styrofoam
6. Wood base
• All power is supplied by an AC/DC converter (12V 20A 240W)
• Power to the system is managed by a W1209 Temperature Control Module (Relay)
• Water flow is achieved by a 4L/min water pump (slowest I can find)
This setup is only cooling the water to 18 °C at night, and the temperature slowly creeps up to 18.7 °C across the day, so I know this setup is not keeping up with the heat load. (It is also worth noting that the output temperature is about 1.5-2 °C cooler than the input temperature to the water block.) My hypothesis is that the water does not spend enough time in the water block for good thermal exchange, or that the cooler is not creating enough of a dT in the water block to absorb the needed heat in that cycle time. The fact that the aluminum water block has a 5x lower specific heat than water is what makes me think that either more contact time or a greater dT is needed.
My thought was to swap out the water block for a 40x200x12 mm water block and increase the number of Peltier coolers from 2 to 5, going with the TEC1-12715 (Imax: 15.6 A, Umax: 15.4 V, Qmax (dT=0): 150 W, dTmax: 68 °C).
This is where I am lost in the weeds and need help; I am lacking the intellectual horsepower for this. Will using the 5 in parallel do the trick without maxing out the converter? Or will using 5 in series still produce the needed cooling effect, given the lower dT associated with the lower amperage? Or is there another setup someone can recommend? I am open to feedback and direction; thank you in advance.
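As a sanity check on the numbers quoted above (a sketch only, using the stated 4 L/min flow and the observed 1.5-2 °C drop across the water block):

```python
# Heat actually being removed from the water stream: Q = m_dot * c * dT.
c_water   = 4186.0       # J/(kg*K), specific heat of water
flow_kg_s = 4.0 / 60.0   # 4 L/min of water is about 4 kg/min
dT_block  = 1.75         # K, mid-point of the observed 1.5-2 C drop

Q_removed = flow_kg_s * c_water * dT_block
print(f"implied heat removal: {Q_removed:.0f} W")
```

Taken at face value this implies roughly 490 W of heat extraction, several times the combined Qmax of two TEC1-12706 modules (and far above their real pumping capacity at a nonzero dT). That suggests either the flow through the block is much lower than 4 L/min or the probes are not reading the true inlet/outlet temperatures, and the discrepancy is worth resolving before sizing more modules.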
In a hypothetical situation I have two wires: one's cross-section is a cylinder, the other's a star. Both have the same cross-sectional area and the same length. What are the differences in their electrical properties?
Have any experiments been done looking into this?
Also, what would happen if a wire had a conical shape along its length?
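For the DC case the answer follows from R = ρL/A: the cross-section shape drops out entirely. A minimal sketch (copper resistivity assumed), including the standard result for a truncated cone, R = ρL/(π·r₁·r₂):

```python
import math

rho = 1.68e-8  # ohm*m, resistivity of copper at ~20 C (assumed material)

def resistance_uniform(length, area):
    """DC resistance of any prism-shaped wire: R = rho * L / A.
    The cross-section shape (circle, star, ...) does not enter at DC."""
    return rho * length / area

def resistance_cone(length, r1, r2):
    """DC resistance of a truncated cone, from integrating rho/(pi*r(x)^2)
    along the axis: R = rho * L / (pi * r1 * r2)."""
    return rho * length / (math.pi * r1 * r2)

A = math.pi * 0.001**2  # area of a 1 mm-radius circular wire (star of equal area: same R)
print(resistance_uniform(1.0, A))            # identical for both shapes at DC
print(resistance_cone(1.0, 0.001, 0.001))    # equal end radii: reduces to the cylinder
```

At AC the shapes do differ: the skin effect pushes current toward the surface, so a star cross-section (larger perimeter for the same area) has lower high-frequency resistance, which is essentially the same reason litz wire is stranded.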
Having worked on the spacetime wave theory for some time, and having recently published a preprint paper on the Space Rest Frame, I realised its full implications, which are quite shocking in a way.
The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:
The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:
Preprint Space Rest Frame (Dec 2021)
This then implies that the proton which is a looped wave in spacetime of three wavelengths is actually a looped wave taking place in the space rest frame and we are moving at somewhere between 150 km/sec and 350 km/sec relative to that frame of reference.
This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.
I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/sec. Of course this doesn't happen, because there is zero resistance to uniform motion through space and momentum is conserved.
It still seems quite a shocking realisation.
For those that have the seventh printing of Goldstein's "Classical Mechanics" so I don't have to write any equations here. The Lagrangian for electromagnetic fields (expressed in terms of scalar and vector potentials) for a given charge density and current density that creates the fields is the spatial volume integral of the Lagrangian density listed in Goldstein's book as Eq. (11-65) (page 366 in my edition of the book). Goldstein then considers the case (page 369 in my edition of the book) in which the charges and currents are carried by point charges. The charge density (for example) is taken to be a Dirac delta function of the spatial coordinates. This is utilized in the evaluation of one of the integrals used to construct the Lagrangian. This integral is the spatial volume integral of charge density multiplied by the scalar potential. What is giving me trouble is as follows.
In the discussion below, a "particle" refers to an object that is small in some sense but has a greater-than-zero size. It becomes a point as a limiting case as the size shrinks to zero. In order for the charge density of a particle, regardless of how small the particle is, to be represented by a delta function in the volume integral of charge density multiplied by potential, it is necessary for the potential to be nearly constant over distances equal to the particle size. This is true (when the particle is sufficiently small) for external potentials evaluated at the location of the particle of interest, where the external potential as seen by the particle of interest is defined to be the potential created by all particles except the particle of interest. However, total potential, which includes the potential created by the particle of interest, is not slowly varying over the dimensions of the particle of interest regardless of how small the particle is. The charge density cannot be represented by a delta function in the integral of charge density times potential, when the potential is total potential, regardless of how small the particle is. If we imagine the particles to be charged marbles (greater than zero size and having finite charge densities) the potential that should be multiplying the charge density in the integral is total potential. As the marble size shrinks to zero the potential is still total potential and the marble charge density cannot be represented by a delta function. Yet textbooks do use this representation, as if the potential is external potential instead of total potential. How do we justify replacing total potential with external potential in this integral?
I won't be surprised if the answers get into the issues of self forces (the forces producing the recoil of a particle from its own emitted electromagnetic radiation). I am happy with using the simple textbook approach and ignoring self forces if some justification can be given for replacing total potential with external potential. But without that justification being given, I don't see how the textbooks reach the conclusions they reach with or without self forces being ignored.
The 2023 ranking is available through the following link:
QS ranking is relatively familiar in scientific circles. It ranks universities based on the following criteria:
1- Academic Reputation
2- Employer Reputation
3- Citations per Faculty
4- Faculty Student Ratio
5- International Students Ratio
6- International Faculty Ratio
7- International Research Network
8- Employment Outcomes
- Are these parameters enough to measure the superiority of a university?
- What other factors should also be taken into account?
Please share your personal experience with these criteria.
I would like to know how to measure a solid's surface temperature with fluid on it. The fluid reacts with the solid surface and generates heat, so the temperature at the solid-fluid interface is the crucial data I need. Here, I can think of only two options:
1. Thermocouple: use a FLAT surface thermocouple attached to the surface of the solid to measure the data. For example, I could use Thin Leaf-Type Thermocouples for Layered Surfaces (omega.com) or Cement-On Polyimide Fast Response Surface Thermocouples (omega.com)
Pros: fast response, high accuracy
Cons: cannot guarantee that the measured data accurately represents the surface temperature
2. Infrared temperature sensor:
Pros: directly measure the surface temperature, high accuracy
Cons: slow response, the data might be affected by the fluid
Is there any other way to do the measurement or any suggestions?
Thank you very much in advance to anyone who answers this question.
Lee's disc apparatus is designed to find the thermal conductivity of bad conductors. But I have a doubt, since soil has the following properties:
1. It consists of irregularly shaped aggregates
2. Non-uniform distribution of particles
3. Presence of voids
Can we use Lee's disc method to find the thermal conductivity of soil?
I am interested to know the opinion of experts in this field.
LIGO and the cooperating institutions apparently determine the distance r of their hypothetical gravitational-wave sources on the basis of a 1/r dependence of the related spatial strain; see page 9 of the reference below. A 1/r fall-off does in fact apply in the case of the gravitational potential Vg = -GM/r of a single source. But shouldn't any additional effect of a binary system with internal separation s, just for geometrical reasons, be further reduced by a factor of s/r?
How should we represent our observations of a physical process, and how should we investigate it further by conducting experiments or building numerical models? What basics does one need to focus on? Technically, how should one think? The first thing is understanding: you should be there! If we are modeling a flow, we have to be the flow; if representing, say, a ball, you have to be the ball, to understand it better! What other principles are there?
Complex systems are becoming one of the most useful tools in the description of observed natural phenomena across all scientific disciplines. You are welcome to share with us hot topics from your own area of research.
Nowadays, no one can encompass all scientific disciplines. Hence, it would be useful to all of us to know the hot topics in various scientific fields.
Discussion of the various methods and approaches applied to describe emergent behavior, self-organization, self-repair, multiscale phenomena, and other phenomena observed in complex systems is highly encouraged.
In my previous question I suggested using the ResearchGate platform to launch large-scale spatio-temporal comparative research.
The following is the description of one of the problems of pressing importance for humanitarian and educational sectors.
For the last several decades there has been a gradual loss of quality in education at all levels. We can observe that our universities are progressively turning into entertainment institutions, where student parties, musical and sporting activities are valued more highly than studying in a library or working on painstaking calculations.
In 1998 Vladimir Arnold (1937-2010), one of the greatest mathematicians of our time, stated in his article "Mathematical Innumeracy Scarier Than Inquisition Fires" (newspaper "Izvestia", Moscow) that the power players do not need all people to be able to think and analyze; they need only "cogs in machines" serving their interests and business processes. He also wrote that American students did not know how to add simple fractions: most of them add the numerators and the denominators of the two fractions, so that, in their understanding, 1/2 + 1/3 equals 2/5. Arnold pointed out that with this kind of education students cannot think, prove, or reason; they are easily turned into a crowd, easily manipulated by cunning politicians, because they do not usually understand the causes and effects of political acts. I would add that this process is quite understandable and expected, because computers, the internet, and the consumer-society lifestyle (with its continuous rush for more and newer commodities, which we are induced to regard as healthy behavior) have wiped out young people's skills in elementary logic and their eagerness to study hard. And this is exactly what consumer economics and its bosses, the owners of international businesses and the local magnates, need.
I recall a funny incident that happened in Kharkov (Ukraine). A biology student was asked what "two squared" was. He answered that it was the number 2 inscribed in a square.
The level and scale of the educational and intellectual decline described here could easily be measured with the help of the ResearchGate platform. It would be appropriate to test students' logical abilities, instead of using the guess-the-answer tests that have taken over universities within the framework of the Bologna Process, with its victorious march across the territories of the former Soviet states. Many people remember that the Soviet education system was one of the best in the world. I therefore suggest the following tests:
1. In Nikolai Bogdanov-Belsky's (1868-1945) painting "Mental Arithmetic. In the Public School of S. Rachinsky" (1895), one can see boys in a village school at a mental-arithmetic lesson. Their teacher, Sergei Rachinsky (1833-1902), the school headmaster and in the 1860s a professor at Moscow University, offered the children the following exercise to calculate mentally (http://commons.wikimedia.org/wiki/File:BogdanovBelsky_UstnySchet.jpg?uselang=ru):
(10 × 10 + 11 × 11 + 12 × 12 + 13 × 13 + 14 × 14) / 365 = ?
(There is no provision here on ResearchGate for writing squares of numbers, which is why I have written them as products.)
Nineteenth-century peasant children in bast shoes ("lapti") were able to solve such a task mentally. This September, the very same exercise was given to senior high-school pupils and first-year university students majoring in physics and technology in Kyiv (the capital of Ukraine), and no one could solve it.
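For the record, the exercise can be checked in a couple of lines of Python. The classic mental trick is that 10² + 11² + 12² = 365 = 13² + 14², so the quotient is exactly 2:

```python
# Rachinsky's mental-arithmetic exercise:
# (10^2 + 11^2 + 12^2 + 13^2 + 14^2) / 365
numerator = sum(n * n for n in range(10, 15))  # 100 + 121 + 144 + 169 + 196 = 730
result = numerator / 365
print(numerator, result)  # 730 2.0
```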
2. An exercise associated with the famous mathematician Johann Carl Friedrich Gauss (1777-1855): mentally calculate the sum of the first one hundred positive integers:
1+2+3+4+…+100 = ?
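Gauss's trick is to pair the terms (1 + 100), (2 + 99), ..., giving 50 pairs of 101 each. Both the closed form and brute force confirm it:

```python
# Gauss's pairing trick: 50 pairs, each summing to 101 -> n(n+1)/2
n = 100
closed_form = n * (n + 1) // 2          # 5050
brute_force = sum(range(1, n + 1))      # direct summation for comparison
print(closed_form, brute_force)  # 5050 5050
```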
3. Albrecht Dürer’s (1471-1528) magic square (http://en.wikipedia.org/wiki/Magic_square)
The German Renaissance painter was fascinated by the mathematical properties of the magic square, which was first described in Europe in Spanish (1280s) and Italian (14th-century) manuscripts. He used the image of the square as a detail in his engraving Melencolia I (1514), and included the numbers 15 and 14 in his magic square:
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
Ask your students to find regularities in this magic square. If this exercise seems hard, you can offer them the Lo Shu square (c. 2200 BC), a simpler magic square of order three (the minimal non-trivial case):
4 9 2
3 5 7
8 1 6
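A short script can confirm the defining property of both squares (and can be handed to students as a checker):

```python
# Check the defining property of a magic square: every row, every column
# and both diagonals share the same sum (the "magic constant").
def is_magic(sq):
    n = len(sq)
    target = sum(sq[0])
    rows = all(sum(r) == target for r in sq)
    cols = all(sum(r[j] for r in sq) == target for j in range(n))
    diags = (sum(sq[i][i] for i in range(n)) == target and
             sum(sq[i][n - 1 - i] for i in range(n)) == target)
    return rows and cols and diags

durer = [[16, 3, 2, 13], [5, 10, 11, 8], [9, 6, 7, 12], [4, 15, 14, 1]]
lo_shu = [[4, 9, 2], [3, 5, 7], [8, 1, 6]]
print(is_magic(durer), is_magic(lo_shu))  # True True
```

Dürer's square has further regularities worth asking about: each 2×2 quadrant, the central 2×2 block, and the four corners also sum to 34.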
4. Adding simple fractions.
According to Vladimir Arnold's popular articles, in the era of computers and the Internet this test is becoming an absolute obstacle for more and more students all over the world. Any exercises of the following type will be appropriate here:
3/7 + 7/3 = ? and 5/6 + 7/15 = ?
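Python's fractions module gives the exact answers (useful for preparing an answer key), in contrast to the common error of adding numerators and denominators separately:

```python
from fractions import Fraction

# Exact rational arithmetic: 9/21 + 49/21 and 25/30 + 14/30
a = Fraction(3, 7) + Fraction(7, 3)
b = Fraction(5, 6) + Fraction(7, 15)
print(a, b)  # 58/21 13/10
```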
I think these four tests will be enough. All of them test logical skills, unlike the tests created under the Bologna Process.
Dear colleagues, professors and teachers,
You can offer these tasks to the students at your colleges and universities and share the results here on the ResearchGate platform, so that we can all see the landscape of wretchedness and misery that has resulted from neoliberal economics and globalization.
Time is what permits things to happen. However, as a physical quantity, time should presumably emerge as a consequence of some physical law. But how could time emerge as a consequence of something, if "consequence" and "causation" already imply the existence of time?
A long copper plate is moved at a speed v along its length as suggested in the attachment. A magnetic field exists perpendicular to the plate in a cylindrical region cutting the plate in a circular region. A and B are two fixed conducting brushes which maintain contact with the plate as the plate slides past them. These brushes are connected by a conducting wire.
Is there a current in the wire? In which direction?
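Free charges in the part of the plate inside the field region experience a force qv×B, which drives a current through the brushes; the direction follows from the sign of v×B. The problem gives no numbers, so the following is only a hedged order-of-magnitude sketch with invented values for the field strength, plate speed, field-region radius, and circuit resistance, treating the diameter of the circular region as the effective length in E = BvL:

```python
# Order-of-magnitude sketch (all values illustrative, not from the problem):
B = 0.5       # T, assumed field strength in the cylindrical region
v = 2.0       # m/s, assumed plate speed
a = 0.05      # m, assumed radius of the circular field region
R = 0.01      # ohm, assumed total circuit resistance (plate + brushes + wire)

emf = B * v * (2 * a)     # motional EMF, E = B v L with L = 2a
current = emf / R
print(emf, current)
```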
My understanding of the significance of Bell's inequality in quantum mechanics (QM) is as follows. The assumption of hidden variables implies an inequality called Bell's inequality. This inequality is violated not only by conventional QM theory but also by experimental data designed to test the prediction (the experimental data agree with conventional QM theory). This implies that the hidden variable assumption is wrong. But from reading Bell's paper it looks to me that the assumption proven wrong is hidden variables (without saying local or otherwise), while people smarter than me say that the assumption proven wrong is local hidden variables. I don't understand why it is only local hidden variables, instead of just hidden variables, that was proven wrong. Can somebody explain this?
I am trying to plot four traces on one polar plot using the "hold on" command; the function I am using is "polar2". When plotting the second or third trace, it seems to create a new axis, and the trace extends beyond the plot.
Is there another way to put two or three traces on the same polar plot in Matlab without using the "hold on" command?
Suppose substance 1 has density x (in g/mL) and boiling point y, and substance 2 has density z and boiling point a. If we make a 90%/10% mixture of 1 and 2, what are the resulting density and boiling point?
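For the density part, a common first approximation is ideal (volume-additive) mixing by mass fraction; the sketch below assumes exactly that, with illustrative numbers, and real mixtures can deviate:

```python
# Ideal-mixing estimate (assumption: volumes are additive, i.e. no excess
# volume of mixing -- real mixtures can deviate noticeably from this).
def mix_density(w1, rho1, rho2):
    """Density of a binary mixture with mass fraction w1 of component 1."""
    w2 = 1.0 - w1
    return 1.0 / (w1 / rho1 + w2 / rho2)

# e.g. 90 wt% of a 1.0 g/mL liquid mixed with 10 wt% of a 0.8 g/mL liquid
print(round(mix_density(0.9, 1.0, 0.8), 4))  # 0.9756
```

The boiling point, unlike the density, is not a weighted average: it is set by vapor-liquid equilibrium (Raoult's law for ideal solutions), so the mixture starts boiling at its bubble point, generally between the two pure-component boiling points.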
Over the last few months, I have come across several posts on social media in which scientists, researchers, and even universities flaunt their ranking in the AD Scientific Index (https://www.adscientificindex.com/).
When I visited the website, I was surprised to discover that they charge a fee (~24-30 USD) to add an individual researcher's information.
So I started wondering whether this is another 'predatory' ranking scam.
What's your opinion in this regard?
As you know, a peristaltic pump gives a constant flow direction but a varying (pulsating) flow rate. I was wondering whether it is possible to produce a constant (steady) flow rate using a peristaltic pump.
A thin, circular disc of radius R is made of a conducting material. A charge Q is given to it, which spreads over the two surfaces.
Will the surface charge density be uniform? If not, where will it be minimum?
Imagine a row of golf balls in a straight line with a distance of one metre between each golf ball. This we call row A. Then there is a second row of golf balls (row B) placed right next to the golf balls in row A. We can think of row A of golf balls as marking off distance measurements within the inertial frame of reference corresponding to row A (frame A). Similarly, the golf balls in row B mark the distance measurements in frame B. Both rows are lined up in the x direction.
Now simultaneously all the golf balls in row B start to accelerate in the x direction until they reach a steady velocity v at which point the golf balls in row B stop accelerating. It is clear that the golf balls in row B will all pass the individual golf balls of row A at exactly the same instant when viewed from frame A. It must also be the case that the golf balls in the rows pass each other simultaneously when viewed from frame B.
So we can see that the distance measurements in frame B are the same as the distance measurements in frame A. The row of golf balls lies in the x direction, so this suggests that the coordinate transformation between frame A and frame B should be x' = x - vt.
This contradicts the Lorentz transformation equation for the x direction which is part of the standard SR theory.
If we were to replace the golf balls in row B with measuring rods of length one metre, then in order to match the observations of the Michelson-Morley experiment we would conclude that measuring rods must in general experience length contraction relative to a unique frame of reference. So this thought experiment suggests that we need to treat distances as invariant between moving frames of reference, while noting that moving objects experience length contraction.
This also implies the existence of a unique frame of reference against which the velocity v is measured.
Preprint Space Rest Frame (March 2022)
I would be interested to see if the thought experiment can be explained within standard Special Relativity while retaining the Lorentz transformation equations.
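For comparison, here is a small numeric sketch (illustrative values, units with c = 1) of how the Galilean map x - vt differs from the Lorentz transformation that standard SR prescribes for the same event:

```python
import math

# Compare the Galilean map x - v t with the Lorentz transformation
# x' = gamma (x - v t), t' = gamma (t - v x / c^2), for one event.
c = 1.0
v = 0.6                                        # 0.6c, illustrative
gamma = 1.0 / math.sqrt(1.0 - v * v / c**2)    # 1.25 for v = 0.6c

x, t = 10.0, 5.0                               # event coordinates in frame A
galilean_x = x - v * t                         # 7.0
lorentz_x = gamma * (x - v * t)                # 8.75
lorentz_t = gamma * (t - v * x / c**2)         # -1.25
print(galilean_x, lorentz_x, lorentz_t)
```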
Humanity is in its millennial era. We discovered fire from the friction of stones, and now we are interacting with nanorobots. Once it was a dream to fly, yet today all the Premier League, La Liga, and Serie A players travel by airplane at least twice a week, thanks to the unprecedented growth of human science. BUT ONE THING IS STILL ELUDING THE GLITTERING PROFILE OF HUMANITY.
Although we have the theory of gravitation, Maxwell's theory of electromagnetism, Max Planck's quantum mechanics, Einstein's theory of relativity, and most recently Stephen Hawking's Big Bang work... why can we still not travel back and forth in our lives?
Any possibilities in the future?
Why, in terms of mathematics, physics, and theology?
How much do the existence of advanced laboratories, an appropriate financial budget, and other forms of support affect the quality and quantity of a researcher's work?
The formula for sin(a)sin(b) is a well-known high-school formula. But is there a more general version for the product of m sine functions?
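One standard generalization (stated here as a sketch, not a textbook citation): writing sin θ = (e^(iθ) - e^(-iθ))/(2i) and expanding the product gives a sum over all 2^m sign choices, prod sin(a_k) = (1/(2i)^m) · sum over ε in {+1,-1}^m of (prod ε_k)·exp(i·sum ε_k a_k), which reduces to the familiar sin(a)sin(b) = ½[cos(a-b) - cos(a+b)] for m = 2. A numeric check:

```python
import cmath, itertools, math

def sine_product_expansion(angles):
    """Product of sines as the 2^m-term complex-exponential sum:
    prod sin(a_k) = (1/(2i)^m) * sum over signs e in {+1,-1}^m of
                    (prod e_k) * exp(i * sum e_k a_k)."""
    m = len(angles)
    total = 0 + 0j
    for signs in itertools.product((1, -1), repeat=m):
        coeff = 1
        for s in signs:
            coeff *= s
        total += coeff * cmath.exp(1j * sum(s * a for s, a in zip(signs, angles)))
    return (total / (2j) ** m).real

angles = [0.3, 1.1, 2.0]
direct = math.prod(math.sin(a) for a in angles)
print(abs(direct - sine_product_expansion(angles)) < 1e-12)  # True
```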
Q: Students asked me: "We only study the different forms of energy, and how one form of energy gets converted into other forms. But no one tells us what energy is."
A: No answer.
Q: "Sir, why do we have to call something by the name that a scientist used long ago? Can't we change it?"
A: Science and engineering are fields of perspective; how one looks at something matters. But everything you read in a book can be re-expressed: if you wish, you can express it in a different manner.
The terminology we learn is the one in which the people who first observed a phenomenon chose to explain it, and we follow it. To make the world understand what you have to say, you first have to make people ready to understand; otherwise no one would know what you mean.
Has anyone considered the idea of using the deuterium molecule for nuclear fusion? I see that the fusion of a deuterium nucleus with a proton is possible in stars at millions of kelvin. What I am talking about is using a deuterium molecule, which has all the right ingredients for helium (2 protons, 2 neutrons, 2 electrons).
The idea would be to try to achieve the reaction using a strong, varying magnetic field. The deuterium molecule would have to be aligned with the magnetic field so that the protons oscillate in position, with the magnetic field accelerating the protons towards each other and their natural positive-charge repulsion pushing them apart. Presumably the nuclei would align so that the neutrons are closer together than the protons, and the objective would be to force the neutrons within range of the strong nuclear force. It might be advantageous to put the electrons into an excited state so that they are closer to the right position for the helium electron orbital.
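One quantitative hurdle worth noting: aligning the molecule does not remove the Coulomb barrier between the two deuterons. A back-of-the-envelope estimate, using the standard value e²/(4πε₀) ≈ 1.44 MeV·fm and an approximate nuclear range of ~2 fm:

```python
# Rough Coulomb-barrier estimate between two deuterons (charge +1 each).
KE2 = 1.44          # MeV*fm, Coulomb constant times e^2 (standard value)
r_nuclear = 2.0     # fm, roughly where the strong force takes over (assumed)
barrier_MeV = KE2 / r_nuclear
print(barrier_MeV)  # ~0.72 MeV

# For comparison: the D-D separation in a D2 molecule is ~0.74 Angstrom
# = 74000 fm, so the electrostatic energy there is only ~1.44/74000 MeV,
# i.e. about 20 eV -- some four orders of magnitude short of the barrier.
molecular_eV = KE2 / 74000.0 * 1e6
print(molecular_eV)
```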
The arrow of time (e.g. entropy's arrow of time): why does time have a direction? Why did the universe have such low entropy in the past, so that time correlates with the universal (but not local) increase in entropy from past to future, in accordance with the second law of thermodynamics? Is this phenomenon justified by Gibbs' law and the irreversibility of processes?
With respect to all the answers, in my opinion, no answer to such questions is completely correct.
It is very difficult for me to choose between these two majors for a master's degree, although I think this is a question for many other students as well.
Regardless of interest, which of these two disciplines do you think has the better future? Which has the larger job market in the US and Europe? Which one is more suitable for studying abroad? And which one offers the higher income? Are there fewer jobs related to organic chemistry than to analytical chemistry?
Please share with me any information you have about these two fields and their job market.
One thing I have noticed in academia is that the competition can sometimes be just as fierce as in the world of business.
Sometimes it is small and petty, such as disputes over who should be first author, often triggered by purely selfish motives and the justifications that follow.
In other cases the competition is over grants, in some cases effectively rendering someone unemployed. I have seen bullying and discrimination more frequently than in the business world, which is where I come from.
This is truly the dark side of academia. There are positive things too, but these are the things that make me sick to my stomach.
What is your experience? Do you agree with my rather dark view? If not, why? If yes, how can we fix it?
Best wishes Henrik
Consider two particles A and B in translation, with uniformly accelerated vertical motion, in a frame S(X,Y,T), such that the segment AB of length L always remains parallel to the horizontal axis X (XA = 0, XB = L). If we assume that the acceleration vector (0, E) is constant and take the height of both particles to be given by YA = YB = 0.5ET², then the vertical distance between A and B in S is always (see fig. in PR - 2.pdf):
1) YB - YA = 0
If S moves with constant velocity (v, 0) with respect to another frame s(x,y,t) whose origin coincides with the origin of S at t = T = 0, then inserting the Lorentz transformation for A (Y = y, T = g(t - vxA/c²), xA = vt) into YA = 0.5ET², and the Lorentz transformation for B (Y = y, T = g(t - vxB/c²), xB = vt + L/g) into YB = 0.5ET², we get that the vertical distance between A and B in s(x,y,t) is:
2) yB - yA = 0.5E(L²v²/c⁴ - 2Lvt/(gc²))
which shows us that at each instant of time t the distance yB - yA is different, despite being always constant in S (eq. 1). Since the classical definition of translational motion of two particles requires the distance between them to remain constant, we conclude that in s the two particles cannot be in translational motion, despite being in translational motion in S.
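The algebra of eq. 2 can be checked numerically (units with c = 1, illustrative values for v, L, E, and t):

```python
import math

# Numeric check of eq. 2: insert the Lorentz-transformed times of A and B
# into Y = 0.5 E T^2 and compare with 0.5 E (L^2 v^2/c^4 - 2 L v t/(g c^2)).
c = 1.0
v, L, E, t = 0.6, 2.0, 0.1, 3.0           # illustrative values
g = 1.0 / math.sqrt(1.0 - v * v / c**2)   # Lorentz factor ("g" as in the post)

xA = v * t
xB = v * t + L / g
TA = g * (t - v * xA / c**2)
TB = g * (t - v * xB / c**2)
lhs = 0.5 * E * TB**2 - 0.5 * E * TA**2   # yB - yA, since y = Y
rhs = 0.5 * E * (L**2 * v**2 / c**4 - 2 * L * v * t / (g * c**2))
print(abs(lhs - rhs) < 1e-12)  # True
```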
More information in:
I'm currently looking at the rheological properties of the polymer xanthan gum, focusing on its dynamic viscosity to be more specific. I'm assessing the effect of pH (ranging from 3.6 to 5.6 in increments of 0.4, six pH values in total) on the dynamic viscosity of a xanthan gum solution (xanthan gum powder dissolved in acetate buffers of equal ionic strength; the concentration is kept at 0.04%).
Firstly, my viscosity data show that as the pH increases from 3.6 to 4.0 and then to 4.4, the viscosity increases; but as I bring the pH from 4.4 up to 4.8, then 5.2, and finally 5.6, the increasing trend plateaus, and the rise in viscosity is less significant than over the 3.6-4.4 jump. In this range, does pH affect the viscosity of xanthan gum through its molecular configuration? Some sources state that xanthan gum's viscosity remains stable within the range pH 3-12, but at a high concentration such as 1%, not 0.04%; others suggest that pH still plays a role, though I'm not sure how at the chemical and molecular level.
A possible conjecture I can think of is that xanthan gum's order-disorder (helix-coil) transition is affected by protonation. Figure 2 demonstrates how electrolytes affect the structure of the polymer; Figure 3 shows how, in the helical-rod state rather than the random-coil state, the chains are capable of hydrogen bonding with one another. Hence, I'm wondering whether pH affects this structural transition, such that the increased intermolecular forces in the helical-rod form would make the solution more viscous.
Here are the resources I have used so far:
Brunchi, CE., Bercea, M., Morariu, S. et al. Some properties of xanthan gum in aqueous solutions: effect of temperature and pH. J Polym Res 23, 123 (2016). https://doi.org/10.1007/s10965-016-1015-4
1) Can the existence of an aether be compatible with local Lorentz invariance?
2) Can classical rigid bodies in translation be studied in this framework?
By changing the synchronization condition of the clocks of inertial frames, the answer to 1) and 2) seems to be affirmative. This synchronization clearly violates global Lorentz symmetry, but it preserves Lorentz symmetry in the vicinity of each point of flat spacetime.
Christian Corda showed in 2019 that this clock-synchronization effect is a necessary condition for explaining the Mössbauer rotor experiment (Honorable Mention, Gravity Research Foundation 2018). In fact, it can easily be shown to be a necessary condition for applying the Lorentz transformation to any experiment involving high-velocity particles traveling between two distant points (including the linear Sagnac effect).
We may take the time of a clock placed at an arbitrary coordinate x to be t, and the time of a clock placed at an arbitrary coordinate xP to be tP. Let the offset (t - tP) between the two clocks be:
1) (t - tP) = v(x - xP)/c²
where (t - tP) is the so-called Sagnac correction. If we call g the Lorentz factor for v and insert 1) into the time-like component of the Lorentz transformation, T = g(t - vx/c²), we get:
2) T = g(tP - vxP/c²)
On the other hand, if we assume that the origins coincide, x = X = 0 at time tP = 0, we may write the space-like component of the Lorentz transformation as:
3) X = g(x - vtP)
Assuming that both clocks are placed at the same point x = xP, inserting x = xP, X = XP, T = TP into 2) and 3) yields:
4) XP = g(xP - vtP)
5) TP = g(tP - vxP/c²)
which is the local Lorentz transformation for an event happening at point P. On the other hand, if the distance between x and xP is different from 0 and xP is placed at the origin of coordinates, we may insert xP = 0 into 2) and 3) to get:
6) X = g(x - vtP)
7) T = g·tP
which is a change of coordinates that:
- is compatible with GPS simultaneity;
- is compatible with the Sagnac effect, which can then be explained in a very straightforward manner without the need to use GR or the Langevin coordinates;
- is compatible with the existence of relativistic extended rigid bodies in translation, using the classical definition of rigidity instead of Born's definition;
- can be applied to solve the two problems of the preprint below;
- is compatible with all experimental corroborations of SR: the aberration of light, the Ives-Stilwell experiment, the Hafele-Keating experiment, ...
Thus, we may conclude that, under the synchronization condition 1):
a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a single clock.
b) Lorentz invariance is broken when we use two clocks to measure time intervals over long displacements (eqs. 6-7).
c) We need to specify the frame with respect to which the velocity v of the synchronization condition (eq. 1) is defined. This frame has v = 0 and plays the role of an absolute preferred frame.
a), b), and c) suggest that the Thomas precession is a local effect that cannot manifest over long displacements.
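As a sanity check on the algebra (not on the physics), the substitutions can be verified numerically with illustrative values:

```python
import math

# Numeric check: with the offset (t - tP) = v (x - xP)/c^2 (eq. 1),
# the time-like Lorentz component T = g (t - v x/c^2) collapses to
# T = g (tP - v xP/c^2) (eq. 2); with xP = 0 it reduces to T = g tP (eq. 7).
c = 1.0
v = 0.8                                       # illustrative, 0.8c
g = 1.0 / math.sqrt(1.0 - v * v / c**2)       # Lorentz factor

x, xP, tP = 5.0, 1.5, 2.0                     # illustrative coordinates
t = tP + v * (x - xP) / c**2                  # clock offset, eq. 1
T = g * (t - v * x / c**2)
print(abs(T - g * (tP - v * xP / c**2)) < 1e-12)  # True: eq. 2 holds

xP = 0.0                                      # clock at the origin
t = tP + v * (x - xP) / c**2
T = g * (t - v * x / c**2)
print(abs(T - g * tP) < 1e-12)                # True: eq. 7 holds
```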
More information in:
1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)
2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect?(Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)
I am studying integral transforms (Fourier, Laplace, etc.) in order to apply them to physics problems. However, it is difficult to find books that have enough exercises with answers. I have found that the Russian authors in particular have excellent books containing many exercises and their solutions.
This behavior is generally observed at low strain rates in fine-grained materials.
For the industrial scale, which materials are viable, and which parameters need to be altered?
Kindly express your views.