Science topics: Physics

Science topic

# Physics - Science topic

Physics related research discussions

Questions related to Physics

Dear all

Hope you are doing well!

What are the best books in Materials Science and Engineering (basic and advanced)? Moreover, what are the most important skills (and materials-related topics) that materials scientists have to develop and acquire?

Thanks in advance

^_^

I use a Fujikura CT-30 cleaver for cleaving PCF used for supercontinuum generation. Initially it seemed to work fine, as I could get high coupling efficiency (70-80%) into the 3.2 um core of the PCF. However, after some time (several hours) I notice that the coupling efficiency decreases drastically, and when I inspect the PCF end face with an IR scope I can see a bright shine on the PCF end facet, which may be an indication that the end face is damaged. I also want to mention that the setup is well protected from dust, so there is no chance of dust contaminating the fiber facet.

Please suggest what should be done to get an optimal cleave. Should I use a different cleaver (please suggest one), or are there other things to consider?

Thanks

If so, experimental results and related theory might also be helpful ...

Forgive some of my ignorance of the math for thermodynamics and heat exchange; my background is heavier in chemistry and I could use some help.

The project is to keep about 70L of water in an aquarium at 17C when the ambient temperature is 22C in the room. The original project built had the following set up:

(Top to Bottom):

1. 80x80x38mm fan running at 5700 RPMs and 76CFM

2. 80x80x20mm copper fin heatsink (0.5mm fin thickness and 40 fins with a 3.5mm bottom thickness)

3. 2-TEC1-12706 hot side towards heatsink, cold side down towards water block (Imax: 6.4A, Umax: 15.4V, Qmax: (dT=0) 63W, dTmax=68C)

4. 40x80x12mm water block centered under the heatsink (surrounded on the sides with 20mm styrofoam and 10mm styrofoam at the back)

5. ~26mm thick styrofoam

6. Wood base

• All power is supplied by an AC/DC converter (12V 20A 240W)

• Power to the system is managed by a W1209 Temperature Control Module (Relay)

• Water flow is achieved by a 4L/min water pump (slowest I can find)

This setup is only cooling the water to 18 C at night, and the temperature slowly creeps up to 18.7 C across the day, so I know this setup is not keeping up with the heat load. (It is also worth noting that the output temperature is about 1.5-2 C cooler than the input temperature to the water block.) My hypothesis is that the water does not have enough time in the water block for good thermal exchange, or that the cooler is not creating enough of a dT in the water block to absorb the amount of heat needed in that cycle time. The fact that the aluminum water block has a roughly 5x lower specific heat than water is what makes me think either more contact time or a greater dT is needed.

My thought was to swap the water block for a 40x200x12mm water block and increase the number of Peltier coolers from 2 to 5, going with the TEC1-12715 (Imax: 15.6A, Umax: 15.4V, Qmax (dT=0): 150W, dTmax: 68C).

This is where I am lost in the weeds and need help; I am lacking the intellectual horsepower for this. Will using the 5 in parallel do the trick without maxing out the converter? Or will using 5 in series still produce the needed cooling effect with the lower dT associated with the lower current? Or is there another setup someone can recommend? I am open to feedback and direction; thank you in advance.
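A quick energy-balance sketch may help frame the question above. All numbers come from the post itself; the actual heat leak through the tank walls is unknown, so this only checks the internal consistency of the measurements and the supply-current budget (a sketch, not a design):

```python
C_WATER = 4186.0          # J/(kg*K), specific heat of water

# Heat apparently removed at the water block, from the measured
# inlet/outlet difference (1.5-2 C) at 4 L/min (~0.0667 kg/s):
m_dot = 4.0 / 60.0        # kg/s (1 L of water is ~1 kg)
q_removed = m_dot * C_WATER * 1.75   # midpoint of the 1.5-2 C range

# Two TEC1-12706 can pump at most 2 * 63 W, and that only at dT = 0:
q_tec_max = 2 * 63.0

# Supply-current budget for the proposed five TEC1-12715 modules:
supply_amps = 20.0
parallel_amps = 5 * 15.6  # draw if all five ran at Imax in parallel
```

Taken at face value, `q_removed` comes out near 490 W, far above the 126 W two TEC1-12706 can pump even at dT = 0, which suggests either the flow rate or the inlet/outlet dT reading deserves a second look. And five TEC1-12715 in parallel at full current would draw 78 A, well over the 20 A supply, so they would have to be run far below Imax or wired in series strings.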

In a hypothetical situation I have two wires: one's cross-section is a circle, the other's a star. Both have the same cross-sectional area and the same length. What are the differences in their electrical properties?

Are there any experiments done looking into this ?

Also, what would happen if a wire had a conical shape along its length?
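For DC, the resistance R = ρL/A depends only on the length and the cross-sectional area, not on the shape, so circle and star are identical at DC (differences appear at AC, where the skin effect makes the perimeter matter). For a slowly tapering conical wire, summing thin slices gives the closed form R = ρL/(πr₁r₂). A small sketch checking both claims numerically, with made-up copper dimensions:

```python
import math

RHO_CU = 1.68e-8   # ohm*m, resistivity of copper at room temperature

def resistance_uniform(length, area):
    # DC resistance depends only on length and cross-sectional AREA,
    # not on the cross-section's shape (circle, star, ...).
    return RHO_CU * length / area

def resistance_cone(length, r1, r2, n=100_000):
    # Sum thin slices dR = rho*dx / (pi*r(x)^2), radius varying linearly.
    dx = length / n
    total = 0.0
    for i in range(n):
        r = r1 + (r2 - r1) * (i + 0.5) / n
        total += RHO_CU * dx / (math.pi * r * r)
    return total

# Closed form for the linear taper: R = rho*L / (pi * r1 * r2)
L, r1, r2 = 1.0, 1e-3, 0.5e-3      # hypothetical dimensions
closed = RHO_CU * L / (math.pi * r1 * r2)
numeric = resistance_cone(L, r1, r2)
```

The slice sum converges to the closed form, illustrating that the taper, not the cross-section shape, is what changes the DC resistance.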

Having worked on the spacetime wave theory for some time and recently published a preprint paper on the Space Rest Frame I realised the full implications which are quite shocking in a way.

The spacetime wave theory proposes that electrons, neutrons and protons are looped waves in spacetime:

The paper on the space rest frame proposes that waves in spacetime take place in the space rest frame K0:

Preprint Space Rest Frame (Dec 2021)

This then implies that the proton which is a looped wave in spacetime of three wavelengths is actually a looped wave taking place in the space rest frame and we are moving at somewhere between 150 km/sec and 350 km/sec relative to that frame of reference.

This also gives a clue as to the cause of the length contraction taking place in objects moving relative to the rest frame. The length contraction takes place in the individual particles, namely the electron, neutron and proton.

I find myself in a similar position to Ernest Rutherford when he discovered that the atom was mostly empty space. I don't expect to fall through the floor, but I might expect to suddenly fly away at around 250 km/sec. Of course this doesn't happen, because there is zero resistance to uniform motion through space and momentum is conserved.

It still seems quite a shocking realisation.

Richard

For those that have the seventh printing of Goldstein's "Classical Mechanics" so I don't have to write any equations here. The Lagrangian for electromagnetic fields (expressed in terms of scalar and vector potentials) for a given charge density and current density that creates the fields is the spatial volume integral of the Lagrangian density listed in Goldstein's book as Eq. (11-65) (page 366 in my edition of the book). Goldstein then considers the case (page 369 in my edition of the book) in which the charges and currents are carried by point charges. The charge density (for example) is taken to be a Dirac delta function of the spatial coordinates. This is utilized in the evaluation of one of the integrals used to construct the Lagrangian. This integral is the spatial volume integral of charge density multiplied by the scalar potential. What is giving me trouble is as follows.

In the discussion below, a "particle" refers to an object that is small in some sense but has a greater-than-zero size. It becomes a point as a limiting case as the size shrinks to zero. In order for the charge density of a particle, regardless of how small the particle is, to be represented by a delta function in the volume integral of charge density multiplied by potential, it is necessary for the potential to be nearly constant over distances equal to the particle size. This is true (when the particle is sufficiently small) for external potentials evaluated at the location of the particle of interest, where the external potential as seen by the particle of interest is defined to be the potential created by all particles except the particle of interest. However, total potential, which includes the potential created by the particle of interest, is not slowly varying over the dimensions of the particle of interest regardless of how small the particle is. The charge density cannot be represented by a delta function in the integral of charge density times potential, when the potential is total potential, regardless of how small the particle is. If we imagine the particles to be charged marbles (greater than zero size and having finite charge densities) the potential that should be multiplying the charge density in the integral is total potential. As the marble size shrinks to zero the potential is still total potential and the marble charge density cannot be represented by a delta function. Yet textbooks do use this representation, as if the potential is external potential instead of total potential. How do we justify replacing total potential with external potential in this integral?

I won't be surprised if the answers get into the issues of self forces (the forces producing the recoil of a particle from its own emitted electromagnetic radiation). I am happy with using the simple textbook approach and ignoring self forces if some justification can be given for replacing total potential with external potential. But without that justification being given, I don't see how the textbooks reach the conclusions they reach with or without self forces being ignored.

The 2023 ranking is available through the following link:

QS ranking is relatively familiar in scientific circles. It ranks universities based on the following criteria:

1- Academic Reputation

2- Employer Reputation

3- Citations per Faculty

4- Faculty Student Ratio

5- International Students Ratio

6- International Faculty Ratio

7- International Research Network

8- Employment Outcomes

- Are these parameters enough to measure the superiority of a university?

- What other factors should also be taken into account?

Please share your personal experience with these criteria.

Hello,

I would like to know how to measure a solid's surface temperature with fluid on it. The fluid will react with the solid surface and generates heat, so the temperature between the solid and the fluid is the crucial data I need. Here, I can only think of two options:

1. Thermocouple: use a FLAT surface thermocouple and attach it to the surface of the solid to measure the data. For example, I can use Thin Leaf-Type Thermocouples for Layered Surfaces (omega.com) or Cement-On Polyimide Fast Response Surface Thermocouples (omega.com)

**Pros:** fast response, high accuracy

**Cons:** cannot guarantee that the measured data accurately represents the surface temperature

2. Infrared temperature sensor:

**Pros:** directly measures the surface temperature, high accuracy

**Cons:** slow response; the data might be affected by the fluid

Is there any other way to do the measurement or any suggestions?

Thank you very much in advance to anyone who answers this question.

Lee's disc apparatus is designed to find the thermal conductivity of bad conductors. But I have a doubt, since soil has the following properties:

1. consists of irregular shaped aggregates

2. Non uniform distribution of particles

3. Presence of voids

Can we use Lee's disc method to find the thermal conductivity of soil?

I am interested to know the opinion of experts in this field.
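For reference, the standard Lee's disc working relation is k = m·c·(dT/dt)·d / (A·(T₁ − T₂)), where m and c refer to the lower metal disc, dT/dt is its cooling rate at the steady-state temperature, and d and A are the specimen's thickness and face area. A minimal sketch with purely hypothetical numbers (the method assumes a homogeneous specimen, so for soil it would at best give an effective conductivity):

```python
import math

def lees_disc_k(m, c, cooling_rate, d, radius, t1, t2):
    """Lee's disc relation: k = m*c*(dT/dt)*d / (A*(T1 - T2)).

    m, c          -- mass (kg) and specific heat (J/kg.K) of the lower disc
    cooling_rate  -- disc cooling rate (K/s) measured at T2
    d, radius     -- specimen thickness and radius (m)
    t1, t2        -- steady-state temperatures above/below the specimen (K)
    """
    area = math.pi * radius**2
    return m * c * cooling_rate * d / (area * (t1 - t2))

# Hypothetical illustrative values: 0.9 kg brass disc cooling at 0.005 K/s,
# a 5 mm thick, 5 cm radius specimen with a 5 K drop across it.
k = lees_disc_k(m=0.9, c=380.0, cooling_rate=0.005,
                d=0.005, radius=0.05, t1=305.0, t2=300.0)
```

With these made-up numbers k comes out around 0.2 W/m·K, in the range typical of dry porous materials; whether the result is meaningful for soil depends on the homogeneity concerns raised above.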

LIGO and cooperating institutions obviously determine distance r of their hypothetical gravitational wave sources on the basis of a 1/r dependence of related spatial strain, see on page 9 of reference below. Fall-off by 1/r in fact applies in case of gravitational potential V

_{g}= - GM/r of a single source. Shouldn’t any additional effect of a binary system with internal separation s - just for geometrical reasons - additionally reduce by s/r ?In order to represent our observations or sight of a physical process and to further investigate it by conducting experiments or Numerically models? What are basics one need to focus ? Technically, how one should think? First, thing is understanding, you should be there! If we are modeling a flow we have to be the flow, if representing a let's say a ball, you have to be the ball! To better understand it! What are others?

In my previous question I suggested using the ResearchGate platform to launch large-scale spatio-temporal comparative research.

The following is the description of one of the problems of pressing importance for humanitarian and educational sectors.

For the last several decades there has been a gradual loss in the quality of education at all levels. We can observe that our universities are progressively turning into entertainment institutions, where student parties, musical and sporting activities are valued more highly than studying in a library or working on painstaking calculations.

In 1998 Vladimir Arnold (1937-2010), one of the greatest mathematicians of our times, stated in his article "Mathematical Innumeracy Scarier Than Inquisition Fires" (newspaper "Izvestia", Moscow) that the power players didn't need all people to be able to think and analyze, only "cogs in machines" serving their interests and business processes. He also wrote that American students didn't know how to add simple fractions. Most of them add the numerators and denominators of the two fractions, so that, in their understanding, 1/2 + 1/3 equals 2/5. Vladimir Arnold pointed out that with this kind of education students can't think, prove and reason; they are easy to turn into a crowd, easily manipulated by cunning politicians, because they don't usually understand the causes and effects of political acts. I would add that this process is quite understandable and expected, because computers, the internet and the consumer-society lifestyle (with its continuous rush for more and newer commodities, which we are induced to regard as healthy behavior) have wiped out young people's skills in elementary logic and their eagerness to study hard. And this is exactly what consumer economics and its bosses, the owners of international businesses and local magnates, need.

I recall a funny incident that happened in Kharkov (Ukraine). One Biology student was asked what “two squared” was. He answered that it was the number 2 inscribed into a square.

The level and the scale of the educational and intellectual decline described can easily be measured with the help of the ResearchGate platform. It would be appropriate to test students' logic abilities, instead of the guess-the-answer tests which have taken over all the universities within the framework of the Bologna Process, whose victorious march has swept across the territories of the former Soviet states. Many people remember that the Soviet education system was one of the best in the world. I have therefore suggested the following tests:

1. In a Nikolai Bogdanov-Belsky (1868-1945) painting “Oral accounting at Rachinsky's People's school”(1895) one could see boys in a village school at a mental arithmetic lesson. Their teacher, Sergei Rachinsky (1833-1902), the school headmaster and also a professor at the Moscow University in the 1860s, offered the children the following exercise to do a mental calculation (http://commons.wikimedia.org/wiki/File:BogdanovBelsky_UstnySchet.jpg?uselang=ru):

(10 × 10 + 11 × 11 + 12 × 12 + 13 × 13 + 14 × 14) / 365 = ?

(There is no provision here on ResearchGate to write squares of numbers; that's why I have written them as products.)

Nineteenth-century peasant children in bast shoes ("lapti") were able to solve such a task mentally. This year, in September, this very exercise was given to senior high-school pupils and first-year students of a university majoring in Physics and Technology in Kyiv (the capital of Ukraine), and no one could solve it.

2. Exercise of a famous mathematician Johann Carl Friedrich Gauss (1777–1855): to calculate mentally the sum of the first one hundred positive integers:

1+2+3+4+…+100 = ?

3. Albrecht Dürer’s (1471-1528) magic square (http://en.wikipedia.org/wiki/Magic_square)

The German Renaissance artist was amazed by the mathematical properties of the magic square, which were first described in Europe in Spanish (1280s) and Italian (14th-century) manuscripts. He used the image of the square as a detail in his engraving Melencolia I, created in 1514, and included the numbers 15 and 14 in his magic square:

16 3 2 13

5 10 11 8

9 6 7 12

4 15 14 1

Ask your students to find regularities in this magic square. In case this exercise seems hard, you can offer them Lo Shu (2200 BC) square, a simpler variant of magic square of the third order (minimal non-trivial case):

4 9 2

3 5 7

8 1 6

4. Summing up of simple fractions.

According to Vladimir Arnold’s popular articles, in the era of computers and Internet, this test becomes an absolute obstacle for more and more students all over the world. Any exercises of the following type will be appropriate at this part:

3/7 + 7/3 = ? and 5/6 + 7/15=?

I think these four tests will be enough. All of them test logical skills, unlike the tests created under the Bologna Process.
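For anyone administering the four tests above, their answers are easy to check; a short script confirming them (the fraction sums are given in lowest terms):

```python
from fractions import Fraction

# 1. Rachinsky's mental-arithmetic exercise: the sum of squares is 730,
#    which is exactly twice 365.
assert (10*10 + 11*11 + 12*12 + 13*13 + 14*14) / 365 == 2.0

# 2. Gauss's schoolboy sum: 1 + 2 + ... + 100 = 100*101/2 = 5050
assert sum(range(1, 101)) == 5050

# 3. Magic-square regularities: every row of Duerer's square sums to 34,
#    every row of the Lo Shu square to 15 (columns and diagonals likewise).
duerer = [[16, 3, 2, 13], [5, 10, 11, 8], [9, 6, 7, 12], [4, 15, 14, 1]]
lo_shu = [[4, 9, 2], [3, 5, 7], [8, 1, 6]]
assert all(sum(row) == 34 for row in duerer)
assert all(sum(row) == 15 for row in lo_shu)

# 4. Fraction sums -- the point being NOT to add tops and bottoms:
assert Fraction(3, 7) + Fraction(7, 3) == Fraction(58, 21)
assert Fraction(5, 6) + Fraction(7, 15) == Fraction(13, 10)
```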

Dear colleagues, professors and teachers,

You can offer these tasks to the students at your colleges and universities and share the results here on the ResearchGate platform, so that we all can see the landscape of the wretchedness and misery resulting from neoliberal economics and globalization.

Complex systems are becoming one of very useful tools in the description of observed natural phenomena across all scientific disciplines. You are welcomed to share with us hot topics from your own area of research.

Nowadays, no one can encompass all scientific disciplines. Hence, it would be useful to all of us to know hot topics from various scientific fields.

Discussion about various methods and approaches applied to describe emergent behavior, self-organization, self-repair, multiscale phenomena, and other phenomena observed in complex systems are highly encouraged.

Time is what permits things to happen. However, as a physical quantity, time must emerge as a consequence of some physical law(?). But how could time emerge as a consequence of something if "consequence" and "causation" imply the existence of time?

A long copper plate is moved at a speed v along its length as suggested in the attachment. A magnetic field exists perpendicular to the plate in a cylindrical region cutting the plate in a circular region. A and B are two fixed conducting brushes which maintain contact with the plate as the plate slides past them. These brushes are connected by a conducting wire.

**Is there a current in the wire? In which direction?**

My understanding of the significance of Bell's inequality in quantum mechanics (QM) is as follows. The assumption of hidden variables implies an inequality called Bell's inequality. This inequality is violated not only by conventional QM theory but also by experimental data designed to test the prediction (the experimental data agree with conventional QM theory). This implies that the hidden variable assumption is wrong. But from reading Bell's paper it looks to me that the assumption proven wrong is hidden variables (without saying local or otherwise), while people smarter than me say that the assumption proven wrong is local hidden variables. I don't understand why it is only local hidden variables, instead of just hidden variables, that was proven wrong. Can somebody explain this?

I am trying to plot 4 traces on one polar plot using the "hold on" command. The function I am using is "polar2". When plotting the 2nd or 3rd trace, it seems to create a new axis (maybe), and the trace extends beyond the plot.

Is there any other way to plot 2-3 traces in the same polar plot without using the "hold on" command in MATLAB?

If you have substance 1 with density x and boiling point y, and substance 2 with density z and boiling point a, and we make a 90%/10% mix of 1 and 2, what are the resulting density and boiling point?
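The boiling point of a mixture cannot be obtained from the pure-component boiling points alone; it requires vapor-liquid equilibrium data (for ideal solutions, Raoult's law). The density, however, has a common first estimate if one assumes the volumes are additive (ideal mixing, often a few percent off for real liquids). A minimal sketch under that assumption, with made-up densities:

```python
def ideal_mix_density(w1, rho1, rho2):
    # Assuming additive volumes (ideal mixing):
    #   1/rho_mix = w1/rho1 + w2/rho2   for mass fractions w1 + w2 = 1.
    w2 = 1.0 - w1
    return 1.0 / (w1 / rho1 + w2 / rho2)

# Hypothetical 90/10 mass mix of liquids with densities 1.0 and 0.8 g/mL:
rho = ideal_mix_density(0.9, 1.0, 0.8)   # ~0.976 g/mL
```

Note that density carries units of mass per volume (e.g. g/mL); g/mol is a molar mass, a different quantity.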

Over the last few months, I have come across several posts on social media where scientists/researchers even Universities are flaunting their ranking as per AD Scientific Index https://www.adscientificindex.com/.

When I clicked on the website, I was surprised to discover that they are charging a fee (**~24-30 USD**) to add an individual researcher's information. So I started wondering if it's another scam of 'predatory' rankings.

**What's your opinion in this regard?**

As you know, a peristaltic pump has a constant fluid **flow direction** but a changing fluid **flow rate**. I was wondering if it is possible to produce a constant (steady) fluid flow rate using a peristaltic pump.

A thin, circular disc of radius R is made of a conducting material. A charge Q is given to it, which spreads over the two surfaces.

Will the surface charge density be uniform? If not, where will it be minimum?
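Classical electrostatics gives a known closed form for an ideal isolated conducting disc: the density on each face is σ(r) = Q / (4πR·√(R² − r²)), so it is not uniform; it is minimal at the centre and diverges toward the rim. A quick numeric sanity check of that formula (values arbitrary):

```python
import math

def sigma(r, R, Q):
    # Surface charge density on EACH face of an isolated conducting
    # disc of radius R carrying total charge Q (classical result).
    return Q / (4 * math.pi * R * math.sqrt(R**2 - r**2))

R, Q = 1.0, 1.0

# Density is minimal at the centre and grows toward the rim:
assert sigma(0.0, R, Q) < sigma(0.9 * R, R, Q)

# Sanity check: integrating sigma over BOTH faces recovers the total Q.
n = 200_000
dr = R / n
total = sum(2 * sigma((i + 0.5) * dr, R, Q) * 2 * math.pi * (i + 0.5) * dr * dr
            for i in range(n))
assert abs(total - Q) < 0.01 * Q
```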

Imagine a row of golf balls in a straight line with a distance of one metre between each golf ball. This we call row A. Then there is a second row of golf balls (row B) placed right next to the golf balls in row A. We can think of row A of golf balls as marking off distance measurements within the inertial frame of reference corresponding to row A (frame A). Similarly, the golf balls in row B mark the distance measurements in frame B. Both rows are lined up in the x direction.

Now simultaneously all the golf balls in row B start to accelerate in the x direction until they reach a steady velocity v at which point the golf balls in row B stop accelerating. It is clear that the golf balls in row B will all pass the individual golf balls of row A at exactly the same instant when viewed from frame A. It must also be the case that the golf balls in the rows pass each other simultaneously when viewed from frame B.

So we can see that the distance measurements in frame B are the same as the distance measurements in row A. The row of golf balls is in the x direction, so this suggests that the coordinate transformation between frame A and frame B should be x' = x - vt.

This contradicts the Lorentz transformation equation for the x direction which is part of the standard SR theory.

If we were to replace the golf balls in row B with measuring rods of length one metre, then in order to match the observations of the Michelson-Morley experiment we would conclude that measuring rods must in general experience length contraction relative to a unique frame of reference. So this thought experiment suggests that we need to maintain distances as invariant between moving frames of reference, while noting that moving objects experience length contraction.

This also implies the existence of a unique frame of reference against which the velocity v is measured.

Preprint Space Rest Frame (March 2022)

I would be interested to see if the thought experiment can be explained within standard Special Relativity while retaining the Lorentz transformation equations.

Richard

The human dynasty is in its millennium era. We discovered fire from the friction of stones, and now we are interacting with nanorobots. Once it was a dream to fly, but today all the Premier League, La Liga and Serie A players travel by airplane at least twice a week, thanks to the unprecedented growth of human science. BUT ONE THING IS STILL ELUDING THE GLITTERING PROFILE OF THE HUMAN DYNASTY.

Although we have gravitation theory, Maxwell's theory of electromagnetism, Max Planck's quantum mechanics, Einstein's relativity theory and, most recently, Stephen Hawking's Big Bang concepts...

**Why can't we travel back and forth in our lives?** **Any possibilities in the future?**

**if not..**

**Why? in terms of mathematics, physics and theology??**

How much do the existence of advanced laboratories, appropriate financial budgets and other kinds of support affect the quality and quantity of a researcher's work?

The formula for sin(a)sin(b) is a very well-known high-school formula. But is there a more general version for the product of m sine functions?
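There is: writing sin a = (e^{ia} − e^{−ia})/(2i) and expanding the product gives, for any m, ∏ₖ sin aₖ = (2i)^{−m} Σ_ε (∏ₖ εₖ) exp(i Σₖ εₖ aₖ), summed over all 2^m sign choices εₖ ∈ {±1}; grouping conjugate terms turns this into a sum of cosines (m even) or sines (m odd), with sin(a)sin(b) = [cos(a−b) − cos(a+b)]/2 as the m = 2 case. A numeric sketch verifying the identity:

```python
import cmath
import itertools
import math

def sin_product_expansion(angles):
    # prod_k sin(a_k) = (2i)^{-m} * sum over sign vectors eps of
    #                   (prod_k eps_k) * exp(i * sum_k eps_k * a_k)
    m = len(angles)
    total = 0 + 0j
    for eps in itertools.product((1, -1), repeat=m):
        sign = 1
        for e in eps:
            sign *= e
        total += sign * cmath.exp(1j * sum(e * a for e, a in zip(eps, angles)))
    return (total / (2j) ** m).real

# m = 2 reduces to the high-school formula sin(a)sin(b) = (cos(a-b) - cos(a+b))/2
assert abs(sin_product_expansion([0.4, 0.9])
           - 0.5 * (math.cos(0.5) - math.cos(1.3))) < 1e-12

# and it matches the direct product for larger m:
angles = [0.3, 1.1, 2.0, 0.7]
direct = math.prod(math.sin(a) for a in angles)
assert abs(sin_product_expansion(angles) - direct) < 1e-12
```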

Q: Students asked me: "We only study different forms of energy, and one form of energy getting converted into other forms. But no one knows what energy is."

Ans: No answer

Q: Sir, why do we have to call something by the name that a scientist used long ago? Can't we change it?

Ans: Science or engineering is a field of perspective; how one looks at something matters. But everything that you read in a book can be changed. If you wish, you can express it in a different manner.

The terminology we learn is the one in which the people who first observed a phenomenon chose to explain a particular concept, and we follow it. To make the world understand what you have to say, you first have to make them ready to understand; otherwise no one would know.

Has anyone considered the idea of using the deuterium molecule for nuclear fusion? I see that the nuclear fusion of a deuterium atom with a proton is possible in stars at millions of kelvins. What I am talking about is using a deuterium molecule, which has all the right ingredients for helium (2 protons, 2 neutrons, 2 electrons).

The idea would be to try to achieve the reaction using a strong and varying magnetic field. The deuterium molecule must be aligned with the magnetic field so that the protons start to oscillate their position with the magnetic field accelerating the protons towards each other and the natural positive charge repulsion pushing them apart. Presumably the nuclei will align so that the neutrons are closer together than the protons and the objective is to force the neutrons within range of the "strong nuclear force". It might be advantageous to put the electrons into an excited state so that they are closer to the right position for the helium electron orbital shell.

Richard

Arrow of time (e.g. entropy's arrow of time): Why does time have a direction? Why did the universe have such low entropy in the past, and time correlates with the universal (but not local) increase in entropy, from the past and to the future, according to the second law of thermodynamics? Is this phenomenon justified by the Gibbs law and the irreversible process?

With respect to all the answers, in my opinion, no answer to such questions is completely correct.

Greetings

It is very difficult for me to choose between these two majors for the master's degree

Although I think this is a question for many other students as well

Regardless of interest, which of these two disciplines do you think has a better future? Which has more job markets, in the US and Europe? Which one is more suitable for studying abroad? And which one has more income?

*Are there fewer jobs related to organic chemistry than to analytical chemistry?* Please share with me if you have information about these two fields and their job market.

Thanks

Dear friends

One thing I noted in academia is that competition can sometimes be just as fierce as in the world of business.

Sometimes it can be small and petty, like arguments over who should be first author, often triggered by purely selfish reasons and then justified after the fact.

In other cases competition can be about grants, effectively rendering someone unemployed in some cases. I have seen bullying and discrimination more frequently than in the world of business, the place I come from.

This is truly the dark side of academia, there are also positive things but these are things that make me sick to my stomach.

What is your experience? Do you agree with my rather dark view? If not, why? If yes, how can we fix it?

Best wishes Henrik

Consider two particles A and B in translation with uniformly accelerated vertical motion in a frame S(X,Y,T), such that the segment AB with length L remains always parallel to the horizontal axis X (X_A = 0, X_B = L). If we assume that the acceleration vector (0, E) is constant and we take the height of both particles to be defined by the expressions Y_A = Y_B = 0.5ET^2, we have that the vertical distance between A and B in S is always (see fig. in PR - 2.pdf):

1) Y_B - Y_A = 0

If S moves with constant velocity (v, 0) with respect to another reference frame s(x,y,t) whose origin coincides with the origin of S at t = T = 0, then inserting the Lorentz transformation for A (Y = y, T = g(t - vx_A/c^2), x_A = vt) into Y_A = 0.5ET^2, and the Lorentz transformation for B (Y = y, T = g(t - vx_B/c^2), x_B = vt + L/g) into Y_B = 0.5ET^2, where g is the Lorentz factor, we get that the vertical distance between A and B in s(x,y,t) is:

2) y_B - y_A = 0.5E(L^2 v^2/c^4 - 2Lvt/(c^2 g))

which shows us that at each instant of time t the distance y_B - y_A is different, despite being always constant in S (eq. 1). As we know that the classical definition of translational motion of two particles is only possible if the distance between them remains constant, we conclude that in s the two particles cannot be in translational motion, despite being in translational motion in S.

More information in:
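As a purely numerical cross-check of the algebra in eq. 2 above (using the post's symbols, with g the Lorentz factor and arbitrary test values in units where c = 1):

```python
import math

# Spot-check: compute y_B - y_A via the Lorentz-transformed times and
# compare with the closed-form expression 0.5E(L^2 v^2/c^4 - 2Lvt/(c^2 g)).
c, v, E, L, t = 1.0, 0.6, 2.0, 3.0, 1.7      # arbitrary test values
g = 1.0 / math.sqrt(1.0 - v**2 / c**2)       # Lorentz factor

x_A = v * t
x_B = v * t + L / g
T_A = g * (t - v * x_A / c**2)
T_B = g * (t - v * x_B / c**2)

y_diff = 0.5 * E * T_B**2 - 0.5 * E * T_A**2            # via transformation
closed = 0.5 * E * (L**2 * v**2 / c**4 - 2 * L * v * t / (c**2 * g))
assert abs(y_diff - closed) < 1e-9
```

This only confirms that the closed form follows from the stated substitutions; whether those substitutions encode the physics correctly is the actual question under discussion.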

I'm currently looking at the rheological properties of the polymer xanthan gum, focusing more specifically on its dynamic viscosity. I'm assessing the effects of pH (ranging from 3.6 to 5.6 in 0.4 increments, six pHs in total) on the dynamic viscosity of xanthan gum solution (xanthan gum powder dissolved in acetate buffer of equal ionic strength; concentration kept at 0.04%).

Firstly, my viscosity data shows that as pH increases from 3.6 to 4.0 and then 4.4, the viscosity increases; but as I bring the pH up from 4.4 to 4.8, 4.8 to 5.2, and lastly 5.2 to 5.6, the increasing trend plateaus and the increase in viscosity is less significant compared to the 3.6-4.4 jump. In this range, does pH have an effect on the viscosity of xanthan gum via its molecular configuration? Some sources state that xanthan gum's viscosity remains stable and unchanged within the range of pH 3-12, though at a high concentration like 1%, not 0.04%; yet others suggest pH still plays a role, though I'm not sure how on the chemical and molecular level.

A possible conjecture I can think of is that xanthan gum's order-disorder (helix-coil) transition is affected by protonation. Figure 2 demonstrates how electrolytes affect the structure of the polymer; figure 3 shows how, in the form of a helical rod rather than a random coil, the chains are capable of hydrogen bonding with each other. Hence, I'm wondering if pH plays a role in this structural transition, such that the increased intermolecular forces in the helical-rod form would make the solution more viscous.

Here are the resources I have used so far:

Brunchi, C.E., Bercea, M., Morariu, S., et al. Some properties of xanthan gum in aqueous solutions: effect of temperature and pH. *J Polym Res* **23**, 123 (2016). https://doi.org/10.1007/s10965-016-1015-4

How can I numerically solve coupled polynomial or transcendental equations using MATHEMATICA?
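In Mathematica, the usual built-ins are NSolve for polynomial systems and FindRoot for transcendental ones (FindRoot is a Newton-type iteration started from a user-supplied guess). The underlying idea can be sketched in a few lines; here is a minimal 2x2 Newton iteration in Python, applied to a made-up coupled transcendental system x − cos(y) = 0, y − sin(x) = 0:

```python
import math

def F(x, y):
    # made-up coupled transcendental system (for illustration only)
    return x - math.cos(y), y - math.sin(x)

def J(x, y):
    # analytic Jacobian of F
    return (1.0, math.sin(y)), (-math.cos(x), 1.0)

def newton2(x, y, tol=1e-12, max_iter=50):
    # Newton's method: solve J * delta = -F at each step (2x2 by Cramer)
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c
        x += (-f1 * d + f2 * b) / det
        y += (f1 * c - f2 * a) / det
    return x, y

x, y = newton2(0.7, 0.7)   # converges to x ~ 0.768, y ~ 0.695
```

The Mathematica equivalent for this hypothetical system would be roughly `FindRoot[{x - Cos[y] == 0, y - Sin[x] == 0}, {{x, 0.7}, {y, 0.7}}]`; as with any Newton method, a reasonable starting guess matters.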

1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)

2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect? (Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)

I am studying integral transforms (Fourier, Laplace, etc), to apply them in physics problems. However, it is difficult to get books that have enough exercises and their answers. I have found that in particular the Russian authors have excellent books where there are a lot of exercises and their solutions.

Greetings,

Ender

Can the imcontour image-contour function in MATLAB be used on holograms as well?

The image below is an example of an activated sludge cell under a light microscope and the second figure is an example of another cell.

Generally observed at low strain rates for fine-grained materials.

For industrial scale:

which materials are viable?

which parameters need to be altered?

Kindly express your views.

Jerk is defined as the rate of change of acceleration. I would like to know some practical applications of jerk in order to have a better understanding. I kindly request some examples.
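Since jerk is just da/dt, it is easy to estimate from sampled acceleration data (this is how ride-comfort limits in elevators and rail transport, or smooth motion profiles in CNC/robotics, are checked in practice). A minimal sketch with a hypothetical linearly ramping acceleration, a(t) = 6t m/s², whose jerk is a constant 6 m/s³:

```python
# Hypothetical example: acceleration ramps linearly, a(t) = 6*t m/s^2,
# so the jerk da/dt should come out as a constant 6 m/s^3.
dt = 0.01                                   # sample spacing, s
ts = [i * dt for i in range(101)]
accel = [6.0 * t for t in ts]               # sampled acceleration, m/s^2

# central-difference estimate of jerk at the interior sample points
jerk = [(accel[i + 1] - accel[i - 1]) / (2 * dt)
        for i in range(1, len(ts) - 1)]
```

For a linear ramp the central difference is exact up to rounding, so every entry of `jerk` is 6 m/s³; with real sensor data one would low-pass filter first, since differentiation amplifies noise.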

Please see the attached file RPVM.pdf. Any comment will be welcome.

More on this subject at:

Kindly express your views on what we can decipher from the storage modulus and tan δ, with respect to failure mechanisms, tribological performance, etc.

As every keyword has certain experts associated with it, based on their area of expertise, can we initiate a devoted section for research collaboration, especially for calls requiring bilateral or multinational collaborators?

Presently, my group is looking for a German collaborator in the area of "high-performance C-C composites" for the following call: http://www.dst.gov.in/sites/default/files/DST-DFG-JointCall-10AUG2018.pdf

Finding it difficult to have a relevant and interested group from Germany. kindly text me if anyone finds this call interesting.

Please share how I should proceed to search for interested researchers from Germany, My present approach is searching research papers on said area and mailing to the researcher if find relevant.

Why is the expectation value of the squared angular momentum component <J_x^2> equal to <J_y^2>? How can we prove this?

Hi,

I am a physicist from Denmark planning a quantum experiment.

I need a cryostat that can go below 500 millikelvin.

I am looking for advice from researchers who have worked with cryostats before.

What are the best options in terms of flexibility, price, maintenance, and operation?

I propose the following idea.

1. attach ions to a virus

2. localize it with ion trap

3. burn a hole in the middle thus destroying DNA

4. put in water

I would like to hear from both physicists and biologists. Is this possible?

If I were to make a half dome as an umbrella to protect a city from rain and sun, how would I proceed? Are there special materials, or do you have an idea of how to make this? What do you say about an energy shield?

Consider the powerful central role of differential equations in physics and applied mathematics.

In the theory of ordinary differential equations and in dynamical systems we generally consider smooth or C^k class solutions. In partial differential equations we consider far more general solutions, involving distributions and Sobolev spaces.

I was wondering, what are the best examples or arguments that show that restriction to the analytic case is insufficient?

What if we only consider ODEs with analytic coefficients and only consider analytic solutions? And likewise for PDEs. Here by "analytic" I mean real maps which can be extended to holomorphic ones. How would this affect the practical use of differential equations in physics and science in general? Is there an example of a differential equation arising in physics (excluding quantum theory!) which only has C^k or smooth solutions and no analytic ones?

It seems we could not even have an analytic version of the theory of distributions, as there could be no test functions: there are no non-zero analytic functions with compact support.
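The standard witness for that last point is the bump function; the numerical sketch below (a check of the claim, not a proof) shows a function that is C-infinity everywhere, identically zero outside (-1, 1), and therefore cannot be analytic, since all its derivatives vanish at the endpoints while the function is nonzero just inside:

```python
import math
import numpy as np

# The classic bump function: infinitely differentiable everywhere,
# identically zero outside (-1, 1). All derivatives vanish at x = +/-1,
# so its Taylor series there is the zero series, yet the function is
# nonzero just inside: smooth but not analytic.
def bump(x):
    x = np.asarray(x, dtype=float)
    inside = np.abs(x) < 1.0
    safe = np.where(inside, 1.0 - x ** 2, 1.0)  # avoid division by zero outside
    return np.where(inside, np.exp(-1.0 / safe), 0.0)

print(float(bump(0.0)))                      # maximum value, exp(-1) ~ 0.3679
print(float(bump(1.0)), float(bump(2.0)))    # 0.0 0.0 -- compact support
```

Translates and products of such bumps give the partitions of unity and test functions that the theory of distributions relies on, none of which survive in the analytic category.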

Is Newtonian physics analytic? Is Newton's law of gravitation only the first term in a Laurent expansion? Can we add terms to obtain a better fit to experimental data without going relativistic?

Maybe we can consider that the smooth category is used as a convenient approximation to the analytic category. The smooth category allows perfect locality. For instance, we can consider that a gravitational field dies off outside a finite radius.

Cosmologists usually consider space-time to be a manifold (although with possible "singularities"). Why a manifold rather than the adequate smooth analogue of an analytic space?

Space = regular points, Matter and Energy = singular points ?

According to Mach's Principle, remote masses of the universe may be assumed to have some influence on local phenomena, in particular the appearance of inertial forces. As mentioned in a recent discussion on "aether" (see reference below), the cumulative gravitational potential of remote masses is about 10^8 times larger than the local potential of the sun at the earth's distance. Close to the sun's surface, the local potential of the sun is, owing to the smaller distance, about two orders of magnitude larger, which is still 10^6 times smaller than the cumulative potential originating from the background masses of the universe. It is interesting to note that the Shapiro delay, when assigned to a local retardation of luminal speed due to some locally enhanced gravitational potential, will have a relative effect of similar order.

For example, I have two vectors **A** and **B** in 2D rectangular coordinates (x, y). I can calculate the scalar (dot) product as (**A**,**B**) = A_x B_x + A_y B_y. In polar coordinates (r, phi) it will be (**A**,**B**) = A_r B_r + A_phi B_phi, since these coordinates are orthogonal and normalized. If I want to make the transition, what should I write?

1) A_r B_r + A_phi B_phi = (A_x^2 + A_y^2)^0.5 (B_x^2 + B_y^2)^0.5 + atan(A_y/A_x) atan(B_y/B_x), without Lame coefficients, or

2) A_r B_r + A_phi B_phi = (A_x^2 + A_y^2)^0.5 (B_x^2 + B_y^2)^0.5 (1 + atan(A_y/A_x) atan(B_y/B_x)), with Lame coefficients.

And finally, both these cases are not the same as (**A**,**B**) = A_x B_x + A_y B_y. How can we explain this inconsistency?

Dear friends!

I hope you had a wonderful Christmas. I'm very much interested in teaching, and I would like to ask your views on a highly interesting (to me at least) topic. There are probably as many teaching methods as there are lecturers, but here are a few types accepted in the literature.

- Teacher/lecturer-Centered
- Student-Centered / Constructivist Approach.
- Inquiry-Based Learning.
- Flipped Classroom.
- Cooperative Learning.
- Personalised Education.

Which one do you use and why?

Best wishes Henrik

Hello, I'm currently in an engineering module on fluid mechanics, and I was hoping someone could assist me with the process for these two questions while my professor is absent. I feel my process is correct, but I keep getting answers only close to the real answer. I won't be re-answering these questions, so anyone who can properly work through them and arrive at the answers would be appreciated.

**Q1**

'A fluid of density 807 kg/m3 flows through a sudden contraction into a pipe of diameter 17 mm, with final mean velocity 2.4 m/s.

If the total pressure loss is 401 Pa, calculate the diameter of the vena contracta, in mm to 1 decimal place'

Answer: 14.3 ± 0.05
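Not a definitive solution, but the following sketch reproduces the quoted answer, assuming the quoted loss is the Borda-Carnot expansion loss from the vena contracta back to the full pipe area:

```python
import math

# Given data (Q1)
rho = 807.0      # fluid density, kg/m^3
d2 = 17e-3       # downstream pipe diameter, m
v2 = 2.4         # mean velocity in the pipe, m/s
dp_loss = 401.0  # total pressure loss, Pa

# Assumed model: Borda-Carnot loss = 0.5*rho*v2^2*(1/Cc - 1)^2,
# with Cc = A_contracta / A_pipe the area contraction coefficient.
k = math.sqrt(2 * dp_loss / (rho * v2 ** 2))  # = 1/Cc - 1
Cc = 1.0 / (1.0 + k)
dc = d2 * math.sqrt(Cc)                       # vena contracta diameter, m
print(round(dc * 1e3, 1))  # -> 14.3 (mm)
```

If your working uses a different loss model, that would explain answers that are only close to the target.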

**Q2**

'Two sealed tanks, whose total pressures differ by 3,764 Pa, are connected by a pipe of length 82 m and diameter 22.8 cm. A fluid of density 897 kg/m3 flows from one tank to the other at a steady rate. Calculate the mass flow rate in SI units, to 3 s.f., when

- the contraction loss coefficient is 0.5 at the entry to the pipe
- the friction factor for the pipe = 0.05(1 + (20x10^3 / d))
- the downstream tank can be assumed to be infinite.

Answer: 23.1 ± 0.05 (kg/s)
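For what it's worth, this sketch reproduces the quoted answer; note that it reads the friction-factor term as 0.05(1 + 20x10^-3 / d) with d in metres (an assumption on my part, since that is what matches the stated result), and takes the exit loss into the infinite tank as K = 1:

```python
import math

# Given data (Q2)
rho = 897.0   # kg/m^3
dp = 3764.0   # total pressure difference between the tanks, Pa
L = 82.0      # pipe length, m
d = 0.228     # pipe diameter, m

# Assumed reading of the friction-factor formula (reproduces the answer):
f = 0.05 * (1 + 20e-3 / d)

# Total loss coefficient: entry contraction + pipe friction + exit into tank.
K = 0.5 + f * L / d + 1.0
v = math.sqrt(2 * dp / (rho * K))              # from dp = K * 0.5*rho*v^2
mdot = rho * (math.pi / 4) * d ** 2 * v        # mass flow rate, kg/s
print(round(mdot, 1))  # -> 23.1 (kg/s)
```

A common slip here is forgetting the exit loss (K = 1) for discharge into a large tank, which shifts the result by a few percent.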

Thank you

Dear Sir/Madam,

I'm trying to create a WFN file for the B2H6 molecule, but I can't obtain the correct structure of this molecule. The GJF input file is attached. The attached run fails with the "Link 9999" error, which shows my initial starting structure is not good. The related LOG file is attached too. Please let me know how I can obtain a correct initial starting structure for B2H6.

Generally, how does one draw a 3-centre 2-electron bond in GaussView?

Thanks in advance.

Kind Regards,

Hadiseh

Cosmological explanations for our apparently fine-tuned universe are basically divided between a) a vastly huge multiverse of universes with varying fundamental force and mass constants, including the cosmological constant (where our apparently fine-tuned universe is just one universe in this multiverse), or b) a cosmic intelligence that fine-tuned our universe at its beginning to evolve stable galaxies, life and developed minds. In scientific terms, which explanation is preferable? Are there other options? Is a cosmic mind a viable scientific hypothesis for explaining our universe's origin?

What can cause severe bending on a wind power plant's tower (only the tower part)? I assume that as the wind hits the blades it creates a bending moment which varies linearly from blade tip to blade base. This moment is transferred from the blade's base to the top of the tower (we then have a model like a beam with one end fixed and a bending moment on top). And I think this moment makes a larger contribution to the tower's bending than the force of the wind hitting the structure directly. Correct me if I am wrong.

I am asking this question on the supposition that a classical body may be broken down into particles which are so small that quantum mechanics is applicable to each of them. Here the number of particles tends to be uncountably large (keeping number/volume constant).

Now, statistical mechanics is applicable if a practically infinite number of particles is present. So if a practically infinite number of infinitely small particles is present, quantum statistical mechanics may be applied to this collection. (Please correct me if I have a wrong notion.)

But this collection of infinitesimally small particles makes up the bulk body, which can be studied using classical mechanics.

There does exist a rough mathematical relationship between the modern force strengths, notably between gravity and the others. This relationship involves time: the times at which the forces emerge with their own modern identities from the conditions at that time.

*(PDF) Coarse Force-Strength Relationships*. Available from: https://www.researchgate.net/publication/356189385_Coarse_Force-Strength_Relationships [accessed Nov 13 2021].

I am wondering if it is possible to measure mode-locked laser stability (timing jitter, noise) with an oscilloscope if the pulse duration is in the femtosecond regime (let's say 150 fs). If so, what type of measurement on an oscilloscope would quantify laser stability? What should the bandwidth of the photodetector and oscilloscope be?

The threats that global warming has recently posed to humans in many parts of the world have led us to continue this debate.

So the main question is: what actions need to be taken to reduce the risk of climate warming?

Reducing greenhouse gases now seems an inevitable necessity.

In this part, in addition to the aforementioned main question, other specific well-known subjects from the previous discussion are revisited. Please support or refute the following arguments in a **scientific** manner.

% -----------------------------------------------------------------------------------------------------------%

% ---------------- *** Updated Discussions of Global Warming (section 1) *** ---------------%

The rate of increase of the earth's mean temperature has almost doubled with respect to 60 years ago; this is a fact (Goddard Institute for Space Studies, GISS, data). Still, a few questions regarding the physical processes associated with global warming remain unanswered, or at least need more clarification. So the causes and prediction of this trend are open questions. The most common subjects are listed below:

1) "Greenhouse effect increases the temperature of the earth, so we need to diminish the emission of CO2 and other air pollutants." The logic behind this reasoning is that the effects of other factors, like the sun's activity (solar wind contribution), the earth's rotation and orbit, ocean CO2 uptake, volcanic activity, etc., are not as important as the greenhouse effect. **Is the ocean passive in the aforementioned scenario?**

2) Two major turbulent fluids, the oceans and the atmosphere, interact with each other, and each has a different circulation timescale; for the oceans it ranges from years to millennia, which affects heat exchange. The ocean is not in equilibrium with the sun instantaneously. For example, the North Atlantic Ocean circulation is quasi-periodic with a recurrence period of about 7 kyr. So climate change has always occurred. Does the timescale of the crucial players (NAO, AO, oceans, etc.) affect the results?

3) The energy of the atmospheric system, including absorption and re-emission, is about 200 W/m2. What percentage does CO2 contribute to this budget (2% or more?), and does it therefore have just a minor effect or not?

4) The climate system is a multi-factor process, and there exist natural modes of temperature variation. How do anthropogenic CO2 emissions push the natural temperature variations out of balance?

6) Some weather and climate models based on primitive equations are able to reproduce reliable results. Are the available models able to predict future decadal variability exactly? How large is the uncertainty of the results? An increase in CO2 apparently leads to a higher mean temperature due to radiative transfer.

7) How is global warming related to extreme weather events?

Some of the consequences of global warming are more frequent rainfall, heat waves, and cyclones. If we accept global warming as an effect of anthropogenic fossil fuels, how can we stop the increasing trend of the temperature anomaly and switch to clean energies?

8) What are the roles of sun activities coupled with Milankovitch cycles?

9) What are the roles of politicians in alarming the public about the danger of global warming? How sensitive are scientists to these decisions?

10) How long is CO2's residence time in the atmosphere? To answer this question precisely, we need a good understanding of the CO2 cycle.

11) Clean energy reduces toxic buildups and harmful smog in air and water. So, how urgent are building renewable energy generation and demanding clean energy?

% -----------------------------------------------------------------------------------------------------------%

% ---------------- *** Discussions of Global Warming (section 2) *** ---------------%

Warming of the climate system in recent decades is unequivocal; nevertheless, despite the scientific articles that show greenhouse gases and human activity as the main causes of global warming, the debate is still not over, and some opponents claim that these effects have only a minor impact on human life. Some relevant topics/criticisms about global warming, its causes, its consequences, the UN's Intergovernmental Panel on Climate Change (IPCC), etc., are put up for discussion and debate:

1) All the greenhouse gases (carbon dioxide, methane, nitrous oxide, chlorofluorocarbons (CFCs), hydrofluorocarbons, including HCFCs and HFCs, and ozone) account for about a tenth of one percent of the atmosphere. Based on the Stefan–Boltzmann law of basic physics, if you consider the earth, with its albedo (a measure of the reflectivity of a surface), in thermal balance (that is, the power radiated from the earth in terms of its temperature equals the solar flux through the earth's cross section), you get Te = (1 - albedo)^0.25 * Ts * sqrt(Rs/(2*Rse)), where Te (Ts) is the temperature at the surface of the earth (Sun), Rs is the radius of the Sun, and Rse is the radius of the earth's orbit around the Sun. This simplified equation shows that Te depends on these four variables: albedo, Ts, Rs, Rse. Just a 1% variation in the Sun's activity leads to a variation of the earth's surface temperature by about half a degree.

1.1) Is the Sun's surface (photosphere layer) temperature (Ts) constant?

1.2) How much is the uncertainty in measuring the Sun's photosphere layer temperature?

1.3) Is solar irradiance spectrum universal?

1.4) Is the earth's orbit around the sun (Rse) constant?

1.5) Is the radius of the Sun (Rs) constant?

1.6) Is the albedo's magnitude mostly due to clouds or to the man-made greenhouse gases?

So the sensitivity of global mean temperature to variation of tracer gases is one of the main questions.
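Plugging standard textbook values (the numbers below are my assumptions, not from the post) into the Te formula of item 1 gives the familiar effective temperature of about 255 K, and also the sensitivity figure quoted there:

```python
# Effective (no-greenhouse) temperature of the Earth from radiative balance.
# All numerical values are standard textbook values, assumed here.
albedo = 0.30   # Earth's Bond albedo
Ts = 5778.0     # solar photosphere temperature, K
Rs = 6.957e8    # solar radius, m
Rse = 1.496e11  # Earth-Sun distance, m

Te = (1 - albedo) ** 0.25 * Ts * (Rs / (2 * Rse)) ** 0.5
print(round(Te))  # -> 255 (K)

# A 1% change in solar luminosity (L ~ Ts^4) shifts Te by only ~0.25%:
print(round(0.25 * 0.01 * Te, 2))  # -> 0.64 (K), the "half a degree" above
```

The ~33 K gap between this 255 K and the observed ~288 K surface temperature is the greenhouse effect the thread is debating.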

2) A favorable climate model is essentially a coupled non-linear chaotic system; that is, it is not appropriate for long-term prediction of future climate states. So which types of models are appropriate?

3) Dramatic temperature oscillations were possible within a human lifetime in the past. So there is nothing to worry about. What is wrong with the scientific method applied to extract temperature oscillations in the past from Greenland ice cores or shifts in types of pollen in lake beds?

4) IPCC Assessment Reports,

IPCC reports are known as some of the most reliable sources on climate change, although some minor shortcomings have been observed in them.

4.1) "What is Wrong With the IPCC? Proposals for a Radical Reform" (Ross McKitrick):

The IPCC has provided several climate-change Assessment Reports during the last decades. Is a radical reform of the IPCC necessary, or should we take all the IPCC alarms seriously? What is wrong with Ross's argument? The models used by the IPCC have already captured a few of the crudest features of climate change.

4.2) Typical issues with IPCC reports:

- The summary reports focus on those findings that support the human interference theory.

- Some arguments are based on this assumption that the models account for most major sources of variation in the global mean temperature anomaly.

- "Correlation does not imply causation": in some Assessment Reports, results were obtained from correlation methods rather than from investigating the downstream effects of interventions or from a double-blind controlled trial; the conclusions do, however, come with a level of reported uncertainty.

4.3) Nongovernmental International Panel on Climate Change (NIPCC) also has produced some massive reports to date.

4.4) Is the NIPCC a scientific or a politically biased panel? Can NIPCC climate reports be trusted?

4.5) What is wrong with their scientific methodology?

5) Changes in the earth's surface temperature cause changes in upper-level cirrus and consequently in the radiative balance. So the climate system can increase its cooling through these types of feedback and adjust to imbalances.

6) What is your opinion about political intervention and its effect upon direction of research budget?

I really appreciate all the researchers who have had active participation with their constructive remarks in these discussion series.

% -----------------------------------------------------------------------------------------------------------%

% ---------------- *** Discussions of Global Warming (section 3) *** ---------------%

In this part other specific well-known subjects are revisited. Please support or refute the following arguments in a **scientific** manner.

1) Still there is no convincing theory, with a very low range of uncertainty, to calculate the response of the climate system, in terms of the averaged global surface temperature anomalies, with respect to the total feedback factors and greenhouse gas changes. In the classical formula applied in the models, a small variation in positive feedbacks leads to considerable changes in the response (temperature anomaly), while a big variation in negative feedbacks causes only small variations in the response.

2) NASA satellite data from the years 2000 through 2011 indicate the Earth's atmosphere is allowing far more heat to be emitted into space than computer models have predicted (i.e. Spencer and Braswell, 2011, DOI: 10.3390/rs3081603). Based on this research "the response of the climate system to an imposed radiative imbalance remains the largest source of uncertainty. It is concluded that atmospheric feedback diagnosis of the climate system remains an unsolved problem, due primarily to the inability to distinguish between radiative forcing and radiative feedback in satellite radiative budget observations." So the contribution of greenhouse gases to global warming is exaggerated in the models used by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). What is wrong with this argument?

3) Ocean Acidification

Ocean acidification is one of the consequences of CO2 absorption in seawater and a main cause of severe destabilisation of the entire oceanic food chain.

4) The IPCC reports, which are based on a range of model outputs, suffer somewhat from a range of uncertainty because the models are not able to implement appropriately a few large-scale natural oscillations such as the North Atlantic Oscillation, El Nino, the Southern Ocean oscillation, the Arctic Oscillation, the Pacific decadal oscillation, deep ocean circulations, the Sun's surface temperature, etc. The problem with correlating historical observations of the globally averaged surface temperature anomalies with greenhouse gas forcings is that the correlation is not compared against all other natural sources of temperature variability. Nevertheless, the IPCC has provided a probability for most statements. How can the models be improved further?

5) If we look at the micro-physics of carbon dioxide, theoretically a certain amount of heat can be trapped in it as increased molecular kinetic energy, by increasing the vibrational and rotational motions of CO2, but nothing prevents that energy from escaping into space. After a specific relaxation time, the energetic carbon dioxide returns to its ground state.

6) Some alarmists claim there exists a scientific consensus among scientists. Nevertheless, even if this claim is true, asking scientists to vote on whether global warming is caused by man-made greenhouse gas sources does not make sense, because scientific issues are not settled by consensus; indeed, the appeal to majority/authority fallacy is not a scientific approach.

% ---------------- *** Discussions of Global Warming (section 4) *** ---------------%

In this part, in addition to new subjects, I have highlighted some of the responses from previous sections for further discussion. Please leave your comments to support/weaken any of the following statements:

1) @Harry ten Brink recapitulated a summary of a proof that CO2 is such an important Greenhouse component/gas. Here is a summary of this argument:

"a) Satellites' instruments measure the radiation coming up from the Earth and Atmosphere.

b) The emission of CO2 at the maximum of the terrestrial radiation at 15 micrometer.

b1. The low amount of this radiation emitted upwards means that the "back-radiation" towards the Earth is high.

b2. Else said the emission is from a high altitude in the atmosphere and with more CO2 the emission is from an even higher altitude where it is cooler. That means that the emission upwards is less. This is called in meteorology a "forcing", because it implies that less radiation /energy is emitted back into space compared to the energy coming in from the sun.

The atmosphere warms so the energy out becomes equals the solar radiation coming in. Summary of the Greenhouse Effect."

At first glance, this reasoning seems plausible. It is based on the assumptions that the contribution of CO2 is not negligible and that other gases like N2O or ozone have only minor effects. The structure of this argument is supported by an article by Schmidt et al., 2010:

By using the Goddard Institute for Space Studies (GISS) ModelE radiation module, the authors claim that "water vapor is the dominant contributor (∼50% of the effect), followed by clouds (∼25%) and then CO2 with ∼20%. All other absorbers play only minor roles. In a doubled CO2 scenario, this allocation is essentially unchanged, even though the magnitude of the total greenhouse effect is significantly larger than the initial radiative forcing, underscoring the importance of feedbacks from water vapour and clouds to climate sensitivity."

The following notions probably will shed light on the aforementioned argument for better understanding the premises:

Q1) Is there any observational data to support the overall upward/downward IR radiation because of CO2?

Q2) How can we separate practically the contribution of water vapor from anthropogenic CO2?

Q3) What are the deficiencies of the (GISS) ModelE radiation module, if any?

Q4) Some facts, causes, data, etc. relevant to this argument, which are presented by NASA, strongly support it (see: https://climate.nasa.gov/evidence/).

Q5) Stebbins et al. (1994) showed that there exists "A STRONG INFRARED RADIATION FROM MOLECULAR NITROGEN IN THE NIGHT SKY" (thanks to @Brendan Godwin for mentioning this paper). As more than 78% of dry air is nitrogen, the contribution of this element is not negligible either.

2) The mean global temperature is not the best diagnostic for studying the sensitivity to global forcing, because, given a change in this mean value, it is almost impossible to attribute it to global forcing. The zonal and meridional distributions of heat flux and temperature are not uniform over the earth, so although the mean temperature value is useful, we need a plausible map of the spatial variation of temperature.

3) "The IPCC model outputs show that the equilibrium response of mean temperature to a doubling of CO2 is about 3C while by the other observational approaches this value is less than 1C." (R. Lindzen)

4) What is the role of the thermohaline circulation (THC) in global warming (or the other way around)? It is known that during Heinrich events and Dansgaard‐Oeschger (DO) millennial oscillations, the climate was subject to a number of rapid coolings and warmings at a rate much higher than what we see in recent decades. In the literature, these events were most probably associated with north-south shifts in the convection location of the THC. The formation speed of North Atlantic Deep Water (NADW) affects the northerly advection velocity of the warm subtropical waters that would normally heat/cool the atmosphere over Greenland and western Europe.

I really appreciate all the researchers who have participated in this discussion with their useful remarks, particularly Harry ten Brink, Filippo Maria Denaro, Tapan K. Sengupta, Jonathan David Sands, John Joseph Geibel, Aleš Kralj, Brendan Godwin, Ahmed Abdelhameed, Jorge Morales Pedraza, Amarildo de Oliveira Ferraz, Dimitris Poulos, William Sokeland, John M Wheeldon, Michael Brown, Joseph Tham, Paul Reed Hepperly, Frank Berninger, Patrice Poyet, Michael Sidiropoulos, Henrik Rasmus Andersen, and Boris Winterhalter.

%%-----------------------------------------------------------------------------------------------------------%%

Our answer is YES. This is, however, a frequent question, and the usual answer has been: no. For context, see Steven Weinberg's 2016 Patrusky Lecture video, "What's the matter with quantum mechanics?"

We take the reasoned position: yes. Thinking otherwise would be to give up on deductive reasoning, on physics, on causality.

What is your qualified opinion?

Hi,

I have a question in the field of computational physics. What is the physical meaning of the memory kernel in the generalized Langevin equation? As I am not a physicist, I have no intuition for this concept and need a simpler description.

Thanks a lot
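Not an authoritative answer, but a minimal deterministic sketch (noise term omitted, all numbers invented) of what the memory kernel does: the friction at time t is a weighted average of the velocity history, weighted by the kernel K, rather than depending only on the instantaneous velocity:

```python
import numpy as np

# GLE drag term only (fluctuating force omitted for clarity):
#   v'(t) = -integral_0^t K(t-s) v(s) ds
# Exponential kernel: the friction "remembers" the velocity over a time ~tau.
dt, n, tau = 0.01, 300, 0.5
t = np.arange(n) * dt
K = np.exp(-t / tau) / tau                  # memory kernel, integrates to ~1

v = np.zeros(n)
v[0] = 1.0                                  # initial velocity
for i in range(1, n):
    drag = dt * np.sum(K[:i][::-1] * v[:i]) # convolution over the history
    v[i] = v[i - 1] - dt * drag             # explicit Euler step

# Because the drag lags the motion, the velocity overshoots through zero,
# which is impossible with memoryless (Markovian) friction -v/tau.
print(bool(v[-1] < 0.0))  # -> True
```

In the limit where K becomes a delta function, the integral collapses to ordinary instantaneous friction and the standard Langevin equation is recovered; a broad kernel encodes the finite response time of the surrounding bath.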

When a 2DEG is subjected to a magnetic field, the energy splits into Landau levels, and the QHE is explained on that basis. However, in the QSHE the quantized resistance is obtained without a magnetic field. How, then, are Landau levels formed in the QSHE?

In non-local measurements, we apply a current between two leads and measure the voltage on different leads away from the current leads. To calculate the resistance, do we need to divide the non-local voltage by the current, given that the current does not flow through the voltage leads?

Can you please suggest good literature on non-local measurements?

Thanks

I have several confusions about the Hall and quantum Hall effects:

1. Does the Hall effect/QHE depend on the length and width of the sample?

2. Why is the integer quantum Hall effect called a one-electron phenomenon? Many electrons occupy a single Landau level, so why a single electron?

3. Can SdH oscillations be seen in 3D materials?

4. Suppose there is one edge channel and the corresponding resistance is h/e^2; why, then, are different values such as h/3e^2, h/4e^2, h/5e^2 measured across contacts? How do contact leads change the exact quantization value, and how can it be calculated depending on the number of leads?

5. How can we verify that the observed edge conductance does not have any bulk contribution?
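For reference on question 4, the plateau values are simple to tabulate from the (exact, since the 2019 SI redefinition) constants; in the Landauer-Büttiker picture each additional edge channel contributes e^2/h of conductance, so the resistances are h/(nu e^2):

```python
# Quantized Hall resistances R = h / (nu * e^2) for filling factors nu.
h = 6.62607015e-34    # Planck constant, J s (exact in SI since 2019)
e = 1.602176634e-19   # elementary charge, C (exact)

R_K = h / e ** 2      # von Klitzing constant, ohms
print(round(R_K, 1))  # -> 25812.8
for nu in (2, 3, 4, 5):
    print(nu, round(R_K / nu, 1))  # h/2e^2, h/3e^2, ...
```

Which plateau a given pair of contacts sees depends on how many channels connect them, which is the multi-lead bookkeeping the Landauer-Büttiker formalism handles.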