Science topic

Scattering - Science topic

Explore the latest questions and answers in Scattering, and find Scattering experts.
Questions related to Scattering
  • asked a question related to Scattering
Question
3 answers
Dear all, according to the equation, we can get the inverse relaxation time for the ionized-impurity scattering mechanism. But I don't know how to get the permittivity epsilon and the electron wave vector. Looking forward to your answer.
Relevant answer
Answer
You ask two things. First, you want to know how to do the integration. This is part of the standard impurity-scattering theory, known as Brooks-Herring theory. Key is the assumption of isotropic scattering, so the integration over the angles yields 4π and the integration over the modulus k becomes k² dk. To do the whole derivation here would be too much. As to your actual question: epsilon is the static permittivity of the material (ε = ε_r·ε_0), and the electron wave vector follows from the carrier energy, k = √(2m*E)/ħ for a parabolic band.
Secondly, you talk about degeneracy. This question is ambiguous. Each state in a band-structure calculation is defined by a wave vector k and an energy E(k); let us call this a state. This state itself can be degenerate for reasons of symmetry, and in practice the degeneracy can be 1 (general), 2 or 3. Correspondingly there can be 2, 4 or 6 charge carriers, electrons or holes, in the state, counting spin. This is the occupancy. Whether these states will really be occupied is determined by the statistics, i.e. the influence of temperature; for semiconductors this is described by Fermi-Dirac. If the difference between Fermi-Dirac and Boltzmann statistics is not negligible, the statistics is, again, called degenerate. However, please study the literature; to explain this in detail would take pages.
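For a numerical feel, here is a minimal sketch (my own illustration, not part of the answer above) of the Brooks-Herring inverse relaxation time in a common textbook form, assuming a parabolic band, Debye screening and non-degenerate statistics; the material parameters are placeholders, and the prefactor should be checked against your own reference:

import numpy as np

# Physical constants (SI units)
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
kB   = 1.380649e-23       # Boltzmann constant, J/K
m0   = 9.1093837015e-31   # electron rest mass, kg

# Example material parameters (placeholders -- substitute your own)
m_eff = 0.2 * m0   # effective mass
eps_r = 12.0       # relative (static) dielectric constant
N_I   = 1e23       # ionized-impurity concentration, m^-3
n     = 1e23       # free-carrier concentration for screening, m^-3
T     = 300.0      # temperature, K
Z     = 1          # impurity charge number

eps = eps_r * eps0
# Debye screening length in the non-degenerate limit
L_D = np.sqrt(eps * kB * T / (e**2 * n))

def inv_tau_bh(E):
    """Brooks-Herring inverse relaxation time 1/tau(E); E in joules."""
    k = np.sqrt(2.0 * m_eff * E) / hbar   # electron wave vector, parabolic band
    b = (2.0 * k * L_D)**2                # screening parameter
    pref = N_I * Z**2 * e**4 / (16.0 * np.pi * np.sqrt(2.0 * m_eff) * eps**2 * E**1.5)
    return pref * (np.log(1.0 + b) - b / (1.0 + b))

E = 1.5 * kB * T   # a thermal energy, as an example
print(f"1/tau(E) = {inv_tau_bh(E):.3e} 1/s")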
  • asked a question related to Scattering
Question
4 answers
For details, please open the attached file.
Relevant answer
Answer
Dear Mr. Aktas,
The answer should be in English.
  • asked a question related to Scattering
Question
3 answers
Is it the refractive index?
Relevant answer
Answer
The index of refraction plays a role, but it is not the most important parameter. What matters most is the structure of the surface. Because there are many ways a surface might be structured, I hesitate to describe it as "roughness"; however, that's the general idea. There are different parameterized descriptions of surface roughness, and most of them can be directly translated into scattering profiles. In fact, going the other direction, scatterometry is a standard metrology technique for characterizing surface roughness. So what I'm trying to say is that scattering and roughness go hand-in-hand. Roughness, or similar surface structure, or similar micro-inhomogeneity, is the most important component of scattering.
If the surface has features that tend to be large compared to the wavelength of the light, then you can describe the scattering pretty accurately by assuming each little wavelength-sized region is locally smooth. Each little region contributes its bit of Fresnel-coefficient reflection and/or refraction. Angle of incidence equals angle of reflection, and Snell's law applies (which is how and why the index matters). Ray tracing applies. The net scattering is the sum of all the pieces (a coherent sum if you are scattering a coherent light source). This simple construction doesn't describe multiple scattering: particularly steep facets may reflect into another facet on the same surface, so sometimes a ray scatters twice or more. However, that is often unimportant, as in many cases the surface roughness doesn't get that steep anywhere.
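As an illustration of the Fresnel ingredient in this facet picture, here is a minimal sketch (my own, under the stated smooth-facet assumption) of the unpolarized reflectance that each locally smooth facet contributes at its local angle of incidence:

import numpy as np

def fresnel_R(n1, n2, theta_i_deg):
    """Unpolarized Fresnel power reflectance of a smooth interface, n1 -> n2."""
    ti = np.deg2rad(theta_i_deg)
    ci = np.cos(ti)
    st = n1 * np.sin(ti) / n2        # Snell's law: n1 sin(theta_i) = n2 sin(theta_t)
    ct = np.sqrt(1.0 - st**2 + 0j)   # complex sqrt handles total internal reflection
    rs = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)   # s-polarized amplitude coefficient
    rp = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)   # p-polarized amplitude coefficient
    return 0.5 * (abs(rs)**2 + abs(rp)**2)

# Each locally smooth, wavelength-sized facet contributes this reflectance
# at its own local angle of incidence; here, air to glass at 45 degrees:
print(fresnel_R(1.0, 1.5, 45.0))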
  • asked a question related to Scattering
Question
1 answer
How is precipitation predicted, and how can we use satellite and gauge products such as the Climate Prediction Center (CPC) MORPHing technique (CMORPH), the Global Satellite Mapping of Precipitation (GSMaP), the Tropical Rainfall Measuring Mission (TRMM), and the TRMM Multisatellite Precipitation Analysis (TMPA)?
Precipitation is one of the most important components of the global water cycle and is closely tied to atmospheric circulation, climate, and climate change; precipitation data are used for weather forecasting, hydrological process modeling, disaster monitoring, and more. Because precipitation varies widely in space and time, accurate and reliable products with high temporal and spatial resolution are needed for stakeholder decision-making, down to the local scale. Time series of precipitation reveal spatial and temporal patterns, such as a tendency towards increasing rainfall in a particular region, and the spatio-temporal distribution directly affects the availability of water in rivers and watersheds. Precipitation data are therefore an essential input to hydrological analysis, yet they are often insufficient and incomplete for several reasons: gaps in the observational record, an uneven distribution of precipitation stations, a limited number of observers and systematic observations, and manual data entry. Surface precipitation observations are also difficult to obtain in real time and require quality control before they can be used directly. Nevertheless, accurate, long-term, spatially and temporally resolved precipitation data are needed for climate change prediction, simulation studies, hydrological forecasting, the management of floods, landslides, droughts and other disasters, and the investigation of water resources. Several factors that contribute to uncertainty, such as observation errors, errors in boundary or initial conditions, model or system errors, scale differences, and unknown parameter heterogeneity, have a significant impact on lumped and distributed hydrological models. Accurate measurement of precipitation, including its intensity, duration, geographic pattern and extent, is an essential meteorological input for hydrological and water-quality studies and has a significant impact on land-surface and hydrological model output; reliable and consistent forecasts of water quantity and quality depend on it. Large-scale hydrological models often rely on remotely sensed precipitation from satellite sensors because ground sensing equipment and rain gauge networks are lacking. Ground networks such as rain gauges and radars provide the most direct observations of surface precipitation, with high temporal frequency, but these systems have significant drawbacks. Gauges are limited to point-scale observations and are susceptible to misleading readings due to wind and evaporation effects; in addition, spatial interpolation of point-based observations adds uncertainty to the final gridded precipitation dataset on top of the measurement errors. The distribution and density of gauges are critical factors: several studies have shown that fragmented and irregular rain gauge networks have a significant impact on hydrological-model uncertainty, and that this uncertainty decreases with increasing gauge density or an optimized distribution pattern. Ground radar networks, on the other hand, often provide continuous spatial coverage with high spatial and temporal resolution.
However, their accuracy is affected by signal attenuation and extinction, surface scattering (ground clutter), illumination effects, and uncertainty in the reflectivity-rainfall-rate relationship. Newer technologies such as satellite remote sensing can overcome the lack or unavailability of precipitation data: precipitation can be measured remotely, which simplifies data collection at any time and for any region. Satellites generally have several advantages over surface rain gauge stations for measuring precipitation amounts: high spatial and temporal resolution with a wide coverage area, near-real-time data, continuous recording, quick access, less sensitivity to weather effects and field variability, and easy, free data access. Several satellite-based precipitation products are now available, each with a different degree of accuracy, including the Climate Prediction Center (CPC) MORPHing technique (CMORPH), the Global Satellite Mapping of Precipitation (GSMaP), the Tropical Rainfall Measuring Mission (TRMM), the TRMM Multisatellite Precipitation Analysis (TMPA), and others.
Relevant answer
Answer
Statistical approaches use historical weather data together with machine-learning methods to predict precipitation. These methods analyze trends and correlations between individual variables (e.g., humidity, wind patterns) and precipitation.
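As a toy illustration of that idea, here is a minimal sketch with entirely synthetic data and hypothetical predictors (humidity, wind speed, pressure), not a real forecasting system:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical predictors: humidity (%), wind speed (m/s), pressure (hPa)
X = np.column_stack([rng.uniform(30, 100, 500),
                     rng.uniform(0, 20, 500),
                     rng.uniform(980, 1030, 500)])
# Synthetic "precipitation" correlated with high humidity and low pressure
y = 0.3 * X[:, 0] - 0.2 * (X[:, 2] - 1000) + rng.normal(0, 5, 500)

# Fit on the first 400 samples, evaluate on the held-out 100
model = RandomForestRegressor(n_estimators=100).fit(X[:400], y[:400])
print("R^2 on held-out data:", model.score(X[400:], y[400:]))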
  • asked a question related to Scattering
Question
4 answers
Hello everyone. I got my raw data from the VNA in the form of S11 (log), S11 (deg), and so on. If you could please assist me with this issue, I would be thankful.
Note: I've attached my raw data excel form for your references.
Regards & Thanks
Relevant answer
Answer
The power ratio is 10^(dB/10); the electric-field ratio is 10^(dB/20).
The S-parameters are complex numbers: 10^(dB/20)·cos(degrees) + j·10^(dB/20)·sin(degrees).
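For example, a minimal Python sketch of this conversion, assuming the spreadsheet columns are magnitude in dB and phase in degrees as in typical VNA exports:

import numpy as np

def s_param_complex(mag_dB, phase_deg):
    """Convert VNA magnitude (dB) and phase (degrees) to a complex S-parameter."""
    amp = 10.0 ** (mag_dB / 20.0)      # field (voltage-wave) amplitude ratio
    phi = np.deg2rad(phase_deg)
    return amp * np.exp(1j * phi)      # amp*cos(phi) + j*amp*sin(phi)

# Example: S11 = -10 dB at 45 degrees
s11 = s_param_complex(-10.0, 45.0)
print(s11, "|S11|^2 (power ratio) =", abs(s11) ** 2)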
  • asked a question related to Scattering
Question
1 answer
Dear colleagues!
I have encountered a weird bump in the background (BKG). In the picture below you can see that the air-scattering intensity drops with angle as usual, but then the average BKG level rises relatively sharply. If this happened everywhere it would not be a problem, since everything would be on the same level; however, such a wave makes it very difficult to analyze peaks at lower angles. I tried various ways to arrange the optics to mitigate this issue, and the best I could get was to shift the beginning of the bump from 6 to 8 degrees by moving the incident slit closer to the sample.
The manufacturer's engineer said that this can be eliminated completely by installing a parabolic mirror for parallel-beam optics. However, I personally think that this is BS, and such a bump should not be there in the first place.
Does anyone here have ideas about the origin of such a bump, and possibly how it could be removed?
The instrument is in reflection geometry: no monochromator, array detector, incident Soller slit, and a knife edge is installed.
Relevant answer
Answer
The fact that the background onset shifts a bit when you move the primary slit towards the sample may show that you are struggling with Compton scatter from parts of the sample support to the left and/or right of the expected beam footprint on the sample surface. You should mask (cover) these areas with strips of lead foil or a similarly strongly absorbing foil. For a first trial, the gap between these shields may be (a bit) smaller than the sample width.
Just have a look what happens to your background...
Good luck and
best regards
G.M.
  • asked a question related to Scattering
Question
2 answers
I am using Rhodamine6G as gain medium and silver nanoparticles as scatterers on a microscope slide and laser input 532 nm comes from above.
Relevant answer
Answer
Thanks Mike Albert for your detailed and informative answer.
  • asked a question related to Scattering
Question
1 answer
It was found in marine sediments in the Alboran Sea during the Pleistocene. Septate, brown, around 32x15 micrometers, with perforations scattered across the surface, and two apical pori.
Relevant answer
Answer
Reminds me of Bispora.
  • asked a question related to Scattering
Question
1 answer
We are developing the way to 'hyper'-scale X-ray diffraction and total scattering measurements and analysis. I envisage this playing a prominent role in searching large synthetic/processing parameter spaces for developing new materials, for coupling to materials prediction tools, and most importantly for democratising access to high-quality synchrotron data/analysis for researchers around the world and even new communities. I would like to hear researchers' opinions on this topic and particularly ideas for use-cases, especially out-of-the-box ones. Please comment if you have some ideas, I would love to get a discussion going.
Relevant answer
Answer
Standardizing the collection and storage of high-quality diffraction/scattering data, especially experimental data, is of great significance for fostering AI model training and development. This will make a substantial contribution to the interdisciplinary community. I am looking forward to your work.
  • asked a question related to Scattering
Question
3 answers
Line-defect photonic crystal in COMSOL: I am able to get the electric field vs. arc length plot. I would like to know the procedure to get an electric field versus frequency (spectrum) plot. I am using a scattering boundary condition. It would be great if anybody could let me know how to proceed. I am trying the same as in the link below.
Relevant answer
Answer
Hi,
if you want to obtain the band-gap graph for a 2D photonic crystal, there is an example in COMSOL's library.
In addition, this video may be able to guide you.
  • asked a question related to Scattering
Question
5 answers
Hi, I performed a simulation of a transmitarray structure with an impinging plane wave, designed to direct the output beam 20° to the right of the axis perpendicular to the structure. But when processing the results, it seems that the output contains both the contribution of the incident plane wave and the desired beam. How can just the scattered field be obtained?
Relevant answer
Answer
I think that is sensible.
You have managed to cancel the lobe by about 15 dB, which is fair, but not spectacular cancellation. 20 dB is goodish, and 40 dB would be very difficult. I'm not sure how I would go about trying to improve it, though.
The question is, "Is it good enough for you?" Being able to explain it, and being reasonably sure that it doesn't much affect the lobe you want, is also important. It looks as if your correction hardly affected the size of the diffracted lobe at all, and any further corrections should have even less effect on it.
I'm slightly surprised that subtracting the forward scatter hasn't affected the diffracted lobe more. The forward scatter looks about 20 dB down on the diffracted lobe, so has the potential to give about 2 dB ripple.
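(As a check on the arithmetic: a field 20 dB down has an amplitude ratio of 10^(-20/20) = 0.1, and its in-phase/out-of-phase superposition with the main lobe spans 20·log10(1.1/0.9) ≈ 1.7 dB peak-to-peak, consistent with the roughly 2 dB ripple figure.)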
It does look like you have done your subtraction using the complex data, as you should have done, but it would be worth checking again that you have done it correctly.
  • asked a question related to Scattering
Question
9 answers
Given the well-documented quantum Rayleigh scattering of single photons in a dielectric medium
[1] A. P. Vinogradov, V. Y. Shishkov, I. V. Doronin, E. S. Andrianov, A. A. Pukhov, and A. A. Lisyansky, “Quantum theory of Rayleigh scattering,” Opt. Express 29 (2), 2501-2520 (2021).
How can a single photon propagate in a straight line inside a dielectric?
Otherwise, synchronized detection of the 'entangled' photons is impossible.
Relevant answer
Answer
Good day Thomas,
The dimensions of the photon - which is a small amount of EM energy- are evaluated as in:
A. Vatarescu, “Instantaneous Quantum Description of Photonic Wavefronts and Applications”, Quantum Beam Sci. 2022, 6, 29.
Only a group of identical photons can propagate in a straight line inside a dielectric medium: each photon is locally absorbed by a dielectric dipole and re-emitted through stimulated Rayleigh emission driven by the remaining photons coming just behind it. This is also the process that gives rise to the phase shift of the optical beam relative to propagation through vacuum.
  • asked a question related to Scattering
Question
1 answer
I tried finding a list of species in various literature, but the information is very scattered. As I am new to the North East, I am finding it very difficult to compile the list of RET (rare, endangered and threatened) species.
Relevant answer
Answer
1. Consult government and research publications:
- Reach out to the Assam Forest Department, the Assam State Biodiversity Board, and the Zoological Survey of India (ZSI) to access their published reports and checklists on the biodiversity of Assam.
- Look for research articles and reports published by universities, research institutions, and conservation organizations working in the region.
2. Refer to national and international databases:
- The IUCN Red List of Threatened Species (https://www.iucnredlist.org/) is a comprehensive inventory of the global conservation status of plant and animal species.
- The Wildlife Institute of India (WII) maintains a database of threatened species in India, including those found in Assam.
- The Botanical Survey of India (BSI) and the Zoological Survey of India (ZSI) also maintain national-level databases on the flora and fauna of India.
3. Consult field guides and regional literature:
- Look for field guides and identification manuals specific to the flora and fauna of Northeast India or Assam.
- Search for relevant journal articles, book chapters, and conference proceedings that focus on the biodiversity of Assam.
4. Engage with local experts and conservation organizations:
- Reach out to local NGOs, research institutions, and wildlife experts working in Assam to tap into their knowledge and experience.
- Attend regional conferences, workshops, or networking events to connect with researchers and conservationists who might have relevant information.
5. Compile the data and cross-reference:
- As you gather information from various sources, create a comprehensive list of the RET species found in Assam.
- Cross-reference the species names and their conservation status across multiple sources to ensure accuracy and completeness.
- Organize the information in a spreadsheet or database format for easy reference and further analysis.
Good luck; partial credit to AI.
  • asked a question related to Scattering
Question
3 answers
Sunlight is not visible in space because there are no particles in space to scatter and reflect it. Am I right?
Relevant answer
Answer
I think you have it right, but you stated it a little strangely. Yes the sky looks dark because there are no particles to scatter the sunlight back towards your eye. I think that is the main point you are making.
However, it is not quite right to say sunlight isn't visible in space. I think the more proper way to think of it is that light travels in straight lines. If you look towards the sun you will certainly see it. The light travels directly from the sun to your eye and you see the sun (much clearer, sharper, and brighter than looking at the sun through Earth's atmosphere). Similarly, if there is something to scatter the sunlight, for example the space shuttle, light travels directly from the sun to the space shuttle, where it scatters off the surfaces; some of it then travels directly to your eye and you see the space shuttle (again, brighter and sharper than looking at it through the atmosphere).
Looking in a direction where there is nothing to scatter the light, sunlight is passing through that empty space you are looking at, but it isn’t traveling toward your eye. With nothing to scatter it, it continues traveling in a straight line away from the sun, never being redirected back towards your eyes, so you don’t see it. No light is coming to you from that direction and so the sky looks dark.
  • asked a question related to Scattering
Question
1 answer
Hello, I have the results for scattering rates from the EPW software, but I am wondering how I can plot the scattering rates as a function of energy, as in Fig. 6(a) of this paper. Kindly share some information.
Thank You,
# Electron linewidth = 2*Im(Sigma) (meV)
# ik ibnd E(ibnd) Im(Sigma)(meV)
1 2 0.74585345124266E+01 0.00000000000000E+00
1 3 0.74585345124268E+01 0.00000000000000E+00
1 4 0.74585347474930E+01 0.00000000000000E+00
2 2 0.69732345805421E+01 0.00000000000000E+00
2 3 0.73937712638922E+01 0.00000000000000E+00
2 4 0.73937714320523E+01 0.00000000000000E+00
3 2 0.56870328376178E+01 0.59380777038025E+01
3 3 0.72215714326816E+01 0.11854851602414E+01
3 4 0.72215715157624E+01 0.11854851602414E+01
4 2 0.39942422518922E+01 0.13872317960828E+02
4 3 0.69880162209008E+01 0.20934857870599E+01
4 4 0.69880162474195E+01 0.20934857870599E+01
5 2 0.22701285746477E+01 0.79408123683872E+01
5 3 0.67274356213107E+01 0.26055621742724E+01
5 4 0.67274356625937E+01 0.26055621742724E+01
Relevant answer
Answer
Hi Km Sujata,
When you calculate scattering rates using the transport module, they will be printed in the file inv_tau.fmt. These scattering rates are calculated for the temperature you set in the EPW input file, so you need to compute them for each temperature you're interested in. Note that what they have in the paper are the lifetimes, which are the inverse of the scattering rates. Also note that you have several scattering rates in that file, while what they plot in the figure are average lifetimes, so you will have to average them according to what you want to do.
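As an illustration (a minimal sketch, not official EPW tooling), the linewidth data pasted in the question could be plotted as a rate versus energy like this, assuming whitespace-separated columns ik, ibnd, E (eV) and linewidth Γ = 2·Im(Σ) (meV), and using 1/τ = Γ/ħ ("linewidth.dat" is a placeholder file name):

import numpy as np
import matplotlib.pyplot as plt

HBAR_MEV_S = 6.582119569e-13   # hbar in meV*s

# Columns: ik, ibnd, E(ibnd), linewidth (meV); '#' header lines are skipped
ik, ibnd, E, gamma = np.loadtxt("linewidth.dat", unpack=True)

# Scattering rate 1/tau = Gamma / hbar, with Gamma = 2*Im(Sigma).
# (Check whether your file's last column is Gamma or Im(Sigma); the header
# of the pasted file suggests Gamma = 2*Im(Sigma).)
mask = gamma > 0                    # drop zero linewidths before log plotting
rate = gamma[mask] / HBAR_MEV_S     # in 1/s

plt.scatter(E[mask], rate, s=10)
plt.xlabel("Electron energy (eV)")
plt.ylabel(r"Scattering rate $1/\tau$ (1/s)")
plt.yscale("log")
plt.tight_layout()
plt.savefig("rate_vs_energy.png", dpi=200)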
Best,
Bruno
  • asked a question related to Scattering
Question
1 answer
We have elastic-scattering excitation-function data in tabular form and want to obtain partial-wave scattering phase shifts for each partial wave, say l = 0, 1, 2, .... What is the process to do so, and is there any code available for it?
Relevant answer
Answer
Dear Prof. Anil Khachi
Could you be more specific? For example, is the function that you call "excitation function data" complex-valued?
There are several cases if the elastic scattering is non-relativistic. See for example: Landau and Lifshitz, Vol. 3, Quantum Mechanics: Non-Relativistic Theory, Chapter VII (Pergamon, 1965).
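In case it is useful, here is a minimal sketch (my own illustration, not a standard code) of extracting real phase shifts from an elastic angular distribution at a single energy via the partial-wave expansion f(θ) = (1/k)·Σ_l (2l+1)·e^{iδ_l}·sin(δ_l)·P_l(cos θ), assuming spinless, short-range scattering; Coulomb effects, absorption and phase-shift ambiguities are ignored, and the "data" below are synthetic:

import numpy as np
from numpy.polynomial.legendre import legval
from scipy.optimize import least_squares

k = 1.0      # wave number at the chosen energy (e.g., fm^-1); set from E and the masses
l_max = 2    # highest partial wave included in the fit

theta_deg = np.array([20, 40, 60, 80, 100, 120, 140, 160], dtype=float)

def dsigma_domega(deltas, theta):
    """|f(theta)|^2 with f = (1/k) sum_l (2l+1) e^{i delta_l} sin(delta_l) P_l(cos theta)."""
    x = np.cos(np.deg2rad(theta))
    c = np.array([(2*l + 1) * np.exp(1j*d) * np.sin(d) for l, d in enumerate(deltas)])
    return np.abs(legval(x, c) / k)**2

# Synthetic "measured" angular distribution from known phase shifts,
# standing in for your tabulated cross sections at one energy.
true_deltas = np.array([0.6, 0.3, 0.1])
data = dsigma_domega(true_deltas, theta_deg)

fit = least_squares(lambda d: dsigma_domega(d, theta_deg) - data,
                    x0=np.full(l_max + 1, 0.1))
print("fitted phase shifts (rad):", fit.x)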
Best Regards.
  • asked a question related to Scattering
Question
2 answers
"How do we understand special relativity?"
The Quantum FFF Model differences: what are the main differences between Q-FFF Theory and the Standard Model?
1. A Fermion-repelling and -producing electric dark matter black hole.
2. An electric dark matter black hole splitting Big Bang with a 12x distant symmetric instant entangled raspberry multiverse result, each with copy Lyman Alpha forests.
3. Fermions are real propeller-shaped rigid convertible strings with dual spin and also instant multiverse entanglement (Charge Parity symmetric).
4. The vacuum is a dense tetrahedral-shaped lattice with dual oscillating massless Higgs particles (dark energy).
5. All particles have consciousness by their instant entanglement relation between 12 copy universes; however, humans have about 500 ms retardation to veto an act (Benjamin Libet).
It was Abdus Salam who proposed that quarks and leptons should have a sub-quantum level structure, and that they are compound hardrock particles with a specific non-zero sized form. Jean Paul Vigier postulated that quarks and leptons are "pushed around" by an energetic sea of vacuum particles.
6. David Bohm suggested, in contrast with the "Copenhagen interpretation", that reality is not created by the eye of the human observer, and second, that elementary particles should be "guided by a pilot wave".
John Bell argued that the motion of mass relative to the surrounding vacuum reference frame should originate real Lorentz transformations, and also real, relativistically measurable contraction. Richard Feynman postulated the idea of an all-pervading energetic quantum vacuum. He rejected it, because it should originate resistance for every mass in motion relative to the reference frame of the quantum vacuum. However, I postulate the strange and counter-intuitive possibility that this resistance for mass in motion can be compensated, if we combine the ideas of Vigier, Bell, Bohm and Salam, and a new dual universal Bohmian "pilot wave", which is interpreted as the EPR correlation (or Big Bang entanglement) between individual elementary anti-mirror particles, living in dual universes.
Fred-Rick Schermer added a reply:
Abbas Kashani
A lot to work with, Abbas.
However, I am standing in a completely different position, and want to share my work with you. I hope you are interested in this completely distinct perspective.
My claim is that Einstein established a jump that is not allowed, yet everyone followed along.
Einstein and Newton's starting point is the behavior of matter through space. As such, one should find as the answer something about the behavior of matter moving through space, and yet Einstein did not do that.
To make the point quickly understandable: Einstein had not yet heard about the Big Bang. So, while he devised his special relativity, he had not incorporated the most important behavior of matter through space.
Instead, he ended up hanging all behaviors of matter on spacetime. It does not matter that his calculations are correct.
--
Let me find a simple example to show what is going on.
We are doing research on mice in a cage, and after two years we formulated a correct framework that fully captures all possible behaviors of these mice in the cage. That's the setup.
Now comes the mistake:
The conclusion is that the cage controls the mice in their behaviors.
Correctly, we would have said that the mice are in control of themselves, yet the cage restricts them in their behavior. We would not say that the cage controls the mice.
Totally incorrect of course, and yet that is what Einstein did. He established a reality in which matter no longer explains the behavior of matter through space, but made it space (spacetime) that explains the behavior of matter. It is a black-and-white position that has to be replaced by the correct framework (which is a surprise, because it is not based on one aspect, but on both aspects).
--
I know I am writing you from a perspective not often mentioned, and it may not interest you. I'll find out if you are interested in delving deeper into this or not.
Here is an article in which I delve into this matter more deeply:
Article On a Fully Mechanical Explanation of All Behaviors of Matter...
On a Fully Mechanical Explanation of All Behaviors of Matter, Replacing Albert Einstein’s General Relativity Theory
Anomalies in the behavior of matter, such as seen with the precession of Mercury, led researchers to look for the ether as the additional aspect that would explain the anomalies. Or, in the case of Albert Einstein, this led to appointing a curvature to Spacetime to explain the anomalies. This paper explains the anomalies based on an additional behavior of matter instead. The additional behavior of matter is known by all, but for some reason did not get incorporated into the prevailing scientific models.
When Albert Einstein published his General Relativity theory, he did not yet know about the materialization process, now commonly known as the Big Bang theory. That means that the behavior of matter based on the materialization process itself did not get incorporated in his framework. While Einstein will have reviewed this new Big Bang information for his General Relativity theory, he did not look for a mechanical explanation that would explain the anomalies.
What Einstein produced was a mathematical model to explain the anomalies (including predicting some outcomes that were not yet known). As such, the mathematical information is correct and is therefore not the subject matter of this paper. Instead, it is the explanation underneath the celestial outcomes that is distinct from Einstein’s gravitational model. A far more normal overall mechanism is proposed to be the reason for all behaviors of matter moving through space and that means that Spacetime can be discarded (though not the mathematical calculations).
The reason the mathematical information is correct, but the explanation of General Relativity is not, is based on the First Motion of matter. The Big Bang event produced a ‘sent-off’ action for matter. This means that all matter in the entire universe is on the move. There exists no matter that is at a standstill, and as such the lack of matter at a standstill should be understood as matter being a result, and how the materialization process itself produced that First Motion for all matter.
The amount of gravity in a galaxy that is required for a pure GR model is insufficient, and either the ether or dark matter are proposed to fill that gap. In the First Motion model, however, the currently known amount of gravity is exactly all the gravity there should be. There is no gap; there isn’t anything missing.
The specific point why Einstein’s mathematical framework is correct, but not the underlying reality, is that this First Motion action occurs in a ‘straight’ line through space. There is no gravity involved in this linear direction. Gravity is discovered only with the subsequent motions of matter.
· Second Motion: Circular motion of matter in a galaxy.
· Third Motion: Revolution of planets around their star.
· Fourth Motion: Spinning action of planets (moons in tow).
Therefore, the mathematical framework predicts the specific motions of matter, yet it does not explain why. While this may appear a minor aspect, it is a major aspect as this paper will show.
· Einstein’s GR uses gravity to fully explain the anomaly of Mercury’s precession.
· First Motion uses First Motion + Gravity to explain the anomaly.
--
To explain what is going on for a galaxy, and why less gravity is in play than required in GR, an analogy may help make this plain and obvious quickly. The analogy is that of 200 ice skaters. They are all skating in a group on a frozen canal. All are going at the same speed, in the same direction, in the same environment, at the same time.
Very clearly, one can see group activities, such as racing, pushing, hanging on to the strongest skater, playing, etcetera. Yet the vital aspect to understand is that the group is not skating as a group. In fact, the group is not skating as a group at all.
When an individual decides to stop skating, then the remainder of the group moves on. This shows that each skater is skating on his or her own power. There is no collective power for this group; the individuals are all doing the skating, and not the group.
For each of the 100 billion masses in the Milky Way, there is no option to stop ‘skating’. The First Motion that was put in place 13.8 billion years ago is on-going. There is no escape from this motion unless something specific interferes with the First Motion of a mass.
· All masses in a galaxy are moving in the same direction through space, at the same speed, at the same time, in the same environment.
That means that while there are collective behaviors noticeable and that gravity does play a role internally, the individual masses are not controlled by just gravity. The prime mover for each mass is applied to each mass and is not associated with the group.
There is no need to look for the ether or for dark matter, because the First Motion declares that there is just the amount of gravity required that has already been mapped fully. The group is a group because the prime mover of each of the individual masses is doing the exact same thing at the same speed, in the same direction, in the same environment.
--
This setup also indicates that the anomaly of Mercury’s precession can be explained by the specific aspects of First Motion in combination with the other motions. Note how this is a complexity and may take time to understand.
First an example of Sun, planet Earth and the Moon to warm up the mind.
These celestial bodies are like a truck, a car and a motorcycle, all speeding on the freeway in one and the same direction. The truck drives in a near-perfect straight line, whereas the car and the motorcycle going at the same speed also circle the truck (while the motorcycle circles the car as well). Their overall speeds are the same. They are on the same road, each driving the roadway by themselves.
· Important to note is that the Sun is not involved in the revolving actions that the planets are involved in.
The following is essential to understand: the Sun ‘sits’ in the center of the Solar System swirl and is not involved in revolving. Therefore, the planets show extra behaviors (revolving and spinning) that the Sun is not involved in.
Mercury is the planet closest to the central position of the Solar System’s swirl, while revolving and spinning. Not gravity, but the position in the swirling action of the Solar System is key. Keep in mind that all celestial bodies are moving at their fastest speed in the same direction.
To make the specific situation more understandable, one more analogy, this time about the Eye of the Hurricane. The closer to the Wall of the Eye of the Storm, the more an item will be swept up by the wind force. Meanwhile, in the Eye itself, there is no wind force. Where the center has a minimum expression of wind force, the location right next to it presents a maximum expression of wind force. There is no gradual change between this minimum and maximum, other than the gradual change in wind force when being further removed from the Eye of the Storm, from the maximum then to the minimum found much further out. The force is zero in the center, maximal right next to it, and then gradually diminishes toward zero again at the edge of the entire storm.
The Sun is found in the net-zero position of the Solar System swirl. The Sun is therefore not affecting the precession anomaly of Mercury. It is Mercury’s specific location in the swirl that causes the anomaly to occur. It is closer to the Eye; Mercury is closer to the net-zero position of the Solar System swirl. It is affected disproportionately in its precession due to this closeness to the center (though not located in the center itself).
This visual from an article published in Nature (“Curved space-time on a chip”) is used to show Einstein’s GR with the gravitationally heaviest entity, the Sun, located in the center. The reason being is that the Sun does the curving that is then affecting the entity (be it either Mercury or for that article, photons) also shown in the image.
The same image can be used to show how First Motion + Gravity functions.
The Sun ‘sits’ in the center of the swirling motion of the Solar System. A requirement is then that the Sun is mostly made up of light-weighted materials, otherwise it would have been thrown out of this position a long time ago.
Indeed, while the Sun has amassed enormous amounts of material, hydrogen and helium make up most of the Sun. Despite heavier materials being present and despite the enormous amounts of materials being present, the Sun can be declared a light-weighted mass. It ‘sits’ in this central location because the light-weighted materials cannot get thrown out of that position.
One more analogy to make this easier to envision. The Sun is then like a very large but light-weighted ball ending up in a maelstrom in front of the Norwegian coast. This large ball cannot get pulled under due to its size and light-weighted essence, and it cannot go anywhere else because the maelstrom captured it. The Sun is physically stuck in place in the center of the Solar System swirl (Third Motion), while the entirety of the Solar System is on the move (in First and Second Motions).
Then, Mercury’s position should become obvious as well. Mercury is involved in Third and Fourth Motions (as well as First and Second Motions). The maelstrom is affecting the precession of Mercury; it becomes distinct compared to the other planets revolving around the Sun because the effects of the maelstrom play a role on Mercury whereas the maelstrom does not directly affect the specific behaviors of the other planets revolving around it. All other planets are located at a greater distance from the center of the Solar System swirl.
As visual aid, one can envision the behavior of a plane, its flight path mapped out on a flat screen or shown with the planet as backdrop. In one case, the straight line appears curved. In the other case, the line is straight instead. The interesting part is that the anomalies are not expressed like a flight path on the curved surface of a planet, but rather on the curved edge of the Wall of the Eye of the Storm.
Mercury’s anomaly is real, but in GR the reason is the Sun, whereas in FM+G the reason is found with the edge of the net-zero position of the Solar System swirl.
In both cases, GR or First Motion, the line is bent toward the viewer, and the effects therefore the same. Yet the GR model makes it all out to be as gravity based, and therefore ends up missing a large amount of gravity to explain how a galaxy is held together. In First Motion, there is no missing gravity.
--
A point to reiterate is how the model is complex and yet the various parts need to be understood as one model.
First Motion: Straight line of action (involving all matter). Not based on gravity.
Second Motion: Trajectory for Sun and Solar System. Gravity involved.
Third Motion: Trajectory just for planets in Solar System. Gravity involved.
Fourth motion: Planets spinning, moons in tow. Gravity involved.
· Each spinning, swirling reality will produce that Wall of the Eye, and this leaves a discussion about gravity wide open. That discussion is not part of this article.
Each swirling reality will produce a net-zero position in the center. Earth has its own spinning reality, stuck in the center of that swirling reality. The Solar System has the Sun stuck in the net-zero position. A galaxy’s center is more complex even still (but left unaddressed in this article as well).
The trajectory for planets is based on their own action in the larger Solar System setting with the Third Motion. Most planets are not pulled toward the center action of the First and Second Motions; they are far more involved in their own actions. Mercury, however, is placed in the position closest to where the First and Second Motion have their greatest influence. This becomes visible in the precession anomaly of Mercury.
--
A mechanical model explains all behaviors of matter moving through space.
Where Einstein envisioned two or three motions, he did not incorporate the most important motion, the First Motion. He left it out, even after becoming aware of it.
When models are not based on all motions, then researchers can claim that the ether is real or that Spacetime is a reality for matter.
--
Note once more how this does not involve any changes to the mathematical model. If the mathematical model is like a dog, then the issue discussed in this paper is about whether a dog wags its tail or whether the tail wags the dog. The dog itself is not the issue. The mathematical framework is not the issue.
Einstein’s GR is wagging the dog.
Ether is wagging the dog.
Dark matter is wagging the dog.
First Motion has the dog wag its tail.
--
First Motion is part of the Big Whisper model, which is a twin Big Bang model, yet it explains fully the behavior of matter through space and does so in a mechanical manner.
Fred-Rick Schermer
On an Alternate Approach to Calculating Space Expansion
This is a write-up to express an alternate approach to calculating Space Expansion.
Please help me improve this paper. The biggest problem to overcome is not the data, but the model used for calculating Space Expansion and the suggested acceleration. Next to the two known methods, there is a modification available to the FLRW metric when a model is picked that is not dependent on an inflationary epoch.
--
At issue is whether a system can start from zero. The claim is that no such system is available, and this then has consequences for how the expansion rate of space is calculated.
The question next is whether we can envision the materialization process to have started from close to that zero position, or whether another model is more logical, such as the suggested Big Whisper model, a twin Big Bang model, it then not containing an inflationary epoch. Rather, the source for matter is then not associated with the zero position, but rather much further removed, found closer to the Cosmic Microwave Background Radiation itself.
If we are positioned at number 9 in the decimal system and look at the speed in which we arrived from 0 at that number 9 spot, the answer will be quite different from when we calculate the speed from 1 to that same number 9 spot. As such, the model is vital to get the calculations right.
The calculations will be off automatically if we use a model that does not reflect what happened 13.8 billion years ago.
It is very important to understand that a choice is made by physicists to declare the starting point as the zero position. It appears illogical that the material result we are fully aware of would have appeared from an improbable position such as the zero position. It is more logical that energy that we know ended up becoming matter would have taken up space initially already.
It is far more logical to start from 1 than from 0 when actual entities are considered. It is not logical to have actual entities begin from a position that equals zero.
If we follow the Lambda-CDM model, then the starting point is one of extreme high tension. That indicates a rather clear storyline that does not start from a zero position, cannot start from a zero position.
That leaves us with the question which position to start out with. And that depends on which model we are using. Change the model, and the starting point changes.
Some models may get close to the zero position, nearly making the currently used calculations be sufficient when starting out from the zero position.
Other models are much further removed from the zero position.
If the Cosmic Microwave Background Radiation is set at a 380,000-year distance from the mathematical center, then any position between 5,000 and even 375,000 years can be suggested as the starting point for the materialization process model and its subsequent actual entities.
Anything less than a 5,000-year distance and the problem did not truly go away of needing actual space for whatever ended up becoming the known actual entities. Anything greater than a 375,000-year distance and there appears to not be enough distance in space for the original energy to transform into the known actual entities.
In the Big Whisper model, a suggested distance is that the starting point would be found near 17,500 years distance from the CMBR. As such, the difference between the prevailing calculation position and the proposed starting point for space expansion calculations is then 362,500 years.
If the universe is estimated to be 13,800,000,000 exact, then a subtraction of 362,500 years is required to discover the first moment in time that the actual entities were first around.
While this specific time/distance is just a suggestion, the point is that this leads to a completely different outlook on the expansion rate of space and its potential acceleration. And that is the point of this article: What model are we using because each model is unique and will have its own calculations for the expansion rate of space?
Note: In the Big Whisper model (named for Penzias and Wilson) the center of the materialization process did not become matter. Matter derived from Zone 2, not from Zone 1 in the center, and Zone 2 is suggested to have established the reason for matter to come about at a 362,500-year distance from its mathematical center.
--
Thank you for your careful consideration about the current models used for calculating the expansion of space and this alternate approach that declares that models are the true uncertainty in the calculations, and not the data. What truly happened at the materialization process informs the model, and the prevailing Lambda-CDM model appears incorrect about the starting point of the materialization process and melds things together that should not be melded together.
Fred-Rick Schermer
On Replacing Albert Einstein’s General Relativity Theory with a Mechanical Model
When Albert Einstein published his General Relativity theory, he did not yet know about the materialization process, now commonly known as the Big Bang theory. While he will have reviewed this new information for his GR theory, he did not look for a mechanical explanation. This paper explores the possibility that a mechanical aspect provides the foundation to predict the same outcomes GR predicted, plus a few more predictions.
First the lay of the land, because the Big Whisper model is a distinct model. Wilson and Penzias discovered the ‘whisper’ of the materialization process, and this provides a good substitute in name for what is in essence a twin Big Bang model. Both models are near identical from the Cosmic Microwave Background Radiation on outward. Yet, with the pre-CMBR scenarios, both models differ – and differ starkly.
A quick point to mention is that the pre-CMBR scenario of the Big Bang model contains the possibility that Time, Space, Matter and Energy all appeared for the first time. This option is not available for the Big Whisper model. It declares that the materialization process is nothing more than the moment some energy transformed into matter. Note that the scientific data allow both scenarios to be considered. The BB would have everything begin some 13.8 billion years ago, whereas the BW only has the transformation occur at that time, with the universe (without matter) then already in existence.
The last playful item to discuss in this quick lay of the land is an example that shows two conclusions when walking into a room with a broken toy on the floor:
1. Everything of the toy is still present in the room where the toy ended up breaking. Nothing evaporated into thin air; the toy is just all in tatters.
2. The special trick the toy was capable of performing is gone forever.
This is then the scenario also followed in the Big Whisper model; a prior state ended up breaking at a fundamental level, establishing a new universal foundation then in pieces. The original ability gone forever. Had this been a broken vase then each piece would have unified properties available per each piece but no longer at the overall level. The space between the broken pieces would then represent nothing less than no longer part of the vase. Space would then just be space.
--
Inside the Big Whisper model, the First Motion is a term used for the prime mover of matter once the CMBR point has been crossed. It declares that matter was already on the move and that the Milky Way, for instance, is involved in three/four motions.
The First Motion declares that specific motion of the entire galaxy collectively moving in a singular direction, away from the materialization process.
· This is the motion that Albert Einstein did not capture.
The Second Motion is the motion of the entire galaxy swirling around itself.
The Third Motion is the swirling motion of a solar system.
The Fourth Motion is the spinning and swirling motions of planets and their moons.
What should become obvious immediately is that the Sun is involved in the First, Second, and Third Motion, but not in the Fourth Motion. Meanwhile, planets are involved in all four motions.
--
The Big Whisper model is distinct from any model proposed so far.
For instance, when discussing the outbound motion considered for all matter, then the origin of this First Motion is considered a catapulting action. It is therefore not an exploding action, but rather a retracting action.
Once enormous pressures had been established, and once that setup broke, all that proto energy retracted toward their original starting points from which their participation in the pressure buildup had begun. However, due to damage sustained in Zone 2 (and not in the center location, Zone 1), there was no returning to the original state of the universe possible any longer. The First Motion got established like a cocked gun sustaining damage and then triggered.
This will be discussed in more detail later.
Einstein was very smart and understood that the anomalies seen among the behavior of matter, for instance, with the precession of Mercury, had to have an actual framework that would explain these finer aspects. He correctly envisioned his Spacetime framework, yet once he had acquired knowledge about the materialization process, he did not review whether a mechanical aspect could explain the finer behavioral aspects of matter just as well.
In short, Einstein missed out on one Motion. This one motion is not considered in his General Relativity theory. Interestingly, he did capture the framework well. Yet as a result, one can state that it was Newton who had his feet on solid grounds, albeit on a planet floating through space, but that Einstein put his feet on space first. It is important to understand that the subject matter for Einstein (and Newton) always was the behavior of matter, and not the characteristics of space or time. The importance of mentioning this is that Newton and Einstein are not positioned in the same direction of matter.
The framework Einstein provided works well in most cases, but it does not predict the lesser amount of gravity seen among masses in a galaxy. General Relativity requires there to be more gravitational force to keep a galaxy together than what is found in place.
The First Motion of the Big Whisper model explains that when a collective is on the move at the same (highest) speed and in the exact same direction, then less gravity is needed to keep this collective together.
· When twenty people are skating at the same speed and in the same direction, then they may indeed all be family members. However, the perception of this being a group is based first on their collective behavior in that singular direction, and only secondarily on the familial or friendship bond they may possess. Their bond may be understood, but their condition as a group is based first on their primary action.
--
The focus of the First Motion is a ballistic, mechanical aspect that involved and continues to involve all matter in the universe. It predicts why there is less gravity than required to hold a galaxy together.
The First Motion model is quite complex.
In plain English, if all gravitational forces were to be eliminated from the Milky Way today, then the galaxy would still be moving at the same fastest speed it is today, though then just in its single straight-line direction that was established 13.8 billion years ago. The reason for the First Motion is therefore not placed with a force, but with the happenstance of the materialization process directly itself. The entire Milky Way is moving at its fastest speed in a single common direction, and only the internal movements are gravity based.
Let’s review the setup. What needs to be accepted in this model is that proto energy already existed, and that it had an ability, just like the toy had before it broke. This is called proto force, and the entire setting can be declared as unified, though this would then be true particularly from our perspective.
The proto force caused a collective inward motion among all proto energy. In effect, this caused three temporary conditions to come into existence.
Zone 1 is found in the center of the collective inward motion. All proto energy in Zone 1 is stuck in place. The pressure coming in from all sides is at its maximum, though nothing is damaged in Zone 1.
Zone 2 is found right next to the equilibrium point of the entire collective motion. The equilibrium point itself is where the last layer of stuck proto energy got added to Zone 1. Since there is a tiny bit of friction available in the first layer of Zone 2, proto energy in Zone 2 ended up churning. This churning caused proto energy to be damaged, to become a quark soup, still under enormous pressures.
Zone 3 is the largest setting, also with inward motion, but the pressure levels never reached the maximum pressure of Zone 1, nor the very high levels of pressure seen with Zone 2.
With Zone 2 churning its proto energy into a quark soup, the equilibrium point established by the collective pressure shifted away from the edge of Zone 1. With Zone 3 continuing to add inward pressure, some of this energy may have entered Zone 2, where it would get churned into quark soup as well. With the churning continuing, the equilibrium point would shift away from the edge of Zone 1, to be found then internally within Zone 1.
This provided some release of pressure for the proto energy in Zone 1 and a slow bulging outward would get established for this zone. Soon, however, and much like Old Faithful in Yellowstone Park waiting for the conditions to be just right, Zone 1 would catapult outwardly. All proto energy of Zone 1 and Zone 3 and all churned quark soup of Zone 2 would catapult outwardly.
Notice how this is a retracting action and not an exploding action. A return by compressed proto energy to the original starting point of the collective inward motion is a retraction, not an explosion.
However, due to the damage of proto energy in Zone 2, there was no returning to the situation it had been before the start of that action. With Newton’s Laws in hand, the motion put in place continued. This is the First Motion.
While the First Motion can be said to have been put in place by a force, the force itself did not survive. It had not been possible to return to the original situation. The vase broke, and the foundation for the universe no longer unified but scattered among its pieces. Each piece shows the unified qualities of the setup that broke the vase.
To complete the Big Whisper model in quick steps, the quark soup of Zone 2 finally reached the CMBR points where there was now space to move about. Instantly, the quarks all aligned to become neutrons and protons.
· This model shows that electrons entered the material realm via a different route.
Undamaged, the electrons provided the negative charges to counter the positive charges of the protons. The charged reality among matter is that from an overall perspective a neutral situation is discovered, yet at the subatomic level a charge between protons and electrons is found.
What this model makes possible is to view the First Motion as an expression not of a force, but of the destruction of a force. Viewed from the current perspective, the proto force is the potential that would end up becoming the weak-nuclear force, the strong-nuclear force, the electromagnetic force, and the gravitational force, no longer unified.
No counter action has interfered with this singular outward First Motion since, beyond the occasional happenstance, such as smaller galaxies combining into a large galaxy. Our entire Milky Way and all its energy and material started out from proto energy, from just a small part of the entire Zone 2 energy, and all of this got involved in the catapulting action, propelled by Zone 1.
--
Replacing Albert Einstein’s GR theory leads to making the same predictions plus additional predictions. The simple nature of mechanisms explaining anomalies would indicate that a reasoned alternative to Einstein’s theory is indeed available. The First Motion model explains the precession anomaly of Mercury and other phenomena as mechanical in essence; Mercury’s anomaly is then not based on gravity, but on the specifics of the four motions it is involved in (some based on gravity indeed, but not all).
A quick review of the motions of planet Earth can help show what the mechanics are:
· Spin of Earth itself
· Revolution around the Sun
· Circular motion of the Milky Way
· The speeding away of the entire Milky Way setting in a singular direction
The first three actions are gravitationally involved actions. There is the spin of the planet, its revolution around the Sun, but also the large circular motion of the galaxy, each larger setting operating at a faster speed than the smaller ones. The last action on this list, however, is predicted by the First Motion model and is not based on gravity at all. As such, one can state that Einstein’s theory does not acknowledge this original motion, whereas the First Motion model is squarely based on it as the prime mover.
Just to provide an additional view on the complexity of layers of the First Motion model: The Sun moves through space whereas Earth moves through space the same way as the Sun while also circling the Sun. The levels of motion are therefore not identical for star and planet. This motion-based distinction plays a vital role in explaining Mercury’s anomalous precession. However, this requires further specific insights into the importance of the prime mover in relationship to the various layers of circling motion.
--
An exhibit at the Exploratorium in San Francisco shows, albeit in limited fashion, how Motion and Gravity are distinct aspects of the resulting outcome.
A cylinder filled with water and silver slivers can be spun by visitors of the Exploratorium. Once in motion, the swirl inside the cylinder picks up all silver slivers in the water, which then move randomly about in the swirling motion.
Once the visitor stops the exhibit from spinning, the internal swirl continues, though slowing down with time. Before the swirling has come to a complete stop, all silver slivers are collected in a single heap in the center on the bottom of the cylinder.
The slowing motion ends up causing the zone in the center of the swirl, in which area there is no motion, to widen somewhat. The randomly moving silver slivers end up touching, one by one, this net-zero zone and are stopped from moving further with the swirling motion. Once caught in the center zone, there is no other possibility for the silver sliver than to follow the gravitational attraction of planet Earth. All silver slivers end up in that single heap in the center on the bottom of the cylinder, until the next visitor cranks up the exhibit, which picks up all slivers from the bottom once more.
There are therefore two aspects involved in this exhibit. The first aspect is the swirl of motion with its net-zero central location. Only after a silver sliver enters the central net-zero zone can gravity play its role. With the swirling motion in play, gravity has no control over the result except for that center spot, which widens over time when the swirling motion slows. Once the speed is increased, the net-zero spot in the center contracts and the speed is too much for gravity to take hold -- all silver slivers end up moving about once more.
--
If this exhibit represented a galaxy, the visitor would not be able to capture all that was happening, because a galaxy has on average 100 billion stars. It is better to start at square one to understand how the Milky Way functions: the action of the First Motion is the prime foundation of motion, with gravity then taking control wherever it can, additionally influencing the motion of matter.
The Milky Way is based on just a tiny section of the pre-CMBR proto energy. The singular direction of all collective matter would have shown not only the specific and different moments of release of pressure, but also the churning motion that had taken place in Zone 2. As such, one can distinguish circular motions occurring among matter involved with the prime mover.
The four motions, as expressed in the simple example of planet Earth, were most likely all present already at the CMBR. During the catapulting action, the damaged proto energy had to wait until the CMBR location before the opportunity existed for this quark soup to align itself into neutrons and protons. Naturally, quarks would align at the first opportunity, and the distinct features seen in the CMBR imaging are therefore explained as areas where pressure had relaxed to levels where this was possible.
· Release of pressure predicts the differentiation of outcomes in the CMBR data, as does the damaging action of churning Zone 2 into a quark soup. The First Motion model predicts the distinctions seen in the CMBR data; the outcome could not have been fully smooth.
Once the First Motion is accepted, then the next step involves identifying the two characteristics of swirls. The outer regions of the swirl will be slower than the inner regions of the swirl, while the center itself is net-zero motion. This net-zero area has a changing width depending on the size and speed of the swirl.
An important aspect to reiterate with an example is that an eddy in a river can collect leaves, whereas a maelstrom off the Norwegian coast can pull entire tree branches under. This shows that the speed and size of the swirl determine whether material is collected in the center of the swirl or not. The process of mass formation is more complex still, but is not discussed in this article.
Yet once lightweight materials enter an eddy or maelstrom, they can collect into a much larger setting that does not get pulled out of the center location. Once there is a large collection of lightweight materials, heavier materials can also get collected in that center spot.
The Sun is nearly all hydrogen and helium; heavier elements make up only about 2% of its mass (and less than about 0.2% of its atoms). If we envision a maelstrom in the center of the Solar System swirl, then the lightweight mass, enormous as it otherwise is, will not get pulled out of that spot.
The maelstrom image explains the major implication for the precession of Mercury, because the anomaly is then not explained by gravity but by the closeness to the central net-zero zone. In effect, Mercury’s precession is influenced like a small football in a maelstrom off the Norwegian coast that already contains a very large football in the center, representing the Sun. The closer the ball gets to the center, the faster it will spin.
Stepping back from this image, one can see once more how the Sun is involved in three motions whereas Mercury is involved in four motions. The fewer motions, the more the prime mover will express the results. The more motions, the less the prime mover expresses the results other than Mercury and the Sun both moving at the same speed together with the entire Milky Way.
In essence, the First Motion exerts more power on the Sun than on the planets. Yet, when close enough, the First Motion can tug on Mercury.
--
The precession of Mercury is predicted in the First Motion model since Mercury is closest to the swirling center of just these three motions. With the Sun located in the center of the Solar System swirl, Mercury is located closest to the maelstrom of the swirl. Not the Sun, but the nearest object or mass would experience the maelstrom most intensely. Planets further out would be involved in their own swirling motion inside the Solar System swirl, influenced less by just the prime mover.
The final prediction is therefore that gravity plays a lesser role than in Newton’s classical mechanics or Einstein’s General Relativity because the prime mover occurs first and is today still the fastest speed that matter in the Milky Way is involved in.
While the First Motion model and Einstein’s General Relativity framework are near identical, the First Motion model predicts and explains aspects that GR does not predict or explain. As such, it could be an excellent competitor for describing our universe more precisely.
Newton’s work can be presented as a positive, but sepia-toned, old-fashioned photograph.
Einstein’s work can be presented as a negative, perfect in what it shows, but not from a positive perspective.
The Big Whisper model, with its First Motion action not based on a force but on the destruction of a force, is then a modern positive photograph.
Fred-Rick Schermer
On the Big Whisper, a Mechanical Big Bang Model
The Big Bang model presented here, called the Big Whisper, is named for Penzias and Wilson, who discovered the whisper of the materialization process. The Cosmic Microwave Background Radiation is the oldest scientific data we have about the universe, though having the oldest scientific data does not mean the universe is no older than the CMBR itself.
The Big Whisper model is mechanically complete, which means that the storyline it follows is a full explanation of what happened. What the model does not provide is the certainty that no other model could also declare a mechanically complete storyline. Nor does the model reach for the position that is easily attained in religion: making everything one.
· The Big Whisper model is not about the creation of the universe.
The only scientific data that rivals the CMBR is the fact that energy does not get lost. While there may be uncertainty about the extent to which energy is never lost, the Big Whisper model accepts that energy was already present before matter came about.
· The Big Whisper model is about the transformation of energy into matter.
Despite Einstein’s fantastic work, the Big Whisper model recognizes Spacetime only as a framework that accurately predicts the behavior of matter in what is commonly understood as the gravitational framework. This model has an additional level on top of Spacetime, and at that level no gravitational involvement is expressed. In short, Newton may have worked with two motions involving the behavior of matter and Einstein with three, but the Big Whisper model uses four motions as the standard to understand the behavior of matter.
--
The Big Whisper model contains a scientific step that is absent in the Lambda-CDM model. It declares that the materialization process was two-staged: an inbound motion followed by an outbound motion. Compared to the Lambda-CDM model, an extra step is involved, while the reason for this extra step remains fully associated with the known result.
First some structural exercises, just for the human mind:
When folding space and folding it more and more, one could potentially end up in a zero location when all space got folded neatly into a single spot.
When folding energy, however, one could never end up in a zero location when all the folding that was possible got done. Energy is not identical to space; energy takes up space. As such, a distinction between space and energy must be understood to comprehend this model.
Additionally, when discussing the material results of the universe, one must recognize that the results are distinct from the source that created the result. It does not matter what the original state was, but it must be fully acknowledged that it delivered the results, nevertheless. Energy can therefore not be equated with space because the source for matter will have had different properties.
Lastly, the material result from an immaterial source can only mean one thing: The source did not survive the creation of matter intact. Using an analogy to make the point, an omelet cannot result from an unbroken egg. There are no constructions available in which a truly distinct result came forth from a source without a fundamental alteration of that source.
--
The transformation process of energy into matter in the Big Whisper model is not about the transformation of all energy into matter. Rather, establishing a distinction between all immaterial energy itself is the essential aspect of the model.
To show this, the extra step already mentioned needs to be explained as essential to the outcome.
When making a cake, four steps are involved, and not three.
1. Getting the cake ingredients
2. Mixing the ingredients into a batter
3. Putting the batter in the oven
4. Done
In the Lambda-CDM model, just three steps are investigated. Instead of getting the cake ingredients and mixing them into a batter, physicists are investigating cake particles. Instead of looking for the ingredients, the cake is considered both the outcome and the starting point in some kind of diminished quality. The storyline of the Lambda-CDM model is therefore illogical, despite the accuracy of the scientific data used to explain the result.
A result that never existed before cannot grow from a starting point. A result that never existed before points to a fatal occurrence that took place fully in the prior state. In the Big Whisper model, that fatal occurrence is the prior state splitting itself into distinct parts, which led to some of that state ending up as matter.
--
As mentioned, the Big Whisper model starts out with a collective inbound motion among all (immaterial) energy. While no reason is provided for why this started, the inbound motion can be fully understood as the starting requirement for the subsequent outbound motion seen among all matter. A pressured state got established first, and the essence of the Big Whisper model is that this inbound motion did not stop. Like a toy wound up too much, the internal mechanism erupted. A disconnect occurred because the force that established the extreme pressure is not the same as the force that did the breaking.
From this, the largest level of behavior of matter in our current universe is automatically not based on a force. It is based on the destruction of a force.
--
The collective inbound motion delivered three temporary areas of pressure.
A: Zone 1, in which all (immaterial) energy was stuck in place due to the incoming pressures from all sides. This follows the mechanical idea that same will compress same to the maximum extent possible.
B: Zone 2, in which all (immaterial) energy was under enormous pressure, but friction was indeed possible. This friction caused a churning among the energy in Zone 2, and that churned the energy into a quark soup, still under enormous pressure.
C: Zone 3, in which all (immaterial) energy took part in establishing the collective inbound pressure. None of Zone 3 experienced extreme pressure, except very close to Zone 2.
This is a temporary setup. What is required is that Zone 1 reached the maximum pressure possible: it got fully stuck. If any transmission of information was part of the prior state of immaterial energy, then Zone 1 could not have been transmitting any information.
Zone 2 would not appear until all collective inbound pressure had reached the maximum extent possible for Zone 1. Exactly at the spot where the equilibrium was achieved between fully stuck in place and no longer fully stuck in place, that is where Zone 2 appeared. Since the pressures were enormous, the first potential for friction would show a devastatingly strong expression of that force. This is then much like the Eye of the Hurricane representing a net-zero location of wind force in the center, while the Wall of the Eye situated immediately next to it unleashes a wind force expression to the maximum extent possible.
The immaterial energy in Zone 2 is the source for matter, and Zones 1 and 3 are not churned into a quark soup.
--
It may be prudent to reiterate that immaterial energy is a requirement for this model. The known material outcome must be based on a source, and since matter was not present for that source, one can declare the source as immaterial.
Another aspect that can be ascribed to the source is that the ability to establish this setup was indeed available.
Walking into a room and finding a broken toy on the floor may help one see the prior state better, though it cannot be stressed enough that we do not have any scientific data about the prior state other than the certainty that it could establish the result.
Two facts can be gleaned from a broken toy on the floor:
1. Everything about the toy is still present, just in tatters.
2. The special trick the toy could perform is now gone for good.
The minute something breaks, the reality that existed prior becomes a reality in which everything is still there, but now with a flaw that either replaces the prior reality or is added to it.
--
Zone 2 is the location in which original immaterial energy got damaged. Instead of remaining immaterial, the energy became material. Still under enormous pressure, Zone 2 ended up containing a quark soup.
Notice how this setup establishes major distinctions with the Lambda-CDM model. Since matter did not derive from the exact center (and Zone 1 could have been quite extensive in size), the calculations for matter’s appearance will be distinct as well. For instance, the super-hot starting conditions in the Lambda-CDM model are not found in the Big Whisper model.
· The adiabatic cooling process took place across 380,000 years in the Lambda-CDM model, yet it could have been as short as 5,000 years in the Big Whisper model. This difference establishes a major distinction in heat outcome for the starting position. Obviously, an extreme churning of immaterial energy will have produced heat, yet there is no need to pronounce this as super-hot.
Another (physically impossible) solution that the Lambda-CDM model embraces – cosmic inflation – is also absent in the Big Whisper model. There is no singularity from which matter derived, no central zone, no central aspect at all. Zone 2 is not found in the center. As such, the behavior of matter on its outbound journey will have been ‘normal’ as it should be, since cosmic inflation is not an aspect that belongs in a mechanical storyline.
With the inbound motion not stopping, the equilibrium point nevertheless shifted with the churning of the immaterial energy of Zone 2. Through this churning action, the energy of Zone 2 stopped being available for continuing the collective inbound motion. The pressure dropped in Zone 2, and while the pressure from Zone 3 moved inward to the extent possible, the maximum pressure on Zone 1 was released during this event. With the pressure coming from Zone 3 not reestablishing the prior equilibrium at the boundary of Zone 1, the entire Zone 1 ended up catapulting outward.
With the catapulting action of Zone 1, the damaged energy of Zone 2 and the immaterial energy of Zone 3 would also end up catapulting outwardly.
--
To complete the model’s outcome, the second stage with the resulting outbound motion for all energy involved caused pressures to subside. The further away all this energy moved from the starting point, the more the pressure would drop.
At the Cosmic Microwave Background Radiation point, the quark soup finally had its moment to align itself into neutrons and protons, at the first opportunity.
Like the batter in a cake mix, the outcome would have been smooth, though imperfections would of course have been present.
Zone 2 is the location from which matter in the universe derived. That is, protons and neutrons. The surprise comes from the electrons.
Once the protons were established with their positive charge, a negative countercharge had to be produced as well. The negative electrons were pulled in from the remaining immaterial energy, using just a fraction of that immaterial energy.
Therefore, matter got established via two different pathways. The first was a pathway of damage, and the second pathway one of reestablishing a neutrally charged outcome. It is at the subatomic level that the charges tell their story, whereas the material outcome overall does not tell this story.
With the electrons, a connection is established between the material protons (and neutrons) and the remainder of immaterial energy. As such, the spatial universe is the stage on which the material universe shows the nature of matter in both disconnecting and connecting manners.
Even the remaining immaterial energy no longer connects at a universal level. The largest settings in which ‘islands of energy’ can be found are galaxies.
--
The Big Whisper model is named for Penzias and Wilson who discovered the whisper of the materialization process. https://www.imdb.com/title/tt1434807/plotsummary/?ref_=tt_ov_pl
Fred-Rick Schermer
On Unifying the Four Forces through Synergy
This is a write-up to express the potential for unifying the four forces through synergy.
Please help me improve this paper. The biggest problem to overcome is that the mind needs to understand what is presented. There is no data required to show the manner of unification presented in this single-page paper.
Synergy is that outcome that is distinct from the other outcomes, and yet is not based on anything new. It is a collective outcome of the parts that is itself distinct from the parts.
The four forces are:
· Weak-nuclear force
· Strong-nuclear force
· Electromagnetic force
· Gravitational force
The odd one out is Gravity; it is seemingly impossible to connect it to the other forces. Yet Gravity can be declared the synergistic outcome of the other forces. From three forces there is then automatically a fourth, distinct force.
A simple example to show this is found with the following:
· Fathers
· Mothers
· Children
· Families
The fourth group of Families is already present when the first three groups are acknowledged. From three groups, we get a fourth, distinct group.
Interestingly, we can associate these human realities with forces, for instance, the force of a mother to protect her child. Or the ability of children forcing us to laugh.
The family force is also well known. It is not an individual force, but rather the collective that has (or tries to have) an impact on an individual within the family.
Same for gravity. It can be distinguished from the other forces in that it has very general characteristics, applicable to all matter.
Thank you for helping me promote this synergistic explanation that unifies the forces with a secondary level existing among the known forces.
Relevant answer
Answer
Sorry, I have no idea about it. Relativity is science besides being philosophy.
  • asked a question related to Scattering
Question
3 answers
I have been trying to measure the size of the human lysozyme protein with DLS. The instrument is an Otsuka ELS-Z 1000. The lower and upper measurement size limits are 10 nm and 4000 nm respectively, and they cannot be changed in my SOP. However, after the measurement, a 'Recalculate' option is available which allows me to set a size lower than 10 nm. I have tried to measure the size for months now - I have changed buffers and concentrations, but to no avail. The concentration is always optimal before and during measurement (~20,000 cps), but there are always insufficient data points, and the recalculate option cannot be used if there isn't sufficient data.
Measurement conditions: Temperature: 25.0 °C; Diluent: WATER; Refractive index: 1.3328; Viscosity: 0.8878 cP; Scattering intensity: 17957 cps; Attenuator 1: 100.0%
Measurement delay: 300 s
Scattering angle: 165 degrees
I clean the cell well (with soap solution, 5% acetic acid, DI water, ethanol), dry it with a nitrogen flow, use protein concentrations between 1-10 mg/ml, use KCl buffer at pH 2.5 (lysozyme is most stable below pH 4-5; I also tried PB, 1x PBS and 0.1x PBS) and filter through 0.2 µm membranes to avoid dust.
The measurements are successful with larger proteins, or when there is aggregation, but not with my samples. Any advice?
Relevant answer
Answer
You mention the scattering intensity of your sample as about 20 kcps. What is the scattering intensity of just your buffer?
How does the correlation function look? Are there enough data points on the correlation function to detect a decay?
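If the correlation function does show a clean decay, the size follows from the decay rate via the Stokes-Einstein relation. As a sanity check on the numbers, here is a minimal Python sketch using the conditions quoted in the question; the 660 nm laser wavelength and the decay rate are assumptions for illustration, not instrument values:

import numpy as np

# Stokes-Einstein size estimate from a single-exponential DLS decay.
# The 660 nm laser wavelength is an assumption; check the ELS-Z manual.
kB, T = 1.380649e-23, 298.15            # Boltzmann constant (J/K), temperature (K)
n, eta = 1.3328, 0.8878e-3              # refractive index, viscosity (Pa*s) from the SOP
lam, theta = 660e-9, np.deg2rad(165.0)  # wavelength (m), scattering angle

q = 4 * np.pi * n / lam * np.sin(theta / 2)  # scattering vector magnitude (1/m)
gamma = 8.0e4                                # hypothetical g1 decay rate (1/s)
D = gamma / q**2                             # diffusion coefficient (m^2/s)
Rh = kB * T / (6 * np.pi * eta * D)          # hydrodynamic radius (m)
print(f"Rh = {Rh * 1e9:.2f} nm")

With these illustrative numbers the sketch returns roughly 2 nm, about the expected hydrodynamic radius of lysozyme; a decay that fast is easy to miss if the correlator lag range or the measurement duration is not set accordingly.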
  • asked a question related to Scattering
Question
4 answers
Hello all,
I am doing point mutation using overlap methods. I got my overlap product, transformed it into the vector, and got a positive colony PCR result as well. I isolated the plasmids, again did PCR with the plasmids and got a positive result, then sent the same plasmids for sequencing. The alignment from the sequencing result is scattered. After seeing the results I again did PCR with the same plasmid and I can see a positive result. I did this 4-5 times and got false sequencing results.
What could be the reason for false sequencing result? Should I go ahead with expressions and purification with this plasmid without confirming through sequencing?
Relevant answer
Answer
That’s a very poor match in the alignment - is that your insert? Either the sequence quality is poor or the DNA isn’t a match; I’m leaning towards the sequence quality being poor. I would prep fresh DNA and retry the sequencing, but send 3-6 preps to have extras. Maybe do an extra wash step first to remove excess salts too. Good luck!
  • asked a question related to Scattering
Question
1 answer
What is a correct way to estimate s0 parameter for Volcano plot visualization in Perseus?
The documentation says: "Artificial within groups variance (default: 0). It controls the relative importance of t-test p-value and difference between means. At s0=0 only the p-value matters, while at nonzero s0 also the difference of means plays a role. See (Tusher, Tibshirani, and Chu 2001) for details." Now the article states: "To ensure that the variance of d(i) is independent of gene expression, we added a small positive constant s0 to the denominator of Eq. 1 (i. e. d(i) = (avg-state1(i)- avg-state2(i))/(gene_specific_scatter(i) + s0)). The coefficient of variation of d(i) was computed as a function of s(i) in moving windows across the data. The value for s0 was chosen to minimize the coefficient of variation. For the data in this paper, this computation yielded s0 = 3.3."
Now should I calculate the CV for my data and then estimate the s0 or am I missing something?
Relevant answer
Answer
My best way to answer this:
Following the article by Gianetto (2016 - Uses and misuses of the fudge factor... DOI 10.1002/pmic.201600132), I downloaded the siggenes package for R and performed the analysis on my dataset. I guess this is the only rigorous way of doing it.
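For intuition about what s0 does, below is a minimal Python sketch of the SAM-style statistic described in the quoted passage, run on hypothetical random data. It only illustrates how a nonzero s0 damps genes whose variance happens to be very small; it is not Perseus's internal implementation.

import numpy as np

def sam_d(group1, group2, s0):
    """SAM-style relative difference d(i) = (mean1 - mean2) / (s(i) + s0).
    group1, group2: arrays of shape (n_genes, n_replicates)."""
    n1, n2 = group1.shape[1], group2.shape[1]
    diff = group1.mean(axis=1) - group2.mean(axis=1)
    # gene-specific scatter s(i), pooled over both groups (Tusher et al., 2001)
    pooled_var = (group1.var(axis=1, ddof=1) * (n1 - 1) +
                  group2.var(axis=1, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
    s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    return diff / (s + s0)

# Hypothetical data: 1000 genes, 3 replicates per group, no true differences
rng = np.random.default_rng(0)
g1 = rng.normal(size=(1000, 3))
g2 = rng.normal(size=(1000, 3))
for s0 in (0.0, 0.1, 1.0):
    print(f"s0 = {s0}: max |d| = {np.abs(sam_d(g1, g2, s0)).max():.2f}")

As s0 grows, genes that look significant only because their estimated scatter s(i) is tiny are pulled back, which is exactly the trade-off the Perseus documentation describes.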
  • asked a question related to Scattering
Question
1 answer
Mean, median, range, one-way ANOVA, Pearson correlation, spatial distribution with scatter plots, species richness and Shannon's index are applied to the data. Are there any other tools to be included for the data?
Relevant answer
Diversity indices, such as Shannon or Beta, abundance of species, and presence-absence data :)
  • asked a question related to Scattering
Question
1 answer
What is the accuracy of optical scattering from microplastics?
Relevant answer
Answer
The accuracy of optical scattering from microplastics can vary depending on several factors, including the size and composition of the microplastics, the wavelength of the incident light, and the measurement techniques employed. Studying microplastics poses challenges due to their small size and diverse shapes, making it difficult to obtain accurate and consistent measurements.
Here are some considerations regarding the accuracy of optical scattering from microplastics:
  1. Size and Shape of Microplastics: The accuracy of optical scattering measurements is influenced by the size and shape of microplastics. Small particles may exhibit different scattering patterns than larger ones, and irregular shapes can complicate the analysis.
  2. Wavelength of Incident Light: The wavelength of the incident light plays a crucial role in the scattering process. Different wavelengths interact with microplastics in distinct ways. Researchers may use various wavelengths to study different characteristics of microplastics.
  3. Material Composition: The accuracy of scattering measurements can also depend on the material composition of the microplastics. Different polymers or additives in microplastics may exhibit unique scattering properties.
  4. Measurement Techniques: Various techniques can be employed to measure optical scattering from microplastics, including microscopy, spectroscopy, and light scattering instruments. The accuracy of the results depends on the chosen method and the calibration procedures applied.
  5. Environmental Conditions: Environmental conditions, such as water turbidity, can affect the accuracy of optical scattering measurements. In aquatic environments, the presence of other particles or organic matter may influence the scattering characteristics.
  6. Research Advances: Ongoing research and technological advancements may improve the accuracy of optical scattering measurements over time. Innovations in instrumentation and methodologies contribute to refining our understanding of microplastic properties.
It's important to note that optical scattering is just one aspect of microplastics analysis. Researchers often use a combination of techniques, such as spectroscopy, microscopy, and chemical analysis, to comprehensively study microplastics in different environments. While optical scattering can provide valuable information, the interpretation of results requires careful consideration of the experimental setup and potential limitations.
Overall, the accuracy of optical scattering measurements from microplastics is an active area of research, and improvements continue to be made as scientists work towards better understanding and addressing the challenges associated with microplastic pollution.
  • asked a question related to Scattering
Question
4 answers
Like bar graphs, box plots, scatter plots, etc., and attractive data visualizations for various qualitative traits?
Relevant answer
Answer
Beside these 12 Data Visualization Tools (https://www.geeksforgeeks.org/data-visualization-tools/), I would recommend a few more free software solutions.
  • asked a question related to Scattering
Question
3 answers
Which factors really lead to a significant difference in 2 cross section values?
Relevant answer
Answer
The Raman cross section experiences a boost when analyzing samples in the presence of metal nanoparticles. This enhanced Raman cross section primarily results from mechanisms like chemical and electromagnetic enhancements associated with Surface-Enhanced Raman Spectroscopy (SERS). Nevertheless, the energy of the excitation wavelength and factors like the size, shape, arrangement (hotspots), and the adsorption behavior of analyte molecules on the metal nanoparticles also play significant roles in SERS enhancement. In contrast to conventional Raman scattering, which faces several limitations, including sample concentrations, laser power, wavelength-dependent Raman measurements, fluorescence background interference, and noise-to-signal ratio, SERS significantly mitigates these drawbacks and augments its cross section.
  • asked a question related to Scattering
Question
1 answer
I have a finite array of unit cells, completely backed with PEC. When I try to illuminate it with an incident plane wave, I get strong scattered waves from the PEC, apart from scattering in the desired directions. But the PEC at the back of the metasurface is supposed to block transmission of the incident wave. Still I get strong far-field scattering.
I used both open boundary and FE-BI boundaries for the whole metasurface array for this purpose, with no positive results.
Kindly help me.
Relevant answer
Answer
The total field is the sum of the incident field and the scattered field. The incident field would have continued past the place where the PEC layer is. When the PEC is in the way there is no field there; it is in shadow. Part of the scattered field is the field that cancels the incident field to give zero or a lower field in the shadow. For this reason there is a scattered field behind the PEC.
  • asked a question related to Scattering
Question
1 answer
In a closed room, a candle lights up a certain area, or a tube light lights up the room. Is it the intensity of light, or did the light just get scattered? We can differentiate the two because it's scattered equally in the room, which has a finite area. Am I right?
Relevant answer
Answer
The Sun's light is not infinite. It's finite (it's reducing or remaining in equilibrium), according to me. But it feels like integration. We have a speed of light, c. But light is a derivative; it's in motion, so a derivative.
If time had a speed - a rate of change of time - I intuitively feel it could be determined.
  • asked a question related to Scattering
Question
1 answer
I was conducting EBSD (Electron Back Scatter Diffraction) of a forged beta titanium alloy (Ti-10V-2Fe-3Al).
The alloy cylinder was heat treated (solutionized + aged) after forging in the beta phase field. However, after conducting EBSD (along and perpendicular to the direction of forging) the phase fractions turned out to be different. It is similar to what was found in this paper (on page 4).
Can someone please explain why we are getting this difference?
I am getting 60 percent beta and 40 percent alpha along the longitudinal direction, but 85 percent alpha and 15 percent beta along the transverse direction.
Relevant answer
Answer
If you have an anisotropic 3D microstructure you can - when performing 2D-type analysis - obtain highly non-representative defect densities. In the case of dislocation densities (counting the density of intersection points) you can have this if all dislocations are parallel to one direction. Also in the case of a strictly lamellar microstructure, the phase fractions can be highly different depending on whether the lamellae normal is perpendicular to, or lies inside, the 2D section.
  • asked a question related to Scattering
Question
3 answers
I am trying to plot the scatter and fitted line with the code below:
sns.regplot(x='np.array(Y_test)',y='model.predict(X_test)')
The error shown is:
Must pass `data` if using named variables.
x is the actual values and y is the predicted values of the test variable.
Then what data do I have to pass?
Relevant answer
Answer
Thank you. This is helpful.
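For reference, the error appears because quoted strings such as x='np.array(Y_test)' are treated as column names, which requires a data= DataFrame; passing the arrays themselves works. A minimal self-contained sketch, with synthetic stand-ins for the X_test, Y_test and model from the question:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins for X_test / Y_test from the question
rng = np.random.default_rng(0)
X_test = rng.uniform(0, 10, size=(100, 1))
Y_test = 2.5 * X_test.ravel() + rng.normal(scale=2.0, size=100)
model = LinearRegression().fit(X_test, Y_test)

# Pass arrays directly (no quotes); `data=` is only needed when
# x and y are column names in a DataFrame.
sns.regplot(x=np.array(Y_test), y=model.predict(X_test))
plt.xlabel("actual")
plt.ylabel("predicted")
plt.show()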
  • asked a question related to Scattering
Question
1 answer
I study undoped CdTe substrates for applications in THz photoconductive emitters. The antennas fabricated on the substrate have been excited above the bandgap photon energy and biased by a high external bias field of ~50 kV/cm. The excess energy of generated free carriers in the Gamma valley is slightly lower (~0.92 eV) than that of the L valley (1.09 eV). I wonder what the main scattering mechanism in CdTe would be and what the likelihood of scattering electrons with excess energy at an applied bias field of 50 kV/cm would be. Thanks
Relevant answer
Answer
The scattering probability and the probability of the semiconductor surface becoming saturated with photons would be approximately the same.
  • asked a question related to Scattering
Question
1 answer
How to disable grid line in scattering parameter plot in CST? (I am not asking about unchecking the bounding box)
Relevant answer
Answer
  • asked a question related to Scattering
Question
2 answers
Mie scattering occurs when the structural scale is comparable to the wavelength, so what about Mie resonances? And what is the connection between them? And how are localized resonances related to them?
Relevant answer
Answer
Just as Mie scattering can be related to the wavelength, Mie resonance also measures the resonance at that wavelength. It measures the resonance in any scattering and distribution.
  • asked a question related to Scattering
Question
7 answers
There exist some basic models for the angle dependence of sigma0. In R.E. Clapp, 1946, “A theoretical and experimental study of radar ground return” three such models are presented:
[1] sigma0(theta) = constant * cos^2(theta), called “Lambert’s law”
[2] sigma0(theta) = constant
[3] sigma0(theta) = constant * cos(theta)
[3] is actually more complicated since it can also include multiple reflections from deeper layers of the surface. If however one only considers direct reflections the model takes the form as shown above.
In Ulaby, Moore, Fung, 1982, “Microwave Remote Sensing, Active and Passive” vol. II the authors also discuss these models of Clapp.
With models [1] and [3] one cos(theta) term accounts for the decrease in incident power per unit surface area when the radar measures the ground return under angle theta. With [1] a second cos(theta) term is added in accordance with Lambert’s law: a radiating surface whose angle-dependent emission follows I = I0 * cos(theta) [W m^-2].
The well-known integral form of the radar equation applied to surface returns is (see for example Ulaby1982):
Prx = Ptx * [ lambda^2 / (4 pi)^3 ] * integral[ G^2 / R^4 * sigma0(theta) , dA ]
What I don’t understand is why there is not a cosine term in this equation by default? So
Prx = Ptx * [ lambda^2 / (4 pi)^3 ] * integral[ G^2 / R^4 * sigma0(theta) * cos(theta) , dA ]
Because the way I see it: regardless of the scattering properties of any surface the incident power per unit surface area must be rescaled according to cos(theta).
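To make the comparison concrete, here is a small numpy sketch (purely illustrative) tabulating the three sigma0 models and the effect of the extra cos(theta) rescaling asked about above:

import numpy as np

theta = np.deg2rad(np.arange(0, 90, 15))  # incidence angles in 15-degree steps
models = {
    "[1] Lambert  ": np.cos(theta) ** 2,
    "[2] constant ": np.ones_like(theta),
    "[3] cosine   ": np.cos(theta),
}
# Multiplying in the extra cos(theta) shows how it would rescale the
# incident power per unit ground area for each model:
for name, sigma0 in models.items():
    print(name, np.round(sigma0 * np.cos(theta), 3))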
Relevant answer
Answer
@ Jan Hofste: Thanks for asking the question above - it's been something I've been recently trying to figure out. I've also been looking for a PDF copy of Clapp's 1946 report. In your last post you offered to send it to @ C.Chew; could I ask you to send it to me also? I've searched extensively for the report, but have not been able to source a copy. Thanks in advance, Daithí
  • asked a question related to Scattering
Question
1 answer
Are you a researcher looking to streamline your reference management process? Look no further! 🚀✨
In today's rapidly evolving world of academia, researchers must stay organized and maintain a seamless workflow for citing and managing references. That's where reference management software like EndNote comes to the rescue! 🎯💡
🔸 Simplify Your Citation Process: Bid farewell to the days of manually tracking citations and formatting references. EndNote and similar tools automate the entire process, allowing you to focus on what truly matters—your research! 📝🔬
🔸 Centralize Your References: Say goodbye to scattered files and endless searches. With EndNote, you can create a comprehensive library of your references, making it effortless to retrieve, sort, and organize your sources. 🗂️🔎
🔸 Collaborate Seamlessly: Working on a collaborative project? Reference management software enables smooth collaboration by facilitating the sharing of references and bibliographies among team members. 💪🤝
🔸 Discover Alternatives: While EndNote is a powerful tool, researchers must explore different options based on their needs. Explore alternatives like Mendeley, Zotero, or RefWorks, each offering unique features and benefits. 🔄🔀
✅ In today's competitive research landscape, investing time in mastering reference management software is a game-changer. Not only does it save you valuable time, but it also ensures accurate and consistent citations, boosting the credibility of your work. 🏆💼
📣 Embrace the power of reference management software and take your research to new heights! Let's stay organized, efficient, and at the forefront of innovation. 💪✨
Relevant answer
Answer
Hello Abdelmohsen,
Thank you very much for sharing important and relevant information. I use Endnote. It is really very beneficial and useful.
Regards
Israt Jahan
  • asked a question related to Scattering
Question
3 answers
We assume both are correct.
div D = ρ applies to emw propagation in infinite boundless space.
And div D = ρ + α dV/dt applies to emw scattering under Dirichlet boundary conditions on the voltage V in bounded space.
But the question arises, can it be proven theoretically or experimentally?
Relevant answer
Answer
As a general rule, any physical equation must contain time.
Classical time-independent PPDEs and LPDEs where the electromagnetic energy transfer is assumed to be instantaneous are incomplete.
They only hold because the speed of transfer of their em energy is close to that of light.
Moreover, it is quite surprising that the solution of time-dependent PPDEs and LPDEs is more accessible than that of time-independent solutions.
In a related question/answer, it was assumed that:
A time-dependent Poisson PDE expressed by:
∂U/∂t = D ∇²U + S . . . (1)
And also assumed a time-dependent Laplace PDE as:
∂U/∂t = D ∇²U . . . (2)
(In normal conventions.)
Subject to Dirichlet boundary conditions or any other suitable BC, these can replace the classical PDEs.
Note that Eq 1 or Eq 2 implies that:
div D = ρ + α dV/dt . . . (3)
It is clear that equation (3) applies to emw scattering under Dirichlet boundary conditions on the voltage V in bounded space.
As for the proof, Equation 3 can be proven theoretically via an appropriate statistical technique and experimentally via thermal cooling curves or via sound reverberation curves in audio rooms.
The above statement is valid because of the analogy between the Poisson and Laplace PDEs with Dirichlet boundary conditions and the heat diffusion equation and sound diffusion equation in audio rooms with the same Dirichlet boundary conditions.
  • asked a question related to Scattering
Question
3 answers
How to make a scattering region
Relevant answer
Answer
You can find the best TranSiesta-TBTrans example in the test folder of siesta-master, where the script and various input files are given.
  • asked a question related to Scattering
Question
10 answers
Psychological theories are scattered and lack sufficient coherence. Researchers have worked without attention to each other. These valuable theories would gain great value if they were put together, wouldn't they?
Relevant answer
Answer
I think that not only the theories of learning and development but also the theories of the various schools of psychology can be integrated into one theory.
  • asked a question related to Scattering
Question
4 answers
It is common practice to represent a poor scattering channel, like a millimeter-wave MIMO channel, with a clustered channel model consisting of a few clusters of rays. I have also seen in a few papers that the poor scattering nature is also captured in terms of a finite-dimensional channel.
1. Can anyone clarify the difference between the two channel models?
2. As per my knowledge, both channel models are used to reflect the poor scattering property of practical MIMO channels. Which one should be used under what channel conditions?
I shall be highly obliged if someone could spare time and share his/her knowledge.
Relevant answer
Answer
These are two models that are meant to describe similar phenomena. The clustered model is based on geometry, which makes it appropriate to describe semi-realistic scenarios with spatial consistency when users move. 3GPP uses that kind of models for evaluation.
The finite dimensional model can be used to study limited scattering without having to make overly specific assumptions about the geometry.
So it is mostly a question about how much geometrical structure one wants to select vs. abstract away.
One can typically rewrite the former model into the latter using the beamspace representation.
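As a hedged illustration of that last remark, the sketch below builds a clustered/geometric channel from a few planar-wave paths between half-wavelength ULAs (all parameters illustrative) and moves it to the beamspace with unitary DFT matrices, where the limited scattering shows up as sparsity:

import numpy as np

def steering(n, angle):
    # ULA steering vector, half-wavelength spacing assumed
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle)) / np.sqrt(n)

nt, nr, n_paths = 16, 16, 3  # few paths -> poor scattering
rng = np.random.default_rng(1)
H = np.zeros((nr, nt), dtype=complex)
for _ in range(n_paths):
    gain = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    aod, aoa = rng.uniform(-np.pi / 2, np.pi / 2, size=2)
    H += np.sqrt(nt * nr / n_paths) * gain * np.outer(steering(nr, aoa),
                                                      steering(nt, aod).conj())

# Beamspace representation: unitary DFT bases at both array ends
Ur = np.fft.fft(np.eye(nr)) / np.sqrt(nr)
Ut = np.fft.fft(np.eye(nt)) / np.sqrt(nt)
Hb = Ur.conj().T @ H @ Ut

p = np.sort(np.abs(Hb.ravel()) ** 2)[::-1]
k = max(1, int(0.05 * p.size))
print(f"top 5% of beamspace entries carry {p[:k].sum() / p.sum():.0%} of the energy")

The few dominant entries of Hb are essentially a finite-dimensional description of the same channel, which is the sense in which the two models coincide.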
  • asked a question related to Scattering
Question
4 answers
Hi all,
I am looking for areas of application of the neutron back scattering technique in industries
Relevant answer
Answer
Neutrons, in particular thermal neutrons, have a very high scattering cross-section with hydrogen, so irradiating a sample with thermal neutrons and observing a large degree of backscatter probably means the sample has a high hydrogen (usually water) content. Measuring the backscatter of thermal neutrons from a sample of KNOWN composition may convey whether the sample is contaminated with hydrogen (i.e., water). An example that comes to mind is measuring the contamination of crude oil by ground water. In the old days when I was working in this area, miniature sources of Cf-252 surrounded by a moderating shell were used to create beams of thermal neutrons for various applications, ranging from backscatter analysis to neutron activation analysis. Applications were potentially in the oil drilling industries and medical body composition studies. Fast neutrons (from the Cf-252) fell out of favor in the medical field in the 1980s after the very high quality factor/RBE they possessed was discovered, especially at very low doses.
  • asked a question related to Scattering
Question
7 answers
I want to simulate a discrete random medium with the FDTD method.
The simulation environment is air filled with random spherical particles (small compared to the wavelength) with a defined size distribution.
What is an efficient and simple way to create random scatterers in large numbers in an FDTD code?
I have put some random scatterers in a small area, but I have problems producing scatterers in large numbers.
Any tips, experiences and suggestions would be appreciated.
Thanks in advance.
Relevant answer
Answer
Are you asking how to make your particles a helix/spiral shape? First, before doing that, I recommend running simulations that just use the effective properties that your helices should give. That will let you explore device ideas without needing the complexity and inefficiency of having to resolve the spirals in your grid. Second, you will want to create a small simulation of a helix to retrieve the effective properties. This is usually called homogenization or parameter retrieval. Third, if you need to, move on to your more complicated 3D simulation.
That still leaves the question of how to build a helix in a 3D grid. One way you can do this is to create the helix in a CAD software such as SolidWorks or Blender. You can export that model as an STL file, which is just a surface mesh that MATLAB (or whatever software you are using) can import. From there, there are codes available that can import those STL files and "voxelize" them into a 3D array. If you are using MATLAB, search the MathWorks website for "voxelize" and you will find multiple solutions. Once you have that, you can create copies of the spiral in your FDTD grid. This approach will let you import even more complicated shapes relatively easily. Alternatively, you can create the spiral directly in your grid. I would do this by creating an array that holds a list of points along the center of your spiral. From there, you can calculate which points of your FDTD grid should be assigned to that spiral by calculating their distance to the points on the spiral. If they are within a certain distance, assign the material properties of your spiral. Otherwise, assign the material properties of whatever medium the spirals are residing in.
I am sure there are plenty of other ways to do this. If you are interested, I dedicated almost all of Chapter 1 to describing techniques for building geometries into arrays for simulation using the finite-difference method. I do not specifically talk about spirals, but you may find some of that chapter very helpful. Here is a link to the book website:
Hope this helps!!
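For the random spheres in the original question, here is a minimal numpy sketch along those same lines (all values illustrative): it draws non-overlapping centers by simple rejection and voxelizes each sphere into a 3D permittivity array for the grid.

import numpy as np

# Illustrative grid and particle parameters
N = (120, 120, 120)           # FDTD grid size in cells
n_spheres, r_cells = 200, 4   # particle count and radius in cells
eps_bg, eps_p = 1.0, 2.25     # background and particle permittivity

eps = np.full(N, eps_bg)
zz, yy, xx = np.meshgrid(*(np.arange(n) for n in N), indexing="ij")
rng = np.random.default_rng(42)

centers = []
while len(centers) < n_spheres:
    c = rng.uniform(r_cells, np.array(N) - r_cells)  # keep spheres inside the grid
    # simple rejection test to avoid overlapping spheres
    if all(np.linalg.norm(c - p) > 2 * r_cells for p in centers):
        centers.append(c)
        mask = (zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 <= r_cells**2
        eps[mask] = eps_p

Drawing r_cells per sphere from your size distribution is a one-line change, and for very large particle counts the rejection test should be replaced by a cell-list neighbor search so the placement stays fast.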
  • asked a question related to Scattering
Question
4 answers
Some of the results of EDX analysis are quite scattered, e.g. for Ca K: from about 3 at% to 27 at%. They do not agree with the atomic composition of the compound. Any comment on it? Please answer.
Relevant answer
Answer
Dear Yaseen Muhammad , you may share your EDX spectra;
there might be a superposition of Ca K radiation with the L radiation of, for example:
Ca K-alpha (~3.69 keV) <--> Sb L-alpha (~3.6 keV)
or
Ca K-alpha (~3.69 keV) <--> Te L-alpha (~3.77 keV)
In order to avoid such superposition you should go for an excitation energy just above the Ca K excitation (e.g. 4.1 keV; Ca K-edge ~4.04 keV).
The L-edge/binding energies for Sb and Te are above that energy value and their L lines should not show up at all in this case.
Best regards
G.M.
  • asked a question related to Scattering
Question
1 answer
It seems to me that everyone just refers to J. W. Goodman's book "Speckle phenomena in optics: Theory and Applications". I agree this is a good book, however in my opinion there are some differences in ultrasound.
I would like to find answers to the questions:
- how realistic is it to assume that the ultrasound signal has a single spectral component (monochromatic light assumption in Goodman)? Is this assumption required?
- what is the effect of ultrasound transducer and the transformation from pressure to RF signal?
Thank you for your answers.
Relevant answer
Answer
  • asked a question related to Scattering
Question
1 answer
How do you create and customize data visualizations in Jamovi, such as bar charts, scatter plots, and box plots, to effectively communicate your findings to various audiences?
Relevant answer
Answer
  1. Open Jamovi and load your data into the software.
  2. Click on the "Explore" button on the top right corner of the screen.
  3. Select the type of visualization you want to create from the list of options (e.g., bar chart, scatter plot, box plot, etc.).
  4. Drag and drop the variables you want to visualize into the appropriate columns in the "Variables" box.
  5. Customize your visualization using the various options available in the "Customize" tab, such as changing the axis labels, adding titles, adjusting the color scheme, etc.
  6. Once you have created your visualization, you can export it as an image or copy and paste it into another document.
To effectively communicate your findings to various audiences, it's important to consider the purpose of your visualization and the preferences of your audience. For example, if you are presenting to a general audience, you may want to use simple and easy-to-understand visualizations, such as bar charts and line graphs, and avoid cluttering your visualization with too much information. On the other hand, if you are presenting to a technical audience, you may want to use more complex visualizations, such as scatter plots and box plots, and include additional information, such as confidence intervals and statistical tests.
  • asked a question related to Scattering
Question
2 answers
The low-energy neutron scattering cross section is σs when the target nucleus can be deemed bound. However, when the target nucleus is unbound, the scattering cross section is [A/(A+1)]^2 σs. The reason for the presence of the factor [A/(A+1)]^2 is the reduced mass of the neutron when the unbound target nucleus recoils.
Some of the literature says that the unbound scattering length is also [A/(A+1)] times that of the bound one. Does anyone know which reference would be one of the best to show this relationship?
Thanks.
Relevant answer
Answer
see H. A. Bethe, Rev. Mod. Phys. 9, 71 (1937)
  • asked a question related to Scattering
Question
2 answers
For a particular pixel across multiple co-registered InSAR images, the amplitude value of the received echo may fluctuate from a mean value based on how sustained/changing is the scattering mechanism by the targets within such pixel over time. The selection of pixels as persistent scatterers candidates used for creating a deformation map through time series analysis of several acquisitions is primarily based upon the amplitude dispersion index thresholding at low values in order to properly estimate the phase stability/dispersion only when the Signal-to-Noise ratio is high enough for such pixels, according to the work of (Ferretti et al., 2001).
Source of the figure: “Permanent scatterers in SAR interferometry”, IEEE Trans. Geosci. Remote Sens. (available on researchgate.net)
1) What is meant by phase stability in this case?
2) How is the phase's standard deviation across a time series affected by the amplitude dispersion?
3) How does the contribution of uncompensated propagation disturbances such as the atmospheric phase contribution and the satellite's orbital position inaccuracies besides other sources of noise affect the phase stability ?
Relevant answer
Answer
Hi there,
1 - In this context, phase stability refers to the consistency of the phase values acquired by the InSAR system over time for a particular pixel. A pixel with high phase stability will have phase values that are consistent across multiple acquisitions, while a pixel with low phase stability will have phase values that vary significantly over time. Phase stability is an important factor in InSAR analysis as it directly affects the accuracy of the deformation measurement.
2 - The amplitude dispersion across a time series of InSAR images can affect the phase's standard deviation by introducing noise to the system. When the amplitude of the signal varies significantly over time, it can result in a loss of coherence between the two acquisitions being compared, leading to increased phase noise and decreased phase stability. The selection of persistent scatterers candidates (PSCs) based on low amplitude dispersion index thresholding is done to ensure that the selected pixels have high coherence, which in turn ensures higher phase stability.
3 - Uncompensated propagation disturbances such as atmospheric phase contribution and satellite orbital position inaccuracies can significantly affect the phase stability of InSAR images. The atmospheric phase contribution is particularly problematic, as it can introduce significant phase noise into the system. This noise can be mitigated using various techniques, such as using a weather model to correct for the atmospheric contribution, or by using differential InSAR techniques that cancel out the atmospheric phase contribution. Satellite position inaccuracies can also affect the phase stability, particularly if the satellite's orbit is not accurately known. This can result in phase errors that can be corrected using various techniques, such as using GPS data to refine the satellite orbit.
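To connect points 1 and 2 to something concrete, here is a minimal Python sketch of the amplitude dispersion index D_A = sigma_A / mu_A from Ferretti et al. (2001), computed on a simulated amplitude stack; real use would of course start from co-registered, calibrated SAR amplitudes.

import numpy as np

# Simulated amplitude stack: (n_images, rows, cols)
rng = np.random.default_rng(7)
stack = rng.rayleigh(scale=1.0, size=(30, 100, 100))  # speckle-like background
stack[:, 40:45, 40:45] += 5.0  # a few bright, temporally stable scatterers

mu = stack.mean(axis=0)
sigma = stack.std(axis=0)
d_a = sigma / mu  # amplitude dispersion index

# Low D_A is used as a proxy for low phase dispersion at high SNR
ps_candidates = d_a < 0.25
print("PS candidate pixels:", int(ps_candidates.sum()))

At low D_A the amplitude dispersion approximates the phase standard deviation, which is why thresholding it (0.25 is the value used by Ferretti et al.) selects pixels whose phase can be trusted in the time series.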
  • asked a question related to Scattering
Question
1 answer
Hi there!
I was hoping that someone who works/ed with Aqualog Horiba (with polarizers) could assist me in this matter. Currently I am developing a methodology to analyse protein-polymer (nanomolar range) interaction using EEM.
During the experiments, even applying different equipment settings and sample conditions, there is a noticeable change in light scattering (±30% in intensity) whereas the emission is fairly constant (error ±5%).
For these experiments I'm using 10x2 mm quartz cuvettes, freshly cleaned, and using the same sample per cuvette, avoiding other external changes that would impact our results.
In addition, these samples are submitted to light scattering analysis (DLS) and there is no correlation between scattering and size. Anyone had similar issues?
Relevant answer
Answer
The HORIBA Aqualog is a unique optical spectrometer that is the gold standard in environmental water research around the world for the study of colored dissolved organic matter (CDOM).
You can use a scanning fluorometer for EEMs, but the A-TEEM spectrometer is much faster and better for CDOM.
  • asked a question related to Scattering
Question
1 answer
We are analysing such a dataset and it seems that these EIS measurements are not fully consistent. The phases are scattered and very limited in absolute value. The magnitude values look a little better but are still not satisfactory.
Has someone also investigated this dataset and would like to discuss it shortly with me?
Note about the frequency measurements: we tried to distribute the frequency values in linear as well as in logarithmic mode, but neither the Nyquist nor the Bode plots look consistent.
Suggestions? Opinions?
Relevant answer
Answer
Yes, it is happening to me too: the resistance data is not consistent when I am using Biologic EIS equipment. The suggestion is to use a Faraday cage to shield the setup from electromagnetic interference in the lab.
  • asked a question related to Scattering
Question
3 answers
Looking at the data from this radar I get various scattering centers with their coordinates as well as a value for the RCS. My question is, how does it actually calculate the RCS? Is it just using the radar equation together with the observed distance and the measured received power density? I could not find an explicit explanation on how exactly the RCS is calculated.
Thank you for your help!
Relevant answer
Answer
I understood your question. Exactly: the RCS is essentially the ratio between the power density the target reflects and the power density incident on it; in practice the radar obtains it by solving the radar equation using the measured range and received power.
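In hedged form, the arithmetic behind that ratio is just the monostatic radar equation, P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4), solved for sigma. A minimal Python sketch (variable names and numbers are illustrative; a real sensor also folds in losses and calibration constants that this omits):

import math

def rcs_m2(p_r: float, p_t: float, gain: float,
           wavelength_m: float, r_m: float) -> float:
    """RCS (m^2) from received/transmitted power (W), linear antenna gain,
    wavelength (m) and range (m), assuming one antenna for Tx and Rx."""
    return p_r * (4 * math.pi) ** 3 * r_m ** 4 / (p_t * gain ** 2 * wavelength_m ** 2)

# Example: a 77 GHz automotive-style radar (lambda ~ 3.9 mm), made-up numbers.
print(rcs_m2(p_r=1e-12, p_t=10.0, gain=100.0, wavelength_m=0.0039, r_m=50.0))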
  • asked a question related to Scattering
Question
2 answers
Hi, I have a Ramachandran scatter plot (as shown below) obtained using the "gmx rama" command of GROMACS. Can anyone suggest how one can obtain a contour plot of the same? Any leads would be appreciated.
Relevant answer
Answer
Dear Prasad Rama,
  1. Save the Ramachandran plot data in a text file.
  2. Open the text file in software that supports contour plotting, like Origin or Gnuplot.
  3. Choose the appropriate contour plotting options and settings, such as contour levels, color schemes, and labeling.
  4. Generate the contour plot and save it in the desired format.
The contour plot will show the distribution of phi and psi angles in the protein structure, with different contour levels representing different probabilities of finding residues in a particular region of the Ramachandran plot. (A Python alternative is sketched after this answer.)
Best,
Satyendra
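As a Python alternative to Origin/Gnuplot, here is a minimal sketch that reads the rama.xvg file written by gmx rama (first two columns: phi, psi); the file name and bin count are illustrative choices:

import numpy as np
import matplotlib.pyplot as plt

phi, psi = [], []
with open("rama.xvg") as fh:                  # path is illustrative
    for line in fh:
        if line.startswith(("#", "@")):       # skip xvg header lines
            continue
        parts = line.split()
        phi.append(float(parts[0]))
        psi.append(float(parts[1]))

# 2-D histogram over the full torsion range, then contour the density.
hist, xedges, yedges = np.histogram2d(phi, psi, bins=72,
                                      range=[[-180, 180], [-180, 180]])
xc = 0.5 * (xedges[:-1] + xedges[1:])
yc = 0.5 * (yedges[:-1] + yedges[1:])
plt.contourf(xc, yc, hist.T, levels=10, cmap="viridis")  # transpose: rows = psi
plt.xlabel("phi (deg)")
plt.ylabel("psi (deg)")
plt.colorbar(label="counts")
plt.show()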
  • asked a question related to Scattering
Question
3 answers
Generally, pure metals are expected to exhibit lower thermal conductivity with increasing temperature (lattice vibration-induced scattering and a decrease in the mean free path).
1. What about alloys/solid solutions? Can anyone explain the basic physics?
2. Can Fe/Ni-based high-temperature-capable alloys show increasing thermal conductivity with temperature (13 W/mK to 26 W/mK)?
Minor variations in the trend may be acceptable in alloy systems due to precipitation and dissolution mechanisms, but this increase appears very significant to me. Why?
Relevant answer
Answer
Heat can be conducted by both the free electrons and phonons. When the metal is pure, heat is dominantly conducted by the electrons, and electrons are scattered more with increasing temperature because of the shorter mean free path due to lattice vibrations and vacancies, as you explained. Although increased lattice vibration means increased phonon activity, this increase cannot compensate for the decrease in thermal conductivity caused by the scattering of electrons.
For alloys, the mechanism of thermal conduction is already phonon-dominated, as dissimilar atoms naturally create scattering sites for electrons. This is why a solid solution of two elements of similar thermal conductivity gives an alloy with lower thermal conductivity, as in the example of Fe (79 W/mK) and Ni (91 W/mK). Since thermal conductivity is already phonon-dominated in this case, an increase in temperature increases the lattice vibrations and therefore increases the thermal conductivity.
  • asked a question related to Scattering
Question
2 answers
It is a method used to create an interferometric time series, developed by Hooper [2004-2007] following the persistent scatterer approach of Ferretti [2001]. It produces surface line-of-sight deformation from the multi-temporal radar repeat passes and mitigates the temporal phase decorrelation due to instrument errors, DEM error, and the atmospheric contribution to the phase delay.
SNAP2StaMPS is a Python workflow developed by José Manuel Delgado Blasco and Michael Foumelis in collaboration with Prof. A. Hooper to automate the pre-processing of Sentinel-1 SLC data and their preparation for ingestion into StaMPS. Much appreciation goes to the great work of those honorable professors.
The StaMPS method follows 8 steps that are carried out in MATLAB on a virtual Unix machine or a Linux system.
If I understood this correctly, step 3 selects separate groups from the PS pixels initially selected in step 2 (each subset contains a pre-determined density of pixels per square kilometre; those pixels have random phase), based on their calculated spatially correlated phase, the spatially uncorrelated DEM error that is subtracted from the remaining phase, and the temporal coherence.
Then, in step 4 (weeding), those groups of pixels per unit kilometre are further filtered and oversampled; in each group, the pixel with the highest SNR is taken as a reference pixel and the noise of the neighbouring adjacent pixels is calculated. Then, based on a pre-determined value ('weed_standard_deviation'), some of those neighbouring pixels are dropped and others are kept as PS pixels.
A) Am I correct?
B) What is a pixel with random phase?
C) What is the pixel noise? Is it related to having multiple elementary scatterers of which none is dominant, so that their backscattered signal is received at a different collective phase at each acquisition, even if the ground containing those scatterers were stable over time?
D) Due to the language barrier, although I have read Hooper's 2007 paper, I couldn't fully understand the difference between correlated and uncorrelated errors, and what spatially correlated/uncorrelated errors mean.
E) What is the difference between the spatially uncorrelated DEM (look angle) error that is filtered at StaMPS step 3 and removed at step 5, and the spatially correlated look angle error that is removed at StaMPS step 7?
Some test results are attached, and I would appreciate it if someone could tell me how to remove the persistent atmospheric contribution. I have only used the basic linear APS approach in the TRAIN toolbox developed by Bekaert [2015].
Relevant answer
Answer
I can't answer your question, but I do have to commend you for the most excellent detail, format and structure of any question I have seen on ResearchGate over all the years and thousands of questions I've seen go by!
  • asked a question related to Scattering
Question
1 answer
Respected Nick Papior,
Sir, I am using SIESTA v4.0.2.
I want to fix the electrode positions in the scattering-region calculation. In the SIESTA v4.0.2 manual it is given as
%block GeometryConstraints
position from -1 to -8 # to fix atoms
%endblock GeometryConstraints
but when I tried this, the atoms were not fixed at their respective positions.
I also tried many syntaxes to fix the positions, but nothing worked.
Any help will be highly appreciated.
Thanks & Regards
Shanmuk
  • asked a question related to Scattering
Question
4 answers
I want to design a horn antenna that only produces an incident field along theta (i.e., co-polarization along theta). How can I do that?
Relevant answer
Answer
Please first simulate the horn and check the E-plane pattern, then orient the position of the horn accordingly.
  • asked a question related to Scattering
Question
3 answers
The laser diffraction method is a process of sample analysis in the extractive metallurgy of metals from ore. In this process, during laser analysis, finer particles induce more scattering than coarse ones. Why?
Relevant answer
Answer
We need more information to be sure of your issue. A single small particle will scatter less light than a single coarse particle. However, if you are talking about similar mass concentrations, a milligram of 1 micrometer diameter particles of a given material will have one thousand times as many particles as a milligram of 10 micrometer diameter particles. Mie theory gives a more sophisticated view of scattering as a function of size, refractive index, and angle -- see https://en.wikipedia.org/wiki/Mie_scattering -- but first you need to decide whether you want an answer for equal mass concentration or equal numbers of particles.
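The counting argument above is just a 1/d^3 scaling at fixed mass. A minimal Python sketch (the 2.5 g/cm^3 density is an arbitrary placeholder; the ratio is independent of it):

import math

def particles_per_mg(diameter_um: float, density_g_cm3: float = 2.5) -> float:
    """Number of spherical particles of the given diameter in 1 mg of material."""
    radius_cm = diameter_um * 1e-4 / 2.0
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return 1e-3 / (density_g_cm3 * volume_cm3)   # 1 mg = 1e-3 g

# 1 um particles outnumber 10 um particles a thousand to one at equal mass:
print(particles_per_mg(1.0) / particles_per_mg(10.0))   # -> 1000.0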
  • asked a question related to Scattering
Question
1 answer
I have a cylindrical dipole arrangement, and I have run the DDSCAT simulation in online mode on the nanoHUB portal with the ddscat.par file having NPLANES=1; that works fine, but in offline mode it shows an error. The output is given below.
>  **** Select Elements of S_ij Matrix to Print ****
>REAPAR CMDFRM=TFRAME : scattering directions given in Target Frame
>REAPAR **** Specify Scattered Directions ****
>REAPAR JPBC=0, about to read specs for scattering plane 1
>>>>> FATAL ERROR IN PROCEDURE: REAPAR
>>>>> Error reading ddscat.par file
>>>>> EXECUTION ABORTED
However, I can run the offline simulation with NPLANES=0, so I cannot understand the meaning of this. Please explain it to me, and also explain the meaning of NPLANES.
Relevant answer
More clarification, please.
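For what it is worth, a hedged reading of the error: NPLANES tells DDSCAT how many scattering planes to print the S_ij elements for, and each declared plane must be followed by its own specification line in ddscat.par. With NPLANES=0 no such line is read, which would explain why that case runs. A sketch of the expected fragment, following the DDSCAT 7.x parameter-file layout (values are illustrative only):

1 = NPLANES = number of scattering planes
0. 0. 180. 5 = phi, theta_min, theta_max, dtheta (deg) for plane 1

If the plane-specification line after NPLANES=1 is missing or malformed in the offline ddscat.par, REAPAR would abort exactly where the log shows.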
  • asked a question related to Scattering
Question
8 answers
Any software that can process this?
Relevant answer
Answer
Thank you Daniel.
Yes, the data is normally distributed, and the scatter plot is plotted according to Principal Components 1 and 2. I need to draw 2 ellipses, i.e., an inner and an outer ellipse.
The limit of this panel is based on an ellipse in which more than 95% of the population is included. The inner ellipse includes about one-half of the population.
The link for the paper as below:
https://pubmed.ncbi.nlm.nih.gov/17613722/ refer to Figure 4 and Appendix B
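Since the two ellipses are just covariance ellipses at different coverage probabilities, here is a minimal Python sketch under the bivariate-normal assumption; the `scores` array is synthetic stand-in data for the PC1/PC2 scores:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from scipy.stats import chi2

rng = np.random.default_rng(0)
scores = rng.multivariate_normal([0, 0], [[4, 1], [1, 2]], size=500)

mean = scores.mean(axis=0)
cov = np.cov(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending
angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))  # major-axis tilt

fig, ax = plt.subplots()
ax.scatter(scores[:, 0], scores[:, 1], s=8, alpha=0.5)
for prob, style in [(0.50, "--"), (0.95, "-")]:   # inner ~50%, outer 95%
    r = np.sqrt(chi2.ppf(prob, df=2))             # Mahalanobis radius for coverage
    width, height = 2 * r * np.sqrt(eigvals[::-1])  # full axis lengths, major first
    ax.add_patch(Ellipse(mean, width, height, angle=angle,
                         fill=False, linestyle=style))
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
plt.show()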
  • asked a question related to Scattering
Question
6 answers
Textbooks define that if the total kinetic energy before and after a collision is conserved, the collision is called elastic, otherwise inelastic (momentum is conserved in both cases).
In Compton scattering the total kinetic energy is conserved before and after the collision, yet it is called inelastic scattering. The literature says:
"Elastic scattering occurs when there is no loss of energy of the incident primary particle. Elastically scattered particles can change direction but do not change their wavelength.
Inelastic scattering occurs when there is an interaction that causes loss of energy of the incident primary particle. Inelastically scattered particles have a longer wavelength."
Now my questions: (1) Are collision and scattering different?
If yes, then (2) how are collision and scattering each defined, and how can one distinguish between them?
Relevant answer
Answer
A collision is the cause of the effect known as scattering. A collision can be physical (contact) or non-physical (mediated by a field), but both can scatter the particle in question.
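One concrete way to see why Compton scattering is labelled inelastic: the scattered photon's wavelength grows by the Compton shift, so the incident photon itself loses energy to the electron's recoil, even though the total kinetic energy of photon plus electron is conserved. A minimal Python sketch (CODATA constants; the function name is illustrative):

import math

h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s

def compton_shift_m(theta_rad: float) -> float:
    """Wavelength increase of the scattered photon, in metres:
    Delta_lambda = (h / (m_e * c)) * (1 - cos(theta))."""
    return (h / (m_e * c)) * (1.0 - math.cos(theta_rad))

print(compton_shift_m(math.pi / 2))   # ~2.43e-12 m at 90 degrees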
  • asked a question related to Scattering
Question
6 answers
Dear Sir,
I have designed a Raman spectrometer but am struggling with the signal. Could you please help me in this regard?
1. I have designed the spectrograph and calibrated it with Xe and Ar lamps; it is working nicely.
2. An objective lens is used to direct the laser light and collect the scattering from the sample.
3. A Semrock edge filter is used to remove the laser line.
However, only the laser line appears, not a single hump of the Raman spectrum.
Relevant answer
Answer
The long-pass filter will let through a significant amount of a 785 nm laser with a 6 nm bandwidth, and that light will be at a much higher intensity than the Raman signal. You are seeing a big "baseline" spike in your spectrum (the one with rhodamine), and that amount of light in the spectrometer gets scattered internally and creates a large background. What is your spectrometer, and what are its specifications?
With a good notch filter and a laser designed for Raman spectroscopy you can look at the laser line and not have it swamp the Raman signal on either side.
There are a lot of details to getting a Raman signal that can bite you. Go through everything and eliminate all sources of stray light. Good luck.
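As a quick check on the filter geometry described above, here is a minimal sketch converting a Raman shift to an absolute wavelength for a 785 nm laser; this is standard wavenumber arithmetic, not tied to any particular instrument:

def raman_wavelength_nm(laser_nm: float, shift_cm1: float) -> float:
    """Absolute wavelength of a Stokes band at the given Raman shift,
    using shift (cm^-1) = 1e7/lambda_laser(nm) - 1e7/lambda_signal(nm)."""
    return 1.0e7 / (1.0e7 / laser_nm - shift_cm1)

# A 500 cm^-1 band excited at 785 nm sits near 817 nm, only ~32 nm from
# the laser line, so the edge filter's cut-on placement is critical:
print(raman_wavelength_nm(785.0, 500.0))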
  • asked a question related to Scattering
Question
1 answer
I have two raster datasets of NDVI and LST, both calculated from Landsat data. I have extracted the NDVI and LST values for each pixel. Now I want to calculate a soil moisture index using the LST values. For that, I have plotted the LST vs. NDVI scatter plot, but I am unable to fit two different regression lines to the dry edge and the wet edge of the scatter diagram. How do I draw the graph?
Relevant answer
Answer
Howdy Daniel,
I did the exercise for the Xinjiang province in China. It was published as well. Have a look at the following paper to get a grasp of the problem approach.
Cheers,
Frank
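For the mechanics of fitting the two edges, here is a minimal sketch of the usual binning approach: bin the pixels by NDVI, take the maximum LST per bin for the dry edge and the minimum for the wet edge, then fit a line to each. It assumes ndvi and lst are flat NumPy arrays of per-pixel values; the bin count and the 5-pixel floor are illustrative choices:

import numpy as np

def triangle_edges(ndvi, lst, n_bins=20):
    edges = np.linspace(np.nanmin(ndvi), np.nanmax(ndvi), n_bins + 1)
    centers, lst_max, lst_min = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (ndvi >= lo) & (ndvi < hi) & np.isfinite(lst)
        if mask.sum() < 5:               # skip sparsely populated bins
            continue
        centers.append(0.5 * (lo + hi))
        lst_max.append(np.nanmax(lst[mask]))
        lst_min.append(np.nanmin(lst[mask]))
    centers = np.asarray(centers)
    dry = np.polyfit(centers, lst_max, 1)   # (slope, intercept) of dry edge
    wet = np.polyfit(centers, lst_min, 1)   # (slope, intercept) of wet edge
    return dry, wet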
  • asked a question related to Scattering
Question
3 answers
Hi,
I am working on HEC-HMS rainfall-runoff simulation, and I have seen many research authors showing scatter plots for calibration and validation, comparing simulated and observed discharge in both cases. The graphs are attached; can someone explain to me what they are trying to show?
Relevant answer
Answer
Hello,
The graphs show the deviations from the 1:1 line (the line of perfect agreement between simulated and observed values).
Try the following, which could be of help (a minimal plotting sketch follows after the list):
- Adding upper and lower confidence interval limits
- Showing on log-log plot axes
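Here is a minimal sketch of such a calibration/validation scatter with its 1:1 line; q_obs and q_sim are synthetic stand-ins for the observed and simulated discharge series:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
q_obs = rng.gamma(2.0, 50.0, size=200)            # stand-in observed flows
q_sim = q_obs * rng.normal(1.0, 0.15, size=200)   # stand-in simulated flows

lim = max(q_obs.max(), q_sim.max()) * 1.05
fig, ax = plt.subplots()
ax.scatter(q_obs, q_sim, s=10, alpha=0.6)
ax.plot([0, lim], [0, lim], "k--", label="1:1 line")  # perfect-agreement line
ax.set(xlabel="Observed discharge", ylabel="Simulated discharge",
       xlim=(0, lim), ylim=(0, lim))
ax.legend()
plt.show()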
  • asked a question related to Scattering
Question
11 answers
Dear Sir/Madam,
I am facing a problem related to a TranSIESTA (scattering) calculation. I selected the optimized molecule from Gaussian and attached left and right electrodes to it. After the scattering (TranSIESTA) calculation is over, I checked the carbon-carbon distances of the optimized molecule; the C-C distances after the scattering calculation are larger than before. Why is this happening, and is this OK or not?
Thank you
Relevant answer
Answer
I will check.
  • asked a question related to Scattering
Question
5 answers
I have to classify urban growth into infill, sprawl, ribbon, and scattered development. All the literature gives a theoretical basis, but I couldn't find tutorials for a practical approach. Can anyone aid me in finding materials for the same?
Relevant answer
Answer
Hello,
Landscape metrics calculated with FRAGSTATS may be of help; there are several indices for spatial pattern analysis.
  • asked a question related to Scattering
Question
7 answers
I am calculating the scattering and absorption of nanoparticles (a silica core with an Ag shell). I set the refractive index in the FDTD solver to 1.44, but I am not getting the results.
Does anyone know how to calculate scattering and absorption in Lumerical?
Thanks
Relevant answer
Answer
The procedure is to illuminate your particle with a TFSF source.
Inside the TFSF box, you'll need to surround the particle with a power sensor; that will give you the absorption cross-section with proper normalization.
Then you'll need to surround the TFSF box itself with another power sensor to extract the scattering cross section.
If that's already what you're doing, describe where things go wrong.
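If it helps, the normalization itself is just net power divided by incident intensity. A minimal sketch under that assumption (the variable names are illustrative placeholders for values exported from the two monitor boxes and the source, not Lumerical API calls):

def cross_section_m2(net_power_w: float, source_intensity_w_m2: float) -> float:
    """Cross-section (m^2) = net power through the monitor box / incident intensity.
    Net power into the inner box -> absorption; net power out of the
    box enclosing the TFSF region -> scattering."""
    return net_power_w / source_intensity_w_m2

p_abs, p_scat, i_src = 2.0e-15, 5.0e-15, 1.0   # example numbers only
print(cross_section_m2(p_abs, i_src), cross_section_m2(p_scat, i_src))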
  • asked a question related to Scattering