Physical Optics - Science topic

Explore the latest questions and answers in Physical Optics, and find Physical Optics experts.
Questions related to Physical Optics
  • asked a question related to Physical Optics
Question
3 answers
Hi,
I would like to understand the link between the experimental GRD (ground resolved distance) and the theoretical GSD (ground sample distance). I have seen the following formula used: GRD = 2·k·GSD, where the factor 2 accounts for the two-pixel (line-pair, cyc/mm) criterion and k is a factor that lumps together all other influences such as turbulence, aerosols, and camera aberrations.
In general k > 1; k = 1 corresponds to an ideal system.
Is there a formula to calculate k directly, so that the GRD can be found? Is there a maximum value of k beyond which one can say that the only remaining influence is the atmosphere and that the camera is diffraction-limited?
Are there any sources discussing this factor? I have not found any online.
Thank you very much
Relevant answer
Answer
In remote sensing, GRD is the smallest distance that can be resolved by the sensor. It is influenced by factors such as atmospheric conditions, sensor resolution, and altitude. GSD, on the other hand, is the distance between the centers of two adjacent pixels projected on the ground; it is determined by the sensor resolution and the altitude.
The formula you mentioned, GRD = 2·k·GSD, relates the two parameters, where k is a factor that incorporates all the influences other than the sensor resolution and altitude. This factor can be used to calculate the GRD value for a given GSD.
However, there is no direct formula for k. It is generally determined experimentally by measuring GRD values under different atmospheric conditions and sensor settings. In practice, k values can range from less than one to several, depending on the atmospheric conditions and the sensor characteristics.
One can say that the camera is diffraction-limited when k reaches its maximum value, which is typically around 1.5-2. Beyond this value the sensor performance is limited by the atmosphere, and further improvements in sensor resolution will not lead to a significant improvement (reduction) in GRD.
Several authors discuss the influence of atmospheric conditions on remote sensing parameters, including GRD and GSD. Some of the useful sources are the books "Remote Sensing and Image Interpretation" by Lillesand and Kiefer and "Introduction to Remote Sensing" by Campbell and Wynne, and the journals "IEEE Transactions on Geoscience and Remote Sensing" and "Remote Sensing of Environment."
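As a purely illustrative aid (not part of the answer above), here is a minimal Python sketch of the relation discussed in the thread, assuming the simple model GRD = 2·k·GSD with an empirically determined quality factor k; the numerical values are placeholders:
```python
def ground_resolved_distance(gsd_m, k=1.4):
    """Estimate GRD from GSD using the two-pixel (Nyquist) criterion scaled by an
    empirical quality factor k: k = 1 would describe an ideal, diffraction-limited
    system, while k > 1 lumps together turbulence, aerosol and camera aberrations."""
    return 2.0 * k * gsd_m

# e.g. a 0.5 m GSD sensor with an assumed overall quality factor of 1.4
print(ground_resolved_distance(0.5, k=1.4))   # -> 1.4 m resolved on the ground
```
In practice k would be fitted from measured GRD values (e.g. from resolution targets) rather than computed from a closed-form expression, as noted above.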
  • asked a question related to Physical Optics
Question
7 answers
Consider the problem of a known electromagnetic source in free space near a perfect electric conductor (PEC) object. Finding the total electromagnetic field in this situation can be done by solving Maxwell's equations numerically. At high frequencies this becomes very computationally expensive in terms of time and memory, so approximate techniques like Physical Optics (PO) are used instead.
PO in the above situation is based on finding the electric currents on the surface of the PEC object, getting the exact scattered field due to these currents, and adding it to the original fields radiated by the source.
From reading various resources and references, it is always mentioned that PO introduces two inherent approximations:
1- ignoring the diffraction effect, which is complemented afterwards through methods like PTD (physical theory of diffraction).
2- the PO current on the PEC surface is itself approximate.
And here comes my question: the justification of point 2 above. Why is the PO current considered approximate, although it is calculated from a true boundary condition at the PEC surface?
Relevant answer
Answer
One important approximation in PO is the tangent-plane approximation of the surface: each point is treated as if it belonged to an infinite plane tangent to the surface there, so the divergence (curvature) factor is omitted.
Besides, PO cannot account for edge diffraction, which is what leads to the PTD.
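A compact, standard textbook way to state the approximation behind point 2 (not taken from this thread) is to compare the exact induced current with the PO (tangent-plane) current:
```latex
% Exact boundary condition on the PEC surface (involves the unknown total field):
\mathbf{J}_s \;=\; \hat{\mathbf{n}} \times \mathbf{H}_{\mathrm{tot}}
            \;=\; \hat{\mathbf{n}} \times \bigl(\mathbf{H}_{\mathrm{inc}} + \mathbf{H}_{\mathrm{scat}}\bigr)

% PO replaces the unknown scattered field by its value for an infinite tangent
% plane, for which the tangential scattered and incident magnetic fields are equal:
\mathbf{J}_s^{\mathrm{PO}} \;\approx\;
\begin{cases}
2\,\hat{\mathbf{n}} \times \mathbf{H}_{\mathrm{inc}} & \text{lit region},\\
\mathbf{0} & \text{shadow region}.
\end{cases}
```
So the boundary condition itself is exact, but it involves the unknown total (incident plus scattered) field; the approximation enters when the scattered contribution is replaced by its infinite-flat-plane value and set to zero in the shadow.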
  • asked a question related to Physical Optics
Question
10 answers
I need to verify the classical PEC or impedance half-plane results obtained with the theory of Physical Optics by using HFSS or CST.
I have attached figures of the geometry and of the MATLAB results for an incidence angle of 60°: the total wave (= incident wave + geometrical-optics wave + diffracted wave) and the diffracted wave.
How can I verify this in HFSS or CST? Any suggestions are welcome.
Relevant answer
Answer
Are you plotting the fields at a particular distance from the edge, or are these normalised far-field results? They look to me like the far-field scattering from the 50 wavelength square plate. I assume the plate is not 50 wavelengths square at 1, 2, and 3 GHz, but only at one of them, perhaps 3 GHz.
It is not possible to plot the far-field of a plane wave - it is a delta-function, so what you have is the polar diagram of the reflected wave, centred about 120 degrees, the polar diagram of the shadow the plate makes in the plane wave (see Babinet's principle), centred about 240 degrees, and low-level diffraction between these angles, and outside them too, but not so much.
I think your initial GO and GTD result (not PO) that you are trying to verify is in the near-field. The near-field for an infinitely long edge on a semi-infinite half-plane extends out to infinity.
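Not from the answer above, but as a small reference sketch for such a comparison: Keller's GTD diffraction coefficient for a PEC half-plane is simple enough to evaluate directly (a sketch under the usual e^{jωt} convention; it diverges at the shadow boundaries, where the uniform UTD transition function would be needed instead):
```python
import numpy as np

def keller_halfplane_D(phi, phi0, k, pol="soft"):
    """Keller GTD diffraction coefficient for a 2D PEC half-plane.
    phi, phi0: observation and incidence angles measured from the face of the
    half-plane (radians); k: wavenumber.  'soft' = E parallel to the edge,
    'hard' = H parallel to the edge.  Diverges at phi = pi +/- phi0."""
    pref = -np.exp(-1j * np.pi / 4) / (2.0 * np.sqrt(2.0 * np.pi * k))
    t1 = 1.0 / np.cos((phi - phi0) / 2.0)
    t2 = 1.0 / np.cos((phi + phi0) / 2.0)
    return pref * (t1 - t2) if pol == "soft" else pref * (t1 + t2)

# Edge-diffracted field at distance rho from the edge for a unit-amplitude
# incident plane wave (2D cylindrical spreading):
#   u_d(rho, phi) ~ keller_halfplane_D(phi, phi0, k) * np.exp(-1j*k*rho)/np.sqrt(rho)
```
Comparing this (plus the GO incident and reflected terms) against near-field probes in a full-wave solver, away from the shadow boundaries, is one way to cross-check the MATLAB result.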
  • asked a question related to Physical Optics
Question
5 answers
Actually, the major problem with using In2S3 is the scarcity of In, in contrast to ZnS, which is composed of more abundant elements.
  • asked a question related to Physical Optics
Question
4 answers
The principles of wavefront reconstruction based on a geometric-optical reflection of the reconstructing light from surfaces of constant phase difference between the object and reference waves can also be used for a temporal reconstruction of an ultrashort object pulse [1]. This can be illustrated by the following simple example. Let the object beam consist of two δ-pulses delayed with respect to each other by τ. Suppose the object and reference beams propagate in opposite directions, toward each other, and that a δ-pulse is used as the reference. In that case the interference fringe structure will consist of two parallel planes separated by a distance τc/2, where c is the velocity of light. If we use a δ-pulse for reconstruction, it will be reflected sequentially from one plane and then from the other, and the time delay between the two reflected pulses will equal τ (see the short derivation after the references below). So the temporal structure of the object pulse is reconstructed by a simple geometric-optical reflection. The question is: how does this mechanism of temporal reconstruction of the object pulse relate to the known methods of time-resolved holography ([2, 3] and others)?
1. https://www.researchgate.net/publication/238033164_Ultrashort_light_pulse_scattering_by_3D_interference_fringe_structure
2. Rebane, A., & Feinberg, J. (1991). Time-resolved holography. Nature, 351(6325), 378-380.
3. Mazurenko, Y. T. (1990). Holography of wave packets. Applied Physics B, 50(2), 101-114.
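A one-line version of the geometric argument in the example above (simple kinematics, not from the cited papers): the two fringe planes are separated by d = τc/2, and the part of the reconstructing δ-pulse reflected from the farther plane travels an extra round-trip path 2d, so
```latex
\Delta t \;=\; \frac{2d}{c} \;=\; \frac{2}{c}\cdot\frac{\tau c}{2} \;=\; \tau ,
```
which is exactly the delay between the two original object δ-pulses.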
Relevant answer
Answer
This question is discussed in my recent paper, "Geometrooptical Mechanism of Wave-Front Reconstruction".
  • asked a question related to Physical Optics
Question
7 answers
I simulated an optical system in both sequential and non-sequential modes. I used a Gaussian entrance beam and then compared the "Physical optics" results in sequential mode with the "coherent irradiance" results in non-sequential mode.
The problem is that the results don't coincide with each other!
Now I want to know which one is more accurate and reliable.
Thanks!
Relevant answer
Answer
You should ensure that you are using the same coating files in both setups and the same specifications for the other parameters. I would suggest repeating the NSC run but ignoring scattering and splitting; you should then get reasonably close results.
Regards, Hossien
  • asked a question related to Physical Optics
Question
3 answers
I would like to model an antigen-antibody complex as a dielectric layer of finite thickness. For example, a single layer of prostate-specific antigens (PSA) and anti-PSA antibodies.
Could anyone suggest where I can find relevant references on their physical and optical properties?
Relevant answer
Answer
Hi Nuttawut,
the links below are some good leads. One is very old, included as a reminder that the specific refractive-index increments of several proteins have been determined and can be determined; the other is more recent.
Interestingly, both papers discuss whether to use a general value for the protein refractive-index increment, dn/dc (as is done, e.g., in the Biacore software, which uses the conversion factor G = 1000 RU / (ng/mm2)), or a protein-specific one (possibly based on the amino-acid composition).
Note that for some applications, like absolute protein concentration determination by CFCA, size determination of the Analyte, or agreement between calculated and measured Rmax, this assumption can be important. See e.g. our publication for an antibody-ELP fusion protein, which also discusses differences in orientation of the molecules (see the supplements).
Cheers Markus
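As a purely illustrative aid (not from the papers linked above), here is a minimal Python sketch of how such numbers are often combined, assuming the Biacore-style conversion of 1000 RU per ng/mm2 quoted above and de Feijter's relation for an adsorbed layer; the dn/dc value, response, and layer thickness are placeholder assumptions:
```python
# Placeholder values for illustration only
dn_dc = 0.185          # mL/g, a commonly assumed protein refractive-index increment
n_buffer = 1.333       # refractive index of the running buffer
d_nm = 10.0            # assumed thickness of the antigen/antibody layer, in nm
response_ru = 1500.0   # hypothetical SPR response in resonance units

# Biacore-style conversion quoted above: 1000 RU corresponds to ~1 ng/mm^2
gamma = response_ru / 1000.0        # surface density, ng/mm^2 (numerically = mg/m^2)

# de Feijter's relation  Gamma = d * (n_layer - n_buffer) / (dn/dc),
# rearranged for the effective refractive index of the layer
# (units work out directly with Gamma in mg/m^2, d in nm, dn/dc in mL/g)
delta_n = gamma * dn_dc / d_nm
n_layer = n_buffer + delta_n
print(f"surface density ~ {gamma:.2f} mg/m^2, effective n_layer ~ {n_layer:.3f}")
```
Whether a generic dn/dc or a composition-specific one is appropriate is exactly the question the two papers discuss.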
  • asked a question related to Physical Optics
Question
2 answers
I have obtained a polarization map from an FDTD simulation of the near-field E and H fields. Now I want to fit that image by solving for the polarization parameters (polarization angle, Stokes parameters, etc.) at each pixel. How can I fit the image, i.e. how can I solve for these parameters at each pixel, and what is the best way? Does anyone have MATLAB code for that?
Relevant answer
Answer
Dear Mr. Asy
I am sorry, your question is not related to my professional field.
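Since the thread did not produce a concrete recipe, here is a minimal sketch (not from the thread) of the standard per-pixel Stokes computation, assuming the FDTD output provides the complex transverse field components Ex and Ey on the same pixel grid; the file names are hypothetical:
```python
import numpy as np

# Complex 2D arrays of the transverse near-field components (hypothetical files)
Ex = np.load("Ex.npy")
Ey = np.load("Ey.npy")

# Per-pixel Stokes parameters (sign conventions for S3 vary between textbooks)
S0 = np.abs(Ex)**2 + np.abs(Ey)**2
S1 = np.abs(Ex)**2 - np.abs(Ey)**2
S2 = 2.0 * np.real(Ex * np.conj(Ey))
S3 = -2.0 * np.imag(Ex * np.conj(Ey))

# Polarization-ellipse orientation and ellipticity angles per pixel
psi = 0.5 * np.angle(S1 + 1j * S2)
chi = 0.5 * np.arcsin(np.clip(S3 / np.maximum(S0, 1e-30), -1.0, 1.0))
```
Fitting then reduces to comparing these maps pixel by pixel with the measured polarization parameters, e.g. by least squares.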
  • asked a question related to Physical Optics
Question
15 answers
The non-holographic mechanism of achromatic wavefront reconstruction is based on a geometric-optical reflection of the reconstructing radiation from surfaces of constant phase difference between the object and reference waves used to record the interference fringe structure in the bulk of the medium [1]. This mechanism was realized by femtosecond recording of the interference fringe structure in a very thick medium [2]. However, it seems that experimentally simpler realizations are possible; perhaps other kinds of waves could be used instead of light, etc. Can anybody suggest a new way to realize the non-holographic mechanism of achromatic wavefront reconstruction?
Relevant answer
Answer
My previous comment can be considered as a programme for future experimental research in this direction, for anybody who is interested in it. Now I can suggest a question for theoretical study. The problem is: what types of fields can be reconstructed by the geometric-optical mechanism? It is clear that not every field can be. We tried to make a first step in this direction at the end of [1], but it is only a first step.
  • asked a question related to Physical Optics
Question
4 answers
In PT-symmetric dual-waveguide systems, what is the meaning of reciprocal and non-reciprocal wave propagation? Does it have some relationship to unidirectional invisibility?
Relevant answer
Answer
In the simplest terms, a reciprocal two-port system is one in which either port can be taken as the input or the output without changing the transmission; in a non-reciprocal system this is not the case.
  • asked a question related to Physical Optics
Question
4 answers
Hi, I am actually trying to obtain the near-field scattering spectra around a metal alloy using a new numerical method. For this we have a system called polarization indirect microscopy. I have experimental results from that system expressed in terms of polarization parameters such as the Stokes parameters and the polarization angle, and I have computed the same quantities. However, the calculated near-field images show more (higher-spatial-frequency) field variations than the experimental ones. Why do I get more variation in the calculated near-field scattering spectra?
Relevant answer
Answer
I guess it would be helpful if you could share some more information about the system you are studying. In any case, in my experience the results of FDTD simulations are only as good as the dielectric functions you use. For example, the optical constants of gold in the mid-infrared vary drastically depending on the source, and so do the plasmon absorptions of the corresponding structures, both in frequency and in intensity...
  • asked a question related to Physical Optics
Question
4 answers
An Nd:YAG laser is placed in a Z-cavity whose mirrors are coated for 1.3 µm.
Relevant answer
Answer
Thank you Mr. John F Black. 
  • asked a question related to Physical Optics
Question
3 answers
Especially in the case of Z-scan analysis of nanoparticles, people use the term 'optical limiting effect'. Can anyone please explain the optical limiting effect and its significance?
Relevant answer
Answer
Optical limiting is actually often a desired effect.
It limits the transmission through a substance (such as a nanoparticle dispersion). As the intensity of the laser increases, the material starts to show nonlinear optical effects, often scattering. The scattering then contributes to the apparent absorption and reduces the laser intensity as the beam passes through the medium. This usually occurs more for metal or metal-composite nanoparticles than for other materials. Therefore, the addition of these metals can help limit other thermal effects in the analyte... I have attached some papers that you may find helpful.
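Not part of the answer above, but as a small numerical illustration of what an open-aperture Z-scan of an optical limiter looks like, here is a sketch of the thin-sample two-photon-absorption transmittance series of Sheik-Bahae et al. (valid for |q0| < 1); all numerical parameters are placeholders:
```python
import numpy as np

def open_aperture_zscan(z, beta, I0, L_eff, z0, n_terms=20):
    """Normalized open-aperture Z-scan transmittance for two-photon absorption
    in the thin-sample approximation: T = sum_m (-q0)^m / (m+1)^(3/2),
    with q0(z) = beta*I0*L_eff / (1 + z^2/z0^2)."""
    q0 = beta * I0 * L_eff / (1.0 + (z / z0)**2)
    m = np.arange(n_terms)[:, None]
    return np.sum((-q0[None, :])**m / (m + 1)**1.5, axis=0)

z = np.linspace(-30e-3, 30e-3, 201)   # sample position relative to focus (m)
T = open_aperture_zscan(z, beta=2e-11, I0=2e13, L_eff=1e-3, z0=5e-3)
# The dip in T at z = 0 is the optical-limiting signature: the higher the
# on-axis intensity, the lower the transmittance.
```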
  • asked a question related to Physical Optics
Question
4 answers
I am looking for the detailed third-order nonlinear susceptibility χ(3) of nitrogen, in order to calculate third-harmonic generation.
Relevant answer
  • asked a question related to Physical Optics
Question
1 answer
What I'm looking for is the equivalent of strain/stress-induced birefringence, but for the χ(3) tensor instead of the χ(1) tensor.
Relevant answer
Answer
Also, my interest is limited to high-precision metrology.
  • asked a question related to Physical Optics
Question
3 answers
I have recorded interference patterns using a CCD camera, and would like to plot pixel intensity against position along a cross-section of the pattern.
Relevant answer
Answer
Dear Alex, dear Eugene, thank you for your insights. They have helped me.
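Not from the thread, but as a minimal sketch of the requested plot, assuming the interferogram has been saved as an ordinary image file (the file name is a placeholder):
```python
import numpy as np
import matplotlib.pyplot as plt

img = plt.imread("fringes.png")     # hypothetical CCD frame saved as an image
if img.ndim == 3:                   # collapse RGB(A) to a single intensity channel
    img = img[..., :3].mean(axis=2)

row = img.shape[0] // 2             # horizontal cross-section through the centre
profile = img[row, :]

plt.plot(np.arange(profile.size), profile)
plt.xlabel("pixel index along the cross-section")
plt.ylabel("pixel intensity (camera counts)")
plt.show()
```
If the fringes are tilted, the same idea applies after extracting the samples along an arbitrary line through the image.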
  • asked a question related to Physical Optics
Question
3 answers
effective medium theory for glancing angle deposition
Relevant answer
Thanks juan sir.
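Not part of the original exchange, but as a minimal illustration of one common choice for porous glancing-angle-deposited films, here is a sketch of the two-phase Bruggeman effective-medium approximation (column material plus void); the permittivity and porosity values are placeholders:
```python
import numpy as np

def bruggeman_2phase(eps_a, eps_b, f_a):
    """Bruggeman effective permittivity for a two-phase mixture of spherical
    inclusions: solves f_a*(eps_a-e)/(eps_a+2e) + (1-f_a)*(eps_b-e)/(eps_b+2e) = 0.
    f_a is the volume fraction of phase a."""
    b = (3*f_a - 1)*eps_a + (2 - 3*f_a)*eps_b
    s = np.sqrt(b**2 + 8*eps_a*eps_b + 0j)
    r_plus, r_minus = (b + s) / 4, (b - s) / 4
    # take the physical root (non-negative imaginary part; for lossless
    # inputs this is the positive real root r_plus)
    return r_plus if r_plus.imag >= r_minus.imag else r_minus

# e.g. a column material with an assumed eps = 6.0 and a 40% void fraction
eps_eff = bruggeman_2phase(6.0 + 0j, 1.0 + 0j, f_a=0.6)
n_eff = np.sqrt(eps_eff)
```
For the strongly anisotropic columns produced by glancing-angle deposition, anisotropic mixing rules (e.g. Bruggeman with depolarization factors) are often used instead of the spherical-inclusion form sketched here.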
  • asked a question related to Physical Optics
Question
5 answers
I want to understand this subject better, and I have a few questions about it. In which cases is it preferable to use Jones matrices, and in which cases Mueller (Stokes) matrices, to describe the propagation of polarized light through numerous optical elements? What are the restrictions of each of these formalisms? Which similar models for the same purpose do you know and can recommend? Thank you in advance!
Relevant answer
Answer
The answer of Antoine Weis is correct, and Parvin's answer is not wrong: you can use Mueller (Stokes) matrices for both polarized and unpolarized light, whereas Jones matrices apply only to fully polarized light, but they are easier to handle. The second part of Lee's answer is not correct: you can also use Jones matrices for light with radial/azimuthal polarization (angular momentum), but they then depend on the azimuthal angle. See Xin et al., Optics Letters 39(7), 1984 (2014), "Generation of optical vortices by using spiral phase plates..."
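To make the distinction in the answers concrete, here is a small sketch (not from the thread) contrasting the two calculi; the quarter-wave-plate Jones matrix uses one common sign convention:
```python
import numpy as np

# Jones calculus: fully polarized light only.
# Horizontal linear polarization through a quarter-wave plate with its fast
# axis at 45 degrees becomes circularly polarized (up to a global phase).
E_in = np.array([1.0, 0.0], dtype=complex)            # Jones vector (Ex, Ey)
qwp_45 = (1 / np.sqrt(2)) * np.array([[1, -1j],
                                      [-1j, 1]])
E_out = qwp_45 @ E_in                                  # -> circular state

# Mueller calculus: acts on 4-component Stokes vectors, so it can also handle
# unpolarized or partially polarized light, which Jones calculus cannot.
S_unpolarized = np.array([1.0, 0.0, 0.0, 0.0])         # unpolarized light
polarizer_h = 0.5 * np.array([[1, 1, 0, 0],
                              [1, 1, 0, 0],
                              [0, 0, 0, 0],
                              [0, 0, 0, 0]])           # ideal horizontal polarizer
S_out = polarizer_h @ S_unpolarized                    # half the power, fully polarized
```
The second example has no Jones-calculus counterpart, which is the practical criterion for choosing between the two formalisms.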
  • asked a question related to Physical Optics
Question
2 answers
I want to understand the UAPO (uniform asymptotic physical optics) theory for calculating the field diffracted by a dielectric wedge. Please suggest books or other references on it.
Relevant answer
Answer
Thanks Sir
  • asked a question related to Physical Optics
Question
6 answers
Optical coherence explains frequency shifts by "impulsive stimulated Raman scattering" (ISRS) in excited atomic hydrogen. In particular, it gives the values of the frequency shifts of most quasars and of the so-called "compact galaxies" (Karlsson's law).
The observed circles, dotted or not, are explained by superradiance of Strömgren spheres.
Relevant answer
Answer
I do not have a cosmological model; I criticize the big bang because:
- It supposes that the frequency shifts observed in astronomy result from an expansion of the universe, while there is at least one optical effect which may explain them and which is commonly used in labs, named "impulsive stimulated Raman scattering" (ISRS).
In labs, ISRS is obtained using short light pulses, but its theory also works with the pulses that make up ordinary time-incoherent light, in excited atomic hydrogen.
- Frequency shifts occur by an exchange of energy between light beams propagating in excited atomic hydrogen. From Planck's law, the light beams have a temperature. The exchanges of energy obey thermodynamics, thus:
- generally, light transfers energy to the cold background radiation, and is therefore redshifted.
- the expansion of the solar wind cools it, so that excited hydrogen appears between 10 and 15 AU. In this region, energy is transferred from sunlight to the microwaves exchanged between Earth and the Pioneer 10 and 11 probes; their increase in frequency is interpreted as an "anomalous acceleration".
- As the redshift results from a light-matter interaction, like refraction, it has a dispersion that expansion does not produce. Explaining the dispersion of the multiplets observed in quasars then requires a change in time of the universal fine-structure constant.
- As the redshift requires excited atomic hydrogen, very hot objects get an extra redshift and seem anomalously far away. The distances of these objects are overestimated:
    - Quasars are the "accreting neutron stars" (which have never been observed, though they are computed to be bright). They are generally in our galaxy, as shown by their angular speed, which would give them a speed larger than c if their distance were deduced from their redshift.
    - Spiral galaxies are surrounded by excited atomic hydrogen, which increases their distance as deduced from the big-bang interpretation of Hubble's law. Thus dark matter and dark energy are introduced...
Use of correct, well-verified laws of physics provides a much simpler view of the Universe.
  • asked a question related to Physical Optics
Question
12 answers
I used the transfer matrix method (TMM) to design a filter in a one-dimensional photonic crystal using MATLAB. What other techniques should I try?
Relevant answer
Answer
The TMM usually refers to a 1D code that only models dielectric slabs. When it is generalized to handle inhomogeneities, it is more commonly called the method of lines or rigorous coupled-wave analysis. I cover all these methods in detail here:
As Sergey said, the TMM is inherently unstable. When it is fixed, it resembles scattering matrices, which have emerged as the most popular way to model multilayer problems. I have found the enhanced transmittance matrix method to be almost 10x faster than scattering matrices, but less memory efficient. There are also R and H matrices that claim greater efficiency, but I have not personally worked with them. Again, check out the lectures above. Hopefully you will like them.
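Not from the lectures referenced above, but as a minimal, self-contained illustration of the basic (characteristic-matrix) TMM for a lossless 1D stack at normal incidence, with placeholder layer values; note that this simple form is the one that can become numerically unstable in the situations discussed above:
```python
import numpy as np

def stack_RT(n_layers, d_layers, wavelengths, n_in=1.0, n_sub=1.52):
    """Reflectance and transmittance of a 1D dielectric stack at normal incidence
    using characteristic (transfer) matrices.  Layers are listed from the
    incidence side; thicknesses and wavelengths share the same length unit."""
    R, T = [], []
    for lam in wavelengths:
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / lam
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        R.append(abs((n_in * B - C) / (n_in * B + C))**2)
        T.append(4 * n_in * n_sub / abs(n_in * B + C)**2)
    return np.array(R), np.array(T)

# Quarter-wave Bragg mirror designed for 1550 nm (placeholder indices)
nH, nL, lam0 = 2.3, 1.45, 1550.0
n_layers = [nH, nL] * 8
d_layers = [lam0 / (4 * nH), lam0 / (4 * nL)] * 8
wl = np.linspace(1200.0, 1900.0, 400)
R, T = stack_RT(n_layers, d_layers, wl)   # R shows the stop band around 1550 nm
```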
  • asked a question related to Physical Optics
Question
7 answers
Physical optics propagation is normally done between planes perpendicular to the optical axis. How, then, can e.g. lenses be treated? As far as I know, this is done in Zemax by geometric ray tracing from a plane in front of the lens to a plane behind the lens. How are the ray directions in the front plane determined, and how is the electric field reconstructed in the back plane? I would appreciate any hint, citation, etc.
Relevant answer
Answer
One common approach is to resolve the non-paraxial (vector) field into plane waves, which then propagate through the optical train according to the Huygens-Fresnel principle. According to this principle, each point on every interface is illuminated by each of these plane waves and then acts as a point source of spherical waves. These outgoing spherical waves may in turn be decomposed into plane waves and allowed to propagate further. At the end, all of the contributions of all of the plane waves are recombined, in phase and with their full vector nature, along the surface of interest. Just as it sounds, this is very messy. Fortunately, Emil Wolf and his collaborators worked out the formalism, and the result is manageable. The original reference is E. Wolf, "Electromagnetic Diffraction in Optical Systems. I. An Integral Representation of the Image Field," Proc. R. Soc. Lond. A 253, 349-357 (1959). The technique is now called the Debye-Wolf integral formulation.
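As a small illustration of the plane-wave decomposition described above (and not of the full vector Debye-Wolf integral itself), here is a scalar angular-spectrum sketch of propagating a sampled field between two parallel planes; the grid and beam parameters are placeholders:
```python
import numpy as np

def angular_spectrum_propagate(U0, wavelength, dx, dz):
    """Propagate a sampled scalar field U0 (N x N, sample spacing dx) over a
    distance dz by decomposing it into plane waves, applying the exact phase
    exp(i*kz*dz) to each, and recombining; evanescent components are dropped."""
    k = 2 * np.pi / wavelength
    N = U0.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.where(kz_sq > 0, np.exp(1j * kz * dz), 0.0)
    return np.fft.ifft2(np.fft.fft2(U0) * H)

# e.g. propagate a Gaussian beam waist by 5 mm (SI units, placeholder numbers)
N, dx, wavelength = 512, 2e-6, 633e-9
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
U0 = np.exp(-(X**2 + Y**2) / (100e-6)**2)
U1 = angular_spectrum_propagate(U0, wavelength, dx, dz=5e-3)
```
The vector, high-NA case adds the polarization bookkeeping and the aplanatic apodization of the Debye-Wolf formulation on top of this same plane-wave idea.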