Physical Optics - Science topic
Explore the latest questions and answers in Physical Optics, and find Physical Optics experts.
Questions related to Physical Optics
Hi,
I would like to understand the link between the experimental ground resolved distance (GRD) and the theoretical ground sample distance (GSD). I have seen the formula GRD = 2·k·GSD, where the factor 2 converts one pixel to a resolvable line pair (cyc/mm) and k is a factor that includes all other influences such as turbulence, aerosols, and camera aberrations.
In general k > 1, and k = 1 corresponds to an ideal system.
Is there a formula to calculate k directly, so that the GRD can be predicted? Is there a maximum value of k beyond which one can say that the only remaining influence is the atmosphere and that the camera is diffraction-limited?
Are there sources discussing this factor? I have not found any on the internet.
Thank you very much
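The relation described in the question can be sketched in a few lines. This is only an illustration of GRD = 2·k·GSD with an assumed, hypothetical degradation factor k; as the question notes, there is no simple closed formula for k itself, since it lumps together turbulence, aerosol scattering, and camera aberrations.

```python
def grd_from_gsd(gsd_m, k):
    """Ground resolved distance from ground sample distance.

    GRD = 2 * k * GSD, where the factor 2 converts one pixel to a
    resolvable line pair and k >= 1 collects all degradations
    (turbulence, aerosols, aberrations); k = 1 is the ideal system.
    """
    if k < 1.0:
        raise ValueError("k < 1 has no physical meaning in this model")
    return 2.0 * k * gsd_m

# Example: 0.5 m GSD with an assumed degradation factor k = 1.3
print(grd_from_gsd(0.5, 1.3))  # -> 1.3 (metres)
```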
Consider the problem of known electromagnetic source in free space near a perfect electric conductor (PEC) object. Finding the total electromagnetic field in this situation can be done through solving Maxwell's equations numerically. At high frequencies, this becomes very computationally expensive in terms of time and memory. Approximate techniques like Physical Optics (PO) are then used.
PO in the above situation is based on finding the electric currents on the surface of the PEC object, getting the exact scattered field due to these currents, and adding it to the original fields radiated by the source.
From reading various resources and references, it is always mentioned that PO carries two inherent approximations:
1- ignoring diffraction effects, which are accounted for afterwards through methods like the physical theory of diffraction (PTD);
2- the PO current on the PEC surface is itself approximate.
And here comes my question: the justification of point 2. Why is the PO current considered approximate, although it is calculated from a true boundary condition at the PEC surface?
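For reference, the PO current ansatz at one surface point can be written down directly. A minimal sketch, assuming a plane wave and a simple lit/shadow test by the sign of k̂·n̂: the prescription J = 2 n̂ × H_inc is exact only for an infinite flat PEC plane; applying it pointwise to a curved or finite body (the tangent-plane approximation) is precisely what makes the PO current approximate even though the boundary condition it mimics is exact.

```python
import numpy as np

def po_current(n_hat, h_inc, k_hat):
    """Physical-optics surface current density at one surface point.

    J_PO = 2 * n_hat x H_inc on the lit region, 0 in the shadow.
    n_hat : outward unit normal, h_inc : incident magnetic field,
    k_hat : propagation direction of the incident plane wave.
    """
    lit = np.dot(k_hat, n_hat) < 0.0   # wave hits the outward-facing side
    return 2.0 * np.cross(n_hat, h_inc) if lit else np.zeros(3)

# x-polarized H, wave travelling along -z onto a surface with normal +z
n = np.array([0.0, 0.0, 1.0])
h = np.array([1.0, 0.0, 0.0])
k = np.array([0.0, 0.0, -1.0])
print(po_current(n, h, k))   # 2 * (z x x) = 2y  ->  [0. 2. 0.]
```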
I need to verify the classical PEC or impedance half-plane results obtained with the theory of Physical Optics by using HFSS or CST.
I have attached figures of the geometry and the MATLAB results for an incidence angle of 60°: the total wave (= incident wave + geometrical-optics wave + diffracted wave) and the diffracted wave alone.
How can I verify these in HFSS or CST? Any suggestions?
Actually, the major problem with using In2S3 is the scarcity of In, in contrast to ZnS, which is composed of more abundant elements.
The principle of wavefront reconstruction based on a geometric-optical reflection of the reconstructing light from surfaces of constant phase difference between the object and reference waves can also be used for temporal reconstruction of an ultrashort object pulse [1]. This can be illustrated by a simple example. Let the object beam consist of two δ-pulses delayed with respect to each other by τ, and suppose the object and reference beams propagate in opposite directions toward each other, with a δ-pulse used as the reference. In that case the interference fringe structure consists of two parallel planes separated by a distance τc/2, where c is the speed of light. If we use a δ-pulse for reconstruction, it is reflected sequentially from one plane and then from the other, and the time delay between the two reflected pulses is equal to τ. So the temporal structure of the object pulse is reconstructed by a simple geometric-optical reflection. The question is: how does this mechanism of temporal reconstruction of the object pulse relate to the known methods of time-resolved holography ([2, 3] and others)?
1. https://www.researchgate.net/publication/238033164_Ultrashort_light_pulse_scattering_by_3D_interference_fringe_structure
2. Rebane, A., & Feinberg, J. (1991). Time-resolved holography. Nature, 351(6325), 378-380.
3. Mazurenko, Y. T. (1990). Holography of wave packets. Applied Physics B, 50(2), 101-114.
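The arithmetic in the example above is worth making explicit: the fringe-plane spacing is d = τc/2, and a reconstructing pulse reflected from the two planes picks up an extra round-trip path 2d, so the delay between the reflected pulses is 2d/c = τ. A minimal numerical check, with an illustrative τ of 100 fs:

```python
c = 299_792_458.0          # speed of light, m/s
tau = 100e-15              # delay between the two object delta-pulses, s

# Counter-propagating recording beams produce fringe planes spaced d = tau*c/2.
d = tau * c / 2.0

# A reconstructing pulse reflected off the two planes travels an extra 2*d,
# so the delay between the two reflected pulses is 2*d/c = tau, as claimed.
delay = 2.0 * d / c
print(d, delay)            # ~1.5e-5 m and 1e-13 s
```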
I simulated an optical system in both sequential and non-sequential modes. I used a Gaussian entrance beam and then compared the results of "Physical Optics" in sequential mode with the "coherent irradiance" in non-sequential mode.
The problem is that the results do not coincide with each other.
Now, I want to know which one is more accurate and reliable?
Thanks!
I would like to model an antigen-antibody complex as a dielectric layer of finite thickness; for example, a single layer of prostate-specific antigen (PSA) and anti-PSA antibodies.
Could anyone suggest where I can find relevant references on their physical and optical properties?
I have obtained a polarization map from an FDTD simulation of the near-field electromagnetic fields. Now I want to fit that image by solving for the polarization parameters (polarization angle, Stokes parameters, etc.) at each pixel. How can I fit the image, i.e. solve for these parameters pixel by pixel, and what is the best way to do it? Does anyone have MATLAB code for this?
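If the complex field components Ex and Ey are available at each pixel (as FDTD monitors typically export), the Stokes parameters follow directly from the standard definitions; no fitting is needed. A minimal sketch in Python/NumPy (the MATLAB version is a line-for-line translation); note the sign convention for S3 varies between texts:

```python
import numpy as np

def stokes_map(ex, ey):
    """Per-pixel Stokes parameters from complex field components.

    ex, ey : 2-D complex arrays (e.g. exported from an FDTD near-field
    monitor). Returns S0, S1, S2, S3 and the polarization azimuth psi.
    """
    s0 = np.abs(ex) ** 2 + np.abs(ey) ** 2
    s1 = np.abs(ex) ** 2 - np.abs(ey) ** 2
    s2 = 2.0 * np.real(np.conj(ex) * ey)
    s3 = 2.0 * np.imag(np.conj(ex) * ey)   # sign convention varies
    psi = 0.5 * np.arctan2(s2, s1)         # polarization (azimuth) angle
    return s0, s1, s2, s3, psi

# Sanity check: x-polarized pixels give S1 = S0 and psi = 0
ex = np.ones((2, 2), dtype=complex)
ey = np.zeros((2, 2), dtype=complex)
s0, s1, s2, s3, psi = stokes_map(ex, ey)
```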
The non-holographic mechanism of achromatic wavefront reconstruction is based on a geometric-optical reflection of the reconstructing radiation from surfaces of constant phase difference between the object and reference waves used to record the interference fringe structure in the bulk of the medium [1]. This mechanism was realized by femtosecond recording of the interference fringe structure in a very thick medium [2]. However, it seems that some experimentally easier routes to realization are possible; perhaps other kinds of waves could be used instead of light, etc. Can anybody suggest a new method of realizing the non-holographic mechanism of achromatic wavefront reconstruction?
In PT-symmetric dual-waveguide systems, what is the meaning of reciprocal and non-reciprocal wave propagation? Is it related to unidirectional invisibility?
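For context, the standard coupled-mode model of a PT-symmetric two-waveguide system has the Hamiltonian H = [[iγ, κ], [κ, −iγ]] (gain +γ in one guide, loss −γ in the other, coupling κ), with eigenvalues ±√(κ² − γ²): real below the exceptional point κ = γ (unbroken PT phase), purely imaginary above it. A short numerical check of this, with illustrative parameter values:

```python
import numpy as np

def pt_eigenvalues(gamma, kappa):
    """Supermode eigenvalues of a PT-symmetric two-waveguide coupler.

    Coupled-mode Hamiltonian: H = [[i*gamma, kappa], [kappa, -i*gamma]].
    Eigenvalues are +/- sqrt(kappa**2 - gamma**2): real for kappa > gamma
    (unbroken PT symmetry), purely imaginary for kappa < gamma (broken).
    """
    h = np.array([[1j * gamma, kappa], [kappa, -1j * gamma]])
    return np.linalg.eigvals(h)

print(pt_eigenvalues(0.5, 1.0))  # real pair, |lambda| = sqrt(0.75)
print(pt_eigenvalues(2.0, 1.0))  # imaginary pair: broken PT phase
```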
Hi. I am trying to obtain the near-field scattering spectra around a metal alloy using a new numerical method; for this we have a system called polarization indirect microscopy. I have experimental results from this system in terms of polarization parameters such as Stokes parameters, polarization angle, etc., and I have performed the same analysis in calculations. However, the calculated near-field images show more (high-frequency) field variations than the experimental ones. Why do I get more variations in the calculated near-field scattering spectra?
An Nd:YAG laser placed in a Z-cavity with mirrors coated for 1.3 µm.
Especially in the case of Z-scan analysis of nanoparticles, people use the term "optical limiting effect". Can anyone please explain the optical limiting effect and its significance?
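By way of illustration: optical limiting means the transmittance drops as the incident intensity rises (useful for sensor and eye protection). For the common case of two-photon absorption, the open-aperture Z-scan normalized transmittance is approximately T(z) = ln(1 + q0)/q0 with q0 = βI0·Leff/(1 + (z/z0)²). A sketch with illustrative (assumed) parameter values, showing the characteristic dip at the focus:

```python
import numpy as np

def zscan_transmittance(z, beta, i0, l_eff, z0):
    """Open-aperture Z-scan transmittance for two-photon absorption.

    T(z) = ln(1 + q0(z)) / q0(z),  q0(z) = beta*I0*Leff / (1 + (z/z0)**2).
    The dip at z = 0 (beam focus, highest intensity) is the
    optical-limiting signature: more intensity -> less transmission.
    """
    q0 = beta * i0 * l_eff / (1.0 + (z / z0) ** 2)
    return np.log1p(q0) / q0

z = np.linspace(-20e-3, 20e-3, 201)   # scan positions, m
t = zscan_transmittance(z, beta=5e-11, i0=1e13, l_eff=1e-3, z0=2e-3)
# t has its minimum at z = 0 and approaches 1 far from focus
```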
I am looking for the detailed third-order nonlinear susceptibility χ(3) of nitrogen to calculate third-harmonic generation.
What I'm looking for is the equivalent of strain/stress-induced birefringence, but for the χ(3) tensor instead of the χ(1) tensor.
I have recorded interference patterns with a CCD camera, and would like to plot a graph of pixel intensity against position along a cross-section.
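Extracting such a profile is a one-liner once the frame is a 2-D array. A minimal sketch in Python/NumPy, using a synthetic fringe pattern as a stand-in for the CCD frame (load a real frame with e.g. `imageio.imread` and convert to grayscale first):

```python
import numpy as np
# import matplotlib.pyplot as plt   # uncomment for the final plot

def cross_section(image, row=None):
    """Intensity profile along one row of a fringe image.

    image : 2-D array of pixel intensities; row defaults to the
    central row of the frame.
    """
    image = np.asarray(image, dtype=float)
    if row is None:
        row = image.shape[0] // 2
    return image[row, :]

# Synthetic stand-in for a CCD frame: vertical fringes of period 20 px
y, x = np.mgrid[0:100, 0:200]
frame = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / 20.0))

profile = cross_section(frame)
# plt.plot(profile); plt.xlabel("pixel"); plt.ylabel("intensity"); plt.show()
```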
Which effective medium theory is appropriate for glancing angle deposition (GLAD) films?
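One commonly used option for porous films such as GLAD coatings is the Bruggeman effective medium approximation, which treats both phases symmetrically. Its self-consistency condition is a quadratic in the effective permittivity and can be solved in closed form; a sketch, with the material values in the example purely illustrative:

```python
import numpy as np

def bruggeman(eps_a, eps_b, f_a):
    """Bruggeman effective permittivity of a two-phase mixture.

    Solves  f_a*(eps_a-e)/(eps_a+2e) + (1-f_a)*(eps_b-e)/(eps_b+2e) = 0
    (quadratic in e) and returns the physical root. For a GLAD film,
    phase a can be the void (eps_a = 1) with volume fraction f_a set
    by the deposition angle.
    """
    b = f_a * (2.0 * eps_a - eps_b) + (1.0 - f_a) * (2.0 * eps_b - eps_a)
    return (b + np.sqrt(b * b + 8.0 * eps_a * eps_b)) / 4.0

# Illustrative: 50% porous film (eps = 6.0) with air voids
print(bruggeman(1.0, 6.0, 0.5))   # between 1 and 6 (~2.8)
```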
I want to understand this subject better, and I have a few questions about it. In which cases is it preferable to use Jones matrices, and in which cases Mueller (Stokes) matrices, to describe polarized light propagating through numerous optical elements? What are the restrictions of each model? Which similar models for the same purpose do you know and can recommend? Thank you in advance!
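As a rough rule: the Jones calculus handles fully polarized, coherent light (it tracks field amplitudes and phases, so it can describe interference), while the Mueller/Stokes calculus works on intensities and so can also describe partially polarized light and depolarizing elements. A minimal Jones-calculus sketch, cascading elements by matrix multiplication:

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(retardance, theta):
    """Jones matrix of a retarder with its fast axis at angle theta."""
    return rot(theta) @ np.diag([1.0, np.exp(1j * retardance)]) @ rot(-theta)

# Horizontal linear light through a quarter-wave plate at 45 degrees
# becomes circular: equal |Ex|, |Ey| with a 90-degree phase difference.
e_in = np.array([1.0, 0.0])
e_out = waveplate(np.pi / 2, np.pi / 4) @ e_in
print(np.abs(e_out))   # [0.707..., 0.707...]
```

A chain of elements is just the product of their Jones matrices (rightmost applied first), which is what makes the formalism convenient for "numerous optical elements".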
I want to understand the uniform asymptotic physical optics (UAPO) theory for calculating the field diffracted by a dielectric wedge. Please suggest books or other references for it.
Optical coherence explains frequency shifts through "impulsive stimulated Raman scattering" (ISRS) in excited atomic hydrogen. In particular, it gives the values of the frequency shifts of most quasars and of so-called "compact galaxies" (Karlsson's law).
The observed circles, dotted or not, are explained by superradiance of Strömgren spheres.
I used the transfer matrix method (TMM) to design a filter in a one-dimensional photonic crystal using MATLAB. What other techniques should I try?
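For concreteness, the normal-incidence TMM mentioned in the question reduces to multiplying the characteristic matrix of each layer and reading the reflectance off the result. A compact sketch (Python/NumPy here; the MATLAB version is analogous), with an illustrative quarter-wave Bragg mirror as the test case:

```python
import numpy as np

def tmm_reflectance(n_layers, d_layers, lam, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic-matrix transfer matrix method (TMM)."""
    m = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / lam          # layer phase thickness
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, n_sub])              # stack + substrate
    r = (n_in * b - c) / (n_in * b + c)
    return np.abs(r) ** 2

# Quarter-wave Bragg mirror: 5 x (H L), n_H = 2.35, n_L = 1.38, lam0 = 550 nm
lam0 = 550e-9
ns = [2.35, 1.38] * 5
ds = [lam0 / (4.0 * n) for n in ns]
print(tmm_reflectance(ns, ds, lam0))   # high reflectance at the design wavelength
```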
Physical optics propagation is normally done between planes perpendicular to the optical axis. How, then, can e.g. lenses be treated? As far as I know, this is done in Zemax by geometric ray tracing from a plane in front of the lens to a plane behind the lens. How are the ray directions in the front plane determined, and how is the electric field reconstructed in the back plane? I would appreciate any hint, citation, etc.
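While I cannot speak to Zemax's proprietary hybrid ray-trace transfer, a common scalar-Fourier-optics alternative for the thin-lens limit may be a useful comparison point: model the lens as a quadratic phase mask exp(−ik r²/(2f)) applied to the field, then propagate with the angular-spectrum method. A sketch with illustrative parameters (Gaussian input, one focal length of propagation):

```python
import numpy as np

lam, f = 633e-9, 0.1            # wavelength and focal length (m), illustrative
n, width = 512, 8e-3            # grid samples and physical window (m)
k = 2.0 * np.pi / lam

x = (np.arange(n) - n // 2) * (width / n)
xx, yy = np.meshgrid(x, x)

field = np.exp(-(xx**2 + yy**2) / (0.5e-3) ** 2)              # Gaussian beam
field = field * np.exp(-1j * k * (xx**2 + yy**2) / (2.0 * f)) # thin-lens phase

# Angular-spectrum propagation over one focal length
fx = np.fft.fftfreq(n, d=width / n)
fxx, fyy = np.meshgrid(fx, fx)
kz = np.sqrt((k**2 - (2 * np.pi * fxx) ** 2
                   - (2 * np.pi * fyy) ** 2).astype(complex))
focus = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * f))

# The beam concentrates near the focal plane: peak irradiance rises sharply.
print(np.abs(focus).max() ** 2 / np.abs(field).max() ** 2)
```

This only works where the thin-lens and paraxial-sampling assumptions hold; thick or fast optics are exactly the case where hybrid ray-based transfer (as in Zemax POP) is used instead.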