
# Optics - Science topic

Explore the latest questions and answers in Optics, and find Optics experts.
Questions related to Optics
• asked a question related to Optics
Question
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the temperature of the food obtained was quite good (i.e. close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's reported temperature suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
As I suspected the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens, to ensure that the camera could only see the food through the hole and no longer the hot pan. The food temperature obtained by the camera was then correct again, but it would go wrong if I took the paper out.
I would appreciate any explanation of this phenomenon, and any solution, from either physics or optics.
A short update: we talked to the manufacturer, and they confirmed the phenomenon is as Ang Feng explained. We are going through possible solutions.
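The reflected-background effect suspected in the question can be sanity-checked with a rough radiometric model: an opaque surface emits ε·B(T_obj) and reflects (1−ε)·B(T_refl), so a nearby 230 °C pan inflates the apparent temperature of 25 °C food. The sketch below uses the broadband Stefan-Boltzmann law as a stand-in for the camera's LWIR band, so the numbers are only illustrative, and the emissivity of 0.95 is an assumed value:

```python
# Rough sketch: reflected-temperature error in thermography.
# A thermal camera measures radiance, not temperature. For an opaque surface
# with emissivity eps, the received signal is approximately
#   S = eps*B(T_obj) + (1 - eps)*B(T_refl),
# where T_refl is the temperature of whatever the surface reflects (here, the
# hot pan). The broadband Stefan-Boltzmann law is used below as a stand-in for
# the camera's 8-14 um band, so the result is illustrative only.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def apparent_temperature_c(t_obj_c, t_refl_c, emissivity):
    """Temperature a radiometer would report if it assumed emissivity = 1."""
    t_obj = t_obj_c + 273.15
    t_refl = t_refl_c + 273.15
    signal = emissivity * SIGMA * t_obj**4 + (1 - emissivity) * SIGMA * t_refl**4
    return (signal / SIGMA) ** 0.25 - 273.15

# Food at 25 C reflecting a 230 C pan, assumed emissivity 0.95:
t_app = apparent_temperature_c(25.0, 230.0, 0.95)
```

For these assumed numbers t_app lands in the high 40s °C, the same order as the observed jump to 40 °C; blocking the pan with the paper effectively replaces T_refl by room temperature, which is why the reading then becomes correct.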
• asked a question related to Optics
Question
For my experiments I need circularly polarized laser light at 222 nm.
Can anyone tell me whether there is any important difference between a λ/4 phase plate and a Fresnel rhomb? Prices seem to be comparable. My intuition tells me that the phase plate could be significantly less durable, due to optical damage of the UV AR coatings on all of its surfaces. It also seems much harder to manufacture a UV phase plate, since there are very limited light sources for testing, while a Fresnel rhomb seems easier to produce and is achromatic. Which one should I choose?
I think that absorption is not a problem, since I want to install the rhomb or λ/4 plate in the seed pulse path before the final amplifier. The pulse that I will work with is only needed for injection locking of the spectrum and to a large extent can be strongly attenuated. The same goes for space; it's not crucial.
It seems that in my case (a nanosecond, narrow-bandwidth pulse) the difference between a rhomb and a plate can be neglected. When it comes to ultrashort/broadband pulses, the thickness of a rhomb becomes a drawback, since it introduces additional dispersion (pulse stretching) and even self-focusing and absorption.
• asked a question related to Optics
Question
There are many fields where light fields can be utilized. For example, they are used in microscopy [1] and for vision-based robot control [2]. What additional applications do you know of?
[1] Li, H., Guo, C. and Jia, S., "High-resolution light-field microscopy," Frontiers in Optics, FW6D. 3 (2017).
[2] Tsai, D., Dansereau, D. G., Peynot, T. and Corke, P. , "Image-Based Visual Servoing With Light Field Cameras," IEEE Robotics and Automation Letters 2(2), 912-919 (2017).
Hi Vladimir Farber, I think one of the fields of light field imaging not yet mentioned is the quality assessment of light field images when they are propagated through a communication channel. For your kind reference, one of the published papers in this field is:
"Exploiting saliency in quality assessment for light field images."
• asked a question related to Optics
Question
Hi. I have a question. Do substances (for example Fe or benzene) in trace amounts (for example micrograms per liter) cause light refraction? If they do, is this refraction large enough to be detected? And if so, is this refraction unique for each substance?
I also need to know: if we have a solution containing different substances, can refraction help us determine what the substances are? Can it measure the concentration?
It is also on ResearchGate. Author: Shangli Pu, "Measurement of refractive index of magnetic fluid by retro-reflection on fiber optics end face."
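To put numbers on the trace-amount part of the question: for dilute solutions the index change scales with the specific refractive-index increment dn/dc. The sketch below assumes dn/dc ≈ 0.15 mL/g, a typical order of magnitude for small molecules in water, not a measured value for Fe or benzene specifically:

```python
# Back-of-envelope: refractive-index change from a trace solute.
# For dilute solutions, dn ~ (dn/dc) * c, where dn/dc is the specific
# refractive index increment. 0.15 mL/g is an assumed typical magnitude.

def delta_n(conc_g_per_ml, dn_dc_ml_per_g=0.15):
    """Estimated refractive index change for a dilute solution."""
    return dn_dc_ml_per_g * conc_g_per_ml

# 1 microgram per liter = 1e-9 g/mL
dn_trace = delta_n(1e-9)

# Rough instrument resolutions: ~1e-4 (Abbe refractometer),
# ~1e-6 to 1e-7 (interferometric / SPR instruments)
detectable_interferometric = dn_trace > 1e-7
```

At µg/L the index change is around 1e-10, several orders of magnitude below even interferometric refractometers, so refractometry alone cannot detect, identify, or quantify such traces; at mg/mL concentrations it becomes feasible, and the fiber-end retro-reflection method in the reference above is one way to measure the index.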
• asked a question related to Optics
Question
I have profiled a collimated pulsed laser beam (5 mm) at different pulse energies by delaying the Q-switch, and I found the profile to be approximately Gaussian. Now I have placed a negative meniscus lens to diverge the beam, and I put a surface where the beam spot size is 7 mm. Should the final beam profile (at the 7 mm spot size) still be Gaussian, or will the negative lens change the Gaussian profile? Is there any way to calculate the intensity profile theoretically, without doing the beam profiling again by methods like the razor-blade method? Thanks.
Gaussian laser beams propagate as Gaussian beams if their intensity is sufficiently weak that they do not affect the refractive index of the medium through which they propagate, so that a linear approximation is valid. Furthermore, the "diameter" of the laser beam should be large compared to the wavelength (at least a few micrometers) so that the Fresnel approximation is valid, and the medium should be (fairly) homogeneous. This is related to the paraxial approximation used (small angles of deflection). The formulas for Gaussian beam propagation are easily found in various textbooks. There is no fundamental difference between divergent and convergent lenses. The standard paraxial matrix formulation used in geometrical optics can be used to calculate the transformation of Gaussian beams. The beam is characterized by its waist diameter W0 and the position x0 of this waist with respect to the lens.
• asked a question related to Optics
Question
I would like to calculate the mode field diameter (MFD) of a step-index fiber at different taper ratios. I understand that at a particular wavelength the MFD will decrease as the fiber is tapered, and it may increase if it is tapered further. I am looking to reproduce the figures (attached) given in US Patent 9946014. Is there any formula I may use, or does it involve some complex calculations?
Using COMSOL, MATLAB, or other simulation software it is easy to calculate the MFD. You need to consider the change of the waveguiding structure as the taper diameter decreases: initially silica/(silica+Ge), and then air/silica. I believe you cannot use a simple formula to get an accurate result.
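For a quick analytic estimate while the core still guides, Marcuse's fit for the fundamental-mode radius of a step-index fiber, w/a ≈ 0.65 + 1.619·V^(-3/2) + 2.879·V^(-6), reproduces exactly the behavior described in the question: the MFD first shrinks with the core, then grows as V drops. The SMF-28-like parameters below are assumptions, and, as the answer above notes, the formula says nothing about the later air/silica-guided regime:

```python
import math

# Marcuse's approximation for the fundamental-mode field radius of a
# step-index fiber:  w/a = 0.65 + 1.619 V^-1.5 + 2.879 V^-6
# Reasonable roughly for 0.8 < V < 2.5; once a taper pushes V well below 1
# the mode spreads into the cladding and the core/cladding picture itself
# breaks down. SMF-28-like index and core values below are assumptions.

def v_number(a, n_core, n_clad, wavelength):
    """Normalized frequency V = (2*pi*a/lambda) * NA."""
    na = math.sqrt(n_core**2 - n_clad**2)
    return 2 * math.pi * a * na / wavelength

def mfd_marcuse(a, n_core, n_clad, wavelength):
    """Mode-field diameter 2w from Marcuse's fit."""
    V = v_number(a, n_core, n_clad, wavelength)
    w = a * (0.65 + 1.619 * V**-1.5 + 2.879 * V**-6)
    return 2 * w

# Untapered SMF-28-ish fiber at 1550 nm (assumed a = 4.1 um core radius):
mfd_full = mfd_marcuse(4.1e-6, 1.4504, 1.4447, 1.55e-6)
# Taper ratio 0.85 and 0.5 (core radius assumed to scale with the taper):
mfd_085 = mfd_marcuse(0.85 * 4.1e-6, 1.4504, 1.4447, 1.55e-6)
mfd_050 = mfd_marcuse(0.50 * 4.1e-6, 1.4504, 1.4447, 1.55e-6)
```

With these assumed values the MFD dips slightly at a taper ratio of 0.85 and is already much larger at 0.5, the non-monotonic trend the patent figures show; for accurate curves through the transition to air/silica guidance, a full mode solver is still needed.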
• asked a question related to Optics
Question
Dear all,
- Magnification (2.5x to 5x)
- Working distance
- Field of vision
- Galilean (sph/cyl) vs Kepler (prism)
- TTL vs non-TTL/flip
- Illumination
- Post-use issues (eye strain/headache/neck strain, etc.)
- Recommended brand
- Post-sales services
Thank you
#Surgery #Loupes #HeadandNeck #Surgicaloncology #Otolaryngology
• asked a question related to Optics
Question
I am using a 550 mW green laser (532 nm) and I want to measure its intensity after it has passed through several lenses and glass windows.
I found a Thorlabs power meter, but it is around $1200.
Any cheaper options to measure the intensity of the laser?
(high accuracy is not required)
• asked a question related to Optics
Question
I want to know whether the number of fringes and their shape are important factors for the accuracy of the phase determination.
• asked a question related to Optics
Question
I want to calculate the propagation constant difference for LP01 and LP02 modes for a tapered SMF-28 (in both core and cladding).
Is there a simple formula that I can use? My goal is to see if the taper profile is adiabatic or not.
I am using this paper for my study : T. A. Birks and Y. W. Li, "The shape of fiber tapers," in Journal of Lightwave Technology, vol. 10, no. 4, pp. 432-438, April 1992, doi: 10.1109/50.134196.
The equation is shown in the attached figure.
I recommend optical interferometry for measuring the fiber's refractive index
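Beyond the taper-shape criterion in Birks and Li, the Δβ between LP01 and LP02 can be estimated numerically in the weakly guiding approximation by solving the LP eigenvalue equation. The sketch below uses illustrative air-clad numbers (a deeply tapered fiber guides on the cladding/air boundary, where V is large); the waveguide parameters are assumptions, not SMF-28 data:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv, kv

# Weakly guiding step-index waveguide: LP_l modes satisfy
#   u J_{l+1}(u)/J_l(u) = w K_{l+1}(w)/K_l(w),   u^2 + w^2 = V^2.
# Solved here in the cross-multiplied, pole-free form.

def lp_u_roots(V, l=0, n_grid=4000):
    """All u in (0, V) solving the LP_l eigenvalue equation."""
    def f(u):
        w = np.sqrt(V**2 - u**2)
        return u * jv(l + 1, u) * kv(l, w) - w * kv(l + 1, w) * jv(l, u)
    us = np.linspace(1e-6, V - 1e-6, n_grid)
    fs = f(us)
    return [brentq(f, us[i], us[i + 1])
            for i in range(n_grid - 1) if fs[i] * fs[i + 1] < 0]

def betas(V, a, n1, n2, k0, l=0):
    """Propagation constants, largest first (LP01, then LP02, ...)."""
    return sorted((np.sqrt((n1 * k0)**2 - (u / a)**2) for u in lp_u_roots(V, l)),
                  reverse=True)

# Illustrative air-clad (deeply tapered) numbers, all assumptions:
lam, n1, n2, a = 1.55e-6, 1.444, 1.0, 1.2e-6
k0 = 2 * np.pi / lam
V = k0 * a * np.sqrt(n1**2 - n2**2)   # ~5 here, so LP01 and LP02 are guided
b = betas(V, a, n1, n2, k0)
delta_beta = b[0] - b[1]              # enters the adiabaticity criterion
```

Evaluating delta_beta along the taper profile then lets you compare the local taper angle against the adiabaticity bound Ω ≲ a·Δβ/(2π) discussed in the Birks-Li framework (check the exact form against the paper).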
• asked a question related to Optics
Question
Solvents for the immersion oil are carbon tetrachloride, ethyl ether, Freon TF, heptane, methylene chloride, naphtha, turpentine, xylene, and toluene.
What is the best of these to clean the surface? Toluene? Heptane? I'd like to stay away from more dangerous chemicals if possible and have something that evaporates easily.
Methyl alcohol is the best
• asked a question related to Optics
Question
I would like to calculate the return loss for a splice/connection between two different fibers. One of the fibers has a larger core diameter and a larger cladding diameter than the other. I was considering the approach laid out in this paper, which talks about identical fibers: M. Kihara, S. Nagasawa and T. Tanifuji, "Return loss characteristics of optical fiber connectors," in Journal of Lightwave Technology, vol. 14, no. 9, pp. 1986-1991, Sept. 1996, doi: 10.1109/50.536966.
I added a few screenshots from the paper.
Looks like a good paper and a good approach.
• asked a question related to Optics
Question
Hello
I am new to the field and I would like to ask what the criteria are for saying that a photoswitchable compound or optogenetic molecule has fast kinetics and high spatiotemporal resolution in cell-free, cellular/in vitro, in vivo, and ex vivo models. Is there a consensus criterion to quantitatively qualify a compound as having fast kinetics and high spatiotemporal resolution in these models?
For instance, if a compound becomes fully activated in less than 30 min when turned on by light, does it have fast kinetics?
On the other hand, if a compound can precisely activate certain neuronal regions in the brain but has off-target activations in the surrounding regions around 20 µm from the region of activation, does it have high spatiotemporal resolution?
I may have mixed up some terms here; I will be glad if this can be clarified in the discussion.
Thanks.
That's the least I could do.
• asked a question related to Optics
Question
In my experiment I have a double cladding fiber spliced on to a reel of SMF-28. The double cladding fiber has total cladding diameter about 2 times more than that of the SMF-28. The source is a SLD, and there is a polarization scrambler after the source which feeds onto one end of the reel of SMF-28. The output power from the 1 km long reel is X mW. But when I splice a half meter length of the specialty fiber to the reel output and measure the power it is 0.9Y mW, where Y is the power output after the polarization scrambler (Y = 3.9X). I am not sure why the power reading suddenly increased.
Problem solved : The reel was getting pinched and deflected at the bare fiber adapter to the detector causing a huge drop in power.
Vincent Lecoeuche Thanks for your thoughts as well.
• asked a question related to Optics
Question
1) Can the existence of an aether be compatible with local Lorentz invariance?
2) Can classical rigid bodies in translation be studied in this framework?
By changing the synchronization condition of the clocks of inertial frames, the answer to 1) and 2) seems to be affirmative. This synchronization clearly violates global Lorentz symmetry, but it preserves Lorentz symmetry in the vicinity of each point of the flat spacetime.
---------------
We may consider the time of a clock placed at an arbitrary coordinate x to be t, and the time of a clock placed at an arbitrary coordinate xP to be tP. Let the offset (t - tP) between the two clocks be:
1) (t - tP) = v(x - xP)/c²
where (t - tP) is the so-called Sagnac correction. If we insert 1) into the time-like component of the Lorentz transformation T = γ(t - vx/c²), we get:
2) T = γ(tP - vxP/c²)
On the other hand, if we consider the space-like component of the Lorentz transformation X = γ(x - vt), we know that the origins of both frames coincide, x = X = 0, at t = 0. If we want x = X = 0 at tP = 0, we have to write:
3) X = γ(x - vtP)
Assuming that both clocks are placed at the same point x = xP, equations 2)-3) become:
4) X = γ(xP - vtP)
5) T = γ(tP - vxP/c²)
which is the local Lorentz transformation for an event happening at point P. On the other hand, if the distance between x and xP is different from 0 and xP is placed at the origin of coordinates, we may insert xP = 0 into 2)-3) to get:
6) X = γ(x - vtP)
7) T = γtP
which is a change of coordinates that:
- Is compatible with GPS simultaneity.
- Is compatible with the Sagnac effect. This effect can be explained in a very straightforward manner without the need to use GR or the Langevin coordinates.
- Is compatible with the existence of relativistic extended rigid bodies in translation, using the classical definition of rigidity instead of Born's definition.
- Can be applied to solve the 2 problems of the preprint below.
- Is compatible with all experimental corroborations of SR: the aberration of light, the Ives-Stilwell experiment, the Hafele-Keating experiment, ...
Thus, we may conclude that, considering the synchronization condition 1):
a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a unique single clock.
b) The Lorentz invariance is broken when we use two clocks to measure time intervals over long displacements (eqs. 6-7).
c) We need to consider the frame with respect to which we must define the velocity v of the synchronization condition (eq 1). This frame has v = 0 and it plays the role of an absolute preferred frame.
a), b) and c) suggest that the Thomas precession is a local effect that cannot manifest over long displacements.
Cameron Rebigsol I understand your world view and Sir Isaac Newton would have agreed with you. Newton explained gravity and made the connection between the gravity on Earth (e.g. the falling apple) and the motion of the moon. He worked out that it would all be explained by an inverse square law of distance. Even Newton was a bit puzzled about how this "action at a distance" worked.
James Clerk Maxwell pointed out that this "action at a distance" was not a good explanation and felt that there had to be some mechanism through the medium to produce electromagnetism and gravity.
I agree with the viewpoint of Maxwell and I do take as my starting assumption that General Relativity is completely correct as there is sufficient evidence for this. Then the question of "action at a distance" is resolved because it is the state of the medium (i.e. spacetime) which is the underlying cause of the gravitational and electromagnetic forces.
Richard
• asked a question related to Optics
Question
My set up is as follows : Elliptically polarized light at input -> Faraday Rotator -> Linear Polarizer (LP) -> Photodiode
The LP is set such that the power output is minimum. I use a lock -in-amplifier to measure the power change due to the Faraday effect. I have a more or less accurate measurement of the magnetic field and the length of the fiber. The experimental Faraday rotation (Rotation Theta= Verdet constant*MagneticField*Length of fiber) , is more than the theoretical prediction, so I was wondering if I am observing the effect of elliptical polarization at the input to the system.
Yes, you can say both polarizations get rotated. Taking each component separately, they both would get rotated by the same amount, and superposition applies, so together they do the sum of what each of the pieces would do.
However, if it helps, that is not the only way to think of it. We like to think in terms of linear polarization. We like to think of arbitrary polarization as a superposition of two orthogonal linear polarizations. It’s easy to make the diagrams. It also makes sense for linear polarizers and linear retarders. However, that is not the only choice. You can just as easily express an arbitrary polarization as the superposition of left and right circular polarizations. In the basis of right and left circular polarization a Faraday rotator is in fact a phase retarder. if the two components have equal amplitude the result is linear polarization. The relative phase determines the orientation of the linear polarization, so retarding the phase rotates the linear polarization. If the two components have different amplitude, you get an ellipse where the major and minor axes are the sum and difference of the amplitudes. Again, if you retard the phase, the whole ellipse just rotates.
As to why your experiment is producing answers that don’t quite seem right, I think this measurement has several things that can confuse the result. Although I don’t see why you wouldn’t put a polarizer on the entrance, I doubt the entering ellipticity is really the problem. That should just reduce your modulation amplitude making the signal a little weaker, but it shouldn’t impact the phase. A much more likely culprit is linear birefringence in the fiber. Fibers can have significant residual birefringence from the manufacturing, but also bending through the fiber acts as a retardation. For example, sequential coils of fiber called fiber paddles are sold as polarization manipulators.
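The circular-basis picture in the answer above is easy to verify with Jones calculus: transformed into the circular basis, the rotation matrix becomes a pure phase retarder diag(e^{iθ}, e^{-iθ}). The basis convention below is one common choice (sign conventions vary), and the angle is arbitrary:

```python
import numpy as np

# Jones-calculus check: a Faraday rotator is a phase retarder in the
# circular basis. Basis-change matrix columns are the RCP/LCP unit
# vectors in one common sign convention (conventions vary).

def rotator(theta):
    """Faraday rotation by angle theta in the linear (x, y) basis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Linear <-> circular basis change; columns (1, -i)/sqrt(2) and (1, i)/sqrt(2)
B = np.array([[1, 1], [-1j, 1j]]) / np.sqrt(2)

theta = 0.3
R_circ = np.linalg.inv(B) @ rotator(theta) @ B
# R_circ is diagonal: pure phases exp(+i*theta) and exp(-i*theta), so
# retarding one circular component against the other rotates any
# superposition (linear or elliptical) rigidly by theta.
```

The same machinery, with a linear retarder inserted for the fiber's residual birefringence, also shows why bend-induced retardation can make the measured rotation deviate from the simple Verdet-constant prediction.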
• asked a question related to Optics
Question
1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)
2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect?(Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)
Regarding the first problem, there are many examples. For instance, the paper by O. Penrose, "Bose-Einstein condensation in an exactly soluble system of interacting particles", researchportal.hw.ac.uk/en/publications/bose-einstein-condensation-in-an-exactly-soluble-system-of-intera
Cf. also the paper by E. Lieb and R. Seiringer, "Proof of Bose-Einstein Condensation for Dilute Trapped Gases".
Regarding the second problem, the boundary conditions break Lorentz invariance. That's why the question isn't well-posed, whether in the classical limit or when quantum effects must be taken into account. In a finite volume it requires care to define the propagation velocity properly, since the equilibrium field configurations describe standing waves.
• asked a question related to Optics
Question
Please see the attached file RPVM.pdf. Any comment will be welcome.
More on this subject at:
I think an interesting point is that, using units with c = 1, the 4-velocity (dt,dx,0,0) is a 1-tensor that is the same for any offset of the clocks of the inertial frame. Then the 4-velocity (dt,dx,0,0) transforms the same way for any synchronization; it satisfies the Einstein addition of velocities and consequently also satisfies the principle of the constancy of the speed of light. On the other hand, as it behaves like a tensor under Lorentz transformations, the relativity principle holds for it and for all derived 1-tensors, like velocity, acceleration, and so on.
• asked a question related to Optics
Question
You can find the wording in the attached file PR1-v3.pdf. Any comment will be welcome.
More on this topic at:
I think an interesting point is that, using units with c = 1, the 4-velocity (dt,dx,0,0) is a 1-tensor that is the same for any offset of the clocks of the inertial frame. Then the 4-velocity (dt,dx,0,0) transforms the same way for any synchronization; it satisfies the Einstein addition of velocities and consequently also satisfies the principle of the constancy of the speed of light. On the other hand, as it behaves like a tensor under Lorentz transformations, the relativity principle holds for it and for all derived 1-tensors, like velocity, acceleration, and so on.
• asked a question related to Optics
Question
ASHRAE gives values of τb and τd for the Atlantic; I want their values by city, or by latitude and longitude.
Thanks
A calculation procedure for the clear-sky beam and diffuse solar irradiance as a function of time and location can be found here (developed in Python):
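For reference, the ASHRAE clear-sky model itself is compact enough to sketch. The air-mass-exponent coefficients below are the 2013-Handbook values as I recall them, so please verify them (and the tabulated τb, τd for your site) against your edition of the Handbook; the example τ values are assumptions:

```python
import math

# Sketch of the ASHRAE clear-sky model (2013 Handbook of Fundamentals form,
# from memory; verify coefficients against your edition):
#   Eb = E0 * exp(-tau_b * m**ab)   beam normal irradiance
#   Ed = E0 * exp(-tau_d * m**ad)   diffuse horizontal irradiance
# tau_b, tau_d are the tabulated monthly pseudo-optical depths for the site
# (the values the question asks about); m is the relative air mass.

def air_mass(solar_altitude_deg):
    """Kasten-Young relative air mass (altitude angle in degrees)."""
    b = solar_altitude_deg
    return 1.0 / (math.sin(math.radians(b)) + 0.50572 * (6.07995 + b) ** -1.6364)

def extraterrestrial(day_of_year, esc=1367.0):
    """Extraterrestrial normal irradiance with the annual orbit correction."""
    return esc * (1 + 0.033 * math.cos(2 * math.pi * (day_of_year - 3) / 365))

def clear_sky(tau_b, tau_d, solar_altitude_deg, day_of_year):
    m = air_mass(solar_altitude_deg)
    ab = 1.454 - 0.406 * tau_b - 0.268 * tau_d + 0.021 * tau_b * tau_d
    ad = 0.507 + 0.205 * tau_b - 0.080 * tau_d - 0.190 * tau_b * tau_d
    e0 = extraterrestrial(day_of_year)
    eb = e0 * math.exp(-tau_b * m ** ab)
    ed = e0 * math.exp(-tau_d * m ** ad)
    return eb, ed

# Example tau values (assumed, not taken from the ASHRAE tables):
eb, ed = clear_sky(0.35, 2.4, 60.0, 172)
```

The city-by-city τb and τd themselves come from the ASHRAE climatic design data tables (they are tabulated per station and month), so the code only answers the "how to evaluate" half of the question.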
• asked a question related to Optics
Question
A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern that is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modeling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
Why can't the two-intensity combination algorithm, as implemented in MATLAB, be adapted to Octave to create an interference scheme?
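The additive algorithm described above ports directly to any array language. Below is a minimal numerical version (NumPy here; the meshgrid/exp/abs structure translates line-for-line to Octave or MATLAB), with an assumed point-source object wave and a tilted plane reference:

```python
import numpy as np

# Minimal computer-generated hologram: superpose a tilted plane reference
# wave and a point-source object wave and record the intensity I = |O + R|^2.
# All parameters (wavelength, sampling, tilt, source distance) are
# illustrative assumptions.

lam = 633e-9                       # wavelength, m
k = 2 * np.pi / lam
n = 512
pitch = 8e-6                       # sample spacing on the "film", m
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)

z0 = 0.2                           # point source 20 cm behind the film
r = np.sqrt(X**2 + Y**2 + z0**2)
obj = np.exp(1j * k * r) / r       # spherical object wave
obj = obj / np.abs(obj).max()      # normalize for good fringe contrast
ref = np.exp(1j * k * np.sin(np.deg2rad(1.0)) * X)  # 1 deg tilted plane wave

hologram = np.abs(obj + ref) ** 2  # recorded interference pattern
```

Two practical cautions that apply equally in Octave: the carrier fringe period (λ/sin θ ≈ 36 µm here) must be sampled above Nyquist by the pixel pitch, and re-illuminating the recorded intensity with `ref` reconstructs the object wave plus the conjugate and zero orders.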
• asked a question related to Optics
Question
If I were to make a half dome as an umbrella to protect a city from rain and sun, how would I proceed? Are there special materials, or do you have an idea of how to make this? What do you say about an energy shield?
Dear all, I think the easiest way is an underground city. Shelters are built for similar protection purposes. My regards.
• asked a question related to Optics
Question
So far, I have made the simulation for the thermal and deformation analysis. I know that the neff-T graph can be plotted, and the TEM modes as well.
Murat Baran Hi. When coupled with the optical and electrical modules, the thermal module generates thermal maps (see the attached figure) in COMSOL. The article (cited below) from which the attached figure has been collected deals with the 3D modeling of a thin-film CZTSSe solar cell in COMSOL and takes various heat sources into consideration including SRH nonradiative recombination, Joule heating and the conductive heat flux magnitude (thermalization). According to the article: The thermal analysis allows the optimization of device stability by determining which heating source is the cause of performance drop over time. Hope this helps. Best.
Reference:
Zandi, Soma; Saxena, Prateek; Razaghi, Mohammad; Gorji, Nima (2020). "Simulation of CZTSSe Thin-Film Solar Cells in COMSOL: Three-Dimensional Optical, Electrical, and Thermal Models." IEEE Journal of Photovoltaics, pp. 1-5. doi: 10.1109/JPHOTOV.2020.2999881.
• asked a question related to Optics
Question
For an actual laser system, the ellipticity of a Gaussian beam is as in the attached picture (measured by me). Near the laser focus the ellipticity is high, falls off drastically at the Rayleigh length, and then increases again. This is a "simple astigmatic" beam. Can anyone explain this variation?
P.S. In the article (DOI: 10.2351/1.5040633), the author also found similar variation. But did not explain the reason.
Well, first, we know that if you had a perfectly circular Gaussian intensity profile with a perfect plane wave phase front then you would find a perfectly circular Gaussian intensity profile everywhere along the beam. Your beam must differ slightly in some way from the perfect single mode description. This is completely normal. At the very least there must always be some aperture clipping (can’t have a Gaussian out to infinity!). Typically there is also some slight amount of a second mode. Neither the clipping nor the higher mode content need to be axially symmetric. However, a little asymmetry doesn’t necessarily cause the beam to measure as being too asymmetric in some locations.
Also it is not unusual to find something different in the midfield than in the near field and far field. For example an elliptical beam in the near field Fourier transforms to an elliptical beam in the far field with the axes swapped. However somewhere in between the beam is symmetric and the ellipticity approaches zero.
What is interesting about your measurement is that it does the opposite of that. It is symmetric in the near and far field and slightly elliptical in between. I think the same idea of the “in between” looking different applies, but the exact explanation is a little less obvious. However, I’m not terribly surprised. In aligning the laser the builder tweaks the alignment until they get the best result looking at the far field. Naturally any imperfections are hidden and show up in the midfield.
Finally I will add that the ellipticity measurement can lie to you. Often ellipticity is calculated for the whole beam or perhaps a 1/e^2 encircled energy contour. This emphasizes low tails. Small amounts of extra diffraction orders can radically affect the measurement. I bet if you measured the FWHM instead you would find less deviation.
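One simple mechanism consistent with the curve described in the question can be seen in a toy model: if the x and y waists of a simple astigmatic Gaussian sit at slightly different z positions, the spot-size ratio is high at the waists, dips to ~1 between them (on the Rayleigh-length scale), rises again in the midfield, and relaxes to 1 in the far field. All numbers below are assumptions for illustration:

```python
import numpy as np

# Toy model: simple astigmatic Gaussian with equal waist sizes but an
# axial waist separation dz between the x and y planes (both values are
# assumptions). Ellipticity is taken as max(wx, wy)/min(wx, wy).

lam = 800e-9
w0 = 50e-6
zR = np.pi * w0**2 / lam           # Rayleigh length

def w(z, z_waist):
    """Gaussian spot radius at z for a waist located at z_waist."""
    return w0 * np.sqrt(1 + ((z - z_waist) / zR) ** 2)

def ellipticity(z, dz=None):
    dz = zR if dz is None else dz  # assumed axial waist separation
    wx, wy = w(z, 0.0), w(z, dz)
    return np.maximum(wx, wy) / np.minimum(wx, wy)

z = np.linspace(-2 * zR, 4 * zR, 1000)
e = ellipticity(z)
# e is ~1.41 at either waist, exactly 1 midway between them, peaks again
# in the midfield, and decays to 1 in the far field.
```

Fitting dz (and, more generally, unequal waist sizes) to your measured curve would test whether plain axial astigmatism explains the data, or whether the mode-content and measurement-definition effects described above dominate.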
• asked a question related to Optics
Question
What are the advantages of using SiO2 substrate over Si substrate for monolayer graphene in photonics?
Dear Behnam Farid,
In fact, I thought about hot electron injection into the substrate.
As we're expecting localized or propagating plasmons in graphene, isn't it possible to have electron leakage from the graphene to highly doped substrates? Instead, SiO2 or other insulators offer electrostatic charge transfer, which may facilitate the graphene-graphene hybridization in periodic structures like 1D graphene ribbons. Is this a valid claim?
• asked a question related to Optics
Question
I am interested in the technique of obtaining high-quality replicas from diffraction gratings, as well as holograms with surface relief. What materials are best used in this process? Also of interest is the method of processing the surface of the grating to reduce adhesion in the process of removing a replica from it.
Dear Anatoly Smolovich, in addition to the previous answers, I would like to add probably one of the most popular materials for replication: PDMS. There are several commercial preparations of this silicone, but probably Sylgard 184, by Dow, is the most commonly used. It normally requires a mild curing temperature (80 °C for two hours, or even room temperature, though requiring longer curing times), and if temperature is an issue they also have some UV-curable PDMS. It has a key advantage over rigid polymers: given that it is an elastomer, it facilitates the demolding process without harming the original and/or the replica. I also had good experiences with Microresist, as pointed out by Daniel Stolz; in fact, I usually make the reverse replica with PDMS and the direct replica of the original with Ormocomp by Microresist, with very good results. These resins are solvent-free, and that is important both for avoiding damage to the original and for minimising shrinkage, so that the grating period is maintained.
Neither PDMS nor Ormocomp needs application of pressure, unlike the PP replicas made by injection molding in the article provided by Przemyslaw Wachulak (of course you can apply pressure, but it is not necessary; just a glass slide or a cover slide will be enough).
About your later question related to the treatment of the original grating surface:
It will depend on the nature of the grating and its surface. If your grating is made of glass or metal (alone), most anti-adhesive treatments would work. If it is made of some polymer, you will need to know which polymer, in order to apply a material that does not damage the grating.
If your grating is made of any material and coated with a thin metallic layer, then you should check that the anti-adhesion material and the replication resin (or its solvent) are not going to damage the thin metallic film by disturbing the adhesion between the substrate material and the metal coat.
Hope this helps.
• asked a question related to Optics
Question
Hello dear friends,
I am trying to add our own optical components in the sample compartment region of our Nicolet 380 FTIR. Our sample is also small, so we need to shrink the size of the beam spot with an iris to avoid signals from the substrate.
Therefore, we want to use a mid-IR sensor card to help me find where the light beam is and how large it is. However, the IR sensor card does not show any color change when I put it in the light path (with the light source on, of course). The mid-IR sensor card we use can detect light in the wavelength range of 5-20 µm; the minimum detectable power density is 0.2 W/cm².
Did I miss anything here? And do you have any suggestions for how I should detect the beam spot, its position, and its size?
Any suggestions will be highly appreciated! Thank you in advance!
Best regards,
Ziwei
In general those cards are designed to work with a continuous beam. Don't forget that the IR beam from an FTIR is modulated and in my experience those cards don't work in an interferometer.
The Al foil approach mentioned above will work well but if this is an ongoing problem why not use a beam condenser? Those optics are designed to condense the beam and put the sample exactly where you get the maximum energy. Both Pike Technologies (https://www.piketech.com/product/ms-beam-condensers/) and Specac (https://www.specac.com/en/products/ftir-acc/transmission/solid/microfocus-beam-condenser) provide them and the 380 is a very common instrument so that there should be no concerns about unusual spectrometer configurations.
• asked a question related to Optics
Question
If the cavity length is 50 mm, what value should the thermal focal length have so that it doesn't affect the stability of the laser cavity: f > 50 mm or f < 50 mm?
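A quick ABCD round-trip check answers this for one assumed geometry, a flat-flat cavity with the thermal lens at its center (curved mirrors or an off-center lens move the boundary):

```python
import numpy as np

# Round-trip ABCD stability check, assuming a flat-flat cavity with the
# thermal lens at its center (an assumption; the real cavity geometry
# matters). The cavity is stable iff |A + D| <= 2 for the round trip.

def prop(d):
    """Free-space propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def is_stable(L, f):
    half = prop(L / 2) @ lens(f) @ prop(L / 2)   # mirror-to-mirror pass
    rt = half @ half                             # flat mirrors: identity on reflection
    return abs(np.trace(rt)) <= 2.0

L = 0.050                         # 50 mm cavity
stable_weak = is_stable(L, 0.050)   # f = 50 mm -> stable
stable_strong = is_stable(L, 0.010)  # f = 10 mm -> unstable
```

For this assumed geometry the analytic boundary works out to f = L/4 = 12.5 mm, so "f > 50 mm" is not the stability limit itself; a weak thermal lens (f much longer than the cavity) is still preferable because the mode size and output change least, and the check should be redone with your actual mirror curvatures and lens position.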
• asked a question related to Optics
Question
Some 2D galvos have both axes on the horizontal plane. It seems much easier to manufacture. However, some high-end galvos such as those from Cambridge have one of the axes tilted by a small angle. What is the benefit of that?
This is done only to reduce the size of the scanning head, so that the edge of the second mirror does not go beyond the plane bounding the first mirror. Then the f-theta lens can be positioned closer to the movable mirrors.
• asked a question related to Optics
Question
I have a long length (L) of coiled SMF-28 on a spool and I want to measure the "beat length (Lb)" of the entire spool using some simple means, as described: (1) inject linearly polarized broadband light (for example, from a superluminescent source); (2) record the optical spectrum using an optical spectrum analyzer (OSA) after transmission through the fiber and another polarizer; (3) that spectrum will exhibit oscillations with a period Δλ, from which the polarization beat length can be calculated using Lb = (Δλ/λ)·L.
My questions are (a) what is the typical resolution for the OSA used in such measurements (b) should I rotate the polarizer such that the power is maximized for the center wavelength of my source ? (c) if I am missing anything else that I should consider
Vincent Lecoeuche Thanks for your answer. If I want to measure the beat length of the PM fiber, can I use a tunable laser to sweep from 1500 nm to 1600 nm and then calculate the ripple spacing over the entire spectrum to get the beat length? Also, would it be useful to launch linearly polarized light midway between the fast and slow axes of the PM fiber being tested?
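The ripple-spacing analysis can be rehearsed on synthetic data first. The sketch below assumes a uniform birefringence B (so Lb = λ/B) and a 45° launch between parallel polarizers, then recovers Lb from the peak spacing exactly as in the Lb = (Δλ/λ)·L recipe; the B value is an assumption for illustration:

```python
import numpy as np
from scipy.signal import find_peaks

# Simulated channeled spectrum: polarized broadband light through a
# birefringent fiber between parallel polarizers, launch at 45 deg to
# the axes. Transmission T = cos^2(pi * B * L / lambda).
# B below is an assumed birefringence, giving Lb = lambda/B ~ 3.1 m.

B = 5e-7            # assumed birefringence
L = 1000.0          # fiber length, m
lam = np.linspace(1.50e-6, 1.60e-6, 200_001)

T = np.cos(np.pi * B * L / lam) ** 2     # channeled spectrum

peaks, _ = find_peaks(T)
dlam = np.mean(np.diff(lam[peaks]))      # average ripple spacing
lam0 = np.mean(lam)
Lb_est = dlam / lam0 * L                 # beat length from the ripple
Lb_true = lam0 / B                       # ground truth in this model
```

On the launch question: yes, in this simple model a launch midway between the axes (45°) maximizes the ripple contrast, while launching along either axis produces no ripple at all; note also that Δλ itself grows as λ², so averaging the spacing over a 100 nm sweep biases Lb slightly unless you normalize per wavelength.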
• asked a question related to Optics
Question
I have synthesized gold nanorods by the seed-mediated method, with an aspect ratio of 4.4. When these nanorods are suspended in water, two peaks corresponding to transverse and longitudinal oscillations appear. When they are coated on a glass slide, they exhibit only one absorbance peak. The light source used is unpolarized. Why is it not possible to observe two peaks for the nanorods when they are coated on a glass slide?
Dear Keerthana Narayanan , I would like to remark Muwei Ji answer, that is also contained in the answers provided by Yuri Mirgorod and Rüdiger Mitdank . When you have a suspension of nanorods they are, in principle, randomly distributed into the solvent volume, and each one is isolated from the rest of them. So certainly you will get information of both longitudinal and transversal localized plasmons when recording UV-Vis spectrum of that suspension.
However, when you cast your suspension over a substrate, and let the solvent evaporate, the gold nanorods can aggregate. Aggregation can be random if evaporation is fast and nanorod concentration was high, and that would shift the LSPRs, even merging them as your spectrum is showing. It would be like if you have larger nanoparticles (formed by random accumulation of rods). So the transversal plasmon shifts towards larger wavelengths and the longitudinal one shifts towards smaller wavelengths.
If the evaporation rate is low and the nanorod concentration is smaller, they can form some kind of ordered arrangement, especially when the aspect ratio is large, as in your case. Then the UV-Vis can change with respect to the suspension, but this time the longitudinal plasmon would be more evident and less shifted, while the transversal plasmon would be less important and less shifted.
As Muwei pointed out, the best way to get a clear idea of what is happening in your cast sample is to use a microscopy technique to observe the rod distribution; SEM or AFM would be perfect techniques for the already-deposited sample (if you just want to analyze rod size and shape, TEM would be perfect too). SEM lets you survey a wider area of your sample faster than AFM, which is useful because the rod distribution can change depending on the area of the sample. A common effect is the accumulation of particles in the so-called coffee ring: where the solvent evaporates, surface tension transports more particles, causing a rim or coffee ring, while the internal part of the cast surface can show a thin layer of particles.
Hope this helps.
Good luck with your research and my best wishes!
• asked a question related to Optics
Question
I am interested in defining the heterogeneity and similarities among metalenses and their advantages in current and new applications, and in identifying some of their future improvements and characteristics.
Dear Ivan Moreno,
Here below is some info you're looking for:
The advantages of metalenses over diffractive lenses
• asked a question related to Optics
Question
I'm curious if anyone can share their measurement of the coupling loss as a function of the gap between two SMF FC/APC fibers at various wavelengths. If not, it would be great if you can refer me to a datasheet or a paper where this type of measurement was done.
Thanks!
You may have a look at equations 1a and 1b for a description of the gap and wavelength dependence of the coupling loss/transmission in a butt-joint SM-fiber connection:
But, sorry, no experimental data yet...
Best regards
G.M.t
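In lieu of measured data, a rough estimate can be computed from the Gaussian fundamental-mode overlap model for a pure longitudinal gap. This is a sketch only: the mode-field diameter below is an assumed typical SMF-28-like value, and the Fresnel reflections of an unmatched air gap (roughly 0.3 dB) are ignored.

```python
import numpy as np

def gap_loss_db(gap_um, wavelength_um, mfd_um, n_gap=1.0):
    """Coupling loss (dB) for a pure longitudinal gap between two identical
    single-mode fibers, from the Gaussian fundamental-mode overlap."""
    w0 = mfd_um / 2.0                              # mode-field radius
    z_r = np.pi * w0**2 * n_gap / wavelength_um    # Rayleigh range of the mode
    eta = 1.0 / (1.0 + (gap_um / (2.0 * z_r))**2)  # overlap with defocused beam
    return -10.0 * np.log10(eta)

# Assumed SMF-28-like mode-field diameter: ~10.4 um at 1550 nm
for gap_um in (0, 10, 50, 100):
    print(gap_um, "um:", round(gap_loss_db(gap_um, 1.55, 10.4), 3), "dB")
```

With these assumed values, the model gives on the order of 0.04 dB at a 10 um gap and roughly 2.6 dB at 100 um; in practice, angular and lateral misalignment (treated in the reference above) usually dominate over the pure gap term.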
• asked a question related to Optics
Question
Hi,
My input Jones vector is E1 = [ 0 ; 1] , the Jones Matrix is M = [ a + ib , 0 ; 0 , c+id] , the output is E2 = M*E1 = [ 0 ; x + iy]. Now I want to know the phase shift between vertical and horizontal polarization of the light wave.
E1 is 2x1, M is 2x2 and E2 is 2x1.
Is E2 elliptically polarized? But then it does not have any x-component; I am confused.
I think E2 is not elliptically polarized; it should be linearly polarized along y (its x-component is zero), picking up only a global phase arctan(d/c). So I think it's a tricky exam question, with the simple answer that there is no phase difference between the components.
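To see this numerically, here is a quick numpy sketch (the values of a, b, c, d are arbitrary placeholders):

```python
import numpy as np

E1 = np.array([0, 1], dtype=complex)            # vertically polarized input
a, b, c, d = 0.5, 0.2, 0.7, 0.3                 # arbitrary placeholder values
M = np.array([[a + 1j*b, 0],
              [0,        c + 1j*d]])            # diagonal Jones matrix
E2 = M @ E1

print(E2)                  # [0, c+1j*d]: only the y-component survives
# With a zero x-component, a relative x/y phase is undefined;
# the output is linearly polarized along y with a global phase arctan(d/c):
print(np.angle(E2[1]))
```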
• asked a question related to Optics
Question
I've come across a formula n= 1/Ts + Sqrt(1/Ts-1) where n is the refractive index and Ts is the transmittance. Is this formula valid? How is this formula arrived at?
You can use the well-known method called "the envelope method". Please refer to the following reference: J. C. Manifacier, J. Gasiot and J. P. Fillard, Journal of Physics E, Vol. 9, pp. 1002-1004 (1976).
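For completeness: the commonly quoted substrate relation behind this kind of formula is Ts = 2n/(n^2 + 1) (a thick, non-absorbing slab with two-surface Fresnel losses), which inverts to n = 1/Ts + sqrt(1/Ts^2 - 1); the expression in the question looks like a typo of this. A minimal sketch:

```python
import numpy as np

def n_from_Ts(Ts):
    """Refractive index of a thick, non-absorbing slab from its transmittance
    Ts = 2n/(n^2+1) (two-surface Fresnel losses, incoherent multiple
    reflections summed). Valid for 0 < Ts <= 1."""
    return 1.0 / Ts + np.sqrt(1.0 / Ts**2 - 1.0)

# Round-trip check with glass, n = 1.5: Ts = 2*1.5/(1.5**2 + 1) ~ 0.923
n = 1.5
Ts = 2 * n / (n**2 + 1)
print(Ts, n_from_Ts(Ts))   # recovers n = 1.5
```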
• asked a question related to Optics
Question
I'm working on silicon-graphene hybrid plasmonic waveguides at 1.55 um. For bilayer graphene my effective mode indices are close to those in the source I'm following, but for trilayer they are not acceptable. For modeling the graphene I use the relative permittivity or refractive index at different applied eV.
I attached my graphene relative permittivity and refractive index calculation code and one of my COMSOL files, related to fig. 5.19b in the source.
• asked a question related to Optics
Question
You can use ns-3; have a look at the documentation.
• asked a question related to Optics
Question
I'm planning to modify a finite tube length compound microscope to allow the use of "aperture reduction phase contrast" and "aperture reduction darkfield" according to the following sources:
Piper, J. (2009) Abgeblendeter Phasenkontrast — Eine attraktive optische Variante zur Verbesserung von Phasenkontrastbeobachtungen. Mikrokosmos 98: 249-254. https://www.zobodat.at/pdf/Mikrokosmos_98_4_0001.pdf (in German).
The vague instructions state:
"In condenser aperture reduction phase contrast, the optical design of the condenser is modified so that the condenser aperture diaphragm is no longer projected into the objective´s back focal plane, but into a separate plane situated circa 0.5 – 2 cm below (plane A´ in fig. 5), i.e. into an intermediate position between the specimen plane (B) and the objective´s back focal plane (A). The field diaphragm is no longer projected into the specimen plane (B), but shifted down into a separate plane (B´), so that it will no longer act as a field diaphragm.
As a result of these modifications in the optical design, the illuminating light passing through the condenser annulus is no longer stopped when the condenser aperture diaphragm is closed. In this way, the condenser iris diaphragm can work in a similar manner to bright-field illumination, and the visible depth of field can be significantly enhanced by closing the condenser diaphragm. Moreover, the contrast of phase images can now be regulated by the user. The lower the condenser aperture, the higher the resulting contrast will be. Importantly, halo artifacts can be reduced in most cases when the aperture diaphragm is partially closed, and potential indistinctness caused by spherical or chromatic aberration can be mitigated."
The author combined finite 160 mm tube length objectives, a phase contrast condenser designed for finite microscopes, and an infinity-corrected microscope to get the desired results.
However, how would one accomplish this in the simplest way possible?
Golshan Coleiny thank you for your reply. I assume you mean, for example, modification of the illuminator field lens to displace the conjugate aperture plane of the field diaphragm? Kind regards, Quincy
• asked a question related to Optics
Question
I wanted to calculate the magnification of a system where I am using a webcam lens and was wondering if this could be done by applying the simple lens equation? If yes, then what would I consider my "v" to be in this case since I'm dealing with a lens assembly (unknown # of lenses and unknown separation between them)? I just know the EFL in this case.
If you know the principal planes of your lens you can use the simple lens equation if you set the object distance as the distance between the object and the first principal plane and the image distance as the distance between the second principal plane and the image while using the EFL as the focal length.
Otherwise, the ray tracing can be quite complicated - even the procedure described by Piergiacomo will not be completely correct if at least one of the individual lenses in a lens system are "thick".
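A minimal numerical sketch of the procedure described above (the EFL and object distance are made-up example values, and the object distance is assumed to be measured from the front principal plane):

```python
def image_distance_and_magnification(efl_mm, obj_dist_mm):
    """Gaussian lens equation 1/s_o + 1/s_i = 1/f, with s_o measured from the
    front principal plane and s_i from the rear principal plane."""
    so, f = obj_dist_mm, efl_mm
    si = 1.0 / (1.0 / f - 1.0 / so)   # image distance
    m = -si / so                      # transverse magnification (negative: inverted)
    return si, m

# Made-up example: 25 mm EFL webcam lens, object 100 mm away
si, m = image_distance_and_magnification(efl_mm=25.0, obj_dist_mm=100.0)
print(si, m)   # ~33.3 mm, ~ -0.33
```

For a sealed lens assembly where the principal-plane locations are unknown, one common workaround is to measure the magnification directly (image a ruler) at two object distances and fit the two unknowns (EFL check and principal-plane offset).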
• asked a question related to Optics
Question
It is known that in case particles having size less than one-tenth the wavelength of light, Rayleigh scattering occurs and the scattering is isotropic. But for particles whose sizes are comparable to the wavelength of light, the scattering is more towards the forward direction and hence is anisotropic.
Dear Somnath Sengupta , thanks for such an interesting question. We often carry the mental image of isotropic Rayleigh scattering and anisotropic, forward-directed Mie scattering, but we hardly stop to think about this "odd" behaviour, which could even seem counterintuitive: one might think a larger particle "should" scatter more in the backward direction, but actually it doesn't.
First we can consider two scatterers, one tiny (Rayleigh) and another larger (Mie). Now we are going to focus on two dipoles in each particle (1 and 2). In the tiny particle the two dipoles are necessarily close to each other, while in the larger one they can be fairly separated.
Now we send a coherent light beam that hits both particles (and their two dipoles) and check what happens with the scattered light from each particle:
Tiny (Rayleigh) = dipoles 1 and 2 very close (r << wavelength)
Because 1 and 2 are very close, the wave hits them at almost the same moment, say at the crest of the wave. This interaction produces a new wave travelling forward and backward from each dipole. The forward and backward waves are in phase with the main wave, and because the two dipoles are so close, their respective waves are practically in phase too, both forward and backward, so the scattering is isotropic.
Big (Mie) = dipoles 1 and 2 are separated (r comparable to or larger than the wavelength)
Because 1 and 2 are separated, the main wave hits them at different times, say 1 at the crest and 2 in the valley of the wave. In the forward direction both scattered waves are in phase with the main wave, and therefore they reinforce each other by constructive interference. However, the backward waves are out of phase and therefore interfere destructively, reducing the intensity and explaining the anisotropic nature of Mie scattering.
The larger the particle, the further apart the dipoles can be, and the bigger the anisotropy.
Note: This explanation involved just two dipoles, for simplicity, but a real particle could have lots of them.
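The two-dipole picture can also be checked numerically. In this toy model (my own sketch, not a full Mie calculation), the forward-scattered wavelets are always in phase, while the backward waves accumulate a relative phase of 2kd, so the backward intensity falls as cos^2(kd):

```python
import numpy as np

wavelength = 1.0                        # work in units of the wavelength
k = 2 * np.pi / wavelength

for d in (0.01, 0.25, 0.5):             # dipole separation along the beam
    # forward: the driving delay k*d is cancelled by the extra path -> in phase
    I_fwd = abs(1 + np.exp(1j * 0.0))**2
    # backward: driving delay k*d plus return path k*d -> relative phase 2*k*d
    I_back = abs(1 + np.exp(2j * k * d))**2
    print(d, I_fwd, round(I_back, 3))
```

At d = 0.25 wavelengths the backward wave cancels completely while the forward intensity is unchanged, which is exactly the forward-peaked anisotropy described above.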
Hope this helps. Good luck with your research and thanks for making questions that make us think.
• asked a question related to Optics
Question
I have tried Keller's, Tucker's and Barker's etchants, but they aren't working. I am interested in the optical micrography, but I'm not getting anything. :(
Hello Dr. Parth, this method worked well for me: preetching with H3PO4 for 4 min, and followed by coloration step with Weck's reagent for few seconds, up to 15s. You can find more details in this paper: https://doi.org/10.1017/S1431927618012400
• asked a question related to Optics
Question
Hello everyone , Could you please help me with a dilemma . I have done my Masters in Physics but I am interested in doing PhD in image Sensing using ML . I have been warned that in future I could face disadvantage as Image sensing is not a core physics field and jobs may be scarce. Additionally I would be doing interdisciplinary work which is yet not so prevalent in our country. The topic I am getting is in NIT and i shall try for IIT even with less interesting but more job friendly ( CMP, High Energy Physics ) Job topics... what shall I do ?
That's really great. Please put your fears aside and start your Ph.D. journey, and while on it just think about how to enjoy the trip. During your study you can change your direction a little according to the various knowledge contributions, and be sure that with your supervisor you will learn many aspects. Just start and do your best, with high potential and patience. When you finish, remember to pull someone else up and help him to pass, because at that time you will feel excited and good about yourself. All the best.
• asked a question related to Optics
Question
In any OSL phosphor, optical stimulation requires optical energy greater than the thermal trap depth of the trap. For example, in the case of Al2O3:C we require a 2.6 eV photon to detrap an electron from a trap having a 1.12 eV thermal trap depth. How are the two related to each other?
For a given trap, E(optical) is always > E(thermal), because of the Franck-Condon principle. As a result, transitions on a configurational coordinate diagram always take place vertically, meaning that the transition is much faster than the lattice relaxation time. Once ionized optically the defect’s lattice configuration relaxes to a new configurational coordinate via the emission of phonons. Thermal excitation, however, includes the phonon emission and lattice reconfiguration takes place simultaneously. Thus E(optical) = E(thermal) + E(phonons), with the latter term given by the Huang-Rhys factor.
If experimentally measured energies ( for example E(optical) using OSL, E(thermal) using TL) are either unphysically different or approximately the same, I would question whether the two methods are actually probing the same defect, and/or whether or not the E(optical) and E(thermal) values are correctly obtained from the data, before launching into detailed possible explanations.
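With the numbers quoted in the question, the Franck-Condon relation above gives the lattice-relaxation (phonon) term directly:

```python
# Numbers from the question (Al2O3:C main dosimetric trap)
E_optical = 2.6    # eV, optical stimulation energy
E_thermal = 1.12   # eV, thermal trap depth
E_phonons = E_optical - E_thermal   # lattice-relaxation (Huang-Rhys) term
print(round(E_phonons, 2))          # 1.48 eV
```

Whether 1.48 eV is a plausible phonon-relaxation energy for this particular defect is exactly the kind of sanity check suggested above.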
• asked a question related to Optics
Question
Today, sensors are usually interpreted as devices which convert different sorts of quantities (e.g. pressure, light intensity, temperature, acceleration, humidity, etc.), into an electrical quantity (e.g. current, voltage, charge, resistance, capacitance, etc.), which make them useful to detect the states or changes of events of the real world in order to convey the information to the relevant electronic circuits (which perform the signal processing and computation tasks required for control, decision taking, data storage, etc.).
If we think in a simple way, we can assume that actuators work the opposite direction to avail an "action" interface between the signal processing circuits and the real world.
If the signal processing and computation becomes based on "light" signals instead of electrical signals, we may need to replace today's sensors and actuators with some others (and probably the sensor and actuator definitions will also be modified).
• Let's assume a case that we need to convert pressure to light: One can prefer the simplest (hybrid) approach, which is to use a pressure sensor and then an electrical-to-optical transducer (e.g. an LED) for obtaining the required new type of sensor. However, instead of this indirect conversion, if a more efficient or faster direct pressure-to-light converter (a new type of pressure sensor) is available, it might be more favorable. In the near future, we may need to use such direct transducer devices for low-noise and/or high-speed realizations.
(The example may not be a proper one but I just needed to provide a scenario. If you can provide better examples, you are welcome)
I am really delighted to hear from you the history of an ever-lasting discussion about sensor and actuator definitions. I have always found it annoying that the sensor definition has usually been preferred as a "too specific" definition to serve only for an interface of an electrical/electronic system and an "other" system/medium with different form of signal(s).
Besides that discussion, I can start another one:
There are many commercial integrated devices which are called "sensor"s, although in fact they are not basic sensors but are more complicated small systems which may also include electronic amplifier(s), filter(s), analog-digital-converter, indicators etc. For sure, these are very convenient devices for electronic design, but I think it is not correct to call them "sensor". Such a device employs a basic sensor but besides it provides other supporting electronic stages to aid the electronic designer. I don't know if there is a specific name for such devices.
Best regards...
• asked a question related to Optics
Question
Does anybody know what is the maximum power of laser sources (QCL, VECSEL, and so on) in THz regime?
I'm trying to realize a nonlinear effect in the THz regime using a THz source, without using DFG or SFG. I need 200 mW or more for my device. Is it doable? Is there any source that generates that much power?
Dear
Farooq Abdulghafoor Khaleel
Many thanks for your response and valuable information.
TOPTICA Photonics has unveiled some commercial THz sources with 0.1-6 THz spectrum in mW range.
• asked a question related to Optics
Question
Dear all,
I recently ran into a problem when using RCWA codes.
For the same structure, the FDTD solution took less time.
I need to set a lot of orders for the RCWA calculation to reach a similar result.
So my question is: how can I judge the accuracy of the simulation when I use RCWA codes, and how do I judge how many orders I need?
Thanks
Sai Chen I am writing the code in Matlab, based on Rumpf's lectures. As a first step I just consider normal incidence. But the bad news: when I calculate the inverse matrix, no matter whether I use inv, ^(-1), or pinv, it does not give the right results. And when the number of harmonics becomes large (such as 40), it gives a lot of warnings. It seems to be a problem with calculating the inverse matrix. Thank you, do you know why? The code I wrote is very short, less than 200 lines.
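One general numerical suggestion (not specific to Rumpf's formulation): avoid forming matrix inverses explicitly and solve linear systems instead, since explicit inversion amplifies rounding error for the nearly singular matrices that appear at high harmonic counts. A small illustration with a classically ill-conditioned matrix:

```python
import numpy as np

# Hilbert matrix: a classically ill-conditioned system, standing in for the
# nearly singular matrices that show up at high harmonic counts
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
b = A @ np.ones(n)

x_inv = np.linalg.inv(A) @ b      # explicit inverse: amplifies rounding error
x_solve = np.linalg.solve(A, b)   # LU-based solve: backward stable

# The direct solve keeps the residual at machine-precision level; the
# explicit-inverse route is typically orders of magnitude worse.
print(np.linalg.norm(A @ x_inv - b))
print(np.linalg.norm(A @ x_solve - b))
```

In Matlab the same advice reads: replace inv(A)*b with A\b. If the instability persists, the usual suspects are the formulation itself (e.g. Li's inverse-rule Fourier factorization) or using a transfer-matrix rather than a scattering-matrix recursion.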
• asked a question related to Optics
Question
Hello all,
I have some idea on how to measure the external quantum efficiency for my perovskite LEDs, but I want to calibrate that setup for which I want to measure the External Quantum effficiency of a normal 5 mm LED. How should I go forward with it? All suggestions/ help would be appreciated. Thank you
Jitesh Pandya
Adding to the colleagues above, you can use a standard solar cell in short-circuit operation, where the output photon flux of the LED is received by the solar cell, provided the area of the solar cell is made large enough to receive the whole flux of the LED.
If the spectral sensitivity S(lambda) of the solar cell, in mA per unit of optical power, is measured at the wavelength of the LED, then one can calculate the received optical power as P = I/S, where I is the measured short-circuit current and S the sensitivity at the intended wavelength; from this power, the photon flux emitted by the LED follows directly.
If elaborated, this method can work well in spite of its simplicity.
Best wishes
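The arithmetic of this method can be sketched as follows (all numbers are illustrative placeholders, not calibration data):

```python
h, c, q = 6.626e-34, 2.998e8, 1.602e-19   # Planck, speed of light, e (SI)

def led_eqe(I_cell_A, S_A_per_W, wavelength_m, I_led_A):
    """LED external quantum efficiency from the short-circuit current of a
    large, calibrated solar cell, as described above: photons out per
    electrons in."""
    P_opt = I_cell_A / S_A_per_W              # collected optical power, W
    photon_rate = P_opt / (h * c / wavelength_m)
    electron_rate = I_led_A / q
    return photon_rate / electron_rate

# Illustrative placeholders: 1.2 mA cell current, 0.35 A/W sensitivity,
# 630 nm red LED driven at 20 mA
print(led_eqe(1.2e-3, 0.35, 630e-9, 20e-3))   # ~0.09
```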
• asked a question related to Optics
Question
Hello everyone,
I'm trying to implement a material with non-diagonal conductivity in my FDTD code. By the way, I'm using Dr. Elsherbeni's code for my purpose. Although I managed to implement diagonal anisotropy in my code, my code seems to be unstable for non-diagonal matrices. Through research, I've found out that my updating equations are not correct. Since it is necessary to interpolate the fields in irrelevant positions, it seems the updating equations also have to be organized differently than the isotropic case.
I attach the equations in a PDF below. the first equation on every page represents the equations in half-steps and the second one represents the updating equations implemented in the code.
Any help or hint would be appreciated.
I also have to point out that the source for the equations is the paper in the link below:
You are most welcome, Dear Amin Pishevar .
Best Regards.
• asked a question related to Optics
Question
I am looking for an overview on how FEM-Simulations are used in Optics. Especially, when it was first used and for what kind of systems.
• asked a question related to Optics
Question
The interference pattern is probably in the form of stripes / straight lines due to the tilt of the two interfering wavefronts. By adjusting the second mirror, the tilt can be reduced to a single fringe. Why do we get stripes and not circular fringes ?
In my opinion, your mirrors are still tilted with respect to each other. Circular fringes appear when the two mirrors are parallel.
• asked a question related to Optics
Question
I am looking to find possible methods to temporally overlapping a nanosecond pulsed laser (280 Hz - ~ 6 ns - 532 nm - beam diameter ~ 4 mm) with a picosecond pulsed laser (78 Mhz - ~ 10 ps - 565 nm - beam diameter ~ 4 mm) with delay line mirrors.
At the moment I am using a fast PD with 1 ns rise time (https://www.thorlabs.com/thorproduct.cfm?partnumber=DET210/M) and a 10 GS/s oscilloscope. However, I can only see the attached signals from the ns and ps sources when they run separately, and since the amplitude of the detected signal from the ps laser is much lower (~20 mV), it is hard to align the other one with it. One option that comes to mind is to lower the intensity of the ns laser with density filters, but is there any alternative to this?
Do you want to synchronize the pulsing of the two lasers? I would assume the picosecond laser is self-oscillating but the nanosecond one can be externally triggered. In that case, to synchronize them you need to extract the 78 MHz and turn it into a digital clock, which can be digitally divided by 278571 using a CPLD/FPGA board to make roughly 280 Hz to clock the pulsing of the nanosecond laser.
To extract the 78 MHz you can use a photodetector --> RF amplifier (e.g. Mini-Circuits ZFL-1000LN+) followed by a 78 MHz centered RF bandpass filter and then a high-speed comparator board to obtain a digital clock.
• asked a question related to Optics
Question
Hello;
It is well known that when light reaches an optical element, part of it is lost through absorption, diffusion, and back reflection. In the case of mirrors, this value is well characterized, and a realistic estimate would be around 4-5% (or less, depending on the material). However, I cannot find similar information on commercial or scientific sites for beam splitters. For example, for a well-known optical products company, if we enter the raw data, the percentages of reflected and transmitted light add up to more than 100% at some points on the curve! Without a doubt this has to do with the measurement methodology.
In the case of scientific articles, some estimate this absorption to be around 2% assuming that it is a block or sheet of a certain material (ignoring ghost images). However, this does not make sense since it would then be more interesting to use a dichroic beam splitter than a mirror in certain circumstances.
Of course everything will depend on the thickness, material used, AR treatment. However, I cannot find a single example and I am not able to know the order of magnitude. Does anyone know of any reference where a realistic estimate of the useful light that is lost when using a beam splitter of whatever characteristics is made?
Thanks !
I think your premise is flawed. There isn’t going to be “an answer” because tailoring this parameter and trading it against other properties you might like is the crux of coating design, and the answer might be anything over a wide range depending on what was designed under what set of constraints. For example, your figure of 4% for a mirror is at best a rule of thumb and most often completely wrong. Over a fair range of wavelengths, bare aluminum happens to be around 4% absorptive, while silver is only about 2% absorptive in that range. Bare gold may be terribly absorptive at shorter wavelengths; at longer wavelengths it doesn’t immediately reach the 98% of silver, but over much of the IR, where aluminum becomes terribly absorptive, gold is the best. More importantly, mirrors are rarely uncoated, and a dielectric coating can raise the reflectivity of metallic mirrors above 99%. See for example Edmund Optics’ “ultrafast” enhanced aluminum coating.
And that is just metallic coatings. Metal is useful over a wide wavelength range when you don’t know what a mirror is going to be used for. However, if you know the wavelength (and what acceptance angle you need, and other constraints) you can use a pure dielectric stack. Dielectric mirrors can be made very close to 100% reflective. What’s more, very little light is absorbed, so what little doesn’t reflect transmits.
That brings us to beam splitters. It is not at all difficult to make a dielectric coating in which essentially no light is absorbed: it is all either transmitted or reflected. Adding the reflected to the transmitted should yield just about 100% every time. Where you found places where they appeared to add up to more than 100%, that is just experimental or round-off error; they probably do add up to almost 100%.
• asked a question related to Optics
Question
“The interaction of a field with a thin scattering layer corresponds to multiplication with a diagonal matrix“
Original from：Wetzstein, Gordon, et al. "Inference in artificial intelligence with deep optics and photonics." Nature 588.7836 (2020): 39-47.
Xiaohui Zhu , here is a short answer. For details, I suggest to read the book: "Introduction to Fourier Optics" by Goodman, J. W. (Roberts and Co, 2005), chapter 5 and Appendix B.
A scattering layer is typically composed of an optically dense material, with a refractive index significantly different than the one of air, and then the propagation velocity of an optical disturbance is less than the velocity in air.
Since the layer is thin, the sole effect of it is to shift the phase of waves when they are passing through it. Such a phase shift, as compared to air, results in a phase delay
\Delta \phi = k(n-1)d
where n is the index of refraction of the layer's material, d its thickness.
Transformations involving phase shifts are associated with the diagonal elements of the transformation matrix.
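A minimal numerical sketch of this statement: sampling the field on a grid, the thin layer acts by element-wise multiplication with t = exp(i k (n - 1) d), which is exactly the action of a diagonal matrix on the field vector (wavelength, index, and thickness below are arbitrary example values):

```python
import numpy as np

wavelength = 0.633e-6          # m, example value
k = 2 * np.pi / wavelength
n_layer = 1.5                  # example refractive index of the layer

x = np.linspace(-1e-3, 1e-3, 8)                        # sample positions, m
d = 1.0e-6 * (1 + 0.1 * np.sin(2 * np.pi * x / 1e-3))  # varying thickness
E_in = np.ones_like(x, dtype=complex)                  # sampled incident field

t = np.exp(1j * k * (n_layer - 1) * d)        # per-sample phase factor
E_elementwise = t * E_in                      # element-wise product
E_matrix = np.diag(t) @ E_in                  # same thing as a diagonal matrix

print(np.allclose(E_elementwise, E_matrix))   # True
print(np.allclose(np.abs(t), 1.0))            # pure phase screen: True
```

Because |t| = 1 everywhere, the layer only delays the phase; absorption or scattering loss would appear as |t| < 1 on the diagonal.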
• asked a question related to Optics
Question
I want to stabilize the carrier-envelope offset of a SESAM mode-locked, linear-cavity laser with a rep rate of 31.6 MHz by the method of f-2f interferometry.
Can anyone suggest:
1- what are the typical feedback electronics one needs for laser stabilisation.
Specifically, PID controller specs.
2- Voltage to current converter (to pump diode current)
3- Since the frequency spacing is low (31.6 MHz), would dichroic mirrors work efficiently to filter the f and 2f components of the supercontinuum, or is there a better option?
Traditional feedback stabilization is difficult for most SESAM mode-locked lasers. One major problem is the long upper-state lifetime of these gain materials, typically in the millisecond range. The second is the rather low output coupling compared to fiber lasers. Both effects limit the speed of the servo loop to sub-kHz bandwidth. Another recurring problem is the S/N of the beat note, which often barely reaches the necessary 30 dB in 100 kHz RBW.
Let me therefore suggest the use of the feed-forward method: https://www.osapublishing.org/ol/fulltext.cfm?uri=ol-44-22-5610&id=423127
Additionally, this method does not require any fancy feedback electronics, and you don't have to act back on the pump current. Details are in the two papers. In particular, the OE describes some of the best performance ever observed for an oscillator.
Concerning the use of a dichroic mirror: yes, this will always work independent of frequency spacing as f- and 2f components are an optical octave apart.
• asked a question related to Optics
Question
There is no doubt that VASP can be used to obtain optical properties in the visible range (on the order of eV). Can someone tell me how to calculate optical properties only in the THz range (on the order of meV)? One may argue that VASP outputs optical properties over the whole range; however, that would cost a lot of time unnecessarily.
Were you able to do that?
• asked a question related to Optics
Question
The term "phase" has always confused me. When we record images, we say that we have recorded the amplitude and phase. The amplitude I can relate to physically through the intensity: as the intensity increases, the amplitude also increases. But the phase term I am still not able to digest; I cannot understand it physically the way I understand the amplitude. Can anyone explain this?
I have studied the mathematics of Phase. But I am not able to physically relate it.
A gradient in the refractive index affects the (relative) phase of an x-ray beam, which is evaluated by appropriate interferometric techniques. A 2D representation of the differential phase distiribution is called the differential phase image...
• asked a question related to Optics
Question
I would like to do linear laser measurements with and without environmental compensation. The parameters affecting the linear laser reading are pressure, temperature, and humidity. For experimental purposes, I need to vary the pressure, temperature, and humidity values. How should I select these values (randomly, or is a procedure available)? Hint: the standard values are pressure = 1013.25 mbar, temperature = 20 C, humidity = 50%.
And how many readings are required?
Continuing Okasatria Novyanto 's answer, I'd like to say that you need to use a system of 4 equations (as a minimum) that relates 4 measured values of the refractive index to 4 unknown parameters: the air temperature, the air pressure, the humidity, and the CO2 concentration.
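For choosing test values and estimating how finely to vary them, NIST's simplified "shop-floor" Edlén-type formula for the refractive index of air is handy (a sketch; stated to be good to a few parts in 10^7 near standard conditions at red HeNe wavelengths):

```python
def air_index(p_pa, t_c, rh_pct):
    """Simplified 'shop-floor' Edlen-type refractive index of air (NIST),
    for red HeNe wavelengths: pressure in Pa, temperature in deg C,
    relative humidity in percent."""
    return (1.0
            + 7.86e-7 * p_pa / (273.0 + t_c)
            - 1.5e-11 * rh_pct * (t_c**2 + 160.0))

n0 = air_index(101325.0, 20.0, 50.0)
print(n0)                                            # ~1.000271
# Sensitivities near standard conditions, useful for planning the grid:
print(air_index(101325.0 + 100.0, 20.0, 50.0) - n0)  # ~ +2.7e-7 per +100 Pa
print(air_index(101325.0, 21.0, 50.0) - n0)          # ~ -9.6e-7 per +1 deg C
print(air_index(101325.0, 20.0, 60.0) - n0)          # ~ -8.4e-8 per +10% RH
```

The printed sensitivities suggest that steps of a few hundred Pa, one degree, and ten percent RH each change the index at the 10^-7 to 10^-6 level, a reasonable grid for a compensation-on versus compensation-off comparison.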
• asked a question related to Optics
Question
1) What happens to the absorption coefficient of semi crystalline pigmented polymer films when thickness is in microns?
2) Do we neglect the polymer total reflectance from top incident surface and bulk material?
3) Will the polymer surface reflectance at the bottom surface will be considered at such micron thick pigmented polymer films?
4) How does scattering affect the absorption phenomenon in pigmented semi-crystalline polymer films?
Dear Foram Dave,
Firstly, the wavelength of visible light is of the order of 0.5 micron, so the grain size is comparable to the light wavelength and every effect you mention will have a dominating contribution.
For micron-order grains, many of the mentioned effects are negligible for X-rays, as the X-ray wavelength is of the order of 1 A, about four orders of magnitude smaller than a micron-order grain size.
Thanks
N Das
• asked a question related to Optics
Question
How does one measure the optics thermal drift error in a laser measurement system (Renishaw XL80)?
Please tell me what optics thermal drift is, what its sources are, how to find its value, and how to reduce it.
• asked a question related to Optics
Question
Our bodies' major mass elements are carbon, hydrogen, nitrogen, oxygen, calcium, etc. From watching professional ghost-hunting YouTube documentary videos, it seems that ghosts appear to us in the form of a dense gaseous state (black shades). They have the ability to apply force on objects; to do that, they must possess mass. Their constituent elements may not exist in our periodic table. There could be a way to find out whether our periodic elements are present, if research were started on it.
Hi everyone, this is just hypothetical and critical thinking to unfold the mystery.
• asked a question related to Optics
Question
I was dealing with creating point spread function(PSF) images from measured irradiance and wavefront data(shack-hartmann wavefront sensor data)
and I found that the PSF image turned out to have too low a resolution compared to the reference PSF.
What I found by accident was that I could increase the resolution by adding zero layers around the irradiance and wavefront data and simply increasing the size of the data matrices, with the raw data at the center.
For example, my raw data are 100 by 100 matrices, mostly occupied with data, and I add 100 surrounding layers of zero elements, giving (100+2*100) by (100+2*100) matrices with the original raw data at the very centers. And I obtain a higher resolution PSF of 300 by 300, not the original 100 by 100 PSF.
PSF derivation was computed as below.
PSF=abs(fftshift(fft2(U))).^2; % psf derivation
And I could go on to increase the resolution with more zero layers.
My question is, is it okay to add as many zero layers as I want to increase the PSF resolution, as long as I keep track of the PSF axis scales?
or should this way be avoided? If so, why?
Thank you
This second approach is based on the Huygens–Fresnel principle: each grid point in the pupil plane is considered an emitter of a spherical wave (wavelet) with the initial phase of the wavefront. The ('coherent') sum of all wavelet amplitudes at each point of a grid in the image plane equals the field distribution, and its squared magnitude yields the intensity of the image in the detector plane. This approach can be regarded as a numerical solution of a slight simplification of the Fresnel–Kirchhoff diffraction formula.
It is a quite flexible approach. For example, it also works for curved surfaces; e.g. the ray-tracing software 'Zemax' supports this algorithm (in addition to the FFT approach).
However, it still assumes some special prerequisites. For example, it is assumed that the only beam-limiting element in the system is the aperture stop, which is responsible for the diffraction. This is called the "single-step diffraction approach". In typical camera systems this is not the case at the FoV edges, but the approach still seems to yield reasonable results. I am not exactly sure how to determine when this approach really breaks down; this is something I would like to find out.
Best regards, Christof.
• asked a question related to Optics
Question
Hi,
Is there a mathematical equation or formula to find the extinction coefficient or absorption coefficient of a thin layer from its transmittance, or from the refractive index of the material?
Optical and electronic properties for As-60 at.% S uniform thickness of thin films: Influence of Se content
also see attached file
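If reflection and interference in the film are neglected, a commonly used first-order relation is α = −ln(T)/d, and the extinction coefficient follows as k = αλ/(4π). A minimal sketch with illustrative numbers (not taken from the paper above):

```python
import numpy as np

def absorption_coefficient(T, d):
    """alpha = -ln(T) / d  (Beer-Lambert law, reflection losses neglected)."""
    return -np.log(T) / d

def extinction_coefficient(alpha, wavelength):
    """k = alpha * wavelength / (4 * pi)."""
    return alpha * wavelength / (4 * np.pi)

# Illustrative numbers: a 1 um film transmitting 60% at 633 nm
alpha = absorption_coefficient(0.60, 1e-6)       # ~5.1e5 1/m
k = extinction_coefficient(alpha, 633e-9)        # ~0.026
```

For accurate values the measured transmittance should first be corrected for reflection at both interfaces, e.g. via Swanepoel's envelope method for films on transparent substrates.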
• asked a question related to Optics
Question
Talking to Dr. Jörn Schliewe inspired me to raise this illustrated question and how you may call these barriers in the experiment of diffraction? Would you call it n-slits or n-obstacles?
Well, first, it’s N+1 obstacles or, if you don’t want to count the long walls at either end for some reason, N−1 obstacles, but certainly not N obstacles.
It certainly doesn’t matter what you call it. In your picture the two terms are both correct, and not mutually exclusive. It is, in fact, N+1 obstacles forming N slits.
I don’t think anyone misunderstands that slits are formed by barriers, and if you talk about N slits everyone will instantly picture a barrier with slits in it. However, on a practical note, at optical wavelengths it generally isn’t possible to have free-standing barriers like this. Instead the solid wall continues above and below. Generally a transmissive grating looks like a solid barrier with “slits” cut into it. So the “slits” term is constructivist: it is indicative of how the structure is created. You cut slits into a foil or similar. That is the dictionary definition of slit: a narrow cut. That is also how this became the standard terminology in optics, because in the early experiments that is literally how gratings were made. We’ve greatly improved our “knife”, but fundamentally that is how subtractive transmission gratings are still made today.
Terminology is for understanding, and often it uses similarity for recognition. No one thinks the arrow slits in a castle wall were literally made by cutting, but they look like cuts. If you call them slits everyone understands what you are talking about. That is the only important criterion for terminology.
In optics we always talk about the slits. This is probably because we are focused on the light. Each slit is treated as a source, we propagate onward using Huygens’ principle, etc. It doesn’t really matter what the barriers are so long as they exist. However, we have to talk about slit width and slit spacing, so in what an artist might call “negative space” we are inevitably also describing the barrier. Everyone gets that. I don’t think I’ll switch to explicitly talking about the barriers any time soon.
• asked a question related to Optics
Question
Can anyone tell me the reason for these sharp spectral modulations in an SPM-broadened power spectrum? This is the spectrum of a mode-locked laser with 0.4 nm initial bandwidth at 1064 nm. After amplification to 400 mW (in YDF) and propagation through a 6 m length of PM-980, this spectrum appeared. I am wondering how to get rid of these modulations to enable efficient pulse compression.
Please see the attached picture.
As far as I can see, you are showing the spectrum on a log scale, but these oscillations are essentially a hallmark of SPM, and you can estimate the total accumulated nonlinear phase from the number of spectral oscillations. This was observed for the first time by Roger Stolen in the 1970s:
Self-phase-modulation in silica optical fibers
R. H. Stolen and Chinlon Lin
Phys. Rev. A 17, 1448 – Published 1 April 1978
A more detailed discussion can be found in Govind Agrawal's textbook, Nonlinear Fiber Optics. I would estimate the total nonlinear phase in your fiber at about 3.5π.
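The peak-counting rule (Agrawal gives φ_max ≈ (M − ½)π, where M is the number of spectral peaks) is easy to verify numerically. A minimal sketch, assuming a Gaussian pulse and pure SPM with no dispersion (an idealization, not a model of the actual laser):

```python
import numpy as np

phi_max = 3.5 * np.pi                 # peak nonlinear phase (B-integral)
t = np.linspace(-10, 10, 4096)        # time grid in pulse-width units
envelope = np.exp(-t**2 / 2)          # Gaussian input pulse
field = envelope * np.exp(1j * phi_max * envelope**2)   # pure SPM phase

spectrum = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
spectrum /= spectrum.max()

# Count spectral peaks above 2% of the maximum; SPM theory predicts
# M peaks with phi_max = (M - 1/2)*pi, i.e. M = 4 for phi_max = 3.5*pi.
peaks = np.sum((spectrum[1:-1] > spectrum[:-2]) &
               (spectrum[1:-1] > spectrum[2:]) &
               (spectrum[1:-1] > 0.02))
```

Note that pure SPM leaves the spectral phase well behaved, so the modulation itself need not prevent compression; problems arise when dispersion and gain shaping distort it.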
• asked a question related to Optics
Question
I would like to simulate the transmittance of a bi-layer thin film. Is there any free software?
Mahesha M G kindly share software on
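If you end up writing it yourself: the normal-incidence transmittance of a small stack follows from the standard characteristic-matrix (transfer-matrix) method in a few lines. A minimal Python sketch (the layer indices, thicknesses, and non-absorbing substrate are illustrative assumptions):

```python
import numpy as np

def stack_transmittance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.5):
    """Normal-incidence transmittance of a thin-film stack on a substrate,
    computed with the characteristic-matrix (transfer-matrix) method."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength     # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub], dtype=complex)
    return 4 * n_in * np.real(n_sub) / abs(n_in * B + C) ** 2

# Sanity check: an ideal quarter-wave AR layer with n = sqrt(n_sub) gives T = 1
n_ar = 1.5 ** 0.5
print(stack_transmittance([n_ar], [550e-9 / (4 * n_ar)], 550e-9))

# Bi-layer example (illustrative indices/thicknesses)
print(stack_transmittance([2.35, 1.38], [58e-9, 100e-9], 550e-9))
```

Complex refractive indices can be passed directly for absorbing layers; for a full transmittance-vs-wavelength curve, loop over wavelength with dispersive n(λ) data.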
• asked a question related to Optics
Question
Is there any information about refractive indices and extinction coefficients for some types of stainless steel?
Not sure if it has stainless steel, but a good reference to have saved nonetheless.
• asked a question related to Optics
Question
I would like to do a project on laser measurement systems. My objective is to eliminate or reduce the cosine error and Abbe error in a laser measurement system. I have a Renishaw XL80 system.
Please tell me the procedure to eliminate each of these errors, and help me to complete my project.
Dear Joshua,
# In order to eliminate or reduce the cosine error, in my opinion, you have to align your laser head so that the misalignment angle (θ) is sufficiently small. You can do that by trial and error.
# In order to eliminate or reduce the Abbe error, in my opinion, you have to reduce or remove the Abbe offset (d). In my understanding, there are two methods: a zero-Abbe-offset configuration and Abbe error compensation. Please see the attached file. The disadvantage of the zero-Abbe-offset configuration is that the system becomes large. I do not have experience with Abbe error compensation; I am still learning about it. Generally, you need a double-pass interferometer. I suggest you read the publications of Dr. @Jong-Ahn Kim. I remember that he published a paper in 2012 about angular compensation to remove Abbe error. He is an expert in this field.
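To get a feeling for the magnitudes involved: the Abbe error is approximately d·tan(θ), where d is the Abbe offset and θ the angular error of the moving stage. A minimal sketch (the 50 mm / 10 arcsec numbers are illustrative, not XL80 specifications):

```python
import math

def abbe_error(offset_mm, angle_arcsec):
    """Abbe error = offset * tan(angular motion error)."""
    angle_rad = math.radians(angle_arcsec / 3600.0)
    return offset_mm * math.tan(angle_rad)

# Illustrative: 50 mm Abbe offset with a 10 arcsec pitch error
err_mm = abbe_error(50.0, 10.0)
print(f"{err_mm * 1000:.2f} um")   # 2.42 um
```

This shows why even small angular errors matter: halving the offset d halves the error, which is the motivation for the zero-Abbe-offset configuration.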
• asked a question related to Optics
Question
If the laser head is parallel to the machine axis, the cosine error will be zero (θ = 0). If the laser head is tilted, θ takes some nonzero value.
My question is how to measure that particular θ (the angle between the machine axis and the laser path). I cannot use a normal bevel protractor. What particular equipment is used to measure such an angle?
Dear Joshua,
I agree with Prof. Srini Vasan . I would like to add information only.
Actually, the misalignment angle (θ) is approximately the ratio of the shift of the laser beam spot between the first and the second position (Δd) to the travel length (L).
How do you obtain it?
In a laser interferometry system, you can attach a pointer target to the measurement face of the mirror reflector. Generally, the mirror reflector is mounted on the moving part. A pointer target is usually included with the optics set; it looks like a metal sheet with a hole and a target logo on it.
In the first step, switch the shutter control of your laser head to aperture mode and direct the laser beam at the target on the mirror reflector.
In the second step, move your machine by a displacement of, for example, 1000 mm.
Ideally, the laser beam spot in the first and second steps would be at the same position (Δd = 0), but in practice the positions differ (Δd ≠ 0). Observe the laser beam spot on the pointer target; you can measure the shift with a ruler (accurate enough, I think).
Finally, you can calculate the misalignment angle as θ ≈ Δd divided by 1000 mm (for this example).
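The arithmetic above, together with the resulting cosine error L·(1 − cos θ), in a minimal sketch (Δd = 1 mm over L = 1000 mm as illustrative numbers):

```python
import math

def misalignment_angle(delta_d_mm, travel_mm):
    """theta = atan(delta_d / L); for small shifts this is ~ delta_d / L."""
    return math.atan(delta_d_mm / travel_mm)

def cosine_error(travel_mm, theta_rad):
    """Length shortfall L * (1 - cos(theta)) caused by the misalignment."""
    return travel_mm * (1.0 - math.cos(theta_rad))

theta = misalignment_angle(1.0, 1000.0)    # 1 mm spot shift over 1 m travel
err = cosine_error(1000.0, theta)          # ~5e-4 mm, i.e. ~0.5 um
```

Because the error scales with θ²/2, even a coarse ruler measurement of Δd is usually sufficient to verify that the alignment is good enough.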
• asked a question related to Optics
Question