
# Optics - Science topic

Explore the latest questions and answers in Optics, and find Optics experts.
Questions related to Optics
• asked a question related to Optics
Question
Diffraction is typically associated with a small aperture or obstacle. Here I would like to share a video that I took a few days ago, which shows that diffraction can similarly be produced by macroscopic items:
I hope you can explain this phenomenon with wave-particle duality or quantum mechanics. However, I can simply interpret it with my own idea of Inhomogeneously refracted space at:
1) The diffraction pattern is oscillatory in nature. For monochromatic light, you won't have a sudden dark fringe followed by a sudden bright one; you'll see gradual transitions between the two.
2) That is obviously not monochromatic light. As such, there is no reason all wavelengths should be extinguished at the same place in the dark fringes. One would instead see colour variations.
3) Let's do some rough calculations. In k-space the diffraction pattern of an aperture or object will be similar to sinc(a*k), where a is the half-width of the object.
That means the first zero will be at k = pi/a.
But that is only the tangential component of the k vector, i.e. the sine of its projection.
So, if you want to find the corresponding angle, you have
sin(ang) = (pi/a) / k0 = lambda/(2a)
Let's say 2a=3cm and lambda=600nm (yellow), we obtain
sin(ang) = 600e-9 / 3e-2 = 2e-5 ~ang
As such, if you were 10 m from the object, the corresponding length projected on the sensor of your camera would be roughly
L ~ ang * 10 m = 0.2 mm, which is much smaller than the sensor itself. So we should see the diffraction pattern if it had any actual energy in it.
You did not see diffraction.
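For reference, the rough numbers in the answer above can be reproduced in a few lines of Python (illustrative values only: object width 2a = 3 cm, wavelength 600 nm, distance 10 m):

```python
import math

# Rough single-object diffraction estimate, following the answer above.
wavelength = 600e-9      # m, yellow light
width = 3e-2             # m, full width 2a of the object
distance = 10.0          # m, object-to-sensor distance

# First zero of the sinc envelope: sin(theta) = lambda / (2a) ~ theta
theta = wavelength / width            # small-angle approximation
fringe_extent = theta * distance      # projected size at the sensor

print(f"theta ~ {theta:.1e} rad")
print(f"extent ~ {fringe_extent*1e3:.2f} mm")
```

The projected fringe scale (~0.2 mm at 10 m) is far below what a handheld camera of a distant macroscopic object would resolve as fringes, which is the answer's point.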
• asked a question related to Optics
Question
Hi All,
I am trying to generate the 3D corneal surface from Zernike polynomials. I am using the following steps; can anyone please let me know whether they are accurate?
Step 1: Converted the cartesian data (x, y, z) to polar data (rho, theta, z)
Step 2: Normalised the rho values so that they are less than one
Step 3: Based on the order, calculated the Zernike polynomials (Zpoly), (for example: if the order is 6, the number of polynomials is 28 )
Step 4: Zfit = C1 * Z1 + C2 * Z2 + C3 * Z3 + ......... + C28 * Z28
Step 5: Using regression analysis, calculated the coefficient (C) values
Step 6: Calculated the error between the predicted value (Zfit) and the actual elevation value (Z)
Step 7: Finally, converted the polar data (rho, theta, Zfit) to Cartesian coordinates to get the approximated corneal surface
Thanks & Regards,
Nithin
Seems fine to me. Step 6 will tell you if you did it right. The residual should be small, and it also shouldn't show any low-order structure that looks too similar to any of the fitted Zernikes.
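As a sanity check for Steps 4-6, here is a minimal Python/NumPy sketch of the least-squares fit on synthetic data. The five Zernike terms and all numbers are illustrative placeholders, not the full 28-term, order-6 basis:

```python
import numpy as np

# Minimal sketch of Steps 4-6: fit elevation data z(rho, theta) with a few
# low-order Zernike terms via linear least squares.
rng = np.random.default_rng(0)
rho = rng.uniform(0, 1, 500)            # already normalised (Step 2)
theta = rng.uniform(0, 2 * np.pi, 500)

def zernike_basis(rho, theta):
    return np.column_stack([
        np.ones_like(rho),                      # piston
        2 * rho * np.cos(theta),                # x-tilt
        2 * rho * np.sin(theta),                # y-tilt
        np.sqrt(3) * (2 * rho**2 - 1),          # defocus
        np.sqrt(6) * rho**2 * np.cos(2*theta),  # astigmatism
    ])

# Synthetic "measured" surface: known coefficients plus noise
c_true = np.array([0.5, 0.1, -0.2, 0.8, 0.05])
Z = zernike_basis(rho, theta)
z = Z @ c_true + 1e-3 * rng.standard_normal(rho.size)

# Step 5: regression; Step 6: residual error
c_fit, *_ = np.linalg.lstsq(Z, z, rcond=None)
rms_residual = np.sqrt(np.mean((z - Z @ c_fit) ** 2))
print(c_fit, rms_residual)
```

If the pipeline is correct, the recovered coefficients match the known ones and the RMS residual sits at the noise floor, with no low-order structure left over.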
• asked a question related to Optics
Question
Hi there,
I hope you are doing well.
In the lab we have different BBO crystals; however, they were not marked in the past, so we don't know which crystal is which. I would appreciate it if somebody had an idea of how to measure the thickness of BBO crystals.
The second question is: are the BBO crystals sandwiched between two glass plates or not? If so, does the measurement become more complicated?
Best regards,
Aydin
• asked a question related to Optics
Question
How else can we explain :
Imprimis : That a light ray has different lengths for different observers. (cf. B.)
ii. That the length of a light ray is indeterminate? - both gigantic, and nothing, within the Einstein- train embankment carriage : (cf. B.)
iii. That a light ray can be both bent and straight. Bent for one observer, and straight for another : (cf. C.)
iv. That a light ray "bends" mid-flight in an effort to be consistent with an Absolute event which lies in the future : (cf. C.)
v. That these extraordinary things -- this extraordinary behaviour, (including the "constancy of speed") are so that the reality is consistent among the observers -- in the future. (cf. D, B, C)
vi. That light may proceed at different rates to the same place--- wholly on account of the reality at that place having to be consistent among the observers : (cf. D, A)
---------------------------------------------------------
B. --
C.--
D.--
Light is natural. We are getting closer to the nature of light.
• asked a question related to Optics
Question
Hello everyone,
I have made several optical phantoms with different weight ratios of ink in PDMS, from 0 wt% to 5 wt%. I have measured the transmission (%) and reflection (%) of each sample.
From there I calculated the absorption with the Beer-Lambert law, A=log(I0/I), with I0 being the transmission with 0wt% of ink and I the transmission of the sample desired.
I can therefore get the absorption coefficient of the phantoms with the formula: ua = A/thickness.
Therefore I have a linear relationship between the weight percentage of the phantoms and their absorption coefficient.
Now my issue is that I want to create a phantom of 2 cm thickness with a known ratio of ink to PDMS.
Should I assume the absorption coefficient will not change from the 2mm sample to the 2cm one ?
Otherwise, how do I determine the absorption coefficient of my new phantom?
Of course, I cannot measure the transmission of this sample as it is too thick now.
Thank you for your help!
Determining the optical properties of a phantom with a different thickness based solely on the properties of a thinner phantom can be challenging. While there might be some assumptions and approximations involved, I can provide you with some guidance on how to approach this issue.
Firstly, it is important to note that the Beer-Lambert law assumes a linear relationship between the absorption coefficient and the thickness of the medium. However, this assumption may not hold true for all materials and scenarios. In your case, the ink-PDMS mixture might exhibit nonlinear behavior as the thickness increases, especially if there are scattering effects or other factors involved.
To estimate the absorption coefficient of your new 2cm-thick phantom with a known ink-PDMS ratio, you can consider the following approaches:
1. Use a calibration curve: Based on the linear relationship you have established between the weight percentage of ink and the absorption coefficient in the 2mm-thick phantoms, you can create a calibration curve. Plot the weight percentage of ink against the corresponding absorption coefficient for your various samples. Then, extrapolate the calibration curve to estimate the absorption coefficient for the 2cm-thick phantom at the desired ink-PDMS ratio. However, keep in mind that extrapolation introduces additional uncertainties, and the accuracy of the estimation may vary.
2. Consider theoretical models: Explore theoretical models or empirical equations that relate the absorption coefficient to the material composition, such as the Mie theory or effective medium approximations. These models take into account the composition and structure of the material and can provide estimations of the absorption coefficient for different thicknesses. However, the accuracy of these models depends on the specific characteristics of your ink-PDMS mixture.
3. Conduct additional experiments: While it may not be feasible to directly measure the transmission of the 2cm-thick phantom, you could consider alternative experimental methods or techniques that can provide insights into the optical properties. For example, you could use diffuse reflectance spectroscopy, which measures the reflectance of light from the surface of the phantom. By analyzing the reflectance data, you may be able to infer certain optical properties, including the absorption coefficient.
In any case, it is important to acknowledge the limitations and uncertainties associated with estimating the optical properties of a phantom with a different thickness based on data from a different thickness. Ideally, conducting experimental measurements on the 2cm-thick phantom would provide the most accurate and reliable results. However, if that is not possible, the approaches mentioned above can serve as initial estimations, but they may require further validation and verification.
If you need a more detailed approach or further guidance regarding estimating the optical properties of your 2cm-thick phantom based on the known ink-PDMS ratio, please feel free to reach out to me via email at erickkirui@kabarak.ac.ke. I will be more than happy to assist you in exploring additional strategies or discussing specific theoretical models that could be relevant to your specific case.
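To make approach 1 concrete, here is a minimal Python sketch of the calibration-curve idea. All numbers are made-up illustration values, a natural-log (base-e) absorption coefficient is assumed, and scattering is ignored, which is the main caveat for a thick PDMS phantom:

```python
import math

# Approach 1 in code: fit mu_a vs ink weight % from the thin samples,
# then predict the attenuation of a thicker phantom.
wt_pct = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # ink weight %
mu_a   = [0.0, 0.9, 1.8, 2.7, 3.6, 4.5]   # 1/cm, from the 2 mm samples

# Least-squares slope through the origin: mu_a = k * wt%
k = sum(w * m for w, m in zip(wt_pct, mu_a)) / sum(w * w for w in wt_pct)

mu_pred = k * 2.5                          # 1/cm at 2.5 wt% ink
thickness = 2.0                            # cm
transmission = math.exp(-mu_pred * thickness)
print(k, mu_pred, transmission)
```

The prediction only interpolates in concentration; as noted above, it still assumes the coefficient itself does not change with thickness, which experiment would need to confirm.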
• asked a question related to Optics
Question
I have a laser with a spectral bandwidth of 19nm and is linearly chirped. The Fourier transform limit is ~82fs assuming a Gaussian profile. I have built a pulse compressor based on the available transmission grating (1000 lines/mm, see the attachment); however, I noticed that the minimum achievable dispersion (further decrease in dispersion is limited by the distance between the grating and horizontal prism) of the compressor is greater than what is required for the optimal pulse compression supported by the optical bandwidth. Is there a way to decrease dispersion further in this setup? or Are there any other compressor configurations using single-transmission grating which might have more flexible dispersion control?
I think it is possible to use only one prism (a grating double-pass scheme): you can decrease the dispersion by a factor of two.
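For a quick check of how the geometry limits the compression, the standard Treacy grating-pair GDD formula can be evaluated numerically. The wavelength, incidence angle, and separation below are assumed illustration values, not the questioner's exact setup:

```python
import math

# Group-delay dispersion of a grating-pair compressor (Treacy formula),
# first diffraction order. Illustrative parameters: 1000 lines/mm grating,
# 1030 nm center wavelength, 35 deg incidence, 10 mm separation.
lam = 1030e-9            # m
d = 1e-3 / 1000          # m, groove spacing for 1000 lines/mm
theta_i = math.radians(35)
L = 10e-3                # m, perpendicular grating separation (single pass)
c = 299792458.0

sin_diff = lam / d - math.sin(theta_i)   # sine of the diffracted angle
gdd = -(lam**3 * L) / (2 * math.pi * c**2 * d**2) * (1 - sin_diff**2) ** -1.5

print(f"GDD per pass: {gdd*1e30:.0f} fs^2")
```

Since the GDD scales linearly with the separation L, the minimum mechanical distance between the grating and the fold prism directly sets the floor on the achievable dispersion, as observed in the question.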
• asked a question related to Optics
Question
There is a fiber-coupled EOM setup after the double-pass AOM setup. The intensity of the light is constant (checked with a power meter) before the input of the EOM, but there is a 10% fluctuation in the power just after the EOM, due to which I am unable to lock to the signal that I see on the oscilloscope. This signal fluctuates up and down on the screen. Is there any solution, or could I be doing something wrong somewhere? I have realigned all optical elements numerous times to get it done correctly, but I am still facing the problem.
thank you for your suggestion, I will try that.
• asked a question related to Optics
Question
I am developing a mathematical model, with Matlab code, of the optical coherence tomography signal for testing B-scan extraction algorithms. I am now struggling with accounting for optical subsystems, such as the sample scanner and the spectrometer, which distort the optical signal. I am thinking of simulating this directly in Matlab. Or would it be better to use dedicated optical-simulation software and then transfer some coefficients into the Matlab code to emulate the optics? (I suppose I could do it with a 2D Fourier transform, because I implemented all other parts of the OCT system in the spectral domain.) Are there any code examples or tutorials?
Many thanks to you, Thorsten Zwinger !
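If it helps, here is a minimal sketch (in Python/NumPy, directly translatable to Matlab) of the Fourier-domain idea mentioned in the question: the distorting optics are emulated by multiplying the 2D spectrum of an ideal image by an assumed Gaussian transfer function. The cutoff width is a placeholder; in practice it would come from the optical-simulation software.

```python
import numpy as np

# Emulate scanner/spectrometer blur as a 2D transfer function applied
# in the Fourier domain to an ideal B-scan (here: a single point scatterer).
ideal = np.zeros((64, 64))
ideal[32, 32] = 1.0                      # point scatterer

fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
sigma_f = 0.1                            # assumed frequency-domain width
H = np.exp(-(fx**2 + fy**2) / (2 * sigma_f**2))   # Gaussian MTF

blurred = np.real(np.fft.ifft2(np.fft.fft2(ideal) * H))
print(blurred[32, 32], blurred.sum())    # peak spreads; energy preserved
```

The same two-line multiply-in-Fourier-space pattern works for any transfer function exported from an optics tool, which is why transferring coefficients rather than the full optical model is usually enough for algorithm testing.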
• asked a question related to Optics
Question
So-called "Light with a twist in its tail" was described by Allen in 1992, and a fair-sized movement has developed with applications. For an overview see Padgett and Allen 2000 http://people.physics.illinois.edu/Selvin/PRS/498IBR/Twist.pdf . Recent investigation, both theoretical and experimental, by Giovaninni et al. in a paper auspiciously titled "Photons that travel in free space slower than the speed of light", and also Bereza and Hermosa, "Subluminal group velocity and dispersion of Laguerre Gauss beams in free space", respectably published in Nature https://www.nature.com/articles/srep26842 , argue the group velocity is less than c. See the first attached figure from the 2000 overview with caption "helical wavefronts have wavevectors which spiral around the beam axis and give rise to an orbital angular momentum". (Note that Bereza and Hermosa report that the greater the apparent helicity, the greater the excess dispersion of the beam, which seems a clue that something is amiss.)
General Relativity assumes light travels in straight lines in local space. Photons can have spin, but not orbital angular momentum. If the group velocity is really less than c, then the light could be made to appear stationary or move backward by appropriate reference frame choice. This seems a little over the top. Is it possible what is really going on is more like the second figure, which I drew, titled "apparent" OAM? If so, how did the interpretation of this effect get so out of hand? If not, how have the stunning implications been overlooked?
You are right, the photon has a spiraling trajectory, just like the electron. This explains the associated wave of both, at least partly; there is still the mystery of Planck's constant! Why do both behave in a similar manner? QM is just a superficial theory based on the associated wave.
JES
• asked a question related to Optics
Question
I am going to make a setup for generating and manipulating time bin qubits. So, I want to know what is the easiest or most common experimental setup for generating time bin qubits?
thanks
Time-bin encoding is a technique used in quantum information science to encode a qubit of information on a photon. Quantum information science makes use of qubits as a basic resource similar to bits in classical computing. Qubits are any two-level quantum mechanical system; there are many different physical implementations of qubits, one of which is time-bin encoding.
• asked a question related to Optics
Question
The intensity of each ray in RayOptics module of COMSOL is easily obtained after ray tracing. I need to find the radiation intensity in the mesh points which would be related to all the rays crossing a point . Does anybody know how I can get the continuous contours of radiation intensity? Should I use accumulator? If yes, what should be the settings of the accumulator?
Have you solved this problem? In COMSOL, when a ray leaves a mesh element, the accumulator does not accumulate the ray power. This also bothers me.
• asked a question related to Optics
Question
1. The necessity of a polarization controller for single-mode fiber. Is a polarization controller necessary for single-mode fibers? What happens when you don't have a polarization controller?
2. Optical path matching problem. How do you ensure that the optical path lengths of the two arms match? Any tips for the adjustment process? If the optical path difference exceeds the imaging distance, will interference fringes fail to appear?
Only these questions for the time being, if there are more welcome to point out.
1. The polarization can change during propagation. This will degrade the visibility of the fringes. You need a polarization controller or a polarization-maintaining fiber.
2. By the imaging distance, do you mean the distance to the sample or the sample's dimension? In any case, if the OPD exceeds the coherence length of your source, the interference fringes disappear. In order to match the OPD, we typically sweep the reference arm over a long distance and record the output intensity. Another method is to monitor the output spectrum using a spectrometer. The spectrum shows oscillations for non-zero OPD. When the OPD approaches zero, the spectrum oscillations tend to disappear.
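The spectrometer method can be illustrated with a few lines of Python: for a non-zero OPD the recorded spectrum oscillates as cos(k·OPD), and the oscillation period in wavenumber grows as the OPD shrinks. The wavelength and OPD values below are illustrative:

```python
import math

# Spectral-fringe picture: the interferometer output spectrum goes as
# 0.5 * (1 + cos(k * OPD)), so the fringe period in wavenumber is 2*pi/OPD.
lam0 = 1310e-9                  # m, center wavelength (illustrative)
opd = 100e-6                    # m, optical path difference (illustrative)

k0 = 2 * math.pi / lam0
period_k = 2 * math.pi / opd    # fringe period in k; diverges as OPD -> 0

def intensity(k):
    return 0.5 * (1 + math.cos(k * opd))

# Two wavenumbers exactly one fringe period apart give the same intensity.
print(period_k, intensity(k0), intensity(k0 + period_k))
```

Watching these fringes widen and disappear as the reference arm is swept is precisely the zero-OPD search described above.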
• asked a question related to Optics
Question
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and viewed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several seconds per frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
Gerhard Martens Thanks! I guess that is my problem solved.... Thanks for your input and suggestions.... :-D
• asked a question related to Optics
Question
1) Can the existence of an aether be compatible with local Lorentz invariance?
2) Can classical rigid bodies in translation be studied in this framework?
By taking into account that the synchronization of clocks in inertial frames is just a gauge, we may use a synchronization that clearly violates Lorentz invariance globally but preserves it locally in the vicinity of each point of flat spacetime. Then the answer to 1) and 2) seems to be affirmative.
Christian Corda showed in 2019 that this effect of clock synchronization is a necessary condition to explain the Mössbauer rotor experiment (Honorable Mention at the Gravity Research Foundation 2018). In fact, it can be easily shown that it is a necessary condition to apply the Lorentz transformation to any experiment involving high velocity particles traveling along two distant points (including the linear Sagnac effect) .
---------------
We may consider the time of a clock placed at an arbitrary coordinate x to be t and the time of a clock placed at an arbitrary coordinate xP to be tP. Let the offset (t – tP) between the two clocks be:
1) (t – tP) = v (x - xP)/c2
where (t - tP) is the so-called Sagnac correction. If we call g the Lorentz factor for v and we insert 1) into the time-like component of the Lorentz transformation T = g (t - vx/c2), we get:
2) T = g (tP - vxP/c2)
On the other hand, if we assume that the origins coincide x = X = 0 at time tP = 0 we may write down the space-like component of the Lorentz transformation as:
3) X = g(x - vtP)
Assuming that both clocks are placed at the same point x = xP , inserting x =xP , X = XP , T = TP into 2)3) yields:
4) XP = g (xP - vtP)
5) TP = g (tP - vxP/c2)
which is the local Lorentz transformation for an event happening at point P. On the other hand , if the distance between x and xP is different from 0 and xP is placed at the origin of coordinates, we may insert xP = 0 into 2)3) to get:
6) X = g (x - vtP)
7) T = g tP
which is a change of coordinates that:
- Is compatible with GPS simultaneity.
- Is compatible with the Sagnac effect. This effect can be explained in a very straightforward manner without the need to use GR or the Langevin coordinates.
- Is compatible with the existence of relativistic extended rigid bodies in translation, using the classical definition of rigidity instead of Born's definition.
- Can be applied to solve the 2 problems of the preprint below.
- Is compatible with all experimental corroborations of SR: aberration of light, the Ives-Stilwell experiment, the Hafele-Keating experiment, ...
Thus, we may conclude that, considering the synchronization condition 1):
a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a unique single clock.
b) The Lorentz invariance is broken out when we use two clocks to measure time intervals for long displacements (eqs. 6-7).
c) We need to consider the frame with respect to which we must define the velocity v of the synchronization condition (eq 1). This frame has v = 0 and it plays the role of an absolute preferred frame.
a)b)c) suggest that the Thomas precession is a local effect that cannot manifest for long displacements.
There is a difference between Lorentz transformations and scale transformations.
Special relativity satisfies Lorentzian symmetry due to the constancy of the speed of light and the special relativity principle.
However, in reality, the aether makes the speed of light locally invariant, so the Lorentz transformation is not necessary.
• asked a question related to Optics
Question
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the temperature reading of the food was quite good (i.e. close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's reported temperature suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
As I suspected that the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens, to ensure that the camera could only see the food through the hole and not the hot pan any more. The food temperature obtained by the camera was then correct again, but it became wrong again when I took the paper out.
I would appreciate any explanations of this phenomenon and solution, from either physics or optics
Thanks for all comments.
a short update: we talked to the manufacturer, and they confirm the phenomenon as Ang Feng explained. We are going through possible solutions
• asked a question related to Optics
Question
For my experiments I need circularly polarized laser light at 222 nm.
Can anyone tell me is there any important difference between a λ/4 phase plate and a Fresnel rhomb? Prices seem to be comparable. My intuition tells me that phase plate could be significantly less durable due to optical damage of UV AR coatings on all of the surfaces. Also it seems that it is much harder to manufacture the UV phase plate since there are very limited light sources for testing. While Fresnel rhomb seems to be more easy to produce and apochromat. What am I supposed to choose?
I think that absorption is not a problem since I want to install the rhomb or λ/4 into the seed pulse before the final amplifier. The pulse that I will work with is only needed for injection locking of spectrum and to the large extent can be strongly attenuated. Same with space, it's not crucial.
It seems that in my case (nanosecond, narrow-bandwidth pulse) the difference between a rhomb and a plate can be neglected. When it comes to ultrashort/broadband pulses, the thickness of a rhomb becomes a drawback, since it introduces additional dispersion (pulse stretching) and even self-focusing and absorption.
• asked a question related to Optics
Question
There are many fields where light field can be utilized. For example it is utilized in microscopy [1] and for vision based robot control [2]. Which additional applications do you know?
Thank you in advance!
[1] Li, H., Guo, C. and Jia, S., "High-resolution light-field microscopy," Frontiers in Optics, FW6D. 3 (2017).
[2] Tsai, D., Dansereau, D. G., Peynot, T. and Corke, P. , "Image-Based Visual Servoing With Light Field Cameras," IEEE Robotics and Automation Letters 2(2), 912-919 (2017).
Hi Vladimir Farber, I think one of the fields of Light Field Images missing to be mentioned is the "Quality Assessment of Light Field Images" when they are propagating through a communication channel. For your kind reference, one of the published papers in this field is :
"Exploiting saliency in quality assessment for light field images."
• asked a question related to Optics
Question
Hi. I have a question. Do substances (for example Fe or benzene) in trace amounts (for example micrograms per liter) cause light refraction? And if they do, is this refraction large enough to be detected? And is it unique to each substance?
I also need to know: if we have a solution containing different substances, can refraction help us determine what the substances are? Can it measure their concentration?
Thanks for your help
It is also on ResearchGate. Author Shangli Pu. Measurement of refractive index of magnetic fluid by Retro - reflection on fiber optics end face.
• asked a question related to Optics
Question
I have profiled a collimated pulsed laser beam (5mm) at different pulse energies by delaying the Q-switch and I found the profile to be approximately gaussian. Now I have placed a negative meniscus lens to diverge the beam and I put a surface when the beam spot size is 7 mm. Should the final beam profile (at the spot size = 7 mm) be still gaussian? Or the negative lens will change the gaussian profile? Is there any way to calculate the intensity profile theoretically, without again doing the beam profiling by methods like Razor blade method? Thanks.
Gaussian laser beams propagate as Gaussian laser beams if their intensity is sufficiently weak that they do not affect the refractive index of the medium through which they propagate, so that a linear approximation is valid. Furthermore, the "diameter" of the laser beam should be large compared to the wavelength so that the Fresnel approximation is valid (at least a few micrometers). The medium should also be (fairly) homogeneous. This is related to the paraxial approximation used (small angles of deflection). The formulas for Gaussian beam propagation are easily found in various textbooks. There is no fundamental difference between divergent and convergent lenses. The standard paraxial matrix formulation used in geometrical optics can be used to calculate Gaussian beam transformations. The beam is characterized by its waist diameter Wo and the position xo of this waist with respect to the lens.
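As an illustration of the paraxial matrix method described above, here is a short Python sketch propagating the complex beam parameter q through a thin negative lens and free space. The wavelength, waist, focal length, and distance are assumed example values:

```python
import math

# Propagate the complex beam parameter q = z + i*zR through ABCD elements:
# q' = (A*q + B) / (C*q + D), with 1/q = 1/R - i*lam/(pi*w^2).
lam = 1064e-9                     # m, wavelength (illustrative)
w0 = 2.5e-3                       # m, input waist radius (5 mm diameter beam)
f = -0.2                          # m, negative (diverging) lens focal length
z = 0.5                           # m, propagation distance after the lens

zR = math.pi * w0**2 / lam        # Rayleigh range

def apply_abcd(q, A, B, C, D):
    return (A * q + B) / (C * q + D)

q = complex(0.0, zR)                  # collimated input: waist at the lens
q = apply_abcd(q, 1, 0, -1 / f, 1)    # thin lens
q = apply_abcd(q, 1, z, 0, 1)         # free-space propagation

w = math.sqrt(-lam / (math.pi * (1 / q).imag))   # local beam radius
print(f"beam radius after {z} m: {w*1e3:.2f} mm")
```

The profile stays Gaussian throughout; only the radius w and the wavefront curvature R change, which is the answer's point that a negative lens does not destroy the Gaussian shape.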
• asked a question related to Optics
Question
I would like to calculate the Mode Field Diameter of a step index fiber at different taper ratios. I understand that at a particular wavelength, the MFD will be decreasing as the fiber is tapered. It may increase if it's tapered more. I am looking to reproduce the figures ( attached ) given in US Patent 9946014. Is there any formula I may use ? Or it involves some complex calculations?
Using COMSOL, MATLAB, or other simulation software it is easy to calculate the MFD. You need to consider the change of the wave-guiding structure as the taper diameter decreases: initially silica/(silica+Ge) and then air/silica. I believe you cannot use a simple formula to get an accurate result.
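That said, for a first estimate while the mode is still core-guided, Marcuse's empirical formula for the mode-field radius of a step-index fiber can be scaled with the taper ratio. The sketch below uses assumed SMF-like parameters and, as noted above, ignores the air/silica regime of a strongly drawn taper, so it cannot reproduce the patent figures exactly:

```python
import math

# Marcuse approximation for the mode-field radius of a step-index fiber:
#   w/a ~ 0.65 + 1.619/V**1.5 + 2.879/V**6
# Tapering scales the core radius a (and hence V) by the taper ratio.
lam = 1550e-9               # m, wavelength (assumed)
a0 = 4.1e-6                 # m, untapered core radius (assumed SMF-like)
NA = 0.12                   # numerical aperture (assumed)

def mfd(taper_ratio):
    a = a0 * taper_ratio
    V = 2 * math.pi * a * NA / lam
    w = a * (0.65 + 1.619 / V**1.5 + 2.879 / V**6)
    return 2 * w            # mode-field diameter

print(mfd(1.0), mfd(0.5), mfd(0.3))
```

The formula shows the qualitative behaviour in the question: as V drops below ~2 the mode spreads rapidly, so the MFD grows again with further tapering, at which point air/silica guidance takes over and a full modal solver is needed.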
• asked a question related to Optics
Question
Dear all,
Kindly provide your valuable comments based on your experience with surgical loupes
- Magnification (2.5 x to 5x)
- Working distance
- field of vision
- Galilean (sph/cyl) vs Kepler (prism)
- TTL vs non TTL/flip
- Illumination
- Post use issues (eye strain/ headache/ neck strain etc)
- Recommended brand
- Post sales services
Thank you
#Surgery #Loupes #HeadandNeck #Surgicaloncology #Otolaryngology
• asked a question related to Optics
Question
I am using a 550mW green laser (532nm) and I want to measure its intensity after being passed through several lenses and glass windows.
I found a ThorLabs power meter but it is around \$1200.
Any cheaper options to measure the intensity of the laser?
(high accuracy is not required)
• asked a question related to Optics
Question
I want to know if the number of fringes and their shape is an important factor for the accuracy of phase definition?
• asked a question related to Optics
Question
Solvents for the immersion oil are carbon tetrachloride, ethyl ether, Freon TF, heptane, methylene chloride, naphtha, turpentine, xylene, and toluene.
What is the best of these to clean the surface? Toluene? Heptane? I'd like to stay away from more dangerous chemicals if possible and have something that evaporates easily.
Methyl alcohol is the best
• asked a question related to Optics
Question
I would like to calculate the return loss for a splice / connection between two different fibers. One of the fibers has a larger core diameter and larger cladding diameter than the other. I was considering the approach laid out in this paper, which talks about identical fibers: M. Kihara, S. Nagasawa and T. Tanifuji, "Return loss characteristics of optical fiber connectors," in Journal of Lightwave Technology, vol. 14, no. 9, pp. 1986-1991, Sept. 1996, doi: 10.1109/50.536966.
I added a few screenshots from the paper.
Looks like a good paper and a good approach.
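As a rough order-of-magnitude sketch (not the full treatment in the Kihara et al. paper, which also covers gaps, angled ends, and mode mismatch), the Fresnel contribution from the effective-index step at the joint can be computed directly; the indices below are assumed example values:

```python
import math

# Dominant return-loss contribution at a butt joint between two fibers:
# Fresnel reflection from the effective-index step.
n1, n2 = 1.4677, 1.4630           # effective indices (illustrative)
R = ((n1 - n2) / (n1 + n2)) ** 2  # Fresnel power reflectance
return_loss_db = -10 * math.log10(R)
print(f"R = {R:.2e}, return loss = {return_loss_db:.1f} dB")
```

For dissimilar fibers the mode-field mismatch also reflects and radiates power, so this Fresnel number is only a lower bound on what the paper's full model would give.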
• asked a question related to Optics
Question
Hello
I am new to the field and I would like to ask: what are the criteria for saying that a photoswitchable compound or optogenetic molecule has fast kinetics and high spatiotemporal resolution in a cell-free model, a cellular/in vitro model, an in vivo model, or an ex vivo model? Is there a consensus criterion to quantitatively establish whether a compound has fast kinetics and high spatiotemporal resolution in these models?
For instance, if a compound becomes fully activated when turned on by light in less than 30 min, does it have fast kinetics?
On the other hand, if a compound can precisely activate certain neuronal regions in the brain but has off-target activations in the surrounding regions around 20 µm from the region of activation, does it have high spatiotemporal resolution?
I may have mixed-up some terms here, I will be glad if this will be clarified in the discussion.
Thanks.
That's the least I could do.
• asked a question related to Optics
Question
I want to calculate the propagation constant difference for LP01 and LP02 modes for a tapered SMF-28 (in both core and cladding).
Is there a simple formula that I can use? My goal is to see if the taper profile is adiabatic or not.
I am using this paper for my study : T. A. Birks and Y. W. Li, "The shape of fiber tapers," in Journal of Lightwave Technology, vol. 10, no. 4, pp. 432-438, April 1992, doi: 10.1109/50.134196.
equation in attached figure
Well, one way would be to consider the transcendental modal equations for the LP modes and compute the propagation constants for LP01 and LP02 for different values of the core radius. In fact, as long as the fiber is an FMF, you could find the propagation constants of all the LP modes allowed by the structure. Then you could use the equation given above to check whether the criterion is met or not.
The criterion will be slightly modified for structures with more than two modes. For example, if your structure supports the LP02 mode as well, then you must check the above criterion for coupling between the LP01 and LP11 modes rather than the LP02 mode.
The modified criterion will contain the computation of the minimum of (\beta_a - \beta_b) for different pairs of \beta.
Let me also add that this criterion of only considering the eigenvalues is not very efficient. The adiabaticity theorem is often extended in the photonics context to consider both the eigenvalue (propagation constant) and the eigenfunction (the modal profile) for a better adiabaticity criterion. You can find many papers in this regard. Some of my PhD work might also be useful.
Thank you
• asked a question related to Optics
Question
In my experiment I have a double cladding fiber spliced on to a reel of SMF-28. The double cladding fiber has total cladding diameter about 2 times more than that of the SMF-28. The source is a SLD, and there is a polarization scrambler after the source which feeds onto one end of the reel of SMF-28. The output power from the 1 km long reel is X mW. But when I splice a half meter length of the specialty fiber to the reel output and measure the power it is 0.9Y mW, where Y is the power output after the polarization scrambler (Y = 3.9X). I am not sure why the power reading suddenly increased.
Problem solved: the reel was getting pinched and deflected at the bare fiber adapter to the detector, causing a huge drop in power.
Vincent Lecoeuche Thanks for your thoughts as well.
• asked a question related to Optics
Question
My set up is as follows : Elliptically polarized light at input -> Faraday Rotator -> Linear Polarizer (LP) -> Photodiode
The LP is set such that the output power is minimum. I use a lock-in amplifier to measure the power change due to the Faraday effect. I have a more or less accurate measurement of the magnetic field and the length of the fiber. The experimental Faraday rotation (rotation angle theta = Verdet constant × magnetic field × fiber length) is larger than the theoretical prediction, so I was wondering if I am observing the effect of elliptical polarization at the input to the system.
Yes, you can say both polarizations get rotated. Taking each component separately, they both would get rotated by the same amount, and superposition applies, so together they do the sum of what each of the pieces would do.
However, if it helps, that is not the only way to think of it. We like to think in terms of linear polarization. We like to think of arbitrary polarization as a superposition of two orthogonal linear polarizations. It’s easy to make the diagrams. It also makes sense for linear polarizers and linear retarders. However, that is not the only choice. You can just as easily express an arbitrary polarization as the superposition of left and right circular polarizations. In the basis of right and left circular polarization a Faraday rotator is in fact a phase retarder. If the two components have equal amplitude the result is linear polarization. The relative phase determines the orientation of the linear polarization, so retarding the phase rotates the linear polarization. If the two components have different amplitude, you get an ellipse where the major and minor axes are the sum and difference of the amplitudes. Again, if you retard the phase, the whole ellipse just rotates.
As to why your experiment is producing answers that don’t quite seem right, I think this measurement has several things that can confuse the result. Although I don’t see why you wouldn’t put a polarizer on the entrance, I doubt the entering ellipticity is really the problem. That should just reduce your modulation amplitude making the signal a little weaker, but it shouldn’t impact the phase. A much more likely culprit is linear birefringence in the fiber. Fibers can have significant residual birefringence from the manufacturing, but also bending through the fiber acts as a retardation. For example, sequential coils of fiber called fiber paddles are sold as polarization manipulators.
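The circular-basis picture above is easy to check numerically. In this sketch (one common sign convention is assumed) the Faraday rotator, written as a rotation matrix in the linear basis, acts as a pure phase retarder on the circular basis vectors:

```python
import numpy as np

theta = 0.3  # example Faraday rotation angle [rad] (assumed)

# Faraday rotator in the linear (x, y) basis: a pure rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Right/left circular basis vectors (one common sign convention)
eR = np.array([1, -1j]) / np.sqrt(2)
eL = np.array([1,  1j]) / np.sqrt(2)

# In the circular basis the rotator is a pure phase retarder:
# R eR = e^{+i theta} eR  and  R eL = e^{-i theta} eL
print(np.allclose(R @ eR, np.exp(1j * theta) * eR))   # True
print(np.allclose(R @ eL, np.exp(-1j * theta) * eL))  # True
```

The two circular components pick up equal and opposite phases, which is exactly a rotation of the polarization ellipse.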
• asked a question related to Optics
Question
1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)
2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect?(Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)
Regarding the first problem, there are many examples. For instance, the paper by O. Penrose, "Bose-Einstein condensation in an exactly soluble system of interacting particles", researchportal.hw.ac.uk/en/publications/bose-einstein-condensation-in-an-exactly-soluble-system-of-intera
Cf. also the paper by E. Lieb and R. Seiringer, "Proof of Bose-Einstein Condensation for Dilute Trapped Gases".
Regarding the second problem, the boundary conditions break Lorentz invariance. That's why the question isn't well-posed, whether in the classical limit or when quantum effects must be taken into account. In a finite volume it requires care to define the propagation velocity properly, since the equilibrium field configurations describe standing waves.
• asked a question related to Optics
Question
Please see the attached file RPVM.pdf. Any comment will be welcome.
More on this subject at:
I think that an interesting point is that, using units with c = 1, the 4-velocity (dt,dx,0,0) is a 1-tensor that is the same for any offset of clocks of the inertial frame. Then we have that the 4-velocity (dt,dx,0,0) transforms the same for any synchronization, it satisfies the Einstein addition of velocities and consequently it also satisfies the principle of constancy of speed of light. On the other hand, as it behaves like a tensor under Lorentz transformations, the relativity principle holds for it and for all derived 1-tensors like velocity, acceleration and so on.
• asked a question related to Optics
Question
You can find the wording in the attached file PR1-v3.pdf. Any comment will be welcome.
More on this topic at:
• asked a question related to Optics
Question
According to ASHRAE there are values of tb and td for Atlantic; I want their values for other cities, or as a function of latitude and longitude.
Thanks
Calculation procedure for the Clear-Sky Beam and Diffuse Solar Irradiance as dependent on the time and the location can be found here (developed in Python):
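As a standalone sketch of those correlations: the ASHRAE (2009) clear-sky model computes beam and diffuse irradiance from the tabulated per-site optical depths tau_b and tau_d. The coefficients below are the handbook's published air-mass-exponent fits; the example tau values and solar altitude are illustrative, not for any particular station.

```python
import math

def clear_sky_irradiance(tau_b, tau_d, solar_altitude_deg, E0=1367.0):
    """ASHRAE (2009) clear-sky beam and diffuse irradiance [W/m^2].

    tau_b, tau_d: monthly beam/diffuse optical depths tabulated per site
    in the ASHRAE handbook; E0: extraterrestrial irradiance.
    """
    beta = solar_altitude_deg
    # Kasten-Young relative air mass (beta in degrees)
    m = 1.0 / (math.sin(math.radians(beta)) + 0.50572 * (6.07995 + beta) ** -1.6364)
    # Air-mass exponents (ASHRAE 2009 correlations)
    ab = 1.219 - 0.043 * tau_b - 0.151 * tau_d - 0.204 * tau_b * tau_d
    ad = 0.202 + 0.852 * tau_b - 0.007 * tau_d - 0.357 * tau_b * tau_d
    Eb = E0 * math.exp(-tau_b * m ** ab)   # beam normal irradiance
    Ed = E0 * math.exp(-tau_d * m ** ad)   # diffuse horizontal irradiance
    return Eb, Ed

# Illustrative values: tau_b = 0.35, tau_d = 2.5, sun 60 degrees high
Eb, Ed = clear_sky_irradiance(tau_b=0.35, tau_d=2.5, solar_altitude_deg=60.0)
```

With these inputs the model returns roughly 930 W/m² beam and 105 W/m² diffuse, which is in the expected range for a clear day.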
• asked a question related to Optics
Question
A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern that is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modeling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
Why can't the algorithm that combines the two intensities be adapted from MATLAB to Octave software to create an interference pattern?
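A computer-generated hologram of the kind described above takes only a few lines in either MATLAB or Octave; here is an equivalent sketch in Python/numpy (all parameters assumed) that interferes a paraxial point-source object wave with a tilted plane reference wave:

```python
import numpy as np

# Illustrative computer-generated hologram: object wave from a point
# source plus a tilted plane reference wave (all parameters assumed).
lam = 633e-9                      # wavelength [m]
k = 2 * np.pi / lam
N, pitch = 512, 2e-6              # hologram samples and pixel pitch [m]
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

z0 = 0.1                          # point-source distance [m]
obj = np.exp(1j * k * (X**2 + Y**2) / (2 * z0))     # paraxial spherical wave
ref = np.exp(1j * k * X * np.sin(np.radians(1.0)))  # reference tilted by 1 deg

hologram = np.abs(obj + ref) ** 2   # recorded interference pattern
```

The same lines run unchanged in Octave-style MATLAB with `meshgrid`, `exp`, and `abs`; there is nothing MATLAB-specific in the algorithm itself.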
• asked a question related to Optics
Question
If I were to make a half dome as an umbrella to protect a city from rain and sun how would I proceed. Are there special materials or do you have an idea on how to make this? What do you say about an energy shield ?
Dear all, I think the easiest way is an underground city. Shelters are built for similar protection purposes. My regards
• asked a question related to Optics
Question
So far, I have made the simulation for the thermal and deformation analysis. I know that the neff-T graph can be plotted, and the TEM modes as well.
Murat Baran Hi. When coupled with the optical and electrical modules, the thermal module generates thermal maps (see the attached figure) in COMSOL. The article (cited below) from which the attached figure has been collected deals with the 3D modeling of a thin-film CZTSSe solar cell in COMSOL and takes various heat sources into consideration including SRH nonradiative recombination, Joule heating and the conductive heat flux magnitude (thermalization). According to the article: The thermal analysis allows the optimization of device stability by determining which heating source is the cause of performance drop over time. Hope this helps. Best.
Reference:
Zandi, Soma & Saxena, Prateek & Razaghi, Mohammad & Gorji, Nima. (2020). Simulation of CZTSSe Thin-Film Solar Cells in COMSOL: Three-Dimensional Optical, Electrical, and Thermal Models. IEEE Journal of Photovoltaics. PP. 1-5. 10.1109/JPHOTOV.2020.2999881.
• asked a question related to Optics
Question
For an actual laser system, the ellipticity of a Gaussian beam behaves as in the attached picture (measured by me). Near the laser focus the ellipticity is high, falls drastically around the Rayleigh length, and then increases again. This is a "simple astigmatic" beam. Can anyone explain this variation?
P.S. In the article (DOI: 10.2351/1.5040633), the author also found similar variation. But did not explain the reason.
Well, first, we know that if you had a perfectly circular Gaussian intensity profile with a perfect plane wave phase front then you would find a perfectly circular Gaussian intensity profile everywhere along the beam. Your beam must differ slightly in some way from the perfect single mode description. This is completely normal. At the very least there must always be some aperture clipping (can’t have a Gaussian out to infinity!). Typically there is also some slight amount of a second mode. Neither the clipping nor the higher mode content need to be axially symmetric. However, a little asymmetry doesn’t necessarily cause the beam to measure as being too asymmetric in some locations.
Also it is not unusual to find something different in the midfield than in the near field and far field. For example an elliptical beam in the near field Fourier transforms to an elliptical beam in the far field with the axes swapped. However somewhere in between the beam is symmetric and the ellipticity approaches zero.
What is interesting about your measurement is that it does the opposite of that. It is symmetric in the near and far field and slightly elliptical in between. I think the same idea of the “in between” looking different applies, but the exact explanation is a little less obvious. However, I’m not terribly surprised. In aligning the laser the builder tweaks the alignment until they get the best result looking at the far field. Naturally any imperfections are hidden and show up in the midfield.
Finally I will add that the ellipticity measurement can lie to you. Often ellipticity is calculated for the whole beam or perhaps a 1/e^2 encircled energy contour. This emphasizes low tails. Small amounts of extra diffraction orders can radically affect the measurement. I bet if you measured the FWHM instead you would find less deviation.
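For intuition about ellipticity varying along z, here is one simple model of astigmatism (all numbers assumed): equal x and y waist sizes, but waist planes offset along z. In this particular model the beam is round midway between the waists and in the far field, and elliptical near each waist, illustrating how the "midfield" can look different from both ends; the thread's measurement shows a different ordering, which is why the explanation above invokes hidden imperfections rather than pure astigmatism.

```python
import numpy as np

# One simple model: equal waists w0 for x and y, but waist planes offset
# along z (all numbers assumed for illustration).
lam = 800e-9
w0 = 100e-6                  # waist radius [m]
zx, zy = 0.0, 0.05           # x and y waist locations 50 mm apart
zR = np.pi * w0**2 / lam     # Rayleigh range

z = np.linspace(-0.1, 0.15, 1001)
wx = w0 * np.sqrt(1 + ((z - zx) / zR) ** 2)
wy = w0 * np.sqrt(1 + ((z - zy) / zR) ** 2)
ellipticity = np.minimum(wx, wy) / np.maximum(wx, wy)
# ellipticity = 1 midway between the waists and in the far field,
# and dips near each individual waist
```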
• asked a question related to Optics
Question
What are the advantages of using SiO2 substrate over Si substrate for monolayer graphene in photonics?
Dear
Behnam Farid
In fact, I thought about the hot electron injection to the substrate.
As we're expecting localized or propagating plasmons in graphene, isn't it possible to have electron leakage from the graphene to highly doped substrates? Instead, SiO2 or other insulators offer electrostatic charge transfer, which may facilitate graphene-graphene hybridization in periodic structures like 1D graphene ribbons. Is this a valid claim?
• asked a question related to Optics
Question
I am interested in the technique of obtaining high-quality replicas from diffraction gratings, as well as holograms with surface relief. What materials are best used in this process? Also of interest is the method of processing the surface of the grating to reduce adhesion in the process of removing a replica from it.
Dear Anatoly Smolovich , in addition to the previous answers, I would like to add probably one of the most popular materials for replication: PDMS. There are several commercial preparations of this silicone, but Sylgard 184, by Dow, is probably the most commonly used. It normally requires a mild curing temperature (80ºC for two hours, or even room temperature if longer curing times are acceptable), and if temperature is an issue they also have some UV-curable PDMS. It has a key advantage over rigid polymers: being an elastomer, it facilitates the demolding process without harming either the original or the replica. I have also had good experiences with Microresist, as pointed out by Daniel Stolz ; in fact I usually make the reverse replica with PDMS and the direct replica of the original with Ormocomp by Microresist, with very good results. These resins are solvent-free, which is important both for avoiding damage to the original and for minimising shrinkage, so that the grating period is maintained.
Both PDMS and Ormocomp do not need application of pressure, unlike the PP replicas made by injection molding in the article provided by Przemyslaw Wachulak (of course you can apply pressure, but it is not necessary; just a glass slide or a cover slide will be enough).
About your later question on the treatment of the original grating's surface:
It will depend on the nature of the grating and its surface. If your grating is made of glass or metal (alone), most anti-adhesive treatments would work. If it is made of some polymer, you will need to know which polymer, so you can choose a release agent that does not damage the grating.
If your grating is made of any material and coated with a thin metallic coat, then you should check that the antiadhesion material and the replication resin (or the solvent) are not going to damage the thin metallic film by disturbing the adhesion between the substrate material and the metal coat.
Hope this helps.
• asked a question related to Optics
Question
Hello dear friends,
I am trying to add our own optical components in the sample compartment region of our Nicolet 380 FTIR. Also, our sample is small, so we need to shrink the size of the beam spot with an iris to avoid signals from the substrate.
Therefore, we want to use a mid IR sensor card to help me find where and how large the light beam is. However, the IR sensor card does not show any color change when I put it in the light path (of course, when the light source is on). The mid IR sensor card we use can detect light in the wavelength range of 5-20 um, the min detectable power density is 0.2 W/cm2.
Did I miss anything here? And do you have any suggestions how I shall detect the beam spot, its position and size?
Any suggestions will be highly appreciated! Thank you in advance!
Best regards,
Ziwei
In general those cards are designed to work with a continuous beam. Don't forget that the IR beam from an FTIR is modulated and in my experience those cards don't work in an interferometer.
The Al foil approach mentioned above will work well but if this is an ongoing problem why not use a beam condenser? Those optics are designed to condense the beam and put the sample exactly where you get the maximum energy. Both Pike Technologies (https://www.piketech.com/product/ms-beam-condensers/) and Specac (https://www.specac.com/en/products/ftir-acc/transmission/solid/microfocus-beam-condenser) provide them and the 380 is a very common instrument so that there should be no concerns about unusual spectrometer configurations.
• asked a question related to Optics
Question
If the cavity length is 50 mm, what value should the thermal focal length have so that it doesn't affect the stability of the laser cavity: f > 50 mm or f < 50 mm?
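A quick way to check is a round-trip ABCD calculation. This sketch assumes a flat-flat cavity with the thermal lens at its center (which may differ from your actual geometry); for L = 50 mm it shows the cavity stays stable down to roughly f = L/4 = 12.5 mm:

```python
import numpy as np

def round_trip_stable(L, f):
    """Flat-flat cavity of length L with a thin thermal lens f at its
    center: stable iff |(A + D)/2| <= 1 for the round-trip ABCD matrix."""
    P = np.array([[1.0, L / 2], [0.0, 1.0]])     # half-cavity free propagation
    F = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # thin lens of focal length f
    T = P @ F @ P                                # one mirror-to-mirror pass
    M = T @ T                                    # round trip (flat end mirrors)
    return bool(abs((M[0, 0] + M[1, 1]) / 2) <= 1.0)

L = 50e-3
print(round_trip_stable(L, 50e-3))  # True: f = 50 mm keeps the cavity stable
print(round_trip_stable(L, 5e-3))   # False: strong thermal lens, unstable
```

For this geometry the algebra reduces to the condition 0 <= L/f <= 4; other mirror curvatures or lens positions shift the boundary, so rerun with your own matrices.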
• asked a question related to Optics
Question
Some 2D galvos have both axes on the horizontal plane. It seems much easier to manufacture. However, some high-end galvos such as those from Cambridge have one of the axes tilted by a small angle. What is the benefit of that?
This is done only to reduce the size of the scanning head, so that the edge of the second mirror does not go beyond the plane bounding the first mirror. Then the f-theta lens can be positioned closer to the movable mirrors.
• asked a question related to Optics
Question
I have a long length (L) of coiled SMF-28 on a spool and I want to measure the "beat length (Lb)" of the entire spool using some simple means as described. (1) inject linearly polarized broadband light (for example, from a superluminescent source) (2) record the optical spectrum using an optical spectrum analyzer (OSA), after transmission through the fiber and another polarizer (3)That spectrum will exhibit oscillations with a period Δλ, from which the polarization beat length can be calculated using Lb = (Δλ/λ)*L.
My questions are (a) what is the typical resolution for the OSA used in such measurements (b) should I rotate the polarizer such that the power is maximized for the center wavelength of my source ? (c) if I am missing anything else that I should consider
Vincent Lecoeuche thank you for your answer. If I want to measure the beat length of a PM fiber, can I use a tunable laser to sweep from 1500 nm to 1600 nm, calculate the ripple spacing over the entire spectrum, and get the beat length? Also, would it be useful to launch linearly polarized light midway between the fast and slow axes of the PM fiber being tested?
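A worked example of step (3) above, with illustrative numbers (not measured values):

```python
# Worked example of Lb = (dlambda / lambda) * L from the measurement
# procedure above (numbers are assumed for illustration).
L = 1000.0          # fiber length on the spool [m]
lam = 1550e-9       # center wavelength [m]
dlam = 0.08e-9      # measured spectral oscillation period [m]

Lb = (dlam / lam) * L
print(Lb)  # ~0.052 m, i.e. a beat length of a few centimetres
```

Note that resolving a 0.08 nm ripple requires an OSA resolution bandwidth comfortably finer than that spacing, which bears on question (a).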
• asked a question related to Optics
Question
I am interested in defining the heterogeneity and similarities among metalenses and their advantages in the current and new applications, and identify some of their future improvements and characteristics.
Dear Ivan Moreno,
Here below is some info you're looking for:
The advantages of metalenses over diffractive lenses
• asked a question related to Optics
Question
I'm curious if anyone can share their measurement of the coupling loss as a function of the gap between two SMF FC/APC fibers at various wavelengths. If not, it would be great if you can refer me to a datasheet or a paper where this type of measurement was done.
Thanks!
you may have a look at equations 1a and 1b for a description of the gap and wavelength dependence of the coupling loss/transmission in a butt-joint SM-fiber connection:
But, sorry, no experimental data yet...
Best regards
G.M.t
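In the meantime, a rough Gaussian-mode estimate of the longitudinal gap loss can be sketched as below. This is not the Eq. 1a/1b of the reference above; it treats the fiber mode as a Gaussian beam diverging across the gap and neglects Fresnel reflections and etalon interference, so take it only as an order-of-magnitude guide.

```python
import numpy as np

# Gaussian-mode estimate of longitudinal gap loss between two identical
# single-mode fibers (approximation; ignores air-glass Fresnel reflections
# and interference in the gap). Parameter values are assumed.
lam = 1550e-9
w0 = 5.2e-6          # mode-field radius, ~MFD/2 for SMF-28 at 1550 nm
n = 1.0              # refractive index of the gap medium (air)

gap = np.linspace(0, 100e-6, 101)            # gap width [m]
zR = np.pi * w0**2 * n / lam                  # Rayleigh range of the fiber mode
eta = 1.0 / (1.0 + (gap / (2 * zR)) ** 2)     # coupling efficiency
loss_dB = -10 * np.log10(eta)                 # ~2.6 dB at a 100 um gap
```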
• asked a question related to Optics
Question
Hi,
My input Jones vector is E1 = [ 0 ; 1] , the Jones Matrix is M = [ a + ib , 0 ; 0 , c+id] , the output is E2 = M*E1 = [ 0 ; x + iy]. Now I want to know the phase shift between vertical and horizontal polarization of the light wave.
E1 is 2x1, M is 2x2 and E2 is 2x1.
Is E2 elliptically polarized? But then it does not have any X component; I am confused.
I think E2 is not elliptically polarized; it is linearly polarized along Y, since it has no X component at all. So I think it's a trick exam question, with the simple answer that there is no phase difference to speak of: the horizontal component is zero, so a relative phase between the components is undefined.
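A quick numerical check, with arbitrary assumed values for a, b, c, d:

```python
import numpy as np

# Numerical check with arbitrary assumed values a, b, c, d
a, b, c, d = 0.3, 0.4, 0.5, 0.6
E1 = np.array([0, 1])
M = np.array([[a + 1j * b, 0],
              [0, c + 1j * d]])
E2 = M @ E1

# E2[0] is exactly zero: no horizontal component at all.
# With Ex = 0, E2 is linearly polarized along y, and a relative phase
# between x and y is undefined because there is no x field to compare.
```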
• asked a question related to Optics
Question
I've come across a formula n= 1/Ts + Sqrt(1/Ts-1) where n is the refractive index and Ts is the transmittance. Is this formula valid? How is this formula arrived at?
You can use the well-known method called "the envelope method". Please refer to the following reference: J. C. Manifacier, J. Gasiot and J. P. Fillard (1976), Journal of Physics E, Vol. 9, pp. 1002-1004.
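For what it's worth, the commonly quoted substrate relation has 1/Ts² under the square root (the form in the question, with 1/Ts, may be a transcription slip). It follows from Ts = 2n/(n² + 1), the incoherent two-surface transmittance of a thick, non-absorbing slab. A minimal round-trip check:

```python
import math

def n_from_substrate_transmittance(Ts):
    """Refractive index of a thick transparent substrate from its
    transmittance Ts = 2n / (n^2 + 1) (two incoherent surface reflections,
    no absorption): n = 1/Ts + sqrt(1/Ts^2 - 1)."""
    return 1.0 / Ts + math.sqrt(1.0 / Ts**2 - 1.0)

# Round-trip check for glass: n = 1.5 gives Ts = 2*1.5 / (1.5**2 + 1)
n = 1.5
Ts = 2 * n / (n**2 + 1)
print(n_from_substrate_transmittance(Ts))  # recovers 1.5
```

For films on a substrate, rather than a bare substrate, the envelope method of Manifacier et al. cited above is the appropriate generalization.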
• asked a question related to Optics
Question
I'm working on silicon-graphene hybrid plasmonic waveguides at 1.55 um. For bilayer graphene my effective mode indices are near those of the source that I'm using, but for trilayer they are not acceptable. For modeling the graphene I use the relative permittivity or refractive index at different applied potentials (eV).
I attached my graphene relative permittivity and refractive index calculation code and one of my COMSOL files, related to fig. 5.19b in the source.
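For reference, here is a minimal sketch of the intraband (Drude-like) part of graphene's Kubo conductivity and the equivalent film permittivity, using an e^{-iωt} convention. The chemical potential, scattering time, and effective thickness are assumed for illustration, and interband terms are omitted, so this is not a complete model for comparing against the book's figure:

```python
import numpy as np

# Intraband (Drude-like) part of graphene's sheet conductivity from the
# Kubo formula; interband terms omitted. Parameter values are assumed.
e = 1.602176634e-19      # C
hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
eps0 = 8.8541878128e-12  # F/m

T = 300.0                # temperature [K]
mu_c = 0.5 * e           # chemical potential, 0.5 eV in joules (assumed)
tau = 1e-13              # scattering time [s] (assumed)
t_g = 0.34e-9            # effective graphene thickness [m] (assumed)

lam = 1.55e-6
omega = 2 * np.pi * 2.99792458e8 / lam

sigma_intra = (2j * e**2 * kB * T / (np.pi * hbar**2 * (omega + 1j / tau))
               * np.log(2 * np.cosh(mu_c / (2 * kB * T))))
# Equivalent volumetric permittivity for a film of thickness t_g
eps_graphene = 1 + 1j * sigma_intra / (eps0 * omega * t_g)
# Re(eps) < 0 at this doping and wavelength, i.e. plasmonic behaviour
```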
• asked a question related to Optics
Question
You can use ns3; have a look at the documentation.
• asked a question related to Optics
Question
I'm planning to modify a finite tube length compound microscope to allow the use of "aperture reduction phase contrast" and "aperture reduction darkfield" according to the following sources:
Piper, J. (2009) Abgeblendeter Phasenkontrast — Eine attraktive optische Variante zur Verbesserung von Phasenkontrastbeobachtungen. Mikrokosmos 98: 249-254. https://www.zobodat.at/pdf/Mikrokosmos_98_4_0001.pdf (in German).
The vague instructions state:
"In condenser aperture reduction phase contrast, the optical design of the condenser is modified so that the condenser aperture diaphragm is no longer projected into the objective's back focal plane, but into a separate plane situated circa 0.5 – 2 cm below (plane A´ in fig. 5), i.e. into an intermediate position between the specimen plane (B) and the objective's back focal plane (A). The field diaphragm is no longer projected into the specimen plane (B), but shifted down into a separate plane (B´), so that it will no longer act as a field diaphragm.
As a result of these modifications in the optical design, the illuminating light passing through the condenser annulus is no longer stopped when the condenser aperture diaphragm is closed. In this way, the condenser iris diaphragm can work in a similar manner to bright-field illumination, and the visible depth of field can be significantly enhanced by closing the condenser diaphragm. Moreover, the contrast of phase images can now be regulated by the user. The lower the condenser aperture, the higher the resulting contrast will be. Importantly, halo artifacts can be reduced in most cases when the aperture diaphragm is partially closed, and potential indistinctness caused by spherical or chromatic aberration can be mitigated."
The author combined finite 160 mm tube length objectives, a phase contrast condenser designed for finite microscopes, and an infinity-corrected microscope to get the desired results.
However, how would one accomplish this in the simplest way possible?
Golshan Coleiny thank you for your reply. I assume you mean, for example, modification of the illuminator field lens to displace the conjugate aperture plane of the field diaphragm? Kind regards, Quincy
• asked a question related to Optics
Question
I wanted to calculate the magnification of a system where I am using a webcam lens and was wondering if this could be done by applying the simple lens equation? If yes, then what would I consider my "v" to be in this case since I'm dealing with a lens assembly (unknown # of lenses and unknown separation between them)? I just know the EFL in this case.
If you know the principal planes of your lens you can use the simple lens equation if you set the object distance as the distance between the object and the first principal plane and the image distance as the distance between the second principal plane and the image while using the EFL as the focal length.
Otherwise, the ray tracing can be quite complicated - even the procedure described by Piergiacomo will not be completely correct if at least one of the individual lenses in a lens system are "thick".
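As a minimal worked example of the principal-plane approach (the EFL and object distance are assumed, illustrative values):

```python
# Thin-lens relation applied between the principal planes, as described
# above. Distances are measured from the front (object side) and rear
# (image side) principal planes; values are illustrative.
f = 25e-3    # effective focal length (EFL) of the webcam lens [m] (assumed)
so = 0.5     # object distance from the FRONT principal plane [m]

si = 1.0 / (1.0 / f - 1.0 / so)   # image distance from the REAR principal plane
m = -si / so                      # transverse magnification
# si ~ 26.3 mm, m ~ -0.053 (inverted, minified image)
```

The catch, as noted above, is locating the principal planes of an unknown lens assembly; without them, `so` and `si` measured from the lens housing will give a slightly wrong magnification.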
• asked a question related to Optics
Question
It is known that for particles having size less than one-tenth the wavelength of light, Rayleigh scattering occurs and the scattering is isotropic. But for particles whose sizes are comparable to the wavelength of light, the scattering is directed more towards the forward direction and hence is anisotropic.
Dear Somnath Sengupta , thanks for such an interesting question. We often get the mental image of isotropic Rayleigh scattering and anisotropic Mie scattering in the forward direction, but we hardly stop to think about this "odd" behaviour, which could even seem counterintuitive because a larger particle "should" scatter more in the backward direction, but actually it doesn't:
First we can consider two scatterers, one tiny (Rayleigh) and another larger (Mie). Now we are going to focus on two dipoles on each particle (1 and 2). In the tiny particle the two dipoles are necessarily close to each other, while in the larger one they could be fairly separated.
Now we send a coherent light beam that hits both particles (and their two dipoles) and check what happens with the scattered light from each particle:
Tiny (Rayleigh) = dipoles 1 and 2 very close (r << wavelength)
Because 1 and 2 are very close, the wave hits them at almost the same moment, say at the crest of the wave. This interaction produces a new wave forward and backward from each dipole. The forward and backward waves are in phase with the main wave, and because the two dipoles are so close, their respective waves are practically in phase too, both forward and backward, so the scattering is isotropic.
Big (Mie) = dipoles 1 and 2 are separated (r ≥ wavelength)
Because 1 and 2 are separated, the main wave hits them at different times, say 1 at the crest and 2 in the valley of the wave. In the forward direction both scattered waves are in phase with the main wave and therefore reinforce each other by constructive interference. However, the backward waves are out of phase and therefore interfere destructively, reducing the intensity and explaining the anisotropic nature of Mie scattering.
The larger the particle, the further apart the dipoles can be, and the greater the anisotropy.
Note: This explanation involved just two dipoles, for simplicity, but a real particle could have lots of them.
Hope this helps. Good luck with your research and thanks for making questions that make us think.
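The two-dipole argument above can be turned into a tiny scalar model. With the dipoles separated by d along the incident direction, the excitation delay plus the scattering path difference gives a relative phase kd(1 − cos θ), so the pattern is flat for d << λ and forward-biased for larger d (parameters below are illustrative):

```python
import numpy as np

def two_dipole_pattern(d_over_lambda, theta):
    """Normalized far-field intensity of two dipoles separated by d along
    the incident direction, driven by the same plane wave (scalar model)."""
    kd = 2 * np.pi * d_over_lambda
    # excitation delay + scattering path difference -> kd * (1 - cos(theta))
    phase = kd * (1 - np.cos(theta))
    return np.abs(1 + np.exp(1j * phase)) ** 2 / 4  # 1 = fully constructive

theta = np.linspace(0, np.pi, 181)      # 0 = forward, pi = backward
rayleigh_like = two_dipole_pattern(0.01, theta)  # d << lambda: ~isotropic
mie_like = two_dipole_pattern(0.25, theta)       # d = lambda/4: forward only
```

In the second case the backward waves are exactly out of phase (phase = π at θ = π), reproducing the destructive backward interference described above.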
• asked a question related to Optics
Question
I have tried Keller's, Tucker's, and Barker's etchants but they aren't working. I am interested in the optical micrography, but I'm not getting anything. :(
Hello Dr. Parth, this method worked well for me: pre-etching with H3PO4 for 4 min, followed by a coloration step with Weck's reagent for a few seconds, up to 15 s. You can find more details in this paper: https://doi.org/10.1017/S1431927618012400
• asked a question related to Optics
Question
In any OSL phosphor we require optical energy greater than the thermal trap depth of that trap for optical stimulation. For example, in the case of Al2O3:C we require a 2.6 eV photon to detrap the electron from a trap having a 1.12 eV thermal trap depth. How are they related to each other?
For a given trap, E(optical) is always > E(thermal), because of the Franck-Condon principle. As a result, transitions on a configurational coordinate diagram always take place vertically, meaning that the transition is much faster than the lattice relaxation time. Once ionized optically the defect’s lattice configuration relaxes to a new configurational coordinate via the emission of phonons. Thermal excitation, however, includes the phonon emission and lattice reconfiguration takes place simultaneously. Thus E(optical) = E(thermal) + E(phonons), with the latter term given by the Huang-Rhys factor.
If experimentally measured energies ( for example E(optical) using OSL, E(thermal) using TL) are either unphysically different or approximately the same, I would question whether the two methods are actually probing the same defect, and/or whether or not the E(optical) and E(thermal) values are correctly obtained from the data, before launching into detailed possible explanations.
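Applying E(optical) = E(thermal) + E(phonons) to the Al2O3:C numbers quoted in the question:

```python
# Worked numbers from the Al2O3:C example above:
E_optical = 2.6    # eV, photon energy needed for optical detrapping
E_thermal = 1.12   # eV, thermal trap depth
E_phonons = E_optical - E_thermal
print(E_phonons)   # ~1.48 eV taken up by lattice relaxation (Franck-Condon)
```

Whether 1.48 eV is a plausible lattice-relaxation energy for this defect (via its Huang-Rhys factor) is exactly the consistency check suggested in the answer above.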
• asked a question related to Optics
Question
Today, sensors are usually interpreted as devices which convert different sorts of quantities (e.g. pressure, light intensity, temperature, acceleration, humidity, etc.), into an electrical quantity (e.g. current, voltage, charge, resistance, capacitance, etc.), which make them useful to detect the states or changes of events of the real world in order to convey the information to the relevant electronic circuits (which perform the signal processing and computation tasks required for control, decision taking, data storage, etc.).
If we think in a simple way, we can assume that actuators work the opposite direction to avail an "action" interface between the signal processing circuits and the real world.
If the signal processing and computation becomes based on "light" signals instead of electrical signals, we may need to replace today's sensors and actuators with some others (and probably the sensor and actuator definitions will also be modified).
• Let's assume a case that we need to convert pressure to light: One can prefer the simplest (hybrid) approach, which is to use a pressure sensor and then an electrical-to-optical transducer (e.g. an LED) for obtaining the required new type of sensor. However, instead of this indirect conversion, if a more efficient or faster direct pressure-to-light converter (a new type of pressure sensor) is available, it might be more favorable. In the near future, we may need to use such direct transducer devices for low-noise and/or high-speed realizations.
(The example may not be a proper one but I just needed to provide a scenario. If you can provide better examples, you are welcome)
Most probably there are research studies ongoing in these fields, but I am not familiar with them. I would like to know about your thoughts and/or your information about this issue.
After seeing your and other respectable researchers' answers, I am glad I asked this question.
I am really delighted to hear from you the history of an ever-lasting discussion about sensor and actuator definitions. I have always found it annoying that the sensor definition has usually been preferred as a "too specific" definition to serve only for an interface of an electrical/electronic system and an "other" system/medium with different form of signal(s).
Besides that discussion, I can start another one:
There are many commercial integrated devices which are called "sensors", although in fact they are not basic sensors but more complicated small systems which may also include electronic amplifier(s), filter(s), analog-to-digital converter(s), indicators, etc. For sure, these are very convenient devices for electronic design, but I think it is not correct to call them "sensors". Such a device employs a basic sensor, but it also provides other supporting electronic stages to aid the electronic designer. I don't know if there is a specific name for such devices.
Thank you again for your additional explanations.
Best regards...
• asked a question related to Optics
Question
Does anybody know what is the maximum power of laser sources (QCL, VECSEL, and so on) in THz regime?
I'm trying to realize a nonlinear effect in the THz regime using a THz source without using DFG or SFG. I need 200 mW or more for my device. Is it doable? Is there any source that can generate that power?
Dear
Farooq Abdulghafoor Khaleel
Many thanks for your response and valuable information.
TOPTICA Photonics has unveiled some commercial THz sources with 0.1-6 THz spectrum in mW range.
• asked a question related to Optics
Question
Dear all,
I recently met a problem when using RCWA codes.
For the same structure, the FDTD solution took less time.
I need to set a large number of orders for the RCWA calculation to reach a similar result.
So my question is: how can I judge the accuracy of the simulation when I use RCWA codes, and how do I judge how many orders I need?
Thanks
Sai Chen I am writing the code in MATLAB, based on Rumpf's lectures. As a first step, I just consider normal incidence. But the bad news is that when I calculate the inverse matrix, no matter whether I use inv, ^(-1), or pinv, it does not give the right results. And when the number of harmonics becomes large (such as 40), it gives a lot of warnings; it seems to be a problem with calculating the inverse matrix. Thank you, do you know why? The code I wrote is very short, less than 200 lines.
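Independent of the matrix-inversion issue, the usual practical answer to the convergence question is to sweep the number of retained orders and stop when the quantity of interest stops changing. A sketch of that check (with a mock stand-in for the RCWA run, since the real solver isn't shown here):

```python
import numpy as np

def converged(solver, orders, tol=1e-3):
    """Sweep the number of retained harmonics and return the first order
    count where the result changes by less than `tol` relative to the
    previous step. `solver(n)` stands in for one RCWA run with n orders."""
    prev = None
    for n in orders:
        val = solver(n)
        if prev is not None and abs(val - prev) <= tol * abs(prev):
            return n, val
        prev = val
    return None, prev   # never converged within the sweep

# Mock "solver" whose result approaches 0.42 as orders grow, purely to
# demonstrate the criterion (a real run would call the RCWA code instead).
mock = lambda n: 0.42 * (1 - np.exp(-n / 10))
n_ok, val = converged(mock, range(5, 101, 5))
```

For real gratings, run this on a physically meaningful scalar (e.g. a diffraction efficiency), and remember that metallic or high-contrast TM problems converge much more slowly than the smooth mock above.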
• asked a question related to Optics
Question
Hello all,
I have some idea of how to measure the external quantum efficiency for my perovskite LEDs, but I want to calibrate that setup, for which I want to measure the external quantum efficiency of a normal 5 mm LED. How should I go forward with it? All suggestions/help would be appreciated. Thank you
Jitesh Pandya
Adding to the colleagues above, you can use a standard solar cell in short-circuit mode of operation, where the output photon flux of the diode is received by the solar cell, provided that the area of the solar cell is large enough to capture the whole flux of the LED.
If the spectral response S(lambda) of the solar cell, in mA per unit of photonic power, is measured at the wavelength of the diode, then one can obtain the photonic power emitted by the diode as
Pphotonic = I / S, where I is the measured current and S is the sensitivity at the intended wavelength.
If elaborated, this method can work well in spite of its simplicity.
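The P = I/S step can be turned into a small numerical sketch. The function name and all numerical values below are illustrative assumptions, not measured data:

```python
# External quantum efficiency of an LED from a calibrated solar-cell
# measurement: P = I_solar / S, photon rate = P / (h*c/lambda),
# electron rate = I_LED / q, EQE = photon rate / electron rate.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
q = 1.602e-19    # elementary charge, C

def led_eqe(i_solar_mA, sensitivity_mA_per_W, i_led_mA, wavelength_m):
    p_opt = i_solar_mA / sensitivity_mA_per_W       # optical power in W
    photon_rate = p_opt / (h * c / wavelength_m)    # emitted photons per second
    electron_rate = (i_led_mA * 1e-3) / q           # injected electrons per second
    return photon_rate / electron_rate

# Illustrative numbers: 630 nm red LED driven at 20 mA; the solar cell reads
# 2 mA with an assumed sensitivity of 400 mA/W at 630 nm.
eqe = led_eqe(2.0, 400.0, 20.0, 630e-9)             # ~0.127, i.e. ~13% EQE
```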
Best wishes
• asked a question related to Optics
Question
Hello everyone,
I'm trying to implement a material with non-diagonal conductivity in my FDTD code; I'm using Dr. Elsherbeni's code as a base. Although I managed to implement diagonal anisotropy, my code is unstable for non-diagonal tensors. Through research, I've found out that my updating equations are not correct. Since the off-diagonal terms require interpolating fields at positions where they are not natively defined on the grid, it seems the updating equations have to be organized differently than in the isotropic case.
I attach the equations in a PDF below. the first equation on every page represents the equations in half-steps and the second one represents the updating equations implemented in the code.
Any help or hint would be appreciated.
I also have to point out that the source for the equations is the paper in the link below:
You are most welcome, dear Amin Pishevar.
Best Regards.
• asked a question related to Optics
Question
I am looking for an overview of how FEM simulations are used in optics, especially when they were first used and for what kinds of systems.
• asked a question related to Optics
Question
The interference pattern is probably in the form of stripes/straight lines due to the tilt of the two interfering wavefronts. By adjusting the second mirror, the tilt can be reduced to a single fringe. Why do we get stripes and not circular fringes?
In my opinion, your mirrors are still tilted with respect to each other. Circular fringes appear when the two mirrors are parallel.
• asked a question related to Optics
Question
I am looking for possible methods to temporally overlap a nanosecond pulsed laser (280 Hz - ~6 ns - 532 nm - beam diameter ~4 mm) with a picosecond pulsed laser (78 MHz - ~10 ps - 565 nm - beam diameter ~4 mm) using delay-line mirrors.
At the moment I am using a fast PD with 1 ns rise time (https://www.thorlabs.com/thorproduct.cfm?partnumber=DET210/M) and a 10 GS/s oscilloscope. However, I can only see the attached signals from the ns and ps sources when they run separately, and since the amplitude of the detected signal from the ps laser is much lower (~20 mV), it is hard to align the other one with it. One option that comes to mind is to lower the intensity of the ns laser with density filters, but is there any alternative to this?
Let me know if more information is required.
Do you want to synchronize the pulsing of the two lasers? I would assume that the picosecond laser is self-oscillating, but the nanosecond one can be externally triggered. In that case, if you need to synchronize them, you need to extract the 78 MHz and turn it into a digital clock, which can be digitally divided by 278571 using some CPLD/FPGA board to make 280 Hz or so to clock the pulsing of the nanosecond laser.
To extract the 78 MHz you can use a photodetector --> RF amplifier (e.g. Mini-Circuits ZFL-1000LN+) followed by a 78 MHz centered RF bandpass filter, and then a high-speed comparator board to obtain a digital clock.
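As a quick sanity check on the division ratio mentioned above, here is a short Python sketch (the numbers are from this thread; the variable names are my own):

```python
# Deriving a ~280 Hz trigger for the nanosecond laser from the 78 MHz
# picosecond pulse train by integer division in a CPLD/FPGA counter.
f_rep = 78e6       # picosecond laser repetition rate, Hz
f_target = 280.0   # desired nanosecond laser trigger rate, Hz

divider = round(f_rep / f_target)   # nearest integer counter modulus
f_actual = f_rep / divider          # trigger rate actually obtained
offset_hz = f_actual - f_target     # residual offset from the nominal 280 Hz
```

The integer ratio is 278571, and the resulting trigger rate differs from the nominal 280 Hz by well under 1 mHz, which is why the answer says "280 Hz or so".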
• asked a question related to Optics
Question
Hello;
It is well known that when light reaches an optical element, part of it is lost through absorption, scattering, and back reflection. In the case of mirrors, this value is well characterized, and a realistic estimate would be around 4-5% (or less, depending on the material). However, I cannot find similar information on commercial or scientific sites for beam splitters. For example, on the site of a well-known optical products company, if we examine the raw data, the percentages of reflected and transmitted light add up to more than 100% at some points on the curve! Without a doubt this has to do with the measurement methodology.
In scientific articles, some estimate this absorption to be around 2%, assuming a block or sheet of a certain material (ignoring ghost images). However, this does not make sense, since it would then be more interesting to use a dichroic beam splitter than a mirror in certain circumstances.
Of course, everything will depend on the thickness, the material used, and the AR treatment. However, I cannot find a single example, and I am not able to establish even the order of magnitude. Does anyone know of a reference with a realistic estimate of the useful light that is lost when using a beam splitter of whatever characteristics?
Thanks!
I think your premise is flawed. There isn't going to be "an answer", because tailoring this parameter and trading it against other properties you might want is the crux of coating design, and the answer might be anything over a wide range depending on what was designed under what set of constraints. For example, your figure of 4% for a mirror is at best a rule of thumb and most often completely wrong. Over a fair range of wavelengths, bare (uncoated) aluminum happens to be around 4% absorptive. However, silver is only 2% absorptive in that range. Bare gold may be terribly absorptive at shorter wavelengths; at longer wavelengths it doesn't reach the 98% of silver, but over much of the IR aluminum becomes quite absorptive and gold is the best. More importantly, mirrors are rarely uncoated, and a dielectric coating can raise the reflectivity of metallic mirrors above 99%. See for example Edmund Optics' "ultrafast" enhanced aluminum coating.
And that is just metallic coatings. Metal is useful over a wide wavelength range when you don't know what a mirror is going to be used for. However, if you know the wavelength (and what acceptance angle you need, and other constraints), you can use a pure dielectric stack. Dielectric mirrors can be made very close to 100% reflective. What's more, very little light is absorbed, so what little doesn't reflect transmits.
That brings us to beam splitters. It is not at all difficult to make a dielectric coating in which essentially no light is absorbed: it is all either transmitted or reflected. Adding the reflected to the transmitted fraction should yield just about 100% every time. Where you found places where they appeared to add up to more than 100%, that is just experimental or round-off error, but they probably do add up to almost 100%.
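To make the energy-balance point concrete, here is a minimal sketch; the R/T numbers are illustrative, not taken from any datasheet:

```python
def unaccounted_fraction(R, T):
    """Fraction of incident light neither reflected nor transmitted.
    For a good dielectric splitter coating this should be close to zero;
    a slightly negative value just reflects measurement/round-off error,
    not gain."""
    return 1.0 - R - T

# Illustrative readings for a nominal 50:50 plate splitter:
loss = unaccounted_fraction(0.51, 0.48)   # ~1% unaccounted for
```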
• asked a question related to Optics
Question
“The interaction of a field with a thin scattering layer corresponds to multiplication with a diagonal matrix“
Original from: Wetzstein, Gordon, et al. "Inference in artificial intelligence with deep optics and photonics." Nature 588.7836 (2020): 39-47.
Xiaohui Zhu , here is a short answer. For details, I suggest to read the book: "Introduction to Fourier Optics" by Goodman, J. W. (Roberts and Co, 2005), chapter 5 and Appendix B.
A scattering layer is typically composed of an optically dense material, with a refractive index significantly different than the one of air, and then the propagation velocity of an optical disturbance is less than the velocity in air.
Since the layer is thin, the sole effect of it is to shift the phase of waves when they are passing through it. Such a phase shift, as compared to air, results in a phase delay
\Delta \phi = k(n-1)d
where k = 2*pi/lambda is the free-space wavenumber, n is the index of refraction of the layer's material, and d its thickness.
Transformations involving phase shifts are associated with the diagonal elements of the transformation matrix.
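The statement can be illustrated numerically: a thin phase screen multiplies the sampled field elementwise, which is exactly multiplication by a diagonal matrix. A minimal NumPy sketch with illustrative values (the wavelength, index, and random thickness profile are assumptions for the example):

```python
import numpy as np

wavelength = 633e-9                 # illustrative wavelength, m
k = 2 * np.pi / wavelength          # free-space wavenumber
n_layer = 1.5                       # refractive index of the layer
rng = np.random.default_rng(0)
d = rng.uniform(0, wavelength, size=128)   # random thickness profile, m

# Diagonal entries of the transmission matrix: a pure phase factor
# exp(i*k*(n-1)*d) at each sample point.
t = np.exp(1j * k * (n_layer - 1.0) * d)

field_in = np.ones(128, dtype=complex)     # plane wave sampled on the grid
field_out = t * field_in                   # same result as np.diag(t) @ field_in

# A thin layer only shifts phase, so the intensity is unchanged.
assert np.allclose(np.abs(field_out), np.abs(field_in))
```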
• asked a question related to Optics
Question
I want to stabilize the carrier-envelope offset of a SESAM mode-locked, linear-cavity laser with a repetition rate of 31.6 MHz by the method of f-2f interferometry.
Can anyone suggest:
1- what are the typical feedback electronics one needs for laser stabilisation.
Specifically, PID controller specs.
2- Voltage to current converter (to pump diode current)
3- Since the frequency spacing is low (31.6 MHz), would dichroic mirrors work efficiently to filter the f and 2f components of the supercontinuum, or is there a better option?
Traditional feedback stabilization is difficult for most SESAM mode-locked lasers. One major problem is the long upper-state lifetime of these gain materials, which is typically in the millisecond range. The second is the rather low output coupling compared to fiber lasers. Both effects limit the speed of the servo loop to sub-kHz bandwidth. Another recurring problem is the S/N of the beat note, which often barely reaches the necessary 30 dB in 100 kHz RBW.
Let me therefore suggest the use of the feed-forward method: https://www.osapublishing.org/ol/fulltext.cfm?uri=ol-44-22-5610&id=423127
Additionally, this method does not require any fancy feedback electronics, and you don't have to act back on the pump current. Details are in the two papers. In particular, the OE describes some of the best performance ever observed for an oscillator.
Concerning the use of a dichroic mirror: yes, this will always work independent of the frequency spacing, as the f and 2f components are an optical octave apart.
• asked a question related to Optics
Question