Questions related to Optics

Diffraction is typically discussed in terms of a small aperture or obstacle. Here I would like to share a video I took a few days ago which shows that diffraction can similarly be produced by macroscopic objects:

I hope you can explain this phenomenon with wave-particle duality or quantum mechanics. I can, however, interpret it simply with my own idea of inhomogeneously refracted space at:

Hi All,

I am trying to generate the 3D corneal surface from Zernike polynomials. I am using the following steps; can anyone please let me know whether they are accurate?

Step 1: Converted the cartesian data (x, y, z) to polar data (rho, theta, z)

Step 2: Normalised the rho values so that they are less than one

Step 3: Based on the order, calculated the Zernike polynomials (Zpoly) (for example, if the order is 6, the number of polynomials is 28)

Step 4: Zfit = C1 * Z1 + C2 * Z2 + C3 * Z3 + ......... + C28 * Z28

Step 5: Using regression analysis, calculated the coefficient (C) values

Step 6: Calculated the error between the predicted value (Zfit) and the actual elevation value (Z)

Step 7: Finally, converted the polar data (rho, theta, Zfit) to Cartesian coordinates to get the approximated corneal surface
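If it helps, the steps above can be sketched in Python/NumPy. This is only a sketch: the `zernike()` helper, the single-index ordering of the basis, and the normalisation of rho by its maximum are my own choices, and real corneal data may call for a fixed pupil radius instead.

```python
import numpy as np
from math import factorial

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m on the unit disk (unnormalized)."""
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        R += ((-1)**k * factorial(n - k)
              / (factorial(k)
                 * factorial((n + abs(m)) // 2 - k)
                 * factorial((n - abs(m)) // 2 - k))) * rho**(n - 2 * k)
    return R * (np.cos(m * theta) if m >= 0 else np.sin(-m * theta))

def fit_surface(x, y, z, order=6):
    # Step 1: Cartesian -> polar
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    # Step 2: normalise rho to [0, 1]
    rho = rho / rho.max()
    # Step 3: build the design matrix; order 6 -> 28 polynomials
    cols = [zernike(n, m, rho, theta)
            for n in range(order + 1) for m in range(-n, n + 1, 2)]
    Z = np.column_stack(cols)
    # Steps 4-5: least-squares regression for the coefficients C
    C, *_ = np.linalg.lstsq(Z, z, rcond=None)
    zfit = Z @ C
    # Step 6: RMS error between the fitted and actual elevation
    rms = np.sqrt(np.mean((zfit - z)**2))
    return C, zfit, rms
```

For order 6 the double loop produces exactly 28 basis columns, matching Step 3, and the least-squares solve combines Steps 4 and 5 in one call.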

Thanks & Regards,

Nithin

Hi there,

I hope you are doing well.

In the lab we have several BBO crystals; however, in the past they were not marked, so we don't know which crystal is which. I would appreciate it if somebody has an idea of how to measure the thickness of the BBO crystals.

The second question is: are the BBO crystals sandwiched between two pieces of glass or not? If so, does the measurement become more complicated?

Best regards,

Aydin

How else can we explain :

Imprimis : That a light ray has different lengths for different observers. (cf. B.)

ii. That the length of a light ray is indeterminate? - both gigantic, and nothing, within the Einstein train-embankment carriage : (cf. B.)

iii. That a light ray can be both bent and straight. Bent for one observer, and straight for another : (cf. C.)

iv. That a light ray "bends" mid-flight in an effort to be consistent with an Absolute event which lies in the future : (cf. C.)

v. That these extraordinary things -- this extraordinary behaviour, (including the "constancy of speed") are so that the reality is consistent among the observers -- in the future. (cf. D, B, C)

vi. That light may proceed at different rates to the same place--- wholly on account of the reality at that place having to be consistent among the observers : (cf. D, A)

---------------------------------------------------------

B. --

C.--

D.--

Hello everyone,

I have made several optical phantoms with different weight ratios of ink in PDMS, from 0 wt% to 5 wt%. I have measured the transmission (%) and reflection (%) of each sample.

From there I calculated the absorbance with the Beer-Lambert law, A = log(I0/I), with I0 being the transmission of the 0 wt% sample and I the transmission of the sample of interest.

I can therefore get the absorption coefficient of the phantoms with the formula: ua = A/thickness.

Therefore I have a linear relationship between the weight percentage of the phantoms and their absorption coefficient.
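For a homogeneous, non-scattering phantom, the Beer-Lambert model itself answers the thickness question: the absorption coefficient is a material property that depends on the ink ratio but not on sample thickness, so the 2 mm value carries over to 2 cm (scattering by the ink would complicate this). A sketch of the whole calculation; every transmission value below is a made-up placeholder, not your data:

```python
import numpy as np

# Hypothetical measured transmissions (%) for 2 mm thick phantoms;
# these numbers are placeholders, not real measurements.
wt_percent = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
T = np.array([90.0, 70.0, 54.0, 42.0, 33.0, 26.0])   # transmission (%)
thickness_cm = 0.2                                    # 2 mm sample

# A = log10(I0 / I), with I0 the 0 wt% reference
A = np.log10(T[0] / T)

# absorption coefficient mu_a = A / thickness (per cm)
mu_a = A / thickness_cm

# linear fit of mu_a vs wt% lets you predict mu_a for any ink ratio
slope, intercept = np.polyfit(wt_percent, mu_a, 1)

# Under Beer-Lambert, mu_a does not change with thickness, so the same
# fitted mu_a applies to the 2 cm phantom. Predicted transmission of a
# 2 cm phantom at, e.g., 2 wt% (relative to the 0 wt% reference):
mu_pred = slope * 2.0 + intercept
T_2cm = T[0] * 10**(-mu_pred * 2.0)   # in %
```

If the thick phantom's transmission is unmeasurably low (as here), measuring a stack of thin samples of the same ink ratio is one way to cross-check that mu_a really is thickness-independent in your material.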

Now my issue is that I want to create a phantom of 2 cm thickness with a known ratio of ink to PDMS.

Should I assume the absorption coefficient will not change from the 2 mm sample to the 2 cm one?

Otherwise, how do I determine the absorption coefficient of my new phantom?

Of course, I cannot measure the transmission of this sample as it is too thick now.

Thank you for your help!

I have a laser with a spectral bandwidth of 19 nm that is linearly chirped. The Fourier transform limit is ~82 fs assuming a Gaussian profile. I have built a pulse compressor based on the available transmission grating (1000 lines/mm, see the attachment); however, I noticed that the minimum achievable dispersion of the compressor (a further decrease is limited by the distance between the grating and the horizontal prism) is greater than what is required for the optimal pulse compression supported by the optical bandwidth. Is there a way to decrease the dispersion further in this setup? Or are there other compressor configurations using a single transmission grating that might offer more flexible dispersion control?
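For reference, the double-pass group-delay dispersion of a parallel grating pair follows the Treacy formula, so you can check how much separation your target GDD actually requires before changing the layout. The sketch below assumes a 1030 nm center wavelength and Littrow incidence purely for illustration, since neither is stated in the question:

```python
import numpy as np

c = 3e8                  # speed of light (m/s)
lam = 1.03e-6            # assumed center wavelength (m) -- not given above
d = 1e-3 / 1000          # grating period for 1000 lines/mm (m)
theta_i = np.arcsin(lam / (2 * d))   # assumed Littrow incidence angle

def gdd_grating_pair(L):
    """Double-pass GDD (s^2) of a parallel grating pair with slant
    separation L (m), first diffraction order (Treacy formula)."""
    return (-(lam**3 * L) / (np.pi * c**2 * d**2)
            * (1 - (lam / d - np.sin(theta_i))**2) ** -1.5)

# e.g. the smallest separation your mechanics allow (placeholder):
L_min = 0.02   # 2 cm
print(gdd_grating_pair(L_min))   # most negative achievable GDD
```

One knob that needs no new layout: moving the incidence angle further from Littrow reduces the bracketed angular factor, hence the dispersion per unit separation, which may bring the minimum |GDD| below what the chirp requires.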

There is a fiber-coupled EOM after the double-pass AOM setup. The intensity of the light is constant (checked with a power meter) before the input of the EOM, but there is a 10% fluctuation in the power just after the EOM, because of which I am unable to lock to the signal that I see on the oscilloscope. The signal fluctuates up and down on the screen. Is there a solution, or could I be doing something wrong somewhere? I have realigned all the optical elements numerous times, but I am still facing the problem.

I am developing a mathematical model (Matlab code) of the optical coherence tomography signal for testing algorithms that extract B-scans. Now I am struggling with taking into account the optical systems, such as the sample scanner and spectrometer, which distort the optical signal. I am thinking of simulating this directly in Matlab. Would it be better to use dedicated optical-design software and then transfer some coefficients into the Matlab code to emulate the optics? (I suppose I could do it with a 2D Fourier transform, because I have implemented all other parts of the OCT system in the spectral domain.) Are there any code examples or tutorials?
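Regarding the 2D Fourier-transform route: one common shortcut is to represent the scan optics by a coherent transfer function (an NA cutoff plus an aberration phase) applied to each en-face field in the spatial-frequency domain. A minimal sketch in Python/NumPy; the NA, defocus, and sampling values are arbitrary, and a spectrometer would need its own 1D counterpart along the wavelength axis:

```python
import numpy as np

def apply_optical_system(field, wavelength, dx, defocus_waves=0.5, na=0.05):
    """Model the scan optics as a coherent transfer function applied in
    the 2D Fourier domain: a low-pass at the NA cutoff plus a quadratic
    (defocus) phase. Parameter values are illustrative only."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    f2 = FX**2 + FY**2
    f_cut = na / wavelength                    # coherent cutoff frequency
    pupil = (f2 <= f_cut**2).astype(float)
    # quadratic aberration phase across the pupil, in units of waves
    phase = np.exp(2j * np.pi * defocus_waves * f2 / f_cut**2)
    ctf = pupil * phase
    return np.fft.ifft2(np.fft.fft2(field) * ctf)
```

The coefficients of the phase polynomial (here just defocus) are exactly the kind of numbers you could export from optical-design software and reuse in Matlab unchanged.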

So-called "light with a twist in its tail" was described by Allen in 1992, and a fair-sized movement has developed with applications. For an overview see Padgett and Allen 2000: http://people.physics.illinois.edu/Selvin/PRS/498IBR/Twist.pdf . Recent investigations, both theoretical and experimental, by Giovannini et al. in a paper auspiciously titled "Photons that travel in free space slower than the speed of light", and also Bereza and Hermosa, "Subluminal group velocity and dispersion of Laguerre Gauss beams in free space", published in Scientific Reports (https://www.nature.com/articles/srep26842), argue that the group velocity is less than c. See the first attached figure from the 2000 overview with the caption "*helical wavefronts have wavevectors which spiral around the beam axis and give rise to an orbital angular momentum*". (Note that Bereza and Hermosa report that the greater the apparent helicity, the greater the excess dispersion of the beam, which seems a clue that something is amiss.)

General relativity assumes light travels in straight lines in local space. Photons can have spin, but not orbital angular momentum. If the group velocity is really less than c, then the light could be made to appear stationary, or to move backward, by an appropriate choice of reference frame. This seems a little over the top. Is it possible that what is really going on is more like the second figure, which I drew, titled "apparent" OAM? If so, how did the interpretation of this effect get so out of hand? If not, how have the stunning implications been overlooked?

I am going to build a setup for generating and manipulating time-bin qubits. What is the easiest or most common experimental setup for generating time-bin qubits?

Please share your comments and references with me.

thanks

The intensity of each ray in the Ray Optics module of COMSOL is easily obtained after ray tracing. I need to find the radiation intensity at the mesh points, which would be related to all the rays crossing a point. Does anybody know how I can get continuous contours of the radiation intensity? Should I use an accumulator? If yes, what should the settings of the accumulator be?

1. The necessity of a polarization controller for single-mode fiber. Is a polarization controller necessary for single-mode fibers? What happens when you don't have one?

2. Optical path matching. How do I ensure that the optical path lengths of the two arms match; are there any tips for the adjustment process? If the optical path difference exceeds the imaging distance, will interference fringes fail to appear?

Only these questions for the time being; if there are more, you are welcome to point them out.

My source laser is a 20 mW 1310 nm DFB laser diode pigtailed into single-mode fiber.

The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.

The laser light source appears to have a coherence length of around 9 km at 1310 nm (see attached calculation worksheet), so it should be possible to observe interference fringes with my setup.

The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310 nm light, and viewed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need sufficient gain and exposure (as well as brightness and contrast). This causes the frame rate of the video imaging to slow to several seconds per frame.

All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.

I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.

I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.

I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.

All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...

What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?


1) Can the existence of an aether be compatible with local Lorentz invariance?

2) Can classical rigid bodies in translation be studied in this framework?

By taking into account that the synchronization of clocks in inertial frames is just a gauge, we may use a synchronization that clearly violates Lorentz invariance globally but preserves it locally in the vicinity of each point of flat spacetime. Then the answer to 1) and 2) seems to be affirmative.

Christian Corda showed in 2019 that this effect of clock synchronization is a necessary condition to explain the Mössbauer rotor experiment (Honorable Mention at the Gravity Research Foundation 2018). In fact, it can easily be shown that it is a necessary condition for applying the Lorentz transformation to any experiment involving high-velocity particles traveling between two distant points (including the linear Sagnac effect).

---------------

We may consider the time of a clock placed at an arbitrary coordinate x to be t, and the time of a clock placed at an arbitrary coordinate x_P to be t_P. Let the offset (t - t_P) between the two clocks be:

1) (t - t_P) = v (x - x_P)/c^2

where (t - t_P) is the so-called Sagnac correction. If we call g the Lorentz factor for v and we insert 1) into the time-like component of the Lorentz transformation, T = g (t - vx/c^2), we get:

2) T = g (t_P - v x_P/c^2)

On the other hand, if we assume that the origins coincide, x = X = 0 at time t_P = 0, we may write down the space-like component of the Lorentz transformation as:

3) X = g (x - v t_P)

Assuming that both clocks are placed at the same point x = x_P, inserting x = x_P, X = X_P, T = T_P into 2) and 3) yields:

4) X_P = g (x_P - v t_P)

5) T_P = g (t_P - v x_P/c^2)

which is the local Lorentz transformation for an event happening at point P. On the other hand, if the distance between x and x_P is different from 0 and x_P is placed at the origin of coordinates, we may insert x_P = 0 into 2) and 3) to get:

6) X = g (x - v t_P)

7) T = g t_P

which is a change of coordinates that:

- Is compatible with GPS simultaneity.

- Is compatible with the Sagnac effect. This effect can be explained in a very straightforward manner without the need of using GR or the Langevin coordinates.

- Is compatible with the existence of relativistic extended rigid bodies in translation, using the classical definition of rigidity instead of Born's definition.

- Can be applied to solve the 2 problems of the preprint below.

- Is compatible with all experimental corroborations of SR: aberration of light, the Ives-Stilwell experiment, the Hafele-Keating experiment, ...

Thus, we may conclude that, considering the synchronization condition 1):

a) We get Lorentz invariance at each point of flat space-time (eqs. 4-5) when we use a unique single clock.

b) Lorentz invariance is broken when we use two clocks to measure time intervals over long displacements (eqs. 6-7).

c) We need to consider the frame with respect to which we must define the velocity v of the synchronization condition (eq 1). This frame has v = 0 and it plays the role of an absolute preferred frame.

a), b) and c) suggest that the Thomas precession is a local effect that cannot manifest over long displacements.

More information in:

I am using a Seek thermal camera to track cooked food, as in this video.

As I observed, the temperature of the food obtained was quite good (i.e. close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's reported temperature suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.

As I suspected the background temperature of the hot pan could be the source of the error, I made a hole in a piece of paper and put it in front of the camera's lens, to ensure that the camera could only see the food through the hole and no longer the hot pan. The food temperature obtained by the camera was correct again, but it becomes wrong again if I take the paper away.

I would appreciate any explanation of this phenomenon, and any solution, from either physics or optics.

For my experiments I need circularly polarized laser light at 222 nm.

Can anyone tell me whether there is any important difference between a λ/4 phase plate and a Fresnel rhomb? Prices seem to be comparable. My intuition tells me that a phase plate could be significantly less durable due to optical damage of the UV AR coatings on all of the surfaces. It also seems much harder to manufacture a UV phase plate, since there are very limited light sources for testing, while a Fresnel rhomb seems easier to produce and is achromatic. What should I choose?

There are many fields where light fields can be utilized. For example, they are used in microscopy [1] and for vision-based robot control [2]. Which additional applications do you know?

Thank you in advance!

[1] Li, H., Guo, C. and Jia, S., "High-resolution light-field microscopy," Frontiers in Optics, FW6D. 3 (2017).

[2] Tsai, D., Dansereau, D. G., Peynot, T. and Corke, P. , "Image-Based Visual Servoing With Light Field Cameras," IEEE Robotics and Automation Letters 2(2), 912-919 (2017).

Hi. I have a question. Do substances (for example Fe or benzene) in trace amounts (for example micrograms per liter) cause light refraction? If they do, is the refraction large enough to be detected? And is the refraction unique to each substance?

I also need to know: if we have a solution containing different substances, can refraction help us determine what the substances are? Can it measure the concentration?

Thanks for your help

I have profiled a collimated pulsed laser beam (5 mm) at different pulse energies by delaying the Q-switch, and I found the profile to be approximately Gaussian. Now I have placed a negative meniscus lens to diverge the beam, and I put a surface where the beam spot size is 7 mm. Should the final beam profile (at the 7 mm spot size) still be Gaussian, or will the negative lens change the Gaussian profile? Is there a way to calculate the intensity profile theoretically, without doing the beam profiling again by methods like the razor-blade method? Thanks.
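An ideal thin lens maps a Gaussian beam to another Gaussian, so in the paraxial, aberration-free approximation the profile at the 7 mm plane is still Gaussian; a real meniscus adds aberrations that can distort it. The q-parameter (ABCD) formalism lets you compute the new spot size without re-profiling. A sketch; the wavelength and focal length below are placeholders, not your system's values:

```python
import numpy as np

# Illustrative numbers only -- substitute your own.
lam = 1.064e-6          # wavelength (m), assumed
w0 = 2.5e-3             # 5 mm collimated beam -> 2.5 mm waist radius
f = -0.1                # negative lens, f = -100 mm (placeholder)

zR = np.pi * w0**2 / lam
q0 = 1j * zR            # collimated beam: waist at the lens

def propagate(q, abcd):
    """Transform the complex beam parameter q through an ABCD matrix."""
    (A, B), (C, D) = abcd
    return (A * q + B) / (C * q + D)

def spot_size(q):
    """Beam radius w from Im(1/q) = -lam / (pi w^2)."""
    return np.sqrt(-lam / (np.pi * np.imag(1 / q)))

z = 0.1                                          # distance after the lens
q1 = propagate(q0, ((1, 0), (-1 / f, 1)))        # thin lens
q2 = propagate(q1, ((1, z), (0, 1)))             # free space z
print(spot_size(q2))                             # beam radius at z
```

With these placeholder numbers the 2.5 mm radius doubles to 5 mm at 0.1 m after the f = -100 mm lens, exactly what ray geometry from the virtual focus predicts; the intensity profile at that plane is the Gaussian with this new radius.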

I would like to calculate the mode field diameter (MFD) of a step-index fiber at different taper ratios. I understand that at a particular wavelength the MFD will decrease as the fiber is tapered. It may increase if it is tapered further. I am looking to reproduce the figures (attached) given in US Patent 9946014. Is there any formula I may use, or does it involve some complex calculations?

Dear all,

Kindly provide your valuable comments based on your experience with surgical loupes

- Magnification (2.5 x to 5x)

- Working distance

- field of vision

- Galilean (sph/cyl) vs Kepler (prism)

- TTL vs non TTL/flip

- Illumination

- Post use issues (eye strain/ headache/ neck strain etc)

- Recommended brand

- Post sales services

Thank you

#Surgery #Loupes #HeadandNeck #Surgicaloncology #Otolaryngology

I am using a 550 mW green laser (532 nm) and I want to measure its intensity after it has passed through several lenses and glass windows.

I found a ThorLabs power meter but it is around $1200.

Any cheaper options to measure the intensity of the laser?

(high accuracy is not required)

I want to know whether the number of fringes and their shape is an important factor for the accuracy of the phase definition.

Solvents for the immersion oil are carbon tetrachloride, ethyl ether, Freon TF, Heptane, Methylene Chloride, Naptha, Turpentine, Xylene, and toluene.

What is the best of these to clean the surface? Toluene? Heptane? I'd like to stay away from more dangerous chemicals if possible and have something that evaporates easily.

I would like to calculate the return loss for a splice/connection between two different fibers. One of the fibers has a larger core diameter and a larger cladding diameter than the other. I was considering the approach laid out in this paper, which deals with identical fibers: M. Kihara, S. Nagasawa and T. Tanifuji, "Return loss characteristics of optical fiber connectors," Journal of Lightwave Technology, vol. 14, no. 9, pp. 1986-1991, Sept. 1996, doi: 10.1109/50.536966. Link:

I added a few screenshots from the paper.

Hello

I am new to the field and I would like to ask: what are the criteria for saying whether a photoswitchable compound or optogenetic molecule has fast kinetics and high spatiotemporal resolution in cell-free, cellular/in vitro, in vivo and ex vivo models? Is there a consensus criterion to quantitatively establish whether a compound has fast kinetics and high spatiotemporal resolution in these models?

For instance, if a compound becomes fully activated when turned on by light in less than 30 min, does it have fast kinetics?

On the other hand, if a compound can precisely activate certain neuronal regions in the brain but has off-target activation in the surrounding regions within about 20 µm of the region of activation, does it have high spatiotemporal resolution?

I may have mixed-up some terms here, I will be glad if this will be clarified in the discussion.

Thanks.

I want to calculate the propagation constant difference for LP01 and LP02 modes for a tapered SMF-28 (in both core and cladding).

Is there a simple formula that I can use? My goal is to see if the taper profile is adiabatic or not.

I am using this paper for my study: T. A. Birks and Y. W. Li, "The shape of fiber tapers," Journal of Lightwave Technology, vol. 10, no. 4, pp. 432-438, April 1992, doi: 10.1109/50.134196. The equation is in the attached figure.

In my experiment I have a double-cladding fiber spliced onto a reel of SMF-28. The double-cladding fiber has a total cladding diameter about 2 times larger than that of the SMF-28. The source is an SLD, and there is a polarization scrambler after the source, which feeds one end of the reel of SMF-28. The output power from the 1 km long reel is X mW. But when I splice a half-meter length of the specialty fiber to the reel output and measure the power, it is 0.9Y mW, where Y is the power output after the polarization scrambler (Y = 3.9X). I am not sure why the power reading suddenly increased.

My setup is as follows: elliptically polarized light at the input -> Faraday rotator -> linear polarizer (LP) -> photodiode.

The LP is set such that the output power is at a minimum. I use a lock-in amplifier to measure the power change due to the Faraday effect. I have a more or less accurate measurement of the magnetic field and the length of the fiber. The experimental Faraday rotation (theta = Verdet constant × magnetic field × length of fiber) is larger than the theoretical prediction, so I was wondering if I am observing the effect of the elliptical polarization at the input to the system.
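One quick sanity check is to compare the rotation predicted by theta = V·B·L with the power change the lock-in should see near the polarizer minimum. A sketch of that arithmetic; every number below is a placeholder, and note that the imperfect null caused by an elliptical input adds a constant power offset, which can bias a sin²-based estimate of theta upward:

```python
import numpy as np

# Placeholder values -- substitute your measured ones.
V = 1.0        # Verdet constant, rad/(T*m) (placeholder)
B = 0.01       # axial magnetic field (T)
L = 10.0       # fiber length inside the field (m)

theta = V * B * L                 # expected Faraday rotation (rad)

# Near the polarizer minimum, Malus' law gives the detected power:
# P = P0 * sin^2(theta) ~ P0 * theta^2 for small theta.
P0 = 1e-3                         # power at the transmission maximum (W)
dP = P0 * np.sin(theta)**2        # signal the lock-in should report
```

If the measured dP sits above this prediction by roughly a constant amount independent of B, that points to the elliptical-input offset rather than a genuinely larger rotation.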

1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)

2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect? (Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)

Please see the attached file RPVM.pdf. Any comment will be welcome.

More on this subject at:

You can find the wording in the attached file PR1-v3.pdf. Any comment will be welcome.

More on this topic at:

According to ASHRAE there are values of tb and td for Atlantic; I want their values according to cities, or to latitude and longitude.

Thanks

A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern that is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modeling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
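The computer-generated route described above can be sketched in a few lines of Python/NumPy: model a point-source object wave and a tilted plane reference, add them, and record the intensity. All the numbers (wavelength, pixel pitch, tilt, distance) are arbitrary illustration values:

```python
import numpy as np

lam = 633e-9                 # wavelength (m), illustrative
k = 2 * np.pi / lam
N, dx = 512, 10e-6           # grid size and pixel pitch (placeholders)
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)

# object wave: point source a distance z0 behind the hologram (paraxial)
z0 = 0.2
obj = np.exp(1j * k * (X**2 + Y**2) / (2 * z0))

# reference wave: plane wave tilted by angle alpha in x
alpha = np.deg2rad(1.0)
ref = np.exp(1j * k * np.sin(alpha) * X)

# recording: the interference pattern is the intensity of the sum
hologram = np.abs(obj + ref)**2

# reconstruction: illuminate the recorded pattern with the reference
# alone; expanding the product shows one diffracted term proportional
# to |ref|^2 * obj, i.e. the original object wavefront
recon = hologram * ref
```

The tilt separates the reconstructed object term from the zero-order and conjugate terms in angle, which is why off-axis CGHs are printed with a carrier fringe pattern.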

Why can't the two-intensity combination algorithm, as implemented in MATLAB, be adapted to Octave to create an interference scheme?

If I were to make a half dome as an umbrella to protect a city from rain and sun, how would I proceed? Are there special materials, or do you have an idea of how to make this? And what do you say about an energy shield?

So far, I have made the simulation for the thermal and deformation analysis. I know that the neff-T graph can be plotted. The TEM modes can be plotted as well.

For an actual laser system, the ellipticity of a Gaussian beam is as in the attached picture (measured by me). Near the laser focus the ellipticity is high; it falls drastically within a Rayleigh length and then increases again. This is a "simple astigmatic" beam. Can anyone explain this variation?

P.S. In the article (DOI: 10.2351/1.5040633), the author also found similar variation. But did not explain the reason.

Thanks in advance

What are the advantages of using SiO2 substrate over Si substrate for monolayer graphene in photonics?

I am interested in the technique of obtaining high-quality replicas from diffraction gratings, as well as holograms with surface relief. What materials are best used in this process? Also of interest is the method of processing the surface of the grating to reduce adhesion in the process of removing a replica from it.

Hello dear friends,

I am trying to add our own optical components in the sample compartment region of our Nicolet 380 FTIR. Also, our sample is small, so we need to shrink the size of the beam spot with an iris to avoid signals from the substrate.

Therefore, we want to use a mid-IR sensor card to help us find where the light beam is and how large it is. However, the IR sensor card does not show any color change when I put it in the light path (with the light source on, of course). The mid-IR sensor card we use can detect light in the wavelength range of 5-20 µm; the minimum detectable power density is 0.2 W/cm².

Did I miss anything here? And do you have any suggestions how I shall detect the beam spot, its position and size?

Any suggestions will be highly appreciated! Thank you in advance!

Best regards,

Ziwei

For example, if the cavity length is 50 mm, what value should the thermal focal length have so that it does not affect the stability of the laser cavity: f > 50 mm or f < 50 mm?

Some 2D galvos have both axes on the horizontal plane. It seems much easier to manufacture. However, some high-end galvos such as those from Cambridge have one of the axes tilted by a small angle. What is the benefit of that?

I have a long length (L) of coiled SMF-28 on a spool and I want to measure the beat length (L_b) of the entire spool by some simple means, as follows: (1) inject linearly polarized broadband light (for example, from a superluminescent source); (2) record the optical spectrum using an optical spectrum analyzer (OSA) after transmission through the fiber and another polarizer; (3) that spectrum will exhibit oscillations with a period Δλ, from which the polarization beat length can be calculated using L_b = (Δλ/λ) L. My questions are: (a) what is the typical resolution of the OSA used in such measurements? (b) should I rotate the polarizer such that the power is maximized at the center wavelength of my source? (c) is there anything else I should consider?
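The arithmetic in step (3) is one line; the sketch below also makes the practical point behind question (a), namely that the OSA resolution bandwidth has to be several times finer than the oscillation period Δλ you expect. All numbers are placeholders:

```python
# L_b = (d_lam / lam) * L; placeholder numbers, not a real measurement.
L = 1000.0        # fiber length on the spool (m)
lam = 1550e-9     # center wavelength (m)
d_lam = 0.05e-9   # oscillation period read off the OSA (m)

L_b = (d_lam / lam) * L      # polarization beat length (m)

# Rule of thumb: the OSA resolution bandwidth should be several times
# smaller than d_lam (here 0.05 nm -> ~0.01 nm RBW), or the spectral
# fringes wash out and the period cannot be read off.
rbw_needed = d_lam / 5
```

Working backwards the same way, an expected beat length and your known L tell you the Δλ you must resolve, which fixes the minimum OSA specification before you buy or book time on one.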

I am interested in defining the heterogeneity and similarities among metalenses and their advantages in current and new applications, and in identifying some of their future improvements and characteristics.

I'm curious if anyone can share their measurement of the coupling loss as a function of the gap between two SMF FC/APC fibers at various wavelengths. If not, it would be great if you can refer me to a datasheet or a paper where this type of measurement was done.

Thanks!

Hi,

My input Jones vector is E1 = [0; 1], the Jones matrix is M = [a+ib, 0; 0, c+id], and the output is E2 = M*E1 = [0; x+iy]. Now I want to know the phase shift between the vertical and horizontal polarization components of the light wave.

E1 is 2x1, M is 2x2 and E2 is 2x1.

Is E2 elliptically polarized? But then it does not have any x component; I am confused.
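A numerical way to see this: the relative phase between the two components of a Jones vector is only defined when both components are nonzero, while the retardance a diagonal Jones matrix adds between the axes is a property of the matrix itself, independent of the input. Sketch; the values of a, b, c, d are arbitrary, chosen to make M quarter-wave-like:

```python
import numpy as np

a, b, c, d = 1.0, 0.0, 0.0, 1.0          # M = diag(1, i), illustrative
M = np.array([[a + 1j * b, 0], [0, c + 1j * d]])

# retardance the matrix adds between the y and x axes (pi/2 here)
retardance = np.angle(M[1, 1]) - np.angle(M[0, 0])

E1 = np.array([0, 1])                    # purely vertical input
E2 = M @ E1                              # [0, x+iy]: still purely vertical
# E2 has no x component, so it is LINEARLY polarized along y; a
# "phase shift between vertical and horizontal" is undefined for it.

# With a general input, both components exist and the phase is defined:
E1g = np.array([1, 1]) / np.sqrt(2)      # 45-degree linear input
E2g = M @ E1g
phase = np.angle(E2g[1]) - np.angle(E2g[0])   # equals the retardance
```

So for E1 = [0; 1] the output only picks up an overall (absolute) phase angle(x+iy), which no polarization measurement can detect; to characterize M's retardance you need an input that excites both axes.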

I've come across the formula n = 1/Ts + sqrt(1/Ts - 1), where n is the refractive index and Ts is the transmittance. Is this formula valid? How is it arrived at?

I'm working on silicon-graphene hybrid plasmonic waveguides at 1.55 µm. For bilayer graphene my effective mode indices are close to those in the source I'm using, but for trilayer graphene they are not acceptable. For modeling the graphene I use the relative permittivity or refractive index at different applied chemical potentials (in eV).

I have attached my graphene relative permittivity and refractive index calculation code, and one of my COMSOL files, related to Fig. 5.19b in the source.

Related question of mine with more detail: https://www.researchgate.net/post/How_to_simulate_bilayer_trilayer_graphene_waveguide_in_COMSOL

I'm planning to modify a finite tube length compound microscope to allow the use of "aperture reduction phase contrast" and "aperture reduction darkfield" according to the following sources:

Piper, J. (2009) Abgeblendeter Phasenkontrast — Eine attraktive optische Variante zur Verbesserung von Phasenkontrastbeobachtungen. Mikrokosmos 98: 249-254. https://www.zobodat.at/pdf/Mikrokosmos_98_4_0001.pdf (in German).

The vague instructions state:

"In condenser aperture reduction phase contrast, the optical design of the condenser is modified so that the condenser aperture diaphragm is no longer projected into the objective's back focal plane, but into a separate plane situated circa 0.5 – 2 cm below (plane A' in fig. 5), i.e. into an intermediate position between the specimen plane (B) and the objective's back focal plane (A). The field diaphragm is no longer projected into the specimen plane (B), but shifted down into a separate plane (B'), so that it will no longer act as a field diaphragm.

As a result of these modifications in the optical design, the illuminating light passing through the condenser annulus is no longer stopped when the condenser aperture diaphragm is closed. In this way, the condenser iris diaphragm can work in a similar manner to bright-field illumination, and the visible depth of field can be significantly enhanced by closing the condenser diaphragm. Moreover, the contrast of phase images can now be regulated by the user. The lower the condenser aperture, the higher the resulting contrast will be. Importantly, halo artifacts can be reduced in most cases when the aperture diaphragm is partially closed, and potential indistinctness caused by spherical or chromatic aberration can be mitigated."

The author combined finite 160 mm tube length objectives, a phase contrast condenser designed for finite microscopes, and an infinity-corrected microscope to get the desired results.

However, how would one accomplish this in the simplest way possible?

I wanted to calculate the magnification of a system where I am using a webcam lens, and was wondering if this could be done by applying the simple lens equation. If yes, then what would I consider my "v" to be in this case, since I'm dealing with a lens assembly (unknown number of lenses and unknown separations between them)? I just know the EFL in this case.
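The thin-lens equation does apply to a multi-element assembly with a known EFL, but u and v are then measured from the principal planes, whose locations inside a sealed webcam lens you don't know; for an object far compared with the lens barrel, that uncertainty is small. A sketch with placeholder numbers:

```python
# Treat the assembly as a single equivalent lens of focal length f (EFL);
# u and v are measured from the front/rear principal planes. Numbers below
# are placeholders, not a specific webcam lens.
f = 0.004    # EFL = 4 mm, typical order of magnitude for a webcam lens
u = 0.5      # object distance from the front principal plane (m)

# thin-lens equation with real object/image distances taken positive:
# 1/v = 1/f - 1/u
v = 1 / (1 / f - 1 / u)       # image distance from the rear principal plane
m = -v / u                    # transverse magnification (inverted image)
```

Since the sensor sits essentially at the rear focal plane for distant objects, v ≈ f and the magnification reduces to roughly -f/u, so knowing the EFL alone is usually enough; the principal-plane offsets only matter for close-up work.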

It is known that for particles having a size less than one-tenth the wavelength of light, Rayleigh scattering occurs and the scattering is isotropic. But for particles whose size is comparable to the wavelength of light, the scattering is concentrated in the forward direction and hence is anisotropic.

I have tried Keller's, Tucker's and Barker's etchants, but they aren't working. I am interested in the optical micrography, but I'm not getting anything. :(

In any OSL phosphor, optical stimulation requires optical energy greater than the thermal trap depth of the trap. For example, in the case of Al2O3:C we require a 2.6 eV photon to detrap an electron from a trap having a 1.12 eV thermal trap depth. How are the two related to each other?

Today, sensors are usually understood as devices which convert various quantities (e.g. pressure, light intensity, temperature, acceleration, humidity, etc.) into an electrical quantity (e.g. current, voltage, charge, resistance, capacitance, etc.), which makes them useful for detecting the states or changes of events in the real world and conveying the information to the relevant electronic circuits (which perform the signal processing and computation required for control, decision making, data storage, etc.).

Thinking simply, we can assume that actuators work in the opposite direction, providing an "action" interface between the signal processing circuits and the real world.

If signal processing and computation become based on "light" signals instead of electrical signals, we may need to replace today's sensors and actuators with different ones (and probably the definitions of sensor and actuator will also be modified).

Let's assume we need to convert pressure to light: one can prefer the simplest (hybrid) approach, which is to use a pressure sensor and then an electrical-to-optical transducer (e.g. an LED) to obtain the required new type of sensor. However, if a more efficient or faster direct pressure-to-light converter (a new type of pressure sensor) is available, it might be more favorable than this indirect conversion. In the near future, we may need such direct transducer devices for low-noise and/or high-speed realizations.

(The example may not be a proper one, but I just needed to provide a scenario. If you can provide better examples, you are welcome.) Most probably there are research studies ongoing in these fields, but I am not familiar with them. I would like to know your thoughts and/or information about this issue.

Does anybody know the maximum power of laser sources (QCLs, VECSELs, and so on) in the THz regime?

I'm trying to realize a nonlinear effect in the THz regime using a THz source, without resorting to DFG or SFG. I need 200 mW or more for my device. Is this doable? Is there any source that can generate that power?

Dear all,

I recently ran into a problem when using RCWA codes.

For the same structure, an FDTD solution takes less time: I need to retain a large number of orders in the RCWA calculation to reach a similar result.

So my question is: how can I judge the accuracy of the simulation when using RCWA codes, and how do I decide how many orders I need?
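A common practical approach is a convergence test: sweep the number of retained Fourier orders and stop when the quantity of interest (e.g. zeroth-order reflectance) stops changing to within a tolerance. The sketch below assumes a placeholder `simulate` callable standing in for your RCWA solver; the `mock_rcwa` function is purely illustrative:

```python
# Hypothetical sketch: sweep the truncation order of an RCWA calculation and
# stop when successive results agree to within a tolerance.
def converged_orders(simulate, start=5, step=4, max_orders=101, tol=1e-4):
    """Increase the number of retained orders until the result stabilises.

    'simulate(n_orders)' is a placeholder for your RCWA solver call; it must
    return the scalar you care about (reflectance, transmittance, ...).
    """
    prev = simulate(start)
    for n in range(start + step, max_orders + 1, step):
        curr = simulate(n)
        if abs(curr - prev) < tol:
            return n, curr  # converged at n orders
        prev = curr
    raise RuntimeError("did not converge up to max_orders")

# Stand-in "solver" for demonstration only: result approaches 0.3 as n grows.
mock_rcwa = lambda n: 0.3 + 1.0 / n**2

n, R = converged_orders(mock_rcwa)
```

A result is usually trusted when it is insensitive to further increases of the order (and, for lossless structures, when R + T sums to 1 to within the same tolerance).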

Thanks

Hello all,

I have some idea of how to measure the external quantum efficiency of my perovskite LEDs, but I want to calibrate the setup, for which I want to measure the external quantum efficiency of a normal 5 mm LED. How should I go forward with it? All suggestions/help would be appreciated. Thank you.
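For reference, the basic EQE arithmetic (emitted photons per second divided by injected electrons per second) can be sketched as below; the LED numbers are made-up example values, not a measurement of any particular device:

```python
# Hedged sketch of the EQE calculation for a reference LED, assuming the total
# emitted optical power (e.g. from an integrating sphere) and the drive
# current are known. Example values below are illustrative only.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
q = 1.602176634e-19  # elementary charge, C

def external_quantum_efficiency(optical_power_W, wavelength_m, current_A):
    photons_per_s = optical_power_W / (h * c / wavelength_m)  # P / E_photon
    electrons_per_s = current_A / q
    return photons_per_s / electrons_per_s

# Example: 5 mm red LED, 2 mW collected at 625 nm for a 20 mA drive current.
eqe = external_quantum_efficiency(2e-3, 625e-9, 20e-3)
print(f"EQE = {eqe:.1%}")
```

The hard part experimentally is collecting *all* of the emitted light (hence the integrating sphere) and calibrating the detector responsivity at the LED's wavelength.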

Jitesh Pandya

Hello everyone,

I'm trying to implement a material with non-diagonal conductivity in my FDTD code. By the way, I'm using Dr. Elsherbeni's code for my purpose. Although I managed to implement diagonal anisotropy, my code seems to be unstable for non-diagonal matrices. Through research, I've found out that my updating equations are not correct. Since it is necessary to interpolate the fields at positions where they are not natively defined on the Yee grid, it seems the updating equations also have to be organized differently than in the isotropic case.
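For comparison, here is a minimal sketch (not Elsherbeni's code) of the standard semi-implicit treatment of the σE term when σ is a full tensor: the per-cell scalar update becomes a 3×3 linear solve, (ε/Δt + σ/2)·E^{n+1} = (ε/Δt − σ/2)·E^n + (∇×H). All E components are assumed collocated here; on a real Yee grid, Ey and Ez must first be interpolated to the Ex location (and cyclically) before the matrix is applied:

```python
import numpy as np

# Semi-implicit FDTD update for a non-diagonal conductivity tensor (sketch).
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
dt = 1e-17               # time step, s (illustrative)

sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])   # S/m, example non-diagonal tensor
eps = eps0 * np.eye(3)               # isotropic permittivity for simplicity

A = eps / dt + sigma / 2.0           # implicit (future-time) side
B = eps / dt - sigma / 2.0           # explicit (past-time) side
A_inv = np.linalg.inv(A)             # precompute once per material

def update_E(E_old, curl_H):
    """E^{n+1} = A^{-1} (B @ E^n + curl H), evaluated per cell."""
    return A_inv @ (B @ E_old + curl_H)

E = update_E(np.array([1.0, 0.0, 0.0]), np.zeros(3))
```

Averaging the σE term between time steps n and n+1 like this (rather than evaluating it explicitly at n) is what typically restores stability for lossy media; with off-diagonal terms the inverse couples the components, so a naive component-wise update is no longer correct.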

I attach the equations in a PDF below. The first equation on every page represents the equations in half-steps, and the second one represents the updating equations implemented in the code.

Any help or hint would be appreciated.

I also have to point out that the source for the equations is the paper in the link below:

I am looking for an overview of how FEM simulations are used in optics: especially when the method was first used, and for what kinds of systems.

Thanks in advance!

The interference pattern is probably in the form of stripes/straight lines due to the tilt between the two interfering wavefronts. By adjusting the second mirror, the tilt can be reduced until a single fringe remains. Why do we get stripes and not circular fringes?
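A small numerical sketch of the tilted-plane-wave case (example wavelength and tilt angle are assumed, not from any particular setup): for two plane waves tilted by an angle θ about the y axis, the phase difference k·sin(θ)·x depends on x only, so lines of constant intensity are straight vertical stripes. Circular fringes instead require a radially varying phase difference, i.e. wavefronts with different curvature:

```python
import numpy as np

wavelength = 633e-9          # He-Ne wavelength, m (example value)
k = 2 * np.pi / wavelength
theta = 1e-4                 # relative tilt angle, rad (example value)

x = np.linspace(-1e-3, 1e-3, 5)
y = np.linspace(-1e-3, 1e-3, 5)
X, Y = np.meshgrid(x, y)

# Two unit-amplitude plane waves: I = 2 * (1 + cos(k sin(theta) x)).
I = 2 * (1 + np.cos(k * np.sin(theta) * X))  # no Y dependence -> stripes

# Every row of I is identical: the pattern is invariant along y.
assert np.allclose(I, I[0])
```

Reducing `theta` toward zero stretches the fringe spacing 2π/(k·sinθ) until only a single fringe spans the aperture, matching the adjustment described above.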

I am looking for possible methods of temporally overlapping a nanosecond pulsed laser (280 Hz, ~6 ns, 532 nm, beam diameter ~4 mm) with a picosecond pulsed laser (78 MHz, ~10 ps, 565 nm, beam diameter ~4 mm) using delay-line mirrors.

At the moment I am using a fast photodiode with a 1 ns rise time (https://www.thorlabs.com/thorproduct.cfm?partnumber=DET210/M) and a 10 GS/s oscilloscope. However, I can only see the attached signals from the ns and ps sources when they run separately, and since the amplitude of the detected signal from the ps source is quite low (~20 mV), it is hard to align the other one with it. One option that comes to mind is to lower the intensity of the ns laser with neutral-density filters, but is there any alternative to this? Let me know if more information is required.

Hello;

It is well known that when light reaches an optical element, part of it is lost through absorption, scattering, and back reflection. In the case of mirrors, this value is well characterized, and a realistic estimate would be around 4-5% (or less, depending on the material). However, I cannot find similar information on commercial or scientific sites for beam splitters. For example, on the site of a well-known optics company, if we take the raw data, the percentages of reflected and transmitted light add up to more than 100% at some points on the curve! Without a doubt this has to do with the measurement methodology.

In scientific articles, some estimate this absorption at around 2%, assuming the splitter is a block or sheet of a certain material (ignoring ghost images). However, this does not make sense, since it would then be more attractive to use a dichroic beam splitter than a mirror in certain circumstances.

Of course, everything will depend on the thickness, the material used, and the AR coating. However, I cannot find a single example, and I cannot even establish the order of magnitude. Does anyone know of a reference with a realistic estimate of the useful light lost when using a beam splitter, whatever its characteristics?
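Whatever the datasheet curves say, the only model-free constraint is energy conservation: the absorbed/scattered fraction is whatever is left over after reflection and transmission. A trivial sketch (the R and T values are illustrative, not from any specific catalogue):

```python
# Energy-balance check for beam-splitter datasheet curves: A = 1 - R - T.
def loss_fraction(R, T):
    A = 1.0 - R - T
    if A < 0:
        # R + T > 1 indicates a measurement/normalisation artifact,
        # as observed on some published curves.
        raise ValueError("R + T exceeds 1: inconsistent data")
    return A

print(loss_fraction(0.48, 0.50))  # ~0.02, i.e. roughly 2 % lost
```

Running this point-by-point over a digitized R and T curve at least flags the wavelengths where the published data cannot be taken at face value.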

Thanks !

“The interaction of a field with a thin scattering layer corresponds to multiplication with a diagonal matrix”

Original from: Wetzstein, Gordon, et al. "Inference in artificial intelligence with deep optics and photonics." *Nature* 588.7836 (2020): 39-47.

I want to stabilize the carrier-envelope offset of a SESAM mode-locked, linear-cavity laser with a repetition rate of 31.6 MHz by the method of f-2f interferometry.

Can anyone suggest:

1- What typical feedback electronics does one need for laser stabilisation?

Specifically, PID controller specs.

2- A voltage-to-current converter (to modulate the pump diode current)?

3- Since the frequency spacing is low (31.6 MHz), would dichroic mirrors work efficiently to filter the f and 2f components of the supercontinuum, or is there a better option?
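For reference, the arithmetic behind the f-2f scheme can be illustrated numerically with the stated 31.6 MHz comb (the offset frequency and line index below are made-up example values): doubling comb line n and beating it against line 2n isolates f_CEO, since 2(f_CEO + n·f_rep) − (f_CEO + 2n·f_rep) = f_CEO.

```python
# Toy illustration of f-2f self-referencing for a 31.6 MHz comb.
f_rep = 31.6e6   # repetition rate, Hz (from the question)
f_ceo = 5.2e6    # carrier-envelope offset frequency, Hz (made-up example)

n = 4_000_000    # comb line index somewhere in the octave-spanning spectrum
f_n = f_ceo + n * f_rep          # comb line n
f_2n = f_ceo + 2 * n * f_rep     # comb line 2n, one octave up

beat = 2 * f_n - f_2n            # beat between doubled line n and line 2n
# The repetition-rate terms cancel, leaving only the offset frequency.
```

Note that with a 31.6 MHz spacing the beat note must be filtered within a <15.8 MHz window around f_CEO, which is why the detection bandwidth and RF filtering matter as much as the dichroic separation of the f and 2f arms.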

There is no doubt that VASP can be used to obtain optical properties in the visible range (on the order of eV). Can someone tell me how to calculate optical properties only in the THz range (on the order of meV)? One may argue that VASP outputs optical properties over the whole range, but that would cost a lot of computation time unnecessarily.

The term "phase" has always confused me. When we record images, we say that we have recorded the amplitude and phase. The amplitude I can relate to and physically understand through the intensity: as the intensity increases, the amplitude also increases. But I still cannot digest the phase term; I am not able to understand it physically the way I understand amplitude. Can anyone explain this?

I have studied the mathematics of phase, but I am not able to relate it to anything physical.
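One way to make the idea concrete is a toy numerical example: amplitude sets how strong each wave is, while phase sets where the wave is in its cycle. A detector only sees intensity, but the *relative* phase of two waves survives in their interference, I = A1² + A2² + 2·A1·A2·cos(Δφ); this is what holography records:

```python
import numpy as np

# Two unit-amplitude waves: the detected intensity depends on their
# relative phase even though neither wave's amplitude changes.
A1, A2 = 1.0, 1.0

for dphi in (0.0, np.pi / 2, np.pi):
    I = A1**2 + A2**2 + 2 * A1 * A2 * np.cos(dphi)
    print(f"phase difference {dphi:.2f} rad -> intensity {I:.2f}")
# dphi = 0  : waves in step (crests aligned)      -> bright (I = 4)
# dphi = pi : waves out of step (crest on trough) -> dark   (I = 0)
```

So phase is the "timing" of the wave's oscillation: invisible in a single intensity measurement, but it determines where the energy goes once two waves overlap.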