Optics - Science topic
Explore the latest questions and answers in Optics, and find Optics experts.
Questions related to Optics
The sample is a silicon wafer; as the illumination source approaches perpendicular incidence, the surface is masked by the specular reflection.
Dear colleagues,
I’ve created a video on YouTube simulating diffraction phenomena and illustrating how it differs from wave interference.
I hope this visual approach offers a clear perspective on the distinctions between these effects.
Let's say I have a mode-locked linear-cavity fibre laser with 3 m of PM980 fibre used to connect the components within the cavity. I am also using a chirped fibre Bragg grating (CFBG) for dispersion compensation.
PM980 has a GVD of 0.014 ps^2/m.
The CFBG has a D parameter of 0.42 ps/nm and a reflection bandwidth of 9 nm.
The laser pulse has a FWHM bandwidth of 6 nm.
My first question is:
how do I convert the ps/nm value of the CFBG into ps^2/m?
Is it as simple as using β2 = −(λ^2 / 2πc)·D? (Since D is given in ps/nm, do I need to multiply it by the pulse's bandwidth or by the CFBG bandwidth?)
Second question:
The PM980 inside the cavity is 3 m long. Since the round-trip length in a linear cavity is 2L, should one multiply by 2 × 3 m to calculate the total group-delay dispersion?
Thanks in advance!
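A minimal numeric sketch of the conversion, assuming an operating wavelength around 1030 nm (PM980 suggests a Yb-band laser; adjust as needed). Note that the D parameter of a CFBG is the total dispersion of the grating, so it converts directly to a total GDD in ps^2 rather than to a per-metre β2:

```python
import numpy as np

c   = 299_792_458.0            # m/s
lam = 1030e-9                  # assumed operating wavelength (m)

# CFBG: D (ps/nm) -> total GDD (s^2) via beta2 = -lambda^2 D / (2 pi c)
D_si     = 0.42e-12 / 1e-9     # 0.42 ps/nm expressed in s/m
gdd_cfbg = -(lam**2 / (2 * np.pi * c)) * D_si           # s^2 (total)

# PM980: beta2 (ps^2/m) times the round-trip fibre length (2 x 3 m)
beta2     = 0.014e-24          # 0.014 ps^2/m in s^2/m
gdd_fibre = beta2 * 2 * 3.0    # s^2, fibre traversed twice per round trip

print(f"CFBG GDD : {gdd_cfbg * 1e24:+.3f} ps^2 per reflection")
print(f"fibre GDD: {gdd_fibre * 1e24:+.3f} ps^2 per round trip")
# Whether the CFBG contributes once or twice per round trip depends on the
# cavity layout (once if it serves as an end mirror) - an assumption here.
```

No multiplication by the pulse or grating bandwidth is needed for the GDD itself; the bandwidth only matters when estimating the resulting group-delay spread, Δτ ≈ D·Δλ.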
How can I calculate total EQE when I have the absorption of the first two modes of the system and their S parameters? Do I need to use S parameters at all? (I am using CST Microwave Studio for simulations.)
I am interested in characterising the differences and similarities among metalenses and their advantages in current and emerging applications, and in identifying some of their future improvements and characteristics.
Hi All,
I am trying to generate the 3D corneal surface from Zernike polynomials. I am using the following steps; can anyone please let me know whether they are accurate? (A sketch of this pipeline appears below.)
Step 1: Converted the cartesian data (x, y, z) to polar data (rho, theta, z)
Step 2: Normalised the rho values so that they are less than one
Step 3: Based on the order, calculated the Zernike polynomials (Zpoly), (for example: if the order is 6, the number of polynomials is 28 )
Step 4: Zfit = C1 * Z1 + C2 * Z2 + C3 * Z3 + ......... + C28 * Z28
Step 5: Using regression analysis, calculated the coefficient (C) values
Step 6: Calculated the error between the predicted value (Zfit) and the actual elevation value (Z)
Step 7: Finally, converted the polar data (rho, theta, Zfit) to Cartesian coordinates to get the approximated corneal surface
Thanks & Regards,
Nithin
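A minimal sketch of steps 1-6 in Python/NumPy, fitting the coefficients by linear least squares; the Zernike ordering and (lack of) normalisation here are assumptions, so match them to whatever convention you compare against:

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) from the standard factorial series."""
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        R += ((-1)**k * factorial(n - k)
              / (factorial(k)
                 * factorial((n + abs(m)) // 2 - k)
                 * factorial((n - abs(m)) // 2 - k))) * rho**(n - 2 * k)
    return R

def zernike(n, m, rho, theta):
    """Unnormalised Zernike polynomial Z_n^m on the unit disk."""
    if m >= 0:
        return zernike_radial(n, m, rho) * np.cos(m * theta)
    return zernike_radial(n, -m, rho) * np.sin(-m * theta)

def fit_corneal_surface(x, y, z, max_order=6):
    rho = np.hypot(x, y)                   # Step 1: Cartesian -> polar
    theta = np.arctan2(y, x)
    rho = rho / rho.max()                  # Step 2: normalise to unit disk
    cols = [zernike(n, m, rho, theta)      # Step 3: 28 terms for order 6
            for n in range(max_order + 1)
            for m in range(-n, n + 1, 2)]
    A = np.column_stack(cols)
    C, *_ = np.linalg.lstsq(A, z, rcond=None)   # Steps 4-5: least squares
    z_fit = A @ C                               # Step 6: fit and residual
    rms = np.sqrt(np.mean((z - z_fit)**2))
    return C, z_fit, rms
```

The polar-to-Cartesian conversion in step 7 is then just x = ρ·r_max·cos(θ), y = ρ·r_max·sin(θ), with z = Zfit.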
Does Wolfram prefer quantum mechanics or relativity? Why?
The diffraction of light has traditionally been attributed to its wave nature, since there seemed to be no other way to describe the phenomenon in terms of its particle nature; consequently, light was said to exhibit wave-particle duality.
The primary challenge in analyzing the shadow blister arises either when the transverse distance between the two edges along the X-axis is large, or when the slit width reaches zero and the secondary barrier overlaps the primary barrier; in these cases the "Fresnel integral" remains valid. In such scenarios, the phenomenon can also be interpreted using traditional ray theory.
As the transverse distance decreases to approximately a millimeter, the validity of the "Fresnel Integral" diminishes. Regrettably, this narrow transverse distance has often been overlooked.
This article explores various scenarios where the transverse distance is either large or less than a millimeter, and where the secondary barrier overlaps the primary barrier.
Notably, complexity arises when the transverse distance is very small. In such conditions, the Fourier transform is valid only if we consider a complex refractive index, indicating an inhomogeneous fractal space with a variable refractive index near the surface of the obstacles. This variable refractive index introduces a time delay in the temporal domain, resulting in a specific dispersion region underlying the diffraction phenomenon.
Refer to:
http://www.ej-physics.org/index.php/ejphysics/article/view/304
The theme of diffraction typically involves a small aperture or obstacle. Here I would like to share a video, taken a few days ago, showing that diffraction can similarly be produced by macroscopic objects:
I hope you can explain this phenomenon with wave-particle duality or quantum mechanics. However, I can simply interpret it with my own idea of inhomogeneously refracted space at:
The shadows of two objects undergo peculiar deformation when they intersect, regardless of the distance between the objects along the optical axis:
The transverse resonance condition for the single-layer waveguide (Fig. 3) has been deduced, as shown in Fig. 2; it contains the phase shifts caused by reflection and by the optical path difference. Only light that satisfies the equation in Fig. 2 can propagate through the waveguide. Is it possible to derive a similar equation for a double-layer waveguide?
I am setting up a simulation in which I want to see the reflectance from an array of nanoparticles using the COMSOL Wave Optics module. I want to see the reflectance for co- and cross-polarized light. For example, let's say the incident beam is x-polarized; I want to see the reflectance separately for x- and y-polarized scattered light. I can't find a way to do this. I can get the total reflectance using ewfd.Rport_1 or ewfd.S11, but I don't see a way to get the same quantity for a particular polarization.
Any help will be greatly appreciated.
Thanks
I've come across a formula, n = 1/Ts + sqrt(1/Ts - 1), where n is the refractive index and Ts is the transmittance. Is this formula valid? How is it arrived at?
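For comparison, here is a hedged derivation of the commonly quoted version of this relation. It assumes a lossless thick slab whose transmittance Ts includes the incoherently summed multiple reflections between its two Fresnel surfaces; note the square inside the root, which differs from the formula as quoted above:

```latex
T_s = \frac{(1-R)^2}{1-R^2} = \frac{1-R}{1+R} = \frac{2n}{n^2+1},
\qquad R = \left(\frac{n-1}{n+1}\right)^2
\quad\Longrightarrow\quad
n^2 - \frac{2}{T_s}\,n + 1 = 0
\quad\Longrightarrow\quad
n = \frac{1}{T_s} + \sqrt{\frac{1}{T_s^2} - 1}
```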
Hello all,
I'm currently researching metalenses and facing an intriguing challenge.
In my simulations using Lumerical FDTD, based on methods from DOI: 10.1038/ncomms8069, I'm trying to calculate the focusing efficiency of metalenses. My process involves placing an aperture at the source plane, with PML boundaries, and measuring the total intensity at the focal point. Initially I did this with only the glass substrate, then repeated it with both glass and nanopillars, measuring over an area about three times the FWHM at the focal point.
Here's where it gets puzzling: The intensity with just the glass substrate is consistently lower than with both glass and nanopillars. Interestingly, I also tried the process without any glass substrate at the incident, yet the focal point intensity remained significantly higher than expected.
Could you offer any insights or thoughts on why this might be happening? Your advice or any pointers towards relevant resources would be invaluable.
Thank you for your time and consideration.
Best regards,
Hi all,
I am trying to calculate the curvatures of the cornea and compare them with Pentacam values. I have the Zernike equation in polar coordinates (Zfit = f(r, theta)). Can anybody let me know the equations for calculating the curvatures? (The standard meridional formulas are sketched below.)
Thanks & Regards.
Nithin
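A hedged note on the formulas commonly used in corneal topography, assuming the fitted surface Zfit = f(r, θ) is differentiable along each meridian (fixed θ), with f_r and f_rr its first and second radial derivatives:

```latex
\kappa_{\mathrm{tangential}} = \frac{f_{rr}}{\left(1+f_r^{2}\right)^{3/2}},
\qquad
\kappa_{\mathrm{sagittal}} = \frac{f_r}{r\,\sqrt{1+f_r^{2}}}
```

The corresponding radii of curvature are R = 1/κ, and Pentacam-style keratometric readings then use K = (n_k - 1)/R with the keratometric index n_k = 1.3375, a convention you would need to match.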
Hi there,
I hope you are doing well.
In the lab we have different BBO crystals; however, in the past they were not marked, so we don't know which crystal is which. I would appreciate it if somebody has an idea about how to measure the thickness of BBO crystals.
The second question is: are the BBO crystals sandwiched between two glass plates or not? If so, does the measurement become more complicated?
Best regards,
Aydin
How else can we explain :
Imprimis: That a light ray has different lengths for different observers. (cf. B.)
ii. That the length of a light ray is indeterminate? Both gigantic and nothing, within the Einstein train-and-embankment carriage. (cf. B.)
iii. That a light ray can be both bent and straight: bent for one observer, and straight for another. (cf. C.)
iv. That a light ray "bends" mid-flight in an effort to be consistent with an Absolute event which lies in the future. (cf. C.)
v. That these extraordinary things -- this extraordinary behaviour (including the "constancy of speed") -- occur so that reality is consistent among the observers, in the future. (cf. D, B, C)
vi. That light may proceed at different rates to the same place, wholly on account of the reality at that place having to be consistent among the observers. (cf. D, A)
---------------------------------------------------------
B. --
C.--
D.--
Hello everyone,
I have made several optical phantoms with different weight ratios of ink in PDMS, from 0 wt% to 5 wt%. I have measured the transmission (%) and reflection (%) of each sample.
From there I calculated the absorbance with the Beer-Lambert law, A = log(I0/I), with I0 being the transmission with 0 wt% of ink and I the transmission of the sample of interest.
I can therefore get the absorption coefficient of the phantoms with the formula μa = A/thickness.
Therefore I have a linear relationship between the weight percentage of the phantoms and their absorption coefficient.
Now my issue is that I want to create a phantom of 2 cm thickness with a known ratio of ink to PDMS.
Should I assume the absorption coefficient will not change from the 2 mm sample to the 2 cm one?
Otherwise, how do I determine the absorption coefficient of my new phantom?
Of course, I cannot measure the transmission of this sample as it is too thick now.
Thank you for your help!
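A minimal sketch of the scaling logic, assuming μa is an intrinsic property of the ink/PDMS mixture (i.e. independent of sample thickness, which holds as long as scattering is negligible and the Beer-Lambert regime applies). All transmission numbers below are placeholders for the measured data:

```python
import numpy as np

thickness_mm = 2.0                                             # thin samples
wt_percent = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])          # ink loading
T_percent  = np.array([90.0, 60.0, 40.0, 27.0, 18.0, 12.0])    # placeholders

# Beer-Lambert: A = log10(T0/T); mu_a = A / L (decadic absorption, mm^-1)
A    = np.log10(T_percent[0] / T_percent)
mu_a = A / thickness_mm

# Linear calibration of mu_a versus ink loading
slope, intercept = np.polyfit(wt_percent, mu_a, 1)

# Predict the 2 cm (20 mm) phantom at a chosen loading; mu_a is assumed
# unchanged, only the path length L grows in T = T0 * 10^(-mu_a * L)
target_wt = 2.5
mu_target = slope * target_wt + intercept
T_20mm    = T_percent[0] * 10 ** (-mu_target * 20.0)
print(f"mu_a = {mu_target:.4f} mm^-1 -> predicted T(2 cm) = {T_20mm:.3g} %")
```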
I have a laser with a spectral bandwidth of 19 nm that is linearly chirped. The Fourier-transform limit is ~82 fs, assuming a Gaussian profile. I have built a pulse compressor based on the available transmission grating (1000 lines/mm, see the attachment); however, I noticed that the minimum achievable dispersion of the compressor (a further decrease is limited by the distance between the grating and the horizontal prism) is greater than what is required for the optimal pulse compression supported by the optical bandwidth. Is there a way to decrease the dispersion further in this setup, or are there other compressor configurations using a single transmission grating that might offer more flexible dispersion control?
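For reference, a hedged sketch of the Treacy grating-pair GDD, treating the folded single-grating setup as an equivalent grating pair; it makes the two accessible knobs (separation and incidence angle) explicit. All geometry values are assumptions to be replaced with the real compressor's:

```python
import numpy as np

lam = 1.03e-6              # center wavelength (m), assumption
c   = 299_792_458.0
d   = 1e-3 / 1000          # groove spacing for 1000 lines/mm
G   = 0.05                 # perpendicular grating separation (m), assumption
theta_i = np.radians(30)   # incidence angle, assumption
m = 1                      # diffraction order

sin_d = m * lam / d - np.sin(theta_i)          # grating equation
gdd = (-(m**2 * lam**3 * G) / (2 * np.pi * c**2 * d**2)
       / (1 - sin_d**2)**1.5)                  # Treacy GDD per pass of pair
print(f"GDD = {gdd * 1e30:.0f} fs^2 per pass")
```

Sweeping G and theta_i in this expression shows quickly whether any mechanically allowed geometry reaches the target GDD before hardware changes are considered.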
There is a fiber-coupled EOM set up after the double-pass AOM. The intensity of the light is constant (checked with a power meter) before the input of the EOM, but there is a 10% fluctuation in the power just after the EOM, because of which I am unable to lock to the signal I see on the oscilloscope. The signal fluctuates up and down on the screen. Is there a solution, or could I be doing something wrong? I have realigned all the optical elements numerous times, but I am still facing the problem.
I am developing a mathematical model, with MATLAB code, of the optical coherence tomography signal for testing B-scan extraction algorithms. I am now struggling to take into account the optical subsystems, such as the sample scanner and the spectrometer, which distort the optical signal. I am considering simulating this directly in MATLAB. Would it be better to use dedicated optical-design software and then transfer some coefficients into the MATLAB code to emulate the optics? (I suppose I could do it with a 2D Fourier transform, because I implemented all other parts of the OCT system in the spectral domain.) Are there any code examples or tutorials?
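Regarding code examples: a minimal Python/NumPy sketch of the Fourier-domain approach (straightforward to translate to MATLAB), where the optics are emulated by multiplying the 2D spectrum by a transfer function. The ideal low-pass pupil here is a stand-in for whatever aberration data the real scanner/spectrometer model provides:

```python
import numpy as np

def apply_optics(img, cutoff=0.2):
    """Emulate an imaging system by filtering in the 2D Fourier domain.

    `cutoff` is a normalised spatial frequency (cycles/pixel) acting as an
    ideal pupil; replace H with exp(1j * phase) built from measured Zernike
    or ray-trace data for a realistic model of the optics.
    """
    ny, nx = img.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    H = (np.hypot(FX, FY) <= cutoff).astype(float)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

blurred = apply_optics(np.random.rand(256, 256))   # toy en-face stand-in
```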
So-called "Light with a twist in its tail" was described by Allen in 1992, and a fair sized movement has developed with applications. For an overview see Padgett and Allen 2000 http://people.physics.illinois.edu/Selvin/PRS/498IBR/Twist.pdf . Recent investigation both theoretical and experimental by Giovaninni et. al. in a paper auspiciously titled "Photons that travel in free space slower than the speed of light" and also Bereza and Hermosa "Subluminal group velocity and dispersion of Laguerre Gauss beams in free space" respectably published in Nature https://www.nature.com/articles/srep26842 argue the group velocity is less than c. See first attached figure from the 2000 overview with caption "helical wavefronts have wavevectors which spiral around the beam axis and give rise to an orbital angular momentum". (Note that Bereza and Hermosa report that the greater the apparent helicity, the greater the excess dispersion of the beam, which seems a clue that something is amiss.)
General Relativity assumes light travels in straight lines in local space. Photons can have spin, but not orbital angular momentum. If the group velocity is really less than c, then the light could be made to appear stationary or move backward by appropriate reference frame choice. This seems a little over the top. Is it possible what is really going on is more like the second figure, which I drew, titled "apparent" OAM? If so, how did the interpretation of this effect get so out of hand? If not, how have the stunning implications been overlooked?
I am going to make a setup for generating and manipulating time bin qubits. So, I want to know what is the easiest or most common experimental setup for generating time bin qubits?
Please share your comments and references with me.
thanks
The intensity of each ray in the Ray Optics module of COMSOL is easily obtained after ray tracing. I need to find the radiation intensity at the mesh points, which would be related to all the rays crossing a point. Does anybody know how I can get continuous contours of the radiation intensity? Should I use an accumulator? If yes, what should the settings of the accumulator be?
1. The necessity of a polarization controller for single-mode fiber. Is a polarization controller necessary for single-mode fibers? What happens when you don't have one?
2. The optical path matching problem. How do I ensure that the optical path lengths of the two arms match; are there any tips for the adjustment process? If the optical path difference exceeds the imaging distance, will interference fringes fail to appear?
Only these questions for the time being; if there are more, you are welcome to point them out.
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and viewed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need to have sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several seconds per frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
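One hedged sanity check on alignment criticality: for two beams crossing at a small angle θ, the fringe period on the screen is Λ = λ / (2 sin(θ/2)), so even modest angular misalignment at 1310 nm produces fringes far finer than a fuzzy blob would reveal. A quick estimate:

```python
import numpy as np

lam = 1310e-9                                 # source wavelength (m)
for theta_deg in (0.05, 0.1, 0.5, 1.0, 2.0):
    theta = np.radians(theta_deg)
    spacing = lam / (2 * np.sin(theta / 2))   # two-beam fringe period
    print(f"{theta_deg:4.2f} deg -> fringe period {spacing * 1e6:8.1f} um")
```

At 1 degree the period is already ~75 um, which can fall below the camera's effective resolution on the screen; overlap to well under 0.1 degree is typically needed for fringes that are easily visible.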
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the temperature of the food obtained was quite good (i.e. close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's reported temperature suddenly jumped to 40 °C, while the thermometer confirmed that the food's temperature was still around 25 °C.
Since I suspected the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens, to ensure that the camera could only see the food through the hole and not the hot pan any more. The food temperature obtained by the camera was correct again, but it goes wrong again if I take the paper away.
I would appreciate any explanation of this phenomenon, and any solution, from either a physics or an optics standpoint.
For my experiments I need circularly polarized laser light at 222 nm.
Can anyone tell me whether there is any important difference between a λ/4 wave plate and a Fresnel rhomb? Prices seem to be comparable. My intuition tells me that the wave plate could be significantly less durable, due to optical damage of the UV AR coatings on all of the surfaces. It also seems much harder to manufacture a UV wave plate, since there are very limited light sources for testing, while a Fresnel rhomb seems easier to produce and is nearly achromatic. Which should I choose?
There are many fields where the light field can be utilized. For example, it is utilized in microscopy [1] and for vision-based robot control [2]. Which additional applications do you know of?
Thank you in advance!
[1] Li, H., Guo, C. and Jia, S., "High-resolution light-field microscopy," Frontiers in Optics, FW6D. 3 (2017).
[2] Tsai, D., Dansereau, D. G., Peynot, T. and Corke, P. , "Image-Based Visual Servoing With Light Field Cameras," IEEE Robotics and Automation Letters 2(2), 912-919 (2017).
Hi. I have a question. Do substances (for example, Fe or benzene) in trace amounts (for example, micrograms per liter) cause light refraction? If they do, is this refraction large enough to be detected? And is it unique to each substance?
I also need to know: if we have a solution with different substances, can refraction help us determine what the substances are? Can it measure their concentrations?
Thanks for your help
I have profiled a collimated pulsed laser beam (5 mm) at different pulse energies by delaying the Q-switch, and I found the profile to be approximately Gaussian. Now I have placed a negative meniscus lens to diverge the beam, and I put a surface where the beam spot size is 7 mm. Should the final beam profile (at the 7 mm spot size) still be Gaussian, or will the negative lens change the Gaussian profile? Is there any way to calculate the intensity profile theoretically, without doing the beam profiling again by methods like the razor-blade method? Thanks.
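A hedged sketch using the complex-q (ABCD) formalism: an ideal thin lens maps a Gaussian beam onto another Gaussian beam, so to the extent the input is Gaussian and lens aberrations are small, the profile at the 7 mm plane stays Gaussian and can be computed rather than re-profiled. All numbers below are assumptions to be replaced with the real ones:

```python
import numpy as np

lam = 1064e-9                  # wavelength (m); set to the actual laser
w0  = 2.5e-3                   # 5 mm collimated beam -> 2.5 mm waist radius
f   = -0.20                    # negative lens focal length (m), hypothetical

q = 1j * np.pi * w0**2 / lam   # q at the collimated input (waist)
q = q / (1 - q / f)            # thin lens ABCD: q' = q / (1 - q/f)

def radius(q, z):              # 1/e^2 radius after propagating z from lens
    return np.sqrt(-lam / (np.pi * np.imag(1 / (q + z))))

for z in np.linspace(0, 2.0, 5):
    print(f"z = {z:4.2f} m: w = {radius(q, z) * 1e3:6.3f} mm")
```

In practice, it is aberrations of the lens (especially at small f-numbers) that would distort the profile, not the negative focal length itself.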
I would like to calculate the mode field diameter (MFD) of a step-index fiber at different taper ratios. I understand that, at a particular wavelength, the MFD decreases as the fiber is tapered, and it may increase if it is tapered further. I am looking to reproduce the attached figures from US Patent 9946014. Is there a formula I may use, or does it involve more complex calculations?
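A first-pass sketch using the Marcuse approximation for the fundamental-mode field radius, with SMF-28-like parameters as assumptions. The fit is only accurate for roughly 0.8 < V < 2.5, and a strongly tapered fiber becomes cladding-air guided, so treat the small-ratio rows qualitatively (they do, however, reproduce the eventual MFD increase):

```python
import numpy as np

lam = 1.55e-6      # wavelength (m), assumption
a0  = 4.1e-6       # untapered core radius (m), SMF-28-like assumption
NA  = 0.12         # numerical aperture, assumption

for ratio in (1.0, 0.8, 0.6, 0.4, 0.2):
    a = a0 * ratio                                   # local core radius
    V = 2 * np.pi * a * NA / lam                     # local V number
    w = a * (0.65 + 1.619 / V**1.5 + 2.879 / V**6)   # Marcuse mode radius
    print(f"taper ratio {ratio:.1f}: V = {V:.2f}, MFD = {2*w*1e6:7.2f} um")
```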
Dear all,
Kindly provide your valuable comments based on your experience with surgical loupes
- Magnification (2.5 x to 5x)
- Working distance
- Field of vision
- Galilean (sph/cyl) vs Kepler (prism)
- TTL vs non TTL/flip
- Illumination
- Post use issues (eye strain/ headache/ neck strain etc)
- Recommended brand
- Post sales services
Thank you
#Surgery #Loupes #HeadandNeck #Surgicaloncology #Otolaryngology
I am using a 550 mW green laser (532 nm) and I want to measure its intensity after it has passed through several lenses and glass windows.
I found a ThorLabs power meter but it is around $1200.
Any cheaper options to measure the intensity of the laser?
(high accuracy is not required)
I want to know whether the number of fringes and their shape are important factors for the accuracy of the phase determination.
Solvents for the immersion oil are carbon tetrachloride, ethyl ether, Freon TF, heptane, methylene chloride, naphtha, turpentine, xylene, and toluene.
What is the best of these to clean the surface? Toluene? Heptane? I'd like to stay away from more dangerous chemicals if possible and have something that evaporates easily.
I would like to calculate the return loss for a splice/connection between two different fibers. One of the fibers has a larger core diameter and a larger cladding diameter than the other. I was considering the approach laid out in this paper, which deals with identical fibers: M. Kihara, S. Nagasawa and T. Tanifuji, "Return loss characteristics of optical fiber connectors," Journal of Lightwave Technology, vol. 14, no. 9, pp. 1986-1991, Sept. 1996, doi: 10.1109/50.536966.
link :
I added a few screenshots from the paper.
Hello
I am new to the field and I would like to ask: what are the criteria for saying whether a photoswitchable compound or optogenetic molecule has fast kinetics and high spatiotemporal resolution in cell-free, cellular/in vitro, in vivo and ex vivo models? Is there a consensus criterion to quantitatively establish that a compound has fast kinetics and high spatiotemporal resolution in these models?
For instance, if a compound becomes fully activated in less than 30 min when turned on by light, does it have fast kinetics?
On the other hand, if a compound can precisely activate certain neuronal regions in the brain but has off-target activations in the surrounding regions up to about 20 µm from the region of activation, does it have high spatiotemporal resolution?
I may have mixed up some terms here; I would be glad if this could be clarified in the discussion.
Thanks.
I want to calculate the propagation constant difference for LP01 and LP02 modes for a tapered SMF-28 (in both core and cladding).
Is there a simple formula that I can use? My goal is to see if the taper profile is adiabatic or not.
I am using this paper for my study : T. A. Birks and Y. W. Li, "The shape of fiber tapers," in Journal of Lightwave Technology, vol. 10, no. 4, pp. 432-438, April 1992, doi: 10.1109/50.134196.
equation in attached figure
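A hedged numerical sketch of how β for the LP01 and LP02 modes can be obtained from the standard two-layer step-index eigenvalue equation. In a deeply tapered SMF-28 the original core is negligible and the guide is effectively silica cladding surrounded by air, so the indices and radius below are assumptions for a taper waist (for the core-guided regime, substitute core/cladding indices and the local core radius):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv, kv

lam = 1.55e-6
n1, n2 = 1.444, 1.0          # silica cladding / air (taper-waist assumption)
a = 2.0e-6                   # local waist radius (assumption)
k0 = 2 * np.pi / lam
V = k0 * a * np.sqrt(n1**2 - n2**2)

def eig(u):                  # l = 0 (LP0m) characteristic equation
    w = np.sqrt(V**2 - u**2)
    return u * jv(1, u) / jv(0, u) - w * kv(1, w) / kv(0, w)

# locate roots by scanning for sign changes that are not poles of J0
us = np.linspace(1e-6, V * (1 - 1e-9), 50000)
vals = eig(us)
roots = [brentq(eig, us[i], us[i + 1])
         for i in range(len(us) - 1)
         if np.sign(vals[i]) != np.sign(vals[i + 1])
         and abs(vals[i]) < 10 and abs(vals[i + 1]) < 10]

betas = [np.sqrt((k0 * n1)**2 - (u / a)**2) for u in roots]
if len(betas) >= 2:
    d_beta = betas[0] - betas[1]          # LP01 - LP02
    print(f"beat length zb = {2 * np.pi / d_beta * 1e6:.1f} um")
    # Birks & Li adiabaticity: local taper angle must satisfy
    # |da/dz| << a * d_beta / (2*pi)
    print(f"max adiabatic taper angle ~ {a * d_beta / (2*np.pi):.2e} rad")
```

The Birks & Li criterion then compares the local taper angle |da/dz| against a·Δβ/2π, i.e. against the local beat length between the two modes, along the measured taper profile.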
In my experiment I have a double-clad fiber spliced onto a reel of SMF-28. The double-clad fiber has a total cladding diameter about 2 times larger than that of the SMF-28. The source is an SLD, and there is a polarization scrambler after the source, which feeds one end of the reel of SMF-28. The output power from the 1 km long reel is X mW. But when I splice a half-meter length of the specialty fiber to the reel output and measure the power, it is 0.9Y mW, where Y is the power output after the polarization scrambler (Y = 3.9X). I am not sure why the power reading suddenly increased.
My setup is as follows: elliptically polarized light at the input -> Faraday rotator -> linear polarizer (LP) -> photodiode.
The LP is set such that the power output is at a minimum. I use a lock-in amplifier to measure the power change due to the Faraday effect. I have a more or less accurate measurement of the magnetic field and the length of the fiber. The experimental Faraday rotation (rotation angle θ = Verdet constant × magnetic field × fiber length) is larger than the theoretical prediction, so I was wondering whether I am observing the effect of the elliptical polarization at the input to the system.
1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)
2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect?(Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)
According to ASHRAE, there are values of tb and td for Atlantic; I want their values by city, or by latitude and longitude.
Thanks
A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern that is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modeling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
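A minimal computer-generated-hologram sketch of the "model two wavefronts and add them" step described above: a point-source object wave is superposed with a tilted plane reference wave and the intensity is recorded. The grid size, wavelength and geometry are arbitrary assumptions, chosen so the fringes stay adequately sampled:

```python
import numpy as np

N, pitch, lam = 1024, 8e-6, 633e-9       # grid, pixel pitch, HeNe wavelength
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
k = 2 * np.pi / lam

z0  = 0.2                                             # object point 20 cm away
obj = np.exp(1j * k * np.sqrt(X**2 + Y**2 + z0**2))   # spherical object wave
ref = np.exp(1j * k * np.sin(np.radians(1.0)) * X)    # plane wave, 1 deg tilt

hologram = np.abs(obj + ref)**2          # recorded interference pattern
# Printing `hologram` as a mask and re-illuminating it with `ref` reconstructs
# the object wave (plus the usual twin image and zero order).
```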
Why can't the two-intensity combination algorithm, as implemented in MATLAB, be adapted to Octave to create an interference scheme?
If I were to make a half dome as an umbrella to protect a city from rain and sun, how would I proceed? Are there special materials, or do you have an idea of how to make this? What do you say about an energy shield?
So far, I have made the simulation for the thermal and deformation analysis. I know that the neff-T graph can be plotted, and the TEM modes as well.
For an actual laser system, the ellipticity of a Gaussian beam is as in the attached picture (measured by me). Near the laser focus the ellipticity is high; it drops drastically within a Rayleigh length and then increases again. This is a "simple astigmatic" beam. Can anyone explain this variation?
P.S. In the article (DOI: 10.2351/1.5040633), the author also found a similar variation but did not explain the reason.
Thanks in advance
What are the advantages of using SiO2 substrate over Si substrate for monolayer graphene in photonics?
I am interested in the technique of obtaining high-quality replicas from diffraction gratings, as well as holograms with surface relief. What materials are best used in this process? Also of interest is the method of processing the surface of the grating to reduce adhesion in the process of removing a replica from it.
Hello dear friends,
I am trying to add our own optical components in the sample-compartment region of our Nicolet 380 FTIR. Our sample is small, so we need to shrink the size of the beam spot with an iris to avoid signals from the substrate.
We therefore want to use a mid-IR sensor card to help us find where the light beam is and how large it is. However, the IR sensor card does not show any color change when I put it in the light path (with the light source on, of course). The mid-IR sensor card we use can detect light in the wavelength range of 5-20 µm; the minimum detectable power density is 0.2 W/cm².
Did I miss anything here? Do you have any suggestions for how I should detect the beam spot and determine its position and size?
Any suggestions will be highly appreciated! Thank you in advance!
Best regards,
Ziwei
For example, if the cavity length is 50 mm, what value should the thermal focal length have so that it does not affect the stability of the laser cavity: f > 50 mm or f < 50 mm?
Some 2D galvos have both axes on the horizontal plane. It seems much easier to manufacture. However, some high-end galvos such as those from Cambridge have one of the axes tilted by a small angle. What is the benefit of that?
I have a long length (L) of coiled SMF-28 on a spool and I want to measure the "beat length" (Lb) of the entire spool by some simple means, as follows: (1) inject linearly polarized broadband light (for example, from a superluminescent source); (2) record the optical spectrum using an optical spectrum analyzer (OSA) after transmission through the fiber and another polarizer; (3) that spectrum will exhibit oscillations with a period Δλ, from which the polarization beat length can be calculated using Lb = (Δλ/λ)·L.
My questions are: (a) what is the typical resolution of the OSA used in such measurements? (b) should I rotate the polarizer so that the power is maximized at the center wavelength of my source? (c) is there anything else I should consider?
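A quick numeric sketch relevant to question (a), assuming a 1550 nm source and plausible SMF-28 beat lengths; the spectral period Δλ = Lb·λ/L sets the OSA resolution needed (ideally a few times finer than Δλ):

```python
lam = 1.55e-6     # source center wavelength (m), assumption
L   = 1000.0      # fiber length on the spool (m), assumption

for Lb in (5.0, 10.0, 20.0):              # plausible SMF-28 beat lengths (m)
    d_lam = Lb * lam / L                  # expected spectral period
    print(f"Lb = {Lb:4.1f} m -> period = {d_lam * 1e12:5.1f} pm")
```

With a 1 km spool the periods come out at ~10-30 pm, implying an OSA resolution near 1 pm (0.001 nm); measuring a shorter length relaxes this proportionally.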
I'm curious whether anyone can share their measurements of the coupling loss as a function of the gap between two SMF FC/APC fibers at various wavelengths. If not, it would be great if you could refer me to a datasheet or a paper where this type of measurement was done.
Thanks!
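Absent measured data, a hedged Gaussian-mode estimate is straightforward and usually sits close to published gap-loss curves when the gap is index-matched; it ignores Fresnel reflections (relevant for an air gap) and any lateral or angular misalignment. The MFD values below are SMF-28-like assumptions:

```python
import numpy as np

def gap_loss_db(gap_m, mfd_m, lam_m, n_gap=1.0):
    w0 = mfd_m / 2
    zr = np.pi * w0**2 * n_gap / lam_m         # Rayleigh range in the gap
    eta = 1.0 / (1.0 + (gap_m / (2 * zr))**2)  # Gaussian overlap efficiency
    return -10 * np.log10(eta)

for lam, mfd in [(1310e-9, 9.2e-6), (1550e-9, 10.4e-6)]:  # SMF-28-like MFDs
    loss = gap_loss_db(50e-6, mfd, lam)
    print(f"{lam * 1e9:.0f} nm, 50 um gap: {loss:.3f} dB")
```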
Hi,
My input Jones vector is E1 = [0; 1], the Jones matrix is M = [a+ib, 0; 0, c+id], and the output is E2 = M·E1 = [0; x+iy]. Now I want to know the phase shift between the vertical and horizontal polarization components of the light wave.
E1 is 2x1, M is 2x2 and E2 is 2x1.
Is E2 elliptically polarized? But then it does not have any x component; I am confused.
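A small sketch with hypothetical matrix entries. Since M is diagonal and E1 has no x component, E2 is still linearly polarized along y; a relative x-y phase (and hence ellipticity) is only defined when both components are nonzero:

```python
import numpy as np

E1 = np.array([0, 1], dtype=complex)
a, b, c, d = 0.8, 0.3, 0.5, 0.6                 # hypothetical entries
M  = np.array([[a + 1j * b, 0], [0, c + 1j * d]])
E2 = M @ E1                                     # = [0, c + 1j*d]

# The x component is exactly zero, so no x-y phase difference exists;
# only the absolute phase of the y component is meaningful here.
print(E2, np.degrees(np.angle(E2[1])))
# For a general input [Ex, Ey], the relative phase would be:
# delta = np.angle(E2[1]) - np.angle(E2[0])
```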
I'm working on silicon-graphene hybrid plasmonic waveguides at 1.55 µm. For bilayer graphene my effective mode indices are close to those in the source that I'm using, but for trilayer graphene they are not acceptable. For modeling the graphene I use the relative permittivity or refractive index at different applied potentials (in eV).
I have attached my graphene relative permittivity and refractive index calculation code, and one of my COMSOL files related to Fig. 5.19b in the source.
Related question of mine with more detail: https://www.researchgate.net/post/How_to_simulate_bilayer_trilayer_graphene_waveguide_in_COMSOL
I'm planning to modify a finite tube length compound microscope to allow the use of "aperture reduction phase contrast" and "aperture reduction darkfield" according to the following sources:
Piper, J. (2009) Abgeblendeter Phasenkontrast — Eine attraktive optische Variante zur Verbesserung von Phasenkontrastbeobachtungen. Mikrokosmos 98: 249-254. https://www.zobodat.at/pdf/Mikrokosmos_98_4_0001.pdf (in German).
The vague instructions state:
"In condenser aperture reduction phase contrast, the optical design of the condenser is modified so that the condenser aperture diaphragm is no longer projected into the objective´s back focal plane, but into a separate plane situated circa 0.5 – 2 cm below (plane A´ in fig. 5), i.e. into an intermediate position between the specimen plane (B) and the objective´s back focal plane (A). The field diaphragm is no longer projected into the specimen plane (B), but shifted down into a separate plane (B´), so that it will no longer act as a field diaphragm.
As a result of these modifications in the optical design, the illuminating light passing through the condenser annulus is no longer stopped when the condenser aperture diaphragm is closed. In this way, the condenser iris diaphragm can work in a similar manner to bright-field illumination, and the visible depth of field can be significantly enhanced by closing the condenser diaphragm. Moreover, the contrast of phase images can now be regulated by the user. The lower the condenser aperture, the higher the resulting contrast will be. Importantly, halo artifacts can be reduced in most cases when the aperture diaphragm is partially closed, and potential indistinctness caused by spherical or chromatic aberration can be mitigated."
The author combined finite 160 mm tube length objectives, a phase contrast condenser designed for finite microscopes, and an infinity-corrected microscope to get the desired results.
However, how would one accomplish this in the simplest way possible?
I wanted to calculate the magnification of a system where I am using a webcam lens, and was wondering whether this could be done by applying the simple lens equation. If yes, then what would I consider my "v" to be in this case, since I'm dealing with a lens assembly (unknown number of lenses and unknown separations between them)? I only know the EFL.
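A hedged sketch of the usual answer: a multi-element assembly still obeys the Gaussian lens equation with its EFL, provided u and v are measured from the front and rear principal planes, whose locations inside a webcam lens are generally unknown without a datasheet or a calibration. With hypothetical numbers:

```python
# Gaussian lens equation with distances measured from the principal planes
f  = 4.0e-3                # effective focal length (m), hypothetical
so = 0.30                  # object distance from front principal plane (m)

si = 1 / (1 / f - 1 / so)  # 1/so + 1/si = 1/f
m  = -si / so              # transverse magnification
print(f"image distance = {si * 1e3:.2f} mm, magnification = {m:.4f}")
```

In practice, the easiest way around the unknown principal-plane locations is to calibrate: image a target of known size and take the magnification directly from the pixel measurements.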
It is known that for particles having sizes less than about one-tenth the wavelength of light, Rayleigh scattering occurs and the scattering is nearly symmetric between the forward and backward directions. But for particles whose sizes are comparable to the wavelength of light, the scattering is directed more towards the forward direction and hence is anisotropic.
I have tried Keller's, Tucker's and Barker's etchants, but they are not working. I am interested in obtaining optical micrographs, but I am not getting anything. :(
In any OSL phosphor, we require optical energy greater than the thermal trap depth of the trap for optical stimulation. For example, in the case of Al2O3:C we require a 2.6 eV photon to detrap an electron from a trap having a thermal trap depth of 1.12 eV. How are the two related to each other?
Today, sensors are usually interpreted as devices which convert different sorts of quantities (e.g. pressure, light intensity, temperature, acceleration, humidity, etc.), into an electrical quantity (e.g. current, voltage, charge, resistance, capacitance, etc.), which make them useful to detect the states or changes of events of the real world in order to convey the information to the relevant electronic circuits (which perform the signal processing and computation tasks required for control, decision taking, data storage, etc.).
If we think in a simple way, we can assume that actuators work the opposite direction to avail an "action" interface between the signal processing circuits and the real world.
If the signal processing and computation becomes based on "light" signals instead of electrical signals, we may need to replace today's sensors and actuators with some others (and probably the sensor and actuator definitions will also be modified).
- Let's assume a case where we need to convert pressure to light. One may prefer the simplest (hybrid) approach, which is to use a pressure sensor and then an electrical-to-optical transducer (e.g. an LED) to obtain the required new type of sensor. However, instead of this indirect conversion, if a more efficient or faster direct pressure-to-light converter (a new type of pressure sensor) were available, it might be more favorable. In the near future, we may need to use such direct transducer devices for low-noise and/or high-speed realizations.
(The example may not be a proper one but I just needed to provide a scenario. If you can provide better examples, you are welcome)
Most probably there are research studies ongoing in these fields, but I am not familiar with them. I would like to know about your thoughts and/or your information about this issue.
Does anybody know the maximum power of laser sources (QCLs, VECSELs, and so on) in the THz regime?
I'm trying to realize a nonlinear effect in the THz regime using a THz source, without using DFG or SFG. I need 200 mW or more for my device. Is that doable? Is there any source that can generate that much power?
Dear all,
I recently met a problem when using RCWA codes.
For the same structure, the FDTD solution took less time.
I need to retain a large number of orders in the RCWA calculation to reach a similar result.
So I have a question: how can I judge the accuracy of the simulation when I use RCWA codes, and how do I judge how many orders I need?
Thanks
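One standard, code-agnostic check is a convergence sweep: increase the number of retained Fourier orders until the observable stops changing. A sketch with a toy stand-in solver (replace `toy_solver` with a call into the actual RCWA code; the 1/N² convergence of the stand-in is purely illustrative):

```python
def converged_orders(run_rcwa, tol=1e-3, start=5, stop=41, step=4):
    """Sweep the order count until the observable changes by < tol (relative)."""
    prev = None
    for n in range(start, stop + 1, step):
        val = run_rcwa(num_orders=n)          # e.g. zeroth-order reflectance
        if prev is not None and abs(val - prev) <= tol * max(abs(val), 1e-12):
            return n, val
        prev = val
    raise RuntimeError("not converged; raise `stop` or loosen `tol`")

def toy_solver(num_orders):                   # stand-in, converges like 1/N^2
    return 0.3 + 0.5 / num_orders**2

print(converged_orders(toy_solver))
```

The converged-order count depends strongly on the structure (metals and high index contrast converge slowly), so the sweep should be repeated whenever the geometry or materials change.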
Hello all,
I have some idea of how to measure the external quantum efficiency of my perovskite LEDs, but I want to calibrate that setup, for which I want to measure the external quantum efficiency of a normal 5 mm LED. How should I go forward with it? All suggestions/help would be appreciated. Thank you.
Jitesh Pandya