
Optics - Science topic

Explore the latest questions and answers in Optics, and find Optics experts.
Questions related to Optics
  • asked a question related to Optics
Question
4 answers
The sample is a silicon wafer; as the illumination source approaches perpendicular incidence, the surface is masked by its own reflection.
Relevant answer
Answer
You could try inserting a sheet of white paper (not too thick) between your wafer and the light source. This should minimize the reflection from your wafer. Otherwise, perhaps try illumination at a very small incidence angle?
  • asked a question related to Optics
Question
1 answer
Dear colleagues,
I’ve created a video on YouTube simulating diffraction phenomena and illustrating how it differs from wave interference.
I hope this visual approach offers a clear perspective on the distinctions between these effects.
Relevant answer
Answer
Interference happens when two light waves meet and mix together. It is caused by two or more light waves coming together. Diffraction happens when a light wave bends around corners or through small openings. It is caused by light waves hitting an obstacle or passing through a small gap.
  • asked a question related to Optics
Question
1 answer
Let's say I have a mode-locked linear-cavity fibre laser with 3 m of PM980 fibre used to connect the components within the cavity. I am also using a chirped fibre Bragg grating (CFBG) for dispersion compensation.
PM980 has GVD of 0.014 ps^2/m
CFBG has D parameter = 0.42 ps/nm and reflection bandwidth of 9nm
laser pulse has FWHM width of 6nm
My first question is:
how do I convert the CFBG's ps/nm into ps^2/m?
is it as simple as using β2 = −λ^2·D/(2πc)? (Since D is given in ps/nm, do I need to multiply it by the pulse's bandwidth or by the CFBG bandwidth?)
Second question:
The PM980 inside the cavity is 3 m long. Since the round-trip length in a linear cavity is 2L, should one multiply by 2 × 3 m to calculate the total group-delay dispersion?
Thanks in advance!
Relevant answer
Answer
The nanometre unit is converted into metres and its square is taken in pixel terms. The reason for calculating with 2L is the mode-locking 2n (repetition); it is done for ordering the odd and even interference terms n−1 and n−2.
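Regarding the unit-conversion part of the question, the arithmetic can be checked numerically. A sketch in Python; note that a CFBG is a discrete element, so its D converts to a total GDD rather than a per-metre β2, and the 1030 nm centre wavelength is an assumption, since the question does not state it:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cfbg_gdd_ps2(d_ps_per_nm, wavelength_nm):
    """Convert a lumped grating dispersion D (ps/nm) into GDD (ps^2).

    The CFBG is a discrete reflector, so D converts to a total GDD:
    GDD = -D * lambda^2 / (2*pi*c).
    """
    d_si = d_ps_per_nm * 1e-12 / 1e-9                       # ps/nm -> s/m
    gdd_s2 = -d_si * (wavelength_nm * 1e-9) ** 2 / (2 * math.pi * C)
    return gdd_s2 / 1e-24                                   # s^2 -> ps^2

gdd_cfbg = cfbg_gdd_ps2(0.42, 1030.0)   # about -0.24 ps^2 (at the assumed wavelength)
gdd_fiber = 0.014 * (2 * 3.0)           # beta2 * round-trip length = +0.084 ps^2
net_gdd = gdd_cfbg + gdd_fiber
```

On this reading, neither bandwidth enters the unit conversion itself; multiplying D by a bandwidth would instead give a group-delay spread in ps.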
  • asked a question related to Optics
Question
1 answer
How can I calculate total EQE when I have the absorption of the first two modes of the system and their S parameters? Do I need to use S parameters at all? (I am using CST Microwave Studio for simulations.)
Relevant answer
Answer
The absorption and external quantum efficiency of solar cells vary with the refractive index, centre point, and focal length of the lens or optical instrument. The S-parameters can also change accordingly, and the calculation is made on that basis.
  • asked a question related to Optics
Question
4 answers
I am interested in defining the heterogeneity and similarities among metalenses and their advantages in current and new applications, and in identifying some of their future improvements and characteristics.
Relevant answer
Answer
Two years ago, I asked this question here.
I'm excited to share that this initial curiosity led me on a journey of research and discovery, resulting in the publication of the first book on metalenses in the world. The book is titled "Introduction to Metalens Optics" and has been published by the Institute of Physics in the UK.
It’s amazing to think that what started as a simple question has now led to this book, which provides a detailed look at the fundamental characteristics of metalenses.
For those interested, you can find more information about the book here: .
  • asked a question related to Optics
Question
2 answers
Hi All,
I am trying to generate the 3D corneal surface from the Zernike polynomials. I am using the following steps; can anyone please let me know whether they are accurate?
Step 1: Converted the cartesian data (x, y, z) to polar data (rho, theta, z)
Step 2: Normalised the rho values, so that they will be less than one
Step 3: Based on the order, calculated the Zernike polynomials (Zpoly), (for example: if the order is 6, the number of polynomials is 28 )
Step 4: Zfit = C1 * Z1 + C2 * Z2 + C3 * Z3 + ......... + C28 * Z28
Step 5: Using regression analysis, calculated the coefficient (C) values
Step 6: Calculated the error between the predicted value (Zfit) and the actual elevation value (Z)
Step 7: Finally, converted the polar data (rho, theta, Zfit) to Cartesian coordinates to get the approximated corneal surface
Thanks & Regards,
Nithin
Relevant answer
Answer
First, represent the Zernike polynomial in polar coordinates (ρ, θ) using the Zernike radial polynomials R_n^m(ρ) and the angular harmonics cos(mθ), sin(mθ). Then, evaluate the polynomial at a grid of points on a circular domain (choosing a radial and angular resolution). Finally, assemble the values into a 2D array representing the surface height at each point. You can use libraries like Python's NumPy and SciPy to perform these steps. For example, you can use the `numpy.meshgrid` function to create a grid of (ρ, θ) values, and evaluate the radial part with `numpy.polynomial.polynomial.polyval` once its monomial coefficients are known.
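The steps listed in the question (polar conversion, normalisation, polynomial evaluation, least-squares fit) can be sketched end to end; a minimal Python/NumPy illustration, with the demo surface and mode choices arbitrary:

```python
import math
import numpy as np

def zernike(n, m, rho, theta):
    """Unnormalised Zernike polynomial Z_n^m on the unit disc."""
    ma = abs(m)
    R = np.zeros_like(rho, dtype=float)
    for k in range((n - ma) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + ma) // 2 - k)
                * math.factorial((n - ma) // 2 - k)))
        R = R + c * rho ** (n - 2 * k)
    return R * (np.cos(ma * theta) if m >= 0 else np.sin(ma * theta))

def fit_zernike(rho, theta, z, order=6):
    """Least-squares Zernike coefficients (Steps 3-5 of the question)."""
    modes = [(n, m) for n in range(order + 1) for m in range(-n, n + 1, 2)]
    A = np.column_stack([zernike(n, m, rho, theta) for n, m in modes])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return modes, coeffs, A @ coeffs

# demo: recover the coefficients of a surface built from two known modes
rng = np.random.default_rng(0)
rho = rng.uniform(0, 1, 500)
theta = rng.uniform(0, 2 * np.pi, 500)
z = 0.7 * zernike(2, 0, rho, theta) + 0.3 * zernike(3, 1, rho, theta)
modes, coeffs, zfit = fit_zernike(rho, theta, z)
```

For radial order 6 this yields 28 modes, matching the count in Step 3; the regression of Step 5 is the `lstsq` call, and the Step 6 error is simply `z - zfit`.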
  • asked a question related to Optics
Question
4 answers
Physics vs philosophy.
Relevant answer
Answer
Every day we see similarities. For example, the rainbow caused by refraction of light, diffraction patterns off a compact disc, the patterns when viewed in a kaleidoscope, the blue hue of the sky and bodies of water, etc. This is only a small list.
  • asked a question related to Optics
Question
4 answers
Does Wolfram prefer quantum mechanics or relativity? Why?
Relevant answer
Answer
Hello. Stephen Wolfram has a unique perspective when it comes to the age-old debate between quantum mechanics and relativity. Rather than explicitly favoring one theory over the other, Wolfram takes a more holistic approach, seeking to unify these seemingly disparate branches of physics through his groundbreaking work on computational models and the fundamental theory of physics.
At the heart of Wolfram's philosophy lies a deep fascination with the complex behaviors that can emerge from simple computational rules. In his seminal book, "A New Kind of Science" and his subsequent research, he explores how cellular automata and other computational processes might hold the key to unlocking the mysteries of the universe. His ultimate goal is to develop a unified theory that can encompass both quantum mechanics and relativity, transcending the traditional boundaries between these two pillars of modern physics.
The Wolfram Physics Project, a recent endeavor spearheaded by Wolfram himself, embodies this ambitious vision. By proposing that the universe operates as a vast computational system governed by simple rules, Wolfram aims to reconcile the principles of quantum mechanics and general relativity, deriving them from a more fundamental, computational substrate. This approach represents a bold departure from conventional thinking, suggesting that the dichotomy between these theories might be resolved through a deeper understanding of computation in physics.
Wolfram's reluctance to express a clear preference for either quantum mechanics or relativity stems from his commitment to a unified approach. He believes that both theories are likely emergent properties of underlying computational processes, and that a true understanding of the universe will require a framework that integrates them seamlessly. By focusing on cellular automata and the concept of computational irreducibility, Wolfram seeks to develop new ways of thinking about physical laws that go beyond current paradigms.
It's worth noting that Wolfram's ideas have not been without controversy. Some physicists have criticized his claims as being non-quantitative and arbitrary, arguing that his model has yet to reproduce the precise quantitative predictions of conventional physics. However, Wolfram remains undeterred, believing that new ideas in science often take time to gain acceptance, much like Einstein's theory of relativity did in its early days.
  • asked a question related to Optics
Question
43 answers
The diffraction of light has been referred to as its wave quality since it seemed there was no other solution to describe that phenomenon as its particle quality and subsequently, it exhibited wave-particle duality.
Relevant answer
Answer
Respected Farhad Vedad
I do agree with you. Your insightful response highlights the necessity of considering space as a complex, dynamic entity rather than a simple, homogeneous medium. By recognizing that space can have varying refractive indices and properties as described by relativity, we understand that photons and electrons may interact with their environment in unique ways, influencing their diffraction and behavior. This perspective challenges the traditional wave-particle duality and underscores the importance of environmental context in studying physical phenomena. Just as in social sciences, where individual behavior is shaped by surroundings, particle behavior is also deeply influenced by the structure of space, encouraging a more holistic and integrated approach to understanding the natural world.
  • asked a question related to Optics
Question
2 answers
The primary challenge in analyzing the shadow blister arises when the transverse distance between the two edges along the X-axis is either large or when the slit width reaches zero and the secondary barrier overlaps the primary barrier, rendering the "Fresnel Integral" valid. In such scenarios, this phenomenon can also be interpreted using traditional ray theory. As the transverse distance decreases to approximately a millimeter, the validity of the "Fresnel Integral" diminishes. Regrettably, this narrow transverse distance has often been overlooked. This article explores various scenarios where the transverse distance is either large or less than a millimeter, and where the secondary barrier overlaps the primary barrier. Notably, complexity arises when the transverse distance is very small. In such conditions, the Fourier transform is valid only if we consider a complex refractive index, indicating an inhomogeneous fractal space with a variable refractive index near the surface of the obstacles. This variable refractive index introduces a time delay in the temporal domain, resulting in a specific dispersion region underlying the diffraction phenomenon. Refer to: http://www.ej-physics.org/index.php/ejphysics/article/view/304
Relevant answer
Answer
Your interpretation suggests you have described your own conjecture rather than a proper experimental consideration. Please take the time to see:
or at ResearchGate:
Best regards,
  • asked a question related to Optics
Question
12 answers
The theme of diffraction typically involves a small aperture or obstacle. Here I would like to share a video that I took a few days ago, which shows that diffraction can similarly be produced by macroscopic objects:
I hope you can explain this phenomenon with wave-particle duality or quantum mechanics. However, I can simply interpret it with my own idea of Inhomogeneously refracted space at:
Relevant answer
Answer
Dear Researchers,
I am pleased to share my latest work on optics and diffraction, focusing on the deformation of shadows when they intersect. This article has recently been published in the European Journal of Applied Physics. I hope you find it intriguing.
Best regards, Farhad
  • asked a question related to Optics
Question
14 answers
The shadows of two objects undergo peculiar deformation when they intersect, regardless of the distance between the objects along the optical axis:
Relevant answer
Answer
Dear Researchers, I am pleased to share my latest work on optics and diffraction, focusing on the deformation of shadows when they intersect. This article has recently been published in the European Journal of Applied Physics. I hope you find it intriguing. http://www.ej-physics.org/index.php/ejphysics/article/view/304
Best regards, Farhad
  • asked a question related to Optics
Question
4 answers
The transverse resonance condition for the single-layer waveguide (Fig. 3) has been deduced, as shown in Fig. 2; it contains the phase shifts caused by reflection and by the optical path difference. Only light that satisfies the equation of Fig. 2 can propagate through the waveguide. Is it possible to get a similar equation for a double-layer waveguide?
Relevant answer
Answer
Yeah sure
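To make the single-layer case concrete, the transverse resonance condition can be solved numerically for the guided-mode effective indices. A sketch in Python for a symmetric slab (even TE modes, with the resonance written as tan(κd/2) = γ/κ); all material and geometry values here are illustrative, not taken from the question:

```python
import math

def te_even_modes(n_core, n_clad, d, wavelength):
    """Effective indices satisfying the transverse resonance condition
    tan(kappa*d/2) = gamma/kappa for even TE modes of a symmetric slab.
    d and wavelength must share units (e.g. micrometres)."""
    k0 = 2 * math.pi / wavelength

    def f(neff):
        kappa = k0 * math.sqrt(n_core**2 - neff**2)
        gamma = k0 * math.sqrt(neff**2 - n_clad**2)
        return math.tan(kappa * d / 2) - gamma / kappa

    modes, steps = [], 2000
    lo, hi = n_clad + 1e-9, n_core - 1e-9
    prev_x, prev_y = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        y = f(x)
        # bisect on sign changes; the size filter skips tan() branch jumps
        if prev_y * y < 0 and abs(y - prev_y) < 10:
            a, b = prev_x, x
            for _ in range(60):
                mid = 0.5 * (a + b)
                if f(a) * f(mid) <= 0:
                    b = mid
                else:
                    a = mid
            modes.append(0.5 * (a + b))
        prev_x, prev_y = x, y
    return sorted(modes, reverse=True)

# illustrative slab: n_core = 1.50, n_clad = 1.45, d = 2 um, lambda = 1.55 um
neffs = te_even_modes(1.50, 1.45, 2.0, 1.55)
```

A double-layer guide leads to an analogous but longer condition: one writes the round-trip transverse phase across both layers (a κ_i·d_i term per layer) plus the interface phase shifts and sets the total equal to 2mπ; the same root-finding approach then applies.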
  • asked a question related to Optics
Question
1 answer
I am setting up a simulation where I want to see the reflectance from an array of nanoparticles using the COMSOL Wave Optics module. I want to see the reflectance for co- and cross-polarized light. For example, let's say the incident beam is x-polarized; I want to see the reflectance separately for x- and y-polarized scattered light. I can't find a way to do this. I can get the total reflectance using ewfd.Rport_1 or ewfd.S11, but I don't see a way to get the same thing for a particular polarization.
Any help will be greatly appreciated.
Thanks
Relevant answer
Answer
Have you found the answer? I want to know the same. If you know, could you please share it with me? Thank you.
  • asked a question related to Optics
Question
7 answers
I've come across a formula n= 1/Ts + Sqrt(1/Ts-1) where n is the refractive index and Ts is the transmittance. Is this formula valid? How is this formula arrived at?
Relevant answer
Answer
This formula is not correct. It should read n = 1/T + sqrt(1/T^2 − 1). It follows from the definition T = 2n/(n^2 + 1).
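The corrected inversion can be verified with a quick round trip; a Python sketch:

```python
import math

def transmittance(n):
    """T = 2n / (n^2 + 1), the definition quoted in the answer."""
    return 2 * n / (n * n + 1)

def index_from_t(t):
    """Inverse: solving T*n^2 - 2n + T = 0 for n > 1 gives
    n = 1/T + sqrt(1/T^2 - 1)."""
    return 1 / t + math.sqrt(1 / t**2 - 1)

n_back = index_from_t(transmittance(1.5))   # recovers n = 1.5 exactly
```

The formula quoted in the question, with sqrt(1/T − 1) instead of sqrt(1/T^2 − 1), does not close this round trip.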
  • asked a question related to Optics
Question
4 answers
Hello all,
I'm currently researching metalenses and facing an intriguing challenge.
In my simulations using Lumerical FDTD, based on methods from DOI: 10.1038/ncomms8069, I'm trying to calculate the focus efficiency of metalenses. My process involves placing an aperture at the incident with PML boundaries and measuring the total intensity at the focal point. Initially, I conducted this with only the glass substrate, then repeated with both glass and nanopillars, measuring over an area about three times the FWHM at the focal point.
Here's where it gets puzzling: The intensity with just the glass substrate is consistently lower than with both glass and nanopillars. Interestingly, I also tried the process without any glass substrate at the incident, yet the focal point intensity remained significantly higher than expected.
Could you offer any insights or thoughts on why this might be happening? Your advice or any pointers towards relevant resources would be invaluable.
Thank you for your time and consideration.
Best regards,
Relevant answer
Answer
Thank you so much for your input; you're absolutely right! Your insights have led me to reconsider the integration of Poynting vectors in my simulation.
Also, I used to set up the mesh as non-uniform, which I now believe was the main source of the problem. I've since switched to using a uniform mesh type.
Your advice has been incredibly valuable in guiding my work. Thanks again for your help!
  • asked a question related to Optics
Question
3 answers
Hi all,
I am trying to calculate the curvatures of the cornea and compare them with Pentacam values. I have the Zernike equation in polar coordinates (Zfit = f(r, theta)). Can anybody let me know the equations for calculating the curvatures?
Thanks & Regards.
Nithin
Relevant answer
I think you can try something like this
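One numerical route: sample the fitted surface on a Cartesian grid and evaluate the mean curvature from its partial derivatives. A Python/NumPy sketch, verified on a sphere of a typical anterior corneal radius (the radius and the keratometric index 1.3375, commonly used for dioptre maps, are assumptions for illustration):

```python
import numpy as np

def mean_curvature(z, dx):
    """Mean curvature H of a surface z(x, y) on a square grid of spacing dx:
    H = ((1+zx^2) zyy - 2 zx zy zxy + (1+zy^2) zxx) / (2 (1+zx^2+zy^2)^(3/2))."""
    zy, zx = np.gradient(z, dx)          # axis 0 = y, axis 1 = x
    zyy, _ = np.gradient(zy, dx)
    zxy, zxx = np.gradient(zx, dx)
    num = (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx
    return num / (2 * (1 + zx**2 + zy**2) ** 1.5)

# sanity check on a sphere of radius 7.8 mm (typical anterior cornea)
R = 7.8
x = np.linspace(-1.0, 1.0, 201)                   # mm
X, Y = np.meshgrid(x, x)
z = R - np.sqrt(R**2 - X**2 - Y**2)               # elevation, mm
H = mean_curvature(z, x[1] - x[0])
radius_mm = 1.0 / H[100, 100]                     # ~7.8 mm at the apex
power_D = (1.3375 - 1) / (radius_mm * 1e-3)       # keratometric dioptres
```

The same finite-difference approach works on the Zfit surface once it is resampled onto an (x, y) grid; analytic derivatives of the Zernike terms would avoid the resampling if higher accuracy is needed.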
  • asked a question related to Optics
Question
5 answers
Hi there,
I hope you are doing well.
In the lab we have different BBO crystals; however, they were not marked in the past, so we don't know which crystal is which. I would appreciate it if somebody had an idea of how to measure the thickness of BBO crystals.
The second question is: are the BBO crystals sandwiched between two glass plates or not? If so, does that make the measurement more complicated?
Best regards,
Aydin
Relevant answer
Answer
  • asked a question related to Optics
Question
6 answers
How else can we explain :
Imprimis : That a light ray has different lengths for different observers. (cf. B.)
ii. That the length of a light ray is indeterminate? - both gigantic, and nothing, within the Einstein- train embankment carriage : (cf. B.)
iii. That a light ray can be both bent and straight. Bent for one observer, and straight for another : (cf. C.)
iv. That a light ray "bends" mid-flight in an effort to be consistent with an Absolute event which lies in the future : (cf. C.)
v. That these extraordinary things -- this extraordinary behaviour, (including the "constancy of speed") are so that the reality is consistent among the observers -- in the future. (cf. D, B, C)
vi. That light may proceed at different rates to the same place--- wholly on account of the reality at that place having to be consistent among the observers : (cf. D, A)
---------------------------------------------------------
B. --
C.--
D.--
Relevant answer
Answer
Dear Gary Stephens, to my understanding Newton is wrong in most of his theories. Let me give you an example.
Earth's gravity works through pressure, temperature, and the mass of the object, not through a Newtonian gravitational force with a weight factor of 9.81.
Unfortunately, we are following the speed of artificial light (a flashlight), not sunlight, which does not have a constant speed.
Thanks
  • asked a question related to Optics
Question
1 answer
Hello everyone,
I have made several optical phantoms with different weight ratio of ink into PDMS, from 0wt% to 5wt%. I have measured the transmission (%) and reflection (%) of each sample.
From there I calculated the absorption with the Beer-Lambert law, A=log(I0/I), with I0 being the transmission with 0wt% of ink and I the transmission of the sample desired.
I can therefore get the absorption coefficient of the phantoms with the formula: ua = A/thickness.
Therefore I have a linear relationship between the weight percentage of the phantoms and their absorption coefficient.
Now my issue is that I want to create a phantom of 2cm thickness but with a ratio of ink to PDMS known.
Should I assume the absorption coefficient will not change from the 2mm sample to the 2cm one ?
Otherwise, how do I determine the absorption coefficient of my new phantom?
Of course, I cannot measure the transmission of this sample as it is too thick now.
Thank you for your help!
Relevant answer
Answer
Determining the optical properties of a phantom with a different thickness based solely on the properties of a thinner phantom can be challenging. While there might be some assumptions and approximations involved, I can provide you with some guidance on how to approach this issue.
Firstly, it is important to note that the Beer-Lambert law assumes a linear relationship between the absorption coefficient and the thickness of the medium. However, this assumption may not hold true for all materials and scenarios. In your case, the ink-PDMS mixture might exhibit nonlinear behavior as the thickness increases, especially if there are scattering effects or other factors involved.
To estimate the absorption coefficient of your new 2cm-thick phantom with a known ink-PDMS ratio, you can consider the following approaches:
1. Use a calibration curve: Based on the linear relationship you have established between the weight percentage of ink and the absorption coefficient in the 2mm-thick phantoms, you can create a calibration curve. Plot the weight percentage of ink against the corresponding absorption coefficient for your various samples. Then, extrapolate the calibration curve to estimate the absorption coefficient for the 2cm-thick phantom at the desired ink-PDMS ratio. However, keep in mind that extrapolation introduces additional uncertainties, and the accuracy of the estimation may vary.
2. Consider theoretical models: Explore theoretical models or empirical equations that relate the absorption coefficient to the material composition, such as the Mie theory or effective medium approximations. These models take into account the composition and structure of the material and can provide estimations of the absorption coefficient for different thicknesses. However, the accuracy of these models depends on the specific characteristics of your ink-PDMS mixture.
3. Conduct additional experiments: While it may not be feasible to directly measure the transmission of the 2cm-thick phantom, you could consider alternative experimental methods or techniques that can provide insights into the optical properties. For example, you could use diffuse reflectance spectroscopy, which measures the reflectance of light from the surface of the phantom. By analyzing the reflectance data, you may be able to infer certain optical properties, including the absorption coefficient.
In any case, it is important to acknowledge the limitations and uncertainties associated with estimating the optical properties of a phantom with a different thickness based on data from a different thickness. Ideally, conducting experimental measurements on the 2cm-thick phantom would provide the most accurate and reliable results. However, if that is not possible, the approaches mentioned above can serve as initial estimations, but they may require further validation and verification.
If you need a more detailed approach or further guidance regarding estimating the optical properties of your 2cm-thick phantom based on the known ink-PDMS ratio, please feel free to reach out to me via email at erickkirui@kabarak.ac.ke. I will be more than happy to assist you in exploring additional strategies or discussing specific theoretical models that could be relevant to your specific case.
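The first approach above (calibration plus extrapolation) reduces to a few lines of arithmetic; a Python sketch, where the 50% transmission figure is purely hypothetical, and which is valid only if the Beer-Lambert linearity really holds out to 2 cm (i.e. negligible scattering):

```python
import math

def mu_a_from_transmission(t_frac, thickness_cm):
    """Base-10 absorption coefficient (cm^-1): mu_a = A / L = -log10(T) / L."""
    return -math.log10(t_frac) / thickness_cm

def predicted_transmission(mu_a, thickness_cm):
    """Beer-Lambert forward model: T = 10^(-mu_a * L)."""
    return 10 ** (-mu_a * thickness_cm)

mu_a = mu_a_from_transmission(0.5, 0.2)      # hypothetical: 50% through 2 mm
t_2cm = predicted_transmission(mu_a, 2.0)    # 0.5**10, i.e. ~0.1% transmission
```

The steep falloff with thickness is exactly why the 2 cm sample cannot be measured in transmission directly, and why any scattering contribution would dominate the error at that path length.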
  • asked a question related to Optics
Question
2 answers
I have a laser with a spectral bandwidth of 19nm and is linearly chirped. The Fourier transform limit is ~82fs assuming a Gaussian profile. I have built a pulse compressor based on the available transmission grating (1000 lines/mm, see the attachment); however, I noticed that the minimum achievable dispersion (further decrease in dispersion is limited by the distance between the grating and horizontal prism) of the compressor is greater than what is required for the optimal pulse compression supported by the optical bandwidth. Is there a way to decrease dispersion further in this setup? or Are there any other compressor configurations using single-transmission grating which might have more flexible dispersion control?
Relevant answer
Answer
I think it is possible to use only one prism (a grating double-pass scheme): you can decrease the dispersion by a factor of two.
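For estimating how much separation the target dispersion actually requires, the standard Treacy-style formula for a parallel grating pair may help; a Python sketch, where the 1030 nm wavelength and near-Littrow 31° incidence are assumptions, not values from the question:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def grating_pair_gdd(wavelength, lines_per_mm, separation, aoi_deg, m=1):
    """Single-pass GDD (s^2) of a parallel grating pair:
    GDD = -m^2 lam^3 L / (2 pi c^2 d^2 cos^3(theta_d)),
    with L the perpendicular grating separation and theta_d the diffracted
    angle from the grating equation sin(theta_d) = m*lam/d - sin(theta_i)."""
    d = 1e-3 / lines_per_mm                            # groove period, m
    sin_d = m * wavelength / d - math.sin(math.radians(aoi_deg))
    cos_d = math.sqrt(1 - sin_d**2)
    return (-(m**2) * wavelength**3 * separation
            / (2 * math.pi * C**2 * d**2 * cos_d**3))

# 1000 lines/mm, 1 mm separation: roughly -3000 fs^2 per pass
gdd = grating_pair_gdd(1030e-9, 1000, 1e-3, 31.0)
```

Since the GDD scales linearly with L, the smallest mechanically reachable separation sets the floor on |GDD| for a given geometry; changing the incidence angle (and hence cos³θ_d) is the other lever this formula exposes.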
  • asked a question related to Optics
Question
4 answers
There is a fibre-coupled EOM setup after the double-pass AOM setup. The intensity of the light is constant (checked with a power meter) before the input of the EOM, but there is a 10% fluctuation in the power just after the EOM, due to which I am unable to lock to the signal that I see on the oscilloscope. The signal fluctuates up and down on the screen. Is there any solution, or could I be doing something wrong somewhere? I have realigned all the optical elements numerous times to get it right, but I am still facing the problem.
Relevant answer
Answer
thank you for your suggestion, I will try that.
  • asked a question related to Optics
Question
4 answers
I am developing a mathematical model, with Matlab code, of the optical coherence tomography signal for testing algorithms that extract B-scans. I am now struggling with taking into account the optical systems, such as the sample scanner and spectrometer, which distort the optical signal. I am thinking of simulating this directly in Matlab. Would it be better to use dedicated optics-simulation software and then transfer some coefficients into the Matlab code to emulate the optics? (I suppose I could do it with a 2D Fourier transform, because I implemented all other parts of the OCT system in the spectral domain.) Are there any code examples or tutorials?
Relevant answer
Answer
Many thanks to you, Thorsten Zwinger !
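On the original question: one common shortcut is to fold the scanner/spectrometer optics into the model as a transfer function applied in the Fourier domain, exactly as the 2D-FFT idea in the question suggests. A minimal Python/NumPy sketch, with a Gaussian blur standing in for the (unknown) instrument response:

```python
import numpy as np

def apply_optics(image, fwhm_px):
    """Emulate optical blur by multiplying the image spectrum by a Gaussian
    OTF (unit DC gain, so the total signal is preserved)."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    sigma = fwhm_px / 2.3548                       # FWHM -> sigma, pixels
    otf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

# blur an impulse "B-scan": the peak spreads while the sum is preserved
bscan = np.zeros((64, 64))
bscan[32, 32] = 1.0
blurred = apply_optics(bscan, fwhm_px=4.0)
```

In Matlab the same operation is fft2/ifft2 with an identical OTF array, and a PSF exported from dedicated optics software could replace the Gaussian coefficient-for-coefficient.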
  • asked a question related to Optics
Question
13 answers
So-called "Light with a twist in its tail" was described by Allen in 1992, and a fair sized movement has developed with applications. For an overview see Padgett and Allen 2000 http://people.physics.illinois.edu/Selvin/PRS/498IBR/Twist.pdf . Recent investigation both theoretical and experimental by Giovaninni et. al. in a paper auspiciously titled "Photons that travel in free space slower than the speed of light" and also Bereza and Hermosa "Subluminal group velocity and dispersion of Laguerre Gauss beams in free space" respectably published in Nature https://www.nature.com/articles/srep26842 argue the group velocity is less than c. See first attached figure from the 2000 overview with caption "helical wavefronts have wavevectors which spiral around the beam axis and give rise to an orbital angular momentum". (Note that Bereza and Hermosa report that the greater the apparent helicity, the greater the excess dispersion of the beam, which seems a clue that something is amiss.)
General Relativity assumes light travels in straight lines in local space. Photons can have spin, but not orbital angular momentum. If the group velocity is really less than c, then the light could be made to appear stationary or move backward by appropriate reference frame choice. This seems a little over the top. Is it possible what is really going on is more like the second figure, which I drew, titled "apparent" OAM? If so, how did the interpretation of this effect get so out of hand? If not, how have the stunning implications been overlooked?
Relevant answer
Answer
You are right, the photon has a spiraling trajectory, just like the electron. This explains the associated wave of both, at least partly; there is still the mystery of Planck's constant! Why do both behave in a similar manner? QM is just a superficial theory based on the associated wave.
JES
  • asked a question related to Optics
Question
2 answers
I am going to make a setup for generating and manipulating time bin qubits. So, I want to know what is the easiest or most common experimental setup for generating time bin qubits?
Please share your comments and references with me.
thanks
Relevant answer
Answer
Time-bin encoding is a technique used in quantum information science to encode a qubit of information on a photon. Quantum information science makes use of qubits as a basic resource similar to bits in classical computing. Qubits are any two-level quantum mechanical system; there are many different physical implementations of qubits, one of which is time-bin encoding.
  • asked a question related to Optics
Question
1 answer
The intensity of each ray in the Ray Optics module of COMSOL is easily obtained after ray tracing. I need to find the radiation intensity at the mesh points, which would be related to all the rays crossing a point. Does anybody know how I can get continuous contours of radiation intensity? Should I use an accumulator? If yes, what should the settings of the accumulator be?
Relevant answer
Answer
Have you solved this problem? In COMSOL, when a ray leaves a mesh element, the accumulator does not accumulate the ray power. This also bothers me.
  • asked a question related to Optics
Question
12 answers
1. The necessity of a polarization controller for single-mode fiber. Is a polarization controller necessary for single-mode fibers? What happens when you don't have a polarization controller?
2. Optical path matching problem. How to ensure that the two arms of the optical path difference match, any tips in the adjustment process? If the optical path difference exceeds the imaging distance, will interference fringes fail to appear?
Only these questions for the time being, if there are more welcome to point out.
Relevant answer
Answer
1. The polarization can change during propagation. This will degrade the visibility of the fringes. You need a polarization controller or a polarization-maintaining fiber.
2. By the imaging distance, do you mean the distance to the sample or the sample's dimension? In any case, if the OPD exceeds the coherence length of your source, the interference fringes disappear. To match the OPD, we typically sweep the reference arm over a long distance and record the output intensity. Another method is to monitor the output spectrum using a spectrometer. The spectrum shows oscillations for non-zero OPD; as the OPD approaches zero, the spectrum oscillations disappear.
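The spectrometer method can be quantified: the spectral fringe period relates to the path imbalance by OPD ≈ λ²/Δλ, so reading off the period gives the remaining OPD directly. A small Python sketch with illustrative numbers:

```python
def opd_from_spectral_fringes(center_wavelength, fringe_period):
    """Residual OPD from the spectral-interferogram fringe period:
    OPD ~= lambda^2 / (Delta lambda). Both arguments in metres."""
    return center_wavelength**2 / fringe_period

# e.g. fringes spaced 0.1 nm apart around 1310 nm -> OPD of ~17 mm
opd = opd_from_spectral_fringes(1310e-9, 0.1e-9)
```

As the arms approach balance the fringe period grows until it exceeds the source bandwidth, which is the "oscillations tend to disappear" signature described above.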
  • asked a question related to Optics
Question
12 answers
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and observed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need to have sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several seconds per image frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
Relevant answer
Answer
Gerhard Martens Thanks! I guess that is my problem solved.... Thanks for your input and suggestions.... :-D
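One geometric point worth checking in setups like this: the fringe period of two beams crossing at full angle θ is Λ = λ/(2 sin(θ/2)), so even a modest angular misalignment pushes the fringes below what the camera can resolve. A quick Python estimate (the 1° crossing angle is illustrative):

```python
import math

def fringe_period(wavelength, crossing_angle_deg):
    """Fringe period of two plane waves crossing at the given full angle:
    Lambda = lambda / (2 sin(theta/2))."""
    return wavelength / (2 * math.sin(math.radians(crossing_angle_deg) / 2))

period = fringe_period(1310e-9, 1.0)   # ~75 um at 1310 nm and 1 degree
```

At a few degrees the period drops to tens of microns or less; once it falls below the effective pixel size on the screen image, the pattern averages out into exactly the fuzzy blob described above.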
  • asked a question related to Optics
Question
3 answers
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the food temperature obtained was quite good (i.e. close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's reported temperature suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
As I suspected the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens, ensuring the camera could only see the food through the hole and not the hot pan any more. The food temperature obtained by the camera was then correct again, but it becomes wrong again if I take the paper away.
I would appreciate any explanation of this phenomenon, and any solution, from either physics or optics.
Relevant answer
Answer
Thanks for all comments.
a short update: we talked to the manufacturer, and they confirm the phenomenon as Ang Feng explained. We are going through possible solutions
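For anyone hitting the same issue: the jump is consistent with the camera picking up radiation from the hot pan reflected off the food. A crude graybody sketch, using a total-radiation (Stefan-Boltzmann) approximation over the camera band and an assumed food emissivity of 0.95 (both simplifications, not manufacturer data):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def apparent_temperature(t_obj_c, t_refl_c, emissivity):
    """Temperature a camera assuming emissivity = 1 would report for a
    graybody object reflecting its surroundings (total-radiation model)."""
    t_obj = t_obj_c + 273.15
    t_refl = t_refl_c + 273.15
    w = emissivity * SIGMA * t_obj**4 + (1 - emissivity) * SIGMA * t_refl**4
    return (w / SIGMA) ** 0.25 - 273.15

# Food at 25 C with a 230 C pan dominating the reflected background:
t_app = apparent_temperature(25.0, 230.0, emissivity=0.95)
print(round(t_app, 1))
```

Even 5% reflectivity is enough to push the reading tens of degrees high, which matches the observed jump; the paper aperture works because it replaces the hot reflected background with a room-temperature one.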
  • asked a question related to Optics
Question
4 answers
For my experiments I need circularly polarized laser light at 222 nm.
Can anyone tell me whether there is any important difference between a λ/4 phase plate and a Fresnel rhomb? Prices seem to be comparable. My intuition tells me that the phase plate could be significantly less durable due to optical damage of the UV AR coatings on all of its surfaces. It also seems much harder to manufacture a UV phase plate, since there are very limited light sources for testing, while a Fresnel rhomb seems easier to produce and is essentially achromatic. Which am I supposed to choose?
Relevant answer
Answer
I think that absorption is not a problem, since I want to install the rhomb or λ/4 plate in the seed pulse path before the final amplifier. The pulse that I will work with is only needed for injection locking of the spectrum and can to a large extent be strongly attenuated. Same with space, it's not crucial.
It seems that in my case (nanosecond, narrow-bandwidth pulse) the difference between a rhomb and a plate can be neglected. When it comes to ultrashort/broadband pulses, the thickness of a rhomb becomes a drawback, since it introduces additional dispersion (pulse stretching) and even self-focusing and absorption.
  • asked a question related to Optics
Question
4 answers
There are many fields where light field can be utilized. For example it is utilized in microscopy [1] and for vision based robot control [2]. Which additional applications do you know?
Thank you in advance!
[1] Li, H., Guo, C. and Jia, S., "High-resolution light-field microscopy," Frontiers in Optics, FW6D. 3 (2017).
[2] Tsai, D., Dansereau, D. G., Peynot, T. and Corke, P. , "Image-Based Visual Servoing With Light Field Cameras," IEEE Robotics and Automation Letters 2(2), 912-919 (2017).
Relevant answer
Answer
Hi Vladimir Farber, I think one of the fields of Light Field Images missing to be mentioned is the "Quality Assessment of Light Field Images" when they are propagating through a communication channel. For your kind reference, one of the published papers in this field is :
"Exploiting saliency in quality assessment for light field images."
  • asked a question related to Optics
Question
15 answers
Hi. I have a question. Do substances (for example Fe or benzene) in trace amounts (for example micrograms per liter) change the refraction of light? If they do, is the change large enough to be detected? And is it unique to each substance?
I also need to know: if we have a solution containing different substances, can refraction help us determine what the substances are? Can it measure their concentrations?
Thanks for your help
Relevant answer
Answer
It is also on ResearchGate: Shangli Pu, "Measurement of refractive index of magnetic fluid by retro-reflection on fiber-optic end face."
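As a back-of-the-envelope check on detectability, the index change from a dissolved trace substance can be estimated as Δn = (dn/dc) · c. The specific refractive index increment dn/dc assumed below (~0.15 mL/g) is only a typical order of magnitude for organic solutes in water, not a tabulated constant:

```python
# Order-of-magnitude sketch: refractive-index change from a trace solute,
# Delta n = (dn/dc) * c.  DN_DC is an assumed, typical value, not measured.
DN_DC_ML_PER_G = 0.15          # assumed specific refractive index increment
c_g_per_ml = 1e-6 / 1000.0     # 1 microgram per liter, expressed in g/mL
delta_n = DN_DC_ML_PER_G * c_g_per_ml

# Compare with the resolution of a good laboratory refractometer (~1e-6 RIU):
detectable = delta_n > 1e-6
print(delta_n, detectable)
```

At µg/L levels the index change is many orders of magnitude below standard refractometer resolution, which is why trace analysis usually relies on absorption or fluorescence instead.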
  • asked a question related to Optics
Question
18 answers
I have profiled a collimated pulsed laser beam (5mm) at different pulse energies by delaying the Q-switch and I found the profile to be approximately gaussian. Now I have placed a negative meniscus lens to diverge the beam and I put a surface when the beam spot size is 7 mm. Should the final beam profile (at the spot size = 7 mm) be still gaussian? Or the negative lens will change the gaussian profile? Is there any way to calculate the intensity profile theoretically, without again doing the beam profiling by methods like Razor blade method? Thanks.
Relevant answer
Answer
Gaussian laser beams propagate as gaussian laser beam if their intensity is sufficiently weak so that they do not affect the refractive index of the medium through which they propagate and a linear approximation is valid. Furthermore the
"diameter" of the laser beam should be large compared to the wavelength so that a Fresnel approximation is valid (at least a few micrometers). Also the medium should be (fairly) homogeneous. This is related to the paraxial approximation used (small angles of deflection). The formulas for gaussian beam propagation a easily found in various textbooks. There is no fundamental difference between divergent and convergent lenses. The standard paraxial matrix formulation used in geometrical optics can be used to calculate gaussian beams transformation. The beam is characterized by its "waist diameter Wo" and the position of this waist "xo" with respect to the lens.
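The paraxial-matrix treatment described above can be sketched with the complex q parameter. The wavelength, waist position, lens focal length and propagation distance below are assumed for illustration, not taken from the original setup:

```python
import cmath, math

def propagate_q(q, abcd):
    """Transform the complex beam parameter q through a paraxial ABCD system."""
    (a, b), (c, d) = abcd
    return (a * q + b) / (c * q + d)

def spot_radius(q, wavelength):
    """1/e^2 intensity radius from q, using 1/q = 1/R - i*lambda/(pi*w^2)."""
    return math.sqrt(-wavelength / (math.pi * (1 / q).imag))

wavelength = 1.064e-6                    # assumed Nd:YAG-type source
w0 = 2.5e-3                              # 5 mm collimated beam -> 2.5 mm radius
q0 = 1j * math.pi * w0**2 / wavelength   # waist assumed at the lens

f = -0.100                               # hypothetical -100 mm negative lens
lens = ((1.0, 0.0), (-1.0 / f, 1.0))
free = lambda z: ((1.0, z), (0.0, 1.0))

q1 = propagate_q(q0, lens)
q2 = propagate_q(q1, free(0.25))         # 0.25 m after the lens (assumed)
print(spot_radius(q2, wavelength))       # roughly 8.7 mm for these numbers
```

The key point matches the answer above: the output is still a Gaussian with a new w(z); the negative lens changes the divergence, not the profile shape.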
  • asked a question related to Optics
Question
4 answers
I would like to calculate the Mode Field Diameter of a step index fiber at different taper ratios. I understand that at a particular wavelength, the MFD will be decreasing as the fiber is tapered. It may increase if it's tapered more. I am looking to reproduce the figures ( attached ) given in US Patent 9946014. Is there any formula I may use ? Or it involves some complex calculations?
Relevant answer
Answer
Using COMSOL, MATLAB or other simulation software it is easy to calculate the MFD. You need to consider how the waveguiding changes as the taper diameter decreases: initially the guidance is silica/(silica+Ge), and then air/silica. I don't believe you can use a simple formula to get an accurate result.
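If an approximate closed form is nevertheless useful for the first regime, the Marcuse formula for a step-index fiber reproduces the decrease-then-increase trend with taper ratio. The SMF-28-like parameters below are assumed, and the formula ignores the transition to air/silica guidance at strong tapering:

```python
import math

def v_number(a_core_m, wavelength_m, n_core, n_clad):
    """Normalized frequency V = 2*pi*a*NA/lambda for a step-index fiber."""
    na = math.sqrt(n_core**2 - n_clad**2)
    return 2 * math.pi * a_core_m * na / wavelength_m

def marcuse_mfd(a_core_m, v):
    """Marcuse approximation for the mode-field diameter of a step-index
    fiber: MFD/(2a) = 0.65 + 1.619/V^1.5 + 2.879/V^6 (roughly 0.8 < V < 2.5)."""
    return 2 * a_core_m * (0.65 + 1.619 / v**1.5 + 2.879 / v**6)

# Assumed SMF-28-like parameters at 1550 nm; the taper ratio scales the core.
wl, a0, n1, n2 = 1.55e-6, 4.1e-6, 1.4504, 1.4447
mfd = {}
for ratio in (1.0, 0.8, 0.6):
    a = a0 * ratio
    mfd[ratio] = marcuse_mfd(a, v_number(a, wl, n1, n2))
print({r: round(m * 1e6, 2) for r, m in mfd.items()})
```

With these numbers the MFD first shrinks slightly with the core and then grows as V drops and the mode spreads into the cladding, qualitatively matching the patent figures.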
  • asked a question related to Optics
Question
8 answers
Dear all,
Kindly provide your valuable comments based on your experience with surgical loupes
- Magnification (2.5 x to 5x)
- Working distance
- field of vision
- Galilean (sph/cyl) vs Kepler (prism)
- TTL vs non TTL/flip
- Illumination
- Post use issues (eye strain/ headache/ neck strain etc)
- Recommended brand
- Post sales services
Thank you
#Surgery #Loupes #HeadandNeck #Surgicaloncology #Otolaryngology
Relevant answer
Answer
A loupe with at least 3 to 3.5x magnification should suffice.
  • asked a question related to Optics
Question
7 answers
I am using a 550mW green laser (532nm) and I want to measure its intensity after being passed through several lenses and glass windows.
I found a ThorLabs power meter but it is around $1200.
Any cheaper options to measure the intensity of the laser?
(high accuracy is not required)
  • asked a question related to Optics
Question
7 answers
I want to know if the number of fringes and their shape is an important factor for the accuracy of phase definition?
Relevant answer
  • asked a question related to Optics
Question
3 answers
Solvents for the immersion oil are carbon tetrachloride, ethyl ether, Freon TF, Heptane, Methylene Chloride, Naptha, Turpentine, Xylene, and toluene.
What is the best of these to clean the surface? Toluene? Heptane? I'd like to stay away from more dangerous chemicals if possible and have something that evaporates easily.
Relevant answer
Methyl alcohol is the best
  • asked a question related to Optics
Question
4 answers
I would like to calculate the return loss for a splice / connection between two different fibers. One of the fibers has a larger core diameter and larger cladding diameter than the other. I was considering the approach laid out in this paper, which talks about identical fibers: M. Kihara, S. Nagasawa and T. Tanifuji, "Return loss characteristics of optical fiber connectors," in Journal of Lightwave Technology, vol. 14, no. 9, pp. 1986-1991, Sept. 1996, doi: 10.1109/50.536966.
I added a few screenshots from the paper.
Relevant answer
Answer
Looks like a good paper and a good approach.
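As a rough complement for the dissimilar-fiber case, the transmission penalty from the mode-field mismatch alone (aligned Gaussian fundamental modes, no gap or tilt, Fresnel reflections neglected) is η = (2·w1·w2 / (w1² + w2²))². The mode-field diameters below are hypothetical:

```python
import math

def mismatch_transmission(w1, w2):
    """Power coupling between two aligned Gaussian modes of radii w1, w2."""
    return (2 * w1 * w2 / (w1**2 + w2**2)) ** 2

# Hypothetical example: 10.4 um and 20 um mode-field diameters.
w1, w2 = 10.4e-6 / 2, 20e-6 / 2
eta = mismatch_transmission(w1, w2)
loss_db = -10 * math.log10(eta)
print(round(loss_db, 2), "dB")
```

Note this gives the insertion loss of the mismatch; the return loss itself is dominated by the index step at the interface, which is what the Kihara paper models.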
  • asked a question related to Optics
Question
3 answers
Hello
I am new to the field and I would like to ask what the criteria are for saying that a photoswitchable compound or optogenetic molecule has fast kinetics and high spatiotemporal resolution in cell-free, cellular/in vitro, in vivo and ex vivo models. Is there a consensus criterion to quantitatively qualify a compound as having fast kinetics and high spatiotemporal resolution in these models?
For instance, if a compound becomes fully activated when turned on by light in less than 30 min, does it have fast kinetics?
On the other hand, if a compound can precisely activate certain neuronal regions in the brain but has off-target activation in the surrounding tissue within about 20 µm of the target region, does it have high spatiotemporal resolution?
I may have mixed-up some terms here, I will be glad if this will be clarified in the discussion.
Thanks.
Relevant answer
Answer
That's the least I could do.
  • asked a question related to Optics
Question
3 answers
I want to calculate the propagation constant difference for LP01 and LP02 modes for a tapered SMF-28 (in both core and cladding).
Is there a simple formula that I can use? My goal is to see if the taper profile is adiabatic or not.
I am using this paper for my study : T. A. Birks and Y. W. Li, "The shape of fiber tapers," in Journal of Lightwave Technology, vol. 10, no. 4, pp. 432-438, April 1992, doi: 10.1109/50.134196.
equation in attached figure
Relevant answer
Answer
Well, one way would be to consider the transcendental modal equations for the LP modes and compute the propagation constants for LP01 and LP02 for different values of the core radius. In fact, as long as the fiber is an FMF, you can find the propagation constants of all the LP modes allowed by the structure. Then you can use the equation given above to check whether the criterion is met or not.
The criterion will be slightly modified for structures with more than two modes. For example, if your structure supports the LP02 mode as well, then you must check the above criterion for coupling between the LP01 and LP11 modes rather than the LP02 mode.
The modified criterion involves computing the minimum of (\beta_a - \beta_b) over the different pairs of \beta.
Let me also add that this criterion, which considers only the eigenvalues, is not very efficient. The adiabaticity theorem is often extended in the photonics context to consider both the eigenvalue (propagation constant) and the eigenfunction (the modal profile) for a better adiabaticity criterion. You can find many papers in this regard. Some of my PhD work might also be useful.
Thank you
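The delineation criterion discussed above (compare the local taper length scale with the beat length between the two local modes, as in Birks and Li) can be sketched as follows. The propagation constants below are placeholders, not computed values, standing in for the output of a mode solver:

```python
import math

def beat_length(beta1, beta2):
    """Coupling beat length between two local modes, z_b = 2*pi/|beta1 - beta2|."""
    return 2 * math.pi / abs(beta1 - beta2)

def taper_length_scale(a, da_dz):
    """Local taper length scale z_t = a / |da/dz|, with a the local core radius."""
    return a / abs(da_dz)

def is_adiabatic(a, da_dz, beta1, beta2):
    """Adiabatic where z_t exceeds z_b (here the bare inequality; in practice
    a safety factor is applied)."""
    return taper_length_scale(a, da_dz) > beat_length(beta1, beta2)

# Hypothetical numbers standing in for solver output at 1550 nm:
beta_lp01 = 5.870e6   # rad/m, placeholder
beta_lp02 = 5.855e6   # rad/m, placeholder
a, slope = 4.1e-6, 1e-3   # 4.1 um local core radius shrinking 1 um per mm

print(is_adiabatic(a, slope, beta_lp01, beta_lp02))
```

Evaluating this inequality at every point along the measured taper profile tells you where, if anywhere, the taper becomes non-adiabatic.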
  • asked a question related to Optics
Question
5 answers
In my experiment I have a double cladding fiber spliced on to a reel of SMF-28. The double cladding fiber has total cladding diameter about 2 times more than that of the SMF-28. The source is a SLD, and there is a polarization scrambler after the source which feeds onto one end of the reel of SMF-28. The output power from the 1 km long reel is X mW. But when I splice a half meter length of the specialty fiber to the reel output and measure the power it is 0.9Y mW, where Y is the power output after the polarization scrambler (Y = 3.9X). I am not sure why the power reading suddenly increased.
Relevant answer
Answer
Problem solved : The reel was getting pinched and deflected at the bare fiber adapter to the detector causing a huge drop in power.
Vincent Lecoeuche Thanks for your thoughts as well.
  • asked a question related to Optics
Question
3 answers
My set up is as follows : Elliptically polarized light at input -> Faraday Rotator -> Linear Polarizer (LP) -> Photodiode
The LP is set such that the power output is minimum. I use a lock -in-amplifier to measure the power change due to the Faraday effect. I have a more or less accurate measurement of the magnetic field and the length of the fiber. The experimental Faraday rotation (Rotation Theta= Verdet constant*MagneticField*Length of fiber) , is more than the theoretical prediction, so I was wondering if I am observing the effect of elliptical polarization at the input to the system.
Relevant answer
Answer
Yes, you can say both polarizations get rotated. Taking each component separately, they both would get rotated by the same amount, and superposition applies, so together they do the sum of what each of the pieces would do.
However, if it helps, that is not the only way to think of it. We like to think in terms of linear polarization. We like to think of arbitrary polarization as a superposition of two orthogonal linear polarizations. It’s easy to make the diagrams. It also makes sense for linear polarizers and linear retarders. However, that is not the only choice. You can just as easily express an arbitrary polarization as the superposition of left and right circular polarizations. In the basis of right and left circular polarization a Faraday rotator is in fact a phase retarder. if the two components have equal amplitude the result is linear polarization. The relative phase determines the orientation of the linear polarization, so retarding the phase rotates the linear polarization. If the two components have different amplitude, you get an ellipse where the major and minor axes are the sum and difference of the amplitudes. Again, if you retard the phase, the whole ellipse just rotates.
As to why your experiment is producing answers that don’t quite seem right, I think this measurement has several things that can confuse the result. Although I don’t see why you wouldn’t put a polarizer on the entrance, I doubt the entering ellipticity is really the problem. That should just reduce your modulation amplitude making the signal a little weaker, but it shouldn’t impact the phase. A much more likely culprit is linear birefringence in the fiber. Fibers can have significant residual birefringence from the manufacturing, but also bending through the fiber acts as a retardation. For example, sequential coils of fiber called fiber paddles are sold as polarization manipulators.
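The expected signal size can be sanity-checked with θ = V·B·L and Malus's law at the analyzer. The Verdet constant and field values below are assumed, order-of-magnitude numbers, not the experiment's actual parameters:

```python
import math

def faraday_rotation_rad(verdet_rad_per_Tm, b_field_T, length_m):
    """Faraday rotation angle theta = V * B * L."""
    return verdet_rad_per_Tm * b_field_T * length_m

def detected_power(p0, theta, extinction_offset=0.0):
    """Malus's law near extinction: P = P0 * sin^2(theta + offset)."""
    return p0 * math.sin(theta + extinction_offset) ** 2

# Assumed numbers: silica-fiber Verdet constant of order 0.5 rad/(T*m) near
# 1550 nm, a 10 mT field acting over 10 m of fiber, 1 mW at the analyzer.
theta = faraday_rotation_rad(0.54, 0.01, 10.0)
p = detected_power(1e-3, theta)
print(theta, p)
```

Comparing the measured lock-in signal against this kind of estimate helps separate a genuine Verdet-constant discrepancy from the linear-birefringence effects described above.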
  • asked a question related to Optics
Question
4 answers
1. Bose-Einstein condensation: How do we rigorously prove the existence of Bose-Einstein condensates for general interacting systems? (Schlein, Benjamin. "Graduate Seminar on Partial Differential Equations in the Sciences – Energy, and Dynamics of Boson Systems". Hausdorff Center for Mathematics. Retrieved 23 April 2012.)
2. Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect?(Barton, G.; Scharnhorst, K. (1993). "QED between parallel mirrors: light signals faster than c, or amplified by the vacuum". Journal of Physics A. 26 (8): 2037.)
Relevant answer
Answer
Regarding the first problem, there are many examples. For instance, the paper by O. Penrose, ``Bose-Einstein condensation in an exactly soluble system of interacting particles'', esearchportal.hw.ac.uk/en/publications/bose-einstein-condensation-in-an-exactly-soluble-system-of-intera
Cf. also, the paper by E. Lieb and R. Seiringer, ``Proof of Bose-Einstein Condensation for Dilute Trapped Gases'',
Regarding the second problem, the boundary conditions break Lorentz invariance. That's why the question isn't well-posed, whether in the classical limit or when quantum effects must be taken into account. In a finite volume it requires care to define the propagation velocity properly, since the equilibrium field configurations describe standing waves.
  • asked a question related to Optics
Question
7 answers
According to ASHRAE there are values of tb and td for the Atlantic; I want their values for individual cities, or by latitude and longitude.
Thanks 
  • asked a question related to Optics
Question
3 answers
A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern that is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modeling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
Why can't the algorithm that combines the two wavefronts be implemented in Octave, just as in MATLAB, to create an interference pattern?
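The two-beam sum itself is only a few lines in any language; here is a plain-Python sketch (the same element-wise logic ports directly to Octave or MATLAB), with an assumed recording wavelength and reference tilt:

```python
import cmath, math

wavelength = 633e-9            # assumed He-Ne recording wavelength
k = 2 * math.pi / wavelength
theta = math.radians(2.0)      # assumed reference-beam tilt
n, pitch = 256, 0.5e-6         # sample grid: 256 points at 0.5 um pitch

# Object wave (flat, on-axis) plus tilted reference wave, then the intensity
# that the recording medium would register:
pattern = []
for i in range(n):
    x = i * pitch
    obj = cmath.exp(1j * 0.0)                   # flat object wavefront
    ref = cmath.exp(1j * k * x * math.sin(theta))
    pattern.append(abs(obj + ref) ** 2)

fringe_period = wavelength / math.sin(theta)    # expected carrier period
print(max(pattern), min(pattern), fringe_period)
```

A real computer-generated hologram replaces the flat object wave with the modeled object wavefront on a 2D grid, but the recorded quantity is exactly this |object + reference|² sum.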
  • asked a question related to Optics
Question
11 answers
If I were to make a half dome as an umbrella to protect a city from rain and sun how would I proceed. Are there special materials or do you have an idea on how to make this? What do you say about an energy shield ?
Relevant answer
Answer
Dear all, I think the easiest way is an underground city. Shelters are built for similar protection purposes. My regards
  • asked a question related to Optics
Question
1 answer
So far, I made the simulation for the thermal and deformation analysis. I know that neff-T graph can be plotted. TEM modes can also.
Relevant answer
Answer
Murat Baran Hi. When coupled with the optical and electrical modules, the thermal module generates thermal maps (see the attached figure) in COMSOL. The article (cited below) from which the attached figure has been collected deals with the 3D modeling of a thin-film CZTSSe solar cell in COMSOL and takes various heat sources into consideration including SRH nonradiative recombination, Joule heating and the conductive heat flux magnitude (thermalization). According to the article: The thermal analysis allows the optimization of device stability by determining which heating source is the cause of performance drop over time. Hope this helps. Best.
Reference:
Zandi, Soma & Saxena, Prateek & Razaghi, Mohammad & Gorji, Nima. (2020). Simulation of CZTSSe Thin-Film Solar Cells in COMSOL: Three-Dimensional Optical, Electrical, and Thermal Models. IEEE Journal of Photovoltaics. PP. 1-5. 10.1109/JPHOTOV.2020.2999881.
  • asked a question related to Optics
Question
8 answers
For an actual laser system, the ellipticity of a Gaussian beam is like in the picture attached (measured by me). Near the laser focus the ellipticity is high and falls down drastically at Rayleigh length. Then increases again. This is a "simple astigmatic" beam. Can anyone explain this variation?
P.S. In the article (DOI: 10.2351/1.5040633), the author also found similar variation. But did not explain the reason.
Thanks in advance
Relevant answer
Answer
Well, first, we know that if you had a perfectly circular Gaussian intensity profile with a perfect plane wave phase front then you would find a perfectly circular Gaussian intensity profile everywhere along the beam. Your beam must differ slightly in some way from the perfect single mode description. This is completely normal. At the very least there must always be some aperture clipping (can’t have a Gaussian out to infinity!). Typically there is also some slight amount of a second mode. Neither the clipping nor the higher mode content need to be axially symmetric. However, a little asymmetry doesn’t necessarily cause the beam to measure as being too asymmetric in some locations.
Also it is not unusual to find something different in the midfield than in the near field and far field. For example an elliptical beam in the near field Fourier transforms to an elliptical beam in the far field with the axes swapped. However somewhere in between the beam is symmetric and the ellipticity approaches zero.
What is interesting about your measurement is that it does the opposite of that. It is symmetric in the near and far field and slightly elliptical in between. I think the same idea of the “in between” looking different applies, but the exact explanation is a little less obvious. However, I’m not terribly surprised. In aligning the laser the builder tweaks the alignment until they get the best result looking at the far field. Naturally any imperfections are hidden and show up in the midfield.
Finally I will add that the ellipticity measurement can lie to you. Often ellipticity is calculated for the whole beam or perhaps a 1/e^2 encircled energy contour. This emphasizes low tails. Small amounts of extra diffraction orders can radically affect the measurement. I bet if you measured the FWHM instead you would find less deviation.
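The near/far-field axis swap mentioned above follows directly from the per-axis divergence θ = λ/(π·w0) of a Gaussian beam. A minimal check with hypothetical waist radii:

```python
import math

def far_field_divergence(wavelength, waist_radius):
    """Half-angle divergence of a Gaussian beam, theta = lambda / (pi * w0)."""
    return wavelength / (math.pi * waist_radius)

wl = 1.064e-6                # assumed wavelength
wx, wy = 1.0e-3, 0.8e-3      # hypothetical elliptical waist radii
near_aspect = wx / wy
far_aspect = far_field_divergence(wl, wx) / far_field_divergence(wl, wy)
print(near_aspect, far_aspect)   # the aspect ratio inverts in the far field
```

Since the divergence of each axis is inversely proportional to its waist, the near- and far-field aspect ratios are exact reciprocals, and the beam passes through a circular cross-section somewhere in between.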
  • asked a question related to Optics
Question
2 answers
What are the advantages of using SiO2 substrate over Si substrate for monolayer graphene in photonics?
Relevant answer
Answer
Dear
Behnam Farid
Thanks for your complete answer.
In fact, I thought about the hot electron injection to the substrate.
As we're expecting localized or propagating plasmons in graphene, isn't it possible to have electron leakage from the graphene to highly doped substrates? Instead, SiO2 or other insulators offer electrostatic charge transfer, which may facilitate graphene-graphene hybridization in periodic structures like 1D graphene ribbons. Is this a valid claim?
  • asked a question related to Optics
Question
6 answers
I am interested in the technique of obtaining high-quality replicas from diffraction gratings, as well as holograms with surface relief. What materials are best used in this process? Also of interest is the method of processing the surface of the grating to reduce adhesion in the process of removing a replica from it.
Relevant answer
Answer
Dear Anatoly Smolovich , in addition to the previous answers, I would like to add probably one of the most popular materials for replication: PDMS. There are several commercial preparations of this silicone, but Sylgard 184, by Dow, is probably the most commonly used. It normally requires a mild curing temperature (80 ºC for two hours, or even room temperature but with longer curing times), and if temperature is an issue they also have some UV-curable PDMS. It has a key advantage over rigid polymers: being an elastomer, it facilitates the demolding process without harming the original and/or the replica. I also had good experiences with Microresist, as pointed out by Daniel Stolz ; in fact I usually make the reverse replica with PDMS and the direct replica of the original with Ormocomp by Microresist, with very good results. These resins are solvent-free, which is important both for avoiding damage to the original and for minimising shrinkage, so that the grating period is maintained.
Both PDMS and Ormocomp need no applied pressure, unlike the PP replicas made by injection molding in the article provided by Przemyslaw Wachulak (of course you can apply pressure, but it is not necessary; just a glass slide or a cover slide will be enough).
About your late question related to the treatment of the original grating surface treatment:
It will depend on the nature of the grating and its surface. If your grating is made of glass or metal (alone), most antiadhesive treatments would work. If it is made of some polymer, you will need to know which polymer it is in order to choose a treatment material that does not damage the grating.
If your grating is made of any material and coated with a thin metallic coat, then you should check that the antiadhesion material and the replication resin (or the solvent) are not going to damage the thin metallic film by disturbing the adhesion between the substrate material and the metal coat.
Hope this helps.
  • asked a question related to Optics
Question
4 answers
Hello dear friends,
I am trying to add our own optical components in the sample compartment region of our Nicolet 380 FTIR. Also, our sample is small, so we need to shrink the size of the beam spot with an iris to avoid signals from the substrate.
Therefore, we want to use a mid IR sensor card to help me find where and how large the light beam is. However, the IR sensor card does not show any color change when I put it in the light path (of course, when the light source is on). The mid IR sensor card we use can detect light in the wavelength range of 5-20 um, the min detectable power density is 0.2 W/cm2.
Did I miss anything here? And do you have any suggestions how I shall detect the beam spot, its position and size?
Any suggestions will be highly appreciated! Thank you in advance!
Best regards,
Ziwei
Relevant answer
Answer
In general those cards are designed to work with a continuous beam. Don't forget that the IR beam from an FTIR is modulated and in my experience those cards don't work in an interferometer.
The Al foil approach mentioned above will work well but if this is an ongoing problem why not use a beam condenser? Those optics are designed to condense the beam and put the sample exactly where you get the maximum energy. Both Pike Technologies (https://www.piketech.com/product/ms-beam-condensers/) and Specac (https://www.specac.com/en/products/ftir-acc/transmission/solid/microfocus-beam-condenser) provide them and the 380 is a very common instrument so that there should be no concerns about unusual spectrometer configurations.
  • asked a question related to Optics
Question
4 answers
Some 2D galvos have both axes on the horizontal plane. It seems much easier to manufacture. However, some high-end galvos such as those from Cambridge have one of the axes tilted by a small angle. What is the benefit of that?
Relevant answer
Answer
This is done only to reduce the size of the scanning head, so that the edge of the second mirror does not go beyond the plane bounding the first mirror. Then the f-theta lens can be positioned closer to the movable mirrors.
  • asked a question related to Optics
Question
3 answers
I have a long length (L) of coiled SMF-28 on a spool and I want to measure the "beat length (Lb)" of the entire spool using some simple means as described. (1) inject linearly polarized broadband light (for example, from a superluminescent source) (2) record the optical spectrum using an optical spectrum analyzer (OSA), after transmission through the fiber and another polarizer (3)That spectrum will exhibit oscillations with a period Δλ, from which the polarization beat length can be calculated using Lb = (Δλ/λ)*L.
My questions are (a) what is the typical resolution for the OSA used in such measurements (b) should I rotate the polarizer such that the power is maximized for the center wavelength of my source ? (c) if I am missing anything else that I should consider
Relevant answer
Answer
Vincent Lecoeuche thank you for your answer. If I want to measure the beat length of a PM fiber, can I use a tunable laser to sweep from 1500 nm to 1600 nm and then calculate the ripple spacing over the entire spectrum to get the beat length? Also, would it be useful to launch linearly polarized light midway between the fast and slow axes of the PM fiber being tested?
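The conversion from ripple period to beat length is the one-liner given in the question; the ripple reading below is hypothetical:

```python
def beat_length_from_ripple(delta_lambda, center_lambda, fiber_length):
    """Polarization beat length from the spectral ripple period:
    Lb = (delta_lambda / lambda) * L."""
    return (delta_lambda / center_lambda) * fiber_length

# Hypothetical reading: 0.05 nm ripple period at 1550 nm on a 1 km spool.
lb = beat_length_from_ripple(0.05e-9, 1550e-9, 1000.0)
print(lb, "m")
```

Note that the OSA resolution (or the tunable-laser sweep step) must be several times finer than the ripple period Δλ for the oscillation to be resolved at all, which effectively sets an upper limit on the fiber length you can test at a given resolution.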
  • asked a question related to Optics
Question
3 answers
I'm curious if anyone can share their measurement of the coupling loss as a function of the gap between two SMF FC/APC fibers at various wavelengths. If not, it would be great if you can refer me to a datasheet or a paper where this type of measurement was done.
Thanks!
Relevant answer
Answer
You may have a look at equations 1a and 1b for a description of the gap and wavelength dependence of the coupling loss/transmission in a butt-joint SM fiber connection.
But, sorry, no experimental data yet...
Best regards
G.M.t
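For identical fibers, a commonly used Gaussian-mode estimate for a pure axial gap (tilt, lateral offset and Fresnel reflections neglected) is η = 1/(1 + (s/2z_R)²), where z_R is the Rayleigh range of the mode in the gap medium. A sketch with assumed SMF-28-like numbers:

```python
import math

def gap_transmission(gap_m, wavelength_m, mfr_m, n_gap=1.0):
    """Power coupling of two identical Gaussian fiber modes across an axial
    gap (no tilt or offset, Fresnel reflections ignored):
    eta = 1 / (1 + (gap / (2*z_R))^2),  z_R = pi * n * w0^2 / lambda."""
    z_r = math.pi * n_gap * mfr_m**2 / wavelength_m
    return 1.0 / (1.0 + (gap_m / (2.0 * z_r)) ** 2)

wl, w0 = 1.55e-6, 5.2e-6          # SMF-28-like mode-field radius at 1550 nm
for gap_um in (0, 10, 50, 100):
    eta = gap_transmission(gap_um * 1e-6, wl, w0)
    print(gap_um, "um:", round(-10 * math.log10(eta), 3), "dB")
```

For APC connectors the angled end faces add an angular-misalignment term on top of this, so treat these numbers as a lower bound on the loss.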
  • asked a question related to Optics
Question
4 answers
Hi,
My input Jones vector is E1 = [ 0 ; 1] , the Jones Matrix is M = [ a + ib , 0 ; 0 , c+id] , the output is E2 = M*E1 = [ 0 ; x + iy]. Now I want to know the phase shift between vertical and horizontal polarization of the light wave.
E1 is 2x1, M is 2x2 and E2 is 2x1.
Is E2 elliptically polarized but then it does not have any X component, I am confused.
Relevant answer
Answer
I think E2 is not elliptically polarized; it should be linearly polarized, since only the Y component is nonzero. So I think it's a trick exam question, with the simple answer that there is no phase difference.
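One way to settle this numerically is to compute the Stokes parameters of E2: S3 = 0 means there is no circular component, hence the state is linear. The matrix entries below are hypothetical values for a, b, c, d:

```python
def stokes(ex, ey):
    """Stokes parameters from a Jones vector [ex, ey]."""
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2 * (ex.conjugate() * ey).real
    s3 = 2 * (ex.conjugate() * ey).imag
    return s0, s1, s2, s3

# E1 = [0, 1] through the diagonal Jones matrix M = diag(a+ib, c+id):
a, b, c, d = 0.3, 0.1, 0.5, 0.7     # hypothetical matrix entries
e2 = (complex(a, b) * 0, complex(c, d) * 1)

s0, s1, s2, s3 = stokes(*e2)
is_linear = abs(s3) < 1e-12 * s0    # S3 = 0  ->  no circular component
print(is_linear)
```

Because Ex is exactly zero, the cross terms S2 and S3 vanish for any values of c and d: the overall phase of the single surviving component carries no polarization information.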
  • asked a question related to Optics
Question
1 answer
I'm working on silicon-graphene hybrid plasmonic waveguides at 1.55 µm. For bilayer graphene my effective mode indices are close to those in the source that I'm using, but for trilayer graphene they are not acceptable. To model the graphene I use the relative permittivity or refractive index at different applied voltages (eV).
I attached my graphene relative permittivity and refractive index calculation code and one of my COMSOL files, related to fig. 5.19b in the source.
  • asked a question related to Optics
Question
3 answers
Thanks in advance
Relevant answer
Answer
You can use ns-3; have a look at the documentation.
  • asked a question related to Optics
Question
2 answers
I'm planning to modify a finite tube length compound microscope to allow the use of "aperture reduction phase contrast" and "aperture reduction darkfield" according to the following sources:
Piper, J. (2009) Abgeblendeter Phasenkontrast — Eine attraktive optische Variante zur Verbesserung von Phasenkontrastbeobachtungen. Mikrokosmos 98: 249-254. https://www.zobodat.at/pdf/Mikrokosmos_98_4_0001.pdf (in German).
The vague instructions state:
"In condenser aperture reduction phase contrast, the optical design of the condenser is modified so that the condenser aperture diaphragm is no longer projected into the objective´s back focal plane, but into a separate plane situated circa 0.5 – 2 cm below (plane A´ in fig. 5), i.e. into an intermediate position between the specimen plane (B) and the objective´s back focal plane (A). The field diaphragm is no longer projected into the specimen plane (B), but shifted down into a separate plane (B´), so that it will no longer act as a field diaphragm.
As a result of these modifications in the optical design, the illuminating light passing through the condenser annulus is no longer stopped when the condenser aperture diaphragm is closed. In this way, the condenser iris diaphragm can work in a similar manner to bright-field illumination, and the visible depth of field can be significantly enhanced by closing the condenser diaphragm. Moreover, the contrast of phase images can now be regulated by the user. The lower the condenser aperture, the higher the resulting contrast will be. Importantly, halo artifacts can be reduced in most cases when the aperture diaphragm is partially closed, and potential indistinctness caused by spherical or chromatic aberration can be mitigated."
The author combined finite 160 mm tube length objectives, a phase contrast condenser designed for finite microscopes, and an infinity-corrected microscope to get the desired results.
However, how would one accomplish this in the simplest way possible?
Relevant answer
Answer
Golshan Coleiny thank you for your reply. I assume you mean, for example, modification of the illuminator field lens to displace the conjugate aperture plane of the field diaphragm? Kind regards, Quincy
  • asked a question related to Optics
Question
12 answers
I wanted to calculate the magnification of a system where I am using a webcam lens and was wondering if this could be done by applying the simple lens equation? If yes, then what would I consider my "v" to be in this case since I'm dealing with a lens assembly (unknown # of lenses and unknown separation between them)? I just know the EFL in this case.
Relevant answer
Answer
If you know the principal planes of your lens you can use the simple lens equation if you set the object distance as the distance between the object and the first principal plane and the image distance as the distance between the second principal plane and the image while using the EFL as the focal length.
Otherwise, the ray tracing can be quite complicated; even the procedure described by Piergiacomo will not be completely correct if at least one of the individual lenses in the system is "thick".
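A sketch of the principal-plane recipe above, using the EFL as the focal length. All distances are assumed, and distances are measured from the principal planes in a Cartesian sign convention (object distance negative for a real object to the left):

```python
def image_distance(u, f):
    """Thin-lens equation 1/v - 1/u = 1/f, so v = 1 / (1/f + 1/u), with u
    negative for a real object to the left of the lens."""
    return 1.0 / (1.0 / f + 1.0 / u)

def magnification(u, v):
    """Transverse magnification m = v/u in the same sign convention."""
    return v / u

f = 0.025        # 25 mm EFL webcam lens (assumed)
u = -0.500       # object 500 mm in front of the first principal plane
v = image_distance(u, f)
print(v, magnification(u, v))
```

The negative magnification indicates an inverted image, as expected for a real object imaged beyond the focal length; the unknown internal lens layout is irrelevant once u and v are referred to the principal planes.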
  • asked a question related to Optics
Question
6 answers
It is known that in case particles having size less than one-tenth the wavelength of light, Rayleigh scattering occurs and the scattering is isotropic. But for particles whose sizes are comparable to the wavelength of light, the scattering is more towards the forward direction and hence is anisotropic.
Relevant answer
Answer
Dear Somnath Sengupta , thanks for such an interesting question. We often carry the mental image of isotropic Rayleigh scattering and anisotropic, forward-directed Mie scattering, but we hardly ever stop to think about this "odd" behaviour, which could even seem counterintuitive, because a larger particle "should" scatter more in the backward direction, yet actually it doesn't:
First, consider two scatterers, one tiny (Rayleigh) and one larger (Mie). Now focus on two dipoles within each particle (1 and 2). In the tiny particle the two dipoles are necessarily close to each other, while in the larger one they can be fairly well separated.
Now we send a coherent light beam that hits both particles (and their two dipoles) and check what happens to the light scattered from each particle:
Tiny (Rayleigh): dipoles 1 and 2 are very close, because r << wavelength.
Because 1 and 2 are very close, the wave hits them at almost the same moment, say at the crest of the wave. This interaction produces a new wave travelling forward and backward from each dipole. The forward and backward waves are in phase with the main wave, and because the two dipoles are so close, their respective waves are practically in phase too, both forward and backward, so the scattering is isotropic.
Big (Mie): dipoles 1 and 2 are separated, because r ≥ wavelength.
Because 1 and 2 are separated, the main wave hits them at different times, say 1 at the crest and 2 in the valley of the wave. In the forward direction both scattered waves stay in phase and reinforce each other by constructive interference. However, the backward waves are out of phase and interfere destructively, reducing the intensity and thus explaining the anisotropic nature of Mie scattering.
The larger the particle, the further apart the dipoles can be, and the bigger the anisotropy.
Note: This explanation involved just two dipoles, for simplicity, but a real particle could have lots of them.
Hope this helps. Good luck with your research and thanks for making questions that make us think.
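The two-dipole argument above can be checked with a few lines of arithmetic. This is a deliberately minimal sketch (unit dipole amplitudes, dipoles separated along the propagation axis, made-up separations and wavelength): in the forward direction the incident delay between the dipoles is exactly compensated by the scattered path, while in the backward direction the path difference doubles.

```python
import cmath
import math

def scattered_intensity(d, lam, backward):
    """Relative intensity from two coherent unit-amplitude dipoles
    separated by d along the propagation axis, at wavelength lam.
    Forward: zero phase difference (incident delay cancels scattered path).
    Backward: the path difference doubles to 2*d."""
    path_diff = 2.0 * d if backward else 0.0
    dphi = 2.0 * math.pi * path_diff / lam
    return abs(1.0 + cmath.exp(1j * dphi)) ** 2  # ranges from 0 to 4

lam = 500e-9                       # 500 nm light (assumed)
for d in (10e-9, 50e-9, 125e-9):   # tiny (Rayleigh-like) to large (Mie-like)
    fwd = scattered_intensity(d, lam, backward=False)
    bwd = scattered_intensity(d, lam, backward=True)
    print(f"d = {d * 1e9:5.0f} nm: forward {fwd:.2f}, backward {bwd:.2f}")
```

For the tiny separation the forward and backward intensities are nearly equal (isotropic), while at d = lambda/4 the backward wave cancels completely, which is exactly the asymmetry described above.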
  • asked a question related to Optics
Question
6 answers
I have tried Keller's, Tucker's, and Barker's etchants, but they aren't working. I am interested in optical micrography, but I'm not getting anything. :(
Relevant answer
Answer
Hello Dr. Parth, this method worked well for me: preetching with H3PO4 for 4 min, and followed by coloration step with Weck's reagent for few seconds, up to 15s. You can find more details in this paper: https://doi.org/10.1017/S1431927618012400
  • asked a question related to Optics
Question
14 answers
In any OSL phosphor, we require optical energy greater than the thermal trap depth of that trap for optical stimulation. For example, in the case of Al2O3:C we require a 2.6 eV photon to detrap the electron from a trap having a 1.12 eV thermal trap depth. How are the two related to each other?
Relevant answer
Answer
For a given trap, E(optical) is always > E(thermal), because of the Franck-Condon principle. As a result, transitions on a configurational coordinate diagram always take place vertically, meaning that the transition is much faster than the lattice relaxation time. Once the defect is ionized optically, its lattice configuration relaxes to a new configurational coordinate via the emission of phonons. In thermal excitation, however, phonon emission and lattice reconfiguration take place simultaneously. Thus E(optical) = E(thermal) + E(phonons), with the latter term given by the Huang-Rhys factor.
If experimentally measured energies ( for example E(optical) using OSL, E(thermal) using TL) are either unphysically different or approximately the same, I would question whether the two methods are actually probing the same defect, and/or whether or not the E(optical) and E(thermal) values are correctly obtained from the data, before launching into detailed possible explanations.
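The relation in the answer above, E(optical) = E(thermal) + E(phonons), can be written with the lattice-relaxation term expressed as the Huang-Rhys factor S times an effective phonon energy. The numbers below are assumptions chosen only to reproduce the Al2O3:C figures quoted in the question, not measured values:

```python
# Franck-Condon sketch: E(optical) = E(thermal) + S * E(phonon),
# with S the Huang-Rhys factor and E(phonon) an effective phonon energy.

def optical_trap_depth(e_thermal_eV, huang_rhys_S, e_phonon_eV):
    """A vertical (fast) optical transition must also supply the lattice
    relaxation energy S * E(phonon) that slow thermal excitation avoids."""
    return e_thermal_eV + huang_rhys_S * e_phonon_eV

e_thermal = 1.12   # eV, thermal trap depth from the question
e_phonon = 0.037   # eV, assumed effective phonon energy
S = 40.0           # assumed Huang-Rhys factor (chosen to match 2.6 eV)
print(f"E(optical) = {optical_trap_depth(e_thermal, S, e_phonon):.2f} eV")
```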
  • asked a question related to Optics
Question
11 answers
Today, sensors are usually interpreted as devices which convert different sorts of quantities (e.g. pressure, light intensity, temperature, acceleration, humidity, etc.), into an electrical quantity (e.g. current, voltage, charge, resistance, capacitance, etc.), which make them useful to detect the states or changes of events of the real world in order to convey the information to the relevant electronic circuits (which perform the signal processing and computation tasks required for control, decision taking, data storage, etc.).
If we think in a simple way, we can assume that actuators work in the opposite direction, providing an "action" interface between the signal processing circuits and the real world.
If the signal processing and computation becomes based on "light" signals instead of electrical signals, we may need to replace today's sensors and actuators with some others (and probably the sensor and actuator definitions will also be modified).
  • Let's assume a case where we need to convert pressure to light: one can prefer the simplest (hybrid) approach, which is to use a pressure sensor followed by an electrical-to-optical transducer (e.g. an LED) to obtain the required new type of sensor. However, instead of this indirect conversion, if a more efficient or faster direct pressure-to-light converter (a new type of pressure sensor) is available, it might be more favorable. In the near future, we may need such direct transducer devices for low-noise and/or high-speed realizations.
(The example may not be a proper one but I just needed to provide a scenario. If you can provide better examples, you are welcome)
Most probably there are research studies ongoing in these fields, but I am not familiar with them. I would like to know about your thoughts and/or your information about this issue.
Relevant answer
Answer
After seeing your and other respectable researchers' answers, I am glad I asked this question.
I am really delighted to hear from you the history of an ever-lasting discussion about sensor and actuator definitions. I have always found it annoying that the sensor definition has usually been preferred as a "too specific" definition to serve only for an interface of an electrical/electronic system and an "other" system/medium with different form of signal(s).
Besides that discussion, I can start another one:
There are many commercial integrated devices that are called "sensors", although in fact they are not basic sensors but more complicated small systems, which may also include electronic amplifier(s), filter(s), an analog-to-digital converter, indicators, etc. For sure, these are very convenient devices for electronic design, but I think it is not correct to call them "sensors". Such a device employs a basic sensor, but it also provides other supporting electronic stages to aid the electronic designer. I don't know if there is a specific name for such devices.
Thank you again for your additional explanations.
Best regards...
  • asked a question related to Optics
Question
6 answers
Does anybody know what is the maximum power of laser sources (QCL, VECSEL, and so on) in THz regime?
I'm trying to realize a nonlinear effect in the THz regime using a THz source without resorting to DFG or SFG. I need 200 mW or more for my device. Is it doable? Is there any source that can generate that power?
Relevant answer
Answer
Dear
Farooq Abdulghafoor Khaleel
Many thanks for your response and valuable information.
TOPTICA Photonics has unveiled some commercial THz sources covering a 0.1-6 THz spectrum in the mW range.
  • asked a question related to Optics
Question
7 answers
Dear all,
I recently met a problem when using RCWA codes.
For the same structure, the FDTD solution takes less time.
With RCWA, I need to set a large number of orders for the calculation to reach a similar result.
So my question is: how can I judge the accuracy of the simulation when using RCWA codes, and how do I decide how many orders I need?
Thanks
Relevant answer
Answer
Sai Chen I am writing the code in MATLAB, based on Rumpf's lectures. As a first step, I only consider normal incidence. But the bad news is that when I calculate the inverse matrix, no matter whether I use inv, ^(-1), or pinv, it does not give the right results. And when the number of harmonics becomes large (such as 40), it gives a lot of warnings. It seems the problem is calculating the inverse matrix. Thank you, do you know why? The code I wrote is very short, less than 200 lines.
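One general numerical point, not specific to Rumpf's formulation: the system matrices in RCWA tend to become ill-conditioned as the number of harmonics grows, and explicitly forming an inverse (inv, pinv, ^(-1)) amplifies the rounding error. The usual remedy is to solve the linear system directly instead of inverting. A minimal sketch, using a classic nearly-singular matrix as a stand-in for a large-harmonic RCWA matrix:

```python
import numpy as np

# A Hilbert matrix is a classic nearly-singular example, used here as a
# stand-in for a large-harmonic RCWA system matrix (size is illustrative).
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

x_inv = np.linalg.inv(A) @ b      # explicit inverse: rounding error amplified
x_solve = np.linalg.solve(A, b)   # LU-based solve: backward stable

print("condition number :", np.linalg.cond(A))
print("residual (inv)   :", np.linalg.norm(A @ x_inv - b))
print("residual (solve) :", np.linalg.norm(A @ x_solve - b))
```

In MATLAB the same advice reads: replace inv(A)*b with A\b. If the warnings persist at high harmonic counts, it may also be worth checking whether the formulation itself is the issue (e.g. an unstable transfer-matrix cascade rather than the scattering-matrix recursion Rumpf recommends), rather than the solver.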
  • asked a question related to Optics
Question
5 answers
Hello all,
I have some idea of how to measure the external quantum efficiency of my perovskite LEDs, but I want to calibrate that setup, for which I want to measure the external quantum efficiency of a normal 5 mm LED. How should I go forward with it? All suggestions/help would be appreciated. Thank you
Jitesh Pandya
Relevant answer
Adding to the colleagues above, you can use a standard solar cell in the short-circuit mode of operation, where the output photon flux of the diode is received by the solar cell, provided that the area of the solar cell is large enough to capture the whole flux of the LED.
If the spectral response S(lambda) of the solar cell, in mA per unit of photonic power, is measured at the wavelength of the diode, then one can obtain the photonic power emitted by the diode as
Pphotonic = I / S, where I is the measured short-circuit current and S is the sensitivity at the intended wavelength.
If elaborated, this method can work well despite its simplicity.
Best wishes
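The arithmetic sketched in the answer above can be written out explicitly, with the responsivity expressed in A/W. Everything below is illustrative: the responsivity, currents, and wavelength are assumed numbers for a hypothetical 5 mm red LED, not measured data. The optical power follows from P = I/S, and the EQE is the ratio of emitted photons per second to injected electrons per second:

```python
# Calibration arithmetic for the solar-cell method above; all numbers are
# assumed values for a hypothetical 5 mm red LED, not measured data.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def led_eqe(i_cell_A, responsivity_A_per_W, i_led_A, wavelength_m):
    """EQE = (photons emitted per second) / (electrons injected per second)."""
    p_optical = i_cell_A / responsivity_A_per_W        # P = I / S, in watts
    photons_per_s = p_optical * wavelength_m / (H * C)  # photon energy = hc/lambda
    electrons_per_s = i_led_A / Q
    return photons_per_s / electrons_per_s

# Assumed: LED emitting at 625 nm, driven at 20 mA; solar-cell responsivity
# 0.4 A/W at that wavelength; measured short-circuit current 1 mA.
print(f"EQE = {led_eqe(1e-3, 0.4, 20e-3, 625e-9):.3f}")
```

This assumes the solar cell collects the entire emitted flux; any geometric losses would make the computed EQE a lower bound.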
  • asked a question related to Optics
Question
10 answers
Hello everyone,