Science topic

# Applied Optics - Science topic

Explore the latest questions and answers in Applied Optics, and find Applied Optics experts.
Questions related to Applied Optics
• asked a question related to Applied Optics
Question
The shadows of two objects undergo peculiar deformation when they intersect, regardless of the distance between the objects along the optical axis:
Sorry, but such laser experiments have nothing to do with the shadow-blister effect.
The shadow experiments, which deal with bright white light from extended sources, involve only penumbras; diffraction effects cannot be seen because of the very small amount of diffracted light against a huge background.
In your diffraction-related videos you yourself mention that you have to (i) darken the lab to avoid any background signal and (ii) acquire the data over a long time...
Best regards
G.M.
• asked a question related to Applied Optics
Question
The theme of diffraction typically refers to a small aperture or obstacle. Here I would like to share a video that I took a few days ago, showing that diffraction can similarly be produced by macroscopic items:
I hope you can explain this phenomenon with wave-particle duality or quantum mechanics. However, I can simply interpret it with my own idea of inhomogeneously refracted space at:
1) The diffraction pattern is oscillatory in nature. For monochromatic light, you won't have a sudden dark fringe followed by a sudden bright one; you'll see gradual transitions between the two.
2) Those are obviously not monochromatic sources. As such, there is no reason all wavelengths should be extinguished at the same place in the dark fringes; one would instead see color variations.
3) Let's do some rough calculations. In k-space the diffraction pattern of an aperture or object is similar to sinc(a*k), where a is the half-width of the object.
That means the first zero occurs at k = pi/a.
But that is only the tangential component of the k vector, i.e. the sine of its projection.
So, if you want to find the corresponding angle, you have
sin(ang) = (pi/a) / k0 = lambda/(2a)
Let's say 2a = 3 cm and lambda = 600 nm (yellow); we obtain
sin(ang) = 600e-9 / 3e-2 = 2e-5 ~ ang
As such, if you were at 10 m from the object, the corresponding length projected on the sensor of your camera would be roughly
L ~ ang * 10 m = 0.2 mm, which is much smaller than the sensor itself. So we should see the diffraction pattern if it had any actual energy in it.
You did not see diffraction.
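The estimate above can be checked numerically; this short sketch only reproduces the arithmetic of the answer (the 3 cm obstacle, 600 nm wavelength and 10 m distance are the values assumed there):

```python
import math

# Rough estimate of the first diffraction minimum for a macroscopic obstacle,
# following the sinc-pattern argument above.
wavelength = 600e-9      # m, yellow light
full_width = 3e-2        # m, 2a: full width of the obstacle
distance = 10.0          # m, obstacle-to-camera distance

sin_ang = wavelength / full_width   # sin(ang) = lambda / (2a)
ang = math.asin(sin_ang)            # rad; ~2e-5, so sin(ang) ~ ang
fringe_scale = ang * distance       # m, projected size of the first lobe

print(f"sin(ang) = {sin_ang:.1e}")                     # 2.0e-05
print(f"projected scale = {fringe_scale*1e3:.2f} mm")  # 0.20 mm
```

At 0.2 mm the whole pattern is far smaller than the camera sensor, which is the point of the answer.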
• asked a question related to Applied Optics
Question
Hi there,
I hope you are doing well.
In the lab we have different BBO crystals; however, in the past they were not marked, so we don't know which crystal is which. I would appreciate it if somebody had an idea of how to measure the thickness of BBO crystals.
The second question is: are the BBO crystals sandwiched between two glass plates or not? If so, does that make the measurement more complicated?
Best regards,
Aydin
• asked a question related to Applied Optics
Question
Hi,
I would like to understand the link between the GRD (ground resolved distance) experimental value and the GSD (ground sample distance) theoretical value. I saw somewhere the following formula used: GRD = 2*k*GSD, where the factor 2 gives the value of a two-pixel period (cyc/mm), and k represents a factor that includes all other influences such as turbulence, aerosols or camera aberrations.
In general k > 1, and if k = 1 we speak of an ideal system.
I would like to know: is there a formula to calculate k directly, so as to be able to find the GRD? Is there a maximum value of k beyond which one can say that the only influence is the atmosphere and that the camera is diffraction-limited?
And are there sources discussing this factor? I have not found any on the internet.
Thank you very much
In remote sensing, GRD is the smallest distance that can be resolved by the sensor. It is influenced by various factors such as atmospheric conditions, sensor resolution, and altitude. GSD, on the other hand, is the distance between the centers of two adjacent pixels on the ground. The sensor resolution and the altitude determine it.
The formula you mentioned, GRD=2kGSD, relates the two parameters, where k is a constant that incorporates all the influences other than the sensor resolution and altitude. This constant can be used to calculate the GRD value for a given GSD.
However, there is no direct formula to calculate k. It is generally determined experimentally by measuring the GRD values for different atmospheric conditions and sensor settings. In practice, k values can range from less than one to several, depending on the atmospheric conditions and sensor characteristics.
One can say that the camera is diffraction-limited when k reaches its maximum value, which is typically around 1.5-2. Beyond this value, the sensor performance is limited by atmospheric conditions, and further improvements in sensor resolution will not lead to a significant improvement in the GRD.
Several authors discuss the influence of atmospheric conditions on remote sensing parameters, including GRD and GSD. Some of the useful sources are the books "Remote Sensing and Image Interpretation" by Lillesand and Kiefer and "Introduction to Remote Sensing" by Campbell and Wynne, and the journals "IEEE Transactions on Geoscience and Remote Sensing" and "Remote Sensing of Environment."
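As a sketch of how these quantities relate, the GRD = 2*k*GSD formula from the thread can be wrapped in a couple of helpers. The geometric GSD formula (altitude times pixel pitch over focal length) is a common convention rather than something stated in the question, and all numeric values are illustrative assumptions:

```python
# Sketch of the GRD/GSD relation discussed above: GRD = 2*k*GSD.
def gsd(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance: ground footprint of one pixel."""
    return altitude_m * pixel_pitch_m / focal_length_m

def grd(gsd_m, k):
    """Ground resolved distance; k >= 1 lumps turbulence, aerosols, aberrations."""
    return 2.0 * k * gsd_m

g = gsd(altitude_m=500e3, pixel_pitch_m=5e-6, focal_length_m=5.0)  # 0.5 m/pixel
print(grd(g, k=1.0))   # ideal system: 1.0 m
print(grd(g, k=1.5))   # with atmosphere/aberrations: 1.5 m
```

As the answer notes, k itself is usually determined experimentally rather than computed.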
• asked a question related to Applied Optics
Question
I have been struggling with this concept for a very long time. To calculate the total magnification provided by any digital microscope we generally multiply all the individual magnifications (objective, ocular, coupler/adapter, and the video/on-screen magnification). But currently, I'm working on an in-house developed system and I'm trying to match the magnification to that of a traditional microscope so when I modify the distance between the sensor and the objective lens, my total magnification obviously changes. How do I calculate the total magnification here since the above formula isn't suitable for this case?
ps. not using any ocular here, a coupler lens is also optional.
Rather than trying to calculate the magnification you could take an image of a stage micrometer and measure it.
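The measurement-based approach suggested above can be sketched as follows. The function names and every number (sensor pixel pitch, micrometer division length, effective screen pixel size) are hypothetical illustrations, not values from the question:

```python
# Derive magnification from an image of a stage micrometer, instead of
# multiplying nominal component magnifications.
def optical_magnification(pixels_spanned, pixel_pitch_um, true_length_um):
    """Objective-to-sensor magnification from a calibrated target."""
    image_size_um = pixels_spanned * pixel_pitch_um
    return image_size_um / true_length_um

def onscreen_magnification(opt_mag, screen_px_pitch_um, sensor_px_pitch_um):
    """Extra factor from displaying sensor pixels as (larger) screen pixels."""
    return opt_mag * screen_px_pitch_um / sensor_px_pitch_um

# Illustrative numbers: a 100 um micrometer division spans 400 sensor pixels.
m_opt = optical_magnification(pixels_spanned=400, pixel_pitch_um=3.45,
                              true_length_um=100.0)
m_total = onscreen_magnification(m_opt, screen_px_pitch_um=250.0,
                                 sensor_px_pitch_um=3.45)
print(m_opt, m_total)
```

This stays valid no matter how the sensor-to-objective distance is changed, since it measures the system as-built.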
• asked a question related to Applied Optics
Question
Hi all,
I am testing UV-vis spectrophotometer for PDMS, using "Hitachi U-3900"
My holder for solid can only get reflectance(%R) data,
Schematic of test is Figure 1,
PDMS is a very highly transparent material, but its %R is very high (Figure 2), which is strange.
I think most of the light passes through the PDMS and is reflected by the aluminium oxide.
In such a situation, can I convert reflectance (%R) into transmittance (%T)?
Thank you very much.
Good day! Here I found a brief yet thorough explanation of the reflectance and transmittance phenomena:
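If, as the question suggests, most of the measured reflectance is light that crossed the PDMS twice and bounced off the alumina backing, a crude double-pass conversion can be sketched. This neglects front-surface Fresnel reflection and multiple internal reflections, so treat it only as an order-of-magnitude estimate under those stated assumptions:

```python
import math

# Double-pass idea: R_measured ~ T_single**2 * R_backing, so
# T_single ~ sqrt(R_measured / R_backing).
def single_pass_T(r_measured, r_backing):
    """Estimate single-pass transmittance of the film (fractions, not %)."""
    return math.sqrt(r_measured / r_backing)

# Illustrative numbers only (not from the instrument):
print(single_pass_T(r_measured=0.85, r_backing=0.95))  # ~0.946
```

A proper answer would still need the backing reflectance measured in the same geometry.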
• asked a question related to Applied Optics
Question
My source laser is a 20mW 1310nm DFB laser diode pigtailed into single-mode fiber.
The laser light then passes into an inline polarizer with single-mode fiber input/output, then into a 1x2 coupler (all inputs/outputs use PM (polarization maintaining) Panda single mode fiber, except for the fiber from the laser source into the initial polarizer). All fibers are terminated with and connected using SC/APC connectors. See the attached diagram of my setup.
The laser light source appears to have a coherence length of around 9km at 1310nm (see attached calculation worksheet) so it should be possible to observe interference fringes with my setup.
The two output channels from the 1x2 coupler are then passed into a non-polarizing beam splitter (NPBS) cube (50:50 reflection/transmission) and the combined output beam is projected onto a cardboard screen. The image of the NIR light on the screen is observed using a Contour-IR digital camera capable of seeing 1310nm light, and observed on a PC using the software supplied with the camera. In order to capture enough light to see a clear image, the settings of the software controlling the camera need to have sufficient Gain and Exposure (as well as Brightness and Contrast). This causes the frame rate of the video imaging to slow to several seconds per image frame.
All optical equipment is designed to operate with 1310nm light and the NPBS cube and screen are housed in a closed box with a NIR camera (capable of seeing 1310nm light) aiming at the screen with the combined output light from the NPBS cube.
I have tested (using a polarizing filter) that each of the two beams coming from the 1x2 coupler and into the NPBS cube are horizontally polarized (as is the combined output beam from the NPBS cube), yet I don't see any signs of an interference pattern (fringes) on the screen, no matter what I do.
I have tried adding a divergent lens on the output of the NPBS cube to spread out the beam in case the fringes were too small.
I have a stepper motor control on one of the fiber beam inputs to the NPBS cube such that the horizontal alignment with the other fiber beam can be adjusted in small steps, yet no matter what alignment I set there is never any sign of an interference pattern (fringes) in the observed image.
All I see is a fuzzy blob of light for the beam emerging from the NPBS cube on the screen (see attached screenshot) - not even a hint of an interference pattern...
What am I doing wrong? How critical is the alignment of the two input beams to the NPBS cube? What else could be wrong?
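One quick sanity check is the coherence length itself. A minimal sketch, using the simple L_c = c/Δν form (the lineshape prefactor varies by convention) and an assumed linewidth chosen to reproduce the ~9 km figure from the question, not a datasheet value:

```python
# Coherence length from laser linewidth: L_c = c / dnu (simple convention).
c = 3e8                  # m/s, speed of light
linewidth_hz = 33e3      # Hz, assumed DFB linewidth giving ~9 km coherence

coherence_length_m = c / linewidth_hz
print(coherence_length_m / 1e3)  # ~9.1 km
```

With a coherence length this large, any centimeter-scale path mismatch in the interferometer would be far inside the coherence envelope, so coherence alone should not prevent fringes.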
Gerhard Martens Thanks! I guess that is my problem solved.... Thanks for your input and suggestions.... :-D
• asked a question related to Applied Optics
Question
Dear researchers,
Silver halide-containing photochromic glasses exhibit refractive indices between 1.585 and 1.610 in response to radiation exposure.
This range is not remarkable for my experiments.
Could you give me another suggestion?
Kind regards,
I suggest you post your experiments and ask this question right at the beginning. If you write out what you are trying to do, that will clarify your own thoughts and ideas. But it will help others to help you.
Variable index of refraction devices are widespread. Gradient index materials are many. Power intensity (Watts/meter^2) dependent and frequency dependent processes in materials are widely studied - in chemistry, in photonics. If you allow the identical concepts and processes with lattice and acoustic signals, you can find still more useful places to put your time and skills. Acoustic resonance, impulse response functions, 3D imaging, materials processing, laser cutting, laser-material interactions - the list is not endless, but there are millions of pathways, enough for a single lifetime.
If you say what you want to build, design, understand, accomplish - then others can help.
Your question is too vague and too narrow. Ask BIG questions and then work out the details.
Richard Collins, The Internet Foundation
• asked a question related to Applied Optics
Question
Recently I have been having trouble calculating lenticular lens imaging. I want to find a method to calculate how an object is imaged through a lenticular lens.
Dear Yu Yu:
I hope it will be helpful ...
Best wishes ....
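As a starting point, a single lenticule can be treated paraxially with the lensmaker and thin-lens equations. This sketch assumes a plano-convex cylindrical lenslet with illustrative parameters; real lenticular imaging also involves the lens pitch and the substrate thickness, which this ignores:

```python
# Paraxial image location for one lenticule of a lenticular sheet.
def lenslet_focal_length(radius_m, n):
    """Thin plano-convex lens: f = R / (n - 1)."""
    return radius_m / (n - 1.0)

def image_distance(f_m, object_distance_m):
    """Gaussian thin-lens equation: 1/si = 1/f - 1/so."""
    return 1.0 / (1.0 / f_m - 1.0 / object_distance_m)

f = lenslet_focal_length(radius_m=0.5e-3, n=1.5)   # 1 mm focal length
si = image_distance(f, object_distance_m=0.1)      # object 10 cm away
print(f * 1e3, si * 1e3)  # f = 1.0 mm, image just over 1 mm behind the lens
```

Repeating this per lenslet, offset by the lens pitch, gives the array of sub-images that a lenticular sheet forms.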
• asked a question related to Applied Optics
Question
I use a Fujikura CT-30 cleaver for PCF cleaving, for use in supercontinuum generation. Initially it seemed to work fine, as I could get high coupling efficiency (70-80%) into the 3.2um core of the PCF. However, after some time (several hours) I notice that the coupling efficiency decreases drastically, and when I inspect the PCF endface with an IR scope, I can see a bright shine on the PCF end facet, which may be an indication that the end face is damaged. Also, I want to mention that the setup is well protected from dust and there is no chance of dust contaminating the fiber facet.
Please suggest what should be done to get an optimal cleave, shall I use a different cleaver (pls suggest one) or there are other things to consider.
Thanks
Supercontinuum generation relies on short pulses with high peak power, which can lead to soliton fission and to damage at the fiber facet.
• asked a question related to Applied Optics
Question
In many papers, researchers mention the formula below for the optical path length in a corner cube reflector; however, they don't show any proof. I want to understand how they arrived at this formula.
The total optical path length of a ray in corner cube reflector is
OPL=2*n*D*sec(theta)
where n is the refractive index of the glass, D is the height of the corner cube (the distance from the apex to the glass face), and theta is the refracted angle inside the glass, measured from the normal of the corner-cube glass face. By Snell's law, sec(theta) = n/sqrt(n^2 - sin^2(theta_i)), where theta_i is the external angle of incidence.
Below are some references that mention the formula above without proof.
ref:
1- theorem 7 in "Theory of the Corner-Cube Interferometer", EDsoN R. PECK
2- Eq (7), "Simultaneous Measurement Method and Error Analysis of the Six Degrees-of-Freedom Motion Errors of a Rotary Axis",Chuanchen Bao
3- Eq(1), "Laser heterodyne interferometer for simultaneous measuring displacement and angle based on the Faraday effect”, Enzheng Zhang
The roundtrip OPL inside the CCR is:
1. Hollow CCR: OPL = 2H/cos(theta), where H is the vertical distance from the input plane of the CCR to the apex, and theta is the angle of incidence and retroreflection into and out of the CCR.
2. Solid (glass) CCR: OPL = 2H*n/cos(theta_n), where H is the physical path from the input plane of the CCR to the apex, theta_n is the angle inside the glass, and n is the refractive index of the glass material. The incident and retroreflected angles are related by Snell's law.
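The two equivalent forms of the solid-CCR formula (internal-angle form and the closed form obtained by substituting Snell's law) can be checked against each other numerically; n, D and the incidence angle below are arbitrary illustrative values:

```python
import math

# OPL = 2*n*D / cos(theta_r) with theta_r the refracted (internal) angle,
# equivalently OPL = 2*D*n**2 / sqrt(n**2 - sin(theta_i)**2).
def opl_via_snell(n, D, theta_i):
    theta_r = math.asin(math.sin(theta_i) / n)   # Snell's law
    return 2.0 * n * D / math.cos(theta_r)

def opl_closed_form(n, D, theta_i):
    return 2.0 * D * n**2 / math.sqrt(n**2 - math.sin(theta_i)**2)

n, D, theta_i = 1.5, 25e-3, math.radians(10)
print(opl_via_snell(n, D, theta_i))    # both give the same value
print(opl_closed_form(n, D, theta_i))
```

The identity used is cos(theta_r) = sqrt(n^2 - sin^2(theta_i))/n, which follows directly from sin(theta_r) = sin(theta_i)/n.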
• asked a question related to Applied Optics
Question
In my experiment I have a double-cladding fiber spliced onto a reel of SMF-28. The double-cladding fiber has a total cladding diameter about twice that of the SMF-28. The source is an SLD, and there is a polarization scrambler after the source which feeds into one end of the reel of SMF-28. The output power from the 1 km long reel is X mW. But when I splice a half-meter length of the specialty fiber to the reel output and measure the power, it is 0.9Y mW, where Y is the power output after the polarization scrambler (Y = 3.9X). I am not sure why the power reading suddenly increased.
Problem solved : The reel was getting pinched and deflected at the bare fiber adapter to the detector causing a huge drop in power.
Vincent Lecoeuche Thanks for your thoughts as well.
• asked a question related to Applied Optics
Question
I am interested in the technique of obtaining high-quality replicas from diffraction gratings, as well as holograms with surface relief. What materials are best used in this process? Also of interest is the method of processing the surface of the grating to reduce adhesion in the process of removing a replica from it.
Dear Anatoly Smolovich , in addition to the previous answers, I would like to add probably one of the most popular materials for replication: PDMS. There are several commercial preparations of this silicone, but probably Sylgard 184, by Dow, is the most commonly used. It normally requires a mild curing temperature (80ºC for two hours, or even room temperature, but then requiring longer curing times), and if temperature is an issue they also have some UV-curable PDMS. It has a key advantage over rigid polymers: being an elastomer, it facilitates the demolding process without harming the original and/or the replica. I also had good experiences with Microresist, as pointed out by Daniel Stolz ; in fact I usually make the replica with PDMS (reverse replica) and the direct replica of the original with Ormocomp by Microresist, with very good results. These resins are solvent-free, which is important both for avoiding damage to the original and for minimising shrinkage, so that the grating period is maintained.
Both PDMS and Ormocomp need no application of pressure, unlike the PP replicas made by injection molding in the article provided by Przemyslaw Wachulak (of course you can apply pressure, but it is not necessary; just a glass slide or a cover slide will be enough).
About your later question on the treatment of the original grating surface:
It will depend on the nature of the grating and its surface. If your grating is made of glass or metal (alone), most anti-adhesive treatments will work. If it is made of some polymer, you will need to know which polymer, so as to apply a material that does not damage the grating.
If your grating is made of any material and coated with a thin metallic coat, then you should check that the antiadhesion material and the replication resin (or the solvent) are not going to damage the thin metallic film by disturbing the adhesion between the substrate material and the metal coat.
Hope this helps.
• asked a question related to Applied Optics
Question
I have a Jones Matrix of the form : M= [ A + iB C + iD ; U + iV X + iZ]
The input Jones vector is E1 = [ 0 ; 1] , linearly polarized light along Y-axis
So the output is E2 = M*E1 = [ a + ib ; c + id]
Now I want to normalize E2, what is the best way of doing it ?
I am thinking about calculating the magnitudes of each element of E2 and using that for normalizing
Jones matrices are not necessarily normalized. The intensity of the light after a Jones matrix can be less than went in. Take, for example, a polarizer where all the light in one polarization is lost. You can even have the light get brighter. Many laser gain media have polarization dependent gain and the gain is modelled as a Jones matrix.
So, I am curious how your Jones matrix was constructed, and whether or not you are sure that it needs to be normalized.
However, let's assume you know your Jones matrix represents a set of lossless optics, and let's assume for mathematical convenience or some other reason you have constructed it in a way that it doesn't preserve intensity but is in every other way correct.
You cannot normalize it by elements, and you cannot normalize it for a particular polarization. If the optics are lossless, then the Jones matrix must be lossless for all polarizations. For an arbitrary Jones vector "v" the light after the optics is "u" = Mv. For lossless optics it must be that u*u = v*v. The correct normalization is (v*v)/(u*u). If everything is correct, the parameters of v will cancel and there will be no arbitrary terms in the normalizer.
Now, without loss of generality, v can be written (e^(i phi), e^(-i phi)). Taking your matrix to be ((A, B),(C, D)), where I am not going to write out the complex components as you did, you will find the normalizer calculated as indicated above is (A*A + B*B + C*C + D*D) + (AB* + CD*)e^(i 2 phi) + (A*B + C*D)e^(-i 2 phi).
This must not depend on phi, so those last two terms had better be zero. This puts a constraint on your Jones matrix. If it preserves energy for all polarizations then it must be antisymmetric. Also, if it preserves energy and is antisymmetric, the normalization you want is (A*A + B*B + C*C + D*D).
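A minimal numerical sketch of both points discussed above: normalizing the output vector to unit intensity, and testing whether the matrix itself is lossless. The matrix entries are arbitrary illustrations; replace M with your own Jones matrix:

```python
import numpy as np

# Example Jones matrix (illustrative values) and y-polarized input.
M = np.array([[0.6 + 0.1j, 0.2 - 0.3j],
              [0.1 + 0.4j, 0.7 + 0.2j]])
E1 = np.array([0.0, 1.0])          # linear polarization along y

E2 = M @ E1

# (a) Normalize the output vector to unit intensity:
E2_unit = E2 / np.linalg.norm(E2)
print(np.vdot(E2_unit, E2_unit).real)   # 1.0

# (b) Check whether M is lossless (unitary): for a lossless optic,
# M-dagger times M must be the identity for every input polarization.
print(np.allclose(M.conj().T @ M, np.eye(2)))  # False for this example M
```

Step (b) is the check the answer recommends doing before normalizing at all: if it fails, the "loss" may be physical and should not be normalized away.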
• asked a question related to Applied Optics
Question
Hello;
It is well known that when light reaches an optical element, part of it is lost through absorption, diffusion, and back reflection. In the case of mirrors, this value is well characterized, and a realistic estimate would be around 4-5% (or less, depending on the material). However, I cannot find similar information on commercial or scientific sites for beam splitters. For example, for a well-known optical products company, if we enter the raw data, the percentages of reflected and transmitted light add up to more than 100% at some points on the curve! Without a doubt this has to do with the measurement methodology.
In the case of scientific articles, some estimate this absorption to be around 2% assuming that it is a block or sheet of a certain material (ignoring ghost images). However, this does not make sense since it would then be more interesting to use a dichroic beam splitter than a mirror in certain circumstances.
Of course everything will depend on the thickness, material used, AR treatment. However, I cannot find a single example and I am not able to know the order of magnitude. Does anyone know of any reference where a realistic estimate of the useful light that is lost when using a beam splitter of whatever characteristics is made?
Thanks !
I think your premise is flawed. There isn't going to be "an answer" because tailoring this parameter and trading it against other properties you might like is the crux of coating design, and the answer might be anything over a wide range depending on what was designed under what set of constraints. For example, your figure of 4% for a mirror is at best a rule of thumb and most often completely wrong. Over a fair range of wavelengths, bare (uncoated) aluminum happens to be around 4% absorptive. However, silver is only 2% absorptive in that range. Bare gold may be terribly absorptive at shorter wavelengths; at longer wavelengths it doesn't reach the 98% of silver, but over much of the IR, where aluminum becomes terribly absorptive, gold is the best. More importantly, mirrors are rarely uncoated, and a dielectric coating can raise the reflectivity of metallic mirrors above 99%. See for example Edmund's "ultrafast" enhanced aluminum coating.
And that is just metallic coatings. Metal is useful over a wide wavelength range when you don’t know what a mirror is going to be used for. However, if you know the wavelength (and what acceptance angle you need, and other constraints) you can use a pure dielectric stack. Dielectric mirrors can be made very close to 100% reflective. What’s more, very little light is absorbed, so what little doesn’t reflect transmits.
That brings us to beam splitters. It is not at all difficult to make a dielectric coating where essentially no light is absorbed; it is all either transmitted or reflected. Adding the reflected to the transmitted should yield just about 100% every time. When you found places where they appeared to add up to more than 100%, that is just experimental or round-off error; they probably do add up to almost exactly 100%.
• asked a question related to Applied Optics
Question
I am trying to construct an interferometer where I need to use polarizing and non-polarizing beam splitters. Can anyone suggest how to represent the beam splitter matrix for PBS and NPBS?
NPBS: A matrix which operates on the electric fields at the input (E1, E2) and generates the electric fields at two outputs would be one of the:
[-r, t; t, r], [i*r, t; t, i*r]; or [r, i*t; i*t, r]
where:
t = sqrt(transmission);
r = sqrt(1-transmission);
i = sqrt(-1)
Which matrix exactly it is depends on the specific implementation of the beamsplitter (actually there are more possibilities). However, the first one is fine for an AR-coated plate or a fused fiber coupler.
PBS: A Jones matrix per port is required:
output port 1: [0, 0; 0, 1],
output port 2: [1, 0; 0, 0],
these matrices would be applied to the polarization vector.
The phase would result from difference in optical path.
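A quick numerical check that the second NPBS matrix quoted above, [i*r, t; t, i*r], conserves energy (is unitary) for any split ratio, together with the per-port PBS Jones matrices from the answer. The input vector values are illustrative:

```python
import numpy as np

def npbs(transmission):
    """Symmetric NPBS matrix [i*r, t; t, i*r] with t^2 + r^2 = 1."""
    t = np.sqrt(transmission)
    r = np.sqrt(1.0 - transmission)
    return np.array([[1j * r, t],
                     [t, 1j * r]])

M = npbs(0.5)                                  # 50:50 splitter
print(np.allclose(M.conj().T @ M, np.eye(2)))  # True: lossless

# PBS: one Jones matrix per output port, applied to the polarization vector.
pbs_port1 = np.array([[0, 0], [0, 1]])   # passes the y (vertical) component
pbs_port2 = np.array([[1, 0], [0, 0]])   # passes the x (horizontal) component
E_in = np.array([0.3, 0.8])
print(pbs_port1 @ E_in, pbs_port2 @ E_in)
```

The i (90-degree) phase on the reflected port is what makes the matrix unitary; an all-real symmetric matrix with positive entries would violate energy conservation.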
• asked a question related to Applied Optics
Question
It sounds as if a shorter λ (more energy) should make the wave more able to pass through a given thickness of material, yet the weaker wave does it better. What are the interactions of the wave with the molecules?
You are very welcome, Dr. Marziyeh Mohebbi
• asked a question related to Applied Optics
Question
Talking to Dr. Jörn Schliewe inspired me to raise this illustrated question and how you may call these barriers in the experiment of diffraction? Would you call it n-slits or n-obstacles?
Well, first, it's N+1 obstacles, or, if you don't want to count the long walls at either end for some reason, N-1 obstacles, but certainly not N obstacles.
It doesn't really matter what you call it. In your picture the two terms are both correct, and not mutually exclusive: it is, in fact, N+1 obstacles forming N slits.
I don’t think anyone misunderstands that slits are formed by barriers, and if you talk about N slits everyone will instantly picture a barrier with slits in it. However, on a practical note, at optical wavelengths it generally isn’t possible to have free standing barriers like this. Instead the solid wall continues above and below. Generally a transmissive grating looks like a solid barrier with ”slits” cut into it. So the ”slits” term is constructivist. It is indicative of how the structure is created. You cut slits into a foil or similar. That is the dictionary definition of slit: a narrow cut. That is also how this became the standard terminology in optics because in the early experiments that is literally how gratings were made. We’ve greatly improved our “knife”, but fundamentally that is still how subtractive transmission gratings are still made today.
Terminology is for understanding, and often it uses similarity for recognition. No one thinks the arrow slits in a castle wall were literally made by cutting, but they look like cuts. If you call them slits everyone understands what you are talking about. That is the only important criterion for terminology.
In optics we always talk about the slits. This is probably because we are focused on the light. Each slit is treated as a source, we propagate on using Huygens' principle, etc. It doesn't really matter what the barriers are so long as they exist. However, we have to talk about slit width and slit spacing, so in what an artist might call "negative space" we are inevitably also describing the barrier. Everyone gets that. I don't think I'll switch to explicitly talking about the barriers any time soon.
• asked a question related to Applied Optics
Question
What would be the pulse compression limit for initial pulses from a fibre laser with 4 mW average power, 4.5 ps duration, and 0.4 nm FWHM bandwidth? After amplification to 450 mW and propagation through a length of SMF (different lengths can be chosen, 5 m, 10 m, etc., and hence different amounts of spectral broadening), I want to understand how to choose the optimal length of SMF to achieve maximum pulse compression (the pulse compression limit using spectral broadening in SMF). How would the length of SMF affect the subsequent pulse compression stage?
The famous equation is: tau_compressed [ps] = 0.05 * sqrt(tau_in [ps])
Dianov et al. Efficient compression of high-energy pulses, IEEE J. Quantum Electron. 25, 828 (1989).
Another good reference is: W. J. Tomlinson and W. H. Knox, Limits of fiber-grating optical pulse compression, J. Opt. Soc. Am. B 4, 1404 (1987).
So you can expect best compression to about 100 fs, and this may already prove to be quite a challenge.
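Plugging the 4.5 ps input duration from the question into the quoted scaling reproduces the ~100 fs figure:

```python
import math

# Scaling quoted above: tau_compressed [ps] ~ 0.05 * sqrt(tau_in [ps]).
tau_in_ps = 4.5                         # initial pulse duration from the question
tau_c_ps = 0.05 * math.sqrt(tau_in_ps)
print(f"{tau_c_ps * 1e3:.0f} fs")       # ~106 fs, i.e. "about 100 fs"
```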
• asked a question related to Applied Optics
Question
Hi,
I'm a researcher in optimization and a hobbyist photographer, and I'd like to get acquainted with lens design via the use of optimization methods. I found for example the paper "Human-competitive lens system design with evolution strategies" (2007).
Are you aware of more recent techniques to design camera lenses? Are there optimization models or benchmarks available?
Thank you,
Charlie
I have many advanced books and can send them to you by e-mail if you need them.
• asked a question related to Applied Optics
Question
When simulating light pipes, will the choice of a source (collimated beam vs angular beam) make any difference on the efficiency of the light pipe to channel light from source on one end to the detector on the other end.
Hello Avijit Prakash,
Based on my knowledge, the radiation pattern of your light source has a significant effect on your results. Therefore, I strongly agree with Dr. Sascha
Regards. - Hossien
• asked a question related to Applied Optics
Question
I have had this question for years. I frequently (almost always) find optical designs that expand collimated SMF output into a larger-diameter beam (something like two achromats after a small asphere). It makes sense if a system requires occasional beam-diameter reconfiguration, but this also happens in many clearly finalized optical designs. My question is: why don't people collimate SMF output directly into a bigger-diameter beam using a larger-diameter collimator?
I did some quick qualitative tests earlier using a beam profiler and found no obvious difference. There is also little difference in optical simulation. I understand those larger aspheric lenses are more expensive in general, but definitely not by a lot when they are within 1 inch.
I fully agree with Mike that cost is definitely a major factor.
• asked a question related to Applied Optics
Question
I have the refractive indices (n & k) of a thin film. I can estimate the real part of sub-wavelength structures by considering the shape and void fraction using a simple linear relation. However, I could not find any reference to analytically estimate/calculate the imaginary part of such sub-wavelength structures.
I could find the following but, unfortunately, this is not applicable to my question:
Kar, Meenakshi, Bhullan S. Verma, Amitabha Basu, and Raghunath Bhattacharyya. "Modeling of the refractive index and extinction coefficient of binary composite films." Applied optics 40, no. 34 (2001): 6301-6306.
Would you please suggest a reliable reference, or some that are straightforward?
The mixing rules for the real part of the index of refraction hold for the complex quantity, and thus for the imaginary part as well (or as badly). The problem is that this is a very complex problem without a clear solution. The names connected with this problem are Gladstone-Dale (Arago-Biot), Newton-Lagrange, Lorentz-Lorenz (Clausius-Mossotti), Maxwell-Garnett, Bruggeman, Looyenga... and there are nearly as many different models as names. If you want an overview I suggest
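As one concrete instance of the mixing rules named above, Maxwell-Garnett can be applied directly to the complex permittivity, which yields an effective k as well as n. Whether this particular model is valid for a given sub-wavelength structure is an assumption the user must justify separately; the numeric values below are purely illustrative:

```python
import cmath

def maxwell_garnett(n_host, n_incl, fill):
    """Effective complex index of inclusions (n_incl) in a host (n_host),
    with inclusion volume fraction `fill`, via the Maxwell-Garnett rule."""
    eps_h, eps_i = n_host**2, n_incl**2
    beta = (eps_i - eps_h) / (eps_i + 2.0 * eps_h)
    eps_eff = eps_h * (1.0 + 2.0 * fill * beta) / (1.0 - fill * beta)
    return cmath.sqrt(eps_eff)

# Example: 30% air voids in a film with n + ik = 2.0 + 0.05j.
n_eff = maxwell_garnett(n_host=2.0 + 0.05j, n_incl=1.0 + 0.0j, fill=0.3)
print(n_eff)   # both the real (n) and imaginary (k) parts come out reduced
```

The formula reduces to the pure host index at fill = 0 and the pure inclusion index at fill = 1, which is a useful sanity check for any mixing-rule implementation.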
• asked a question related to Applied Optics
Question
Dear colleagues,
I am investigating the dependence of the number of diffraction rings on concentration in third-order nonlinear organic dyes (due to nonlinear refraction and nonlinear absorption). Prof. Pramodini [1] claims that the number of diffraction rings depends linearly on the concentration. However, Prof. Hussain A. Badran [2] assumes that the number of rings increases exponentially with concentration. Our experimental curves for aniline blue and Acid Blue 29 showed a linear relationship. However, for Oil Red O, the experimental curve is neither a straight line nor an exponential curve. So, is this relationship linear or exponential?
Thank you and hoping for your insightful response.
1. S. Pramodini, P. Poornesh, Effect of conjugation length on nonlinear optical parameters of anthraquinone dyes investigated using He-Ne laser operating in CW mode, Optics & Laser Technology.
2. Badran, Hussain A.; Ali Hassan, Qusay Mohammed; Imran, Abdulameer, A Quantitative Study of the Laser-Induced Ring Pattern and Optical Limiting From 4-Chloro-3-methoxynitrobenzene Solution, Basrah Journal of Agricultural Sciences, 2015, Vol. 41, Issue 2, pp. 51-57.
What is the effect of the increase in the number of rings?
• asked a question related to Applied Optics
Question
Hi everyone. I want to buy a rectangular aperture to use as a beam stop. However, all the apertures sold on the market are circular, and I am curious why there is no other shape.
Could anyone tell me where I can buy rectangular (square) apertures? Or give some suggestions that how to make it? Thanks in advance!
Thanks for the good solutions. I know what to choose now. Thanks
• asked a question related to Applied Optics
Question
How do I splice fibres of different core size (MFD), e.g. single-mode to graded-index multimode? I am trying to splice SMF to GIMF to fabricate an SMF-GIMF-SMF saturable absorber.
Although I could splice with apparently no power loss (the splicer shows 0 dB loss), the splicer reports a "Bubble Error", even after several attempts.
note:Please have a look at the photos attached
Thanks
Hi Abbas
I have the same question! I am trying to fabricate an SMF-GIMF-SMF structure for a sensor application. I have tried many times to splice the fibres. I played with different splicer parameters such as prefusion time, overlap, arc power and so on, but almost always I got a splice with a black vertical line (fig. 1).
Once I got good splice. It happened when I accidentally broke the bad splice and respliced it again. Unfortunately, I could never repeat it.
At fig.2 you can see two spectrums. The blue one corresponds to SMF-GIMF-SMF with both bad splices. The red one corrensponds to SMF-GIMF-SMF that have one good splice. The second spectrum had a much better contrast then the first one. This is due to the fact that the modes in GIMF fibre are more equally coupled to the SMF fiber. Thus, there is a difference between coupling coefficients of modes in GIFM to SMF fibre for the "bad" and "good" splice.
P.S. Abbas, please, let me know if you will find a method, how to splice it without bubbles and black curves!:)
• asked a question related to Applied Optics
Question
We have only an LG10 beam and we have to convert it to HG(1,0).
• asked a question related to Applied Optics
Question
I am curious about the energies of EM radiation. For example, visible light has photon energies of a few eV (roughly 1.6-3.3 eV), while UV photons reach about 10 eV. When we shine EM radiation on a semiconductor, how does it affect its charge carrier concentration?
It depends on the thermal part of the incident energy, which in turn raises charge carriers from the valence band to the conduction band.
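As a quick sanity check on the energies mentioned in the question, a short Python sketch (the physical constants are CODATA values; the 1.12 eV silicon band gap is an assumed example, not from the question) converts wavelength to photon energy and compares it to a band gap:

```python
# Photon energy E = h*c/lambda, expressed in eV. Only photons with energy
# at or above the band gap can create electron-hole pairs by direct
# band-to-band absorption, increasing the carrier concentration.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # J per eV

def photon_energy_ev(wavelength_m):
    return H * C / (wavelength_m * EV)

def excites_carriers(wavelength_m, band_gap_ev):
    return photon_energy_ev(wavelength_m) >= band_gap_ev

si_gap = 1.12  # silicon band gap at room temperature, eV (assumed example)
print(photon_energy_ev(550e-9))           # green light, ~2.25 eV
print(excites_carriers(550e-9, si_gap))   # above the Si gap
print(excites_carriers(1550e-9, si_gap))  # ~0.8 eV, below the gap
```

Sub-gap photons are mostly not absorbed by band-to-band transitions, which is why the band gap, not just the total incident energy, decides whether carriers are generated.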
• asked a question related to Applied Optics
Question
Dear professors and colleagues,
I am going to study the two-photon absorption effect in Safranin O. Safranin O is an organic material, so I think the power needed to activate this effect does not have to be very high. However, two-photon absorption is a third-order nonlinear optical effect, so it is usually driven with high-power pulsed lasers, which I cannot afford to buy. So, can I stimulate two-photon absorption in Safranin O with a continuous-wave laser (808 nm or 1064 nm)? And how much laser power is required? I hope colleagues who have experience with this experiment will share useful information with me.
I look forward to hearing from you. Thanks in advance.
Yours sincerely.
Dear Nguyen, I beg your pardon; please read my advice above carefully, or at least tell us for what task you are going to apply TPA excitation. Then it would be possible to figure out correctly what laser beam intensity (W/mm²) you need.
• asked a question related to Applied Optics
Question
Dear Colleagues,
I am investigating methods to determine the photodynamic activity of photosensitizers for photodynamic therapy. One of the methods being used is absorption spectrometry. One work concludes that significant absorption of light is a prerequisite for, but not sufficient for, high photodynamic activity. My point of view is: when a photosensitizer absorbs more radiation at a certain wavelength, it will produce more ROS (reactive oxygen species), i.e. the absorption maximum should correspond to the wavelength at which the photodynamic effect is strongest. However, this point of view contradicts the viewpoint in the above work. I look forward to colleagues' explanations of this question.
Nguyen, it seems reasonable that the greater the absorption efficiency, the greater the release of ROS. This applies to both exogenous photosensitizers and endogenous porphyrins. We are preparing a paper describing the absorption spectra of intact, live planktonic pathogens, both bacteria and fungi, collected with diffuse reflection spectroscopy (please see our papers "The Black Bug Myth" and "Selective Photoantisepsis" posted on RG). I propose that these absorption spectra, if obtained in vivo, will mirror the action spectrum (clinical efficacy vs. wavelength) of the clinical application.
• asked a question related to Applied Optics
Question
I have three collimated optical beams with 1 cm separation between adjacent beams. I want to shift one of the three beams laterally, so that it moves closer to or farther from the adjacent beam with micrometer accuracy.
If you are not worried about relative phases and can tolerate a number of very weak secondary beams, perhaps the simplest way is to insert a tilted parallel glass plate into the beam you want to translate. The translation of the main transmitted beam will be Theta*T*(n-1)/n, where n is the refractive index of the glass plate, T is its thickness and Theta is the tilt of the plate's normal relative to the beam propagation direction (in radians). A 1 mm thick glass slide at 5 degrees will result in a shift of roughly 30 microns. The main drawback of this method is that more than one beam is transmitted due to the multiple reflections at the glass surfaces; however, the main transmitted beam will be several hundred times more intense than the strongest secondary beam.
Good luck
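A small Python sketch of this estimate, comparing the exact plane-parallel-plate displacement formula with the small-angle approximation quoted in the answer (n = 1.5 is an assumed typical glass index):

```python
import math

def lateral_shift(t, n, theta):
    """Exact lateral displacement of a beam through a tilted
    plane-parallel plate: thickness t, index n, tilt theta (rad)."""
    s, c = math.sin(theta), math.cos(theta)
    return t * s * (1.0 - c / math.sqrt(n * n - s * s))

def lateral_shift_small_angle(t, n, theta):
    """Small-angle approximation from the answer: d ~ theta*t*(n-1)/n."""
    return theta * t * (n - 1.0) / n

t = 1e-3                   # 1 mm glass slide
n = 1.5                    # assumed glass index
theta = math.radians(5.0)  # 5 degree tilt
print(lateral_shift(t, n, theta))              # ~2.9e-5 m, i.e. ~29 um
print(lateral_shift_small_angle(t, n, theta))  # very close to exact
```

At these small tilts the two formulas agree to well under a percent, so the simple θ·T·(n-1)/n rule is adequate for choosing plate thickness and tilt.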
• asked a question related to Applied Optics
Question
The laser beam is first expanded into an elliptical beam using a cylindrical lens. This elliptical beam is then passed through a variable attenuator in order to obtain a change in intensity with time. The attenuated elliptical beam is then incident on a fiber of approximately 0.36 mm width, clamped vertically. I would like to know an effective way of measuring what intensity of the laser beam is actually hitting the sample.
If you want to measure the intensity of the incident laser pulse, you can use a suitable photodetector which is fast enough to respond to the time variations of the pulse. Knowing the external quantum efficiency η_e, one can get the photon flux from the photocurrent: Iph = q·η_e·Φ,
where q is the electron charge and Φ is the incident photon flux in photons/s.
If you have a pulse, then you have to integrate its flux over the duration of the pulse, and likewise integrate the photocurrent over the same time.
Best wishes
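A minimal Python sketch of the conversion described above (the quantum efficiency value and the sampled-current pulse are assumptions for illustration):

```python
# From the relation in the answer, Iph = q * eta_e * Phi, the photon flux
# is Phi = Iph / (q * eta_e); for a pulse, integrate Phi over its duration.
Q = 1.602176634e-19  # electron charge, C

def photon_flux(i_ph, eta_e):
    """Incident photon flux (photons/s) from photocurrent i_ph (A)."""
    return i_ph / (Q * eta_e)

def pulse_photons(currents, dt, eta_e):
    """Total photon number: integrate sampled photocurrent over the pulse."""
    return sum(photon_flux(i, eta_e) for i in currents) * dt

eta_e = 0.8                      # assumed external quantum efficiency
print(photon_flux(1e-6, eta_e))  # 1 uA -> ~7.8e12 photons/s
```

Dividing the photon number by the known beam area on the sample then gives the fluence actually hitting it.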
• asked a question related to Applied Optics
Question
Dear Sirs,
The setup for the dispersion measurement is as follows: Hg lamp, collimator, goniometer with a triangular prism. We measure the light dispersion using the prism. To do so, we determine the minimal deviation angle of a particular colour line and then calculate the refractive index (the wavelengths are known from a standard source). To find the minimal deviation angle we rotate the prism.
In the above setup, does anybody know whether the smoothness of the minimum of the deviation angle (as a function of incidence angle) can increase the error of the measurement of the minimal deviation angle? Please correct me if there is a mistake.
For the general case of refraction in a prism spectrometer, the angle of deflection depends on the refractive index of the prism, the vertex angle of the prism, and the orientation of the prism with respect to the incident beam.
The angle of deviation is a minimum when the prism is aligned symmetrically with respect to incident and refracted output beam directions. In this orientation the deviation depends only weakly on the rotation of the prism. To calculate the refractive index, we need only the deviation angle φ, and the prism vertex angle, θ.
n = sin( (θ+φ)/2 ) / sin(θ/2)
A small error, ε, in the prism orientation gives rise to a much smaller error in the deflection angle, proportional to the square of the orientation error, ε². In practice, with reasonably careful adjustment, this error can be neglected.
I don't see how the minimum deviation angle could be made more sensitive to prism rotation, but if this were possible, it would increase the errors arising from measurements of the prism alignment.
The deviation angle does become extremely sensitive to incident angle when the internal refracted ray approaches the critical angle at either the entrance or exit face of the prism. The deviation angle is a maximum at the critical angle, where the beam will mostly be reflected rather than refracted through the prism.
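The relations above can be checked numerically. The sketch below (Python, with an assumed n = 1.5 and a 60° apex) traces a ray through the prism, recovers the index from the minimum deviation, and confirms that a 1° orientation error shifts the deviation by only a few hundredths of a degree:

```python
import math

def index_from_min_deviation(apex, dev_min):
    """n = sin((theta + phi)/2) / sin(theta/2), angles in radians."""
    return math.sin((apex + dev_min) / 2.0) / math.sin(apex / 2.0)

def deviation(apex, n, incidence):
    """Trace one ray through the prism; return the deviation angle."""
    r1 = math.asin(math.sin(incidence) / n)
    r2 = apex - r1
    exit_angle = math.asin(n * math.sin(r2))
    return incidence + exit_angle - apex

apex = math.radians(60.0)
n_true = 1.5  # assumed glass index for the round-trip check
# Minimum deviation occurs at the symmetric pass: sin(i) = n*sin(apex/2)
i_sym = math.asin(n_true * math.sin(apex / 2.0))
dev_min = deviation(apex, n_true, i_sym)
print(math.degrees(dev_min))                    # ~37.18 degrees
print(index_from_min_deviation(apex, dev_min))  # recovers 1.5
# Quadratic insensitivity: a 1 degree misalignment barely moves the minimum
print(math.degrees(deviation(apex, n_true, i_sym + math.radians(1.0)) - dev_min))
```

This illustrates the point of the answer: near the symmetric orientation the deviation curve is flat to first order, so a small alignment error contributes a negligible error to the extracted index.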
• asked a question related to Applied Optics
Question
Assume that you are living at the time when the Gregorian calendar was introduced by Pope Gregory XIII in October 1582, when Galileo Galilei was about eighteen years old.
Later, in 1632, he was tried by the Inquisition, found "vehemently suspect of heresy", and forced to recant, and he spent the rest of his life under house arrest.
The most notable thing in this matter is that people of those years could recognize the rotation, and subsequently calculate its rate and duration, but what was not clear to them was what rotates around what. At that time, what would your solution have been?
Now, if I take this sad historical event as fact, I would ask myself whether the integral theorem of Helmholtz and Kirchhoff plays a central role in the derivation of the scalar theory of diffraction along with the concept of wave-particle duality, or whether it describes the propagation of light in a diffracted space with an inhomogeneous refractive index.
One more thing: in all this, mathematics is totally neutral. People often confuse "mathematical models" (something that a natural scientist builds) with mathematics (which mathematicians do). Mathematics is not concerned with "truth" at all. All math theorems say "if A then B", but they NEVER tell you whether A is true or not. Math creates pre-fabricated logic containers; it is not concerned with the truth values of their "inputs".
Hence, you should re-formulate your question as "Do mathematical models present solutions for understanding the reality of the universe?". The answer is then obvious: some do, some don't. And often several models lead to the same conclusion. That's all, folks :-)
• asked a question related to Applied Optics
Question
I am trying to get an output laser beam from a fiber optic cable (FOC) with a beam diameter ranging from 100 microns to 1000 microns. The FOC diameter is 200 microns and its NA is 0.22. What kind of lens or combination of lenses can achieve this kind of focusing, and what should the focal length of the lens be? Any additional info is also welcome. Thanks
As John explained, it is not possible to get a collimated beam from 100 µm to 1000 µm from a 200 µm/25° light source. The beam parameter product of your source (diameter × divergence) is a constant. You can then expect 100 µm/50° or 1000 µm/5° spots. Now, if you want spots ranging continuously from 100 µm to 1000 µm, I would look first for a 10× zoom lens (5-50 mm for instance). I would then collimate the output of the fiber with a collimating lens of twice the shortest focal length of the objective (here 10 mm). The collimated beam passing through the zoom will give you spot sizes ranging from 100 µm to 1000 µm at the image plane. However, the equivalent aperture of your source is F/2.2. This means the aperture of your zoom has to be greater than F/1.1 for the 100 µm spot if you do not want to lose energy, and that will be hard for a 10× zoom lens. If you can accept a 30% loss at 100 µm, it should work. Good luck.
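The beam-parameter-product bookkeeping behind this answer can be sketched as follows (Python; the fiber values are those from the question, the target spot sizes are examples):

```python
import math

# The product (spot diameter x NA) is conserved through an ideal imaging
# system, so demanding a smaller spot forces a proportionally larger cone
# angle. Fiber: 200 um core, NA 0.22 (from the question).
def output_na(d_in, na_in, d_out):
    """NA of the relayed spot, from conservation of d * NA."""
    return d_in * na_in / d_out

d_fiber, na_fiber = 200e-6, 0.22
for d_spot in (100e-6, 200e-6, 1000e-6):
    na = output_na(d_fiber, na_fiber, d_spot)
    half_angle = math.degrees(math.asin(na))
    print(round(d_spot * 1e6), round(na, 3), round(half_angle, 1))
```

The 100 µm case comes out at NA 0.44 (about a 26° half-angle, i.e. a ~52° full cone), matching the "100 µm/50°" figure in the answer and showing why such a fast zoom aperture is needed.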
• asked a question related to Applied Optics
Question
The energy density was calculated using this formula, Energy Density = E/A (J/cm2). Here E is the input energy measured in millijoules, A is the area of the circular spot.
E values I know from LDT analysis.
How do I calculate the value of A using the following parameters?
Laser Beam diameter= 8 mm
Focal length = 20 cm
Nd:YAG laser = 1064 nm
Pulse width = 10 ns
The time duration of the laser pulse is needed if one wants to calculate the laser intensity. The intensity can be obtained by dividing the energy density by the time duration of the pulse; it will have units of W/cm².
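Under the additional assumption of a diffraction-limited Gaussian focus (and an assumed pulse energy of 1 mJ, since the actual E comes from your LDT analysis), the spot area, fluence and intensity can be estimated like this:

```python
import math

# Assumed model: Gaussian beam focused by the f = 20 cm lens; focal spot
# radius w_f ~ lambda * f / (pi * w0), with w0 the beam radius at the lens
# (8 mm diameter / 2). The 1 mJ pulse energy is illustrative only.
wavelength = 1064e-9  # Nd:YAG, m
focal_length = 0.20   # m
w0 = 4e-3             # input beam radius, m
pulse_energy = 1e-3   # J (assumed)
pulse_width = 10e-9   # s

w_f = wavelength * focal_length / (math.pi * w0)  # focal spot radius, m
area_cm2 = math.pi * w_f**2 * 1e4                 # spot area, cm^2
fluence = pulse_energy / area_cm2                 # J/cm^2
intensity = fluence / pulse_width                 # W/cm^2
print(w_f * 1e6)   # ~17 um spot radius
print(fluence)     # ~1.1e2 J/cm^2
print(intensity)   # ~1.1e10 W/cm^2
```

If the real spot is larger than the diffraction limit (aberrations, beam quality M² > 1), the area should be measured, e.g. by a knife-edge scan or burn-paper test, rather than computed.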
• asked a question related to Applied Optics
Question
In a Luminescent Solar Concentrator (LSC), the fluorophores (organic dyes, quantum dots, ...) can be embedded either in the waveguide material, or in a top- or bottom-coated layer. I want to know how the location affects the LSC performance. Are there any references treating this issue?
Thank you dear @Biswas
• asked a question related to Applied Optics
Question
Einstein stated that "The same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good". In general, one of the main principles of SR is that "the laws of nature are the same in all inertial reference frames". Is this statement true?
One simple counter example refutes the above statement. Consider the law of the equality of the angles of reflection and incidence, say, when an ideal-elastic ball is thrown with a specific angle at a flat wall.
The reflection law does not hold if the ball's movement is studied from a different inertial frame. For example, if the experiment is observed by someone who is moving with relative speed v parallel to the reflected flight of the ball, the angle of reflection is always the same for any angle of incidence, while the angle of incidence depends on the relative speed v between the frames.
In general, the angle of reflection can be smaller, equal or larger than the angle of incidence if observed from different inertial reference frames. Please see section 2.1 (page 4) of the attached article for illustrations and more details.
"Pythagoras's theorem only works in three dimensions."
The Pythagoras theorem works in all Euclidean spaces of dimension D >= 2. It can even be extended to infinite-dimensional Hilbert spaces. The notion of cross product can be generalised to every finite-dimensional space, as the wedge operator ^ of Grassmann (exterior) algebra. It can be defined on spaces where no metric is defined, i.e. where the Pythagoras theorem does not make sense. But additional properties, like the * operator, can be defined for metric spaces.
• asked a question related to Applied Optics
Question
I am preparing an optical system for the infrared, and I will need to cement two optical components together. They are made from CaF2.
Can you recommend an index-matching liquid? I am interested in the range 1-15 µm. I see that other researchers use paraffin oil, but I would still like to be sure of all the possibilities.
Or maybe there are a few liquids that would each work in part of the range, e.g. 1-5 µm, 5-10 µm, and so on?
A few came to mind, but none is perfect. You will need to choose based on your application. Each of these below has a certain low-absorption window in the MIR and FIR, and the optical dispersion (or refractive indices) of some may need to be further characterized.
(1) Acetonitrile
See "Schafer SA et al., Mechanism of Biliary Stone Fragmentation Using the Ho:YAG Laser, IEEE Trans Biomed Eng, vol 41, no 3, 1994."
(2) Glycerol, 1,3-Butylene glycol, trimethylolpropane, Topicare(TM)
See "Viator JA, et al., Spectra from 2.5-15 um of tissue phantom materials, optical clearing agents and ex vivo human skin: implications for depth profiling of human skin, Phys Med Biol, 48 (2003) N15-N24."
Best of luck,
Kin
• asked a question related to Applied Optics
Question
Dear Professors,
Can you please send me your paper entitled " A Quantitative Study of the Laser-Induced Ring Pattern and optical limiting From 4-Chloro-3-methoxynitrobenzene solution"
Best regards.
Following.
• asked a question related to Applied Optics
Question
Dear colleagues,
The Z-scan technique was proposed by Sheik-Bahae et al. [1]. Theoretically, when there is no nonlinear absorption, the Z-scan curve must be symmetric about the origin of the Z-axis. However, in practice the Z-scan curve usually shows a large asymmetry. I know the reason for this phenomenon in the case of thermo-optic nonlinear mechanisms. For the electronic nonlinear mechanism, what are the reasons for this asymmetry (apart from experimental error)?
Thank you and hoping for your insightful response.
[1] Sheik-Bahae, Mansoor, et al. "Sensitive measurement of optical nonlinearities using a single beam." IEEE journal of quantum electronics 26.4 (1990): 760-769.
When the nonlinear response is dominated by the Kerr effect (nonlinear refractive index change), the asymmetry is easy to understand in the limit of thin media. Essentially, the Kerr effect induces a "nonlinear lens" in the material which changes the subsequent propagation of the beam. The example you show in your question would correspond to the case of a negative Kerr effect (n2 < 0). When the sample is placed before the focal plane of the physical lens, the combined effect of the lens originally used to focus the beam and the negative focal length "lens" induced in the sample is to translate the beam's focal plane further along the z-axis. Thus, by the time the beam reaches the aperture it has not diverged as much, and the transmission through the aperture is higher. Conversely, when the sample is placed behind the focal plane of the physical lens, the induced negative lens leads to an increased divergence and consequently a lower transmission at the aperture. It is not just the phase shift, but the difference in curvature of the wavefronts on either side of the focus, that gives rise to the asymmetry. You can find a more detailed explanation in the book "Fundamentals of Nonlinear Optics" by Powers and Haus, chapter 8.
Basically, the optics is roughly the same as in the thermal lens case. The main difference is that for an electronic nonlinearity in the absence of absorption, the induced lens is an "instantaneous" response to the transverse profile of the incident beam, whereas in the thermal case it is the radial variation of the absorbed energy, convoluted with thermal diffusion, that produces the effective lens induced by the incident beam. Of course, the detailed shape of the Z-scan curve will be different, due to the difference in the transverse variation of the induced refractive index change in the two cases. In the "instantaneous" electronic case this variation follows the radial variation of the incident beam; in the thermal case, the balance between the power deposited by the incident beam and the diffusion of heat within the sample determines the spatial profile of the induced refractive index change.
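For reference, the standard closed-aperture trace of Sheik-Bahae et al. in the thin-sample, small-phase-shift limit can be evaluated directly. The sketch below (Python, with an assumed small negative phase shift) reproduces the peak-then-valley shape discussed above and the well-known ΔTpv ≈ 0.406·|ΔΦ0| rule:

```python
# Closed-aperture trace in the thin-sample, small-phase-shift limit:
# with x = z / z0,  T(x) = 1 + 4*dphi*x / ((x**2 + 1) * (x**2 + 9)),
# where dphi is the on-axis nonlinear phase shift. For n2 < 0 (dphi < 0)
# the peak precedes the valley along z.
def closed_aperture_T(x, dphi):
    return 1.0 + 4.0 * dphi * x / ((x * x + 1.0) * (x * x + 9.0))

dphi = -0.1  # assumed small negative phase shift (self-defocusing)
xs = [i * 0.001 - 5.0 for i in range(10001)]  # x from -5 to +5
ts = [closed_aperture_T(x, dphi) for x in xs]
x_peak = xs[ts.index(max(ts))]
x_valley = xs[ts.index(min(ts))]
print(x_peak < 0 < x_valley)   # peak before focus, valley after
print(max(ts) - min(ts))       # ~0.406 * |dphi| ~ 0.0406
```

Any experimental departure from this odd-symmetric shape (beyond the peak/valley swap set by the sign of n2) is the asymmetry the question asks about.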
• asked a question related to Applied Optics
Question
Dear colleagues,
Without nonlinear absorption, the Z-scan curve corresponding to pure nonlinear refraction is symmetric about the origin. Nonlinear absorption leads to an asymmetric Z-scan curve. Thus, the closed-aperture Z-scan of a material with both nonlinear absorption and nonlinear refraction gives an asymmetric curve. Therefore, we can develop a Matlab program that automatically generates nonlinear absorption curves such that, multiplied by the closed-aperture Z-scan curve, they reproduce a symmetric curve [1]. From this symmetric curve we can calculate the nonlinear refractive index, and from the nonlinear absorption curve produced by the Matlab program we can derive the nonlinear absorption coefficient without an open-aperture Z-scan measurement. I have applied this idea to the closed-aperture Z-scan data in works [2] and [3] and found the results perfectly consistent with those works. In summary, we can use the Matlab program, or numerical (curve-fitting) methods in general, to determine n2 and β from the closed-aperture Z-scan data alone. Why then do most works perform open-aperture Z-scan measurements to determine n2 and β? Are these measurements really necessary?
Thank you and hoping for your insightful response.
[2] Sheik-Bahae, M., Said, A. A., Wei, T. H., Hagan, D. J., & Van Stryland, E. W. (1990). Sensitive measurement of optical nonlinearities using a single beam. IEEE journal of quantum electronics, 26(4), 760-769.
[3] Abrinaei, F. (2017). Nonlinear optical response of Mg/MgO structures prepared by laser ablation method. Journal of the European Optical Society-Rapid Publications, 13(1), 15.
I think that when nonlinear refraction is dominant, you can extract the nonlinear parameters from the closed-aperture Z-scan with some confidence. However, there are cases, for example when either nonlinear refraction or absorption is dominant, where you cannot do that without ambiguity. That is why it is customary to run the open-aperture Z-scan first to get the nonlinear absorption parameters, and then use them in the closed-aperture analysis. Experimentally, all you need is a beam splitter in the far field and an extra detector to obtain both the open- and closed-aperture Z-scan traces at the same time.
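For completeness, the open-aperture trace that this answer recommends fitting first has, for two-photon absorption in the thin-sample limit, a standard closed form (Python sketch; the on-focus q0 value is an assumed illustrative number):

```python
import math

# Open-aperture trace for two-photon absorption, thin-sample limit:
#   q0(z) = beta * I0 * L_eff / (1 + (z/z0)**2)
#   T_OA(z) = ln(1 + q0) / q0   (spatially integrated Gaussian result)
# T_OA depends only on the absorption, which is why fitting it first
# removes the absorption/refraction ambiguity from the closed trace.
def open_aperture_T(x, q0_peak):
    q = q0_peak / (1.0 + x * x)  # x = z / z0
    return math.log(1.0 + q) / q if q > 0 else 1.0

q0_peak = 0.2  # assumed beta * I0 * L_eff at focus
print(open_aperture_T(0.0, q0_peak))  # minimum transmittance at focus
print(open_aperture_T(5.0, q0_peak))  # ~1 far from focus
```

Note the open-aperture trace is symmetric in z, so it constrains only β; dividing the closed-aperture data by it leaves a purely refractive trace from which n2 follows.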
• asked a question related to Applied Optics
Question
Hi there,
How can we compute the refractive index of rectangular plates with interferometric, diffraction, and polarization methods?
And what are the related setups?
Thanks.
• asked a question related to Applied Optics
Question
Dear colleagues,
I have used the LBP-1-USB Laser Beam Profiler from Newport. This device can measure two- and three-dimensional beam profiles as well as the beam radius very well. It can also measure relative power (comparing two powers). However, its results are very different from those of the optical power meter. At present, we have built a laser beam profiler following the work of Prof. S. De Iuliis:
However, I still wonder: can a laser beam profiler, in theory, measure power accurately?
I really appreciate your help with my project, Prof.Zbigniew Motyka and Prof.Maria Chiara Ubaldi.
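On the theory side, a profiler can only ever give power up to a calibration factor: summing pixel values times pixel area approximates the irradiance integral, but the camera's responsivity (counts per watt) is unknown. A small synthetic-Gaussian sketch (all frame parameters are assumed for illustration) shows that the relative measurement is nevertheless faithful:

```python
import math

# Build a synthetic Gaussian "camera frame", integrate it numerically, and
# compare with the analytic Gaussian power integral pi*w^2*I0/2. The ratio
# is ~1, i.e. the summed counts track power; absolute power still needs an
# external calibration against a power meter.
def gaussian_frame(n, pixel, w, peak):
    half = n * pixel / 2.0
    frame = []
    for i in range(n):
        row = []
        for j in range(n):
            x = -half + (i + 0.5) * pixel
            y = -half + (j + 0.5) * pixel
            row.append(peak * math.exp(-2.0 * (x * x + y * y) / (w * w)))
        frame.append(row)
    return frame

pixel, w, peak = 10e-6, 0.5e-3, 1.0   # assumed pixel pitch, waist, peak
frame = gaussian_frame(256, pixel, w, peak)
measured = sum(sum(row) for row in frame) * pixel**2
analytic = math.pi * w**2 * peak / 2.0  # exact Gaussian integral
print(measured / analytic)              # ~1.0: relative power is faithful
```

So the discrepancy with the power meter is expected: without a responsivity calibration (and careful handling of background, saturation and fill factor), a profiler measures shape and relative power, not absolute watts.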
• asked a question related to Applied Optics
Question
Currently I'm optimising a lens system which needs an MTF of 50% or more at a certain spatial frequency (e.g. 50 lp/mm) across 400-700 nm. Studying the examples of the OSLO software, I didn't find a built-in function to enforce 50% @ 50 lp/mm at 400 nm, 550 nm and 700 nm simultaneously. How can I achieve it?
Many thanks if anyone can give some suggestions
Mengjia and Jim, for comparison I summarized my Ernostar design in OSLO. It works at a FOV angle of 15 degrees, F/4.8 (the focal length is 21 mm, and the wavelengths are 0.48-0.656 microns in this case). The Petzval sum is nearly the same as in Jim's triplet. Shigeo
• asked a question related to Applied Optics
Question
Dear Colleagues,
I'm studying the self-defocusing effect in the organic dye Aniline Blue. Initially this material is in powder form. However, to exploit its applications, we have to convert it into a solid film by the free-radical bulk polymerization method. Through observation of the organic film, I found that aniline blue did not dissolve well in the solvent. Can anyone explain to me why aniline blue is poorly soluble but still gives a good self-defocusing effect?
Thank you and hoping for your insightful response.
Thank you very much, Dr. Reza Taheri Ghahrizjani
• asked a question related to Applied Optics
Question
Dear Colleagues,
I'm studying the self-defocusing effect in an organic material. In the light beam on the screen, I observe rings like those in the attached images. Is this the result of diffraction? What is the physical mechanism behind it? And how does it affect the radius measurement, given that we usually measure the radius of a continuous light beam with no interruptions (at the dark rings)?
Thank you and hoping for your insightful response.
Dear Prof. Zbigniew Motyka
Your explanation is very good. However, there is another explanation, from Professor Qusay M Ali Hassan, as follows: "A pump laser beam with a Gaussian intensity distribution is able to induce a bell-shaped phase shift, Δϕ, in a nonlinear medium in the direction transverse to the beam, as shown in Fig. 1.
On a Gaussian curve, for any point ρ1 there exists another point ρ2 with the same slope and wave vector, so their radiation can interfere. Destructive or constructive interference occurs between the phase change at point ρ1, Δϕ(ρ1), and that at point ρ2, Δϕ(ρ2), when the relation Δϕ(ρ1) - Δϕ(ρ2) = mπ is satisfied (where m is an odd integer for destructive and an even integer for constructive interference)."
I think the diffraction effect does not affect the radius measurement, because diffraction only redistributes energy; it does not cause energy loss.
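A consequence of the interference condition quoted above is the common rule of thumb that one ring appears per 2π of peak nonlinear phase shift, so counting rings estimates ΔΦmax. A two-line sketch (the phase-shift values are assumed examples):

```python
import math

# Rule of thumb from the interference condition: number of rings
# N ~ floor(delta_phi_max / 2*pi). The small epsilon guards against
# floating-point round-off when delta_phi_max is an exact multiple of 2*pi.
def ring_count(delta_phi_max):
    return int(delta_phi_max / (2.0 * math.pi) + 1e-9)

for dphi in (2.0 * math.pi, 6.0 * math.pi, 20.0):
    print(dphi, ring_count(dphi))
```

Since ΔΦmax grows with intensity and with the induced index change, this is also why the ring count tracks concentration in the dye experiments discussed elsewhere on this page.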
• asked a question related to Applied Optics
Question
Dear Colleagues,
From relation:
P=ε_0 χ^((1)) E
I suppose that in an isotropic medium χ^((1)) is a scalar quantity and the vectors P and E point in the same direction, while in an anisotropic medium χ^((1)) is a tensor quantity and P and E generally differ in direction. However, my professor said that the above statement is not true in some special cases. Could you tell me which cases those are?
Thank you and hoping for your insightful response.
I really appreciate your help, Prof.Thomas Mayerhöfer and Prof.Pablo Acebal !
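The tensor relation in the question can be sketched numerically. The example below (Python, with an illustrative diagonal χ written in its principal-axis frame) also shows one of the "special cases": even in an anisotropic medium, P is parallel to E whenever E lies along a principal axis of χ:

```python
# Sketch of P = eps0 * chi * E with a tensor chi (principal-axis frame
# assumed; susceptibility values are illustrative, not from any material).
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

chi = [[1.2, 0.0, 0.0],
       [0.0, 1.8, 0.0],
       [0.0, 0.0, 2.5]]  # anisotropic susceptibility tensor

def polarization(chi, e):
    return [EPS0 * sum(chi[i][j] * e[j] for j in range(3)) for i in range(3)]

def is_parallel(a, b, tol=1e-15):
    # zero cross product <=> parallel vectors
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    return abs(cx) < tol and abs(cy) < tol and abs(cz) < tol

e_skew = [1.0, 1.0, 0.0]  # generic field direction
e_axis = [0.0, 1.0, 0.0]  # field along a principal axis
print(is_parallel(polarization(chi, e_skew), e_skew))  # not parallel
print(is_parallel(polarization(chi, e_axis), e_axis))  # parallel
```

So the clean isotropic/anisotropic dichotomy in the question has exceptions: for fields along the principal axes (and trivially for tensors proportional to the identity), P and E are collinear even though χ is a tensor.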
• asked a question related to Applied Optics
Question
Dear Colleagues,
I am studying a third-order nonlinear optical organic material with a negative nonlinear refractive index, while the total refractive index remains positive (n = n0 + n2·I, n2 < 0). What is the significance of the n2·I/n0 ratio in third-order nonlinear optics? And for organic materials, how large is this ratio (in my experience, it is about 1/100000)?
Thank you and hoping for your insightful response.
Dear Prof.Zbigniew Motyka and Colleagues,
So what significance does the n2·I/n0 ratio have in third-order nonlinear optics?
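One way to read the ratio is as the fractional index change Δn/n0 induced by the beam, i.e. a measure of how strongly the nonlinearity perturbs linear propagation; values around 10⁻⁵, as quoted above, mean a tiny perturbation. A minimal sketch with assumed illustrative numbers:

```python
# The ratio |n2*I| / n0 is the fractional index change the beam induces on
# top of the linear index n0. All values below are illustrative, not
# measured: n2 is a thermal-scale organic figure, I a focused CW intensity.
n0 = 1.5           # linear refractive index (assumed)
n2 = -3.0e-8       # cm^2/W (assumed, self-defocusing)
intensity = 500.0  # W/cm^2 (assumed)

delta_n = n2 * intensity        # induced index change, ~ -1.5e-5
ratio = abs(delta_n) / n0       # fractional change, ~ 1e-5
print(delta_n)
print(ratio)
```

A small ratio justifies treating the nonlinearity perturbatively (as in the standard Z-scan analysis), while a ratio approaching unity would signal the breakdown of the n = n0 + n2·I expansion.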

• asked a question related to Applied Optics
Question
Dear Colleagues,
I'm studying third-order nonlinear optical effects in organic materials (polymer films of aniline blue, Acid Blue 29, Oil Red O), but I worry that organic materials have lower thermal stability and a lower optical damage threshold than inorganic ones. Are the thermal stability and optical damage threshold of organic materials really that low? Is there a way to improve them?
Thank you and hoping for your insightful response.
Thank you very much, Prof.Madhukar Baburao Deshmukh, Uday Saha.
The mechanical strength of organic materials can be lower than that of inorganic materials. However, I have read somewhere that organic materials can have a higher damage threshold, and in this book the author states that some organic materials have much higher damage thresholds than lithium niobate: https://goo.gl/x66fAU
• asked a question related to Applied Optics
Question
There are several types of laser diodes used in the construction of holograms.
Hi,
This company sells small laser modules that include a collimation system.
They also offer a custom service.
• asked a question related to Applied Optics
Question
The Z-scan technique is very powerful and simple for determining both the sign and the magnitude of the nonlinear refractive index n2 and the nonlinear absorption coefficient β. The original Z-scan was proposed by Prof. Sheik-Bahae et al. in 1989, determining n2 and β through a closed-aperture and an open-aperture Z-scan [1,2]. This method can be called a transmittance-based Z-scan. Since then, many variants of the original Z-scan technique have been developed to enhance the sensitivity and signal-to-noise ratio. According to Prof. T. Godin [3], these variants can be categorized into four types: alteration of the input beam profile [4,5], theory optimization [6], alteration of the detection system [3,7-10], or modification of the original experimental setup [11-14].
However, when I read articles investigating third-order nonlinear characteristics of materials, the method most often used is the original Z-scan of Prof. Sheik-Bahae. Why are these variants not widely applied, and why can they not replace the original Z-scan?
Thank you and hoping for your insightful response.
1. M. Sheik-Bahae, A. A. Said, and E. W. Van Stryland, High-sensitivity, single-beam n2 measurements, Opt. Lett. 14(17) (1989) 955-957.
2. P. B. Chapple, J. Staromlynska, J. A. Hermann, T. J. McKay, R. G. McDuff, Single-Beam Z-Scan: Measurement Techniques and Analysis, J. Nonlinear Optic. Phys. Mat. 6(3) (1997) 251-293.
3. T. Godin, M. Fromager, E. Cagniot, R. Moncorgé and K. Aït-Ameur, Baryscan: a sensitive and user-friendly alternative to Z scan for weak nonlinearities measurements, Opt. Lett. 36(8) (2011) 1401-1403.
4. W. Zhao and P. Palffy-Muhoray, Z-scan technique using top-hat beams, Appl. Phys. Lett. 63 (1993) 1613.
5. S. Hughes and J. M. Burzler, Theory of Z-scan measurements using Gaussian-Bessel beams, Phys. Rev. A 56 (1997) R1103.
6. R. E. Bridges, G. L. Fisher, and R. W. Boyd, Z-scan measurement technique for non-Gaussian beams and arbitrary sample thicknesses, Opt. Lett. 20 (1995) 1821.
7. T. Xia, M. Sheik-Bahae, A. A. Said, D. J. Hagan, Z-scan and EZ-scan measurements of optical nonlinearities, J. Nonlinear Optic. Phys. Mat. 3(4) (1994) 489-500.
8. A. O. Marcano, H. Maillotte, D. Gindre, and D. Métin, Picosecond nonlinear refraction measurement in single-beam open Z scan by charge-coupled device image processing, Opt. Lett. 21 (1996) 101.
9. G. Boudebs, V. Besse, C. Cassagne, H. Leblond, and F. Sanchez, Why optical nonlinear characterization using imaging technique is a better choice?, in: Transparent Optical Networks (ICTON), 15th International Conference on, IEEE (2013) 1-4.
10. G. Tsigaridas, M. Fakis, I. Polyzos, P. Persephonis and V. Giannetas, Z-scan technique through beam radius measurements, Appl. Phys. B 76(1) (2003) 83-86.
11. G. Boudebs and S. Cherukulappurath, Nonlinear optical measurements using a 4f coherent imaging system with phase objects, Phys. Rev. A 69 (2004) 053813.
12. D. V. Petrov, A. S. L. Gomes, and C. B. de Araujo, Reflection Z-scan technique for measurements of optical properties of surfaces, Appl. Phys. Lett. 65 (1994) 1067.
13. H. Ma and C. B. de Araujo, Two-color Z-scan technique with enhanced sensitivity, Appl. Phys. Lett. 66 (1995) 1581.
14. A. A. Andrade, E. Tenorio, T. Catunda, M. L. Baesso, A. Cassanho, and H. P. Jenssen, Discrimination between electronic and thermal contributions to the nonlinear refractive index of SrAlF5:Cr3+, J. Opt. Soc. Am. B 16 (1999) 395.
Dear Lam Thanh Nguyen:
Thank you for your question, which has introduced me to the Z-Scan technology of which I have previously been completely ignorant. I don't have time to become an expert in your field, but I have just glanced at
My own impression is that as a scientist it is your job to pursue the answers to such questions you have asked by designing your own investigations to achieve the results you seek and to answer for yourself, to your own satisfaction, the questions related to which approaches give you the best information related to the knowledge you seek to gain about nature. Then, to tell others what you've learned and why you believe your work to be useful to others, whether or not your results have confirmed your initial suspicions or taught you some surprising things along the way.
I have never worried too much about doing things just as others do them routinely. Doing so would make a technologist out of this scientist. Try things out, modify your approaches to fit the need to satisfy your own curiosity, and celebrate the times when you learn something new and think that nobody has done that before.
I hope you often enjoy the feeling of success.
Sincerely, -Steve-
• asked a question related to Applied Optics
Question
Is it possible to lock a laser to the negative slope (at the half-power point) of a Fabry-Perot interferometer resonance curve by electrical feedback of the transmitted signal to the laser drive current source?
It is possible to lock a laser to the side of an etalon resonance using a photodiode to measure the transmitted power.
It is possible to change the laser wavelength by changing the laser pump power.
However, by changing the pump power you mainly change the output power. As a result, a given power transmitted through the FPI no longer corresponds to a 50% transmission point; you have coupled two parameters.
Depending on the physical laser design, this concept may still allow for laser locking but with a much lower performance.
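The side-of-fringe scheme in the question can be illustrated with a toy simulation (Python; Lorentzian cavity transmission, proportional feedback, all parameters assumed, and the power/frequency coupling discussed in the answer deliberately ignored):

```python
# Toy side-of-fringe lock: a Lorentzian transmission fringe, an error
# signal (T - 0.5) that vanishes at the half-power point, and proportional
# feedback on the laser frequency (in units of the cavity linewidth).
def transmission(nu, nu0=0.0, fwhm=1.0):
    x = 2.0 * (nu - nu0) / fwhm
    return 1.0 / (1.0 + x * x)

nu = 1.2     # start detuned, on the negative slope (nu > nu0)
gain = 0.3   # assumed proportional gain
for _ in range(200):
    error = transmission(nu) - 0.5  # zero at the half-power point
    nu += gain * error              # on the negative slope, a positive
                                    # error pushes nu back toward lock
print(nu)                 # converges near nu0 + fwhm/2 = 0.5
print(transmission(nu))   # ~0.5
```

In a real system the "frequency actuator" is the drive current, which (as noted above) moves power and frequency together, so the error signal should be normalized by the incident power, e.g. with a second photodiode.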
• asked a question related to Applied Optics
Question
In the Z-scan technique we often use a motor (a digital micrometer translation stage) to move the sample. I think the motor is used to automate the measurement. If there is no motor, we could move the sample manually along a bar marked with lines at 1 mm spacing (the sample sits on a holder that slides along the bar), determine the position of the sample by reading the lines on the bar, and record the laser intensity on the detector.
So is it necessary to use a motor? Can we move the sample by hand?
Thank you and hoping for your insightful response.
If you use a stepper motor you can increase the scan speed and thus eliminate any possible thermal response caused by overexposure of the sample to the laser spot.
• asked a question related to Applied Optics
Question
Dear colleagues,
As far as I know, reverse saturable absorption is one of the mechanisms behind the optical limiting effect: when the light intensity is high, the absorption coefficient increases, i.e., light is absorbed more strongly by the third-order nonlinear optical material. So when we illuminate the material, the output power behind it initially increases with the input power; above a certain threshold, the output power saturates. However, I am not sure whether nonlinear refraction (the nonlinear index n2) is also a mechanism for optical limiting. Nonlinear refraction only causes the beam to diverge or converge, changing the light intensity without changing the power.
So the question is: is nonlinear refraction one of the mechanisms that cause the optical limiting effect?
I am looking forward to hearing from you.
Hi Lam,
I have some experience with optical limiting (OL), and my comments on your question are: (i) the nonlinear index (NI) can play an important role in OL if the material/structure has resonant features (micro-cavities, photonic crystals, etc.), so that the intensity at the resonant mode(s) or wavelengths changes with the NI; a good design can base OL on that. (ii) In general, OL effects come mostly from nonlinear absorption (NA) (saturable absorption, two-photon absorption, etc.). Note that many nonlinear optical materials have both NA and NI.
Hope that helps. By the way, I will visit HCM city in December; if you have time we can meet there.
Best,
Dan
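The nonlinear-absorption route to limiting that Dan describes can be made concrete with a minimal sketch. All values below are assumed for illustration only: intensity obeys dI/dz = -(alpha0 + beta*I)*I, so the effective absorption grows with intensity and the transmittance drops at high input, which is exactly the limiting behaviour:

```python
# Minimal sketch of optical limiting from nonlinear absorption alone
# (all coefficients are assumed illustrative values, not from the thread).

alpha0 = 100.0   # linear absorption coefficient, 1/m
beta = 1e-9      # two-photon absorption coefficient, m/W
L = 1e-3         # sample thickness, m

def transmit(I_in, steps=2000):
    """Euler-integrate dI/dz = -(alpha0 + beta*I)*I through the sample."""
    I = I_in
    dz = L / steps
    for _ in range(steps):
        I -= (alpha0 + beta * I) * I * dz
    return I

for I_in in (1e8, 1e10, 1e12):   # input irradiance, W/m^2
    print(f"I_in = {I_in:.0e} W/m^2  ->  T = {transmit(I_in) / I_in:.3f}")
```

At low intensity the transmittance approaches the linear value exp(-alpha0*L); as the intensity rises, the two-photon term clamps the output, which is the limiting signature.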
• asked a question related to Applied Optics
Question
Dear colleagues,
Recently, we have seen many studies of third-order nonlinear optical effects and optical limiting in organic materials with CW lasers. Some researchers wonder whether these effects are within the scope of nonlinear optics or are just thermal effects, because in these studies the laser wavelength is strongly absorbed by the organic material, and the self-focusing or self-defocusing occurs simply because absorption generates heat. Should we consider these effects as nonlinear optical effects? Any material absorbing the laser wavelength shows self-defocusing, so are these effects important? I think they are nonlinear optical effects, because n2 depends on intensity, and some materials that absorb the laser wavelength do not show self-defocusing.
What is your point of view on this issue?
• asked a question related to Applied Optics
Question
Dear colleagues,
I have read many papers on Z-scan measurements of materials, and I see that the error is in the range of 25 to 40%; some authors claim errors of up to 50%. Moreover, values of the nonlinear index n2 measured by Z-scan and by other methods sometimes differ by more than one order of magnitude. So in which range does the error of the Z-scan method lie? And by how many orders of magnitude do n2 values from Z-scan differ from those obtained with other methods such as THG, EFISH, or DFWM?
I am looking forward to hearing from you.
Dear Lam,
In my experience the main source of error in Z-scan measurements is the uncertainty in the laser beam fluence, or in other words in the beam shape. As a matter of fact, the laser beam is often assumed to be TEM00, which is not always realistic. I think this fact can explain the reported differences in published values and adds a large uncertainty to the results.
Following Rajeev Gandhi's reply, I think it is worth stressing here that measurements using CW sources cannot be compared to pulsed ones, since in the first case only the thermo-optical properties are probed. In fact, in that case the signal originates from the thermo-optical coefficient dn/dT, which is not a true third-order nonlinear coefficient.
Finally, other techniques for third-order nonlinear characterization may probe other components of the chi^(3) tensor, so their results may not be directly comparable to Z-scan results.
• asked a question related to Applied Optics
Question
I used a femtosecond laser for ablation to produce silver nanoparticles. In order to create a good colloid, I have to focus the beam. How can I find the new focus position in the medium, and how can I calculate the new focused beam size, so that the focused beam reaches the silver plate?
Michael's response is more than adequate for the focus location of an incoherent ray bundle. The answer for coherent field propagation is a bit more complicated than that. The optical field converging to the focus in air has a spherical wavefront. For a Gaussian beam, the minimum-spot location is not at the center of curvature of the spherical wave; its actual location depends on both the radius of curvature and the size of the distribution at the location where the spherical wave is considered.
The dielectric interface between the media of propagation (considered here to be flat, with no curvature) will change the radius of curvature of the wavefront (this change is consistent with Michael's treatment). Since the radius of curvature changes, the location of the spot will also change, for the same reasons as for the location in air before the interface.
Al Siegman gives these considerations a formal treatment for an ideal Gaussian beam distribution that is monochromatic and coherent (originally based on papers by Kogelnik and Li published by Bell Labs in the 1960s). The problem is that these treatments don't apply well to your situation.
First, the assumption of a Gaussian spatial distribution may not be valid. You did not describe the laser, but if it's based on fiber lasers and fiber gain segments, then the assumption is probably adequate for your purposes.
Second, the field is not monochromatic. The effective bandwidth (wavelength range) of the optical field increases as the pulse width decreases. Since the dielectric containing your sample is dispersive, the index for each wavelength is slightly different, and hence the change in the radius of curvature differs for each spectral component. That means the minimum-spot location shifts as well. This shift may not be significant, but it should be considered, as it may increase the focal spot size and reduce the field amplitude accordingly.
Third, any absorption in the dielectric medium may produce heat, which in turn leads to thermal lensing effects that are often stochastic for gaseous or liquid media. Random thermal lensing gives rise to spurious focal power that may not only shift the focus longitudinally but also move it laterally, due to a linear phase component in the random focal power.
Lastly, a femtosecond laser has a very high peak electric field amplitude, which may introduce nonlinear effects in the material. This is beyond my direct area of expertise, but I felt I should mention it for your consideration as well.
If you need to calculate the focus shift with some level of confidence, then the problem is as simple or as complicated as you feel it should be to reach your level of confidence (for a thesis, for example). If you need to maximize nanoparticle generation, then an empirical study is probably your best choice. I would start with the numbers you get by assuming a Gaussian distribution and Siegman's ABCD math; you can probably find these equations on the internet. Be aware that two conventions for the ABCD matrix values appear in the literature. I prefer matrices that always have a unity determinant, because then the spot size, radius of curvature, and waist size and location are always calculated with the vacuum wavelength. The other convention gives a determinant equal to the index of the final medium; in that case it is very important to use the wavelength in that medium to calculate the results accurately.
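As a hedged starting point for that ABCD calculation, here is a minimal sketch using the complex beam parameter q. It uses the convention in which the flat-interface matrix is [[1, 0], [0, n1/n2]] (determinant n1/n2, so in-medium spot sizes would need the wavelength in the medium, as cautioned above); extracting only the waist position, as done here, is safe in that convention. All numerical values are assumed for illustration, not taken from the question:

```python
import math

def q_propagate(q, M):
    """Apply an ABCD matrix [[A, B], [C, D]] to the complex beam parameter q."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

# Assumed illustrative values (not from the question):
wavelength = 800e-9   # fs-laser wavelength in vacuum, m
w0 = 2e-3             # collimated input beam radius at the lens, m
f = 50e-3             # focusing lens focal length, m
n_liquid = 1.33       # water-like host liquid of the colloid
d_air = 40e-3         # lens-to-liquid-surface distance, m

zR = math.pi * w0**2 / wavelength   # Rayleigh range of the input beam
q = 1j * zR                         # collimated beam: waist at the lens

q = q_propagate(q, [[1.0, 0.0], [-1.0 / f, 1.0]])        # thin lens
q = q_propagate(q, [[1.0, d_air], [0.0, 1.0]])           # free space to interface
q = q_propagate(q, [[1.0, 0.0], [0.0, 1.0 / n_liquid]])  # flat air/liquid interface

d_focus = -q.real   # waist position, measured from the interface into the liquid
print(f"focus lies {d_focus * 1e3:.2f} mm beyond the liquid surface")
```

With these numbers the focus would sit 10 mm past the interface in air, but about 13.3 mm into the liquid: the geometric factor n2/n1 recovered by the q-parameter, before any of the dispersive, thermal, or nonlinear corrections discussed above.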
• asked a question related to Applied Optics
Question
Hello everyone.
There are different types of relationships for calculating the phase-matching angle; I attach two of them, of which I can derive the second. We found that the first expression (the phase-matching condition) is not correct. I need to derive how the irradiance of the frequency-doubled beam varies with theta when the phase-matching condition is not obeyed (the part marked in yellow in the picture).
I will be grateful to anyone who can help me.
Best regards,
I have attached the answer below.
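For reference, away from phase matching the frequency-doubled irradiance follows the textbook relation I(2ω) ∝ sinc²(Δk·L/2), where Δk = k(2ω) − 2k(ω) depends on θ through the extraordinary index. A minimal numeric sketch (crystal length assumed for illustration):

```python
import math

L_crystal = 5e-3   # crystal length, m (assumed)

def shg_irradiance(delta_k, L):
    """Relative second-harmonic irradiance vs. phase mismatch:
    I(2w) ~ sinc^2(delta_k * L / 2), normalised to 1 at delta_k = 0."""
    x = delta_k * L / 2.0
    if x == 0.0:
        return 1.0
    return (math.sin(x) / x) ** 2

print(shg_irradiance(0.0, L_crystal))                       # perfect matching -> 1
print(shg_irradiance(2 * math.pi / L_crystal, L_crystal))   # first zero of the sinc
```

To get the θ-dependence one substitutes Δk(θ) from the angle-dependent extraordinary index into this expression; the sinc² envelope is what makes the doubled irradiance fall off and oscillate as θ moves off the phase-matching angle.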
• asked a question related to Applied Optics
Question
Hi everyone,
What configuration of lenses is utilized to make light sheets for imaging purposes?
Refer to PIV (particle image velocimetry). As the others have mentioned, use a cylindrical lens; the details of its application can easily be found in the PIV literature.
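A cylindrical lens focuses the beam in one axis only, leaving the other axis collimated, which is what produces the thin sheet. A rough Gaussian-optics estimate of the resulting sheet thickness, with all values (wavelength, input beam size, focal length) assumed for illustration:

```python
import math

# Assumed illustrative values, not from the thread:
wavelength = 532e-9   # typical PIV laser (frequency-doubled Nd:YAG), m
w_in = 1e-3           # input beam radius at the cylindrical lens, m
f = 100e-3            # cylindrical lens focal length, m

# Gaussian-optics estimate of the sheet half-thickness at the waist
# (focused axis only; the unfocused axis keeps the input beam width).
w_sheet = wavelength * f / (math.pi * w_in)
print(f"sheet waist half-thickness ~ {w_sheet * 1e6:.1f} um")
```

In practice PIV setups often add a spherical lens or a telescope before the cylindrical lens to control the sheet height and the waist position independently.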
• asked a question related to Applied Optics
Question
How to form the image in a circular phantom in diffuse optical imaging?
Dear All,
Minkowski not only introduced space-time (pseudo-Euclidean) and the imaginary unit into the theory of relativity; he also introduced the proper time in the frame of reference of an observer at rest and related macroscopic bodies (wikipedia/proper-time, wikipedia/собственное-время, Google/proper-time). However, in a multi-dimensional interpretation of the Lorentz transformations, space-time and the imaginary unit are not needed: elementary particles move in the full space at the basic rate (the upper limit of the speed of light) at the Compton distance from the three-dimensional space of the universe. The projection of this motion onto the extra space is a finite trajectory, which allows macroscopic bodies not to move away from the three-dimensional space while moving freely within it.
Observations show that our three-dimensional universe is isotropic and homogeneous at distances of more than 300 million light-years. This means that on such scales the curvature is the same at all points of the three-dimensional universe; therefore it is a three-dimensional sphere, and it can expand, and only in a space with a higher number of spatial dimensions. The scale of the inhomogeneities in the Universe is 100,000 times smaller than the characteristic size of the Metagalaxy. 6D cosmology gives for the radius of the Metagalaxy (the observable part of the universe) the value 3980 Mpc. The length of the great circle of the Metagalaxy passing through its border (the particle horizon) can be taken as the characteristic size. Then the size of the irregularities in the Metagalaxy is 2π·3980/100000 = 0.25 Mpc, or 815,646 light-years, which is approximately equal to the average distance between galaxies. The curvature of the universe as a three-dimensional sphere is defined as unity divided by the square of the radius of the sphere. Today's radius of the universe is 7100 Mpc. As the universe expands, its curvature relatively rapidly goes to zero. The geometric and physical characteristics of the three-dimensional universe are given by a cosmological model based on the principle of simplicity, with fixed parameters of the theory. These parameters are chosen so that the deviations of the compared values from each other are minimal.
The simplest object in six-dimensional Euclidean space is a five-dimensional sphere of disturbances in this space. The intersections of three expanding five-dimensional spheres are three expanding four-dimensional spheres, which consist of the mutual intersections of three expanding three-dimensional spheres. One of them is our three-dimensional universe.
If the formulas of Newtonian mechanics refer not to three-dimensional space but to six-dimensional space, then we obtain the formulas of special relativity and quantum mechanics, provided that the proper time of an elementary particle is proportional to the path it traverses in the additional space, and that the velocity of the particle in the whole space equals the upper limit of the speed of light. In order for the observable interaction of elementary particles to occur, the particles are held in Compton proximity to our three-dimensional Universe by cosmological forces perpendicular to our Universe. This is a Lorentz-type force in which the role of the charge is played by the mass, as for charged particles in a magnetic field oriented along the radius of the universe.
A simple interpretation of spin and isotopic spin requires at least three additional spatial dimensions. This yields a simple interpretation of the Heisenberg uncertainty relation, de Broglie waves, the Klein-Gordon equation, the intrinsic magnetic moment of the electron, and CPT symmetry. In six-dimensional space, spin and isospin are treated as projections of the total angular momentum onto our space and the additional space, respectively, and the intrinsic magnetic moment results from the rotation of the charge at the speed of light in the additional space, in an orbit of the Compton radius.
The potential energy of a particle is the energy of its motion in extra space.
A lack of spatial imagination leads to fruitless attempts to replace physics by philosophy.
Igor A. Urusovskii
• asked a question related to Applied Optics
Question
Is there a concept of optical oscillators (like an electronic LC oscillator), or can a tunable laser be used as an oscillator?
There is no difference except the wavelength. For light we can use geometric optics; for metre waves we can likewise use geometry, for example between our Sun and the star Sirius.
Regards, Alexander
• asked a question related to Applied Optics
Question
I want to measure the retardance and diattenuation of a set of retarders and polarizers at visible wavelengths. I would like to know which methods and devices can measure those properties.
Uygun Vakhidovich Valiev · National University of Uzbekistan
I believe that you can use the method for measuring the phase shift of light described by Randall D.D.: "A new photoelectrical method for the calibration of retardation plates," JOSA, Vol. 44, No. 8, pp. 600-602 (1954). It is a very successful and quite simple experimental method.
Hope it helps!
• asked a question related to Applied Optics
Question
I am quite flexible about the number of screws and the shape of the mount as long as the diameter is around 0.5".
Thorlabs, Newport, Edmund Optics, Optosigma, Siskiyou, Eksma, Standa and many others. You can use the Laser Focus World Buyer's Guide (on the Internet).
• asked a question related to Applied Optics
Question
Generation of visible light can be achieved using sum-frequency generation (SFG) of two IR sources. I am looking for information on the viability of generating orange light by SFG of 1550 nm and 980 nm pump sources.
Thank you Wei for the link to your article. It is very insightful.
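As a quick sanity check of the wavelength arithmetic: photon energies add in SFG, so the output wavelength follows from 1/λ_SFG = 1/λ1 + 1/λ2, which for the two pumps in the question does land in the orange:

```python
# Sum-frequency generation: photon energies add, so
# 1/lambda_SFG = 1/lambda_1 + 1/lambda_2.
l1, l2 = 1550.0, 980.0   # pump wavelengths from the question, nm
l_sfg = 1.0 / (1.0 / l1 + 1.0 / l2)
print(f"SFG output: {l_sfg:.1f} nm")   # ~600 nm, i.e. orange light
```

Whether this is practical then comes down to finding a nonlinear crystal (and phase-matching or quasi-phase-matching scheme) efficient for this particular pair, which the linked article discusses.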
• asked a question related to Applied Optics
Question