
Optical Imaging - Science method

Explore the latest questions and answers in Optical Imaging, and find Optical Imaging experts.
Questions related to Optical Imaging
  • asked a question related to Optical Imaging
Question
3 answers
I have a video of the Brownian motion of microbes under a microscope. From this video, I need to calculate the angular velocity of a particular prominent cell. My understanding is that I need to identify some points within the cell, track their locations in successive time frames, and obtain the velocity by dividing the distance traveled by each point by the elapsed time.
Now my question is: given the quality of the video I have, what is the best way to track points within it? The video quality is such that manual identification of points at different time snapshots is not possible.
Attached a particular time snapshot of the video as an image here.
Relevant answer
Answer
I can suggest the following approach.
1. Detect cells using instance segmentation or plain object detection. There are many approaches.
2. Track the cells. The simplest approach is the SORT tracking algorithm.
3. Analyze each cell (each track):
3.1 Align the cell crops using circle detection and build a "video" for each cell.
3.2 Calculate the optical flow.
3.3 Convert the optical flow from Cartesian to polar coordinates.
3.4 The mean tangential amplitude of the resulting vector field is the rotation speed.
You will probably need to crop this field or apply additional transformations in some of these steps.
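As a sketch of steps 3.2-3.4 above, assuming a dense optical-flow field has already been computed for an aligned cell crop (e.g. with OpenCV's Farnebäck method), the rotation rate can be estimated with NumPy alone. The function name and the rigid-rotation assumption are mine, not part of the original answer:

```python
import numpy as np

def angular_velocity(flow_u, flow_v, center, dt=1.0):
    """Estimate the rotation rate (radians per dt) of a cell crop from a
    dense optical-flow field: project each flow vector onto the tangential
    direction around the centre and divide by the radius (steps 3.3-3.4)."""
    h, w = flow_u.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dx = xx - center[0]              # x offset from the rotation centre
    dy = yy - center[1]              # y offset from the rotation centre
    r2 = dx ** 2 + dy ** 2
    mask = r2 > 1e-9                 # exclude the singular centre pixel
    # tangential flow component is (-dy*u + dx*v)/r; dividing by r again
    # gives a per-pixel angular rate, so omega = (-dy*u + dx*v)/r^2
    omega = (-dy * flow_u + dx * flow_v)[mask] / r2[mask]
    return float(np.mean(omega)) / dt
```

For a rigidly rotating crop, every pixel's tangential flow divided by its radius equals the same angular velocity, so averaging over the crop is robust to per-pixel flow noise.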
  • asked a question related to Optical Imaging
Question
2 answers
2024 4th International Conference on Image Processing and Intelligent Control (IPIC 2024) will be held from May 10 to 12, 2024 in Kuala Lumpur, Malaysia.
Conference Website: https://ais.cn/u/ZBn2Yr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Image Processing
- Image Enhancement and Recovery
- Target detection and tracking
- Image segmentation and labeling
- Feature extraction and image recognition
- Image compression and coding
......
◕ Intelligent Control
- Sensors in Intelligent Photovoltaic Systems
- Sensors and Laser Control Technology
- Optical Imaging and Image Processing in Intelligent Control
- Fiber-optic sensing technology in intelligent optoelectronic systems
......
All accepted papers will be published in conference proceedings, and submitted to EI Compendex, Inspec and Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 19, 2024
Registration Deadline: May 3, 2024
Final Paper Submission Date: May 3, 2024
Conference Dates: May 10-12, 2024
For more details, please visit:
Invitation code: AISCONF
*Using the invitation code during submission/registration gives priority review and feedback
Relevant answer
Answer
Thank you
  • asked a question related to Optical Imaging
Question
6 answers
The optical image shows both white and blue colored silver nanowires when they are placed under an optical microscope.
Specifications:
1. Reflective mode
2. Bright field
3. Magnification: 1000x
Relevant answer
Answer
As nanowires are narrow, completely opaque objects, physical effects such as diffraction, dispersion, and chromatic aberration of the microscope lenses, as well as reflection of ambient light, should be taken into account.
  • asked a question related to Optical Imaging
Question
4 answers
This material is AISI 4140 or SCM440.
What is the white etching part in this optical image?
Relevant answer
Answer
I think 5.7c is more suitable.
The attached file contains CCT diagrams for AISI 4140, so you can make a rough estimate of which structures were obtained in your case.
Once again, it is necessary to know the cooling rate and what kind of heat treatment was carried out.
  • asked a question related to Optical Imaging
Question
4 answers
Can the imcontour function in MATLAB be run on holograms as well?
The image below is an example of an activated sludge cell under a light microscope and the second figure is an example of another cell.
Relevant answer
Answer
You can create a contour plot of the data in a grayscale image using imcontour . This function is similar to the contour function in MATLAB®, but it automatically sets up the axes so their orientation and aspect ratio match the image. To label the levels of the contours, use the clabel function.
Regards,
Shafagat
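For readers working outside MATLAB, a rough matplotlib analogue of imcontour can be sketched as below. This is an illustrative approximation, not the MATLAB implementation, and the function name is hypothetical:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # headless backend for scripts
import matplotlib.pyplot as plt

def image_contour(img, levels=5):
    """Contour a grayscale image with image-style axes: equal aspect ratio
    and row 0 at the top, roughly matching what imcontour does in MATLAB."""
    fig, ax = plt.subplots()
    cs = ax.contour(img, levels=levels)
    ax.set_aspect("equal")
    ax.invert_yaxis()                      # image orientation, like imshow
    ax.clabel(cs, inline=True)             # label contour levels (cf. clabel)
    return cs
```

Whether contours of a hologram are meaningful depends on the data: imcontour simply contours intensity values, so it will run on a hologram array, but the fringes will dominate the result.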
  • asked a question related to Optical Imaging
Question
7 answers
Hello;
It is well known that when light reaches an optical element, part of it is lost through absorption, diffusion, and back reflection. In the case of mirrors, this value is well characterized, and a realistic estimate would be around 4-5% (or less, depending on the material). However, I cannot find similar information on commercial or scientific sites for beam splitters. For example, on the site of a well-known optical products company, if we enter the raw data, the percentages of reflected and transmitted light add up to more than 100% at some points on the curve! Without a doubt this has to do with the measurement methodology.
In the case of scientific articles, some estimate this absorption to be around 2% assuming that it is a block or sheet of a certain material (ignoring ghost images). However, this does not make sense since it would then be more interesting to use a dichroic beam splitter than a mirror in certain circumstances.
Of course everything will depend on the thickness, material used, AR treatment. However, I cannot find a single example and I am not able to know the order of magnitude. Does anyone know of any reference where a realistic estimate of the useful light that is lost when using a beam splitter of whatever characteristics is made?
Thanks !
Relevant answer
Answer
I think your premise is flawed. There isn't going to be "an answer," because tailoring this parameter and trading it against other properties you might like is the crux of coating design, and the answer might be anything over a wide range depending on what was designed under what set of constraints. For example, your figure of 4% for a mirror is at best a rule of thumb and most often completely wrong. Over a fair range of wavelengths, bare aluminum happens to be around 4% absorptive, while silver is only 2% absorptive in that range. Bare gold may be terribly absorptive at shorter wavelengths; at longer wavelengths it doesn't reach the 98% of silver, but over much of the IR aluminum becomes terribly absorptive and gold is the best. More importantly, mirrors are rarely uncoated, and a dielectric coating can raise the reflectivity of metallic mirrors above 99%. See, for example, Edmund Optics' "ultrafast" enhanced aluminum coating.
And that is just metallic coatings. Metal is useful over a wide wavelength range when you don’t know what a mirror is going to be used for. However, if you know the wavelength (and what acceptance angle you need, and other constraints) you can use a pure dielectric stack. Dielectric mirrors can be made very close to 100% reflective. What’s more, very little light is absorbed, so what little doesn’t reflect transmits.
That brings us to beam splitters. It is not at all difficult to make a dielectric coating in which essentially no light is absorbed: it is all either transmitted or reflected. Adding the reflected to the transmitted should yield just about 100% every time. Where you found places where they appeared to add up to more than 100%, that is just experimental or round-off error; they probably do add up to almost 100%.
  • asked a question related to Optical Imaging
Question
4 answers
I have a TerraSAR-X spotlight image and a WorldView-2 optical image, and I want to extract soil moisture or soil roughness information from these images at a regional scale. Many soil moisture models need field measurements to calculate their parameters; however, in this case no field measurements are available. Hence I would like to know of any soil moisture model, or relationship between radar backscatter and soil moisture plus surface roughness, that does not require field measurements.
Relevant answer
Answer
I suggest you consider our new publication: “Machine learning inversion approach for soil parameters estimation over vegetated agricultural areas using a combination of water cloud model and calibrated integral equation model”
  • asked a question related to Optical Imaging
Question
5 answers
I'm working with different satellite imagery and I was wondering if there is a way to see what units (DN, radiance, reflectance) the pixel values are in. Currently I'm using ENVI, QGIS and SNAP working with KOMPSAT-3 and WorldView data.
Is there a way to tell if the images have been calibrated or not?
Relevant answer
Answer
Tora Båtvik You are correct, Level 1O falls between L1R and L1G. I hadn't noticed that there was an updated version of the KOMPSAT-3 Image Data Manual. V2.1 provides more detail about KOMPSAT-3 data and the pre-processing operations applied to compensate for sensor radiometric and geometric artifacts.
According to the Manual V 2.1 "Level 1O is a processing level so called ortho ready. It is the product corrected for geometric distortions and projected to UTM. Processing for Level 1O includes all radiometric corrections and sensor corrections applied to Level 1R processing. Optical distortions are corrected and terrain effects are corrected using average height over the region...".
You may access the KOMPSAT-3 Image Data Manual V 2.1 from here;
In addition, three Enhanced Processing Levels for KOMPSAT-3 image data are also offered: Level 1R DZ, Level 1O DZ, and Level 1G DZ.
Please note that initial sensor-recorded signals are calibrated to radiance values using gains and offsets and then 'rescaled' to DNs. "DN" is commonly used to describe pixel values that have not yet been calibrated into physically meaningful units. The three levels of KOMPSAT-3 image data, Level 1R, Level 1O, and Level 1G, underwent 'basic processing'. As with L1R, the Level 1O product is corrected for radiometric and sensor distortions: the difference in relative radiometric response between detectors is corrected, and internal detector geometry and mis-registrations between detectors are corrected when applicable.
So it is safe to conclude that your data has been stored as DNs (at 14 bits/pixel) after applying the aforementioned radiometric, sensor, and geometric corrections. You are now supposed to convert these DNs to TOA radiance and then to TOA reflectance, following the instructions provided on p. 17 of the manual. You may use ENVI to do this job for you.
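The DN-to-TOA conversion described above follows the standard two-step formula. The sketch below uses placeholder coefficients; the real per-band gain/offset and ESUN values must be taken from the KOMPSAT-3 Image Data Manual V2.1:

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """TOA spectral radiance from digital numbers: L = gain * DN + offset."""
    return gain * np.asarray(dn, dtype=float) + offset

def radiance_to_toa_reflectance(radiance, esun, d_au, sun_zenith_deg):
    """Standard TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s)),
    with d the Earth-Sun distance in AU and theta_s the solar zenith angle."""
    return (np.pi * radiance * d_au ** 2
            / (esun * np.cos(np.radians(sun_zenith_deg))))

# The numbers below are PLACEHOLDERS for illustration only; take the real
# per-band gain/offset and ESUN from the sensor's calibration documentation.
L = dn_to_radiance(100, gain=0.01, offset=0.5)
rho = radiance_to_toa_reflectance(L, esun=1500.0, d_au=1.0, sun_zenith_deg=30.0)
```

This is the same computation ENVI performs internally when applying radiometric calibration, so either route should give matching numbers once the correct coefficients are used.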
  • asked a question related to Optical Imaging
Question
6 answers
I am planning to use Y-90 and Cu-64 for some radiolabeling experiments. Is it possible to detect the Cerenkov signals from these solutions in a ChemiDoc system?
My purpose is to obtain preliminary results for screening purposes and then we will be sending out samples to another facility that is equipped with IVIS imagers.
Relevant answer
Answer
I agree with William Thoma that it should be able to pick up the Cherenkov signal. You may need longer exposure (5 min) and/or pixel binning. Make sure to remove any optical filters as the Cherenkov signal is broadband.
  • asked a question related to Optical Imaging
Question
13 answers
Hi everyone, I am using a reflective phase-only SLM as the phase modulation source.
I have read some papers about the influence of the incident angle. It seems that as the incident angle grows, the phase modulation shows more distortion, so it is better to keep the incident angle below 10 degrees. Over the past week I have tried two common configurations, with and without a beam splitter, and here is my problem.
1. With a beam splitter, the incident angle on the SLM is 0 degrees. However, I find that there is a second image! It seems that the second image comes from a reflection back into the original path caused by the beam splitter. When I put the pattern onto the SLM, the two images look the same, but the second one is distorted.
2. So I removed the beam splitter and used a tilted incident angle. However, the lens after the SLM blocks the beam when the incident angle is small... but the second image disappears.
So I am stuck here. Could anyone give some suggestions about this problem, from your experience with SLMs? Thanks in advance!
Relevant answer
Answer
Dear Quilang Ye
Although I don't know the details of your SLM, ghosts can often appear when the incident polarization is not oriented properly with respect to the direction of the liquid crystal's alignment. If this is your problem, one way to solve it is to use a polarizer to clean up your incident polarization, a waveplate to rotate the transmitted polarization to the proper orientation for the SLM, and a beam splitter placed between the waveplate and the SLM to couple out a portion of the modulated light. Also, be aware that the coatings on beam splitters (both on the reflective side and the antireflective side) usually have different reflection coefficients for s and p polarizations.
  • asked a question related to Optical Imaging
Question
1 answer
We are looking for glass-bottom 96-well plates suitable for high content imaging. As these are quite expensive I was looking for alternative suppliers and came across Cellvis who sell them at a more reasonable price. Has anyone used these plates for high content or similar applications and would be able to provide some feedback? Specifically with regards to variations in bottom thickness, bottom flatness, batch-to-batch variation, and suitability for standard TC procedures? Thank you,
Stefan
Relevant answer
Answer
Hi Stefan,
I'm the CEO of Cellvis, thank you for your interest in our products.
You can view the plate tech spec and dimension of the 96 well plate you mentioned at:
Whole-plate flatness variation is within 0.1 mm.
Coverglass is suitable for tissue culture; however, cell attachment is usually weaker compared with a tissue-culture-treated polystyrene dish/plate, and for some cells a coating is needed.
We also have a 96-well plate with a #1.5P coverslip. It has optical characteristics similar to glass, with cell attachment comparable to a tissue-culture-treated polystyrene dish/plate:
  • asked a question related to Optical Imaging
Question
7 answers
Mostly, CCD cameras are used for speckle measurements. I am interested in doing measurements with a CMOS camera; are there any characteristics of CMOS cameras that can affect my measurements?
E.g., lower sensitivity due to the smaller fill factor, higher temporal noise, higher pattern noise, higher dark current, and a nonlinear characteristic curve are primarily mentioned.
Relevant answer
Answer
I think this is very important work.
I wish you a nice day.
Qusay Kh. Al_Dulamey
  • asked a question related to Optical Imaging
Question
7 answers
After recording the speckle pattern, the next step is to measure the speckle contrast quantitatively and accurately. I am not sure how many pixels of an image should be considered for its calculation. The speckles were recorded under darkroom conditions.
It would be great if someone could advise from his/her experience or recommend a research paper about this specific issue. I searched but was unable to find any papers or research work related to it.
Thanks in advance!
Relevant answer
Answer
Isn't that a question of satisfying the sampling condition? If you imagine the speckle pattern is formed from a range of plane waves, then the highest-frequency component results from the interference between the two outermost (in angular terms) plane-wave components. This sets the limit on the sampling spacing you need to achieve, taking into account whether you are dealing with intensity rather than amplitude.
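As for actually computing the speckle contrast asked about above, a minimal sketch is to take K = sigma/mean over a sliding window; the 7x7 window size below is a common choice in laser speckle imaging, not a recommendation from this thread:

```python
import numpy as np

def speckle_contrast_map(img, win=7):
    """Local speckle contrast K = sigma / mean computed over a sliding
    win x win window; for fully developed speckle, K approaches 1."""
    img = np.asarray(img, dtype=float)
    windows = np.lib.stride_tricks.sliding_window_view(img, (win, win))
    mean = windows.mean(axis=(-2, -1))
    std = windows.std(axis=(-2, -1))
    return std / np.where(mean == 0, 1.0, mean)
```

Small windows (5x5 or 7x7) trade statistical accuracy of each K estimate for spatial resolution, which is exactly the "how many pixels" trade-off raised in the question.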
  • asked a question related to Optical Imaging
Question
4 answers
I need an accurate, real-time algorithm to register optical and SAR images. Can anyone help?
Relevant answer
Answer
  • asked a question related to Optical Imaging
Question
7 answers
Hi there,
I hope you are doing well.
I am working with imaging systems. I am confused about the effects of a linear polarizer in such systems (I mean, how can a linear polarizer improve the resolution?), and why is working with one polarization better than two polarizations in image processing systems?
Bests,
Relevant answer
Answer
Dear William,
Thank you for your response.
Actually, the light source that we simulate is a point source with a spectral width of 50 nm peaked around 400 nm, and the detector is a photomultiplier tube.
Bests
  • asked a question related to Optical Imaging
Question
18 answers
Hi everyone. I have run into a troublesome case. I want to acquire the Fourier spectrum of an optical image for post-processing. It is known that most of an image's energy is concentrated in the low-frequency domain; therefore, saturation is inevitable because of the strong contrast between the low- and high-frequency domains.
I tried reducing the intensity of the laser to mitigate this problem, but the intensity in the high-frequency domain is also reduced, which introduces noise and can even leave no light in those areas.
Is there any better solution? Could you give me some suggestions? Thanks in advance!
Relevant answer
Answer
Suggestions?
That depends highly on the original image/scene. Knowing nothing about it, nothing substantial can be proposed.
  • asked a question related to Optical Imaging
Question
5 answers
I am requesting expert advice on determining camera field of view and on data-processing considerations, for the purpose of proximal Structure-from-Motion 3D volumetric reconstruction of 0-2 m tall plants in an agricultural field, imaged from a platform moving at 1.5 mph, using color cameras and the pinhole optical model.
This is because, in the Maricopa Phenotyping, Plant Group Phenotyping Team 2019 experimentation under the leadership of Dr. Thompson, we plan to image cotton plants using three Nikon 1 AW1 16 MP action cameras, triggered together at 1 Hz, with one camera mounted in nadir view and two oblique on either side of the row crop. We plan to process the images using Photoscan.
However, as the final (adjustable) camera mounting positions are created via new square-tube arms on Professor PSC, I seek additional input on how to suspend, point, and set the cameras optimally, to support success in the expected subsequent large-volume SfM processing.
Please see iteration two in the Project update, “A second year Professor – Tenure Track?”
Relevant answer
Answer
Do you want to move that position over time to accommodate changing plant size, or use one position for the season? Changing positions allows more accuracy at each size, especially if you have some known objects in the mix to standardize against. In our case, we are looking for pathogens, etc., so moving the cameras makes sense as we strive to image the areas we need to evaluate; this then requires a one-size-fits-one system, which is a hassle to set up compared to a one-size-fits-all.
  • asked a question related to Optical Imaging
Question
4 answers
I have seen some workers use MATLAB in combination with ImageJ for that purpose, while others used scripting languages as well. Does this mean that ImageJ cannot do all the necessary tasks alone with all its free plugins?
Relevant answer
Answer
ImageJ is a Java-based image processing program with a lot of useful plug-ins, and one or another of them might fulfill your requirements. However, you did not mention exactly what your individual tasks on these images are. I have experience with MATLAB and Python. If you have a licence for MATLAB, you will have less stress with libraries and dependencies. If you have no licence, Python is reasonable too, and you can install it and work wherever you want. If you need algorithms using artificial intelligence (object detection, automatic (semantic) segmentation), there is probably more code and more repositories available in Python. I would only recommend ImageJ or other standalone (bio-)applications (e.g. Icy - Bioimage analysis) if you never want to program a line of code.
  • asked a question related to Optical Imaging
Question
6 answers
While analyzing the trajectories of 40 nm Au nanoparticles diffusing on a surface (dark-field optical imaging), I often end up with trajectories that cross. As a result, the trajectory of a single particle is identified as several discrete trajectories by the algorithm. I use the standard centre-of-mass method for tracking. Is there a better technique that can handle crossed trajectories by taking into consideration the velocity/directionality or other parameters of a moving particle? However, this is a random walk, so that may be difficult. The particles are quite faint and the background noise is appreciable even after post-processing, so the method also has to be robust in locating two very closely spaced particles.
Relevant answer
Answer
“Faint” particles presumably means low signal-to-noise ratio (SNR) for tracking, which means that there will always be some uncertainty about track assignments for close-passing particles.
Two possible solutions are:
  1. to discover some higher quality features in the signals that in effect drives SNR significantly higher; or
  2. to discover and include some additional prior information in terms of heuristics that the particles must obey (e.g., particle tracks can cross but they never osculate; particle-track crossings never occur in the close neighborhood of other particles; the heading of a particle does not change significantly at the point of crossing the track of another particle; etc.).
Perfect tracking requires perfect information (high SNR and/or faultless heuristics). Your track assignments will always be too uncertain, and therefore subject to errors, while SNR remains low and heuristics are incomplete or imperfect.
Do not fall into the common trap of endlessly revising/tweaking tracking algorithms to deal with yet another error you happened to notice, because then your work will churn on endlessly, with no stopping point, and without confidence in your results.
The way ahead with confidence is to squarely address the question: “How do I use an automatic tracking algorithm that I know is going to make mistakes?”
It has to be addressed statistically and practically:
  1. Assess the frequency with which tracking errors occur;
  2. Assess the cost and harm of that error frequency to the objectives of your project, or the cost/benefit of using automatic tracking with that frequency of error;
  3. Use any tracking algorithm whose error rates and cost of error are acceptable to you. That is your stopping point in tracking algorithm development.
Your goal is presumably not to develop perfect tracking in low-SNR conditions; it is rather to address some other, more interesting aspects of particle motion or dynamics. The fact that you see errors occurring on rare occasions (assuming they are rare) does not necessarily mean that you have to fix your tracking algorithm. If particle crossings are rare, then your current algorithm may already be operating acceptably.
One can imagine situations, for instance, where simply discarding particle tracks that have crossed from further analysis is a serviceable strategy for handling crossings. All you need is a crossing detector. This may or may not apply in your case.
Or you might flag all crossings for visual inspection in a manual stage of processing, in which you fix any errors using your expert judgement about each crossing event (assuming that you are a perfectly reliable judge, or at least a better judge than the algorithm, of track continuation).
The points to be made in any case are that, given finite SNR and imperfect heuristics, you generally need to accept that automatic tracking errors will occur, and you need to show that their ultimate negative influence on the objectives of your project is acceptably low and suitably managed.
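One of the strategies suggested above, flagging ambiguous close passes for manual review instead of trusting the tracker, can be sketched as a gated nearest-neighbour linker. The thresholds and function name below are illustrative assumptions, not a recommendation from the answer:

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_disp=5.0, ambiguity_ratio=1.5):
    """Greedy nearest-neighbour frame-to-frame linking with a distance gate.
    Instead of silently resolving close passes, an assignment is flagged as
    ambiguous when a second candidate lies almost as close as the first, so
    it can be routed to manual inspection. Returns (prev_i, next_j, flag)."""
    next_pts = np.asarray(next_pts, dtype=float)
    links, used = [], set()
    for i, p in enumerate(np.asarray(prev_pts, dtype=float)):
        d = np.linalg.norm(next_pts - p, axis=1)
        cand = [j for j in np.argsort(d) if j not in used and d[j] <= max_disp]
        if not cand:
            continue                     # particle lost (left field / faded)
        j = cand[0]
        # ambiguous if a second candidate is almost as close: likely crossing
        ambiguous = bool(len(cand) > 1 and d[cand[1]] < ambiguity_ratio * d[j])
        used.add(j)
        links.append((i, int(j), ambiguous))
    return links
```

Counting the flagged fraction over a whole movie also gives the error-frequency estimate called for in point 1 above.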
  • asked a question related to Optical Imaging
Question
3 answers
In the ideal case, there must be no light scattering in an optically cleared sample, so polarisation and coherence are preserved. Would it be possible to produce interference of light within the optically cleared sample?
Or, in the case of patterned light, how far would the pattern survive inside the sample?
Relevant answer
Answer
"... in the optically cleared sample there must be no light scattering"
This is not true.
Even in pure liquids, light is scattered due to density fluctuations (i.e., compressibility).
  • asked a question related to Optical Imaging
Question
6 answers
I want to code and process from scratch, using the available libraries. What could be the appropriate methods when one has only a single image?
Relevant answer
  • asked a question related to Optical Imaging
Question
9 answers
I am looking for a good program that helps me to calculate the penetration depth of light depending on some parameters? Any suggestions?
Relevant answer
Answer
There is a window of transparency, from ~1.2 to 2 microns, in which the wavelengths are too long for electronic absorption and too short for bond vibrations. This region is very popular in optical coherence tomography. Attenuation in this region is mostly due to scattering and is usually described as photon diffusion. Be careful, however: in a lot of cases the so-called "diffusion model" employed in random-walk simulations is applicable, but the corresponding "diffusion approximation", which employs parabolic diffusion equations, is not.
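Within the photon-diffusion picture mentioned above, a standard back-of-the-envelope estimate uses the effective attenuation coefficient. This is a textbook formula rather than a full program, and the coefficient values below are assumed for illustration; real values must be measured or taken from the literature:

```python
import numpy as np

def effective_penetration_depth(mu_a, mu_s_prime):
    """Effective penetration depth in the diffusion approximation:
    delta_eff = 1 / sqrt(3 * mu_a * (mu_a + mu_s')).
    Units are the inverse of the coefficients' units; valid only when
    scattering dominates absorption (mu_s' >> mu_a)."""
    return 1.0 / np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

# ASSUMED illustrative values, typical order of magnitude for soft tissue
# in the near infrared: mu_a = 0.01/mm, mu_s' = 1.0/mm.
delta = effective_penetration_depth(0.01, 1.0)
```

Dedicated tools (e.g. Monte Carlo photon-transport codes) are the usual way to go beyond this estimate, precisely because the parabolic diffusion approximation breaks down near sources and boundaries.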
  • asked a question related to Optical Imaging
Question
5 answers
I found in a preliminary study that there is a relationship between focal depth and slip in a local study of the Fiji Islands region.
Relevant answer
Answer
Dear Anup, please note that a SOLUTION being sensitive to the depth does not mean that the focal mechanism of the actual earthquake depends on the depth. If one uses waveform inversion to obtain a focal mechanism, one will get a solution that is independent of any prior determination of the depth and representative of in situ conditions.
  • asked a question related to Optical Imaging
Question
3 answers
If a subject, such as soft tissue of the human body, is exposed to a certain light, how can we measure the penetration depth of this light in that subject practically, and NOT by calculation?
Relevant answer
Answer
You can insert an optical fiber into the tissue and measure the attenuation as a function of distance... Of course you will damage the tissue, but in ex vivo samples you can try it.
  • asked a question related to Optical Imaging
Question
5 answers
Hi everyone, 
What configuration of lenses is utilized to make light sheets for imaging purposes? 
Relevant answer
Answer
Refer to PIV. As others have mentioned, use a cylindrical lens. The details of its application can easily be found in the PIV literature.
  • asked a question related to Optical Imaging
Question
9 answers
Spectral matting is a method that permits segmentation of the foreground while taking all of its details into consideration.
Relevant answer
Answer
Not at all.
I am awaiting your remarks.
  • asked a question related to Optical Imaging
Question
2 answers
Sentinel - 1 product types are:
SH (single HH polarisation)
SV (single VV polarisation)
DH (dual HH+HV polarisation)
DV (dual VV+VH polarisation)
Which polarisation is useful for surface deformation studies?
Relevant answer
Answer
If you are interested in ground deformation, use HH images in StripSAR (they could be FBD or FBS; it does not matter).
  • asked a question related to Optical Imaging
Question
2 answers
diffuse grey radiative surface 
Relevant answer
Answer
Thank you
  • asked a question related to Optical Imaging
Question
9 answers
Is there a version of optical arrays that can capture image intensities over a continuous region, rather than via the discrete sampling done in modern digital cameras in the form of pixels?
Relevant answer
This is absolutely true. What I tried to say is that lensless imaging might, in certain cases, allow such high-NA registration that pixel size would matter, whereas it usually does not when common glass objectives are used. So considerations about sampling conditions might not be important here.
  • asked a question related to Optical Imaging
Question
4 answers
I would like to use annular illumination, with an LED ring illuminator or an optical fiber bundle, for uniform (or relatively uniform) illumination of a disk surface of about a few cm in diameter.
Relevant answer
Answer
Mihail, a hot spot is harmful when you want perfect uniformity. In my experience with microscope image photos, they have always been accompanied by this. So I recommend a diffuser dome; it should be as uniform as an integrating sphere, in principle. But since your intention is only relatively uniform illumination, the Thorlabs ring is not bad. Regards, Shigeo
  • asked a question related to Optical Imaging
Question
9 answers
I am working to establish the optical density of muscle fibers (type I, II, IIa & IIx) and am having some difficulty locating the function to do so on ImageJ.
Relevant answer
Answer
If you want to measure OD with ImageJ, you first have to produce a calibration image (an 8-bit grayscale or color image with defined areas, the mean gray value of each corresponding to a known OD value under your experimental conditions). Once you have it, load the calibration image into ImageJ and measure the mean gray value of each of its areas using an appropriate area-selection tool. Then go to Analyze/Calibrate, enter "OD" in the measurement-unit box, and in the two columns below enter the mean gray values on the left and their corresponding known ODs on the right. Select a best-fitting equation, tick the "Global calibration" box, and press OK (if you are not satisfied with the chosen equation, you can select a different one by going to Analyze/Calibrate again; there is no need to enter the grayscale-OD value pairs all over).
Once you have fitted the OD values satisfactorily to a curve, load the image to be analyzed and measure whatever it is that you want to measure. Mean, min, and max gray values will now correspond to optical densities.
Hope it helps!
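The Analyze/Calibrate workflow described above can also be mimicked outside ImageJ. As an illustrative sketch (the function names are mine), fit a curve to the gray-value/OD pairs and apply it to the image:

```python
import numpy as np

def fit_od_calibration(gray_values, od_values, degree=3):
    """Fit a polynomial mapping mean gray value -> optical density,
    mirroring ImageJ's Analyze/Calibrate step. Returns a callable."""
    return np.poly1d(np.polyfit(gray_values, od_values, degree))

def apply_od_calibration(image, calibration):
    """Convert an image of gray values into OD units."""
    return calibration(np.asarray(image, dtype=float))
```

As in ImageJ, the polynomial degree plays the role of the "best-fitting equation" choice; inspect the fit against the calibration points before applying it globally.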
  • asked a question related to Optical Imaging
Question
2 answers
We're using G.652 optical fiber, and it needs a chromatic dispersion compensator, but the latency is increasing. If we use G.655 optical fiber, will the latency improve?
Relevant answer
Answer
G655 fibre typically has very slightly slower group velocity than G652 (effective index of Corning LEAF is 1.4693, compared with 1.4679 for SMF-28e+).  However, if the G652 link requires one or more dispersion compensation modules, each fibre DCM will introduce additional latency, and the G655 solution should perform better in this respect.
Does your application require fibre-based compensation?  Bragg grating dispersion compensation will introduce significantly less latency than an equivalent fibre compensator.
How much control do you have over cable manufacture and installation?  An alternative to discrete DCMs is to incorporate compensating fibre with negative dispersion into the transmission cable so that the total length of fibre is unchanged and there is minimal impact on latency.
Other options to improve dispersion tolerance without appreciable impact on latency include use of lower symbol rates.  For example via multiple WDM channels, or multi-level modulation formats such as differential QPSK, dual polarisation QPSK or higher level QAM.
Using QAM with coherent detection, high levels of dispersion can be compensated electronically or by digital signal processing.  This can introduce some additional latency, but potentially much less than a fibre-based compensation module.
Note that the lower dispersion of G655 fibre increases susceptibility to both Kerr-effect nonlinearity and to stimulated Raman crosstalk.  This may require a reduction in the per-channel launch power, particularly in dense WDM systems.  Fibre effective area is typically lower in G655, and this degrades the non-linear crosstalk further.
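The latency difference quoted above is easy to quantify from the group indices; a small sketch using the effective indices cited in the answer (Corning LEAF 1.4693 vs SMF-28e+ 1.4679):

```python
# Latency of a fibre span: t = L * n_g / c, with n_g the group index.
C_VACUUM = 299_792_458.0          # speed of light in vacuum, m/s

def fibre_latency_s(length_m, group_index):
    """One-way propagation delay of a fibre span, in seconds."""
    return length_m * group_index / C_VACUUM

# Extra delay from G.655's slightly higher index, per kilometre:
extra_per_km = fibre_latency_s(1000.0, 1.4693) - fibre_latency_s(1000.0, 1.4679)
```

This works out to roughly 5 ns per kilometre, which supports the point above: the index difference between fibre types is negligible next to the delay added by a fibre-based dispersion compensation module, whose extra fibre length contributes latency directly.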
  • asked a question related to Optical Imaging
Question
2 answers
I want to image latex beads under a phase contrast microscope. I have a suspended solution of latex beads from Sigma, and I want to know a good procedure for making a slide with the latex beads and covering it. I have tried drying the suspension on the slide, adding a drop of PBS solution and covering with a cover-slip, but I find it difficult to get a good sample this way. Could someone suggest a better procedure for preparing the sample?
Regards
Relevant answer
Answer
Hello sir,
How are you? I don't have enough knowledge about this topic, but I have told my colleagues and seniors about it. Whenever I get a solution to the question, I will inform you as soon as possible.
Regards
Ajay yadav
  • asked a question related to Optical Imaging
Question
2 answers
Hello everyone, 
Does confocal microscopy provide the absorption cross section per unit chlorophyll a? What is the best method for bio-optical measurement?
Does anyone have experience using the Satlantic HyperOCR product?
Thank you,
Uyen Nguyen
Relevant answer
Answer
Thank you so much Ronald. 
  • asked a question related to Optical Imaging
Question
6 answers
I aim to use laser rangefinding instruments to detect the presence and distance of vessels in the water, so that fishermen can be aware of their surroundings. I think that a laser will help me with this, but I would still like to know whether there are other sources that could work even more efficiently for my purposes.
Relevant answer
Answer
Laser range finding for other vessels near the surface of the sea might run into a bit of an issue: you need it most when visibility (e.g., light transmission in the air) is the worst.  My understanding is that most laser range finding systems rely on ballistic (singly scattered or reflected) photons.  On a clear night, laser range finding would do the job.  Any time when there is significant spray in the air (windy conditions, storms) the laser beam would be heavily scattered and therefore be less useful.
Importantly, for vessel detection, you don't need a lot of spatial precision- is a meter accuracy in your maps good enough?  10m?  My inclination would be to think about a modality which is not impacted by near-surface water conditions (above or below).
There is an introductory article on ocean spray in the context of weather and climate in Physics Today, Nov. 2016.  Figure 2 showed the common drop sizes, which are in a range to heavily scatter most commonly available lasers. 
regards,
David
  • asked a question related to Optical Imaging
Question
11 answers
Suppose I culture cells on a glass bottom dish for imaging, for example. The cell is an adhesive cell which tends to spread on the glass substrate. I wonder how large the gap can be, between the membrane of the cell and the glass substrate where the cell is attached to.
Actually, I had been assuming that the gap is negligible, since the cells are literally 'attached' to the glass substrate. However, I was recently told that there might be at least a ~200 nm gap between the membrane and the glass substrate, even if the cell is firmly attached to the glass. Since my experiment is sensitive at the 10 nm scale, I would like to be sure about the actual distance of the gap, if any.
Of course this depends on the cell shape and may vary locally within a single cell, but I would like to get an overall idea, particularly for the large leading edge of an adhesive cell.
Or, if you can share any idea about measuring the distance, I would appreciate it. Thank you for reading.
Relevant answer
Answer
The classic way to study this problem is to use interference reflection microscopy to measure the distance between the cell membrane and the underlying substrate (5 to 100nm distance range). There is an extensive body of literature on the subject, and there is broad consensus that the plasma membrane is NOT uniformly pancaked against the substratum. So you cannot conceive of a single distance measurement that describes membrane-to-substrate distance.
Instead, cells contact the substrate at focal adhesions (5-10nm from the glass surface), and the plasma membrane between these adhesion sites assumes an undulating pattern with variable distances between it and a 2D extracellular matrix. Electron microscopy of lamellipodial structures also show strong adhesion near the base of the membrane at the leading edge, but the protruding/extending membrane can tower a fair distance above the substratum. So 200nm might be plausible, depending on the cell type and the region of plasma membrane under consideration.
If your goal is simply to image the ventral (substrate-facing) plasma membrane, then TIRF imaging modality with a fluorescently tagged molecule is your best bet. Confocal would be the next choice if you can't do TIRF.
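As a rough illustration of why IRM reports membrane-substrate distance, here is a simplified two-beam interference model of the reflected intensity versus gap: reflections from the glass/medium and medium/membrane interfaces interfere, so close contacts appear dark. The reflection amplitudes below are illustrative assumptions, not computed Fresnel coefficients.

```python
import math

# Simplified two-beam IRM model. A pi phase shift occurs at the
# medium/membrane interface (low -> high index reflection), so at
# d = 0 the two reflections interfere destructively (dark contact).
LAMBDA_NM = 546.0   # green mercury line, commonly used in IRM
N_MEDIUM = 1.33     # refractive index of the medium filling the gap

def irm_intensity(d_nm, i1=1.0, i2=0.8):
    """Normalized reflected intensity for a membrane-substrate gap d (nm).

    i1, i2 are the (assumed) intensities of the two reflected beams.
    """
    phase = 4.0 * math.pi * N_MEDIUM * d_nm / LAMBDA_NM + math.pi
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * math.cos(phase)

# Focal adhesions (d ~ 5-10 nm) come out much darker than membrane
# regions undulating ~100 nm above the substrate:
print(irm_intensity(5.0), irm_intensity(100.0))
```

This is why focal adhesions show up as the darkest features in an IRM image, while the undulating membrane in between appears brighter.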
  • asked a question related to Optical Imaging
Question
4 answers
Hello, dear researchers, I have a modulation system which can give me the backscattered(reflection) and forward scattered (transmission) Stokes parameter of the particle under study. According to the Polarization Guide by Edward Collett, the Stokes of the elliptically polarized light  are given as:
S0=Ex0^2+Ey0^2
S1=Ex0^2-Ey0^2
S2=2Ex0*Ey0*Cos(delta)
S3=2Ex0*Ey0*Sin(delta)
where Ex0, Ey0 are the amplitudes of the orthogonal scattered E-field components and delta is the optical retardation due to the material (the particle under study).
The absorption Stokes parameters can be found from the reflection and transmission Stokes parameters, and from all three sets we can find the real and imaginary parts of the orthogonal E-field components, as well as delta (the phase angle between them) from S2 and S3.
Now my question: with the above-mentioned quantities, can I find the scattering cross section? I used wavelengths from 400 nm to 700 nm.
Relevant answer
Answer
From wikipedia: "The cross section is an effective area that quantifies the essential likelihood of a scattering event when an incident beam strikes a target object, made of separate particles."
In other words, be careful when you define transmission as forward-scattered light. I would rather say that light is either scattered, absorbed or transmitted. Extinction is defined as the sum of scattering and absorption,
Cext = Cabs + Csca,
in scattering cross sections. 
Also, the Stokes parameters describe the light, not scattering by the particle. You will have a set of Stokes parameters for your incident light, and a set of Stokes parameters for the scattered light. Note that the S0 is really the light intensity! I assume your modulation system describes the relation between the two sets of Stokes parameters i.e. the scattering by your particle (see Mueller Calculus).
Now, an easy way to find the scattering cross section would be to look at the intensities of the incident light and the scattered light:
Csca = Isca/Iinc
Just be sure to integrate the scattered intensity over all angles.
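The integration over all angles can be sketched as follows, using a toy isotropic scatterer so the result can be checked against the analytic value Csca = Psca/Iinc. The radius, irradiance and scattered power are arbitrary assumed numbers.

```python
import numpy as np

# Sketch: scattering cross-section from the far-field scattered intensity,
#   Csca = (r^2 / I_inc) * integral of I_sca(theta, phi) sin(theta) dtheta dphi
R = 1.0                      # detection radius (arbitrary units)
I_INC = 2.0                  # incident irradiance (assumed)
P_SCA = 3.0                  # total scattered power (assumed)

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both
_trapz = getattr(np, "trapezoid", None) or np.trapz

theta = np.linspace(0.0, np.pi, 721)
phi = np.linspace(0.0, 2.0 * np.pi, 721)
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# Isotropic far-field intensity (power per unit area at radius R)
I_sca = np.full_like(TH, P_SCA / (4.0 * np.pi * R**2))

integrand = I_sca * np.sin(TH)
inner = _trapz(integrand, phi, axis=1)         # integrate over phi
csca = R**2 / I_INC * _trapz(inner, theta)     # then over theta

print(round(float(csca), 4))   # analytic value: P_SCA / I_INC = 1.5
```

For a real (anisotropic) scatterer you would replace the constant I_sca with the measured or modelled angular intensity pattern; the rest of the integration is unchanged.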
H.C. van de Hulst, Light Scattering by Small Particles, 1957.
  • asked a question related to Optical Imaging
Question
3 answers
How do VV, VH, HH and HV polarized waves interact with smooth, rough, vertically raised objects and flat surfaces? Why do we prefer HV for the AGB (above-ground biomass) of forest cover? One reason is volume scattering / depolarization. Please explain the other reasons as well.
Relevant answer
Answer
I only want to clarify this concept:
if VV, VH, HH and HV each interact, one by one, with a single tree raised above the surface, what will the response be?
  • asked a question related to Optical Imaging
Question
5 answers
Dear Researchers,
I am new to this imaging field, so I have one question.
(1) Why do people use anti-Stokes emission for imaging?
Generally anti-Stokes emission is always weaker than Stokes emission (being a 2- or 3-photon process), right?
Then why not use Stokes emission for imaging?
e.g. in the commonly used Er case, excite at 800 nm and strong Stokes emission appears at 980 nm.
As far as I know, this fulfills all the criteria needed for imaging: the optical window of body tissue, detector sensitivity (widely available Si detectors), availability of an excitation source (strong 800 nm lasers), and so on.
Can anybody suggest the proper purpose of anti-Stokes emission in imaging?
Relevant answer
Answer
Search for papers experimentally comparing the two ways of imaging and, if you don't find good publications, carry out the experiments with the compositions you've picked in your lab - it would make a good paper.
One point still has to be mentioned - oxides are not perfect hosts for upconversion. Although there is some logic behind a direct comparison of yttria-based down-shifting and up-conversion, it would be more relevant to compare not the same host with different doping but "best in class" compositions for both types of emission. You will need something like sodium yttrium fluoride as a host for upconversion (to be put against an optimised oxide-based material exhibiting Stokes emission).
  • asked a question related to Optical Imaging
Question
3 answers
I want to evaluate biofilm on a plastic surface, and possibly I can use a confocal microscope, but first I need to know whether the laser light can pass through the plastic.
thanks
Relevant answer
Answer
High end microscope objective lenses, like those found on most confocals, are designed to image through a number 1.5 glass coverslip.  You can image through plastic but be aware the image quality will be worse.  Ibidi makes plastic bottom dishes that are more similar to glass. They are designed for high end microscopy applications.
If you use standard plastic, as long as the objective lens has enough working distance to get through the thickness of the plastic, then it can be done.  Long WD lenses are typically dry, low magnification lenses.  Look for the WD marked on the lens (at least it is for Nikon optics). 
  • asked a question related to Optical Imaging
Question
1 answer
I have no references for hydroxyapatite optical microscope images. The optical studies papers I found on HAp include only PL and SEM, but I want optical microscope images of hydroxyapatite. Can anyone clarify this for me?
Thank you
Relevant answer
Answer
In this paper the researchers measure some optical properties of different types of hydroxyapatite. Maybe it helps.
  • asked a question related to Optical Imaging
Question
5 answers
If the limit of an optical microscopy system is to resolve the finest element in group 7 of the resolution charts, which has the width of ~2.19um.
Does this mean the resolution of the system is 2.19um or double that (4.38um)?
Relevant answer
Answer
You should consider the distance from the center of a dark line to the center of the next dark line, which means 4.38 um.
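For reference, the USAF 1951 chart geometry can be computed directly. A minimal sketch, showing that group 7, element 6 gives the ~2.19 um line width mentioned in the question and a 4.38 um line-pair period:

```python
# USAF 1951 resolution target: resolution in line pairs per mm for a
# given group and element, and the corresponding single-line width.
def usaf_lp_per_mm(group, element):
    return 2.0 ** (group + (element - 1) / 6.0)

def usaf_line_width_um(group, element):
    # one line pair = one dark + one bright line, so the single-line
    # width is half the line-pair spacing
    return 1000.0 / (2.0 * usaf_lp_per_mm(group, element))

# Finest element of group 7 (element 6): ~2.19 um lines, ~4.38 um period
w = usaf_line_width_um(7, 6)
print(round(w, 2), round(2 * w, 2))   # 2.19 4.38
```

The resolvable feature is conventionally quoted as the line-pair period (here 4.38 um), since resolving a single 2.19 um bar requires distinguishing it from its neighbouring bright bar.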
  • asked a question related to Optical Imaging
Question
4 answers
Hi,
We are planning to build a high-finesse (>2000) FP cavity with mirror reflectivity of 99.93%, in order to stabilize an ECDL. This FP cavity will be used for PDH locking at a later stage, and we would like to know the AR coating necessary for the rear side of the mirrors, since we are planning to order them.
Relevant answer
Answer
Hi Muhammed,
I assume you mean HR coating if you want to obtain a 99.93% reflection on the back mirror? I don't know which wavelength you're aiming at but these HR coatings normally consist of combinations of high and low index coatings (e.g. SiO2-Si3N4, SiO2-Ta2O5). There is good software to calculate these coatings for your specific wavelength (TF_calc, Filmstar, McLeod). Very often the commercial companies who deliver you the mirrors will use this software if you give them the specs (wavelength, reflectance, polarisation, etc). For a 99.93% reflectance my guess is you will need 5-6 stacks of the material combinations I mentioned. For AR coatings the same software is used and there you probably need a single thin film (e.g SiO2) or a single combination of SiO2 and Ta2O5, depending on central wavelength and wavelength window. 
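As a quick check that R = 99.93% meets the stated finesse target, the standard expression for a symmetric lossless Fabry-Perot cavity, F = pi*sqrt(R)/(1 - R), gives:

```python
import math

# Finesse of a symmetric, lossless Fabry-Perot cavity from the
# (intensity) reflectivity R of each mirror.
def finesse(r):
    return math.pi * math.sqrt(r) / (1.0 - r)

# R = 99.93% comfortably exceeds the target finesse of 2000
print(round(finesse(0.9993)))   # well above 2000 (~4.5e3)
```

In practice mirror losses and scatter reduce the achieved finesse below this ideal value, so the margin over the 2000 target is useful.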
  • asked a question related to Optical Imaging
Question
6 answers
After taking the copper foil out of the quartz tube, I took optical images, but I can't even see the copper grains. Can anyone suggest what this is? I'm using methane as the carbon source in an Ar and H2 atmosphere. An optical image is attached here.
Relevant answer
Answer
Dear Shubhda Srivastava,
You might have used Ar during the annealing steps before growth, keeping the Cu foils directly on a quartz boat (tube). The uneven surface indicates the presence of oxygen in the reaction chamber. You can try flowing H2 for a few minutes (e.g. 10 min) after you have checked that the vacuum is proper, to ensure there is no more oxygen. After that, you can replace the H2 with Ar. Also check the purity of the Ar gas used (what ppm of oxygen it contains).
We had a similar problem for both APCVD and LPCVD at 1050 °C, but this technique worked effectively for us. Even on that surface we succeeded in growing graphene crystals, though they were difficult to transfer.
Maybe this technique will be helpful to you too.
  • asked a question related to Optical Imaging
Question
2 answers
From sub-pixel correlation of optical imagery, the migration of sand dunes in a river bed can be analyzed. How does this analysis enable me to suggest a suitable site for the construction of a bridge?
Relevant answer
Answer
I think it is important to read this file.
  • asked a question related to Optical Imaging
Question
1 answer
Gray scale images.
General microscope.
What is the most common way to estimate whether the image is focused or not?
Can you recommend an effective algorithm for automatic focusing?
Relevant answer
Answer
Hello Hong,
in order to perform any automatic correction, you need two things: an image metric and an optimization algorithm. In the case of focusing, a good metric can be the edge contrast, often referred to as acutance (https://en.wikipedia.org/wiki/Acutance), which is, in simple words, the mean square difference in intensity between adjacent pixels. Other metrics could be the variance of the image, or the high spatial frequency content of the image, computed through its Fourier transform.
Once you have your metric, which is maximum (or minimum) when the image is in focus, you need an optimization algorithm. If you are not in a hurry, you can simply scan through the focus, compute the metric for many positions, and fit the curve with a polynomial (usually a parabola) to find the maximum. Alternatively, you can use more advanced algorithms, such as stochastic gradient descent or simplex optimization, in order to find your optimum in fewer iterations.
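The two ingredients above (metric plus optimizer) can be sketched as follows; the checkerboard/flat test images and the three-point parabola refinement are illustrative choices.

```python
import numpy as np

# Focus metric: mean squared difference between adjacent pixels
# (acutance-like edge contrast). Sharp images score higher.
def focus_metric(img):
    img = np.asarray(img, dtype=float)
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return (dx ** 2).mean() + (dy ** 2).mean()

# Coarse autofocus: evaluate the metric at several stage positions,
# then fit a parabola through the peak to refine the best position.
def refine_focus(positions, scores):
    i = int(np.argmax(scores))
    i = min(max(i, 1), len(scores) - 2)      # keep 3-point window inside
    a, b, c = np.polyfit(positions[i - 1:i + 2], scores[i - 1:i + 2], 2)
    return -b / (2.0 * a)                    # vertex of the parabola

# Synthetic demo: a checkerboard (sharp) vs. a flat image (defocused)
sharp = np.indices((64, 64)).sum(axis=0) % 2 * 255.0
flat = np.full((64, 64), 128.0)
print(focus_metric(sharp) > focus_metric(flat))  # True
```

On a real microscope, `positions` would be the focus-motor coordinates of a coarse scan and `scores` the metric computed on the live image at each position.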
I hope this was useful, good luck with your experiment!
  • asked a question related to Optical Imaging
Question
7 answers
I am attaching some optical images, one of the base matrix and the other of the reinforced material. Is there any way to judge the increase in porosity from these images?
Relevant answer
Answer
Dear Dr. Sharma,
You can use metallographic software such as Dewinter Material Plus to analyse the porosity per ASTM B276. Alternatively, you can also analyse the porosity with the ImageJ software tool. ImageJ is freely available (follow the link below).
Please see the attachment for help on ImageJ.
I hope it helps
With Best Regards,
Sunny Zafar
  • asked a question related to Optical Imaging
Question
6 answers
The distance from the proximal to the distal mirrors will be about 0.5 meters.
Relevant answer
Answer
Apologies Paul, I misread your question.
If you draw a ray diagram for your device, given a known size of your proximal mirror, which I imagine will be of the order of the size of an eyepiece, you should be able to calculate the appropriate size of your distal mirror.
  • asked a question related to Optical Imaging
Question
2 answers
I am currently looking for one that can be used for simulations, and thus a non-computer graphics based algorithm/model to depict the pathway of light in multi-layered skin and tissue targets. 
Relevant answer
Answer
If you want to generalize your simulation to any geometry, you can check out NIRFAST, an open-source software package.
  • asked a question related to Optical Imaging
Question
13 answers
Hi all,
I’m planning on performing voltage-sensitive dye imaging experiments in cortical brain slices to study neuronal-astrocytic interactions. 
Since blue dyes have longer wavelengths (>600 nm), and consequently light scattering is reduced, I was thinking of using RH-1691 from Optical Imaging Ltd, but the VSD signal I observe is very small or absent.
1) Although these dyes are optimal for in vivo recordings, do you think that they could also be useful for in situ functional VSD imaging recordings in mouse/rat cortical brain slices? If not, what dye(s) do you think would be more useful?
2) What would be the optimal incubation time and washing period for a cortical brain slice incubated with the dye?
3) Finally, would it be better to use red laser excitation with stained slices or just the proper filters?
I would appreciate if anyone can give me some advice about this to optimize the technique.
Thank you,
Alba Bellot Saez
Relevant answer
Answer
Hi Alba, answers to your questions. 1. We use a different dye, RH482 (NK3630), which is an absorption dye. The good thing about an absorption dye is that it is not excited by the light, so it is not phototoxic. Another good thing about NK3630 is that it works in the near infrared, 705-720 nm. In contrast, Di-4-ANEPPS is a fluorescent dye (excitation 520 nm, emission 610 nm and longer). It has larger signals, but the recording time is shorter due to bleaching and phototoxicity. 2. We incubate for 2 hrs in diluted dye solution. In order to incubate a slice for a long time you will need to circulate the ACSF to keep the slice viable. 3. All dyes work at exact wavelengths: RH482 works at 705 nm and longer, RH1691 works at excitation 630 nm and emission >690 nm; Di-4-ANEPPS works at excitation 520 nm and emission >610 nm. You can get LED light to work; lasers are sometimes noisier than LEDs. We had a method paper for RH482:
Jin, W., Zhang, R.J., and Wu, J.Y. (2003). Voltage-sensitive dye imaging of population neuronal activity in cortical tissue. J Neurosci Methods 115, 13-27.
A method paper for in vivo RH1691:
Lippert, M.T., Takagaki, K., Xu, W., Huang, X., and Wu, J.Y. (2007). Methods for voltage-sensitive dye imaging of rat cortical activity with high signal-to-noise ratio. J Neurophysiol 98, 502-512.    
  • asked a question related to Optical Imaging
Question
2 answers
From Unwrapped SAR Interferogram Line of Sight (LOS) component & from the sub-pixel correlation of optical imagery, NS and EW true horizontal components are derived. Is it possible to get the true vertical component from these three derived components? 
Relevant answer
Answer
Yes, it should be possible - if you have 3 independent components of observation you can derive the full 3D displacement. You can first write out the InSAR LOS unit vector with its EW, NS, and vertical components, and then solve for the vertical component in terms of the others. Insert the LOS, EW and NS observations into your equation, and you can get the vertical displacement. Note that all the LOS unit vector coefficients are spatially dependent, so you will have to do it for each pixel separately. If you have even more observations, then you can do it in a least squares sense to help minimize the error.
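The per-pixel solve described above can be sketched as follows; the LOS unit vector here is an assumed example (in practice it varies across the scene and must be built from the incidence and heading angles of each pixel).

```python
import numpy as np

# Solve for the vertical displacement given the InSAR LOS measurement
# and the EW/NS components from optical image correlation:
#   d_los = u_e*d_e + u_n*d_n + u_u*d_u
#   =>  d_u = (d_los - u_e*d_e - u_n*d_n) / u_u
def vertical_from_los(d_los, d_e, d_n, los_unit):
    u_e, u_n, u_u = los_unit
    return (d_los - u_e * d_e - u_n * d_n) / u_u

# Synthetic check: forward-project a known 3D displacement, then
# recover the vertical component exactly.
los = np.array([-0.61, -0.11, 0.78])    # assumed LOS geometry (per pixel)
d_true = np.array([0.20, -0.05, 0.10])  # EW, NS, vertical (m)
d_los = float(los @ d_true)

print(round(vertical_from_los(d_los, d_true[0], d_true[1], los), 6))  # 0.1
```

With more than three observations per pixel (e.g. ascending and descending LOS), the same relation can be stacked into a matrix and solved in a least-squares sense to reduce the error.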
  • asked a question related to Optical Imaging
Question
2 answers
Hi, I have experimental optical microscopy results of oral tissue affected by cancer. I want to obtain the images using simulation methods like FDTD, FEM, etc. Is it possible to run simulations in order to get images comparable with the experimental imaging results? Which method would be useful, and how can we define the material? I mean, for defining a material we have to supply its refractive index at a specific wavelength; then how can we decide what the material is? If there is more than one material, how can we assign the refractive indices?
Relevant answer
Answer
@Farouk, this can be performed not only in FEM but also in FDTD. The real problem is how I can define the refractive index of the material over a range of wavelengths when I don't know the material.
  • asked a question related to Optical Imaging
Question
7 answers
I have serial sectioned optical micrographs, which I need to set up a 3d view.
Relevant answer
Answer
Have a look at TrakEM2, which is a plugin in ImageJ (Fiji):
In my opinion the best tool for image alignment at the moment (elastic alignment) and manual segmentation.
  • asked a question related to Optical Imaging
Question
3 answers
I am planning to register missile-borne SAR images with optical pictures, and I want to know the differences between missile-borne and air-borne SAR images.
Relevant answer
Answer
SAR cares about imaging geometry, and not the vehicle/platform on which the radar is located...  Once you have an image, you can't tell what the vehicle was from which the data was collected...  So, no difference...
  • asked a question related to Optical Imaging
Question
3 answers
In my samples you can find cracks, small spherical pores and large irregular pores/voids. So, I'm trying to understand how can you make sure that a large irregular pore/void is actually a pore and not the cross section of a crack.
I have attached two images to demonstrate my question. 
Relevant answer
Answer
Use µCT analysis. If you want, you can send a sample to our research centre (though I believe you have µCT facilities at your university). You can obtain 3D information. The small round artefacts are metallurgical pores, typical of low laser speed (high energy). The bigger ones, with a keyhole-like shape, are typical of higher laser speed and an insufficient hatch strategy.
  • asked a question related to Optical Imaging
Question
10 answers
Hello,
I have 5 LEDs with different wavelengths, and I want to superimpose them to get a broadband spectrum. I do not want to use any lens to collimate the light.
Is there any basic way to collimate the light from the 5 LEDs without using a lens?
Relevant answer
Answer
You can use a diffractive optical element, but that still acts as a lens, so I'm assuming you want to minimize losses. You can design some sort of all-reflective system that can provide collimated light and also combine the beams. Such a system could be designed so it could be molded from plastics as well. Keep in mind that you're going to need aspheric off-axis elements to design it. You're not likely to easily find what you want in a catalog; systems like this are typically designed and built as specialty systems. If you want to do this on a lab bench, it is possible, but it will require many more elements.
  • asked a question related to Optical Imaging
Question
3 answers
I am confused about selecting a digital camera to sample cotton leaf images. Can anybody suggest one, with reasons?
Relevant answer
Answer
Is it a microscopic image? How much resolution do you need? Nowadays digital cameras come in a very wide range of options. Finalising one depends on many factors, like the exact use, the funds available, etc. It is also not clear what sort of images you are looking for.
  • asked a question related to Optical Imaging
Question
4 answers
Raman spectroscopy and two-photon microscopy are to be used to image 8um thick Formalin fixed paraffin embedded (FFPE) human breast tissues. The FFPE tissue sections are to be mounted on CaF2 slides. To avoid sample floating off the slide and peeling are there any mounting media / coverslips that can be used to secure the tissue sections onto the CaF2 slide that does not influence or affect the Raman signal during imaging?
Relevant answer
Answer
Which laser excitation wavelength are you using?
Is the objective coverslip-corrected (0.17)?
If you are using 532 nm, glass is no problem (with a corrected objective).
For 785 nm, use quartz or CaF2 coverslips as well.
  • asked a question related to Optical Imaging
Question
4 answers
Linearly polarized light incident on tissue gives us different responses from the surface and the bulk in highly scattering tissues such as dermis, retina, etc.
The light reflected from the surface keeps the same polarization, while the light reflected back from the layers at depth undergoes multiple scattering that certainly depolarizes the incident light. The depolarization ratio may allow discrimination of unhealthy from healthy tissue, leading to diagnosis of disease.
This phenomenon may contribute to the diagnosis of diabetic retinopathy, dermal disorders, and cutaneous and subcutaneous diseases.
Relevant answer
Answer
     
Nishidate et al. have reported a multi-spectral diffuse reflectance imaging method based on a single snapshot of red-green-blue images for estimating melanin concentration, blood concentration, and oxygen saturation in human skin tissue. Multiple regression analysis of the absorbance spectrum against the extinction coefficients of melanin, oxygenated hemoglobin and deoxygenated hemoglobin provides the concentrations of melanin and total blood;
Sensors 2013, 13(6), 7902-7915; doi:10.3390/s130607902
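The regression step can be sketched with ordinary least squares. The extinction-coefficient spectra below are made-up illustrative curves, not the real melanin/HbO2/Hb values used by Nishidate et al.

```python
import numpy as np

# Least-squares estimation of chromophore concentrations from an
# absorbance spectrum (Beer-Lambert mixing, noiseless toy data).
wavelengths = np.linspace(450.0, 650.0, 21)
eps_melanin = 10000.0 / wavelengths                        # decays with wavelength
eps_hbo2 = 1.0 + np.exp(-((wavelengths - 540.0) / 20.0) ** 2)
eps_hb = 1.0 + np.exp(-((wavelengths - 560.0) / 25.0) ** 2)
E = np.column_stack([eps_melanin, eps_hbo2, eps_hb])       # design matrix

c_true = np.array([0.02, 0.5, 0.3])    # assumed concentrations
absorbance = E @ c_true                # synthetic measured spectrum

# Multiple regression: solve E @ c = absorbance in the least-squares sense
c_est, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
print(np.round(c_est, 3))
```

With real data the measured absorbance would include noise and scattering effects, so the recovered concentrations are estimates rather than the exact values recovered in this noiseless demonstration.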
  • asked a question related to Optical Imaging
Question
3 answers
Hi.
I plan to do CLSM on biofilm for the first time ever, at another institution, but the service provider doesn't have experience in viewing biofilm by CLSM, so I hope to get some answers from those who have done it before. How should I prepare my samples to be viewed under CLSM? Can I just grow them on glass slides or 96-well plates and then stain with the LIVE BacLight Kit? How long can I store them before getting to the institution for viewing?
Relevant answer
Answer
This file can help you.
  • asked a question related to Optical Imaging
Question
13 answers
I'm confused about two things.
1. What is the definition of image compression?
2. Is it okay to say that "compression" has been done when the size of an image is decreased by optical means?
Apart from the discrete cosine and wavelet transforms, I wonder if there is another effective optical method for image compression.
Relevant answer
Answer
Image data compression is concerned with minimizing the number of bits required to represent an image. Image compression algorithms aim to remove redundancy, i.e. repetition, in the data in a way which makes image reconstruction possible. They calculate which data needs to be kept in order to reconstruct the original image, and therefore which data can be "thrown away". By removing the redundant data, the image can be represented in a smaller number of bits, and hence can be compressed.
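A transform-coding toy example makes the redundancy-removal idea concrete: take the 2D DCT of a smooth 8x8 block, discard the small coefficients, and reconstruct with little error. The ramp block and the fixed threshold are illustrative choices, not a real JPEG quantization table.

```python
import numpy as np

# Orthonormal DCT-II matrix: C[k, n] = sqrt(2/N) cos(pi (2n+1) k / (2N)),
# with the first row scaled to 1/sqrt(N).
N = 8
idx = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

# A smooth ramp block (stand-in for typical low-detail image content)
block = np.add.outer(np.linspace(0, 64, N), np.linspace(0, 32, N))

coeffs = C @ block @ C.T                              # forward 2D DCT
kept = np.where(np.abs(coeffs) > 1.0, coeffs, 0.0)    # crude thresholding
recon = C.T @ kept @ C                                # inverse 2D DCT

nonzero = int(np.count_nonzero(kept))
max_err = float(np.abs(recon - block).max())
print(nonzero, "of", block.size, "coefficients kept; max error", round(max_err, 2))
```

Only a handful of the 64 coefficients survive, yet the reconstruction error stays well below one gray level: the discarded coefficients carried almost no information, which is exactly the redundancy a compressor removes.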
  • asked a question related to Optical Imaging
Question
5 answers
Hi everyone,
We're building a small optical setup for reflective (episcopic) imaging using a C-mount camera and an LED light source. I've read that the best illumination is called Köhler illumination, but I don't know what kind of optics we need to achieve it.
We plan to order everything we need from Thorlabs, but it can be a bit overwhelming to try and pick the right parts without expert knowledge of the optics.
Looking forward to your help!
Best,
Bjarke
Relevant answer
Answer
Hi Bjarke.
Have you looked in 'Fundamentals of Light Microscopy and Electronic Imaging' by Douglas B. Murphy and Michael W. Davidson, 2nd Edition, Wiley-Blackwell 2013 [ISBN 978-0-471-69214-0]; ?    E.g., pp. 9, 40 & 220.
Best wishes,
Richard
  • asked a question related to Optical Imaging
Question
19 answers
We are interested in constructing an intrinsic optical imaging system to investigate changes in cortical blood flow. The simple versions of this procedure look relatively straightforward to put together, but the devil is always in the details.
Relevant answer
Answer
I've built one before with a qimaging camera for $12,000. Now that the Basler Ace cameras are out, I'm going to try an acA1300-30um Monochrome. USB3 interface. $695 from Edmund Optics. Use free ephus software from Svoboda lab that runs in matlab for the frame triggering, data acquisition and data processing. Should have the whole rig built for under $3k. Will post a build to my blog once its done. Cheers.
  • asked a question related to Optical Imaging
Question
3 answers
I am not familiar with pre-processing RADARSAT imagery. Is it very different compared to pre-processing optical imagery? Is it possible to use ENVI software for this task?
Relevant answer
Answer
I think georeferencing, preprocessing, and other basic steps can be done using ENVI.
If you want to do more analysis, like interferograms etc. - yes, it can be done; however, you have to load the SARscape module into your ENVI.
  • asked a question related to Optical Imaging
Question
149 answers
Being transparent or creating an illusion? I may propose a device to transfer the images from one location to display at another location using the optical fibers. Furthermore, invisibility may be re-defined over different IR-visible and UV spectral ranges. What optical materials, nanostructures or hybrid mechanisms can be used to enhance the invisibility process?
Relevant answer
Answer
I believe the answer to your question is: no.
The reason is that when you try to do cloaking (e.g. using metamaterials), all the techniques are narrowband. To get true invisibility you need your technique to work over a wide band, and there are no good wideband techniques.
  • asked a question related to Optical Imaging
Question
47 answers
The planar phase front and the spherical phase front may change the focal point of an ideal micro-lens (one having minimal aberration). This effect may be drastically enhanced for an incident distorted wavefront.
Relevant answer
Strictly speaking, the focal point of a lens is defined as the point where a plane wave is collected into a point, in the geometrical optics approximation. So the focal point of a lens is a constant. If the lens is used for transformation of a wavefront different from a plane wave, then that wavefront can, in general, be collected not to a point but to an area with nonzero size. The position of minimum size occupied by the light may not coincide with the focal plane.
  • asked a question related to Optical Imaging
Question
12 answers
Diffraction
Relevant answer
Answer
I'm not sure I understand the sentence about "sub-wavelength technique in optical fiber for communication." Perhaps, you meant how one can actually beat the diffraction limit or achieve better than (roughly) 200nm resolution? Let's talk nano-optics then.
Nano-optics deals with optical phenomena on the nanometer scale, that is, beyond the diffraction limit of light. The spatial confinement that can be achieved for photons is inversely proportional to the spread of wavevector components in the corresponding spatial direction. Such a spread occurs in a light field that converges towards the focus (behind the lens). Thus, one obtains some finite spread in the wavevector components for the highest possible NA, which is achieved with oil-immersion microscope objectives. If one were able to achieve a wider spread of wavevectors, then light could be focused to a spot smaller than the one dictated by the diffraction limit and one could achieve sub-wavelength resolution...
In 1928 Synge published an article that outlined the concepts of what is currently known as scanning near-field optical microscopy. He stated that if one takes an opaque film with a sub-wavelength aperture in it and places it within a sub-wavelength distance from a sample, then the produced illumination spot will not be limited by diffraction but by the size of the aperture!
In modern days this concept is the basis of the near-field scanning optical microscopy (NSOM) or scanning near-field optical microscopy (SNOM). Both abbreviations have been used. The tip can be made in several ways: metal coated tapered fiber that has an aperture 10-100 nm (achieved by so called shadow coating evaporation technique) or micro-fabricated probes (for example, based on standard AFM cantilever technology). These are just examples because the technology is evolving and so do the methods of making probes with sub-wave apertures.
So, in a nutshell, this is a combination of AFM with optical probing. One brings such a probe into close contact with a sample and scans it across the surface. Collection is done in the far field with diffraction-limited optics. As in optical microscopy, various modes of operation are possible: transmission, reflection, etc. One can add nonlinear effects as a contrast mechanism (SHG, for example), fluorescence, etc.
To get more info, do a literature search for NSOM and SNOM and you will find excellent reviews and original articles on the subject. Alternatively, you may look at books on the topic, such as L. Novotny and B. Hecht, "Principles of Nano-Optics", Cambridge University Press (2006).
Hope it helps!
  • asked a question related to Optical Imaging
Question
2 answers
Using it for a multi-modal imaging system.
Relevant answer
Answer
The main advantages of using a ‘liquid crystal tunable filter’ (LCTF) for a multi-modal imaging system are: wide wavelength (tuning) range (from the visible to the near-infrared, NIR), large apertures, high switching speed, wide field of view and two-dimensional imaging.
  • asked a question related to Optical Imaging
Question
9 answers
Is there a quantum treatment of optical imaging which is comparably comprehensive as the classical treatment in Born&Wolf: Principles of Optics?
Relevant answer
Answer
Well, it is not my field of research, but I would say yes to the first question, and no to the second:
The state of a quantum system is described by its quantum state function. These functions involve all the degrees of freedom, so grows in complexity with the number of photons, becoming useless. Nevertheless, the light is almost always incoherent, and photons behave independently. It is true that lasers generate coherent radiation, but generally no entanglement, except in a few cases (e.g. two-photon sources). Two photons definitely not form images, so, in realistic cases, the intensity should be proportional to photon count on a detector.
The quantum description is useful for systems with a few photons. The double-slit experiment can be understood by building the quantum state function with even a single photon in the quantum regime, whereas diffraction is classically described by the Huygens principle. I have found this text about the link between both representations:
I won't talk about single-photon detectors or extremely weak sources; let's wait for a true expert. It seems that the kind of representation you need may exist, but restricted to some situations. I would also like to know it.
Regards, Nasser