Question
Asked 20 September 2013

CCD vs CMOS = sensitivity vs anti-blooming. What is the better choice for DHM?

Has anyone recently come across a comprehensive review comparing sensor types for DHM (digital holographic microscopy) applications? Is it true that CCDs typically out-pixel CMOS sensors and have a wider dynamic range (though at a higher price), while CMOS sensors have an anti-blooming feature, which is important for DH? Where is the compromise between the information loss due to pixel saturation, when diffraction-order pixels are bloomed by zero-order pixels, and the loss from low SNR due to poor sensor sensitivity? Does FFC (flat-field correction) solve that issue in any way? Do sCMOS-type sensors handle this problem in any way?

Popular answers (1)

David G Grier
New York University
Several considerations dictate the choice of camera for DHM applications.
1. Global shutter: You almost certainly want a camera with a global shutter. Consumer-grade cameras typically record the even and odd scan lines at different times, leading to geometric distortions if your sample moves. This can be handled by analyzing the even and odd fields separately, but requires care. Some cameras, particularly CMOS cameras, have rolling shutters, in which pixels are acquired sequentially rather than simultaneously. This results in geometric distortions that are a real pain to correct. Cameras with global shutters record all the pixels simultaneously. This is the way to go.
2. Noise/Sensitivity: CCD cameras typically have superior characteristics: Greater sensitivity and lower noise. This is a real benefit for low-light applications such as fluorescence microscopy. It is less of a problem for holography, however, because the illumination intensity can be adjusted to optimize the signal-to-noise ratio.
3. Dynamic range: CCD cameras clearly win on this front. If you are trying to resolve small differences in intensity over a wide range of intensities, you probably need a CCD camera.
4. Smear/Blooming: These defects require active correction in CCD cameras. They are not a problem for CMOS cameras, which convert charge to voltage directly at each pixel and do not use shift registers to transfer data. If you cannot avoid saturating regions of your field of view, then you probably would do better with a CMOS camera.
5. Linearity: Some CMOS cameras have logarithmic response to intensity variations. You almost certainly want to specify a camera with linear response.
6. Uniformity: The sensor's sensitivity to light can vary from pixel to pixel. Historically, this has been a bigger problem for CMOS cameras. Recently introduced cameras appear to have improved this considerably, often with on-camera calibrations. When in doubt, the sensor's response can be calibrated and nonuniformities corrected during data analysis.
7. Speed: CMOS cameras generally offer substantially higher frame rates.
8. Cost per pixel: Both CMOS and CCD cameras can have large pixel counts. CMOS cameras tend to be much cheaper.
9. Region acquisition: Because CMOS sensors are so closely related to RAM chips, it's comparatively fast and easy to acquire regions of interest, including non-contiguous regions.
Based on these considerations, my group has moved from using CCD cameras for holographic microscopy to using CMOS cameras in recent years.
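As a concrete illustration of point 6 (and of the FFC mentioned in the question), pixel-to-pixel gain variations can be corrected during data analysis with a dark frame and a flat frame. A minimal sketch in Python/NumPy; the function name and the synthetic data are illustrative, assuming you can record a dark frame (illumination blocked) and a flat frame (uniform illumination, no sample):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Correct pixel-to-pixel gain variations using dark and flat frames.

    raw  : frame recorded with the sample in place
    flat : reference frame with uniform illumination and no sample
    dark : frame recorded with the illumination blocked (fixed-pattern offset)
    """
    gain = flat.astype(float) - dark
    gain /= gain.mean()                 # normalise so the mean intensity is preserved
    return (raw.astype(float) - dark) / np.clip(gain, 1e-6, None)

# Synthetic demonstration: a uniform scene viewed through a nonuniform sensor.
rng = np.random.default_rng(0)
true_scene = np.full((64, 64), 100.0)
gain_map = 1.0 + 0.2 * rng.standard_normal((64, 64))   # pixel-to-pixel sensitivity
dark = np.full((64, 64), 5.0)

raw = true_scene * gain_map + dark
flat = 100.0 * gain_map + dark

corrected = flat_field_correct(raw, flat, dark)
print(corrected.std())   # far below raw.std(): the nonuniformity is removed
```

With a real camera, both dark and flat frames are usually averaged over many exposures to keep their shot noise from dominating the correction.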
3 Recommendations


Similar questions and discussions

How do I address a wavefront correction phase pattern to a spatial light modulator?
Question
7 answers
  • Marta Kazimierska
Dear all,
I would like to ask your advice on addressing a phase map to a spatial light modulator (reflective and transmissive), specifically on generating a wavefront-correcting pattern. I receive the wavefront error information (Zernike coefficients) from a Shack-Hartmann wavefront sensor, which is part of my setup.
The Matlab script I'm using now follows this algorithm:
* generate the wavefront pattern from the received coefficients in Cartesian coordinates; as a result I get a matrix with the OPD [waves] at each point of the grid;
* calculate the fractional remainder of each data point to get x, where the phase at that point is x·2π and x is my location on the wave.
The maximal phase shift of the modulator is 2π. The phase shift vs. grey level is close to a linear function.
Subsequently:
* if the phase is negative, the correcting phase is its absolute value, which brings the location to 0;
* if the phase is positive, the correcting phase is 1-x, which brings it to 2π.
So what I'm doing is correcting the phase by shifting it to the nearest multiple of 2π.
Then the resulting "ratio" is multiplied by a grey level and the bitmap is created, which I then display on the SLM.
Is this the correct way of thinking? It doesn't correct the wavefront of my system.
Should it look different for a reflective spatial light modulator (e.g. divided by 2, with π added at the reflection)?
Thank you in advance for any help!
Cheers,
Marta
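For what it's worth, the two branches described in the question (absolute value for negative phase, 1-x for positive) collapse into a single modulo operation: the correction is the negative of the measured OPD, wrapped into [0, 1) waves. A minimal Python/NumPy sketch (the original script is in Matlab; the function name is illustrative, and a linear phase response with 2π at the top grey level is assumed, leaving any reflective-SLM factor of two aside):

```python
import numpy as np

def opd_to_slm_bitmap(opd_waves, max_grey=255):
    """Wrap an OPD map (in waves) into [0, 1) and scale to SLM grey levels.

    Assumes the SLM phase response is linear, with grey level `max_grey`
    producing a full 2*pi shift. The correction pattern is the negative of
    the measured wavefront, wrapped modulo one wave (i.e. modulo 2*pi).
    """
    correction = np.mod(-opd_waves, 1.0)   # handles both signs in one step
    return np.round(correction * max_grey).astype(np.uint8)

# Example: a tilted wavefront spanning +/-2 waves across the aperture.
x = np.linspace(-1.0, 1.0, 512)
opd = 2.0 * x[np.newaxis, :] * np.ones((512, 1))
bitmap = opd_to_slm_bitmap(opd)
```

Note that `np.mod(-x, 1.0)` gives |x| mod 1 for negative x and 1-x for positive x in (0, 1), exactly the two cases listed above, so any discrepancy is more likely in the sign convention, the grey-level calibration, or the reflective geometry than in the wrapping itself.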

Related Publications

Article
Full-text available
In digital holography the sinusoidal interference pattern resulting from the interference between reference and object-wave is stored on a digital receiver, such as CCD or CMOS cameras. The maximum resolution of such receivers (typically 500 lp/mm) is small compared to the typical resolution of photographic plates (3000 lp/mm) originally used in op...
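The resolution figures quoted above set the maximum off-axis angle between reference and object beams, since the finest fringe period must still span two sampling intervals. A rough back-of-the-envelope calculation in Python (the 633 nm HeNe wavelength is an assumption, not from the excerpt):

```python
import math

wavelength = 633e-9                          # HeNe laser, illustrative choice

def max_reference_angle(lp_per_mm):
    """Largest reference/object beam angle the receiver can sample, in degrees.

    Nyquist limit: the fringe period must span at least two sampling
    intervals, so sin(theta_max) = wavelength / (2 * pitch).
    """
    pitch = 1e-3 / (2 * lp_per_mm)           # one line pair = two samples
    ratio = wavelength / (2 * pitch)
    return math.degrees(math.asin(min(1.0, ratio)))  # clamp: fine media resolve any angle

theta_ccd = max_reference_angle(500)         # typical CCD/CMOS receiver
theta_plate = max_reference_angle(3000)      # photographic plate
print(theta_ccd, theta_plate)                # roughly 18 degrees vs the full 90
```

This is why off-axis geometries that work with photographic plates must be redesigned with much shallower reference angles for digital receivers.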