# Novel Software-based Method to Widen Dynamic Range of CCD Sensor Images

Wei Wen, Siamak Khatibi
Department of Communication Systems, Blekinge Institute of Technology, Karlskrona, Sweden
{wei.wen, siamak.khatibi}@bth.se
Abstract. Over the past twenty years, CCD sensors have made huge progress in resolution and low-light performance through hardware improvements. However, due to physical limits of sensor design and fabrication, the fill factor has become the bottleneck for improving the quantum efficiency of CCD sensors and thereby widening the dynamic range of images. In this paper we propose a novel software-based method to widen the dynamic range by a virtual increase of the fill factor, achieved through a resampling process. The CCD images are rearranged onto a new grid of virtual pixels composed of subpixels. A statistical framework consisting of a local learning model and Bayesian inference is used to estimate the new subpixel intensities. CCD images were obtained for several known fill factors; new resampled images were then computed and compared to the respective CCD and optical images. The results show that the proposed method can significantly widen the recordable dynamic range of CCD images and virtually increase the fill factor to 100%.
Keywords: dynamic range, fill factor, CCD sensors, sensitive area, quantum efficiency
1 Introduction
Since the first digital cameras equipped with charge-coupled device (CCD) image sensors in 1975 [1], research on the human visual system has influenced digital camera design and development. In the human visual system the retina, formed by the rod and cone photoreceptors, initiates the visual process by converting a continuous image into a discrete array of signals. Curcio et al. [2] investigated the human photoreceptors and revealed that the properties of the rod and cone mosaic determine the amount of information that is retained or lost by the sampling process, including resolution acuity and detection acuity. The photoreceptor layer specialized for maximum visual acuity lies in the center of the retina, the fovea, which is 1.5 mm wide and is composed entirely of cones. Fig. 1 shows a close-up of the fovea region. The shape of the cones is close to hexagonal, and the cones in the fovea are so densely packed that the gap between any two cones is negligible. Due to this configuration, the visual acuity and neural sampling in the fovea are optimized [3].
The image sensor array in a CCD digital camera is designed by modelling the fovea of the human eye, capturing and converting the analog optical signal from a scene into a digital electrical signal. Over the past twenty years, the quality of digital camera sensors has made tremendous progress, especially in increasing resolution and improving low-light performance [4]. This has been achieved by reducing the pixel size and improving the quantum efficiency (QE) of the sensor [4, 5]. Assuming each pixel corresponds to one cone in the fovea, the pixel density in a camera sensor should approach 199,000/mm², the average peak density of fovea cones in human eyes [2]. Today the pixel density is about 24,305/mm² in a common commercial camera such as the Canon EOS 5D Mark II, and can be as high as 480,000/mm² in a specialized camera such as the Nokia 808. However, the quality of the image is not determined only by the pixel size or the quantum efficiency of the sensor [6]. As the sensor pixel size shrinks, a smaller die size and higher spatial resolution are obtained, but at the cost of a lower signal-to-noise ratio (SNR), a narrower recordable dynamic range (DR), and a lower fill factor (FF). Hardware solutions in image sensor technologies, responding to the growth of mobile imaging applications, try to compensate for the performance degradation that accompanies the decrease in pixel size. On the other hand, the only standard post-processing solution for obtaining a nearly infinite displayable dynamic range is high dynamic range imaging, which combines several images captured with different exposure times or different aperture sizes [7, 8].
Fig. 1. A close up of the fovea region [2].
In this paper we propose a novel software-based method to compensate for the degradation of the recordable dynamic range that accompanies the decrease in pixel size, by virtually increasing the fill factor. In our method the original fill factor is known and the processing is done on the captured image. The virtual increase of the fill factor is achieved by a statistical resampling process which estimates each new pixel intensity value. Our results show that it is possible to widen the dynamic range significantly by a software solution. Although there are software techniques that improve the displayable dynamic range using only one image, such as histogram equalization and image contrast enhancement [9], to the best of our knowledge our approach is the first work that tries to improve the recordable dynamic range by virtually increasing the fill factor of a CCD sensor.
The rest of the paper is organized as follows: Section 2 explains the effect of the fill factor, Section 3 describes the details of the method, Section 4 explains the experimental setup, Section 5 presents and discusses the results, and finally Section 6 concludes and discusses potential future work. It is worth mentioning that we distinguish between the recordable and the displayable dynamic range: the recordable dynamic range is the raw input dynamic range obtained when an image is captured, whereas the displayable dynamic range results from a post-processing operation on the raw captured image.
2 Effect of fill factor
The CCD sensor has reigned supreme in the realm of imaging thanks to its steadily improving performance, especially in scientific applications [5], owing to its high quantum efficiency. The QE, defined as the number of signal electrons created per incident photon, is one of the most important parameters used to evaluate the quality of a detector, and it affects the DR and SNR of captured images. Fig. 2 is a graphical representation of a CCD sensor showing buckets and rain drops (blue lines). Each bucket represents the storage of photoelectrons, modelling the photodiode and well of one pixel in the CCD sensor, and the rain drops represent the incident photons that fall onto the sensor. A bigger bucket collects more rain drops (photons), which can be seen as an increase of the spatial quantum efficiency [10], resulting in improved DR of the captured images. The temporal quantum efficiency is related to the accumulation of photoelectrons during the exposure time and is regulated by the fill factor [5].
Fig. 2. Graphical representation of a CCD sensor.
The quantum efficiency is affected by the fill factor as

QE_eff = QE × FF,

where QE_eff is the effective quantum efficiency and FF is the fill factor, i.e. the ratio of the light-sensitive area to the total area of a pixel [5]. A typical dynamic range of a CCD sensor, the ratio of the brightest accurately detectable signal to the faintest, is proportional to the number of detected photons. The fill factor of a non-modified CCD sensor varies from 30% to 75%. Fig. 3(a) and 3(b) show light incident on a sensor with high and low fill factor pixels, respectively. A sensor with high fill factor pixels captures more photons than a sensor with low fill factor pixels. Hardware innovations in image sensor technologies (e.g. decreasing the area occupied by the pixel transistors, increasing the photodiode area while maintaining a small pixel size, and using microlenses) have been introduced to increase the fill factor. An effective way to increase the fill factor is to place a microlens above each pixel, which converges light from the whole pixel unit area onto the photodiode in order to capture more photons, as shown in Fig. 3(c). Since the 1990s, various microlenses have been developed for increasing the fill factor, and they are widely used in CCD sensors [12, 13]. But it is still impossible to reach a 100% fill factor in practical production due to the physical limits of digital camera development [4]. Our proposed method compensates for the loss of input signal, caused by incident light on the non-sensitive area of each sensor pixel, to achieve a fill factor of 100%.
Fig. 3. Light incident on a sensor with high and low fill factor pixels is shown in (a) and (b), respectively. (c) A pixel with a microlens used to compensate the entering light [11].
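As a minimal numeric sketch of the relation between fill factor and effective quantum efficiency (the intrinsic QE of 0.9 is an assumed example value, not a figure from the paper):

```python
# Effective quantum efficiency limited by the fill factor: QE_eff = QE * FF
# (Section 2). The intrinsic QE of 0.9 below is an assumed example value.
def effective_qe(qe: float, fill_factor: float) -> float:
    """Effective QE of a pixel whose sensitive area covers `fill_factor` of its area."""
    return qe * fill_factor

qe = 0.9  # assumed intrinsic QE of the photodiode
for ff in (0.30, 0.50, 0.75, 1.00):  # 30-75% is the non-modified CCD range from the text
    print(f"FF = {ff:.0%}: QE_eff = {effective_qe(qe, ff):.3f}")
```

This makes the motivation concrete: at a 30% fill factor more than two thirds of the incident photons are lost regardless of how good the photodiode itself is.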
3 Methodology
A CCD sensor samples an image by collecting incoming photons onto a sample grid. Fig. 4 illustrates such a conventional sampling grid on a CCD sensor array of four pixels. The white and black areas represent the sensitive and non-sensitive areas of the pixels, respectively. The blue arrows in the figure mark the positions of the sampled signal. Let us assume the size of each pixel is Δx by Δy. Then Δx and Δy, the sample intervals in the x and y directions, determine the spatial resolution of the output image. The sampling function can be expressed as

I(m, n) = O(x, y) · S(x, y),

where (x, y) and (m, n) are pixel coordinates in the input optical image O and the output sampled image I, with size M by N, respectively [14]. Let us also assume the size of the sensitive area is a by b, as shown in Fig. 4. Thus, when the incident light does not fall on the sensitive area of size a by b, S(x, y) = 0.
Fig. 4. The conventional image sampling on a CCD sensor.
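The effect of the sampling function can be sketched as follows; this is an illustrative simulation, not the authors' code, and the pixel size, the toy photon map, and the centered square sensitive area are assumptions:

```python
import numpy as np

# Sketch of the sampling model of Section 3: a pixel records only photons
# that land on its sensitive area (fill factor FF); S(x, y) = 0 elsewhere.
# Pixel size and the centered square sensitive area are illustrative assumptions.
def sample_with_fill_factor(optical: np.ndarray, pixel: int, ff: float) -> np.ndarray:
    """Downsample a fine-grained optical image onto a coarse CCD grid,
    integrating only over a centered sensitive area covering `ff` of each pixel."""
    h, w = optical.shape
    side = int(round(pixel * np.sqrt(ff)))   # side of the square sensitive area
    off = (pixel - side) // 2                # sensitive area centered in the pixel
    out = np.zeros((h // pixel, w // pixel))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            cell = optical[i * pixel + off : i * pixel + off + side,
                           j * pixel + off : j * pixel + off + side]
            out[i, j] = cell.sum()           # photons collected by the sensitive area
    return out

rng = np.random.default_rng(0)
scene = rng.poisson(10, size=(128, 128)).astype(float)  # toy photon map
low = sample_with_fill_factor(scene, pixel=8, ff=0.25)
high = sample_with_fill_factor(scene, pixel=8, ff=0.81)
print(low.mean(), high.mean())  # the higher fill factor collects more photons
```

The same scene sampled at 25% and 81% fill factor yields very different signal levels, which is exactly the DR loss the resampling procedure below aims to recover.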
As discussed in Section 2, the fill factor is lower than 100%; in that case the region of support corresponding to the sensitive area is not the whole area of a pixel. A signal resampling procedure is used to expand the region of support and improve the captured signal. For each pixel, the resampling procedure consists of the following parts: a) local learning model, b) sub-pixel rearrangement, c) model simulations on the sub-pixel grid, d) intensity estimation of sub-pixels based on Bayesian inference, and e) pixel intensity estimation based on highest probability. Each part of the procedure is explained in more detail below.
a) Local learning model - In a neighborhood of the actual pixel, one statistical model, or a combination of several, is tuned according to the data structure in the pixel neighborhood. We used a Gaussian statistical model in our experiments.
b) Sub-pixel rearrangement - Knowing the fill factor, the CCD image is rearranged onto a new grid of virtual sensor pixels, each consisting of virtual sensitive and non-sensitive areas. Each of these areas is defined by an integer number of sub-pixels. The intensity value of each pixel in the CCD image is assigned to all sub-pixels in the virtual sensitive area, and the intensity values of all sub-pixels in the non-sensitive area are set to zero. An example of such rearrangement of sampled data to the sub-pixel level is presented in Section 4.
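A minimal sketch of this rearrangement, under the assumption of a centered square sensitive area (the 2-by-2 input image and the 10-by-10 sub-pixel grid are illustrative choices, not the paper's 40-by-40 setup):

```python
import numpy as np

# Sketch of step (b), sub-pixel rearrangement: each CCD pixel becomes an
# L-by-L block of virtual sub-pixels; a centered block matching the known
# fill factor takes the pixel's intensity, the rest (non-sensitive) is zero.
def rearrange(image: np.ndarray, L: int, ff: float) -> np.ndarray:
    side = int(round(L * np.sqrt(ff)))  # integer side of the sensitive block
    off = (L - side) // 2               # sensitive block centered in the virtual pixel
    h, w = image.shape
    grid = np.zeros((h * L, w * L))
    for i in range(h):
        for j in range(w):
            grid[i * L + off : i * L + off + side,
                 j * L + off : j * L + off + side] = image[i, j]
    return grid

img = np.array([[10.0, 20.0], [30.0, 40.0]])
grid = rearrange(img, L=10, ff=0.81)  # 81% FF -> a 9x9 sensitive block per pixel
print(grid.shape)                     # (20, 20)
```

For an 81% fill factor each 10-by-10 virtual pixel holds a 9-by-9 block carrying the original intensity, with the remaining sub-pixels zeroed, mirroring the description above.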
c) Model simulations on the sub-pixel grid - In a sub-pixel rearranged neighborhood of the actual pixel, the locally learned model is used to simulate all intensity values of the sub-pixels. The simulation is initialized with the known intensity values of the virtual sensitive sub-pixels and with linear interpolation on the sub-pixel grid for the unknown intensity values of the virtual non-sensitive sub-pixels. Several simulations are carried out, where in each one the number of actual sub-pixels varies from zero to the total number of sub-pixels of the actual virtual sensitive area. In this way each sub-pixel of the actual virtual sensor obtains, after a number of simulations, various random intensity values.
d) Intensity estimation of sub-pixels based on Bayesian inference - Bayesian inference is employed to estimate the intensity of each sub-pixel, using the model simulation values as observations. Let y be the observed intensity value of a sub-pixel after the simulations and x its true intensity value; then

y = x + n,

where n is the noise contributed by the linear interpolation process. The goal is to make the best guess x̂ of the value x given the observed values:

x̂ = arg max_x P(x | y), with P(x | y) = P(y | x) P(x) / P(y),

where P(x | y) is the probability distribution of x given y, P(y | x) is the probability distribution of y given x, and P(x) and P(y) are the probability distributions of x and y. Assuming Gaussian noise yields

P(y | x) = P(n = y − x) = (1 / √(2π σ_n²)) exp(−(y − x)² / (2 σ_n²)),

where σ_n² is the variance of the noise, i.e. the variance of the interpolated data from the simulations for each sub-pixel. The educated hypothesis P(x) is obtained by the local learning model, which here is a Gaussian distribution with mean μ and standard deviation σ_p:

P(x) = (1 / √(2π σ_p²)) exp(−(x − μ)² / (2 σ_p²)).

The posterior probability of x then satisfies

P(x | y) ∝ P(y | x) P(x) = (1 / √(2π σ_n²)) exp(−(y − x)² / (2 σ_n²)) × (1 / √(2π σ_p²)) exp(−(x − μ)² / (2 σ_p²)).

The x which maximizes P(x | y) is the one that minimizes the exponent term in the above equation. Thus, if f(x) = (y − x)² / (2 σ_n²) + (x − μ)² / (2 σ_p²), setting the derivative f′(x) = 0 gives the minimum, i.e. the best intensity estimate of the corresponding sub-pixel:

x̂ = (σ_p² y + σ_n² μ) / (σ_p² + σ_n²).
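This closed-form result is the standard maximum a posteriori fusion of a Gaussian observation with a Gaussian prior; a minimal numeric sketch (the input values are illustrative):

```python
# Closed-form MAP estimate for a Gaussian likelihood and a Gaussian prior
# (step d): x_hat = (var_p * y + var_n * mu) / (var_p + var_n).
def map_estimate(y: float, mu: float, var_n: float, var_p: float) -> float:
    """Best guess of the true sub-pixel intensity x given the observation y,
    the prior mean mu, the noise variance var_n, and the prior variance var_p."""
    return (var_p * y + var_n * mu) / (var_p + var_n)

# A noisy observation of 120 with a local prior mean of 100: the estimate is
# pulled toward the observation because the prior variance is larger here.
print(map_estimate(y=120.0, mu=100.0, var_n=4.0, var_p=16.0))  # → 116.0
```

When the interpolation noise variance σ_n² is small the estimate trusts the observation; when the local prior is tight (small σ_p²) it pulls the estimate toward the neighborhood mean.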
e) Pixel intensity estimation based on highest probability - The histogram of the intensity values of the actual virtual sensor's sub-pixels is calculated, which indicates the probability tendency of the intensities. The intensity with the highest probability, multiplied by the inverse of the fill factor, is taken as the estimated intensity value of the actual virtual sensor, i.e. the result of the resampling procedure.
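A sketch of this final step, assuming a simple binned histogram (the bin count and the toy sub-pixel values are illustrative; the paper does not specify the histogram parameters):

```python
import numpy as np

# Sketch of step (e): the pixel estimate is the most frequent sub-pixel
# intensity (histogram peak), scaled by 1/FF to emulate a 100% fill factor.
def pixel_estimate(subpixels: np.ndarray, ff: float, bins: int = 32) -> float:
    counts, edges = np.histogram(subpixels, bins=bins)
    k = int(np.argmax(counts))
    mode = 0.5 * (edges[k] + edges[k + 1])  # center of the most populated bin
    return mode / ff

# Toy sub-pixel values: 80 sub-pixels near 50, 20 outliers near 10.
sub = np.concatenate([np.full(80, 50.0), np.full(20, 10.0)])
print(pixel_estimate(sub, ff=0.5))
```

With a 50% fill factor the dominant intensity of 50 is scaled toward 100 (bin-center quantization makes the exact result slightly lower), doubling the recorded signal as if the whole pixel area had been sensitive.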
Fig. 5. The virtual CCD image sensor pixel composed of subpixels, with the fill factor set to 81% (left), and the optical image (right).
4 Experimental setup
Several optical and CCD images for different fill factor values were simulated using our own code and the Image Systems Evaluation Toolbox (ISET) [15] in MATLAB. ISET is designed to evaluate how image capture components and algorithms influence image quality, and has proved to be an effective tool for simulating sensors and image capture [16]. Five fill factor values, 25%, 36%, 49%, 64% and 81%, were chosen for five simulated CCD sensors, all having the same resolution of 128 by 128. All sensors had a pixel area of 10 by 10 square microns, with a well capacity of 10,000 e-. The read noise and the dark current noise were set to 1 mV and 1 mV/pixel/sec, respectively. Each CCD sensor was rearranged onto a new grid of virtual sensor pixels, each composed of 40-by-40 sub-pixels. Fig. 5 shows an example of the rearranged CCD sensor pixel with a fill factor of 81%. The light grey and dark grey areas represent the sensitive and non-sensitive areas, respectively. The sensitive area was located in the middle of each pixel. The intensity values of the sub-pixels in the virtual sensitive area were assigned from the corresponding output image pixel intensity, and the intensity values of all sub-pixels in the non-sensitive area were set to zero. One type of optical and sensor image was generated, called the sweep frequency image. Its intensity values were calculated, in each row, from a linear swept-frequency cosine function. For the generation of the CCD sensor images, the luminance of the sweep frequency image was set to 100 cd/m² and the optical system was considered diffraction-limited, to ensure that the brightness of the output is as close as possible to the brightness of the scene image. The exposure time was set to 0.9 ms for all five simulated sensors to ensure a constant number of input photons in the simulations.
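The sweep frequency test pattern can be sketched as a row-wise linear chirp; the start and end frequencies below are assumptions, as the paper does not give the exact chirp parameters:

```python
import numpy as np

# Sketch of the sweep frequency test image: each row is a linear chirp, a
# cosine whose frequency grows linearly along the row. The 128x128 size
# follows Section 4; the frequency range f0..f1 is an assumed example.
def sweep_frequency_image(n: int = 128, f0: float = 0.5, f1: float = 8.0) -> np.ndarray:
    t = np.linspace(0.0, 1.0, n)
    k = f1 - f0
    chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))  # inst. freq. f0 -> f1
    row = 0.5 * (chirp + 1.0)                                # map [-1, 1] to [0, 1]
    return np.tile(row, (n, 1))                              # identical rows

img = sweep_frequency_image()
print(img.shape, float(img.min()), float(img.max()))
```

Because the spatial frequency increases from left to right, the pattern exercises the sensor across both coarse and fine detail, which is why it is a convenient probe for dynamic range comparisons.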
5 Results and discussion
The conventional sampled images from the sweep frequency optical images are shown in the first row of Fig. 6; the second row shows the images resampled with our method. The result images from the histogram equalization and image contrast enhancement methods are shown in the third and fourth rows. The images of each row correspond to the different fill factor values of 25%, 36%, 49%, 64% and 81%, from left to right. All images in Fig. 6 are displayed in unsigned 8-bit integer (uint8) format without any DR normalization. It can be seen that the DRs of the conventional sampled images differ considerably from each other. As the fill factor increases, the DR increases as well and approaches the DR level of the optical image shown in Fig. 5. This follows from the proportional relation between fill factor and number of photons, which changes the intensity value of each pixel. The top row of each image related to the sweep frequency optical image is chosen to visualize a typical comparison of pixel intensity values. Fig. 7 shows such pixel intensity values from the conventional sampled images, labelled with their fill factor (FF) percentage, from the optical image, and from the image resampled with our method from the 25% fill factor image. The results in Fig. 7 are consistent with the image appearance in Fig. 6. They also verify that the pixel intensity values increase as a consequence of the virtual increase of fill factor by our method.
The dynamic range and the root mean square error (RMSE) are used to compare the conventional sampled images and our resampled images with the respective optical image. The dynamic range is calculated as

DR = 20 × log10(I_max / I_min),

where I_max and I_min are the maximum and minimum pixel intensity values in an image. In uint8 there are 256 grey levels, which means the maximum DR is 48.13 dB according to the above equation. The RMSE is defined as

RMSE = √( (1/n) Σ_i (x_i − y_i)² ),

where x_i and y_i are the intensities of corresponding pixels in the two compared images, and n is the number of pixels. The DR, RMSE and entropy results for the sweep frequency related images are shown in Tables 1 and 2, respectively, where the entropy of an image is used to evaluate the amount of information preserved after applying each method. Generally, as the fill factor increases, the RMSEs between the optical image and the other images decrease while, on the contrary, the entropy values and DRs increase. The tables also show that our method not only widens the DRs in comparison to the conventional method, but also preserves the information (shown by the entropy values) and approaches the ground truth of the optical image (shown by the RMSE values) significantly better than the post-processing solutions of histogram equalization and image contrast enhancement.
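The three metrics can be sketched for uint8 images as follows; a floor of 1 is assumed for the minimum intensity so that the DR of a full-range uint8 ramp evaluates to 20·log10(255) ≈ 48.13 dB, matching the maximum stated in the text:

```python
import numpy as np

# The three comparison metrics of Section 5 for uint8 images:
# DR = 20*log10(I_max / I_min), RMSE, and Shannon entropy of the histogram.
def dynamic_range_db(img: np.ndarray) -> float:
    lo = max(float(img.min()), 1.0)  # assumed floor of 1 to avoid log of zero
    return 20.0 * np.log10(float(img.max()) / lo)

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def entropy(img: np.ndarray) -> float:
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A full-range uint8 ramp reaches the maximum possible DR and entropy.
full = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(round(dynamic_range_db(full), 2))  # → 48.13
```

An image using every grey level exactly once also attains the maximum uint8 entropy of 8 bits, which is the ceiling against which Table 2 values can be read.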
Fig. 6. The conventional sampled images (a) from the sweep frequency optical image with different fill factors, the images resampled with our method (b), and the result images from histogram equalization (c) and image contrast enhancement (d). From left to right, the sensor fill factors are 25%, 36%, 49%, 64% and 81%.
Fig. 7. The row-wise comparison of pixel intensity values in the sweep frequency related images.
Table 1. The RMSE and dynamic range (DR, in dB) comparison between the optical image and the result images of the different methods at different fill factor (FF) percentages.

| Methods | FF25% (RMSE / DR) | FF36% (RMSE / DR) | FF49% (RMSE / DR) | FF64% (RMSE / DR) | FF81% (RMSE / DR) | Mean RMSE |
|---|---|---|---|---|---|---|
| Conventional method | 102.54 / 36 | 87.51 / 39 | 69.5 / 42 | 49.63 / 44 | 26.4 / 46 | 67.16 |
| Our method | 4.69 / 48 | 9.79 / 48 | 3.6 / 48 | 3.09 / 48 | 2.79 / 48 | 4.79 |
| Histogram equalization | 36.24 / 48 | 38.13 / 48 | 36.13 / 48 | 36.25 / 48 | 36.11 / 48 | 36.57 |
| Image contrast enhancement | 12.08 / 48 | 14.84 / 48 | 11.39 / 48 | 10.95 / 48 | 10.87 / 48 | 12.03 |
Table 2. The entropy comparison between the optical image and the result images of the different methods at different fill factor (FF) percentages.

| Methods | FF25% | FF36% | FF49% | FF64% | FF81% | Standard deviation |
|---|---|---|---|---|---|---|
| Conventional method | 5.34 | 5.90 | 6.29 | 6.67 | 7.00 | 0.64 |
| Our method | 7.31 | 7.35 | 7.33 | 7.33 | 7.33 | 0.02 |
| Histogram equalization | 5.1 | 5.42 | 5.59 | 5.77 | 5.80 | 0.27 |
| Image contrast enhancement | 5.28 | 5.84 | 6.22 | 6.64 | 6.92 | 0.64 |
Fig. 8. The normalized histogram envelopes of the conventional sampled images related to the sweep frequency optical image, of the corresponding images resampled with our proposed method, and of the images created with histogram equalization and image contrast enhancement.
Fig. 8 shows the histogram envelopes of the conventional sampled images, of the images resampled by our method, and of the images produced by the two other post-processing methods for improving the dynamic range. The results in the figure further verify our statement that our method widens the dynamic range. The top left plot in Fig. 8 shows the histogram envelopes of the conventional sampled images and of the ground-truth optical image. For the conventional sampled images, the dynamic range varies with the fill factor, and increasing the fill factor also increases the width of the dynamic range. However, the histogram envelopes of the images resampled by our method, shown in the top right plot of Fig. 8, have the same dynamic range width, independent of fill factor changes, and their DR is very close to the DR of the ground-truth optical image. The histogram envelopes of the images produced by the post-processing methods, shown in the bottom left plot of Fig. 8, have a wider range than those of the conventional method, but fewer gray-level indexes in comparison to our method, which is also consistent with the entropy results in Table 2, indicating that such methods fail to preserve information. The results obtained from the images resampled by our method show that the method's dependency on fill factor is not significant; see Table 3. The table presents the standard deviation of the histograms of the result images for the different methods, indicating three points: first, the changes in the standard deviation of the histograms correlate with the preservation of the original information (see Table 2); secondly, within each fill factor value, our method has a significantly lower standard deviation of the histogram than the post-processing methods, indicating significantly better preservation of the original information; thirdly, across different fill factors, our method shows significantly less variation of the standard deviation of the histogram than the post-processing methods, reflecting the distinction between the recordable and displayable properties of the methods.
Table 3. The standard deviation of the histogram distribution.

| Fill factor | Our method | Histogram equalization | Image enhancement |
|---|---|---|---|
| 25% | 72.25 | 183.52 | 178.41 |
| 36% | 68.87 | 153.59 | 140.54 |
| 49% | 74.41 | 142.62 | 123.99 |
| 64% | 74.35 | 131.21 | 104.08 |
| 81% | 74.91 | 124.72 | 88.95 |
6 Conclusion
In this paper, a novel software-based method is proposed for widening the dynamic range of CCD sensor images by virtually increasing the fill factor. The experimental results show that a low fill factor causes a high RMSE and a low dynamic range in conventionally sampled images, and that increasing the fill factor increases the dynamic range. The results also show that a 100% fill factor can be achieved by our proposed method, yielding a low RMSE and a high dynamic range. In the proposed method, a CCD image and a known fill factor value are used to generate virtual pixels of the CCD sensor, where the intensity value of each pixel is estimated under a virtual increase of the fill factor to 100%. The stability and fill factor dependency of the proposed method were examined: across all resampled images, the standard deviation in comparison to the optical image was at most 1.6 in RMSE and 0.02 in entropy, and the fill factor dependency was insignificant. In future work we will apply our methodology to CMOS sensors and color sensors.
References
1. Prakel, D.: The Visual Dictionary of Photography. AVA Publishing, 91. ISBN 978-2-940411-04-7 (2013)
2. Curcio, C.A., Sloan, K.R., Kalina, R.E., Hendrickson, A.E.: Human photoreceptor topography. J. Comp. Neurol. 292, 497-523 (1990)
3. Rossi, E.A., Roorda, A.: The relationship between visual resolution and cone spacing in the human fovea. Nat. Neurosci. 13, 156-157 (2010)
4. Goldstein, D.B.: Physical Limits in Digital Photography. http://www.northlight-images.co.uk/article_pages/guest/physical_limits_long.html (2009)
5. Burke, B., Jorden, P., Vu, P.: CCD Technology. Experimental Astronomy, vol. 19, 69-102 (2005)
6. Chen, T., Catrysse, P., Gamal, A.E., Wandell, B.: How small should pixel size be? Proc. SPIE, vol. 3965, 451-459 (2000)
7. Reinhard, E., Ward, G., Pattanaik, S., Debevec, P.: High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting. Morgan Kaufmann Publishers, San Francisco, California, U.S.A. (2005)
8. Yeganeh, H., Wang, Z.: High dynamic range image tone mapping by maximizing a structural fidelity measure. In: IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1879-1883 (2013)
9. Abdullah-Al-Wadud, M., et al.: A Dynamic Histogram Equalization for Image Contrast Enhancement. IEEE Trans. Consumer Electronics, vol. 53, no. 2, pp. 593-600 (2007)
10. Battiato, S., Bruna, A.R., Messina, G., Puglisi, G.: Image Processing for Embedded Devices: From CFA Data to Image/Video Coding. Bentham Science Publishers, p. 12. ISBN 9781608051700 (2010)
11. Dalsa: Image sensor architectures for digital cinematography. http://www.teledynedalsa.com/public/corp/PDFs/papers/Image_sensor_Architecture_Whitepaper_Digital_Cinema_00218-00_03-70.pdf (2003)
12. Deguchi, M., Maruyama, T., Yamasaki, F.: Microlens design using simulation program for CCD image sensor. IEEE Trans. Consumer Electronics, 583-589 (1992)
13. Donati, S., Martini, G., Norgia, M.: Microconcentrators to recover fill-factor in image photodetectors with pixel on-board processing circuits. Opt. Express 15, 18066-18075 (2007)
14. Potmesil, M., Chakravarty, I.: Modeling motion blur in computer-generated images. Computer Graphics, vol. 17, 3, 389-399 (1983)
15. Farrell, J., Xiao, F., Catrysse, P., Wandell, B.: A simulation tool for evaluating digital camera image quality. In: Proc. SPIE Int. Soc. Opt. Eng., vol. 5294, 124 (2004)
16. Farrell, J., Okincha, M., Parmar, M.: Sensor calibration and simulation. In: Proc. SPIE Int. Soc. Opt. Eng., vol. 6817 (2008)
... This facilitates application-based configuration of grid, pixel, and gap on the image sensor. In the core framework, we use the idea of modelling the incident photons onto the sensor surface which is elaborated in our previous works [26,27]. Accordingly, each pixel of a captured image by a traditional image sensor (i.e., having square grid and pixel form) is projected onto a grid of L × L square subpixels in which the grid is arranged by the known fill factor or its estimation value as in [28]. ...
... (a) The grid and pixel are square and there is no or fixed gap [26,27]. By virtual increase of the fill factor to 100%, the gap between actual pixels in a CCD camera sensor is removed. ...
... In this section, the process of generating an image using the virtual deformable sensor under the framework is explained. According to the model of the incident photons onto the sensor surface presented in [26], the reconstruction process is divided into three steps of: (a) projecting each original pixel intensity onto a new grid of subpixels based on the gap size and form in the configuration; (b) ...
Article
Full-text available
Our vision system has a combination of different sensor arrangements from hexagonal to elliptical ones. Inspired from this variation in type of arrangements we propose a general framework by which it becomes feasible to create virtual deformable sensor arrangements. In the framework for a certain sensor arrangement a configuration of three optional variables are used which includes the structure of arrangement, the pixel form and the gap factor. We show that the histogram of gradient orientations of a certain sensor arrangement has a specific distribution (called ANCHOR) which is obtained by using at least two generated images of the configuration. The results showed that ANCHORs change their patterns by the change of arrangement structure. In this relation pixel size changes have 10-fold more impact on ANCHORs than gap factor changes. A set of 23 images; randomly chosen from a database of 1805 images, are used in the evaluation where each image generates twenty-five different images based on the sensor configuration. The robustness of ANCHORs properties is verified by computing ANCHORs for totally 575 images with different sensor configurations. We believe by using the framework and ANCHOR it becomes feasible to plan a sensor arrangement in the relation to a specific application and its requirements where the sensor arrangement can be planed even as combination of different ANCHORs.
... However practical issues and history of camera development have made us to use fixed arrangements and forms of pixel sensors. Our previous works [1,2] showed that despite the limitations of hardware, it is possible to implement a software method to change the size of pixel sensors. In this paper, we present a software-based method to change the arrangement and form of pixel sensors. ...
... The hexagonal enriched image has a hexagonal pixel form on a hexagonal grid. The generation process is similar to the resampling process in [1], which has three steps: projecting the original image pixel intensities onto a grid of sub-pixels; estimating the values of subpixels at the resampling positions; estimating each new hexagonal pixel intensity in a new hexagonal arrangement. The three steps are elaborated in the following. ...
... The intensity value of each pixel in the square grid is the intensity value which has the strongest contribution in the histogram of its belonging subpixels. Then the corresponding intensity is divided by the fill factor to obtain the square pixel intensity by virtual increase of fill factor to 100% as the work in [1]. ...
Article
Full-text available
The arrangement and form of the image sensor have a fundamental effect on any further image processing operation and image visualization. In this paper, we present a software-based method to change the arrangement and form of pixel sensors that generate hexagonal pixel forms on a hexagonal grid. We evaluate four different image sensor forms and structures, including the proposed method. A set of 23 pairs of images; randomly chosen, from a database of 280 pairs of images are used in the evaluation. Each pair of images have the same semantic meaning and general appearance, the major difference between them being the sharp transitions in their contours. The curviness variation is estimated by effect of the first and second order gradient operations, Hessian matrix and critical points detection on the generated images; having different grid structures, different pixel forms and virtual increased of fill factor as three major properties of sensor characteristics. The results show that the grid structure and pixel form are the first and second most important properties. Several dissimilarity parameters are presented for curviness quantification in which using extremum point showed to achieve distinctive results. The results also show that the hexagonal image is the best image type for distinguishing the contours in the images.
... At the same time, compared to human eyes that have a much larger range of luminance (10,000:1 vs. 100:1 cd/m²) [13], it is necessary for a camera sensor to have a larger fill factor with a die size to create more intensity levels for an image in the luminance range. Recently a new approach was presented [14] in which the fill factor is increased virtually, resulting in a significant extension of image tonal level and widening of the dynamic range. In the approach, the original fill factor is assumed to be known. ...
... In this paper, we propose a method to estimate the fill factor of a camera sensor from a captured image by the camera. In our method, from the captured image, a set of N new images with N number of fill factors are generated based on the proposed method in [14]. Then the virtual response function of the imaging process is estimated from the set of generated images which combines the images into a composite virtual radiance map. ...
... The optimal method to preserve the existing dynamic range of the scene radiance L in the acquired image Z is to choose the correct exposure time for the radiation of each scene point and to have a 100% fill factor. As the exposure time for capturing an image is fixed for all scene points and the fill factor of a camera is generally far from 100% [14], we can safely assume that only part of the dynamic range of L is captured. The nonlinear function f, which is caused by components in IAP and IPP, maps the image irradiance Z to the output of IAP, the IAP image M, as: ...
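The fusion step the excerpts above allude to, combining a set of differently exposed (here: virtually generated) images into one composite radiance map, is classically done with Debevec-Malik style weighted averaging. The sketch below assumes a known logarithmic response `g` and hypothetical per-image gains standing in for the virtual fill factors; it is a minimal stand-in, not the paper's estimated virtual response function.

```python
import numpy as np

def fuse_radiance(images, gains, g=np.log):
    """Hedged sketch of composite radiance-map fusion: N images of the
    same scene with different effective gains play the role of
    different exposures.  Assuming a known response g (log here), each
    pixel's log radiance is a weighted average of g(Z) - log(gain),
    with a hat weight favouring well-exposed mid-range values."""
    logE = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(logE)
    for Z, gain in zip(images, gains):
        Z = Z.astype(np.float64)
        w = 1.0 - np.abs(Z - 127.5) / 127.5 + 1e-6   # hat weight on [0,255]
        logE += w * (g(np.clip(Z, 1, 255)) - np.log(gain))
        wsum += w
    return np.exp(logE / wsum)   # composite virtual radiance map
```

Two consistent observations of the same radiance at different gains fuse back to that radiance, which is the sanity check for any such map.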
Article
Full-text available
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem. In these solutions, the fill factor is known. However, it is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, according to the low standard deviation of the fill factors estimated from each of the images for each camera.
... The form, arrangement, and inter-element distance of sensors play significant roles in image quality, which is verified by a comparison between current sensor techniques and animal, especially human, visual systems. The effect of the inter-element distance on image quality was studied in [4], which showed that the inter-element distance could be decreased by means of physical modeling, thus obtaining higher-quality images, a higher dynamic range and a greater number of tonal levels. Anatomical and physiological studies indicate that visual quality related issues, such as high sensitivity, high-speed response, high contrast sensitivity, high signal-to-noise ratio, and optimal sampling, are related directly to the form and arrangement of the sensors in the visual system [5,6]. ...
... In our method, the images are first rearranged into a new virtual square grid composed of subpixels, using the estimated value of the fill factor. A statistical framework proposed in [4], consisting of a local learning model and Bayesian inference, is used to estimate the new subpixel intensities. Then the subpixels are projected onto the square grid, shifted and resampled in the hexagonal grid. ...
... In this section, the method for generating a hexagonal image by resampling and half-pixel shifting a square-pixel image is explained. According to the resampling process in [4], the process is divided into three steps, which are elaborated in the following, as well as in Algorithm 1. ...
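The half-pixel shifting step described in the excerpts above can be reduced to a minimal sketch, which is not the full subpixel/Bayesian pipeline of [4]: odd rows are displaced by half a pixel, with the shifted values obtained by linear interpolation between horizontal neighbours, yielding a pseudo-hexagonal arrangement on the square storage grid.

```python
import numpy as np

def half_pixel_shift_hex(image):
    """Minimal sketch of half-pixel shifting toward a hexagonal grid:
    odd rows are shifted right by half a pixel via 2-tap linear
    interpolation between horizontal neighbours; even rows are kept.
    The result is a brick-wall (pseudo-hexagonal) sampling pattern."""
    out = image.astype(np.float64).copy()
    # Shift odd rows by half a pixel: value at the shifted position is
    # the average of the two original neighbours.
    out[1::2, 1:] = 0.5 * (image[1::2, :-1] + image[1::2, 1:])
    return out
```

On a linear ramp the shifted rows land exactly halfway between the original samples, which is the expected behaviour of a half-pixel shift.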
Article
Full-text available
Current cameras have made huge progress in sensor resolution and low-luminance performance. However, we are still far from having an optimal camera as powerful as our eye. The study of the evolution process of our visual system draws attention to two major issues: the form and the density of the sensor. The high contrast and optimal sampling properties of our visual spatial arrangement are related directly to its densely hexagonal form. In this paper, we propose a novel software-based method to create images on a compact dense hexagonal grid, derived from a simulated square sensor array by a virtual increase of the fill factor and a half-pixel shifting. After that, orbit functions are proposed for hexagonal image processing. The results show that it is possible to achieve image processing in the orbit domain and that the generated hexagonal images are superior, in the detection of curvature edges, to the square images. We believe that orbit domain image processing has great potential to become the standard processing for hexagonal images.
... The appearance of such fringes is commonly attributed to the interference between the spacing in the camera sensor array and any length scale related to the physical elements being recorded (Wen and Khatibi 2015). Aliasing which involves undersampling of light is a potential cause of Moiré fringes and is dependent on the fill factor of the sensor. ...
... Ideally, a sensor with a fill factor of 100% would not be influenced by Moiré patterns, but in practice this is not achievable due to physical limitations of the camera (Fig. 7: variation of the difference in RMS between the true value and the derived value from diverse adopted methodologies as the particle image diameter d increases). Alternatively, there exist techniques such as Gaussian blurring, "descreening" algorithms or "despeckle" processes which provide significant attenuation (Wen and Khatibi 2015). However, the trade-off of using these is a reduction in sharpness or brightness of the image. ...
Article
Full-text available
Pixel locking in stereoscopic particle image velocimetry (SPIV) is often overlooked, despite the existence of studies demonstrating an influence on turbulent statistics. Such a bias error occurs when the seeding particles have an image diameter in the range of 1–2 pixels. Together with the advent of superior cameras and more powerful lasers, new image-processing techniques enable large fields of view to be examined. Under such circumstances, defocusing is not an option and, due to the small nature of the particle image diameter, pixel locking is inherent in the data. This study analyses the contrast between object plane-based and image plane-based approaches for cross-correlation in the presence of pixel locking in SPIV. By combining experimental data and synthetic images, the sensitivity of pixel locking on single-point statistics is quantified and a protocol is proposed to tackle such an increasingly prevalent condition. A consequence of pixel locking is the occurrence of Moiré fringes, whose intensity is observed to be enhanced in the presence of particles with an image diameter less than 2 pixels. Such high frequency artifacts are not only receptive to the fill factor of the sensor but also depend on the residual error of the interpolation from the image plane to the object plane. This study discusses the origins of Moiré fringes in SPIV and provides a method to mitigate these based on the type of cross-correlation used. Graphic abstract: Variation of the difference in RMS between the true value and the derived value from diverse adopted methodologies as the particle image diameter $$d_{\tau}$$ increases. (OP: object plane cross-correlation, IP: image plane cross-correlation, IP HE: image plane cross-correlation post histogram equalisation)
... Recently a software-based method was proposed to widen the dynamic range of a CCD sensor by virtual increase of fill factor [11]. The proposed DR widening is achieved by implementing a statistical resampling process with a known fill factor. ...
... In this paper we propose a software method that performs as microlenses do in the hardware solution, extending not only the recordable dynamic range but also the tonal levels. The proposed tonal level extension is based on the statistical resampling process in [11] and an estimated fill factor. In the method the estimated fill factor of the captured CCD image is increased virtually beyond the physical limitation of the microlens solution, which generates the estimation of each new pixel intensity. The results verify that the estimation of the fill factor is not only feasible but also significantly effective in the DR widening and tonal level extension processes. Although there are software technologies for improving the displayable dynamic range using only one image, such as histogram equalization and image contrast enhancement, to the best of our knowledge, our approach is the first work to extend the tonal levels and widen the recordable dynamic range by virtual increase of an estimated fill factor of the sensor. ...
Conference Paper
Full-text available
As one of the important outcomes of the past decades of research on sensor arrays for digital cameras, the manufacturers of sensor array technology have responded to the necessity and importance of obtaining an optimal fill factor, which has a great impact on the collection of incident photons on the sensor, with hardware solutions, e.g. by introducing microlenses. However, it is still impossible to reach a fill factor of 100% due to the physical limitations in the practical development and manufacturing of digital cameras. This has been a bottleneck problem for improving the dynamic range and tonal levels of digital cameras, e.g. CCD cameras. In this paper we propose a software method to not only widen the recordable dynamic range of an image captured by a CCD camera but also extend its tonal levels. In the method we estimate the fill factor, and by a resampling process a virtual fill factor of 100% is achieved, where the CCD image is rearranged into a new grid of virtual subpixels. A statistical framework including a local learning model and Bayesian inference is used for estimating the new subpixel intensity values. The most probable subpixel intensity value in each resampled pixel area is used to estimate the pixel intensity values of the new image. The results show that, in comparison to histogram equalization and image contrast enhancement, which are generally used for improving the displayable dynamic range of a single image, the tonal levels and dynamic range of the image are extended and widened significantly.
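The comparison in the abstract above rests on two measurable quantities. A minimal way to quantify them (our own illustrative helpers, not the paper's metrics): count the distinct intensity values an image actually uses, and take the max/min ratio of its non-zero intensities. A point-wise remap such as histogram equalization cannot increase the first quantity, which is why the subpixel resampling, which creates genuinely new intermediate values, behaves differently.

```python
import numpy as np

def tonal_levels(img):
    """Number of distinct (rounded) intensity values used in the image."""
    return np.unique(np.round(img)).size

def dynamic_range(img):
    """Recordable range as max/min over the non-zero intensities."""
    v = img[img > 0]
    return float(v.max() / v.min())

# A point-wise remap (the essence of histogram equalization / contrast
# stretching) maps existing levels one-to-one: the level count is
# unchanged even though the displayed range is stretched.
img = np.array([[10.0, 10.0, 20.0], [20.0, 30.0, 30.0]])
remapped = np.interp(img, [10, 30], [0, 255])
```

Here `tonal_levels(img)` and `tonal_levels(remapped)` are both 3: stretching widened the displayable range without adding a single new tonal level.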
... The virtual hexagonal enriched image has a hexagonal pixel form on a hexagonal arrangement. The generation process is similar to the resampling process in [17,18], which has three steps: projecting the original image pixel intensities onto a grid of subpixels; estimating the values of the subpixels at the resampling positions; and estimating each new hexagonal pixel intensity in a new hexagonal arrangement where the subpixels are projected back to a hexagonal grid, shown as red grids in Figure 2. In this arrangement the distance between any two neighbouring hexagonal pixels is the same and the resolution of the generated Hex_E image is the same as that of the original image. ...
Article
Full-text available
The study of the evolution process of our visual system indicates the existence of a variational spatial arrangement: from densely hexagonal in the fovea to a sparse circular structure in the peripheral retina. Today's sensor spatial arrangement is inspired by our visual system. However, we have not come further than rigid rectangular and, on a minor scale, hexagonal sensor arrangements. Even so, there is a need for directly assessing differences between the rectangular and hexagonal sensor arrangements, i.e., without the conversion of one arrangement to another. In this paper, we propose a method to create a common space for addressing any spatial arrangement and assessing the differences among arrangements, e.g., between the rectangular and the hexagonal. Such a space is created by implementing a continuous extension of the discrete Weyl group orbit function transform, which extends a discrete arrangement to a continuous one. The implementation of the space is demonstrated by comparing two types of hexagonal images generated from each rectangular image with two different methods: the half-pixel shifting method and the virtual hexagonal method. In the experiment, a group of ten texture images was generated with variational curviness content using ten different Perlin noise patterns added to an initial 2D Gaussian distribution pattern image. Then, the common space was obtained from each of the discrete images to assess the differences between the original rectangular image and its corresponding hexagonal image. The results show that the space provides a user-friendly tool to address an arrangement and assess the changes between different spatial arrangements; in the experiment, the hexagonal images show richer intensity variation, nonlinear behavior, and a larger dynamic range in comparison to the rectangular images.
Chapter
The quality assessment of a high dynamic range image is a challenging task. The few available no-reference image quality methods for high dynamic range images are generally in the evaluation stage. Most available image quality assessment methods are designed to assess low dynamic range images. In this paper, we show the assessment of high dynamic range images which are generated by utilizing a virtually flexible fill factor on the sensor images. We present a new method for the assessment process and evaluate the amount of improvement of the generated high dynamic range images in comparison to the original ones. The results show that the generated images not only have a greater number of tonal levels in comparison to the original ones but that the dynamic range of the images has also significantly increased, according to the measurable improvement values.
Conference Paper
Full-text available
Tone mapping operators (TMOs) that convert high dynamic range (HDR) images to standard low dynamic range (LDR) images are highly desirable for the visualization of these images on standard displays. Although many existing TMOs produce visually appealing images, it is only recently that validated objective measures capable of assessing their quality have been proposed. Without such objective measures, the design of traditional TMOs can only be based on intuitive ideas, lacking clear goals for further improvement. In this paper, we propose a substantially different tone mapping approach: instead of explicitly designing a new computational structure for a TMO, we search the space of images to find better quality images in terms of a recent objective measure that can assess the structural fidelity between two images of different dynamic ranges. Specifically, starting from any initial image, the proposed algorithm moves the image along the gradient ascent direction and stops when it converges to a maximal point. Our experiments show that the proposed algorithm reliably improves upon the quality of images produced by a number of state-of-the-art TMOs.
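The search described in the abstract can be sketched generically. This is not the paper's structural-fidelity measure or its analytic gradient; any callable quality score stands in, and the gradient is taken by finite differences purely for illustration.

```python
import numpy as np

def ascend(img, quality, step=0.1, iters=50):
    """Generic sketch of the gradient-ascent search described above:
    rather than applying a fixed TMO, repeatedly move the image along
    the (numerical) gradient of an objective quality measure and stop
    when the score no longer improves.  `quality` is any scalar score;
    the paper's structural-fidelity measure is stood in for by whatever
    callable is supplied."""
    x = img.astype(np.float64).copy()
    eps = 1e-4
    for _ in range(iters):
        grad = np.zeros_like(x)
        for idx in np.ndindex(x.shape):        # finite-difference gradient
            xp = x.copy()
            xp[idx] += eps
            grad[idx] = (quality(xp) - quality(x)) / eps
        x_new = np.clip(x + step * grad, 0, 255)
        if quality(x_new) <= quality(x):       # converged to a maximum
            break
        x = x_new
    return x
```

With a toy concave score the iterate converges to the score's maximizer, which is the behaviour the paper exploits with its fidelity measure.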
Article
Full-text available
This paper describes a procedure for modeling motion blur in computer-generated images. Motion blur in photography or cinematography is caused by the motion of objects during the finite exposure time the camera shutter remains open to record the image on film. In computer graphics, the simulation of motion blur is useful both in animated sequences where the blurring tends to remove temporal aliasing effects and in static images where it portrays the illusion of speed or movement among the objects in the scene. The camera model developed for simulating motion blur is described in terms of a generalized image-formation equation. This equation describes the relationship between the object and corresponding image points in terms of the optical system-transfer function. The use of the optical system-transfer function simplifies the description of time-dependent variations of object motion that may occur during the exposure time of a camera. This approach allows us to characterize the motion of objects by a set of system-transfer functions which are derived from the path and velocity of objects in the scene and the exposure time of a camera.
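The transfer-function view in the abstract above can be illustrated in one dimension. This is a simplified sketch, not the paper's generalized image-formation equation: uniform linear motion over the exposure acts as convolution with a box point-spread function, applied here as multiplication by its transfer function in the Fourier domain.

```python
import numpy as np

def motion_blur_1d(signal, blur_len):
    """Sketch of motion blur via a system transfer function: uniform
    linear motion during the exposure is a box PSF of length blur_len;
    blurring is multiplication by the PSF's Fourier transform (the
    transfer function), followed by an inverse transform."""
    n = signal.size
    psf = np.zeros(n)
    psf[:blur_len] = 1.0 / blur_len           # box PSF of the motion path
    H = np.fft.fft(psf)                       # system transfer function
    return np.real(np.fft.ifft(np.fft.fft(signal) * H))
```

Blurring a point source reproduces the PSF itself, a quick check that the transfer function encodes the motion path and exposure length.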
Article
Full-text available
Visual resolution decreases rapidly outside of the foveal center. The anatomical and physiological basis for this reduction is unclear. We used simultaneous adaptive optics imaging and psychophysical testing to measure cone spacing and resolution across the fovea, and found that resolution was limited by cone spacing only at the foveal center. Immediately outside of the center, resolution was worse than cone spacing predicted and better matched the sampling limit of midget retinal ganglion cells.
Article
Full-text available
We propose an array of non-imaging micro-concentrators as a means to recover the loss of sensitivity due to area fill factor. This is particularly important for image photodetectors in which complex circuit functions are required and a substantial fraction of the pixel area is consumed, e.g., 3D cameras, SPAD arrays, fluorescence analyzers, etc., but also in CMOS sensors. So far, the low fill factor has been an unacceptable loss of sensitivity precluding the development of such devices, whereas by using a concentrator array a recovery is possible, up to the inverse square of the numerical aperture of the objective lens. By ray tracing, we calculate the concentration factors of several geometries of non-imaging concentrators, i.e., truncated cone, parabolic and compound parabolic, both reflective and refractive. The feasibility of a sizeable recovery of fill factor (up to 50) is demonstrated.
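The limit quoted in the abstract, recovery up to the inverse square of the objective's numerical aperture, amounts to simple back-of-envelope arithmetic. The helper below is our own illustrative assumption (a lossless concentrator, sensitivity scaling linearly with effective collecting area), not the paper's ray-tracing model.

```python
def concentrator_recovery(fill_factor, numerical_aperture):
    """Back-of-envelope sketch of the concentration limit quoted above:
    a lossless concentrator array can boost the effective sensitivity
    of a pixel by at most 1/NA^2 of the objective lens, and the
    effective fill factor is capped at full recovery (1.0)."""
    gain_limit = 1.0 / numerical_aperture ** 2     # 1/NA^2 bound
    effective = min(fill_factor * gain_limit, 1.0)  # capped at 100%
    return effective, gain_limit
```

For instance, the paper's "recovery up to 50" corresponds to NA ≈ 0.14, which would lift even a 2% fill factor to full effective coverage.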
Article
We describe a method for simulating the output of an image sensor to a broad array of test targets. The method uses a modest set of sensor calibration measurements to define the sensor parameters; these parameters are used by an integrated suite of Matlab software routines that simulate the sensor and create output images. We compare the simulations of specific targets to measured data for several different imaging sensors with very different imaging properties. The simulation captures the essential features of the images created by these different sensors. Finally, we show that by specifying the sensor properties the simulations can predict sensor performance to natural scenes that are difficult to measure with a laboratory apparatus, such as natural scenes with high dynamic range or low light levels.
Article
The Image Systems Evaluation Toolkit (ISET) is an integrated suite of software routines that simulate the capture and processing of visual scenes. ISET includes a graphical user interface (GUI) for users to control the physical characteristics of the scene and many parameters of the optics, sensor electronics and image processing-pipeline. ISET also includes color tools and metrics based on international standards (chromaticity coordinates, CIELAB and others) that assist the engineer in evaluating the color accuracy and quality of the rendered image.
Article
Charge-coupled devices (CCDs) continue to reign supreme in the realm of imaging out to 1 μm, with the steady improvement of performance and the introduction of innovative features. This review is a survey of recent developments in the technology and the current limits on performance. Device packaging for large, tiled focal-plane arrays is also described. Comparisons between CCDs and the emerging CMOS imagers are highlighted in terms of process technology and performance.