Back to basics: Towards novel computation and arrangement of spatial sensory in images

Acta Polytechnica 56(5):409–416, 2016 ©Czech Technical University in Prague, 2016
Wei Wen, Siamak Khatibi
Institute of Communication, Blekinge Institute of Technology, Karlskrona, Sweden
Abstract. Current cameras have made huge progress in sensor resolution and low-luminance performance. However, we are still far from having an optimal camera as powerful as the human eye. The study of the evolution of our visual system directs attention to two major issues: the form and the density of the sensor. The high-contrast and optimal-sampling properties of our visual spatial arrangement are related directly to its densely packed hexagonal form. In this paper, we propose a novel software-based method to create images on a compact dense hexagonal grid, derived from a simulated square sensor array by a virtual increase of the fill factor and a half-pixel shift. After that, orbit functions are proposed for hexagonal image processing. The results show that it is possible to perform image processing operations in the orbit domain and that the generated hexagonal images are superior to square images in the detection of curved edges. We believe that orbit-domain image processing has great potential to become the standard processing for hexagonal images.
Keywords: Hexagonal pixel, Square pixel, Hexagonal sensor array, Hexagonal processing, Fill factor,
Convolution, Orbit functions, Orbit transform.
1. Introduction
Nowadays, the ubiquitous influence of cameras in our life is undoubtable, thanks to current camera sensory techniques, which have made huge progress in increasing sensor resolution and low-luminance performance [ ]. These achievements are due to the size reduction of the sensory element (pixel), improvement in the generation of the signal from collected light (quantum efficiency), and the hardware techniques used in the sensor [ ]. However, the image quality is not affected only by the pixel size or the quantum efficiency of a sensor [ ]. As the sensor pixel size becomes smaller, a smaller die size, a higher spatial resolution and a lower signal-to-noise ratio are obtained; all at the cost of a lower dynamic range and a smaller number of tonal levels.
The form, arrangement, and inter-element distance
of sensors play significant roles in the image quality,
which is verified by a comparison between the current
sensor techniques and animal, especially human, visual systems. The effect of the inter-element distance on image quality was studied in [ ], which showed that the inter-element distance could be decreased by means of a physical model, thus obtaining higher-quality images, a higher dynamic range and a greater number of tonal levels. Anatomical and physiological studies indicate that our visual quality related issues, such as high sensitivity, high-speed response, high contrast sensitivity, high signal-to-noise ratio, and optimal sampling, are related directly to the form and arrangement of the sensors in the visual system [ ].
In the human eye, three types of color photoreceptors, cones, are packed densely in a hexagonal pattern and are located mostly in the fovea, at the center of the retina [ ]. Figure 1 shows a dramatic increase of the cone cross-sectional area and a decrease of the cone density within the eccentricity range represented in a strip of the inner segments of photoreceptors from the foveal center along the temporal horizontal meridian. Rods, another type of photoreceptors with non-color property, first appear at about 100 µm from the foveal center and are smaller than the cones.
Hexagonal photoreceptor arrays are found not only in humans, but also often in the compound eyes of insects and other invertebrates. Actually, such hexagonal arrays are more common in animals and plants than any other geometric arrays, such as rectilinear orthogonal arrays. This is due to the need for motion detection, which is based on the local difference of light intensity between adjacent neighboring photoreceptor cells [ ] using the lateral inhibition process [ ]. Lateral inhibition is a contrast enhancement computation that exaggerates the light intensity differences of neighboring cells; it is useful in edge detection. The hexagonal array provides the best candidate for a contiguous neighboring-cell computation of the local light intensity difference between adjacent cells [ ]. By using the contiguous neighboring cells at a 60° angle, the first-order computation of light intensity differences is performed, and for the computation of finer angular differences, such as a 30° angle, the second-order adjacent cells are used. Thus, this provides means for a symmetric computation, using one set of computational algorithms to compute the light intensity difference of exactly six contiguous neighbors. Each successive higher-order neighbor computes the light intensity difference with the angular direction rotated by 30° successively. In the gradient-based edge
Figure 1. The enhanced image of the inner segments of a human foveal photoreceptor mosaic, from the original image printed in [ ]. The mosaic strip extends 575 µm from the foveal center along the temporal horizontal meridian, shown from upper left to lower right in the figure. Arrowheads indicate the edges of the sampling windows. Brackets, shown in red, indicate a quadrant of the first sampling window with the highest density of cones. The midpoint of the boundary of this quadrant and a quadrant adjacent to it in the temporal direction (to the right) with similar density and mean spacing was considered to be the point of 0.0 eccentricity. The strip contains profiles of only cones up to the fifth window, where the small profiles of rods begin to intrude. The large cells are cones, the small cells are rods, and the bar is 10 µm.
detection, a local process at the pixel level, the edges are discriminated by comparing the light intensity gradient of neighboring cells to find sudden changes of light intensity in adjacent pixels. Numerical studies show that, when using the spatial computation, hexagonal pixel arrays are more efficient for edge detection than rectilinear arrays [ ]. A hexagonal edge detection operator achieves a 40 % gain in computational efficiency [ ] and can efficiently detect motion in six different directions at 60° [ ]. Due to the biological inspiration and the aforementioned benefits of hexagonal sensor array implementations, researchers generally follow two main approaches to acquire hexagonally sampled images. The first approach is to manipulate the result of conventional acquisition sampling devices using square sensor arrays, via software, to generate a hexagonally sampled image. The second approach is to use dedicated hardware to acquire the image, such as the Super CCD from Fujifilm, whose sensor structure is hexagonal [ ], or hexagonally shaped color filters for image sensors [ ] to improve the quality of the color acquired by the sensor.
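The first-order neighbor computation described above can be sketched in a few lines. The following minimal Python illustration (not the paper's implementation; the axial-coordinate scheme and all names are our own assumptions) computes the light intensity differences between a hexagonal cell and its six contiguous neighbors:

```python
# Axial-coordinate offsets of the six contiguous hexagonal neighbours,
# spaced 60 degrees apart around the central cell.
HEX_NEIGHBOURS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def first_order_differences(hex_image, q, r):
    """Light intensity differences between cell (q, r) and its six
    contiguous neighbours; a crude stand-in for lateral inhibition."""
    centre = hex_image[(q, r)]
    return [centre - hex_image[(q + dq, r + dr)]
            for dq, dr in HEX_NEIGHBOURS
            if (q + dq, r + dr) in hex_image]

# A tiny hexagonal patch stored as a dict keyed by axial coordinates:
# a bright centre surrounded by six darker cells.
patch = {(0, 0): 10.0}
patch.update({(dq, dr): 4.0 for dq, dr in HEX_NEIGHBOURS})

diffs = first_order_differences(patch, 0, 0)
print(diffs)  # six identical differences of 6.0
```

One set of offsets serves every cell, which is exactly the symmetry advantage of the hexagonal arrangement over the square one.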
In this paper, we propose a novel software-based method to create images on a dense hexagonal sensor grid by shifting the sensor array virtually. The grid is derived from a simulated camera sensor array by a virtual increase of the fill factor and a transfer of the shifted square pixel grid onto a virtual hexagonal grid. In our method, the images are first rearranged into a new virtual square grid composed of subpixels, with the estimated value of the fill factor. A statistical framework proposed in [ ], consisting of a local learning model and Bayesian inference, is used to estimate the new subpixel intensities. Then the subpixels are projected onto the square grid, shifted and resampled in the hexagonal grid. Unlike previous hexagonal image processing methods, the intensity of each hexagonal image pixel is estimated in our method, which results in maintaining the density of the arrangement. To the best of our knowledge, our approach is the first work that tries to create a hexagonal image by a virtual increase of the fill factor. After the hexagonal image is generated, the type A2 orbit function of the Weyl group is used to map the image into the orbit domain. Then smoothing and gradient filtering, as typical image processing operations, are performed in the orbit domain on the hexagonal images. We believe that such hexagonal image processing has great potential to become the standard processing for hexagonal arrangements due to its practical usage. Such processing is also closer to the way the human brain works.
This paper is organized as follows. In Sections 2 and 3, the related research on hexagonal resampling and our methodology are discussed in detail. Section 4 presents the experimental setup. The results are then shown and analyzed in Section 5. Finally, we summarize and discuss our work in Section 6.
2. Related works to hexagonal resampling
The software-based approaches rearrange the conventional square pixel grids to generate hexagonal images. There are three most common methods for simulating hexagonal resampling [21]:
The hexagonal grid is mimicked by a half-pixel shift, which is derived from delaying the sampling by half a pixel in the horizontal direction, as shown in Figure 2 [ ]. In Figure 2, the left and right patterns show the conventional square lattice and the new pseudo-hexagonal sampling structure, whose pixel shape is still square; see the six pixels connected by the dashed lines. In such a sampling structure, the distances between two sampling points (pixels) are not all the same; they are either 1 or √5/2.
Figure 2. The procedure from square pixels to hexagonal pixels by the half-pixel shifting method.
The hexagonal pixel is simulated by a set of square pixels, called a hyperpel [ ], which is widely used for displaying a hexagonal image on a normal monitor. Figure 3 shows an example of a hyperpel composed of 20 × 20 square subpixels.
The hexagonal pixel is generated by mimicking the spiral architecture method [ ], averaging the pixel grey-level values of four square pixels in the structure. The method preserves the hexagonal arrangement, as each hexagonal pixel is surrounded by six neighbour pixels. A reduction in the resolution of the resulting hexagonal image is expected due to the averaging of four pixels for each generated hexagonal pixel. Also, in this method the distance between each of the six surrounding pixels and the central pixel is not the same as it should be in a hexagonal structure.
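The unequal sampling distances of the half-pixel-shift grid (the first method above) are easy to verify numerically. This small Python check (our own illustration, not code from the paper) reproduces the two distances, 1 and √5/2:

```python
import math

def shifted_position(row, col):
    """Pixel centre after shifting every other row by half a pixel."""
    x = col + (0.5 if row % 2 else 0.0)
    return (x, float(row))

# Distances from a pixel in a shifted row to its horizontal neighbour
# and to a pixel in the unshifted row above.
p = shifted_position(1, 1)
horizontal = math.dist(p, shifted_position(1, 2))  # same row
diagonal = math.dist(p, shifted_position(0, 1))    # row above

print(horizontal, diagonal)  # 1.0 and sqrt(5)/2 ≈ 1.118
```

In a true hexagonal lattice both distances would be equal, which is why this grid is only pseudo-hexagonal.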
The spiral architecture is a common method for hexagonal addressing and for the processing of hexagonal images [ ]. In the spiral architecture, all the hexagonal pixels are arranged on a spiral; this maps the hexagonal image into a one-dimensional vector. There are still many problems related to the hexagonal arrangement and computation that cannot be solved with these approaches; e.g., the image resolution and pixel intensity values are changed during the resampling from the square grid to the hexagonal grid. In the case of hexagonal computation, the hexagonal image still has to be mapped to a certain architectural form, since Cartesian indexing and coordinates do not hold in hexagonal computation. For example, in the spiral architecture, the hexagonal image is mapped into a one-dimensional vector to achieve faster addressing and processing. However, such a mapping results in a complicated computation process for each image processing operation, e.g., due to the reduction of the neighbour pixels from six to two in the spiral architecture.
3. Methodology
3.1. Hexagonal pixel
In this section, the method for generating a hexagonal image by resampling and half-pixel shifting a square-pixel image is explained. According to the resampling process in [ ], the process is divided into
Figure 3. An example of a hyperpel.
Is <- Select Pixel Matrix
F <- Fill Factor
S <- Active Area Size (F)
G <- Gaussian Distribution Estimation (Is)
In <- Introduce Gaussian Noise (G)
P <- Active Area Size of Middle Pixel
While P <= S
    L <- Local Learning (In)
    X <- Subpixel Intensity (L)
End While
Ev <- Subpixel Gaussian Estimation (X)
H <- Hexagonal Grid Projection (Is)
E <- Histogram Distribution (Ev)
Es <- Subpixel Intensity Estimation (E)
I <- Pixel Intensity Estimation (Es)
Algorithm 1. Resampling process.
three steps: projecting the sampled signal onto a new grid of subpixels; estimating the values of the subpixels at the resampling positions; estimating the new pixel intensity and arranging the data on the hexagonal grid. The three steps are elaborated in the following, as well as in Algorithm 1.
A grid of virtual image sensor pixels is designed. Each pixel is divided into 20 × 20 subpixels. According to the known fill factor FF, the size of the active area is S by S, where S = 20 √FF. The intensity value of every pixel in the image sensor array is assigned to the virtual active area in the new grid. The intensities of subpixels in the non-sensitive areas are set to zero. An example of such a sensor rearrangement on the subpixel level is presented on the left in Figure 4, where there is a 3 by 3 pixel grid, and the light and dark grey areas represent the active and non-active areas in each pixel. The active area is composed of 12 by 12 subpixels, and thereby the fill factor becomes 36 % according to the above equation, and the intensities of the active areas are represented by different grey-level values.
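This first step can be sketched as follows; a simplified Python stand-in for the described rearrangement (function and constant names are hypothetical), spreading each pixel value over a centred active area of S = 20 √FF subpixels:

```python
import numpy as np

SUBPIXELS = 20  # each sensor pixel becomes a 20 x 20 block of subpixels

def to_subpixel_grid(image, fill_factor):
    """Spread each pixel intensity over a centred active area whose side
    follows S = 20 * sqrt(fill_factor); non-sensitive subpixels stay 0."""
    s = int(round(SUBPIXELS * fill_factor ** 0.5))
    margin = (SUBPIXELS - s) // 2
    h, w = image.shape
    grid = np.zeros((h * SUBPIXELS, w * SUBPIXELS))
    for i in range(h):
        for j in range(w):
            r0 = i * SUBPIXELS + margin
            c0 = j * SUBPIXELS + margin
            grid[r0:r0 + s, c0:c0 + s] = image[i, j]
    return grid

pixels = np.array([[100.0, 50.0], [25.0, 200.0]])
grid = to_subpixel_grid(pixels, fill_factor=0.36)
# With FF = 36 %, the active area is 12 x 12 of the 20 x 20 subpixels.
print(int((grid[0:20, 0:20] > 0).sum()))  # 144
```

With FF = 0.36, S = 20 × 0.6 = 12, matching the 12 × 12 active area of the example in Figure 4.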
The second step is to estimate the values of sub-
pixels in the new grid of subpixels. Considering
the statistical fluctuation of incoming photons and
their conversion to electrons on the sensor, there
is a need for a statistical model to estimate the
Figure 4. From left to right: the sensor rearrangement onto the subpixel grid, the projection of the square pixels onto the hexagonal grid by half-pixel shifting, and the pixel intensities displayed by hyperpels.
original signal. Bayesian inference is used for estimating every subpixel intensity, which is considered to be at the new resampling position. Therefore, the more subpixels are used to represent one pixel, the more accurate the resampling will be. By introducing Gaussian noise into a matrix of selected pixels and estimating the intensity values of the subpixels in the non-sensitive area with different sizes of the active area (local modeling), a vector of intensity values for each subpixel is created. Then the subpixel intensity is estimated by maximum likelihood.
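The maximum-likelihood step can be illustrated with a toy model: under i.i.d. Gaussian noise, the ML estimate of a subpixel intensity from its vector of modelled values reduces to the sample mean. This is a simplified stand-in for the statistical framework of [ ], and the noise level used here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def ml_subpixel_intensity(samples):
    """Under i.i.d. Gaussian noise, the maximum-likelihood estimate of
    the underlying intensity is the sample mean."""
    return float(np.mean(samples))

true_intensity = 120.0
# Vector of candidate intensities produced by repeated local modelling
# with injected Gaussian noise (sigma = 2 grey levels, illustrative).
samples = true_intensity + rng.normal(0.0, 2.0, size=500)
estimate = ml_subpixel_intensity(samples)
print(round(estimate, 1))  # close to 120.0
```

The more modelled samples enter the vector, the tighter the estimate, which mirrors the remark above that more subpixels per pixel give a more accurate resampling.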
In the third step, the subpixels are projected back onto the original grid and then transformed onto a hexagonal grid, shown as red grids on the left and in the middle of Figure 4, respectively. The red hexagonal grid, presented in the middle of Figure 4, is used for the estimation of the pixel intensities of the hexagonal image. As the middle row of pixels is shifted to the right by half a pixel, on the subpixel level the sampling position is also shifted by half a pixel, as in the method in [ ]. The yellow grid represents the pixel connections in the new hexagonal sampling grid. In comparison to the method in [ ], in our method the subpixels in each square area are estimated with respect to the virtual increase of the fill factor. The intensity value of a pixel in the hexagonal grid is the intensity value that has the strongest contribution in the histogram of the subpixels belonging to it. The corresponding intensity is divided by the fill factor to remove the fill factor effect and obtain the hexagonal pixel intensity, as illustrated on the right in Figure 4 by means of hyperpels.
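The histogram-based intensity assignment of this third step can be sketched like this (a minimal Python illustration with made-up subpixel values; the paper's actual estimator works within the full statistical framework):

```python
import numpy as np

def hexagonal_pixel_intensity(subpixel_values, fill_factor):
    """Pick the intensity with the strongest contribution in the
    histogram of the subpixels covered by one hexagon, then divide by
    the fill factor to remove its attenuating effect."""
    values = np.asarray(subpixel_values).ravel()
    levels, counts = np.unique(values, return_counts=True)
    mode = levels[np.argmax(counts)]
    return float(mode) / fill_factor

# 30 subpixels at grey level 36 dominate a few stragglers.
subpixels = [36.0] * 30 + [20.0] * 5 + [80.0] * 3
print(round(hexagonal_pixel_intensity(subpixels, fill_factor=0.36)))  # 100
```

Dividing the histogram mode (36) by FF = 0.36 recovers the fill-factor-compensated hexagonal pixel intensity of 100.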
3.2. Hexagonal computation
The discrete transforms of Lie groups provide a possibility for the frequency analysis of discrete functions defined on triangular or hexagonal grids [ ]. Figure 5 shows an example of the fundamental domain and the fundamental weights (ω1, ω2) of an orbit function in the form of a hexagon. The similarity of the grid in the fundamental domain of a certain orbit function of the Weyl group to the grid of hexagonal images makes the orbit functions interesting for hexagonal computation, in which it becomes possible to process hexagonal images without any further transformation. In [ ], the discrete orbit function convolution was defined and was
Figure 5. The fundamental domain, the fundamental weights (ω1, ω2), and the simple roots (α1, α2).
Figure 6. The flow chart of hexagonal computation
of image processing operations.
used for image processing in the spatial domain. Here we propose a hexagonal computation for image processing operations directly in the orbit domain. The flow chart of such a hexagonal computation for typical operations is shown in Figure 6. A hexagonal image, generated by the proposed method in § 3.1, is transformed to the orbit domain using the hexagonal grid related to the image. Then two filter kernels, constructed in the spatial domain, are transformed to the orbit domain using the same hexagonal grid related to the image. The convolution in the orbit domain is a multiplication operation, shown as X in the middle dashed box in Figure 6. According to the convolution theorem in the orbit domain, multiple convolutions can be combined into one by multiplication operations, which implies that all the image processing operations remain in the orbit domain and use the same hexagonal grid. The hexagonal image is obtained by the inverse transformation of the results in the orbit domain, in which the scaling factor plays a key role; it is given by cⁿ, where c is the constant scaling factor of each operation and n is the number of operations.
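The chaining of operations via the convolution theorem can be illustrated with the ordinary 2-D DFT, which obeys the same theorem; this is only an analogy in a different transform domain, not the orbit-function code itself:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((16, 16))
k1 = rng.random((16, 16))  # stands in for a smoothing kernel, padded to image size
k2 = rng.random((16, 16))  # stands in for a gradient kernel, padded to image size

F = np.fft.fft2
Finv = np.fft.ifft2

# Applying two circular convolutions one after the other ...
sequential = Finv(F(Finv(F(image) * F(k1)).real) * F(k2)).real
# ... equals a single inverse transform of the combined product.
combined = Finv(F(image) * F(k1) * F(k2)).real

print(np.allclose(sequential, combined))  # True
```

All intermediate results stay in the transform domain; only one inverse transform is needed at the end, which is exactly the property exploited in the orbit domain above.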
4. Experimental setup
A group of optical images with an assumption of a known fill factor were simulated using our own code and the Image Systems Evaluation Toolbox (ISET) [ ] in MATLAB. The ISET is designed to evaluate how image capturing components and algorithms influence image quality, and it has been proved to be an effective tool for simulating the sensor and image capturing [ ]. A fill factor value of 36 % was chosen for the simulated image sensor, which has a resolution of 128 × 128 and 8-bit quantization levels. The sensor had a pixel area of 8 × 8 µm², with a well capacity of 10000 e⁻. The read noise and the dark current noise were set to 1 mV and 1 mV/pixel/sec, respectively. The image sensor was rearranged to a new grid of virtual square sensor pixels, each of which was composed of 20 × 20 subpixels. The whole image simulation setup is the same as in [ ]. The optical images are from COIL-20 (Columbia University Image Library) [ ], which is a database of grayscale images of 20 objects. For the generation of sensor images, the luminance of the optical images was set to 200 cd/m² and the diffraction of the optical system was considered limited, to ensure that the brightness of the output is as close as possible to the brightness of the scene image. The exposure time was also set to what is used for a sensor with a 100 % fill factor, correspondingly for the simulated sensor. For capturing images, the sensor exposure time was set to 1 ms. All the processing was programmed and done in Matlab 2015 on an HP laptop with an Intel i7-5600U CPU and 16 GB of RAM to keep the process stable and fast.
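Our simulation is done in ISET; as a rough, self-contained sketch of what such a sensor model does, the following Python toy (all parameter values are illustrative, and the read noise is expressed in electrons rather than the millivolt figure quoted above) clips at the full-well capacity and quantizes to 8 bits:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_sensor(photoelectrons, well_capacity=10_000,
                    read_noise_e=10.0, bits=8):
    """Toy sensor model: clip at the full-well capacity, add Gaussian
    read noise, quantise to 2**bits digital levels."""
    e = np.clip(photoelectrons, 0, well_capacity)
    e = e + rng.normal(0.0, read_noise_e, size=e.shape)
    dn = np.round(np.clip(e, 0, well_capacity) / well_capacity * (2 ** bits - 1))
    return dn.astype(np.uint8)

scene = np.full((4, 4), 12_000.0)  # overexposed patch: more electrons than the well holds
dn = simulate_sensor(scene, read_noise_e=0.0)  # noise disabled for a deterministic check
print(dn[0, 0])  # 255: the patch saturates the well
```

This reproduces the basic interplay between well capacity, noise, and the 8-bit quantization used in the setup above.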
5. Results and discussion
Three of the hexagonal images generated according to the proposed method in § 3.1 are shown next to the bottom row of Figure 7, where the corresponding optical images, simulated sensor images and recovered square-pixel images are shown in the rows from top to bottom, respectively. The bottom row in Figure 7 shows the images zoomed in on the red rectangle areas. For visualization purposes, each pixel in the hexagonal images is mapped according to the hyperpel method in [ ] and is composed of 20 × 20 subpixels. The corresponding logarithms of the histograms of the hexagonal and square-pixel images in Figure 7 are shown in Figure 8, which indicates that in both the recovered square-pixel images and the generated hexagonal images the tonal levels are extended, and also that the tonal ranges are wider in comparison to the simulated camera sensor image. The generated hexagonal images are quite similar to the square-pixel images in intensity values, according to their histograms shown in Figure 8. This is reasonable, as the hexagonal images are obtained by half-pixel shifting of the square-pixel image recovered with the enhanced fill factor.
Figure 7. From top to bottom: the optical images, the simulated sensor images, the recovered square-pixel images, the hexagonal images, and the zoomed regions, shown with red rectangles, in the hexagonal images.
In our test, the type A2 orbit function of the Weyl group is used for the transformation of the hexagonal image to the orbit domain. By using the orbit functions, there is no need to map the hexagonal image to a certain coordinate system or addressing system. We believe that such hexagonal image processing has great potential to become the standard processing for hexagonal arrangements due to its practical usage. Such processing is also closer to the way the human brain works [ ], considering certain pathways of mapping of the information from the hexagonal grid in the visual system.
In our experiment, three basic filter kernels in the spatial domain are used to operate on the generated hexagonal images. These filters are a mean filter, a gradient (edge detector) filter, and a combination of the two. According to the orbit convolution definition and a hexagonal grid, the spatial-domain mean and edge detector filter kernels are as follows:
Figure 8. Log of the histograms of the three object images shown in Figure 7: Bird (left), Cylinder (right), Car (bottom).

fmean = 1/9 [ 1 1 1 ; 1 1 1 ; 1 1 1 ],    fedge = [ 0 0 1 ; 1 −3 0 ; 0 0 1 ].
The filtering results according to the hexagonal computation (see § 3.2) are shown in Figure 9. From left to right, the images filtered by the mean, the edge detector, and the combination filters are shown, where the edges are shown as a binary image. The results of the edge detector filter are improved by applying the smoothing filter, since the used edge detector filter is a first-order gradient computation.

Figure 9. From left to right: the images filtered by the mean filter, the edge detector filter, and the combination filter.

Images            Object 1  Object 2  Object 3
Optical square    4858      3528      1998
Recovered square  3169      2357      4858
Hexagonal         5135      3295      3436

Table 1. The result of the ratio images for the different types of images.

Each pixel in the hexagonal image has six contiguous neighbors, which results in a more effective and efficient responsivity to gradient-based edge detection.
To show this responsivity, two different mean filters, of sizes 3 × 3 and 5 × 5, are used for the smoothing of the optical square-pixel images, the recovered square-pixel images and the hexagonal images; then the edge detector is used on the smoothed images to detect the edges. The two resulting edge-detected images, obtained using the two smoothing filters on each of the optical square-pixel images, the recovered square-pixel images or the hexagonal images, are grey-level images. The pixel-based intensity ratio of each pair of edge-detected images is computed, and finally the sum of such a ratio image is obtained. Apparently, the blurring effect of more heavily smoothed images reduces the detected edges. In Table 1, the result of the ratio images for the different types of images is shown. According to the table, although it is impossible to detect the vertical edges due to the hexagonal form of our hyperpel, the hexagonal images still preserve the edges better in comparison to the square-pixel images.
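The comparison procedure can be mimicked on a plain square grid with the short Python sketch below (our own illustration; the paper's filters operate on the hexagonal and orbit representations). Smoothing with a larger mean filter weakens the gradient response, which is what the ratio measure in Table 1 quantifies:

```python
import numpy as np

def mean_filter(img, k):
    """k x k box smoothing with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    return np.array([[p[i:i + k, j:j + k].mean()
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def gradient_magnitude(img):
    """First-order gradient magnitude, a simple edge detector."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

# A vertical step edge.
step = np.zeros((16, 16))
step[:, 8:] = 1.0

e3 = gradient_magnitude(mean_filter(step, 3))  # edges after 3x3 smoothing
e5 = gradient_magnitude(mean_filter(step, 5))  # edges after 5x5 smoothing

# Pixel-wise ratio of the two edge maps, summed as in Table 1.
ratio_sum = float(np.sum(e3 / (e5 + 1e-6)))
print(e3.max() > e5.max())  # True: heavier blurring weakens the detected edges
```

Larger ratio sums indicate that the edges survive the extra blurring better, which is the sense in which the hexagonal images in Table 1 come out ahead.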
6. Conclusion
In this paper, a novel approach is proposed for generating hexagonal images from a simulated sensor based on a known fill factor of the sensor. The results show that the generated hexagonal images have the same tonal range and tonal levels as the square-pixel images. Also, their histogram distributions are very similar, indicating that our approach keeps the source information as much as possible during the generation. The orbit functions of the Weyl group are used for the hexagonal computation. Using orbit functions, it is possible to process hexagonal images with multiple operators, and at the same time, in the orbit domain, without any further need of mapping to a different coordinate system or addressing system. Although the processing speed of the orbit function convolution is still very slow, it provides a special domain for the hexagonal computation and for the image processing operations. The results also show that edge detection on the hexagonal images preserves the edges better than on the square-pixel images, with a significant curvature detection ability. In the future, our work will be to improve the processing speed of the orbit functions, to test our hexagonal image generation method with real cameras, and to develop a way to create a hexagonal image from a conventional rectangular image sensor.
Acknowledgements
We would like to thank Ondřej Kajínek for providing and explaining the Matlab code of the orbit function.

References
[1] D. B. Goldstein. Physical limits in digital
photography. Northlight Images 2009.
[2] B. Burke, P. Jorden, P. Vu. Ccd technology.
Experimental Astronomy 19(1-3):69–102, 2005.
[3] T. Chen, P. B. Catrysse, A. El Gamal, B. A. Wandell.
How small should pixel size be? In Electronic Imaging,
pp. 451–459. International Society for Optics and
Photonics, 2000.
[4] W. Wen, S. Khatibi. Novel software-based method to
widen dynamic range of ccd sensor images. In Image
and Graphics, pp. 572–583. Springer, 2015.
[5] T. D. Lamb. Evolution of phototransduction,
vertebrate photoreceptors and retina. Progress in
retinal and eye research 36:52–119, 2013.
[6] N. D. Tam. Hexagonal pixel-array for efficient spatial
computation for motion-detection pre-processing of
visual scenes. Advances in image and video processing
2(2):26–36, 2014.
[7] C. A. Curcio, K. R. Sloan, O. Packer, et al.
Distribution of cones in human and monkey retina:
individual variability and radial asymmetry. Science
236(4801):579–582, 1987.
[8] B. D. Coleman, G. H. Renninger. Theory of delayed
lateral inhibition in the compound eye of limulus.
Proceedings of the National Academy of Sciences
71(7):2887–2891, 1974.
[9] A. Schultz, M. Wilcox. Parallel image segmentation
using the l4 network. Biomedical sciences
instrumentation 35:117–121, 1998.
[10] M. V. Srinivasan, G. D. Bernard. The effect of motion
on visual acuity of the compound eye: a theoretical
analysis. Vision research 15(4):515–525, 1975.
[11] S. Laughlin. A simple coding procedure enhances a
neuron’s information capacity. Zeitschrift für
Naturforschung c 36(9-10):910–912, 1981.
[12] X. He, W. Jia, N. Hur, et al. Bilateral edge detection
on a virtual hexagonal structure. In Advances in Visual
Computing, pp. 176–185. Springer, 2006.
[13] R. C. Staunton. The design of hexagonal sampling
structures for image digitization and their use with
local operators. Image and vision computing
7(3):162–166, 1989.
[14] E. Davies. Optimising computation of hexagonal
differential gradient edge detector. Electronics Letters
27(17):1526–1527, 1991.
[15] S. Abu-Baker, R. Green. Detection of edges based on
hexagonal pixel formats. In Signal Processing, 1996.,
3rd International Conference on, vol. 2, pp. 1114–1117.
IEEE, 1996.
[16] R. C. Staunton. Hexagonal image sampling: A
practical proposition. In 1988 Robotics Conferences, pp. 23–27. International Society for Optics and Photonics, 1988.
[17] B. Gardiner, S. Coleman, B. Scotney. Multi-scale
feature extraction in a sub-pixel virtual hexagonal
environment. In Machine Vision and Image Processing
Conference, 2008. IMVIP’08. International, pp.
111–116. IEEE, 2008.
[18] E. Davies. Low-level vision requirements. Electronics & Communication Engineering Journal 12(5):197–210.
[19] Z. T. Jiang, Q. H. Xiao, L. H. Zhu. 3d reconstruction
based on hexagonal pixel's dense stereo matching.
In Applied Mechanics and Materials, vol. 20, pp.
487–492. Trans Tech Publ, 2010.
[20] D. Schweng, S. Spaeth. Hexagonal color pixel
structure with white pixels, 2008. US Patent 7,400,332.
[21] X. He, W. Jia. Hexagonal structure for intelligent
vision. In Information and Communication
Technologies, 2005. ICICT 2005. First International
Conference on, pp. 52–64. IEEE, 2005.
[22] B. Horn. Robot vision. MIT press, 1986.
[23] C. A. Wüthrich, P. Stucki. An algorithmic
comparison between square-and hexagonal-based grids.
CVGIP: Graphical Models and Image Processing
53(4):324–339, 1991.
[24] X. He. 2-d object recognition with spiral architecture. University of Technology, Sydney, 1999.
[25] P. Sheridan, T. Hintz, W. Moore. Spiral Architecture for machine vision. University of Technology, Sydney.
Application of multi-dimensional discrete transforms on
lie groups for image processing. Data Fusion for
Situation Monitoring, Incident Detection, Alert and
Response Management 198:404, 2005.
[27] G. Chadzitaskos, L. Háková, O. Kajínek. Weyl group
orbit functions in image processing, 2014.
[28] J. E. Farrell, F. Xiao, P. B. Catrysse, B. A. Wandell.
A simulation tool for evaluating digital camera image
quality. In Electronic Imaging 2004, pp. 124–131.
International Society for Optics and Photonics, 2003.
[29] J. Farrell, M. Okincha, M. Parmar. Sensor
calibration and simulation. In Electronic Imaging 2008,
pp. 68170R–68170R. International Society for Optics
and Photonics, 2008.
[30] S. A. Nene, S. K. Nayar, H. Murase, et al. Columbia
object image library (coil-20). Tech. rep., Technical
Report CUCS-005-96, 1996.
[31] A. B. Watson, A. J. Ahumada. A hexagonal
orthogonal-oriented pyramid as a model of image
representation in visual cortex. IEEE Transactions on
Biomedical Engineering 36(1):97–106, 1989.
... Some of the obstacles in developing new sensor arrangements are the difficulty in manufacturing, the cost, and rigidity of hardware components. The virtual deformation of the sensor arrangement [3] provides new possibilities for overcoming such obstacles. We need strong arguments to convince the involved partners in sensor development to implement the virtual deformation ideas. ...
... The new pseudo hexagonal grid is derived from a usual 2D grid by shifting each even row a half pixel to the right and leaving odd rows unattached, or of course any similar translation. The virtual Half-pixel Shift Enriched image (HS_E) is generated from the original enriched image [3] which has a square arrangement. ...
... Figure 14 shows the mean (a) and variance (b) of the ratio values of ten corresponding pixel sets between each of SQ and HS_E to the Hex_E image. The mean (a) shows the nonlinear relation between SQ to Hex_E, which was previously shown in [3,18]. The mean (a) also shows that the relation between HS_E to Hex_E is similar to the relation between SQ to Hex_E and behaves in a nonlinear manner. ...
The study of the evolution process of our visual system indicates the existence of variational spatial arrangement; from densely hexagonal in the fovea to a sparse circular structure in the peripheral retina. Today’s sensor spatial arrangement is inspired by our visual system. However, we have not come further than rigid rectangular and, on a minor scale, hexagonal sensor arrangements. Even in this situation, there is a need for directly assessing differences between the rectangular and hexagonal sensor arrangements, i.e., without the conversion of one arrangement to another. In this paper, we propose a method to create a common space for addressing any spatial arrangements and assessing the differences among them, e.g., between the rectangular and hexagonal ones. Such a space is created by implementing a continuous extension of the discrete Weyl group orbit function transform, which extends a discrete arrangement to a continuous one. The implementation of the space is demonstrated by comparing two types of hexagonal images generated from each rectangular image with two different methods: the half-pixel shifting method and the virtual hexagonal method. In the experiment, a group of ten texture images was generated with variational curviness content using ten different Perlin noise patterns, added to an initial 2D Gaussian distribution pattern image. Then, the common space was obtained from each of the discrete images to assess the differences between the original rectangular image and its corresponding hexagonal image. The results show that the space provides a user-friendly tool to address an arrangement and assess the changes between different spatial arrangements, by which, in the experiment, the hexagonal images show richer intensity variation, nonlinear behavior, and a larger dynamic range in comparison to the rectangular images.
... The results show the dynamic range is widened and the tonal level is extended. (b) The grid and pixel are hexagonal and square, respectively, and there is no or a fixed gap in [29], where the hexagonal grid is generated by a half-pixel shifting; its results show that the generated hexagonal images are superior to the square images in the detection of curvature edges. (c) The grid and pixel are hexagonal and there is no gap [30]. ...
Our vision system has a combination of different sensor arrangements, from hexagonal to elliptical ones. Inspired by this variation in the type of arrangements, we propose a general framework by which it becomes feasible to create virtual deformable sensor arrangements. In the framework, for a certain sensor arrangement, a configuration of three optional variables is used, which includes the structure of the arrangement, the pixel form and the gap factor. We show that the histogram of gradient orientations of a certain sensor arrangement has a specific distribution (called ANCHOR), which is obtained by using at least two generated images of the configuration. The results showed that ANCHORs change their patterns with the change of arrangement structure. In this relation, pixel size changes have a 10-fold greater impact on ANCHORs than gap factor changes. A set of 23 images, randomly chosen from a database of 1805 images, is used in the evaluation, where each image generates twenty-five different images based on the sensor configuration. The robustness of the ANCHOR properties is verified by computing ANCHORs for a total of 575 images with different sensor configurations. We believe that by using the framework and ANCHOR it becomes feasible to plan a sensor arrangement in relation to a specific application and its requirements, where the sensor arrangement can even be planned as a combination of different ANCHORs.
... However, practical issues and the history of camera development have made us use fixed arrangements and forms of pixel sensors. Our previous works [1,2] showed that, despite the limitations of hardware, it is possible to implement a software method to change the size of pixel sensors. In this paper, we present a software-based method to change the arrangement and form of pixel sensors. ...
The arrangement and form of the image sensor have a fundamental effect on any further image processing operation and image visualization. In this paper, we present a software-based method to change the arrangement and form of pixel sensors that generates hexagonal pixel forms on a hexagonal grid. We evaluate four different image sensor forms and structures, including the proposed method. A set of 23 pairs of images, randomly chosen from a database of 280 pairs of images, is used in the evaluation. Each pair of images has the same semantic meaning and general appearance, the major difference between them being the sharp transitions in their contours. The curviness variation is estimated by the effect of the first- and second-order gradient operations, the Hessian matrix and critical-point detection on the generated images, which have different grid structures, different pixel forms and a virtual increase of the fill factor as the three major properties of sensor characteristics. The results show that the grid structure and the pixel form are the first and second most important properties. Several dissimilarity parameters are presented for curviness quantification, among which using extremum points achieves distinctive results. The results also show that the hexagonal image is the best image type for distinguishing the contours in the images.
... In the regular hexagonal lattice, the vertical to horizontal pitch ratio is √3 : 2 ≈ 0.8660, and in the “brick wall” approach the ratio is 1 : 1, while in the “hyperpel” approach the ratio is 7 : 8 = 0.8750. Therefore, the “hyperpel” approach can be treated as a good approximation and it is the most common displaying approach used in current hexagonal image research [2,3,16–18]. ...
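The three pitch ratios quoted above can be checked directly; the regular-lattice value is √3/2, and the "hyperpel" ratio is the closer of the two approximations:

```python
import math

# vertical-to-horizontal pitch ratio of the three lattices discussed above
regular = math.sqrt(3) / 2   # regular hexagonal lattice, ~0.8660
brick_wall = 1 / 1           # "brick wall" approximation
hyperpel = 7 / 8             # "hyperpel" approximation, 0.8750

# the hyperpel approach approximates the regular ratio better than brick wall
assert abs(hyperpel - regular) < abs(brick_wall - regular)
```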
Hexagonal image processing is theoretically superior to that based on the common square lattice, but practical imaging devices are lacking, so this paper implements a simulated display. The paper investigates the sampling lattices used in hexagonal image processing and finds that variable lattices occur in the hexagonal discrete Fourier transform (HDFT), while the most commonly used “hyperpel” approach, due to its fixed cell pattern, cannot handle such displaying tasks. The paper therefore proposes to represent each pixel with the exact Voronoi cell (VC) according to the sampling lattice. A simple algorithm is presented to compute the VC vertices; each simulated pixel is constructed as the VC filled with its intensity value, and the simulated display is implemented by tessellating the simulated pixels at the correct lattice positions. Finally, experimental results show that the proposed simulated display can display data without geometric distortion.
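For the special case of a regular hexagonal lattice, the Voronoi cell of each sample is a regular hexagon whose vertices have a closed form. The sketch below assumes that case and a fixed orientation; it only illustrates the idea and is not the paper's general VC algorithm, which handles variable lattices:

```python
import math

def hex_voronoi_vertices(cx, cy, pitch):
    """Vertices of the Voronoi cell (a regular hexagon) of a sample on a
    regular hexagonal lattice with nearest-neighbour distance `pitch`.

    The cell's inradius is pitch/2, so its circumradius is pitch/sqrt(3).
    The orientation (first vertex at 30 degrees) is an assumption."""
    r = pitch / math.sqrt(3)
    return [(cx + r * math.cos(math.pi / 6 + k * math.pi / 3),
             cy + r * math.sin(math.pi / 6 + k * math.pi / 3))
            for k in range(6)]
```

Filling each such polygon with the sample's intensity and tiling them at the lattice positions gives the simulated display for this particular lattice.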
The quality assessment of a high dynamic range image is a challenging task. The few available no-reference image quality methods for high dynamic range images are generally in the evaluation stage, and most available image quality assessment methods are designed to assess low dynamic range images. In this paper, we show the assessment of high dynamic range images which are generated by utilizing a virtually flexible fill factor on the sensor images. We present a new method for the assessment process and evaluate the amount of improvement of the generated high dynamic range images in comparison to the original ones. The results show that the generated images not only have more tonal levels in comparison to the original ones, but also that the dynamic range of the images has significantly increased, due to the measurable improvement values.
In the past twenty years, the CCD sensor has made huge progress in improving resolution and low-light performance by hardware. However, due to the physical limits of sensor design and fabrication, the fill factor has become the bottleneck for improving the quantum efficiency of the CCD sensor to widen the dynamic range of images. In this paper we propose a novel software-based method to widen the dynamic range by a virtual increase of the fill factor, achieved by a resampling process. The CCD images are rearranged onto a new grid of virtual pixels composed of subpixels. A statistical framework consisting of a local learning model and Bayesian inference is used to estimate the new subpixel intensities. CCD images with different known fill factors were obtained; then new resampled images were computed and compared to the respective CCD and optical images. The results show that the proposed method makes it possible to significantly widen the recordable dynamic range of CCD images and to increase the fill factor virtually to 100 %.
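The first resampling step, rearranging each CCD pixel into a grid of virtual subpixels, can be sketched as follows. The subsequent re-estimation of the subpixel intensities is the paper's actual contribution (local learning plus Bayesian inference) and is not reproduced here; plain replication is used only as a placeholder:

```python
import numpy as np

def expand_to_subpixels(img, s=3):
    """Turn each sensor pixel into an s x s block of virtual subpixels.

    Plain replication is a placeholder for the paper's statistical
    framework, which re-estimates each subpixel intensity instead."""
    return np.kron(img.astype(float), np.ones((s, s)))
```

Replication preserves the mean intensity, so any later estimator only redistributes intensity within the virtual pixel grid.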
Motion-detection, edge-detection, and orientation-detection often require spatial computation of the light intensity difference between neighboring pixel cells. Pre-processing the image at the retinal level can improve the computational efficiency when the parallel processing can be achieved naturally by the spatial arrangement of the pixel-array. Pixel-arrays that require spatial pre-processing computation have different geometric constraints than plain pixel-arrays that do not require such local computation. Our analysis shows that geometric optimization of pixel-arrays can be achieved by using hexagonal arrays over rectilinear arrays. Hexagonal arrays improve the packing density, and reduce the complexity of the spatial computation compared to rectilinear square and octagonal arrays. They also provide geometric symmetry for efficient computation not only at the contiguous neighboring cell level, but also at the higher-order neighboring cell level. The light intensity difference at the higher-order cell level is used to compute the first-order and second-order time-derivatives for velocity and acceleration detections of the visual scene, respectively. Thus, hexagonal arrays increase the computational efficiency by using a symmetric configuration that allows pre-processing of spatial information of the visual scene using hardware implementations that are repeatable in all higher-order neighboring pixels.
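The neighbourhood symmetry described above is easy to express in axial hexagonal coordinates, where every cell has six equidistant first-order neighbours and the n-th-order ring always holds exactly 6n cells. A common walking construction (the coordinate convention here is an assumption, not from the paper):

```python
# Axial coordinates: the six first-order neighbours of a hexagonal cell.
AXIAL_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_ring(q, r, n):
    """Cells on the n-th hexagonal ring around (q, r); a ring holds 6*n
    cells, which is the higher-order neighbourhood used for derivative
    computations in the text above."""
    if n == 0:
        return [(q, r)]
    results = []
    # start n steps out in one direction, then walk n steps along each
    # of the six sides of the ring
    cq = q + AXIAL_DIRS[4][0] * n
    cr = r + AXIAL_DIRS[4][1] * n
    for side in range(6):
        for _ in range(n):
            results.append((cq, cr))
            cq += AXIAL_DIRS[side][0]
            cr += AXIAL_DIRS[side][1]
    return results
```

Because each ring is a symmetric, equidistant shell, the same local operator can be replicated at every order of neighbourhood, which is the hardware repeatability argument made above.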
We deal with the Fourier-like analysis of functions on discrete grids in two-dimensional simplexes using $C$- and $E$- Weyl group orbit functions. For these cases we present the convolution theorem. We provide an example of an application to image processing using the $C$-functions and the convolutions for spatial filtering of the treated image.
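For orientation, the $C$-orbit function used in this line of work is a Weyl-group-symmetrized exponential sum; a standard form (conventions and normalization by the stabilizer of $\lambda$ vary between papers) is:

```latex
% C-orbit function of a weight \lambda, symmetrized over the Weyl group W
C_\lambda(x) = \sum_{w \in W} e^{2\pi i \langle w\lambda,\, x \rangle}
% The convolution theorem then takes the familiar product form in the
% orbit-function transform domain:
\widehat{(f * g)} = \hat{f}\,\hat{g}
```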
Evidence is reviewed from a wide range of studies relevant to the evolution of vertebrate photoreceptors and phototransduction, in order to permit the synthesis of a scenario for the major steps that occurred during the evolution of cones, rods and the vertebrate retina. The ancestral opsin originated more than 700 Mya (million years ago) and duplicated to form three branches before cnidarians diverged from our own lineage. During chordate evolution, ciliary opsins (C-opsins) underwent multiple stages of improvement, giving rise to the 'bleaching' opsins that characterise cones and rods. Prior to the '2R' rounds of whole genome duplication near the base of the vertebrate lineage, 'cone' photoreceptors already existed; they possessed a transduction cascade essentially the same as in modern cones, along with two classes of opsin: SWS and LWS (short- and long-wave-sensitive). These cones appear to have made synaptic contact directly onto ganglion cells, in a two-layered retina that resembled the pineal organ of extant non-mammalian vertebrates. Interestingly, those ganglion cells appear to be descendants of microvillar photoreceptor cells. No lens was associated with this two-layered retina, and it is likely to have mediated circadian timing rather than spatial vision. Subsequently, retinal bipolar cells evolved, as variants of ciliary photoreceptors, and greatly increased the computational power of the retina. With the advent of a lens and extraocular muscles, spatial imaging information became available for central processing, and gave rise to vision in vertebrates more than 500 Mya. The '2R' genome duplications permitted the refinement of cascade components suitable for both rods and cones, and also led to the emergence of five visual opsins. The exact timing of the emergence of 'true rods' is not yet clear, but it may not have occurred until after the divergence of jawed and jawless vertebrates.
We describe a method for simulating the output of an image sensor to a broad array of test targets. The method uses a modest set of sensor calibration measurements to define the sensor parameters; these parameters are used by an integrated suite of Matlab software routines that simulate the sensor and create output images. We compare the simulations of specific targets to measured data for several different imaging sensors with very different imaging properties. The simulation captures the essential features of the images created by these different sensors. Finally, we show that by specifying the sensor properties the simulations can predict sensor performance to natural scenes that are difficult to measure with a laboratory apparatus, such as natural scenes with high dynamic range or low light levels.
The Image Systems Evaluation Toolkit (ISET) is an integrated suite of software routines that simulate the capture and processing of visual scenes. ISET includes a graphical user interface (GUI) for users to control the physical characteristics of the scene and many parameters of the optics, sensor electronics and image-processing pipeline. ISET also includes color tools and metrics based on international standards (chromaticity coordinates, CIELAB and others) that assist the engineer in evaluating the color accuracy and quality of the rendered image.
A new feature point extraction method is presented, which considers pixels as hexagonal. The method quasi-increases the density of image pixels, expands the dynamic range of feature point extraction, increases the number of features and resolves the problem of deformation in reconstruction caused by a lack of feature points. Firstly, the method was successfully applied to the SIFT operator for feature extraction, and a dense stereo matching method was then used to find the matching points of the image sequences. Secondly, mismatches were eliminated through the RANSAC method, and, using the camera matrix, the three-dimensional space coordinates of the corresponding points were calculated. Finally, the 3D model can be established through the partition merging triangulation method and texture mapping. Experimental results show that this method can obtain more accurate matching pairs and achieve a satisfactory 3D reconstruction.
A case is made for the use of regular hexagonal sampling systems in robot vision applications. With such a sampling technique, neighbouring pixels reside in equidistant shells surrounding the central pixel, and this leads to a simpler design and faster processing of local operators and to a reduced image storage requirement. Connectivity within the image is easily defined, and the aliasing associated with vertical lines in the hexagonal system is not a problem for robot vision. With modern processors, only a minimal time penalty is incurred in reporting results in a rectangular co-ordinate system, and a comparison between equivalent processing times for hexagonal and rectangular systems implemented on a popular processor has shown savings in excess of 40 % for hexagonal edge detection operators. Little modification is required to TV frame grabber hardware to enable hexagonal digitisation.
An algorithmic comparison between square and hexagonal grids, showing the strong similarity between them, is presented. General digitalization schemes, valid for all real curves of the plane, are presented for the hexagonal grid. A proof of the validity in the hexagonal case of Bresenham's straight-line algorithm for calculating the digitalization of a straight line joining two end-points is given. The transformations necessary to perform correctly the digitalization of a straight line using Bresenham's algorithm on a hexagonal grid are introduced. A simple circle drawing algorithm is presented. A simulator to render graphic primitives both on the hexagonal grid and on the square grid is then introduced, and various outputs of the simulator are shown and discussed.
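The square-grid baseline discussed above, Bresenham's integer line algorithm, can be stated compactly. The paper's contribution is the coordinate transformations that make the same update rule valid on a hexagonal grid; those are not reproduced here, only the classic square-grid form:

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer Bresenham line on a square grid. The paper shows
    that, after suitable coordinate transformations, the same error-
    update scheme digitizes straight lines on a hexagonal grid."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    points = [(x0, y0)]
    while (x0, y0) != (x1, y1):
        e2 = 2 * err
        if e2 > -dy:       # step in x
            err -= dy
            x0 += sx
        if e2 < dx:        # step in y
            err += dx
            y0 += sy
        points.append((x0, y0))
    return points
```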
Edge detection plays an important role in the image processing area. This paper presents an edge detection method based on bilateral filtering, which achieves better performance than single Gaussian filtering. In this form of filtering, both the spatial closeness and the intensity similarity of pixels are considered in order to preserve important visual cues provided by edges and to reduce the sharpness of transitions in intensity values as well. In addition, the edge detection method proposed in this paper is applied to sampled images represented on a newly developed virtual hexagonal structure. Due to the compact and circular nature of the hexagonal lattice, a better quality edge map is obtained on the hexagonal structure than with common edge detection on a square structure. Experimental results using the proposed methods exhibit encouraging performance.
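The bilateral weighting described above, spatial closeness times intensity similarity, can be sketched naively. A square neighbourhood is assumed here for brevity; the paper applies the idea on its virtual hexagonal structure instead:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Naive bilateral filter: each output pixel is a weighted mean of
    its neighbourhood, where weights combine a spatial Gaussian with an
    intensity-similarity Gaussian, smoothing while preserving edges."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # intensity-similarity weights relative to the centre pixel
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

With a small `sigma_r`, pixels across an intensity step contribute almost nothing, so the step survives filtering while uniform regions are smoothed; this is the edge-preserving property the edge detector relies on.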