sensors
Article
A Common Assessment Space for Different
Sensor Structures
Wei Wen 1,*, Ondřej Kajínek 2, Siamak Khatibi 1,* and Goce Chadzitaskos 2
1 Department of Technology and Aesthetics, Blekinge Institute of Technology, 37179 Karlskrona, Sweden
2 Department of Physics, Czech Technical University, 11519 Prague 1, Czech Republic; kajinond@fjfi.cvut.cz (O.K.); goce.chadzitaskos@fjfi.cvut.cz (G.C.)
* Correspondence: wei.wen@bth.se (W.W.); siamak.khatibi@bth.se (S.K.)
Received: 21 November 2018; Accepted: 22 January 2019; Published: 29 January 2019


Abstract:
The study of the evolution process of our visual system indicates the existence of variational
spatial arrangement; from densely hexagonal in the fovea to a sparse circular structure in the
peripheral retina. Today’s sensor spatial arrangement is inspired by our visual system. However,
we have not come further than rigid rectangular and, on a minor scale, hexagonal sensor arrangements.
Even in this situation, there is a need for directly assessing differences between the rectangular and
hexagonal sensor arrangements, i.e., without the conversion of one arrangement to another. In this
paper, we propose a method to create a common space for addressing any spatial arrangements and
assessing the differences among them, e.g., between the rectangular and hexagonal.
Such a space is created by implementing a continuous extension of the discrete Weyl Group orbit function transform,
which extends a discrete arrangement to a continuous one. The implementation of the space is
demonstrated by comparing two types of generated hexagonal images from each rectangular image
with two different methods of the half-pixel shifting method and virtual hexagonal method. In the
experiment, a group of ten texture images were generated with variational curviness content using
ten different Perlin noise patterns, adding to an initial 2D Gaussian distribution pattern image.
Then, the common space was obtained from each of the discrete images to assess the differences
between the original rectangular image and its corresponding hexagonal image. The results show
that the space provides a user-friendly tool to address an arrangement and assess the changes
between different spatial arrangements by which, in the experiment, the hexagonal images show
richer intensity variation, nonlinear behavior, and larger dynamic range in comparison to the
rectangular images.
Keywords:
software-based; common space; hexagonal image; pixel arrangement; pixel form;
continuous extension; resampling
PACS: J0101
1. Introduction
The visual sensory systems of some biological species can easily outperform our conventional vision
technology. Inspired by such efficient machines, we have built electronic systems which aim
to capture a scene with the same efficiency by emulating the structure and function of their
biological counterparts. The sensor structure, sensor form, and surface shape of the eye
show a wide range of adaptations to meet the requirements of the organisms which bear them. Eye
performance varies among species in visual acuity: the range of wavelengths they can detect,
their sensitivity in low light, their ability to detect motion or to resolve objects, and whether they can
discriminate colors [1]. The spatial sensor arrangement of the eyes plays a significant role in such
variational performances [2]. The study of the evolution process of our visual system indicates how
our spatial sensor arrangement has evolved and differentiated from that of other species, especially from
the closest ones, the primates, which has resulted in the existence of a variational spatial arrangement;
from densely hexagonal in the fovea to a sparse circular structure in the peripheral retina. The high
contrast and optimal sampling properties of our visual system are directly related to the densely
hexagonal spatial arrangement.
Today’s sensor spatial arrangement is inspired by our visual system. However, we have not
come further than rigid rectangular and, on a minor scale, hexagonal sensor arrangements. Some of
the obstacles in developing new sensor arrangements are the difficulty in manufacturing, the cost,
and rigidity of hardware components. The virtual deformation of the sensor arrangement [3] provides
new possibilities for overcoming such obstacles. We need strong arguments to convince the involved
partners in sensor development to implement the virtual deformation ideas. It is not enough to
show that the virtual deformation of the sensor arrangement is feasible; it must also be shown that the addressing of new
arrangements can be achieved easily and smoothly, without the need to define new grid structures,
which generally results in heavy computation. Thus, we propose a new method in this paper which
eliminates the need for defining new grid structures for addressing different sensor arrangements.
One direct application of the proposed method is its implementation as an assessment tool where
different sensor arrangements are compared with each other; i.e., without the need for conversion of
one arrangement to another one.
In this paper, we propose a method to create a common space which facilitates addressing and
assessing different spatial arrangements of sensors, e.g., between the rectangular and hexagonal
arrangements. Such a space is created by implementing a continuous extension of discrete
Weyl Group orbit function transform which extends a discrete arrangement to a continuous one.
The implementation of the space is demonstrated by comparing two types of generated hexagonal
images from each rectangular image with two different methods of the half-pixel shifting and virtual
hexagonal method. In the experiment, a group of ten texture images are generated with variational
curviness content using ten different Perlin noise patterns, adding to an initial 2D Gaussian distribution
pattern image. Then, the common space is obtained from each of the discrete images to address and
assess the differences between the original rectangular image and its corresponding hexagonal image.
This paper is organized as follows. In Section 2, the addressing of arrangement is explained.
Then the two types of image generation are explained in Section 3. Sections 4 and 5 present the
methodology of the common space and the experiment setup, respectively. Then the results are shown
and discussed in Section 6. Finally, we summarize our work in Section 7.
2. Arrangement Addressing
In relation to the assessment of two images having two different arrangements, e.g., one having
a square and the other a hexagonal arrangement, the addressing of the arrangement is the most important
issue, since it makes it possible to access each arrangement unit (the pixel). Such an access property
for any arrangement should be easy and fast in implementation, in comparison to the popular square
arrangement. The problem of any arrangement besides the square one is manifested in finding new
definitions for grid structures. Here, we elaborate on the problem for the hexagonal arrangement,
which has been studied for more than four decades, and for which different addressing methods have been suggested.
A hexagonal arrangement is addressed using two oblique axes [4], also referred to as a skewed coordinate
system in [5] and the h2 system in [6], where the two basis vectors are not orthogonal. With such an oblique
coordinate system, each hexagonal pixel is addressed by an ordered pair of coordinates along the two basis vectors. A symmetrical
hexagonal coordinate frame which uses three coordinates instead of two is used to represent each
pixel on a grid plane [7,8]. The major advantage of this coordinate system is that there is a one-to-one
mapping between hexagonal and square arrangements. Moreover, in [9], this symmetrical hexagonal
coordinate frame is used to derive various affine transformations. The geometric transformations
on the hexagonal grid are conveniently simplified and the symmetry property of the hexagonal grid
is successfully preserved. The three-axis coordinate system is also used in [10] for mathematically
handling the hexagonal arrangement. Spiral Architecture, inspired from anatomical consideration
of the primate’s vision system, is proposed in [11], which is a 1D addressing system. This address
grows from the center of the image in powers of seven along a spiral-like curve. This addressing scheme,
combined with two later proposed mathematical operations, spiral addition and spiral multiplication,
is the basic Spiral Architecture [11,12]. A similar single-index system for pixel addressing is proposed
by modifying the Generalized Balanced Ternary system [13,14]. A virtual hexagonal structure is
proposed by the authors of [15], where the hexagonal pixels do not physically exist but are recorded
during image processing in the memory space. The approach demands high computation for image
conversion (from one arrangement to another) for determining the locations (or the areas) of each pixel.
A reduced computational complexity method is derived from the virtual hexagonal structure proposal
by the authors of [16].
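As a simple illustration of the oblique-axes addressing discussed above (not taken from the cited works), the sketch below indexes each hexagonal pixel by an ordered pair along two non-orthogonal unit basis vectors and converts between that address and Cartesian coordinates; the 60-degree basis choice and the function names are assumptions made only for this example.

```python
import numpy as np

# Two oblique (non-orthogonal) basis vectors for a hexagonal grid,
# chosen here 60 degrees apart, with unit spacing between neighboring pixels.
E1 = np.array([1.0, 0.0])
E2 = np.array([0.5, np.sqrt(3.0) / 2.0])

def hex_to_cartesian(u, v):
    """Cartesian center of the hexagonal pixel addressed by the ordered pair (u, v)."""
    return u * E1 + v * E2

def cartesian_to_hex(p):
    """Inverse change of basis: recover the (possibly fractional) oblique address."""
    basis = np.column_stack((E1, E2))          # 2x2 matrix [E1 | E2]
    return np.linalg.solve(basis, np.asarray(p, dtype=float))

if __name__ == "__main__":
    center = hex_to_cartesian(3, 2)            # pixel at oblique address (3, 2)
    print(center, cartesian_to_hex(center))    # round-trips to (3.0, 2.0)
```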
3. Image Generation
In this section, we explain the generation of two types of images which have hexagonal arrangements.
The images are generated from an original image with a square arrangement. An example of such
images is demonstrated in Figure 1.
Figure 1. The images on three types of sensory arrangements. (a) The original square image (SQ); (b) hexagonal image (Hex_E); (c) half-pixel shift image (HS_E).
3.1. Generation of the Virtual Hexagonal Enriched Image (Hex_E)
The virtual hexagonal enriched image has a hexagonal pixel form on a hexagonal arrangement.
The generation process is similar to the resampling process in [17,18], which has three steps: projecting
the original image pixel intensities onto a grid of sub-pixels; estimating the values of subpixels at the
resampling positions; estimating each new hexagonal pixel intensity in a new hexagonal arrangement
where the subpixels are projected back to a hexagonal grid, which are shown as red grids in Figure 2.
In this arrangement, the distance between any two neighboring hexagonal pixels is the same and the resolution of
the generated Hex_E image is the same as the original image.
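The sketch below illustrates this three-step idea under simplifying assumptions; it is not the exact resampling of [17,18]. The subpixel factor, the disk-shaped approximation of the hexagonal footprint, and the row-shifted lattice of pixel centers are choices made only for illustration.

```python
import numpy as np

def to_hex_e(img, sub=3):
    """Illustrative three-step resampling toward a hexagonal arrangement:
    (1) project each square pixel onto a sub x sub grid of subpixels,
    (2) treat the subpixel values as samples of the underlying surface,
    (3) average the subpixels falling inside a disk approximating the
    hexagonal pixel footprint, centered on a row-shifted lattice."""
    h, w = img.shape
    fine = np.kron(img.astype(float), np.ones((sub, sub)))   # steps 1-2
    pad = 2 * sub                                            # border for window extraction
    fine = np.pad(fine, pad, mode="edge")
    out = np.zeros((h, w), dtype=float)
    dy, dx = np.mgrid[-sub:sub + 1, -sub:sub + 1]
    disk = dy ** 2 + dx ** 2 <= (0.6 * sub) ** 2             # footprint mask, area ~ one pixel
    for i in range(h):
        row_shift = 0.5 if i % 2 else 0.0                    # hexagonal row offset (assumption)
        for j in range(w):
            cy = int(round((i + 0.5) * sub)) + pad
            cx = int(round((j + row_shift + 0.5) * sub)) + pad
            win = fine[cy - sub:cy + sub + 1, cx - sub:cx + sub + 1]
            out[i, j] = win[disk].mean()
    return out
```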
3.2. Generation of the Virtual Half-Pixel Shift Enriched Image (HS_E)
The hexagonal grid in previous work [19,20] is mimicked by a half-pixel shift which is derived
from delaying sampling by a half pixel on the horizontal direction. The red grid, which is presented in
the middle of Figure 2, is the new pseudo hexagonal sampling structure whose pixel form is still square.
The new pseudo hexagonal grid is derived from a usual 2D grid by shifting each even row a half
pixel to the right and leaving odd rows untouched, or of course any similar translation. The virtual
Half-pixel Shift Enriched image (HS_E) is generated from the original enriched image [3], which has a
square arrangement.
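A minimal sketch of the half-pixel shifting idea is given below, assuming linear interpolation for the delayed (half-pixel shifted) samples and the convention that every second row is shifted; both are assumptions rather than the exact procedure of [19,20].

```python
import numpy as np

def to_hs_e(img):
    """Illustrative half-pixel shift: every second row is resampled half a pixel
    to the right by linear interpolation between horizontal neighbors, mimicking
    a pseudo-hexagonal sampling grid while keeping square pixels. The last column
    of shifted rows is left unchanged for lack of a right neighbor."""
    out = img.astype(float).copy()
    shifted = 0.5 * (out[:, :-1] + out[:, 1:])     # value halfway between columns
    out[1::2, :-1] = shifted[1::2]                 # shift every second row
    return out
```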
Figure 2. Three types of sensory arrangements. (a) The sensor rearrangement onto the subpixel; (b) the projection of the square pixels onto the hexagonal arrangement by the half-pixel shifting method (i.e., HS_E image generation); (c) the projection of the square pixels onto the hexagonal grid in the generation of the hexagonal image (Hex_E).
4. Common Space Based on Continuous Extension
To elaborate on the common space, let us start with a simple 1D example. Assuming we have
a continuous 1D signal, it is not difficult to imagine that we can sample the signal with different
time intervals. However, the opposite way is not so easy; i.e., obtaining the continuous signal from
samples at different time intervals. Further, this becomes extremely difficult when we have sampled our
data at a certain time interval and try to use the data to resample according to another time interval.
Here, for the common space we have the last-mentioned condition where the sampled data is 2D
and from the image sensor. In this relation, the choice of spatial sensor arrangement affects the
sampling results, just as the choice of time interval does in the 1D signal example. In 2D sampling, the data is
sampled from a continuous surface; i.e., each spatial sensor arrangement results in certain sampling
data from certain points on the continuous surface. By common space, we mean such continuous
surface which is created by continuous extension of spatial data; i.e., from sampling data from certain
spatial sensor arrangement a common space (a continuous surface) is generated. The common space is
used to estimate the sampling data according to another spatial sensor arrangement; i.e., a common
space is created by sampling data from hexagonal spatial arrangement and then the sampling data
of a rectangular spatial arrangement is estimated. In this way, on the common space, we have
corresponding points for each sampling point related to different spatial arrangements, which facilitates
the addressing and assessing of different spatial arrangements of sensors.
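To make the 1D analogy concrete, the sketch below uses a discrete cosine transform as a stand-in for the orbit function transform introduced in the next paragraph: the forward transform is computed from samples taken at one interval, the inverse cosine series is evaluated as a continuous function, and the signal is then resampled at a different interval. The function names are illustrative only.

```python
import numpy as np

def dct2_coeffs(x):
    """DCT-II coefficients of a 1D sample vector (computed directly)."""
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return (np.cos(np.pi * k * (m + 0.5) / n) * x[None, :]).sum(axis=1)

def continuous_extension(coeffs, t):
    """Evaluate the cosine series defined by the DCT-II coefficients at
    arbitrary (possibly non-integer) sample positions t."""
    n = len(coeffs)
    k = np.arange(1, n)[:, None]
    series = np.cos(np.pi * k * (np.atleast_1d(t)[None, :] + 0.5) / n)
    return coeffs[0] / n + 2.0 / n * (coeffs[1:, None] * series).sum(axis=0)

if __name__ == "__main__":
    t1 = np.arange(0, 32)                                    # original sampling positions
    x = np.sin(2 * np.pi * t1 / 32) + 0.3 * np.cos(2 * np.pi * 3 * t1 / 32)
    c = dct2_coeffs(x)
    print(np.allclose(continuous_extension(c, t1), x))       # True: original samples preserved
    t2 = np.arange(0, 31.5, 0.7)                             # a different sampling interval
    resampled = continuous_extension(c, t2)                  # estimates at the new positions
```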
The common space is created by implementing a continuous extension of discrete Weyl Group
orbit function transform. Orbit functions on the Euclidean space are symmetrized exponential
functions. The symmetrization is fulfilled by a Weyl group corresponding to a Coxeter-Dynkin
diagram. The values of orbit functions are repeated on copies of a fundamental domain of the affine
Weyl group (determined by the initial Weyl group) in the entire Euclidean space. Recall that
the exponential functions determine the Fourier transform on Euclidean space. Correspondingly,
orbit functions determine a symmetrized version of the Fourier transform, which is also called an orbit
function transform. One of the key properties of the orbit transform is that the sequence of an orbit transform
followed by its inverse preserves the processed data. This property holds even when the discrete
orbit function in the inverse orbit transform is replaced with a continuous orbit function of the same
family. In other words, for any symmetrical grid, such as a rectangular or hexagonal grid, a continuous
spectrum surface can be generated in the frequency domain from the discrete information of the grid.
We call this continuous spectrum surface a common space. The creation and proof of such a common
space are explicated in detail in Appendix A for interested readers.
The creation of continuous extension of the original data is independent of the data arrangement;
i.e., it is possible to create common space from any spatial arrangement, such as square or hexagonal
ones. We refer to these common spaces in relation to their original data arrangements, such as CSE_sq
or CSE_hex for the created common spaces from square and hexagonal arrangement, respectively.
On the common space, any grid structure is applied virtually; i.e., the corresponding addressing
of each pixel position from different arrangements is done on the common space. Thus, by knowing
the pixel form of each arrangement, the intensity value of each corresponding pixel is determined at
the pixel position on the common space.
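A rough 2D sketch of this workflow is given below. It deliberately replaces the Weyl group orbit function transform of Appendix A with a separable cosine transform, which shares the property that the discrete samples are preserved when the continuous inverse is evaluated at the original grid positions; the row-shifted model of the hexagonal centers and all names are assumptions made only for illustration.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * k * (m + 0.5) / n) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

def cse_from_square(img):
    """2D cosine spectrum of a square image: a stand-in for the common space."""
    h, w = img.shape
    return dct_matrix(h) @ img.astype(float) @ dct_matrix(w).T

def sample_surface(coeffs, ys, xs):
    """Evaluate the continuous cosine surface at arbitrary (y, x) positions,
    e.g., the centers of a hexagonal (row-shifted) arrangement."""
    h, w = coeffs.shape
    ky = np.arange(h)[:, None]
    kx = np.arange(w)[:, None]
    ys, xs = np.ravel(ys), np.ravel(xs)
    by = np.cos(np.pi * ky * (ys[None, :] + 0.5) / h) * np.sqrt(2.0 / h)
    by[0, :] = np.sqrt(1.0 / h)
    bx = np.cos(np.pi * kx * (xs[None, :] + 0.5) / w) * np.sqrt(2.0 / w)
    bx[0, :] = np.sqrt(1.0 / w)
    return np.einsum("kp,kl,lp->p", by, coeffs, bx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((16, 16))
    cse = cse_from_square(img)                               # common space of a square image
    yy, xx = np.mgrid[0:16, 0:16].astype(float)
    print(np.allclose(sample_surface(cse, yy, xx).reshape(16, 16), img))  # original grid recovered
    hex_x = xx + 0.5 * (yy % 2)                              # half-pixel shifted (pseudo-hexagonal) centers
    hex_vals = sample_surface(cse, yy, hex_x)                # estimates on the other arrangement
```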
5. Experimental Setup
Evaluating the proposed common space method in assessing different sensor structures is based
on using different generated images. The generation procedures in Section 3 show that all the image types
used in the evaluation originate from a rectangular arrangement. Thus, generating images based on a rectangular
arrangement is essential for the experimental evaluation. On the other hand, to evaluate the addressing accuracy
of the common space usage, we need to generate images whose content also has random spatial variation in each
pixel. This is because, by using the common space, only one coordinate system is used to address each
pixel position and obtain its intensity value in two different arrangements; i.e., each pixel position and
intensity value of the arrangement from which the common space originates is known, but the corresponding
position and intensity value on the other arrangement is estimated using the common space surface.
In relation to this, the evaluation of addressing accuracy can be achieved by measuring the estimation
error. The statistical validation of the estimation error requires random spatial variation in each
pixel, i.e., spatial variation as in natural images. The estimation error can be measured for all pixels
of each pair of experimental images, using the common space addressing, or for a selected subset of their
corresponding pixels. In the experiments we used the latter option. To ensure that the selected pixels
represent different intensity levels, the experimental images must be generated with a certain
intensity model; e.g., a Gaussian model.
An image dataset is created which consists of 10 high resolution (4096 by 2160) original images
(SQs) and their converted ones of the types HS_E and Hex_E with the same resolution;
i.e., the dataset has a total of 30 images, where the interval of the subpixel is 30. The conversion process is
elaborated on in Section 3. Each of the ten original images is generated by adding a Gaussian image
(GI) to a random Perlin noise image (PI). The GI contributes to obtaining all possible tonal levels in the range
of 0–255 gray levels in each original image. Each GI is generated by
$$GI = 255\, e^{-\left(\frac{x^2}{2\sigma_1^2} + \frac{y^2}{2\sigma_2^2}\right)} \qquad (1)$$
where $\sigma_1$ and $\sigma_2$ are 1920 and 1280, respectively, and the original images SQ are obtained by
$$SQ_j = GI + PI_j \qquad (2)$$
where $j$ is the image index number. The values of $\sigma_1$ and $\sigma_2$ are approximately half of the image
resolution in each direction. As a rule of thumb, GI fully represents a Gaussian intensity model when the values of
$\sigma_1$ and $\sigma_2$ are one third of the image resolution in each direction; in this respect, our GI
is not fully representative of a Gaussian intensity model. This choice prevents obtaining significantly
lower intensity values, which could affect the evaluation of addressing accuracy. By generating the
PI image, a pseudo-random spatial variation in each pixel is obtained which simulates variational
curviness content; i.e., we imitate the appearance of textures in natural images by a controlled random
process. In this way, using GI and PI, each original image of the dataset is generated to have natural
image properties and a wider range of variation than exists in a captured natural image. Each PI is
generated by implementing the Perlin noise algorithm [21,22], where each pixel of the image, PI(x, y),
is computed in two major steps: (a) projection of the pixel position vector onto pseudorandom gradients
$g_{00} = [x_{00}, y_{00}]$, $g_{01} = [x_{01}, y_{01}]$, $g_{10} = [x_{10}, y_{10}]$, and $g_{11} = [x_{11}, y_{11}]$
at the integer points [0,0], [0,1], [1,0], and [1,1], respectively; (b) interpolation and smoothing between the values
at the integer points by a cubic spline function $S(x) = x^2(3 - 2x)$ and a linear interpolation function
$L(\varepsilon, x, y) = x + \varepsilon(y - x)$, as shown in Figure 3 and explained by the algorithm steps in Table 1.
The PI contributes to obtaining all possible tonal levels in the range of 0–255 gray levels. The $SQ_j$ images,
each a combination of a GI and a PI image with ranges of 0–255 tonal levels, are normalized to obtain images with a range of
0–255 tonal levels. The generation of SQ images is demonstrated in Figure 4.
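A compact sketch of this dataset construction, following Equations (1) and (2), is shown below. It assumes the Gaussian is centered on the image and that the final normalization is a simple min-max rescaling to 0–255; the Perlin noise image is assumed to be produced elsewhere (a sketch of the per-cell computation of Table 1 follows the table below).

```python
import numpy as np

def gaussian_image(h=2160, w=4096, s1=1920.0, s2=1280.0):
    """GI of Equation (1): a 2D Gaussian intensity pattern scaled to 0-255.
    Centering on the image middle is an assumption."""
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x -= w / 2.0
    y -= h / 2.0
    return 255.0 * np.exp(-(x ** 2 / (2 * s1 ** 2) + y ** 2 / (2 * s2 ** 2)))

def make_sq(gi, pi):
    """SQ_j of Equation (2): add a Perlin noise image PI_j to GI and
    normalize the sum back to the 0-255 range (min-max rescaling assumed)."""
    sq = gi + pi
    sq = (sq - sq.min()) / (sq.max() - sq.min())
    return 255.0 * sq
```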
Figure 3. At integer grid points, 2D Perlin noise interpolates and smooths between pseudorandom gradients.
Table 1. The algorithm of implemented 2D Perlin noise.
0. Input P = [x, y]
1. Sx = S(x)
2. Sy = S(y)
3. ua = P · g00
4. va = P · g10
5. a = L(Sx, ua, va)
6. ub = P · g01
7. vb = P · g11
8. b = L(Sx, ub, vb)
9. Output L(Sy, a, b)
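The steps of Table 1 can be written directly as code; the sketch below follows the table as printed and assumes pseudorandom unit gradients at the four corners (note that the classic Perlin formulation projects the offset vectors from each corner rather than P itself).

```python
import numpy as np

def smooth(t):
    """Cubic spline S(x) = x^2 (3 - 2x) from the text."""
    return t * t * (3.0 - 2.0 * t)

def lerp(eps, a, b):
    """Linear interpolation L(eps, x, y) = x + eps (y - x)."""
    return a + eps * (b - a)

def perlin_cell(p, g00, g10, g01, g11):
    """Noise value at P = [x, y] inside a unit cell, following Table 1 as written."""
    x, y = p
    sx, sy = smooth(x), smooth(y)
    ua, va = np.dot(p, g00), np.dot(p, g10)      # steps 3-4
    a = lerp(sx, ua, va)                         # step 5
    ub, vb = np.dot(p, g01), np.dot(p, g11)      # steps 6-7
    b = lerp(sx, ub, vb)                         # step 8
    return lerp(sy, a, b)                        # step 9

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Pseudorandom unit gradients at the four integer corners (an assumption).
    grads = rng.normal(size=(4, 2))
    grads /= np.linalg.norm(grads, axis=1, keepdims=True)
    print(perlin_cell(np.array([0.3, 0.7]), *grads))
```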
Figure 4. Generation of an SQ image. (a) The generated SQ image, obtained by adding (b) the Gaussian image GI and (c) a random Perlin noise image PI.
6. Results and Analysis
In this section, we show the addressing and assessment feasibility of the three types of images, SQ,
HS_E, and Hex_E (i.e., having different pixel arrangements), using the common space. There are ten such
image triples in the dataset, and for each triple the results were obtained in three stages: general preparation,
the case of CSE_sq, and the case of CSE_hex, as shown in the flowchart of Figure 5. The blue, green and red
dash-line squares represent the image dataset generation, the case of CSE_sq and the case of CSE_hex, respectively.
The dotted arrow shows that the pixels are selected in the Hex_E, HS_E and SQ images. The thick and thin arrows
represent the process of image generation and the application of the selected pixels to the images, respectively.
Table 2 lists the symbols in Figure 5 with their meanings. We explain the three stages and then discuss and
analyze the obtained results, which indicate the feasibility and accuracy of addressing and assessment of random
pixels from one arrangement to another.
Figure 5. The flowchart of the procedure used to obtain and analyze the results.
Table 2. Description of used symbols.

Symbol | Full Name and Size | Sensor Arrangement | Originated from | Method
SQ | Square image, 4096 × 2160 | square | - | -
Hex_E | Hexagonal enriched image, 4096 × 2160 | hexagonal | SQ | Conversion
HS_E | Half-pixel shift image, 4096 × 2160 | square | SQ | Conversion
SQp | Square matrix image, 200 × 24 | square | SQ | Pixel selection on SQ
Hex_Ep | Hexagonal enriched matrix image, 200 × 24 | hexagonal | Hex_E | Pixel selection on Hex_E
HS_Ep | Half-pixel shift matrix image, 200 × 24 | square | HS_E image | Pixel selection on HS_E
CSE_sq | Common Space surface | continuous extension | SQ image | New method, see Section 4
CSE_hex | Common Space surface | continuous extension | Hex_E image | New method, see Section 4
SQCEorg | Estimated Square matrix image, 200 × 24 | square | CSE_sq | Pixel selection on the CSE_sq
Hex_ECEsq | Estimated Hexagonal matrix image, 200 × 24 | hexagonal | CSE_sq | Pixel selection on the CSE_sq
HS_ECEsq | Estimated Half-pixel shift matrix image, 200 × 24 | square | CSE_sq | Pixel selection on the CSE_sq
SQCEhex | Estimated Square matrix image, 200 × 24 | square | CSE_hex | Pixel selection on the CSE_hex
HexCEorg | Estimated Hexagonal matrix image, 200 × 24 | hexagonal | CSE_hex | Pixel selection on the CSE_hex
HSCEhex | Estimated Half-pixel shift matrix image, 200 × 24 | square | CSE_hex | Pixel selection on the CSE_hex
6.1. General Preparation
Each SQ image in the dataset is an eight-bit image; i.e., the range of intensity values is between
0 and 255. The pixels of each SQ image are partitioned into 24 intensity sub-ranges (e.g., 10–19,
..., 190–199, 240–250) to investigate the tonal variation in more detail. In each sub-range, 200 pixel
positions are selected randomly in each SQ image; i.e., 24 by 200 pixels are chosen randomly while
assuring different tonal levels, representative of the whole intensity range. The 24 intensity
sub-ranges are related to the statistical requirement of having a pixel population from which we can
select 200 pixel positions. According to our observation of the generated images, a binning
of 10 tonal levels could fulfill the requirement that each intensity sub-range has a pixel
population of at least 1%. Figure 6 shows a typical pixel population for 25 intensity sub-ranges. The first
intensity sub-range, with tonal levels between 0–9, and the last sub-range, with tonal levels between
251–255, have a pixel population of less than 1% and accordingly are discarded in the pixel
selection process. The 200 random pixels in each intensity sub-range are used because they contain sufficient
spatial intensity variation information within a certain sub-range of tonal variations to underpin statistical
analysis. Using the pixel positions, the relative intensity values from the SQ, HS_E and Hex_E images
are organized in new images SQp, HS_Ep, and Hex_Ep, respectively, each with size of 200 by 24.
The pixels of each column of such an image are ordered by sorting the linear indices of the 200
randomly selected pixels in each intensity sub-range.
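A sketch of this selection step is given below: pixels are binned into sub-ranges of 10 tonal levels, sparsely populated sub-ranges are discarded, and 200 random positions per retained sub-range are drawn and sorted by linear index. The seeding and the exact discard rule are assumptions.

```python
import numpy as np

def select_pixels(img, per_bin=200, bin_width=10, min_frac=0.01, seed=0):
    """Return a dict mapping each retained intensity sub-range to 200 randomly
    chosen linear pixel indices, sorted so the column ordering is reproducible."""
    rng = np.random.default_rng(seed)
    flat = img.ravel()
    selected = {}
    for lo in range(0, 256, bin_width):
        hi = min(lo + bin_width - 1, 255)
        idx = np.flatnonzero((flat >= lo) & (flat <= hi))
        # Discard sub-ranges whose population is below 1% (or too small to sample).
        if idx.size < max(per_bin, min_frac * flat.size):
            continue
        picks = rng.choice(idx, size=per_bin, replace=False)
        selected[(lo, hi)] = np.sort(picks)      # order by linear index, as in the text
    return selected
```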
Figure 6. A typical pixel population for 25 intensity sub-ranges.
6.2. In Case of CSE_sq
The common space of each SQ image, CSE_sq, is created according to Section 4. Using the
common space CSE_sq and the pixel positions of an SQ image, the corresponding pixel positions
and the related intensity values are estimated for the SQ, HS_E, and Hex_E image types. Accordingly,
corresponding to an SQp, three images, SQCEorg, HS_ECEsq, and Hex_ECEsq, are generated.
6.3. In Case of CSE_hex
The common space of each Hex_E image, CSE_hex, is created according to Section 4. As in
Section 6.1, by using the pixel positions of the SQ image and the common space CSE_hex, the corresponding
pixel positions and the related intensity values are estimated for the SQ, HS_E, and Hex_E image types.
Accordingly, corresponding to a Hex_Ep, three images, SQCEhex, HSCEhex, and HexCEorg, are generated.
6.4. Analysis of the Two Cases
In the case of CSE_sq or CSE_hex, the images with a square or a hexagonal arrangement originate the
respective common spaces. Generally, in the process of obtaining the results, by using a common space
and a pixel position in the image from which that common space originates, the corresponding pixel position
and its intensity value are estimated for another type of image which has a different arrangement than
the originating image. Here, we address three questions: (a) How different are any two generated common
spaces which originate from two different arrangements; e.g., comparing the generated SQCEorg
(representative of the CSE_sq common space) and Hex_ECEorg (representative of the CSE_hex common space)?
(b) How similar are any generated common space and its originating image; e.g., comparing SQp to SQCEorg
or Hex_Ep to Hex_ECEorg? (c) What is the accuracy of implementing any common space in addressing and
assessment between two types of arrangements; e.g., from SQ to Hex_E?
We generated ten CSE_sq and ten CSE_hex common spaces from the related images in the dataset;
i.e., each SQ image and its converted Hex_E image were used to create the related CSE_sq and
CSE_hex (a pair of common spaces). For each pair of common spaces, a pixel set of 200 chosen
pixels (see Sections 6.1 and 6.2) of the originating images was chosen and organized as images. In this way,
ten SQCEorg and ten Hex_ECEorg images are obtained, each of size 200 by 24, representing
the respective common space. Question (a) is answered by comparison of the SQCEorg and Hex_ECEorg
images. Figure 7 shows the results of such comparisons, where the absolute intensity value differences
of the ten SQCEorg and Hex_ECEorg are measured. In the figure, the colors from blue to yellow indicate that
the difference value increases from 0 to 0.2. The total mean square error (MSE) between the images shown
in Figure 7 is 0.002 and the multiple correlation among the images is 99.39%. The low MSE and high
correlation indicate that it is feasible to create almost the same common space for the two arrangements
of square and hexagonal. The created common spaces are close but, as expected, not exactly the same;
e.g., a hexagonal arrangement has a richer frequency spectrum than the square one, which contributes to
a richer frequency spectrum on the respective common space [23].
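For reference, the two statistics quoted throughout this section can be computed as sketched below; the pairwise Pearson correlation is used here only as a simplified stand-in for the multiple correlation reported over all ten image pairs.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two pixel-set matrices (e.g., 200 x 24),
    with intensities assumed to be scaled to [0, 1]."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.mean((a - b) ** 2)

def correlation(a, b):
    """Pearson correlation between the flattened matrices; a simplified
    stand-in for the multiple correlation computed over all ten image pairs."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    return np.corrcoef(a, b)[0, 1]
```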
Figure 7. Comparison of CSE_sq and CSE_hex. The absolute intensity value differences of ten SQCEorg and Hex_ECEorg are shown.
Question (b) is answered by the comparison of SQp to SQCEorg and Hex_Ep to Hex_ECEorg images.
The results of such comparisons, where the absolute intensity value differences of ten SQp to SQCEorg
and Hex_Ep to Hex_ECEorg images are measured, are shown in Figures 8 and 9, respectively. The total MSE
between and multiple correlation among the images in Figure 8 are 0.0005 and 99.93%, respectively. In Figure 9,
the total MSE between the images is 0.00019 and the multiple correlation among them is 99.85%. The low MSE
and high correlation in these results indicate that the generated common spaces are very similar to their
respective originating images, but they are not strictly the same.
Question (c) is answered by examining each case of CSE_sq and CSE_hex in addressing and
assessment between different types of arrangements. In the case of CSE_sq, ten of each of the Hex_ECEsq,
HS_ECEsq, and SQCEorg images are obtained, and they are compared to Hex_Ep, HS_Ep, and SQp
(i.e., the representatives of the Hex_E, HS_E, and SQ images). In the case of CSE_hex, ten of each of the
Hex_ECEorg, HS_ECEhex, and SQCEhex images are obtained, and they are compared to Hex_Ep, HS_Ep,
and SQp. Figures 10 and 11 show two examples of such comparisons, between SQCEhex and SQp and
between HexCEsq and Hex_Ep, respectively.
In Figure 10, the total MSE between the ten SQCEhex and SQp is 0.0059 and the multiple correlation
between them is 99.03%. In Figure 11, the total MSE between the ten HexCEsq and Hex_Ep is
0.0099 and the correlation between the pixel sets is 98.26%. The results in Figures 7–10 show that, by
implementing the common space, it is feasible to address different arrangements, where the intensity
difference between any random pixel addressed via the common space or via conversion is
very small.
Figure 8. Comparison of the common space CSE_sq and its originating image SQ. The absolute intensity value differences of ten SQp and SQCEorg are shown.
Figure 9. Comparison of the common space CSE_hex and its originating image Hex_E. The absolute intensity value differences of ten Hex_Ep and Hex_ECEorg are shown.
Figure 10. Comparison of ten SQCEhex and SQp images.
Figure 11. Comparison of HexCEsq and Hex_Ep.
In each case of CSE_sq or CSE_hex, the intensity average and variance in the 24 tonal sub-ranges
of the ten corresponding pixel sets of each Hex_ECEsq, HS_ECEsq, and SQCEorg or Hex_ECEorg, HS_ECEhex,
and SQCEhex are shown in Figures 12 and 13, respectively. The figures show that it is feasible to assess pixels
on different arrangements, since the pixel position and intensity value in a different arrangement are
estimated using the common space and without the need for any conversion (see Section 4).
The pixel sets from the hexagonal arrangement show the highest average intensity value and variance in
each type of common space, indicating richer intensity variation and a larger dynamic range compared
to the other pixel sets. Figure 14 shows the mean (a) and variance (b) of the ratio values of the ten
corresponding pixel sets between each SQ and SQCEhex to the Hex_E image. The mean (a) shows the
nonlinear relation between SQ and Hex_E which was previously shown in [3,18]. The mean (a) also
shows that the relation between SQCEhex and Hex_E is similar to the relation between SQ and Hex_E and
behaves in a nonlinear manner. The variance (b) shows that the relations between SQ and SQCEhex to
Hex_E are similar and nonlinear.
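The per-sub-range statistics plotted in Figures 12–14 reduce to column-wise operations on the 200-by-24 pixel-set matrices, as sketched below; the guard against division by zero in the ratio is an assumption.

```python
import numpy as np

def subrange_stats(pixel_set):
    """Column-wise intensity average and variance of a 200 x 24 pixel-set
    matrix: one value per tonal sub-range, as in Figures 12 and 13."""
    m = np.asarray(pixel_set, dtype=float)
    return m.mean(axis=0), m.var(axis=0)

def ratio_stats(numer, denom, eps=1e-9):
    """Mean and variance of the per-pixel ratio between two corresponding
    pixel sets (e.g., SQ or SQCEhex against Hex_E), as in Figure 14."""
    r = np.asarray(numer, dtype=float) / (np.asarray(denom, dtype=float) + eps)
    return r.mean(axis=0), r.var(axis=0)
```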
Figure 12. Intensity average of ten corresponding pixel sets of each Hex_ECEsq, HS_ECEsq, SQCEorg or Hex_ECEorg, HS_ECEhex, SQCEhex.
Figure 13. Variance of ten corresponding pixel sets of each Hex_ECEsq, HS_ECEsq, SQCEorg or Hex_ECEorg, HS_ECEhex, SQCEhex.
Figure 14. The mean (a) and variance (b) of ratio values of ten corresponding pixel sets between each SQ and SQCEhex to Hex_E.
The pixel sets on corresponding arrangements via the two types of common spaces are compared
and shown in Table 3. The comparison shows the correlation and MSE relation between each pair
of pixel sets. The results in the table indicate the feasibility of addressing each type of common
space to the same type of arrangement, due to the small MSE and high correlation values. The similar
results for correlation and MSE in Table 4 show the assessment feasibility of different arrangements by
comparison of the pixel sets on different arrangements via the two types of common spaces.
Table 3. Comparison of pixel sets on corresponding arrangements via two types of common spaces (columns No.1–No.10: image index).

Metric | Pair of Pixel Sets | No.1 | No.2 | No.3 | No.4 | No.5 | No.6 | No.7 | No.8 | No.9 | No.10
Correlation | SQCEorg, SQCEhex | 99.63% | 99.63% | 99.62% | 99.61% | 99.66% | 99.64% | 99.63% | 99.63% | 99.61% | 99.63%
Correlation | HS_ECEsq, HS_ECEhex | 98.25% | 98.45% | 98.28% | 98.39% | 98.79% | 98.32% | 98.41% | 97.76% | 98.49% | 99.79%
Correlation | Hex_ECEsq, Hex_ECEorg | 98.23% | 98.44% | 98.26% | 98.39% | 98.78% | 98.32% | 98.41% | 97.77% | 98.49% | 99.78%
MSE | SQCEorg, SQCEhex | 0.0024 | 0.0021 | 0.0038 | 0.0046 | 0.0040 | 0.0038 | 0.0038 | 0.0044 | 0.0039 | 0.0005
MSE | HS_ECEsq, HS_ECEhex | 0.0070 | 0.0054 | 0.0092 | 0.0085 | 0.0080 | 0.0091 | 0.0055 | 0.0065 | 0.0060 | 0.0080
MSE | Hex_ECEsq, Hex_ECEorg | 0.0104 | 0.0080 | 0.0136 | 0.0125 | 0.0119 | 0.0135 | 0.0081 | 0.0095 | 0.0088 | 0.01193
Table 4. Assessment by comparison of the pixel sets on different arrangements via two types of common spaces.

                                        Image Index
Pair of Pixel Sets           No.1    No.2    No.3    No.4    No.5    No.6    No.7    No.8    No.9    No.10
Correlation
  SQCE_org vs. HS_ECE_hex    98.19%  98.17%  98.00%  98.00%  97.96%  97.89%  98.02%  97.96%  97.99%  98.07%
  SQCE_org vs. Hex_ECE_hex   98.23%  98.16%  98.01%  97.94%  97.95%  97.88%  98.04%  97.98%  97.97%  98.07%
  HS_ECE_sq vs. SQCE_hex     95.99%  96.19%  96.12%  96.20%  96.67%  96.16%  96.03%  95.49%  96.08%  97.76%
  HS_ECE_sq vs. Hex_ECE_org  98.22%  98.42%  98.22%  98.37%  98.77%  98.28%  98.39%  97.72%  98.47%  99.75%
  Hex_ECE_sq vs. SQCE_hex    95.98%  96.19%  96.13%  96.20%  96.66%  96.18%  96.02%  95.52%  96.07%  97.76%
  Hex_ECE_sq vs. HS_ECE_hex  98.24%  98.46%  98.30%  98.40%  98.79%  98.35%  98.41%  97.79%  98.49%  99.79%
  SQCE_org vs. HS_ECE_sq     96.43%  96.78%  96.55%  96.51%  96.93%  96.59%  96.55%  96.20%  96.51%  98.05%
  SQCE_org vs. Hex_ECE_sq    96.42%  96.78%  96.56%  96.51%  96.91%  96.59%  96.54%  96.22%  96.50%  98.04%
  SQCE_hex vs. HS_ECE_hex    97.93%  97.84%  97.88%  97.86%  97.87%  97.77%  97.68%  97.70%  97.70%  97.99%
  SQCE_hex vs. Hex_ECE_org   97.98%  97.84%  97.90%  97.80%  97.86%  97.76%  97.72%  97.72%  97.68%  97.99%
  HS_ECE_sq vs. Hex_ECE_org  99.98%  99.98%  99.98%  99.98%  99.98%  99.98%  99.98%  99.98%  99.98%  99.98%
  HS_ECE_hex vs. Hex_ECE_org 99.90%  99.90%  99.89%  99.90%  99.90%  99.89%  99.90%  99.90%  99.89%  99.91%
MSE
  SQCE_org vs. HS_ECE_hex    0.0044  0.0037  0.0072  0.0071  0.0075  0.0068  0.0039  0.0047  0.0043  0.0069
  SQCE_org vs. Hex_ECE_org   0.9823  0.9816  0.9801  0.9794  0.9795  0.9788  0.9804  0.9798  0.9797  0.9807
  HS_ECE_sq vs. SQCE_hex     0.0089  0.0080  0.0096  0.0100  0.0086  0.0100  0.0100  0.0107  0.0099  0.0032
  HS_ECE_sq vs. Hex_ECE_org  0.0031  0.0027  0.0032  0.0029  0.0022  0.0031  0.0034  0.0036  0.0034  0.0012
  Hex_ECE_sq vs. SQCE_hex    0.0255  0.0236  0.0274  0.0283  0.0259  0.0283  0.0278  0.0288  0.0274  0.0120
  Hex_ECE_sq vs. HS_ECE_hex  0.0238  0.0204  0.0288  0.0274  0.0270  0.0287  0.0201  0.0220  0.0212  0.0279
  SQCE_org vs. HS_ECE_sq     0.0051  0.0048  0.0049  0.0050  0.0043  0.0049  0.0048  0.0052  0.0050  0.0030
  SQCE_org vs. Hex_ECE_sq    0.0148  0.0148  0.0136  0.0131  0.0123  0.0141  0.0138  0.0137  0.0135  0.01257
  SQCE_hex vs. HS_ECE_hex    0.0018  0.0020  0.0021  0.0019  0.0022  0.0021  0.0023  0.0023  0.0021  0.0078
  SQCE_hex vs. Hex_ECE_hex   0.0053  0.0064  0.0042  0.0051  0.0044  0.0046  0.0089  0.0083  0.0079  0.0029
  HS_ECE_sq vs. Hex_ECE_sq   0.0061  0.0061  0.0061  0.0061  0.0061  0.0061  0.0061  0.0061  0.0061  0.0061
  HS_ECE_hex vs. Hex_ECE_org 0.0039  0.0040  0.0036  0.0036  0.0036  0.0036  0.0041  0.0040  0.0041  0.0035
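The per-pair entries of Tables 3 and 4 are plain correlation and mean-square-error measures between two sets of pixel values read out of a common space. A minimal sketch of such a pair-wise comparison is given below; the array names and the assumption that both sets are already sampled at corresponding positions and scaled to a common range are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def compare_pixel_sets(set_a, set_b):
    """Return (correlation, MSE) between two pixel sets of equal size.

    Both inputs are assumed to hold intensities sampled at corresponding
    positions of a common space.
    """
    a = np.asarray(set_a, dtype=float).ravel()
    b = np.asarray(set_b, dtype=float).ravel()
    correlation = np.corrcoef(a, b)[0, 1]   # Pearson correlation coefficient
    mse = np.mean((a - b) ** 2)             # mean squared error
    return correlation, mse

# Hypothetical call with two pixel sets read from the common space:
# corr, mse = compare_pixel_sets(sqce_org_pixels, hs_ece_hex_pixels)
```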
7. Conclusions
In this paper, we proposed a method to create a common space, which eliminates the need for defining new grid structures to address different sensor arrangements. We showed the feasibility of addressing and assessing different spatial arrangements of sensors, specifically the rectangular and hexagonal arrangements. We explained how the common space is created by implementing a continuous extension of the discrete Weyl group orbit function transform, which extends a discrete arrangement to a continuous one. The results indicate that the common space provides an easy-to-use tool for addressing any pixel position on any arrangement; we demonstrated this specifically on the square and hexagonal arrangements. It was also shown that the tool has a significant capacity to assess the changes between different spatial arrangements, by which, in the experiment, the pixel sets on hexagonal images show richer intensity variation, nonlinear behavior, and a larger dynamic range in comparison to the pixel sets on rectangular images.
Author Contributions:
Data curation, W.W. and S.K.; Formal analysis, W.W. and S.K.; Methodology, O.K.;
Software, O.K.; Supervision, G.C.; Writing—original draft, W.W. and S.K.; Writing—review and editing, W.W.
and S.K.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
A.1. Root System
A root system is a configuration of vectors in a Euclidean space satisfying certain geometrical properties. Let us define a root system as a finite set of non-trivial vectors Δ = {α_i ∈ R^n} that fulfils three conditions:

- the roots α_i span R^n;
- if α_i ∈ Δ, then λα_i ∈ Δ if and only if λ ∈ {-1, 1}: every root system contains only two scalar multiples of each root, the root itself and its reflection;
- for all α, β ∈ Δ: γ_α β ∈ Δ: the root system is closed under reflection with respect to the hyperplanes orthogonal to the roots; γ_α β denotes the reflection of the root β with respect to the hyperplane orthogonal to the root α.

So-called crystallographic root systems also fulfil a fourth condition: for all α, β ∈ Δ, 2(α, β)/(α, α) ∈ Z.

We can unambiguously choose a set of simple roots Σ. Simple roots fulfil two extra conditions:

- all simple roots are linearly independent;
- every root α_i ∈ Δ can be expressed as a linear combination of simple roots such that the coefficients of this linear combination are either all non-negative (such a root is called a positive root) or all non-positive (a negative root).

When each root is expressed as a linear combination of simple roots, we can introduce an ordering of the roots. The so-called highest root is denoted ξ and is expressed as ξ = m_1 α_1 + m_2 α_2, where α_1, α_2 are the simple roots. The coefficients m_1, m_2 are called marks. There are several significant sets of vectors related to each root system: the set of co-roots (α^∨_i), the weights (ω_i) and the co-weights (ω^∨_j). Co-roots and co-weights are normalized variants of roots and weights, respectively:

\alpha^{\vee}_{i} = \frac{2\alpha_{i}}{\langle \alpha_{i}, \alpha_{i} \rangle}    (A1)

\omega^{\vee}_{j} = \frac{2\omega_{j}}{\langle \omega_{j}, \omega_{j} \rangle}    (A2)

Roots and weights are dual to each other, in the following sense:

\langle \alpha_{i}, \omega^{\vee}_{j} \rangle = \langle \alpha^{\vee}_{i}, \omega_{j} \rangle = \delta_{ij}    (A3)

These four sets of vectors are used to form four lattices (the root lattice Q, the co-root lattice Q^∨, the weight lattice P and the co-weight lattice P^∨), which will be used in the definition of the discrete orbit functions. All four lattices are defined in the following manner:

Q = Zα_1 + Zα_2,   Q^∨ = Zα^∨_1 + Zα^∨_2,
P = Zω_1 + Zω_2,   P^∨ = Zω^∨_1 + Zω^∨_2.

Each of these lattices can have its non-negative part (denoted with the superscript +) and its positive part (denoted with the superscript ++).
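As an added illustration of the definitions above (not part of the original appendix), the following sketch builds the A_2 root system from two simple roots, closes it under the reflections γ_α, checks the crystallographic condition, and forms the co-roots of Equation (A1). The explicit coordinates of the simple roots are one assumed standard realization.

```python
import numpy as np

# Simple roots of A2 in an assumed 2-D realization (length^2 = 2, angle 120 deg).
alpha1 = np.array([np.sqrt(2.0), 0.0])
alpha2 = np.array([-1.0 / np.sqrt(2.0), np.sqrt(3.0 / 2.0)])

def reflect(beta, alpha):
    """Reflection of beta in the hyperplane orthogonal to alpha."""
    return beta - 2.0 * np.dot(alpha, beta) / np.dot(alpha, alpha) * alpha

# Generate the full root system by repeatedly reflecting known roots.
roots = [alpha1, alpha2]
changed = True
while changed:
    changed = False
    for a in list(roots):
        for b in list(roots):
            c = reflect(b, a)
            if not any(np.allclose(c, r) for r in roots):
                roots.append(c)
                changed = True

# A2 has six roots, and the crystallographic condition 2(a, b)/(a, a) in Z holds.
assert len(roots) == 6
for a in roots:
    for b in roots:
        ratio = 2.0 * np.dot(a, b) / np.dot(a, a)
        assert abs(round(ratio) - ratio) < 1e-9

# Co-roots, Equation (A1).
coroots = [2.0 * a / np.dot(a, a) for a in roots]
```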
A.2. Weyl Groups
Given a root system Δ composed of roots α_i, we define r_i as the reflection with respect to the root α_i. The set of reflections r_i generates the so-called Weyl group W. The affine Weyl group W^aff is an extension of the Weyl group; it is generated by the reflections r_i together with the reflection r_0, which is the reflection with respect to the highest root ξ. The Weyl group orbit of a point x is the finite set of points generated by all actions of the Weyl group W on x. Similarly, the affine orbit of the point x is generated by all actions of W^aff on x; however, the affine orbit is an infinite set, due to the reflection r_0. The fundamental region of W^aff is a closed subset of R^n that contains exactly one point of each affine Weyl group orbit. For affine Weyl groups in the R^2 space, the fundamental region can be chosen as the convex hull of the points {0, ω^∨_1/m_1, ω^∨_2/m_2}.

The dual root system Δ^∨ is obtained as the system of co-roots. The reflections related to the dual root system generate the dual Weyl group Ŵ. The dual Weyl group Ŵ has its own fundamental region F^∨ and can be extended to the affine dual Weyl group Ŵ^aff. Since roots and co-roots differ only in their lengths, both W and Ŵ are generated by the same sets of reflections. However, the highest co-root η = m^∨_1 α^∨_1 + m^∨_2 α^∨_2 differs from the highest root ξ in both length and direction, and thus the dual affine Weyl group Ŵ^aff is not the same as W^aff. As a consequence, F ≠ F^∨. Root systems are not the only way to generate Weyl groups: the roots of simple Lie algebras coincide with simple roots, and the designations of Lie algebras are often used to designate the Weyl groups generated by reflections with respect to the roots of a given Lie algebra.
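To make the action of the Weyl group concrete, the sketch below (an added illustration, using the same assumed A_2 realization as in the previous sketch) closes the two simple reflections r_1, r_2 under composition and computes the Weyl group orbit of a sample point; for A_2 the group has order 6, so a point in general position has an orbit of six points.

```python
import numpy as np

# Simple roots of A2 (same assumed realization as in the previous sketch).
alpha = [np.array([np.sqrt(2.0), 0.0]),
         np.array([-1.0 / np.sqrt(2.0), np.sqrt(3.0 / 2.0)])]

def reflection_matrix(a):
    """Matrix of the reflection r_a in the hyperplane orthogonal to a."""
    n = a / np.linalg.norm(a)
    return np.eye(2) - 2.0 * np.outer(n, n)

# Generate the Weyl group W by closing {r_1, r_2} under composition.
generators = [reflection_matrix(a) for a in alpha]
group = [np.eye(2)]
frontier = list(generators)
while frontier:
    g = frontier.pop()
    if not any(np.allclose(g, h) for h in group):
        group.append(g)
        frontier.extend(g @ r for r in generators)

print(len(group))   # 6 for A2

# Weyl group orbit of a point x: the finite set {w x : w in W}.
x = np.array([0.3, 0.7])
orbit = []
for w in group:
    y = w @ x
    if not any(np.allclose(y, z) for z in orbit):
        orbit.append(y)
print(len(orbit))   # 6 for a point in general position
```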
A.3. Orbit Functions and Orbit Transforms
Weyl group orbit functions were defined for all simple Lie algebras (A_n, B_n, C_n, D_n, G_2, F_4, E_6, E_7, and E_8), and they can be used for a generalized Fourier analysis of data on the fundamental region F of the corresponding Weyl groups. This theory allows for a similar discretization as in common discrete Fourier analysis and can be used for the analysis of digitized data on the fundamental region.

The sine and cosine functions, as well as e^{ix}, are generalized to systems with a non-orthonormal basis through orbit functions. Moreover, certain Weyl groups provide more types of functions; e.g., the C_2 and G_2 Weyl groups allow us to define C^s, C^l, S^l and S^s functions, as described in [23]. The A_2 Weyl group provides only the straightforward generalization of the cosine, sine and complex exponential functions. These orbit functions are generally, i.e., regardless of the underlying Weyl group, defined as:

\Phi_{\lambda}(x) = \sum_{\omega \in W} e^{2\pi i \langle \omega\lambda, x \rangle}    (A4)

\varphi_{\lambda}(x) = \sum_{\omega \in W} \det(\omega)\, e^{2\pi i \langle \omega\lambda, x \rangle}    (A5)

\Xi_{\lambda}(x) = \sum_{\omega \in W^{e}} e^{2\pi i \langle \omega\lambda, x \rangle}    (A6)
where the parameter x ∈ R^n and the label λ ∈ P. Since the orbit functions are invariant under the operations of W or W^e, respectively, we can restrict the parameter x to the fundamental region F or to the fundamental region F^e, respectively. Since the S-orbit function φ is anti-symmetric, it vanishes for x on the boundary of F and for λ on a reflection hyperplane. Through these two facts, the restrictions of x and λ look as follows:

Φ_λ(x): x ∈ F,  λ ∈ P^+,
φ_λ(x): x ∈ F̃,  λ ∈ P^{++},
Ξ_λ(x): x ∈ F^e,  λ ∈ P^+ ∪ r_1 P^{++},

where F̃ denotes the interior of the fundamental region F.
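A worked special case, added here for orientation, makes the connection to the ordinary trigonometric functions explicit: for the rank-one algebra A_1 the Weyl group is W = {1, r} with rλ = -λ, so the sums in Equations (A4) and (A5) collapse to

\Phi_{\lambda}(x) = e^{2\pi i \lambda x} + e^{-2\pi i \lambda x} = 2\cos(2\pi\lambda x), \qquad \varphi_{\lambda}(x) = e^{2\pi i \lambda x} - e^{-2\pi i \lambda x} = 2i\sin(2\pi\lambda x).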
For the discretization of the orbit functions, we choose an arbitrary fixed positive integer M that defines the density of the lattice. The discrete fundamental region F_M is constructed as an intersection of the fundamental region F and a stretched subset of the lattice P^∨:

F_{M} = \frac{1}{M} P^{\vee}/Q^{\vee} \cap F = \left\{ \frac{s_{1}}{M}\omega^{\vee}_{1} + \cdots + \frac{s_{n}}{M}\omega^{\vee}_{n} \,\middle|\, s_{0} + \sum_{i=1}^{n} s_{i} m_{i} = M,\; s_{0}, s_{1}, \ldots, s_{n} \in \mathbb{Z}^{\geq 0} \right\}    (A7)

For the discrete orbit functions we use the set Λ_M, which is a set of discrete labels λ. The parameter M has the same meaning as for the discrete fundamental region. The set Λ_M is expressed as follows:

\Lambda_{M} = P/MQ \cap M F^{\vee} = \left\{ s_{1}\omega_{1} + \cdots + s_{n}\omega_{n} \,\middle|\, s_{0} + \sum_{i=1}^{n} s_{i} m^{\vee}_{i} = M,\; s_{0}, s_{1}, \ldots, s_{n} \in \mathbb{Z}^{\geq 0} \right\}    (A8)
Due to the invariance of the functions under the actions of the Weyl group W, and the (anti-)symmetry of the functions, the discrete orbit functions can be restricted in the following way:

Φ_λ(x): x ∈ F_M,  λ ∈ Λ_M,
φ_λ(x): x ∈ F̃_M,  λ ∈ Λ̃_M,
Ξ_λ(x): x ∈ F^e_M,  λ ∈ Λ^e_M.
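As an added numerical illustration of Equation (A7): for A_2 both marks equal one, so the constraint reduces to s_0 + s_1 + s_2 = M and F_M contains (M + 1)(M + 2)/2 points. The sketch below enumerates these points as coefficient tuples (s_1/M, ..., s_n/M) in the co-weight basis {ω^∨_1, ..., ω^∨_n}, which avoids committing to a particular coordinate realization; the function name is illustrative.

```python
from itertools import product

def discrete_fundamental_region(M, marks):
    """Enumerate F_M as coefficient tuples (s_1/M, ..., s_n/M) in the
    co-weight basis, following Equation (A7): s_0 + sum_i s_i * m_i = M,
    with all s_j non-negative integers.
    """
    n = len(marks)
    points = []
    for s in product(range(M + 1), repeat=n):
        weighted = sum(si * mi for si, mi in zip(s, marks))
        if weighted <= M:                     # s_0 = M - weighted >= 0
            points.append(tuple(si / M for si in s))
    return points

# A2: both marks are 1, so F_M has (M + 1)(M + 2)/2 points.
pts = discrete_fundamental_region(4, marks=[1, 1])
print(len(pts))   # 15
```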
For the further relations, the scalar product over the discrete fundamental region is crucial. Having two discrete functions f(x) and g(x), defined over the discrete fundamental region with density M, we define their scalar product as

\langle f, g \rangle_{F_{M}} = \sum_{x \in F_{M}} \varepsilon(x) f(x) g(x)    (A9)

Note that the region F_M may change depending on the orbit function used. E.g., when computing \langle f, \varphi_{\lambda} \rangle_{F_{M}}, we can omit the boundary of F_M, since φ_λ = 0 on the boundary of F_M; thus \langle f, \varphi_{\lambda} \rangle_{F_{M}} = \langle f, \varphi_{\lambda} \rangle_{\widetilde{F}_{M}}.
For λ, λ' ∈ Λ_M, the orbit functions satisfy the orthogonality relations:

\langle \Phi_{\lambda}, \Phi_{\lambda'} \rangle_{F_{M}} = c\, M^{2}\, |W|\, |\mathrm{stab}_{W}(\lambda)|\, \delta_{\lambda\lambda'}
\langle \varphi_{\lambda}, \varphi_{\lambda'} \rangle_{\widetilde{F}_{M}} = c\, M^{2}\, |W|\, |\mathrm{stab}_{W}(\lambda)|\, \delta_{\lambda\lambda'}
\langle \Xi_{\lambda}, \Xi_{\lambda'} \rangle_{F^{e}_{M}} = c\, M^{2}\, |W^{e}|\, |\mathrm{stab}_{W^{e}}(\lambda)|\, \delta_{\lambda\lambda'}    (A10)

Here M is the density of the discrete fundamental region, and c denotes the determinant of the Cartan matrix for the underlying group W, with the Cartan matrix C = (c_{ij})_{i,j=1}^{n}, c_{ij} = \langle \alpha_{i}, \alpha^{\vee}_{j} \rangle. |W| is the order of the group W, and stab_W(λ) is the stabilizer of the point λ under W. Generally speaking, stab_G(x) is the maximal subgroup of G such that g(x) = x for all g ∈ stab_G(x).
Since the orbit functions are pairwise orthogonal over the discrete fundamental region, we can expand the discrete functions f(x), g(x) and h(x) into finite series of orbit functions:

f(x) = \sum_{\lambda \in \Lambda_{M}} F^{(\Phi)}_{\lambda} \Phi_{\lambda}(x),\quad x \in F_{M}    (Φ-orbit transform)
g(x) = \sum_{\lambda \in \widetilde{\Lambda}_{M}} G^{(\varphi)}_{\lambda} \varphi_{\lambda}(x),\quad x \in \widetilde{F}_{M}    (φ-orbit transform)
h(x) = \sum_{\lambda \in \Lambda^{e}_{M}} H^{(\Xi)}_{\lambda} \Xi_{\lambda}(x),\quad x \in F^{e}_{M}    (Ξ-orbit transform)    (A11)

The function f(x) needs to be defined on F_M, g(x) must be defined on F̃_M, and h(x) is defined on F^e_M. F denotes the spectrum of the discrete function f. The superscripts Φ, φ and Ξ are used to distinguish between the different kinds of spectra and are not commonly used.

The spectrum points are given by

F^{(\Phi)}_{\lambda} = \frac{\langle f, \Phi_{\lambda} \rangle_{F_{M}}}{\langle \Phi_{\lambda}, \Phi_{\lambda} \rangle_{F_{M}}},\quad \lambda \in \Lambda_{M}
G^{(\varphi)}_{\lambda} = \frac{\langle g, \varphi_{\lambda} \rangle_{\widetilde{F}_{M}}}{\langle \varphi_{\lambda}, \varphi_{\lambda} \rangle_{\widetilde{F}_{M}}},\quad \lambda \in \widetilde{\Lambda}_{M}
H^{(\Xi)}_{\lambda} = \frac{\langle h, \Xi_{\lambda} \rangle_{F^{e}_{M}}}{\langle \Xi_{\lambda}, \Xi_{\lambda} \rangle_{F^{e}_{M}}},\quad \lambda \in \Lambda^{e}_{M}    (A12)
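A minimal sketch of the forward step of Equation (A12) and the inverse step of Equation (A11) is given below. The orbit-function evaluator, the point set F_M with its weights ε(x), the label set Λ_M and the normalization constants of Equation (A10) are passed in as inputs, since their concrete values depend on the underlying Weyl group and are not reproduced here; the complex conjugation of the orbit function inside the scalar product is an assumption made for complex-valued Φ.

```python
import numpy as np

def orbit_transform(f_values, points, labels, eps, phi, norms):
    """Forward transform, Equation (A12):
    F_lambda = <f, Phi_lambda>_{F_M} / <Phi_lambda, Phi_lambda>_{F_M}.

    f_values : samples f(x) on the discrete fundamental region F_M
    points   : the points x of F_M
    labels   : the label set Lambda_M (hashable entries, e.g. tuples)
    eps      : the weights eps(x) of the scalar product (A9)
    phi      : callable phi(lam, x) evaluating the orbit function
    norms    : the norms <Phi_lambda, Phi_lambda>_{F_M}, cf. (A10)
    """
    spectrum = {}
    for lam, norm in zip(labels, norms):
        inner = sum(e * f * np.conj(phi(lam, x))
                    for e, f, x in zip(eps, f_values, points))
        spectrum[lam] = inner / norm
    return spectrum

def inverse_transform(spectrum, phi, x):
    """Inverse transform, Equation (A11), and continuous extension (A13):
    evaluate sum_lambda F_lambda * Phi_lambda(x) at any point x."""
    return sum(coeff * phi(lam, x) for lam, coeff in spectrum.items())
```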
A.4. Continuous Extension
One of the key properties of the orbit transform is that the sequence of an orbit transform followed by the inverse orbit transform preserves the processed data, e.g., f = \sum_{\lambda \in \Lambda_{M}} \frac{\langle f, \Phi_{\lambda} \rangle_{F_{M}}}{\langle \Phi_{\lambda}, \Phi_{\lambda} \rangle_{F_{M}}} \Phi_{\lambda}. This property is preserved even when the discrete orbit function in the inverse orbit transform, Equation (A11), is replaced with the continuous orbit function of the same family:

f(x) = \sum_{\lambda \in \Lambda_{M}} F^{(\Phi)}_{\lambda} \Phi_{\lambda}(x),\quad x \in R^{n}
g(x) = \sum_{\lambda \in \widetilde{\Lambda}_{M}} G^{(\varphi)}_{\lambda} \varphi_{\lambda}(x),\quad x \in R^{n}
h(x) = \sum_{\lambda \in \Lambda^{e}_{M}} H^{(\Xi)}_{\lambda} \Xi_{\lambda}(x),\quad x \in R^{n}    (A13)

In this case, we obtain a continuous extension of the original data. As proven in [2], certain families of orbit functions can provide a high-quality approximation with quick convergence to the original continuous data.
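A hypothetical usage note, reusing the sketch given after Equation (A12): once the spectrum has been computed on Λ_M, the continuous extension of Equation (A13) amounts to evaluating the same inverse step at arbitrary points of R^n.

```python
# Hypothetical usage, assuming `orbit_transform` and `inverse_transform` from the
# previous sketch and a continuous evaluator `phi` of the same orbit-function family:
#
#   spectrum = orbit_transform(f_values, points, labels, eps, phi, norms)
#   on_lattice  = inverse_transform(spectrum, phi, points[0])    # reproduces f there
#   off_lattice = inverse_transform(spectrum, phi, (0.37, 0.12)) # continuous extension
```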
References
1. Mead, C. Neuromorphic electronic systems. Proc. IEEE 1990, 78, 1629–1636. [CrossRef]
2. Lamb, T.D. Evolution of phototransduction, vertebrate photoreceptors and retina. Prog. Retinal Eye Res. 2013, 36, 52–119. [CrossRef] [PubMed]
3. Wen, W.; Khatibi, S. Back to basics: Towards novel computation and arrangement of spatial sensory in images. Acta Polytech. 2016, 56, 409–416. [CrossRef]
4. Luczak, E.; Rosenfeld, A. Distance on a hexagonal grid. IEEE Trans. Comput. 1976, 25, 532–533. [CrossRef]
5. Wüthrich, C.A.; Stucki, P. An algorithmic comparison between square- and hexagonal-based grids. CVGIP Graph. Models Image Process. 1991, 53, 324–339. [CrossRef]
6. Snyder, W.E.; Qi, H.; Sander, W.A. Coordinate system for hexagonal pixels. In Proceedings of the Medical Imaging 1999: Image Processing, San Diego, CA, USA, 20–26 February 1999; Volume 3661, pp. 716–728.
7. Her, I. A symmetrical coordinate frame on the hexagonal grid for computer graphics and vision. J. Mech. Des. 1993, 115, 447–449. [CrossRef]
8. Her, I.; Yuan, C.-T. Resampling on a pseudohexagonal grid. CVGIP Graph. Models Image Process. 1994, 56, 336–347. [CrossRef]
9. Her, I. Geometric transformations on the hexagonal grid. IEEE Trans. Image Process. 1995, 4, 1213–1222. [CrossRef] [PubMed]
10. Nagy, B. Finding shortest path with neighbourhood sequences in triangular grids. In Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis (ISPA 2001), Pula, Croatia, 19–21 June 2001; pp. 55–60.
11. Sheridan, P. Spiral Architecture for Machine Vision. Ph.D. Thesis, University of Technology Sydney, Sydney, Australia, 1996.
12. He, X. 2D-Object Recognition with Spiral Architecture. Ph.D. Thesis, University of Technology Sydney, Sydney, Australia, 1999.
13. Middleton, L.; Sivaswamy, J. Edge detection in a hexagonal-image processing framework. Image Vis. Comput. 2001, 19, 1071–1081. [CrossRef]
14. Middleton, L.; Sivaswamy, J. Framework for practical hexagonal-image processing. J. Electron. Imaging 2002, 11, 104–115.
15. Wu, Q.; He, S.; Hintz, T. Virtual Spiral Architecture. In Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas, NV, USA, 21–24 June 2004.
16. He, S.; Hintz, T.; Wu, Q.; Wang, H.; Jia, W. A new simulation of Spiral Architecture. In Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–29 June 2006.
17. Wen, W.; Khatibi, S. Novel Software-Based Method to Widen Dynamic Range of CCD Sensor Images. In Proceedings of the International Conference on Image and Graphics, Tianjin, China, 13–16 August 2015; pp. 572–583.
18. Wen, W.; Khatibi, S. The Impact of Curviness on Four Different Image Sensor Forms and Structures. Sensors 2018, 18, 429. [CrossRef] [PubMed]
19. He, X.; Jia, W. Hexagonal Structure for Intelligent Vision. In Proceedings of the 2005 International Conference on Information and Communication Technologies, Karachi, Pakistan, 27–28 August 2005; pp. 52–64.
20. Horn, B. Robot Vision; MIT Press: Cambridge, MA, USA, 1986.
21. Perlin, K. An image synthesizer. ACM SIGGRAPH Comput. Graph. 1985, 19, 287–296. [CrossRef]
22. Parberry, I. Amortized noise. J. Comput. Graph. Tech. 2014, 3, 31–47.
23. Asharindavida, F.; Hundewale, N.; Aljahdali, S. Study on hexagonal grid in image processing. In Proceedings of the 2012 International Conference on Information and Knowledge Management, Kuala Lumpur, Malaysia, 24–26 July 2012.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).