Edith Cowan University
Research Online
ECU Publications Pre. 2011
2010
Analysis of Hu's Moment Invariants on Image Scaling and Rotation
Zhihu Huang, Edith Cowan University
Jinsong Leng, Edith Cowan University
This article was originally published as: Huang, Z., & Leng, J. (2010). Analysis of Hu's Moment Invariants on Image Scaling and Rotation. Proceedings of 2010 2nd International Conference on Computer Engineering and Technology (ICCET) (pp. 476-480). Chengdu, China: IEEE.
This Conference Proceeding is posted at Research Online.
http://ro.ecu.edu.au/ecuworks/6350
Analysis of Hu's Moment Invariants on Image Scaling and Rotation

Zhihu Huang
College of Computer Science, ChongQing University, China
Chongqing Radio & TV University, ChongQing, China
Email: Zh_huang@yeah.net

Jinsong Leng
School of Computer and Security Science, Edith Cowan University, Australia
Email: j.leng@ecu.edu.au
Abstract—Moment invariants have been widely applied to image pattern recognition in a variety of applications due to their invariant features under image translation, scaling and rotation. The moments are strictly invariant only for continuous functions. However, in practical applications images are discrete. Consequently, the moment invariants may change under image geometric transformations. To address this research problem, an analysis of the variation of moment invariants under image geometric transformations is presented, so as to quantify the effect of image scaling and rotation. Finally, guidance is also provided for minimizing the fluctuation of moment invariants.

Keywords-Pattern recognition, Hu's moment invariants, Image transformation, Spatial resolution
I. INTRODUCTION
Moments and the related invariants have been extensively analyzed to characterize patterns in images in a variety of applications. The well-known moments include geometric moments [1], Zernike moments [2], rotational moments [3], and complex moments [4].
Moment invariants were first introduced by Hu. In [1], Hu derived six absolute orthogonal invariants and one skew orthogonal invariant based upon algebraic invariants, which are not only independent of position, size and orientation but also independent of parallel projection. The moment invariants have been proved to be adequate measures for tracing image patterns under image translation, scaling and rotation, assuming continuous and noise-free images. Moment invariants have been extensively applied to image pattern recognition [4-7], image registration [8] and image reconstruction. However, digital images in practical applications are neither continuous nor noise-free, because images are quantized by finite-precision pixels on a discrete coordinate grid. In addition, noise may be introduced by various factors such as the camera. Consequently, errors are inevitably introduced during the computation of moment invariants. In other words, the moment invariants may vary with image geometric transformations. But how large is the variation? To our knowledge, this issue has never been quantitatively studied for image geometric transformations.
Salama [9] analyzed the effect of spatial quantization on moment invariants. He found that the error decreases when the image size increases and the sampling intervals decrease, but does not decrease monotonically in general. Teh [10] analyzed three fundamental issues related to moment invariants: 1) sensitivity to image noise; 2) aspects of information redundancy; and 3) capability for image representation. Teh concluded that the higher-order moments are the more vulnerable to noise. Computational errors in moment invariants can be caused not only by quantization and noise, but also by transformations such as scaling and rotation. When the size of an image is increased or decreased, pixels are interpolated or deleted. Moreover, the rotation of an image also changes the image function, because it involves rounding pixel values and coordinates [11]. Therefore, moment invariants may change as images are scaled or rotated.
This paper quantitatively analyzes the fluctuation of moment invariants under image scaling and rotation. Empirical studies have been conducted with various images. The major contributions of this paper include the findings on the relationship among image scaling, rotation and resolution, and the guidance for minimizing the errors of moment invariants under image scaling and rotation.
The rest of the paper is organized as follows: Section II describes the seven Hu moment invariants. Section III discusses the variation of moment invariants when images are scaled and rotated. Section IV analyzes the relationship between moment invariants and computation time. Finally, Section V concludes the paper.
II. MOMENT INVARIANTS
The two-dimensional (p+q)th-order moments are defined as follows:

m_{pq} = \iint x^p y^q f(x,y) \, dx \, dy, \quad p, q = 0, 1, 2, \ldots \quad (1)
If the image function f(x,y) is a piecewise continuous bounded function, then moments of all orders exist and the moment sequence {m_pq} is uniquely determined by f(x,y); correspondingly, f(x,y) is also uniquely determined by the moment sequence {m_pq}.
V7-476 978-1-4244-6349-7/10/$26.00 © 2010 IEEE
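On a discrete image the double integral in (1) becomes a double sum over pixel coordinates. The following minimal NumPy sketch illustrates this (the function name `raw_moment` is ours, not the paper's):

```python
import numpy as np

def raw_moment(img, p, q):
    """Discrete approximation of m_pq: sum of x^p * y^q * f(x, y) over pixels."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]  # pixel coordinate grids (row = y, column = x)
    return np.sum((x ** p) * (y ** q) * img)

# A tiny 3x3 test image: a single pixel of intensity 5 at (x=2, y=1).
img = np.zeros((3, 3))
img[1, 2] = 5.0
print(raw_moment(img, 0, 0))  # 5.0  (total "mass")
print(raw_moment(img, 1, 0))  # 10.0 (5 * x, with x = 2)
print(raw_moment(img, 0, 1))  # 5.0  (5 * y, with y = 1)
```

Because the sum only approximates the integral, the quantization errors discussed later enter exactly here.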
One should note that the moments in (1) may not be invariant when f(x,y) changes by translation, rotation or scaling. Invariant features can be achieved using the central moments, which are defined as follows:

\mu_{pq} = \iint (x - \bar{x})^p (y - \bar{y})^q f(x,y) \, dx \, dy, \quad p, q = 0, 1, 2, \ldots \quad (2)
where \bar{x} = m_{10}/m_{00} and \bar{y} = m_{01}/m_{00}. The point (\bar{x}, \bar{y}) is the centroid of the image f(x,y). The central moment \mu_{pq} computed about the centroid of the image f(x,y) is equivalent to m_{pq} with the origin shifted to the centroid of the image. Therefore, the central moments are invariant to image translation.
Scale invariance can be obtained by normalization. The
normalized central moments are defined as follows:
\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}, \quad \gamma = \frac{p+q+2}{2}, \quad p + q = 2, 3, \ldots \quad (3)
Based on the normalized central moments, Hu [1] introduced seven moment invariants:

\phi_1 = \eta_{20} + \eta_{02}

\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2

\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2

\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2

\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]

\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})

\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]

These seven moment invariants have the useful property of remaining unchanged under image scaling, translation and rotation.
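The chain from central moments through normalized moments to the first two invariants can be sketched in NumPy as follows (an illustrative sketch, not the authors' code; a 90° rotation via np.rot90 only permutes pixels, so φ1 and φ2 should match to floating-point precision):

```python
import numpy as np

def eta(img, p, q):
    """Normalized central moment eta_pq = mu_pq / mu_00^gamma, gamma = (p+q+2)/2."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()
    xc = (x * img).sum() / m00          # centroid x-bar = m10 / m00
    yc = (y * img).sum() / m00          # centroid y-bar = m01 / m00
    mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
    return mu / m00 ** ((p + q + 2) / 2.0)

def phi1(img):
    return eta(img, 2, 0) + eta(img, 0, 2)

def phi2(img):
    return (eta(img, 2, 0) - eta(img, 0, 2)) ** 2 + 4 * eta(img, 1, 1) ** 2

img = np.random.default_rng(0).random((64, 64))
rotated = np.rot90(img)  # exact 90-degree rotation: pixels are only permuted
print(np.isclose(phi1(img), phi1(rotated)))  # True
print(np.isclose(phi2(img), phi2(rotated)))  # True
```

For arbitrary angles the rotation requires interpolation and rounding, and the invariance becomes approximate; that gap is exactly the fluctuation studied in Section III.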
III. MOMENT INVARIANTS AND IMAGE TRANSFORMATION
Image geometric transformation is a common technique in image processing, which usually involves image translation, scaling and rotation. The translation operator maps the position of each pixel in an input image to a new position in an output image; the scale operator shrinks or zooms an image, which is achieved by subsampling or interpolating the input image; the rotation operator maps the position of each pixel onto a new position by rotating it through a specified angle, which may produce non-integer coordinates. In order to generate the intensity of the pixels at each integer position, different interpolation methods can be employed, such as nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation.
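The subsampling/interpolation step of the scale operator can be sketched with the simplest scheme, nearest-neighbor resampling (our own minimal illustration; the function name `scale_nearest` is hypothetical):

```python
import numpy as np

def scale_nearest(img, new_h, new_w):
    """Resize by nearest-neighbor: each output pixel copies the closest input pixel."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h  # map each output row to an input row
    cols = np.arange(new_w) * w // new_w  # map each output column to an input column
    return img[np.ix_(rows, cols)]

img = np.arange(16).reshape(4, 4)
big = scale_nearest(img, 8, 8)    # 2x zoom: every pixel is duplicated
small = scale_nearest(img, 2, 2)  # subsampling: pixels are dropped
print(big.shape, small.shape)     # (8, 8) (2, 2)
```

Either direction alters the image function, pixel duplication when zooming, pixel loss when shrinking, which is why scaling perturbs the moment invariants.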
From the description in the last paragraph, we can see that translation only shifts the positions of pixels, whereas scaling and rotation not only shift pixel positions but also change the image function itself. Therefore, only scaling and rotation cause errors in the moment invariants. We employ two images at 250x250 resolution to study to what extent image scaling and rotation impact the moment invariants. One is a complex image without a principal orientation (image A); the other is a simple image with a principal orientation (image B). Figure 1 shows the two images.
Image A
Image B
Figure 1. Images for Experiments
A. Moment Invariants and Image Scaling
As described above, image scaling may change the image function, so the moment invariants may also change correspondingly. It is therefore necessary to study the relationship between moment invariants and image scaling. We conduct the study by computing the seven moment invariants on images A and B at resolutions from 10x10 to 500x500 in increments of 10. Figures 2(a) and (b) show the results for images A and B, respectively. From the diagrams we see that the resolution of 130x130 is the threshold for image A: the values fluctuate noticeably when the resolution is less than the threshold but are stable when the resolution is greater than the threshold. Correspondingly, the threshold of image B is 150x150.
(a) Moment Invariants for Image A
(b) Moment Invariants for Image B
Figure 2. Moment invariants on image scaling
Table I. Fluctuation Ratio on Image A

Moment Invariant   Resolution < Threshold   Resolution >= Threshold
φ1                 0.58%                    0.04%
φ2                 47.02%                   1.06%
φ3                 60.25%                   1.41%
φ4                 88.26%                   1.40%
φ5                 124.38%                  5.68%
φ6                 84.43%                   1.84%
φ7                 102.94%                  2.59%
Equation (4) is used to measure the fluctuation of a series of data x:

R = \frac{\max(x) - \min(x)}{|\mathrm{mean}(x)|} \times 100\% \quad (4)

where max(x), min(x) and mean(x) denote the maximum, minimum and mean of the data x, respectively. The data in Figure 2(a) are divided into two groups by the threshold 130x130, and the fluctuation of each group is computed by equation (4). The results are displayed in Table I. The maximum fluctuation reaches 124.38% for φ5 when the resolution is less than the threshold, but decreases to 5.68% when the resolution is greater than the threshold.
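Equation (4) translates directly into a few lines of NumPy (a small sketch; the sample values are made up for illustration):

```python
import numpy as np

def fluctuation_ratio(x):
    """R = (max(x) - min(x)) / |mean(x)| * 100%, as in equation (4)."""
    x = np.asarray(x, dtype=float)
    return (x.max() - x.min()) / abs(x.mean()) * 100.0

# Example: one invariant measured across a sweep of resolutions or angles.
values = [1.00, 1.02, 0.98, 1.01]
print(round(fluctuation_ratio(values), 2))  # 3.99
```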
B. Moment Invariants and Image Rotation
Figure 3. Moment Invariants on Image B Rotated from 1° to 360° in 1° Increments
Since the rotation of an image can change the function of
the image more or less, the moment invariants keep changing
as the image is rotated. In order to investigate the fluctuation of moment invariants resulting from image rotation, we conduct two kinds of experiments: one uses different images (A, B) at the same resolution; the other uses different resolutions of the same image. Each input image is rotated from 1° to 360° in 1° increments.
Figure 3 displays the results for image B rotated from 1° to 360°. It shows that the fluctuation of the moment invariants becomes strong when the rotation angle is near 45, 135, 225 and 315 degrees, and weak when the rotation angle is near 90, 180, 270 and 360 degrees.
From Section III-A, we know the threshold of resolution for image B is 150x150. Does the same image have the same threshold for scaling and rotation? To answer this question, we produce a series of images at resolutions from 60x60 to 330x330 in increments of 30 from the same original image, and compute the moment invariants for each image rotated from 1° to 360°. Then, the fluctuation is computed by equation (4) over each set of 360 rotated images. Figure 4 shows the trend of the fluctuation of the moment invariants. From the diagram, we can see that the threshold differs between image scaling and image rotation: the threshold for rotation of image B is not 150x150 but 240x240.
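This rotation experiment can be sketched end-to-end with a simple nearest-neighbor rotation and the φ1 invariant (a simplified stand-in for the authors' setup: the synthetic rectangle image, the 15° step and all function names are our assumptions):

```python
import numpy as np

def rotate_nearest(img, angle_deg):
    """Rotate about the image centre with nearest-neighbor sampling, same shape."""
    h, w = img.shape
    theta = np.deg2rad(angle_deg)
    y, x = np.mgrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Inverse mapping: for every output pixel, locate its source coordinate.
    xs = np.cos(theta) * (x - cx) + np.sin(theta) * (y - cy) + cx
    ys = -np.sin(theta) * (x - cx) + np.cos(theta) * (y - cy) + cy
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    out = np.zeros_like(img)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def phi1(img):
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def eta(p, q):
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** ((p + q + 2) / 2.0)
    return eta(2, 0) + eta(0, 2)

img = np.zeros((101, 101))
img[30:70, 40:60] = 1.0  # a simple rectangular pattern, well inside the frame
vals = [phi1(rotate_nearest(img, a)) for a in range(0, 360, 15)]
R = (max(vals) - min(vals)) / abs(np.mean(vals)) * 100  # equation (4)
print(f"phi1 fluctuation over rotation: {R:.2f}%")
```

The coordinate rounding inside `rotate_nearest` is the mechanism, cited from [11] in the text, that makes the invariants fluctuate with angle.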
Figure 4. Fluctuation of Moment Invariants on Image B at Different Resolutions
Table II shows the fluctuation values for the seven moment invariants at resolutions from 60x60 to 330x330. We can see that the fluctuation decreases as the image spatial resolution increases. The fluctuation reaches 1921.1% when the resolution is only 60x60, but rapidly decreases to 1.1% when the resolution is 270x270. The fluctuation obviously decreases as the resolution increases, up to the threshold. However, the fluctuation no longer decreases monotonically when the resolution is greater than 270x270.
IV. MOMENT INVARIANTS AND COMPUTATION
As discussed in Section III, we can reach the conclusion that the higher the resolution, the lower the fluctuation. However, the computation time of the moment invariants increases as the resolution increases. Consequently, it is necessary to study the relationship between image resolution and computation time. We measure the computation time of the moment invariants for resolutions from 10x10 to 1500x1500 in steps of 30 on a PC (CPU P4 2.0 GHz, RAM 1 GB). Figure 5 shows the results. From the diagram, we can see that the relationship between resolution and computation time is non-linear. Therefore, we must select an acceptable resolution to balance computation and accuracy in real applications.
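A timing sweep of this kind can be sketched as follows (our own benchmarking scaffold, not the authors' code; absolute numbers depend entirely on the machine, only the growth trend is meaningful):

```python
import time
import numpy as np

def time_phi1_ms(n, repeats=3):
    """Best wall-clock time (ms) to compute phi1 on an n x n random image."""
    img = np.random.default_rng(0).random((n, n))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        y, x = np.mgrid[:n, :n]
        m00 = img.sum()
        xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
        # phi1 = eta20 + eta02, with gamma = 2 for second-order moments
        phi1 = ((x - xc) ** 2 * img).sum() / m00 ** 2 \
             + ((y - yc) ** 2 * img).sum() / m00 ** 2
        best = min(best, (time.perf_counter() - t0) * 1000.0)
    return best

for n in (100, 400, 800):
    print(n, f"{time_phi1_ms(n):.3f} ms")
```

Since every moment is a sum over all n² pixels, the cost grows at least quadratically in the side length, consistent with the non-linear curve in Figure 5.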
Table II. Fluctuation (%) of Moment Invariants at Different Resolutions of Image B

Resolution   φ1     φ2     φ3       φ4      φ5       φ6      φ7
60x60        18.7   39.9   1084.7   193.8   1157.5   280.6   1921.1
90x90        13.3   26.5   730.9    145.3   1118.1   194.7   842.0
120x120      10.7   19.1   436.0    109.9   947.6    140.7   517.4
150x150      7.4    13.6   328.0    86.3    532.0    98.9    302.1
180x180      4.5    8.2    159.2    51.5    237.3    57.7    140.0
210x210      3.2    5.6    88.1     36.2    179.3    38.9    75.5
240x240      1.1    1.9    21.4     12.3    46.2     12.8    19.7
270x270      0.2    0.3    1.8      0.4     2.9      0.5     1.1
300x300      0.2    0.5    1.4      0.3     2.0      0.5     1.3
330x330      0.1    0.3    1.9      0.2     1.2      0.2     1.7
Figure 5. Computation Time of Moment Invariants at Different Resolutions
V. CONCLUSION
This paper has presented an analysis of the fluctuation of Hu's moment invariants under image scaling and rotation. Our findings may be summarized as follows: (1) the moment invariants change as images are scaled or rotated, because images are not continuous functions and may be polluted by noise; (2) the fluctuation decreases as the spatial resolution of the image increases; however, the change does not decrease remarkably as resolution increases once the resolution is greater than the
threshold; (3) the computation time increases quickly as the resolution increases.
From the experimental studies, we find that the choice of image spatial resolution is very important for keeping the features invariant. To decrease the fluctuation of moment invariants, the image spatial resolution must be higher than the thresholds for scaling and rotation. However, the resolution cannot be too high, because the computation time increases remarkably as the resolution increases. Therefore, the choice of resolution must balance computation and accuracy in real applications.
REFERENCES
[1] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, pp. 179-187, 1962.
[2] M. R. Teague, "Image analysis via the general theory of moments," Journal of the Optical Society of America, vol. 70, pp. 920-930, 1980.
[3] J. F. Boyce and W. J. Hossack, "Moment invariants for pattern recognition," Pattern Recognition Letters, vol. 1, pp. 451-456, 1983.
[4] Y. S. Abu-Mostafa and D. Psaltis, "Recognitive aspects of moment invariants," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, pp. 698-706, 1984.
[5] M. Schlemmer, M. Heringer, F. Morr, I. Hotz, M. H. Bertram, C. Garth, W. Kollmann, B. Hamann, and H. Hagen, "Moment invariants for the analysis of 2D flow fields," IEEE Transactions on Visualization and Computer Graphics, vol. 13, pp. 1743-1750, 2007.
[6] C. Qing, P. Emil, and Y. Xiaoli, "A comparative study of Fourier descriptors and Hu's seven moment invariants for image recognition," in Canadian Conference on Electrical and Computer Engineering, 2004, pp. 103-106, vol. 1.
[7] J. Flusser, B. Zitova, and T. Suk, "Invariant-based registration of rotated and blurred images," in Proceedings of IGARSS '99, IEEE International Geoscience and Remote Sensing Symposium, 1999, pp. 1262-1264.
[8] J. Flusser and T. Suk, "A moment-based approach to registration of images with affine geometric distortion," IEEE Transactions on Geoscience and Remote Sensing, vol. 32, pp. 382-387, 1994.
[9] G. I. Salama and A. L. Abbott, "Moment invariants and quantization effects," in Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1998, pp. 157-163.
[10] C.-H. Teh and R. T. Chin, "On image analysis by the methods of moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, pp. 496-513, 1988.
[11] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. Prentice Hall, 2008.