International Journal of Engineering Research and Advanced Technology (IJERAT)
E-ISSN: 2454-6135
DOI: http://doi.org/10.31695/IJERAT.2018.3315
Volume 4, Issue 8, August 2018
Feature Extraction Based on Wavelet Transform and Moment
Invariants for Medical Image
Raniah Ali Mustafa 1, Kawther Thabt Saleh 2 and Haitham Salman Chyad 3
rania83computer@uomustansiriyah.edu.iq, kawtherthabt@uomustansiriyah.edu.iq, haitham@uomustansiriyah.edu.iq
1, 2, 3 Research Scholars
Department of Computer Science, College of Education
Baghdad, Iraq
_______________________________________________________________________________________
ABSTRACT
The objective of feature extraction is to reduce the original data set by measuring certain features, or properties, that distinguish one input pattern from another. The main idea of the proposed system is feature extraction in two phases: the first phase uses the discrete wavelet transform and the second phase uses the seven moment invariants. In this paper, a two-level discrete wavelet transform is applied to each medical image, yielding seven bands of wavelet coefficients from which texture features are extracted; the seven moment invariants are then computed for each band, giving 49 features per medical image. The proposed system was implemented on a real human MRI dataset, part of which was obtained from hospitals and the rest from the Brain-Tumor-Progression dataset available on the Internet, and it was implemented in the Visual Basic 6.0 programming language.
Key Words: Feature Extraction, Medical Image, Discrete Wavelet Transform, Moment Invariants.
_______________________________________________________________________________________________
1. INTRODUCTION
Medical image processing has experienced dramatic growth and has become an interdisciplinary research field attracting expertise from applied mathematics, engineering, computer science, physics, statistics, medicine and biology. Computer-aided diagnostic processing has already become a significant part of clinical routine. Accompanied by a rush of new developments in high technology and the use of various imaging modalities, more challenges arise; for instance, how to process and analyse a significant volume of images so that high-quality information can be produced for treatment and disease diagnosis [1].
Along with the rapid developments in image processing and pattern recognition methods, computer-aided tumour diagnosis is attracting further attention. Many approaches use classification techniques that describe an image by metadata such as histograms, texture or shape features, affording an accurate description of the image based on its content, and employ neural networks to categorize images and classify tumours [2].
Feature extraction is the transformation of the image into a set of features. Some features of a three- or two-dimensional object pattern are its volume, area, etc., which can be measured by counting pixels. Similarly, the shape of an object may be described by its boundary. The colour of an object is an extremely significant feature, which can be defined in different colour spaces. The approaches used to measure such characteristics are called feature extraction approaches [3].
1.1 TUMOR TYPES OF THE BRAIN
The body is made up of many cells, and each cell has a definite responsibility. Cells grow in the body and divide to regenerate other cells; these divisions are vital for the proper functioning of the body. When a cell loses the ability to control its growth, these divisions occur without any restriction and a tumor emerges. A tumor is a mass of tissue that grows outside the normal forces that control growth.
The brain is an essential part of the body and has a very complex structure. The brain can be affected by a problem that causes changes in its normal behavior and normal structure; this problem is known as a brain tumor. It is one of the main causes of the rise in mortality among adults and children. Brain tumors are classified as follows [4]:
A. Benign tumor (non-cancerous tumor): this type of tumor is non-cancerous, meaning it does not spread to or invade the surrounding tissue.
B. Malignant tumor (cancerous tumor): this type of tumor is cancerous, meaning it spreads to and invades the surrounding tissue. It is classified as primary or secondary.
Primary tumors: a primary tumor denotes a mass or tumor that is growing at the location where the cancer originated. Most of them are generally treated effectively with procedures such as surgery.
Secondary (metastatic) tumors: a secondary (metastatic) brain tumor occurs when cancer cells spread to the brain from a primary cancer in another part of the body. Secondary tumors are approximately three times more common than primary tumors in the brain.
Table 1.1: Some Types of Tumor in the Human Brain

No. | Type of tumor | Description
1 | Lymphoma | Primary lymphomas arise from lymphatic cells found in the brain. The classic appearance of a lymphoma most frequently looks like a ring.
2 | Glioblastoma multiforme (GBM) | GBM is the most common high-grade astrocytoma and the most malignant of these tumors. It accounts for about 30% of all primary brain tumors.
3 | Cystic oligodendroglioma | It consists of homogeneous, rounded cells with distinct borders and clear cytoplasm surrounding a dense central nucleus, so that it looks like a "fried egg".
4 | Ependymoma | Ependymomas are tumors that arise from the ependymal cells in the brain. This tumor is histologically benign but behaves malignantly.
5 | Meningioma | Meningiomas are the most common benign tumors, accounting for 25-30% of all primary brain tumors. They are more common in women.
6 | Anaplastic astrocytoma | Anaplastic astrocytomas are the most common high-grade astrocytoma tumors. They may appear as low-density or inhomogeneous lesions, or with parts of both low and high densities in the same lesion.
1.2 MEDICAL IMAGE ANALYSIS
Medical imaging refers to the methods and processes used to produce images of the human anatomy for medical research, treatment and diagnosis. It is nowadays one of the fastest-growing areas of clinical equipment. The modalities generally used to acquire medical images are Magnetic Resonance Imaging (MRI), Computed Tomography (CT), X-rays and ultrasound imaging. One of the scanning devices in clinical imaging is MRI, which employs magnetic fields to capture images onto film [5]. Image analysis methods have played a significant role in a number of clinical applications. In general, these applications involve the automatic extraction of clinically relevant features from the image, which are then employed for a variety of classification tasks, such as differentiating abnormal tissue from normal tissue. Depending on the particular classification task, the extracted features may be colour properties, shape properties, or certain textural properties of the image [2].
2. WAVELET TRANSFORM
A "wave" is generally described as an oscillating function of time or space, such as a sinusoid. "Wavelet" means a "small wave", whose energy is concentrated in time, offering a tool for the analysis of transient, non-stationary or time-varying phenomena [6]. A wavelet is a mathematical function employed to divide a continuous-time signal or a given function into different scale components. Figure 1.1 displays a wave (sinusoid) and a wavelet [7].
Figure 1.1: A Wave and Wavelet
The wavelet transform (WT) offers robust signal analysis tools, which are commonly employed in image de-noising, compression, feature extraction, image retrieval, detection and recognition. Wavelet decomposition is the most widely used multi-resolution method in image processing. Owing to its excellent time-frequency localization, the WT offers a robust mathematical tool. Images typically have locally varying statistics that result from various combinations of abrupt characteristics such as edges, textured regions and relatively low-contrast homogeneous regions. Although such spatial non-stationarity challenges any single statistical description, it is more easily handled in a multi-resolution framework, and the WT can be computed efficiently for each scale and translation [8]. There are two types of WT: the discrete wavelet transform (DWT) and the continuous wavelet transform (CWT). The major difference between them is that the DWT uses a specific subset of translation and scale values, whereas the CWT uses every possible translation and scale [6].
2.1 HAAR BASIS FILTER
The Haar basis filter consists of a low-pass filter (LPF) and a high-pass filter (HPF), represented by equations (1) and (2):

HPF:  g = \frac{1}{\sqrt{2}}\,[\,1,\ -1\,]          (1)

LPF:  h = \frac{1}{\sqrt{2}}\,[\,1,\ 1\,]           (2)

The LPF and HPF are called the decomposition filters because they decompose the image into approximation and detail parts. Convolution with the high-pass filter in a given direction produces a so-called detail image, and convolution with the LPF produces a so-called approximation image.
The LL band (approximation band) is the result of applying the LPF in both the vertical and horizontal directions; its 2x2 filter kernel is

W_{LL} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}            (3)

The LH band (detail band) is the result of applying the horizontal LPF and the vertical HPF; its filter is

W_{LH} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}          (4)

The HL band (detail band) is the result of applying the horizontal HPF and the vertical LPF; its filter is

W_{HL} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}          (5)

The HH band (detail band) is the result of applying the HPF in both the vertical and horizontal directions; its filter is

W_{HH} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}          (6)

The transformed bands (the LL scaling band and the three wavelet bands LH, HL and HH) are obtained by applying the above filters to every 2x2 block of adjacent pixels of the whole image. This method is known as the fast Mallat transform algorithm [9].
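As a concrete illustration of this band splitting (the original system was implemented in Visual Basic 6.0, so the Python/NumPy sketch below is our own assumption rather than the authors' code), the 2x2 kernels of equations (3)-(6) can be applied to non-overlapping 2x2 pixel blocks, which is equivalent to filtering with the Haar filters of equations (1)-(2) and downsampling by two in each direction:

```python
import numpy as np

# 2x2 Haar analysis kernels of equations (3)-(6)
K_LL = 0.5 * np.array([[ 1,  1], [ 1,  1]])   # LPF horizontally and vertically
K_LH = 0.5 * np.array([[ 1,  1], [-1, -1]])   # horizontal LPF, vertical HPF
K_HL = 0.5 * np.array([[ 1, -1], [ 1, -1]])   # horizontal HPF, vertical LPF
K_HH = 0.5 * np.array([[ 1, -1], [-1,  1]])   # HPF horizontally and vertically

def haar_dwt2(img):
    """One-level Haar decomposition of a 2-D array; returns the LL, LH, HL, HH subbands."""
    img = np.asarray(img, dtype=np.float64)
    rows, cols = img.shape
    img = img[: rows - rows % 2, : cols - cols % 2]        # truncate to even dimensions
    # blocks[i, k, j, l] = img[2*i + k, 2*j + l]: each (k, l) slice is one 2x2 block
    blocks = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2)

    def subband(kernel):
        # weighted sum of each 2x2 block with the chosen kernel
        return np.einsum('ikjl,kl->ij', blocks, kernel)

    return subband(K_LL), subband(K_LH), subband(K_HL), subband(K_HH)
```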
Figure 1.2: Two-Level Wavelet Sub-Band Decomposition
2.2 DISCRETE WAVELET TRANSFORM DECOMPOSITION
The DWT of a two-dimensional image can be defined by applying the one-dimensional DWT along each dimension, m and n, independently: DWT_m[DWT_n[f(m, n)]]. The two-dimensional WT decomposes an image into "subbands" that are localized in orientation and frequency. The WT is computed by passing the image through a sequence of filter-bank stages. One stage is illustrated in Figure 1.3, in which the image is first filtered in the horizontal direction [10].
Figure 1.3: Illustration of the 2-D Wavelet Transform
The scaling function (LPF) and the wavelet function (HPF) are finite impulse response filters; in other words, the output at every point depends only on a finite part of the input. The filtered outputs are then downsampled by a factor of 2 in the horizontal direction. These signals are each filtered by an identical filter pair in the vertical direction, finally decomposing the image into four subbands denoted LL, HL, LH and HH. Each of these subbands can be thought of as a smaller version of the image representing different image properties. LL is a coarser approximation of the original image; LH and HL record the changes of the image along the vertical and horizontal directions, respectively; HH contains the high-frequency components of the image. A second level of decomposition can then be performed on the LL subband. Figure 1.4 shows a two-level wavelet decomposition [10].
Figure 1.4: (a) Original Image; (b) Two-Level Wavelet Decomposition
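Continuing the same illustrative sketch (Python/NumPy, not the paper's Visual Basic 6.0 code), the second decomposition level is obtained by applying the one-level function haar_dwt2 defined above to the LL band again, which yields the seven subbands used later in this paper (LH1, HL1, HH1, LL2, LH2, HL2, HH2):

```python
def haar_dwt2_two_level(img):
    """Two-level Haar decomposition: the seven subbands used in the proposed system."""
    ll1, lh1, hl1, hh1 = haar_dwt2(img)      # first level on the full image
    ll2, lh2, hl2, hh2 = haar_dwt2(ll1)      # second level applied to the LL band only
    return {'LH1': lh1, 'HL1': hl1, 'HH1': hh1,
            'LL2': ll2, 'LH2': lh2, 'HL2': hl2, 'HH2': hh2}
```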
3. MOMENT INVARIANTS
This approach has been widely applied to image pattern recognition in a variety of applications because its features are invariant to image rotation, scaling and translation. Moment invariants are helpful for calculating sets of region characteristics that can be utilized for shape recognition.
The two-dimensional geometric moments of order (p + q) of a function f(x, y) are defined in equation (7) [11]:

m_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x^{p} y^{q} f(x, y)            (7)

where p, q = 0, 1, 2, ..., M is the number of rows and N is the number of columns.
The moments that possess the property of translation invariance are called central moments; they are denoted by \mu_{pq} and defined in equation (8) [11]:

\mu_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} (x - \bar{x})^{p} (y - \bar{y})^{q} f(x, y)            (8)

where \bar{x} and \bar{y} are the centre coordinates, calculated using equations (9) and (10) [11]:

\bar{x} = m_{10} / m_{00}            (9)
\bar{y} = m_{01} / m_{00}            (10)

It can be easily verified that the central moments up to order three may be calculated using the following formulas, equations (11)-(20) [11]:

\mu_{00} = m_{00}            (11)
\mu_{10} = 0            (12)
\mu_{01} = 0            (13)
\mu_{11} = m_{11} - \bar{x} m_{01}            (14)
\mu_{20} = m_{20} - \bar{x} m_{10}            (15)
\mu_{02} = m_{02} - \bar{y} m_{01}            (16)
\mu_{21} = m_{21} - 2\bar{x} m_{11} - \bar{y} m_{20} + 2\bar{x}^{2} m_{01}            (17)
\mu_{12} = m_{12} - 2\bar{y} m_{11} - \bar{x} m_{02} + 2\bar{y}^{2} m_{10}            (18)
\mu_{30} = m_{30} - 3\bar{x} m_{20} + 2\bar{x}^{2} m_{10}            (19)
\mu_{03} = m_{03} - 3\bar{y} m_{02} + 2\bar{y}^{2} m_{01}            (20)

Scale invariance can be obtained by using the normalized central moments \eta_{pq}, defined in equations (21) and (22) [11]:

\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}            (21)

where

\gamma = \frac{p + q}{2} + 1, \quad p + q = 2, 3, \ldots            (22)
The seven nonlinear absolute moment invariants, computed from the normalized central moments up to order three, are given in equations (23) to (29) [11]:

\phi_{1} = \eta_{20} + \eta_{02}            (23)
\phi_{2} = (\eta_{20} - \eta_{02})^{2} + 4\eta_{11}^{2}            (24)
\phi_{3} = (\eta_{30} - 3\eta_{12})^{2} + (3\eta_{21} - \eta_{03})^{2}            (25)
\phi_{4} = (\eta_{30} + \eta_{12})^{2} + (\eta_{21} + \eta_{03})^{2}            (26)
\phi_{5} = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^{2} - 3(\eta_{21} + \eta_{03})^{2}] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}]            (27)
\phi_{6} = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})            (28)
\phi_{7} = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^{2} - 3(\eta_{21} + \eta_{03})^{2}] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}]            (29)
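As an illustrative sketch (again Python/NumPy rather than the paper's Visual Basic 6.0 implementation), equations (7)-(29) can be computed directly for a 2-D array f as follows:

```python
import numpy as np

def hu_moments(f):
    """Seven Hu moment invariants of a 2-D array f, following equations (7)-(29)."""
    f = np.asarray(f, dtype=np.float64)
    M, N = f.shape
    x, y = np.mgrid[0:M, 0:N]                       # x indexes rows, y indexes columns

    def m(p, q):                                    # geometric moment m_pq, equation (7)
        return np.sum((x ** p) * (y ** q) * f)

    m00 = m(0, 0)
    xbar, ybar = m(1, 0) / m00, m(0, 1) / m00       # centre of mass, equations (9)-(10)

    def eta(p, q):                                  # normalized central moment, eqs (8), (21)-(22)
        mu_pq = np.sum(((x - xbar) ** p) * ((y - ybar) ** q) * f)
        return mu_pq / m00 ** ((p + q) / 2.0 + 1.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)

    phi1 = n20 + n02                                                     # equation (23)
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2                               # equation (24)
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2                   # equation (25)
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2                           # equation (26)
    phi5 = ((n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))                 # equation (27)
    phi6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03))                       # equation (28)
    phi7 = ((3 * n21 - n03) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))                 # equation (29)
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```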
4. FEATURE NORMALIZATION
An important step between feature extraction and distance computation is feature normalization. Complex image database retrieval systems use features that are generated by many different feature extraction algorithms from different kinds of sources. These feature vectors usually exist in a very high-dimensional space, and not all of the features have the same range [12]. The normalization of a feature f is performed using equation (30):

f_{norm} = \frac{f - f_{min}}{f_{max} - f_{min}}            (30)

where f_{min} and f_{max} are the minimum and maximum values of the feature found over all image samples in the image database, and f_{norm} is the normalized value. The applied normalization process maps the extracted feature values to the range [0, 1] [13].
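A minimal sketch of equation (30) in Python/NumPy, assuming the feature vectors of all samples are stacked row-wise in a matrix (an illustrative assumption; the original system used Visual Basic 6.0), is:

```python
import numpy as np

def normalize_features(features):
    """Min-max normalization of a feature matrix (rows = samples), equation (30)."""
    features = np.asarray(features, dtype=np.float64)
    f_min = features.min(axis=0)                          # minimum of each feature over all samples
    f_max = features.max(axis=0)                          # maximum of each feature over all samples
    span = np.where(f_max > f_min, f_max - f_min, 1.0)    # avoid division by zero for constant features
    return (features - f_min) / span                      # every feature mapped into [0, 1]
```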
5. THE PROPOSED SYSTEM
The main idea of the proposed system is feature extraction from medical images based on the seven moment invariants of each band of a two-level discrete wavelet transform: texture features are extracted from the wavelet coefficients, and the seven moment invariants are then computed for each band. The proposed system consists of the following steps:
1. Read the image.
2. Feature extraction based on the wavelet transform.
3. Feature extraction based on the seven moment invariants.
In this paper, each step of the proposed system is explained together with its results. Figure 1.5 illustrates the steps of the proposed system.
Figure 1.5: The proposed system for Medical Image
[Contents of the flowchart: Read image; DWT decomposition up to 2 levels (LH1, HL1, HH1, LL2, LH2, HL2, HH2); for each band: calculate the moments m_{pq} of order (p + q), compute the coordinates of the centre of mass (\bar{x}, \bar{y}), calculate the central moments \mu_{pq}, calculate the normalized central moments (\eta_{20}, \eta_{02}, \eta_{11}, \eta_{30}, \eta_{03}, \eta_{21}, \eta_{12}), and compute the seven moment invariants (φ1, ..., φ7); store the seven moment invariants of each band in the feature database for the medical image.]
5.1 READ IMAGE
Gray-scale images are single-channel images: they contain brightness information only, with no colour information. The number of bits used for each pixel determines the number of available brightness levels. A typical image uses 8 bits per pixel, which gives 256 (0-255) different brightness (gray) levels.
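A minimal illustrative sketch of this step, using the Pillow library in Python (an assumption of ours; the original system was written in Visual Basic 6.0), is:

```python
import numpy as np
from PIL import Image   # Pillow is assumed here for illustration only

def read_grayscale(path):
    """Load a BMP/JPEG/PNG file as an 8-bit gray-level array with values 0-255."""
    return np.asarray(Image.open(path).convert('L'), dtype=np.uint8)
```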
5.2 FEATURE EXTRACTION
In this section, only the significant information is extracted from the medical image; the extracted information constitutes the feature vector needed to distinguish between medical images. The system employs two different sets of discriminating features: (i) the wavelet transform and (ii) the seven moment invariants.
5.2.1 FEATURE EXTRACTION BASED ON WAVELET TRANSFORM STAGE
In this stage, the medical image passes through several sub-phases whose objective is to extract texture features, which are then used for feature extraction with the moment invariants. A wavelet transform is applied to the input images using Haar wavelet decomposition, which processes the data by computing averages and differences of adjacent elements. The Haar wavelet operates first on adjacent vertical elements and then on adjacent horizontal elements; the Haar wavelet transform is computed as presented in section 2.1.
The purpose of this stage is to extract coefficients from the medical image. In this system, one approximation image (a low-resolution image) and six detail images are obtained. Each medical image is therefore described by seven wavelet coefficient matrices, which represent a very large amount of information (equal in size to the input image). The feature extraction process is applied using the steps of Algorithm 1.1.
Algorithm (1.1) Feature Extraction Steps
Input: Medical image.
Output: Extracted feature vector.
Step 1: Read the medical image data.
Step 2: Apply the wavelet transform up to 2 levels of decomposition (which were enough to give a compact representation of the coefficients) using the Haar discrete wavelet transform. The Haar basis filter is applied using the HPF and LPF of equations (1) and (2) in section 2.1.
Step 3: From the first decomposition level, obtain the bands LL1, LH1, HL1 and HH1.
Step 4: Apply the Haar transform to the LL1 band to obtain the bands LL2, LH2, HL2 and HH2.
Step 5: Save the seven bands (LH1, HL1, HH1, LL2, LH2, HL2 and HH2) as the wavelet coefficient matrices.
Step 6: End.
The problem of reducing the representation of an image to a small number of components carrying enough discriminating information is referred to as "feature extraction". An efficient way of reducing dimensionality and characterizing textural information is to compute a set of moments. In this paper, moments are computed for each band (LH1, HL1, HH1, LL2, LH2, HL2 and HH2); they are useful for representing the characteristics of the data, are simple, and require little computational load. See Figure 1.6.
Figure 1.6: Two-Level Wavelet Decomposition of Medical Images (columns: medical image (type of tumor) and its discrete wavelet transform; rows: Lymphoma, Glioblastoma multiforme (GBM), Cystic oligodendroglioma, Ependymoma, Meningioma, Anaplastic astrocytoma)
In this stage, after the two-level wavelet transform has been applied, one approximation image and six detail images are produced and saved. Table 1.2 shows the resulting bands (LH1, HL1, HH1, LL2, LH2, HL2 and HH2) for some sample images from the medical image database.
Table 1.2: Analysis Results of the Wavelet Transform Bands for Samples from the Medical Image Database
(Columns: No., Medical Image, LH1, HL1, HH1, LL2, LH2, HL2, HH2; rows 1-6 contain the band images of six sample medical images.)
5.2.2 FEATURE EXTRACTION BASED ON SEVEN MOMENT INVARIANT STAGE
In this stage, features are extracted from the medical image. The moment invariant technique extracts seven values as features; all details are given in section 3. These seven values are extracted from the medical image after the discrete wavelet transform has been applied, and the seven moment invariants are then computed for each of the seven bands. The feature extraction process is applied using the steps of Algorithm 1.2.
Algorithm (1.2) Feature Extraction Steps
Input: Medical image.
Output: Hu's moments for each medical image.
Step 1: Read the medical image data.
Step 2: Loop over the image width (For i = 0 to image.width - 1) and the image height (For j = 0 to image.height - 1).
Step 3: Calculate the moments of order (p + q) using equation (7).
Step 4: Compute the coordinates of the centre of mass using equations (9) and (10).
Step 5: Calculate the central moments (denoted by \mu_{pq}) of order (p + q) using equation (8); for easier calculation use equations (11) to (20).
Step 6: Calculate the normalized central moments (denoted by \eta_{pq}) using equations (21) and (22) for (\eta_{20}, \eta_{02}, \eta_{11}, \eta_{30}, \eta_{03}, \eta_{21}, \eta_{12}).
Step 7: Using the normalized central moments, calculate the seven moment invariants (φ1, φ2, φ3, φ4, φ5, φ6, φ7) with equations (23) to (29).
Step 8: Store the seven moment invariants (φ1, ..., φ7) in the training feature database for the medical image.
Step 9: If there are more medical images, repeat steps 1-8 for all medical images.
Step 10: End.
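Combining the earlier sketches (read_grayscale, haar_dwt2_two_level, hu_moments and normalize_features, all illustrative Python/NumPy code rather than the paper's Visual Basic 6.0 implementation), the 49-element feature vector of one medical image can be assembled as shown below. Taking the absolute value of the wavelet coefficients before computing the moments is our assumption, made so that m00 stays positive for the detail bands (the paper does not state how negative coefficients are handled):

```python
import numpy as np

BAND_ORDER = ('LH1', 'HL1', 'HH1', 'LL2', 'LH2', 'HL2', 'HH2')

def extract_feature_vector(path):
    """49 features: the seven Hu moment invariants of each of the seven subbands."""
    img = read_grayscale(path)                              # section 5.1 sketch
    bands = haar_dwt2_two_level(img)                        # Algorithm 1.1 (section 2.2 sketch)
    return np.concatenate([hu_moments(np.abs(bands[name]))  # assumption: use |coefficients|
                           for name in BAND_ORDER])

# Building and normalizing the training database of features (TRDBF):
# trdbf = np.vstack([extract_feature_vector(p) for p in image_paths])
# trdbf_normalized = normalize_features(trdbf)              # equation (30)
```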
In this stage, after the medical images have been analysed by the two-level wavelet transform, the normalized central moments are used to obtain the seven moment invariant feature vectors (φ1, ..., φ7) for each of the seven bands, and these are saved in the training database of features (TRDBF). Table 1.4 shows the extracted features (before normalization) for a sample from the medical image database.
Table 1.4: Extracted Features for a Sample from the Medical Image Database (Before Normalization)

Band | φ1 | φ2 | φ3 | φ4 | φ5 | φ6 | φ7
LH1 | 0.319139595003979 | 1.1232350620857E-03 | 2.83473496312047E-04 | 2.59233280139565E-06 | -1.72258940274752E-09 | 8.49003048041991E-08 | 2.84689461007154E-11
HL1 | 0.390501581892102 | 1.57097483311013E-03 | 1.02690867292295E-04 | 1.73249906681616E-05 | -6.9814478310683E-08 | -6.71954328477067E-07 | 6.18712860454209E-10
HH1 | 0.543182127493015 | 3.37284047796322E-04 | 4.76860907304783E-04 | 4.47026584029357E-05 | 4.43132264461421E-06 | -2.71702633494392E-07 | 3.49363151359528E-09
LL2 | 0.734091696755431 | 4.14731135858739E-02 | 8.07698933553286E-04 | 0.004155454879137 | 1.16400411919949E-05 | 8.27245636425744E-04 | -5.76242072077209E-06
LH2 | 0.202381142215875 | 7.89692175242363E-04 | 7.10975680759436E-05 | 2.82879455251568E-06 | -2.76759767129675E-08 | -6.93282451723336E-08 | 3.70503470575524E-11
HL2 | 0.196477544182135 | 1.31279485386223E-03 | 2.18017474603248E-05 | 1.32051263134786E-06 | -1.55688349808676E-08 | -3.93098962408772E-08 | -2.70577478543619E-13
HH2 | 0.233692835098198 | 2.14427881368654E-03 | 1.66330034589102E-04 | 3.52430166711076E-05 | 5.41514032642168E-11 | 7.70738498948925E-07 | -2.65254481108465E-09
In this sub-stage, after the seven moment invariant feature vectors have been extracted, normalization (f_norm) is performed by finding the maximum (f_max) and minimum (f_min) values of each feature over the samples in the training database of features (TRDBF). Table 1.5 shows the extracted (f_max, f_min) values for the feature vectors of the sample medical image database.
Table 1.5: Extracted (f_max, f_min) Values for the Features from the Sample Medical Image Database

 | φ1 | φ2 | φ3 | φ4 | φ5 | φ6 | φ7
f_max | 0.734091696755431 | 4.14731135858739E-02 | 8.07698933553286E-04 | 0.004155454879137 | 1.16400411919949E-05 | 8.27245636425744E-04 | 3.49363151359528E-09
f_min | 0.202381142215875 | 3.37284047796322E-04 | 7.10975680759436E-05 | 2.59233280139565E-06 | -6.9814478310683E-08 | -6.71954328477067E-07 | -5.76242072077209E-06

After finding the maximum (f_max) and minimum (f_min) values of each feature for all samples in the training database of features (TRDBF), the normalization process is applied. Table 1.6 shows the extracted features (after normalization) for the sample of the medical image database.

Table 1.6: Extracted Features from the Sample Medical Image Database (After Normalization)
Band | φ1 | φ2 | φ3 | φ4 | φ5 | φ6 | φ7
LH1 | 0.641 | 0 | 0.5509 | 0.0101 | 0.3844 | 0.0005 | 1
HL1 | 0.3538 | 0.03 | 0.0429 | 0.0035 | 0 | 0 | 0.9995
HH1 | 0.2196 | 0.0191 | 0.2883 | 0 | 0.0058 | 0.0009 | 0.9994
LL2 | 1 | 1 | 1 | 1 | 1 | 1 | 0
LH2 | 0 | 0.011 | 0 | 0.0001 | 0.0036 | 0.0007 | 0.9994
HL2 | 0 | 0 | 0 | 0 | 0 | 0 | 1
HH2 | 1 | 1 | 1 | 1 | 1 | 1 | 0
Figures 1.7 to 1.13 show the strongest features extracted from the seven moment invariants by comparing the features (φ1, φ2, φ3, φ4, φ5, φ6, φ7) for the sample medical image.
Figure 1.7: Comparison among seven moment invariants (φ1) of the discrete wavelet transform
Figure 1.8: Comparison among seven moment invariants (φ2) of the discrete wavelet transform
Figure 1.9: Comparison among seven moment invariants (φ3) of the discrete wavelet transform
Figure 1.10: Comparison among seven moment invariants (φ4) of the discrete wavelet transform
Figure 1.11: Comparison among seven moment invariants (φ5) of the discrete wavelet transform
Figure 1.12: Comparison among seven moment invariants (φ6) of the discrete wavelet transform
Figure 1.13: Comparison among seven moment invariants (φ7) of the discrete wavelet transform
6. THE COLLECTED DATA
In the proposed system, six types of tumors and the normal case of brain MRI images were used. Figure 1.14 presents samples of these types of brain MRI images. The proposed system was implemented on a real human MRI dataset; some images were obtained from hospitals and the rest were obtained from the Brain-Tumor-Progression dataset, available on the Internet, for these tumor types. The number of collected images is 140; they are 8-bit gray-level images (pixel values 0-255) of BMP, JPEG and PNG types, with various image sizes.
Figure 1.14: Samples of the different types of brain MRI images (Lymphoma, Glioblastoma multiforme, Cystic oligodendroglioma, Ependymoma, Meningioma, Anaplastic astrocytoma, and images from the Brain-Tumor-Progression dataset).
7. CONCLUSIONS
From the practical part of this work, the following conclusions can be drawn:
1. The proposed system is applicable to different image sizes and to different image file formats such as BMP, JPEG and PNG.
2. Using the wavelet transform to extract texture features leads to good results, because this technique is strong at extracting texture features from medical images. The wavelet transform used for feature extraction shows robustness against secondary variations in the image sample (i.e. scaling, lighting conditions and noise).
3. The seven moment invariants used for feature extraction show robustness because the features are invariant to scaling, translation and rotation of the image.
4. Normalization was applied to the features extracted from the seven moment invariants in order to reduce the features' range to a common scale.
8. SUGGESTIONS FOR FUTURE WORK
During the development of this work, many recommendations came to mind. In this context, some ideas that may be considered for further work on the proposed method are:
1. The use of other types of brain tumors, or of other parts of the human body.
2. The use of other neural network algorithms for tumor discrimination, such as probabilistic neural networks, Markov models, a back-propagation neural network, or a swarm intelligence model combined with the back-propagation algorithm; the use of a genetic algorithm is also suggested.
REFERENCES
[1] H. Zhu, "Medical Image Processing Overview", University of Calgary, pp. 1-27, 2003.
[2] AL-Taiyy A., "Medical Image Classification Using Kohonen Neural Network", M.Sc Thesis, Informatics Institute for
Postgraduate Studies at the Iraqi Commission for Computers, September, 2006.
[3] T. Acharya and A. Ray, "Image Processing: Principles and Applications", John Wiley & Sons, New Jersey, 2005.
[4] H. Khotanlou, "3D Brain Tumors and Internal Brain Structures Segmentation in MR images", Ph.D. Thesis, Ecole Nationale
Supérieure des Télécommunications, 2008.
[5] N. Abdullah, U. Ngah and S. Aziz, "Image Classification of Brain MRI Using Support Vector Machine", Imaging Systems and
Techniques (IST), IEEE International Conference, pp. 242-247, May, 2011.
[6] S. Bharadwaj, M. Vatsa and R. Singh, "Aiding Face Recognition with Social Context Association Rule based Re-Ranking", IIIT-Delhi, India, 2014.
[7] R. Kapoor, P. Mathur, "Face Recognition Using Moments and Wavelets", International Journal of Engineering Research and
Applications (IJERA), ISSN: 2248-9622, Vol. 3, Issue 4, pp. 82-95, Jul-Aug 2013.
[8]"Color Spaces", Apple Computer, Inc., 2006, Web Site: http://developer.apple.com/documentation/mac/ACI/ACI-3.html
[9] G. R. Kumar, G. A. Ramachandra and G. Sunitha, "An Evolutionary Algorithm for Mining Association Rules Using Boolean
Approach", IJCES International Journal of Computer Engineering Science, Volume1 Issue 3, December 2011.
[10] B.L. Zhang, H. Zhang and S. S. Ge, "Face Recognition by Applying Wavelet Subband Representation and Kernel
Associative Memory", IEEE Transactions on Neural Networks, Vol.15, No.1, January 2004.
[11] M. Mercimek, K. Gulez and T. V. Mumcu, "Real Object Recognition Using Moment Invariants", Sadhana, Vol. 30, Part 6, pp. 765-775, December 2005.
[12] C. Nastar, M. Mitschke, and C. Meilhac," Efficient Query Refinement For Image Retrieval", In Proceedings of IEEE
Conference on Computer Vision and Pattern Recognition, June 1998.
[13] I. N. Ibraheem, "Color Models Indexing for Content Based Image Retrieval System", Ph.D. Thesis, Department of
Computer Sciences of the University of Technology, 2009.