An automated palmprint recognition system

Tee Connie*, Andrew Teoh Beng Jin, Michael Goh Kah Ong, David Ngo Chek Ling

Faculty of Information Science and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia

Received 13 March 2004; received in revised form 10 November 2004; accepted 14 January 2005

Abstract

Recently, the biometric palmprint has received wide attention from researchers. It is well known for several advantages such as stable line features, low-resolution imaging, a low-cost capturing device, and user-friendliness. In this paper, an automated scanner-based palmprint recognition system is proposed. The system automatically captures and aligns the palmprint images for further processing. Several linear subspace projection techniques have been tested and compared. In particular, we focus on principal component analysis (PCA), Fisher discriminant analysis (FDA) and independent component analysis (ICA). In order to analyze the palmprint images in a multi-resolution, multi-frequency representation, wavelet transformation is also adopted. The images are decomposed into different frequency subbands and the best performing subband is selected for further processing. Experimental results show that applying FDA on a wavelet subband yields FAR and FRR as low as 1.356 and 1.492%, respectively, on our palmprint database.

© 2005 Elsevier B.V. All rights reserved.

Keywords: Biometric; Palmprint recognition; Palmprint pre-processing; Subspace projection methods; Similarity matching

1. Introduction

Recently, a new biometric feature based on the palmprint has been introduced. Palmprint recognition refers to the process of determining whether two palmprints are from the same person based on the line patterns of the palm. The palmprint comprises the principal lines, wrinkles and ridges that appear on the palm, as shown in Fig. 1. There are three principal lines on a typical palm, named the heart line, head line and life line, respectively. These lines are clear and they hardly change throughout the life of a person. Wrinkles are lines that are thinner than the principal lines and are more irregular. The lines other than the principal lines and wrinkles are known as ridges, and they exist all over the palm.

The palmprint serves as a reliable human identifier because the print patterns are not duplicated in other people, even in monozygotic twins. More importantly, the details of these patterns are permanent. The rich structure of the palmprint offers plenty of useful information for recognition. There are two popular approaches to palmprint recognition: the first is based on statistical features of the palmprint, while the other is based on structural features. For the statistics-based palmprint recognition approach, works in the literature include eigenpalms [1], fisherpalms [2], Gabor filters [3], the Fourier transform [4], and local texture energy [5].

Another important feature extraction approach is to extract structural information, such as principal lines and creases, from the palm for recognition. Funada et al. [6] devised a minutiae extraction method for palmprints, inspired by the fact that palmprints also contain minutiae, like fingerprints. Zhang and Shu [7] determined the datum points derived from the principal lines using a directional projection algorithm. These datum points were location and rotation invariant due to the stability of the principal lines. Unlike the work proposed by [7], Duta et al. [8] did not explicitly extract palm lines, but used only isolated points that lie along palm lines, as they deduced that feature point connectivity was not essential for matching purposes. As opposed to the work by [6], Chen et al. [4] recognized palmprints by using creases. Their work was motivated by the finding that some crease patterns are related to certain diseases. Another structural-based method, used by Wu et al. [10], implemented the fuzzy directional element energy feature (FEDDF), which originated from the idea of a Chinese character recognition method called the directional element feature (DEF). On the other hand, Han et al. [11] performed Sobel and morphological operations to extract palmprint structural features from the region of interest (ROI).

0262-8856/$ - see front matter © 2005 Elsevier B.V. All rights reserved.

doi:10.1016/j.imavis.2005.01.002

Image and Vision Computing 23 (2005) 501–515

www.elsevier.com/locate/imavis

* Corresponding author. Tel.: +60 252 3611.

E-mail address: tee.connie@mmu.edu.my (T. Connie).

In the statistical-features-based palmprint recognition approach, the palmprint image is treated as a whole for extraction, representation and comparison, so the recognition process is straightforward. However, as abundant textural details are ignored, the natural and structural information of the palmprint cannot be characterized. On the other hand, the structural approach can represent the palmprint structural features clearly. Besides, images of lower quality can be used for the structural approach, since lines can be detected at low resolution. However, this method is restricted by the complication of determining the primitives and placements of the line structures, and usually more computational power is required to match the line segments with the templates stored in the database. Each approach demonstrates its strengths and weaknesses, and the choice depends on the nature of the application: operational mode, processing speed, memory storage and quality of the image acquired.

In addition to the feature selection process, the image capturing method is another important factor to be evaluated. The palmprint recognition methods proposed by [4–10] utilized inked palmprint images. These approaches are able to provide high-resolution images and are suitable for methods which require fine-resolution images to extract lines, datum points and minutiae features. However, they are not suitable for online security systems, as two steps must be performed: ink the palmprints on paper, and then scan the paper to obtain digital images. Some recent works [3,10,12] used CCD-based digital cameras to capture palmprint images; the digital images acquired can be fed directly into a computer for computation. Another approach, proposed by [11], used a scanner as the acquisition device. The advantage of a scanner is that it is equipped with a flat glass that enables the users to flatten their palms properly on the glass, reducing bent-ridge and wrinkle errors. Some authors, like [4,10], fixed guidance pegs on the sensor's platform to limit the palm's shift and rotation. However, some users feel uncomfortable when their hand images are acquired this way, and this approach requires an additional peg-removal algorithm to remove the pegs from the hand image. The work introduced by [12] does not use fixed pegs, in order to increase the flexibility and user-friendliness of the system.

In this paper, an automated peg-free scanner-based palmprint recognition system is proposed. Two novel components are contained in the proposed system. First, a pre-processing module that automatically aligns palmprint images from a peg-free sensor is developed. This module segments the hand image from the background and extracts the center region of the palm for recognition. Second, a systematic comparison and analysis of three types of subspace projection techniques, namely principal component analysis, Fisher discriminant analysis and independent component analysis, using a standard palmprint database is presented. In order to analyze palmprint images in a multi-resolution, multi-frequency representation, the wavelet transformation is also adopted.

In the next section, an overview of the proposed palmprint recognition system is provided and each of the system's components is discussed in detail. Section 3 presents the experimental setup, as well as the results of this research. In Section 4, we make some concluding remarks. Finally, a review of PCA, FDA, ICA and wavelet transform theories is provided in Appendix A for the convenience of readers unfamiliar with these techniques.

2. Overview of system architecture

The proposed system is divided into two phases, namely the enrollment and verification phases, as shown in Fig. 2. The important tasks contained in the system include pre-processing, feature extraction and feature matching. In the pre-processing stage, the alignment and orientation of the hand images are corrected for use in the successive tasks. In the feature extraction stage, the most discriminating features of the palms are extracted for representation. Finally, in the feature matching stage, a comparison is performed and a decision is made as to whether two palmprint features are from the same person. The details of each of these components are discussed in the subsequent sections.

2.1. Pre-processing

In this system, no guidance pegs are fixed on the scanner's platform and the users are allowed to place their hands freely on the platform of the scanner when scanned. Thus, palmprint images with different sizes, shifts and rotations are produced. Therefore, a pre-processing algorithm has been developed to correct the orientation of the images and also to convert the palmprints into images of the same size. A successful pre-processing measure provides the foundation for both feature extraction and matching.

Fig. 1. The line patterns on the palmprint. The three principal lines on a typical palm: 1-heart line, 2-head line and 3-life line.

Before alignment and orientation correction are performed on the palmprint, a smaller region at the center of the palm, called the region of interest (ROI), is automatically extracted. The ROI is square in shape and contains sufficient information to represent the palmprint for further processing. Fig. 3 depicts the ROI of a palm.

We applied the salient-point detection algorithm proposed by Goh et al. [13] to obtain the three crucial points, v1, v2 and v3 (shown in Fig. 3), which are used to locate the ROI. First, an image thresholding technique is applied to segment the hand image from the background. The proposed technique can also detect fingernails and rings by analyzing the skin color of the hand. The hand image acquired is in 256-RGB colors with a stable grey background. The background can be segmented based on the values of the image's color components r, g and b, which represent red, green and blue, respectively. The image thresholding technique is shown in Eq. (1):

    C1(u,v) = 1 if |r(u,v) - b(u,v)| < T, and 0 otherwise    (1)

Eq. (1) is repeated for the settings |r(u,v) - g(u,v)| and |b(u,v) - g(u,v)|, yielding C2(u,v) and C3(u,v), respectively. The threshold value T is set to 50 to map all grey-level colors to white and other colors to black. The resultant binary pixel maps C1, C2 and C3 are ANDed to obtain the binary image, I:

    I(u,v) = C1(u,v) AND C2(u,v) AND C3(u,v),  1 <= u <= w, 1 <= v <= h    (2)

After that, the contour of the hand shape is obtained using the eight-neighborhood border tracing algorithm [14]. The process starts by scanning the pixels of the binary image from the bottom-left to the right. When the first black pixel is detected, the border tracing algorithm is initiated to trace the border of the hand in the clockwise direction. During the border tracing process, all the coordinates of the border pixels are recorded in order to represent the signature of the hand, f(i), where i is the array index. The hand signature is blocked into non-overlapping frames of 10 samples. Every frame is checked for the existence of stationary points, and in this way the valleys of the fingers, v1, v2 and v3, can be pinpointed. Based on the information of these three crucial points, the outline of the ROI can be obtained as follows:

1. The two valleys beside the middle finger, v1 and v2, are connected to form a reference line.
2. The reference line is extended to intersect the right edge of the hand.
3. The intersection point obtained in step (2) is used to find the midpoint, m1, based on the midpoint formula.
4. Steps (1) to (3) are repeated to find the other midpoint, m2, by using the valleys v2 and v3.
5. The two midpoints, m1 and m2, are connected to form the base line of the ROI.
6. Based on the principle of the geometrical square, where all four edges have equal length, the other two points, m3 and m4, needed to form the square outline of the ROI can be obtained (refer to Fig. 3).

Fig. 2. Block diagram of the proposed palmprint verification system.

Fig. 3. Outline of the region of interest (ROI) from the palm.
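The background segmentation of Eqs. (1) and (2) can be sketched in a few lines. This is an illustrative NumPy version, not the authors' code: grey background pixels (where all channel differences fall below T) come out white (1), and skin pixels come out black (0), which is what the border tracing step then scans for.

```python
import numpy as np

def segment_hand(rgb, T=50):
    """Pixel-wise thresholding after Eqs. (1)-(2): each of the three
    channel-difference maps C1, C2, C3 is 1 where the difference is
    below T, and the maps are then ANDed into the binary image I."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    c1 = (np.abs(r - b) < T).astype(np.uint8)
    c2 = (np.abs(r - g) < T).astype(np.uint8)
    c3 = (np.abs(b - g) < T).astype(np.uint8)
    return c1 & c2 & c3  # 1 = grey background (white), 0 = hand (black)
```

On a synthetic image, a uniform grey pixel maps to 1 and a skin-toned pixel (large red-blue difference) maps to 0, matching the description in the text.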

Fig. 4 shows some examples of the ROIs extracted from different individuals, obtained from both the right and left palms.

There is some variance in the locations of the base points, m1 and m2, used to obtain the ROI. This variance is caused by the different stretching degrees of the hand. Experimental statistics show that the average standard deviation of the location of the base points is approximately 2.462 pixels. However, the variance in the locations of the base points does not affect the feature extraction process much, as it only affects the capturing size of the outline of the ROI. Most of the information significant for the recognition task lies in the center of the ROI, thus small variations in the location of the base points will not jeopardize the system's performance. In fact, experimental results show that the system performs well using these ROI features.
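Step (6) of the ROI outline, completing the square from the two base points, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the direction of the perpendicular offset (toward the palm center) is an assumption here, as the paper only states that all four edges have equal length.

```python
import numpy as np

def square_roi(m1, m2):
    """Given the base points m1 and m2 (step 5 of the ROI outline),
    return the remaining two corners m3, m4 of the square by offsetting
    perpendicular to the base line by the base-line length."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    d = m2 - m1                      # base-line vector
    n = np.array([-d[1], d[0]])      # perpendicular vector, same length
    m3, m4 = m1 + n, m2 + n          # remaining corners of the square
    return m3, m4
```

For a horizontal base line from (0, 0) to (2, 0), this yields m3 = (0, 2) and m4 = (2, 2), so all four sides have equal length as required.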

From Fig. 4, it can be observed that the ROIs have different sizes, due to varying palm sizes. Usually men have larger palms than women; for example, the ROI of a man shown in Fig. 4(d) is larger than the ROI of a lady shown in Fig. 4(c). Besides the differences in size, the ROIs lie in various directions. Due to these inconsistencies, a preprocessing job is performed to align all the ROIs to the same location in their images.

First, the images are rotated to the upright position using the Y-axis as the rotation-reference axis. The next step is to convert the RGB ROI into a grayscale image. After that, as the sizes of the ROIs vary from hand to hand (depending on the sizes of the palms), they are resized to 150×150 pixels using bicubic interpolation.

The last procedure in the pre-processing stage is to normalize the palmprint images in order to smooth out noise and lighting effects. The normalization method deployed in this research follows the discussion by Shi et al. [15]. Let P(x, y) represent the pixel value at the coordinate (x, y), and let m and n be the image mean and variance, respectively. The normalized image is computed using the operation below:

    P'(x,y) = mt + b  if P(x,y) > m
    P'(x,y) = mt - b  otherwise,
    where b = sqrt( nt * {P(x,y) - m}^2 / n )    (3)

where mt and nt are the pre-set values for the mean and variance of the image. In this experiment, the values of mt and nt were both set to 10. Fig. 5(b) depicts the palmprint image after the normalization process.
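The normalization rule, shifting every pixel so that the image takes on a pre-set mean and variance, can be sketched as below. This is an illustrative NumPy version under the reconstruction of Eq. (3) given above, not the authors' code:

```python
import numpy as np

def normalize(img, mt=10.0, nt=10.0):
    """Normalization in the style of Eq. (3): pixels above the image
    mean m are mapped to mt + b, the rest to mt - b, where b scales the
    deviation |P - m| so the output has variance nt."""
    img = img.astype(float)
    m, n = img.mean(), img.var()
    b = np.sqrt(nt * (img - m) ** 2 / n)
    return np.where(img > m, mt + b, mt - b)
```

Because the mapping reduces to a linear rescaling of the deviation from the mean, the output image has mean mt and variance nt exactly, which is the intended effect of the pre-set values (both 10 in this experiment).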

2.2. Palmprint feature extractions

After the well-aligned ROIs are obtained from the pre-processing stage, we extract important features from the image for the recognition task. As discussed in Section 1, there are many approaches to achieve this purpose. In this paper, three subspace projection techniques are tested and compared. In particular, we use principal component analysis (PCA), Fisher discriminant analysis (FDA) and independent component analysis (ICA). Subspace projection is performed as a two-step process: constructing the subspace basis, followed by projecting the palmprint images into the compressed subspace. New test images are then projected into the same subspace for image matching. It is computationally more efficient to perform image matching in subspaces, as the dimensions have been reduced significantly. For example, an image with 22,500 pixels (150×150) might be projected into a subspace with only 20–60 dimensions.

Fig. 4. ROIs obtained from different individuals. They have different sizes and rotations. (a)–(d) depict ROIs from the right palms of four individuals, while (e)–(h) are ROIs from the left palms of another four individuals.

Fig. 5. Extracted ROI from the palm. (a) Palmprint image before normalization. (b) Palmprint after normalization.

2.2.1. Principal component analysis

PCA has been widely used for dimensionality reduction in computer vision [1,16,17]. It finds a set of orthogonal basis vectors which describe the major variations among the training images, with minimum mean-square reconstruction error. This is useful as it decreases the number of dimensions used to describe the set of images and also scales each variable according to its relative importance in describing the observations. The eigen bases generated from the set of palmprint images are shown in Fig. 6(a). As these bases have the same dimension as the original images and are palmprint-like in appearance, they are also called eigenpalms.
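The eigenpalm construction and the two-step projection described above can be sketched with an SVD. This is an illustrative sketch, not the paper's implementation; it assumes the palmprint images are flattened into the rows of a matrix X:

```python
import numpy as np

def pca_basis(X, k):
    """Build the top-k orthonormal 'eigenpalm' basis from training
    images (rows of X) and project the images onto it. The SVD of the
    mean-centered data yields the eigenvectors of the covariance
    matrix, ordered by decreasing variance."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]            # k x d eigenpalms
    coords = Xc @ basis.T     # n x k subspace coordinates
    return basis, coords, mean
```

A new test image y would be matched via the same projection, `(y - mean) @ basis.T`, so that comparisons happen in the 20–60-dimensional subspace rather than on 22,500 raw pixels.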

2.2.2. Fisher discriminant analysis

The successful implementation of PCA in various recognition tasks popularized the idea of matching images in compressed subspaces. FDA is another popular subspace projection technique, which computes a subspace that best discriminates among classes. It differs from PCA in that it deals directly with class separation, while PCA treats images in their entirety without considering the underlying class structure. The bases generated using FDA are also known as fisherpalms; some are depicted in Fig. 6(b).

2.2.3. Independent component analysis

While both PCA and FDA impose independence only up to the second order, there is also much interest in decorrelating higher-order statistics from the training images. ICA is one such approach, computing basis components that are statistically independent, or as independent as possible [18]. ICA was originally used to solve the blind source separation (BSS) problem. When applied to palmprint recognition, the palmprint images are considered mixtures of an unknown set of statistically independent source images, combined by an unknown mixing matrix. A separating matrix is learnt by ICA to recover a set of statistically independent basis images. The bases generated are spatially localized in various portions of the palmprint image, as shown in Fig. 6(c).

Fig. 6. The first five bases generated by (a) PCA, (b) FDA and (c) ICA.


2.2.4. Wavelet decomposition

Multiresolution analysis of the images is performed by using wavelet decomposition. In this paper, the wavelet transformation is integrated into the feature extractors as follows:

1. Decompose the palmprint image by using different families of wavelets.
2. Retain the low-frequency subband of the approximation coefficients.
3. Feed the reduced images into the {PCA | FDA | ICA} computation.
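Steps 1–2 can be sketched with a single-level Haar decomposition. This is a minimal stand-in for the wavelet families the paper actually compares, written here with plain averaging so it stays self-contained; it keeps only the low-frequency (LL) subband, which is a quarter of the original size:

```python
import numpy as np

def haar_ll(img):
    """One-level 2-D Haar-style decomposition keeping only the
    low-frequency (LL) subband: average adjacent pixel pairs along
    rows, then along columns. Output is half-size in each dimension."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    rows = (img[0::2, :] + img[1::2, :]) / 2.0   # low-pass along rows
    ll = (rows[:, 0::2] + rows[:, 1::2]) / 2.0   # low-pass along columns
    return ll
```

Applied to a 150×150 ROI this yields a 75×75 smoothed image, which is then fed to the PCA/FDA/ICA computation in step 3; the quarter-size input is where the computational saving mentioned below comes from.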

According to wavelet theory, most of the energy content is concentrated in the low-frequency subband, as compared to the higher-frequency subbands. The low-frequency subband is a smoothed version of the original image: it reduces the influence of noise on one hand and, on the other hand, preserves local edges well, which helps to capture features that are insensitive to small distortions. In contrast, the higher-frequency subbands contain only low energy content, and their high-pass nature tends to enhance edge detail, including noise and shape distortion.

The wavelet transform (WT) is selected over other filtering designs as it can decompose the palmprint images into multi-resolution subband images of different frequencies for analysis. In decomposing the image into lower-resolution images, WT conserves the signal energy and redistributes it into a more compact form. In addition, as the subband image is only a quarter of the size of the original image, the computational complexity can be reduced by working on a lower-resolution image. This makes WT distinguishable from other noise/resolution reduction techniques such as spatial filters with dyadic down-sampling.

For readers unfamiliar with PCA, FDA, ICA and wavelet transformation theories, a brief review of these methods can be found in Appendix A.

2.3. Feature matching/classification procedures

The identity of an individual can be verified through the feature matching or classification process. Each feature extraction algorithm produces a feature vector that is used for classification. The simplest classification method is based on the concept of similarity, where samples that are similar are assigned to the same class. Some popular similarity measures include the Manhattan (or city block), Euclidean and Mahalanobis distances. As ICA produces basis vectors that are not mutually orthogonal, the cosine distance measure is also employed here: with non-orthogonal basis vectors, the angles between images differ from their distances, so both carry useful information.
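The similarity measures named above can be sketched directly; this is an illustrative version operating on NumPy feature vectors, not the authors' code:

```python
import numpy as np

def l1(a, b):
    """Manhattan (city-block) distance between two feature vectors."""
    return np.abs(a - b).sum()

def l2(a, b):
    """Euclidean distance between two feature vectors."""
    return np.sqrt(((a - b) ** 2).sum())

def cosine_distance(a, b):
    """Cosine distance, suited to the non-orthogonal ICA bases:
    1 minus the cosine of the angle between the two vectors."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

In each case, a smaller value means the two palmprint feature vectors are more similar; verification then reduces to comparing the distance against the system threshold discussed in Section 3.2.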

Another classification approach is to construct decision boundaries directly by optimizing an error criterion. The artificial neural network (ANN) is one famous such technique. An ANN can generalize well on data it has not seen before and can take into account subtle differences between the modeled data, without the need to assume the type of relationship or the degree of nonlinearity between the various independent and dependent variables. In this research, the probabilistic neural network (PNN) is deployed. The PNN was first introduced by Specht [19,20] and it offers several advantages over the back-propagation network. Besides its generalization ability, the training speed of the PNN is much faster because the learning rule is simple and requires only a single pass through the training data. Most importantly, new training data can be added to a PNN at any time without the need to retrain the entire network [20–22]. This is an important factor in this research, as the system is to be extended to real-time application in the future.
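A minimal PNN in the spirit of Specht can be sketched as below. This is a hypothetical illustration, not the authors' implementation: each training sample contributes one Gaussian kernel (the "single pass" learning rule), each class score is the average kernel response, and adding new training data just means appending rows, with no retraining.

```python
import numpy as np

def pnn_classify(x, train, labels, sigma=1.0):
    """Classify feature vector x with a Gaussian-kernel PNN:
    pattern layer = one kernel per training sample, summation layer =
    per-class average response, decision = class with highest score."""
    d2 = ((train - x) ** 2).sum(axis=1)      # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer outputs
    classes = np.unique(labels)
    scores = [k[labels == c].mean() for c in classes]
    return classes[int(np.argmax(scores))]
```

The smoothing parameter sigma plays the role of the Parzen window width; it is the only value to tune, which is why training amounts to just storing the samples.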

3. Experiment and discussion

3.1. Experiment setup

In our research, a standard PC with an Intel Pentium III processor (1 GHz) and 256 MB of random access memory is used. Our input device is the Hewlett–Packard ScanJet 3500c optical scanner. A resolution of 150 dpi, with color output in 256-RGB format, is adopted when the hand images are scanned. The original size of the hand image is about 600×800 pixels, but only the region of interest (ROI) of the image is extracted and resized to 150×150 pixels.

The proposed methodology is tested on a modest-sized database containing palm images from 75 individuals. Thirty-seven of them are female, 46 are less than 30 years old, and three are more than 50 years old. The users come from different ethnic groups: 37 Chinese, followed by 24 Malays, 11 Indians, a Pakistani, an African and an Iranian. Most of them are students and lecturers from Multimedia University. To investigate how well the system can identify unclear or worn palmprints due to manual labour, we also invited ten cleaners to contribute their palmprint images.

The users are asked to present their palms at different directions and stretching degrees when scanned. They do not need to remove rings or other ornaments from their fingers when their hand images are taken. The users are allowed to rotate their palms within ±20°. If they fail to do so, an error is detected in the pre-processing module and they are requested to repeat the hand scanning process. Fig. 7 illustrates an example of an error detected during the image acquisition process, when the user's hand exceeds the permitted angle of rotation.


Each user was requested to provide six images each of their right and left hands, in different positions, on two occasions. The average interval between the two occasions is 21 days. Since the right and left palmprints of each person are different, both are captured and treated as palmprints from different users. Therefore, there are altogether 900 (75×6×2) palmprint images in our database. Among the six images from each palm, three are selected for training (enrollment) while the other three are used for testing. Fig. 8 illustrates some palmprint samples in our database.

3.2. Performance evaluation criteria

The results obtained in this paper are evaluated in terms of: (i) correct recognition rate and (ii) verification rate.

Fig. 7. An example of an error detected during the image acquisition process when the user's hand exceeds the permitted angle of rotation.

Fig. 8. Palmprint samples in the database.


The correct recognition rate represents the percentage of people that can be identified by the system. On the other hand, the verification rate is investigated using several measures such as the False Acceptance Rate (FAR), False Rejection Rate (FRR) and Equal Error Rate (EER). FAR is defined as

    FAR = (Number of accepted imposter claims / Total number of imposter accesses) × 100%    (4)

while FRR is defined as

    FRR = (Number of rejected genuine claims / Total number of genuine accesses) × 100%    (5)

The system threshold value is obtained based on the Equal Error Rate (EER) criterion, where FAR equals FRR. This is based on the rationale that both rates must be as low as possible for the biometric system to work effectively. Another performance measurement, called the Total Success Rate (TSR), is obtained from FAR and FRR. It represents the verification rate of the system and is calculated as follows:

    TSR = (1 - (FA + FR) / Total number of accesses) × 100%    (6)

where FA and FR are the numbers of false acceptances and false rejections, respectively.
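Eqs. (4)–(6) translate directly into code. The sketch below is illustrative, assuming raw counts of accepted-imposter and rejected-genuine attempts as inputs:

```python
def rates(false_accepts, imposter_total, false_rejects, genuine_total):
    """FAR, FRR and TSR per Eqs. (4)-(6), all in percent; the total
    number of accesses in Eq. (6) is the sum of imposter and genuine
    attempts."""
    far = false_accepts / imposter_total * 100.0
    frr = false_rejects / genuine_total * 100.0
    tsr = (1.0 - (false_accepts + false_rejects)
           / (imposter_total + genuine_total)) * 100.0
    return far, frr, tsr
```

For example, 2 false acceptances over 100 imposter attempts and 3 false rejections over 100 genuine attempts give FAR = 2%, FRR = 3% and TSR = 97.5%.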

3.2.1. Principal component analysis

In the first experiment, we investigate the performance of PCA using different numbers of principal components (or feature lengths), varying from 30 to 90. Experimental results show that a longer feature length leads to a higher recognition rate. Table 1 displays the correct recognition rates for the numbers of principal components that yield significant changes in the result. Several classifiers are used to assess the performance, namely the L1 and L2 distance measures, the cosine measure and the Probabilistic Neural Network (PNN).

Our experiment demonstrates that as the number of feature lengths/principal components increases, the correct recognition rate also increases. The performance peaks when 55 principal components are used. It is interesting to discover that the performance stabilizes, or even begins to decrease, after this point. PNN gives a correct recognition rate of 93.1% from 55 feature lengths onwards, while the other classifiers indicate that the performance deteriorates.

It can be anticipated that the classification accuracy of the methods will improve when a more sophisticated classifier is used. In this research, the PNN is used to show how much the result can improve with such a classifier.

The verification rates of PCA using various numbers of principal components are shown in Table 2. For this purpose we use the distance measure that maximized the performance, which is L2.

The palmprint verification method achieves its best result with FAR = 2.6% and FRR = 2.6%. Fig. 9 shows the Receiver Operating Characteristic (ROC) curves, which serve as a comparison among the performances of the different numbers of principal components.

Based on the experimental results, it can be concluded that the first few eigenpalms contain the largest variance directions in the learning set. In this way, we can find directions in which the learning set has the most significant amounts of energy. However, as the number of principal components increases, the representation tends to capture other, insignificant information such as noise, which decreases the performance of the system. In fact, the images generated using the higher-order eigenpalms show that they contain only noise and do not look like palms at all

Table 1
Correct recognition rates using different numbers of principal components

| Number of feature lengths | L1 measure (%) | L2 measure (%) | Cosine measure (%) | PNN (%) |
|---|---|---|---|---|
| 30 | 89.4 | 90.7 | 85.1 | 91.7 |
| 40 | 90.2 | 92.2 | 85.7 | 92.4 |
| 45 | 90.7 | 92.9 | 87.4 | 92.7 |
| 50 | 90.9 | 92.9 | 89.2 | 93.9 |
| 55 | 92.4 | 93.1 | 90.4 | 94.1 |
| 60 | 92.1 | 92.9 | 89.4 | 94.1 |
| 70 | 91.7 | 92.4 | 89.1 | 94.1 |
| 80 | 91.1 | 92.4 | 88.4 | 94.1 |
| 90 | 91.1 | 92.2 | 87.7 | 94.1 |

Table 2
Performance evaluation using different principal components

| Feature length | FAR (%) | FRR (%) | TSR (%) | EER (%) |
|---|---|---|---|---|
| 30 | 3.3 | 3.3 | 96.7 | 3.3 |
| 40 | 3.3 | 3.3 | 96.7 | 3.3 |
| 45 | 3.2 | 3.3 | 96.8 | 3.2 |
| 50 | 2.6 | 2.6 | 97.3 | 2.6 |
| 55 | 2.6 | 2.6 | 97.3 | 2.6 |
| 60 | 3.2 | 3.3 | 96.7 | 3.2 |
| 70 | 3.3 | 3.3 | 96.7 | 3.3 |
| 80 | 3.3 | 3.3 | 96.7 | 3.3 |
| 90 | 3.7 | 4.0 | 96.3 | 3.8 |

Fig. 9. ROC curve that serves as a comparison among the performance of

the different principal components.

T. Connie et al. / Image and Vision Computing 23 (2005) 501–515

(refer to Fig. 10). From this, we deduce that in our palmprint database, 20% of the principal components account for true correlation effects, while the rest correspond to small trailing eigenvalues. The discarded dimensions are therefore the ones along which the variance in the data distribution is least, and they fail to capture enough information for representation.
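The observation that trailing eigenpalms carry little energy can be checked numerically. The following sketch (toy random data standing in for the actual 150 × 150 palm images; not the authors' code) computes the eigenpalms via an SVD of the centred data and the fraction of variance each explains:

```python
import numpy as np

rng = np.random.default_rng(0)
M, p = 40, 64                      # M images of p pixels (toy sizes)
images = rng.normal(size=(M, p))

mean_palm = images.mean(axis=0)    # average palm, cf. Eq. (A.1)
F = images - mean_palm             # difference vectors f_n
# SVD of the centred data yields the eigenpalms (rows of Vt)
# without forming the covariance matrix C explicitly.
U, s, Vt = np.linalg.svd(F, full_matrices=False)
explained = s**2 / np.sum(s**2)    # variance fraction per eigenpalm (descending)

K = 10                             # keep the K leading eigenpalms
weights = F @ Vt[:K].T             # projection coefficients, cf. Eq. (A.4)
```

Plotting `explained` for real palm data would show the steep decay that justifies truncating at a small K.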

3.2.2. Fisher discriminant analysis

To investigate the performance of FDA, the correct

recognition and verification rates are displayed in Tables 3

and 4, respectively. Again, four classifiers (L1 and L2 distance measures, Mahalanobis distance and PNN) are used.

Experimental results show that FDA achieves correct recognition rates of 97.7, 95.2 and 95.7% using PNN, L1 and L2 measures, respectively. A verification rate of 98.1% could be attained using the L2 metric.

A comparative experiment between FDA/L2 and PCA/L2 has been conducted in this research, by considering their representative ROC curves. Fig. 11 compares the performance of FDA and PCA.

As expected, the experimental results show that FDA performs better than PCA. While PCA maximizes the total scatter for data representation, FDA takes into account the within- and between-class scatter for classification: it maximizes the between-class scatter while minimizing the within-class scatter.

In choosing the projection that maximizes the total scatter, PCA retains some unwanted variations [23]. FDA, on the other hand, provides more class separability by building a decision region between the classes. It is therefore unsurprising that FDA outperforms PCA, as it transforms the samples into the 'best separable space', focusing on extracting the most discriminant features.

Another reason is that the palmprint images in our database contain relatively low within-class variation. This is due to the scanner-based acquisition approach, in which the captured images are nearly unaffected by changes in ambient illumination. With this low within-class variability, and with sufficiently high between-class variability, the discriminant ability of FDA is boosted in our experiment.
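The within- versus between-class scatter argument can be made concrete. The sketch below (toy data; the class separation, dimensions and sample counts are invented) builds the scatter matrices as in Eqs. (A.9)–(A.10) and solves the generalized eigenproblem of Eq. (A.11):

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (S_W) and between-class (S_B) scatter of row-vector samples."""
    mean_total = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_total)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

rng = np.random.default_rng(1)
# Two classes with low within-class and high between-class variation,
# mimicking the favourable structure described in the text.
X = np.vstack([rng.normal(0.0, 0.1, (20, 3)), rng.normal(3.0, 0.1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
Sw, Sb = scatter_matrices(X, y)
# Generalized eigenproblem S_B v = lambda S_W v, solved via inv(S_W) S_B.
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = vecs[:, np.argmax(vals.real)].real   # single Fisher direction (c - 1 = 1)
```

Projecting both classes onto `w` separates them widely, which is exactly the effect the low within-class variability of scanner-captured palmprints has on FDA.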

3.2.3. Independent component analysis

Many criterion functions for ICA have been proposed [24], based on different search criteria. In this research, the InfoMax algorithm is deployed; a complete InfoMax implementation in Matlab code is publicly available at http://ergo.ucsd.edu/~marni/. We follow the experimental setup adopted by Bartlett et al. [25] and set the InfoMax parameters as follows:

- Block size: 50
- Learning rate: the initial learning rate was set to 0.001. After 1000 iterations, it was reduced to 0.0005, 0.00025 and 0.0001 every 200 epochs subsequently.
- Total number of iterations: 1600

Following the discussion in [23], we first apply PCA to

project the data into a subspace of dimension 55 to control

the number of independent components generated by ICA.

The InfoMax algorithm is then applied to the eigenvectors

to minimize the statistical dependence among the resulting

basis images.
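For intuition, the whitening-plus-InfoMax pipeline can be sketched in a few lines. This is a minimal toy version, not the Matlab code the paper uses: the sources, mixing matrix, block size and learning rate are illustrative, and it shows the centring and whitening of Eqs. (A.15)–(A.16) followed by Bell–Sejnowski natural-gradient updates:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two super-Gaussian toy sources, linearly mixed.
S = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S
X = X - X.mean(axis=1, keepdims=True)     # centring, Eq. (A.15)

d, R = np.linalg.eigh(np.cov(X))          # eigendecomposition of the covariance
V = np.diag(d ** -0.5) @ R.T              # whitening transform, Eq. (A.16)
Z = V @ X                                 # whitened data (unit covariance)

W = np.eye(2)                             # unmixing matrix learned by InfoMax
lr, block = 0.001, 50
for _ in range(1600):
    idx = rng.choice(Z.shape[1], size=block, replace=False)
    u = W @ Z[:, idx]
    y = 1.0 / (1.0 + np.exp(-u))          # logistic nonlinearity
    # Natural-gradient InfoMax update (Bell & Sejnowski, 1995).
    W += lr * (np.eye(2) + (1.0 - 2.0 * y) @ u.T / block) @ W

recovered = W @ Z                         # estimated sources, cf. Eq. (A.14)
```

The whitening step alone already decorrelates the data; the InfoMax iterations then rotate it toward statistically independent components.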

Since the ICA basis vectors are not mutually orthogonal, the cosine distance measure is often used to retrieve images in ICA subspaces. Tables 5 and 6 present the performance results using the cosine, L1 and L2 measures, as well as PNN.

Fig. 10. Eigenpalms generated using different feature lengths: (a) the 55th principal component; (b) the 70th; (c) the 80th; (d) the 90th; (e) the 100th.

Table 3
Correct recognition rates of FDA using L1 and L2 measures, Mahalanobis distance and PNN

| Classifier | Correct recognition rate (%) |
|---|---|
| L1 measure | 95.2 |
| L2 measure | 95.7 |
| Mahalanobis distance | 94.0 |
| PNN | 97.7 |

Table 4
Standard error rates of FDA using the L2 measure

| FAR (%) | FRR (%) | TSR (%) | EER (%) |
|---|---|---|---|
| 1.9 | 2.0 | 98.1 | 1.9 |


As expected, PNN clearly outperforms the other measures, yielding a correct recognition rate of 97%. Among the distance measures, the cosine measure performs slightly better than the L1 and L2 measures, providing a 95.7% correct recognition rate. For verification, 98.0% is achieved using the cosine measure.

A comparison of ICA/cosine with its FDA/L1 and PCA/L1 counterparts has been made by plotting their respective ROCs in Fig. 12.

Our results show that ICA performs considerably better than PCA, but does not provide a significant advantage over FDA. Although the idea of computing higher-order moments using ICA is attractive, the assumption that palmprint images comprise a set of independent basis images is not intuitively clear. Moreover, given that palmprint texture is made up of many crossing and overlapping ridges, recognition using localized features is not suitable. In addition, some lines are so thin and faint that they are simply ignored by the feature extraction algorithm, which further confounds the performance of ICA.

To evaluate the computational load of the three methods, the time needed for training and testing (in the identification stage) the 450 templates in our database is recorded for each method and displayed in Table 7.

According to the recorded measurements, the computational burden of ICA, approximately half an hour to compute the basis vectors for 450 images, is much greater than that of PCA and FDA. The reason lies in the large number of iterations needed to refine the ICA basis. On the other hand, there is no significant difference in the time taken by the algorithms for testing.

In order to reduce the computational time required, we decided to decompose the images into lower resolution before further processing. For this, we employ the wavelet transformation for dimension reduction. The result of applying the wavelet transformation to the images is provided in the next section.

3.2.4. Wavelet transformation

In feature extraction tasks, the commonly adopted approach is to select the subband image that contains the highest energy distribution. Therefore, the low-frequency subband is selected over the high-frequency subbands for palmprint structure representation in our research. To justify this selection, we have adopted three wavelet bases, namely Haar, Daubechies order 2 and Symmlet order 2 (using first-level decomposition), to demonstrate that the highest energy distribution is indeed contained in the low-frequency subband and that it yields the highest verification rate. Table 8 illustrates the relationship between the energy distribution and the verification rate for the three wavelet bases on our database.
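The energy argument behind Table 8 is easy to reproduce. The sketch below (a smooth synthetic image stands in for a palmprint ROI; sizes and noise level are invented) performs one level of the 2-D Haar transform and reports each subband's share of the total energy:

```python
import numpy as np

def haar_level1(img):
    """One level of the orthonormal 2-D Haar transform:
    returns (LL, D-horizontal, D-vertical, D-diagonal) subbands."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0          # low-frequency approximation
    dh = (a - b + c - d) / 2.0          # horizontal detail
    dv = (a + b - c - d) / 2.0          # vertical detail
    dd = (a - b - c + d) / 2.0          # diagonal detail
    return ll, dh, dv, dd

def energy_share(subbands):
    e = np.array([np.sum(s ** 2) for s in subbands])
    return 100.0 * e / e.sum()

rng = np.random.default_rng(3)
# Smooth toy "palm" image: a slowly varying surface plus faint noise.
x = np.linspace(0.0, 1.0, 64)
img = np.outer(np.sin(2 * x), np.cos(3 * x)) + 0.01 * rng.normal(size=(64, 64))
shares = energy_share(haar_level1(img))
```

For smooth image content the LL subband dominates the energy budget, mirroring the ~99% figures in Table 8.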

Based on the results in Table 8, it is evident that the low-frequency subband contains the highest energy distribution and yields the better recognition rate. Our finding contradicts the work of Feng et al. [26], which claimed that the highest-energy subband does not necessarily give the best recognition accuracy. In fact, our palmprint

Fig. 11. Comparison between PCA and FDA by using ROC curve.

Table 5
Correct recognition rates of ICA

| Classifier | Correct recognition rate (%) |
|---|---|
| L1 measure | 95.0 |
| L2 measure | 95.0 |
| Cosine measure | 95.7 |
| PNN | 97.0 |

Table 6
Standard error rates of ICA using the cosine measure

| FAR (%) | FRR (%) | TSR (%) | EER (%) |
|---|---|---|---|
| 2.0 | 2.0 | 98.0 | 2.0 |

Fig. 12. ROC curves comparing ICA/cosine against FDA/L1 and PCA/L1.


database shows that all the highest-energy subbands obviously outperform the other subband images. Holding to this principle, we focus on the low-frequency subband for subsequent analysis.

We first integrate WT with PCA with the systematic use of different wavelet families; the integration of WT with PCA is denoted WPCA for brevity. Three decomposition levels were tested. The decomposition reduces the images from 150 × 150 pixels to 79 × 79, 44 × 44 and 26 × 26 pixels at the first, second and third levels, respectively (as we use the Matlab Wavelet Toolbox, the decomposed sizes obtained are not exactly half the original image size, as in other ordinary cases). Although the images could be decomposed further into lower resolution, we stop at the third level, on the rationale that coarser resolutions contain less useful information for the recognition task. Table 9 presents the performance of the different wavelet bases with PCA.
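The decomposed sizes quoted above match MATLAB's `dwt` convention, where one level with a length-L filter maps a signal of length n to floor((n + L − 1)/2). Assuming a length-10 filter (e.g. Daubechies order 5; the paper tests several bases), the sequence 150 → 79 → 44 → 26 follows directly:

```python
def dwt_len(n, filter_len):
    """Per-dimension output length of one MATLAB-style dwt level:
    floor((n + filter_len - 1) / 2)."""
    return (n + filter_len - 1) // 2

sizes = [150]
for _ in range(3):
    sizes.append(dwt_len(sizes[-1], 10))   # length-10 filter, e.g. db5
# sizes -> [150, 79, 44, 26]
```

This is why the subbands are slightly larger than half-size: the default symmetric boundary extension pads the signal by the filter length before downsampling.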

Different wavelet bases exhibit different performance. We observe that Daubechies order 5 at level 2 yields the best performance, giving FAR, FRR, TSR and EER of 2.9596, 3, 97.04 and 2.9798%, respectively. In general, level 3 decomposition performs worse than the other two levels. This is because the downsampling process eliminates fine feature structures in the coarser images, which in turn lowers the discriminant power of the WPCA features. This finding confirms our decision to stop at level 3 decomposition.

Table 7
CPU time used to calculate PCA, FDA and ICA (measured in seconds)

| | PCA (55 feature lengths) | FDA | ICA |
|---|---|---|---|
| Training time (s) | 72.7 | 141.6 | 1655.8 |
| Testing time (s) | 0.4 | 0.4 | 0.5 |

Table 8
Relationship between the energy distributions and verification rates of the four subband images (L1: low-frequency subband; D1: high-frequency detail subbands in the horizontal, vertical and diagonal orientations)

Haar wavelet basis (level 1):

| | L1 | D1 horizontal | D1 vertical | D1 diagonal |
|---|---|---|---|---|
| Energy distribution (%) | 99.348 | 0.060 | 0.080 | 0.003 |
| FAR (%) | 8.4 | 24.9 | 24.1 | 40.7 |
| FRR (%) | 9.0 | 27.0 | 25.0 | 41.0 |
| TSR (%) | 91.6 | 75.0 | 75.9 | 59.2 |
| EER (%) | 8.7 | 25.5 | 24.5 | 40.8 |

Daubechies order 2 level 1 wavelet basis:

| | L1 | D1 horizontal | D1 vertical | D1 diagonal |
|---|---|---|---|---|
| Energy distribution (%) | 99.830 | 0.062 | 0.058 | 0.041 |
| FAR (%) | 1.5 | 29.4 | 28.6 | 46.4 |
| FRR (%) | 1.6 | 28.0 | 30.0 | 41.0 |
| TSR (%) | 98.4 | 70.6 | 71.4 | 53.6 |
| EER (%) | 1.5 | 28.7 | 29.3 | 43.7 |

Symmlet order 2 level 1 wavelet basis:

| | L1 | D1 horizontal | D1 vertical | D1 diagonal |
|---|---|---|---|---|
| Energy distribution (%) | 99.847 | 0.069 | 0.077 | 0.007 |
| FAR (%) | 1.5 | 28.6 | 29.4 | 46.4 |
| FRR (%) | 1.6 | 30.0 | 28.0 | 41.0 |
| TSR (%) | 98.4 | 74.4 | 70.6 | 53.6 |
| EER (%) | 1.5 | 29.3 | 28.7 | 43.7 |

Table 9
Comparative results of using the different wavelet bases on PCA

| Filter | Decomposition level | FAR (%) | FRR (%) | TSR (%) | EER (%) |
|---|---|---|---|---|---|
| Haar | 1 | 2.9 | 2.6 | 97.0 | 2.7 |
| Haar | 2 | 4.1 | 4.0 | 95.9 | 4.0 |
| Haar | 3 | 3.9 | 4.0 | 96.1 | 3.9 |
| Daubechies 4 | 1 | 2.2 | 2.6 | 97.7 | 2.4 |
| Daubechies 4 | 2 | 2.9 | 3.0 | 97.0 | 2.9 |
| Daubechies 4 | 3 | 2.9 | 3.0 | 97.0 | 2.9 |
| Daubechies 5 | 1 | 2.6 | 2.6 | 97.3 | 2.6 |
| Daubechies 5 | 2 | 2.9 | 3.0 | 97.0 | 2.9 |
| Daubechies 5 | 3 | 2.9 | 3.0 | 97.0 | 2.9 |
| Daubechies 6 | 1 | 2.6 | 2.6 | 97.3 | 2.6 |
| Daubechies 6 | 2 | 2.9 | 3.0 | 97.0 | 2.9 |
| Daubechies 6 | 3 | 2.9 | 3.0 | 97.0 | 2.9 |
| Symmlet 6 | 1 | 2.6 | 2.6 | 97.3 | 2.6 |
| Symmlet 6 | 2 | 2.9 | 3.0 | 97.0 | 2.9 |
| Symmlet 6 | 3 | 2.9 | 3.0 | 97.0 | 2.9 |
| Symmlet 7 | 1 | 2.6 | 2.6 | 97.3 | 2.6 |
| Symmlet 7 | 2 | 3.9 | 4.0 | 96.0 | 3.9 |
| Symmlet 7 | 3 | 2.9 | 3.0 | 97.0 | 2.9 |
| Symmlet 8 | 1 | 2.6 | 2.6 | 97.3 | 2.6 |
| Symmlet 8 | 2 | 4.0 | 4.0 | 95.9 | 4.0 |
| Symmlet 8 | 3 | 4.0 | 4.0 | 95.9 | 4.0 |

Table 10
Wavelet transformation on FDA

| Filter | Decomposition level | FAR (%) | FRR (%) | TSR (%) | EER (%) |
|---|---|---|---|---|---|
| Daubechies 2 | 1 | 1.4 | 1.4 | 98.5 | 1.4 |
| Daubechies 2 | 2 | 2.4 | 2.9 | 97.5 | 2.7 |
| Daubechies 2 | 3 | 4.5 | 4.4 | 95.4 | 4.5 |
| Daubechies 3 | 1 | 1.3 | 1.4 | 98.6 | 1.4 |
| Daubechies 3 | 2 | 1.4 | 1.4 | 98.5 | 1.4 |
| Daubechies 3 | 3 | 4.5 | 4.4 | 95.4 | 4.5 |
| Daubechies 4 | 1 | 1.5 | 1.4 | 98.4 | 1.5 |
| Daubechies 4 | 2 | 2.9 | 2.9 | 97.0 | 2.9 |
| Daubechies 4 | 3 | 3.0 | 2.9 | 96.9 | 3.0 |
| Symmlet 5 | 1 | 1.3 | 1.4 | 98.6 | 1.4 |
| Symmlet 5 | 2 | 2.5 | 2.9 | 97.4 | 2.7 |
| Symmlet 5 | 3 | 3.9 | 4.4 | 96.0 | 4.2 |
| Symmlet 6 | 1 | 1.4 | 1.4 | 98.5 | 1.4 |
| Symmlet 6 | 2 | 3.0 | 2.9 | 96.9 | 3.0 |
| Symmlet 6 | 3 | 4.4 | 4.4 | 95.5 | 4.4 |
| Symmlet 7 | 1 | 1.4 | 1.4 | 98.5 | 1.4 |
| Symmlet 7 | 2 | 1.5 | 1.4 | 98.4 | 1.5 |
| Symmlet 7 | 3 | 2.9 | 2.9 | 97.0 | 2.9 |
| Symmlet 8 | 1 | 1.3 | 1.4 | 98.6 | 1.4 |
| Symmlet 8 | 2 | 2.8 | 2.9 | 97.1 | 2.9 |
| Symmlet 8 | 3 | 4.5 | 4.4 | 95.4 | 4.5 |
| Symmlet 9 | 1 | 1.3 | 1.4 | 98.6 | 1.4 |
| Symmlet 9 | 2 | 2.3 | 1.4 | 97.6 | 1.6 |
| Symmlet 9 | 3 | 4.3 | 4.4 | 95.6 | 4.4 |


Similarly, WT can be combined with FDA, giving rise to another term, WFDA. Table 10 shows the performance of the different wavelet bases with FDA. Experimental results show that Daubechies order 3 level 1, Symmlet order 8 level 1 and Symmlet order 9 level 1 are most suitable for FDA computation: each yields 1.3, 1.4, 98.6 and 1.4% for FAR, FRR, TSR and EER, respectively.

For comparison, ICA is also combined with the wavelet transformation, yielding another term, WICA. The results of WICA are shown in Table 11. In WICA, Symmlet order 2 level 1 yields the best result, providing 1.9, 2, 98 and 1.9% for FAR, FRR, TSR and EER, respectively.

After testing WT with the three subspace projection techniques, a comparative test is conducted among PCA, WPCA, FDA, WFDA, ICA and WICA. Fig. 13 depicts this comparison using their respective ROC curves. It can be observed that WT improves the overall performance of the original techniques: WT not only reduces the image size but also increases performance, by properly accounting for global features without losing key local features. Among all the methods, WFDA gives the highest verification rate of 98.6418%, with FAR and FRR as low as 1.3569 and 1.4925%, respectively.

4. Conclusion

We have developed a scanner-based palmprint recognition system to automatically authenticate the identity of an individual based on biometric palmprint features. The proposed system is reliable and user-friendly, as it provides high recognition accuracy and offers a convenient acquisition process.

Our experiments suggest a number of conclusions:

1. The holistic statistical analysis approach is very suitable for the palmprint authentication task. It is faster, less computationally intensive and less prone to errors in the feature extraction task, since few a priori assumptions are made about the nature of the palmprint.

2. The position and scale of the palmprint are critical to the success of the template-based approach, and the alignment of the training images is decisive. We stress that the good performance of the palmprint recognition method depends on the precision of the preprocessing step.

3. For the feature extraction stage, ICA does not provide a significant advantage over FDA. It is thus not intuitively clear that palm images comprise a set of independent basis images for recognition. This also suggests that the localized feature bases provided by ICA may not be suitable for representing the crossing and overlapping ridge structures of the palmprint.

4. The intrinsic structure of the modeled data can boost the performance of FDA. Images with low within-class variability and sufficiently high between-class variability have proven able to increase FDA's performance.

5. As WT conserves energy and redistributes it into a more compact form, performing operations in the wavelet domain and then reconstructing the result is more efficient than performing the same operations in the standard feature space. At the same time, the memory burden is reduced.

In this paper, systematic testing and analysis have been conducted using a modest palmprint database. We are currently investigating how well the subspace projection techniques perform when extended to a large database.

Table 11
Wavelet transform on ICA

| Filter | Decomposition level | FAR (%) | FRR (%) | TSR (%) | EER (%) |
|---|---|---|---|---|---|
| Symmlet 2 | 1 | 1.9 | 2.0 | 98.0 | 1.9 |
| Symmlet 2 | 2 | 2.0 | 2.0 | 98.0 | 2.0 |
| Symmlet 2 | 3 | 2.0 | 2.0 | 98.0 | 2.0 |
| Symmlet 3 | 1 | 2.0 | 2.0 | 98.0 | 2.0 |
| Symmlet 3 | 2 | 1.9 | 2.0 | 98.0 | 1.9 |
| Symmlet 3 | 3 | 1.9 | 2.0 | 98.0 | 1.9 |
| Symmlet 4 | 1 | 2.0 | 2.0 | 97.9 | 2.0 |
| Symmlet 4 | 2 | 1.9 | 2.0 | 98.0 | 1.9 |
| Symmlet 4 | 3 | 2.0 | 2.0 | 98.0 | 2.0 |
| Symmlet 5 | 1 | 1.9 | 2.0 | 98.0 | 1.9 |
| Symmlet 5 | 2 | 1.9 | 2.0 | 98.0 | 1.9 |
| Symmlet 5 | 3 | 2.0 | 2.0 | 97.9 | 2.0 |
| Symmlet 6 | 1 | 2.0 | 2.0 | 98.0 | 2.0 |
| Symmlet 6 | 2 | 2.0 | 2.0 | 98.0 | 2.0 |
| Symmlet 6 | 3 | 3.1 | 3.0 | 96.8 | 3.0 |

Fig. 13. ROC curves comparing the performances of PCA, WPCA, FDA, WFDA, ICA and WICA.


We believe that palmprint recognition is a very promising area of exploration, and that it will become an important complement to existing biometric technologies.

Appendix A. Subspace projection techniques

In this appendix, we provide a brief review of the basic PCA, FDA, ICA and wavelet transformation theories. For more detailed explanations of these topics, please refer to [23,24,27].

A.1. Principal component analysis

Let us consider a set of M palmprint images, $i_1, i_2, \ldots, i_M$. The average palm of the set is defined as

$$\bar{i} = \frac{1}{M}\sum_{j=1}^{M} i_j \qquad (A.1)$$

Each palmprint image differs from the average palm $\bar{i}$ by the vector $f_n = i_n - \bar{i}$. A covariance matrix is constructed:

$$C = \sum_{j=1}^{M} f_j f_j^{T} \qquad (A.2)$$

Then the eigenvectors $v_k$ and eigenvalues $\lambda_k$ of the symmetric matrix $C$ are calculated. Each $v_l$ determines a linear combination of the $M$ difference images to form the eigen bases:

$$b_l = \sum_{k=1}^{M} v_{lk} f_k, \qquad l = 1, \ldots, M \qquad (A.3)$$

From these eigen bases, the $K\,(<M)$ bases corresponding to the $K$ largest eigenvalues are selected. The set of palmprint images is transformed into its eigen-basis components (projected into the palm space) by the operation

$$u_{nk} = b_k (i_n - \bar{i}), \qquad n = 1, \ldots, M,\; k = 1, \ldots, K \qquad (A.4)$$

The weights obtained form a vector $U_n = [u_{n1}, u_{n2}, \ldots, u_{nK}]$ that describes the contribution of each eigen basis in representing the input palm image, treating the eigen bases as a basis set for the palm images.

A.2. Fisher discriminant analysis

Consider a set of M palmprint images with c classes, each class containing n images $i_1, i_2, \ldots, i_n$. Let the mean of the images in each class and the total mean of all images be $\tilde{\mu}_c$ and $\mu$, respectively. The images in each class are centered as

$$f^c_n = i^c_n - \tilde{\mu}_c \qquad (A.5)$$

and the class means are centered as

$$u_c = \tilde{\mu}_c - \mu \qquad (A.6)$$

The centered images are combined side by side into a data matrix, from which an orthonormal basis $U$ is obtained by calculating the full set of eigenvectors of the covariance matrix $f^{cT}_n f^c_n$. The centered images are then projected into this orthonormal basis:

$$\hat{f}^c_n = U^T f^c_n \qquad (A.7)$$

The centered means are also projected into the orthonormal basis:

$$\hat{u}_c = U^T u_c \qquad (A.8)$$

Based on this information, the within-class scatter matrix $S_W$ is calculated as

$$S_W = \sum_{j=1}^{c} \sum_{k=1}^{n_j} \hat{f}^j_k \hat{f}^{jT}_k \qquad (A.9)$$

and the between-class scatter matrix $S_B$ as

$$S_B = \sum_{j=1}^{c} n_j\, \hat{u}_j \hat{u}_j^{T} \qquad (A.10)$$

The generalized eigenvectors $V$ and eigenvalues $\lambda$ of the within- and between-class scatter matrices are solved from

$$S_B V = \lambda S_W V \qquad (A.11)$$

The eigenvectors are sorted according to their associated eigenvalues, and the first $c-1$ eigenvectors are kept as the Fisher basis vectors $W$. The rotated images $a_n = U^T i_n$ are projected into the Fisher basis by

$$\omega_{nk} = W^T a_n, \qquad n = 1, \ldots, M,\; k = 1, \ldots, c-1 \qquad (A.12)$$

The weights obtained form a vector $Y_n = [\omega_{n1}, \omega_{n2}, \ldots, \omega_{nK}]$ that describes the contribution of each fisherpalm in representing the input palm image, treating the fisherpalms as a basis set for the palm images.

A.3. Independent component analysis

Let s be the vector of unknown source images and x the vector of observed mixtures. If A is the unknown mixing matrix, the mixing process is written as

$$x = A s \qquad (A.13)$$

The goal of ICA is to find the separating matrix W such that

$$\hat{s} = W x \qquad (A.14)$$

However, there is no closed-form expression for W. Instead, iterative algorithms are used to approximate W so as to optimize the independence of $\hat{s}$; the vector $\hat{s}$ is thus an estimate of the true source s. In this research, the InfoMax principle, which was derived from a neural network perspective, is deployed [28].

Sometimes it is expedient to work in a lower dimensionality, and preprocessing steps can be applied to x to reduce the dimension of the space. There are two common preprocessing steps in ICA. The first is to center the images,

$$\hat{x} = x - E\{x\} \qquad (A.15)$$

so that $E\{\hat{x}\} = 0$; this lets ICA deal only with zero-mean data. The next step is to apply a whitening transform V to the data,

$$V = D^{-1/2} R^{T} \qquad (A.16)$$

where D holds the eigenvalues on its diagonal and R the orthogonal eigenvectors of the covariance matrix of $\hat{x}$. The whitening process decorrelates the data so that ICA can work with unit-variance components.

In this research, in order to reduce the number of independent components produced by ICA, PCA is first applied to project the data into a subspace of dimension m, as described by Bartlett et al. [25]. The InfoMax algorithm is then applied to the eigenvectors to minimize the statistical dependence among the resulting basis vectors. The pre-application of PCA discards small trailing eigenvalues before whitening and reduces the computational complexity by minimizing pairwise dependency [29].

Let the input to ICA, V, be a p by m matrix, where p is the number of pixels in a training image and the m columns are the first m eigenvectors of the set of n palm images (Section 3.5.1). ICA is performed on $V^T$. The independent basis vectors $\hat{S}$ are then computed as

$$\hat{S} = W \times V^{-1} \qquad (A.17)$$

Next, taking R as the PCA coefficients, $R = X \times V$, with X representing the n zero-mean images (one image per row), the ICA coefficient matrix is calculated as

$$B = R \times W^{-1} \qquad (A.18)$$

The original palmprint images can therefore be reconstructed by

$$X = B \times \hat{S} \qquad (A.19)$$

A.4. Wavelet transformation

The wavelet decomposition of a signal f(x) is obtained by convolving the signal with a family of real orthonormal basis functions $\psi_{a,b}(x)$:

$$(W_\psi f)(a, b) = |a|^{-1/2} \int_{\mathbb{R}} f(x)\, \psi\!\left(\frac{x - b}{a}\right) dx, \qquad f(x) \in L^2(\mathbb{R}) \qquad (A.20)$$

where $a, b \in \mathbb{R}$, $a \neq 0$, are the dilation and translation parameters, respectively. The basis functions $\psi_{a,b}(x)$ are obtained through translation and dilation of a kernel function $\psi(x)$ known as the mother wavelet:

$$\psi_{a,b}(x) = 2^{-a/2}\, \psi(2^{-a} x - b) \qquad (A.21)$$

The mother wavelet $\psi(x)$ can be constructed from a scaling function $\phi(x)$, which satisfies the two-scale difference equation

$$\phi(x) = \sqrt{2} \sum_{n} h(n)\, \phi(2x - n) \qquad (A.22)$$

where h(n) is the impulse response of a discrete filter that must satisfy several conditions for the set of basis wavelet functions to be orthonormal and unique. The scaling function $\phi(x)$ is related to the mother wavelet $\psi(x)$ via

$$\psi(x) = \sqrt{2} \sum_{n} g(n)\, \phi(2x - n) \qquad (A.23)$$

The coefficients of the filter g(n) are conveniently obtained from the filter h(n) through the relation

$$g(n) = (-1)^{n} h(1 - n) \qquad (A.24)$$

The discrete filters h(n) and g(n) are quadrature mirror filters (QMF) and can be used to implement a wavelet transform instead of explicitly using a wavelet function.

For a 2D signal such as an image, there exists an algorithm similar to the one-dimensional case, with two-dimensional wavelets and scaling functions obtained from the one-dimensional ones by tensorial product. This two-dimensional wavelet transform decomposes the approximation coefficients at level $j-1$ into four components: the approximation at level j and the details in three orientations (horizontal, vertical and diagonal):

$$L_j(m, n) = \big[H_x * [H_y * L_{j-1}]_{\downarrow 2,1}\big]_{\downarrow 1,2}(m, n) \qquad (A.25)$$

$$D_{j,\mathrm{vertical}}(m, n) = \big[H_x * [G_y * L_{j-1}]_{\downarrow 2,1}\big]_{\downarrow 1,2}(m, n) \qquad (A.26)$$

$$D_{j,\mathrm{horizontal}}(m, n) = \big[G_x * [H_y * L_{j-1}]_{\downarrow 2,1}\big]_{\downarrow 1,2}(m, n) \qquad (A.27)$$

$$D_{j,\mathrm{diagonal}}(m, n) = \big[G_x * [G_y * L_{j-1}]_{\downarrow 2,1}\big]_{\downarrow 1,2}(m, n) \qquad (A.28)$$

where * denotes the convolution operator, $\downarrow 2,1$ ($\downarrow 1,2$) denotes subsampling along the rows (columns), and H and G are a low-pass and a band-pass filter, respectively. Two levels of the wavelet decomposition are obtained by applying the wavelet transform to the low-frequency band sequentially.
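The quadrature-mirror relation $g(n) = (-1)^n h(1-n)$ can be checked numerically. For a finite filter of even length L it is commonly applied in the index-shifted form $g(n) = (-1)^n h(L-1-n)$ (the "alternating flip"); the sketch below verifies the resulting QMF properties for the Daubechies order-2 low-pass filter:

```python
import numpy as np

# Daubechies order-2 (db2) low-pass coefficients, normalised so that
# sum(h) = sqrt(2) and ||h|| = 1.
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))

# High-pass filter via the alternating flip of the low-pass filter.
L = len(h)
g = np.array([(-1) ** n * h[L - 1 - n] for n in range(L)])

lowpass_sum = h.sum()     # sqrt(2): the low-pass filter passes DC
highpass_sum = g.sum()    # 0: the high-pass filter rejects DC
cross = np.dot(h, g)      # 0: the two filters are orthogonal
```

These three identities are exactly the "several conditions" the text mentions for the filter bank to generate an orthonormal wavelet basis.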

References

[1] G. Lu, D. Zhang, K. Wang, Palmprint recognition using eigenpalms features, Pattern Recognition Letters 24 (9–10) (2003) 1473–1477.
[2] X. Wu, D. Zhang, K. Wang, Fisherpalms based palmprint recognition, Pattern Recognition Letters 24 (2003) 2829–2838.
[3] W.K. Kong, D. Zhang, W. Li, Palmprint feature extraction using 2-D Gabor filters, Pattern Recognition 36 (10) (2003) 2339–2347.
[4] W. Li, D. Zhang, Z. Xu, Palmprint identification by Fourier transform, International Journal of Pattern Recognition and Artificial Intelligence 16 (4) (2003) 417–432.
[5] J. You, W. Li, D. Zhang, Hierarchical palmprint identification via multiple feature extraction, Pattern Recognition 35 (2003) 847–859.
[6] J. Funada, N. Ohta, M. Mizoguchi, T. Temma, K. Nakanishi, A. Murai, T. Sugiuchi, T. Wakabayashi, Y. Yamada, Feature extraction method for palmprint considering elimination of creases, Proceedings of the 14th International Conference on Pattern Recognition, vol. 2, 1998, pp. 1849–1854.
[7] D. Zhang, W. Shu, Two novel characteristics in palmprint verification: datum point invariance and line feature matching, Pattern Recognition 32 (1999) 691–702.
[8] N. Duta, A.K. Jain, K.V. Mardia, Matching of palmprints, Pattern Recognition Letters 23 (2002) 477–485.
[9] J. Chen, C. Zhang, G. Rong, Palmprint recognition using creases, Proceedings of the International Conference on Image Processing, 2001, pp. 234–237.
[10] X. Wu, K. Wang, D. Zhang, Fuzzy directional element energy feature (FDEEF) based palmprint identification, International Conference on Pattern Recognition 1 (2002) 95–98.
[11] C.C. Han, H.L. Cheng, C.L. Lin, K.C. Fan, Personal authentication using palmprint features, Pattern Recognition 36 (2) (2003) 281–371.
[12] D. Zhang, W.K. Kong, J. You, M. Wong, On-line palmprint identification, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (9) (2003) 1041–1051.
[13] K.O. Goh, C. Tee, B.J. Teoh, C.L. Ngo, Automated hand geometry verification system based on salient points, The 3rd International Symposium on Communications and Information Technologies (ISCIT 2003), Songkhla, Thailand, September 2003, pp. 720–724.
[14] M. Sonka, V. Hlavac, R. Boyle, Image Processing, Analysis and Machine Vision, PWS Publishing, 1999, pp. 142–147.
[15] W. Shi, D. Zhang, Automatic palmprint verification, International Journal of Image and Graphics 1 (1) (2001) 135–151.
[16] M.A. Turk, A.P. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience 3 (1) (1991) 71–86.
[17] X. Wang, K.K. Paliwal, Feature extraction and dimensionality reduction algorithms and their applications in vowel recognition, Pattern Recognition 36 (10) (2003) 2429–2439.
[18] P. Comon, Independent component analysis—a new concept?, Signal Processing 36 (1994) 287–314.
[19] D.F. Specht, Probabilistic neural networks for classification, mapping, or associative memory, Proceedings of the IEEE International Conference on Neural Networks, vol. 1, 1988, pp. 525–532.
[20] D.F. Specht, Probabilistic neural networks, Neural Networks 3 (1) (1990) 109–118.
[21] T. Masters, Advanced Methods in Neural Computing, Van Nostrand Reinhold, New York, 1993, pp. 35–55.
[22] T. Masters, Practical Neural Network Recipes, Wiley, New York, 1993 (Chapter 12).
[23] P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman, Eigenfaces vs. Fisherfaces: recognition using class specific linear projection, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997) 711–720.
[24] A. Hyvärinen, E. Oja, Independent component analysis: algorithms and applications, Neural Networks 13 (4–5) (2000) 411–430.
[25] M.S. Bartlett, J.R. Movellan, T.J. Sejnowski, Face recognition by independent component analysis, IEEE Transactions on Neural Networks 13 (2002) 1450–1464.
[26] G.C. Feng, P.C. Yuen, D.Q. Dai, Human face recognition using PCA on wavelet subband, SPIE Journal of Electronic Imaging 9 (2) (2000).
[27] I. Daubechies, Ten Lectures on Wavelets, Capital City Press, Vermont, 1992.
[28] A.J. Bell, T.J. Sejnowski, An information-maximization approach to blind separation and blind deconvolution, Neural Computation 7 (1995) 1129–1159.
[29] K. Baek, B. Draper, J.R. Beveridge, K. She, PCA vs. ICA: a comparison on the FERET data set, Joint Conference on Information Sciences, Durham, NC, March 2002, pp. 824–827.
