Conference Paper

Face Recognition and Detection using Neural Networks
Vinita Bhandiwad, Assistant Professor, Department of Information Technology, Vidyalankar Institute of Technology, Mumbai
Bhanu Tekwani, Assistant Professor, Department of Information Technology, Vidyalankar Institute of Technology, Mumbai
Abstract: Face recognition is one of the most actively studied areas in biometrics, as it has a wide range of applications. Face detection, however, remains one of the challenging problems in image processing. The basic aim of face detection is to determine whether there is a face in an image and, if so, to locate its position. Evidently, face detection is the first step towards creating an automated system that may involve further face processing. A neural network is created and trained with a training set of faces and non-faces. All results are implemented in the MATLAB 2013 environment.
Keywords: Face recognition; image processing; MATLAB; neural network.
I. Introduction
The demand for personal identification in computerized access control has resulted in increased interest in biometrics as a replacement for passwords and identification cards. These can be easily breached, since a password can be divulged to an unauthorized user and an ID card can be stolen. Biometrics, which makes use of human features such as the iris, retina, and face, can be used to verify a person's identity. A face recognition system has the benefit of being a passive, non-intrusive means of verifying personal identity. The proposed face recognition system consists of a face verification task and a face recognition task. In the verification task, the system knows the identity of the user a priori and has to verify it, i.e., the system must decide whether the claimed user is an imposter or not. It is often useful to have a machine perform pattern recognition. In particular, machines that can read face images are very cost effective: such an application saves time and money and eliminates the requirement that a human perform this repetitive task.
II. Why Neural Networks
A neural network has the feature of adaptive learning, i.e., an ability to learn how to perform tasks, and it can create its own organization. It has a remarkable ability to derive meaning from complicated or imprecise data, and today neural networks appear everywhere. Artificial neural networks (ANNs) are relatively crude electronic models based on the neural structure of the brain. Computers do rote work well, such as keeping ledgers or performing complex mathematics, but they have trouble recognizing even simple patterns. Research shows that the brain stores information as patterns; some of these patterns are complicated and give us the ability to recognize individual faces from many different angles.
All ANNs share a similar structure or topology. In this structure, some of the neurons interface with the real world to receive input. The output might be the particular character the network thinks it has scanned or the particular image it thinks is being viewed. All the remaining neurons are hidden from view.
A neural network, however, is more than a collection of neurons. One of the easiest ways to design its structure is to arrange the elements in layers.
III. Different Types of Connections
There are basically two types of networks considered here: the feedforward neural network and the backpropagation neural network.
Backpropagation neural network: The BPNN is the most popular and oldest supervised learning algorithm for multilayer feedforward neural networks. It rests on a solid mathematical foundation and has very good application potential in areas such as pattern recognition, dynamic modelling, and sensitivity analysis. Backpropagation is the best-known and most widely used learning algorithm for training the multilayer perceptron (MLP). The MLP is a network consisting of a set of sensory units that constitute the input layer, one or more hidden layers, and an output layer. The input signal propagates through the network in the forward direction, from left to right, on a layer-by-layer basis. The BPNN provides a computationally efficient method for updating the weights of a feedforward network. The main aim is to train the network to achieve a balance between the ability to respond correctly to the input patterns used for training and the ability to give good responses to similar inputs. Like all techniques, backpropagation has its pros and cons: it suffers from a slow convergence rate and can get stuck in local minima, but it is known for its accuracy. Backpropagation has historically seen less use because of the long training time needed to achieve the best possible result.
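As a hedged illustration of the backpropagation procedure described above, the following numpy sketch trains a tiny two-layer network on the XOR problem. This is illustrative only, not the paper's MATLAB implementation; the data, layer sizes, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

initial_loss = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

lr = 1.0
for _ in range(5000):
    # Forward pass: signal propagates layer by layer, left to right.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers
    # and adjust the weights down the gradient of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

final_loss = np.mean((out - y) ** 2)
print(initial_loss, final_loss)  # error typically shrinks markedly
```

The slow convergence mentioned above is visible here: thousands of passes over four patterns are needed before the outputs settle near their targets.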
Feedforward neural network: In a feedforward NN, the information flow is unidirectional: a unit sends information to other units from which it receives no information, and there are no feedback loops. Such networks are used in pattern generation, recognition, and classification, and they have fixed inputs and outputs. (Networks that do allow feedback loops, by contrast, are used in content-addressable memories.) A multilayer feedforward neural network (MLFFNN) consists of an input layer, one or more hidden layers, and an output layer of neurons, with every node in a layer connected to every node in the neighbouring layers. An FFNN has no memory: its output is determined solely by the current input and the weight values. A feedforward neural network consists of one or more layers of usually non-linear processing units, the output of each layer serving as input to the next. The objective of training a NN is to produce the desired output when a given set of inputs is applied to the network.
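The unidirectional, memoryless forward pass described above can be sketched as follows. This is an illustrative example; the layer sizes, weights, and sigmoid activation are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate x through a list of (W, b) pairs. There are no feedback
    loops, so the output depends only on the current input and weights."""
    a = x
    for W, b in layers:
        a = sigmoid(a @ W + b)  # output of each layer feeds the next
    return a

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(3, 5)), np.zeros(5)),   # input (3) -> hidden (5)
          (rng.normal(size=(5, 2)), np.zeros(2))]   # hidden (5) -> output (2)

x = np.array([0.2, -0.7, 1.0])
print(forward(x, layers))  # two output activations in (0, 1)
```

Because the network has no memory, feeding the same input twice always yields the same output, exactly as stated above.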
IV. ANN Structure
A layer with n inputs x_i and corresponding weights w_ji (i = 1, 2, ..., n) sums the n weighted inputs and passes the result through a non-linear function ø(·) called the activation function. The function ø processes this sum plus a threshold value θ, producing the output Y = ø(Σ w_ji x_i + θ).
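A single neuron of this kind can be written out directly. This is a minimal sketch; the input values, weights, threshold, and the choice of a sigmoid for ø are illustrative assumptions.

```python
import numpy as np

def neuron(x, w, theta, phi=lambda s: 1.0 / (1.0 + np.exp(-s))):
    """Weighted sum of the n inputs plus threshold, passed through ø."""
    return phi(np.dot(w, x) + theta)

x = np.array([0.5, -1.0, 2.0])   # n = 3 inputs x_i
w = np.array([0.4, 0.3, -0.2])   # corresponding weights w_i
y = neuron(x, w, theta=0.1)      # ø(0.2 - 0.3 - 0.4 + 0.1) = ø(-0.4)
print(y)                         # ≈ 0.40
```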
The ANN is a well-known, powerful, and robust classification technique that has been used to approximate real-valued functions. ANNs have been used in many areas such as interpreting visual scenes and speech recognition.
V. Experimental Results
In this experiment, the images used for detection and recognition are taken from the ORL database. ORL stands for the Olivetti Research Laboratory database, which is 3.3 MB in size. It contains 400 images in total, of which 188 are used for testing. The training set is created from these 188 images, whose complete data is stored in the network. Once the training set is created, a test input is taken and its data is first extracted using built-in functions. Once extracted, the data is matched against the training set. The figure below shows the stage-wise output obtained.
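The matching step described above can be sketched in simplified form. This is a hypothetical stand-in: the paper uses MATLAB built-ins on real ORL face images, while here random vectors and a nearest-neighbour comparison replace the actual feature extraction and trained network.

```python
import numpy as np

rng = np.random.default_rng(42)

n_train, n_features = 188, 64        # 188 training images; toy feature size
train_set = rng.normal(size=(n_train, n_features))  # stored training data
labels = np.arange(n_train)                          # one identity per image

def match(test_vec, train_set):
    """Return the index of the closest stored image (Euclidean distance)."""
    d = np.linalg.norm(train_set - test_vec, axis=1)
    return int(np.argmin(d))

# A test input derived from stored image 10 plus small noise should
# be matched back to image 10.
test_input = train_set[10] + 0.01 * rng.normal(size=n_features)
print(labels[match(test_input, train_set)])  # -> 10
```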
The figure above shows the GUI, with the input to be detected and matched on the left-hand side. On the right-hand side, the test input is compared with the database input; each part of the face is tested separately, e.g., the eye coordinates, the nose point, the mouth region, etc. The data from each region is recorded separately and stored in the database for use in comparison.
After comparing the test input faces with the database images, the system displays the matching image as the result.
The figure above shows the final result of the neural network, which has 26 input neurons, 10 hidden neurons, and 1 output neuron.
VI. Conclusion
From the experiment it can be concluded that face detection and recognition work very well with neural networks: even when the face is imperfect, it can be detected precisely thanks to the hidden-layer processing. All the images in the database were tested and exact matches were obtained.