Fig. 1. An example of facial feature point localization (105 points in total)

Source publication
Conference Paper
An algorithm for localizing 105 facial feature points was proposed in [1]. In this paper, we study the stability of these feature points across different photos of the same person, and then present an improved face recognition system that uses these facial feature points to perform face recognition and to check for duplicate entries in a database. All...
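The duplicate-entry check mentioned in this abstract could, as a rough sketch under assumptions (the Euclidean distance measure and the threshold below are illustrative placeholders, not the paper's actual matching rule), compare the flattened 105-point coordinate vector of a new photo against stored entries:

import numpy as np

def find_duplicates(query_points, database, threshold=50.0):
    """Flag database entries whose 105 feature points lie close to the query's.

    query_points: (105, 2) array of (x, y) coordinates from one photo.
    database: dict mapping person_id -> (105, 2) array of coordinates.
    threshold: illustrative distance cutoff; a real system would calibrate it.
    """
    q = np.asarray(query_points, dtype=float).ravel()
    hits = []
    for person_id, points in database.items():
        d = np.linalg.norm(q - np.asarray(points, dtype=float).ravel())
        if d < threshold:
            hits.append((person_id, d))
    return sorted(hits, key=lambda h: h[1])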

Context in source publication

Context 1
... algorithm locates 105 points on a facial image; their locations are shown in Fig. 1. We took 200 facial photos of 100 individuals under the same lighting conditions and facial pose, i.e., two different photos of each person. All of these photos are normalized to the same size (320×480 pixels) and then processed with the 105 facial feature point localization algorithm, so we obtain the coordinates of the 105 points for every ...
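A minimal sketch of how the stability of these points could be measured, assuming the 105 coordinates of the two normalized photos are available as NumPy arrays (an illustration, not the paper's exact procedure):

import numpy as np

def point_stability(photo_a, photo_b):
    """Per-point displacement between two 320x480-normalized photos of one person.

    photo_a, photo_b: (105, 2) arrays of (x, y) feature point coordinates.
    Returns a length-105 array of Euclidean displacements in pixels.
    """
    a = np.asarray(photo_a, dtype=float)
    b = np.asarray(photo_b, dtype=float)
    return np.linalg.norm(a - b, axis=1)

# Averaging these displacements over all 100 individuals would indicate
# which of the 105 points are the most stable under identical conditions.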

Similar publications

Conference Paper
In facial identification tasks, face alignment is the most important step. In particular, using the localization of various face landmarks for structural face normalization has proven to be extremely reliable, significantly enhancing recognition performance. This article presents a survey of popular repositories such as the BioID face repository, XM...
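As a minimal illustration of landmark-based structural face normalization (an assumed example, not a method taken from the surveyed repositories), a face can be rotated so that the two detected eye centers become horizontal:

import cv2
import numpy as np

def align_by_eyes(image, left_eye, right_eye):
    """Rotate a face image so the eye centers lie on a horizontal line.

    left_eye, right_eye: (x, y) landmark coordinates in the input image.
    Purely illustrative; real alignment pipelines differ per repository.
    """
    lx, ly = left_eye
    rx, ry = right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))       # tilt of the eye line
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)             # midpoint between eyes
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)  # in-plane rotation only
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rotation, (w, h))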

Citations

... II. LITERATURE REVIEW Face recognition algorithms have been used successfully for deduplication purposes, for example to identify duplicate passport images [4]. Facial feature points have been used for face recognition in the context of deduplication [5]. ...
... A large body of work is devoted to the mathematical formulation of the automatic pattern recognition problem. When discussing general issues of pattern recognition, geometric representations are widely used [21][22][23][24][25]. ...
Article
The article analyzes the feasibility and rationale of using proctoring technology for remote monitoring of university students' progress, as a tool for identifying a student. Proctoring technology includes face recognition technology. Face recognition belongs to the fields of artificial intelligence and biometric recognition and is a very successful application of image analysis and understanding. To detect a person's face in a video stream, the Python programming language was used together with OpenCV. Mathematical models of face recognition are also described; these models are applied during data generation, face analysis, and image classification. We considered methods that support the processes of data generation, image analysis, and image classification, and presented algorithms for solving computer vision problems. We placed 400 photographs of 40 students in the database. The photographs were taken at different angles and under different lighting conditions; there were also interferences such as the presence of a beard, mustache, glasses, hats, etc. Analyzing particular error cases leads to the conclusion that accuracy decreases primarily for images with noise and poor lighting.
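The face detection step in a video stream, implemented in Python with OpenCV as described, could look roughly like the sketch below; the Haar cascade is the frontal-face model shipped with opencv-python, and the recognition and classification stages are omitted:

import cv2

# Haar cascade shipped with OpenCV; the path helper is part of opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)  # webcam stream, as in a proctoring session
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("proctoring", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
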
... They provide results on a database with 1,009 identities. In [Ya11], de-duplication based on facial feature points is reported on a database of Chinese ID cards with 60,000 entries, with 100/100 duplicates detected and 8 false hits. The main subject of the paper is, however, the presentation of a face recognition method based on 105 facial feature points, and the part on de-duplication performance is very brief. ...
... The detected facial features are used to align facial regions for later processing. Moreover, facial feature regions are required for recognition tasks [3]. Due to illumination, pose, expression, and other image quality factors, it is hard to detect facial features accurately even when the face region has been located. ...
Conference Paper
Efficient facial feature detection is crucial in face-related applications such as face recognition and reconstruction. Traditional high-precision algorithms often come with expensive computation. In this paper, we propose a fast facial feature detection algorithm with good precision. The basic idea is to combine a fast search strategy in the global image with high-precision classifiers in local regions. In the global image, we borrow the idea of the Active Shape Model (ASM) and use an average search template to reduce the search area. In each local region, we use a trained random forest (RF) classifier to decide whether the facial feature is present. An iterative template adjustment procedure is specially designed to ensure detection precision. Experiments show the effectiveness of the proposed facial feature detection algorithm.
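A highly simplified sketch of the two-stage idea (an average search template restricting the region, then a local classifier scoring candidate positions); the patch size, window size, and classifier setup are assumptions for illustration, not the paper's implementation:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def detect_feature(image, template_xy, classifier, patch=16, search=8):
    """Search a small window around the average-template position for a feature.

    image: 2D grayscale array; template_xy: (x, y) mean position of the feature.
    classifier: a binary RandomForestClassifier trained on flattened local patches,
    e.g. clf = RandomForestClassifier(n_estimators=100).fit(patches, labels).
    Returns the candidate position with the highest feature probability.
    """
    tx, ty = int(template_xy[0]), int(template_xy[1])
    best_pos, best_score = (tx, ty), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = tx + dx, ty + dy
            window = image[y - patch:y + patch, x - patch:x + patch]
            if window.shape != (2 * patch, 2 * patch):
                continue  # skip positions too close to the image border
            score = classifier.predict_proba(window.reshape(1, -1))[0, 1]
            if score > best_score:
                best_pos, best_score = (x, y), score
    return best_pos
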
Article
The recognition of human faces poses a complex challenge within the domains of computer vision and artificial intelligence. Emotions play a pivotal role in human interaction, serving as a primary means of communication. This manuscript aims to develop a robust recommendation system capable of identifying individual faces from rasterized images, encompassing features such as the eyes, nose, cheeks, lips, forehead, and chin. Human faces exhibit a wide array of emotions, some of which, including anger, sadness, happiness, surprise, fear, disgust, and neutrality, are universally recognizable. To achieve this objective, deep learning techniques are leveraged to detect objects containing human faces. Every human face exhibits common characteristics known as Haar features, which are employed to extract feature values from images containing multiple elements. The process is executed in three distinct stages, starting from the initial image and involving the associated calculations. Real-time images from popular social media platforms such as Facebook are employed as the dataset. Deep learning techniques offer superior results compared to classical computer vision methods using OpenCV, at the cost of greater computational demands and more intricate design. The implementation is carried out in PyTorch, further enhancing the precision and efficiency of face recognition.
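A hedged sketch of such a pipeline, combining an OpenCV Haar cascade face detector with a small PyTorch CNN classifier; the network architecture, the 48x48 input size, and the seven emotion labels are illustrative assumptions rather than the manuscript's actual model:

import cv2
import torch
import torch.nn as nn

EMOTIONS = ["anger", "sadness", "happiness", "surprise", "fear", "disgust", "neutral"]

class EmotionCNN(nn.Module):
    """Tiny illustrative CNN; a real system would use a deeper, trained network."""
    def __init__(self, num_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 12 * 12, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def classify_faces(image_bgr, model):
    """Detect faces with a Haar cascade and label each crop with an emotion."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        tensor = torch.tensor(crop, dtype=torch.float32)[None, None]
        with torch.no_grad():
            label = EMOTIONS[model(tensor).argmax(1).item()]
        results.append(((x, y, w, h), label))
    return results
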
Article
As the Internet of Things (IoT) is a fascinating paradigm in which all things and objects are connected together, it holds a significant position in fostering intelligent high-level services. However, the future IoT architecture is still evolving, profiting from the overwhelming development of cyberspace and cyber technologies. Building on the traditional physical-based IoT, the social-inspired Internet of People (IoP), and the brain-abstracted Internet of Thinking (IoTk), an intelligent embryo of the cyber-enabled Internet of X (IoX) is being established in which all things, entities, people, and thinking interact seamlessly. In this paper, we introduce the cyber-enabled IoX from the perspective of both ubiquitous connections and space convergence, and design an architecture with four pillars, namely things, people, thinking, and cyber-entities in their respective spaces. In addition, we analyze the fundamental issues in IoX development, such as information explosion, link explosion, and application explosion from the view of ubiquitous connections, entity explosion and relationship explosion on the basis of space convergence, and service explosion from an overall perspective, and discuss potential solutions at the same time. The intelligent cyber-enabled IoX will be the cornerstone of future techniques and applications and a solid foundation for the upcoming intelligent and proactive era.
Article
Centroid selection plays a key role in image deduplication: it means selecting an optimal image as the centroid of a duplicate image set, after which the other copies are deleted and pointers to the centroid image are placed at their original positions. At present there is no mature centroid selection scheme; centroid selection mainly relies on users completing it manually based on experience. In a massive data environment this consumes considerable human resources, and subjective judgment easily leads to mistakes. To solve this problem, this article proposes an automatic centroid image selection method based on fuzzy logic reasoning. Within a duplicate image set, image attribute information is used to automatically infer comprehensive quantized values representing the images, and the centroid image is selected by comparing these quantized values. The experimental results show that the scheme not only matches visual perception characteristics but also serves the purpose of image deduplication.
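In its simplest form, the attribute-based quantization could look like the sketch below (illustrative attributes, membership functions, and weights, not the article's actual fuzzy inference rules):

def quantize(image_info):
    """Map simple image attributes to a single [0, 1] score via linear memberships.

    image_info: dict with 'width', 'height', 'sharpness', 'brightness' keys
    (illustrative attributes; the article's attribute set and rules may differ).
    """
    def membership(value, low, high):
        # Linear ("fuzzy") membership: 0 below `low`, 1 above `high`.
        return max(0.0, min(1.0, (value - low) / (high - low)))

    resolution = membership(image_info["width"] * image_info["height"], 100_000, 2_000_000)
    sharpness = membership(image_info["sharpness"], 10.0, 100.0)
    brightness = 1.0 - abs(image_info["brightness"] - 128) / 128.0  # prefer mid-tones
    return 0.4 * resolution + 0.4 * sharpness + 0.2 * brightness

def select_centroid(duplicate_set):
    """Return the image in a duplicate set with the highest quantized score."""
    return max(duplicate_set, key=quantize)
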
Chapter
Facial feature detection is a well-studied field. Efficient facial feature detection is significant in face analysis based applications, especially on mobile devices, and balancing accuracy against time efficiency is a practical problem in real-time applications. This paper proposes a real-time and accurate algorithm for facial feature detection. It is based on the assumption that classifiers may improve performance when the search region is limited, and we propose a simplified Active Shape Model (ASM) to speed up this search process. To ensure accuracy, several facial feature detectors are compared, such as Adaboost classifiers with Haar features and random forest classifiers. Since the simplified ASM provides a good constraint on the different facial features, the detection results are improved as well. We also design multiple experiments to verify our hypothesis by varying the search region. Experiments on MBGC databases demonstrate the effectiveness of the proposed simplified ASM model (sASM).
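A rough sketch of that underlying assumption, restricting each local detector's search window using a mean-shape prior inside the detected face box; deriving the window size from training-shape variance is an illustrative choice, not the paper's sASM:

import numpy as np

def search_windows(face_box, mean_shape, shape_std, k=2.0):
    """Compute per-landmark search regions inside a detected face box.

    face_box: (x, y, w, h) from a face detector.
    mean_shape: (N, 2) mean landmark positions, normalized to [0, 1] in the box.
    shape_std: (N, 2) per-landmark standard deviations from training shapes.
    Returns (N, 4) windows (x0, y0, x1, y1) limiting each local classifier's search.
    """
    x, y, w, h = face_box
    centers = np.array([x, y]) + mean_shape * np.array([w, h])
    extents = k * shape_std * np.array([w, h])
    return np.hstack([centers - extents, centers + extents]).astype(int)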