Chapter

Possibilities of Applying the Triangulation Method in the Biometric Identification Process

Authors:
  • International Vision University - Gostivar

Abstract

This chapter presents the possibilities of applying the triangulation method in the biometric identification process. It combines a method from the field of computational geometry (specifically, polygon triangulation) with face recognition techniques. The chapter describes several authentication techniques, with an emphasis on face recognition technologies and on polygon triangulation as a fundamental algorithm in computational geometry and graphics. The proposed method is based on generating one's own key (faceprint): everyone carries a potential key in the 3D layout of their characteristic facial lines, so each person is the carrier of a unique key generated from a triangulation of the scanned polygon. The proposed method could find application precisely in biometric authentication. This is a particularly interesting way of authenticating users because the key cannot be stolen (as it can be with the "something you know" and "something you have" approaches). Introducing this authentication procedure makes unauthorized access to computers, mobile devices, physical locations, networks, or databases more difficult. In their experimental research, the authors present concrete possibilities of applying polygon triangulation in the biometric identification process, testing the proposed solution on two appropriate data sets. The results show that, using the triangulation method in combination with face recognition, authentication success rates match those achieved with other methods.
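As a rough illustration of the faceprint idea described in the abstract (not the authors' exact algorithm), the following Python sketch triangulates a convex polygon of hypothetical facial-landmark coordinates and hashes the triangle geometry into a fixed-length key. The function names, the fan triangulation, and the landmark values are all illustrative assumptions:

```python
import hashlib

def fan_triangulation(n):
    """Triangulate a convex n-gon by fanning from vertex 0
    (one of the C_{n-2} possible triangulations)."""
    return [(0, i, i + 1) for i in range(1, n - 1)]

def faceprint_key(landmarks):
    """Serialize the triangulation of the landmark polygon and
    hash it into a fixed-length key (hypothetical scheme)."""
    n = len(landmarks)
    triangles = fan_triangulation(n)
    # Bind the key to the geometry: record rounded side lengths
    # of every triangle in the triangulation.
    parts = []
    for a, b, c in triangles:
        pts = [landmarks[a], landmarks[b], landmarks[c]]
        for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
            parts.append(f"{((x2 - x1)**2 + (y2 - y1)**2) ** 0.5:.1f}")
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

landmarks = [(0, 0), (4, 0), (5, 3), (2, 5), (-1, 3)]  # toy 5-gon
print(faceprint_key(landmarks)[:16])
```

The same scanned polygon always yields the same key, while a different facial geometry yields a different one, which is the property the chapter's authentication scheme relies on.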


... In many cases, scammers took advantage of absent or outdated/ineffectual identity verification systems, for instance, knowledge-based authentication (e.g., "In what city were you born?") or social security numbers, rather than more advanced digital authentication systems utilising biometric data such as fingerprints, retina scans or facial recognition systems (Hope, 2021; Saračevič, Elhoseny, Selimi, & Lončeravič, 2021). ...
Book
Full-text available
Are pandemics the end of cities? Or, do they present an opportunity for us to reshape cities in ways making us even more innovative, successful and sustainable? Pandemics such as COVID-19 (and comparable disruptions) have caused intense debates over the future of cities. Through a series of investigative studies, Designing Smart and Resilient Cities for a Post-Pandemic World: Metropandemic Revolution seeks to critically discuss and compare different cases, innovations and approaches as to how cities can utilise nascent and future digital technology and/or new strategies in order to build stronger resilience to better tackle comparable large-scale pandemics and/or disruptions in the future. The authors identify ten separate societal areas where future digital technology can impact resilience. These are discussed in individual chapters. Each chapter concludes with a set of proposed "action points" based on the conclusions of each respective study. These serve as solid policy recommendations of what courses of action to take, to help increase the resilience in smart cities for each designated area. Securing resilience and cohesion between each area will bring about the metropandemic revolution. This book features a foreword by Nobel laureate Peter C. Doherty and an afterword by Professor of Urban Technologies, Carlo Ratti. It provides fresh and unique insights on smart cities and futures studies in a pandemic context, offers profound reflections on contemporary societal functions and the needs to build resilience and combines lessons learned from historical pandemics with possibilities offered by future technology.
Chapter
Full-text available
The aim of this chapter is to explore how physical and digital security can be safeguarded in a smart city and how it can be used to strengthen resilience. The point of departure is that there are two types of safety and security dimensions, “physical protection” and “digital protection”. The chapter looks at how the nature of crime has changed in each respective area and what types are the most recurrent. The chapter then discusses potential ways of addressing the most recurring types of crime, including concepts such as natural language processing (NLP), public–private partnerships, multi-factor user authentication, autonomous robots, apps, wearables etc. In conclusion, it is argued that in order to successfully uphold the social contract during times of pandemics, it will be necessary to devise new strategies and approaches to effectively combat crime and heighten people’s sense of security.
Chapter
The present COVID pandemic has transformed the physical world into a digital one. Electronic communication has become a major part of human life, which exposes digital networks to new threats, so hiding and protecting information from unintended recipients is essential. This can be done through encryption. Encryption techniques are derived from mathematical concepts such as number theory, graph theory, and algebra. The present paper explains a symmetric packet cipher based on polygon triangulation and Catalan numbers from applied number theory. A natural number n is kept secret between the users; the Catalan number Cn and the number of triangles Tn of an n-gon play the main role in the encryption process, together with a simple logical XOR operation. To protect the cipher against different active and passive attacks, and to achieve the avalanche effect, each plaintext packet is concatenated with the previous ciphertext packet. Keywords: Catalan number Cn, polygon triangulation Tn, encryption, decryption.
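The XOR-plus-chaining structure described in this abstract can be sketched as follows. The key schedule below (deriving a keystream from Cn and the triangle count n - 2) is a hypothetical stand-in, not the paper's exact construction; only the XOR operation and the chaining of each packet with the previous ciphertext packet follow the description above:

```python
import math

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

def keystream(n, length):
    # Hypothetical key schedule seeded by C_n and T_n = n - 2,
    # expanded with a 64-bit linear congruential generator.
    seed = catalan(n) * 31 + (n - 2)
    out = bytearray()
    while len(out) < length:
        seed = (seed * 6364136223846793005 + 1442695040888963407) % 2**64
        out += seed.to_bytes(8, "little")
    return bytes(out[:length])

def encrypt(plaintext: bytes, n: int, block=8):
    prev = bytes(block)  # chaining: XOR in the previous cipher block
    ct = b""
    for i in range(0, len(plaintext), block):
        p = plaintext[i:i + block]
        k = keystream(n, len(p))
        c = bytes(a ^ b ^ d for a, b, d in zip(p, k, prev))
        ct += c
        prev = c.ljust(block, b"\0")
    return ct

def decrypt(ct: bytes, n: int, block=8):
    prev = bytes(block)
    pt = b""
    for i in range(0, len(ct), block):
        c = ct[i:i + block]
        k = keystream(n, len(c))
        pt += bytes(a ^ b ^ d for a, b, d in zip(c, k, prev))
        prev = c.ljust(block, b"\0")
    return pt

msg = b"polygon triangulation cipher"
assert decrypt(encrypt(msg, 12), 12) == msg
```

Because each ciphertext block feeds into the next, flipping one plaintext bit changes all subsequent ciphertext blocks, which is the avalanche behaviour the abstract refers to.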
Article
Full-text available
This paper presents a new technique of generation of convex polygon triangulation based on planted trivalent binary tree and ballot notation. The properties of the Catalan numbers were examined and their decomposition and application in developing the hierarchy and triangulation trees were analyzed. The method of storage and processing of triangulation was constructed on the basis of movements through the polygon. This method was derived from vertices and leaves of the planted trivalent binary tree. The research subject of the paper is analysis and comparison of a constructed method for solving of convex polygon triangulation problem with other methods and generating graphical representation. The application code of the algorithms was done in the Java programming language.
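The Catalan numbers underpinning this triangulation technique are easy to compute, and the count of triangulations of a convex polygon follows directly from them. A minimal sketch (the closed form is standard; the function names are my own):

```python
import math

def catalan(n):
    """n-th Catalan number via the closed form C_n = C(2n, n) / (n + 1)."""
    return math.comb(2 * n, n) // (n + 1)

def count_triangulations(vertices):
    """A convex polygon with v vertices has C_{v-2} triangulations,
    each corresponding to a ballot sequence / planted trivalent tree."""
    return catalan(vertices - 2)

print([catalan(k) for k in range(6)])  # → [1, 1, 2, 5, 14, 42]
print(count_triangulations(6))         # hexagon → C_4 = 14
```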
Article
Full-text available
Computational geometry is an integral part of mathematics and computer science that deals with the algorithmic solution of geometry problems. From its beginnings to the present day, computational geometry has linked different areas of science and engineering, such as the theory of algorithms, combinatorial and Euclidean geometry, data structures, and optimization. Today, computational geometry has many applications in computer graphics, geometric modeling, computer vision, geodesic paths, motion planning, and parallel computing. The complex calculations and theories in the field of geometry have long been studied and developed, but from the standpoint of application in modern information technologies they are still in their infancy. This research presents applications of computational geometry in polygon triangulation, manufacturing of objects with molds, point location, and robot motion planning.
Article
Full-text available
In this paper, a procedure for the application of one computational geometry algorithm in the process of generating hidden cryptographic keys from one segment of a 3D image is presented. The presented procedure consists of three phases. In the first phase, the separation of one segment from the 3D image and determination of the triangulation of the separated polygon are done. In the second phase, a conversion from the obtained triangulation of the polygon in the record that represents the Catalan key is done. In the third phase, the Catalan-key is applied in the encryption of the text based on the balanced parentheses combinatorial problem.
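The second and third phases described here rest on the classical bijection between triangulations of a convex (n+2)-gon and balanced-parentheses strings with n pairs. A minimal sketch of that bijection's combinatorial side (the enumeration and the bit encoding are illustrative; the paper's exact key format is not specified here):

```python
def balanced_sequences(n):
    """All balanced-parentheses strings with n pairs; each one encodes
    a distinct triangulation of a convex (n+2)-gon (a Catalan bijection)."""
    if n == 0:
        yield ""
        return
    for i in range(n):
        for left in balanced_sequences(i):
            for right in balanced_sequences(n - 1 - i):
                yield "(" + left + ")" + right

def catalan_key_bits(seq):
    """Turn a balanced sequence into a bit string usable as a key."""
    return "".join("1" if ch == "(" else "0" for ch in seq)

seqs = list(balanced_sequences(3))
print(len(seqs))                   # → 5, i.e. C_3
print(catalan_key_bits("(())"))    # → 1100
```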
Article
Full-text available
This paper describes some authentication techniques based on face recognition, with an emphasis on 2D and 3D face recognition technologies. It presents analyses showing that assessing the success of a biometric system requires measuring its performance through several different types of parameters. The characteristics of two-dimensional face recognition are presented and compared with three-dimensional recognition, based on numerous studies published in professional and scientific papers. The paper emphasizes that, in addition to system performance, it is necessary to analyze the errors that occur in biometric systems during the enrollment, verification, and identification processes. The OpenCV library is also analyzed; it provides more than 500 functions covering multiple areas of Computer Vision and automatic shape recognition. Keywords: biometrics, authentication, face recognition, identification, Open Computer Vision.
Article
Full-text available
In this paper we present a method for Catalan number decomposition in the expressions of the form (2 + i). This method gives convex polygon triangulations in Hurtado-Noy ordering; we thus establish a relationship between these expressions and that ordering. The corresponding algorithm for Catalan number decomposition is developed and implemented in Java, as is the algorithm that generates convex polygon triangulations. Finally, we provide a comparison of Hurtado's algorithm and our algorithm based on the decomposition method.
Article
Full-text available
Given a weighted graph G(V, E), a minimum spanning tree for G can be obtained in linear time using a randomized algorithm, or in nearly linear time using a deterministic algorithm. Given n points in the plane, we can construct a graph with these points as nodes and an edge between every pair of nodes, where the weight on any edge is the Euclidean distance between the two points. Finding a minimum spanning tree for this graph is known as the Euclidean minimum spanning tree problem (EMSTP). The minimum spanning tree algorithms alluded to before run in time O(n²) (or nearly O(n²)) on this graph. In this note we point out that it is possible to devise simple algorithms for EMSTP in k dimensions (for any constant k) whose expected run time is O(n), under the assumption that the points are uniformly distributed in the space of interest. CR Categories: F2.2 Nonnumerical Algorithms and Problems; G.3 Probabilistic Algorithms
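The O(n²) baseline mentioned in this note can be sketched with Prim's algorithm on the implicit complete graph, recomputing Euclidean distances on demand instead of storing all n(n-1)/2 edges. This is a plain illustration of the EMSTP, not the note's O(n) expected-time algorithm:

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the implicit complete Euclidean graph: O(n^2)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n     # cheapest edge connecting i to the tree
    parent = [-1] * n
    best[0] = 0.0
    edges, total = [], 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v], parent[v] = d, u
    return edges, total

pts = [(0, 0), (1, 0), (1, 1), (5, 5)]
edges, total = euclidean_mst(pts)
print(edges, round(total, 3))  # → [(0, 1), (1, 2), (2, 3)] 7.657
```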
Article
Full-text available
Consider the following heuristic for planar Euclidean instances of the Traveling Salesman Problem (TSP): select a subset of the edges which induces a planar graph, and solve either the TSP or its graphical relaxation on that graph. In this paper, we give several motivations for considering this heuristic, along with extensive computational results. It turns out that the Delaunay and greedy triangulations make effective choices for the induced planar graph. Indeed, our experiments show that the resulting tours are on average within 0.1% of optimality.
Article
Full-text available
In this paper we present a fast approach for solving large scale Traveling Salesman Problems (TSPs). The heuristic is based on Delaunay Triangulation and its runtime is therefore bounded by O(n log n). The algorithm starts by constructing the convex hull and successively replaces one edge with two new edges of the triangulation, thus inserting a new city. The decision of which edge to remove is based on edge ranks. Finally, the tour is subject to a node-insertion improvement heuristic. Extensive case studies show that only highly optimized 2/3-Opt heuristics are superior to this approach in both running time and tour length for very large TSP instances.
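The hull-first insertion idea can be sketched without a Delaunay structure: start the tour at the convex hull and insert each remaining city where it lengthens the tour least. Note this cheapest-insertion variant is a simplification; the paper ranks Delaunay edges rather than scanning all tour edges:

```python
import math

def convex_hull(points):
    """Andrew's monotone chain convex hull."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def insertion_tour(points):
    """Cheapest insertion starting from the convex hull (a sketch of
    the hull-first strategy, not the paper's edge-rank heuristic)."""
    tour = convex_hull(points)
    remaining = [p for p in points if p not in tour]
    while remaining:
        best = None
        for city in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                delta = (math.dist(a, city) + math.dist(city, b)
                         - math.dist(a, b))
                if best is None or delta < best[0]:
                    best = (delta, i, city)
        _, i, city = best
        tour.insert(i + 1, city)
        remaining.remove(city)
    return tour

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 1)]
print(insertion_tour(pts))
```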
Article
Full-text available
Human faces are remarkably similar in global properties, including size, aspect ratio, and location of main features, but can vary considerably in details across individuals, gender, race, or due to facial expression. We propose a novel method for 3D shape recovery of faces that exploits the similarity of faces. Our method obtains as input a single image and uses a mere single 3D reference model of a different person's face. Classical reconstruction methods from single images, i.e., shape-from-shading, require knowledge of the reflectance properties and lighting as well as depth values for boundary conditions. Recent methods circumvent these requirements by representing input faces as combinations (of hundreds) of stored 3D models. We propose instead to use the input image as a guide to "mold" a single reference model to reach a reconstruction of the sought 3D shape. Our method assumes Lambertian reflectance and uses harmonic representations of lighting. It has been tested on images taken under controlled viewing conditions as well as on uncontrolled images downloaded from the Internet, demonstrating its accuracy and robustness under a variety of imaging conditions and overcoming significant differences in shape between the input and reference individuals including differences in facial expressions, gender, and race.
Article
Full-text available
In this paper, we present a novel approach to 3D face matching that shows high effectiveness in distinguishing facial differences between distinct individuals from differences induced by nonneutral expressions within the same individual. The approach takes into account geometrical information of the 3D face and encodes the relevant information into a compact representation in the form of a graph. Nodes of the graph represent equal width isogeodesic facial stripes. Arcs between pairs of nodes are labeled with descriptors, referred to as 3D Weighted Walkthroughs (3DWWs), that capture the mutual relative spatial displacement between all the pairs of points of the corresponding stripes. Face partitioning into isogeodesic stripes and 3DWWs together provide an approximate representation of local morphology of faces that exhibits smooth variations for changes induced by facial expressions. The graph-based representation permits very efficient matching for face recognition and is also suited to being employed for face identification in very large data sets with the support of appropriate index structures. The method obtained the best ranking at the SHREC 2008 contest for 3D face recognition. We present an extensive comparative evaluation of the performance with the FRGC v2.0 data set and the SHREC08 data set.
Article
Full-text available
One of the challenges in automatic face recognition is to achieve temporal invariance. In other words, the goal is to come up with a representation and matching scheme that is robust to changes due to facial aging. Facial aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). These shape and texture changes degrade the performance of automatic face recognition systems. However, facial aging has not received substantial attention compared to other facial variations due to pose, lighting, and expression. We propose a 3D aging modeling technique and show how it can be used to compensate for the age variations to improve the face recognition performance. The aging modeling technique adapts view-invariant 3D face models to the given 2D face aging database. The proposed approach is evaluated on three different databases (FG-NET, MORPH, and BROWNS) using FaceVACS, a state-of-the-art commercial face recognition engine.
Article
Full-text available
This paper presents a novel automatic framework to perform 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM), as similarity measure, in order to match two face images. The authentication score is obtained by combining the SIM values corresponding to the matching of four different face regions: circular and elliptical areas around the nose, forehead, and the entire face region. Then, a modified SA approach is proposed taking advantage of invariant face regions to better handle facial expressions. Comprehensive experiments were performed on the FRGC v2 database, the largest available database of 3D face images composed of 4,007 images with different facial expressions. The experiments simulated both verification and identification systems and the results compared to those reported by state-of-the-art works. By using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent. In the identification scenario, a rank-one accuracy of 98.4 percent was achieved. To the best of our knowledge, this is the highest rank-one score ever achieved for the FRGC v2 database when compared to results published in the literature.
Article
Full-text available
The essential midline symmetry of human faces is shown to play a key role in facial coding and recognition. This also has deep and important connections with recent explorations of the organization of primate cortex, as well as human psychophysical experiments. Evidence is presented that the dimension of face recognition space for human faces is dramatically lower than previous estimates. One result of the present development is the construction of a probability distribution in face space that produces an interesting and realistic range of (synthetic) faces. Another is a recognition algorithm that by reasonable criteria is nearly 100% accurate.
Article
Since research on face recognition began in the 1960's, the field has rapidly widened to automated face analysis including face detection, facial gesture recognition, and facial expression recognition. Automated Face Analysis: Emerging Technologies and Research provides theoretical background to understand the overall configuration and challenging problem of automated face analysis systems, featuring a comprehensive review of recent research for the practical implementation of the analysis system. A must-read for practitioners and students in the field, this book provides understanding by systematically dividing the subject into several subproblems such as detection, modeling, and tracking of the face.
Chapter
The rapid development of biometric technologies is one of the modern world's phenomena, which can be justified by the strong need for the increased security by the society and the spur in the new technological developments driven by the industries. This chapter examines a unique aspect of the problem — the development of new approaches and methodologies for biometric identification, verification and synthesis utilizing the notion of proximity and topological properties of biometric identifiers. The use of recently developed advanced techniques in computational geometry and image processing is examined with the purpose of finding the common denominator between the different biometric problems, and identifying the most promising methodologies. The material of the chapter is enhanced with recently obtained experimental results for fingerprint identification, facial expression modeling, iris synthesis, and hand boundary tracing.
Article
The solutions of traveltime inversion problems are often not unique because of the poor match between the raypath distribution and the tomographic grid. However, by adapting the local resolution iteratively, by means of a singular value analysis of the tomographic matrix, we can reduce or eliminate the null space influence on our earth image: in this way, we get a much more reliable estimate of the velocity field of seismic waves. We describe an algorithm for an automatic regridding, able to fit the local resolution to the available raypaths, which is based on Delaunay triangulation and Voronoi tessellation. It increases the local pixel density where the null space energy is low or the velocity gradient is large, and reduces it elsewhere. Consequently, the tomographic image can reveal the boundaries of complex objects, but is not affected by the ambiguities that occur when the grid resolution is not adequately supported by the available raypaths.
Conference Paper
Let p and q be a pair of points in a set S of N points in the plane. Let d(p, q) be the Euclidean distance between p and q and let DT(p, q) be the length of the shortest path from p to q in the Delaunay triangulation of S. We show that the ratio DT(p, q)/d(p, q) ≤ 2π/(3 cos(π/6)) ≈ 2.42, independent of S and N.
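The numeric value of the stretch-factor bound stated above can be checked directly:

```python
import math

# Delaunay stretch-factor bound: 2*pi / (3 * cos(pi/6))
bound = 2 * math.pi / (3 * math.cos(math.pi / 6))
print(round(bound, 2))  # → 2.42
```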
Conference Paper
3D face recognition is a very active biometric research field. Due to the 3D data’s insensitivity to illumination and pose variations, 3D face recognition has the potential to perform better than 2D face recognition. In this paper, we focus on local feature based 3D face recognition, and propose a novel Faceprint method. SIFT features are extracted from texture and range images and matched, the matching number of key points together with geodesic distance ratios between models are used as three kinds of matching scores, likelihood ratio based score level fusion is conducted to calculate the final matching score. Thanks to the robustness of SIFT, shape index, and geodesic distance against various changes of geometric transformation, illumination, pose and expression, the Faceprint method is inherently insensitive to these variations. Experimental results indicate that Faceprint method achieves consistently high performance comparing with commonly used SIFT on texture images.
Article
To detect the human body and remove noise from complex backgrounds, illumination variations and occluding objects, infrared thermal imaging was applied to collect gait video, and an infrared thermal gait database was established in this paper. A multi-variable gait feature was extracted using a novel method combining an integral model and a simplified model; wavelet transforms, invariant moments and skeleton theory were also used to extract gait features. A support vector machine was employed to classify gaits. Applied to the infrared gait database, the proposed method achieved a probability of correct recognition of 78%–91%. The recognition rates were insensitive to subjects holding a ball or carrying a package, but wearing a heavy coat had a significant influence. Infrared thermal imaging shows potential for better describing the moving human body within image sequences.
Conference Paper
We present a new technique for fingerprint minutiae matching. The proposed method connects minutiae using a Delaunay triangulation and analyzes the relative position and orientation of each minutia with respect to its neighbors obtained by the triangle structure. Due to non-linear deformations, we admit a certain degree of triangle deformation. If rotations and translations are present, the triangle structure does not change consistently. Two fingerprints are considered matching, if their triangle structures are similar according the neighbor relationship. The algorithm performance are evaluated on a public domain database.
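The core comparison in this minutiae matcher, accepting triangles whose structure is similar up to rotation, translation, and a bounded deformation, can be sketched by comparing sorted side lengths with a relative tolerance. The tolerance value and function names below are illustrative assumptions, not the paper's parameters:

```python
import math

def triangle_signature(tri):
    """Sorted side lengths: invariant to rotation and translation."""
    a, b, c = tri
    return sorted([math.dist(a, b), math.dist(b, c), math.dist(c, a)])

def triangles_match(t1, t2, tol=0.15):
    """Accept a pair if corresponding sides differ by at most `tol`
    (relative), admitting a degree of non-linear deformation."""
    s1, s2 = triangle_signature(t1), triangle_signature(t2)
    return all(abs(x - y) <= tol * max(x, y) for x, y in zip(s1, s2))

t = [(0, 0), (3, 0), (0, 4)]
t_rotated = [(0, 0), (0, 3), (-4, 0)]   # same triangle, rotated 90°
t_other = [(0, 0), (10, 0), (0, 1)]
print(triangles_match(t, t_rotated))  # → True
print(triangles_match(t, t_other))    # → False
```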
Article
Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.
Article
The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
Article
Most traditional face recognition systems attempt to achieve a low recognition error rate, implicitly assuming that the losses of all misclassifications are the same. In this paper, we argue that this is far from a reasonable setting because, in almost all application scenarios of face recognition, different kinds of mistakes will lead to different losses. For example, it would be troublesome if a door locker based on a face recognition system misclassified a family member as a stranger such that she/he was not allowed to enter the house, but it would be a much more serious disaster if a stranger was misclassified as a family member and allowed to enter the house. We propose a framework which formulates the face recognition problem as a multiclass cost-sensitive learning task, and develop two theoretically sound methods for this task. Experimental results demonstrate the effectiveness and efficiency of the proposed methods.
Article
In this paper, we present a compositional and dynamic model for face aging. The compositional model represents faces in each age group by a hierarchical And-Or graph, in which And nodes decompose a face into parts to describe details (e.g., hair, wrinkles, etc.) crucial for age perception and Or nodes represent large diversity of faces by alternative selections. Then a face instance is a transverse of the And-Or graph-parse graph. Face aging is modeled as a Markov process on the parse graph representation. We learn the parameters of the dynamic model from a large annotated face data set and the stochasticity of face aging is modeled in the dynamics explicitly. Based on this model, we propose a face aging simulation and prediction algorithm. Inversely, an automatic age estimation algorithm is also developed under this representation. We study two criteria to evaluate the aging results using human perception experiments: 1) the accuracy of simulation: whether the aged faces are perceived of the intended age group, and 2) preservation of identity: whether the aged faces are perceived as the same person. Quantitative statistical analysis validates the performance of our aging model and age estimation algorithm.
Conference Paper
In this paper, we describe an algorithm for generating three-dimensional models of human faces from uncalibrated images. Input images are taken by a camera, generally with a small rotation around a single axis, which may cause degenerate solutions during auto-calibration. We describe a solution to this problem based on a priori assumptions about the camera. To generate a specific person's head, a generic human head model is deformed according to the 3D coordinates of points obtained by reconstructing the scene from images calibrated with our algorithm. The deformation process is based on a physically based massless spring model and requires local re-triangulation in areas of high curvature; this is achieved by locally applying the Delaunay triangulation method. However, degeneracies such as edge encroachment may occur in Delaunay triangulation. We describe an algorithm for removing these degeneracies during triangulation by modifying the definition of the Delaunay cavity; this algorithm also has the effect of preserving the curvature in the face area. We have compared the models generated with our algorithm with models obtained using cyberscanners; the RMS geometric error in these comparisons is less than 1.8 × 10⁻².
Conference Paper
Presents an indexing-based approach to fingerprint identification. Central to the proposed approach is the idea of associating a unique topological structure with the fingerprint minutiae using Delaunay triangulation. This allows for choosing more "meaningful" minutiae groups (i.e., triangles) during indexing, preserves index selectivity, reduces memory requirements without sacrificing recognition accuracy, and improves recognition time. Specifically, assuming N minutiae per fingerprint on average, the proposed approach considers only O(N) minutiae triangles during indexing or recognition. This compares favorably to O(N³), the number of triangles usually considered by other approaches, leading to significant memory savings and improved recognition time. Besides their small number, the minutiae triangles we used for indexing have good discrimination power since, among all possible minutiae triangles, they are the only ones satisfying the properties of the Delaunay triangulation. As a result, index selectivity is preserved and indexing can be implemented in a low-dimensional space. Some key characteristics of the Delaunay triangulation are: (i) it is unique (assuming no degeneracies), (ii) it can be computed efficiently in O(N log N) time, and (iii) noise or distortions affect it only locally. The proposed approach has been tested on a database of 300 fingerprints (10 fingerprints from 30 persons), demonstrating good performance.
The application of Delaunay triangulation to face recognition
  • JY Chiang
  • RC Wang
  • JL Chang
Authentication techniques based on face recognition
  • Z Lončarević
  • K Delac
  • M Grgic
  • BM Stewart