Article

Biometric Recognition in Automated Border Control: A Survey

Authors:
  • Ruggero Donida Labati et al., Università degli Studi di Milano

Abstract

The increasing demand for traveler clearance at international border crossing points (BCPs) has motivated research into more efficient solutions. Automated border control (ABC) is emerging as a solution to enhance the convenience of travelers, the throughput of BCPs, and national security. This is the first comprehensive survey on the biometric techniques and systems that enable automatic identity verification in ABC. We survey the biometric literature relevant to identity verification and summarize the best practices and biometric techniques applicable to ABC, relying on real experience collected in the field. Furthermore, we select some of the major biometric issues raised and highlight the open research areas.

... One may argue that identifiers that are harder to circumvent, such as iris or retina scans, should be incorporated to compensate for how easily facial recognition can be circumvented. However, multimodal biometrics can increase system costs as well as acquisition and computational times (Kosmerlj, Fladsrud, Hjelmås & Snekkenes, 2005; Labati et al., 2016). Nevertheless, during the coronavirus pandemic, where travellers are required to wear a mask, iris recognition may be more efficient than facial recognition (Kosmerlj et al., 2005). ...
... Similarly, Labati et al. (2016) suggest almost identical factors, as well as additional factors such as: ...
... According to Fergusson (2015) and Labati et al. (2016), border officers typically have 12 s to decide whether the traveller is allowed to cross the border. Such time limitations emphasise the need for automated processing to facilitate the clearance of passengers while also maintaining high security levels (Knol et al., 2019). ...
Article
Biometrics in an airport environment can provide a contactless way of identity verification. The U.S. Department of Homeland Security (DHS) has been trialling and implementing the Biometric Entry Exit Program at U.S. Customs and Border Protection (CBP). Using the Traveller Verification System (TVS), the program biometrically confirms the traveller's identity and their entry or exit, with an increased ability to detect fraudulent documents and visa overstays. This paper assesses the Biometric Exit Program to analyse the use of biometrics at airports and identify the challenges faced. An analysis is conducted on the Entry Exit Program at Dublin Airport, including facial recognition boarding gates. Pilot test results from Dublin Airport and other U.S. airports are used to identify challenges. These include a gap in stakeholder support, a low biometric matching rate, infrastructure and network connectivity issues, privacy concerns amongst travellers, and a heavy reliance on airlines. Recommendations and solutions for advancement are provided.
... (1,3,4) Furthermore, many countries use automated border control (ABC) systems with the e-passport as the required token to consolidate security at borders in the face of threats of terrorism and immigration pressures. (5)(6)(7)(8) A fingerprint sensing application is used to manage border control in the border-checking environment. ... framework, and human-machine interface; and (4) to establish an integration system to understand sustainable border strategies. ...
... The border-crossing checks include three procedures: (1) authentication of the travel document, (2) verification of the traveler's identity, and (3) determination of whether or not the traveler is allowed to cross the border. (6,35) The BIS first checks whether the traveler is carrying valid travel documents. Then, it captures a biometric sample of the holder (typically, a face image, fingerprint, or other modality) and checks it against the image obtained from the biometric features database in verification mode. ...
... Finally, it assesses whether or not the traveler is authorized to cross the border. The ABC process steps are shown in Fig. 3. (6) A biometric system used for border control should be combined with automated identity verification and inspection procedures. First, a traveler presents their passport, and the document is inspected at the entrance to security. ...
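The three procedures listed above (document authentication, identity verification, crossing authorization) can be pictured as a short decision pipeline. Below is a minimal, self-contained Python sketch of that flow; the data class, the cosine-similarity matcher, and the 0.6 acceptance threshold are illustrative assumptions, not the interface of any deployed ABC product.

```python
# Minimal sketch of the three-step ABC check described above (document
# authentication, identity verification, crossing authorization).
# The data class, similarity measure, and 0.6 threshold are illustrative
# placeholders, not the interface of any real ABC product.
from dataclasses import dataclass
from math import sqrt

@dataclass
class TravelDocument:
    holder_id: str
    chip_signature_valid: bool        # result of e-passport chip verification
    biometric_reference: list[float]  # reference template read from the chip

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity as a stand-in comparison score for normalized templates."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def abc_border_check(doc: TravelDocument, live_template: list[float],
                     watchlist: set[str], threshold: float = 0.6) -> dict:
    # Step 1: authenticate the travel document.
    if not doc.chip_signature_valid:
        return {"document": False, "identity": False, "authorized": False}
    # Step 2: verify the traveler's identity against the stored reference.
    identity_ok = similarity(doc.biometric_reference, live_template) >= threshold
    # Step 3: decide whether the traveler may cross (alert/watchlist check).
    authorized = identity_ok and doc.holder_id not in watchlist
    return {"document": True, "identity": identity_ok, "authorized": authorized}

# Example: a genuine traveler whose live template closely matches the chip reference.
doc = TravelDocument("P1234567", True, [0.1, 0.8, 0.3])
print(abc_border_check(doc, [0.12, 0.79, 0.31], watchlist={"P0000001"}))
```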
Article
Rapidly increasing traveler traffic has burdened border security and clearance services. Therefore, assisting border guards and travelers in carrying out sustainable border strategies is becoming more crucial. The key to such strategies lies in the interaction between sensing systems and user characteristics, and sensing technology is of great help for an efficient clearance service and smart border control. However, a practical problem is that deciding whether to allow or prevent entry at the border is ambiguous because checking identities with only a visual inspection is cumbersome. Hence, a biometric verification system (BVS) that compares a live biometric sample of a traveler against images of them obtained from the database is needed. In this study, we model and develop a fingerprint sensing system using the design science research methodology (DSRM). An integrated framework, components, workflows, and schema are proposed. The novelty of this system is that the artifact retains the flexibility of the acquisition and matching processes. After presenting a case study with fingerprint modal biometrics, we evaluate the usage of such a sensing application and the results to generate a sustainable strategy for managing security issues in the border-checking environment. This study offers a solution to balance the convenience of travelers and the role of inspectors.
... The next-generation security checkpoint for mass-transit hubs and large public events is anticipated to be a system that combines biometric-enabled authentication [12], [19], [31], [45] and watchlist checks [32], screening strategies [37], deception feature detection [1], [41], [43], [55], as well as concealed illicit item detection [38], [53]. Such a system will be an integral part of the security infrastructure, including a logistics and surveillance network with the ability to track individuals of interest and analyze group behavior. ...
... -Source A: Acoustic domain as a human-machine communication channel with a frequency range of 70 to 200 Hz for men, 150 to 400 Hz for women, and 200 to 600 Hz for children; the pitch, loudness and timbre of a human voice are the main parameters used by an e-interviewer for emotion and deception detection [1], [41], [55], [62]; -Source B: Infrared domain; the human body radiates non-visible infrared light (3-12 μm waves) in proportion to its temperature; this band is used for assessment of both cognitive and physical state [43]; -Source C: Visual domain, 400-700 nm, for authentication and emotional state assessment using face and facial expression recognition [31], [32]; -Source D: Radar illumination, 3-10 GHz; certain concealed items can be detected using Ultra Wide Band (UWB) radar [24], [38]. ...
... Sources of information can have dual or multiple roles over the states. 2) Time limits of the operations place certain constraints on the system performance [28], [31], [45]. An example is a security problem known as the "bottleneck" and "traveler redress". ...
... In a segregated two-step process, the validity of the document and the travel authorization can be checked at a different time and place during a border crossing. In order to perform the required steps, four subsystems are involved: (i) the Document Authentication System (DAS), which checks the validity of the document; (ii) the Biometric Verification System (BVS), which captures live biometric samples and compares them with the ones contained in the document; (iii) the Central Systems Interface (CSI), which handles communication with external systems; and (iv) the Border Guard Maintenance System (BGMS), which is used by the officers to monitor the ABC system [6]. ...
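As an illustration of how these four subsystems could be wired together around a single e-gate transaction, here is a hedged Python sketch; the class names mirror the acronyms above, but every method body is a placeholder assumption rather than a standardized ABC interface.

```python
# Sketch of how the four ABC subsystems named above (DAS, BVS, CSI, BGMS)
# might be composed around one e-gate transaction. Class and method names
# are assumptions for illustration, not a standardized API.
class DocumentAuthenticationSystem:
    def check(self, document) -> bool:
        # Placeholder: verify chip signature and optical security features.
        return document.get("signature_valid", False)

class BiometricVerificationSystem:
    def verify(self, document, live_sample) -> bool:
        # Placeholder: compare the live sample with the chip reference.
        return live_sample == document.get("reference")

class CentralSystemsInterface:
    def background_check(self, holder_id) -> bool:
        # Placeholder: query external systems (e.g., alert databases).
        return holder_id not in {"WANTED-001"}

class BorderGuardMaintenanceSystem:
    def notify(self, holder_id, decision: str) -> None:
        # Placeholder: show transaction status on the officer's monitoring console.
        print(f"[BGMS] {holder_id}: {decision}")

def egate_transaction(document, live_sample, das, bvs, csi, bgms) -> bool:
    ok = (das.check(document)
          and bvs.verify(document, live_sample)
          and csi.background_check(document["holder_id"]))
    bgms.notify(document["holder_id"], "cleared" if ok else "refer to second line")
    return ok

doc = {"holder_id": "P7654321", "signature_valid": True, "reference": "tpl-42"}
egate_transaction(doc, "tpl-42",
                  DocumentAuthenticationSystem(), BiometricVerificationSystem(),
                  CentralSystemsInterface(), BorderGuardMaintenanceSystem())
```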
... In such systems, each passenger's transaction is submitted to a monitoring system that provides timely and complete information on the passenger's identity and transaction status. The complex software framework is also able to detect identity fraud, extract biometric and biographic information, perform real-time checks of intelligence and criminal databases, and instantly alert border control officers in case of anomalies or emergencies [6]. Such solutions should not only be fast and reliable for airports and border authorities, but also user friendly for travelers. ...
... Donida Labati et al. [6] provided a comprehensive survey of biometric techniques and systems that enable automatic identity verification for ABC systems. In [8], J. Sánchez del Río discussed an ABC e-gate system that used a face recognition algorithm, along with its performance and the image quality metrics recommended by the International Civil Aviation Organization. ...
Conference Paper
Full-text available
Over 4.1 billion aircraft passengers flew in 2017. This number is expected to nearly double by 2037. Due to the significant growth in airline services and passenger traffic, automated border control (ABC) systems have been installed to control the air traffic flow in an automatic manner while maintaining security. However, there are multiple security checkpoints from airport entry to boarding the flight, which require passengers to be re-authenticated multiple times. We propose a deep representation-based learning approach to combine face and soft biometrics. The proposed model has applications in automated border control to further ease the traffic flow while maintaining high security at various checkpoints. Using a deep learning-based feature fusion framework, our method obtains a 99.02% genuine match rate (GMR) at a false match rate (FMR) of 10⁻⁵ using a ResNet-18 model.
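The headline metric here, GMR at a fixed FMR, is computed by picking the decision threshold from the impostor score distribution and then measuring how many genuine comparisons exceed it. The Python sketch below shows that computation on synthetic scores; the score distributions are invented for illustration and do not come from the cited fusion model.

```python
# Sketch of the evaluation metric quoted above: genuine match rate (GMR) at a
# fixed false match rate (FMR). Scores here are synthetic; in the cited work
# they would come from the fused face + soft-biometric deep features.
import numpy as np

def gmr_at_fmr(genuine_scores, impostor_scores, target_fmr=1e-5):
    impostor_scores = np.sort(impostor_scores)
    # Threshold chosen so that at most target_fmr of impostor scores exceed it.
    k = int(np.ceil(len(impostor_scores) * (1 - target_fmr))) - 1
    threshold = impostor_scores[min(max(k, 0), len(impostor_scores) - 1)]
    return float(np.mean(np.asarray(genuine_scores) > threshold)), float(threshold)

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.08, 5_000)     # toy genuine comparison scores
impostor = rng.normal(0.3, 0.10, 200_000)  # toy impostor comparison scores
gmr, thr = gmr_at_fmr(genuine, impostor, target_fmr=1e-5)
print(f"GMR = {gmr:.4f} at FMR = 1e-5 (threshold {thr:.3f})")
```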
... Most of these systems employ automated, multimodal, biometrics-based identification and verification for personal recognition. Hence, there have been massive deployments of such systems in Automated Border Control (ABC) [14,20], typically through so-called e-gates, in many EU ports of entry in recent years [15]. ...
... Generally speaking, a shift of balance towards automation is perceived simply as a normal course of development in any branch of industry or the service market. The use of biometric technologies is a key driver in this shift, as it facilitates identification of humans by machines, enabling the new technology of Automated Border Control (ABC) [14,20]. This enables border authorities to regroup human resources from mass border checks (called a "first line" check) to special cases ("second line" or "thorough" checks) where more attention is needed (for example, to detect victims of child trafficking at the border). ...
... ABC [14,20] utilizes biometric technologies (which exploit physiological and behavioral characteristics for identification and verification) as a means of identifying and verifying an individual's identity. The method used at an ABC gate involves verifying an individual's identity by comparing the new template generated at the automated biometric gate (e-gate) to the enrolled template from the biometric samples stored in an electronic document such as the e-Passport [3]. ...
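The comparison described here is a 1:1 (verification) check against the single reference read from the e-Passport, which differs from the 1:N (identification) search used, for example, against a watchlist gallery. The sketch below contrasts the two modes on abstract feature vectors; the cosine matcher and thresholds are assumptions for illustration.

```python
# Sketch distinguishing the 1:1 verification used at an e-gate (live template
# vs. the single reference read from the e-Passport) from 1:N identification
# against a gallery such as a watchlist. Similarity measure and thresholds are
# illustrative assumptions.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_1_to_1(live_template, passport_template, threshold=0.6) -> bool:
    """Token-based check: is the e-gate user the e-Passport holder?"""
    return cosine(live_template, passport_template) >= threshold

def identify_1_to_n(live_template, gallery: dict, threshold=0.7):
    """Watchlist-style search: who, if anyone, does the live template match?"""
    best_id, best_score = None, -1.0
    for identity, reference in gallery.items():
        score = cosine(live_template, reference)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

live = [0.2, 0.9, 0.4]
print(verify_1_to_1(live, [0.21, 0.88, 0.41]))
print(identify_1_to_n(live, {"alert-17": [0.9, 0.1, 0.2], "alert-42": [0.2, 0.9, 0.38]}))
```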
Article
Full-text available
Advances in technology have a substantial impact on every aspect of our lives, ranging from the way we communicate to the way we travel. The Smart mobility at the European land borders (SMILE) project is geared towards the deployment of biometric technologies to optimize and monitor the flow of people at land borders. However, despite the anticipated benefits of deploying biometric technologies in border control, there are still divergent views on the use of such technologies by two primary stakeholders–travelers and border authorities. In this paper, we provide a comparison of travelers’ and border authorities’ views on the deployment of biometric technologies in border management. The overall goal of this study is to enable us to understand the concerns of travelers and border guards in order to facilitate the acceptance of biometric technologies for a secure and more convenient border crossing. Our method of inquiry consisted of in-person interviews with border guards (SMILE project’s end users), observation and field visits (to the Hungarian-Romanian and Bulgarian-Romanian borders) and questionnaires for both travelers and border guards. As a result of our investigation, two conflicting trends emerged. On one hand, border guards argued that biometric technologies had the potential to be a very effective tool that would enhance security levels and make traveler identification and authentication procedures easy, fast and convenient. On the other hand, travelers were more concerned about the technologies representing a threat to fundamental rights, personal privacy and data protection.
... Hence, automated face recognition can be exploited in a large number of practical applications [2]. Among others, it has shown excellent capabilities in security applications like intelligent surveillance [3,4], user authentication applications like traveler verification at border crossing points [5,6], and diverse other mobile and social media applications [7][8][9][10]. Indeed, person identity prediction based on facial features for practical purposes is a valuable tool in modern information technology [11]. ...
... Holistic approaches assign equal importance to all the pixels rather than paying special attention to a set of points of interest. Hence, these approaches offer higher distinctive power at the cost of increased computational complexity [6,30]. ...
Article
Full-text available
Deep learning technology has enabled successful modeling of complex facial features when high-quality images are available. Nonetheless, accurate modeling and recognition of human faces in real-world scenarios "in the wild" or under adverse conditions remains an open problem. Consequently, a plethora of novel deep network architectures addressing issues related to low-quality images, varying pose, illumination changes, emotional expressions, etc., have been proposed and studied over the last few years. This survey presents a comprehensive analysis of the latest developments in the field. A conventional deep face recognition system entails several main components: deep network, optimization loss function, classification algorithm, and training data collection. Aiming at providing a complete and comprehensive study of such complex frameworks, this paper first discusses the evolution of related network architectures. Next, a comparative analysis of loss functions, classification algorithms, and face datasets is given. Then, a comparative study of state-of-the-art face recognition systems is presented. Here, the performance of the systems is discussed using three benchmarking datasets with increasing degrees of complexity. Furthermore, an experimental study was conducted to compare several openly accessible face recognition frameworks in terms of recognition accuracy and speed.
... − security checkpoints (identity management) [1], − personal health monitoring systems [2], − e-coaching for health [3], − driver assistant (e.g., fatigue and stress detection) [4], − multi-factor authentication systems [5], − spoken conversational agents [6], − epidemiological surveillance [7] and, − preparedness systems for emerging health service [8]. ...
... Synthetic data is required at various CI operations and processes. For example, the sub-state S_m^(1) of state S_m is defined under learnt ID source reliability using authentic data from previous experience, while the potential attack data can be synthesized. This enables assessment of the R-T-B of such rare events (attacks). ...
Article
Full-text available
Biometrics and biometric-enabled decision support systems (DSS) have become a mandatory part of complex dynamic systems such as security checkpoints, personal health monitoring systems, autonomous robots, and epidemiological surveillance. Risk, trust, and bias (R-T-B) are emerging measures of performance of such systems. The existing studies on the R-T-B impact on system performance mostly ignore the complementary nature of R-T-B and their causal relationships, for instance, risk of trust, risk of bias, and risk of trust over biases. This paper offers a complete taxonomy of the R-T-B causal performance regulators for the biometric-enabled DSS. The proposed novel taxonomy links the R-T-B assessment to the causal inference mechanism for reasoning in decision making. Practical details of the R-T-B assessment in the DSS are demonstrated using the experiments of assessing the trust in synthetic biometric and the risk of bias in face biometrics. The paper also outlines the emerging applications of the proposed approach beyond biometrics, including decision support for epidemiological surveillance such as for COVID-19 pandemics.
... The key system property for the usability of SSI solutions in practical situations is low interaction latency. It would be inconvenient to wait an hour at the airport to have your identity verified and, in fact, electronic border control should take no longer than 30 seconds to be considered usable [33]. In this section we show that the latency of the interactions within the SSI overlay is well within this 30-second limit, finishing consistently within three seconds for all of the implementations of credential proofs we evaluate. ...
... What is gained in enrollment latency is lost in verification latency and vice-versa, with Hyperledger Indy being the most consistent between these two categories (but also not the fastest). The evaluated solutions all finish within 3 seconds, making them usable for, e.g., electronic border control, which should be finished in 30 seconds [33]. ...
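To make the latency budget concrete, the short sketch below times a stand-in verification call and compares it against the 30-second usability limit cited for electronic border control; the verify_credential_proof function is a placeholder, not part of any SSI implementation evaluated in the cited work.

```python
# Sketch of the latency check discussed above: time an identity-verification
# round trip and compare it against the ~30 s usability budget for electronic
# border control (the evaluated SSI solutions reportedly finish within ~3 s).
# The verification step here is a stand-in, not a real credential protocol.
import time

BORDER_CONTROL_BUDGET_S = 30.0

def timed(fn, *args, **kwargs):
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def verify_credential_proof(proof: bytes) -> bool:
    # Placeholder for a credential-proof verification (e.g., a signature check);
    # a real SSI verification would also involve network round trips.
    time.sleep(0.05)
    return len(proof) > 0

ok, elapsed = timed(verify_credential_proof, b"example-proof")
print(f"verified={ok} in {elapsed:.3f}s "
      f"({'within' if elapsed <= BORDER_CONTROL_BUDGET_S else 'over'} the 30 s budget)")
```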
Conference Paper
Full-text available
Existing digital identity management systems fail to deliver the desirable properties of control by the users of their own identity data, credibility of disclosed identity data, and network-level anonymity. The recently proposed Self-Sovereign Identity (SSI) approach promises to give users these properties. However, we argue that without addressing privacy at the network level, SSI systems cannot deliver on this promise. In this paper we present the design and analysis of our solution TCID, created in collaboration with the Dutch government. TCID is a system consisting of a set of components that together satisfy seven functional requirements to guarantee the desirable system properties. We show that the latency incurred by network-level anonymization in TCID is significantly larger than that of identity data disclosure protocols but is still low enough for practical situations. We conclude that current research on SSI is too narrowly focused on these data disclosure protocols.
... • authentication machines [40], [96], e-passports/ID technology [8], ...
... -Authentication and risk assessment machines for mass-transit hubs and large public events. These machines operate under the umbrella of specific supporting infrastructure [96], [99], [100]. -Ambient assistants aim at social issues of critical importance: care management based on new cognitive paradigms [77]. ...
Preprint
Full-text available
This paper aims at identifying emerging computational intelligence trends for the design and modeling of complex biometric-enabled infrastructure and systems. Biometric-enabled systems are evolving towards deep learning and deep inference using the principles of adaptive computing, which form the leading edge of the modern computational intelligence domain. Therefore, we focus on intelligent inference engines widely deployed in biometrics. Computational intelligence applications that cover a wide spectrum of biometric tasks using physiological and behavioral traits are chosen for illustration. We highlight the technology gaps that must be addressed in future generations of biometric systems. The reported approaches and results primarily address researchers who work towards developing the next generation of intelligent biometric-enabled systems.
... ABC systems can have several physical configurations (Morosan, 2018). The most typical ones use electronic gates (e-gates) (Donida Labati et al., 2016). These devices regulate travellers' flow through the border with the use of biometric sensors (e.g. ...
Article
Millions of passengers travel every day, border crossing being one of their most common activities. At these points it is extremely important that security is completely guaranteed. However, maintaining adequate security levels is a very demanding task. This has promoted the development of systems capable of providing support to border authorities by automating some of their tasks. Thus, Automated Border Control (ABC) systems have become a key tool. These systems increase the flow of travellers as they can achieve fast evaluations of individuals through their machine-readable travel documents. However, this has motivated the appearance of attacks that try to evade the identity detection of individuals by these systems. Presentation Attack Detection (PAD) algorithms have emerged to mitigate such a problem. This paper presents the On-the-Fly Presentation Attack Detection (FlyPAD) framework, which implements a set of dynamic PAD techniques. It allows multiple types of attacks to be detected as the traveller approaches the ABC system, rather than requiring the traveller to stand still in front of the cameras. Several experiments have been carried out, both in the laboratory and in real environments, obtaining promising results.
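One way to picture the "on-the-fly" idea is as a sliding window of per-frame PAD scores that is aggregated while the traveller walks towards the gate, with a decision emitted once enough frames have been seen. The Python sketch below shows such an aggregator; the scoring convention, window size, and thresholds are assumptions, and the per-frame scorer itself is not implemented here.

```python
# Sketch of the "on-the-fly" idea described above: per-frame presentation-
# attack-detection (PAD) scores are aggregated while the traveller approaches
# the e-gate, and a decision is taken once enough frames are available.
# The per-frame scorer, window size, and thresholds are assumptions.
from collections import deque
from statistics import median

class OnTheFlyPAD:
    def __init__(self, window=15, bona_fide_threshold=0.5, min_frames=10):
        self.scores = deque(maxlen=window)   # sliding window of recent frames
        self.threshold = bona_fide_threshold
        self.min_frames = min_frames

    def update(self, frame_pad_score: float):
        """frame_pad_score: 0 = certain attack, 1 = certain bona fide (assumed convention)."""
        self.scores.append(frame_pad_score)

    def decision(self):
        if len(self.scores) < self.min_frames:
            return None                      # keep observing the approaching traveller
        return median(self.scores) >= self.threshold

pad = OnTheFlyPAD()
for score in [0.9, 0.85, 0.8, 0.9, 0.88, 0.92, 0.87, 0.9, 0.86, 0.91, 0.89]:
    pad.update(score)
print("bona fide" if pad.decision() else "suspected presentation attack")
```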
... There are numerous situations where human identification plays an important role for safety purposes, such as preventing intruder entry at construction sites, monitoring shopping mall and park entries, and identifying the movement of humans near the line of control (LOC) and borders [3,13,24]. In this study, a framework for IoT-based, fog-cloud-computing-assisted human identification at construction sites is proposed. ...
Article
Full-text available
Human identification on construction sites is critical for minimizing safety mishaps. Existing approaches have shortcomings such as a low recognition rate, workplace locating errors, and alarming latency. These challenges are addressed in this study by developing a Fog Cloud Computing, Internet of Things (IoT)-based human identification system based on gait sequences. Gait recognition, as a prospective biometric identification approach, has several advantages, including the ability to identify humans at a great distance, without any interaction, and its difficulty to imitate. However, due to the complexity of the external components involved in the collection and sampling of gait data and changes in the clothing style of the individual to be recognized, this recognition technology continues to confront several obstacles in real-time applications. In this study, the purpose is to offer a unique method for gait feature extraction and classification at construction sites. The feature vectors derived from Speeded Up Robust Features (SURF) and Convolutional Neural Networks (CNN) are integrated. The classification is performed by applying a Support Vector Machine (SVM) to increase the recognition rate at the fog layer. The decision-making, record storage, and monitoring processing are performed in the cloud layer. In a comparative analysis, experimental results demonstrate that the proposed model outperforms existing methods, attaining the highest accuracy of 97.19%.
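The core of the pipeline summarized here is "concatenate handcrafted and learned features, then classify with an SVM". The sketch below illustrates that fusion-then-classify pattern; the two feature extractors are random stand-ins (the cited system derives them from real gait sequences with SURF and a CNN), and the injected label signal exists only so the toy task is learnable.

```python
# Sketch of the fusion-then-classify pipeline summarized above: handcrafted
# (SURF-like) and learned (CNN-like) feature vectors are concatenated and an
# SVM predicts the identity. Feature extractors are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def surf_like_features(n): return rng.normal(size=(n, 64))    # stand-in handcrafted descriptors
def cnn_like_features(n):  return rng.normal(size=(n, 128))   # stand-in learned embeddings

n_samples, n_identities = 300, 10
labels = rng.integers(0, n_identities, n_samples)
fused = np.hstack([surf_like_features(n_samples), cnn_like_features(n_samples)])
fused[:, 0] += labels  # inject an identity-dependent component so the toy task is learnable

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels)

clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
print(f"toy recognition rate: {clf.score(X_test, y_test):.2%}")
```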
... Today, MFA systems can be classified into the following market groups: Forensic segment (corpse investigation, missing person search, criminal investigation, etc.) [104]; Government segment (border control, driver licenses, government ID, etc.) [105]; Commercial segment (account access, e-commerce, physical access control, etc.) [106]. ...
Thesis
Full-text available
There has been an unprecedented increase in the use of smart devices globally, together with novel forms of communication, computing, and control technologies that have paved the way for a new category of devices, known as high-end wearables. While massive deployments of these objects may improve the lives of people, unauthorized access to the said private equipment and its connectivity is potentially dangerous. Hence, communication enablers together with highly-secure human authentication mechanisms have to be designed. In addition, it is important to understand how human beings, as the primary users, interact with wearable devices on a day-to-day basis; usage should be comfortable, seamless, user-friendly, and mindful of urban dynamics. Usually the connectivity between wearables and the cloud is executed through the user’s more power-independent gateway: this will usually be a smartphone, which may have potentially unreliable infrastructure connectivity. In response to these unique challenges, this thesis advocates for the adoption of direct, secure, proximity-based communication enablers enhanced with multi-factor authentication (hereafter referred to as MFA) that can integrate/interact with wearable technology. Their intelligent combination together with the connection establishment automation relying on the device/user social relations would allow access to be reliably granted or denied in cases of both stable and intermittent connectivity to the trusted authority running in the cloud. The introduction will list the main communication paradigms, applications, conventional network architectures, and any relevant wearable-specific challenges. Next, the work examines the improved architecture and security enablers for clusterization between wearable gateways with a proximity-based communication as a baseline. Relying on this architecture, the author then elaborates on the social ties potentially overlaying the direct connectivity management in cases of both reliable and unreliable connection to the trusted cloud. The author discusses that social-aware cooperation and trust relations between users and/or the devices themselves are beneficial for the architecture under proposal. Next, the author introduces a protocol suite that enables temporary delegation of personal device use dependent on different connectivity conditions to the cloud. After these discussions, the wearable technology is analyzed as a biometric and behavior data provider for enabling MFA. The conventional approaches of the authentication factor combination strategies are compared with the ‘intelligent’ method proposed further. The assessment finds significant advantages to the developed solution over existing ones. On the practical side, the performance evaluation of existing cryptographic primitives, as part of the experimental work, shows the possibility of developing the experimental methods further on modern wearable devices. In summary, the set of enablers developed here for wearable technology connectivity is aimed at enriching people’s everyday lives in a secure and usable way, in cases when communication to the cloud is not consistently available.
... Individuals interact with recommender systems (algorithmic systems that make suggestions about what a user may like) on a daily basis, be it to choose a song, a movie, a product or even a friend (Paraschakis 2017; Perra and Rocha 2019; Milano et al. 2020). At the same time, schools and hospitals (Obermeyer et al. 2019; Zhou et al. 2019; Morley et al. 2019a, b), financial institutions (Lee and Floridi 2020; Aggarwal 2020), courts (Green and Chen 2019; Yu and Du 2019), local governmental bodies (Eubanks 2017; Lewis 2019), and national governments (Labati et al. 2016; Hauer 2019; Taddeo and Floridi 2018a; Taddeo et al. 2019; Roberts et al. 2019) all increasingly rely on algorithms to make significant decisions. ...
Article
Full-text available
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
... Hostage to the timbre of its voice or the pattern of its irises, the body offers itself up despite the subject.' Although critical scholars have pointed out how biometrics continue to fail to deliver on their powerful promises in practice (Jacobsen and Rao 2018; van der Ploeg and Sprenkels 2011), this has not diminished their political appeal (Caldwell 2015;Gelb and Clark 2013) but has rather reinforced technical efforts to address previous shortcomings (Abomhara et al. 2020;Labati et al. 2016;Robertson et al. 2017). ...
Article
Building on Scott’s notion of identity as a key concept in early modern statehood, this paper historically contextualises and analyses the current political re-problematisation of identity in the EU. Engaging the recently adopted interoperability initiative that is set to biometrically verify and cross-validate identity records between all European border management, migration, and security databases, it argues that interoperability presents a shift from traditional modes of identity production at the border towards a digital space of identity management. Such identity management is predicated on the establishment of a biometric super-layer structure that cuts across databases without dissolving their legal foundations and introduces a new mode of ‘truth’ production in the form of a dedicated ‘identity confirmation file’ that is supposed to re-introduce a reliable baseline for the government of the Schengen area.
... governments (Labati et al. 2016;Hauer 2019;Taddeo and Floridi 2018a;Taddeo, McCutcheon, and Floridi 2019;Roberts et al. 2019), all increasingly rely on algorithms to make significant decisions. ...
... The DSS concept was once known as an "expert system" that provides certain automation of reasoning (though mainly based on deterministic rules rather than Bayesian approach) and interpretation strategies to extend experts' abilities to apply their strategies efficiently [8]. Examples of contemporary DSS are personal health monitoring systems [9], e-coaching for health [10], security checkpoints [11], [12], and multifactor authentication systems [13]. This paper focuses on developing a DSS for ES, with a CI core based on causal Bayesian networks. ...
Preprint
Assessment of the risks of pandemics to communities and workplaces requires an intelligent decision support system (DSS). The core of such a DSS must be based on machine reasoning techniques such as inference and shall be capable of estimating risks and biases in decision making. In this paper, we use a causal network to make Bayesian inferences on COVID-19 data, in particular, to assess risks such as the infection rate and other precaution indicators. Unlike other statistical models, a Bayesian causal network combines various sources of data through a joint distribution and better reflects the uncertainty of the available data. We provide an example using the case of the COVID-19 outbreak that happened on board the USS Theodore Roosevelt in early 2020.
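As a toy illustration of Bayesian inference on a causal network, the sketch below enumerates the joint distribution of a three-node chain (Exposure -> Infected -> TestPositive) and computes a posterior; all conditional probabilities are invented for the example and are unrelated to the cited study's data.

```python
# Minimal sketch of Bayesian inference on a tiny causal chain
# Exposure -> Infected -> TestPositive, computed by brute-force enumeration.
# All conditional probabilities are made-up illustrative numbers.
from itertools import product

P_exposure = {True: 0.2, False: 0.8}
P_infected_given_exposure = {True: 0.3, False: 0.01}   # P(infected | exposure)
P_test_pos_given_infected = {True: 0.9, False: 0.05}   # sensitivity / false-positive rate

def joint(exposure, infected, test_pos):
    p = P_exposure[exposure]
    p *= P_infected_given_exposure[exposure] if infected else 1 - P_infected_given_exposure[exposure]
    p *= P_test_pos_given_infected[infected] if test_pos else 1 - P_test_pos_given_infected[infected]
    return p

# P(infected | positive test) by enumerating the joint distribution.
num = sum(joint(e, True, True) for e in (True, False))
den = sum(joint(e, i, True) for e, i in product((True, False), repeat=2))
print(f"P(infected | positive test) = {num / den:.3f}")
```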
... Specific-centered DSS is a well-identified trend in using the CI in many fields including the ES. Examples of contemporary DSS are personal health monitoring systems [10], e-coaching for health [11], security checkpoints [12], [13], and multi-factor authentication systems [14]. Usability of CI tools to support a team of practitioners with or without technical background should be estimated using various measures compared to the decision making tools performance. ...
Preprint
Contemporary Epidemiological Surveillance (ES) relies heavily on data analytics. These analytics are critical input for pandemics preparedness networks; however, this input is not integrated into a form suitable for decision makers or experts in preparedness. A decision support system (DSS) with Computational Intelligence (CI) tools is required to bridge the gap between epidemiological model of evidence and expert group decision. We argue that such DSS shall be a cognitive dynamic system enabling the CI and human expert to work together. The core of such DSS must be based on machine reasoning techniques such as probabilistic inference, and shall be capable of estimating risks, reliability and biases in decision making.
... Since the first works on biometric verification, increasingly accurate and time-efficient biometric recognition systems have been proposed over the years. These works have allowed a wider deployment of biometric authentication techniques, including border control (Labati et al., 2016), smartphone authentication (Patel et al., 2016), mobile payments (Meng et al., 2015), and law enforcement and forensics (Jain and Ross, 2015). With this generalised use of biometric systems, new concerns have arisen regarding their potential weaknesses. ...
Article
With the widespread use of biometric recognition, several issues related to the privacy and security provided by this technology have been recently raised and analysed. As a result, the early common belief among the biometrics community in template irreversibility has been proven wrong. It is now an accepted fact that it is possible to reconstruct from an unprotected template a synthetic sample that matches the bona fide one. This reverse engineering process, commonly referred to as inverse biometrics, constitutes a severe threat for biometric systems from two different angles: on the one hand, sensitive personal data (i.e., biometric data) can be derived from compromised unprotected templates; on the other hand, other powerful attacks can be launched building upon these reconstructed samples. Given its important implications, biometric stakeholders have produced over the last fifteen years numerous works analysing the different aspects related to inverse biometrics: development of reconstruction algorithms for different characteristics; proposal of methodologies to assess the vulnerabilities of biometric systems to the aforementioned algorithms; and development of countermeasures to reduce the possible effects of attacks. The present article is an effort to condense all this information in one comprehensive review of: the problem itself, the evaluation of the problem, and the mitigation of the problem.
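A common textbook illustration of inverse biometrics is a hill-climbing reconstruction driven only by comparison scores. The toy sketch below implements that loop on abstract feature vectors; it is not any of the reconstruction algorithms surveyed in the cited article, and the vector dimension, perturbation scale, and acceptance threshold are arbitrary assumptions.

```python
# Toy sketch of the inverse-biometrics idea described above: given only a
# comparison-score oracle against an unprotected reference template, a
# hill-climbing loop perturbs a candidate until it matches. Everything here
# operates on abstract feature vectors, not on a real biometric system.
import numpy as np

rng = np.random.default_rng(1)
DIM, THRESHOLD = 32, 0.95

reference = rng.normal(size=DIM)                 # the (secret) stored template
reference /= np.linalg.norm(reference)

def score(candidate):
    """Comparison-score oracle (cosine similarity), the only feedback the attacker sees."""
    return float(candidate @ reference / (np.linalg.norm(candidate) + 1e-12))

candidate = rng.normal(size=DIM)
best = score(candidate)
for step in range(20_000):
    trial = candidate + rng.normal(scale=0.05, size=DIM)
    s = score(trial)
    if s > best:                                  # keep perturbations that raise the score
        candidate, best = trial, s
    if best >= THRESHOLD:
        print(f"synthetic sample accepted after {step + 1} trials (score {best:.3f})")
        break
else:
    print(f"stopped without reaching the threshold (best score {best:.3f})")
```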
... Prior work explains why each of these is insufficient for our purposes. Spring et al. [165] addresses the shortcomings ... (flattened table of related surveys, including Homoliak et al. [80], Labati et al. [107], Edwards et al. [51], and others, omitted here) ...
Article
We review practical advice on decision-making during computer security incident response. Scope includes standards from the IETF, ISO, FIRST, and the US intelligence community. To focus on human decision-making, the scope is the evidence collection, analysis, and reporting phases of response, which includes human decision-making within and connecting these phases. The results indicate both strengths and gaps. A strength is available advice on how to accomplish many specific tasks. However, there is little guidance on how to prioritize tasks in limited time or how to interpret, generalize, and convincingly report results. Future work should focus on these gaps in explication and specification of decision-making during incident analysis.
... Identification and verification procedures at the border are meant to ensure that entry to or exit from a country will be granted to the right persons. In recent years, Member States have seen an increased use of biometric identification and authentication systems at the EU's borders, including airports and land borders [2,24]. More significantly, the large-scale EU information systems such as the Visa Information System (VIS), the second-generation Schengen Information System (SIS II), the European Asylum Dactyloscopy Database (EURODAC) and the Entry Exit System (EES), etc. [22], employ biometrics for migration and border control and management. ...
Chapter
Full-text available
Complying with the European Union (EU) perspective on human rights goes, or should go, together with handling the ethical, social and legal challenges arising from the use of biometrics as a border control technology. While there is no doubt that biometrics technology at European borders is a valuable element of border control systems, these technologies lead to issues of fundamental rights and personal privacy, among others. This paper discusses various ethical, social and legal challenges arising from the use of biometrics technology in border control. First, a set of specific challenges and the values affected are identified; then, generic considerations related to mitigating these issues within a framework are provided. The framework is expected to meet the emergent need for supplying interoperability among the multiple information systems used for border control.
... Fingerprint, retina, and iris recognition systems are known to yield very accurate results [1], but hardened criminals, being acutely aware of the security implications, mostly avoid presenting their biometric features to be captured into databases. Thus, automated face recognition systems are now the obvious choice [2] because people cannot hide their facial images from installed CCTV cameras all the time. ...
Article
Full-text available
Amidst the wide spectrum of recognition methods proposed, there is still the challenge that these algorithms do not yield optimal accuracy against illumination, pose, and facial expression. In recent years, considerable attention has been given to the use of swarm intelligence methods to help resolve some of these persistent issues. In this study, the principal component analysis (PCA) method, with its inherent property of dimensionality reduction, was adopted for feature selection. The resultant features were optimized using the particle swarm optimization (PSO) algorithm. For the purpose of performance comparison, the resultant features were also optimized with the genetic algorithm (GA) and the artificial bee colony (ABC). The optimized features were used for recognition using Euclidean distance (EUD), K-nearest neighbor (KNN), and the support vector machine (SVM) as classifiers. Experimental results of these hybrid models on the ORL dataset reveal an accuracy of 99.25% for PSO and KNN, followed by ABC with 93.72% and GA with 87.50%. In contrast, experiments with PSO, GA, and ABC on the YaleB dataset result in 100% accuracy, demonstrating their efficiency over state-of-the-art methods.
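To make the PCA-plus-swarm-optimization idea concrete, here is a compact sketch that extracts PCA features, runs a simple binary-style PSO to select a subset of them, and scores the subset with KNN. It uses sklearn's offline digits set as a stand-in for the ORL face images, the PSO hyperparameters are illustrative, and, for brevity, the held-out test split doubles as the PSO fitness set, which a real study would avoid.

```python
# Sketch of the hybrid pipeline summarized above: PCA for dimensionality
# reduction, a simple particle swarm optimization (PSO) to select a subset of
# the PCA features, and KNN for classification. Digits stand in for ORL faces.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=40).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

def fitness(mask):
    # KNN accuracy using only the selected PCA components.
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=3).fit(Z_train[:, mask], y_train)
    return knn.score(Z_test[:, mask], y_test)

# PSO over which PCA components to keep (positions are keep-probabilities).
n_particles, n_dims, w, c1, c2 = 12, Z_train.shape[1], 0.7, 1.5, 1.5
pos = rng.random((n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(15):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print(f"selected {(gbest > 0.5).sum()} of {n_dims} PCA features, "
      f"accuracy {pbest_fit.max():.2%}")
```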
... By contrast, research on automated border control has typically been conducted in the fields of engineering and computer sciences, often exploring and comparing the benefits and drawbacks of biometric solutions in ABC (e.g. Labati et al. 2016;Robertson et al. 2017;Sanchez del Rio et al. 2016), comparing human and machine performance e.g. in detecting fraudulent identity documents (Gariup and Piskorski 2019) or discerning traveller risk assessment in ABC (Yanushkevich et al. 2018). ...
Thesis
Full-text available
This dissertation studies political agency, which shapes automated border control (ABC) policies and the ensuing border control practices in the European Union (EU). In the articles of which the dissertation is composed, we research the subjective orientation of parliamentarians and stakeholders involved in the decision-making processes on the technologization and digitalization of border control. The motivation for the research initially came from the European Commission’s push to harmonize and automate border control in the Schengen area of free mobility. Essentially, the Commission has been advocating for harmonized, electronic, automated border control gates, which use sensitive biometric identifiers, such as fingerprints recorded on the electronic passport to speed up border control for the masses and allow for the border control officials to concentrate on ‘high-risk’ travellers. Automated border control is a part of a wider trend where societal surveillance has shifted from a physical process at the border to a complex information technology driven data governance and surveillance apparatus. The EU is a prime example of an actor pushing for an assessment of the ‘risks’ travellers pose before they travel utilizing a large amount of passenger data. Large registers of people’s personal information, ranging from dietary restrictions to iris scans are now used and shared to determine the risks, and for several reasons, this warrants critical scrutiny. For instance, it is tempting for multi-function institutions such as governments to subsequently use data of registers for purposes other than those for what it was collected. In the EU it is now envisioned that e.g. national law enforcement agencies would have access to border control data to solve crimes classified as serious and to combat terrorism. The potential ‘function creeps’ may become dangerous, e.g. for any kinds of minorities, political opponents of governments etc. since societal conditions change over time but the data remains on record. The aim of the first two articles of the dissertation is to understand what kind of automated border control devices and practices are politically acceptable to the participants acting as decision-makers and experts in EU Member States. We study Finnish decision-makers’ subjective orientation in our first iteration and compare those of four EU Member States in the second one. This is a novel line of enquiry given the lack so far of studies at the level of Member States. Existing literature has overall been quite theoretical and not empirically, especially experimentally rich. The novelty value is also tied to the fact that these policies and technologies are somewhat new and have thus not yet been studied in detail from the societal and political point of view. In the third article, we also establish another new avenue of research: we research how people with disabilities should be taken into account when designing technologically reinforced border control systems in the EU. Furthermore – as a leap towards the more abstract and broader issue of how International Relations (IR) should research subjective agency – the fourth article argues for the potential in combining a recent IR venture in ontology, namely Alexander Wendt’s (2015) quantum social ontology with the main methodology applied in the thesis, Q methodology. The thesis is empirically driven, and the first three articles report the results of our empirical work carried out with Q methodology, a questionnaire and interviews. 
The work is critical of ‘armchair research’, which the research concerning (European) border control often is, although it recognizes the important ethical arguments of IR’s critical security studies. The interest in empirical work is reiterated in the fourth article, which presents ideas on how to operationalise Wendt’s ontological notions with the help of Q methodology in (future) empirical social scientific work. The heuristic background of the dissertation is in ‘practice-oriented’ IR. This includes commitments to pragmatism and practice theory, but also criticism of those theory traditions, since research making use of them paradoxically does not often progress beyond the level of theoretical constructs and theoretical feuds within IR. The dissertation finds three distinct factors or view types on ABC in both the first and second articles. The latter are more significant, since the first article is a ‘pilot iteration’ for Finnish participants, while the second compares political views in Finland, Romania, Spain and the United Kingdom. Article II finds that the views on ABC are tied to the political affiliation rather than e.g. the nationality of the participants. Participants espousing the first view were politically oriented to the (centre) left and were worried about the potential erosion of civil liberties, the lack of due process and the disadvantages of technocracy in the context of ABC. Participants supporting the second view came from (centre) right political parties and welcomed technologically enforced border control with its risk profiling approach as an increase in security and efficacy. The third view was supported by Eurosceptic (far) right parties and used the core strategies of populist argumentation. Its supporters were concerned about increasing immigration and demanded that border control be organized nationally. The third article finds that a universal design concept in (ABC) technology development should be adopted both from the standpoint of equal rights and usability, and thus also the efficacy of the technology. It also finds that accessibility of people with disabilities is feasible from the technological, economic and operational points of view, especially if this accessibility requirement is made in the tenders of ABC technology and vulnerable groups such as people with disabilities are involved in the design processes. Finally, the fourth research article finds that there is ample potential for combining Q methodology and the quantum social ontology in future work on subjective agency in IR, which is regrettably an understudied area of IR, as the concepts ‘international’ and ‘politics’ would not exist without human agency. The compatibility of the methodology and the views on ontology – whether the quantum view is indeed taken as an ontology, an analogy or a heuristic – stems from their shared principles as regards measuring states of mind and states of matter, their rejection of rational choice and their belief in creative potential in agency. Keywords: accessibility, automated border control, European Union, Q methodology, quantum social ontology, Smart Borders, subjective agency
... Clearly, critically depending on a Proof-of-Work blockchain in an identity solution is unacceptable, as it pushes the latency of credential creation to an hour (to reach the commonly imposed 6-block rule for probabilistic finality [95]). This is too slow for most passport-grade interactions; e.g., electronic border control should be done in 30 seconds [58]. As a side note, using a permissioned blockchain to create identities with a lower latency creates a chicken-and-egg problem: you require a centrally known identity to create your identity (also, having to answer to a central party before entering the system is naturally not Self-Sovereign). ...
Preprint
Full-text available
Digital identity is essential to access services such as: online banking, income tax portals, and online higher education. Digital identity is often outsourced to central digital identity providers, introducing a critical dependency. Self-Sovereign Identity gives citizens the ownership back of their own identity. However, proposed solutions concentrate on data disclosure protocols and are unable to produce identity with legal status. We identify how related work attempts to legalize identity by reintroducing centralization and disregards common attacks on peer-to-peer interactions, missing out on the strong privacy guarantees offered by the data disclosure protocols. To address this problem we present IPv8, a complete system for passport-grade Self-Sovereign Identity. Our design consists of a hierarchy of middleware layers which are minimally required to establish legal viability. IPv8 is comprised of a peer-to-peer middleware stack with Sybil attack resilience and strong privacy through onion routing. No other work has offered an operational prototype of an academically pure identity solution without any trusted third parties, critical external services, or any server in general.
... Face and fingerprints are quite popular and are in advanced stages of commercialization, while the iris is the most secure and robust and is currently in advanced stages of implementation. The iris as a biometric is used in several applications like border control, secure transactions, and safety measures in finance and defense areas [18]. It was earlier believed that templates of biometric traits do not change with time, but researchers working on these modalities have found valid evidence that templates do age with time and that this has a negative effect on the performance of biometric recognition systems. ...
Article
Full-text available
Biometric systems are well-known security systems that can be used anywhere for authentication, authorization or any kind of security verification. In biometric systems, the samples are trained first, and then they can be used for testing in long runs. Many recent pieces of research have shown that a biometric system may fail or get compromised because of the aging of the biometric templates. The fact that temporal duration affects the performance of a biometric system has shattered the belief that the iris does not change over a lifetime; aging is also possible in the case of the iris. So, the main focus of this work is to analyze the effect of aging and also to propose a new system that can deal with template aging. We have proposed a new iris recognition system with an image enhancement mechanism and different feature extraction mechanisms. In this work, three different features are extracted, which are then fused to be used as one. The full system is trained on a dataset of 2500 samples for the year 2008 and testing is done in three different phases: (i) No-Lapse, (ii) 1-Year Lapse and (iii) 2-Year Lapse. A portion of the ND-Iris-Template-Aging dataset [11] is used, with a time lapse of three years. Results show that the performance of the hybrid classifier AHyBrK [17] is improved as compared to KNN and ANN, and the effect of aging in terms of degraded performance is clear. The performance of this system is measured in terms of False Rejection Rate, Error Rate, and Accuracy. The overall performance of AHyBrK is 51.04% and 52.98% better than KNN and ANN, respectively, in terms of False Rejection Rate and Error Rate, whereas the accuracy of the proposed system is also improved by 5.52% and 6.04% as compared to KNN and ANN, respectively. The proposed system also achieved high accuracy for all the test phases.
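The evaluation protocol described here, testing the same enrolled system on probe sets with increasing time lapse, can be sketched in a few lines. The code below simulates that protocol with a synthetic drift model; the score distributions, drift rate, and threshold are assumptions and do not reproduce the ND-Iris-Template-Aging results.

```python
# Sketch of the aging evaluation protocol described above: a matcher with a
# threshold fixed at enrollment time is tested on probe sets with increasing
# time lapse, and the false rejection rate (FRR) is reported per phase.
# The synthetic "drift" simply lowers genuine scores over time.
import numpy as np

rng = np.random.default_rng(7)
threshold = 0.6                       # acceptance threshold fixed at enrollment time

def genuine_scores(n, years_lapse):
    # Assumed toy model: template aging shifts genuine scores down slightly per year.
    return rng.normal(loc=0.78 - 0.06 * years_lapse, scale=0.08, size=n)

for phase, lapse in [("no lapse", 0), ("1-year lapse", 1), ("2-year lapse", 2)]:
    scores = genuine_scores(2_000, lapse)
    frr = float(np.mean(scores < threshold))
    print(f"{phase:>12}: FRR = {frr:.2%}")
```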
... Face verification (FV) is deployed in applications such as biometrics [8,20], social media information structuring [16] or classical media archive organization [5]. Performance has strongly progressed in recent years due to the introduction of deep learning techniques [23,40]. ...
Preprint
Full-text available
Face verification aims to distinguish between genuine and imposter pairs of faces, which include the same or different identities, respectively. The performance reported in recent years gives the impression that the task is practically solved. Here, we revisit the problem and argue that existing evaluation datasets were built using two oversimplifying design choices. First, the usual identity selection to form imposter pairs is not challenging enough because, in practice, verification is needed to detect challenging imposters. Second, the underlying demographics of existing datasets are often insufficient to account for the wide diversity of facial characteristics of people from across the world. To mitigate these limitations, we introduce the FaVCI2D dataset. Imposter pairs are challenging because they include visually similar faces selected from a large pool of demographically diversified identities. The dataset also includes metadata related to gender, country and age to facilitate fine-grained analysis of results. FaVCI2D is generated from freely distributable resources. Experiments with state-of-the-art deep models that provide nearly 100% performance on existing datasets show a significant performance drop for FaVCI2D, confirming our starting hypothesis. Equally important, we analyze legal and ethical challenges which appeared in recent years and hindered the development of face analysis research. We introduce a series of design choices which address these challenges and make the dataset constitution and usage more sustainable and fairer. FaVCI2D is available at https://github.com/AIMultimediaLab/FaVCI2D-Face-Verification-with-Challenging-Imposters-and-Diversified-Demographics.
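The "challenging imposter" idea can be illustrated as nearest-neighbor selection in embedding space: for each probe, pick the most similar face from a different identity rather than a random one. The sketch below shows that selection mechanism on random stand-in embeddings; it is an illustration of the concept, not the FaVCI2D construction procedure.

```python
# Sketch of the "challenging imposter" idea described above: for each probe
# embedding, the imposter is the most similar embedding from a *different*
# identity in a large pool, rather than a random identity. Embeddings here are
# random stand-ins for real face descriptors.
import numpy as np

rng = np.random.default_rng(3)
n_pool, dim = 500, 128
pool = rng.normal(size=(n_pool, dim))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
pool_ids = rng.integers(0, 200, size=n_pool)       # identity label of each pool face

def hardest_imposter(probe_embedding, probe_id):
    sims = pool @ probe_embedding                   # cosine similarities (unit vectors)
    sims[pool_ids == probe_id] = -np.inf            # exclude the same identity
    j = int(np.argmax(sims))
    return j, float(sims[j])

probe = pool[0] + 0.1 * rng.normal(size=dim)        # a probe loosely tied to identity pool_ids[0]
probe /= np.linalg.norm(probe)
idx, sim = hardest_imposter(probe, probe_id=pool_ids[0])
print(f"hardest imposter: pool index {idx} (identity {pool_ids[idx]}), similarity {sim:.3f}")
```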
... governments (Labati et al. 2016;Hauer 2019;Taddeo and Floridi 2018a;Taddeo, McCutcheon, and Floridi 2019;Roberts et al. 2019), all increasingly rely on algorithms to make significant decisions. ...
Chapter
Full-text available
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2). https://doi.org/10.1177/2053951716679679, 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
... The two-step process can be combined at one location or separated into two distinct locations [14]. A pre-enrolment kiosk for traveller verification followed by a single-door e-Gate for border crossing is a common example of the separated two-step process. ...
... The post-9/11 R&D period focused on increasing security by introducing biometric-enabled ID and trusted traveler services [4], [5], self-service kiosks (gates) [6], [7], biometric-enabled watchlist screening [8], [9], [10], identity de-duplication detection [11], authentication machines [12], [13], implementation and deployment guides and regulations [7], [1], roadmapping [14], human rights protection [15], as well as the harmonization of advanced management techniques to increase checkpoint performance [16]. The key performance measures remained travelers' satisfaction with the service, including authentication time, public acceptance of the biometric traits (face, iris, fingerprints), waiting times, privacy concerns, and handling of language barriers. ...
Preprint
Full-text available
In this study of face recognition on masked versus unmasked faces generated using the Flickr-Faces-HQ and SpeakingFaces datasets, we report a 36.78% degradation of recognition performance caused by mask-wearing during the pandemic, in particular in border checkpoint scenarios. We achieved better performance and reduced the degradation to 1.79% using advanced deep learning approaches in the cross-spectral domain.
... Identification and verification procedures at the border are to ensure that entry-to or exit-from a country will be granted to the right persons. In recent years, Member States have seen an increased use of biometric identification and authentication systems at EU's borders including airports and land borders [2,24]. More significantly, the large-scale EU information systems such as Visa Information System (VIS), Second-generation Schengen Information System (SIS II), European Asylum Dactyloscopy Database (EURODAC) and Entry Exit System (EES) etc. [22] employ biometrics for migration and border control and management. ...
Preprint
Full-text available
Complying with the European Union (EU) perspective on human rights goes, or should go, together with handling the ethical, social and legal challenges arising from the use of biometrics as a border control technology. While there is no doubt that biometric technology at European borders is a valuable element of border control systems, it also raises issues of fundamental rights and personal privacy, among others. This paper discusses various ethical, social and legal challenges arising from the use of biometric technology in border control. First, a set of specific challenges and affected values is identified; then, generic considerations on mitigating these issues within a framework are provided. The framework is expected to meet the emergent need to supply interoperability among the multiple information systems used for border control.
... There are many situations where gender recognition and human identification play a role for security purposes, such as gender-based mall entry [6], entry to parks [7], detection of intruders, and identification of human movement near the line of control (LOC) and borders [8]. In this study, we propose fog computing-based human identification and gender recognition using SRML and SVM with SURF on gait sequences, which improves accuracy in real time. ...
Article
Full-text available
Security threats arise if human intruders are not identified and recognized in time in highly security-sensitive environments such as the military, airports, parliament houses, and banks. Fog computing and machine learning algorithms applied to gait sequences can help restrict intruders promptly. Gait recognition makes it possible to observe an individual unobtrusively, without any direct cooperation or interaction from the person, which makes it more attractive than other biometric recognition techniques. In this paper, Fog Computing and Machine Learning Inspired Human Identity and Gender Recognition using Gait Sequences (FCML-Gait) is proposed. Internet of things (IoT) devices and video capturing sensors are used to acquire data. Frames are clustered using the affinity propagation (AP) clustering technique, and a cluster-based averaged gait image (C-AGI) feature is determined for each cluster. For training and testing, sparse reconstruction-based metric learning (SRML) and Speeded Up Robust Features (SURF) with a support vector machine (SVM) are applied in the fog layer to the benchmark gait database ADSC-AWD, comprising 80 subjects of 20 different individuals, to improve processing. Performance metrics such as accuracy, precision, recall, F-measure, C-time, and R-time have been measured, and a comparative evaluation of the proposed method with the existing SRML technique is provided, in which the proposed FCML-Gait outperforms it and attains the highest accuracy of 95.49%.
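A rough sketch of the clustering step described above, assuming each gait sequence is available as a stack of binarised silhouette frames: frames are grouped with Affinity Propagation, a cluster-averaged gait image (C-AGI) is computed per cluster, and the averaged images are fed to an SVM. The SRML and SURF components of the cited pipeline are omitted, and all names are illustrative.

```python
# Minimal sketch: Affinity Propagation clustering + cluster-averaged gait images + SVM.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVC


def cluster_averaged_gait_images(frames):
    """frames: (n_frames, H, W) silhouette stack -> list of per-cluster mean images."""
    flat = frames.reshape(len(frames), -1)
    labels = AffinityPropagation(random_state=0).fit_predict(flat)
    return [frames[labels == c].mean(axis=0) for c in np.unique(labels)]


def train_gait_classifier(sequences, subject_ids):
    """sequences: list of (n_frames, H, W) arrays; one subject label per sequence."""
    X, y = [], []
    for seq, sid in zip(sequences, subject_ids):
        for cagi in cluster_averaged_gait_images(seq):
            X.append(cagi.ravel())
            y.append(sid)
    return SVC(kernel="linear").fit(np.array(X), y)
```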
... The rapid evolution of facial recognition systems in real-time applications has raised new concerns about their ability to resist identity threats, particularly in non-intrusive applications, for instance in the automatic border control systems [80] of the aviation industry, where an automated face recognition system is installed at each entry and exit point to authenticate the identity of individuals without any security personnel. The face is a traditionally exposed (i.e., visible) part of the human body and is easily accessible in photos and videos through various social media such as Facebook, WhatsApp, Instagram, and others. ...
Article
Full-text available
The human face is considered the prime entity for recognizing a person’s identity in our society. Hence, the importance of face recognition systems is growing for many applications. Facial recognition systems are in huge demand, second only to fingerprint-based systems. Face biometrics plays a highly dominant role in applications such as border surveillance, forensic investigations, crime detection, access management systems, information security, and many more, and facial recognition systems deliver highly accurate results in these application domains. However, face identity threats are growing at a comparable rate and pose severe concerns about the use of face biometrics. This paper explores all types of face recognition techniques, the challenges they face, and the threats to face-biometric-based identity recognition. This survey paper proposes a novel taxonomy to represent potential face identity threats, which are described considering their impact on the facial recognition system. State-of-the-art approaches available in the literature are discussed to mitigate the impact of the identified threats. The paper provides a comparative analysis of countermeasure techniques, focusing on their performance on different face datasets for each identified threat, and highlights the characteristics of benchmark face datasets representing unconstrained scenarios. In addition, we discuss research gaps and future opportunities to tackle facial identity threats for the information of researchers and readers.
... At the same time, schools and hospitals (Obermeyer et al. 2019; Zhou et al. 2019; Morley, Machado, et al. 2019), financial institutions (M. S. A Lee and Floridi 2020; Aggarwal 2020), courts (Green and Chen 2019; Yu and Du 2019), local governmental bodies (Eubanks 2017; Lewis 2019), and national governments (Labati et al. 2016; Hauer 2019; Taddeo and Floridi 2018a; Taddeo, McCutcheon, and Floridi 2019; Roberts et al. 2019) all increasingly rely on algorithms to make significant decisions. ...
Preprint
Full-text available
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Conference Paper
The present article aims to discuss how governments have turned to biometric technologies to fight the spread of COVID-19, mainly through the adoption of facial recognition technologies, and the risks to people’s privacy when measures to protect their personal data are inadequate. We identified seven systems from different countries (China, France, Israel, Poland, Singapore, South Korea, and Russia) that use some form of facial recognition during their operation, and we point out their functionalities and the released information on safeguards for data protection. The data collected so far show that, in most countries, the safeguards necessary to protect people’s privacy and their personal data, in both the short and the long term, are not receiving sufficient consideration.
Article
Full-text available
Human recognition with biometrics is a rapidly emerging area of computer vision. Compared to other well-known biometric features such as the face, fingerprint, iris, and palmprint, the ear has recently received considerable research attention. The ear recognition system accepts 2D or 3D images as input. Since pose, illumination, and scale all affect 2D ear images, it is evident that they all impact recognition performance; therefore, 3D ear images are employed to address these issues. The geometric shapes of 3D ears are utilized as rich features to improve recognition accuracy. We present recent advances in several areas relevant to 3D ear recognition and provide directions for future research. To the best of our knowledge, no comprehensive review has been conducted on using 3D ear images in human recognition. This review focuses on three primary categories of 3D ear recognition techniques: (1) registration-based recognition, (2) local and global feature-based recognition, and (3) a combination of (1) and (2). Based on the above categorization and publicly available 3D ear datasets, this article reviews existing 3D ear recognition techniques.
Chapter
This chapter introduces a multimodal technique that uses 2D and 3D ear images for secure access. The technique uses 2D-modality to identify keypoints and 3D-modality to describe keypoints. Upon detection and mapping of keypoints into 3D, a feature descriptor vector is computed around each mapped keypoint in 3D. We perform a two-stage, coarse and fine alignment to fit the 3D ear image of the probe to the 3D ear image of the gallery. The probe and gallery image keypoints are compared using feature vectors, where very similar keypoints are used as coarse alignment correspondence points. Once the ear pairs are matched fairly closely, the entire data is finely aligned to compute the matching score. A detailed systematic analysis using a large ear database has been carried out to show the efficiency of the technique proposed.
Article
Full-text available
There is an increasing interest in the use of various digital technologies for interacting with passengers at smart airports. However, there is a need to identify and address the privacy issues that may threaten passenger’s digital information during their interactions with smart airports applications. This study applies a systematic literature review method and reports the passenger’s digital information privacy issues that arise during different stages of their travel journey in smart airports. This research identified a set of 324 studies, which were then reviewed to obtain a final set of 31 relevant studies to address the research questions in hand. The review results were organized into five major categories: passenger travel journey through smart airports applications; elements involved (people, process, information, and technology) in the journey; passenger’s digital information privacy challenges; current solutions for identified challenges; passenger’s information standards and privacy regulations. These results are further analyzed to report important insights and future research directions about the privacy of passenger’s digital information in smart airports. This study will aid researchers and practitioners in obtaining a better understanding of the privacy concerns when dealing with the passenger’s digital information at the digitally enabled smart airports.
Thesis
Full-text available
This PhD thesis concerns a secure distributed system: an e-voting system is designed and implemented with security as a primary goal. The system is protected at three levels, using a different technique at each level: iris recognition based on a deep learning model for authentication, a secure channel in which important data are encrypted with a modified lightweight algorithm for transmission, and mechanisms ensuring the security of data at rest.
Conference Paper
Background: Cybersecurity has risen to international importance. Almost every organization will fall victim to a successful cyberattack. Yet, guidance for computer security incident response analysts is inadequate. Research Questions: What heuristics should an incident analyst use to construct general knowledge and analyse attacks? Can we construct formal tools to enable automated decision support for the analyst with such heuristics and knowledge? Method: We take an interdisciplinary approach. To answer the first question, we use the research tradition of philosophy of science, specifically the study of mechanisms. To answer the question on formal tools, we use the research tradition of program verification and logic, specifically Separation Logic. Results: We identify several heuristics from biological sciences that cybersecurity researchers have re-invented to varying degrees. We consolidate the new mechanisms literature to yield heuristics related to the fact that knowledge is of clusters of multi-field mechanism schema on four dimensions. General knowledge structures such as the intrusion kill chain provide context and provide hypotheses for filling in details. The philosophical analysis answers this research question, and also provides constraints on building the logic. Finally, we succeed in defining an incident analysis logic resembling Separation Logic and translating the kill chain into it as a proof of concept. Conclusion: These results benefit incident analysis, enabling it to expand from a tradecraft or art to also integrate science. Future research might realize our logic in automated decision support. Additionally, we have opened the field of cybersecurity to collaboration with philosophers of science and logicians.
Article
Full-text available
We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space, if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low-dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed “Fisherface” method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
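The Fisherface recipe summarised above can be sketched compactly: flattened face images are first projected with PCA (to avoid a singular within-class scatter matrix) and then with Fisher's linear discriminant, and recognition is performed by nearest neighbour in the resulting subspace. This is a generic illustration of the method, not the authors' implementation.

```python
# Minimal Fisherface-style sketch: PCA -> LDA -> nearest-neighbour recognition.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline


def fisherface_classifier(X_train, y_train, n_components=None):
    """X_train: (n_samples, n_pixels) flattened face images; y_train: identity labels."""
    n_classes = len(set(y_train))
    # Keep at most n_samples - n_classes principal components (standard Fisherface choice).
    n_pca = n_components or max(1, len(X_train) - n_classes)
    model = make_pipeline(
        PCA(n_components=min(n_pca, X_train.shape[1])),
        LinearDiscriminantAnalysis(),          # projects to at most n_classes - 1 dims
        KNeighborsClassifier(n_neighbors=1),   # nearest neighbour in the Fisher subspace
    )
    return model.fit(X_train, y_train)
```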
Conference Paper
Full-text available
The face quality assessment from video must quantitatively measure the applicability of face images that are typically captured over multiple frames with various degradations. In this work, we address face quality assessment from video captured using an Automatic Border Control (ABC) system. To this end, we employed the MorphoWayTM ABC system as a data capture device to construct a new database simulating a real-life scenario. We then propose a new scheme for face quality estimation that can be viewed in three steps: (1) pose estimation by detecting face parts (eyes and nose) to separate frontal from non-frontal faces; (2) for frontal faces, evaluation of the corresponding image quality by analyzing texture components using the Grey Level Co-occurrence Matrix (GLCM); (3) quantification of the quality of the given face image using likelihood values obtained from a Gaussian Mixture Model (GMM). Extensive experiments are carried out on our new database, which exhibits various quality degradations due to head pose variations, changes in illumination, expression, motion blur, etc. The experimental results indicate that the proposed face quality assessment algorithm can effectively classify the input image into relevant quality bins, which in turn can be employed for improved face verification.
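Steps (2) and (3) above can be illustrated with a minimal sketch: GLCM statistics are computed for a frontal face crop, a Gaussian Mixture Model is fitted on descriptors of good-quality reference faces, and the log-likelihood of a new face under that model serves as its quality score. Thresholds, the pose-estimation step and the actual database are not reproduced; the parameter choices below are assumptions.

```python
# Minimal sketch of GLCM texture features + GMM likelihood as a face quality score.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.mixture import GaussianMixture


def glcm_features(gray_face):
    """gray_face: 2-D uint8 face crop. Returns a small vector of GLCM statistics."""
    glcm = graycomatrix(gray_face, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])


def fit_quality_model(good_quality_faces, n_components=4):
    feats = np.array([glcm_features(f) for f in good_quality_faces])
    return GaussianMixture(n_components=n_components, random_state=0).fit(feats)


def quality_score(gmm, face):
    """Higher log-likelihood under the 'good quality' model = better estimated quality."""
    return float(gmm.score_samples(glcm_features(face)[None, :])[0])
```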
Article
Full-text available
This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10,000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97.45% verification accuracy on LFW is achieved with only weakly aligned faces.
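A toy sketch of the DeepID idea follows: a small convolutional network is trained as a multi-class identity classifier, and the activations of its last hidden layer (160-dimensional, as in the paper) are reused as identity features for verification. Apart from the 160-d feature layer, the layer sizes below are illustrative rather than the paper's exact architecture.

```python
# Minimal sketch (not the original architecture) of identity-classification features.
import torch
import torch.nn as nn


class DeepIDLike(nn.Module):
    def __init__(self, n_identities, feature_dim=160):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 20, 4), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 40, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(40, 60, 3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.hidden = nn.LazyLinear(feature_dim)       # last hidden layer = identity features
        self.classifier = nn.Linear(feature_dim, n_identities)

    def forward(self, x):
        h = self.conv(x).flatten(1)                    # x: a batch of RGB face crops
        features = torch.relu(self.hidden(h))          # reused for verification
        logits = self.classifier(features)             # used only during identification training
        return features, logits

# Verification would compare the feature vectors of two faces, e.g. by cosine
# similarity, with a separate classifier or a simple threshold on the score.
```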
Conference Paper
Full-text available
Automated border control (ABC) systems have been proposed as the best option to satisfy the needs that airport security faces today and in the coming years. These needs are driven by the rapid growth of passenger flows. Highly efficient biometric recognition systems using the face, iris or fingerprint modalities, placed inside the gates at airport entry points, should cope with the problems caused by this growth. Problems such as congestion at these gates, delays in the planned arrival schedules or lack of security control require a fast automated biometric solution. The face modality is present in 2nd-generation passports and is well accepted by travellers. In this work, the state of the art of the most important face recognition models currently employed in ABC e-Gates is presented, and the importance of image quality for ABC system performance, as well as how external factors affect this quality, is described.
Article
Full-text available
Biometric applications are increasingly used to provide enhanced security in verification and identification procedures. However, privacy concerns arise from the processing of biometric features, which are deemed personal data and, in certain circumstances, sensitive personal data. Their processing may infringe the privacy of the individual, as it provides more potential for control of the individual. Data Protection Authorities in the EU Member States have issued various decisions, prohibiting biometric applications on some occasions. Their decisions, however, lack consistency and are contradictory. Therefore, it should be examined whether the legal review of biometric applications should be based on the application of existing data protection legislation, and particularly the proportionality principle, or whether specific legal provisions should be introduced. It is stressed that a distinction exists between biometric applications in the private sector and those in the public sector, since biometrics has been introduced in passports and plans exist to introduce it in identity cards as well. The processing of biometric data is regulated in some EU Member States' data protection acts, which provide for the principles of proportionality and/or prior notification to the supervisory authority, while other laws define biometric data as sensitive data and thus grant enhanced protection to data subjects.
Article
Full-text available
Several issues related to the vulnerability of fingerprint recognition systems to attacks have been highlighted in the biometrics literature. One such vulnerability involves the use of artificial fingers, where materials such as Play-Doh, silicone, and gelatin are inscribed with fingerprint ridges. Researchers have demonstrated that some commercial fingerprint recognition systems can be deceived when these artificial fingers are placed on the sensor; that is, the system successfully processes the ensuing fingerprint images, thereby allowing an adversary to spoof the fingerprints of another individual. However, at the same time, several countermeasures that discriminate between live fingerprints and spoof artifacts have been proposed. While some of these antispoofing schemes are hardware based, several software-based approaches have been proposed as well. In this article, we review the literature and present the state of the art in fingerprint antispoofing.
Article
Full-text available
Identity is the collection of all characteristics related to an entity, be it a person, an enterprise or an object. Identities allow entities to be distinguished from each other, which is what makes identity a key component of several administrative, social and economic transactions. Identity management is important for government, communication, commerce and just about any significant societal activity. Government identity management systems are usually not static and are frequently changed as time and circumstances change. A flexible approach is needed for adding new features to identity management systems. This paper analyzes, as a case study, the different identity cards used for different purposes in the Kingdom of Saudi Arabia and their deficiencies. Finally, a biometric-based identity management system is proposed, referring to the successful and extensible identity management systems of developed countries.
Conference Paper
Full-text available
This paper introduces a formalization of the Automated Border Control (ABC) machines deployed worldwide as part of the eBorder infrastructure for automated traveller clearance. Proposed formalization includes classification of the eBorder technologies, definition of the basic components of the ABC machines, identification of their key properties, establishment of metrics for their evaluation and comparison, as well as development of a dedicated architecture based on the assistant-based concept. Specifically, three generations of the ABC machines are identified: Gen-1 ABC machines which are biometric enabled kiosks, such as Canada's NEXUS or UK IRIS, to process low-risk pre-enrolled travellers; Gen-2 ABC machines which are eGate systems to serve travellers with biometric eID / ePassports; and Gen-3 ABC machines that will be working to support the eBorder process of the future. These ABC machines are compared in this paper based on certain criteria, such as availability of the dedicated architectural components, and in terms of the life-cycle performance metrics. This paper addresses the related problems of deployment and evaluation of the ABC technologies and machines, including the vulnerability analysis and strategic planning of the eBorder infrastructure. The International Air Transport Association (IATA) estimates that the volume of international air travel passengers will grow at around 6% per year. It will result in nearly 1.4 billion passengers in 2015 [25]. The entire traveler screening system paradigm should be changed to provide a solution that balances the trade-off between maximizing security while minimizing the border processing time. The key challenges in the border domain are migration pressure, increased security threat, and predicted increase of the travellers' flow. The human factor has also been regarded as a highly vulnerable element of the border screening system. In 2009, there were about 30,000 personnel members performing passenger checkpoint screening at all USA airports [10]. As mentioned in the above report, many factors contribute to human performance limitations, including inadequate training, lack of motivation and job satisfaction, fatigue, and workplace conditions, as well as general human perception and performance limitations.
Conference Paper
Full-text available
There are many reasons why passengers are unable or reluctant to use self-service e-gate systems. In order for designers to build better systems with higher uptake by end-users, they need a more thorough understanding of the non-users. This paper investigates the reasons for non-use of Automated Border Control at European airports by applying Wyatt's taxonomy and adding an "unawares" category. It also presents possible solutions to turn current non-users into future users of e-gates.
Book
While traveling the data highway through the global village, most people, if they think about it at all, consider privacy a non-forfeitable right. They expect to have control over the ways in which their personal information is obtained, distributed, shared, and used by any other entity. According to recent surveys, privacy, and anonymity are the fundamental issues of concern for most Internet users, ranked higher than ease-of-use, spam, cost, and security. Digital Privacy: Theory, Techniques, and Practices covers state-of-the-art technologies, best practices, and research results, as well as legal, regulatory, and ethical issues. Editors Alessandro Acquisti, Stefanos Gritzalis, Costas Lambrinoudakis, and Sabrina De Capitani di Vimercati, established researchers whose work enjoys worldwide recognition, draw on contributions from experts in academia, industry, and government to delineate theoretical, technical, and practical aspects of digital privacy. They provide an up-to-date, integrated approach to privacy issues that spells out what digital privacy is and covers the threats, rights, and provisions of the legal framework in terms of technical counter measures for the protection of an individual’s privacy. The work includes coverage of protocols, mechanisms, applications, architectures, systems, and experimental studies. Even though the utilization of personal information can improve customer services, increase revenues, and lower business costs, it can be easily misused and lead to violations of privacy. Important legal, regulatory, and ethical issues have emerged, prompting the need for an urgent and consistent response by electronic societies. Currently there is no book available that combines such a wide range of privacy topics with such a stellar cast of contributors. Filling that void, Digital Privacy: Theory, Techniques, and Practices gives you the foundation for building effective and legal privacy protocols into your business processes.
Book
Offering the first comprehensive analysis of touchless fingerprint-recognition technologies, Touchless Fingerprint Biometrics gives an overview of the state of the art and describes relevant industrial applications. It also presents new techniques to efficiently and effectively implement advanced solutions based on touchless fingerprinting. The most accurate current biometric technologies in touch-based fingerprint-recognition systems require a relatively high level of user cooperation to acquire samples of the concerned biometric trait. With the potential for reduced constraints, reduced hardware costs, quicker acquisition time, wider usability, and increased user acceptability, this book argues for the potential superiority of touchless biometrics over touch-based methods. The book considers current problems in developing high-accuracy touchless recognition technology. It discusses factors such as shadows, reflections, complex backgrounds, distortions due to perspective effects, uncontrolled finger placement, inconstant resolution of the ridge pattern, and reconstruction and processing of three-dimensional models. The last section suggests what future work can be done to increase accuracy in touchless systems, such as intensive studies on extraction and matching methods and three-dimensional analytical capabilities within systems. In a world where usability and mobility have increasing relevance, Touchless Fingerprint Biometrics demonstrates that touchless technologies are also part of the future. A presentation of the state of the art, it introduces you to the field and its immediate future directions.
Article
We present a system for recognizing human faces from single images out of a large database containing one image per person. Faces are represented by labeled graphs, based on a Gabor wavelet transform. Image graphs of new faces are extracted by an elastic graph matching process and can be compared by a simple similarity function. The system differs from the preceding one in three respects. Phase information is used for accurate node positioning. Object-adapted graphs are used to handle large rotations in depth. Image graph extraction is based on a novel data structure, the bunch graph, which is constructed from a small set of sample image graphs.
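The Gabor-jet representation underlying the graph-based approach above can be sketched as follows: at each facial landmark (graph node), the magnitude responses of a small Gabor filter bank form a "jet", and corresponding jets of two faces are compared with a normalised similarity. Elastic graph matching itself (node repositioning, bunch graphs, phase-based positioning) is not reproduced; the filter parameters below are assumptions.

```python
# Minimal sketch of Gabor "jets" at facial landmarks and their similarity.
import numpy as np
from skimage.filters import gabor


def gabor_jet(gray_face, x, y, frequencies=(0.1, 0.2, 0.3), n_orient=4, half=8):
    """Mean magnitude responses of a small Gabor filter bank around landmark (x, y)."""
    patch = gray_face[max(y - half, 0):y + half, max(x - half, 0):x + half]
    jet = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_orient)
            jet.append(np.hypot(real, imag).mean())
    return np.array(jet)


def jet_similarity(j1, j2):
    """Normalised dot product between two jets (1.0 = identical)."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))
```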
We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
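A minimal sketch of the detector summarised above: HOG descriptors with fine orientation binning and block-wise contrast normalisation are computed for fixed-size windows and separated into "person" and "background" classes by a linear SVM. Window scanning over the image and non-maximum suppression are omitted, and the parameter values are assumptions.

```python
# Minimal HOG + linear SVM sketch for window-level person classification.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC


def hog_descriptor(window_gray):
    """window_gray: 2-D array, e.g. a 128x64 detection window."""
    return hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")


def train_person_detector(pos_windows, neg_windows):
    """pos_windows / neg_windows: lists of equal-size grayscale windows."""
    X = np.array([hog_descriptor(w) for w in pos_windows + neg_windows])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    return LinearSVC(C=0.01).fit(X, y)
```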
Article
In this paper we extend the Parts-Based approach of face verification by performing a frequency-based decomposition. The Parts-Based approach divides the face into a set of blocks which are then considered to be separate observations; this is a spatial decomposition of the face. This paper extends the Parts-Based approach by also dividing the face in the frequency domain and treating each frequency response from an observation separately. This can be expressed as forming a set of sub-images where each sub-image represents the response to a different frequency of, for instance, the Discrete Cosine Transform. Each of these sub-images is treated separately by a Gaussian Mixture Model (GMM) based classifier. The classifiers from each sub-image are then combined using weighted summation, with the weights being derived using linear logistic regression. It is shown on the BANCA database that this method improves the performance of the system from an Average Half Total Error Rate of 26.59% for a baseline GMM Parts-Based system to 14.85% for a column-based approach on the frequency sub-images, for Protocol P.
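The frequency-based decomposition described above can be sketched as follows: the face is split into blocks, a 2-D DCT is applied to each block, and the coefficients at the same frequency index across all blocks are collected into one "sub-image" observation set (each of which would then be scored by its own GMM and fused, e.g. by logistic regression). This is an illustration of the decomposition only; the block size and number of retained frequencies are assumptions.

```python
# Minimal sketch of the block-DCT frequency decomposition.
import numpy as np
from scipy.fft import dctn


def frequency_subimages(face, block=8, n_freqs=15):
    """face: 2-D array with dimensions divisible by `block`.
    Returns an array of shape (n_freqs, n_blocks): one row per DCT frequency index."""
    h, w = face.shape
    coeffs = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(face[i:i + block, j:j + block], norm="ortho")
            # First coefficients in row-major order (a simple stand-in for a
            # zig-zag low-frequency scan).
            coeffs.append(c.ravel()[:n_freqs])
    coeffs = np.array(coeffs)          # (n_blocks, n_freqs)
    return coeffs.T                    # (n_freqs, n_blocks)
```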
Conference Paper
The last few years have witnessed an ever-increasing demand for border crossing, which introduces the need to speed up the clearance process at Border Crossing Points (BCPs). Automated Border Control (ABC) gates, or shortly e-Gates, can verify the identity of travelers crossing the borders by exploiting their biometric traits, without the need for constant human intervention. Biometric technologies have a relevant impact on the improvement of the efficiency, effectiveness and security of the checking processes. Automated biometric recognition can increase the border processing throughput of the BCP, as well as facilitate the clearance procedures. To grant passage of the border, the e-Gate compares the biometric samples of the traveler stored in the electronic document with live acquisitions. This paper presents the latest substantial advances in the design of e-Gates. In particular, it presents the Biometric Verification System in detail, including its hardware and software components, as well as the procedures followed during the biometric verification of the traveler's identity. We address the complex issue of measuring the performance of an ABC system, considering the real applicability of the figures of merit usually adopted in biometric system evaluation. To complete the view of current e-Gates, we highlight the main challenges and research trends relating to the biometric systems currently used in e-Gates.
Chapter
Iris images contain rich texture information for reliable personal identification. However, forged iris patterns may be used to spoof iris recognition systems. This paper proposes an iris anti-spoofing approach based on the texture discrimination between genuine and fake iris images. Four texture analysis methods are used for iris liveness detection: the gray level co-occurrence matrix, the statistical distribution of iris texture primitives, local binary patterns (LBP), and weighted LBP. A fake iris image database is constructed for performance evaluation of iris liveness detection methods: fake iris images are captured from artificial eyeballs, textured contact lenses and iris patterns printed on paper, or synthesised from textured contact lens patterns. Experimental results demonstrate the effectiveness of the proposed texture analysis methods for iris liveness detection, and the learned statistical texture features based on weighted LBP achieve 99% accuracy in classifying genuine and fake iris images.
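The texture-based liveness idea above can be illustrated with plain uniform-LBP histograms (the weighted-LBP and GLCM variants are omitted): genuine and fake iris images are described by LBP histograms and separated with an SVM. This is a generic sketch, not the cited paper's implementation.

```python
# Minimal sketch of LBP-histogram features + SVM for iris liveness detection.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC


def lbp_histogram(gray_iris, P=8, R=1):
    """Uniform LBP codes take values in [0, P+1], hence P+2 histogram bins."""
    lbp = local_binary_pattern(gray_iris, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist


def train_liveness_detector(genuine_images, fake_images):
    X = np.array([lbp_histogram(im) for im in genuine_images + fake_images])
    y = np.array([1] * len(genuine_images) + [0] * len(fake_images))
    return SVC(kernel="rbf").fit(X, y)
```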
Article
Iris recognition standards are open specifications for iris cameras, iris image properties, and iris image records. Biometric data standards are a necessity for applications in which a consumer of a data record must process biometric input from an arbitrary producer. The archetype for standard iris storage formats has been the flight of standards already developed for the storage of biometric data on electronic passports.
Article
Iris sample quality has a number of important applications. It can be used at a variety of processing levels in iris recognition systems, for example, at the acquisition stage, at image enhancement stage, or at matching and fusion stage. Metrics designed to evaluate iris sample quality are used as figures of merit to quantify degradations in iris images due to environmental conditions, unconstrained presentation of individuals or due to postprocessing that can reduce iris information in the data. This chapter presents a short summary of quality factors traditionally used in iris recognition systems. It further introduces new metrics that can be used to evaluate iris image quality. The performance of the individual quality measures is analyzed, and their adaptive inclusion into iris recognition systems is demonstrated. Three methods to improve the performance of biometric matchers based on vectors of quality measures are described. For all the three methods, the reported experimental results show significant performance improvement when applied to iris biometrics. This confirms that the newly proposed quality measures are informative in the sense that their involvement results in improved iris recognition performance.
Article
Iris recognition is both a technology already in successful use in ambitious nation-scale applications and also a vibrant, active research area with many difficult and exciting problems yet to be solved. This chapter gives a brief introduction to iris recognition and an overview of the chapters in the Second Edition of the Handbook of Iris Recognition.
Chapter
In this chapter, we examine the implications of ageing for the usability of biometric solutions. We first set out what usability means, and which factors need to be considered when designing a solution that is ‘usable’. We review usability successes and issues with past biometric techniques, in the context of a set of solutions, before considering how usability will be affected for ageing users because of the physical and cognitive changes they undergo. Finally, we identify the opportunities and challenges that ageing presents for researchers, developers and operators of biometric systems.
Article
Iris recognition is one of the most promising fields in biometrics. Notwithstanding this, there are not many research works addressing it with machine learning techniques. In this survey, we focus especially on recognition, and leave the detection and feature extraction problems in the background. However, the kind of features used to code the iris pattern may significantly influence the complexity of the methods and their performance. In other words, complexity affects learning, and iris patterns require relatively complex feature vectors, even if their size can be optimized. A cross-comparison of these two parameters, feature complexity vs. learning effectiveness, in the context of different learning algorithms, would require an unbiased common benchmark. Moreover, at present it is still very difficult to reproduce techniques and experiments due to the lack of either sufficient implementation details or reliable shared code.
Conference Paper
Face recognition based on 2D images is a widely used biometric approach, mainly due to its simplicity and high usability. Nonetheless, such solutions are vulnerable to spoof attacks made with non-real faces. In order to identify malicious attacks on such biometric systems, 2D face liveness detection approaches are developed. In this work, face liveness detection approaches are categorized based on the type of liveness indicator used. This categorization helps in understanding different spoof attack scenarios and their relation to the developed solutions. A review of the latest works dealing with face liveness detection is presented, and a discussion links the state-of-the-art solutions with the presented categorization along with the available and possible future datasets. All of this aims to provide a clear path for the future development of innovative face liveness detection solutions.
Article
Biometrics is becoming increasingly common in establishments that require high security such as state security and financial sectors. The increased threat to national security by terrorists has led to the explosive popularity of biometrics. A number of biometric devices are now available to capture biometric measurements such as fingerprints, palm, retinal scans, keystroke, voice recognition and facial scanning. However, the accuracy of these measurements varies, which has a direct relevance on the levels of security they offer. With the need to combat the problems related to identity theft and other security issues, society will have to compromise between security and personal freedoms. Securing Biometrics Applications investigates and identifies key impacts of biometric security applications, while discovering opportunities and challenges presented by the biometric technologies available. This volume also includes new biometric measures for national ID card, passport and centralized database system. Securing Biometrics Applications is designed for a professional audience, researchers, and practitioners. This book is suitable for advanced-level students as a secondary text or reference book in computer science. © 2008 Springer Science+Business Media, LLC. All rights reserved.
Conference Paper
This paper presents an evaluation of results obtained from a face recognition system in a real, uncontrolled location: the involved infrastructure is Barajas Airport (the international airport in Madrid, Spain). The use of this infrastructure during normal operation hours imposed some constraints: it was not allowed to change or add new cameras, and passengers should not be disturbed by any means. Passengers should not be aware of the presence of the system, so no request should be made to change their normal behavior. To fulfill these requirements, three video surveillance cameras were selected: two in the corridor areas and one at a control point. Images were acquired and processed over three weeks, with illumination changes, several quality levels, and collaborative and non-collaborative subjects. The influence of the data compression method and the classifier is detailed in the paper. Three scenarios were simulated: the first is a normal operational mode, the second a high-security mode, and the last a friendly or soft-recognition mode. Four data compression methods were considered: 1DPCA (1D principal component analysis), 2DPCA (2D principal component analysis), 2DLDA (2D linear discriminant analysis) and CSA (coupled subspace analysis). CSA obtained the best performance. For classification purposes, SVMs (support vector machines) were selected, with excellent results. The overall analysis shows that the approach taken leads to excellent results given the hard conditions of a real scenario such as an airport.
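A bare-bones sketch of one of the evaluated configurations, a 1D-PCA compression step followed by an SVM classifier (the 2DPCA, 2DLDA and CSA variants are not reproduced): flattened face crops are compressed with PCA and classified with an SVM, and the acceptance threshold on the classifier's confidence can be raised or lowered to emulate the high-security and soft-recognition modes. Parameter values are assumptions.

```python
# Minimal sketch: PCA compression + SVM classification of surveillance face crops.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline


def pca_svm_face_recogniser(X_train, y_train, n_components=100):
    """X_train: (n_samples, n_pixels) flattened, equal-size face crops."""
    model = make_pipeline(
        PCA(n_components=min(n_components, len(X_train) - 1)),
        SVC(kernel="rbf", probability=True),
    )
    return model.fit(X_train, y_train)

# In a stricter ("high security") mode, an identity would be accepted only if
# model.predict_proba(x)[0].max() exceeds a higher confidence threshold.
```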
Article
The overall ePassport authentication procedure should be fast in order to achieve a sufficient throughput of people at border crossings such as airports. At the same time, the ePassport and its holder should be checked as thoroughly as possible. By speeding up the ePassport authentication procedure, more time can be spent on the verification of biometrics. We demonstrate that our proposed solution makes it possible to replace the current combination of PACE and EAC with a more efficient authentication procedure that provides even better security and privacy guarantees. When abstracting away from the time needed for the ePassport to verify the terminal's certificate, a speed-up of at least 40% compared with the current ePassport authentication procedure is to be expected.
Conference Paper
Recent studies on ECG signals have proved that they can be employed as biometric traits able to obtain sufficient accuracy in a wide set of application scenarios. Most of the systems in the literature, however, are based on templates consisting of vectors of integer or floating point numbers. While any numerical representation is inherently binary, here we consider as binary templates only those codings in which similarity or distance metrics can be directly applied for performing identity comparisons. With respect to templates composed of integer or floating point values, the use of binary templates presents important advantages, such as smaller memory space and faster and simpler matching functions. Binary templates could therefore be adopted in a wider range of applications than traditional ECG templates, such as wearable devices and body area networks. Moreover, binary templates are suitable for most of the biometric template protection methods in the literature. This paper presents a novel approach for computing and processing binary ECG templates (HeartCode). Experimental results prove that the proposed approach is effective and obtains performance comparable to more mature biometric methods for ECG recognition, with an Equal Error Rate (EER) of 8.58% on a significantly large database of 8400 samples extracted from Holter acquisitions performed in uncontrolled conditions.
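Two properties emphasised above, matching binary templates with a plain Hamming distance and summarising verification performance with the Equal Error Rate (EER), can be sketched as follows; the actual HeartCode template construction is not reproduced.

```python
# Minimal sketch of Hamming-distance matching and an approximate EER computation.
import numpy as np


def hamming_distance(t1, t2):
    """t1, t2: equal-length binary (0/1) numpy arrays. Returns a normalised distance."""
    return np.count_nonzero(t1 != t2) / len(t1)


def equal_error_rate(genuine_scores, imposter_scores):
    """Scores are distances (smaller = more similar). Returns an approximate EER,
    i.e. the operating point where the false rejection and false acceptance rates meet."""
    thresholds = np.sort(np.concatenate([genuine_scores, imposter_scores]))
    frr = np.array([np.mean(genuine_scores > t) for t in thresholds])   # genuine rejected
    far = np.array([np.mean(imposter_scores <= t) for t in thresholds]) # imposters accepted
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2.0
```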