Preprint

Automated Inference on Criminality using Face Images


Abstract

We study, for the first time, automated inference on criminality based solely on still face images. Via supervised machine learning, we build four classifiers (logistic regression, KNN, SVM, CNN) using facial images of 1856 real persons controlled for race, gender, age and facial expressions, nearly half of whom were convicted criminals, for discriminating between criminals and non-criminals. All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic. Also, we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that among non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric, with the non-criminal manifold lying in the kernel with a smaller span, exhibiting a law of normality for faces of non-criminals. In other words, the faces of the general law-abiding public have a greater degree of resemblance compared with the faces of criminals; equivalently, criminals show a higher degree of dissimilarity in facial appearance than the general population.
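The abstract names four classifier families trained on facial measurements. As a hedged, minimal sketch (not the authors' code: the data below is synthetic, and the feature names are merely borrowed from the abstract for illustration), a k-nearest-neighbour classifier over hand-crafted feature vectors could look like this:

```python
import math
import random

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance over feature vectors)."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

# Illustrative features: (lip curvature, eye inner-corner distance,
# nose-mouth angle). Feature names come from the abstract; all values
# here are invented, not measured.
random.seed(0)
group_a = [([random.gauss(0.2, 0.05), random.gauss(3.1, 0.1), random.gauss(20, 2)], "A")
           for _ in range(50)]
group_b = [([random.gauss(0.4, 0.05), random.gauss(3.4, 0.1), random.gauss(25, 2)], "B")
           for _ in range(50)]
train = group_a + group_b

print(knn_predict(train, [0.21, 3.1, 20.5]))
```

Note that high accuracy from such a pipeline says nothing about whether the features capture what the labels claim; the citing papers below press exactly that point.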


... How neural networks change the basis of knowledge production as compared to visual systems is demonstrated very clearly in several computer vision and data science projects, for instance Wu and Zhang's 2016 paper 'Automated Inference on Criminality Using Face Images' [82]. That paper used 1856 government ID images, about half of which included individuals with a criminal conviction, as a dataset for a criminal-propensity classifier. ...
... That paper used 1856 government ID images, about half of which included individuals with a criminal conviction, as a dataset for a criminal-propensity classifier. 9 The theoretical grounding of this research is similar to other APR exercises, ostensibly testing the social inference hypothesis [82]. But the research is more revealing in terms of how computer vision systems claim to generate knowledge about individuals. ...
... But the research is more revealing in terms of how computer vision systems claim to generate knowledge about individuals. Most telling is the authors' acknowledgment that the variance between criminal and non-criminal populations is not evident from visual assessments or simple Euclidean measurements [82]. In other words, visual information, in the sense of information that is visually perceivable and interpretable, which Galton had unsuccessfully relied on, was insufficient for physiognomic purposes. ...
Conference Paper
Computer vision and other biometrics data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.
... One of these papers states that "As expected, the state-of-the-art CNN classifier performs the best, achieving 89.51% accuracy...These highly consistent results are evidences for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic" [40]. Another paper states that "the test accuracy of 97%, achieved by CNN, exceeds our expectations and is a clear indicator of the possibility to differentiate between criminals and noncriminals using their facial images" [14]. ...
... The experimental dataset used by Wu and Zhang [40] has similar problems. The Non-Criminal images for that work are described as follows. ...
... The essential point is that if there is anything at all different about ID photos acquired from the Internet versus ID photos supplied by a police department, this difference is 100% correlated with the Criminal / Non-Criminal labels and will be used by the trained CNN to classify the images. So, just as with the experiments in [14], there is no good reason to believe that the CNN in the experiments in [40] was able to learn a model of Criminal / Non-Criminal facial structure. ...
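The confound described in that critique can be reproduced numerically. In this hedged sketch (pure simulation, all numbers invented), the two groups of "photos" are identical in every face-relevant respect and differ only in an acquisition artifact, a global brightness offset between two photo pipelines; a one-parameter threshold classifier then reaches near-perfect accuracy without learning anything about faces:

```python
import random

random.seed(1)

def make_image_stats(n, brightness_offset):
    """Summarise each 'image' by mean pixel brightness. Face-relevant
    structure is identical across groups; only the acquisition
    pipeline (the brightness offset) differs."""
    return [random.gauss(128 + brightness_offset, 5) for _ in range(n)]

web_photos    = make_image_stats(500, +10)  # labelled "non-criminal"
police_photos = make_image_stats(500, -10)  # labelled "criminal"

# A trivial "classifier": threshold on brightness alone.
threshold = 128
correct = sum(b > threshold for b in web_photos) + \
          sum(b <= threshold for b in police_photos)
accuracy = correct / 1000
print(f"{accuracy:.2%}")
```

Because the artifact is perfectly correlated with the labels, any learner, including a CNN, can exploit it; high test accuracy is therefore not evidence that facial structure was learned.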
Preprint
The automatic analysis of face images can generate predictions about a person's gender, age, race, facial expression, body mass index, and various other indices and conditions. A few recent publications have claimed success in analyzing an image of a person's face in order to predict the person's status as Criminal / Non-Criminal. Predicting criminality from a face may initially seem similar to other facial analytics, but we argue that attempts to create a criminality-from-face algorithm are necessarily doomed to fail, that apparently promising experimental results in recent publications are an illusion resulting from inadequate experimental design, and that there is potentially a large social cost to belief in the criminality-from-face illusion.
... In order to illustrate the ethical implications of goal-oriented design, let's take an example from machine learning that most readers will find straight-forwardly problematic. Here are two conclusions from an abstract on automated infer-ence of criminality using faces (Wu and Zhang, 2016): ...
... The data for Wu and Zhang (2016) comes from China and includes only men, but it is ethically safer to assume that data from the ministry of public security and various police departments is biased than it is to assume that it is balanced and representative. ...
... The ethics you adopt has a lot to do with what you think of human beings. In the case of Wu and Zhang (2016), tying facial structure to criminality suggests that some humans are "bad". ...
... Such quite subtle sociopsychological cues seem to be picked up by our CNN method; otherwise it would be difficult to explain the good performance of the proposed face classifier. Finally, to safeguard against a possible risk of data overfitting by our neural network for statistical inference on sociopsychological perceptions of attractive female faces, we conduct the fault-finding experiment proposed in [24], seeking counterexamples. We randomly label the faces in our sample set as positive and negative instances with equal probability, and redo the above experiments of classification after retraining the CNN with the artificially labeled samples. ...
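The "fault-finding" control quoted above, retraining after randomly assigning labels, is a standard permutation check: with shuffled labels, held-out accuracy should collapse to chance. A minimal illustration (synthetic scalar features, all values invented, and a nearest-centroid rule standing in for the CNN):

```python
import random

random.seed(2)

def nearest_centroid_accuracy(features, labels):
    """Fit per-class means on the first half, score on the second half."""
    half = len(features) // 2
    train_x, train_y = features[:half], labels[:half]
    test_x, test_y = features[half:], labels[half:]
    centroids = {}
    for cls in set(train_y):
        pts = [x for x, y in zip(train_x, train_y) if y == cls]
        centroids[cls] = sum(pts) / len(pts)
    preds = [min(centroids, key=lambda c: abs(x - centroids[c])) for x in test_x]
    return sum(p == t for p, t in zip(preds, test_y)) / len(test_y)

# Two genuinely separable groups of a scalar feature.
features = [random.gauss(0, 1) for _ in range(200)] + \
           [random.gauss(3, 1) for _ in range(200)]
labels = ["neg"] * 200 + ["pos"] * 200
paired = list(zip(features, labels))
random.shuffle(paired)
features, labels = map(list, zip(*paired))

true_acc = nearest_centroid_accuracy(features, labels)

shuffled = labels[:]
random.shuffle(shuffled)  # destroy any real feature-label association
null_acc = nearest_centroid_accuracy(features, shuffled)

print(f"true labels: {true_acc:.2f}, shuffled labels: {null_acc:.2f}")
```

Note the limit of this control, as the critiques above point out: it detects pure overfitting, but it cannot detect a real signal that is merely a dataset confound rather than the trait of interest.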
... This work is a sequel to our earlier paper [24]. We drive the research on face processing, analysis and recognition beyond the tasks of biometric-based identification, and try to extend it in the direction of automatic statistical inferences on sociopsychological perceptions, such as personality traits and behavioral propensity. ...
Preprint
This article is a sequel to our earlier paper [24]. Our main objective is to explore the potential of supervised machine learning in face-induced social computing and cognition, riding on the momentum of the much-heralded successes of face processing, analysis and recognition on the tasks of biometric-based identification. We present a case study of automated statistical inference on sociopsychological perceptions of female faces controlled for race, attractiveness, age and nationality. As in [24], our empirical evidence points to the possibility of teaching computer vision and machine learning algorithms, using example face images, to predict personality traits and behavioral predisposition.
... Appearance-based implicit judgements have a strong impact on people's behaviour in social interactions (Todorov, Mandisodza, Goren, & Hall, 2005), although the validity of such inferences has been repeatedly questioned (e.g., Bengstrom & West, 2017;Kilianski, 2008;Olivola & Todorov 2010;Porter & ten Brinke, 2009;Wu & Zhang, 2016;Zebrowitz et al., 1996). ...
... While some researchers speak in favour of our ability to make relatively accurate and reliable inferences on others' criminality (e.g. Valla et al., 2011;Wu & Zhang, 2016), others argued that a person's facial appearance is not a valid indicator of their underlying characteristics due to the shortcomings of previous research (e.g. Todorov et al., 2015). ...
Article
Full-text available
Every day, people make quick, spontaneous and automatic appearance-based inferences about others. This is particularly true for social attributes, such as intelligence or attractiveness, but also aggression and criminality. There are also indications that certain personality traits, such as the dark traits (i.e. Machiavellianism, narcissism, psychopathy, sadism), influence the degree of accuracy of appearance-based inferences, although not all authors agree on this. Therefore, this study aims to investigate whether there are interpersonal advantages related to the dark traits when assessing someone's criminality. For that purpose, an online study was conducted on a convenience sample of 676 adult females, whose task was to assess whether a certain person was a criminal or not based on their photograph. The results have shown that narcissism and Machiavellianism were associated with a greater tendency to indicate that someone is a criminal, reflecting an underlying negative bias that individuals high on these traits hold about people in general.
... Recently, Wu and Zhang studied the use of artificial intelligence for the purpose of facial recognition of criminals and offenders (31). Furthermore, there have been several attempts (2004-2016) to assess the role of NPS, research chemicals, and cognitive enhancers (32-34). ...
Article
Full-text available
Background: The ability of humans to recognise the faces of countless individuals is unique and has an evolutionary basis. The cortical surface responsible for this task is significantly large in humans. The aim of this study was to analyze the face recognition abilities of a selected population of Iraqi students and to determine the correlation of these abilities with gender, handedness, and ethnicity. Objectives: To identify potential super-recognizers in a population of Iraqi medical students. Methods: This cross-sectional study started in October 2016. The participants included medical students (n, 309), aged 17 - 25 years, from 4 ethnic groups: Arabs (288), Kurds (12), Turks (7), and Christian ethnicities (2). The face recognition ability was quantitatively scored (0 - 14), using a face recognition test. The test was distributed electronically via bit-encrypted Intranet systems. Nonparametric and inferential statistics were measured to determine the correlation between the scores and gender, handedness, and ethnicity. Results: More than half of the participants (51.5%) were found to be potential super-recognizers. There was a significant difference between males and females (10.72 vs. 10.05; P = 0.027). However, there was no significant difference between right- and left-handed individuals (10.29 vs. 10.09; P = 0.394). On the other hand, there was a significant interethnic difference between Arabs and Kurds (10.19 vs. 11.5; P = 0.022). Conclusions: Face recognition abilities had not been investigated in Iraqi populations before the present study. This study indicated the correlation of face recognition abilities with gender and ethnicity. Individuals with high scores on face recognition tests are known as super-recognizers. These individuals can be valuable to law-enforcement and intelligence agencies worldwide. Nonetheless, practical applications of this study are not limited to artificial intelligence, biometrics, or anthropometrics.
... Proponents of algorithms often cite algorithms' alleged ability to make bias-free decisions that are more accurate than those of their human counterparts. Recent studies have even reported that algorithms are more accurate than humans at detecting sexual orientation (Wang and Kosinski, 2017), disguised faces (Singh et al., 2017), and criminality using face images (Wu and Zhang, 2016). Such studies raise serious ethical concerns and present claims based on inconclusive, inscrutable, or misguided evidence (Mittelstadt et al., 2016). ...
Preprint
Full-text available
Recently, media and communication researchers have shown an increasing interest in critical data studies and ways to utilize data for social progress. In this commentary, I highlight several useful contributions in the International Panel on Social Progress' (IPSP) report toward identifying key data justice issues, before suggesting extra focus on algorithmic discrimination and implicit bias. Following my assessment of the IPSP's report, I emphasize the importance of two emerging media and communication areas-data ontology and semantic technology-that impact internet users daily yet receive limited attention from critical data researchers. I illustrate two examples to show how data ontologies and semantic technologies impact social processes by engaging in the hierarchization of social relations and entities, a practice that will become more common as the internet changes states towards a "smarter" version of itself.
... How much information about a person can be extracted from the person's facial features, and to what extent it matters how different features of the face are combined in making the judgment [22], are interesting questions. They are also questions with the potential to challenge the organisation of society, in the event that judgments are made at mass scale by computers with live access to extensive networks of cameras [23]. ...
Article
Full-text available
In many developed countries, human life expectancy has doubled over the last 180 years from ~40 to ~80 years. Underlying this great advance is a change in how we age, yet our understanding of this change remains limited. Here we present a unique database rich with possibilities to study the human ageing process: the AgeGuess.org database on people's perceived and chronological ages. Perceived age (i.e. how old one looks to others) correlates with biological age, a measure of a person's health condition in comparison to the average of same-aged peers. Determining biological age usually involves elaborate molecular and cellular biomarkers. Using instead perceived age as a biomarker of biological age enables us to collect large amounts of data on biological age through a citizen science project, where people upload pictures of themselves and guess the ages of other people at http://www.ageguess.org. It furthermore allows data to be collected retrospectively, because people can upload photographs of themselves when they were younger or of their parents and grandparents. We can thus study the temporal variation in the gap between perceived age and chronological age to address questions such as whether we now age slower or delay ageing until older ages. The perceived age data presented here span birth cohorts from the years 1877 to 2014. Since 2012 the database has grown to now contain around 200,000 perceived age guesses. More than 4000 citizen scientists from over 120 countries of origin have uploaded ~5000 facial photographs. We detail how the data are collected, where the data can be downloaded free of charge, and which variables they contain. Beyond ageing research, the data present a wealth of possibilities to study how humans guess ages and to use this knowledge, for instance, in advancing and testing emerging applications of artificial intelligence and deep learning algorithms.
... (Note that image synthesis is commonly used to create datasets for healthcare applications [13,44].) Similarly, if images from a database of criminals are used to train a face generation algorithm [67], membership inference may expose an individual's criminal history. ...
Conference Paper
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator’s capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and/or sample quality.
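The black-box attack described in this abstract can be caricatured numerically. In this hedged sketch (pure simulation, no actual GAN, all score distributions invented), an overfit discriminator assigns systematically higher scores to training members than to unseen points, and simply thresholding those scores recovers membership well above chance:

```python
import random

random.seed(3)

# Simulated discriminator scores: an overfit discriminator tends to
# score training members higher than non-members.
member_scores     = [random.gauss(0.8, 0.1) for _ in range(1000)]
non_member_scores = [random.gauss(0.6, 0.1) for _ in range(1000)]

def infer_membership(score, threshold=0.7):
    """Black-box attack: declare 'member' when the score clears a threshold."""
    return score > threshold

tp = sum(infer_membership(s) for s in member_scores)
tn = sum(not infer_membership(s) for s in non_member_scores)
attack_accuracy = (tp + tn) / 2000
print(f"{attack_accuracy:.2%}")  # well above the 50% chance level
```

This is why, as the surrounding citation contexts note, training a generative model on a database of criminal records can leak an individual's criminal history: membership itself is the sensitive fact.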
... Even more alarming than the scientific criticism of the method and data were the ethical concerns regarding possible uses for this study, e.g., homosexuality is a crime in several countries. Earlier, another study investigated automated inference on criminality from facial images in China [59]. The cited cases are not isolated or sporadic. Start-ups and high-tech companies have continually offered examples of how organizations may affect society not only with their technical solutions, but also with their processes, rules, and business strategies. ...
Article
Full-text available
Academic literature has indicated a new moment for the HCI field that requires it to revisit methods and practices to consider aspects that are difficult to deal with, such as human values and culture. Although recognized as important and a challenge for HCI, human values is still a topic that demands investigation, discussion, and practical results (theoretical, methodological, technical) so that it may become somewhat useful for HCI as both a discipline and a community. This paper presents an informed discussion in which we explore possible understandings for values in HCI, the importance of the topic, and existing approaches. We draw on the literature and on our own research experiences in the topic to develop critical discussions and suggest possible directions for advancing the research and practice in the context of this challenge. Source: https://sol.sbc.org.br/journals/index.php/jis/article/view/689
... The problem is not novel and becomes dangerous if used for decision making [95]. Besides the tragicomic revamping of phrenology through deep learning [96], ProPublica's assessment of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, a tool used to predict a person's risk of recidivism, is a serious example of models that learn bias [97]. ...
Article
Full-text available
Background Nowadays, trendy research in biomedical sciences juxtaposes the term ‘precision’ to medicine and public health with companion words like big data, data science, and deep learning. Technological advancements permit the collection and merging of large heterogeneous datasets from different sources, from genome sequences to social media posts or from electronic health records to wearables. Additionally, complex algorithms supported by high-performance computing allow one to transform these large datasets into knowledge. Despite such progress, many barriers still exist against achieving precision medicine and precision public health interventions for the benefit of the individual and the population. Main body The present work focuses on analyzing both the technical and societal hurdles related to the development of prediction models of health risks, diagnoses and outcomes from integrated biomedical databases. Methodological challenges that need to be addressed include improving semantics of study designs: medical record data are inherently biased, and even the most advanced deep learning’s denoising autoencoders cannot overcome the bias if not handled a priori by design. Societal challenges to face include evaluation of ethically actionable risk factors at the individual and population level; for instance, usage of gender, race, or ethnicity as risk modifiers, not as biological variables, could be replaced by modifiable environmental proxies such as lifestyle and dietary habits, household income, or access to educational resources. Conclusions Data science for precision medicine and public health warrants an informatics-oriented formalization of the study design and interoperability throughout all levels of the knowledge inference process, from the research semantics, to model development, and ultimately to implementation.
... For instance, an organization may use datasets from a specific health provider or a specific commercial dataset, and discovering that a record was in the training set may leak further information about a health condition or a commercial relationship of an individual. Concretely, if an image database from a database of criminals is used to train a face generation algorithm, a membership inference attack on a picture leaks information about a person's criminal history [40]. ...
Article
Full-text available
Recent advances in machine learning are paving the way for the artificial generation of high quality images and videos. In this paper, we investigate how generating synthetic samples through generative models can lead to information leakage, and, consequently, to privacy breaches affecting individuals' privacy that contribute their personal or sensitive data to train these models. In order to quantitatively measure privacy leakage, we train a Generative Adversarial Network (GAN), which combines a discriminative model and a generative model, to detect overfitting by relying on the discriminator capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, and show how to improve it through auxiliary knowledge of samples in the dataset. We test our attacks on several state-of-the-art models such as Deep Convolutional GAN (DCGAN), Boundary Equilibrium GAN (BEGAN), and the combination of DCGAN with a Variational Autoencoder (DCGAN+VAE), using datasets consisting of complex representations of faces (LFW) and objects (CIFAR-10). Our white-box attacks are 100% successful at inferring which samples were used to train the target model, while the best black-box attacks can infer training set membership with over 60% accuracy.
... A recent example of physiognomy as pseudo-science is a Chinese study claiming to be able to detect criminality from identity photographs [18]. ...
Preprint
Recent research used machine learning methods to predict a person's sexual orientation from their photograph (Wang and Kosinski, 2017). To verify this result, two of these models are replicated, one based on a deep neural network (DNN) and one on facial morphology (FM). Using a new dataset of 20,910 photographs from dating websites, the ability to predict sexual orientation is confirmed (DNN accuracy male 68%, female 77%, FM male 62%, female 72%). To investigate whether facial features such as brightness or predominant colours are predictive of sexual orientation, a new model based on highly blurred facial images was created. This model was also able to predict sexual orientation (male 63%, female 72%). The tested models are invariant to intentional changes to a subject's makeup, eyewear, facial hair and head pose (angle that the photograph is taken at). It is shown that the head pose is not correlated with sexual orientation. While demonstrating that dating profile images carry rich information about sexual orientation these results leave open the question of how much is determined by facial morphology and how much by differences in grooming, presentation and lifestyle. The advent of new technology that is able to detect sexual orientation in this way may have serious implications for the privacy and safety of gay men and women.
... The application of machine learning techniques to crime prevention has gone so far as to rehabilitate the thinking of Cesare Lombroso. According to a controversial study (Wu, Zhang, 2016), it is possible to distinguish between criminals and non-criminals with 90.0% accuracy simply through the automated analysis of images of human faces. The authors' words are steeped in the myth of algorithmic neutrality: «unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc., no mental fatigue, no preconditioning of a bad sleep or meal» (Ivi, 2). ...
Article
Full-text available
From the recommendation of cultural content to the identification of potential criminals, a growing number of activities are ordinarily delegated to algorithms and AI systems. These are narrated as neutral technologies which make complex processes more efficient and lead to objective results. However, a wide literature argues that algorithms are social products that reflect the particular interests, cultural assumptions and biases of individuals and organizations. The present contribution aims to deconstruct in a Foucaultian way the algorithmic neutrality myth, illustrating its genesis, discursive facets and weaknesses, also drawing from a series of empirical cases. In the conclusion, we propose a counternarrative of the algorithm focused on explainability and collective sovereignty. Article included in The Lab's Quaterly special issue on "Algorithms as social constructions" (edited by Martella, Campo & Ciccarese).
... 7 Will such a 'new discipline' be ''an example of statistics-led research with no theoretical underpinning''? This is how Susan McVie, professor of quantitative criminology at the University of Edinburgh, responded to the publicity surrounding a recent paper uploaded to the most important 'hard' science e-print server, the arXiv (BBC, 2016; Wu and Zhang, 2016). This paper claimed that, using supervised machine learning, the authors - who work in an Electrical Engineering department 8 - had developed a system for distinguishing criminals from non-criminals (or, as the authors label them, 'normal people'), with criminals successfully identified 89% of the time. ...
Article
Full-text available
This paper argues that analyses of the ways in which Big Data has been enacted in other academic disciplines can provide us with concepts that will help understand the application of Big Data to social questions. We use examples drawn from our Science and Technology Studies (STS) analyses of -omic biology and high energy physics to demonstrate the utility of three theoretical concepts: (i) primary and secondary inscriptions, (ii) crafted and found data, and (iii) the locus of legitimate interpretation. These help us to show how the histories, organisational forms, and power dynamics of a field lead to different enactments of big data. The paper suggests that these concepts can be used to help us to understand the ways in which Big Data is being enacted in the domain of the social sciences, and to outline in general terms the ways in which this enactment might be different to that which we have observed in the ‘hard’ sciences. We contend that the locus of legitimate interpretation of Big Data biology and physics is tightly delineated, found within the disciplinary institutions and cultures of these disciplines. We suggest that when using Big Data to make knowledge claims about ‘the social’ the locus of legitimate interpretation is more diffuse, with knowledge claims that are treated as being credible made from other disciplines, or even by those outside academia entirely.
... Both theories have a troubled history, as they have been used to justify racial discrimination as well as eugenic theories 75,76. While physiognomy in its original formulation has been largely debunked, modern studies have found correlations between facial width-to-height ratios and aggressive tendencies and behaviors 77, with regrettable renewed efforts to use machine learning approaches to detect such correlations raising serious ethical concerns 78,79. However, our results argue that while the ancient human intuition of a close relationship between the face and the brain has genetic support at the level of morphology, there is no genetic evidence for the supposed predictive value of face shape in behavioral-cognitive traits, which formed the core of physiognomy and related theories. ...
Preprint
Full-text available
Evidence from both model organisms and clinical genetics suggests close coordination between the developing brain and face, but it remains unknown whether this developmental link extends to genetic variation that drives normal-range diversity of face and brain shape. Here, we performed a multivariate genome-wide association study of cortical surface morphology in 19,644 European-ancestry individuals and identified 472 genomic loci influencing brain shape at multiple levels. We discovered a substantial overlap of these brain shape association signals with those linked to facial shape variation, with 76 common to both. These shared loci include transcription factors with cell-intrinsic roles in craniofacial development, as well as members of signaling pathways involved in brain-face crosstalk. Brain shape heritability is equivalently enriched near regulatory regions active in either brain organoids or in facial progenitor cells. However, brain shape association signals shared with face shape are distinct from those shared with behavioral-cognitive traits or neuropsychiatric disorder risk. Together, we uncover common genetic variants and candidate molecular players underlying brain-face interactions. We propose that early in embryogenesis, the face and the brain mutually shape each other through a combination of structural effects and paracrine signaling, but this interplay may have little impact on later brain development associated with cognitive function.
... The modest history of machine learning-led user classification is abundant with examples of marginalization 1, from inferring crime risks from face analysis [241] to using unknowingly sexist AI recruiting tools [100]. Regardless of the accuracy of interaction-based sensing techniques, there is similarly a range of potential unethical use-cases of such technology. ...
Preprint
Full-text available
The variety of information about users hidden in the details of interaction data is increasingly being utilized for recognizing complex mental processes. Digital systems can correspondingly influence the mental processes of users, paving the way for new interactive systems that interface with the human mind. This thesis presents advances to such interfaces: through four papers I show how human affect and cognition can be sensed and influenced computationally. Paper 1 presents two studies that together show that affect influences mobile interaction, which allows for binary discrimination between neutral and positive affect using sensor-led machine learning classification. Paper 2 builds upon the methods presented in Paper 1 and extends the classification domain to dishonesty, also using mobile interaction data. The paper shows across three studies how dishonesty and honesty vary in interactional details, and how this difference can be utilized for estimating the veracity of user behavior based on features engineered from mobile interaction data. Paper 3 presents a feasibility study of conducting virtual reality studies outside a laboratory, to increase heterogeneity and power. The paper shows through two studies how a range of VR tasks can be conducted without an immediate experimenter, with participants carrying out experiments themselves. In Paper 4 I apply this methodology and conduct a VR study with more than 200 participants to examine how manipulations of avatars can influence affect responses. The paper presents evidence supporting the link between affect and avatars, and additionally discusses the interplay between positive affect and body ownership.
... For example, scholars have argued that machine learning models applied to social data often do not account for myriad biases that arise during the analysis pipeline that can undercut the validity of study claims (Olteanu et al., 2016). Attempts to identify criminality (Wu and Zhang, 2016) and sexuality (Wang and Kosinski, 2018) from people's faces and predicting recidivism using criminal justice records (Larson and Angwin, 2016) have led to critiques that current attempts to apply machine learning to social data represent a new form of physiognomy (Aguera y Arcas et al., 2017). Physiognomy was the attempt to explain human behavior through body types and was characterized by poor theory and sloppy measurement (Gould, 1996). ...
Article
Full-text available
Research at the intersection of machine learning and the social sciences has provided critical new insights into social behavior. At the same time, a variety of issues have been identified with the machine learning models used to analyze social data. These issues range from technical problems with the data used and features constructed, to problematic modeling assumptions, to limited interpretability, to the models' contributions to bias and inequality. Computational researchers have sought out technical solutions to these problems. The primary contribution of the present work is to argue that there is a limit to these technical solutions. At this limit, we must instead turn to social theory. We show how social theory can be used to answer basic methodological and interpretive questions that technical solutions cannot when building machine learning models, and when assessing, comparing, and using those models. In both cases, we draw on related existing critiques, provide examples of how social theory has already been used constructively in existing work, and discuss where other existing work may have benefited from the use of specific social theories. We believe this paper can act as a guide for computer and social scientists alike to navigate the substantive questions involved in applying the tools of machine learning to social data.
... 5 Lowry and Macpherson (1988), p. 657. 6 See also Wu and Zhang (2016). 7 For a comprehensive overview of the current state of affairs regarding machine learning programs in social technology, see O'Neil (2016). ...
Article
Full-text available
Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, making it difficult to identify, mitigate, or evaluate using standard resources in epistemology and ethics. I demonstrate these points in the case of mitigation techniques by presenting what I call 'the Proxy Problem'. One reason biases resist revision is that they rely on proxy attributes, seemingly innocuous attributes that correlate with socially-sensitive attributes, serving as proxies for the socially-sensitive attributes themselves. I argue that in both human and algorithmic domains, this problem presents a common dilemma for mitigation: attempts to discourage reliance on proxy attributes risk a tradeoff with judgement accuracy. This problem, I contend, admits of no purely algorithmic solution.
... Under the labels of Big Data or Artificial Intelligence we find various techniques for the automated processing of large volumes of data in order to discover behavioral patterns among the individuals or phenomena studied. This has given rise to a data industry oriented toward predicting social phenomena for various purposes, namely: improving organizational processes, optimizing logistics routes, anticipating demand peaks in services, etc. But also others whose ethics are more dubious, for example: inferring a person's sexual orientation through facial recognition (Wang and Kosinski, 2018), detecting genetic diseases from a patient's face (Gurovich, Hanani, Bar, et al., 2019), inferring a person's degree of criminality from their face (Wu and Zhang, 2016), predicting in which areas more crimes will occur, calculating the probability that a prisoner will reoffend or that a complainant is lying in their statement, etc. (Suresh and Guttag, 2019; Kleinberg, Ludwig, Mullainathan and Sunstein, 2019; Ricaurte, 2019; Tayebi and Glasser, 2016; Hardyns and Rummens, 2017; Cui, 2016; Skeem and Lowenkamp, 2016; Fass, Heilbrun, DeMatteo and Fretz, 2008; Babuta, 2018). ...
Article
Full-text available
The defense of privacy has historically rested on protecting the autonomy and dignity of individuals. However, the recent development of the surveillance economy has multiplied the types of tracking technologies and targets of observation available. This situation is revealing the limits of the political, legal, and social mechanisms with which societies of liberal tradition protected privacy, and it forces us to rethink the threats to privacy from a new perspective. To that end, it will be argued that it is necessary to develop a collective dimension of privacy, one not centered solely on individuals, as a mechanism for understanding and interrelating the set of sociotechnical changes that threaten the dignity, political autonomy, and digital sovereignty of human groups. Keywords: Big Data; datafication; data economy; public space; conceptual history; data politics.
... Second, learning-theoretic induction might fail us when ML picks up on nonsensical trends in data. Take a recent ML paper that created a highly accurate classifier for criminals using only portraits (Wu and Zhang [2016]). A rebuttal found that this algorithm simply classified criminals with high accuracy based on whether they frowned in their portraits. ...
Preprint
In the ML fairness literature, there have been few investigations through the viewpoint of philosophy, a lens that encourages the critical evaluation of basic assumptions. The purpose of this paper is to use three ideas from the philosophy of science and computer science to tease out blind spots in the assumptions that underlie ML fairness: abstraction, induction, and measurement. Through this investigation, we hope to warn of these methodological blind spots and encourage further interdisciplinary investigation in fair-ML through the framework of philosophy.
... Machine learning on social data often does not account for myriad biases that arise during the analysis pipeline that can undercut the validity of study claims [87]. Attempts to identify criminality [120] and sexuality [117] from people's faces and predicting recidivism using criminal justice records [67] have led to critiques that current attempts to apply machine learning to social data represent a new form of physiognomy [3]. Physiognomy was the attempt to explain human behavior through body types and was characterized by poor theory and sloppy measurement [40]. ...
Preprint
Research at the intersection of machine learning and the social sciences has provided critical new insights into social behavior. At the same time, scholars have identified myriad ways in which machine learning, when applied without care, can also lead to incorrect and harmful claims about people (e.g. about the biological nature of sexuality), and/or to discriminatory outcomes. Here, we argue that such issues arise primarily because of the lack of, or misuse of, social theory. Walking through every step of the machine learning pipeline, we identify ways in which social theory must be involved in order to address problems that technology alone cannot solve, and provide a pathway towards the use of theory to this end.
... It should be noted that there are no indications that the algorithmic biases in biometrics are deliberately put into the algorithms by design; rather, they are typically a result of the training data used and other factors. In any case, one should also be mindful that, as with any technology, biometrics could be used in malicious or dystopian ways (e.g., privacy violations through mass surveillance [185] or "crime prediction" [186]). Consequently, a framework for human impact assessments [187] should be developed for biometrics as soon as possible. ...
Preprint
Full-text available
Systems incorporating biometric technologies have become ubiquitous in personal, commercial, and governmental identity management applications. Both cooperative (e.g. access control) and non-cooperative (e.g. surveillance and forensics) systems have benefited from biometrics. Such systems rely on the uniqueness of certain biological or behavioural characteristics of human beings, which enable individuals to be reliably recognised using automated algorithms. Recently, however, there has been a wave of public and academic concern regarding the existence of systemic bias in automated decision systems (including biometrics). Most prominently, face recognition algorithms have often been labelled as "racist" or "biased" by the media, non-governmental organisations, and researchers alike. The main contributions of this article are: (1) an overview of the topic of algorithmic bias in the context of biometrics, (2) a comprehensive survey of the existing literature on biometric bias estimation and mitigation, (3) a discussion of the pertinent technical and social matters, and (4) an outline of the remaining challenges and future work items, both from technological and social points of view.
Article
Full-text available
The automation brought about by big data analytics, machine learning and artificial intelligence systems challenges us to reconsider fundamental questions of criminal justice. The article outlines the automation which has taken place in the criminal justice domain and answers the question of what is being automated and who is being replaced thereby. It then analyses encounters between artificial intelligence systems and the law, by considering case law and by analysing some of the human rights affected. The article concludes by offering some thoughts on proposed solutions for remedying the risks posed by artificial intelligence systems in the criminal justice domain.
Article
Biometric traits such as faces, fingerprints, and irises are becoming prevalent in computer security applications, from authentication systems to identification systems. Given the sensitive nature of biometrics, a great deal of effort is put into protecting biometric data after it is acquired, from secure sketches and fuzzy extractors to the use of secure multiparty computation (in protocols such as SCiFI or GSHADE). While these solutions make sure that the extracted values (e.g., binary strings or vectors) that correspond to the biometrics are kept private and secure, their practical implementations are not optimal with respect to privacy guarantees in the process of extracting the information from the raw biometric data.
Article
Full-text available
The survival of organizations, especially universities, depends largely on the nature of the strategies that are formulated and on how they are handled in light of change. Organizations are looking for ways to succeed in this task and for the strategic factors that contribute to achieving organizational sustainability. Strategic physiognomy is a modern term in this field. This study aims to identify the effect of strategic physiognomy on organizational sustainability. The conceptual model was formulated from the components of strategic physiognomy (empowerment, inspiration, and deep understanding) and their relation to organizational sustainability. The results indicated that strategic physiognomy has a direct and significant impact on organizational sustainability.
Thesis
Artificial intelligence (AI) technologies are developing rapidly and are being widely adopted. Algorithms determine what we buy on online shopping platforms, choose the posts we see in our social media feeds, suggest results in our search engines, and filter profiles on dating apps. Beyond personal assistants and personalized recommendation systems, these technologies are quietly being integrated into more critical domains such as the justice system, finance, corporate recruitment, public safety, border control, and immigration regulation, where they are increasingly used to predict everything from our personality type to our professional opportunities. In this digital era, biometric surveillance systems incorporating AI are multiplying, and populations around the world are increasingly reduced to aggregates of analyzable data. AI companies promise that their technologies can identify subtle behavioral patterns and much more. Facial recognition combined with machine learning allows them to extract and analyze the physical characteristics of a person's face very efficiently, and thereby to discern and sketch a personality type or a behavioral tendency. This is called facial profiling. However, the application of facial profiling in domains where decisions have a real impact on our lives should not proceed on the sole pretext of technical efficiency. Indeed, profiling systems driven by machine learning techniques are not inherently neutral.
They reflect the priorities, preferences, and prejudices (the biased gaze) of those who shape artificial intelligence. The choice of the data collected and the rules for processing them, which remain opaque, can generate biases in the algorithms' results. Algorithms do not make objective predictions, because they were written by human beings who introduced into them, deliberately or not, their own prejudices. Consequently, algorithmic bias promises to exacerbate inequality, prejudice, and discrimination. This research aims to warn that the rapid developments of artificial intelligence and machine learning have allowed "scientific" racism to enter a new era, in which the models used and internalized by the machine incorporate the biases present in human behavior. Whether intentional or not, this laundering of human prejudices through computer algorithms can unduly legitimize them. In the era of ubiquitous cameras and Big Data, given society's growing reliance on machine learning to automate cognitive tasks, machine-learned physiognomy can also be applied at an unprecedented scale, and the impact such systems can have on our lives cannot be underestimated.
Article
In this article we use the natural lab of music festivals to examine behavioral change in response to the rapid introduction of smart surveillance technology into formerly unpoliced spaces. Festivals are liminal spaces, free from the governance of everyday social norms and regulations, permitting participants to assert a desired self. Due to a number of recent festival deaths, drug confiscations, pickpockets, and a terroristic mass shooting, festivals have quickly introduced smart security measures such as drones and facial recognition technologies. Such a rapid introduction contrasts with urban spaces where surveillance is introduced gradually and unnoticeably. In this article we use some findings from an online survey of festivalgoers to reveal explicit attitudes and experiences of surveillance. We found that surveillance is often discomforting because it changes experience of place, it diminishes feelings of safety, and bottom-up measures (health tents, being in contact with friends) are preferred to top-down surveillance. We also found marked variation between men, women, and nonbinary people’s feelings toward surveillance. Men were much less affected by surveillance. Women have very mixed views on surveillance; they simultaneously have greater safety concerns (especially sexual assault in public) and are keener on surveillance than men but also feel that it is ineffective in preventing assault (but might be useful in providing evidence subsequently). Our findings have significant ramifications for the efficacy of a one-size-fits-all solution of increased surveillance and security in smart places and cities and point to the need for more bottom-up safety measures. Key Words: anxiety, festivals, smart city, surveillance, well-being.
Article
Full-text available
The rapid development of information and communication technology has made it imperative that new human rights be spelled out to cope with an array of expected threats associated with this process. With artificial intelligence being increasingly put to practical use, the prospect arises of humans becoming more and more AI-dependent in multiple walks of life. This necessitates that a constitutional and international dimension be imparted to a right stipulating that key state-level decisions impacting the human condition, life, and freedom must be made by humans, not automated systems or other AI contraptions. But if artificial intelligence were to make such decisions, it should be properly equipped with value-based criteria. The culture of abdication of privacy protection may breed consent to the creation and practical use of technologies capable of penetrating an individual's consciousness without his or her consent. Evidence based on such thought interference must be barred from court proceedings. Everyone's right to intellectual identity and integrity, the right to have one's thoughts free from technological interference, is as essential for the survival of the democratic system as the right to privacy, and it may well prove equally endangered.
Book
In the 1980s, Robert Prechter became the most famous Elliott Wave disciple. In March 1986, USA Today called Prechter the "hottest guru on Wall Street," after a bullish forecast he made in September 1985 came true. The same article advised readers of his forecast that the Dow would rise to 3600–3700 by 1988; however, the high for 1988 turned out to be 2184. In October 1987, Prechter said, "The worst case [is] a drop to 2295," just days before the Dow collapsed to 1739. The Wall Street Journal published a front-page article in 1993 with the headline, "Robert Prechter sees his 3600 on the Dow – But 6 years late." The Dow hit 3600, just as he predicted, but six years after he said it would. Prechter should have heeded the advice given me by a celebrated investor about how difficult it is to predict stock prices: "If you give a number, don't give a date."
Chapter
This chapter focuses on the rise of new digital inequalities by looking at the costs and consequences of algorithmic decision-making for citizens' everyday lives and how they are affecting social inequalities. More specifically, the chapter looks at inequalities in (a) knowledge (understood as different levels of understanding of how algorithms influence everyday life, and different skills and creative techniques for escaping algorithms' "suggestions"); (b) dataset creation (how the data on which algorithms and AI are based are biased); and (c) treatment (the unequal treatment that AI and algorithms reserve for different individuals based on their socio-economic and socio-demographic characteristics). These three levels of (new) digital inequality are tied both to the main axes of social inequality and to the rise of digital technologies, and they affect, often silently, citizens' lives and the social hierarchy.
Conference Paper
Through a design-led inquiry focused on smart home security cameras, this research develops three key concepts for research and design pertaining to new and emerging digital consumer technologies. Digital leakage names the propensity for digital information to be shared, stolen, and misused in ways unbeknownst or even harmful to those to whom the data pertains or belongs. Hole-and-corner applications are those functions connected to users' data, devices, and interactions yet concealed from or downplayed to them, often because they are non-beneficial or harmful to them. Foot-in-the-door devices are product and services with functional offerings and affordances that work to normalize and integrate a technology, thus laying groundwork for future adoption of features that might have earlier been rejected as unacceptable or unnecessary. Developed and illustrated through a set of design studies and explorations, this paper shows how these concepts may be used analytically to investigate issues such as privacy and security, anticipatorily to speculate about the future of technology development and use, and generatively to synthesize design concepts and solutions.
Article
Full-text available
A growing body of evidence suggests that rapid, yet accurate, dispositional inferences can be made after minimal exposure to the physical appearance of others. In this study, we explore the accuracy of inferences regarding criminality made after brief exposure to static images of convicted criminals’ and non-criminals’ faces. We begin with a background of research and theory on the curiously recurrent, and historically controversial, topic of appearance-based inferences of criminality, and a brief justification of our re-opening of the debate about the accuracy of appearance-based criminality judgments. We then report two experiments in which participants, given a set of headshots of criminals and non-criminals, were able to reliably distinguish between these two groups, after controlling for the gender, race, age, attractiveness, and emotional displays, as well as any potential clues of picture origin. Empirical and theoretical implications, limitations, and further questions are discussed in light of these findings. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
Article
Full-text available
This article explored the finding that cross-race (CR) faces are more quickly classified by race than same race (SR) faces. T. Valentine and M. Endo (1992) modeled this effect by assuming that face categories can be explained on the basis of node activations in a multidimensional exemplar space. Therefore, variations in exemplar density between and within face categories explain both facilitated classification of CR faces and the relationship between typicality and classification RT within face categories. The present findings from classification and visual search tasks suggest that speeded classification of CR faces is instead caused by a quickly coded race feature that marks CR but not SR faces. Also, systematic manipulations of facial typicality cause no variation in classifiability aside from slowed classification of very distinctive faces. These results suggest that the exemplar model cannot explain important aspects of face classification. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Although trustworthiness judgments based on a stranger's face occur rapidly (Willis & Todorov, 2006), their accuracy is unknown. We examined the accuracy of trustworthiness judgments of the faces of 2 groups differing in trustworthiness (Nobel Peace Prize recipients/humanitarians vs. America's Most Wanted criminals). Participants viewed 34 faces each for 100 ms or 30 s and rated their trustworthiness. Subsequently, participants were informed about the nature of the 2 groups and estimated group membership for each face. Judgments formed with extremely brief exposure were similar in accuracy and confidence to those formed after a long exposure. However, initial judgments of untrustworthy (criminals') faces were less accurate (M=48.8%) than were those of trustworthy faces (M=62.7%). Judgment accuracy was above chance for trustworthy targets only at Time 1 and slightly above chance for both target types at Time 2. Participants relied on perceived kindness and aggressiveness to inform their rapidly formed intuitive decisions. Thus, intuition plays a minor facilitative role in reading faces. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Previous studies have shown that trustworthiness judgments from facial appearance approximate general valence evaluation of faces (Oosterhof & Todorov, 2008) and are made after as little as 100 ms exposure to novel faces (Willis & Todorov, 2006). In Experiment 1, using better masking procedures and shorter exposures, we replicate the latter findings. In Experiment 2, we systematically manipulate the exposure to faces and show that a sigmoid function almost perfectly describes how judgments change as a function of time exposure. The agreement of these judgments with time-unconstrained judgments is above chance after 33 ms, improves with additional exposure, and does not improve with exposures longer than 167 ms. In Experiment 3, using a priming paradigm, we show that effects of face trustworthiness are detectable even when the faces are presented below the threshold of objective awareness as measured by a forced choice recognition test of the primes. The findings suggest that people automatically make valence/trustworthiness judgments from facial appearance. Person impressions are often formed rapidly and spontaneously from minimal information (Todorov & Uleman, 2003; Uleman, Blader, & Todorov, 2005). One rich source of such information is facial appearance and there is abundant research about the effects of facial appearance on social outcomes (e.g., Blair, Judd, & Chapleau, 2004; Eberhardt, Davies, Purdie-Vaughns, & Johnson, 2006; Hamermesh & Biddle, 1994; Hassin & Trope, 2000; Langlois et al., 2000; Montepare & Zebrowitz, 1998; Zebrowitz, 1999). For example, inferences of competence, based solely on facial appearance, predict the outcomes of the U.S. congressional (Todorov, Mandisodza, Goren, & Hall, 2005) and gubernatorial elections (Ballew & Todorov, 2007; Hall, Goren, Chaiken, & Todorov, 2009), and inferences of dominance predict military rank attainment (Mazur, Mazur, & Keating, 1984; Mueller & Mazur, 1996).
Conference Paper
Full-text available
We introduce a class of geodesic distances and extend the K-means clustering algorithm to employ this distance metric. Empirically, we demonstrate that our geodesic K-means algorithm exhibits several desirable characteristics missing in the classical K-means. These include adjusting to varying densities of clusters, high levels of resistance to outliers, and handling clusters that are not linearly separable. Furthermore our comparative experiments show that geodesic K-means comes very close to competing with state-of-the-art algorithms such as spectral and hierarchical clustering.
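The distance metric described above can be illustrated with a small sketch. This is a K-medoids-style simplification, not the authors' algorithm: geodesic distances are approximated as shortest paths on a k-nearest-neighbour graph, and centroids are restricted to data points so all pairwise geodesics can be precomputed once.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def geodesic_kmedoids(X, k, n_neighbors=5, n_iter=20, seed=0):
    """Cluster with geodesic distances estimated on a kNN graph.

    A K-medoids-style stand-in for geodesic K-means: medoids are data
    points, so the all-pairs geodesic matrix is computed only once.
    """
    n = len(X)
    d = cdist(X, X)  # Euclidean distances
    # Keep each point's n_neighbors nearest edges, symmetrised.
    graph = np.full((n, n), np.inf)
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        graph[i, idx[i]] = d[i, idx[i]]
    graph = np.minimum(graph, graph.T)
    graph[np.isinf(graph)] = 0  # csgraph treats 0 as "no edge"
    geo = shortest_path(graph, directed=False)  # all-pairs geodesics

    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(geo[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # Medoid = member minimising total geodesic distance
                # to the rest of its cluster.
                within = geo[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids
```

On data lying along a curved manifold, the shortest-path distances follow the manifold rather than cutting across it, which is what gives the method its robustness to non-linearly-separable clusters.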
Conference Paper
Full-text available
In this work, we present a novel approach to face recognition which considers both shape and texture information to represent face images. The face area is first divided into small regions from which Local Binary Pattern (LBP) histograms are extracted and concatenated into a single, spatially enhanced feature histogram efficiently representing the face image. The recognition is performed using a nearest neighbour classifier in the computed feature space with Chi square as a dissimilarity measure. Extensive experiments clearly show the superiority of the proposed scheme over all considered methods (PCA, Bayesian Intra/extrapersonal Classifier and Elastic Bunch Graph Matching) on FERET tests which include testing the robustness of the method against different facial expressions, lighting and aging of the subjects. In addition to its efficiency, the simplicity of the proposed method allows for very fast feature extraction.
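The pipeline the abstract describes, per-region LBP histograms compared with a chi-square dissimilarity, can be sketched in a few lines. This is the basic 8-neighbour LBP on a 3x3 neighbourhood, not the full multi-scale, uniform-pattern operator used in practice.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]
    # Fixed neighbour order defines the 8-bit code.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def regional_histogram(img, grid=(4, 4)):
    """Concatenate normalised LBP histograms over a grid of sub-regions."""
    codes = lbp_image(img)
    gy, gx = grid
    hs, ws = codes.shape[0] // gy, codes.shape[1] // gx
    hists = []
    for i in range(gy):
        for j in range(gx):
            block = codes[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            h, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)

def chi_square(a, b, eps=1e-10):
    """Chi-square dissimilarity between two histogram vectors."""
    return np.sum((a - b) ** 2 / (a + b + eps))
```

Recognition then reduces to a nearest-neighbour search: compute `regional_histogram` for a probe face and return the gallery face with the smallest `chi_square` distance.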
Article
Full-text available
Using a composite-face paradigm, we show that social judgments from faces rely on holistic processing. Participants judged facial halves more positively when aligned with trustworthy than with untrustworthy halves, despite instructions to ignore the aligned parts (experiment 1). This effect was substantially reduced when the faces were inverted (experiments 2 and 3) and when the halves were misaligned (experiment 3). In all three experiments, judgments were affected to a larger extent by the to-be-attended than the to-be-ignored halves, suggesting that there is partial control of holistic processing. However, after rapid exposures to faces (33 to 100 ms), judgments of trustworthy and untrustworthy halves aligned with incongruent halves were indistinguishable (experiment 4a). Differences emerged with exposures longer than 100 ms. In contrast, when participants were not instructed to attend to specific facial parts, these differences did not emerge (experiment 4b). These findings suggest that the initial pass of information is holistic and that additional time allows participants to partially ignore the task-irrelevant context.
Article
Full-text available
This paper defines and studies for independent identically distributed observations a new parametric estimation procedure which is asymptotically efficient under a specified regular parametric family of densities and is minimax robust in a small Hellinger metric neighborhood of the given family. Associated with the estimator is a goodness-of-fit statistic which assesses the adequacy of the chosen parametric model. The fitting of a normal location-scale model by the new procedure is exhibited numerically on clear and on contaminated data.
Article
Full-text available
This study investigated visual cues to age by using facial composites which blend shape and colour information from multiple faces. Baseline measurements showed that perceived age of adult male faces is on average an accurate index of their chronological age over the age range 20-60 years. Composite images were made from multiple images of different faces by averaging face shape and then blending red, green and blue intensity (RGB colour) across comparable pixels. The perceived age of these composite or blended images depended on the age bracket of the component faces. Blended faces were, however, rated younger than their component faces, a trend that became more marked with increased component age. The techniques used provide an empirical definition of facial changes with age that are biologically consistent across a sample population. The perceived age of a blend of old faces was increased by exaggerating the RGB colour differences of each pixel relative to a blend of young faces. This effect on perceived age was not attributable to enhanced contrast or colour saturation. Age-related visual cues defined from the differences between blends of young and old faces were applied to individual faces. These transformations increased perceived age.
Article
Full-text available
The finding that photographic and digital composites (blends) of faces are considered to be attractive has led to the claim that attractiveness is averageness. This would encourage stabilizing selection, favouring phenotypes with an average facial structure. The 'averageness hypothesis' would account for the low distinctiveness of attractive faces but is difficult to reconcile with the finding that some facial measurements correlate with attractiveness. An average face shape is attractive but may not be optimally attractive. Human preferences may exert directional selection pressures, as with the phenomena of optimal outbreeding and sexual selection for extreme characteristics. Using composite faces, we show here that, contrary to the averageness hypothesis, the mean shape of a set of attractive faces is preferred to the mean shape of the sample from which the faces were selected. In addition, attractive composites can be made more attractive by exaggerating the shape differences from the sample mean. Japanese and caucasian observers showed the same direction of preferences for the same facial composites, suggesting that aesthetic judgements of face shape are similar across different cultural backgrounds. Our finding that highly attractive facial configurations are not average shows that preferences could exert a directional selection pressure on the evolution of human face shape.
Conference Paper
Full-text available
We demonstrate a fast, robust method of interpreting face images using an Active Appearance Model (AAM). An AAM contains a statistical model of shape and grey level appearance which can generalise to almost any face. Matching to an image involves finding model parameters which minimise the difference between the image and a synthesised face. We observe that displacing each model parameter from the correct value induces a particular pattern in the residuals. In a training phase, the AAM learns a linear model of the correlation between parameter displacements and the induced residuals. During search it measures the residuals and uses this model to correct the current parameters, leading to a better fit. A good overall match is obtained in a few iterations, even from poor starting estimates. We describe the technique in detail and show it matching to new face images.
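The training phase described above, learning a linear map from residuals to parameter corrections, can be sketched generically. Here `synthesize` is a hypothetical stand-in for the AAM's model-to-image renderer (in a real AAM it warps the texture model by the shape model); the regression step itself is just a least-squares fit over sampled perturbations.

```python
import numpy as np

def learn_update_matrix(synthesize, p0, image, n_samples=200, scale=0.1, seed=0):
    """Learn R such that delta_p ~= R @ residual.

    `synthesize(p)` is a hypothetical renderer mapping model parameters
    to an image vector. We perturb the parameters by known displacements,
    record the induced residuals, and solve for the linear correction.
    """
    rng = np.random.default_rng(seed)
    dps, res = [], []
    for _ in range(n_samples):
        dp = rng.normal(scale=scale, size=len(p0))  # known displacement
        r = synthesize(p0 + dp) - image             # induced residual
        dps.append(dp)
        res.append(r)
    # Solve res @ R.T ~= dps in the least-squares sense.
    R_T, *_ = np.linalg.lstsq(np.array(res), np.array(dps), rcond=None)
    return R_T.T
```

During search, each iteration would then update `p -= R @ (synthesize(p) - image)`, which is why a good match is reached in few iterations even from poor starting estimates.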
Conference Paper
Full-text available
In this paper we describe a face recognition method based on PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The method consists of two steps: first we project the face image from the original vector space to a face subspace via PCA, second we use LDA to obtain a best linear classifier. The basic idea of combining PCA and LDA is to improve the generalization capability of LDA when only few samples per class are available. Using PCA, we are able to construct a face subspace in which we apply LDA to perform classification. Using FERET dataset we demonstrate a significant improvement when principal components rather than original images are fed to the LDA classifier. The hybrid classifier using PCA and LDA provides a useful framework for other image recognition tasks as well.
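The two-step scheme above maps directly onto a scikit-learn pipeline. As a hedged illustration, the digits dataset stands in for face images here; the idea is identical: PCA first reduces dimensionality so that LDA's scatter matrices stay well-conditioned even with few samples per class.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Digits (8x8 images, 10 classes) stand in for a face dataset.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: PCA projects into a low-dimensional subspace.
# Step 2: LDA finds the best linear classifier in that subspace.
clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Applying LDA directly to raw pixels would require inverting a within-class scatter matrix of pixel dimension, which is singular when samples per class are few; the PCA step is what makes the second step feasible.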
Article
Human adults attribute character traits to faces readily and with high consensus. In two experiments investigating the development of face-to-trait inference, adults and children ages 3 through 10 attributed trustworthiness, dominance, and competence to pairs of faces. In Experiment 1, the attributions of 3- to 4-year-olds converged with those of adults, and 5- to 6-year-olds' attributions were at adult levels of consistency. Children ages 3 and above consistently attributed the basic mean/nice evaluation not only to faces varying in trustworthiness (Experiment 1) but also to faces varying in dominance and competence (Experiment 2). This research suggests that the predisposition to judge others using scant facial information appears in adultlike forms early in childhood and does not require prolonged social experience.
Article
We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
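The projection-and-compare recognition scheme described above can be sketched with plain NumPy. This is a minimal sketch of the eigenface idea, not the full near-real-time system: the SVD of the centred data yields the principal components without forming the pixel-by-pixel covariance matrix explicitly.

```python
import numpy as np

def fit_eigenfaces(X, n_components):
    """X: (n_images, n_pixels). Returns the mean face and top eigenfaces."""
    mean = X.mean(axis=0)
    # Right singular vectors of the centred data are the eigenvectors
    # of the covariance matrix (the "eigenfaces").
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, eigenfaces):
    """Characterise a face by its weights on the eigenfaces."""
    return eigenfaces @ (face - mean)

def recognize(face, mean, eigenfaces, gallery_weights):
    """Return the index of the gallery face with the nearest weight vector."""
    w = project(face, mean, eigenfaces)
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(dists))
```

As the abstract notes, recognition needs only the weight vectors, so the gallery can be stored as a small matrix of projections rather than as raw images.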
Article
Twenty-four perceivers saw portraits of unacquainted persons for either 150ms, 100ms, or 50ms, and rated their personality on adjective scales. Moreover, stimulus persons described themselves on these scales and the NEO Five-Factor Inventory. Consensus among perceivers and self-other agreement were not systematically related to exposure time, but self-other agreement differed strongly between traits, being highest for extraversion. Even ratings of extraversion by single perceivers were related to the stimulus persons’ self-reports. Particularly strong were correlations between perceived extraversion and self-reports on items measuring the extraversion facets excitement seeking and positive emotions. Self-other agreement for extraversion was mostly mediated by cheerfulness of facial expressions that was related to self-reports of extraversion but not of the other personality traits.
Article
Discusses research on facial expressions of emotion and presents suggestions for recognizing and interpreting various expressions. Using many photographs of faces that reflect surprise, fear, disgust, anger, happiness, and sadness, methods of correctly identifying these basic emotions and of understanding when people try to mask or simulate them are outlined. Practice exercises are also included. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The practice of psychological testing has advanced with great rapidity within recent years. The early crude methods are being replaced by scientific procedures, and the early naive views in regard to the test-aptitude relation and the possibilities of tests are giving way before more adequate theories and more sober expectations. In a word, aptitude testing, like medicine and engineering, is ceasing to be a job for amateurs and is becoming the work of technically trained professionals. It has been the purpose of the author to include within a convenient space two of the essentials of the training for aptitude work: (1) an account of the fundamental principles of aptitude testing and (2) an intelligible description of the most effective and the most economical methods of constructing batteries of aptitude tests. Specifically the book is designed as a text for university and college classes in aptitude testing and as a general handbook for those engaged in aptitude work of all kinds, whether in the form of vocational guidance, general personnel work, or employment selection. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The face is our primary source of visual information for identifying people and reading their emotional and mental states. With the exception of prosopagnosics (who are unable to recognize faces) and those suffering from such disorders of social cognition as autism, people are extremely adept at these two tasks. However, our cognitive powers in this regard come at the price of reading too much into the human face. The face is often treated as a window into a person's true nature. Given the agreement in social perception of faces, this paper discusses that it should be possible to model this perception.
Article
A face recognition algorithm based on modular PCA approach is presented in this paper. The proposed algorithm when compared with conventional PCA algorithm has an improved recognition rate for face images with large variations in lighting direction and facial expression. In the proposed technique, the face images are divided into smaller sub-images and the PCA approach is applied to each of these sub-images. Since some of the local facial features of an individual do not vary even when the pose, lighting direction and facial expression vary, we expect the proposed method to be able to cope with these variations. The accuracy of the conventional PCA method and modular PCA method are evaluated under the conditions of varying expression, illumination and pose using standard face databases.
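The modular step described above, PCA fitted independently on each sub-image, can be sketched as follows. This is an illustrative feature extractor under the stated assumptions (equal-sized grid blocks, SVD-based PCA); the original evaluation protocol is not reproduced.

```python
import numpy as np

def modular_pca_features(images, grid=(2, 2), n_components=8):
    """images: (n, H, W). Fit PCA per sub-block and concatenate projections.

    Because lighting or expression changes often affect only some regions,
    the remaining blocks still project close to their training subspace.
    """
    n, H, W = images.shape
    gy, gx = grid
    hs, ws = H // gy, W // gx
    feats = [[] for _ in range(n)]
    for i in range(gy):
        for j in range(gx):
            block = images[:, i * hs:(i + 1) * hs,
                           j * ws:(j + 1) * ws].reshape(n, -1)
            mean = block.mean(axis=0)
            # Per-block PCA via SVD of the centred block pixels.
            _, _, Vt = np.linalg.svd(block - mean, full_matrices=False)
            basis = Vt[:n_components]
            proj = (block - mean) @ basis.T
            for t in range(n):
                feats[t].append(proj[t])
    return np.array([np.concatenate(f) for f in feats])
```

The resulting feature vectors can be fed to any classifier; the design choice is simply that a local disturbance corrupts only a fraction of the concatenated feature vector rather than the whole representation.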
Conference Paper
A sparse representation of Support Vector Machines (SVMs) with respect to input features is desirable for many applications. In this paper, by introducing a 0-1 control variable for each input feature, the l0-norm Sparse SVM (SSVM) is converted into a mixed integer programming (MIP) problem. Rather than directly solving this MIP, we propose an efficient cutting plane algorithm combined with multiple kernel learning to solve its convex relaxation. A global convergence proof for our method is also presented. Comprehensive experimental results on one synthetic and 10 real world datasets show that our proposed method can obtain better or competitive performance compared with existing SVM-based feature selection methods in terms of sparsity and generalization performance. Moreover, our proposed method can effectively handle large-scale and extremely high dimensional problems.
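The paper's l0 cutting-plane method is considerably more involved than what fits here, but the effect it targets, an SVM whose weight vector is sparse in the input features, is commonly approximated with an l1-penalized linear SVM. The following sketch uses that convex surrogate in scikit-learn, not the paper's algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# 100 features, only 5 informative: an l1 penalty should drive most
# of the remaining 95 noise-feature weights to exactly zero.
X, y = make_classification(n_samples=200, n_features=100, n_informative=5,
                           n_redundant=0, random_state=0)

# penalty="l1" requires the primal formulation (dual=False) in liblinear.
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)
clf.fit(X, y)
n_selected = int(np.count_nonzero(clf.coef_))
```

The surviving nonzero coefficients identify the selected features, so the same fitted model doubles as a feature-selection step for a downstream classifier.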
Article
Though the psychological literature is replete with information about the perception of faces presented at a full-frontal view, we know very little about how faces are perceived, and impressions formed, when viewed from other angles. We tested impressions of faces at full-frontal, three-quarter, and profile views. Judgments of personality (aggressiveness, competence, dominance, likeability, and trustworthiness) and physiognomy (attractiveness and facial maturity) were significantly correlated across full-frontal, three-quarter, and profile views of male faces. When under time pressure, with only a 50 ms exposure to each face, the correlations for profile with full-frontal and three-quarter view judgments of personality (but not physiognomy) dropped considerably. However, judgments of the full-frontal and three-quarter faces were significantly correlated across the self-paced and 50 ms viewing durations. These findings therefore show that perceptions of full faces lead to relatively similar inferences across both viewing angle and time.
Article
Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 10^6 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.
Article
People often draw trait inferences from the facial appearance of other people. We investigated the minimal conditions under which people make such inferences. In five experiments, each focusing on a specific trait judgment, we manipulated the exposure time of unfamiliar faces. Judgments made after a 100-ms exposure correlated highly with judgments made in the absence of time constraints, suggesting that this exposure time was sufficient for participants to form an impression. In fact, for all judgments-attractiveness, likeability, trustworthiness, competence, and aggressiveness-increased exposure time did not significantly increase the correlations. When exposure time increased from 100 to 500 ms, participants' judgments became more negative, response times for judgments decreased, and confidence in judgments increased. When exposure time increased from 500 to 1,000 ms, trait judgments and response times did not change significantly (with one exception), but confidence increased for some of the judgments; this result suggests that additional time may simply boost confidence in judgments. However, increased exposure time led to more differentiated person impressions.
Article
Here we show that rapid judgments of competence based solely on the facial appearance of candidates predicted the outcomes of gubernatorial elections, the most important elections in the United States next to the presidential elections. In all experiments, participants were presented with the faces of the winner and the runner-up and asked to decide who is more competent. To ensure that competence judgments were based solely on facial appearance and not on prior person knowledge, judgments for races in which the participant recognized any of the faces were excluded from all analyses. Predictions were as accurate after a 100-ms exposure to the faces of the winner and the runner-up as after a 250-ms or unlimited exposure (Experiment 1). Asking participants to deliberate and make a good judgment dramatically increased the response times and reduced the predictive accuracy of judgments relative to both judgments made after 250 ms of exposure to the faces and judgments made within a response deadline of 2 s (Experiment 2). Finally, competence judgments collected before the elections in 2006 predicted 68.6% of the gubernatorial races and 72.4% of the Senate races (Experiment 3). These effects were independent of the incumbency status of the candidates. The findings suggest that rapid, unreflective judgments of competence from faces can affect voting decisions. Keywords: face perception, social judgments, voting decisions.
An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ('face space') that best encodes the variation among known face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.
Article
This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.
M. Bar, M. Neta, and H. Linz. Very first impressions. Emotion, 6(2):269, 2006.
V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pages 187-194. ACM Press/Addison-Wesley Publishing Co., 1999.
T. Chen (Song Dynasty) and Z. Cheng. General Physiognomy. Shanxi Normal University Press, 2010. (in Chinese) ISBN: 978-7-5613-5065-2.
C. Lu and X. Tang. Surpassing human-level face verification performance on LFW with GaussianFace. arXiv preprint arXiv:1404.3840, 2014.