Preprint

The FairCeptron: A Framework for Measuring Human Perceptions of Algorithmic Fairness


Abstract

Measures of algorithmic fairness often do not account for human perceptions of fairness, which can vary substantially across sociodemographic groups and stakeholders. The FairCeptron framework is an approach for studying perceptions of fairness in algorithmic decision making, such as ranking or classification. It supports (i) studying human perceptions of fairness and (ii) comparing these perceptions with measures of algorithmic fairness. The framework comprises fairness scenario generation, fairness perception elicitation, and fairness perception analysis. We demonstrate the FairCeptron framework by applying it to a hypothetical university admission context, where we collect human perceptions of fairness in the presence of minorities. An implementation of the FairCeptron framework is openly available and can easily be adapted to study perceptions of algorithmic fairness in other application contexts. We hope our work paves the way toward a larger role for studies of human fairness perceptions in the design of algorithmic decision-making systems.
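
To make the three stages concrete, here is a minimal sketch in Python of how a scenario-generation, elicitation, and analysis pipeline of this kind could be wired together. It is an illustration under our own assumptions, not the authors' openly available implementation, and all names in it are hypothetical.

import random
from dataclasses import dataclass

@dataclass
class Scenario:
    # One hypothetical admission scenario: applicants with a score and a
    # group label, plus the indices the algorithm admitted.
    applicants: list
    selected: list

def generate_scenario(n=10, k=4, minority_share=0.3, seed=None):
    # Stage 1 (scenario generation): sample applicants, some of whom belong
    # to a minority group, and admit the top k by score.
    rng = random.Random(seed)
    applicants = [(rng.random(),
                   "minority" if rng.random() < minority_share else "majority")
                  for _ in range(n)]
    ranked = sorted(range(n), key=lambda i: applicants[i][0], reverse=True)
    return Scenario(applicants, ranked[:k])

def elicit_rating(scenario):
    # Stage 2 (perception elicitation): in the framework this rating comes
    # from a human participant, e.g. via a slider in a web survey.
    raise NotImplementedError("collected from human participants")

def minority_selection_rate(scenario):
    # Stage 3 (perception analysis) compares elicited ratings against
    # algorithmic fairness measures computed on the same scenarios, such as
    # the selection rate of the minority group.
    chosen = [scenario.applicants[i] for i in scenario.selected]
    return sum(1 for _, g in chosen if g == "minority") / len(chosen)

print(minority_selection_rate(generate_scenario(seed=42)))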


References
Article
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.
Conference Paper
In this work, we define and solve the Fair Top-k Ranking problem, in which we want to determine a subset of k candidates from a large pool of n ≫ k candidates, maximizing utility (i.e., select the "best" candidates) subject to group fairness criteria. Our ranked group fairness definition extends group fairness using the standard notion of protected groups and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above or indistinguishable from a given minimum. Utility is operationalized in two ways: (i) every candidate included in the top-k should be more qualified than every candidate not included; and (ii) for every pair of candidates in the top-k, the more qualified candidate should be ranked above. An efficient algorithm is presented for producing the Fair Top-k Ranking, and tested experimentally on existing datasets as well as new datasets released with this paper, showing that our approach yields small distortions with respect to rankings that maximize utility without considering fairness criteria. To the best of our knowledge, this is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list.
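As a rough illustration of the prefix-wise statistical test described above, the sketch below is our own simplification, not the paper's code: it flags a ranking whose prefixes contain significantly fewer protected candidates than a Binomial(i, p) draw would, while the paper additionally corrects the significance level for the multiple tests across prefixes.

from scipy.stats import binom

def prefix_fair(ranking, p=0.3, alpha=0.1):
    # ranking: list of booleans, True = protected candidate.
    # For every prefix of length i, the count of protected candidates must
    # not be significantly below what Binomial(i, p) would produce.
    protected_seen = 0
    for i, is_protected in enumerate(ranking, start=1):
        protected_seen += is_protected
        # Probability of seeing this few protected candidates by chance.
        if binom.cdf(protected_seen, i, p) < alpha:
            return False  # under-representation in this prefix
    return True

print(prefix_fair([False, True, False, True, False]))  # fair at p=0.3
print(prefix_fair([False] * 20, p=0.5))                # unfair: no protected at all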
Conference Paper
Sliders and Visual Analogue Scales (VASs) are input mechanisms that allow users to specify a value within a predefined range. At a minimum, sliders and VASs typically consist of a line with the extreme values labeled. Additional decorations such as labels and tick marks can be added to give information about the gradations along the scale and allow for more precise and repeatable selections. There is a rich history of research about the effect of labeling in discrete scales (i.e., Likert scales); however, the effect of decorations on continuous scales has not been rigorously explored. In this paper we perform a 2,000-user, 250,000-trial online experiment to study the effects of slider appearance, and find that decorations along the slider considerably bias the distribution of responses received. Using two separate experimental tasks, the trade-offs between bias, accuracy, and speed-of-use are explored, and design recommendations for optimal slider implementations are proposed.
Article
Justice research examining gender differences has yielded contrasting findings. This study enlists advanced techniques in cognitive neuroscience (fMRI) to examine gender differences in brain activation patterns in response to procedural and distributive justice manipulations. We integrate social role, information processing, justice, and neuroscience literature to posit and test for gender differences in 2 neural subsystems known to be involved in the appraisal of self-relevant events. Results indicate that the relationship between justice information processing and neural activity in areas representing these subsystems is significantly influenced by gender, with greater activation for females than males during consideration of both procedural and distributive justice information. In addition, we find evidence that gender and distributive injustice interact to influence bargaining behavior, with females rejecting ultimatum game offers more frequently than males. Results also demonstrate activation in the ventromedial prefrontal cortex (vmPFC) and ventral striatum brain regions during procedural justice evaluation is associated with offer rejection in females, but not in males. Managerial implications based on the study's support for gender differences in justice perceptions are discussed.
Article
A group of industry, academic, and government experts convene in Philadelphia to explore the roots of algorithmic bias.
Conference Paper
Fairness for machine learning has recently received considerable attention. Various mathematical formulations of fairness have been proposed, and it has been shown that it is impossible to satisfy all of them simultaneously. The literature so far has dealt with these impossibility results by quantifying the tradeoffs between different formulations of fairness. Our work takes a different perspective on this issue. Rather than requiring all notions of fairness to (partially) hold at the same time, we ask which one of them is the most appropriate given the societal domain in which the decision-making model is to be deployed. We take a descriptive approach and set out to identify the notion of fairness that best captures lay people's perception of fairness. We run adaptive experiments designed to pinpoint the notion of fairness most compatible with each participant's choices through a small number of tests. Perhaps surprisingly, we find that the simplest mathematical definition of fairness, namely demographic parity, most closely matches people's idea of fairness in two distinct application scenarios. This conclusion remains intact even when we explicitly tell the participants about the alternative, more complicated definitions of fairness and reduce the cognitive burden of evaluating those notions for them. Our findings have important implications for the Fair ML literature and the discourse on formalizing algorithmic fairness.
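For reference, demographic parity, the definition that participants' choices matched most closely, is also the simplest to compute. The sketch below (our own illustration, not the paper's code) measures the largest gap in positive-decision rates between groups; a gap of zero means exact demographic parity.

def demographic_parity_gap(decisions, groups):
    # decisions: list of 0/1 outcomes; groups: parallel list of group labels.
    # Returns the largest difference in positive-decision rates between groups.
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Example: group "a" is accepted 2/3 of the time, group "b" 1/3 of the time.
print(demographic_parity_gap([1, 1, 0, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"]))  # ~0.333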
Conference Paper
What is the best way to define algorithmic fairness? While many definitions of fairness have been proposed in the computer science literature, there is no clear agreement over a particular definition. In this work, we investigate ordinary people's perceptions of three of these fairness definitions. Across two online experiments, we test which definitions people perceive to be the fairest in the context of loan decisions, and whether fairness perceptions change with the addition of sensitive information (i.e., the race of the loan applicants). Overall, one definition (calibrated fairness) tends to be preferred over the others, and the results also provide support for the principle of affirmative action.
Conference Paper
Computers are increasingly used to make decisions that have significant impact on people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions that require investigation for these algorithms to receive broad adoption. We present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures and existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservation, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits) and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
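The sensitivity finding can be illustrated with a toy version of the split-variation protocol described above. The sketch below uses synthetic data and scikit-learn (our own construction, not the paper's benchmark code) to recompute a fairness measure under different train-test splits and report its fluctuation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)          # synthetic protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

gaps = []
for seed in range(10):                         # vary the train-test split
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=seed)
    pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)
    # Demographic parity gap of the trained classifier on this split.
    gaps.append(abs(pred[g_te == 0].mean() - pred[g_te == 1].mean()))

print(f"demographic parity gap: {np.mean(gaps):.3f} +/- {np.std(gaps):.3f}")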
Conference Paper
Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others. In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy. The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.
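A simplified form of the proposed prefix-based measures, in the spirit of the paper's normalized discounted difference (rND), is sketched below; the paper's normalization constant and specific cut-point choices are omitted here, so treat this only as an illustration of the idea.

import math

def rnd(ranking, protected, cutoffs=None):
    # ranking: ordered ids; protected: set of protected ids.
    # Sums, over prefixes, the log-discounted gap between the protected
    # share in the prefix and the protected share overall.
    n = len(ranking)
    overall = len(protected) / n
    cutoffs = cutoffs or range(2, n + 1)
    total = 0.0
    for i in cutoffs:
        share = sum(1 for x in ranking[:i] if x in protected) / i
        total += abs(share - overall) / math.log2(i)
    return total

ranking = ["a", "b", "c", "d", "e", "f"]
print(rnd(ranking, protected={"e", "f"}))  # large: protected pushed to the bottom
print(rnd(ranking, protected={"a", "d"}))  # small: protected spread through the list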
Article
Recidivism prediction instruments provide decision makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time. While such instruments are gaining increasing popularity across the country, their use is attracting tremendous controversy. Much of the controversy concerns potential discriminatory bias in the risk assessments that are produced. This paper discusses a fairness criterion originating in the field of educational and psychological testing that has recently been applied to assess the fairness of recidivism prediction instruments. We demonstrate how adherence to the criterion may lead to considerable disparate impact when recidivism prevalence differs across groups.
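The mechanism behind this result can be stated as a single identity (restated here from the paper, up to notation) relating the error rates of a binary recidivism predictor to the prevalence p of recidivism in a group:

\[
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
\]

If the instrument satisfies the test-fairness criterion, so that PPV is equal across groups, but the groups differ in prevalence p, then their false positive and false negative rates cannot both be equal: the higher-prevalence group necessarily incurs a higher FPR or a lower FNR, which is the disparate impact the paper demonstrates.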
Article
A sense of fairness plays a critical role in supporting human cooperation. Adult norms of fair resource sharing vary widely across societies, suggesting that culture shapes the acquisition of fairness behaviour during childhood. Here we examine how fairness behaviour develops in children from seven diverse societies, testing children from 4 to 15 years of age (n = 866 pairs) in a standardized resource decision task. We measured two key aspects of fairness decisions: disadvantageous inequity aversion (peer receives more than self) and advantageous inequity aversion (self receives more than a peer). We show that disadvantageous inequity aversion emerged across all populations by middle childhood. By contrast, advantageous inequity aversion was more variable, emerging in three populations and only later in development. We discuss these findings in relation to questions about the universality and cultural specificity of human fairness.
Article
Although there is a growing applicant reactions literature, relatively little work has addressed the role of personality in applicant perceptions. Using a sample of actual law enforcement applicants (N=120), we studied the relationship between Big Five personality measured before a written test and applicants' post-test fairness perceptions, perceptions of themselves, and perceptions of the hiring organization. Personality was related to applicant perceptions after controlling for gender and test score. Personality also accounted for significant variance in self-perceptions and perceptions of the hiring organization beyond that accounted for by fairness perceptions. Neuroticism and agreeableness were the most consistent predictors of applicant perceptions. Our discussion focuses on the consideration of individual differences in applicant reactions research.
Article
To provide a measure of the Big Five for contexts in which participant time is severely limited, we abbreviated the Big Five Inventory (BFI-44) to a 10-item version, the BFI-10. To permit its use in cross-cultural research, the BFI-10 was developed simultaneously in several samples in both English and German. Results focus on the psychometric characteristics of the 2-item scales on the BFI-10, including their part-whole correlations with the BFI-44 scales, retest reliability, structural validity, convergent validity with the NEO-PI-R and its facets, and external validity using peer ratings. Overall, results indicate that the BFI-10 scales retain significant levels of reliability and validity. Thus, reducing the items of the BFI-44 to less than a fourth yielded effect sizes that were lower than those for the full BFI-44 but still sufficient for research settings with truly limited time constraints.
Ambrose, M. L.; Wo, D. X.; and Griffith, M. D. 2015. Overall Justice: Past, Present, and Future. In Cropanzano, R.; and Ambrose, M. L., eds., The Oxford Handbook of Justice in the Workplace, 109-135. New York: Oxford University Press.
Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. S. 2012. Fairness Through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214-226. New York: ACM. doi:10.1145/2090236.
Engstrom, H. R.; Alic, A.; and Laurin, K. 2020. Justification and Rationalization Causes. In Lind, E. A., ed., Social Psychology and Justice, 44-66. New York: Routledge.
Kearns, M.; and Roth, A. 2019. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. New York: Oxford University Press.
Wiesenfeld, B. M.; Swann Jr, W. B.; Brockner, J.; and Bartel, C. A. 2007. Is more Fairness Always Preferred? Self-Esteem Moderates Reactions to Procedural Justice. Academy of Management Journal 50(5): 1235-1253. doi:10.5465/amj.2007.20159922.