Recruitment AI has a Disability Problem: questions employers should be asking to ensure fairness in recruitment

Abstract

Artificial Intelligence (AI) technologies have the potential to dramatically impact the lives and life chances of people with disabilities seeking employment and throughout their career progression. While these systems are marketed as highly capable and objective tools for decision making, a growing body of research demonstrates a record of inaccurate results as well as inherent disadvantages for women and people of colour (Broussard, 2018; Noble, 2018; O'Neil, 2017). Assessments of fairness in Recruitment AI for people with disabilities have thus far received little attention or have been overlooked (Guo et al., 2019; Petrick, 2015; Trewin, 2018; Trewin et al., 2019; Whittaker et al., 2019). This white paper details the impacts to and concerns of disabled employment seekers using AI systems for recruitment, and provides recommendations on the steps employers can take to ensure innovation in recruitment is also fair to all users. In doing so, we further the point that making systems fairer for disabled employment seekers ensures systems are fairer for all.
2020 DISABILITY AND AI WHITEPAPER
AUTHORS AND CONTRIBUTORS
Chara Bakalis, School of Law, Oxford Brookes University
Nigel Crook, Institute for Ethical Artificial Intelligence, Oxford Brookes University
Paul Jackson, Oxford Brookes Business School, Oxford Brookes University
Kevin Maynard, Institute for Ethical Artificial Intelligence, Oxford Brookes University
Arijit Mitra, Institute for Ethical Artificial Intelligence, Oxford Brookes University
Selin Nugent, Institute for Ethical Artificial Intelligence, Oxford Brookes University
Jintao Long, Institute for Ethical Artificial Intelligence, Oxford Brookes University
Susan Scott-Parker, Business Disability International
James Partridge, Face Equality International
Rebecca Raper, Institute for Ethical Artificial Intelligence, Oxford Brookes University
Alex Shepherd, Institute for Ethical Artificial Intelligence, Oxford Brookes University
SUMMER 2020
This paper is intended to be a guide and living document that will evolve and
improve with input from readers and relevant stakeholders. Your feedback is
welcome and encouraged. Please share your feedback with us at
ethicalAI@brookes.ac.uk.
Cite as:
Nugent, S., Jackson, P., Scott-Parker, S., Partridge, J., Raper, R., Shepherd, A.,
Bakalis, C., Mitra, A., Long, J., Maynard, K., and Crook, N. (2020). Recruitment AI
has a Disability Problem: questions employers should be asking to ensure fairness in
recruitment. Institute for Ethical Artificial Intelligence.
This work is licensed under the terms of the Creative Commons Attribution 4.0 License.
| Table of Contents

| About the Institute for Ethical Artificial Intelligence
| Introduction
| Disability and Employment Discrimination
| Recruitment AI
    | Exclusion by Design and Discriminatory Use
        Biased Systems
        Improper Implementation and Use
    | Tech on the Market: the dangers of discrimination
        ATS and CRM Systems
        CV/Resume Screeners
        Conversational Agents
        Pre-Employment Assessments
        AI Interviewing
| Intervention Recommendations
| References
| Recommended Resources
| Glossary
| About the Institute for Ethical Artificial Intelligence
Oxford Brookes University hosts a vibrant and ambitious research environment in the areas of artificial intelligence, computing, and data science. Founding the Institute for Ethical Artificial Intelligence was therefore a natural extension of our vision as a research community to advance knowledge and promote a better understanding of technology and its relationship to business and society, in our local community and the wider global community. Our mission at the Institute for Ethical Artificial Intelligence is to promote and support the development and deployment of ethical and trustworthy intelligent software solutions for business, organisations, and society.
Our primary focus at the Institute for Ethical Artificial Intelligence is to help
organisations working in the professional services to understand and plan for the
risks and opportunities that AI and data analysis technologies can bring to their
organisation, their stakeholders and society at large. Working with both the users
and the providers of AI technology, as well as developing bespoke AI solutions, we
research and advise on the ethical impact of AI technology on organisations and
individuals.
In order to achieve this, we bring together a diverse group of world-leading experts
who together blend knowledge and skills from technology, business, social science
and the life sciences. We deliver expertise and independent guidance in areas that
include AI and machine learning, disability, psychology, business development,
equality and diversity, coaching and mentoring, digital health, and wellbeing.
For more information, please visit our webpage ethical-ai.ac.uk
| Introduction
The purpose of this White Paper is to:
- Detail the impacts to and concerns of disabled employment seekers using AI systems for recruitment, and
- Provide employers with the knowledge and evaluation tools to ensure innovation in recruitment is also fair to all users.
In doing so, we further the point that making systems fairer for disabled
employment seekers ensures systems are fairer for all.
Artificial Intelligence (AI) and similar advanced data analytics systems are
increasingly sought-after tools for recruitment used to automate time-consuming,
repetitive operational tasks, and expand strategic potential. However, as engineering
of these systems becomes more complex, it is more difficult for organisations to
confidently assess whether the technology is functioning in line with their
expectations and if employment seekers will be treated fairly.
AI technologies have the potential to dramatically impact the lives and life
chances of people with disabilities seeking employment and throughout their career
progression. While these systems are marketed as highly capable and objective
tools for decision making, a growing body of research demonstrates a record of
inaccurate results as well as inherent disadvantages for women and people of colour
(Broussard, 2018; Noble, 2018; O'Neil, 2017). Assessment of disability fairness in Recruitment AI has thus far received little attention or been overlooked (see Guo et al., 2019; Petrick, 2015; Trewin, 2018; Trewin et al., 2019; Whittaker et al., 2019).
Presently, a landscape of limited regulation, paired with increasing societal
pressure for AI and data analytics systems to be designed with fairness,
transparency, and validity, means that organisations face financial, legal,
reputational, operational, and ethical risks when implementing them. While there is
already much work being done to address the high-level concerns related to artificial
intelligence, bias, and fairness, there will inevitably be more challenges ahead that
no one company or industry can solve alone. In order to minimise these risks,
businesses, human and disability rights campaigners, and academic experts need to
collaborate to develop new ways to analyse, validate, and improve these systems
and to hold technology developers and suppliers accountable.
Our aim in this paper is to provide a starter toolkit to evaluate organisational
and ethical values in relation to the use of recruitment technology, and with regard to
vitally important procurement processes. We review the broad technological developments that support recruitment and demonstrate their potential to impact disabled employment seekers in various ways. We then present recommendations for the questions employers should be asking before taking on new technologies and when evaluating currently used systems.
The Institute for Ethical Artificial Intelligence and its partners invite public, third
sector and private sector stakeholders to respond to this guidance and to continue
discussion toward ensuring fairer recruitment practices for persons with disabilities,
and other disadvantaged employment seekers more generally.
| Disability and Employment Discrimination
People with disabilities have historically been, and continue to be, disadvantaged in seeking and securing employment. Disabled people experience widespread economic and societal exclusion and are more than twice as likely to be unemployed as others (Office for National Statistics, 2019). The sheer scale of the
social and economic impacts of the COVID pandemic on employment and
employability will undoubtedly further disenfranchise people with disabilities. The
current climate of instability makes ensuring fair and equal treatment all the more important, given that increasing employment among people with disabilities helps raise people out of poverty, improves their life chances, and is a net cultural and economic benefit.
As defined by the United Nations Convention on the Rights of Persons with
Disabilities (CRPD), “persons with disabilities include those who have long-term
physical, mental, intellectual or sensory impairments which in interaction with various
barriers may hinder their full and effective participation in society on an equal basis
with others.”
This definition doesn't necessarily capture the complexity and heterogeneity of people with disabilities, which is a key factor in the complications with AI systems. A disability may be a life-long condition, occur at different life stages, or result from a major event or change. Disability can have wide-ranging life impacts or be context dependent. A disability may be visible, but most are invisible. Disabled people include those with hearing, sight, mobility, and dexterity impairments, people with cognitive and intellectual impairments, those with mental health conditions, those with facial disfigurements, those of small stature, and numerous others. Further, individuals may have a combination of multiple factors.
Disability also intersects with other aspects of identity, such as gender, ethnicity, sexuality, and socioeconomic background. Disability is not completely independent of other features of a person's identity and life experience (Collins and Bilge, 2020; Parker, 2015; Samuels, 2016). Moreover, the social stigmas attached to disability are intersectional, shared, and amplified with other marginalised identities (Frederick and Shifrer, 2019). In light of the ongoing Black Lives Matter protests against racial violence and injustice, our focus on disability is intended to contribute to a wider discussion of systemic and persistent oppression of marginalised peoples. Recognising and celebrating human diversity is a necessary starting point to design AI systems that fairly and equitably engage with human reality.
Disability inclusion in the workplace is impacted by a number of factors. There is often a qualifications gap between disabled and non-disabled people due to systematic disadvantages in education, training, and previous work experience (Sayce, 2011). Even well-intentioned employers may struggle to recognise how structural barriers to success impact candidates. Some industries or categories of position lack accessibility, which can limit employment for people with certain impairments. There are inadequate programmes to support persons with disabilities and those who employ them. Employers may also have negative attitudes or biases, and lack the confidence or training to support disabled employment seekers (Lindsay et al., 2019; Suter et al., 2007).
Global Disability Facts

- There are more than 1.3 billion people with disabilities worldwide, and the number is growing with an aging population and advances in medical science. (WHO)
- 80% of these 1.3 billion people live in the developing world. (WHO)
- 15-18% of any country's population will have a disability and/or chronic health condition. (WHO)
- Circa 80% of people with disabilities have impairments that are not immediately visible. (source to be confirmed)
- 1 in 5 women will have a disability. (UN)
- 1 in 3 people aged 50-64 will have a disability, regardless of their ethnicity. (source to be confirmed)
- People who live to the age of 70 are likely to have at least 10 years of lived experience of disability. (source to be confirmed)
- In any large organisation, 10-12% of the workforce are likely to have a disability and/or chronic health condition. (UK Labour Force Survey)
- At least 1 in 3 consumers will either be disabled or will have someone with a disability in their immediate circle. (European Commission)
| Recruitment AI
As organisations increase in scale and receive larger volumes of job
applicants, they are under pressure to balance often competing interests in recruiting
and retaining talented candidates, optimising workflow efficiency and
productivity, and managing costs. This means that employers are increasingly
turning to automated tools to support the employee’s journey from recruitment to
retirement.
Artificial Intelligence (AI) has featured prominently in these developments. AI
is a subfield of computer science, focused on training computers to perform
traditionally human tasks. For additional reference, a glossary of relevant AI terms is
provided at the end of this document.
AI systems are currently available across a wide range of recruitment functions,
including:
- Candidate Sourcing / Engagement
- Candidate Tracking
- CV/Resume Screening
- Pre-Employment Assessments
- AI Interviewing
We will discuss each of these categories of technology in relation to their potential to
impact people with disabilities in greater detail below.
The unifying objective for systems operating across these diverse recruitment functions is that they are designed to distil the vast array of information about applicants down to a few select predictive features for the purpose of making quantifiable and easily comparable decisions. However, when systems need to cope with the reality of human diversity, whether it pertains to disability, ethnicity, gender, and/or other features, they often interpret complexity as an abnormality, or outlier. In this case, predictability may come at the expense of the life chances of disabled people, who already face systematic disadvantages and unfair discrimination in securing employment.
| Exclusion by Design and Discriminatory Use
Recruitment AI may, even inadvertently, adversely impact employment seekers with
disabilities via two major routes: biased systems and discriminatory processes.
Biased Systems
The design of an AI system involves first specifying an objective and then specifying how the system achieves and optimises that objective. Humans are often not skilled at specifying objectives; if an objective is not specified appropriately, the outcome may have unintended consequences.
Unwanted biases, that is, biases that treat some people negatively or adversely due to protected characteristics or other features of their identity, raise serious risks of discrimination. It is critical to identify and mitigate these potentially harmful biases, and to do so it is necessary to understand how humans introduce biases into an AI system.
Developing this knowledge begins with defining what biases exist within a system and where they exist, or have the potential to exist. Disability-related biases in AI systems are heavily influenced by historical hiring decisions. Since people with disabilities are twice as likely to be unemployed, they are simply less likely to be represented in data on past successful employees. These biases may be introduced into systems through two primary mediums: the algorithmic model and the training data.
The algorithmic model is the mathematical process by which an AI system
performs a certain function. Designing this model involves defining the objective or
problem the developer wishes to address and selecting the parameters that define
the system’s operation at what they determine is an optimal level (Russell and
Norvig, 2003).
How can this go wrong? For instance, suppose an automated CV screener is programmed to predict the best qualified candidate based on the ('optimal') parameter of having attended a top-tier university. Someone who has worked hard to achieve success, right? The prestige of an institution may be one factor in an employee's success, but that parameter also disadvantages people with disabilities, different socioeconomic backgrounds, and/or underrepresented ethnicities, who already face systemic barriers to equal representation in prestigious institutions.
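To make this concrete, consider a minimal sketch of such a screener. The weights, the university list, and the scoring rule are all invented for illustration; real products are far more elaborate, but the structural problem is the same: the chosen parameter rewards access to prestige rather than ability to do the job.

    # Hypothetical illustration: a hand-weighted CV score. The +0.5 bonus for
    # a "top-tier" university is the developer's chosen "optimal" parameter.
    TOP_TIER_UNIVERSITIES = {"University A", "University B"}  # invented list

    def screen_score(years_experience: int, university: str) -> float:
        score = min(years_experience, 10) / 10  # experience, capped at 10 years
        if university in TOP_TIER_UNIVERSITIES:
            score += 0.5  # the contested parameter
        return score

    # Two candidates with identical experience receive very different scores.
    print(screen_score(8, "University A"))   # 1.3
    print(screen_score(8, "Local College"))  # 0.8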
The training data is an initial set of data used to help a program learn how to apply the model and produce sophisticated results in application (Russell and Norvig, 2003). The model performs only as well as the training data that goes into it. The sampling strategy used to collect the training data, and the representativeness of that data, are conscious decisions by the developer.
Building on the previous example, what if the automated CV screener was trained on data that did not include the profiles of successful employees who have a non-English name, went to state school, participate in disability-related volunteering activities, had a break in employment due to family or illness, or have an address in an economically disadvantaged area? These are simple, seemingly innocuous features that will be represented in a CV. Encountering information in a CV that the programme has not previously seen means that the system may be more likely to reject a candidate, because these novel features do not fit the prescribed collection of features modelled to represent the 'ideal' employee. These novel features may be innocuous, but they may also be indirectly related to the experience of being disadvantaged on the job market.
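The same failure can be shown end to end with a toy learned model. This is a sketch only, with fabricated data: because the historical 'hired' examples contain no one with an employment gap or disability-related volunteering, the model learns to penalise those features even though they say nothing about job performance.

    # Hypothetical illustration using scikit-learn. Features per candidate:
    # [top_tier_university, employment_gap, disability_volunteering]
    from sklearn.linear_model import LogisticRegression

    X_train = [
        [1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0],  # historical hires
        [0, 1, 0], [0, 1, 1], [0, 0, 1], [0, 1, 1],  # historical rejections
    ]
    y_train = [1, 1, 1, 1, 0, 0, 0, 0]

    model = LogisticRegression().fit(X_train, y_train)

    # An equally qualified candidate with a career break and disability-related
    # volunteering scores far lower than one without those features.
    print(model.predict_proba([[1, 1, 1]])[0][1])  # low "hire" probability
    print(model.predict_proba([[1, 0, 0]])[0][1])  # high "hire" probability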
Improper Implementation and Use
Even as systems become more technically sound with regard to
acknowledging and mitigating bias in design, risks for applicants with disabilities may
be generated and/or amplified by improper use and implementation of the
technology.
Most recruiters recognise that no single assessment method is suitable and fair for all applicants. However, the marketed reliability of these tools and the ease of automating adaptations of recruitment processes have resulted in many cases where AI tools are used in isolation from other measures of suitability, and from human decision makers, in the application package. In some organisations, a single product may be the sole gate of entry into employment.
Moreover, AI assessment fails to factor in the adjustments an employer could make after hiring, which may determine whether a particular disabled candidate is 'right' for the job. For example, a qualified, visually impaired cybersecurity expert will only be the best candidate if the employer enables her to use specialised software.
Acknowledging and monitoring uncertainty in AI systems is critical to making
fair and adequate decisions as sensitive and life changing as whether a person is
employed or not. The life chances of job seekers precariously intersect with the
computational complexities related to disability, the inherent challenges of bias, and
the uncertainty around automated decision-making. No system should be expected
to work perfectly.
The use of rigid, standardised recruitment processes that cannot be adequately adjusted to enable candidates with disabilities to compete fairly is inherently discriminatory (Hamraie, 2017). Candidates may have the option to request accommodations to these systems, although some developers expect that it is the role of the employer to deliver such adjustments. However, unless candidates are given explicit assurances that they may request and be provided with equally evaluated alternative routes, the employer risks, at best, making disabled users uncomfortable or fearful of interacting with AI and, at worst, discriminating against such
candidates. Expecting disabled employment seekers to go through standardised
processes is akin to asking a wheelchair user to take the stairs to the interview room.
| Tech on the Market: the dangers of discrimination
Recruitment AI encompasses a wide array of technologies functioning at
different points in the recruitment process. This section outlines the broad categories
currently in use, detailing the impact potential for people with disabilities. This list is
by no means exhaustive, but highlights major technologies used in the candidate
sourcing and selection phases of recruitment.
ATS and CRM Systems
Applicant Tracking Systems (ATS) are platforms where recruiters can conduct
each step in the hiring process from posting position openings to collecting
applications to screening candidates to evaluation and selection. Candidate
Relationship Management (CRM) systems maintain a connection between recruiters
and employment seekers so that desirable candidates may be easily referred to
future job openings.
We consider these systems together because they share similar potential impacts on people with disabilities. They are likely to utilise automated outlier detection tools, such as CAPTCHAs, which, when insufficiently trained, can flag people with disabilities as not human, or as spammers (Guo et al., 2019). The difference between human and non-human may come down to a few seconds' delay in response, a minor slip in highlighting the correct answer, or misinterpreting an obscured set of letters. People with difficulties related to dexterity or visual impairment are disproportionately affected.
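A sketch of how such a timing-based check goes wrong is below. The cutoff is invented; the point is that any fixed response-time threshold tuned on non-disabled users will flag slower, assisted interaction as non-human.

    # Hypothetical illustration of a timing-based human/bot check.
    SUSPECTED_BOT_SECONDS = 30.0  # invented cutoff

    def looks_human(response_time_s: float, answer_correct: bool) -> bool:
        # Flags anyone slower than the cutoff, regardless of why they are slow.
        return answer_correct and response_time_s < SUSPECTED_BOT_SECONDS

    print(looks_human(12.0, True))  # True: fast and correct
    print(looks_human(55.0, True))  # False: a screen-reader user is rejected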
Further, the skills and qualification gap for disabled people due to systemic
inequalities likely disadvantages these candidates when evaluated against the
standard person specification as well as historic hiring decisions. These systems are
not designed with the flexibility that would take into account that some candidates
appear less qualified only due to systemic denial of education and employment
opportunities.
CV/ Resume Screeners
CV screening is a major driver of the recruitment innovation powered by AI systems, addressing the need to process high application volumes. Automated screeners detect characteristics in the CV content, such as key phrases and proper nouns, to evaluate employability against criteria for the position. These criteria are determined either by the job description or by evaluating the features of previously successful candidates. They may go further to interpret characteristics of the applicant, such as personality, sentiment, and demographics. Some also supplement data in CVs with information about the candidate from public data sources, social media, and information about their previous employers.
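A minimal sketch of keyword-based screening, with an invented phrase list, shows how equivalent experience described in unfamiliar wording simply scores lower:

    # Hypothetical illustration: exact-phrase matching against a CV.
    REQUIRED_PHRASES = {"project management", "stakeholder engagement", "agile"}

    def keyword_score(cv_text: str) -> float:
        text = cv_text.lower()
        hits = sum(1 for phrase in REQUIRED_PHRASES if phrase in text)
        return hits / len(REQUIRED_PHRASES)

    print(keyword_score("Led agile project management and stakeholder engagement."))   # 1.0
    print(keyword_score("Coordinated teams and budgets; chaired a disability network."))  # 0.0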
Once again, the skills and qualification gap for disabled people due to systemic inequalities is likely to disadvantage these candidates when evaluated against a standard job description as well as historic hiring decisions. These systems are not designed with the flexibility to consider that some candidates appear less qualified only due to systemic denial of education and employment opportunities.
AI screener systems that have not been trained on CV data from users with diverse cognitive and intellectual abilities may have additional challenges with linguistic flexibility. For screeners that analyse personality and emotion from texts, further problems may arise. For example, people with neuro- and cognitive diversity may express emotion in writing in a style previously not encountered by the AI system, resulting in incorrect classifications of their emotional state or personality. And many pre-lingually Deaf individuals speak the official spoken language of their country as a second language.
Conversational Agents
Recruitment conversational agents, or chatbots, are designed to mimic human
conversational abilities during the recruitment process. These technologies use an
approach termed natural language processing (NLP) to analyse questions and
comments and to respond effectively. Conversational agents are desirable additions
to the recruitment process as a means of increasing communication with
employment seekers in order to answer frequently asked questions, collect
information on candidates, ask screening questions, and schedule interviews or
meetings with a human recruiter.
Conversational agent systems have the potential to be helpful in circumstances where they are designed with accessibility in mind. Agents that augment text with visual illustration (e.g. highlighting key words, spelling and grammar checking, text suggestions), speech functionality, and dictation tools can enhance accessibility and usability for a wide range of users.
However, if not thoughtfully designed and implemented, conversational agents may respond inappropriately, or even in a hateful manner, and unfairly screen out candidates. Depending on the nature of the agent's function, this can at best lead to poor user experience and at worst to discriminatory candidate screening.
Conversational agents are often not trained on language data gathered from people with cognitive, intellectual, physical, and linguistic diversity, or those from neurodiversity groups. Undertrained agents may be unable to correctly interpret spellings or phrases they haven't previously encountered, such as messages from people who have physical difficulty typing or who have dyslexia, autism, dysphasia, dyspraxia, or ADHD, among numerous others. Moreover, agents that do not support communication methods beyond writing, such as text-to-speech and dictation, limit or exclude many individuals from participating in communication and being competitive in the recruiting process.
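The exact-match failure is easy to see in a sketch. The intents and phrases here are invented; any agent that relies on matching surface forms it has seen before will fail users whose spelling or phrasing differs from its training data.

    # Hypothetical illustration of brittle phrase-matching 'understanding'.
    INTENTS = {
        "application status": ["where is my application", "application status"],
        "request adjustment": ["request an adjustment", "need an accommodation"],
    }

    def classify(message: str) -> str:
        msg = message.lower()
        for intent, phrases in INTENTS.items():
            if any(p in msg for p in phrases):
                return intent
        return "not understood"  # the candidate is stuck, or screened out

    print(classify("Where is my application?"))   # matched
    print(classify("wher is my aplication pls"))  # "not understood"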
Pre-Employment Assessments
A range of candidate aptitude assessments, such as cognitive ability, technical skills, personality, and decision making, are commonly used to quantitatively measure and compare job applicants for a particular role. Broadly, these tests are aimed at gauging a candidate's ability to think quickly, solve problems, and interpret data.
Many recruiters recognise that these assessments are often not reliable as one-size-fits-all approaches. The generalisability of psychometric tests for people with disabilities, as well as for many populations who are not from WEIRD (western, educated, industrialized, rich, and democratic) backgrounds, is unreliable (Cook and Beckman, 2006). There is a degree of uncertainty about whether any assessed candidates, never mind those with disabilities, are indeed able to successfully learn and perform the duties of the role. Furthermore, many psychometric tests are in themselves inaccessible to a wide range of disabled candidates. These assessments must be balanced by other measures in the recruitment process.
Gamified assessments raise additional concerns related to dexterity, vision
impairment, and response time. Games often involve tasks that are assessed based
on speed of reaction to prompts and precision of responses, which may affect people
with motor limitations, who need extra time or assistance to complete dexterity tasks.
People with visual impairment may require magnification and colour adjustment and
additional time. Furthermore, people with cognitive diversity may require language
adjustment and additional time to read prompts.
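The role of timing in gamified scores, and the effect of an agreed time adjustment, can be sketched as follows. All numbers are invented; the point is that a raw speed metric penalises access needs, while an adjusted metric need not.

    # Hypothetical illustration of a reaction-time score with and without
    # an agreed extra-time adjustment (e.g. 1.5x).
    def raw_score(reaction_times_s: list) -> float:
        mean_rt = sum(reaction_times_s) / len(reaction_times_s)
        return max(0.0, 1.0 - mean_rt / 2.0)  # faster mean -> higher score

    def adjusted_score(reaction_times_s: list, time_multiplier: float) -> float:
        return raw_score([t / time_multiplier for t in reaction_times_s])

    times = [1.8, 2.1, 1.9]            # e.g. a candidate using a switch device
    print(raw_score(times))            # near zero on the raw metric
    print(adjusted_score(times, 1.5))  # competitive once adjusted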
AI Interviewing
AI powered interviewing includes facial analysis tools and speaking conversational agents, also known as 'robot recruiters' (see the limitations of conversational agents above). These tools evaluate employability from the language, tone, and facial expressions of candidates when they are asked an identical set of questions in a standardised process. Candidates are assessed on a variety of facial, linguistic, and non-verbal measures. 'Ideal' measures are often those that most closely align with the same measures from historically successful candidates for any given role.
As with previous examples, systems that are not trained on a diverse range of potentially successful candidates face challenges in fairly assessing people with facial features, expressions, voice tones, and non-verbal communication that they have not previously encountered.
For instance, facial analysis software may inaccurately assess and potentially exclude people with facial disfigurement or paralysis, as well as those with conditions such as Down syndrome, achondroplasia, cleft lip/palate, or other conditions that result in facial differences. Further, blind people may not face the camera or make eye contact in a manner acceptable to the system's parameters. These issues may be exacerbated by differences in eye anatomy or by dark glasses. People who need captions due to hearing loss, or who lip read, may struggle to hear or interpret the questions.
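A sketch of the gating logic makes the exclusion mechanism visible. The confidence values and thresholds below are invented; the structural issue is that a candidate can be rejected by the gate before any human ever reviews them.

    # Hypothetical illustration of a face-analysis gate in video interviewing.
    DETECTION_THRESHOLD = 0.9    # invented 'typical face' confidence cutoff
    EYE_CONTACT_THRESHOLD = 0.5  # invented expected-gaze cutoff

    def admit_to_review(face_confidence: float, eye_contact_ratio: float) -> bool:
        return (face_confidence >= DETECTION_THRESHOLD
                and eye_contact_ratio >= EYE_CONTACT_THRESHOLD)

    print(admit_to_review(0.95, 0.8))  # admitted
    print(admit_to_review(0.70, 0.9))  # excluded: atypical facial features
    print(admit_to_review(0.95, 0.1))  # excluded: no 'eye contact'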
Facial analysis tools that go further to interpret emotion and personality from facial expressions pose alarmingly high risks. Beyond issues of accuracy and algorithmic bias, the fundamental scientific concept behind personality assessments derived from facial feature measurements is not supported, and is rooted in pseudoscientific race studies (Noble, 2018). The implementation of these technologies for recruitment risks legitimising this flawed methodological premise among ill-informed buyers in a way that can only perpetuate historic disadvantages and exclusion for marginalised peoples.
| Intervention Recommendations

Designing and implementing Recruitment AI systems that treat persons with disabilities, and by extension all employment seekers, fairly requires the engagement of all stakeholders: technology suppliers, purchasers, and users alike. Our aim is to facilitate purchasers in joining the discussion and to collaborate with us to prepare the tools and language needed to initiate the conversation that asks: how do we assess whether any given Recruitment AI system is 'safe' for employment seekers with disabilities and others disadvantaged in any labour market?
There are a number of actions a forward-thinking organisation can take to support those technology suppliers who share the values and expectations of the organisation and its clients toward applicants with disabilities. This process begins by asking the right questions of technology developers and suppliers.
Vision, Strategy & Corporate Governance Stakeholders

i. Does this technology align with our organisational strategy to increase diversity and representation?
ii. Does use of this technology reflect our organisation's strategic policies with regard to the ethical and responsible development and implementation of artificial intelligence?
iii. Is this supplier actively engaged in learning more about how to adapt to match our values and needs as a business and those of our stakeholders?
iv. Who in this organisation should be involved in the governance process which determines how we investigate, procure, apply and monitor HR tech systems so that, at the very least, they do not adversely impact disadvantaged job seekers?
Human Resources and Operations Stakeholders

i. What are the benefits and risks of this technology for disabled and other disadvantaged employment seekers?
ii. Was a shared understanding of inclusivity and fairness, with specific reference to eliminating the root causes of disability-related discrimination, designed into this technology?
iii. Will implementing this technology require alternative evaluation routes to enable people with different impairments to be recruited on the basis of individual capability and potential?
iv. Does the AI recruitment tool enable candidates to readily request adjustments, in a non-stigmatising manner, at every stage of the process?
Procurement Stakeholders

i. Has this supplier proved their products are safe for disabled and other disadvantaged employment seekers before you purchase?
ii. How has the supplier actively involved people with disabilities to test and validate its products?
iii. Was a shared understanding of inclusivity and fairness, with specific reference to eliminating the root causes of disability-related discrimination, designed into this technology?
iv. Do contractually defined performance standards require the supplier to track the experience of job seekers with disabilities, particularly those who have requested disability-related adjustments?
v. Can they evidence that they have actively consulted and involved persons with disabilities as expert advisors and potential users in their product development life cycle?
Information Technology Stakeholders

i. Will our organisation be provided with the appropriate explainability and interpretability resources to assess outputs and impacts on employment seekers with disabilities?
ii. Does the relevant, quality data exist to support this technology in performing effectively for persons with disabilities?
iii. What are the appropriate oversight mechanisms to evaluate the performance of the system, and can the system withstand scrutiny by disabled employment seekers?
iv. Can the supplier demonstrate how the processes will adapt so as to ensure equal opportunities for disabled employment seekers?
| References
Broussard, M., 2018. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge: MIT Press.

Collins, P.H. and Bilge, S., 2020. Intersectionality. John Wiley & Sons.

Cook, D.A. and Beckman, T.J., 2006. Current concepts in validity and reliability for psychometric instruments: theory and application. The American Journal of Medicine, 119(2), pp.166.e7-166.e16.

Frederick, A. and Shifrer, D., 2019. Race and disability: From analogy to intersectionality. Sociology of Race and Ethnicity, 5(2), pp.200-214.

Guo, A., Kamar, E., Vaughan, J.W., Wallach, H. and Morris, M.R., 2019. Toward fairness in AI for people with disabilities: A research roadmap. arXiv preprint arXiv:1907.02227.

Hamraie, A., 2017. Building Access: Universal Design and the Politics of Disability. Minneapolis: University of Minnesota Press.

Lindsay, S., Leck, J., Shen, W., Cagliostro, E. and Stinson, J., 2019. A framework for developing employer's disability confidence. Equality, Diversity and Inclusion: An International Journal.

Noble, S.U., 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Office for National Statistics, 2019. Disability and Employment, UK: 2019.

O'Neil, C., 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Penguin Random House.

Parker, A.M., 2015. Intersecting histories of gender, race, and disability. Journal of Women's History, 27(1).

Petrick, E.R., 2015. Making Computers Accessible: Disability Rights and Digital Technology. Baltimore: Johns Hopkins University Press.

Pinch, T. and Oudshoorn, N., 2005. How Users Matter: The Co-Construction of Users and Technology. Cambridge: MIT Press.

Russell, S. and Norvig, P., 2003. Artificial Intelligence: A Modern Approach (2nd ed.). Pearson Education.

Samuels, E., 2016. Fantasies of Identification: Disability, Gender, Race. New York and London: New York University Press.

Sayce, L., 2011. Getting In, Staying In and Getting On: Disability Employment Support Fit for the Future (Vol. 8081). The Stationery Office.

Suter, R., Scott-Parker, S. and Zadek, S., 2007. Realising Potential: Disability Confidence Builds Better Business.

Trewin, S., 2018. AI fairness for people with disabilities: Point of view. arXiv preprint arXiv:1811.10670.

Trewin, S., Basson, S., Muller, M., Branham, S., Treviranus, J., Gruen, D., Hebert, D., Lyckowski, N. and Manser, E., 2019. Considerations for AI fairness for people with disabilities. AI Matters, 5(3), pp.40-63.

Whittaker, M., Alper, M., Bennett, C.L., Hendren, S., Kaziunas, L., Mills, M. and West, M., 2019. Disability, Bias, and AI. AI Now Institute, November.
| Recommended Resources

Crawford, Kate, Roel Dobbe, Theodora Dryer, Genevieve Fried, Ben Green, Elizabeth Kaziunas, Amba Kak, Varoon Mathur, Erin McElroy, Andrea Nill Sánchez, Deborah Raji, Joy Lisi Rankin, Rashida Richardson, Jason Schultz, Sarah Myers West, and Meredith Whittaker, 2019. AI Now 2019 Report. New York: AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.html

European Commission, 2020. White Paper on Artificial Intelligence: a European approach to excellence and trust. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

Leslie, D., 2019. Understanding artificial intelligence ethics and safety. arXiv preprint arXiv:1906.05684. https://arxiv.org/pdf/1906.05684.pdf

Office for Artificial Intelligence, 2020. Guidelines for AI Procurement. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/890699/Guidelines_for_AI_procurement__Print_version_.pdf

Whittaker, M., Alper, M., Bennett, C.L., Hendren, S., Kaziunas, L., Mills, M. and West, M., 2019. Disability, Bias, and AI. AI Now Institute, November. https://wecount-cms.inclusivedesign.ca/wp-content/uploads/2020/06/Disability-bias-AI.pdf

World Economic Forum, 2019. White Paper: Guidelines for AI Procurement. http://www3.weforum.org/docs/WEF_Guidelines_for_AI_Procurement.pdf
| Glossary
Algorithm
A formula or set of rules that determines the process by which the machine goes
about finding answers to a question or solutions to a problem.
Artificial Intelligence (AI)
A field of computer science focused on the study of computationally supported
intelligent decisions and problem solving.
Augmented Intelligence
Complementing and supporting, rather than replacing, human tasks and intelligence.
Autonomous AI
An AI system that doesn’t require input from a human operator to function and
complete tasks.
Data mining
The process of identifying patterns within large sets of data with the intention of
deriving useful information about the data.
Deep learning
An approach in machine learning that uses multi-layered neural networks to model complex structures and relationships among data.
Machine learning
A field of AI employing algorithms that learn automatically from experience in order to build analytical models.
Natural language processing (NLP)
A field of AI that reads and interprets human languages in order to derive meaning
from them.
... Indeed, evidence from a recent white paper suggests that the increasing use of artificial intelligence in recruitment (e.g. curriculum vitae screeners) is problematic in this regard as systems are unable to account for such individual differences in experiences (Nugent et al., 2020). As a result, autistic candidates may be likely to be 'screened out' before they are able to demonstrate their skills. ...
Article
Full-text available
Autistic people face high unemployment rates. One reason for this may be that hiring processes are inaccessible. This study aimed to establish autistic people’s unique experiences of hiring processes in the United Kingdom, by comparing them to the experiences of non-autistic neurodivergent people and neurotypical people. Using qualitative and quantitative data from 225 autistic, 64 non-autistic neurodivergent and 88 neurotypical adults, we identified a series of (dis)similarities in participants’ views and experiences of recruitment for employment. Similarities across the three groups included (1) frustration with the focus on social skills; (2) a perceived need for more flexible hiring processes; (3) a desire for more clarity and (4) the importance of the environment. Participants also acknowledged the important role employers play in one’s decision to disclose a diagnosis or access need. Yet, autistic people faced a set of unique barriers to successful recruitment, over and above those that non-autistic people faced. For example, the perceived pressure to mask autistic traits to succeed and concerns about stigma and discrimination. Participants’ recommendations for improvements included the use of more practical recruitment strategies (e.g. work trials), more clarity about what to expect, and improvements in recruiters’ understanding of the challenges autistic and neurodivergent candidates may face. Lay abstract Autistic people are less likely to have a job than non-autistic people. One reason for this may be that hiring processes (e.g. job applications, interviews) can be challenging for autistic people. To better understand the experiences of hiring processes in the United Kingdom, we asked 225 autistic, 64 neurodivergent (but not autistic) and 64 adults with no reported area of neurodivergence questions about their experiences using an online survey. We found a range of similarities and differences in responses. For example, participants in all three groups were frustrated with the focus on social skills in recruitment and said they wanted more practical methods (e.g. work trials) that help them show their skills and abilities. Autistic and otherwise neurodivergent participants discussed the importance of the environment (e.g. the interview/assessment room) in improving experiences. Participants also discussed how employers can impact whether somebody decides to disclose their diagnosis or needs – or not. Autistic people experienced some barriers to successful recruitment that non-autistic people did not. For example, autistic people felt they had to hide their autistic traits to gain employment and many autistic people were worried about being discriminated against if they disclosed that they were autistic during the hiring process. To make experiences better, our participants said that employers should offer candidates different recruitment methods and give them more information about the hiring process. They also said employers should improve their understanding of autism and other hidden disabilities so they know the challenges that people might face during recruitment.
... Such proxies are not personal data in the meaning of the GDPR, and thus fall outside its scope. Nevertheless, the literature has repeatedly pointed out their adverse impact on protected groups [20,32,51,58,73,104]. Consequently, having access to the sensitive information stored in proxies might be useful to recognize and mitigate the effect of them. ...
Preprint
Full-text available
Tackling algorithmic discrimination against persons with disabilities (PWDs) demands a distinctive approach that is fundamentally different to that applied to other protected characteristics, due to particular ethical, legal, and technical challenges. We address these challenges specifically in the context of artificial intelligence (AI) systems used in hiring processes (or automated hiring systems, AHSs), in which automated assessment procedures are subject to unique ethical and legal considerations and have an undeniable adverse impact on PWDs. In this paper, we discuss concerns and opportunities raised by AI-driven hiring in relation to disability discrimination. Ultimately, we aim to encourage further research into this topic. Hence, we establish some starting points and design a roadmap for ethicists, lawmakers, advocates as well as AI practitioners alike.
Article
Full-text available
IntroductionThis research aims to explore the effectiveness and inclusivity of AI-powered recruitment tools in hiring people with disabilities within the United Arab Emirates. Such is the situation where AI integration into the arena of recruitment is increasingly rapid, while there are vital issues on the side of bias, accessibility, and fairness for applicants of diverse needs. Methods This study was a mixed-methods approach, examining sentiment analysis, emotion detection, and HR analytics of feedback from applicants with a disability, 415 in total. The research focused on scores referring to sentiment, the progression rate, and the outcome of the final hiring. ResultsThe sentiment score varied significantly across disability types (p-value <0.05). The applicants with cognitive disability expressed the highest sentiment sore while applicants with hearing impairment had the lowest, which indicated the varying adaptability of AI. The emotion analysis depicted a mix of positive and negative emotions. A few applicants liked technology and have trust in it, while others report fear. Clearly, the applicants, both disabled and non-disabled did not differ in their rate of progression (p-value >0.05), hence never indicating any significant difference within the initial steps of the process. The final hiring stage showed significant differences in results with (p-value <0.05), where the proportionate number of disabled applicants was recorded to be lower than that of non-disabled applicants
Article
Full-text available
Many autistic people are unemployed. Of those who are employed, many are in roles that do not reflect their skills, qualifications and/or capabilities, and little is known about how autistic people progress throughout their careers. This study aimed to review and synthesise the existing evidence about career progression for autistic people. In total, 33 studies met the criteria for inclusion, though no study directly aimed to explore the topic. Our findings suggest that underemployment is common within the autistic population. Indirectly, we identified several potential barriers and facilitators of career progression for autistic people. Possible barriers included personal (e.g. gaps in education and employment history), relational (e.g. disclosing an autism diagnosis) and organisational factors (e.g. inadequate employment support). Adequate employment support was the most frequently discussed facilitator. Future research should seek to identify the most successful employment supports for autistic people over the long term to ensure that all autistic people are able to live – and work – in ways that are meaningful to them. Lay abstract Lots of autistic people are unemployed. Even when they are employed, autistic people might be given fewer opportunities than non-autistic people to progress in their careers. For example, assumptions about autistic people’s differences in social communication might mean they are not given as many promotions. Indeed, we know that many autistic people are in jobs lower than their abilities (known as ‘underemployment’). We reviewed 33 studies that tell us something about career progression for autistic people. Our review found that lots of autistic people want to progress in their careers, but there are many barriers in their way. For example, when they told their employer about being autistic, some people were given fewer opportunities. Research has also shown that autistic people do not get enough support to progress and that gaps in their employment history can make it difficult to progress. Our review suggested that good employment support (e.g. mentors) might help autistic people to progress in their careers. However, not much research has evaluated employment support for autistic people, which means we do not know how useful it is. Future research should find the best support that allows autistic people to live and work in ways that are meaningful to them.
Chapter
Exploring end-users’ understanding of Artificial Intelligence (AI) systems’ behaviours and outputs is crucial in developing accessible Explainable Artificial Intelligence (XAI) solutions. Investigating mental models of AI systems is core in understanding and explaining the often opaque, complex, and unpredictable nature of AI. Researchers engage surveys, interviews, and observations for software systems, yielding useful evaluations. However, an evaluation gulf still exists, primarily around comprehending end-users’ understanding of AI systems. It has been argued that by exploring theories related to human decision-making examining the fields of psychology, philosophy, and human computer interaction (HCI) in a more people-centric rather than product or technology-centric approach can result in the creation of initial XAI solutions with great potential. Our work presents the results of a design thinking workshop with 14 cross-collaborative participants with backgrounds in philosophy, psychology, computer science, AI systems development and HCI. Participants undertook design thinking activities to ideate how AI system behaviours may be explained to end-users to bridge the explanation gulf of AI systems. We reflect on design thinking as a methodology for exploring end-users’ perceptions and mental models of AI systems with a view to creating effective, useful, and accessible XAI.KeywordsArtificial IntelligenceExplainable Artificial IntelligenceHuman Computer InteractionDesign Thinking
Article
Full-text available
AI-based solutions have found a great application in filling the gap and enhancing the massive recruiting processes. With the recent developments, the role of gamification in the overall managerial processes, especially in recruiting has proven to be crucial. AI as a powerful tool towards the challenges the hiring process faces, appears as contradictory in a number of issues. In this paper, we have observed and analyzed the advantages and disadvantages of AI in recruiting, followed by a proposed model for resume screening based on keywords and phrases against job description. Furthermore, a case has been presented and assessed regarding results and implications of AI-based tools, namely machine learning models in a simple scenario of hiring process.
Chapter
Persons with disabilities experience high levels of unemployment, job insecurity, tightly bound with persistent socioeconomic aspects such as poverty, social isolation and marginalization. Such worrisome developments tend to magnify and reproduce the inequality and discrimination this vulnerable group faces in the field of employment with long-lasting effects on their life course and on economic development in general. At the same time, in an increasingly unequal world Artificial Intelligence (AI) technologies have rapidly emerged from the shadows to become a priority in the global market as well as to advance people’s lives. Against this backdrop, the opportunities and challenges in harnessing AI technologies (i.e., applications/smart devices amplifying human capability) to reasonably accommodate the needs of persons with disabilities in the labour market are examined in this chapter. Undoubtedly, realizing the full potential of AI technologies within employment settings from a disability rights perspective is particularly challenging. To this end, a human rights approach brings into play established frameworks of legal obligations and tools so as to regulate and evaluate the performance of AI technologies with the immediate and ultimate goal the benefit of the whole society. Looking ahead, as a way of facilitating employment opportunities for persons with disabilities this chapter concedes that AI should be framed as a matter of equity and in consistency with human rights principles and standards for achieving optimum workplace accessibility and inclusivity.KeywordsArtificial IntelligenceDisabilityHuman rightsEmploymentInequalityDiscrimination
Article
Full-text available
Sociologists are using intersectional lenses to examine an increasingly wider range of processes and identities, yet the intersection of race and disability remains a particularly neglected area in sociology. Marking an important step toward filling this gap, the authors interrogate how race and disability have been deployed as analogy in both disability rights activism and in critical race discourse. The authors argue that the “minority model” framework of disability rights has been racialized in ways that center the experiences of white, middle-class disabled Americans, even as this framework leans heavily upon analogic work likening ableism to racial oppression. Conversely, the authors examine the use of disability as metaphor in racial justice discourse, interrogating the historic linking of race and disability that gave rise to these language patterns. The authors argue that this analogic work has marginalized the experiences of disabled people of color and has masked the processes by which whiteness and able-bodiedness have been privileged in these respective movements. Finally, the authors argue that centering the positionality of disabled people of color demands not analogy but intersectional analyses that illuminate how racism and ableism intertwine and interact to generate unique forms of inequality and resistance.
Book
Full-text available
In the mid-nineteenth-century United States, as it became increasingly difficult to distinguish between bodies understood as black, white, or Indian; able-bodied or disabled; and male or female, intense efforts emerged to define these identities as biologically distinct and scientifically verifiable in a literally marked body. Combining literary analysis, legal history, and visual culture, Ellen Samuels traces the evolution of the “fantasy of identification”-the powerful belief that embodied social identities are fixed, verifiable, and visible through modern science. From birthmarks and fingerprints to blood quantum and DNA, she examines how this fantasy has circulated between cultural representations, law, science, and policy to become one of the most powerfully institutionalized ideologies of modern society. Yet, as Samuels demonstrates, in every case, the fantasy distorts its claimed scientific basis, substituting subjective language for claimed objective fact.From its early emergence in discourses about disability fakery and fugitive slaves in the nineteenth century to its most recent manifestation in the question of sex testing at the 2012 Olympic Games, Fantasies of Identification explores the roots of modern understandings of bodily identity.
Article
This year the ASSETS conference is hosting a workshop on AI Fairness for People with Disabilities the day before the main conference program begins. This workshop will bring together forty participants to discuss the practical, ethical, and legal ramifications of emerging AI-powered technologies for people with disabilities. We organized this workshop because artificial intelligence is increasingly being used in decision-making that directly impacts people's lives.
Article
In society today, people experiencing disability can face discrimination. As artificial intelligence solutions take on increasingly important roles in decision-making and interaction, they have the potential to impact fair treatment of people with disabilities in society both positively and negatively. We describe some of the opportunities and risks across four emerging AI application areas: employment, education, public safety, and healthcare, identified in a workshop with participants experiencing a range of disabilities. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. We next discuss strategies for supporting fairness in the context of disability throughout the AI development lifecycle. AI systems should be reviewed for potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models, and in testing, to create a more inclusive and robust system. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and ultimately enhance the lives of people with disabilities.
Article
Purpose – Many employers lack disability confidence regarding how to include people with disabilities in the workforce, which can lead to stigma and discrimination. The purpose of this paper is to explore the concept of disability confidence from two perspectives, employers who hire people with a disability and employees with a disability. Design/methodology/approach – A qualitative thematic analysis was conducted using 35 semi-structured interviews (18 employers who hire people with disabilities; 17 employees with a disability). Findings – Themes included the following categories: disability discomfort (i.e. lack of experience, stigma and discrimination); reaching beyond comfort zone (i.e. disability awareness training, business case, shared lived experiences); broadened perspectives (i.e. challenging stigma and stereotypes, minimizing bias and focusing on abilities); and disability confidence (i.e. supportive and inclusive culture and leading and modeling social change). The results highlight that disability confidence among employers is critical for enhancing the social inclusion of people with disabilities. Originality/value – The study addresses an important gap in the literature by developing a better understanding of the concept of disability confidence from the perspectives of employers who hire people with disabilities and also employees with a disability.
Book
In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has produced a tremendous number of poorly designed systems. We are so eager to do everything digitally (hiring, driving, paying bills, even choosing romantic partners) that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can, and should, do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology, and issues a warning that we should never assume that computers always get things right. Making a case against technochauvinism, the belief that technology is always the solution, Broussard argues that it is simply not true that social problems would inevitably retreat before a digitally enabled utopia. To prove her point, she undertakes a series of adventures in computer programming: she goes for an alarming ride in a driverless car, concluding that “the cyborg future is not coming any time soon”; uses artificial intelligence to investigate why students can't pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software. If we understand the limits of what we can do with technology, Broussard tells us, we can make better choices about what we should do with it to make the world better for everyone.
Book
“All too often,” wrote disabled architect Ronald Mace, “designers don’t take the needs of disabled and elderly people into account.” Building Access investigates twentieth-century strategies for designing the world with disability in mind. Commonly understood in terms of curb cuts, automatic doors, Braille signs, and flexible kitchens, Universal Design purported to create a built environment for everyone, not only the average citizen. But who counts as “everyone,” Aimi Hamraie asks, and how can designers know? Blending technoscience studies and design history with critical disability, race, and feminist theories, Building Access interrogates the historical, cultural, and theoretical contexts for these questions, offering a groundbreaking critical history of Universal Design. Hamraie reveals that the twentieth-century shift from “design for the average” to “design for all” took place through liberal political, economic, and scientific structures concerned with defining the disabled user and designing in its name. Tracing the co-evolution of accessible design for disabled veterans, a radical disability maker movement, disability rights law, and strategies for diversifying the architecture profession, Hamraie shows that Universal Design was not just an approach to creating new products or spaces, but also a sustained, understated activist movement challenging dominant understandings of disability in architecture, medicine, and society. Illustrated with a wealth of rare archival materials, Building Access brings together scientific, social, and political histories in what is not only the pioneering critical account of Universal Design but also a deep engagement with the politics of knowing, making, and belonging in the twentieth-century United States.
Book
A revealing look at how negative biases against women of color are embedded in search engine results and algorithms. Run a Google search for “black girls”: what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But if you type in “white girls,” the results are radically different. The suggested porn sites and unmoderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color. Through an analysis of textual and media searches, as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance (operating as a source for email, a major vehicle for primary and secondary school learning, and beyond), understanding and reversing these disquieting trends and discriminatory practices is of the utmost importance. An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
Book
In 1974, not long after developing the first universal optical character recognition technology, Raymond Kurzweil struck up a conversation with a blind man on a flight. Kurzweil explained that he was searching for a use for his new software. The blind man expressed interest: one of the frustrating obstacles that blind people grappled with, he said, was that no computer program could translate text into speech. Inspired by this chance meeting, Kurzweil decided to put his new innovation to work to “overcome this principal handicap of blindness.” By 1976, he had built a working prototype, which he dubbed the Kurzweil Reading Machine. Innovations of this kind demonstrated the potential of computers to dramatically improve the lives of people living with disabilities. In Making Computers Accessible, Elizabeth R. Petrick tells the compelling story of how computer engineers and corporations gradually became aware of the need to make computers accessible to all people. Motivated by user feedback and prompted by legislation such as the Americans with Disabilities Act, which offered the promise of equal rights via technological accommodation, companies developed sophisticated computerized devices and software to bridge the accessibility gap. People with disabilities, Petrick argues, are paradigmatic computer users, demonstrating the personal computer's potential to augment human abilities and provide new forms of social, professional, and political participation. Bridging the history of technology, science and technology studies, and disability studies, this book traces the psychological, cultural, and economic evolution of a consumer culture aimed at individuals with disabilities, who increasingly rely on personal computers to make their lives richer and more interconnected.