An overview of ethical issues in using AI
systems in hiring with a case study of
Amazon’s AI based hiring tool
Akhil Alfons Kodiyan
November 12, 2019
1 Introduction
Corporations spend huge amounts of time and money to find the perfect
candidate for an open position, as the right hire can be the difference
between success and failure. According to a recent report from LinkedIn[9],
time-to-hire and cost-to-hire metrics are trending upward for middle- to
top-level openings, resulting in key positions staying open longer than
normal despite greater spending. This has led companies like Amazon to look
for innovative ways to reduce such time and expense through the application
of artificial intelligence (AI).
According to reports[12], in 2014 Amazon Inc. set up a team to build a
tool to review job applicants' resumes that utilized natural language
processing (NLP) and machine learning (ML) to find the top applicants that
would fit the job profile. Once implemented, this software would use
sophisticated AI algorithms to learn key traits from the resumes of
successful job applicants over a period of time and look for similar
markers in resumes submitted for screening. The tool would then rate each
candidate on a scale of 5 stars, much like the rating system used for
products on Amazon, depending on how closely they resembled prior
successful candidates.
By the end of 2014, use of this experimental tool was widespread in the
company, and some teams relied heavily on it as it saved a significant
amount of time. By 2015, it came to the company's attention that, for
technical job titles like software developer and architect, ratings were
not being produced in a gender-neutral way. The company tasked its
engineers with investigating the root cause. After much digging, the
engineers concluded that the cause of the bias was the data used to train
the AI system: it consisted mostly of resumes from male employees,
reflecting the then-prevailing male dominance in the company and the tech
industry. Such unknowingly biased training data led the algorithms to form
associations that downgraded resumes containing words like “women’s”, as
in “women’s chess club captain”. It was also reported[12] that the
engineers identified cases where the system downgraded graduates of two
all-women’s colleges.
These discoveries led Amazon to rewrite the algorithms to be neutral in
that context, but it was concluded that such an AI system might, in
future, develop other ways of sorting candidates that could to some degree
be discriminatory.
2 Literature Review
2.1 Traditional Hiring
Traditional hiring follows no fixed model for how it is to be executed
[1]. It usually starts with the company identifying an open position,
followed by an analysis that results in a job description. This is then
advertised through in-house channels like an internal job portal, external
channels like LinkedIn or Monster, head-hunters, or some combination of
these. The CVs so sourced are pooled and screened by HR personnel and
subject-matter experts, and the shortlisted candidates are then
interviewed to select the final hires[10]. Though this method of hiring is
time-tested and has a human touch, its main limitations are the time and
cost involved [9].
2.2 AI In Traditional Hiring
The concept of artificial intelligence has been around for a while and has
found applications in many research fields, but only during the last
decade has the technology been further developed and implemented within
many different organisational settings[13]. The areas in which AI can be
realised are vast, but according to Tecuci[13] knowledge acquisition,
natural language, and robotics are the main ones.
Natural language processing (NLP) is a process by which information and
knowledge can be gathered by scanning plain text. This kind of knowledge
extraction can be used to automate the scanning of resumes and gather
relevant information[5], which can then be used to rank applicants by
their suitability for a job profile. Building this type of AI system
requires training data from which the underlying algorithm learns to
correlate traits found in a resume with a job profile in order to predict
the applicant's suitability[4]. Companies like Amazon have reportedly
built such systems to assist in their recruiting [12], a case we discuss
in detail in the next section.
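The kind of resume-to-profile matching described above can be sketched with a simple bag-of-words cosine similarity. This is an illustrative toy, not Amazon's actual system; the job profile and resume texts are invented for the example.

```python
# Toy sketch: rank resumes against a job profile by bag-of-words cosine
# similarity. All data here is hypothetical.
import math
import re
from collections import Counter

def bag(text: str) -> Counter:
    """Lowercased word counts, ignoring punctuation."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

job_profile = bag("software engineer python machine learning distributed systems")
resumes = [
    "Python developer with machine learning experience",
    "Accountant with payroll and bookkeeping experience",
    "Machine learning engineer, distributed systems, Python",
]

scores = [cosine(job_profile, bag(r)) for r in resumes]
for score, text in sorted(zip(scores, resumes), reverse=True):
    print(f"{score:.2f}  {text}")
```

A real screening system would learn weightings from labelled historical data rather than relying on raw word overlap, which is exactly where biased training data enters the picture.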
2.3 Challenges In Adopting AI
Challenges in adopting AI technologies can be broadly classified as
technological, privacy-related, and ethical. Of these, the ethical and
privacy concerns are the main factors limiting the adoption of these
technologies in modern hiring workflows, whereas the technological
challenges tend to dissolve with the rapid pace of innovation in the
industry.
Most AI systems need training before they can be put into use. This
requires a training data set, which in the context of hiring might include
the personal information of both failed and successful applicants, so that
the system can deduce the common markers of each[4]. This raises questions
of personal data privacy and data protection.
2.4 Ethical Challenges In Adopting AI
The use of AI poses some serious ethical dilemmas and hard questions, and
it is these challenges that hamper the widespread adoption of the
technology in hiring. Chief among them: how does an AI system ensure
fairness? How does it manage irreconcilable ideas? How will diversity in a
company be maintained? Does the system have sufficient contextual
integrity? Is too much reliance on AI technologies dangerous?[11]
When people speak of fairness in relation to AI, they often draw on
several different meanings of fairness; it was found that in computer
science there are around 21 definitions of it[2]. For example, fairness
often means equal opportunity. Another definition would be freedom from
bias based on gender or colour. Yet another would be a standard of
treating people equally across various dimensions, such as interpersonally
or legally. In summary, despite its vagueness, fairness remains a very
important moral value that people expect to see in an AI system, yet it is
one of the most common areas of ethical trouble in intelligent software.
Optimising an AI system for fairness is tricky because fairness spans many
notions, some of which are mutually exclusive. For example, a company
might wish to provide equal opportunity without any bias, yet be unable to
simultaneously honour a notion of fairness that corrects for historical or
inherent disadvantages and social injustices. Implementing an AI that
satisfies both would be challenging and would often require sacrifices.
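The tension between notions of fairness can be made concrete with two common metrics from the fairness literature, demographic parity and equal opportunity. The groups, decisions, and qualification labels below are hypothetical, constructed so that one metric is satisfied while the other is not.

```python
# Two competing fairness metrics on invented screening outcomes.
def rate(xs):
    """Fraction of 1s in a list of 0/1 decisions."""
    return sum(xs) / len(xs) if xs else 0.0

# (shortlisted, qualified) pairs per applicant; groups A and B are hypothetical.
group_a = [(1, 1), (1, 1), (1, 1), (0, 1), (0, 0), (0, 0)]
group_b = [(1, 1), (1, 0), (1, 0), (0, 1), (0, 0), (0, 0)]

def selection_rate(group):
    # Demographic parity compares overall shortlisting rates across groups.
    return rate([s for s, _ in group])

def true_positive_rate(group):
    # Equal opportunity compares shortlisting rates among qualified applicants.
    return rate([s for s, q in group if q == 1])

parity_gap = abs(selection_rate(group_a) - selection_rate(group_b))
opportunity_gap = abs(true_positive_rate(group_a) - true_positive_rate(group_b))
print(f"demographic parity gap: {parity_gap:.2f}")
print(f"equal opportunity gap: {opportunity_gap:.2f}")
```

Here both groups are shortlisted at the same overall rate (parity gap of 0.00), yet qualified applicants in group B are shortlisted far less often (opportunity gap of 0.25), illustrating that satisfying one definition does not imply the other.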
Hiring is an inherently discriminating process in which some applicants
receive offers while others do not, based on certain traits that define a
”good” candidate. Over the long run, such ”good” traits determine the
level of diversity a company settles into. Should such parameters be a
functional requirement of an AI project in order to get optimal
performance, or is that against fairness?
Contextual integrity is the appropriateness of using information
concerning an individual in a way that aligns reasonably well with the
individual's expectation of privacy when he or she shared it. When
candidates submit their personal information while applying for a job,
subjecting that information to an AI process carries an implicit violation
of trust. One could argue that the data were submitted to the company for
evaluation and are thereby owned by the firm, which can do with them as it
sees fit, but this view gives rise to various other ethical issues related
to data ownership.
As AI systems become faster and more accurate at mimicking human
decisions, an organisation may come to rely on AI more than on human
judgment. In such a scenario, will the organisation eventually lose its
ability to make a hiring decision without the help of an AI?
3 Liffick’s analysis
In this section, Liffick’s methodology of analysis is applied to Amazon’s
failed automated resume-rating system, which was discussed in the
introduction.
3.1 Main Participants and Actions
Reporter (Reuters): Published an article in October 2018 about an
experimental AI recruiting tool used by Amazon Inc. that showed bias
against women[12].
The company (Amazon Inc.): In 2014, tasked its engineers with creating an
AI-based tool to assist in the company’s head-hunting process.
Engineers: Built the AI tool that automatically rates candidates on a
scale of 5 stars, using training data from the previous 10 years of
interviews conducted by the company.
Managers: Primarily used this tool during the 2014-2015 period and,
towards the end of 2015, realised that the tool was gender-biased.
Job seekers (implied participants): Underwent shortlisting via the new
automation tool.
3.2 Reduced List
The company: Responsible for the creation of the tool.
Engineers: Developed the algorithms and unknowingly used biased data to
train the software.
Managers: The primary users, and the ones who discovered the limitations
of the system.
3.3 Legal Considerations
Employment Equality Act, 1998, Section 8[6]: This case is clearly in
violation of this section, as prospective employees were discriminated
against on the basis of gender.
Statute of Limitations, 1957, Section 72[7]: This section works in favour
of the company, as it was experimenting with a technology whose behaviour
is not 100% predictable and was unaware of the bias in the training data.
General Data Protection Regulation[8]: A couple of significant areas
apply. The GDPR states that a company needs an applicant's consent to
process their sensitive data, and that the company must be transparent
about how it uses this data and how long it intends to keep it. Both of
these appear to have been violated in this case, as the report implies
candidates had no knowledge that they were part of an experimental
procedure.
3.4 Possible Options for Participants
The Company
Could have made ”no bias” a functional criterion.
Could have researched the implications and likely challenges of
using AI in hiring.
Could have relied on more human HR personnel rather than building
an AI system.
Could have done a statistical analysis of the training data set;
such analysis might have warned them about the gender skew in the
data.
Should have treated moral and ethical requirements as implicit
functional requirements.
Could have ensured the system was thoroughly tested before
incorporating it into their workflow.
Could have refrained from using such tools and relied on human
effort to get the work done.
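The statistical analysis suggested above need not be elaborate; a simple group-count check on the training set would already have flagged the skew. The 85/15 split below is invented to mirror the kind of imbalance reported in the Amazon case.

```python
# Toy sketch: flag group imbalance in a (hypothetical) training set.
from collections import Counter

# Hypothetical training records as (resume_text, gender) pairs.
training = [("resume text", "male")] * 850 + [("resume text", "female")] * 150

counts = Counter(gender for _, gender in training)
total = sum(counts.values())
for gender, n in counts.items():
    print(f"{gender}: {n} ({n / total:.0%})")

# Flag a skew when any group falls far below an even share of the data.
expected_share = total / len(counts)
skewed = any(n < 0.5 * expected_share for n in counts.values())
print("skew warning:", skewed)
```

In practice the threshold and the grouping attribute would be project-specific choices, but even this crude check makes the data's male dominance visible before any model is trained on it.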
3.5 Possible Justifications for Actions
The company: According to the report, the company started this as an
experiment and hence gave little thought to questions of privacy,
morality, or ethics.
Engineers: The engineers also treated this as an experiment to discover
the feasibility of using AI in hiring, so they did not design and
implement the system to a ”production” standard through proper research
and testing.
Managers: According to reports, managers were impressed by the
performance of the AI automation and assumed the system to be well tested
and ready for production, so they readily adopted it to ease their
day-to-day labour.
3.6 Key Statements
”...utilized natural language processing (NLP) and machine learning
(ML) to find the top applicants that would fit the job profile...”
”...used sophisticated AI algorithms to learn key traits from successful
job applicants’ resumes over a period, and look for similar markers in
resumes submitted for screening...”
”...attention of the company that its automated rating of candidates,
especially for technical job titles like software developer and
architect, was not done in a gender-neutral way...”
”...data that was used for training the AI system, which mostly
consisted of resumes of male employees, reflecting the then trend of
male dominance in the company...”
3.7 Questions raised
Had the company considered the moral and ethical implications of this
project?
Did the engineers analyse the data used to train the AI system?
Did the engineers do proper testing of the system?
Did managers check whether the software was certified as ready for use in
their work?
Was consent obtained from the people whose data was used to train the AI?
3.8 Analogies employed
Savage vs Data Protection Commissioner and Google Ireland Ltd, 9
February 2018[3]: Google’s algorithms associated a person’s private
information with derogatory words.
Mary Dempsey vs NUI Galway: Direct discrimination on gender, family
status, and disability grounds, due to which less favourable treatment
was received in the employment contract.
3.9 Codes of Ethics Utilised
Personal data ethics: Personal data is to be collected with the consent
of its owner, under full disclosure of its use and retention period. It
is good practice to anonymise data when identity is not required for the
research, and to ensure sufficient confidentiality when making data
available for reuse.
Fairness in employment: Ensuring a person has an equal footing with any
other in showcasing his or her talents, irrespective of any immoral
discriminating factors.
Business ethics: A set of ethical values that helps a company define
and maintain standards of acceptable behaviour in a business context.
Professional ethics: A set of ethical values that helps a professional
define and maintain standards of acceptable behaviour in a business
context.
4 Alternative Proposals
Pessimistic: The company could wait for AI technology to mature to the
point where it can mimic complex human decision-making, relying on human
effort in the meantime.
Optimistic: The company could optimise the AI system to mimic complex
human decision-making and prevent bias through design modifications, and
test the system thoroughly before deployment to ensure it works as
expected, free from bias and other issues.
Compromise: The company develops a system free from bias and other
issues, tests it thoroughly, and has human HR periodically cross-verify
the AI system’s suggestions and markers. While the AI system does the
heavy lifting of hiring, such as CV shortlisting, human HR has the final
say on hiring decisions.
5 Conclusion
The objective of amalgamating AI technologies into traditional hiring is
to save human beings from the various tedious labours of the hiring
process. This style of recruitment is relatively new but growing rapidly.
Implications concerning data privacy, ethics, labour law, technology,
feasibility, and the actual need for such a system are to be considered
carefully by every organisation before adopting one. And once such a
system is adopted, there should be frequent checks to ensure that it
behaves within acceptable operational boundaries. It is undisputed that
the use of AI can speed up the hiring process and save human effort, but
applied incorrectly or in unethical ways it can result in loss of money
and business reputation.
[1] Y Acikgoz. “Employee recruitment and job search: Towards a multi-
level integration”. In: Human resource management review 29 (2019),
pp. 1–13.
[2] Arvind Narayanan. “21 fairness definitions and their politics”. In: https:// (2018), p. 1.
[3] Courts. “Judgment”. In:
[4] Tsakalidis Faliagka Ramantas. “Application of machine learning algo-
rithms to an online recruitment system”. In: Proc. International
Conference on Internet and Web Applications and Services (2012),
pp. 3–7.
[5] Kowalkiewicz Kaczmarek. “Information extraction from CV”. In:
Proceedings of the 8th International Conference on Business Informa-
tion Systems (2005), pp. 3–7.
[6] Government of Ireland. “Discrimination in Specific Areas: Discrimination
by employers”. In: Employment Equality Act, 1998 (1998), p. 8.
[7] Government of Ireland. “Postponement of limitation period in case of
mistake.” In: Statute of Limitations, 1957 (1957), p. 72.
[8] European Union. “General Data Protection Regulation”. In: General Data
Protection Regulation, Regulation (EU) 2016/679 (2016).
[9] LinkedIn. “LinkedIn Releases 2019 Global Talent Trends Report”. In:
talent-trends-report (2019).
[10] B. Mueller J.R Baum. “Recruitment sources and Post-hire outcomes:
The mediating role of unmet expectations”. In: International journal
of selection and assessment 13 (3) (2011), pp. 188–197.
[11] “Hiring by Machine”. In:
[12] Jeffrey Dastin. “Amazon scraps secret AI recruiting tool that showed
bias against women”. In: us-amazon-com-jobs-automation-insight/
amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-
idUSKCN1MK08G (2018).
[13] G Tecuci. “Artificial Intelligence”. In: Wires computational statistics
4(2) (2010), pp. 168–180.