An overview of ethical issues in using AI
systems in hiring with a case study of
Amazon’s AI-based hiring tool
Akhil Alfons Kodiyan
November 12, 2019
Corporations spend a great deal of time and money to find the perfect candidate for an open position, as that can be the difference between success and failure. According to a recent report from LinkedIn, time-to-hire and cost-to-hire metrics are trending upward for middle- to top-level openings, so key positions stay open longer than normal despite the additional spending. This has led companies like Amazon to look for innovative ways to reduce such time and expense through the application of artificial intelligence.
According to reports, in 2014 Amazon Inc. set up a team to build a tool to review job applicants’ resumes, utilizing natural language processing (NLP) and machine learning (ML) to find the top applicants that would fit the job profile. Once implemented, this software would use AI algorithms to learn key traits from successful applicants’ resumes over a period of time and look for similar markers in resumes submitted for screening. The tool would then rate each candidate on a scale of one to five stars, much like the rating system used for products on Amazon, depending on how closely they resembled prior successful candidates.
By the end of 2014, use of this experimental tool was widespread in the company, and some relied heavily on it as it saved a significant amount of time. By 2015, it came to the company’s attention that, for technical job titles like software developer and architect, ratings were not being produced in a gender-neutral way. The company tasked its engineers with investigating the root cause. After much digging, the engineers concluded that the cause of the bias was the data used to train the AI system: it consisted mostly of resumes of male employees, reflecting the then-prevailing male dominance in the company and the tech industry. This unknowingly biased training data led the algorithms to form associations that downgraded resumes including words like “women’s”, as in “women’s chess club captain”. It was also reported that the engineers identified cases where the system downgraded graduates of two all-women’s colleges.
These discoveries led Amazon to rewrite the algorithms to be neutral in that context, but it was concluded that such an AI system might, in theory, develop other ways of sorting candidates that could to some degree be discriminatory.
2 Literature Review
2.1 Traditional Hiring
Traditional hiring does not follow any fixed model. It usually starts with the company identifying an open position, followed by an analysis that results in a job description. This is then advertised through in-house channels, such as an internal job portal, external channels, such as LinkedIn, Monster, or head-hunters, or both. The CVs so sourced are then pooled and screened by HR personnel and subject-matter experts, and the shortlisted candidates are interviewed to select the final candidates.
Though this method of hiring is time-tested and has a human touch, its main limitations are the time and cost involved.
2.2 AI In Traditional Hiring
The concept of artificial intelligence has been around for a while and has found applications in many research fields, but only during the last decade has the technology been further developed and implemented within many different organizational settings. The areas in which AI can be realized are vast, but according to Tecuci, knowledge acquisition, natural language, and robotics are the main ones.
Natural language processing (NLP) is a process by which information and knowledge can be gathered by scanning plain text. This kind of knowledge-extraction process can be used to automate the scanning of resumes and gather relevant information, which can then be used to rank applicants by their suitability for a job profile. Building this type of AI system requires training data from which the underlying algorithm learns correlations between the various traits found in a resume and the job profile, in order to predict the suitability of the applicant. Companies like Amazon have reportedly built similar systems to assist in their recruiting, a case we discuss in detail in the next section.
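To make the mechanism concrete, the following is a deliberately minimal, purely illustrative sketch of marker-based resume rating in the spirit of the system described above. The tokenizer, the marker-counting scheme, and the star scaling are all assumptions for illustration; the design of the real system is not public.

```python
from collections import Counter

def tokenize(text):
    # Naive tokenizer: lowercase and strip basic punctuation.
    # A real system would use a proper NLP pipeline.
    return [w.strip(".,").lower() for w in text.split()]

def learn_markers(successful_resumes):
    # Count in how many prior successful resumes each term appears.
    counts = Counter()
    for resume in successful_resumes:
        counts.update(set(tokenize(resume)))
    return counts

def star_rating(resume, markers, max_stars=5):
    # Rate a new resume by how strongly it overlaps with the learned
    # markers, scaled into a 1..max_stars range.
    tokens = set(tokenize(resume))
    total = sum(markers.values()) or 1
    overlap = sum(markers[t] for t in tokens if t in markers)
    return max(1, min(max_stars, round(max_stars * overlap / total)))
```

Even this toy version shows how bias can creep in: vocabulary absent from the historical pool of successful resumes contributes nothing to the overlap, so applicants whose resumes draw on that vocabulary are systematically rated lower.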
2.3 Challenges In Adopting AI
Challenges in adopting AI technologies can be broadly classified as technological, privacy-related, and ethical. Of these, the ethical and privacy concerns are found to be the main factors limiting the adoption of these technologies in modern hiring workflows, whereas the technological challenges seem to dissolve with the rapid progress of innovation in the industry.
Most AI systems need training before they can be put into use. This requires a training data set which, in the context of hiring, may include the personal information of both failed and successful applicants, so that the system can deduce common markers of each group. This raises questions of personal data privacy and data protection.
2.4 Ethical Challenges In Adopting AI
The use of AI poses some serious ethical dilemmas and hard questions, and it is these challenges that hamper the widespread adoption of this technology in hiring. Chief among them: How does the AI ensure fairness? How does the system manage irreconcilable ideas? How will diversity in a company be maintained? Does the system have sufficient contextual integrity? Is too much reliance on AI technologies dangerous?
When people speak of fairness in relation to AI, they often draw on several different meanings of fairness. It has been found that in computer science there are around 21 definitions of fairness. For example, fairness often means equal opportunity. Another definition would be freedom from bias based on gender or colour. Yet another would be a standard of treating people equally across various dimensions, such as the interpersonal and the legal. In summary, despite its vagueness, fairness remains a very important moral value that people expect to see in an AI system, yet it is one of the most common areas of ethical trouble in intelligent software.
Optimising an AI system for fairness is a tricky job, as fairness spans many areas and some of them are mutually exclusive. For example, a company might wish to provide equal opportunity without any bias, and therefore cannot adopt a notion of fairness that corrects for historical or inherent disadvantages or social injustices. Implementing an AI system that takes care of both would be challenging and would often require sacrifices.
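This tension can be made concrete with a hedged sketch on hypothetical toy data: two common fairness definitions, demographic parity (equal offer rates across groups) and equal opportunity (equal offer rates among qualified applicants), can disagree on the very same set of decisions.

```python
def selection_rate(decisions, groups, group):
    # Demographic parity compares overall offer rates across groups.
    idx = [i for i, g in enumerate(groups) if g == group]
    return sum(decisions[i] for i in idx) / len(idx)

def true_positive_rate(decisions, labels, groups, group):
    # Equal opportunity compares offer rates among *qualified*
    # applicants (labels[i] == 1) only.
    idx = [i for i, g in enumerate(groups) if g == group and labels[i] == 1]
    return sum(decisions[i] for i in idx) / len(idx)
```

On a toy data set where groups A and B each see half their applicants receive offers, demographic parity is satisfied; yet if group B contains more qualified applicants than group A, group B's qualified applicants can have a lower offer rate, violating equal opportunity. An optimiser satisfying one metric does not automatically satisfy the other.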
Hiring is an inherently discriminating process in which some applicants receive offers while others do not, based on certain traits that define a ”good” candidate. Over the long run, such ”good” traits create diversity at a level with which the company is comfortable. Should such parameters be a functional requirement of the AI project in order to get optimal performance, or is that against fairness?
Contextual integrity is the appropriateness of using information concerning an individual in a way that aligns reasonably well with the individual’s expectation of privacy at the moment he or she shared the information. When candidates submit their personal information while applying for a job, subjecting them to an AI process carries an implicit trust violation. One could argue that these data were submitted to the company for evaluation, are thereby owned by the firm, and may be used as it sees fit; but this view gives rise to various other ethical issues related to data ownership.
As AI systems become faster and more accurate at mimicking human decisions, an organization might come to rely on AI more than on human judgment. In such a scenario, will the organization eventually lose its ability to make a hiring decision without the help of an AI?
3 Liffick’s analysis
In this section, Liffick’s methodology of analysis will be applied to Amazon’s failed automated resume-rating system, which was discussed in the introduction.
3.1 Main Participants and Actions
•Reporter (Reuters): Published an article in October 2018 about an experimental AI recruiting tool used by Amazon Inc. that showed bias against women.
•The company (Amazon Inc.): In 2014, tasked its engineers with creating an AI-based tool to assist in the company’s head-hunting process.
•Engineers: Built the AI tool that automatically rates candidates on a scale of 5 stars, using training data from the last 10 years of interviews conducted by the company.
•Managers: Primarily used this tool during the 2014-2015 period and, toward the end of 2015, realised that the tool was gender-biased.
•Job Seekers (implied participant): Underwent shortlisting via the new automated system.
3.2 Reduced List
•The Company: As the party responsible for the creation of the tool.
•Engineers: As this group developed the algorithms and unknowingly used biased data to train the software.
•Managers: As the primary users and the ones who discovered the limitations of the system.
3.3 Legal Considerations
•Employment Equality Act, 1998, Section 8: This case is clearly in violation of this section, as it discriminates against a prospective employee based on gender.
•Statute of Limitations, 1957, Section 72: This section works in favour of the company, as it was experimenting with a technology whose behaviour is not 100% predictable and was unaware of the bias in the training data.
•General Data Protection Regulation: A couple of significant areas apply. First, the GDPR states that a company needs an applicant’s consent to process their sensitive data; second, the company needs to be transparent about how it uses these data and how long it intends to keep them. Both of these seem to have been violated in this case, as the report implies candidates had no knowledge that they were part of an AI experiment.
3.4 Possible Options for Participants
–Could have provided ”no bias” as a functional criterion.
–Could have researched implications and possible challenges of us-
ing AI in hiring.
–Could have relied more on human HR personnel rather than building an AI system.
–Could have done a statistical analysis of the training data set, as such an analysis might have warned them about the data skewness.
–Should have considered moral and ethical requirements as implicit functional requirements.
–Could have ensured that the system was thoroughly tested before incorporating it into their work schedule.
–Could have refrained from using such tools and relied on human effort to get the work done.
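The statistical-analysis option above is cheap to act on. A minimal sketch of the kind of skew check the engineers could have run, assuming the training records carry a hypothetical demographic field such as `gender` (the threshold of half an even split is an arbitrary illustrative choice):

```python
from collections import Counter

def representation_report(records, field):
    # Compute each group's share of the training set and flag any group
    # whose share falls below half of an even split across groups.
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    parity = 1 / len(counts)
    return {g: {"share": n / total,
                "underrepresented": n / total < parity / 2}
            for g, n in counts.items()}
```

Run on a pool that is 90% male, such a report would immediately flag female applicants as underrepresented, which is exactly the skew that Amazon's engineers later identified as the root cause of the bias.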
3.5 Possible Justiﬁcations for Actions
•The Company: According to the report, the company started this as an experiment and hence gave little thought to questions of privacy, morality, or ethics.
•Engineers: The engineers also considered this an experiment to discover the feasibility of using AI in hiring, so they did not design and implement the system in a way fit for ”production” through proper research and testing.
•Managers: According to reports, managers were impressed by the performance of the AI automation and considered the system well tested and ready for production, so they adopted it readily to ease their day-to-day labour.
3.6 Key Statements
•”...utilized natural language processing (NLP) and machine learning
(ML) to ﬁnd the top applicants that would ﬁt the job proﬁle..”
•”...used sophisticated AI algorithms to learn key traits from successful
job applicants resume, over a period and look for similar markers in
resumes submitted for screening...”
•”...attention of the company that its automated rating of candidates, especially for technical job titles like software developers and architects, was not done in a gender-neutral way..”
•”... data that was used for training the AI system, which mostly con-
sists of resumes of male employees in reﬂection to the then trend of
male dominance in the company...”
3.7 Questions raised
•Has the company considered the moral and ethical implications of this project?
•Did the engineers analyse the data used for training the AI system?
•Did the engineers do proper testing of the system?
•Did managers check whether the software was certified ready for use in production?
•Was consent obtained from the people whose data was used for training the AI?
3.8 Analogies employed
•Savage vs Data Protection Commissioner and Google Ireland Ltd, 9 February 2018: Google’s algorithms associated a person’s private information with derogatory words.
•Mary Dempsey vs NUI Galway: Direct discrimination on gender, family status, and disability grounds, due to which less favourable treatment was received in the employment contract.
3.9 Codes of Ethics Utilised
•Personal data ethics: Personal data collection is to be done with the consent of the owner, under full disclosure of its use and retention period. It is good practice to anonymise data when identity is not required for the research, and to ensure sufficient confidentiality when making data available for reuse.
•Fairness in employment: Ensuring a person has equal footing with any other in showcasing his or her talents, without any immoral discrimination.
•Business Ethics: A set of ethical values that helps a company deﬁne
and maintain standards of acceptable behaviour in a business context.
•Professional Ethics: A set of ethical values that helps a professional define and maintain standards of acceptable behaviour in a business context.
4 Alternative Proposals
•Pessimistic: The company could wait for AI technology to mature so that it can mimic complex human decision-making, and meanwhile rely on human effort.
•Optimistic: The company could optimise the AI system to mimic complex human decision-making and prevent bias from occurring via design modifications, and also test the system thoroughly to ensure it works as expected and is free from bias and other issues before deployment.
•Compromise: The company develops a system free from bias and other issues, tests it thoroughly, and has human HR periodically cross-verify the AI system’s suggestions and markers. While the AI system does the heavy lifting of hiring, such as CV shortlisting, human HR has the final say on the hiring decision.
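The compromise proposal can be read as a two-stage workflow. A minimal sketch, with hypothetical `score` and `human_review` callbacks standing in for the AI model and the HR reviewer:

```python
def shortlist(candidates, score, threshold):
    # Stage 1 - the AI system does the heavy lifting: automated
    # CV shortlisting against a suitability threshold.
    return [c for c in candidates if score(c) >= threshold]

def hire(candidates, score, human_review, threshold):
    # Stage 2 - human HR retains the final say: only AI-shortlisted
    # candidates that also pass human review receive offers.
    return [c for c in shortlist(candidates, score, threshold)
            if human_review(c)]
```

The design point is that no candidate is rejected or hired by the AI alone: the model only narrows the pool, and every offer passes through a human decision, which also gives HR a natural place to spot systematic skews in the AI's suggestions.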
5 Conclusion
The objective of amalgamating AI technologies into traditional hiring is to spare human beings various kinds of tedious labour in the hiring process. This style of recruitment is relatively new yet rapidly growing. Implications such as data privacy, ethics, labour law, technological feasibility, and the actual need for such a system are to be considered carefully by every organisation before adopting one, and once it is adopted there should be frequent checks to ensure that the AI system is behaving within acceptable operational boundaries. It is undisputed that the use of AI can speed up the hiring process and save human effort, but applied incorrectly or unethically it can result in loss of money and business reputation.
References

Y. Acikgoz. “Employee recruitment and job search: Towards a multi-level integration”. In: Human Resource Management Review 29 (2019).
Arvind Narayanan. “21 fairness definitions and their politics”. In: https://fatconference.org/static/tutorials/narayanan-21defs18.pdf (2018), p. 1.
Courts. “Judgment”. In: http://www.courts.ie/Judgments.nsf/0/58DE5996E11841E2802582570043CFF3
Tsakalidis, Faliagka, Ramantas. “Application of machine learning algorithms to an online recruitment system”. In: Proc. International Conference on Internet and Web Applications and Services (2012).
Kowalkiewicz, Kaczmarek. “Information extraction from CV”. In: Proceedings of the 8th International Conference on Business Information Systems (2005), pp. 3–7.
Government of Ireland. “Discrimination in Specific Areas: Discrimination by employers”. In: Employment Equality Act, 1998 (1998), p. 8.
Government of Ireland. “Postponement of limitation period in case of mistake”. In: Statute of Limitations, 1957 (1957), p. 72.
European Union. “General Data Protection Regulation”. In: Regulation (EU) 2016/679 (2016).
LinkedIn. “LinkedIn Releases 2019 Global Talent Trends Report”. In: talent-trends-report (2019).
J.R. Baum, B. Mueller. “Recruitment sources and post-hire outcomes: The mediating role of unmet expectations”. In: International Journal of Selection and Assessment 13(3) (2011), pp. 188–197.
Princeton.edu. “Hiring by Machine”. In: https://aiethics.princeton.edu/wp-
Jeffrey Dastin. “Amazon scraps secret AI recruiting tool that showed bias against women”. In: https://www.reuters.com/article/us-amazon-showed-bias-against-women-idUSKCN1MK08G (2018).
G. Tecuci. “Artificial Intelligence”. In: WIREs Computational Statistics 4(2) (2010), pp. 168–180.