The ethics of people analytics:
risks, opportunities
and recommendations
Aizhan Tursunbayeva
University of Twente, Enschede, The Netherlands
Claudia Pagliari
University of Edinburgh, Edinburgh, UK, and
Stefano Di Lauro and Gilda Antonelli
University of Sannio, Benevento, Italy
Abstract
Purpose – This research analyzed the existing academic and grey literature concerning the technologies and
practices of people analytics (PA), to understand how ethical considerations are being discussed by
researchers, industry experts and practitioners, and to identify gaps, priorities and recommendations for
ethical practice.
Design/methodology/approach – An iterative "scoping review" method was used to capture and
synthesize relevant academic and grey literature. This is suited to emerging areas of innovation where formal
research lags behind evidence from professional or technical sources.
Findings – Although the grey literature contains a growing stream of publications aimed at helping PA
practitioners to "be ethical," overall, research on ethical issues in PA is still at an early stage. Optimistic and
technocentric perspectives dominate the PA discourse, although key themes seen in the wider literature on
digital/data ethics are also evident. Risks and recommendations for PA projects concerned transparency and
diverse stakeholder inclusion, respecting privacy rights, fair and proportionate use of data, fostering a systemic
culture of ethical practice, delivering benefits for employees, including ethical outcomes in business models,
ensuring legal compliance and using ethical charters.
Research limitations/implications – This research adds to current debates over the future of work and
employment in a digitized, algorithm-driven society.
Practical implications – The research provides an accessible summary of the risks, opportunities, trade-offs
and regulatory issues for PA, as well as a framework for integrating ethical strategies and practices.
Originality/value – By using a scoping methodology to surface and analyze diverse literatures, this study
fills a gap in existing knowledge on ethical aspects of PA. The findings can inform future academic research,
organizations using or considering PA products, professional associations developing relevant guidelines and
policymakers adapting regulations. It is also timely, given the increase in digital monitoring of employees
working from home during the Covid-19 pandemic.
Keywords Human resource management, People analytics, HR analytics, Workforce analytics, Human
resource information systems, HRIS, Ethics
Paper type Literature review
© Aizhan Tursunbayeva, Claudia Pagliari, Stefano Di Lauro and Gilda Antonelli. Published by Emerald
Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0)
licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both
commercial and non-commercial purposes), subject to full attribution to the original publication and
authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Funding: This research received no specific grant from any funding agency in the public, commercial
or not-for-profit sectors.
Declaration of interest statement: The authors declare no potential competing interests.
Received 13 December 2019. Revised 30 July 2020 and 16 November 2020. Accepted 8 February 2021.
Personnel Review
Emerald Publishing Limited
0048-3486
DOI 10.1108/PR-12-2019-0680
Introduction
People analytics (PA) is an emerging area of innovation which, although it draws on
traditional principles of human resources management (HRM), represents a seismic shift in
the power of organizations and their leaders to understand, shape and strategically optimize
their workforce (e.g. Fitz-Enz and Mattox, 2014). This shift arises from the use of digital and
data science methods to harvest, analyze and visualize complex information about individual
employees, teams, divisions and the workforce as a whole, to provide actionable insights.
Such approaches, which may be applied at the level of discrete applications or enterprise-wide
information and communications infrastructure, can enable greater transparency about
individuals' performance, skills, aptitudes, weaknesses, threats and future potential and may
be useful throughout the employee life cycle, from talent acquisition to retirement (e.g.
Edwards and Edwards, 2016). They can also be used to profile team dynamics and
communication networks, to understand their effects on organizational resilience and
outcomes (e.g. Cross et al., 2010). Recently, machine learning and artificial intelligence (AI)
have begun to feature in these innovations to analyze complex performance data, screen
potential employees, develop personalized training recommendations, enable smart
scheduling, predict future performance, infer employee satisfaction or gear payments to
employee "value" (e.g. Nunn, 2018).
Increasingly, PA techniques are extending beyond in-work metrics to new areas hitherto
outside the reach of human resource (HR) departments or managers, including the monitoring
of employees' personal emails, social media activity and interactions with digital devices and
apps. These may be presented as a means of supporting the "employee experience" or
enhancing workplace "wellness" whilst, in fact, also providing 24/7 intelligence about
location, activity, mood, health and social life (e.g. Ajunwa et al., 2017). Employee data are also
being used to train algorithms to modify or "shape" behavior in and outside of the workplace,
such as through gamifying tasks and incentives (e.g. Cardador et al., 2017).
Although relatively new, PA innovations are slowly, and often silently, working their way
into routine practice in many organizations. Indeed, 84% of respondents in the 2018 Global
Human Capital Trends survey (Deloitte Insights, 2018) reported PA as being "important" or
"very important," making it the second highest ranked HR trend. While it is unsurprising, and
to some extent encouraging, that organizations are keeping up with new technologies and
seeking to improve their effectiveness and resilience through better use of data, few are
meaningfully engaging with the important ethical challenges and risks these present for
employees' privacy, autonomy and future work opportunities (Tursunbayeva et al., 2018).
Conversely, organizations may be unaware of the potential of PA to shine a light on unethical
practices, such as corporate gender bias, fraudulent expense claims or intellectual property
theft, which could help to improve accountability and integrity in the workplace (e.g. Holeman
et al., 2016). Balancing these ethical requirements is challenging (Delios, 2010) and magnifies
existing ethical dilemmas for HRM professionals faced with the need to produce efficiency
gains without demoralizing the workforce (e.g. Ekuma and Akobo, 2015). Nevertheless,
grasping this nettle is imperative, given changes in the social, regulatory and policy
environment over the last decade, as described in Box 1.

Box 1. The changing context of accountability
(1) The public has become more critical and less forgiving of corporate misbehavior (Rivera and Karlsson, 2017)
(2) Regulations and laws on the protection of personal data have become more proactive and punitive in many countries (e.g. European Commission, 2020)
(3) More companies are pursuing growth in emerging markets where ethical risks may be heightened, or relying on extended global supply chains that increase counterparty risks
(4) Digital communication has become the norm, exposing companies and the executives who oversee them to new information risks
(5) The 24/7 news cycle and social media can rapidly spread and amplify reputationally damaging stories
(6) Employee lawsuits are on the rise, with personal data abuse set to join gender and racial bias as top trends (e.g. Fernandez-Campbell, 2018)
Two academic scoping reviews focused on PA systems and practices have recently been
published (Marler and Boudreau, 2017;Tursunbayeva et al., 2018). The former draws on the
scholarly literature, while the latter draws also on a wide range of online sources to map the
emergence of the term PA, the value propositions offered by vendors of PA tools and services
and the PA skill sets being sought by professionals. Amongst other findings, these revealed
that there has been little academic research on the topic of PA, despite the mushrooming
market penetration of vendor solutions and widespread corporate interest in engaging with
these innovations. An important observation arising from one of these reviews was the near
absence of ethical considerations in the corpus of academic, grey and online literature, despite
the significant risks to privacy and autonomy these innovations present for employees
(Tursunbayeva et al., 2018), suggesting a need for further investigations.
The European General Data Protection Regulation (GDPR) has begun to orient vendors
and users of PA innovations to their vulnerabilities and potential liabilities (e.g. Politou et al.,
2018), but leaves gaps for which ethical guidelines are needed (Sodeman and Hamilton, 2019).
This includes the new types of risk presented by predictive algorithms and biometric data,
which have implications for choice, control and identity in the context of work.
Although no research-driven framework of ethical considerations for PA so far exists, the
literature on HR ethics offers high-level principles which are relevant to this discussion. For
example, the Chartered Institute for Personnel and Development (CIPD) draws on a range of
perspectives when considering HR ethics, at the heart of which is fairness, a concept grounded
in moral philosophy (Clark, 2015), as well as principles around work as a force for good, respect
for employees and the importance of integrity for the "people profession" (CIPD, 2020).
The specialist community of practice involved in the development and implementation of
PA systems has also recently started to take ethical issues more seriously, giving rise to an
untapped literature in need of synthesis (Mixson, 2019).
This rapid scoping review aimed to respond to this gap through a targeted examination of
the ethical issues described within existing academic and professional discourse on PA. The
objectives were to map the risks/opportunities and recommendations expressed in these
communities, alongside related literature and real-world examples. As such, it complements
existing socio-legal analyses on topics such as workplace surveillance and the gig economy
(e.g. Ajunwa et al., 2017;Wood et al., 2019) and contributes to emerging discourses on the
future of work. It uses plain English to summarize and synthesize the issues in a way that can
be easily interpreted by our target audiences (see Figure 1) and used in practice.
Method
Scoping review methods are suited to emerging areas of innovation, where formal research
may be sparse but sources of relevant evidence and knowledge are nonetheless accumulating
(Arksey and O'Malley, 2005). Rather than attempting to be exhaustive and replicable, as with
systematic evidence reviews, these reviews are designed to rapidly understand the scope, key
considerations and maturity of an area, typically to inform research or policy.
Search strategy and article screening and selection
Scoping academic literature. Seven HR-related keywords from recent human resource
information systems (HRIS) and PA literature reviews (Tursunbayeva et al., 2016, 2018) were
combined with ethics-related keywords to iteratively search the Web of Science Core
Collection (WoS) for literature published prior to December 31, 2019, as shown in Figure 2.
WoS is an interdisciplinary online literature database covering publications from the
sciences, social sciences, arts and humanities. Snowballing from qualifying article reference
lists was used to find other relevant works.
Scoping socially curated grey literature. Seven PA hashtags were created mostly from the
HR-related keywords used to search the academic literature, and then combined with the
#ethics hashtag (Figure 2). Twitter's "advanced search" function was then used to identify
tweets linking to relevant articles, studies, industry reports or other information sources,
which we refer to as "socially curated" grey literature. The preliminary search period was
from March 21, 2006 (the date when Twitter was created) to December 31, 2019. The full texts
of articles identified via the Twitter hashtag searches were located and analyzed. Additional
articles identified through "snowballing" from these publications and recent relevant papers
known to the authors were also integrated during the synthesis and interpretation phase.
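As an aside for readers who wish to reproduce the search logic, the sketch below simply enumerates the Boolean query and hashtag combinations described above. It is a minimal illustration only: the actual searches were run manually through the WoS and Twitter interfaces, and the since:/until: date operators shown are Twitter's standard advanced-search filters rather than part of the review protocol.

```python
# Illustrative reconstruction of the search strings described above.
# The review searches were executed manually in Web of Science and in
# Twitter's advanced search; this sketch only enumerates the combinations.

hr_keywords = ['"Human resource*"', "Workforce", "Labor", "Staff",
               "Employee", '"human capital"', "Personnel"]

# Academic search: HR keywords OR-ed together, then AND-ed with Analytic* and Ethic*
wos_query = "((" + " OR ".join(hr_keywords) + ") AND Analytic* AND Ethic*)"
print(wos_query)

# Grey-literature search: each PA hashtag paired with #ethics,
# restricted to the review's search window
pa_hashtags = ["#talentanalytics", "#peopleanalytics", "#hranalytics",
               "#humancapitalanalytics", "#humanresourcesanalytics",
               "#workforceanalytics", "#employeeanalytics"]
for tag in pa_hashtags:
    print(f"{tag} #ethics since:2006-03-21 until:2019-12-31")
```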
Data analysis
The disciplinary affiliation of academic journals publishing PA research was assessed with
reference to their classification in the Scimago Journal Ranking Portal (SJR) (2019). Seven
articles were classified manually, as the journals were not covered by SJR. Finally, we checked
the number of citations appearing for each article in Google Scholar to identify the most
impactful ones, and then extracted and grouped the key concepts covered in the included articles.
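To make this bookkeeping step concrete, the short sketch below groups a handful of hypothetical article records by SJR subject area and ranks them by citation count. The records, field names and figures are invented for illustration; they are not data from the review.

```python
# Hypothetical illustration of the journal-classification and citation-ranking
# step described above; the records below are invented, not review data.
from collections import defaultdict

articles = [
    {"title": "Article A", "sjr_area": "Business, Management and Accounting", "citations": 120},
    {"title": "Article B", "sjr_area": "Multidisciplinary", "citations": 45},
    {"title": "Article C", "sjr_area": None, "citations": 260},  # not covered by SJR: classify manually
]

# Group articles by disciplinary affiliation of their journal
by_area = defaultdict(list)
for article in articles:
    area = article["sjr_area"] or "Classified manually"
    by_area[area].append(article["title"])

# Rank by citation count to surface the most impactful articles
most_cited = sorted(articles, key=lambda a: a["citations"], reverse=True)

print(dict(by_area))
print([a["title"] for a in most_cited])
```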
Figure 2. Approach to identification, screening and analysis of academic and grey literature

Academic search string: (("Human resource*" OR Workforce OR Labor OR Staff OR Employee OR "human capital" OR Personnel) AND Analytic* AND Ethic*)
Grey literature search hashtags: #talentanalytics #ethics; #peopleanalytics #ethics; #hranalytics #ethics; #humancapitalanalytics #ethics; #humanresourcesanalytics #ethics; #workforceanalytics #ethics; #employeeanalytics #ethics

Academic literature: identified (WoS = 226; manually added = 9); screened = 213; analysed = 60; included for final analysis = 14
Grey literature: identified (tweets = 399); screened = 271; analysed = 118; included for final analysis = 68 (including 16 manually added)
Figure 1. Key stakeholder groups in PA
In the absence of a theoretically informed framework for classifying PA ethical risks, we used
open-coding to identify themes in the eligible academic and curated grey literature to create a
set of categories for organizing the findings.
Results
Publication characteristics
Academic research. Searching WoS yielded 226 articles, 204 of which were in English. After
screening by title, 51 of these articles were judged as potentially relevant, and their full texts
were reviewed, together with a further nine articles identified through snowballing from the
reference lists (see Figure 2). Articles that simply mentioned the need to consider ethical
issues in PA (e.g. Mesko et al., 2018) or did not focus specifically on both PA and ethics (e.g.
Newman et al., 2017) were excluded, leaving a total of 14 articles in the final sample of relevant
academic papers (see appendix 1).
Seven of these publications appeared in the last couple of years, peaking in 2017 (n = 5),
although the first relevant article was published in 2005. Four of the articles published in
journals available in SJR (n = 5) appeared in multi-disciplinary journals.
Fourteen of the papers' authors are affiliated with academic institutions in the USA. The
remaining authors are affiliated with academic institutions located in the UK, Germany,
Ireland, Thailand, Singapore, Australia, Finland and Sweden. Overall, ten relevant articles
were discussion or conceptual papers, three were empirical papers and one reported on an
experiment.
Socially-curated grey literature. Three hundred ninety-nine tweets containing the hashtags
of interest were identified (see Figure 2).
Of these, 323 contained "#peopleanalytics #ethics," 61 contained "#hranalytics #ethics,"
14 contained "#workforceanalytics #ethics" and one contained "#talentanalytics #ethics"
hashtags. The remaining keyword combinations, including "#employeeanalytics #ethics,"
"#humancapitalanalytics #ethics" and "#humanresourcesanalytics #ethics," did not
generate any results. Aside from the hashtags used for the search, the most commonly
used hashtags were #HR (used 205 times) and #futureofwork (used 160 times).
A total of 271 tweets remained after removing duplicates. The first relevant tweets
appeared in 2015; however, the majority were posted in 2019 (n = 126) (see Figure 3).
Conference live tweets, links to webinars, YouTube videos, other posts, non-working
links or articles that we were unable to find were removed from further analysis, leaving
118 tweets containing links to articles. Of these, 52 unique articles were included for full-text
analysis alongside 16 additional grey literature publications that were snowballed or that
the authors were familiar with from background reading (see appendix 2). Most of
these publications (n = 23) were published in 2019.
Analysis and discussion
Relevant issues identified in the PA literature fell into two broad categories: ethical risks
(and, conversely, opportunities) and recommendations, with a range of specific themes evident
within each of these, as summarized in Table 1.
Figure 3. Twitter results infographics
To aid contextualization and interpretation, we discuss these categories alongside other
relevant literature and real-world examples in the following section. Eligible articles
identified with our search strategy are marked with an asterisk to differentiate them from
other sources.
Risks for employees
Operationalizing bias and discrimination. Arguments favoring the use of PA solutions rely on
the notion that they are objective; indeed, many are designed with the "good" intention of
enabling HR decisions based on data rather than flawed or biased human reasoning.
Nevertheless, since these systems are designed by humans, the potential for prejudice,
misunderstanding and bias to be encoded into their algorithms remains.
In 2015, Amazon discovered that its "recruitment engine," used for screening and
prioritizing potential software developers, had been systematically discriminating against
female applicants. The system had been trained, using machine learning, to look for key
patterns and terms in resumes submitted to the company over ten years, primarily from men.
In effect, it had taught itself that male candidates were "better" (Dastin, 2018*). Although
Amazon sought to correct this bias, it finally abandoned the system in 2018. The case
illustrates how purely algorithmic PA systems can potentially have unintended
discriminatory consequences by using data about race, age, gender, sexual orientation and
disability to sort candidates.
Such bias may also be purposefully designed; for example, Facebook's ad-targeting
algorithms were implicated in a lawsuit filed by the Communications Workers of
America on behalf of its 7,000+ members. Originating with a complaint against T-
Mobile by a jobseeker who discovered that she was not seeing the same ads as her
daughter, this has extended to a Class Action against hundreds of other companies that
used Facebook's platform for allegedly ageist job advertising (Fernandez-Campbell,
2018). Writers such as Kim (2017*) point out that this type of "classification bias" is not
adequately covered in existing legislation, such as the US Age Discrimination in
Employment Act.
Table 1. Risks and recommendations emerging from the analysis

Risks for employees:
Operationalizing bias and discrimination
Psychological or social profiling
Behavior shaping
Reducing performance/people to numbers
Creating inconvenience or income insecurity
Threatening privacy or autonomy through tracking and surveillance

Risks for organizations:
Ethics as a point of risk for PA projects

Recommendations:
Transparency and fairness
Legal compliance
Ethical guidelines and charters
Proportionality and protection
Data rights and consent
Inclusion of stakeholders
People skills and culture
Evaluation
Ethical business models

Psychological or social profiling. PA has its roots in psychometrics and may embed tests of
personality and aptitude in its hiring and promotion algorithms. According to the
Association of Graduates, 60-70% of prospective employers in the USA and the UK are using
online personality tests in recruitment, which has been estimated as a $500 million business
growing by 10-15% a year (O'Neil, 2016a*). Opponents of this form of human quantification
argue that such tests can overlook moral character (Geller, 2018) and cultural or ethnic
differences (Kirke, 2019). They might also identify differences that could be labeled as
disabilities or mental health conditions, and thus be illegal under the Americans with
Disabilities Act of 1990 (O'Neil, 2016a*), particularly if they are used as "a mask for
discriminating against a protected class" (Anderson, 2018). Although few job applicants
rejected on the basis of such tests contact a lawyer, incomplete feedback and lack of expert
knowledge on sources of bias mean they are unlikely to be aware or empowered to do so (Kim,
2017*). Greater transparency is called for in this regard, particularly since personality tests
could potentially be poor predictors of job performance and may thus be both unfair on
candidates and inefficient for employers (e.g. O'Neil, 2016b*; O'Neil, 2018). Meanwhile, with
some recruiters now harnessing cross-platform analytics to profile potential employees from
their "digital exhaust" trails, psychometric testing may soon be supplanted by passive data
mining, presenting new ethical challenges around transparency, choice and privacy rights
(Cappelli, 2019).
Behavior shaping. Data on individual employees' performance patterns, combined with other data obtained
from emails and questionnaire responses, are also being used to feed algorithms that can send
personalized messages to shape or "nudge" behavior. Based on principles from behavioral
economics and persuasive psychology, these aim to encourage the achievement of work-
related goals for the individual, team or organization. An example referenced in our grey
literature results is the company Humu, founded by former Google executive Laszlo Bock.
Humu's "nudge engine" can set up reminders, prompt questions during meetings, as well as
encourage employee-centric activities like saving for retirement or opting for healthier snacks
(Wakabayashi, 2018; High, 2019*). While the company has been keen to show its ethical
credentials by emphasizing its respect for privacy and its ability to influence employees'
personal job satisfaction (e.g. High, 2019*), critics have pointed to a lack of transparency
around the purposes of nudges and uncertainties over whether employees know they are
being nudged, raising ethical questions around users' information rights, effects on their
personal autonomy and protection from manipulation (Wakabayashi, 2018).
Reducing performance/people to numbers. HR departments and senior managers are
widely using PA tools to monitor and measure (e.g. Guenole et al., 2018*) the performance of
individuals, teams and their workforce as a whole, presenting a range of ethical challenges.
Individuals: in contrast to screening and recruitment, performance management and
promotion require a stronger emphasis on compliance with training, the achievement of
targets and subjective ratings by managers. In the era of PA, these are becoming more
automated, with enterprise software making it easier for HR managers to quantify and profile
performance and time usage even at a distance. Proponents of PA argue that this can provide
workers with objective insights about their performance, optimize their development and
improve the objectivity of promotion decisions (Chowdhury, 2018*). Despite these worthy
goals, reducing employee performance to numbers can devalue other important
characteristics that are harder to measure, and has also been criticized for lacking context
(O'Neil, 2016b*). Technologies that allow keystrokes to be logged and work to be viewed by
supervisors also create a panopticon effect, reducing workers' privacy and autonomy, with
potentially negative effects on work satisfaction and mental health (Booth, 2019*). They have
also been shown to affect employees' inclusion in and access to future training and
development opportunities (Jeske and Calvard, 2020).
Teams: Advocates of PA also claim that it can bring insights about how teams are working,
which can improve their productivity and engagement. For example, using PA to help
basketball teams understand their players and to track and review mistakes is reported to have
had good results (O'Neil, 2016b*). Companies like Google and Microsoft are exploring how this
can be achieved in business settings (Hogan, 2016), although preliminary evidence suggests
that such analytics may offer limited value. For example, despite collecting multiple data
points, Google's "Aristotle" project was unable to identify consistent characteristics of successful
teams or team members (Bodie et al., 2016). These approaches also run the ethical risk of
reducing teams to the status of machines, in which "suboptimal" components can be replaced,
as well as ignoring the value of both diversity and synergistic working (O'Neil, 2016a*).
Populations: Some PA projects have been criticized for targeting organizational
populations more than teams and individuals, creating the potential for data and machine
learning to overprioritize and incentivize prototypically ideal characteristics at the risk of
creating a homogeneous workforce that fails to reap the benefits of individuality
(O'Neil, 2016a*).
Creating inconvenience or income insecurity. Some PA tools have also been blamed for
causing inconvenience to employees, particularly by automatically altering work schedules
in sectors with fluid workforces. For example, Starbucks used diverse types of data, from the
weather to pedestrian patterns, to feed its scheduling software, resulting in uncertainty
about available shift work (O'Neil, 2016b*). Data compiled by the US government suggests
that two-thirds of food service workers consistently get short-term notice of scheduling
changes. Following an exposé in the New York Times, legislation was introduced in Congress
to rein in scheduling software, but its progress has been stalled (O'Neil, 2016b*). In the
on-demand "gig" workforce, this problem is likely to become more prominent, adding to
income insecurity (Crerar, 2018). For example, a study of Uber drivers, highlighted in our grey
literature results, found that while they are theoretically in control of their work, deviating
from the company's algorithms could result in being banned from the platform (Möhlmann
and Henfridsson, 2019*). Some governments are seeking to tackle this with expectations of
guaranteed-hours employment and equal pay (e.g. UK), but competition and globalization of
the labor market are likely to make this hard to implement.
Threatening privacy or autonomy through tracking and surveillance. Issues around privacy
and surveillance dominated the ethical considerations examined in both the academic and
grey literatures. PA is often promoted as a means of enabling managers and organizations to
track and monitor their employees, both in the workplace and, in some cases, even in their
personal lives, for example, where these are linked to mobile phones or social media accounts.
Some scholars have speculated that the global variation in levels of workplace monitoring
reflects technological more than ethical differences (Pitesa, 2012*), while others point to the
role of political and cultural influences (Guenole et al., 2018*).
A number of academic articles have analyzed the diverse methods through which
employees can be monitored or surveilled. These include pre-employment checks, such as
credit reports, driving records, criminal records and drug-testing data, as well as
on-the-job monitoring, including electronic performance monitoring, e-mail monitoring,
audio and video monitoring (Pitesa, 2012*) and location surveillance (Kaupins and Minch, 2005*).
Recently, the research firm Gartner found that more than 50% of the 239 large
corporations it surveyed are using "nontraditional" monitoring techniques, including
scrutinizing who is meeting with whom, analyzing the text of emails and social media
messages, scouring automated telephone transcripts and even gleaning genetic data
(Wartzman, 2019*). Other research revealed similar results, reporting that leading PA users
are monitoring people data from diverse sources, including surveys (76%), integrated data
from HR and financial systems (87%) and social media (17%) (Agarwal et al., 2018*).
CareerBuilder's independent survey of 2,300 hiring managers reported that 70% of respondents in
2017 also used personal information obtained from social media to screen candidates, while
54% reported finding information on social media that led them not to hire a prospective
candidate for an open role (Mann et al., 2018*). The most commonly cited factor for this was
the candidate posting provocative or inappropriate content. The survey also reported that
third-party data brokers are often used to acquire this information, raising additional
challenges for governance and accountability (Mann et al., 2018*).
In contrast, narratives in the grey literature (mostly industry sources) suggest that most
employees are accepting of digital monitoring. For example, in a blog for the Academy to
Innovate HR, Mann et al. (2018*) cite a survey by ExecuNet suggesting that 82% of
employees expect prospective employers to "Google" them, although only 33% bother to
Google themselves. It has been argued that this acceptance is a result of organizations'
success in persuading employees that sharing personal information is in their interest, thus
shifting perceptions of workplace monitoring away from "authoritarian regimes" toward
"something that evinces an ostensibly participatory character" (Wartzman, 2019*) or to
"participatory surveillance" (Marchant, 2019*).
Employee tracking and monitoring projects were mentioned as particularly risky in the
creative and innovative industries, where people can require time-out for brainstorming ideas,
which might be measured by PA software as time spent not working (Booth, 2019*). Likewise,
as noted by Kim (2017*), a system cannot know when an employee has an upset stomach and
needs to be away from their desk - it just senses that they are not currently working.
Not only might monitoring tools and programs provide organizations with incomplete or
low-quality data about work, as in the examples above, surveillance may have unintended
negative effects on work itself. One academic experiment revealed that the prospect of active
monitoring reduced potential employees' impressions of an organization's ethics as well as
the likelihood of job acceptance and job satisfaction (Holt et al., 2017*). While higher pay
significantly increased the likelihood of job acceptance, it only marginally increased
perceived job satisfaction. The same experiment also revealed that none of the potential
justifications given by an employer for monitoring changed participants' perspectives on its
ethicality or their willingness to work at such a company (Holt et al., 2017*).
"Employee wellness programs" represent a particular class of workplace monitoring,
which may require staff to share their medical data, wear a biometric monitoring device or
even to be microchipped. An employee survey on wearables by PwC reported that 37% did
not trust their employer not to use the data against them in some way (Jacobs, 2017*).
Nevertheless, many organizations are still in the process of adopting wellness programs,
despite little evidence of their effectiveness. The Illinois Workplace Wellness Study (Jones
et al., 2019) enrolled 5,000 employee volunteers in a randomized controlled trial of a program
involving biometric health screening and online health risk assessment, linked to health and
wellness classes and financial incentives. The results revealed no impact on employee health
outcomes, productivity or company medical spending, and there was a strong self-selection
effect, with healthier employees more likely to participate. From an ethical perspective, this
suggests that such programs may inadvertently widen health inequalities. Such programs
have also been criticized for placing undue responsibility for health on the individual, and for
penalizing those who cannot comply, such as the disabled (Carroll, 2018*). Moreover, while
they are typically framed as benign and helpful, they are often designed more to reduce
corporate costs than benefit workers (Kellar-Guenther, 2016).
Even strong opponents of workplace monitoring, such as the American Civil Liberties
Union, acknowledge that employers have a right to undertake some monitoring (Kim, 2017*),
although it calls for ethical standards. Indeed, the academic literature already contains
proposals on how to make workplace monitoring less stressful. This can include, for example,
informing employees about the monitoring system, setting fair performance benchmarks;
and using documentation or records for benign purposes rather than for sanctions
(Moussa, 2015*). Educating and communicating with employees about monitoring are also
identified as the best ways to attain their consent and agreement (Kim, 2017*).
Risks for organizations
Ethics as a point of risk for PA projects. A theme seen in the grey literature concerned the role
of ethics as a challenge for PA projects, reflecting a growing acknowledgment in the
profession that successfully implementing these innovations is highly dependent on their
privacy and acceptability. In an Insight222 survey of 57 companies, 81% of respondents
reported that their workforce analytics projects were sometimes or often jeopardized by data
ethics/privacy concerns (Petersen, 2018*). Some organizations have been criticized for
spending money on PA systems but failing to act on the insights they bring about
unproductive work (Smith, 2015*), creating a gap between leaders and laggards in PA
adoption (Fleming et al., 2018*).
PA projects are relatively new, so organizations currently lack an extensive history of legal,
ethical or risk precedents to consult. It has been claimed that existing risk management
strategies are not fully applicable to PA projects because organizations may be unable to
recognize indicators of potential failure (Calvard and Jeske, 2018*).
Other concerns, reflected in both the academic and grey literature, relate to employees' lack
of trust in PA projects or their outcomes. A recent study concluded that 63% of employees
believe that their employer is tracking or gathering sensitive data about them, and 72%
believe their companies are not telling them what data they are collecting (Pease, 2018*).
Employees who do not trust their employers are less likely to provide relevant, truthful
information. Knowing one is being observed and judged or ranked on a second-by-second
basis can also lead to people gaming the system (Jacobs, 2017*).
Organizations are also reportedly putting PA projects on hold due to uncertainty over their
regulatory compliance, particularly with the high-profile GDPR. Despite this, in the run-up to
its enforcement in May 2018, only 53% of companies reported that they had been getting
ready for GDPR and only 22% that they had excellent safeguards to protect employee data
(Green, 2018*). The penalties for breaching GDPR can be severe, with organizations failing to
safeguard or misusing personal information facing fines of up to €20m or 4% of annual
worldwide turnover (Mann et al., 2018*). However, while GDPR represents a significant
advancement of employee rights in the digital era, its primary focus on protecting personally
identifiable information leaves open questions around the uses of anonymized or non-
identifiable data. More significantly, it only applies to European Union (EU) citizens, albeit also
to companies processing their data overseas. Australia and New Zealand are also reported to
have comprehensive regulations to protect employees' privacy (Pitesa, 2012*). However, there
is a regulatory deficit in other regions, particularly in developing countries. Nevertheless, even
in the EU, legislation on diverse types of privacy is not equally mature. For example, the right
of an individual (whether an employee or not) to location privacy has not been established
anywhere in the world, albeit this is implicitly covered by broader laws on personal data in
several countries. As an illustration, the Finnish Personal Information Law and Law about
Privacy and Security of Telecommunications are said to apply to location privacy although
"there are no laws in Finland that concern location information" (Sami, 2004 as cited in
Kaupins and Minch, 2005*). Conflicting rules on the data rights of employers and employees
also create complications when it comes to PA, with the invocation of "legitimate interest"
under GDPR giving rise to ambiguity when it comes to privacy rights (Petersen, 2018*).
The lack of robust legal protections in diverse parts of the world, including the USA, has
been exacerbated by the declining role of trade unions as a force to advocate for workers'
rights (including privacy rights). In the USA, this has been made worse by at-will
employment contracts, in which employees can be fired for any reason, giving employers
greater coercive powers over their employees (Suk, 2007), including through surveillance.
Judging what is acceptable and what is possible was mentioned as another huge dilemma
for HR and PA professionals. Many authors mentioned not only legal but also moral or ethical
dilemmas. One observation was that the agenda in PA projects is often left to technologists,
computer scientists or PA vendors, when what is really needed are experts in human
behavior and ethics (Calvard and Jeske, 2018*).
Increasingly, employees are putting pressure on corporate leaders to be more ethical, in
some cases staging protests and walkouts in response to perceived misuses of data or
algorithms (e.g. Helmore, 2019). State-sponsored programs applying PA-like tools to workers
are also raising concerns. For example, secretive data-mining company Palantir was recently
found to have covertly installed an app on manual workers' phones to monitor their
movements, social networks and communications. The project, conducted in association with
the US immigration authorities, resulted in multiple sackings and deportations of
undocumented migrants (Joseph, 2019).
Recommendations
In addition to the concerns raised in the academic and grey literature, a number of
suggestions and recommendations for managing the ethical risks of PA projects were seen in
the literature, which we have clustered into the categories shown in Table 1 and are
discussed below.
Transparency and fairness. Transparency was identified as being one of the most critical
considerations for PA projects. Diverse articles recommend that organizations communicate
their reasons for pursuing PA projects and the kind of benefits employees should expect from
them, rather than only describing what they will involve. PA projects lacking transparency may
be perceived by employees as unfair and thus encounter resistance to participation or acceptance,
although there is also a lack of clarity in how to define or measure fairness (Manyika, 2019).
Legal compliance. Adherence to legislation is an essential building block of all HR data
policies. A survey by Privacy International and freedominfo.org found that 57 countries,
mostly from Europe and North America, have passed privacy legislation, while a further 37
countries, mostly in Africa and South America, have pending efforts (Kim, 2017*).
Many authors referred to the introduction of GDPR as an opportunity for European
organizations to review their compliance with relevant laws and regulations. It was also
recognized that technology is rapidly evolving in ways that may be difficult to anticipate, and
a pressing question for HR practitioners is what to do in new situations that are not covered
adequately by legislation, bearing in mind that what may be legal is not automatically ethical.
Ethical guidelines and charters. Reports in the grey literature strongly recommend that
organizations develop and publish clear guidance in the form of an ethical charter, potentially
in collaboration with other organizations. A recent survey revealed that almost half of
respondents do not have a PA-related ethical charter in place yet (Petersen, 2018*). Aligning
the charter with the social norms of the country in which the organization is located was also
seen as important, since attitudes toward personal data collection and analysis can vary
between countries and cultures (e.g. Guenole et al., 2018*). The PA-related guidance recently
developed by consulting firm Insight222 (Green, 2018*) was cited as a useful resource, while it
was also noted that HR professionals are bound by broader professional standards (e.g. CIPD)
that should also guide their ethical standards of practice in relation to PA (Green, 2019).
Proportionality and protection. Articles in our review emphasize that PA practitioners
need to understand which approaches to data storage, access or analysis are permitted in
their jurisdiction, who their stakeholders are and their access rights, and who "owns" the data
on employee-held devices such as laptops and mobile phones (Jones, 2017*). They call for a
better mapping of the data types and methods used in PA, recognizing that "the ethical issues
with big data lie not so much with its collection but with the weaknesses in organizational
processes and systems that enable it" (Nunan and Di Domenico, 2015, p. 10, as cited in Calvard
and Jeske, 2018*). They also acknowledge the co-dependencies between technologies, laws
and social attitudes about what data should be protected and what should not (e.g. as for
employees with disabilities, where data may potentially be used both to discriminate and to
prevent discrimination).
It is strongly recommended that data collected for PA projects should be strictly job-
related, though it is acknowledged that it is not easy to draw a line between what is personal
and what is job-related, especially where data are collected from employer-owned cell phones
or notebooks (Bersin, 2019*).
The use of aggregated, non-identifying data is recommended where possible, to
demonstrate to employees that the purpose behind PA projects is to capture larger
organizational trends. For small teams, it is recommended to present a generic overview of the
results, ensuring that no single response can be attributed to a specific employee (Kumar,
2018*). Moreover, data that are not permitted or no longer useful should be deleted, as it is
claimed that about 60% of organizations possess such data and HR departments are among
the worst offenders (Jacobs, 2017*).
As employees' awareness of PA grows, they will start exercising their rights and may
request that HR correct or erase their data, increasing the need for transparency and security
on the part of HR/PA software providers and teams (Haim, 2018*). Blockchain is suggested as
one opportunity for good governance, enabling digital verification of employees' profiles, as
well as allowing potential new-hires to own and manage their data during the recruitment
process (Spence, 2018*). Approaches to "privacy by design" are also advocated, both when
creating procedures for the use of legacy HRIS and developing new digital platforms
(Lingard, 2018*), with a requirement to review their compliance on a regular basis. When
selecting PA solutions, organizations also need to follow ethical procurement processes and
supplier management procedures (Haim, 2018*).
It was also proposed that organizations should adopt the best practices already used for
the governance of algorithms in other sectors, such as healthcare and pharmaceuticals, as
well as standards for data collection, integrity, preservation and model validity (Kim, 2017*).
Data rights and consent. Aside from the legal requirements, it is recommended that
organizations inform employees of their right to opt-out of relevant data collection processes and
give them the opportunity to do so. For example, employees' right to informed consent is part of
the privacy guidelines from the Organisation for Economic Co-operation and Development
(Kaupins and Minch, 2005*). Organizations also need to consider whether employees are choosing
to participate freely (Mann et al., 2018*) or because they fear negative consequences. It is
also recommended that consent be renewed regularly (e.g. once every quarter).
Inclusion of stakeholders. There is an agreement, across the grey and academic literatures,
that diverse stakeholders need to be consulted and involved in PA projects to ensure these are
sustainable and successful (Calvard and Jeske, 2018*). Stakeholder-specific
recommendations include the following:
HR and PA professionals should execute only PA projects which they can be proud of, can
communicate openly about and which are compliant with the company's privacy comfort zone
(Guenole et al., 2018*). They are also encouraged to engage with works councils where these
exist. The specific recommendation for HR teams was to take control of the PA agenda, rather
than letting it be led by suppliers, and to rigorously monitor machine-related decisions to
make sure they are reasonable and unbiased, while also evidence-based (Agarwal et al., 2018*).
Consulting legal and/or compliance officers is important for ensuring compliance with
data anonymization policies and regulations, since HR teams "cannot know everything about
data privacy, legal requirements or ethics" (Green, 2018*).
Employees are critical stakeholders in PA projects and should never feel afraid to speak
up about their concerns (Leong, 2017*). Listening to employees' opinions can elucidate
questionable practices that management has potentially not considered (Kumar, 2018*) and
may be collected via anonymized surveys. For employees to feel safer in PA projects, it is
important to let them maintain a sense of ownership of the data that are being gathered (Jones,
2017*). The need to ensure that employees experience the benefits of PA projects, and not just
the organization, is also seen as critical (Marritt, 2016).
Managers are also seen as crucial in creating a safe space for employees to discuss
corporate ethics, to maximize transparency and minimize the dangers of whistle blowing
(Leong, 2017*).
New organizational roles such as Chief Data Officer, Chief Information Governance Officer
or Chief Privacy Officer, alongside information governance committees, are seen as ways of
protecting employee privacy while staying in line with corporate objectives (Leong, 2017*).
Ethicists are seen as valuable consultants by some commentators, helping decision-
makers and PA professionals to ensure the integrity of new projects (West, 2018*).
International organizations and governments have a macro-role to play in PA projects, as
they are responsible for creating, and monitoring adherence to, the policies related to
PA practices (Kim, 2017*).
People skills and culture. Several qualifying articles from the grey literature mentioned the
importance of PA skills and talent. It was recommended that employers should ideally try to
fill PA roles with internal candidates, who can have extensive company knowledge and serve
as translators in communicating the results of PA projects (Fleming et al., 2018*). Desirable
characteristics of PA leaders noted in the articles included patience, innovation, holistic
thinking, project and process management, adaptive leadership, ability to catalyze or broker
analytics and being a good brand ambassador (Green and Chidambaram, 2018*). However,
very few authors specified ethics amongst these soft skills. Of those that did so, it was
recommended that ethics should not only be included in PA training activities but also in
daily work, so employees operationalize ethical considerations (West, 2018*).
Evaluation. Monitoring and evaluation are key considerations for PA projects, and
communicating "quick wins" can encourage buy-in. It is recommended that, in addition to
their benefits for employers tied to the organization's strategic challenges and broader
transformational initiatives, decisions about future analytics investments can be made more
ethical by taking into account their impacts on "people outcomes," and that decisions should
be made by HR professionals and the company management rather than by suppliers. In
making these decisions, it is important to consider the potential harms that PA projects may
bring to employees and to plan strategies for managing risk and avoiding unintended
consequences (Pease, 2018*).
Ethical business models. It was noted in the grey literature that PA leaders are beginning to
realize that "risk may be a bigger strategic issue than growth" and are adjusting their business
models to include not only financial profits but also ethical aspects of doing business (Bersin,
2018*). As remarked in one of the grey literature publications, "thankfully, with each new data
scandal, helped by GDPR rules, a new [HR technology] product is launched with a different
business model" (Spence, 2018*). This recognition is reflected in the growing interest in ethics
amongst global technology companies, including the partnership between Amazon, Apple,
Facebook, Google, IBM and Microsoft aimed at studying and advancing public
understanding of AI and its influences on people and society, including ethical influences
(Bersin, 2018*).
Conclusions and implications
Interest in digital ethics has risen at an exponential rate in the last few years, with
governments, academics and the technology industry racing to create new ethical principles,
manifestos, guidelines and frameworks. This is reflected in the results of a recent meta-review
of AI ethics guidelines, published in the Nature journal (Jobin et al., 2019), whose authors
remark on the variation in interpretation and the difficulty of translating principles into
regulations and practices. Despite this activity, ethical considerations for PA have received
relatively little attention, compared to other areas with a strong focus on data analytics, such
as education or medicine.
This study set out to identify, map and describe the existing published academic and grey
literature covering ethical considerations for PA, up to the end of December 2019. Our
analysis indicates that discussion of ethical issues in PA has appeared in the academic and
grey literature mainly (although not extensively) in the last three years, more than a decade
after the first PA articles were published (Tursunbayeva et al., 2018). Searching the academic
literature revealed little formal research into ethical aspects of PA, although searching social
media exposed a growing stream of grey literature aimed at helping managers to recognize
the ethical issues and adopt more ethical practices (e.g. Green, 2018). These literatures
touched on philosophical, legal, societal and data security considerations, as well as risks and
potential benefits.
The majority of articles revealed by the searches were discussion papers, technical
descriptions, subjective case reports, blog posts and educational resources, rather
than empirical studies. Despite this apparent evidence gap, many organizations are
developing, planning or already using PA, exposing employees to potential risks for their
privacy, autonomy, career options, income and well-being. The accuracy of the data
underpinning PA and the algorithms it drives also create new questions around error and
bias, while the legality of PA practices in terms of employment law and data protection
regulations remains unclear. A shift in the emphasis of PA projects, from managing
individuals to managing larger organizational populations, suggests a desire to avoid
these uncertainties.
While similar issues associated with rights, fairness and power dynamics have been
discussed for many years in relation to HR and employment ethics (Ekuma and Akobo, 2015),
the "datafication" of work and the workforce, aided by predictive analytics and connected
digital devices, casts a new light on these. The literature exposed by our review points not
only to increased monitoring and surveillance but also to the automation of processes in
recruitment, talent analytics, performance assessment and the shaping of behavior, aided by
developments in behavioral economics and AI, adding to concerns about work-by-numbers
and the demise of choice, opportunity and fairness.
Despite these concerns, the literature yielded by our searches typically casts PA in a
positive light, more so in the case of content posted via Twitter, where the majority of
references to PA ethics were found, reflecting professional communities of practice. The
optimistic view promotes the ethical use of data and automation to eliminate human bias from
hiring, promotion and remuneration decisions, such as through eliminating gender
discrimination. It nonetheless acknowledges that such approaches can backfire if the
source data is skewed, as in the case of Amazon's hiring algorithms, which had been trained
using data primarily from male applicants. The value of PA for exposing unethical practices
such as absenteeism or intellectual property theft is framed as a way of protecting
organizations. In addition, while wellness apps and cellphone tracking could be seen as a form
of backdoor surveillance, if used benignly they may potentially support employees' health
and security.
The articles appearing in our search results also highlight the challenges involved in
implementing PA projects in organizations while ensuring they are ethical and legally
compliant, as well as recommendations for addressing them. This is seen as particularly
problematic for international organizations operating in diverse contexts with multiple
regulations and differing cultural or political expectations. It is also acknowledged that PA is
an emerging innovation with as-yet-unknown consequences, and organizations need to
envision and mitigate potential risks as PA projects are happening. This need, for what might
be termed "anticipatory ethics," is embodied within frameworks for responsible innovation,
such as the one proposed by the EU (RRI Tools Consortium, 2016) or the UK's Engineering
and Physical Sciences Research Council (2016).
It is interesting to contrast the way in which ethical issues are discussed in the
PA-specific literature, compared with broader academic discourse on data ethics and the
future of work, seen in the legal, social and political sciences. These meta-narratives are
dominated by concerns about privacy, rights, power and fairness, particularly in relation
to the rise of the platform-driven "gig economy," the algorithmic shaping of behavior and
the role of AI in replicating and replacing the human workforce (e.g. Dastin, 2018*). In
contrast, much of the PA-specific literature derives from industry sources and tends to
express more optimism about the potential of PA, although it is recognized that adherence to ethical practices is needed to realize this potential. Ethical issues and
recommendations described in the broader literature on data/digital ethics were
nevertheless reflected in PA narratives, including the need for transparency and
fairness in PA projects, proportionality and protections in the use of data, respect for the participants' rights and choices (e.g. through obtaining consent) and inclusion of diverse stakeholders in PA initiatives (see Figure 1). Other ethical recommendations arising in
this literature include the need to ensure legal compliance whilst also covering areas
overlooked by existing regulations within ethical charters, providing training in PA
ethics, fostering a systemic culture of ethical practice, ensuring that PA provides
reciprocal benefits for employees (e.g. data for personal development), evaluating PA
projects and including ethical outcomes in business models.
This exploratory scoping review makes several important contributions to theory,
practice and policy on PA. As academic research on PA is still in its infancy, this review can
help to inform and guide future work. It provides an accessible summary of the risks,
opportunities, trade-offs and regulatory issues for PA, as well as a framework for integrating
ethical strategies and practices, and could thus help organizations to avoid potentially
catastrophic unintended consequences, not only for their employees but also for their
resilience and reputation. Finally, this paper can provide a channel through which to inform
and engage relevant policymakers.
The rise of PA raises new questions for interdisciplinary management science and
adds to current debates over the future of human work and employment in a digitized,
algorithm-driven society. Such innovations present a dilemma for organizations seeking
to optimize their workforce and maximize their effectiveness while also risking employee
surveillance, depersonalization and dissatisfaction, alongside new legal vulnerabilities.
Using the scoping review method has provided an opportunity to go beyond the nascent
academic literature on PA ethics to explore how industry, the consulting sector and
PA professionals themselves are discussing these issues. Although the PA literature
remains optimistic and somewhat technocentric, we were able to discern ethical
themes around risk, regulation and people factors that reflect similar considerations in
the wider literature on digital ethics. Uses of data and analytics also offer opportunities to enhance organizational ethics by reducing human bias or increasing wellness and safety, opportunities that can be overlooked in both sociopolitical and technocentric discourses.
These dilemmas call for a new social contract between employers and employees, which
could help organizations to avoid catastrophic unintended consequences for their
resilience, reputation and bottom line. New legal and policy research is also needed
to accommodate the changing technological, regulatory and cultural contexts of PA
(e.g. Duggan et al., 2020).
While PA practitioners and analysts have recently proposed a set of ethical principles
(Green, 2018*), concerted academic effort is needed to develop evidence-based and inclusive
frameworks to guide regulators, industry and practitioners in how to respond to these
innovations, particularly given their steady penetration into scaled enterprise software and
platforms.
As we have noted in the methodology section, no theoretically driven PA ethics guidelines exist, and for this reason we chose to be guided by the data rather than by a
specific framework. One of our recommendations is that such guidelines should be
developed, which our results can help to inform. There is a need for primary research to
understand how these methods are changing work within different types of organization
and their intended and unintended impacts on employees. As more research is published,
the case for using systematic review methods, in preference to the scoping approach
adopted here, will grow. For the reasons explained in the methods section, the present analysis is a natural first step in an emerging field and builds directly on observations about the lack of ethical discourse in our published review of the value propositions of PA.
Postscript: PA in the era of Covid-19
The searches undertaken for this review extend to the end of 2019 and thus pre-date the
beginning of the Covid-19 pandemic. The results are nevertheless timely, given the rapid rise
in working from home, creating greater dependencies on technology and bringing people's professional and personal lives much closer together. In addition to generating new
organizational requirements for managing workers remotely, this has ramped up the use of
methods for monitoring, assessing and shaping the behavior and performance of workers
and teams, some of which could be ethically problematic (Hern, 2020). These include covert
keystroke logging, communications monitoring and harnessing employees' device cameras
and microphones, in some cases without consultation or consent (Gifford, 2020). The risks
and benefits are likely to vary between settings, types of work, and countries with different
legislation; for example, workers' privacy rights are somewhat less protected in the US compared to the EU (Dale, 2017). Nevertheless, the growing use of "bossware" is presenting
new risks that even HR departments may not be fully aware of (Schwartz, 2020). Concerns
have also been raised about the potential for such technologies to unfairly stigmatize women
having to balance work with childcare responsibilities, to "gamify" productivity using digital rewards and to decrease people's ability to decouple work from leisure time (Nguyen, 2020).
Given the long-term threat of new outbreaks, it is also likely that technologies such as
facial recognition cameras, biometric scanners and mobile-tracking apps will begin to enter
physical work environments, alongside analytical tools integrated into computers or
networks. These will inevitably create closer links between measures of well-being and
performance, magnifying the types of ethical dilemma already discussed in relation to
workplace wellness programs (Pagliari, 2020). So far, ethical debates around PA and worker
surveillance have been relatively undifferentiated, but it is likely that more research focused
specifically on PA methods will emerge in the coming months, helping to shape new
frameworks for ethical practice as organizations and workers transition to the "new normal" in a post-pandemic world.
References
Agarwal, D., Bersin, J., Lahiri, G., Schwartz, J. and Volini, E. (2018), People data: how far is too far?,
available at: https://www2.deloitte.com/za/en/pages/human-capital/articles/people-data-how-far-
is-too-far.html (accessed 20 July 2019).
Ajunwa, I., Crawford, K. and Schultz, J. (2017), Limitless worker surveillance,California Law Review,
Vol. 105 No. 3, pp. 735-776.
Anderson, M. (2018), Who are you?: the legal implications of employee personality testing, available
at: https://www.lexology.com/library/detail.aspx?g=dbe2b2e9-f1db-42c4-8551-3394c35ed868
(accessed 3 March 2021).
Arksey, H. and OMalley, L. (2005), Scoping studies: towards a methodological framework,
International Journal of Social Research Methodology, Vol. 8 No. 1, pp. 19-32.
Bersin, J. (2018), The ethics of artificial intelligence: it's trickier than you think, Josh Bersin, 20 August,
available at: https://joshbersin.com/2018/08/the-ethics-of-ai-its-much-trickier-than-you-think/
(accessed 3 March 2021).
Bersin, J. (2019), People analytics and AI in the workplace: four dimensions of trust, Josh Bersin, 4
May, available at: https://joshbersin.com/2019/05/the-ethics-of-ai-and-people-analytics-four-
dimensions-of-trust/ (accessed 3 March 2021).
Bodie, M.T., Cherry, M.A. and McCormick, M.L. (2016), The law and policy of people analytics,
University of Colorado Law Review, Paper No. 2016-6, available at: https://papers.ssrn.com/sol3/
papers.cfm?abstract_id=2769980 (accessed 3 March 2021).
Booth, R. (2019), UK businesses using artificial intelligence to monitor staff activity,The Guardian,7
April, available at: https://www.theguardian.com/technology/2019/apr/07/uk-businesses-using-
artifical-intelligence-to-monitor-staff-activity (accessed 3 March 2021).
Calvard, T.S. and Jeske, D. (2018), Developing human resource data risk management in the age of
big data,International Journal of Information Management, Vol. 43, pp. 159-164.
Cappelli, P. (2019), Your approach to hiring is all wrong, Harvard Business Review, May–June 2019,
pp. 48-58.
Cardador, M.T., Northcraft, G.B. and Whicker, J. (2017), A theory of work gamification: something
old, something new, something borrowed, something cool?,Human Resource Management
Review, Vol. 27 No. 2, pp. 353-365.
Carroll, A.E. (2018), Workplace wellness programs dont work well. Why some studies show
otherwise,The New York Times, 6 August, available at: https://www.nytimes.com/2018/08/06/
upshot/employer-wellness-programs-randomized-trials.html (accessed 3 March 2021).
Chowdhury, R. (2018), How human-centric AI can help your employees love Mondays again,
Forbes, 16 March, available at: https://www.forbes.com/sites/rummanchowdhury/2018/03/
16/how-human-centric-ai-can-help-your-employees-love-mondays-again/ (accessed 3
March 2021).
CIPD (2020), Professional values. Principles-led. Explore the three key principles to making good
decisions, available at: https://peopleprofession.cipd.org/profession-map/core-purpose/
principles-led (accessed 20 July 2020).
Clark, S. (2015), Ethical decision-making: eight perspectives on workplace dilemmas. Research
report,CIPD, available at: https://www.cipd.co.uk/knowledge/culture/ethics/workplace-
decisions-report#gref (accessed 3 March 2021).
Crerar, P. (2018), Gig economy workers' rights to be given boost in overhaul, The Guardian, 8
November, available at: https://www.theguardian.com/business/2018/nov/08/gig-economy-
workers-rights-to-be-given-boost-in-overhaul (accessed 3 March 2021).
Cross, R.L., Singer, J., Colella, S., Thomas, R.J. and Silverstone, Y. (2010), The Organizational Network
Fieldbook: Best Practices, Techniques and Exercises to Drive Organizational Innovation and
Performance, Jossey-Bass, ISBN: 9780470542200.
Dale, B. (2017), The differences between workplace privacy for Americans and Europeans,Observer,
20 September, available at: https://observer.com/2017/09/workplace-privacy-america-europe/
(accessed 3 March 2021).
Dastin, J. (2018), Amazon scraps secret AI recruiting tool that showed bias against women,Reuters,
9 October, available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-
insight-idUSKCN1MK08G (accessed 3 March 2021).
Delios, A. (2010), How can organizations be competitive but dare to care?,Academy of Management
Perspectives, Vol. 24 No. 3, pp. 25-36.
Deloitte Insights (2018), The rise of the social enterprise. 2018 deloitte global human capital trends,8
May, available at: https://www2.deloitte.com/content/dam/insights/us/articles/HCTrends2018/
2018-HCtrends_Rise-of-the-social-enterprise.pdf (accessed 3 March 2021).
Duggan, J., Sherman, U., Carbery, R. and McDonnell, A. (2020), Algorithmic management and app-
work in the gig economy: a research agenda for employment relations and HRM,Human
Resource Management Journal, Vol. 30, pp. 114-132, doi: 10.1111/1748-8583.12258.
Edwards, M.R. and Edwards, K. (2016), Predictive HR Analytics: Mastering the HR Metric, Kogan
Page, New York, NY.
Ekuma, K.J. and Akobo, L.A. (2015), Human resource management ethics and professionalsdilemmas:
a review and research agenda,Human Resource Management Research,Vol.5No.3,pp.47-57.
Engineering and Physical Sciences Research Council (2016), Framework for responsible innovation,
available at: https://epsrc.ukri.org/index.cfm/research/framework/ (accessed 20 July 2020).
European Commission (2020), Data protection in the EU, available at: https://ec.europa.eu/info/law/
law-topic/data-protection/data-protection-eu_en (accessed 14 November 2020).
Fernandez-Campbell, A. (2018), Facebook, Amazon, and hundreds of companies post targeted job ads
that screen out older workers,Vox, 31 May, available at: https://www.vox.com/policy-and-
politics/2018/5/31/17408884/facebook-amazon-job-ads-age-discrimination-lawsuit (accessed 3
March 2021).
Fitz-Enz, Jac and Mattox, J.R. II (2014), Predictive Analytics for Human Resources, John Wiley & Sons,
Hoboken, New Jersey.
Fleming, O., Fountaine, T., Henke, N. and Saleh, T. (2018), Ten red flags signaling your analytics
program will fail,McKinsey Analytics, 14 May, available at: https://www.mckinsey.com/
business-functions/mckinsey-analytics/our-insights/ten-red-flags-signaling-your-analytics-
program-will-fail (accessed 3 March 2021).
Geller, L.W. (2018), How your hiring process could predict unethical behavior, Strategy+Business, 29
January, available at: https://www.strategy-business.com/article/How-Your-Hiring-Process-
Could-Predict-Unethical-Behavior?gko=744f0 (accessed 3 March 2021).
Gifford, C. (2020), COVID-19 raises questions about employee surveillance technology,European
CEO, 22 June, available at: https://www.europeanceo.com/home/featured/covid-19-raises-
questions-about-employee-surveillance-technology/ (accessed 3 March 2021).
Green, D. (2018), Don't Forget the 'H' in HR. Ethics and People analytics, David Green Blog, 19 March,
available at: https://www.davidrgreen.com/blog/2018/4/9/dont-forget-the-h-in-hr (accessed 3
March 2021).
Green, D. (2019), Episode 2: driving business performance with people data (Interview with Edward
Houghton, Head of research and thought leadership at the CIPD), MyHRfuture, available at:
https://www.myhrfuture.com/digital-hr-leaders-podcast/2019/5/20/episode-2-driving-business-
performance-with-people-data-interview-with-edward-houghton-head-of-research-and-thought-
leadership-at-the-cipd (accessed 3 March 2021).
Green, D. and Chidambaram, A. (2018), The Role of the People Analytics Leader - Part 2: Creating
Organisational Culture and Shaping the Future, myHRfuture, 25 February, available at: https://
www.myhrfuture.com/blog/2018/2/25/the-role-of-the-people-analytics-leader-part-2-creating-
organisational-culture-shaping-the-future (accessed 3 March 2021).
Guenole, N., Feinzig, S. and Green, D. (2018), The grey area: ethical dilemmas in HR analytics.
Perspectives from the global workforce,IBM, available at: https://www.ibm.com/watson/
talent/talent-management-institute/ethical-dilemmas-hr-analytics/hr-ethical-dilemmas.pdf
(accessed 3 March 2021).
Haim, L.S. (2018), Will People analysts always be human?, Littal Shemer Haim, 8 May, available at:
https://www.littalics.com/will-people-analysts-always-be-human/ (accessed 3 March 2021).
Helmore, E. (2019), Hundreds of Google employees urge company to resist support for Ice,The
Guardian, 16 August, available at: https://www.theguardian.com/technology/2019/aug/16/
hundreds-of-google-employees-urge-company-to-resist-support-for-ice#:~:text=Hundreds%
20of%20Google%20employees%20urge%20company%20to%20resist%20support%20for%
20Ice,-This%20article%20is&text=A%20group%20of%20employees%20called,Immigration
%20and%20Customs%20Enforcement%20contract (accessed 3 March 2021).
Hern, A. (2020), Shirking from home? Staff feel the heat as bosses ramp up remote surveillance,The
Guardian, 27 September, available at: https://www.theguardian.com/world/2020/sep/27/
shirking-from-home-staff-feel-the-heat-as-bosses-ramp-up-remote-surveillance (accessed 3
March 2021).
High, P. (2019), Former Google HR chief Laszlo Bock aims to revolutionize people management
with Humu,Forbes, 9 September, available at: https://www.forbes.com/sites/peterhigh/2019/
09/09/former-google-hr-chief-laszlo-bock-aims-to-revolutionize-people-management-with-
humu/ (accessed 3 March 2021).
Hogan, K. (2016), Empower Your Employees to Leverage Their Own Data, LinkedIn, 14 April,
available at: https://www.linkedin.com/pulse/empower-your-employees-leverage-own-data-
kathleen-hogan/ (accessed 3 March 2021).
Holeman, I., Cookson, T.P. and Pagliari, C. (2016), Digital technology for health sector governance in
low and middle income countries: a scoping review,Journal of Global Health, Vol. 6 No. 2,
pp. 1-11.
Holt, M., Lang, B. and Sutton, S.G. (2017), Potential employees' ethical perceptions of active monitoring: the dark side of data analytics, Journal of Information Systems, Vol. 31 No. 2, pp. 107-124.
Jacobs, K. (2017), The ethics of gathering employee data,HR Magazine, 21 March, available at:
https://www.hrmagazine.co.uk/content/features/the-ethics-of-gathering-employee-data
(accessed 3 March 2021).
Jeske, D. and Calvard, T. (2020), Big data: lessons for employers and employees,Employee Relations,
Vol. 42 No. 1, pp. 248-261.
Jobin, A., Ienca, M. and Vayena, E. (2019), The global landscape of AI ethics guidelines,Nature
Machine Intelligence, Vol. 1, pp. 389-399.
Jones, G. (2017), Who's Data is it Anyway?, LinkedIn, 30 November, available at: https://www.linkedin.
com/pulse/whos-data-anyway-gareth-jones/ (accessed 3 March 2021).
Jones, D., Molitor, D. and Reif, J. (2019), What do workplace wellness programs do? Evidence from the
Illinois workplace wellness study,The Quarterly Journal of Economics, Vol. 134 No. 4,
pp. 1747-1791.
Joseph, G. (2019), Data company directly powers immigration raids in workplace,WNYC, available
at: https://www.wnyc.org/story/palantir-directly-powers-ice-workplace-raids-emails-show/
(accessed 3 March 2021).
Kaupins, G. and Minch, R. (2005), Legal and ethical implications of employee location monitoring,
Proceedings of the 38th annual Hawaii International Conference on System Sciences, Big Island,
HI, USA, p. 133a.
Kellar-Guenther, Y. (2016), Workplace wellness programs and accessibility for all,AMA J Ethics,
Vol. 18 No. 4, pp. 393-398.
Kim, P. (2017), Data-driven discrimination at work,William and Mary Law Review, Vol. 48,
pp. 857-936.
Kirke, M. (2019), AI in HR: the good, the bad and the scary,The People Space, 29 May, available at:
https://www.thepeoplespace.com/leaders/articles/ai-hr-good-bad-and-scary (accessed 3
March 2021).
Kumar, T. (2018), Ethics and workforce data: is legislation enough?,Analytics in HR, available at:
https://laptrinhx.com/ethics-and-workforce-data-is-legislation-enough-1620079341/ (accessed 3
March 2021).
Leong, K. (2017), Is your company using employee data ethically?,Harvard Business Review,
available at: https://hbr.org/2017/03/is-your-company-using-employee-data-ethically (accessed 3
March 2021).
Lingard, S. (2018), GDPR compliance: practical steps to take control of your HR data,HRZone,13
February, available at: https://www.hrzone.com/perform/business/gdpr-compliance-practical-
steps-to-take-control-of-your-hr-data (accessed 3 March 2021).
Mann, H., Neale, C. and Kumar, T. (2018), People analytics: ethical considerations,Analytics in HR,
available at: https://www.analyticsinhr.com/blog/people-analytics-ethical-considerations/
(accessed 3 March 2021).
Manyika, J. (2019), Tackling bias in artificial intelligence (and in humans),McKinsey, 6 June,
available at: https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-
artificial-intelligence-and-in-humans#:~:text=AI%20can%20help%20reduce%20bias,bake%
20in%20and%20scale%20bias&text=In%20many%20cases%2C%20AI%20can,on%20the%
20training%20data%20used (accessed 3 March 2021).
Marchant, G.E. (2019), What are best practices for ethical use of nano sensors for worker
surveillance?,AMA Journal of Ethics, Vol. 21 No. 4, pp. 356-362.
Marler, J.H. and Boudreau, J.W. (2017), An evidence-based review of HR analytics,International
Journal of Human Resource Management, Vol. 28 No. 1, pp. 3-26.
Marritt, A. (2016), People analytics, what's in it for the employees?, Organization View, available at:
https://www.organizationview.com/insights-articles/2016/7/5/people-analytics-whats-in-it-for-
the-employees- (accessed 3 March 2021).
Mesko, B., Hetenyi, G. and Gyorffy, Z. (2018), Will artificial intelligence solve the human resource
crisis in healthcare?,BMC Health Services Research, Vol. 18, p. 545.
Mixson, E. (2019), People analytics: 10 trends to watch in 2020,HR Gazette, 20 December, available
at: https://hr-gazette.com/people-analytics-10-trends-to-watch-in-2020/ (accessed 3 March 2021).
Mohlmann, M. and Henfridsson, O. (2019), What people hate about being managed by algorithms,
according to a study of Uber drivers,Harvard Business Review, available at: https://hbr.org/
2019/08/what-people-hate-about-being-managed-by-algorithms-according-to-a-study-of-uber-
drivers (accessed 3 March 2021).
Moussa, M. (2015), Monitoring employee behavior through the use of technology and issues of
employee privacy in America,Sage Open, Vol. 5 No. 2, pp. 1-13.
Newman, A., Round, H., Bhattacharya, S. and Roy, A. (2017), Ethical climates in organizations: a
review and research agenda,Business Ethics Quarterly, Vol. 27 No. 4, pp. 475-512.
Nguyen, A. (2020), On the clock and at home: post-COVID-19 employee monitoring in the workplace,
People+Strategy, available at: https://www.shrm.org/executive/resources/people-strategy-
journal/summer2020/Pages/feature-nguyen.aspx?utm_source=postcard&utm_medium=dir
ectmail&utm_campaign=hrps~2020engagement~nguyensummer2020twitter (accessed 3
March 2021).
Nunan, D. and Di Domenico, M. (2015), Big data: a normal accident waiting to happen?,Journal of
Business Ethics, Vol. 145, pp. 481-491.
Nunn, J. (2018), How AI is transforming HR departments,Forbes, 9 May, available at: https://www.
forbes.com/sites/forbestechcouncil/2018/05/09/how-ai-is-transforming-hr-departments/
(accessed 3 March 2021).
O'Neil, C. (2016a), How algorithms rule our working lives, The Guardian, 1 September, available at:
https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives
(accessed 3 March 2021).
O'Neil, C. (2016b), 'Rogue Algorithms' and the dark side of big data, Knowledge@Wharton, 21
September, available at: https://knowledge.wharton.upenn.edu/article/rogue-algorithms-dark-
side-big-data/ (accessed 3 March 2021).
O'Neil, C. (2018), Personality tests are failing American workers, Bloomberg, 18 January, available at:
https://www.bloomberg.com/opinion/articles/2018-01-18/personality-tests-are-failing-american-
workers#:~:text=A%20recent%20study%20found%20that,with%20other%20types%20of%
20assessments.&text=Whether%20or%20not%20personality%20tests,they%20fulfill%20that
%20basic%20requirement (accessed 3 March 2021).
Pagliari, C. (2020), The ethics and value of contact tracing apps: international insights and
implications for Scotland's COVID-19 response, Journal of Global Health, Vol. 10 No. 2, pp.
1-18.
Pease, G. (2018), People analytics privacy vs. transparency,Gene Pease, 14 March, available at:
https://genepease.com/people-analytics-privacy-vs-transparency/ (accessed 3 March 2021).
Petersen, D. (2018), Data ethics: 6 steps for ethically sound people analytics,Visier, available at:
https://www.visier.com/clarity/six-steps-ethically-sound-people-analytics/ (accessed 3
March 2021).
Pitesa, M. (2012), Employee surveillance and the modern workplace, in O'Sullivan, P., Esposito, M.
and Smith, M. (Eds), Business Ethics: A Critical Approach: Integrating Ethics across the Business
World, pp. 206-219.
Politou, E., Alepis, E. and Patsakis, C. (2018), Forgetting personal data and revoking consent
under the GDPR: challenges and proposed solutions,Journal of Cybersecurity,Vol.4No.1,
tyy001.
Rivera, K. and Karlsson, P.O. (2017), CEOs are getting fired for ethical lapses more than they used
to,Harvard Business Review, available at: https://hbr.org/2017/06/ceos-are-getting-fired-for-
ethical-lapses-more-than-they-used-to#:~:text=Print-,Companies%20have%20become%
20much%20more%20likely%20to%20dismiss%20their%20chief,inflated%20resumes%2C%
20and%20sexual%20indiscretions (accessed 3 March 2021).
RRI Tools Consortium (2016), A practical guide to responsible research and innovation,Key lessons
from RRI Tools, 1 December, available at: https://rri-tools.eu/-/rri-tools-a-practical-guide-to-
responsible-research-and-innovation-key-lessons-from-rri-tools- (accessed 3 March 2021).
Schwartz, M.J. (2020), Employee surveillance: who's the boss(ware)?, Bank Info Security, 8 July,
available at: https://www.bankinfosecurity.com/employee-surveillance-whos-bossware-a-
14579 (accessed 3 March 2021).
Scimago Journal Ranking Portal (2019), Scimago journal and country rank, available at: https://www.
scimagojr.com/.
Smith, T. (2015), The Ethics of Analytics: A Look Into The Dark Side, LinkedIn, 24 November, available
at: https://www.linkedin.com/pulse/ethics-analytics-look-dark-side-tracey-smith/?trk=mp-
reader-card (accessed 3 March 2021).
Sodeman, W.A. and Hamilton, B. (2019), Legal and ethical challenges for Human resources in the big
data era,Academy of Management Specialized Conference on Big Data and Managing in a
Digital Economy, Surrey, UK.
Spence, A. (2018), The personal data backlash next up recruitment?,Medium, 30 September,
available at: https://medium.com/blockchain-and-the-distributed-workforce/the-personal-data-
backlash-next-up-recruitment-b97eafbee997 (accessed 3 March 2021).
Suk, J.C. (2007), Discrimination at will: job security protections and equal employment opportunity in
conflict,Stanford Law Review, Vol. 60 No. 1, pp. 73-113.
Tursunbayeva, A., Bunduchi, R., Franco, M. and Pagliari, C. (2016), Human resource information
systems in health care: a systematic evidence review,Journal of the American Medical
Informatics Association, Vol. 24 No. 3, pp. 633-654.
Tursunbayeva, A., Di Lauro, S. and Pagliari, C. (2018), People analytics – a scoping review of
conceptual boundaries and value propositions,International Journal of Information
Management, Vol. 43, pp. 224-247.
Wakabayashi, D. (2018), Firm led by Google veterans uses A.I. to 'Nudge' workers toward happiness, The New York Times, 31 December, available at: https://www.nytimes.com/2018/
12/31/technology/human-resources-artificial-intelligence-humu.html#:~:text=the%20main%
20story-,Firm%20Led%20by%20Google%20Veterans,to%20%27Nudge%27%20Workers%
20Toward%20Happiness&text=MOUNTAIN%20VIEW%2C%20Calif.,-%E2%80%94%
20Technology%20companies%20like&text=It%20digs%20through%20employee%
20surveys,elevating%20a%20work%20force%27s%20happiness (accessed 3 March 2021).
Wartzman, R. (2019), Workplace tracking is growing fast. Most workers don't seem very concerned,
Fast Company, 20 March, available at: https://www.fastcompany.com/90318167/workplace-
tracking-is-growing-fast-most-workers-dont-seem-very-concerned (accessed 3 March 2021).
West, D.M. (2018), The role of corporations in addressing AI's ethical dilemmas, Brookings, 13
September, available at: https://www.brookings.edu/research/how-to-address-ai-ethical-
dilemmas/ (accessed 3 March 2021).
Wood, A.J., Graham, M., Lehdonvirta, V. and Hjorth, I. (2019), Good gig, bad gig: autonomy and
algorithmic control in the global gig economy,Work, Employment and Society, Vol. 33 No. 1,
pp. 56-75.
Yerby, J. (2013), Legal and ethical issues of employee monitoring,Online Journal of Applied
Knowledge Management, Vol. 1 No. 2, pp. 44-55.
Appendix
The appendices are available online for this article.
Corresponding author
Aizhan Tursunbayeva can be contacted at: a.tursunbayeva@utwente.nl
Many employers now offer workers wearable or implantable devices that can monitor their health, productivity, and wellness. Nanotechnology enables even more powerful and functional monitoring capacity for these devices. A history of workplace monitoring programs suggests that, despite nanosensors' potential benefits to employers and employees, they can only be successful and sustainable when a company's motivations for offering them are acceptable and transparent to workers. This article describes 5 best practices for motivating nano-enabled worker monitoring programs that are acceptable, effective, and ethical.