AI and Ethics (2022) 2:1–4
https://doi.org/10.1007/s43681-021-00075-y
OPINION PAPER
The ethical AI—paradox: why better technology needs more and not less human responsibility
David De Cremer¹ · Garry Kasparov²
Received: 13 June 2021 / Accepted: 17 June 2021 / Published online: 24 June 2021
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021
Abstract
Because AI is gradually moving into the position of decision-maker in business and organizations, its influence is increasingly impacting the outcomes and interests of the human end-user. As a result, scholars and practitioners alike have become worried about the ethical implications of decisions made where AI is involved. In approaching the issue of AI ethics, it is becoming increasingly clear that society and the business world—under the influence of the big technology companies—are accepting the narrative that AI has its own ethical compass, or, in other words, that AI can itself decide to do bad or good. We argue that this is not the case. We discuss and demonstrate that AI in itself has no ethics and that good or bad decisions by algorithms are caused by human choices made at an earlier stage. For this reason, we argue that even though technology is quickly becoming better and more sophisticated, a need exists to simultaneously train humans even better in shaping their ethical compass and awareness.
Keywords AI ethics· Mirror· Paradox· Decisions versus choice· Behavioral business ethics
There is no doubt that AI has become part of the business world and is here to stay. The potential of AI in terms of economic benefits is unrivalled. This emerging intelligent technology is even considered by many to be more important and impactful than the internet was [1]. It is then also no surprise that AI is increasingly involved in decision-making, either as a tool, advisor or even manager [2]. This means that today intelligent technology is increasingly acquiring power to influence a wide variety of outcomes important to society. As we all know, with greater power also comes greater responsibility. For this reason, we need to start addressing the question of whether AI is intrinsically equipped to be a responsible actor and as such act in ways that we humans—as the important end-user—consider ethical.
This question is receiving much attention, as the adoption of AI has created ethical concerns about, among others, privacy (compromising personal information), biased decisions (based on flawed historical data; the sketch after this paragraph illustrates how such bias propagates), lack of transparency (how decisions are made), and the risk of losing one's job to automation. With such ethical concerns, fear and even anxiety about the employment and advancement of AI have surfaced in society and business. Interestingly, the narrative that surrounds the discussion about the ethicality of AI is characterized by the tendency to attribute human-like qualities to AI [3]. Because of this tendency—referred to as anthropomorphism—we seem to create the impression that AI itself can be inherently bad or good. As we tend to attribute such magical and human-like powers to AI, a trend is emerging to see this intelligent (and thus learning) technology as the one responsible for its actions and decisions. What can we learn from this trend?
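A minimal sketch of the biased-decisions concern (the data, effect sizes, and use of scikit-learn are illustrative assumptions, not the paper's): a classifier trained on flawed historical decisions reproduces exactly the bias that humans built into the data at an earlier stage.

```python
# Hypothetical illustration: an algorithm has no ethics of its own; it
# mirrors whatever pattern the human-chosen training data contains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (invented)
skill = rng.normal(0.0, 1.0, n)      # skill distributed equally across groups

# The human choice made "at an earlier stage": historical hiring decisions
# that penalized group B regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Same skill, different predicted odds: the model faithfully mirrors the
# biased history it was given, rather than "deciding" to be unfair.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

Nothing in the fitted model is ethical or unethical in itself; the disparity it reproduces was fixed the moment humans selected the training labels.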
This perspective identifies the important role that humans' expectations about a machine play. Specifically, a kind of illusion seems to be in play whereby our enthusiasm for the supposedly magical powers of AI has led us down a road in which we essentially reduce ethics to a technological issue. How? First of all, developments in computer science contribute to this kind of thinking, as fairness and ethics in this field are increasingly seen as equivalent to transparency and intelligibility, both features that can be optimized by modifying the technological properties of algorithmic solutions [4]; the sketch following this paragraph makes this view concrete. Second, the developments taking place in the big tech industry also adopt a narrative that introduces ethics as a …
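A minimal sketch of that "fairness as an optimization target" view (the metric choice, score distributions, and thresholds are invented for illustration; NumPy assumed): a popular fairness metric, the demographic-parity gap, can be driven toward zero by purely technical means, yet every ethically relevant choice, which metric, which groups, which data, was made by humans beforehand.

```python
# Hypothetical illustration: "optimizing fairness" as a purely technical
# exercise, using the demographic-parity gap as the target metric.
import numpy as np

rng = np.random.default_rng(0)

# Invented model scores for two groups; the gap between the distributions
# stands in for bias inherited from flawed historical data.
scores_a = rng.normal(0.60, 0.15, 1000)
scores_b = rng.normal(0.50, 0.15, 1000)

def parity_gap(threshold_a: float, threshold_b: float) -> float:
    """Absolute difference in positive-decision rates between the groups."""
    return abs((scores_a >= threshold_a).mean() - (scores_b >= threshold_b).mean())

# Search for a group-specific threshold that minimizes the gap.
candidates = np.linspace(0.0, 1.0, 201)
best_b = min(candidates, key=lambda t: parity_gap(0.5, t))

print(f"gap with one shared threshold: {parity_gap(0.5, 0.5):.3f}")
print(f"gap after per-group tuning:    {parity_gap(0.5, best_b):.3f}")
```

The code can make the metric look good, but it cannot tell us whether demographic parity was the right ethical standard in the first place; that judgment, and its consequences, remain human.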
* David De Cremer
bizddc@nus.edu.sg
1 Centre on AI Technology for Humankind (AiTH), NUS Business School, National University of Singapore, 15 Kent Ridge Drive, Singapore 119245, Singapore
2 Renew Democracy Initiative (RDI), New York, NY, USA
... Following from the above, a different assumption may therefore be that whether a decision-maker is considered legitimate to make moral decisions depends on the perceived moral values of the decision agent in question. As it has been argued and demonstrated that algorithms in and of themselves do not possess moral values [33,36], humans can already be considered to possess greater moral values than ...
1 We rely on the leadership literature to conceptualize legitimacy, fairness, and trustworthiness perceptions as a function of leadership style and behavior [5,24,112]. This literature focuses on the subjective perceptions of employees regarding whether their leader makes decisions in a way that is legitimate, fair, and trustworthy. ...
Article
Full-text available
Algorithms are increasingly making decisions in organizations that carry moral consequences and such decisions are considered to be ordinarily made by leaders. An important consideration to be made by organizations is therefore whether adopting algorithms in this domain will be accepted by employees and whether this practice will harm their reputation. Considering this emergent phenomenon, we set out to examine employees’ perceptions about (a) algorithmic decision-making systems employed to occupy leadership roles and make moral decisions in organizations, and (b) the reputation of organizations that employ such systems. Furthermore, we examine the extent to which the decision agent needs to be recognized as “merely” a human, or whether more information is needed about the decision agent’s moral values (in this case, whether it is known that the human leader is humble or not) to be preferred over an algorithm. Our results reveal that participants in the algorithmic leader condition—relative to those in the human leader and humble human leader conditions—perceive the decision made to be less fair, trustworthy, and legitimate, and this in turn produces lower acceptance rates of the decision and more negative perceptions of the organization’s reputation. The human leader and humble human leader conditions do not significantly differ across all main and indirect effects. This latter effect strongly suggests that people prefer human (vs. algorithmic) leadership primarily because they are human and not necessarily because they possess certain moral values. Implications for theory, practice, and directions for future research are discussed.
... An important area for future research is how organizations devise their interactive communication technologies in a responsible manner. For example, De Cremer and Kasparov (2021) highlight some ethical areas, such as privacy, biased decision recommendations, lack of transparency and fear of job loss that businesses should consider in their AI strategies and decisions. Our TRISEC framework can guide researchers and managers to systematically contemplate which ethical dilemmas must be addressed along the service logic, technology implementation and customer experience foci and how they might differ in the SEC service contexts. ...
Article
Full-text available
Purpose: Service providers increasingly use conversational agents (CAs), such as chatbots, to communicate effectively with customers while managing interaction costs and providing round-the-clock customer service. Yet the adoption and implementation of such agents in service contexts remains hit-and-miss, and firms often struggle to balance their CA implementation complexities and costs against their service objectives, technology design and customer experiences. The purpose of this paper is to provide guidance on optimizing CA design; therefore, the authors develop a conceptual framework, TRISEC, that integrates service logic, technology design and customer experience to examine the implementation of CA solutions in search, experience and credence (SEC) contexts.
Design/methodology/approach: The paper draws on service marketing and communications research, combining the service context classification scheme of search, experience and credence with the technology-infused service marketing triangle foci (service, technology and customer) in its conceptual development.
Findings: The authors find that an opportunity exists in recognizing the importance of context when designing CAs and aiming to achieve a balance between service objectives, technology design and customer experiences.
Originality/value: This study contributes to the service management and communications research literature by providing interactive service marketing researchers with the highly generalizable TRISEC framework to aid in optimizing CA design and implementation in interactive customer communication technologies. Furthermore, the study provides an array of future research avenues. From a practical perspective, this study aims at providing managers with a means to optimize CA technology design while maintaining a balance between customer centricity and implementation complexity and costs in different service contexts.
... In recent theoretical accounts, intelligent machines relying on artificial intelligence (AI), as a general term that includes machine learning, robotics, computer vision and natural language processing, have been referred to as a way of building human values in the world [3]. It is therefore noted that AI can best be seen as a mirror that portrays the values and choices of humans [4]. This implies that a machine cannot be seen as bad or good, and that many of the unethical outcomes that it produces are determined by the values and perspectives of human designers and programmers. ...
... Ultimately, in a technologising world undergirded by Big Data, we must address the pressing question: how does the prevailing data asymmetry subvert our quest for ethical AI? First, the commercial exploitation of data for algorithms that automate everything from online advertisements to social media feeds and insurance premiums is an opaque exercise. In our 'black box society', these critical processes evade regulatory scrutiny through secrecy and active obfuscation [7]. Big Tech companies' data mining and algorithmic design processes are so complex that they have become incomprehensible to regulators, rendering hollow any requirements for transparency and accountability. ...
Article
Full-text available
Technology giants today preside over vast troves of user data that are heavily mined for profit. The concentration of such valuable data in private hands to serve mainly commercial interests must be questioned. In this article, we argue that if data is the new oil, Big Tech companies possess extensive, encompassing and granular data that is tantamount to premium oil. In contrast, governments, universities and think tanks undertake data collection efforts that are comparatively modest in scale, scope, duration and resolution and must contend with ‘data dregs’. Viewed against the backdrop of the COVID-19 pandemic, this sharp data asymmetry is unfortunate because the data Big Tech monopolizes is invaluable for boosting epidemiological control, formulating government policies, enhancing social services, improving urban planning and refining public education. We explain why this state of extreme data inequity undermines societal benefit and subverts our quest for ethical AI. We also propose how it should be addressed through data sharing and Open Data initiatives.
... Moreover, managerial decisions involving conflicts of interest also introduce, to varying degrees, moral components (Jones et al., 2007) that have implications that are unique to humans (Parry et al., 2016). Interestingly, as society expects its organizations to preserve "humanity" in the decisions they take, the presence of this conflict of interest means that the automation of decision-making in itself is one of the most important challenges organizations are facing today (De Cremer & Kasparov, 2021a). ...
Article
Full-text available
Autonomous algorithms are increasingly being used by organizations to reach ever increasing heights of organizational efficiency. The emerging business model of today therefore appears to be one where autonomous algorithms are gradually expanding their occupation into becoming a leading decision-maker, and humans by default become increasingly more subordinate to such decisions. We address the question of whether this business perspective is consistent with the sort of collaboration employees want to have with algorithms at work. We explored this question by investigating in what way humans preferred to collaborate with algorithms when making decisions. Using two experimental studies (Study 1, n = 237; Study 2, n = 684), we show that humans consider the collaboration with autonomous algorithms as unfair when the algorithm leads decision-making and will even incur high financial costs in order to avoid this. Our results also show that humans do not want to exclude algorithms entirely but seem to prefer a 60–40% human–algorithm partnership. These findings contrast with the position taken by today’s emerging business model on the issue of automated organizational decision-making. Our findings also provide support for the existence of an implicit theory—held by both present and future employees—that humans should lead and algorithms follow.
... Inspired by tech companies' view of the ethical algorithm, the notion of responsible business is transforming more into an issue of technical competencies than of human leadership abilities [1]. Google's ethics-as-a-service, for example, is setting the stage to elicit among business leaders the idea that ethics is something that can easily be fixed if you have the right technology at hand [2]. If so, business leaders may well feel less compelled to deal with ethics and moral business dilemmas in the future; isn't that what we have machines for now? ...
Article
With the increasing influence of AI on the workings of organizations and the interests of their stakeholders, a consensus seems to have emerged that business leaders are more than ever attuned to being responsible in their adoption and use of intelligent technologies. In this opinion paper I develop the argument that this consensus is ill-founded. The emergence of AI ethics as a field and expertise has, first, created the idea among business leaders that their ethical duties can be carried out by machine. As a result, we see that business leaders are increasingly taking less responsibility for treating their workers in humane ways, treating them instead as machines; a practice that ultimately leads to an approach where workers’ problems resulting from such a “machine first” work culture are seen as remediable only by machine. I conclude by outlining several recommendations on how to instill a “humans first” mindset and develop corresponding leadership styles (purpose-driven and inclusive) to consolidate a human-centred focus.
Chapter
Full-text available
Artificial intelligence (AI) is one of today’s most significant technological advances; machine-learned technologies benefit both businesses and customers. The journey that a customer takes from pre-purchase to post-purchase is referred to as customer experience. People have developed a desire for positive relationships as technology has gradually taken over our lives. Particularly now, almost any product imaginable can be ordered online. AI is returning to our DNA of one-on-one customer relationships. It can improve customer interaction; however, marketers must understand how well these advances in technology affect customer experience in this ever-changing environment. Moreover, the presence of an omnichannel e-commerce environment necessitates a consistent customer experience across all platforms and devices. Customer personalization can be improved by recommender systems, and customer engagement can be improved by conversational agents, both individually and collectively. This analysis chapter presents a framework for businesses and other researchers to understand how recommendation systems can assist businesses in improving customer experience throughout the customer journey.
Article
We discuss the dilemma that, while AI is considered one of the most powerful engines driving innovation today, the rapid application of AI also has the potential to further increase inequalities and societal harm. This confronts us with the question of whether today’s amazing tech innovations may ultimately bring only limited benefits to the weaker members of society who need this kind of innovation the most. As such, do we need to slow down tech innovation to ensure that no further (and possibly new) unethical outcomes emerge over time? We note that the pursuit of tech innovation has been advocated primarily to optimize productivity and, hence, economic growth. This pursuit represents, however, a narrow perspective on the good that AI could produce. Therefore, we argue that we need to adopt a less narrow perspective on what optimization means, using tech innovation in ways that optimize a diversity of human interests. By adopting such a broader perspective, we propose an integrative approach in which we start from the idea that we need to continue pushing tech innovation, but in combination with regulating innovation efforts and installing a stronger sense of moral awareness and responsibility among those in charge of the tech innovation journey. We conclude with outlining recommendations that can help promote this integrative approach, including the combination of self- and government regulation, promoting training efforts to establish more responsible leadership, and encouraging efforts to bring AI faster to the people.
Article
Although Artificial Intelligence (AI) has become a pervasive organisational phenomenon, it is still unclear if and when people are willing to cooperate with machines. We conducted five empirical studies (total N = 1,025 managers). The results show that human managers do not want to exclude machines entirely from managerial decisions, but instead prefer a partnership in which humans have a majority vote. Across our studies, acceptance rates steadily increased up until the point where humans have approximately 70% weight and machines 30% weight in managerial decisions. After this point the curve flattened out, meaning that higher amounts of human involvement no longer increased acceptance. In addition to this overall pattern, we consistently found four classes of managers that reacted differently to different amounts of human versus machine involvement: A first class of managers (about 5%) preferred machines to have the upper hand, a second class of managers (about 15%) preferred an equal partnership between humans and machines, a third class of managers (about 50%) preferred humans to have the upper hand, and a final class of managers (about 30%) preferred humans to have complete control in managerial decisions. Practical implications and directions for future research are discussed.
Article
The emerging field of behavioral ethics has attracted much attention from scholars across a range of different disciplines, including social psychology, management, behavioral economics, and law. However, how behavioral ethics is situated in relation to more traditional work on business ethics within organizational behavior (OB) has not really been discussed yet. Our primary objective is to bridge the different literatures on ethics within the broad field of OB, and we suggest a full-fledged approach that we refer to as behavioral business ethics. To do so, we review the foundations and research foci of business ethics and behavioral ethics. We structure our review on three levels: the intrapersonal level, interpersonal level, and organizational level. For each level, we provide relevant research examples and outline where more research efforts are needed. We conclude by recommending future research opportunities relevant to behavioral business ethics and discuss its practical implications.
De Cremer, D., Kasparov, G.: AI should augment human intelligence, not replace it. Harvard Business Review. https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it (2021). Accessed 1 June 2021
De Cremer, D.: What does building a fair AI really entail? Harvard Business Review. https://hbr.org/2020/09/what-does-building-a-fair-ai-really-entail (2020). Accessed 1 June 2021
Sankaran, V.: Military drones may have attacked humans for first time without being instructed to, UN report says. Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/drone-fully-automated-military-kill-b1856815.html (2021). Accessed 1 June 2021
Simonite, T.: Google offers to help others with the tricky ethics of AI. Wired. https://www.wired.com/story/google-help-others-tricky-ethics-ai/ (2020). Accessed 1 June 2021
Grant, N., Bass, D., Eidelson, J.: Google turmoil exposes cracks long in making for top AI watchdog. Bloomberg. https://www.bloomberg.com/news/articles/2021-04-21/google-ethical-ai-group-s-turmoil-began-long-before-public-unraveling (2021). Accessed 1 June 2021