AI & SOCIETY
https://doi.org/10.1007/s00146-023-01778-y
OPEN FORUM
Developing safer AI – concepts from economics to the rescue
Pankaj Kumar Maskara1
Received: 2 June 2023 / Accepted: 5 September 2023
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023
Abstract
With the rapid advancement of AI, there exists a possibility of rogue human actor(s) taking control of a potent AI system
or an AI system redefining its objective function such that it presents an existential threat to mankind or severely curtails
its freedom. Therefore, some suggest an outright ban on AI development, while others propose international agreements
constraining specific types of AI. These approaches are untenable because countries will continue developing AI for national
defense, regardless. Some suggest having an all-powerful benevolent one-AI that will act as an AI nanny. However, such an
approach relies on the everlasting benevolence of one-AI, an untenable proposition. Furthermore, such an AI is itself subject
to capture by a rogue actor. We present an alternative approach that uses existing mechanisms and time-tested economic
concepts of competition and marginal analysis to limit centralization and integration of AI, rather than AI itself. Instead
of depending on international consensus, it relies on countries working in their own best interests. We recommend that,
through regulation and subsidies, countries promote independent development of competing AI technologies, especially those with
decentralized architecture. The Sherman Antitrust Act can be used to limit the domain of an AI system, training module,
or any of its components. This will increase the segmentation of potent AI systems and force technological incompatibility
across systems. Finally, cross-border communication between AI-enabled systems should be restricted, something countries
like China and the US are already inclined to do to serve their national interests. Our approach can ensure the availability
of numerous sufficiently powerful AI systems largely disconnected from each other that can be called upon to identify and
neutralize rogue systems when needed. This setup can provide sufficient deterrence to any rational human or AI system from
attempting to exert undue control.
Keywords Existential risks · Safe AI · AI regulation · AI policy · X risks · Decentralized AI
1 Introduction
AI continues to find its way into our daily lives with unprecedented speed (Ariyaratne et al. 2023). Companies, attracted
by AI’s potential for economic efficiency (De Andrade and
Tumelero 2022), and governments, lured by its potential use
in law enforcement (Rademacher 2020) and national defense
(Allen and Chan 2017; Sayler 2020; Schmidt et al. 2021),
are investing heavily in AI development. At the same time,
renowned people like Geoffrey Hinton (widely acclaimed as
the Godfather of AI), Elon Musk, and Henry Kissinger have
warned of grave dangers posed by AI (Bove 2023; Daniel
2023; Schechner 2023). These include an existential threat
to humans (Bostrom 2002, 2013) and a decrease in human
welfare and freedom (Bostrom 2012) due to the wresting of
large-scale decision-making control (Sotala and Yampolskiy
2014) by one or a few human actors using AI (Hine and
Floridi 2023) or by an AI system itself that deviates from
human-welfare centricity intentionally (Gervais 2021; Ziesche
and Yampolskiy 2018) or unintentionally (Turchin et al.
2019). We hereafter refer to such possibilities as tail risks.1
Turchin and Denkenberger (2020) identify and classify
different types of global catastrophic risks at various levels
of AI development, while Bostrom (2002) classifies existential
risks along the dimension of humanity reaching its posthuman
potential. Regardless of the classification, most of
the different types of catastrophic events identified in these
* Pankaj Kumar Maskara
pmaskara@nova.edu
1 Department of Finance and Economics, Nova Southeastern
University, 3301 College Avenue, Carl DeSantis Bldg. Suite
5150, Fort Lauderdale, FL 33314, USA
1 Though AI also presents other challenges like job displacement
(Clifton et al. 2020), propagation of racial biases (Gupta et al. 2022;
Shin 2022), and wealth inequality (Fernandes et al. 2020), among
others, we restrict our analysis to tail risks.