AI & SOCIETY
https://doi.org/10.1007/s00146-023-01778-y
OPEN FORUM
Developing safer AI–concepts from economics to the rescue
Pankaj Kumar Maskara¹
Received: 2 June 2023 / Accepted: 5 September 2023
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023
Abstract
With the rapid advancement of AI, there exists a possibility of rogue human actor(s) taking control of a potent AI system, or of an AI system redefining its objective function, such that it presents an existential threat to mankind or severely curtails its freedom. Therefore, some suggest an outright ban on AI development, while others profess international agreement on constraining specific types of AI. These approaches are untenable because countries will continue developing AI for national defense, regardless. Some suggest having an all-powerful benevolent one-AI that will act as an AI nanny. However, such an approach relies on the everlasting benevolence of one-AI, an untenable proposition. Furthermore, such an AI is itself subject to capture by a rogue actor. We present an alternative approach that uses existing mechanisms and the time-tested economic concepts of competition and marginal analysis to limit centralization and integration of AI, rather than AI itself. Instead of depending on international consensus, it relies on countries working in their best interests. We recommend that, through regulation and subsidies, countries promote independent development of competing AI technologies, especially those with decentralized architecture. The Sherman Antitrust Act can be used to limit the domain of an AI system, training module, or any of its components. This will increase the segmentation of potent AI systems and force technological incompatibility across systems. Finally, cross-border communication between AI-enabled systems should be restricted, something countries like China and the US are already inclined to do to serve their national interests. Our approach can ensure the availability of numerous sufficiently powerful AI systems, largely disconnected from each other, that can be called upon to identify and neutralize rogue systems when needed. This setup can provide sufficient deterrence to any rational human or AI system from attempting to exert undue control.
Keywords Existential risks· Safe AI· AI regulation· AI policy· X risks· Decentralized AI
1 Introduction
AI continues to find its way into our daily lives with unprecedented speed (Ariyaratne et al. 2023). Companies, attracted by AI's potential for economic efficiency (De Andrade and Tumelero 2022), and governments, lured by its potential use in law enforcement (Rademacher 2020) and national defense (Allen and Chan 2017; Sayler 2020; Schmidt et al. 2021), are investing heavily in AI development. At the same time, renowned people like Geoffrey Hinton (widely acclaimed as the Godfather of AI), Elon Musk, and Henry Kissinger have warned of grave dangers posed by AI (Bove 2023; Daniel 2023; Schechner 2023). These dangers include an existential threat to humans (Bostrom 2002, 2013) and a decrease in human welfare and freedom (Bostrom 2012) resulting from the wresting of large-scale decision-making control (Sotala and Yampolskiy 2014) by one or a few human actors using AI (Hine and Floridi 2023), or by an AI system itself that deviates from human-welfare centricity intentionally (Gervais 2021; Ziesche and Yampolskiy 2018) or unintentionally (Turchin et al. 2019). We hereafter refer to such possibilities as tail risks.¹

Turchin and Denkenberger (2020) identify and classify different types of global catastrophic risks at various levels of AI development, while Bostrom (2002) classifies existential risks along the dimension of humanity reaching its post-human potential. Regardless of the classification, most of the different types of catastrophic events identified in these
* Pankaj Kumar Maskara
pmaskara@nova.edu
¹ Department of Finance and Economics, Nova Southeastern University, 3301 College Avenue, Carl DeSantis Bldg. Suite 5150, Fort Lauderdale, FL 33314, USA
¹ Though AI also presents other challenges like job displacement (Clifton et al. 2020), propagation of racial biases (Gupta et al. 2022; Shin 2022), and wealth inequality (Fernandes et al. 2020), among others, we restrict our analysis to tail risks.