Discover Artificial Intelligence (2022) 2:23 | https://doi.org/10.1007/s44163-022-00038-0
Perspective
Identity of AI
Vladan Devedzic
Received: 22 October 2022 / Accepted: 1 November 2022
© The Author(s) 2022 OPEN
Abstract
With the explosion of Artificial Intelligence (AI) as an area of study and practice, it has gradually become very difficult to mark its boundaries precisely and specify what exactly it encompasses. Many other areas of study are interwoven with AI, and new research and development topics that require an interdisciplinary approach frequently attract attention. In addition, several AI subfields and topics are home to long-time controversies that give rise to seemingly never-ending debates that further obfuscate the entire area of AI and make its boundaries even more indistinct. To tackle such problems in a systematic way, this paper introduces the concept of identity of AI (viewed as an area of study) and discusses its dynamics, controversies, contradictions, and opposing opinions and approaches, coming from different sources and stakeholders. The concept of identity of AI emerges as a set of characteristics that shape the current outlook on AI from epistemological, philosophical, ethical, technological, and social perspectives.
Keywords Artificial intelligence · Area of study · Identity · Characteristics · Topics · Controversies
1 Introduction
It is very difficult to tell with high accuracy what people have in mind when they talk about AI. The term/label 'AI' has become overloaded. To some people, the proliferation of related fields, such as applied statistics, data science, predictive analytics, biometrics, etc., brings up the feeling that AI gradually loses its identity to these other fields.
Compiling definitions from several online English dictionaries (Merriam-Webster, Cambridge, Britannica, Collins, Vocabulary.com, Dictionary.com), the term identity is used in this paper to denote the set of characteristics, qualities, beliefs, etc. by which a thing or person is recognized or known and is distinguished from others. The paper extends this term from things and persons to the research and engineering area of Artificial Intelligence (AI) and discusses important qualities, beliefs and insights related to the identity of AI.
The question that immediately follows is: What are those distinguishing characteristics, qualities, beliefs, etc. of AI
as an area of study? This question has been taken as the research question in the analyses conducted in an attempt to
elucidate the notion of identity of AI.
Note that this is an open question, and it is where the notion of identity of AI that many people intuitively and implicitly have in their minds starts to diffuse. As AI continues to boom at an unprecedented pace, as other more-or-less related areas and fields continue to overlap more and more with AI, and as new subfields and topics continue to open under the umbrella that we call AI, it becomes increasingly difficult to draw a clear distinction between AI and non-AI.
* Vladan Devedzic, devedzic@gmail.com; vladan.devedzic@fon.bg.ac.rs | University of Belgrade, Faculty of Organizational Sciences, Belgrade, Serbia.
The characteristics of AI discussed here are selected based on the attention they get from AI researchers, the importance of the challenges they pose to the further development of AI, as well as the controversies they raise in different discussions about AI (see the "Methodology" section for details). These characteristics are:
A. the definition and understanding of AI as an area of study
B. the scope and coverage of AI with respect to those of other related fields
C. the autonomous behavior of AI systems
D. the different views of AI coming from different stakeholders
E. the explainability of AI systems and their behavior
F. some recently emerged and increasingly important topics that already make an impact on the development of AI systems, such as AI ethics, bias and trustworthiness
G. other characteristics, beliefs, and topic areas that contribute to the dynamics of AI's identity and simultaneously keep the whole concept open.
Note that these seven characteristics are necessarily interdependent and interwoven. For example, it is not possible to assess the explainability of an AI system and its behavior (characteristic E) without putting it in the context of a specific stakeholder or group of stakeholders (characteristic D). Who are the end users of the system, and what kind of explanations do they expect? On the other hand, from the system developers' perspective, is it cost-effective to build into the system explanations that might not be frequently needed during the system's operation? Likewise, do the explanations that the system generates improve or degrade its trustworthiness (characteristic F)?
Still, in order to keep the focus, characteristics A-G are analyzed in this paper mostly individually.
The driving idea of the study presented in the paper was to illustrate the concept of identity of AI from multiple perspectives. As new AI subfields and technologies continue to emerge, the identity of AI requires at least periodic revision, because new problems arise as well (e.g., AI energy consumption, model-training biases, legal issues, etc.). The way that our society interacts with new AI systems and technologies also makes its own impact on the identity of AI in terms of how people understand it. Psychologists, sociologists, philosophers, and even legal experts play increasingly important roles in attempts to draw the scope of AI [1].
Note also that, to the best of the author’s knowledge, the existing literature does not cover the concept of identity of
AI in an integral and comprehensive, yet focused way. True, AI textbooks, edited volumes and review papers do cover
multiple aspects of AI that largely overlap with characteristics A-G, but no overarching, unifying big picture emerges
from their coverage.
2 Methodology
An extensive review of the literature on different aspects of AI has been conducted and the corresponding findings have been critically evaluated, following the guidelines presented in [2]. In addition, a number of AI systems and applications embodying the latest AI technology have been reviewed as well. Likewise, the author has examined many free online AI resources (books, courses, videos, interactive tools and Websites, podcasts, lecture notes, Colab notebooks, code repositories and datasets) in an attempt to get better insight into how these diverse resources contribute to the concept of identity of AI.
Table 1 summarizes this approach.
Scholarly articles coming from high-reputation publishers and libraries (ACM, IEEE, Elsevier, Springer, Cambridge University Press, Taylor & Francis, PubMed, EBSCO, ScienceDirect, …) have constituted the majority of the literature used. However, it has been decided to also use popular AI Websites like AI Topics [3], as well as paper preprint repositories (arxiv.org), different tutorials, calls for papers, AI-relevant Web pages and newsletters from Websites of different companies, organizations and institutions, AI product descriptions, and even informed popular press articles that cover AI topics. It has been considered important for the topic of this paper to include such alternative resources to an extent as well, in order to get a more complete picture of the notion of identity of AI. Of course, special care has been taken to differentiate between marketing articles and informed popular texts, and to select only useful ones from these alternative resources.
Characteristics, qualities, beliefs, and opinions related to AI as a field of study and discussed by and/or implemented in these different resources have been found to be either converging or raising attention, debates, and even controversies.
The author has assumed that AI experts and practitioners have reached at least an approximate consensus about the
first group, largely corresponding to the last row in Table 1. For example, heuristic search, representation and reasoning, games, and robotics, to name but a few, have long been well-established AI topic areas that certainly contribute to the notion of AI's identity. It is the second group that makes AI's identity very vague. The focus of this research has been largely on this second group, roughly corresponding to the first two rows in Table 1, thus the subsequent sections discuss primarily the characteristics and opinions from the second group. This second group can be seen as creating the "variable part" of AI's identity.

Table 1 The methodological approach taken in this paper. Each row represents one element/component of the approach: Label (short name of the corresponding element/component), Driving idea (the essentials of the element/component and why it was important), Details (additional explanation of the element/component), and Contribution (how this element/component contributes to the approach taken in the paper).

Label: Critical evaluation
Driving idea: Evaluate literature and other resources relevant for understanding the concept of identity of AI
Details: Analyze strengths and weaknesses of presenting distinguishing characteristics of identity of AI
Contribution: Indicate opposing views, as well as attempts to integrate them

Label: State-of-the-art analysis
Driving idea: Include mostly contemporary research
Details: Focus on current views, since the area of AI has exploded in the last decade
Contribution: Include the perspectives of different stakeholders into the current big picture of the concept of identity of AI

Label: Earlier work, milestones, established topic areas
Driving idea: Include some of the relevant seminal work from the past, in order to get a more complete picture
Details: Present some historically important ideas that persist or have been overcome by newer developments
Contribution: Important sources of attempts to define intelligence and AI; evolution of characteristics of identity of AI
As for resources other than textual ones, statistics and graphs from [3], AI training courses from Stanford and MIT, as well as resources available from Kaggle and GitHub repositories have proven to be very valuable in the author's past work, so they have been largely consulted here as well. Some of them are not referenced explicitly in the subsequent sections, but have nevertheless supported the creation of the big picture.
The big picture of the current identity of AI has emerged from the analyses carried out in this research effort, but has remained vague. Its characteristics (A-G from the Introduction) are elaborated in detail in the rest of the paper. Note that the paper's section titles do not map one-to-one to these characteristics. There is a reason for that: the controversies around these characteristics have created several important dichotomies in AI as an area of study, so analyzing these dichotomies leads to a more comprehensive understanding of the whole area. The "Discussion" section, however, brings all these pieces together around the characteristics A-G.
3 Definition of AI?
The question mark in this section heading is intentional—there are many attempts to define AI, yet there is no widely accepted, 'standard' definition. The reason is that the concept of intelligence is much too complex, not well understood [4], and "nobody really knows what intelligence is" [5].
3.1 Selected definitions
As an introduction to a more detailed discussion on definitions of intelligence and AI, here are some examples according to which AI is:
• the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages [6].
• scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines [3].
• the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment [7].
The first definition has been selected as a representative of definitions coming from general reference sources, such as dictionaries and encyclopedias. The second one comes from one of the most authoritative Websites when it comes to AI, that of the Association for the Advancement of Artificial Intelligence (AAAI). The third one is the one adopted in the widely used and quoted textbook on AI.
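As an illustration of the agent view in the third definition, here is a minimal Python sketch (not from the cited sources; the thermostat environment and the simple policy are invented for this purpose) of an agent that maps percepts from its environment to actions in pursuit of a goal:

```python
# A trivial reflex agent: percept -> action, in the spirit of the
# "intelligent agent" definition above. All details are illustrative.

class ThermostatAgent:
    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def act(self, percept: float) -> str:
        # Choose the action that moves the environment toward the goal state.
        if percept < self.target_temp - 0.5:
            return "heat"
        if percept > self.target_temp + 0.5:
            return "cool"
        return "idle"

def run(agent: ThermostatAgent, temp: float, steps: int) -> None:
    for _ in range(steps):
        action = agent.act(temp)                                  # agent acts on its percept
        temp += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]  # environment reacts
        print(f"temp={temp:.1f}  action={action}")

run(ThermostatAgent(target_temp=21.0), temp=17.0, steps=6)
```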
3.2 Collections of definitions
Typical approaches to compiling definitions of AI and human intelligence are to search for them on the Web and in the literature, to look for the definitions used by high-reputation research groups, centers, and labs, to ask experts to provide their own definitions, and to ask experts to comment on existing definitions in an attempt to refine them and possibly merge them into one. An obvious drawback of these approaches is the lack of completeness; as Legg and Hutter notice, many such definitions are buried deep inside articles and books, which makes it impractical to look them all up [8].
Still, existing incomplete collections provide useful insights into different views on intelligence and AI, make it possible to classify the included definitions into meaningful categories, and also prompt researchers to extract common and essential features from different definitions and devise their own. For example, Legg and Hutter have collected and surveyed 70 definitions of intelligence and noticed strong similarities between many of them [8]. By pulling out commonly occurring features and points from different definitions, they have adopted the view that "intelligence [is the concept that] measures an agent's ability to achieve goals in a wide range of environments."
Faggella makes a clear separation between AI as an area of study and AI as an entity, and focuses on the latter [9]. His approach has been to start from only five practical definitions from reputable sources (Stanford University, AAAI, etc.) and ask experts to comment on them or to provide their own definition. The final result of an analysis of these comments is the following attempt to define AI: "Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time."
Marsden has started from about two dozen established definitions, coming from different areas of business and science [10]. By spotting common themes in his list of definitions, he has synthesized them into the following one of his own: "[AI is any] technology that behaves intelligently (insofar as it responds adaptively to change). The capacity to respond adaptively to change through the acquisition and application of knowledge is a hallmark of intelligence—i.e. the ability to cope with novel situations." Although his definition suffers from the same deficiency as many others—circularity caused by the terms 'intelligently' and 'intelligence'—it stresses adaptivity to change as an important feature of AI.
3.3 Formal approaches
Legg and Hutter have also constructed a formal mathematical definition of machine intelligence, calling it universal intelligence [5]. It includes mathematically formalized descriptions of an agent, its goal(s), its environment(s), the observations it receives from the environment(s), the actions it performs in the environment(s), and the reward signals that the environment(s) send(s) to the agent as information about how well the agent is performing in pursuing its goal(s). The agent is supposed to compute its measure of success from the reward signals and aims to maximize it.
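In the published form of this measure [5], an agent π is scored by its expected performance in every computable environment μ, with simpler environments (lower Kolmogorov complexity K) weighted more heavily:

```latex
% Universal intelligence of agent \pi (Legg and Hutter [5]):
% V_\mu^\pi is the expected cumulative reward of \pi in environment \mu,
% K(\mu) is the Kolmogorov complexity of \mu, and E is the class of
% computable reward-bounded environments.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```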
Another existing formal definition of AI is Russell's [11]. It is based on the concept of bounded optimality, i.e. the capacity to generate maximally successful behavior given the available information and computational resources. Formally, bounded optimality is exhibited by an agent's program l_opt that belongs to the set L_M of all programs l that can be implemented on a machine M and that satisfies

$$ l_{opt} = \operatorname{argmax}_{l \in L_M} V(\mathrm{Agent}(l, M), E, U) $$

where E is an environment class, U is a performance measure on sequences of environment states, and V is the expected value according to U obtained by the agent when executing the function implemented by the program l (argmax is an operation that finds the argument that gives the maximum value of a target function).
3.4 Issues with definitions
The lack of consensus on the definition of AI is expected and is caused by different cognitive biases [12]. These biases form part of people's judgment and cannot always be avoided. Another issue stems from the fact that definitions, in the attempt to be concise, often lack aspects that many researchers believe are essential for the entire area of AI. For example, the definition of AI proposed by the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) does not mention Natural Language Processing (NLP), one of the oldest and most popular topics in the area of AI, although it does mention other topics explicitly [13].
AI researchers often rely on the notion of rationality instead of the vague concept of intelligence. Rationality refers to the ability to choose the best action to take in order to achieve a certain goal, given certain criteria to be optimized and the available resources [7, 11, 13], and is a much more specific concept than intelligence. However, focusing on rationality alone leaves little room for other important aspects of intelligence, such as intention, abstraction, emotions and beliefs.
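As a minimal illustration of rationality in this sense (choosing the action that maximizes a given criterion under a fixed model of outcomes), consider the following Python sketch; the actions, outcome probabilities and utilities are all invented:

```python
# Rational choice as expected-utility maximization (illustrative values only).
actions = {
    "route_A": {"on_time": 0.9, "late": 0.1},   # outcome probabilities per action
    "route_B": {"on_time": 0.6, "late": 0.4},
}
utility = {"on_time": 10.0, "late": -5.0}       # the criterion to be optimized

def expected_utility(action: str) -> float:
    return sum(p * utility[outcome] for outcome, p in actions[action].items())

best = max(actions, key=expected_utility)       # the 'best action to take'
print(best, expected_utility(best))             # -> route_A 8.5
```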
Yet another issue is that whatever definition of AI one chooses, there should be a way to test whether a system is intelligent according to that definition. The widely known Turing test [14] is nowadays disliked and even dismissed by many. They criticize it as misleading [15], insufficient [9], not convincing [16], not formal enough [11], and largely based on a chat-like model of interaction between humans and computers that excludes criteria important in, e.g., self-driving cars and robots [17].
Unfortunately, alternatives to the Turing test are also limited and are not widely studied and accepted [18]. An implicit
assumption underlying such tests—that a demonstration of some kind of intelligent behavior clearly reveals genuine
intelligence—is not true [15, 19]. For example, the AI system in a self-driving car does not see or understand its environ-
ment in the way that humans do.
Most definitions of intelligence and AI also reflect a very human-centric way of thinking [20]. The major criticism here is that practical AI technologies like statistical pattern matchers should not necessarily be anthropomorphized in an attempt to mimic human abilities; they should rather be seen as complementing human abilities [11, 21].
3.5 Working definitions
Still, from the pragmatic point of view, there are researchers who believe that it's better to have at least some explicit 'working definitions' of intelligence and AI that serve current research contexts, improve coherence, and make research and communication of ideas more efficient, than to wait for new fundamental discoveries and consensus about the definitions. To this end, Wang has suggested several working definitions of AI, depending on whether different practical contexts require building systems that are similar to the human mind in terms of structure, behavior, capability, function, or principle [20].
Linda S. Gottfredson has proposed the following definition of intelligence: "Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—'catching on,' 'making sense' of things, or 'figuring out' what to do" [22]. The last bit of this definition is of special interest for understanding the level of 'intelligence' of current AI systems.
Obviously, these different working definitions give the area of AI different identities. They have different foci, correspond to different levels of abstraction, set different research goals, and require different methods. Practical developments based on these different definitions/approaches produce different results, and must be evaluated according to different criteria. An AI system relying on an 'integrated' definition, i.e. one satisfying the criteria of all these working definitions, does not exist yet.
4 AI, its topics, and other disciplines: blurring borderlines, merging scopes
A quick glance at the Quick Topic Analysis graph at AITopics.org [3], https://shorturl.at/kDEPW, reveals that Machine Learning (ML) is by far the most popular topic in AI today. A similar graph from the same source indicates that neural networks are the most popular subarea of ML. Other popular ML subareas include statistical learning, learning graphical models, reinforcement learning, ML performance analysis, inductive learning, evolutionary systems, decision tree learning, etc. Approximately, the popularity of these ML subareas follows a geometric progression—each subarea is roughly one half to two thirds the size of the previous one in terms of popularity. These graphs are continuously updated at AITopics.org, but for quite some time they have remained roughly the same in shape and in terms of the dominance of ML and particularly neural networks. Jiang et al. have presented a somewhat different graph—a semantic network of important concepts in AI—based on search results in Web of Science [23]. They also provide an analysis of why ML and especially deep learning have become so popular.
While having these up-to-date insights is certainly useful, it looks as if non-experts are not aware of them. More precisely, it often looks as if the non-ML parts of AI are simply overlooked and ignored by many. Moreover, in industry and in the popular press alike, people often disregard the fact that AI subsumes ML and use the terms AI and ML as synonyms. Alternatively, one often sees the use of 'AI and ML', 'AI/ML', 'AI or ML', 'AI and neural networks', 'AI and deep learning', and the like, which reflects colloquial rather than factual usage and makes the identity of AI even more indistinct.
Things get even more complicated given that AI partially overlaps and intersects with other disciplines, areas and fields, like math, statistics, pattern recognition, data mining, knowledge discovery in databases, programming, neurocomputing, etc. They all use concepts, tools, and techniques from AI and vice versa. In interdisciplinary contexts, it is often hard to say which specific discipline prevails. Interested readers are welcome to take a look at a nice graphical illustration of these intersections and relationships [24].
Such complex relationships between different disciplines and their topic areas, as well as the popularity of ML, often bring up questions like "What is the difference between AI and Data Science (DS)?", "What is the difference between ML and DS?", and "What is the difference between ML and statistics?" Note that, again, people who are not AI experts often exhibit a tendency to use AI and DS as synonyms. However, DS "works by sourcing, cleaning, and processing data
to extract meaning out of it for analytical purposes" [25], i.e., its center of attention is data analysis, whereas AI is about understanding the mechanisms of intelligence and building systems (agents) that can perform tasks normally requiring intelligence. Mahadevan comments that "ML is to statistics as engineering is to physics… statistics is the science that underlies … machine learning" [26].
5 AI dichotomies
Different stakeholders see AI from different perspectives. This is the source of several dichotomies that further diffuse the identity of AI. An illustration of such different perspectives is the already mentioned distinction between AI as an entity and AI as an area of study [9, 27]. There are others as well.
5.1 Human‑driven AI vs. autonomous AI
To most people, the term 'autonomous AI' brings two real-world mental associations: autonomous robots and self-driving vehicles. The autonomy of such systems is strongly related to the degree to which the system can run independently [28]. The major problems arise in situations when communications are poor, the sensors fail, or the system is physically damaged—the system should have a range of intelligent capabilities to handle such unexpected situations on its own.
The human-driven part of autonomous AI systems is, however, still large—in addition to turning these systems on and off, it is humans who create the programs that the systems execute, and much of system maintenance is also on the human side.
At a more basic level, and bearing in mind the AITopics.org Quick Topic Analysis graph (https://shorturl.at/kDEPW) [3], much of AI today is about ML model building and making predictions. Although tools like AutoML can automate these tasks to a great extent, in many real-world situations such tasks are mostly human-driven. It is still most often the case that human experts select variables from datasets to train the model with, and run various kinds of dimensionality reduction and feature engineering processes. Training ML models is not only time-consuming and resource-demanding, it is also often manual. Interpreting the predictions obtained from feeding the model previously unseen data is also a human task in most cases.
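The following minimal scikit-learn sketch illustrates these human-driven steps on an invented synthetic dataset; the choice of the number of selected features and components, and the model family, stand in for the expert decisions described above:

```python
# Each step below encodes a human decision: which features to keep,
# how many components to retain, which model family to use.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=500, n_features=30, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),   # expert-chosen variable subset
    ("reduce", PCA(n_components=5)),            # expert-chosen dimensionality
    ("model", LogisticRegression()),            # expert-chosen model family
])
pipe.fit(X_train, y_train)
print("held-out accuracy:", pipe.score(X_test, y_test))
```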
In fact, things are even more complex with human-driven AI. Although currently there is nothing conscious and
human-cognitive in the systems labeled AI, it is also important to understand that human intelligence also has perfor-
mance constraints and limitations [29]. As a simple example, consider again the case of self-driving vehicles—while level
5 self-driving is still questionable, there is also no guarantee that, safety-wise, human steering is better than self-driving
(and self-driving vehicles and their algorithms are continuously being improved).
Likewise, there are arguments against the so-called finality argument, i.e. that "the final goal that a rational agent happens to have at the beginning of its existence will be really final" (unlike humans, who can always reconsider their final goals and possibly change them) [30]. The rationale is as follows: an agent's goal is not separate from its understanding of the world, and this understanding can change over time as the agent learns from experience; thus the agent's understanding of the given goal may change as well. Note, however, that the finality argument also has many proponents, and for now it remains an open debate. In reality, current AI systems can at best choose the path to take to achieve their human-set goal [13].
Finally, there are more and more voices in support of integrating human-driven AI, autonomous AI, and human intel-
ligence in practical developments and use of intelligent systems [29, 31–33]. The rationale is to leverage all AI technology,
but also to keep humans in the loop in order to achieve the right balance.
5.2 Industry/Academia dichotomy
It is a well-known fact in many fields that the expectations that industry has about innovations differ to an extent from what researchers are pursuing. Industry has to care not only about the quality and utility of its products and services, but also about pressures coming from the market, about competitiveness, and about continuous improvement at a very practical level. Academia and research are more about ideas and visions, and how to develop prototypes, evaluate
them, and contribute to existing knowledge in different fields. There are also many initiatives and funding programs that support bringing research and industry together in an attempt to get "the best of both worlds". Likewise, many big companies have their own research departments whose activities pertain to practical innovations that become part of future products and services offered by these companies.
In the case of AI, the hype surrounding the field for quite some time now contributes to the fact that many companies today use the term 'AI' in advertising their products simply because it is so fashionable. However, there is a danger of overselling AI if the term 'AI' is used just for marketing purposes, leading to what Waddell calls "AI washing" and a "fake-it-till-you-make-it attitude": "If you can swap in 'data analytics' for 'AI' in a company's marketing materials, the company is probably not using AI" [34].
Some reality check here can introduce another perspective. Figure 1 shows the declining trend in the popularity of some of the best-selling AI technologies—ML, deep learning and NLP—on the Gartner Hype Cycle curve [35] for AI. Companies are typically interested in how they can build different AI innovations into products, i.e. in the rightmost end of the curve. Contrary to that, researchers typically focus on emerging topics that belong to the leftmost, rising end of the curve. In the past few years, these have included, e.g., knowledge graphs, small data, and generative AI (not shown explicitly in Fig. 1).
Such technologies typically do not get much attention in industry before investors become interested in them. For instance, the approach called small data refers to what Andrew Ng calls "smart-sized, 'data-centric' solutions" [36]. The idea is very appealing: instead of training neural networks with huge volumes of data, some of which may be inconsistent, noise-generating, or erroneous, how about selecting small sets of representative, high-quality data? This would not only drastically reduce the training time—it would be much closer to how humans build their mental models of the world [37]. But before this idea becomes actionable, research efforts are needed to find out how to get small-size, good-quality data.
Another phenomenon related to the notion of identity of AI and interesting in the context of the industry/academia dichotomy is the AI effect—discounting (mostly by researchers) AI systems, devices and software as not being really intelligent but "just technology", after results are achieved in meeting a challenge at the leftmost slope of the curve [38]. On the other hand, companies are interested in further developing that same "just technology", provided that it survives the test of time long enough to prove profitable.
Fig. 1 Popularity of ML, deep learning (DL) and NLP on the Gartner Hype Cycle curve for AI over time, since 2017 (the shape of the curve shown after the https://commons.wikimedia.org/wiki/File:Gartner_Hype_Cycle.svg graph, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic licenses; attribution: Jeremykemp at English Wikipedia; the legend and the relevant years (2017, 2019, 2021, 2022) on the graph inserted by the author)

5.3 Artificial General Intelligence (AGI) vs. narrow AI

Informally, if a program, an agent, or a machine represents and/or embodies generalized human cognitive abilities and is capable of understanding, learning, and performing any intellectual task a human is capable of, it is considered to be AGI, or an AGI system/entity. Just like humans, when faced with an unfamiliar task, AGI can act in order to find a solution. It is expected to achieve complex goals in a variety of environments, while simultaneously learning and operating autonomously. AGI, or strong AI, is best understood as the original goal of AI as a discipline [39], as opposed to many current practical AI systems, called narrow AI, capable of performing specific tasks (e.g., self-driving cars, face recognition technology, and checkers-playing programs). Just like AI can be discussed both as an area of study and as an entity, AGI
can also be seen both as a theoretical and practical study of general, human-level intelligence, and as an engineered system that can display and exhibit the same general intelligence as humans.
Although currently AGI does not exist, there is a slowly growing research community around that idea. There is
also an open discussion between supporters and opponents of AGI. Some supporters believe that AGI will be real-
ized in this century [40]. On the other hand, opponents argue from philosophical and other points of view that AGI
cannot be realized [41].
The open discussion between supporters and opponents of AGI raises a lot of speculation about the develop-
ment of AGI to the extent that artificial superintelligence will surpass human intelligence and replace humans as the
dominant life form on Earth [42]. Researchers like McLean et al. and Naudé and Dimitri fear "an arms race for AGI" that
would be detrimental for humanity if the resulting AGI systems break out of control [43, 44] and analyze sources of
risk for such a scenario, proposing ways to mitigate these risks.
The attitude of the majority of the AI community when discussing AGI still remains reserved. Some researchers
propose to strive to create not full-fledged AGI systems but practical AI systems "compatible with human intelligence"
[45], i.e. "systems that effectively complement us" [29] and represent "the amplification of human intelligence" [45].
An important milestone on the path to achieving AGI would certainly be overcoming Moravec’s paradox [46]. It
refers to the fact that cognitive tasks that are difficult for humans can be relatively easy for computers, and vice versa.
"It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers,
and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" [46].
Meanwhile, much of AI remains narrow AI, meaning that current AI systems carry out specific ’intelligent’ behaviors
and perform specific practical tasks in specific contexts. There is nothing wrong with this approach—the history of
AI so far has seen a number of useful results, systems, and technologies that are all narrow AI. Moreover, intercon-
necting multiple narrow AI systems might lead to a higher-quality outcome [29], and narrow AI systems pose no
existential threat to humans [44].
5.4 Explainable AI vs. black‑box AI
In a recent philosophical study of intelligence [47], explanations and explainability are identified as important char-
acterizations of the epistemic distinctiveness of the notion of intelligence. Informally, an explanation is a clarification
of a certain phenomenon, process, procedure, or fact(s), and typically comes as a set of statements that describe the
causes, context, and consequences of the phenomenon/process/procedure/fact(s). Explainability itself is defined as
follows: given a certain audience, explainability refers to the details and reasons a model gives to make its functioning
clear or easy to understand [48]. Put more simply, explainability is a property of those AI systems that can provide a
form of explanation for their actions [13].
It was recognized long ago that many AI systems, in order to be useful and interactive, should be able to explain
their built-in knowledge, reasoning processes, results and the recommendations they make [49–53]. Remember,
though, that back in the day most of AI was focused on symbolic and logical reasoning, as well as that the field of
NLP was not as advanced as it is today. Thus the early explanation approaches and techniques all had their limita-
tions. Consequently, early AI systems could not carry on a convincing, explanatory dialog with the user in real-world
environments.
The interest in AI systems that can generate meaningful explanations of their behavior has revived with the AI explosion of the last decade. However, it has been accompanied by a considerable shift of focus, due to the rapid development and dominance of ML. In ML, especially in neural networks and deep learning models, the complexity of opaque internal representations is huge. Although these new models are more effective and exhibit much better performance than the early models, they are also characterized by reduced explainability. As Rodu and Baiocchi write, "some of the most successful algorithms are so complex that no person can describe the mathematical features of the algorithm" [54]. This has led to the rise of a whole movement in AI, called explainable AI or XAI [55–58], best reflected in DARPA's large project of the same name [59].
XAI is defined in a way similar to how explainability is defined: given an audience, an XAI is one that produces details or reasons to make its functioning clear or easy to understand [48]. In other words, an AI model is explainable if it is possible to trace (in detail and in a manner understandable to humans) its reasoning, its decision-making processes, its predictions and the actions it executes. Note the 'audience' part in these definitions; as Arya et al. stress, one explanation
does not fit all—each explanation generated by an XAI system should target specific users and should be tailored to their interests, background knowledge and preferences [60].
Also, explainability is not exactly the same concept as interpretability, which is the ability to explain or to provide the
meaning in terms understandable to a human; a model can be explained, but the interpretability of the model is some-
thing that comes from the design of the model itself [48, 61].
Wu notes an important detail here: simple models, like decision trees and logistic regression models, are interpret-
able—humans can directly trace how these models transform input to output by examining the model parameters [62].
In other words, there is no need to further explain such models; they are interpretable and self-explanatory.
On the other hand, there are models that are not straightforwardly interpretable to humans. We call them black-box models [58, 61–63]. Currently, these black-box models usually outperform interpretable models on unstructured data (like text and images), but there are cases when interpretable models perform on par with black-box models, e.g. in the case of modeling structured clinical data [61].
The current dominance of ML in AI is implicitly reflected in the previous paragraphs. Moreover, some researchers call the entire XAI field just explainable ML [61] or interpretable ML [63]. Techniques typically used in explainable ML include identification of feature importance/dominance, development of simple interpretable surrogate models based on the predictions of the black-box model, and various visualization techniques.
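As an illustration of the surrogate-model technique just mentioned, here is a minimal scikit-learn sketch (the data is synthetic and the model choices are invented): a shallow decision tree is fitted to the predictions of a black-box model, so the tree's human-readable rules approximate the black box's behavior:

```python
# Fit an interpretable surrogate to a black-box model's *predictions*.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
y_bb = black_box.predict(X)                 # black-box decisions, not true labels

surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_bb)
fidelity = surrogate.score(X, y_bb)         # how faithfully the tree mimics the box
print(f"surrogate fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate))               # human-readable decision rules
```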
The rivalry between black-box and explainable models is actually quite complex. "Explainability may not be vital in all domains, but its importance becomes underlined in domains with high stakes such as AI applications in medical diagnostics, autonomous vehicles, finance, or defense" [64]. Also, "black-box predictive models, which by definition are inscrutable, have led to serious societal problems that deeply affect health, freedom, racial bias, and safety…" [63]. Likewise, the inscrutability of black-box models can hamper users' trust in the system and eventually lead to rejection/abandonment of the system; "algorithmic biases [of black-box systems]… have led to large-scale discrimination based on race and gender in a number of domains ranging from hiring to promotions and advertising to criminal justice to healthcare" [58].
Loyola-González proposes a fusion of both explainable and black-box approaches in real-world domains, stressing that "experts in the application domain do not need to understand the inside of the applied model, but it is mandatory for experts in machine learning to understand this model due to, on most occasions, the model need to be tuned for obtaining accurate results" [65]. Petch et al. elaborate on this idea by proposing to train models using both black-box and interpretable techniques and then assess the accuracy of predictions [61]. In practice, there are a number of applications where experts are happy with post-hoc and/or just detailed enough explanations coming from the system—see, e.g., [66]. Still, situations also arise in practice when AI systems behave in ways unexplainable even to their creators, which can create important security and interpretability issues [67].
Although users generally prefer explainable and interpretable systems to black-box ones, the cognitive load needed to interpret the explanations provided by such systems can still hinder the benefits of the explanations. The task at hand must be difficult enough for the explanations to be beneficial for the users, and it is important to define measures of explanation effectiveness, bearing in mind that these can change over time [57, 64, 68].
6 Open problems in AI
Recently, several fields, topics and problems related to AI have been receiving a lot of attention in research publications and in the press, videos and interviews. They extend the identity of AI in their own way.
6.1 Energy consumption and carbon footprint of AI
Carbon footprint is a term often used in discussions about sustainability. Adapted from [69], it can be understood as the
total greenhouse gas emissions caused by activities of a person or an organization, or by an event, a service, a product,
etc. It is usually expressed as the carbon dioxide equivalent of these activities, events, processes and entities.
The carbon footprint of AI is primarily related to the fact that today's AI systems, and especially large-scale deep neural networks, consume a lot of power. For example, Strubell et al. have calculated that the carbon emissions of training one large NLP model are roughly equivalent to the carbon emissions of five cars over their entire lifetime [70]. Similarly, it is well known that the data centers that store data around the world consume huge amounts of energy [71], and much of today's AI relies on data.
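The arithmetic behind such estimates is simple: energy is hardware power multiplied by training time, inflated by data-center overhead, and emissions follow from the carbon intensity of the electricity grid. The sketch below shows this calculation; all numbers are invented placeholders, not figures from [70]:

```python
# Back-of-the-envelope training-emissions estimate (illustrative values only).
NUM_GPUS = 8
POWER_PER_GPU_W = 300          # assumed average board power draw, watts
TRAINING_HOURS = 24 * 14       # assumed two weeks of training
PUE = 1.5                      # assumed data-center power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.4      # assumed carbon intensity of the local grid

energy_kwh = NUM_GPUS * POWER_PER_GPU_W * TRAINING_HOURS * PUE / 1000
co2_kg = energy_kwh * GRID_KG_CO2_PER_KWH
print(f"energy: {energy_kwh:.0f} kWh, emissions: {co2_kg:.0f} kg CO2e")
```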
Still, as Schwartz et al. carefully note, these things are not just black and white—much of the recent computationally expensive AI development pushes the boundaries of AI [72]. Also, Patterson et al. predict that the energy demand of data storage and model training will plateau, and then shrink [73]. Hölzle explains the misconceptions that make projected power demands of ML models, like the one reported by Strubell et al., largely inflated [74].
Along these lines, Aimee van Wynsberghe defines sustainable AI as "a movement to foster change in the entire lifecycle of AI products (i.e. idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice" [75]. Note that the focus here is not only on AI systems and technology, but also includes a lot of social, environmental, ethical, and even political context [76].
6.2 Bias
"If the training data is biased, that is, it is not balanced or inclusive enough, the AI system trained on such data will not
be able to generalize well and will possibly make unfair decisions that can favor some groups over others. Recently the
AI community has been working on methods to detect and mitigate bias in training datasets and also in other parts of
an AI system" [13].
This general statement can have serious consequences in practical applications of AI. If there are prejudices in the training data, or if the developers of an AI system unconsciously introduce their cognitive biases into the algorithm design, the results will be biased and often unfair [77]. For example, it has been reported that OpenAI's GPT-3 language model [78] exhibits bias in terms of displaying strong associations between Muslims and violence [79]. There are many other examples of biased results in AI [77, 80–84].
An obvious question here is: Why not simply filter out prejudices and other kinds of bias from the training data? It turns out that this is often easier said than done, because such filtering can introduce unwanted elimination of data useful in other parts of the overall model [85]. It is also very difficult for developers not to embed subjective value judgments about what to prioritize in the algorithms, even if their intentions are good. Note also that if the training data is incomplete, there is a high risk of the data being inherently biased from the start. Moreover, as Dr. Sanjiv M. Narayan from Stanford University School of Medicine has commented in an interview [86], "All data is biased. This is not paranoia. This is fact." Finally, special care should be taken in cases when there are datasets other than the one used for training the model—the model designers must ensure that the trained model generalizes well to the other, external datasets [87, 88].
Another obvious question is: How can bias in AI be avoided or fixed, i.e. how can it be kept out of AI tools? It is usually not possible to do this completely, since the so-called 'static assumption'—that the data does not change over time and that all biases show up before the system is put into actual use—is typically not realistic. However, ensuring that the training data is representative of the target application context and audience, building multiple versions of the model with multiple datasets, conducting data subset analysis (ensuring that the model performance is identical across different subsets of the training data), as well as updating the datasets over time and re-training the model, certainly mitigates the problem [86]. The already mentioned 'data-centric' approach [36] is also helpful in reducing the initial bias to an acceptable minimum. There are also automated tools like IBM's AI Fairness 360 library (open-sourced through GitHub) and Watson OpenScale, as well as Google's What-If Tool, that can help spot biases early, visualize model behavior over multiple datasets, and mitigate risks.
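As a minimal illustration of the data subset analysis mentioned above, the following sketch compares a trained model's behavior across subgroups defined by a sensitive attribute; the dataset and the group variable are synthetic and invented for this purpose:

```python
# Compare model accuracy and positive-prediction rate across subgroups.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))  # hypothetical sensitive attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

for g in (0, 1):
    mask = g_te == g
    acc = model.score(X_te[mask], y_te[mask])
    rate = model.predict(X_te[mask]).mean()   # share of positive predictions
    print(f"group {g}: accuracy={acc:.2f}, positive rate={rate:.2f}")
# Large gaps between groups in either number flag a potential bias problem.
```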
Schwartz et al. have provided the most systematic and comprehensive study to date of bias in AI and how to mitigate it [89].
6.3 AI ethics
The AI carbon footprint and AI bias are currently probably the hottest topics in the more general context of AI ethics or ethical AI. Frequently used related terms are responsible AI and trustworthy AI, and, more recently, sustainable AI. There is a great deal of overlap among these terms and how they are used in the literature, but there are some differences as well. Table 2 illustrates this point.
The concept of AI ethics refers to a system of moral guidelines and principles (see Table 2) designed to make AI development and deployment safe, fair, responsible, and accountable [76, 90–94].
AI ethics has come into focus in recent years with the advances of AI and its integration with mainstream IT products, as concerns have arisen over AI's potentially harmful effects and misuse in decision-making, employment and labour, social interaction, health care, education, media, access to information, digital divide, personal data and consumer
protection, environment, democracy, rule of law, security and policing, dual use, and human rights and fundamental
freedoms, including freedom of expression, privacy and non-discrimination [94].
To operationalize AI ethical principles, companies should identify existing infrastructure that a data and AI ethics
program can leverage, create a data and AI ethical risk framework tailored to the company’s industry, change the views
on ethics by taking cues from the successes in health care, optimize guidance and tools for product managers, build
organizational awareness, formally and informally incentivize employees to play a role in identifying AI ethical risks, and
monitor impacts and engage stakeholders [99].
Responsible AI is a somewhat narrower and more concise concept than ethical AI (AI ethics), although it is often used as a synonym for ethical AI. It denotes a set of principled and actionable norms to ensure organizations develop and deploy AI responsibly [95]. This boils down to assessing and explaining the social implications of using AI systems, guaranteeing fairness in AI development by eliminating biased data, processes and decisions, protecting data and users' privacy, documenting AI systems in detail and informing the users that they are interacting with AI, enforcing AI system security (preventing attacks and unwanted changes in system behavior), ensuring transparency (interpretability, explainability) of AI systems, and complying with inclusiveness standards [95, 100].
Trustworthy AI is another similar term, used to denote AI that is not only ethically sound, but is also robust, resilient, lawful and characterized by a high level of trust throughout its lifecycle (design, development, testing, deployment, maintenance) [97, 101]. Li et al. stress the interdisciplinary nature of trustworthy AI, and Thiebes et al. discuss key aspects of different pertinent frameworks and guidelines for achieving trustworthy AI [97].
Sustainable AI is a relatively new research topic in the field of AI ethics. It focuses on developing and deploying AI systems in a way compatible with sustaining environmental resources, economic models and societal values [75]. It includes measuring AI carbon footprints and the energy consumption needed for training algorithms, and addressing the tension between innovation in AI and general sustainable development goals, while serving the needs of society at large.
Many authors stress that it is people who develop AI systems, and if something goes wrong the problem is not really
AI—it is the human factor [102–104].
6.4 Fear of AI (AI anxiety)
The recent huge wave of AI development and its integration with almost every aspect of modern living has also caused fear/anxiety in many people. This fear is multi-dimensional. Li and Huang have identified eight dimensions of AI anxiety [105]:
1. privacy violation anxiety—disclosure or misuse of private data stored in datasets, obtained by biometric devices, collected from surveillance cameras, etc.
2. bias behavior anxiety—discrimination against individuals or groups by AI systems
3. job replacement anxiety—worry about being replaced by AI systems or entities at workplaces
4. learning anxiety—perception of AI as being difficult to learn (as it becomes a must to learn)
5. existential risk anxiety—fear that all intelligent life on Earth will be destroyed by AI
6. ethics violation anxiety—fear that the behavior of AI entities may violate rules of human ethics in interaction with humans
7. artificial consciousness anxiety—fear that AI entities may develop human-like consciousness that will eventually set them apart from humans and give rise to a new, artificially created species
8. lack of transparency anxiety—discomfort about the opacity of AI training and decision-making processes.
Many of these anxieties are caused by negative propaganda, often spread by insufficiently informed individuals and doomsayers, or by exaggerated and as-yet unverified claims appearing in the popular press (of the 'AGI is here!' kind, e.g., [42]). This is supplemented by widespread skepticism caused by AI failures [106]. The controversy here is that some scholars and influential people express their concerns about the future of AI much akin to these anxieties, while others oppose this kind of fear and argue for acceptance of AI because of the benefits it brings to humans [107, 108]. The debate is still open.
Fear and anxiety are much studied in psychology, hence research on AI anxiety is often conducted by psychologists in the context of different personality traits [109], fear of loneliness, of drone use, of autonomous vehicles, and of being unemployed, all of it elevated by media exposure to science fiction [110–113].
Table 2 AI ethics: related concepts and terms. There are differences in the views of different authors on the principles of AI ethics.

Concept: Ethical AI
Description: A system of moral guidelines and principles of AI development and deployment
Principles: Transparency, justice and fairness, non-maleficence, responsibility, privacy (in addition, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity) [92]; Beneficence, non-maleficence, autonomy, justice, and explicability [91]; Privacy, manipulation, opacity, bias, human-robot interaction, employment, the effects of autonomy, machine ethics, artificial moral agency, and AI superintelligence (AGI) [93]; Proportionality and do no harm, safety and security, fairness and non-discrimination, sustainability, right to privacy and data protection, human oversight and determination, transparency and explainability, responsibility and accountability, awareness and literacy, and multi-stakeholder and adaptive governance and collaboration [94]

Concept: Responsible AI
Description: A set of principled and actionable norms to ensure organizations develop and deploy AI responsibly
Principles: Accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness [95]

Concept: Trustworthy AI
Description: Ethically sound, technically robust, resilient, and lawful AI, built with trust throughout its lifecycle
Principles: Robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability [96]; Beneficence, non-maleficence, autonomy, justice, and explicability [97]; Reliability, safety, security, privacy, availability, usability, accuracy, robustness, fairness, accountability, transparency, interpretability/explainability, ethical data collection and use of the system outcome, and more (yet to be defined) [98]

Concept: Sustainable AI
Description: AI developed and deployed in a way compatible with sustaining environmental resources, economic models and societal values
Principles: Environmental protection, reduced carbon footprints, reduced energy consumption, protecting people, legal frameworks [75]
7 Discussion—the big picture
The current identity of AI, emerging from the reviews and insights presented in the previous sections, is summarized in Table 3. This big picture is still vague, and will require periodic updating and extensions over time due to the dynamics of the entire area of AI. It is also subject to different interpretations from different experts and other stakeholders, as they all focus on different aspects of AI. Still, it provides a general sense of what researchers, practitioners, educators, developers, end users, analysts, organizations, and policy makers have in mind when they use the term 'AI'. These different views and perspectives are unlikely to get completely unified, so their blend shown in Table 3 is not definite and is more like the current track record of the identity of AI.

The left column in Table 3 indicates the elements of the current identity of AI that have emerged in this research effort as distinguishing characteristics, qualities and themes of AI as an area of study. The right column is a condensed recap of the major approaches related to each specific characteristic, such that they contradict each other to an extent and thus make the current identity of AI somewhat indistinct.
To shed yet more light on the big picture depicted in Table 3, one should keep in mind several other facts. Human-level intelligence, consciousness and general cognitive capabilities of machines are not yet in sight, but nevertheless continue to present challenges to researchers. An intriguing observation to this end is the fact that children surpass even the most sophisticated AI systems in several areas and everyday tasks, like social learning, exploratory learning, telling relevant information from irrelevant noise, and being curious about the right kinds of things [1, 114]. This is not to say that humans surpass AI systems in all tasks; successful AI achievements like large language models and game-playing applications are certainly better than humans in terms of coping with the complexities of the corresponding domains. However, scientists are still polarized about the idea of creating AGI without consciousness and its own goals and survival instincts [45].
In addition, the disagreements about different aspects of the identity of AI shown in Table 3 are unlikely to be resolved soon, because developments in AI and in other relevant areas are interdependent. For example, neuroscience and cognitive psychology still cannot tell how human consciousness is created. Human reasoning is itself still a black box, and much of it does not happen at a conscious level; hence, XAI models can at best try to imitate human explanation capabilities in a convincing way. When it comes to trust, human verification of AI system decisions is still desirable and, in order to avoid subjectivity, should be based on objective performance measures.
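To make the notion of an objective, post-hoc explanation more concrete, the following minimal Python sketch (an illustration only, not a method proposed in this paper) estimates permutation feature importance for an arbitrary black-box classifier: it measures how much the model's test accuracy drops when the relationship between one input feature and the target is destroyed. The synthetic dataset and the random forest model are placeholder assumptions.

```python
# Minimal sketch of a model-agnostic post-hoc explanation:
# permutation feature importance. Dataset and model are placeholders.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A synthetic classification task stands in for any "black-box" setting.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)  # objective performance measure

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    # Destroy feature j's relationship to the target by permuting it.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - model.score(X_perm, y_test)
    print(f"feature {j}: accuracy drop when permuted = {drop:.3f}")
```

Such scores do not reveal how the model reasons internally; they only provide a repeatable, objective proxy for which inputs its decisions rely on, which is exactly the kind of measure the trust discussion above calls for.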
Finally, it happens that AI techniques that have fallen out of focus return in new technological contexts. Knowledge graphs [115, 116] are one example; they are a natural continuation of earlier techniques like semantic networks, ontologies, and RDF graphs. While researchers and practitioners today know only from experience that deep neural networks can achieve great results in certain contexts [117], the details of their operation are not fully understood; knowledge graphs, by contrast, are a much better understood and much more interpretable AI technology that continues to develop. Their graph-based structures are suitable for describing the semantics of interlinked entities using the basic object-attribute-value model. This model is certainly limited in terms of the types of knowledge it can represent and has other deficiencies as well [118], but, combined with, e.g., a trained NLP model, it facilitates different tasks in question answering systems based on knowledge graphs (a minimal sketch follows below). Another example is embedded AI, or embedded intelligence [119], an AI topic area focused on the deployment of different AI techniques on edge computing devices. AI applications can be implemented on cloud servers and then run by edge and mobile devices, but recent developments in hardware technologies have enabled partial implementation of AI on edge devices as well.
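As a hedged illustration of the object-attribute-value model mentioned above, the sketch below stores a toy knowledge graph as a list of triples and answers a question by pattern matching. The entities, relations, and the hard-coded question-to-pattern mapping are hypothetical; in a real system a trained NLP model would produce that mapping from the natural-language question.

```python
# Minimal sketch of the object-attribute-value (triple) model behind
# knowledge graphs; entities, relations and the question are hypothetical.

from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]

triples: List[Triple] = [
    ("Ada Lovelace", "occupation", "mathematician"),
    ("Ada Lovelace", "born_in", "London"),
    ("London", "capital_of", "United Kingdom"),
]

def match(obj: Optional[str], attr: Optional[str],
          value: Optional[str]) -> List[Triple]:
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (o, a, v) for (o, a, v) in triples
        if (obj is None or o == obj)
        and (attr is None or a == attr)
        and (value is None or v == value)
    ]

# A knowledge-graph QA system would map a question such as
# "Where was Ada Lovelace born?" to a pattern like the one below
# (typically via a trained NLP model); here it is hard-coded.
answers = [v for (_, _, v) in match("Ada Lovelace", "born_in", None)]
print(answers)  # ['London']
```

The interpretability advantage noted above is visible even in this toy: every answer can be traced back to an explicit triple, whereas a neural model offers no such direct justification.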
8 Conclusions

Is AI going to resolve the many different views, approaches and disagreements among its stakeholders? Will it continue to flourish and embed itself in different products, services, tools and technologies, enhancing almost all aspects of human lives? Or is it going to gradually disintegrate into predictive analytics, data science and engineering, statistics, and other related fields? Or, on the contrary, is it perhaps going to evolve into AGI, compatible with human intelligence? Or will it possibly spin out of control, go rogue, and disobey humans and their ethical norms?

Whatever the scenario, the identity of AI (where AI is seen as an area of study) is currently blurry and vague. AI has changed dramatically, from its early focus on logic, heuristic search, reasoning, cognitive modeling, knowledge
representation and games, to predictive analytics, object recognition, deep learning, large language models, self-driving vehicles, sophisticated robotics, convincing applications and much more. It is nowadays discussed not only by researchers and practitioners, but also by business people, social scientists, government organizations, informed journalists and other stakeholders. All of them have their own views and understanding of AI, which further diffuses the notion of the identity of AI and its defining characteristics and beliefs.
Complex deep learning systems underlying narrowly focused applications are often superior to humans in the corresponding tasks, with the drawback that they can "make a perfect chess move while the room is on fire". Conversely, children outperform complex AI systems in many simple tasks. This is just one of many controversies surrounding AI and making its identity indistinct. The current discourse of AI research and development focuses predominantly on extending its limits and improving performance, rarely depicting such controversies of the intensive dynamics of AI in a comprehensive way.
Table 3 Current identity of AI. Each entry pairs a characteristic/element (a characteristic, quality, belief, etc. important for the notion of identity of AI) with a summary of the related comments, controversies, and disagreements.

A. Definition—the definition and understanding of AI as an area of study: A number of definitions exist, both of intelligence and of AI, with no consensus on which one is representative enough. Existing definitions are incomplete and human-centric. Formal definitions are scarce. Cognitive biases of different authors prevent consensus, and all existing tests of how intelligent an AI system is are subject to strong criticism. Working definitions come to the rescue, as they are proposed to serve different current research contexts.

B. Scope—the scope and coverage of AI, taking into account overlaps with other related fields: Machine learning, and especially neural networks, currently dominates the entire area of AI. Traditional AI fields, such as representation and reasoning, cognitive modeling, games and computer vision, are still around. NLP and robotics are very popular and currently rely largely on ML models. There is a notable overlap of AI with data science and predictive analytics, as both of these areas implicitly express "proprietary rights" over ML. Statistics, neurocomputing, pattern recognition, and other fields also partially overlap with AI.

C. Autonomous behavior—the degree to which an AI system can run independently: Autonomous robots and self-driving vehicles have advanced to an impressive level. Sensor and communication technologies have become essential elements of these systems. Still, much of AI is human-driven and goal-directed. Integrative (human-in-the-loop) approaches are being proposed to keep the best of both worlds.

D. Stakeholders' views—the different views of AI coming from different stakeholders: Researchers typically come up with ideas and experiments that push the boundaries of AI. Industry tends to build them into products only if the innovations proposed by researchers are profitable. Industrial applications rely on training models with huge datasets, whereas researchers have started to turn their attention to data-centric approaches with smaller datasets of quality data. Business analysts such as The Gartner Group regularly update research and development trends in AI; that perspective can be used to reconcile and harmonize the somewhat opposing approaches of academia and industry. 'AI' is overused as a marketing term. Government bodies and policy makers gradually develop legal documents to support AI development at the strategic level.

E. Explainability—the explainability and interpretability of AI systems and their behavior: The need for explainability and interpretability is evident and is often expressed in the context of the opaqueness of deep learning models. Black-box models typically perform better than explainable models. The XAI movement has not yet achieved all of its envisioned results. Tailored, user-centered explanations are still a focus of relevant research groups and applications. Hybrid approaches are suggested in which both explainable and black-box models are used, depending on the objectives and on specific user groups.

F. Ethics—guidelines and principles for developing and deploying AI systems according to the norms of human and social ethics: There is a great deal of overlap between the terms AI ethics, trustworthy AI, responsible AI, and sustainable AI. Concerns about transparency, bias, fairness, energy consumption, carbon footprint, user privacy, accountability, social responsibility, reliability and lawfulness have received a lot of attention in AI developments in recent years.

G. Other—to be maintained over time, as the area of AI advances: Currently, AGI and AI anxiety are among the topics that receive considerable attention from psychologists, philosophers, and social scientists. AGI still remains a vision, although from time to time the general press publishes excited claims that new developments are on the verge of AGI. In the meantime, interconnected multiple narrow AI systems seem to be more realistic.
The rise of topics covered in the field of ethical AI further exposes these controversies in the context of the entire lifecycle of an AI system. The need to resolve the controversies and disagreements calls for discussing the dynamic concept of the identity of AI.
None of the opposing and mutually inconsistent views discussed in this paper is meant to downplay or diminish the tremendous importance of AI for human society and prosperity. They merely illustrate the current fuzziness of the characteristics, qualities and beliefs related to everything that people label 'AI'. After all, even if all the controversies, disagreements, anxieties, and hype are removed, one thing certainly remains—an extremely useful and exciting technology.
Acknowledgements The study presented in this paper is part of the project Data analysis in selected domains, funded by the Serbian Academy of Sciences and Arts (Grant No. F-150, 2022).
Author contributions Vladan Devedzic (corresponding author) is the only author of this article and of the study presented. The author read and approved the final manuscript.
Declarations
Competing interests The author declares no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
1. Abrams Z. The promise and challenges of AI. Monitor. 2021;52:62.
2. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Inf Libr J. 2009;26:91–108.
3. AI Topics. What is Artificial Intelligence? Assoc Adv Artif Intell. https://aitopics.org/search. Accessed 8 Aug 2022.
4. Martinez R. Artificial Intelligence: distinguishing between types & definitions. Nev Law J. 2019;19:1015–42.
5. Legg S, Hutter M. Universal intelligence: a definition of machine intelligence. Minds Mach. 2007;17:391–444.
6. Oxford Reference. Artificial intelligence. Oxf Ref. https://doi.org/10.1093/oi/authority.20110803095426960.
7. Russell SJ, Norvig P. Artificial intelligence: a modern approach. 4th ed., global ed. Harlow: Pearson; 2022.
8. Legg S, Hutter M. A collection of definitions of intelligence. arXiv; 2007. http://arxiv.org/abs/0706.3639. Accessed 8 Aug 2022.
9. Faggella D. What is artificial intelligence? An informed definition. Emerj Artif Intell Res. 2018. https://emerj.com/ai-glossary-terms/what-is-artificial-intelligence-an-informed-definition/. Accessed 8 Aug 2022.
10. Marsden P. Artificial intelligence defined: useful list of popular definitions from business and science. digitalwellbeing.org. 2017. https://digitalwellbeing.org/artificial-intelligence-defined-useful-list-of-popular-definitions-from-business-and-science/. Accessed 9 Aug 2022.
11. Russell S. Rationality and intelligence: a brief update. In: Müller VC, editor. Fundam Issues Artif Intell. Cham: Springer International Publishing; 2016. p. 7–28. https://doi.org/10.1007/978-3-319-26485-1_2.
12. Monett D, Hoge L, Lewis CWP. Cognitive biases undermine consensus on definitions of intelligence and limit understanding. CEUR Workshop Proc. CEUR; 2019. p. 52–9. http://ceur-ws.org/Vol-2452/paper8.pdf. Accessed 9 Aug 2022.
13. AI HLEG. A definition of Artificial Intelligence: main capabilities and scientific disciplines. High-Level Expert Group on Artificial Intelligence (AI HLEG); 2019. https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines
14. Turing AM. Computing machinery and intelligence. Mind. 1950;LIX:433–60.
15. Smith G. Turing tests are terribly misleading. Mind Matters. 2022. https://mindmatters.ai/2022/05/turing-tests-are-terribly-misleading/. Accessed 12 Aug 2022.
16. Loukides M. Artificial intelligence? O'Reilly Media. 2015. https://www.oreilly.com/radar/artificial-intelligence-human-inhuman/. Accessed 8 Aug 2022.
17. Lorica B, Loukides M. What is artificial intelligence? O'Reilly Media. 2016. https://www.oreilly.com/radar/what-is-artificial-intelligence/. Accessed 8 Aug 2022.
18. Dvorsky G. 8 possible alternatives to the Turing test. Gizmodo. 2015. https://gizmodo.com/8-possible-alternatives-to-the-turing-test-1697983985. Accessed 8 Aug 2022.
19. Searle JR. Minds, brains, and programs. Behav Brain Sci. 1980;3:417–24.
20. Wang P. On defining artificial intelligence. J Artif Gen Intell. 2019;10:1–37.
21. Marche S. Google's AI is something even stranger than conscious. The Atlantic. 2022. https://www.theatlantic.com/technology/archive/2022/06/google-palm-ai-artificial-consciousness/661329/. Accessed 9 Aug 2022.
22. Gottfredson L. Mainstream science on intelligence: an editorial with 52 signatories. Intelligence. 1997;24:13–23.
23. Jiang Y, Li X, Luo H, Yin S, Kaynak O. Quo vadis artificial intelligence? Discov Artif Intell. 2022;2:4.
24. Al-Mushayt OS. Automating E-government services with artificial intelligence. IEEE Access. 2019;7:146821–9.
25. Chatterjee M. Data science vs machine learning and artificial intelligence. Great Learning Blog. 2020. https://www.mygreatlearning.com/blog/difference-data-science-machine-learning-ai/. Accessed 10 Aug 2022.
26. Mahadevan S. How is statistical learning different from machine learning? Quora. 2018. https://www.quora.com/How-is-Statistical-Learning-different-from-Machine-Learning. Accessed 9 Aug 2022.
27. Grewal PDS. A critical conceptual analysis of definitions of artificial intelligence as applicable to computer engineering. IOSR J Comput Eng. 2014;16:09–13.
28. Chen J. Editorial—autonomous intelligent systems. Auton Intell Syst. 2021;1:1.
29. Korteling JE (Hans), van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC, Eikelboom AR. Human- versus artificial intelligence. Front Artif Intell. 2021;4:622364.
30. Totschnig W. Fully autonomous AI. Sci Eng Ethics. 2020;26:2473–85.
31. Cunneen M, Mullins M, Murphy F. Autonomous vehicles and embedded artificial intelligence: the challenges of framing machine driving decisions. Appl Artif Intell. 2019;33:706–31.
32. Marr B. Human vs. artificial intelligence: why finding the right balance is key to success. Forbes. 2022. https://www.forbes.com/sites/bernardmarr/2022/05/30/human-vs-artificial-intelligence-why-finding-the-right-balance-is-key-to-success/. Accessed 9 Aug 2022.
33. Zhou J, Chen F. Towards humanity-in-the-loop in AI lifecycle. In: Chen F, Zhou J, editors. Humanity driven AI. Cham: Springer International Publishing; 2022. p. 3–13. https://doi.org/10.1007/978-3-030-72188-6_1.
34. Waddell K. "AI washing" threatens to overinflate expectations for the technology. Axios. 2019. https://www.axios.com/2019/11/16/ai-washing-hidden-people. Accessed 9 Aug 2022.
35. Gartner. Gartner hype cycle research methodology. Gartner. https://www.gartner.com/en/research/methodologies/gartner-hype-cycle. Accessed 8 Aug 2022.
36. Strickland E. Andrew Ng: Unbiggen AI. IEEE Spectr. 2022. https://spectrum.ieee.org/andrew-ng-data-centric-ai. Accessed 9 Aug 2022.
37. Kosoy E, Collins J, Chan DM, Huang S, Pathak D, Agrawal P, et al. Exploring exploration: comparing children with RL agents in unified environments. arXiv; 2020. http://arxiv.org/abs/2005.02880. Accessed 8 Aug 2022.
38. McCorduck P. Machines who think: a personal inquiry into the history and prospects of artificial intelligence. 25th anniversary update. Natick: A.K. Peters; 2019.
39. Goertzel B, Pennachin C, editors. Artificial general intelligence. Berlin, New York: Springer; 2011.
40. Müller VC, Bostrom N. Future progress in artificial intelligence: a survey of expert opinion. In: Müller VC, editor. Fundam Issues Artif Intell. Cham: Springer International Publishing; 2016. p. 555–72. https://doi.org/10.1007/978-3-319-26485-1_33.
41. Fjelland R. Why general artificial intelligence will not be realized. Humanit Soc Sci Commun. 2020;7:10.
42. Cuthbertson A. 'The game is over': Google's DeepMind says it is close to achieving human-level AI. The Independent. 2022. https://www.independent.co.uk/tech/ai-deepmind-artificial-general-intelligence-b2080740.html. Accessed 13 Aug 2022.
43. McLean S, Read GJM, Thompson J, Baber C, Stanton NA, Salmon PM. The risks associated with artificial general intelligence: a systematic review. J Exp Theor Artif Intell. 2021;1–15.
44. Naudé W, Dimitri N. The race for an artificial general intelligence: implications for public policy. AI Soc. 2020;35:367–79.
45. Dickson B. Meta's Yann LeCun strives for human-level AI. VentureBeat. 2022. https://venturebeat.com/2022/03/21/metas-yann-lecun-strives-for-human-level-ai/. Accessed 8 Aug 2022.
46. Moravec H. Mind children: the future of robot and human intelligence. 4th ed. Cambridge: Harvard Univ Press; 2010.
47. Coelho Mollo D. Intelligent behaviour. Erkenntnis. 2022. https://doi.org/10.1007/s10670-022-00552-8.
48. Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion. 2020;58:82–115.
49. Chandrasekaran B, Tanner MC, Josephson JR. Explaining control strategies in problem solving. IEEE Expert. 1989;4:9–15.
50. Clancey WJ. The epistemology of a rule-based expert system—a framework for explanation. Artif Intell. 1983;20:215–51.
51. Swartout WR. XPLAIN: a system for creating and explaining expert consulting programs. Artif Intell. 1983;21:285–325.
52. Swartout WR, Paris C, Moore JD. Explanations in knowledge systems: design for explainable expert systems. IEEE Expert. 1991;6:58–64.
53. Swartout WR, Moore JD. Explanation in second generation expert systems. In: David J-M, Krivine J-P, Simmons R, editors. Second Gener Expert Syst. Berlin, Heidelberg: Springer; 1993. p. 543–85. https://doi.org/10.1007/978-3-642-77927-5_24.
54. Rodu J, Baiocchi M. When black box algorithms are (not) appropriate: a principled prediction-problem ontology. arXiv; 2021. http://arxiv.org/abs/2001.07648. Accessed 9 Aug 2022.
55. Borrego-Díaz J, Galán-Páez J. Explainable artificial intelligence in data science. Minds Mach. 2022. https://doi.org/10.1007/s11023-022-09603-z.
56. Buijsman S. Defining explanation and explanatory depth in XAI. Minds Mach. 2022. https://doi.org/10.1007/s11023-022-09607-9.
57. Mueller ST, Veinott ES, Hoffman RR, Klein G, Alam L, Mamun T, et al. Principles of explanation in human-AI systems. arXiv; 2021. http://arxiv.org/abs/2102.04972. Accessed 9 Aug 2022.
58. Rai A. Explainable AI: from black box to glass box. J Acad Mark Sci. 2020;48:137–41.
59. Gunning D, Vorm E, Wang JY, Turek M. DARPA's explainable AI (XAI) program: a retrospective. Appl AI Lett. 2021. https://doi.org/10.1002/ail2.61.
60. Arya V, Bellamy RKE, Chen P-Y, Dhurandhar A, Hind M, Hoffman SC, et al. One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv; 2019. http://arxiv.org/abs/1909.03012. Accessed 8 Aug 2022.
61. Petch J, Di S, Nelson W. Opening the black box: the promise and limitations of explainable machine learning in cardiology. Can J Cardiol. 2022;38:204–13.
62. Wu M. Explainable AI: looking inside the black box. AiThority. 2021. https://aithority.com/machine-learning/reinforcement-learning/explainable-ai-looking-inside-the-black-box/. Accessed 9 Aug 2022.
63. Rudin C, Chen C, Chen Z, Huang H, Semenova L, Zhong C. Interpretable machine learning: fundamental principles and 10 grand challenges. Stat Surv. 2022. https://doi.org/10.1214/21-SS133.
64. Kaul N. 3Es for AI: economics, explanation, epistemology. Front Artif Intell. 2022;5:32–8.
65. Loyola-González O. Black-box vs. white-box: understanding their advantages and weaknesses from a practical point of view. IEEE Access. 2019;7:154096–113.
66. Janssen FM, Aben KKH, Heesterman BL, Voorham QJM, Seegers PA, Moncada-Torres A. Using explainable machine learning to explore the impact of synoptic reporting on prostate cancer. Algorithms. 2022;15:49.
67. Daras G, Dimakis AG. Discovering the hidden vocabulary of DALLE-2. arXiv; 2022. http://arxiv.org/abs/2206.00169. Accessed 13 Aug 2022.
68. Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell. 2019;267:1–38.
69. Wiedmann T, Minx J. A definition of 'carbon footprint'. In: Pertsova CC, editor. Ecol Econ Res Trends. New York: Nova Science Publishers; 2008. p. 1–11.
70. Strubell E, Ganesh A, McCallum A. Energy and policy considerations for deep learning in NLP. arXiv; 2019. http://arxiv.org/abs/1906.02243. Accessed 9 Aug 2022.
71. Dhar P. The carbon impact of artificial intelligence. Nat Mach Intell. 2020;2:423–5.
72. Schwartz R, Dodge J, Smith NA, Etzioni O. Green AI. Commun ACM. 2020;63:54–63.
73. Patterson D, Gonzalez J, Holzle U, Le Q, Liang C, Munguia L-M, et al. The carbon footprint of machine learning training will plateau, then shrink. Computer. 2022;55:18–28.
74. The carbon footprint of Machine Learning | ALMD Keynote Session. YouTube; 2022. https://www.youtube.com/watch?v=gAKG1n1u_aI. Accessed 8 Aug 2022.
75. van Wynsberghe A. Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics. 2021;1:213–8.
76. Tamburrini G. The AI carbon footprint and responsibilities of AI scientists. Philosophies. 2022;7:4.
77. Cowgill B, Dell'Acqua F, Deng S, Hsu D, Verma N, Chaintreau A. Biased programmers? Or biased data? A field experiment in operationalizing AI ethics. In: Proc 21st ACM Conf Econ Comput. New York: Association for Computing Machinery; 2020. p. 679–81. https://doi.org/10.1145/3391403.3399545. Accessed 8 Aug 2022.
78. Floridi L, Chiriatti M. GPT-3: its nature, scope, limits, and consequences. Minds Mach. 2020;30:681–94.
79. Abid A, Farooqi M, Zou J. Large language models associate Muslims with violence. Nat Mach Intell. 2021;3:461–3.
80. Cooper A. Police departments adopting facial recognition tech amid allegations of wrongful arrests. CBS News. 2021. https://www.cbsnews.com/news/facial-recognition-60-minutes-2021-05-16/. Accessed 8 Aug 2022.
81. Dastin J. Amazon scraps secret AI recruiting tool that showed bias against women. In: Martin K, editor. Ethics of data and analytics: concepts and cases. 1st ed. Boca Raton: Auerbach Publications; 2022. p. 299–302.
82. Kharbat FF, Alshawabkeh A, Woolsey ML. Identifying gaps in using artificial intelligence to support students with intellectual disabilities from education and health perspectives. Aslib J Inf Manag. 2020;73:101–28.
83. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447–53.
84. Yan S. Algorithms are not bias-free: four mini-cases. Hum Behav Emerg Technol. 2021;3:1180–4.
85. Solaiman I, Dennison C. Process for adapting language models to society (PALMS) with values-targeted datasets. arXiv; 2021. http://arxiv.org/abs/2106.10328. Accessed 9 Aug 2022.
86. Siwicki B. How AI bias happens—and how to eliminate it. Healthcare IT News. 2021. https://www.healthcareitnews.com/news/how-ai-bias-happens-and-how-eliminate-it. Accessed 9 Aug 2022.
87. Feeny AK, Chung MK, Madabhushi A, Attia ZI, Cikes M, Firouznia M, et al. Artificial intelligence and machine learning in arrhythmias and cardiac electrophysiology. Circ Arrhythm Electrophysiol. 2020;13:e007952.
88. John MM, Banta A, Post A, Buchan S, Aazhang B, Razavi M. Artificial intelligence and machine learning in cardiac electrophysiology. Tex Heart Inst J. 2022;49:e217576.
89. Schwartz R, Vassilev A, Greene K, Perine L, Burt A, Hall P. Towards a standard for identifying and managing bias in artificial intelligence. National Institute of Standards and Technology; 2022. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
90. Coeckelbergh M. AI ethics. Cambridge: The MIT Press; 2020.
91. Floridi L, Cowls J. A unified framework of five principles for AI in society. Harv Data Sci Rev. 2019. https://hdsr.mitpress.mit.edu/pub/l0jsh9d1. Accessed 8 Aug 2022.
92. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1:389–99.
93. Müller VC. Ethics of artificial intelligence and robotics. In: Zalta EN, editor. Stanf Encycl Philos. Summer 2021 ed. Metaphysics Research Lab, Stanford University; 2021. https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/. Accessed 9 Aug 2022.
94. UNESCO. Recommendation on the ethics of artificial intelligence. UNESCO. 2020. https://en.unesco.org/artificial-intelligence/ethics. Accessed 9 Aug 2022.
95. Microsoft. Microsoft responsible AI standard, v2. Microsoft Corp. https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf. Accessed 9 Aug 2022.
96. Li B, Qi P, Liu B, Di S, Liu J, Pei J, et al. Trustworthy AI: from principles to practices. arXiv; 2022. http://arxiv.org/abs/2110.01167. Accessed 8 Aug 2022.
97. Thiebes S, Lins S, Sunyaev A. Trustworthy artificial intelligence. Electron Mark. 2021;31:447–64.
98. Wing JM. Trustworthy AI. Commun ACM. 2021;64:64–71.
99. Blackman R. A practical guide to building ethical AI. Harv Bus Rev. 2020. https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai. Accessed 8 Aug 2022.
100. Ghallab M. Responsible AI: requirements and challenges. AI Perspect. 2019;1:3.
101. Ammanath B. Trustworthy AI: a business guide for navigating trust and ethics in AI. 1st ed. Hoboken: Wiley; 2022.
102. Ciampaglia GL, Mantzarlis A, Maus G, Menczer F. Research challenges of digital misinformation: toward a trustworthy web. AI Mag. 2018;39:65–74.
103. Demartini G, Mizzaro S, Spina D. Human-in-the-loop artificial intelligence for fighting online misinformation: challenges and opportunities. Bull Tech Comm Data Eng. 2020;43:65–74.
104. Romero A. AI has an invisible misinformation problem. Medium. 2022. https://albertoromgar.medium.com/ai-has-an-invisible-misinformation-problem-4593df3f35ce. Accessed 9 Aug 2022.
105. Li J, Huang J-S. Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technol Soc. 2020;63:101410.
106. Olson P. The promise of artificial intelligence hasn't borne fruit in health tech. Moneycontrol. 2022. https://www.moneycontrol.com/news/opinion/the-promise-of-artificial-intelligence-hasnt-borne-fruit-in-health-tech-8492191.html. Accessed 9 Aug 2022.
107. Hosseinpour H. Disobedience of AI: threat or promise. Inf Társad. 2020;20:48.
108. Metzinger T, Bentley PJ, Häggström O, Brundage M. Should we fear artificial intelligence? European Parliament; 2018. https://www.europarl.europa.eu/RegData/etudes/IDAN/2018/614547/EPRS_IDA(2018)614547_EN.pdf. Accessed 8 Aug 2022.
109. Sindermann C, Yang H, Elhai JD, Yang S, Quan L, Li M, et al. Acceptance and fear of artificial intelligence: associations with personality in a German and a Chinese sample. Discov Psychol. 2022;2:8.
110. Kalra N, Groves DG. The enemy of good: estimating the cost of waiting for nearly perfect automated vehicles. RAND Corporation; 2017. https://www.rand.org/pubs/research_reports/RR2150.html
111. Liang Y, Lee SA. Fear of autonomous robots and artificial intelligence: evidence from national representative data with probability sampling. Int J Soc Robot. 2017;9:379–84.
112. Mirbabaie M, Brünker F, Möllmann Frick NRJ, Stieglitz S. The rise of artificial intelligence—understanding the AI identity threat at the workplace. Electron Mark. 2022;32:73–99.
113. Shariff A, Bonnefon J-F, Rahwan I. How safe is safe enough? Psychological mechanisms underlying extreme safety demands for self-driving cars. Transp Res Part C Emerg Technol. 2021;126:103069.
114. Gopnik A. Making AI more human. Sci Am. 2017;316:60–5.
115. Gutierrez C, Sequeda JF. Knowledge graphs. Commun ACM. 2021;64:96–104.
116. Hogan A, Blomqvist E, Cochez M, D'amato C, Melo GD, Gutierrez C, et al. Knowledge graphs. ACM Comput Surv. 2022;54:1–37.
117. Tavora M. Deep learning explainability: hints from physics. Medium. 2020. https://towardsdatascience.com/deep-learning-explainability-hints-from-physics-2f316dc07727. Accessed 9 Aug 2022.
118. Yani M, Krisnadhi AA. Challenges, techniques, and trends of simple knowledge graph question answering: a survey. Information. 2021;12:271.
119. Seng KP, Ang L-M. Embedded intelligence: state-of-the-art and research challenges. IEEE Access. 2022;10:59236–58.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.