
On Defining Differences between Intelligence and Artificial Intelligence

Journal of Artificial General Intelligence 11(2) 68-70, 2020
DOI: 10.2478/jagi-2020-0003
Submitted 2019-09-17; Accepted 2019-11-07
This is an open access article licensed under the Creative Commons BY-NC-ND License.
Roman V. Yampolskiy ROMAN.YAMPOLSKIY@LOUISVILLE.EDU
Computer Engineering and Computer Science
University of Louisville
Louisville, USA
Editors: Dagmar Monett, Colin W. P. Lewis, and Kristinn R. Thórisson
In “On Defining Artificial Intelligence,” Pei Wang (2019) presents the following definition:
“Intelligence is the capacity of an information-processing system to adapt to its environment while
operating with insufficient knowledge and resources.” Wang’s definition is perfectly adequate, and
he also reviews definitions of intelligence suggested by others, which have by now become standard
in the field (Legg and Hutter, 2007). However, there is a fundamental difference between defining
intelligence in general, or human intelligence in particular, and defining Artificial Intelligence (AI),
as the title of Wang’s paper claims he does. In this commentary I would like to bring attention to
the fundamental differences between designed and natural intelligences (Yampolskiy, 2016).
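For concreteness, the formalization from Legg and Hutter (2007) that the field has largely standardized on is the universal intelligence measure

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi ,

where E is the class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected cumulative reward agent \pi attains in \mu. In words: intelligence is average performance across all possible environments, weighted toward the simple ones. Nothing in this measure distinguishes a designed agent from a natural one, which is precisely the gap this commentary addresses.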
AI is typically designed for the explicit purpose of providing some benefit to its designers and
users, and it is important to include that distinction in the definition of AI. Wang mentions the
concept of AI safety (Yampolskiy, 2013; Yampolskiy and Fox, 2012; Bostrom, 2014; Yudkowsky,
2011; Yampolskiy, 2015a) only once, briefly, and does not bring it or related concepts into play.
In my opinion, a definition of AI that does not explicitly mention safety, or at least its necessary
subcomponents: controllability, explainability (Yampolskiy, 2019b), comprehensibility,
predictability (Yampolskiy, 2019c), and corrigibility (Soares et al., 2015), is dangerously incomplete.
Development of Artificial General Intelligence (AGI) is predicted to cause a shift in the
trajectory of human civilization (Baum et al., 2019). In order to reap the benefits and avoid the
pitfalls of such powerful technology, it is important to be able to control it. Full control of an
intelligent system (Yampolskiy, 2015b) implies the capability to limit its performance (Trazzi and
Yampolskiy, 2018), for example setting it to a particular level of IQ equivalence. Additional controls
may make it possible to turn the system off (Hadfield-Menell et al., 2017), to turn on or off
consciousness (Elamrani and Yampolskiy, 2019; Yampolskiy, 2018a), free will, and autonomous
goal selection, and to specify the moral code (Majot and Yampolskiy, 2014) the system will apply
in its decisions. It should also be possible to modify the system after it is deployed to correct any
problems (Yampolskiy, 2019a; Scott and Yampolskiy, 2019) discovered during use. An AI system
should be able, to the extent theoretically possible, to explain its decisions in a human-comprehensible
language. Its designers and end users should be able to predict its general behavior. If needed, the
system should be confinable to a restricted environment (Yampolskiy, 2012; Armstrong, Sandberg,
and Bostrom, 2012; Babcock, Kramár, and Yampolskiy, 2016) or able to operate with reduced
computational resources. AI should operate with minimum bias and maximum transparency; it has
to be friendly (Muehlhauser and Bostrom, 2014), safe, and secure (Yampolskiy, 2018b).
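To make the list of requirements above concrete, the sketch below collects them into a single hypothetical interface. The class name ControlledAgent and every method on it are my own illustrative assumptions, not an API from any cited work, and real mechanisms for several of these operations (toggling consciousness, for instance) remain open research problems; the point is only that each control requirement in the preceding paragraph can be named as a distinct capability.

from abc import ABC, abstractmethod
from typing import Any


class ControlledAgent(ABC):
    """Hypothetical interface bundling the control requirements above."""

    @abstractmethod
    def set_capability_ceiling(self, iq_equivalent: float) -> None:
        """Limit performance, e.g. cap the system at an IQ-equivalent level
        (cf. artificial stupidity, Trazzi and Yampolskiy, 2018)."""

    @abstractmethod
    def shutdown(self) -> None:
        """Turn the system off (cf. the off-switch game,
        Hadfield-Menell et al., 2017)."""

    @abstractmethod
    def set_module_enabled(self, module: str, enabled: bool) -> None:
        """Toggle 'consciousness', 'free_will', or
        'autonomous_goal_selection' on or off."""

    @abstractmethod
    def set_moral_code(self, policy: Any) -> None:
        """Specify the moral code the system applies in its decisions."""

    @abstractmethod
    def apply_correction(self, patch: Any) -> None:
        """Modify the deployed system to fix problems found during use
        (corrigibility)."""

    @abstractmethod
    def explain(self, decision_id: str) -> str:
        """Explain a past decision in human-comprehensible language,
        to the extent theoretically possible."""

    @abstractmethod
    def predict_behavior(self, situation: Any) -> str:
        """Let designers and end users forecast general behavior."""

    @abstractmethod
    def confine(self, environment: Any) -> None:
        """Restrict the system to a contained environment (boxing)."""

    @abstractmethod
    def set_resource_budget(self, max_flops: float) -> None:
        """Force operation under reduced computational resources."""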
Consequently, we propose the following definition of Artificial Intelligence, which complements
Wang’s definition: Artificial Intelligence is a fully controlled agent with the capacity of an
information-processing system to adapt to its environment while operating with insufficient
knowledge and resources.
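Read side by side, the two definitions differ by a single conjunct. Purely as my own illustration, and not notation from either paper, the relationship can be written as

\mathrm{AI}(s) \iff \mathrm{Intelligent}(s) \wedge \mathrm{FullyControlled}(s),

where \mathrm{Intelligent}(s) abbreviates Wang's conditions (an information-processing system that adapts to its environment under insufficient knowledge and resources). The proposed definition keeps Wang's notion of intelligence intact and adds one safety conjunct.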
References
Armstrong, S., Sandberg, A., and Bostrom, N. 2012. Thinking inside the box: Controlling and
using an oracle AI. Minds and Machines 22(4):299–324.
Babcock, J., Kramár, J., and Yampolskiy, R. 2016. The AGI containment problem. In International
Conference on Artificial General Intelligence. Springer.
Baum, S. D., Armstrong, S., Ekenstedt, T., Häggström, O., Hanson, R., Kuhlemann, K., Maas,
M. M., Miller, J. D., Salmela, M., Sandberg, A., Sotala, K., Torres, P., Turchin, A., and
Yampolskiy, R. V. 2019. Long-term trajectories of human civilization. Foresight 21(1):53–83.
Bostrom, N. 2014. Superintelligence: Paths, dangers, strategies. Oxford University Press.
Elamrani, A. and Yampolskiy, R. 2019. Reviewing Tests for Machine Consciousness. Journal of
Consciousness Studies 26(5-6):35–64.
Hadfield-Menell, D., Dragan, A., Abbeel, P., and Russell, S. 2017. The off-switch game. In
Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.
Legg, S. and Hutter, M. 2007. Universal Intelligence: A Definition of Machine Intelligence. Minds
and Machines 17:391–444.
Majot, A. and Yampolskiy, R. 2014. AI safety engineering through introduction of self-reference
into felicific calculus via artificial pain and pleasure. In IEEE International Symposium on Ethics
in Science, Technology and Engineering. IEEE.
Muehlhauser, L. and Bostrom, N. 2014. Why we need friendly AI. Think 13(36):41–47.
Scott, P. J. and Yampolskiy, R. 2019. Classification Schemas for Artificial Intelligence Failures.
arXiv:1907.07771 [cs.CY]. https://arxiv.org/abs/1907.07771.
Soares, N., Fallenstein, B., Armstrong, S., and Yudkowsky, E. 2015. Corrigibility. In Workshops at
the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Trazzi, M. and Yampolskiy, R. 2018. Building safer AGI by introducing artificial stupidity.
arXiv:1808.03644 [cs.AI]. https://arxiv.org/abs/1808.03644.
Wang, P. 2019. On Defining Artificial Intelligence. Journal of Artificial General Intelligence
10(2):1–37.
Yampolskiy, R. and Fox, J. 2012. Safety Engineering for Artificial General Intelligence. Topoi
32:217–226.
Yampolskiy, R. 2012. Leakproofing the Singularity: Artificial Intelligence Confinement Problem.
Journal of Consciousness Studies 19(1-2):194–214.
Yampolskiy, R. 2013. Artificial intelligence safety engineering: Why machine ethics is a wrong
approach. In Philosophy and Theory of Artificial Intelligence, 389–396. Berlin Heidelberg:
Springer.
Yampolskiy, R. 2015a. Artificial superintelligence: A futuristic approach. Chapman and Hall/CRC.
Yampolskiy, R. 2015b. The space of possible mind designs. In International Conference on
Artificial General Intelligence. Springer.
Yampolskiy, R. 2016. On the origin of synthetic life: attribution of output to a particular algorithm.
Physica Scripta 92(1):013002.
Yampolskiy, R. 2018a. Artificial Consciousness: An Illusionary Solution to the Hard Problem.
Reti, saperi, linguaggi 2:287–318.
Yampolskiy, R. 2018b. Artificial Intelligence Safety and Security. Chapman and Hall/CRC.
Yampolskiy, R. 2019a. Predicting future AI failures from historic examples. Foresight 21(1):138–
152.
Yampolskiy, R. 2019b. Unexplainability and Incomprehensibility of Artificial Intelligence.
arXiv:1907.03869 [cs.CY]. https://arxiv.org/abs/1907.03869.
Yampolskiy, R. 2019c. Unpredictability of AI. arXiv:1905.13053 [cs.AI]. https://arxiv.org/
abs/1905.13053.
Yudkowsky, E. 2011. Complex Value Systems in Friendly AI. In Schmidhuber, J., Thórisson, K.,
and Looks, M., eds., Artificial General Intelligence. Berlin Heidelberg: Springer. 388–393.