This is an open access article licensed under the Creative Commons BY-NC-ND License.
Journal of Artificial General Intelligence 11(2) 68-70, 2020 Submitted 2019-09-17
DOI: 10.2478/jagi-2020-0003 Accepted 2019-11-07
On Defining Differences between
Intelligence and Artificial Intelligence
Roman V. Yampolskiy ROMAN.YAMPOLSKIY@LOUISVILLE.EDU
Computer Engineering and Computer Science
University of Louisville
Louisville, USA
Editors: Dagmar Monett, Colin W. P. Lewis, and Kristinn R. Thórisson
In “On Defining Artificial Intelligence” Pei Wang (2019) presents the following definition:
“Intelligence is the capacity of an information-processing system to adapt to its environment while
operating with insufficient knowledge and resources.” Wang’s definition is perfectly adequate and
he also reviews definitions of intelligence suggested by others, which have by now become standard
in the field (Legg and Hutter, 2007). However, there is a fundamental difference between defining
intelligence in general, or human intelligence in particular, and defining Artificial Intelligence (AI),
as the title of Wang's paper claims he does. In this commentary I would like to bring attention to
the fundamental differences between designed and natural intelligences (Yampolskiy, 2016).
AI is typically designed for the explicit purpose of providing some benefit to its designers and
users, and it is important to include that distinction in the definition of AI. Wang mentions the
concept of AI safety (Yampolskiy, 2013; Yampolskiy and Fox, 2012; Bostrom, 2014; Yudkowsky,
2011; Yampolskiy, 2015a) only once, briefly, and does not bring it or other related concepts into
play. In my opinion, a definition of AI that does not explicitly mention safety, or at least its
necessary subcomponents: controllability, explainability (Yampolskiy, 2019b), comprehensibility,
predictability (Yampolskiy, 2019c), and corrigibility (Soares et al., 2015), is dangerously incomplete.
Development of Artificial General Intelligence (AGI) is predicted to cause a shift in the
trajectory of human civilization (Baum et al., 2019). In order to reap the benefits and avoid the
pitfalls of such a powerful technology it is important to be able to control it. Full control of an
intelligent system (Yampolskiy, 2015b) implies the capability to limit its performance (Trazzi and
Yampolskiy, 2018), for example by setting it to a particular level of IQ equivalence. Additional
controls may make it possible to turn the system off (Hadfield-Menell et al., 2017), to turn on/off
consciousness (Elamrani and Yampolskiy, 2019; Yampolskiy, 2018a), free will, and autonomous
goal selection, and to specify the moral code (Majot and Yampolskiy, 2014) the system will apply
in its decisions. It should also be possible
to modify the system after it is deployed to correct any problems (Yampolskiy, 2019a; Scott and
Yampolskiy, 2019) discovered during use. An AI system should be able, to the extent theoretically
possible, to explain its decisions in a human-comprehensible language. Its designers and end users
should be able to predict its general behavior. If needed, the system should be confinable to a
restricted environment (Yampolskiy, 2012; Armstrong, Sandberg, and Bostrom, 2012; Babcock,
Kramár, and Yampolskiy, 2016), or able to operate with reduced computational resources. AI
should operate with minimum bias and maximum transparency; it has to be friendly (Muehlhauser
and Bostrom, 2014), safe, and secure (Yampolskiy, 2018b).
Consequently, we propose the following definition of Artificial Intelligence, which complements
Wang’s definition: “Artificial Intelligence is a fully controlled agent with a capacity of an
information-processing system to adapt to its environment while operating with insufficient
knowledge and resources.”
References
Armstrong, S., Sandberg, A., and Bostrom, N. 2012. Thinking inside the box: Controlling and
using an oracle AI. Journal of Consciousness Studies.
Babcock, J., Kramár, J., and Yampolskiy, R. 2016. The AGI containment problem. In International
Conference on Artificial General Intelligence. Springer.
Baum, S. D., Armstrong, S., Ekenstedt, T., Häggström, O., Hanson, R., Kuhlemann, K., Maas,
M. M., Miller, J. D., Salmela, M., Sandberg, A., Sotala, K., Torres, P., Turchin, A., and
Yampolskiy, R. V. 2019. Long-term trajectories of human civilization. Foresight 21(1):53–83.
Bostrom, N. 2014. Superintelligence: Paths, dangers, strategies. Oxford University Press.
Elamrani, A. and Yampolskiy, R. 2019. Reviewing Tests for Machine Consciousness. Journal of
Consciousness Studies 26(5-6):35–64.
Hadfield-Menell, D., Dragan, A., Abbeel, P., and Russell, S. 2017. The off-switch game. In
Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.
Legg, S. and Hutter, M. 2007. Universal Intelligence: A Definition of Machine Intelligence. Minds
and Machines 17:391–444.
Majot, A. and Yampolskiy, R. 2014. AI safety engineering through introduction of self-reference
into felicific calculus via artificial pain and pleasure. In IEEE International Symposium on Ethics
in Science, Technology and Engineering. IEEE.
Muehlhauser, L. and Bostrom, N. 2014. Why we need friendly AI. Think 13(36):41–47.
Scott, P. J. and Yampolskiy, R. 2019. Classification Schemas for Artificial Intelligence Failures.
arXiv:1907.07771 [cs.CY]. https://arxiv.org/abs/1907.07771.
Soares, N., Fallenstein, B., Armstrong, S., and Yudkowsky, E. 2015. Corrigibility. In Workshops at
the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Trazzi, M. and Yampolskiy, R. 2018. Building safer AGI by introducing artificial stupidity.
arXiv:1808.03644 [cs.AI]. https://arxiv.org/abs/1808.03644.
Wang, P. 2019. On Defining Artificial Intelligence. Journal of Artificial General Intelligence
10(2):1–37.
Yampolskiy, R. and Fox, J. 2012. Safety Engineering for Artificial General Intelligence. Topoi
32:217–226.
Yampolskiy, R. 2012. Leakproofing the Singularity: Artificial Intelligence Confinement Problem.
Minds and Machines 22(4):29–324.
Yampolskiy, R. 2013. Artificial intelligence safety engineering: Why machine ethics is a wrong
approach. In Philosophy and Theory of Artificial Intelligence, 389–396. Berlin Heidelberg:
Springer.
Yampolskiy, R. 2015a. Artificial superintelligence: a futuristic approach. Chapman and Hall CRC.
Yampolskiy, R. 2015b. The space of possible mind designs. In International Conference on
Artificial General Intelligence. Springer.
Yampolskiy, R. 2016. On the origin of synthetic life: attribution of output to a particular algorithm.
Physica Scripta 92(1):013002.
Yampolskiy, R. 2018a. Artificial Consciousness: An Illusionary Solution to the Hard Problem.
Reti, saperi, linguaggi 2:287–318.
Yampolskiy, R. 2018b. Artificial Intelligence Safety and Security. Chapman and Hall/CRC.
Yampolskiy, R. 2019a. Predicting future AI failures from historic examples. Foresight 21(1):138–
152.
Yampolskiy, R. 2019b. Unexplainability and Incomprehensibility of Artificial Intelligence.
arXiv:1907.03869 [cs.CY]. https://arxiv.org/abs/1907.03869.
Yampolskiy, R. 2019c. Unpredictability of AI. arXiv:1905.13053 [cs.AI]. https://arxiv.org/
abs/1905.13053.
Yudkowsky, E. 2011. Complex Value Systems in Friendly AI. In Schmidhuber, J., Thórisson, K.,
and Looks, M., eds., Artificial General Intelligence. Berlin Heidelberg: Springer. 388–393.