Vol. 11, No. 17, Year 2018
ISSN: 2286-2102
E-ISSN: 2286-2552
WHY ARE WE AFRAID OF ARTIFICIAL INTELLIGENCE (AI)?
VASILE GHERHEȘ
Department of Communication and Foreign Languages, Politehnica University of Timișoara, Romania
© 2018 Vasile Gherheș
This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivs
license (http://creativecommons.org/licenses/by-nc-nd/3.0/)
DOI: 10.1515/eras-2018-0006
Abstract
The study presents the results of a survey on the attitudes of students from humanities and technical specializations in Timișoara towards the emergence and development of artificial intelligence (AI). The emphasis was on the most likely consequences of the development of artificial intelligence in the future, especially the negative consequences that its development might entail. The method used for data collection was the sociological survey, and the information-gathering tool was the questionnaire. It was administered to a total of 929 people, ensuring a sample margin of error of ±3%. The analysis reveals that the participants in the study predict that, due to the emergence and development of AI, interpersonal relationships will be negatively affected in the future, there will be fewer jobs, economic crises will emerge, and AI will be used to create intelligent weapons, will increase military conflicts, will take control of humanity and, last but not least, may destroy mankind. The results revealed differences in responses depending on the type of specialization (humanities or technical) and the gender of the respondents.
Keywords: artificial intelligence, risks, fear, perceptions.
_________________________________________________________________________________________
Introduction
Used for the first time by John McCarthy in 1956, the term artificial intelligence was defined as "the science and engineering of creating intelligent machines" or as "a machine that behaves in a way that could be considered intelligent, if it was a human being" (McCarthy, 2007). The field was based on the idea that human intelligence can be described and defined so precisely that it can be simulated on a computing machine. Ioan Dziţac, a Romanian author concerned with this field, states in his work Artificial Intelligence that "AI can be described as the capability of machines or programs to mimic human thinking processes, such as thinking or learning. Moreover, the subject of AI can be defined as the study of making computers do things for which man needs intelligence to achieve them" (2008, 42). Another definition provided by the same author is "the ability of evolved technical systems to achieve quasi-human performances" (2008, 42). The English Oxford Living Dictionary defines AI as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages", and the Encyclopaedia Britannica defines it as "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings".
There are three main categories of AI:
The first is narrow artificial intelligence, also called Artificial Narrow Intelligence (ANI) (Kurzweil, 2005). It is designed to perform narrow tasks (for example, facial recognition, searching for information on the internet, making online bookings, driving a car, etc.). It can exceed human performance in the specific task for which it was designed, using machine learning and deep learning tools.
The second type of AI is Artificial General Intelligence (AGI), which refers to AI-engineered machines that would be as intelligent as humans and could perform any intellectual task (Pennachin and Goertzel, 2007). The moment of its emergence is still a topic of debate among specialists, with estimates clustering around 2040.
The third type of AI is artificial super-intelligence, or Artificial Superintelligence (ASI), which would be far more advanced than a human being, exceeding human capabilities many times over in almost any field, including scientific creativity, general wisdom and social skills (Bostrom, 2006). The consequences of the development of super-intelligence are unknown, and it is almost impossible to make predictions about this technological leap.
According to the study "Future Progress in Artificial Intelligence: A Survey of Expert Opinion" (Müller and Bostrom, 2016), one out of two experts expects high-level AI to emerge in the 2040-2050 time span, and nine out of ten expect it by 2075. In their opinion, super-intelligence will be developed within less than 30 years of the emergence of such AI, and there is roughly one chance in three that this development turns out to be "bad" or "extremely bad" for mankind.
The findings of the study "Technical and Humanities Students' Perspectives on the Development and Sustainability of Artificial Intelligence" (Gherheș and Obrad, 2018) show a generally positive attitude towards the emergence of AI: respondents believe it will positively influence the evolution of society, and the accelerated development of this field is perceived as a good thing. Most respondents describe themselves as optimistic when they think about what might happen in the future as a result of the development of AI, but there are equally many concerns about the possibility that entities/devices equipped with artificial intelligence could destroy humanity or replace people in certain activities and trades.
The possibility of creating thinking machines raises a number of ethical issues and dilemmas connected to the implementation of artificial intelligence. Perhaps one of the greatest threats is the use of AI in the military industry. Many researchers and scientists have signaled the risks involved, warning that the military use of artificial intelligence could even contribute to the outbreak of a nuclear war. Thus, at an artificial intelligence conference held in Buenos Aires, Argentina, Stephen Hawking, Elon Musk and over 1,000 robotics researchers signed an open letter warning of the potential disaster that "autonomous weapons" could cause (https://futureoflife.org/open-letter-autonomous-weapons). Elon Musk also warned that the ambitions of the great powers to dominate the area of artificial intelligence could cause a new world war. Major advances in AI, along with the development of drones, satellites and other technologies, increase the possibility of tensions between countries and the outbreak of international wars (https://www.rand.org/blog/articles/2018/04/how-artificial-intelligence-could-increase-the-risk.html).
Another issue much debated by specialists in the field is the fear that AI will become autonomous and escape human control. There is also the threat that it will lead to the replacement of humans by robots in almost all social spheres. With more and more jobs being automated, this could lead to global mass unemployment, with the human presence becoming unnecessary.
Elon Musk, co-founder of Tesla Motors and founder of SpaceX, said in a post on Twitter that "we need to be very careful with artificial intelligence because it is more dangerous than nuclear bombs" (https://twitter.com/elonmusk/status/495759307346952192?lang=en).
Since AI is an extremely complex domain that only allows speculation about how it will influence society, there are not many representative studies capturing the public's perception of it. At the same time, the development of
AI raises a number of existential problems and a large number of questions for which we do not yet have an answer. We do not know what humanity will look like in the age of artificial intelligence, or what changes AI will bring to the structure of society. This study aims to highlight the attitude of students from the universities in Timișoara towards the emergence of artificial intelligence, with an emphasis on the most likely consequences of its future development, particularly the negative ones.
Methodological aspects
The present research was carried out through a quantitative approach based on the sociological survey method. The data collection tool was an anonymous online questionnaire posted on the Isondaje.ro platform (an online survey service). The data were collected between 5 March and 26 April 2018; the answers of 929 students from the universities in Timișoara were recorded, the margin of error being approximately ±3%. We chose students as a target group because they represent one of the educated categories of the population, have access to this kind of information, and will be among the main beneficiaries of the results of the emergence of artificial intelligence. The questionnaire was administered both to students in technical specializations and to those in humanities specializations, the type of specialization (technical vs. humanities education) being one of the variables used in the following analyses. The groups of respondents were approximately equal in size (humanities 50.5%, technical studies 49.5%), with a relatively balanced gender distribution (48.2% males and 51.8% females). The questionnaire included a series of assertions about possible scenarios of artificial intelligence development in the future, with respondents rating each scenario on a scale from highly unlikely to highly likely, with intermediate response options as well as an I do not know/do not answer option.
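As a quick check, the reported margin of error is consistent with the standard formula for a simple random sample of n = 929, assuming a 95% confidence level and maximum variance (p = 0.5); the confidence level and sampling design are not stated in the paper, so this is only an illustrative calculation:

\[ \mathrm{MoE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{929}} \approx 0.032, \]

i.e. approximately ±3%.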
The objective was to determine the respondents' attitude towards the emergence and development of artificial intelligence, focusing on the negative implications entailed by its development.
The analysis of the results reveals that 84.6% of the respondents consider that they know what artificial intelligence means, 12% declare that they do not know what this term means, and 3.4% chose the answer I do not know/do not answer. Differences in responses were identified according to the respondents' gender and specialization. For the gender variable, the following situation was recorded: 95.5% of male respondents say they know what the notion of artificial intelligence means, compared with 75.6% of female respondents; in other words, male respondents declare themselves better informed on this topic (Figure 1).
Figure 1. Do you know what artificial intelligence means? (%) Male: Yes 95.5, No 3.3, I don’t know/I won’t answer 1.2. Female: Yes 75.6, No 19.1, I don’t know/I won’t answer 5.3.
For the same variable, differences were also found among those who declare that they do not know what this concept refers to, where the category of male respondents registered 3.3%, while the female category accounted for 19.1%. There were no very large differences between the categories of respondents depending on the type of specialization (technical or humanities).
”Entities/devices equipped with artificial intelligence will negatively impact interpersonal relationships” is a statement for which variations in results according to the respondents’ gender and studies were recorded. As a general result, there was an attitude in favor of this scenario, in the sense that 29.6% consider it highly likely and 27.1% quite likely to happen in the future. 18.6% of the respondents declare themselves neutral towards this statement, and those who estimate that in the future human relationships will not be negatively impacted by artificially intelligent entities/devices are fewer (highly unlikely 7.5%, unlikely 14.2%). As can be seen in the table below (Table 1), there were differences between male and female respondents, in the sense that males are more skeptical about this scenario of a negative impact on interpersonal relationships, while females record higher values in favor of it.
                               Male      Female    Total
Highly unlikely                9.3%      5.9%      7.4%
Unlikely                       19.3%     10.0%     14.2%
Neutral                        22.9%     15.1%     18.6%
Quite likely                   23.2%     30.5%     27.2%
Highly likely                  22.2%     35.8%     29.6%
I don’t know, I won’t answer   3.1%      2.8%      2.9%
Total                          100.0%    100.0%    100.0%
Table 1. Entities/devices equipped with artificial intelligence will negatively impact interpersonal relationships (by gender)
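For readers who wish to reproduce this kind of breakdown on their own data, column-percentage cross-tabulations such as Table 1 can be computed with a few lines of Python. This is only an illustrative sketch: the survey data set is not publicly available, and the file and column names used below (survey_responses.csv, gender, impact_relationships) are assumptions, not the author's actual variables.

import pandas as pd

# Hypothetical data file: one row per respondent, with the answer to the
# "negative impact on interpersonal relationships" item and the gender variable.
df = pd.read_csv("survey_responses.csv")

scale = ["Highly unlikely", "Unlikely", "Neutral",
         "Quite likely", "Highly likely", "I don't know, I won't answer"]

# Percentage distribution of answers within each gender (each column sums to 100).
table = (pd.crosstab(df["impact_relationships"], df["gender"],
                     normalize="columns").reindex(scale) * 100).round(1)

# Overall distribution across all respondents, added as a "Total" column.
table["Total"] = (df["impact_relationships"].value_counts(normalize=True)
                  .reindex(scale) * 100).round(1)

print(table)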
There were also differences according to the type of specialization followed (Table 2), in the sense that respondents in technical specializations accumulated lower values for the response variants that foresee human relations being affected in the future by the emergence of entities/devices equipped with artificial intelligence.
                               Technical studies   Humanities   Total
Highly unlikely                8.5%                6.3%         7.4%
Unlikely                       17.5%               10.9%        14.2%
Neutral                        20.5%               16.8%        18.6%
Quite likely                   23.7%               30.7%        27.2%
Highly likely                  26.9%               32.5%        29.6%
I don’t know, I won’t answer   3.0%                2.8%         2.9%
Total                          100.0%              100.0%       100.0%
Table 2. Entities/devices equipped with artificial intelligence will negatively impact interpersonal relationships (by specialization)
The possibility of international cyber-attacks is another scenario submitted to the respondents' evaluation. Most of the interviewed students are of the opinion that this is highly likely (34.3%) or quite likely (31%) to happen in the future. Those at the opposite pole account for 14.2% of the total options, equal to the share of respondents who declare themselves
neutral. There were no significant variations in responses according to the gender or specialization of the students.
Another aspect submitted to the evaluation of the students in Timișoara was the risk of losing personal information. The largest category is represented by those who believe it is quite likely that this will happen in the future (24.8%). The next categories are those who declared themselves neutral (22.8%) with regard to the issue in question and those who believe this risk is highly likely to occur in the future, given the development of artificial intelligence (20.8%).
As can be seen in the table below (Table 3), there were differences in responses according to the gender of the respondents. Cumulating the highly likely and quite likely response variants, we find that over half of the female respondents (54.2%) believe that the development of AI will bring a risk of losing personal information, compared with only 35.1% of the male respondents who see this possibility.
                               Male      Female    Total
Highly unlikely                13.8%     5.9%      9.5%
Unlikely                       22.4%     13.8%     17.7%
Neutral                        23.6%     22.2%     22.8%
Quite likely                   19.8%     28.9%     24.8%
Highly likely                  15.3%     25.3%     20.8%
I don’t know, I won’t answer   5.0%      3.9%      4.4%
Total                          100.0%    100.0%    100.0%
Table 3. There is a risk of losing personal information (by gender)
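As an arithmetic check against Table 3, the cumulated percentages quoted above are obtained as:

\[ 28.9\% + 25.3\% = 54.2\% \ \text{(female)}, \qquad 19.8\% + 15.3\% = 35.1\% \ \text{(male)}. \]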
Differences were also recorded according to the respondents' specialization (Table 4). The greatest difference can be noticed for the quite likely response category, where humanities students reached 29.2% compared with 20.5% for technical students.
                               Technical studies   Humanities   Total
Highly unlikely                10.4%               8.5%         9.5%
Unlikely                       20.5%               14.8%        17.7%
Neutral                        24.5%               21.1%        22.8%
Quite likely                   20.5%               29.2%        24.8%
Highly likely                  19.8%               21.8%        20.8%
I don’t know, I won’t answer   4.3%                4.6%         4.4%
Total                          100.0%              100.0%       100.0%
Table 4. There is a risk of losing personal information (by specialization)
According to a report published by the Forrester Research Company, the development of artificial intelligence will lead to the disappearance of 24.7 million jobs by 2027, while 14.9 million new jobs will be created, many of them in technology. In our study, 45.4% of interviewees believe it is highly likely that, due to AI, there will be fewer jobs for people in the future. 31.3% of respondents think this is quite likely to happen, while 8.5% think it is unlikely and 3% consider it highly unlikely.
Another question the respondents were asked was whether the development of artificial intelligence would lead to the emergence of economic crises. As can be noticed in the total column of Table 5, most of the respondents declared themselves neutral (30.1%), followed by those who consider it quite likely (22.8%)
and highly likely (15.2%) to happen in the future. The category of those who stated they did not know/did not respond is noticeably higher than for the previous questions. As can be noticed in Table 5, differences were also recorded according to the gender variable: women are more inclined than men to consider that economic crises may occur amidst the development of AI (e.g. 18.7% highly likely for females vs. 11% for males).
                               Male      Female    Total
Highly unlikely                9.1%      4.3%      6.5%
Unlikely                       16.7%     13.4%     14.9%
Neutral                        33.4%     27.3%     30.1%
Quite likely                   19.1%     25.9%     22.8%
Highly likely                  11.0%     18.7%     15.2%
I don’t know, I won’t answer   10.7%     10.4%     10.6%
Total                          100.0%    100.0%    100.0%
Table 5. It will lead to the emergence of economic crises (by gender)
Differences were also noticed for the studies variable (Table 6), especially for humanities students, who, as in the case of female respondents, recorded answers more strongly associating economic crises with the emergence and development of AI.
                               Technical studies   Humanities   Total
Highly unlikely                7.9%                5.0%         6.5%
Unlikely                       15.8%               13.9%        14.9%
Neutral                        32.2%               27.9%        30.1%
Quite likely                   19.4%               26.4%        22.8%
Highly likely                  13.9%               16.6%        15.2%
I don’t know, I won’t answer   10.9%               10.2%        10.6%
Total                          100.0%              100.0%       100.0%
Table 6. It will lead to the emergence of economic crises (by specialization)
As can be noticed in Figure 2, almost three-quarters of respondents consider it highly likely (39.1%) or quite likely (35%) that in the future AI will be used to create intelligent weapons. There were no major differences between the categories of respondents according to the type of specialization (technical or humanities) or gender.
Figure 2. Artificial intelligence will be used to create intelligent weapons (%): Highly unlikely 3.1, Unlikely 4.0, Neutral 13.9, Quite likely 35.0, Highly likely 39.1, I don’t know/I won’t answer 5.0.
Another statement submitted to the respondents' evaluation was whether artificial intelligence would lead to increased military conflicts. In this respect, the views of the subjects are rather in favor of this scenario, over 50% of them considering it likely to happen in the future. These categories are followed by the undecided, who accumulated a score of 22%. Cumulatively, 19.4% of interviewees believe it is highly unlikely or unlikely that artificial intelligence will lead to increased military conflicts (Figure 3).
Figure 3. Artificial intelligence will lead to an increase in the number of military conflicts (%): Highly unlikely 5.8, Unlikely 13.6, Neutral 22.0, Quite likely 27.7, Highly likely 24.3, I don’t know/I won’t answer 6.7.
Another aspect researched in the study was whether artificial intelligence would no longer need people in order to evolve. From the analysis of the answers, about a quarter of the respondents (25.9%) consider this quite likely to happen, 19.6% consider it unlikely and 18.9% highly likely. The lowest score was recorded for the highly unlikely response variant (12.4%). The scenario that in the future, following the development of AI, entities/devices will become independent and able to act and make decisions on their own is, for most respondents, quite likely (34.3%) or highly likely (23.8%).
The taking over of humanity by artificially intelligent entities/devices is another scenario brought to the attention of the interviewees. Most respondents consider this unlikely to happen (21.3%), followed by those who chose the quite likely variant (21%) and the highly unlikely variant (20.4%). Different results according to the respondents' gender and the studies they follow were recorded for this question. As can be seen in Table 7, males consider the scenario of artificial intelligence entities/devices taking over humanity in the future less likely than females do.
                               Male      Female    Total
Highly unlikely                25.8%     15.9%     20.4%
Unlikely                       25.8%     17.7%     21.3%
Neutral                        18.6%     21.0%     19.9%
Quite likely                   17.2%     24.2%     21.0%
Highly likely                  8.8%      16.7%     13.1%
I don’t know, I won’t answer   3.8%      4.5%      4.2%
Total                          100.0%    100.0%    100.0%
Table 7. Entities/devices equipped with artificial intelligence will take over humanity (by gender)
The studies variable also influences the answers to this question, in the sense that respondents in technical studies consider the scenario of humanity being taken over by entities/devices equipped with artificial intelligence less likely than those in humanities studies, whose recorded values are higher (Table 8).
                               Technical studies   Humanities   Total
Highly unlikely                22.6%               18.1%        20.4%
Unlikely                       25.8%               16.8%        21.3%
Neutral                        19.8%               20.0%        19.9%
Quite likely                   17.3%               24.8%        21.0%
Highly likely                  10.4%               15.9%        13.1%
I don’t know, I won’t answer   4.1%                4.4%         4.2%
Total                          100.0%              100.0%       100.0%
Table 8. Entities/devices equipped with artificial intelligence will take over humanity (by specialization)
As can be seen in the figure below (Figure 4), the opinions recorded for the scenario in which artificial intelligence will conclude that people pose a threat show a more uniform distribution. The largest category is that of those who declared themselves neutral in this respect (21.9%), with the remaining options recording roughly symmetrical scores. Differences according to the gender of the respondents were recorded for the response category stating that this scenario is highly likely to happen in the future: male respondents recorded 12.2% for this option, compared with 20.2% for female respondents. Variations in the scores for the same response variant (highly likely) were also recorded according to the type of specialization, the registered value being 13.2% for respondents with technical studies compared with 20% for those with humanities studies.
Figure 4. Artificial intelligence will reach the conclusion that people pose a threat (%): Highly unlikely 18.2, Unlikely 16.8, Neutral 21.9, Quite likely 19.3, Highly likely 16.6, I don’t know/I won’t answer 7.3.
The last question analyzed is whether humanity will be destroyed by artificial intelligence. The category with the most responses is that of those who think this is quite likely to happen in the future (20.9% of total responses). This category is followed by those who declare themselves neutral on this issue (19.8%), those who believe this scenario is highly unlikely (18.9%) and those who consider it unlikely (17.7%) to be a consequence
of the development of AI. 16.8% of respondents opted for the highly likely variant. We can notice a balance between those who consider the scenario likely to happen and those who consider it less likely: cumulating the scores obtained for the highly likely and quite likely answer variants gives a total of 37.7%, while cumulating highly unlikely and unlikely gives a total of 36.6%.
                               Male      Female    Total
Highly unlikely                25.3%     13.6%     18.9%
Unlikely                       22.2%     13.9%     17.7%
Neutral                        20.0%     19.6%     19.8%
Quite likely                   15.5%     25.3%     20.9%
Highly likely                  11.9%     20.8%     16.8%
I don’t know, I won’t answer   5.0%      6.7%      5.9%
Total                          100.0%    100.0%    100.0%
Table 9. Humankind will be destroyed by artificial intelligence (by gender)
As can be noticed in Table 9, there are differences between males and females regarding the scenario in which mankind will be destroyed by artificial intelligence. Females are more inclined to predict that this apocalyptic scenario will happen in the future (cumulating the scores obtained for the highly likely and quite likely answers gives a total of 46.1%), unlike males (for whom the two variants cumulate a total of 27.4%).
                               Technical studies   Humanities   Total
Highly unlikely                23.2%               14.4%        18.9%
Unlikely                       21.5%               13.7%        17.7%
Neutral                        19.0%               20.7%        19.8%
Quite likely                   16.4%               25.5%        20.9%
Highly likely                  14.7%               19.0%        16.8%
I don’t know, I won’t answer   5.1%                6.8%         5.9%
Total                          100.0%              100.0%       100.0%
Table 10. Humankind will be destroyed by artificial intelligence (by specialization)
The results presented in Table 10 highlight the differences between respondents according to the studies they follow. We find higher cumulated percentages for the likelihood that mankind will be destroyed by artificial intelligence among respondents with humanities studies (44.5%) compared with those with technical studies (31.1%).
Conclusions
Concerns about the emergence and development of AI are understandable, given that currently only possible scenarios of what might happen in the future are being circulated. The perception of the majority of the students in Timișoara (56.7%) is that in the future human relationships may be affected by the emergence and development of devices equipped with artificial intelligence. The results of the study also highlighted the fact that most interviewees believe AI entities/devices will make international cyber-attacks possible (65.3%). Approximately three-quarters of the respondents believe that fewer jobs will be available in the future, and over a third of them foresee scenarios of economic crises, all due to the emergence and development of AI entities/devices. Regarding the possibility that AI will reach the conclusion that people pose a threat, there was a balanced distribution of responses, in the sense that the percentage of those who see this scenario as plausible is roughly equal to that of those
who do not entertain this possibility (about 35% each). A similar situation is found in the case of humanity being destroyed by AI, where opinions are evenly distributed between those who have this fear and those who are more reserved about this hypothesis.
It is very likely that many of our representations of AI entities/devices are strongly shaped by science-fiction books and films, most of which describe an apocalyptic ending for humanity. While at present, at the professional and industrial level, we can notice a growing interest in the development of AI systems that merely augment human "power", perhaps we should also keep in mind that, in the future, intelligent systems might change life for the better, from improving communication to improving medical, transportation and environmental protection services.
Acknowledgments: The author would like to thank his colleagues, who disseminated the
questionnaire to their students, and the students, who took the time to fill it in online.
REFERENCES
McCarthy, J. (2007). What is Artificial Intelligence? Available online: http://www-formal.stanford.edu/jmc/ (accessed on 11 September 2018).
Dziţac, I. (2008). Inteligență artificială [Artificial Intelligence]. Arad: Ed. Universității „Aurel Vlaicu”.
The English Oxford Living Dictionary. Available online: https://en.oxforddictionaries.com/definition/us/artificial_intelligence (accessed on 22 September 2018).
The Encyclopaedia Britannica. Available online: https://www.britannica.com/technology/artificial-intelligence (accessed on 22 September 2018).
Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
Pennachin, C., Goertzel, B. (2007). Contemporary approaches to artificial general intelligence. In Goertzel, B., Pennachin, C. (eds.), Artificial General Intelligence, pp. 1–30. Heidelberg: Springer.
Bostrom, N. (2006). How long before superintelligence? Linguistic and Philosophical Investigations, 5(1), pp. 11–30.
Müller, V. C., Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental Issues of Artificial Intelligence, pp. 553–571. Berlin: Springer.
Gherheș, V., Obrad, C. (2018). Technical and humanities students' perspectives on the development and sustainability of artificial intelligence. Sustainability, 10(9).
https://futureoflife.org/open-letter-autonomous-weapons (accessed on 25 September 2018).
https://www.rand.org/blog/articles/2018/04/how-artificial-intelligence-could-increase-the-risk.html (accessed on 25 September 2018).
Twitter (2014). Available online: https://twitter.com/elonmusk/status/495759307346952192?lang=en (accessed on 26 September 2018).