International Journal on Cybernetics & Informatics (IJCI) Vol. 11, No.4, August 2022
David C. Wyld et al. (Eds): AIAD, EDU, COMSCI, IOTCB - 2022
pp. 01-17, 2022. IJCI - 2022 DOI:10.5121/ijci.2022.110401
A QUALITATIVE APPROACH TO THE
PUBLIC PERCEPTION OF AI
Alexander Hick and Martina Ziefle
Chair of Communication Science, RWTH Aachen University, Aachen, Germany
ABSTRACT
Since the Dartmouth workshop on Artificial Intelligence coined the term, AI has been a topic of ever-
growing scientific and public interest. Understanding its impact on society is essential to avoid potential
pitfalls in its applications. This study employed a qualitative approach to focus on the public’s knowledge
of, and expectations for AI. We interviewed 25 participants in 30-minute interviews over a period of two
months. In these interviews we investigated what people generally know about AI, what advantages and
disadvantages they expect, and how much contact they have had with AI or AI-based technology. Two main themes emerged: (1) a dystopian view of AI (e.g., ‘‘the Terminator’’) and (2) an exaggerated or utopian attitude about the possibilities and abilities of AI. In conclusion, accurate information, presentation, and education on AI and its potential impact are needed in order to align public expectations with AI's actual capabilities.
KEYWORDS
Artificial Intelligence, machine learning, public perception, qualitative study, technology
1. INTRODUCTION
Between the years 1951 and 2022, PubMed records around 150,000 publications relating to Artificial Intelligence (AI). Scientific interest has grown considerably ever since the Dartmouth workshop on Artificial Intelligence in 1956. This interest, however, is not confined to the scientific community. Public engagement with AI has also grown over the past 20 years, especially since 2009 [1]. A Google search for AI in the year 2000 would have returned around 37,000 results; today, it returns roughly 3 billion. This underlines the exponential growth of available sources from which to extract information about the topic.
Alongside this digital and scientific forum, there also remains a noteworthy social and cultural
representation of AI. According to Wikipedia (2022), between 1927 (the movie Metropolis) and today (the movie Je suis Auto, 2022), around 150 movies have been produced that feature AI as a technology, a theme, a mood, or a subject of discussion.
In 2021, the European Commission revised the Coordinated Plan on Artificial Intelligence, a set of goals and recommendations for the development and uptake of AI in the European Union. One of its key policy objectives is to implement AI as a societal good, that is, for people.
The general public is one of the main stakeholders in this discussion and its perception influences
the integration of AI in society and everyday life [2]. To estimate the impact AI has and could
have on society we should understand what it is, what it does, and how and where it is
implemented. AI is a term coined by a group of computer scientists at the Dartmouth workshop
on Artificial Intelligence [3] and refers to the ability of a computer to perform actions commonly
associated with human intelligence [4]. Since then, this definition has undergone various adjustments and now amounts to a composite that describes AI as the field of science in which we develop technologies that perform certain cognitive tasks in an intelligent manner [5]. Some of
these tasks include image, face, and object recognition, speech translation, movie dubbing (synchronisation), or transportation [6], [7], [8]. AI is used as a tool to quickly and efficiently
analyse large amounts of data by implementing pattern recognition algorithms that are applicable
for the task at hand. These tools may be in the form of software (i.e., algorithms, deep learning,
neural networks) and hardware (i.e., robots, machines, cars). An essential feature of AI is large
amounts of data with which algorithms can be trained in various desired (or undesired) ways [9],
[10]. AI is thus a tool which automates part of the cognitive labour which, otherwise, would be
carried out by humans [11]-[16]. The technologies that employ AI are the focus of this study.
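To make the pattern-recognition idea above concrete, the following minimal Python sketch (an illustration only, not part of the study; the dataset and library choice are our assumptions) trains a simple classifier on labelled example data:

```python
# Illustrative sketch of "pattern recognition trained on data": a simple
# statistical model is fitted to labelled examples and then recognises
# patterns in data it has not seen before.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 pixel images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple statistical learner
model.fit(X_train, y_train)                # "training": weights are fitted to the data

print(f"Recognition accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```

In this sense, "training" simply means fitting a model's weights to many examples; the more (and the better) the data, the better the resulting pattern recognition.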
But what makes people accept AI-based technology? Many papers, studies, and books have been written on the topic of technology acceptance and the connected mediating influence of how a technology is perceived [20], [21], [22]. However, the invention of something does not necessarily lead to its acceptance [22]. On the one hand, people have to know that something exists and, on this basis, have the chance to make informed decisions about accepting it. On the other hand, and often in spite of their knowledge about the technology, people might still refuse to accept it for personal or affective reasons, even when the technology would benefit them [23], [24]. Additionally, the technology has to be useful, easily accessible, and improve on an otherwise less efficient way of working [20], [24], [25].
Nonetheless, rejection, or more precisely, not accepting a technology might still ensue [26], [27].
Technology acceptance is, thus, a multi-factorial process. During this process a person, the
(potential) user, evaluates whether they approve of the advertised technology based on several
systemic, individual, and context-related influences. Understanding which aspects and factors
influence the deliberation process is key to produce an acceptable, and for the developer
profitable, product [21]-[27].
The next question is: Who accepts the technology? Different stakeholders do, such as healthcare workers, politicians, doctors, lawyers, and engineers, to name a few.
However, should the general public even be included in the acceptance process? In a recent study
on public deliberation about the topic of AI, Lehoux, Miller and Williams-Jones [26] used a scenario-
based approach to bring ethical challenges regarding AI-technology to the public’s attention. The
aim of the study was to gain insights into the public’s imagination, and to validate the value of
citizens' participation in the field of responsible research and innovation. This is a good example
of public engagement for research purposes which is an essential part of technological integration
within society [29]. Therefore, the public should be considered as an important constituent of the
development and implementation process and the adoption of innovative ideas in general, and
new technologies in particular.
According to AI-watch, a website by the European Commission monitoring AI developments and
progress, there are over 1000 AI firms and more than 400 patents in AI-technologies (i.e.,
software and hardware). These figures only represent the current situation in Germany.
Furthermore, the EC acknowledges that the general public might not be able to ‘fully understand the workings and effects of AI systems’ (EC: AI ethics guidelines, 2021, p. 23), which is a reasonable assumption given the myriad of different AI applications, algorithms, and technical jargon. However, this is no reason not to engage the public in a manner that is accessible to a
non-expert in the field. It is evident that AI is already in widespread use in society and everyday
life, with a broad range of individual differences, user diversity aspects, privacy, and trust issues,
all of which need to be considered in this line of research [27], [30], [31]. This widespread availability and continuing development create a need for a thorough understanding of stakeholders' perspectives on those technologies.
These developments in the scientific, political, and societal spheres favour further investigation of the public perception of AI [32]. The multi-level growth of both societal and scientific interest in
AI opens various new perspectives on, and opportunities for, epistemological, ethical [33], judicial [34], and technological discussion [9], [35], [36], [37]. While the public's exposure to the concept of AI, and thus its perception, has shifted increasingly into the empirical focus [38], [39], [40], [41], there remains a need for further and, given the rapid ongoing development of the technology, updated insights into the public's perception [28]. This study aims to provide additional qualitative insights into the public's knowledge of, contact with, and expectations for AI.
1.1. Questions and Logic of Research
To achieve an understanding of public perceptions and mental models of AI technology, and to learn about potential knowledge gaps, misconceptions regarding expectations, and acceptance-relevant barriers, we carried out a qualitative interview study in which laypeople were asked about their knowledge of AI, their expectations in different application contexts, and their individual wants and needs, but also about the potential barriers they see when using AI technologies.
The qualitative approach was chosen to better understand the individual reasons people might
give and their explanations for (not) using or accepting AI. The findings might help researchers understand sensitive issues and public communication strategies related to AI technology.
The results can be used to develop educative materials which might help to further support the
public's understanding of the AI technologies they will encounter in everyday life. Likewise,
technical designers and computer scientists developing AI technologies might also benefit from
such early cognitive concepts as it gives them a sense of where laypeople have difficulties with
adopting, understanding, or using AI-based technologies.
2. METHOD
2.1. Participants
Participants were recruited from the social network of the researcher. The final sample included
N = 25 participants. Information on the demographic variables can be found in Table 1. No prior
knowledge about AI was necessary for the interviews, but three participants (two males, 25 and 63 yrs.; one female, 59 yrs.) had previous working experience with AI technology or had worked
in the technology industry. The remaining sample had no prior professional experience with AI or
AI-based technologies. All participants were notified prior to the interview about the careful and anonymised processing of their personal data and that participation was voluntary, after which they all gave their informed consent to participate in this study.
Table 1. Sample statistics

Variable      N        Percentage %
Gender        25       100
  Females     14       56
  Males       11       44

              Mean     SD       Range
Age           43.72    21.65    21-82
2.2. Procedure
To assess personal-level knowledge and understand subjective attitudes about AI we chose a
qualitative approach with open-ended questions. These questions were asked in semi-structured interviews over a period of one month, after careful analysis of the existing literature on the
topic. The interview guideline was developed from November 2021 to January 2022. First, a
literature review was conducted to find relevant areas of use for AI. Four areas were found to be
the most relevant for the current research purposes. The areas were (1) Finance, (2) Mobility, (3)
Communication, and (4) Healthcare [42], [8]. On the basis of these areas and the corresponding
literature, we developed the open-ended questions and metaphors for the interview-guideline. The
interviews were designed to be as explorative as possible to avoid influencing the participants.
This approach is useful when examining the particular reasons people give and the attitudes they
hold about a topic. The explorative approach was adapted from existing studies [51], [52] to evaluate mental models (i.e., the participant's subjective view) about AI. There were four
segments in the interview guideline. All segments were intended to elicit the most spontaneous
response that the participants had, without too much deliberation or concentration on technical
details. The first segment of the guideline was about general AI knowledge that each participant
might have, and areas of use that would come to mind. The second segment asked participants to
indicate what reasons there were to use AI (i.e., advantages) or not to use it (i.e., disadvantages).
After this segment, the third question was about existing contact (unaware or aware) with AI or
AI-based technologies. The last segment included a list of metaphors from which participants were asked to choose the option that would best describe what AI is for them personally.
The interview started with a general introduction to the topic, the name and field of interest of the
researchers, and the overall aim of the project. To avoid bias, no specific information was given
about AI or technologies based on it. During the interview, participants were first asked to explain what AI is and, second, what advantages and disadvantages they considered it to have.
Then, participants were asked to describe AI in their own words, and finally encouraged to name
a metaphor which would best describe AI for them. These metaphors could be chosen from a list
in their handouts or from the participants’ own imagination. To assist the participants on how to
answer, additional sub-questions were introduced if needed. These questions were further divided
into the private sphere (home & work) and the public sphere (public places or transportation).
2.3. Data Analysis
The interviews were carried out in February 2022 via Zoom, Skype and, in some cases, in person.
The interviews lasted between 30 min and 1 hour. Four participants (>80 years) were interviewed
together. The remaining 21 participants were single person interviews. All participants were
handed a copy of the interview guideline in which they could follow each question. The
interviews were audiotaped and transcribed in March 2022. The transcripts were evaluated by the
researchers and categorized based on recurring themes in all the interviews, and pre-existing
themes found in the literature. The categorization was performed in MAXQDA (2018) [53]. The
study was carried out in Germany and in German. Selected quotes from the interviews were
translated to English for this publication.
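The categorization itself was performed in MAXQDA; purely to illustrate the kind of frequency tally that underlies such a thematic analysis, a minimal Python sketch along the following lines could count coded themes from an exported coding table (the file name and column names are hypothetical):

```python
# Illustrative sketch only: the actual coding was done in MAXQDA.
# Assumes a hypothetical CSV export with one row per coded transcript
# segment and the columns "participant" and "theme".
import csv
from collections import Counter

theme_counts = Counter()
with open("coded_segments.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        theme_counts[row["theme"]] += 1  # tally each recurring theme

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```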
3. RESULTS
The results of this research will be presented in the order of the original interview guideline. The
first interview question was as follows:
3.1. ‘What do you already know about AI?‘
This question was answered very similarly among the participants. It was intended to invoke the
first knowledge and thoughts about the topic that came to mind. Eight people had some, but not a
lot of prior knowledge about AI:
‘‘Not much, just that it is technology that becomes intelligent through humans, not by itself.
‘‘(F23)
‘‘Basically nothing, I have never concerned myself with it, only some stuff you pick up from the
media. ‘‘(F59)
Others said they had no prior knowledge ("I have no idea at all", M50) or no real prior knowledge ("Well, honestly, I have no real idea what it is", M58). Some indicated that they had never thought about it: "[…] I never thought about it" (F61).
In contrast to this, 11 people had an idea (sometimes a very accurate one) of what AI is. There were, again, very similar answers to the first question:
‘‘Well, AI is Artificial Intelligence. I would define it as machine learning, that is, based on some
particular dataset or data, the AI can create knowledge and from which it can draw
conclusions.‘‘(M25)
‘‘So, I think that AI means an algorithm that can learn, and which is implemented in a
machine.[…] and AI can speed up processes and make them easier in order to save money, time
[…] something where machines get human abilities to make human life easier.‘‘(F58)
3.2. ‘Where is AI implemented?‘
Here, too, similar answers were common among participants. All participants but one (F74) indicated that AI was built into their smartphones for face recognition, navigation, or speech recognition (Siri). Some participants also mentioned smart-home technologies like Amazon's Alexa or the Philips Hue system. The next most frequent answer was autonomous driving or automated assistance systems in cars (e.g., automatic braking, lane assistant, blind-spot assistant). Other areas of implementation included, in order of frequency: medical technologies, industry, video games, and science (i.e., research). Exemplary results and areas of use are
summarised in Table 2.
Table 2. Summary of ‘Where AI is implemented’.

Technology                                       Area of Use
Autonomous driving                               Mobility
Traffic lights                                   Transportation
Approximation software                           Engineering
Software to detect group dynamics in stadiums    Public Entertainment
3.3. ‘What is the difference between an AI-based technology and non-AI technology?‘
The most common answer was that AI is something that learns. One participant (M25, with prior
knowledge about AI) explained that:
‘‘I would say that the [AI] system learns, that is, it analyses data. By means of this data
analysis and the acquisition of new data, it learns and adapts to the individual needs and
the purpose for which we developed the system.‘‘(M25)
Another view about AI was that ‘‘it‘‘ (meaning AI as an individual in its own right) develops into something ‘‘for itself‘‘:

‘‘Well, I think about AI as this individual that develops, and other technology (non-AI) does not, because it only acts on entered programming codes.‘‘(F47)
Two participants (F80 & M82) discussed the issue of AI as something functional, as opposed to something emotional or social. For them, AI could never ‘‘know‘‘ emotions or ‘‘represent them (F80)‘‘ in the way humans do. Overall, all 25 participants mentioned that AI could not, or should not, be seen as something empathic, emotional, or as a ‘‘social agent‘‘:
‘‘[…] it will be able to do everything, but emotions, it should not be able to do.‘‘(F80)
‘‘[…] yes, I think recognizing emotions, that is a step too far.‘‘(M82)
‘‘My emotions are none of its business!
[…]my emotions belong to other humans.‘‘(F74)
Another participant also shared this sentiment indicating that emotions are something reserved
for human beings:
‘‘Recognise emotions…under no circumstances. They are reserved for humans‘‘ (M24a)
Another participant agreed with the classification and said that:
‘‘Exactly, AI should not be seen as something social!‘‘(M58)
One participant elaborated on this classification saying that, even if it (AI) could recognise
emotions, she would not want it to act on them:
‘‘(When the AI recognises sadness) […] oh she is sad, let us play a happy song…I am quite old-school in that respect…then I’d rather stay sad, nothing should turn this around (laughs)‘‘ (F24a) (brackets added)
This last answer was shared by another female participant with regard to the individual
description of AI:
‘‘Well, I see AI as a Servant or Advisor because the human aspect…well, this will never be
the case for me. […] friend? Well, how?! I do not share (my emotions) with my
computer.‘‘(F59)
International Journal on Cybernetics & Informatics (IJCI) Vol. 11, No.4, August 2022
7
3.4. ‘What are reasons to use AI?‘
Following the question about the difference between AI and non-AI, participants were then asked to give reasons to use AI. The most common answer concerned the usefulness of AI, because it would make processes or work easier. The answers are summarised in Table 3 below.
Table 3. Reasons to use AI and advantages.

Reason to use AI        Advantages
Language translation    Translate any language
Cleaning robot          Convenience; makes life easier
Convenience             Efficiency; saves time
The following quotes were extracted from the transcript to explain some of the reasons
participants gave for using AI:
‘‘In the own home it would make sense to use it to make life extremely easier, like these
robots that clean or care for patients at some point.‘‘(F50)
‘‘It probably would make things easier when it takes over easy tasks. The cleaning robot would be an example for this.‘‘(M24)
‘‘Because it is faster, it can detect faster. But also, that the progress is faster. Maybe it is,
on the other hand, not too good because one loses touch with rationality. But it is
important that it moves along (AI), and that it is being used.‘‘(F80)
3.5. ‘What are reasons not to use AI?‘
Participants were asked to indicate reasons not to use AI. Their answers are divided into subsections. These subsections represent categories created in the MAXQDA [53] data analysis and are based on the answers given by the participants. Most of the time, participants showed some
level of concern regarding AI. These levels of concern were divided into the following
subcategories.
3.5.1. Data Privacy Concern
Data privacy was distinct from personal privacy. In this sample the participants were more
concerned with their personal data (i.e., name, social security number, bank statements), than
with their own privacy (i.e., the right to be left alone or ‘‘picking my nose, without being
watched‘‘ (F59)). All participants were concerned that their personal data could be used to their
disadvantage (e.g., higher insurance payments).
‘‘[…] things that concern me individually, I reckon there are certain aspects where the AI is being misused, the personal data.‘‘(M25)
3.5.2. Privacy Concern
Privacy was defined by one participant as the individual’s sphere in which one has no need to
change one’s behaviour in either a negative or positive way. This participant associated this type
of privacy with a decreased cognitive load due to the freedom of not being watched. She
underlined the importance of being able to be left alone:
‘‘That you can be just the way you are. […] even when one’s own behaviour has no negative consequences, but just that…being able to be burden-free‘‘(F25)
Another participant had a concern regarding the private sphere and dignity of other, particularly
older, people:
‘‘[…] or when older people are monitored in a room […] I believe there are personal contexts in which such personal privacy becomes endangered.‘‘(M25)
3.5.3. Loss of control
In addition to this concern of both data privacy and personal privacy, some participants worried
about losing control over an autonomous system. There were differences, however, in what this
‘‘loss of control‘‘ included:
‘‘What it (AI) should not be allowed to be is more intelligent than the human that has
developed it and if one loses control over it.‘‘(M24)
‘‘It should definitely not be able to program itself! Power of the machines and whatnot…If
you have watched Terminator, you surely wouldn’t want that. If it is intelligent and
develops a personality with own interests like: ‘‘I do not like asparagus‘‘, for
example…this would be a problem.‘‘(F42)
‘‘[…] yes, both, loss of control and privacy (as areas of concern)‘‘ (F59)
‘‘[…] it depends on how it’s done…or is it like the Terminator…like this autonomous
machine‘‘(M50)
These were concerns related to the presentation of AI in the media or movies. There was another
facet of loss of control regarding the software and algorithm development and implementation.
One participant, who had worked in the technology industry, said:
‘‘The AI-system needs to be supervised…not by the system itself, of course, because then it
becomes rogue[…] that is the problem with AI…that, due to money issues, the mistakes in
the original codes are being transferred to the technologies we use today. And then the AI
provides faulty output‘‘(M63)
3.5.4. Surveillance Concern
In line with this reasoning, some participants had concerns about surveillance. There were two levels of concern: first, that AI might monitor people; second, that people might not sufficiently monitor AI. This distinction was not made by all participants. Two participants (M25 & M63) had a professional background in the area of AI and reported specific, industry-related concerns.
(A) AI monitoring people:
‘‘If older people are constantly being watched in their room via a camera and are under
surveillance…that, I think, is not acceptable at all!‘‘(M25)
(B) People monitoring AI:
‘‘For me it’s just scary because I know about the source of error. If it is done the way my
company used to do it, then I definitely don’t want that because, if so, it cannot work!‘‘
This participant also gave a reason for their view:
‘‘The reason is: Money. Because it is more expensive to sufficiently monitor the system. I am convinced that no company, no government, doesn’t matter who it is exactly…no one would be willing to do this in such a way that it works adequately‘‘(M63)
3.5.5. Question of Responsibility
The final concern was about responsibility. In one case the participant was referring to the public
transportation system (e.g., autonomous trams):
‘‘I think if the tram will be able to drive by itself one day, the question of responsibility
arises…in the case of an accident that is.‘‘
However, this participant also added a statement about the state-of-the-art of this specific
technology:
‘‘But I think if we reach the level of technology at which we are able to do this, it is
definitely a good thing.‘‘ (M24)
Other participants had different facets of responsibility in mind:
‘‘From my point of view, it should be forbidden to use the (AI-based) tracking on the
internet…not only as an adult, it pushes me towards buying stuff that I do not need…I also
think that teenagers and children are heavily influenced by this. Their whole nature and
personality‘‘(F42)
‘‘I think that before someone were to use an AI, it should not be possible without my
consent. It should not happen that a doctor won’t treat me because I have not given my
consent.‘‘(F74)
‘‘[…]it should not share this with my insurance…there, my affinity for data privacy is
popping up.‘‘(F25a)
3.6. Metaphors
This part summarises participants' personal associations with the term AI and the related metaphors or pictures that came to mind. The most common metaphor for AI was servant or advisor. Only one participant (M25) indicated that, for him, AI is something ‘social‘, like a ‘guardian angel‘. The other participants described AI in terms of something functional, i.e., used for a specific task. For practical reasons, not every quote from the transcripts could be used in this section. However, in the following section the overall themes, and some additional quotes, will be discussed; a list of the metaphors is presented in Table 4 and the respective frequencies in Figure 1 below.
Table 4. Metaphors for AI.

Metaphors: Advisor, Servant, Patron, Guardian angel, Roommate, Helper, Friend, Ruler

Other metaphors like ‘Roommate‘, ‘Friend‘, or ‘Someone equal to me‘ were categorically rejected by most participants. Only two participants said AI was ‘above me‘ (F25) or considered AI a ‘friend‘ (F59).
Figure 1. Metaphor for AI Frequencies
4. DISCUSSION
4.1. General Summary
This study used a qualitative approach to investigate people's knowledge, expectations, and perception of AI. It applied open-ended questions to assess what people generally know about and expect of AI, and how much contact they have had with technology that is based on it. Examples included smartphones, laptops, tablets, cars, and medical technology such as diagnostic and
prognostic tools. This was done to assess what people consider about a new technology and
which aspects they tend to focus on in terms of knowledge, perception, and, subsequently, acceptance.
Participants in this sample had all heard of the term AI. However, the answers to what AI specifically is varied greatly. In general, the participants had a relatively good grasp of the basic function of AI and AI-based technology. Overall, the participants were also open to the ideas and promises of AI. The results suggest that the
participants in this study were generally open to a ‘new’ technology such as AI, without
necessarily knowing the exact way the technology works, but rather understanding its aim, the broad way in which a diagnostic algorithm, for example, reaches a conclusion, or the general transparency of the system itself, which is needed to answer the question of responsibility in case of a malfunction.
During the interview most participants were interested in the possibilities that AI holds and also
curious about what it can and cannot do at the moment. However, there was an inaccurate
judgment about how widespread AI really is in everyday life. Although most participants used a
smartphone, not all of them considered it to be AI-based. Furthermore, some people thought that
AI is clearly defined. This shows a certain discrepancy between science and society. In science,
as discussed in the introduction, we do not have one clear definition of AI. Instead, a composite
of different elements is created to come up with a working definition based on context. This
definition, however, is by no means complete and applicable in every context. During this study
participants ascribed AI to systems that were not AI-based and failed to do so in cases where a technology actually is based on it. We will come back to this point in the section about the
relevance of the present study.
Now we will focus on a different kind of discrepancy between the participants themselves. We
will discuss two standpoints that originated from differential exposure to and understanding of
AI. Most participants answered the open-ended questions without mentioning either utopian or
dystopian views. Nonetheless, some participants mentioned either utopian (i.e., exaggerated views on what AI can do) or dystopian views (i.e., unrealistic views on what AI could ‘‘become‘‘).
4.2. Dystopian Views on AI
There were some answers that showed either an exaggerated view, that is, ascribing to AI what it cannot (or could not) actually do, or an unrealistic view in the sense that some participants associated AI with dystopian scenarios from movies such as the Terminator.
‘‘It should definitely not be able to program itself! Power of the machines and whatnot…If
you have watched Terminator, you surely wouldn’t want that. If it is intelligent and
develops a personality with own interests like: ‘‘I do not like asparagus‘‘, for
example…this would be a problem.‘‘(F42)
This dystopian view about ‘what AI can do’ was in response to the item ‘AI can program itself’. The item-based part of the interview is not part of the current study. It was not further specified what ‘programming’ means nor what ‘itself’ would include. However, it seems that some
participants had an inherently bad or negative association when thinking about losing control
over an AI system:
‘‘What it (AI) should not be allowed to be is more intelligent than the human that has
developed it and if one loses control over it.‘‘(M24)
Almost all participants associated something functional rather than social with AI. Although this
might not lead to a dystopian view per se, this trend indicates that AI is considered to be
something that might be able to surpass the functional abilities of humans (e.g., diagnosing
patients, data analysis, object detection). While there is currently no general AI, that is, an all-purpose system that can transfer knowledge to any domain, it is important to consider the possible concerns and implications of such a system in advance [43], [44], [45]. The people who knew what AI
actually is indicated that, for them, it is something very transparent and useful. The people who
had no idea about AI could not form this opinion and, thus, described AI as something opaque or
threatening. The part of AI, commonly known as machine learning, is a very complex statistical
process of calculating different weights from input data to generate an output. However, the technical details were not explained by any participant, because they are not common knowledge and are beyond the scope of this study. Still, as mentioned in the introduction, the missing understanding of the workings and effects of an AI system could have motivated the European
Commission to propose an organisation that will provide a technical certificate, attesting an AI
system’s transparency, fairness, and accountability. There is a clear political understanding of the
discrepancy between knowledge within the general public and an accessible source of
information in the AI Now Report from the year 2018 as well as the European Commission's ‘Ethics guidelines’ from 2021. The implementation of AI-based devices into everyday life should be based on informed consent. To be able to give truly informed consent, it is important that
the provided information is accessible to the user or user group. In the realm of AI systems and machine learning algorithms, this is currently not the case for most users [54].
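To illustrate the ‘weights’ description above, the following minimal Python sketch (made-up numbers for illustration only, not anything built or analysed in this study) shows how a single learned unit combines weighted inputs into an output:

```python
# Minimal sketch of "calculating different weights from input data to
# generate an output": one logistic unit with illustrative, made-up values.
import math

def logistic_unit(inputs, weights, bias):
    """Weighted sum of the inputs, squashed to a value between 0 and 1."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

inputs = [0.2, 0.7, 0.1]    # e.g., features extracted from raw data
weights = [1.5, -2.0, 0.4]  # in a real system, learned during training
print(f"Output: {logistic_unit(inputs, weights, bias=0.1):.3f}")
```

In a real machine-learning system, many such units are stacked and their weights are adjusted automatically against large amounts of training data, which is what makes the overall process statistically complex and hard for laypeople to inspect.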
4.3. Utopian Views on AI
In contrast to the dystopian associations is an exaggerated utopian view on AI. One participant
ascribed to AI abilities which it cannot perform. The following quote is a good example:
‘‘[…] when I get up in the morning to make coffee and it knows what I want…that is an advantage. But on the other hand, it doesn’t forget and when it is intelligent…, and has a personality, everybody does, and it does too…‘‘(F42)
This view is not utopian per se because it does not describe AI as something desirable. Rather it
exaggerates the potential abilities that AI can have. Intelligence is not the same as personality.
However, one cannot blame this participant for holding this view, because of the way that AI is
being portrayed in the movies (e.g., the Terminator, Her, Ex-Machina). These movies portray AI
as being completely autonomous and super-human. By presenting AI in such a way, people might get the wrong impression about AI's actual abilities. Furthermore, this depiction tends to underrepresent what AI already can do and how it currently works in everyday life, e.g., in smartphones, factories, cars, and computer vision, to name but a few. Knowing that this, too, is AI would help people understand the potential of this technology and, in turn, accurately evaluate its true benefits
(or barriers). This particular association was an exemplary result of the way movies can influence
the perception of technology. The influence of popular culture, and especially of movies, on technology perception is a well-established concept [55]. Therefore, this relationship is
another relevant angle for future studies.
4.4. Relevance
The relevance of this study lies, first, in its explorative summary of subjective knowledge of, and attitudes towards, AI and, second, in its outline of potential barriers that might occur during the development and implementation of AI-based technologies. The qualitative nature of this study made it possible to assess the reasons for a particular attitude or subjective perspective on a certain benefit or barrier in terms of AI development, use, and implementation. On the one hand, participants' expectations were
(subjectively) influenced by their own contact with AI systems. On the other hand, this view
might also have been moderated by exposure to the media's and popular culture's presentation
of AI. In future research, the results of this study need to be quantified to validate a reliable association between the variables of AI perception and acceptance [1], [28], [46].
According to one participant (M63), the discussion about AI and its technological
implementation has seen a steep increase since the year 2000 due to the developments in
computing abilities. In line with this claim, Fast and Horvitz [1] found that public discussion, too, has increased since the year 2009 (coinciding with the release of the critically
acclaimed movie: Terminator Salvation). In their systematic review of New York Times articles
on the topic of AI over a period of 30 years, Fast and Horvitz discovered a similar trend in public
perception, with a discussion about AI that is usually more optimistic than pessimistic. This
shows that even today, after more than a decade of exposure to AI, people still hold on to some of the narrow concepts about AI and thus tend to worry about losing control over it. In line with
this finding is also the concern people have about issues and questions of (moral) responsibility,
data privacy, privacy, and security. These are all relevant elements because they can influence
how people think about a technology, and whether they accept it or not [9], [39], [46].
As addressed in the introduction, the acceptance of technology is a complex process. To avoid spending extensive amounts of time and energy, careful consideration of this process is needed from developers, policy makers, and other economic, political, or social stakeholders. This study is a first step towards understanding what people consider to be important. It summarises a snapshot of people's attitudes and reasons, which can help with the design as well as the implementation of AI-based technology, keeping in mind that sometimes there is a clear concern or fear about a technology that can be addressed by the developers.
Another relevant point is the knowledge and curiosity that people have about technology.
Although there were participants who did not know a lot about AI, they were still curious about its potential and its future. Often these participants took issue with the opaqueness of AI and were uncertain whether they could trust it. By providing a sufficient amount of transparency and public education on the technology, and by co-developing AI technologies with, and for, people, many of the mentioned fears can be resolved. Therefore, this study can also be used as a roadmap to avoid certain pitfalls.
Technology usually develops alongside the context and the society it is being developed for [45],
[46]. Breakthroughs like the steam engine, the computer or, nowadays, AI tend to occur in a period during which there is both a need for the new technology and the resources to realise it [47], [48], [49]. The rapid development of, and investment in, AI is facilitated by the increase in computing power, hard-drive storage capacities, market interest in AI products, and large amounts of available, and often unavailable, data [17], [18], [19]. However, the use of these technologies in everyday life still depends upon the right perceptions of, accepting attitudes towards, and the need for their implementation.
4.5. Strengths & Weaknesses
The main strength of this study was its reliance on purely explorative concepts. The open-ended
questions were posed in such a way that any answer was possible and could be considered. The
main interests for this study were the spontaneous and direct associations that people have about
the concept of AI. The interview questions were selected from a pool of different areas e.g., e-
commerce, finance, transportation, healthcare. Thus, a wide variety of potential associations
could be covered and is presented in this paper.
In contrast to its strengths, there are some limitations to this approach. The current study only presents explorative and descriptive findings, which might not be generalizable to the wider public. Furthermore, the interview guideline was semi-structured, which resulted in some variations in the interviews. Also, the sample included more people in the age range of 20-30 and only some above 70, which skewed the average towards a mean age of 43. Another sample-related limitation is the nature of the sampling method: the sample was conveniently chosen from the social circle of the researcher. This might have led to some biased answers with regard to interest in the topic or the willingness to participate. Future studies should use a random and more varied sample to balance individual interest in the topic.
4.6. Summary & Future Research
In conclusion, the study has shown existing basic knowledge about AI and personal deliberation about important societal, ethical, and technological issues. Participants showed a good grasp of the concept of AI and were generally excited to learn about the topic. There is a lot
of potential in interviewing the general public about AI. Future studies should focus on eliciting
the reasons why people think that AI is threatening, uncontrollable, or a guardian angel.
Additional quantitative studies could aim at generalising findings about the existing knowledge
level regarding AI and AI-based technologies.
ACKNOWLEDGEMENTS
The authors thank all of the participants for sharing their views, stories, and attitudes.
Furthermore, the first author A.H. would like to thank Sophia Otten, Caterina Maidhof, and Julia Offermann for notes on an earlier version of this draft.
This work is part of the VisuAAL project on Privacy-Aware and Acceptable Video-Based
Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This
project has received funding from the European Union’s Horizon 2020 research and innovation
program under the Marie Skłodowska-Curie grant agreement No 861091 and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2023 Internet of Production - 390621612.
REFERENCES
[1] E. Fast and E. Horvitz, ‘Long-Term Trends in the Public Perception of Artificial Intelligence’. arXiv,
Dec. 02, 2016. Accessed: Jun. 17, 2022. [Online]. Available: http://arxiv.org/abs/1609.04904
[2] B. J. Copeland and D. Proudfoot, ‘Artificial intelligence’, in Philosophy of Psychology and Cognitive
Science, Elsevier, 2007, pp. 429-482. doi: 10.1016/B978-044451540-7/50032-3.
[3] The Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group et al.,
‘Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial
Intelligence in Radiology’, Can Assoc Radiol J, vol. 70, no. 2, pp. 107-118, May 2019, doi:
10.1016/j.carj.2019.03.001.
[4] K. Murphy et al., ‘Artificial intelligence for good health: a scoping review of the ethics literature’,
BMC Med Ethics, vol. 22, no. 1, p. 14, Dec. 2021, doi: 10.1186/s12910-021-00577-8.
[5] M. Blut, C. Wang, N. V. Wünderlich, and C. Brock, ‘Understanding anthropomorphism in service
provision: a meta-analysis of physical robots, chatbots, and other AI’, J. of the Acad. Mark. Sci., vol.
49, no. 4, pp. 632-658, Jul. 2021, doi: 10.1007/s11747-020-00762-y.
[6] V. Kaul, S. Enslin, and S. A. Gross, ‘History of artificial intelligence in medicine’, Gastrointestinal
Endoscopy, vol. 92, no. 4, pp. 807-812, Oct. 2020, doi: 10.1016/j.gie.2020.06.040.
[7] L. J. Catania, ‘The evolution of artificial intelligence (AI)’, in Foundations of Artificial Intelligence
in Healthcare and Bioscience, Elsevier, 2021, pp. 7-11. doi: 10.1016/B978-0-12-824477-7.00002-X.
[8] A. Jobin, M. Ienca, and E. Vayena, ‘The global landscape of AI ethics guidelines’, Nat Mach Intell,
vol. 1, no. 9, pp. 389-399, Sep. 2019, doi: 10.1038/s42256-019-0088-2.
[9] V. Claes, E. Devriendt, J. Tournoy, and K. Milisen, ‘Attitudes and perceptions of adults of 60 years
and older towards in-home monitoring of the activities of daily living with contactless sensors: An
explorative study’, International Journal of Nursing Studies, vol. 52, no. 1, pp. 134-148, Jan. 2015,
doi: 10.1016/j.ijnurstu.2014.05.010.
[10] P. Climent-Pérez, S. Spinsante, A. Mihailidis, and F. Florez-Revuelta, ‘A review on video-based
active and assisted living technologies for automated lifelogging’, Expert Systems with Applications,
vol. 139, p. 112847, Jan. 2020, doi: 10.1016/j.eswa.2019.112847.
[11] J. Füegi and J. Francis, ‘Lovelace & Babbage and the creation of the 1843 “notes”’, ACM Inroads,
vol. 6, no. 3, pp. 78-86, Aug. 2015, doi: 10.1145/2810201.
[12] A. G. Bromley, ‘Charles Babbage’s Analytical Engine, 1838’, IEEE Annals Hist. Comput., vol. 4, no. 3, pp. 196-217, Jul. 1982, doi: 10.1109/MAHC.1982.10028.
[13] J. Al-Khalili, ‘The birth of the electric machines: a commentary on Faraday (1832) “Experimental
research in electricity”’, Phil. Trans. R. Soc. A., vol. 373, no. 2039, p. 20140208, Apr. 2015, doi:
10.1098/rsta.2014.0208.
[14] W. D. Devine, ‘From Shafts to Wires: Historical Perspective on Electrification’, J. Eco. History, vol. 43, no. 2, pp. 347-372, Jun. 1983, doi: 10.1017/S0022050700029673.
[15] C. Coombs, ‘Will COVID-19 be the tipping point for the Intelligent Automation of work? A review
of the debate and implications for research’, International Journal of Information Management, vol.
55, p. 102182, Dec. 2020, doi: 10.1016/j.ijinfomgt.2020.102182.
[16] J. C. Sipior, ‘Considerations for development and use of AI in response to COVID-19’, International
Journal of Information Management, vol. 55, p. 102170, Dec. 2020, doi:
10.1016/j.ijinfomgt.2020.102170.
[17] C. Collins, D. Dennehy, K. Conboy, and P. Mikalef, ‘Artificial intelligence in information systems
research: A systematic literature review and research agenda’, International Journal of Information
Management, vol. 60, p. 102383, Oct. 2021, doi: 10.1016/j.ijinfomgt.2021.102383.
[18] S. T. M. Peek, E. J. M. Wouters, J. van Hoof, K. G. Luijkx, H. R. Boeije, and H. J. M. Vrijhoef,
‘Factors influencing acceptance of technology for aging in place: A systematic review’, International
Journal of Medical Informatics, vol. 83, no. 4, pp. 235-248, Apr. 2014, doi:
10.1016/j.ijmedinf.2014.01.004.
[19] W. Wilkowska, J. Offermann-van Heek, F. Florez-Revuelta, and M. Ziefle, ‘Video Cameras for
Lifelogging at Home: Preferred Visualization Modes, Acceptance, and Privacy Perceptions among
German and Turkish Participants’, International Journal of Human-Computer Interaction, vol. 37, no. 15, pp. 1436-1454, Sep. 2021, doi: 10.1080/10447318.2021.1888487.
[20] K. Arning and M. Ziefle, ‘“Get that Camera Out of My House!” Conjoint Measurement of
Preferences for Video-Based Healthcare Monitoring Systems in Private and Public Places’, in
Inclusive Smart Cities and e-Health, vol. 9102, A. Geissbühler, J. Demongeot, M. Mokhtari, B.
Abdulrazak, and H. Aloulou, Eds. Cham: Springer International Publishing, 2015, pp. 152-164. doi:
10.1007/978-3-319-19312-0_13.
[21] Bhattacherjee and Sanford, ‘Influence Processes for Information Technology Acceptance: An
Elaboration Likelihood Model’, MIS Quarterly, vol. 30, no. 4, p. 805, 2006, doi: 10.2307/25148755.
[22] Venkatesh, Morris, Davis, and Davis, ‘User Acceptance of Information Technology: Toward a
Unified View’, MIS Quarterly, vol. 27, no. 3, p. 425, 2003, doi: 10.2307/30036540.
[23] V. Venkatesh, ‘Adoption and use of AI tools: a research agenda grounded in UTAUT’, Ann Oper
Res, vol. 308, no. 1-2, pp. 641-652, Jan. 2022, doi: 10.1007/s10479-020-03918-9.
[24] V. Venkatesh and F. D. Davis, ‘A Theoretical Extension of the Technology Acceptance Model: Four
Longitudinal Field Studies’, Management Science, vol. 46, no. 2, pp. 186-204, Feb. 2000, doi:
10.1287/mnsc.46.2.186.11926.
[25] J. Offermann-van Heek, E.-M. Schomakers, and M. Ziefle, ‘Bare necessities? How the need for care
modulates the acceptance of ambient assisted living technologies’, International Journal of Medical
Informatics, vol. 127, pp. 147-156, Jul. 2019, doi: 10.1016/j.ijmedinf.2019.04.025.
[26] P. Lehoux, F. A. Miller, and B. Williams-Jones, ‘Anticipatory governance and moral imagination:
Methodological insights from a scenario-based public deliberation study’, Technological Forecasting
and Social Change, vol. 151, p. 119800, Feb. 2020, doi: 10.1016/j.techfore.2019.119800.
[27] U. Felt, S. Schumann, C. G. Schwarz, and M. Strassnig, ‘Technology of imagination: a card-based
public engagement method for debating emerging technologies’, Qualitative Research, vol. 14, no. 2, pp. 233-251, Apr. 2014, doi: 10.1177/1468794112468468.
[28] B. S. Zaunbrecher, J. Kluge, and M. Ziefle, ‘Exploring Mental Models of Geothermal Energy among
Laypeople in Germany as Hidden Drivers for Acceptance’, J. sustain. dev. energy water environ.
syst., vol. 6, no. 3, pp. 446-463, Sep. 2018, doi: 10.13044/j.sdewes.d5.0192.
[29] N. Martinez-Martin et al., ‘Ethical issues in using ambient intelligence in health-care settings’, The
Lancet Digital Health, vol. 3, no. 2, pp. e115-e123, Feb. 2021, doi: 10.1016/S2589-7500(20)30275-2.
[30] H. T. Vu and J. Lim, ‘Effects of country and individual factors on public acceptance of artificial
intelligence and robotics technologies: a multilevel SEM analysis of 28-country survey data’,
Behaviour & Information Technology, vol. 41, no. 7, pp. 1515-1528, May 2022, doi:
10.1080/0144929X.2021.1884288.
[31] G. L. Liehner, P. Brauner, A. K. Schaar, and M. Ziefle, ‘Delegation of Moral Tasks to Automated
Agents—The Impact of Risk and Context on Trusting a Machine to Perform a Task’, IEEE Trans.
Technol. Soc., vol. 3, no. 1, pp. 46-57, Mar. 2022, doi: 10.1109/TTS.2021.3118355.
[32] N. Xu and K.-J. Wang, ‘Adopting robot lawyer? The extending artificial intelligence robot lawyer
technology acceptance model for legal industry by an exploratory study’, Journal of Management &
Organization, vol. 27, no. 5, pp. 867-885, Sep. 2021, doi: 10.1017/jmo.2018.81.
[33] H. Kempt and S. K. Nagel, ‘Responsibility, second opinions and peer-disagreement: ethical and
epistemological challenges of using AI in clinical diagnostic contexts’, J Med Ethics, vol. 48, no. 4,
pp. 222-229, Apr. 2022, doi: 10.1136/medethics-2021-107440.
[34] S. Talley, ‘Public Acceptance of AI Technology in Self-Flying Aircraft’, JAAER, 2020, doi:
10.15394/jaaer.2020.1822.
[35] P. Climent-Pérez and F. Florez-Revuelta, ‘Protection of visual privacy in videos acquired with RGB
cameras for active and assisted living applications’, Multimed Tools Appl, vol. 80, no. 15, pp. 23649-23664, Jun. 2021, doi: 10.1007/s11042-020-10249-1.
[36] A. Vellido, ‘Societal Issues Concerning the Application of Artificial Intelligence in Medicine’,
Kidney Dis, vol. 5, no. 1, pp. 11-17, 2019, doi: 10.1159/000492428.
[37] M. D. McCradden, T. Sarker, and P. A. Paprica, ‘Conditionally positive: a qualitative study of public
perceptions about using health data for artificial intelligence research’, BMJ Open, vol. 10, no. 10, p.
e039798, Oct. 2020, doi: 10.1136/bmjopen-2020-039798.
[38] J. P. Richardson et al., ‘Patient apprehensions about the use of artificial intelligence in healthcare’,
npj Digit. Med., vol. 4, no. 1, p. 140, Dec. 2021, doi: 10.1038/s41746-021-00509-1.
[39] Y. Chen, C. Stavropoulou, R. Narasinkan, A. Baker, and H. Scarbrough, ‘Professionals’ responses to
the introduction of AI innovations in radiology and their implications for future adoption: a
qualitative study’, BMC Health Serv Res, vol. 21, no. 1, p. 813, Dec. 2021, doi: 10.1186/s12913-021-
06861-y.
[40] J. Clune, ‘AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial
intelligence’. arXiv, Jan. 31, 2020. Accessed: Jun. 17, 2022. [Online]. Available:
http://arxiv.org/abs/1905.10985
[41] R. Fjelland, ‘Why general artificial intelligence will not be realized’, Humanit Soc Sci Commun, vol.
7, no. 1, p. 10, Dec. 2020, doi: 10.1057/s41599-020-0494-4.
[42] D. Silver, S. Singh, D. Precup, and R. S. Sutton, ‘Reward is enough’, Artificial Intelligence, vol. 299,
p. 103535, Oct. 2021, doi: 10.1016/j.artint.2021.103535.
[43] K. Kieslich, B. Keller, and C. Starke, ‘AI-Ethics by Design. Evaluating Public Perception on the
Importance of Ethical Design Principles of AI’, Big Data & Society, vol. 9, no. 1, p.
205395172210929, Jan. 2022, doi: 10.1177/20539517221092956.
[44] F. J. Swetz, ‘Mathematical Treasure: Ada Lovelace’s Notes on the Analytic Engine’, 2019.
https://www.maa.org/press/periodicals/convergence/mathematical-treasure-ada-lovelaces-notes-on-
the-analytic-engine
[45] W. E. Bijker, Why and How Technology Matters. Oxford University Press, 2006. doi:
10.1093/oxfordhb/9780199270439.003.0037.
[46] R. Forrester, ‘The Invention of the Steam Engine’, SocArXiv, preprint, Oct. 2019. doi:
10.31235/osf.io/fvs74.
[47] C. A. Lin, ‘Exploring personal computer adoption dynamics’, Journal of Broadcasting & Electronic
Media, vol. 42, no. 1, pp. 95-112, Jan. 1998, doi: 10.1080/08838159809364436.
[48] S. Serpa and C. Ferreira, ‘Society 5.0 and Social Development’, SOCIAL SCIENCES, preprint, Nov.
2018. doi: 10.20944/preprints201811.0108.v1.
[49] J. Weizenbaum, ‘On the Impact of the Computer on Society: How does one insult a machine?’,
Science, vol. 176, no. 4035, pp. 609-614, May 1972, doi: 10.1126/science.176.4035.609.
[50] S. Makridakis, ‘The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms’, Futures, vol. 90, pp. 46-60, Jun. 2017, doi: 10.1016/j.futures.2017.03.006.
[51] S. Otten and M. Ziefle, ‘Exploring Trust Perceptions in the Medical Context: A Qualitative Approach to Outlining Determinants of Trust in AAL Technology’, in Proceedings of the 8th International Conference on Information and Communication Technologies for Ageing Well and e-Health, Online Streaming, Germany, 2022, pp. 244-253. doi: 10.5220/0011058300003188.
[52] C. Maidhof, M. Ziefle, and J. Offermann, ‘Exploring Privacy: Mental Models of Potential Users of AAL Technology’, in Proceedings of the 8th International Conference on Information and Communication Technologies for Ageing Well and e-Health, Online Streaming, Germany, 2022, pp. 93-104. doi: 10.5220/0011046200003188.
[53] MAXQDA, VERBI Software, 2018. Accessed: Feb. 1, 2022. [Online]. Available: https://www.maxqda.com/blogpost/how-to-cite-maxqda
[54] AI Now Institute. https://ainowinstitute.org (accessed May 4, 2022).
[55] M. Bucchi and B. Trench, Eds., Routledge handbook of public communication of science and
technology, Second edition. London ; New York: Routledge, Taylor & Francis Group, 2014.
AUTHORS
Alexander Hick received his BSc in Psychology from Tilburg University and holds a
MSc in Philosophy from the University of Edinburgh. He is currently working at the
RWTH Aachen as an Early-Stage Researcher (ESR) within the Marie Skłodowska-Curie
Actions (MSCA) Innovative Training Network‘s (ITN) VisuAAL project. VisuAAL is a
European Marie Skłodowska-Curie graduate programme in which the Chair of
Communication Studies at RWTH Aachen University, together with partners from
Stockholm University, Trinity College Dublin, TU Vienna, and the Universidad de Alicante are
collaborating. His focus is on the acceptance of artificial intelligence in the health care sector and the
perception of AI-based technology by different stakeholder groups.
Martina Ziefle is Professor of Communication Science and head of the eHealth
research group at RWTH Aachen University. She studied psychology at the
universities of Göttingen and Würzburg, completed her studies with distinction at the
University of Würzburg, and received her doctorate summa cum laude from the
University of Fribourg, Switzerland. Her research focuses on the interface between
humans and technology, considering different usage contexts and user requirements.
Especially in the sensitive area of eHealth technologies, user diversity and technology
acceptance play a decisive role. Beyond her teaching and research, Martina Ziefle is
involved in a number of third-party funded projects with public and industrial
sponsors that deal with communication and interaction between humans and
technology, technology acceptance, and user diversity.