International Journal on Cybernetics & Informatics (IJCI) Vol. 11, No.4, August 2022
David C. Wyld et al. (Eds): AIAD, EDU, COMSCI, IOTCB - 2022
pp. 01-17, 2022. IJCI - 2022 DOI:10.5121/ijci.2022.110401
A QUALITATIVE APPROACH TO THE
PUBLIC PERCEPTION OF AI
Alexander Hick and Martina Ziefle
Chair of Communication Science, RWTH Aachen University, Aachen, Germany
ABSTRACT
Since the Dartmouth workshop on Artificial Intelligence coined the term, AI has been a topic of ever-growing scientific and public interest. Understanding its impact on society is essential to avoid potential pitfalls in its applications. This study employed a qualitative approach to focus on the public's knowledge of, and expectations for, AI. We interviewed 25 participants in interviews lasting between 30 and 60 minutes over a period of two months. In these interviews we investigated what people generally know about AI, what advantages and disadvantages they expect, and how much contact they have had with AI or AI-based technology. Two main themes emerged: (1) a dystopian view of AI (e.g., "the Terminator") and (2) an exaggerated or utopian attitude about the possibilities and abilities of AI. In conclusion, there needs to be accurate information, presentation, and education on AI and its potential impact in order to align public expectations with the technology's actual capabilities.
KEYWORDS
Artificial Intelligence, machine learning, public perception, qualitative study, technology
1. INTRODUCTION
Between the years 1951 and 2022, PubMed records around 150,000 publications relating to Artificial Intelligence (AI). Scientific interest has grown considerably ever since the Dartmouth workshop on Artificial Intelligence in 1956. This interest, however, is not confined to the scientific community. Public engagement with AI has also grown over the past 20 years, especially since 2009 [1]. A Google search for AI in the year 2000 would have returned around 37,000 results; today, the same search returns about 3 billion. This underlines the exponential growth of available sources from which to extract information about the topic. Alongside this digital and scientific forum, there also remains a noteworthy social and cultural representation of AI. According to Wikipedia (2022), between the year 1927 (the movie Metropolis) and today (the movie Je suis Auto, 2022), around 150 movies were produced that feature AI as a technology, theme, mood, or subject of discussion.
In 2021, the European Commission revised the Coordinated Plan on Artificial Intelligence which
is a set of goals and recommendations for the development and uptake of AI in the European
Union. One of its key policy objectives is to implement AI as a societal good, that is, for people.
The general public is one of the main stakeholders in this discussion and its perception influences
the integration of AI in society and everyday life [2]. To estimate the impact AI has and could
have on society we should understand what it is, what it does, and how and where it is
implemented. AI is a term coined by a group of computer scientists at the Dartmouth workshop on Artificial Intelligence [3] and refers to the ability of a computer to perform actions commonly associated with human intelligence [4]. Since then, this definition has undergone various adjustments and now amounts to a composite that describes AI as the field of science in which we develop technologies that perform certain cognitive tasks in an intelligent manner [5]. Some of
these tasks include image, face, and object recognition, speech translation, movie synchronisation, or transportation [6], [7], [8]. AI is used as a tool to quickly and efficiently analyse large amounts of data by implementing pattern recognition algorithms that are applicable to the task at hand. These tools may take the form of software (e.g., algorithms, deep learning, neural networks) or hardware (e.g., robots, machines, cars). An essential prerequisite for AI is large amounts of data on which algorithms can be trained in various desired (or undesired) ways [9], [10]. AI is thus a tool which automates part of the cognitive labour that would otherwise be carried out by humans [11]-[16]. The technologies that employ AI are the focus of this study.
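To make the notion of training a pattern-recognition algorithm on data concrete, the following is a minimal sketch, assuming Python with the scikit-learn library is available; it illustrates the general idea only and is not the method of any specific system discussed in this paper.

```python
# Minimal sketch of "training" a pattern-recognition model on data,
# assuming Python with scikit-learn installed; for illustration only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 pixel images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)  # a simple statistical learner
model.fit(X_train, y_train)                # "training" = fitting weights to data

print("held-out accuracy:", model.score(X_test, y_test))
```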
But what makes people accept AI-based technology? Many papers, studies, and books have been written on the topic of technology acceptance and the connected mediating influence of the perception of the technology [20], [21], [22]. However, the invention of something does not necessarily lead to its acceptance [22]. On the one hand, people have to know that something exists and, on this basis, have the chance to make informed decisions about accepting it. On the other hand, and often in spite of their knowledge about the technology, people still might refuse to accept a technology for personal or affective reasons, even when the technology would benefit them [23], [24]. Additionally, the technology also has to be useful, easily accessible, and offer an improvement over an otherwise less efficient way of working [20], [24], [25]. Nonetheless, rejection, or more precisely, not accepting a technology might still ensue [26], [27]. Technology acceptance is, thus, a multi-factorial process. During this process a person, the (potential) user, evaluates whether they approve of the advertised technology based on several systemic, individual, and context-related influences. Understanding which aspects and factors influence this deliberation process is key to producing an acceptable, and for the developer profitable, product [21]-[27].
The next question is: Who accepts the technology? Different people of interest, i.e., different stakeholders, such as health care workers, politicians, doctors, lawyers, and engineers, to name a few. However, should the general public even be included in the acceptance process? In a recent study on public deliberation about the topic of AI, Lehoux, Miller and Williams-Jones [26] used a scenario-based approach to bring ethical challenges regarding AI technology to the public's attention. The aim of the study was to gain insights into the public's imagination and to validate the value of citizens' participation in the field of responsible research and innovation. This is a good example of public engagement for research purposes, which is an essential part of technological integration within society [29]. Therefore, the public should be considered an important constituent of the development and implementation process and of the adoption of innovative ideas in general, and new technologies in particular.
According to AI Watch, a website by the European Commission monitoring AI developments and progress, there are over 1,000 AI firms and more than 400 patents in AI technologies (i.e., software and hardware). These figures represent the current situation in Germany alone. Furthermore, the EC acknowledges that the general public might not be able to 'fully understand the workings and effects of AI systems' (EC: AI ethics guidelines, 2021, p. 23), which is a reasonable assumption given the myriad of different AI applications, algorithms, and technical jargon. However, this is no reason not to engage the public in a manner that is accessible to non-experts in the field. It is evident that AI is already in widespread use in society and everyday life, with a broad range of individual differences, user diversity aspects, privacy, and trust issues, all of which need to be considered in this line of research [30], [27], [31]. This widespread availability and continuing development create a need for a thorough understanding of the stakeholders' perspectives regarding those technologies.
These developments in the scientific, political, and societal spheres favour further investigation of the public perception of AI [32]. The multi-level growth of both societal and scientific interest in AI opens various new perspectives on, and opportunities for, epistemological, ethical [33], judicial [34], and technological discussion [35], [36], [9], [37]. While the public's exposure to the concept of AI, and thus its perception, has shifted increasingly into the empirical focus [38], [39], [40], [41], there remains a need for further and, due to the rapid ongoing development of the technology, updated insights into the public's perception [28]. This study aims to provide additional qualitative insights into the public's knowledge of, contact with, and expectations for AI.
1.1. Questions and Logic of Research
To achieve an understanding of public perceptions and mental models of AI technology, and to learn about potential knowledge gaps, misconceptions regarding expectations, and acceptance-relevant barriers, we carried out a qualitative interview study in which laypeople were asked about their knowledge of AI, their expectations in different application contexts, their individual wants and needs, and the potential barriers they see when using AI technologies.

The qualitative approach was chosen to better understand the individual reasons people might give and their explanations for (not) using or accepting AI. The findings might help researchers identify sensitive issues and shape public communication strategies related to AI technology. The results can be used to develop educational materials which might further support the public's understanding of the AI technologies they will encounter in everyday life. Likewise, technical designers and computer scientists developing AI technologies might also benefit from such early cognitive concepts, as they give them a sense of where laypeople have difficulties with adopting, understanding, or using AI-based technologies.
2. METHOD
2.1. Participants
Participants were recruited from the social network of the researcher. The final sample included N = 25 participants. Information on the demographic variables can be found in Table 1. No prior knowledge about AI was necessary for the interviews, but three participants (two males, 25 and 63 years; one female, 59 years) had previous working experience with AI technology or had worked in the technology industry. The remaining sample had no prior professional experience with AI or AI-based technologies. All participants were notified prior to the interview about the careful and anonymised processing of their personal data and about the voluntary nature of the interview, after which they all gave their informed consent to participate in this study.
Table 1. Sample statistics

Variable          N        Percentage %
Gender            25       100
  Females         14       56
  Males           11       44

Age               Mean = 43.72   SD = 21.65   Range = 21-82
2.2. Procedure
To assess personal-level knowledge and understand subjective attitudes about AI, we chose a qualitative approach with open-ended questions. These questions were asked in semi-structured interviews conducted over a period of one month, after careful analysis of the existing literature on the topic. The interview guideline was developed from November 2021 to January 2022. First, a literature review was conducted to find relevant areas of use for AI. Four areas were found to be the most relevant for the current research purposes: (1) Finance, (2) Mobility, (3) Communication, and (4) Healthcare [42], [8]. On the basis of these areas and the corresponding literature, we developed the open-ended questions and metaphors for the interview guideline. The interviews were designed to be as explorative as possible to avoid influencing the participants. This approach is useful when examining the particular reasons people give and the attitudes they hold about a topic. The explorative approach was adapted from existing studies [51], [52], to evaluate mental models, i.e., the participants' subjective views, of AI. There were four segments in the interview guideline. All segments were intended to elicit the most spontaneous response from the participants, without too much deliberation or concentration on technical details. The first segment of the guideline concerned general AI knowledge that each participant might have and areas of use that would come to mind. The second segment asked participants to indicate what reasons there were to use AI (i.e., advantages) or not to use it (i.e., disadvantages). The third segment asked about existing contact (aware or unaware) with AI or AI-based technologies. The last segment included a list of metaphors from which participants were asked to choose the option that would best describe what AI is for them personally.
The interview started with a general introduction to the topic, the names and fields of interest of the researchers, and the overall aim of the project. To avoid bias, no specific information was given about AI or technologies based on it. During the interview, participants were asked first to explain what AI is and, second, what advantages and disadvantages they considered it to have. Then, participants were asked to describe AI in their own words, and finally they were encouraged to name a metaphor which would best describe AI for them. These metaphors could be chosen from a list in their handouts or from the participants' own imagination. To help participants answer, additional sub-questions were introduced if needed. These questions were further divided into the private sphere (home & work) and the public sphere (public places or transportation).
2.3. Data Analysis
The interviews were carried out in February 2022 via Zoom, Skype and, in some cases, in person. The interviews lasted between 30 minutes and 1 hour. Four participants (>80 years) were interviewed together; the remaining 21 participants were interviewed individually. All participants were handed a copy of the interview guideline in which they could follow each question. The interviews were audiotaped and transcribed in March 2022. The transcripts were evaluated by the researchers and categorized based on recurring themes across all interviews and on pre-existing themes found in the literature. The categorization was performed in MAXQDA (2018) [53]. The study was carried out in Germany and in German. Selected quotes from the interviews were translated into English for this publication.
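As a rough illustration of this categorization step, the following sketch is a hypothetical simplification in Python; the actual coding was performed in MAXQDA, and the theme codes and participant IDs below are invented for the example. It tallies how often each theme appears across coded transcripts.

```python
# Hypothetical sketch of tallying coded themes across transcripts;
# the actual analysis was performed in MAXQDA, and this data is invented.
from collections import Counter

# Each entry maps a participant ID to the theme codes assigned to their transcript.
coded_transcripts = {
    "F59": ["no_prior_knowledge", "privacy_concern"],
    "M25": ["prior_knowledge", "data_privacy_concern", "surveillance_concern"],
    "F42": ["loss_of_control", "media_influence"],
}

theme_counts = Counter(
    code for codes in coded_transcripts.values() for code in codes)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```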
3. RESULTS
The results of this research will be presented in the order of the original interview guideline. The
first interview question was as follows:
3.1. 'What do you already know about AI?'

This question was answered very similarly among the participants. It was intended to invoke the first knowledge and thoughts about the topic that came to mind. Eight people had some, but not a lot of, prior knowledge about AI:

"Not much, just that it is technology that becomes intelligent through humans, not by itself." (F23)

"Basically nothing, I have never concerned myself with it, only some stuff you pick up from the media." (F59)

Others said they had no prior knowledge, "I have no idea at all" (M50), or no real prior knowledge, "Well, honestly, I have no real idea what it is" (M58). Some indicated they had never thought about it: "[...] I never thought about it" (F61).

In contrast to this, 11 people had an idea, sometimes a very accurate one, about what AI is. There were, again, very similar answers to the first question:

"Well, AI is Artificial Intelligence. I would define it as machine learning, that is, based on some particular dataset or data, the AI can create knowledge and from which it can draw conclusions." (M25)

"So, I think that AI means an algorithm that can learn, and which is implemented in a machine. [...] and AI can speed up processes and make them easier in order to save money, time [...] something where machines get human abilities to make human life easier." (F58)
3.2. 'Where is AI implemented?'

Here, too, similar answers were common among participants. All participants but one (F74) indicated that AI was built into their smartphones for face recognition, navigation, or speech recognition (Siri). Some participants also mentioned smart-home technologies like Amazon's Alexa or the Philips Hue system. The next most frequent answer was autonomous driving or automated assistance systems in cars (e.g., automatic braking, lane assistants, blind-spot assistants). Other areas of implementation included, in order of frequency, medical technologies, industry, video games, and science (i.e., research). Exemplary results and areas of use are summarised in Table 2.
Table 2. Summary of 'Where AI is implemented'.

Technology                                        Area of Use
Autonomous driving                                Mobility
Traffic lights                                    Transportation
Approximation                                     Engineering
Software to detect group dynamics in stadiums     Public entertainment
3.3. 'What is the difference between an AI-based technology and non-AI technology?'

The most common answer was that AI is something that learns. One participant (M25, with prior knowledge about AI) explained that:

"I would say that the [AI] system learns, that is, it analyses data. By means of this data analysis and the acquisition of new data, it learns and adapts to the individual needs and the purpose for which we developed the system." (M25)

Another view was that "it", meaning AI as an individual in its own right, develops into something "for itself":

"Well, I think about AI as this individual that develops, and other technology [non-AI] does not, because it only acts on entered programming code." (F47)

Two participants (F80 & M82) discussed the issue of AI as something functional, as opposed to something emotional or social. For them, AI could never "know" emotions or "represent them" (F80) in the way humans do. Overall, the participants mentioned that AI could not, or should not, be seen as something empathic, emotional, or as a "social agent":

"[...] it will be able to do everything, but emotions, it should not be able to do." (F80)

"[...] yes, I think recognizing emotions, that is a step too far." (M82)

"My emotions are none of its business! [...] my emotions belong to other humans." (F74)

Another participant also shared this sentiment, indicating that emotions are something reserved for human beings:

"Recognise emotions...under no circumstances. They are reserved for humans." (M24a)

Another participant agreed with the classification and said that:

"Exactly, AI should not be seen as something social!" (M58)

One participant elaborated on this classification, saying that, even if it (AI) could recognise emotions, she would not want it to act on them:

"(When the AI recognises sadness) [...] oh she is sad, let us play a happy song...I am quite old-school in that respect...then I'd rather stay sad, nothing should turn this around (laughs)." (F24a) (brackets added)

This last answer was shared by another female participant with regard to her individual description of AI:

"Well, I see AI as a Servant or Advisor because the human aspect...well, this will never be the case for me. [...] friend? Well, how?! I do not share (my emotions) with my computer." (F59)
3.4. 'What are reasons to use AI?'

Following the question of the difference between AI and non-AI, participants were then asked to give reasons to use AI. The most common answer was the usefulness of AI, because it would make processes or work easier. The answers are summarised in Table 3 below.

Table 3. Reasons to use AI and advantages.

Reason to use AI                       Advantages
Language translation                   Translate any language
Cleaning robot                         Convenience
Make life easier (cleaning robots)     Convenience
Efficiency                             Saves time

The following quotes were extracted from the transcripts to illustrate some of the reasons participants gave for using AI:

"In the own home it would make sense to use it to make life extremely easier, like these robots that clean or care for patients at some point." (F50)

"It probably would make things easier when it takes over easy tasks. The cleaning robot would be an example for this." (M24)

"Because it is faster, it can detect faster. But also, that the progress is faster. Maybe it is, on the other hand, not too good because one loses touch with rationality. But it is important that it moves along (AI), and that it is being used." (F80)
3.5. 'What are reasons not to use AI?'

Participants were asked to indicate reasons not to use AI. Their answers are divided into subsections. These subsections represent categories created in the MAXQDA [53] data analysis and are based on the answers given by the participants. Most of the time, participants showed some level of concern regarding AI. These concerns were divided into the following subcategories.

3.5.1. Data Privacy Concern

Data privacy was distinct from personal privacy. In this sample, the participants were more concerned about their personal data (i.e., name, social security number, bank statements) than about their own privacy (i.e., the right to be left alone or "picking my nose, without being watched" (F59)). All participants were concerned that their personal data could be used to their disadvantage (e.g., higher insurance payments).

"[...] things that concern me individually, I reckon there are certain aspects where the AI is being misused, the personal data." (M25)
3.5.2. Privacy Concern
Privacy was defined by one participant as the individual’s sphere in which one has no need to
change one’s behaviour in either a negative or positive way. This participant associated this type
of privacy with a decreased cognitive load due to the freedom of not being watched. She
underlined the importance of being able to be left alone:
"That you can be just the way you are. [...] even when one's own behaviour has no negative consequences, but just that...being able to be burden-free." (F25)

Another participant had a concern regarding the private sphere and dignity of other, particularly older, people:

"[...] or when older people are monitored in a room [...] I believe there are personal contexts in which such personal privacy becomes endangered." (M25)
3.5.3. Loss of Control

In addition to these concerns about both data privacy and personal privacy, some participants worried about losing control over an autonomous system. There were differences, however, in what this "loss of control" included:

"What it (AI) should not be allowed to be is more intelligent than the human that has developed it and if one loses control over it." (M24)

"It should definitely not be able to program itself! Power of the machines and whatnot...If you have watched Terminator, you surely wouldn't want that. If it is intelligent and develops a personality with own interests like: 'I do not like asparagus', for example...this would be a problem." (F42)

"[...] yes, both, loss of control and privacy (as areas of concern)." (F59)

"[...] it depends on how it's done...or is it like the Terminator...like this autonomous machine." (M50)

These concerns related to the presentation of AI in the media or movies. There was another facet of loss of control regarding software and algorithm development and implementation. One participant, who had worked in the technology industry, said:

"The AI system needs to be supervised...not by the system itself, of course, because then it becomes rogue [...] that is the problem with AI...that, due to money issues, the mistakes in the original codes are being transferred to the technologies we use today. And then the AI provides faulty output." (M63)
3.5.4. Surveillance Concern

In line with this reasoning, some participants had concerns about surveillance. There were two levels of concern: first, that AI might monitor people; second, that people might not sufficiently monitor AI. This distinction was not made by all participants. Two participants (M25 & M63) had a professional background in the area of AI and reported specific, industry-related concerns.

(A) AI monitoring people:

"If older people are constantly being watched in their room via a camera and are under surveillance...that, I think, is not acceptable at all!" (M25)

(B) People monitoring AI:

"For me it's just scary because I know about the source of error. If it is done the way my company used to do it, then I definitely don't want that because, if so, it cannot work!"

This participant also gave a reason for their view:

"The reason is: Money. Because it is more expensive to sufficiently monitor the system. I am convinced that no company, no government, doesn't matter who it is exactly...no one would be willing to do this in such a way that it works adequately." (M63)
3.5.5. Question of Responsibility

The final concern was about responsibility. In one case the participant was referring to the public transportation system (e.g., autonomous trams):

"I think if the tram will be able to drive by itself one day, the question of responsibility arises...in the case of an accident, that is."

However, this participant also added a statement about the state of the art of this specific technology:

"But I think if we reach the level of technology at which we are able to do this, it is definitely a good thing." (M24)

Other participants had different facets of responsibility in mind:

"From my point of view, it should be forbidden to use the (AI-based) tracking on the internet...not only as an adult, it pushes me towards buying stuff that I do not need...I also think that teenagers and children are heavily influenced by this. Their whole nature and personality." (F42)

"I think that before someone were to use an AI, it should not be possible without my consent. It should not happen that a doctor won't treat me because I have not given my consent." (F74)

"[...] it should not share this with my insurance...there, my affinity for data privacy is popping up." (F25a)
3.6. Metaphors

This part summarises participants' personal associations with the term AI and the related metaphors or 'pictures' that came to mind. The most common metaphor for AI was servant or advisor. Only one participant (M25) indicated that, for him, AI is something 'social', like a 'guardian angel'. The other participants described AI in terms of something functional, i.e., used for a specific task. For practical reasons, not every quote from the transcripts could be used in this section. However, the overall themes, and some additional quotes, will be discussed in the following section; a list of the metaphors is presented in Table 4 and their respective frequencies in Figure 1 below.

Table 4. Metaphors for AI.

Metaphors: Advisor, Servant, Patron, Guardian angel, Roommate, Helper, Friend, Ruler

Other metaphors like 'Roommate', 'Friend', or 'Someone equal to me' were categorically rejected by most participants. Only two participants said AI was 'above me' (F25) or considered AI 'a friend' (F59).

Figure 1. Frequencies of the metaphors for AI.
4. DISCUSSION
4.1. General Summary
This study used a qualitative approach to investigate people's knowledge, expectations, and perception of AI. It applied open-ended questions to assess what people generally know about, expect of, and how much contact they have had with technology that is based on AI. Examples included smartphones, laptops, tablets, cars, and medical technology such as diagnostic and prognostic tools. This was done to assess what people consider about a new technology and which aspects they tend to focus on in terms of knowledge, perception, and, subsequently, acceptance.

Participants in this sample had all heard of the term AI. However, the answers to what AI specifically is varied greatly. In general, the participants had a good grasp of the basic function of AI and AI-based technology, and their general understanding of AI was relatively good. Overall, the participants were also open to the ideas and promises of AI. The results suggest that the participants in this study were generally open to a 'new' technology such as AI, without
necessarily knowing exactly how the technology works, but rather understanding its aim, the broad way in which, e.g., a diagnostic algorithm reaches a conclusion, or the general transparency of the system itself, which answers the question of responsibility in case of a malfunction.

During the interviews, most participants were interested in the possibilities that AI holds and also curious about what it can and cannot do at the moment. However, there was an inaccurate judgment about how widespread AI really is in everyday life. Although most participants used a smartphone, not all of them considered it to be AI-based. Furthermore, some people thought that AI is clearly defined. This shows a certain discrepancy between science and society. In science, as discussed in the introduction, we do not have one clear definition of AI. Instead, a composite of different elements is created to come up with a working definition based on context. This definition, however, is by no means complete and applicable in every context. During this study, participants ascribed AI to systems that were not AI and failed to do so in cases where a technology is actually based on it. We will come back to this point in the section about the relevance of the present study.
Now we will focus on a different kind of discrepancy, between the participants themselves. We will discuss two standpoints that originated from differential exposure to and understanding of AI. Most participants answered the open-ended questions without mentioning either utopian or dystopian views. Nonetheless, some participants expressed either utopian views (i.e., exaggerated views on what AI can do) or dystopian views (i.e., unrealistic views on what AI could "become").
4.2. Dystopian Views on AI
Some answers showed either an exaggerated view, that is, ascribing to AI what it cannot, or could not, actually do, or an unrealistic view, in the sense that some participants associated AI with dystopian scenarios from movies such as the Terminator.

"It should definitely not be able to program itself! Power of the machines and whatnot...If you have watched Terminator, you surely wouldn't want that. If it is intelligent and develops a personality with own interests like: 'I do not like asparagus', for example...this would be a problem." (F42)

This dystopian view about 'what AI can do' was given in response to the item: AI can program itself. The item-based part of the interview is not part of the current study. It was not further specified what programming means nor what 'itself' would include. However, it seems that some participants had an inherently negative association when thinking about losing control over an AI system:

"What it (AI) should not be allowed to be is more intelligent than the human that has developed it and if one loses control over it." (M24)

Almost all participants associated something functional rather than social with AI. Although this might not lead to a dystopian view per se, this trend indicates that AI is considered to be something that might surpass the functional abilities of humans (e.g., diagnosing patients, data analysis, object detection). While there is currently no general AI, that is, an all-purpose system that can transfer knowledge to any domain, it is important to consider the possible concerns and implications of such a system in advance [43], [44], [45]. The people who knew what AI actually is indicated that, for them, it is something very transparent and useful. The people who had no idea about AI could not form this opinion and thus described AI as something opaque or threatening.
The part of AI commonly known as machine learning is a very complex statistical process of calculating different weights from input data to generate an output. However, the technical details were not explained by any participant, because they are not common knowledge and are also beyond the point of this study. Still, as mentioned in the introduction, this missing understanding of the workings and effects of an AI system could have motivated the European Commission to propose an organisation that would provide a technical certificate attesting an AI system's transparency, fairness, and accountability. There is a clear political recognition of the discrepancy between knowledge within the general public and an accessible source of information, both in the 'AI Now Report' from the year 2018 and in the European Commission's 'Ethics guidelines' from 2021. The implementation of AI-based devices into everyday life should be based on informed consent. To be able to give truly informed consent, it is important that the provided information is accessible to the user or user group. In the realm of AI systems and machine learning algorithms, this is currently not the case for most users [54].
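To give a rough sense of what 'calculating different weights from input data to generate an output' means, here is a deliberately simplified sketch of a single artificial 'neuron' in plain Python; the inputs and weights are invented for illustration, whereas real machine learning systems learn millions of such weights from training data.

```python
import math

# Deliberately simplified sketch of the weighted-sum computation described
# above: one artificial "neuron" with invented, fixed weights.
def neuron_output(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # sigmoid squashes output to (0, 1)

inputs = [0.5, 0.8, 0.1]    # e.g., numerical features extracted from input data
weights = [0.9, -0.4, 0.3]  # hypothetical learned weights
print(neuron_output(inputs, weights, bias=0.1))
```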
4.3. Utopian Views on AI
In contrast to the dystopian associations stands an exaggerated, utopian view of AI. One participant ascribed to AI abilities which it cannot perform. The following quote is a good example:

"[...] when I get up in the morning to make coffee and it knows what I want...that is an advantage. But on the other hand, it doesn't forget and when it is intelligent..., and has a personality, everybody does, and it does too..." (F42)

This view is not utopian per se, because it does not describe AI as something desirable; rather, it exaggerates the potential abilities that AI can have. Intelligence is not the same as personality. However, one cannot blame this participant for holding this view, given the way AI is portrayed in movies (e.g., the Terminator, Her, Ex Machina). These movies portray AI as completely autonomous and super-human. By presenting AI in such a way, people might get the wrong impression of AI's actual abilities. Furthermore, this depiction tends to underrepresent what AI can do and how it currently works in everyday life, e.g., in smartphones, factories, cars, or computer vision, to name but a few. Knowing that this, too, is AI would help people understand the potential of this technology and, in turn, accurately evaluate its true benefits (or barriers). This particular association was an exemplary result of the way movies can influence the perception of technology. The influence of popular culture, and especially of movies, on technology perception is a well-established concept [55]. Therefore, this relationship is another relevant angle for future studies.
4.4. Relevance
The relevance of this study lies, first, in its explorative summary of subjective knowledge of and attitudes towards AI and, second, in its outline of potential barriers that might occur during the development and implementation process of AI-based technologies. The qualitative nature of this study allowed us to assess the reasons for a particular attitude or subjective perspective on a certain benefit or barrier in terms of AI development, use, and implementation. On the one hand, participants' expectations were (subjectively) influenced by their own contact with AI systems. On the other hand, this view might also have been moderated by exposure to the media and popular culture's presentation of AI. In future research, the results of this study need to be quantified to validate a reliable association between the variables of perception and acceptance of AI [1], [28], [46].
According to one participant (M63), the discussion about AI and its technological implementation has seen a steep increase since the year 2000 due to developments in computing abilities. In line with this claim, Fast and Horvitz [1] found that public discussion, too, has increased since the year 2009 (coinciding with the release of the critically acclaimed movie Terminator Salvation). In their systematic review of New York Times articles on the topic of AI over a period of 30 years, Fast and Horvitz discovered a similar trend in public perception, with discussion about AI that is usually more optimistic than pessimistic. This shows that still today, after more than a decade of exposure to AI, people hold on to some of the narrow concepts about AI and thus tend to worry about losing control over it. In line with this finding are also the concerns people have about issues and questions of (moral) responsibility, data privacy, privacy, and security. These are all relevant elements because they can influence how people think about a technology and whether they accept it or not [9], [39], [46].
As addressed in the introduction, the acceptance of technology is a complex process. To avoid spending extensive amounts of time and energy, careful consideration of this process is needed from developers, policy makers, and other economic, political, or social stakeholders. This study is a first step towards understanding what people consider to be important. It summarises a snapshot of people's attitudes and reasons, which can help with the design as well as the implementation of AI-based technology, by keeping in mind that, sometimes, there is a clear concern, or fear, about a technology that can be addressed by the developers.
Another relevant point is the knowledge and curiosity that people have about technology. Although some participants did not know a lot about AI, they were still curious about its potential and its future. Often, these participants took issue with the opaqueness of AI and were uncertain whether they could trust it. By providing a sufficient amount of transparency and public education on the technology, and by co-developing AI technologies with, and for, people, many of the mentioned fears can be resolved. Therefore, this study can also be used as a roadmap to avoid these pitfalls.
Technology usually develops alongside the context and the society it is being developed for [45], [46]. Breakthroughs like the steam engine, the computer, or, nowadays, AI tend to occur in a period during which there is both a need for the new technology and the resources to realise it [47], [48], [49]. The rapid development of, and investment in, AI are facilitated by the increase in computing power, hard drive storage capacities, market interest in AI products, and large amounts of available, and often unavailable, data [17], [18], [19]. However, the use of these technologies in everyday life still depends upon accurate perceptions of, accepting attitudes towards, and a need for their implementation.
4.5. Strengths & Weaknesses
The main strength of this study was its reliance on purely explorative concepts. The open-ended questions were posed in such a way that any answer was possible and could be considered. The main interests of this study were the spontaneous and direct associations that people have with the concept of AI. The interview questions were selected from a pool of different areas, e.g., e-commerce, finance, transportation, and healthcare. Thus, a wide variety of potential associations could be covered and is presented in this paper.

In contrast to its strengths, there are some limitations to this approach. The current study only presents explorative and descriptive findings, which might not be generalizable to the wider public. Furthermore, the interview guideline was semi-structured, which resulted in some variation across the interviews. Also, the sample included more people in the age range of 20-30 and only some above 70, which skewed the distribution around the mean age of 43.72 years. Another sample-related limitation is the nature of the sampling method: the sample was conveniently drawn from the social circle of the researcher. This might have led to some biased answers with regard to interest in the topic or the willingness to participate. Future studies should use a random and more varied sample to increase the variety of, and balance, individual interests in this topic.
4.6. Summary & Future Research
In conclusion, the study has shown existing basic knowledge about AI and personal deliberation about important societal, ethical, and technological issues. Participants showed a good grasp of the concept of AI and were generally excited to learn about the topic. There is a lot of potential in interviewing the general public about AI. Future studies should focus on eliciting the reasons why people think that AI is threatening, uncontrollable, or a guardian angel. Additional quantitative studies could aim at generalising findings about the existing level of knowledge regarding AI and AI-based technologies.
ACKNOWLEDGEMENTS
The authors thank all of the participants for sharing their views, stories, and attitudes.
Furthermore, the first author A.H. would like to thank Sophia Otten, Caterina Maidhof, and Julia Offermann for their notes on an earlier version of this draft.
This work is part of the VisuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 861091 and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2023 Internet of Production - 390621612.
REFERENCES
[1] E. Fast and E. Horvitz, ‘Long-Term Trends in the Public Perception of Artificial Intelligence’. arXiv,
Dec. 02, 2016. Accessed: Jun. 17, 2022. [Online]. Available: http://arxiv.org/abs/1609.04904
[2] B. J. Copeland and D. Proudfoot, ‘Artificial intelligence’, in Philosophy of Psychology and Cognitive
Science, Elsevier, 2007, pp. 429–482. doi: 10.1016/B978-044451540-7/50032-3.
[3] The Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group et al.,
‘Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial
Intelligence in Radiology’, Can Assoc Radiol J, vol. 70, no. 2, pp. 107–118, May 2019, doi:
10.1016/j.carj.2019.03.001.
[4] K. Murphy et al., ‘Artificial intelligence for good health: a scoping review of the ethics literature’,
BMC Med Ethics, vol. 22, no. 1, p. 14, Dec. 2021, doi: 10.1186/s12910-021-00577-8.
[5] M. Blut, C. Wang, N. V. Wünderlich, and C. Brock, ‘Understanding anthropomorphism in service
provision: a meta-analysis of physical robots, chatbots, and other AI’, J. of the Acad. Mark. Sci., vol.
49, no. 4, pp. 632–658, Jul. 2021, doi: 10.1007/s11747-020-00762-y.
[6] V. Kaul, S. Enslin, and S. A. Gross, ‘History of artificial intelligence in medicine’, Gastrointestinal
Endoscopy, vol. 92, no. 4, pp. 807–812, Oct. 2020, doi: 10.1016/j.gie.2020.06.040.
[7] L. J. Catania, ‘The evolution of artificial intelligence (AI)’, in Foundations of Artificial Intelligence
in Healthcare and Bioscience, Elsevier, 2021, pp. 7–11. doi: 10.1016/B978-0-12-824477-7.00002-X.
[8] A. Jobin, M. Ienca, and E. Vayena, ‘The global landscape of AI ethics guidelines’, Nat Mach Intell,
vol. 1, no. 9, pp. 389–399, Sep. 2019, doi: 10.1038/s42256-019-0088-2.
[9] V. Claes, E. Devriendt, J. Tournoy, and K. Milisen, ‘Attitudes and perceptions of adults of 60 years
and older towards in-home monitoring of the activities of daily living with contactless sensors: An
explorative study’, International Journal of Nursing Studies, vol. 52, no. 1, pp. 134–148, Jan. 2015,
doi: 10.1016/j.ijnurstu.2014.05.010.
[10] P. Climent-Pérez, S. Spinsante, A. Mihailidis, and F. Florez-Revuelta, ‘A review on video-based
active and assisted living technologies for automated lifelogging’, Expert Systems with Applications,
vol. 139, p. 112847, Jan. 2020, doi: 10.1016/j.eswa.2019.112847.
[11] J. Füegi and J. Francis, ‘Lovelace & Babbage and the creation of the 1843 “notes”’, ACM Inroads,
vol. 6, no. 3, pp. 78–86, Aug. 2015, doi: 10.1145/2810201.
[12] A. G. Bromley, ‘Charles Babbage’s Analytical Engine, 1838’, IEEE Annals Hist. Comput., vol. 4, no.
3, pp. 196–217, Jul. 1982, doi: 10.1109/MAHC.1982.10028.
[13] J. Al-Khalili, ‘The birth of the electric machines: a commentary on Faraday (1832) “Experimental
research in electricity”’, Phil. Trans. R. Soc. A., vol. 373, no. 2039, p. 20140208, Apr. 2015, doi:
10.1098/rsta.2014.0208.
[14] W. D. Devine, ‘From Shafts to Wires: Historical Perspective on Electrification’, J. Eco. History, vol.
43, no. 2, pp. 347–372, Jun. 1983, doi: 10.1017/S0022050700029673.
[15] C. Coombs, ‘Will COVID-19 be the tipping point for the Intelligent Automation of work? A review
of the debate and implications for research’, International Journal of Information Management, vol.
55, p. 102182, Dec. 2020, doi: 10.1016/j.ijinfomgt.2020.102182.
[16] J. C. Sipior, ‘Considerations for development and use of AI in response to COVID-19’, International
Journal of Information Management, vol. 55, p. 102170, Dec. 2020, doi:
10.1016/j.ijinfomgt.2020.102170.
[17] C. Collins, D. Dennehy, K. Conboy, and P. Mikalef, ‘Artificial intelligence in information systems
research: A systematic literature review and research agenda’, International Journal of Information
Management, vol. 60, p. 102383, Oct. 2021, doi: 10.1016/j.ijinfomgt.2021.102383.
[18] S. T. M. Peek, E. J. M. Wouters, J. van Hoof, K. G. Luijkx, H. R. Boeije, and H. J. M. Vrijhoef,
‘Factors influencing acceptance of technology for aging in place: A systematic review’, International
Journal of Medical Informatics, vol. 83, no. 4, pp. 235–248, Apr. 2014, doi:
10.1016/j.ijmedinf.2014.01.004.
[19] W. Wilkowska, J. Offermann-van Heek, F. Florez-Revuelta, and M. Ziefle, ‘Video Cameras for
Lifelogging at Home: Preferred Visualization Modes, Acceptance, and Privacy Perceptions among
German and Turkish Participants’, International Journal of Human–Computer Interaction, vol. 37, no.
15, pp. 1436–1454, Sep. 2021, doi: 10.1080/10447318.2021.1888487.
[20] K. Arning and M. Ziefle, ‘“Get that Camera Out of My House!” Conjoint Measurement of
Preferences for Video-Based Healthcare Monitoring Systems in Private and Public Places’, in
Inclusive Smart Cities and e-Health, vol. 9102, A. Geissbühler, J. Demongeot, M. Mokhtari, B.
Abdulrazak, and H. Aloulou, Eds. Cham: Springer International Publishing, 2015, pp. 152–164. doi:
10.1007/978-3-319-19312-0_13.
[21] Bhattacherjee and Sanford, ‘Influence Processes for Information Technology Acceptance: An
Elaboration Likelihood Model’, MIS Quarterly, vol. 30, no. 4, p. 805, 2006, doi: 10.2307/25148755.
[22] Venkatesh, Morris, Davis, and Davis, ‘User Acceptance of Information Technology: Toward a
Unified View’, MIS Quarterly, vol. 27, no. 3, p. 425, 2003, doi: 10.2307/30036540.
[23] V. Venkatesh, ‘Adoption and use of AI tools: a research agenda grounded in UTAUT’, Ann Oper
Res, vol. 308, no. 1–2, pp. 641–652, Jan. 2022, doi: 10.1007/s10479-020-03918-9.
[24] V. Venkatesh and F. D. Davis, ‘A Theoretical Extension of the Technology Acceptance Model: Four
Longitudinal Field Studies’, Management Science, vol. 46, no. 2, pp. 186–204, Feb. 2000, doi:
10.1287/mnsc.46.2.186.11926.
[25] J. Offermann-van Heek, E.-M. Schomakers, and M. Ziefle, ‘Bare necessities? How the need for care
modulates the acceptance of ambient assisted living technologies’, International Journal of Medical
Informatics, vol. 127, pp. 147–156, Jul. 2019, doi: 10.1016/j.ijmedinf.2019.04.025.
[26] P. Lehoux, F. A. Miller, and B. Williams-Jones, ‘Anticipatory governance and moral imagination:
Methodological insights from a scenario-based public deliberation study’, Technological Forecasting
and Social Change, vol. 151, p. 119800, Feb. 2020, doi: 10.1016/j.techfore.2019.119800.
[27] U. Felt, S. Schumann, C. G. Schwarz, and M. Strassnig, ‘Technology of imagination: a card-based
public engagement method for debating emerging technologies’, Qualitative Research, vol. 14, no. 2,
pp. 233–251, Apr. 2014, doi: 10.1177/1468794112468468.
[28] B. S. Zaunbrecher, J. Kluge, and M. Ziefle, ‘Exploring Mental Models of Geothermal Energy among
Laypeople in Germany as Hidden Drivers for Acceptance’, J. sustain. dev. energy water environ.
syst., vol. 6, no. 3, pp. 446–463, Sep. 2018, doi: 10.13044/j.sdewes.d5.0192.
[29] N. Martinez-Martin et al., ‘Ethical issues in using ambient intelligence in health-care settings’, The
Lancet Digital Health, vol. 3, no. 2, pp. e115–e123, Feb. 2021, doi: 10.1016/S2589-7500(20)30275-2.
[30] H. T. Vu and J. Lim, ‘Effects of country and individual factors on public acceptance of artificial
intelligence and robotics technologies: a multilevel SEM analysis of 28-country survey data’,
Behaviour & Information Technology, vol. 41, no. 7, pp. 1515–1528, May 2022, doi:
10.1080/0144929X.2021.1884288.
[31] G. L. Liehner, P. Brauner, A. K. Schaar, and M. Ziefle, ‘Delegation of Moral Tasks to Automated
Agents—The Impact of Risk and Context on Trusting a Machine to Perform a Task’, IEEE Trans.
Technol. Soc., vol. 3, no. 1, pp. 46–57, Mar. 2022, doi: 10.1109/TTS.2021.3118355.
[32] N. Xu and K.-J. Wang, ‘Adopting robot lawyer? The extending artificial intelligence robot lawyer
technology acceptance model for legal industry by an exploratory study’, Journal of Management &
Organization, vol. 27, no. 5, pp. 867–885, Sep. 2021, doi: 10.1017/jmo.2018.81.
[33] H. Kempt and S. K. Nagel, ‘Responsibility, second opinions and peer-disagreement: ethical and
epistemological challenges of using AI in clinical diagnostic contexts’, J Med Ethics, vol. 48, no. 4,
pp. 222–229, Apr. 2022, doi: 10.1136/medethics-2021-107440.
[34] S. Talley, ‘Public Acceptance of AI Technology in Self-Flying Aircraft’, JAAER, 2020, doi:
10.15394/jaaer.2020.1822.
[35] P. Climent-Pérez and F. Florez-Revuelta, ‘Protection of visual privacy in videos acquired with RGB
cameras for active and assisted living applications’, Multimed Tools Appl, vol. 80, no. 15, pp.
23649–23664, Jun. 2021, doi: 10.1007/s11042-020-10249-1.
[36] A. Vellido, ‘Societal Issues Concerning the Application of Artificial Intelligence in Medicine’,
Kidney Dis, vol. 5, no. 1, pp. 11–17, 2019, doi: 10.1159/000492428.
[37] M. D. McCradden, T. Sarker, and P. A. Paprica, ‘Conditionally positive: a qualitative study of public
perceptions about using health data for artificial intelligence research’, BMJ Open, vol. 10, no. 10, p.
e039798, Oct. 2020, doi: 10.1136/bmjopen-2020-039798.
[38] J. P. Richardson et al., ‘Patient apprehensions about the use of artificial intelligence in healthcare’,
npj Digit. Med., vol. 4, no. 1, p. 140, Dec. 2021, doi: 10.1038/s41746-021-00509-1.
[39] Y. Chen, C. Stavropoulou, R. Narasinkan, A. Baker, and H. Scarbrough, ‘Professionals’ responses to
the introduction of AI innovations in radiology and their implications for future adoption: a
qualitative study’, BMC Health Serv Res, vol. 21, no. 1, p. 813, Dec. 2021, doi: 10.1186/s12913-021-
06861-y.
[40] J. Clune, ‘AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial
intelligence’. arXiv, Jan. 31, 2020. Accessed: Jun. 17, 2022. [Online]. Available:
http://arxiv.org/abs/1905.10985
[41] R. Fjelland, ‘Why general artificial intelligence will not be realized’, Humanit Soc Sci Commun, vol.
7, no. 1, p. 10, Dec. 2020, doi: 10.1057/s41599-020-0494-4.
[42] D. Silver, S. Singh, D. Precup, and R. S. Sutton, ‘Reward is enough’, Artificial Intelligence, vol. 299,
p. 103535, Oct. 2021, doi: 10.1016/j.artint.2021.103535.
[43] K. Kieslich, B. Keller, and C. Starke, ‘AI-Ethics by Design. Evaluating Public Perception on the
Importance of Ethical Design Principles of AI’, Big Data & Society, vol. 9, no. 1, p.
205395172210929, Jan. 2022, doi: 10.1177/20539517221092956.
[44] F. J. Swetz, ‘Mathematical Treasure: Ada Lovelace’s Notes on the Analytic Engine’, 2019.
https://www.maa.org/press/periodicals/convergence/mathematical-treasure-ada-lovelaces-notes-on-
the-analytic-engine
[45] W. E. Bijker, Why and How Technology Matters. Oxford University Press, 2006. doi:
10.1093/oxfordhb/9780199270439.003.0037.
[46] R. Forrester, ‘The Invention of the Steam Engine’, SocArXiv, preprint, Oct. 2019. doi:
10.31235/osf.io/fvs74.
[47] C. A. Lin, ‘Exploring personal computer adoption dynamics’, Journal of Broadcasting & Electronic
Media, vol. 42, no. 1, pp. 95–112, Jan. 1998, doi: 10.1080/08838159809364436.
[48] S. Serpa and C. Ferreira, ‘Society 5.0 and Social Development’, SOCIAL SCIENCES, preprint, Nov.
2018. doi: 10.20944/preprints201811.0108.v1.
[49] J. Weizenbaum, ‘On the Impact of the Computer on Society: How does one insult a machine?’,
Science, vol. 176, no. 4035, pp. 609–614, May 1972, doi: 10.1126/science.176.4035.609.
[50] S. Makridakis, ‘The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and
firms’, Futures, vol. 90, pp. 46–60, Jun. 2017, doi: 10.1016/j.futures.2017.03.006.
[51] S. Otten and M. Ziefle, ‘Exploring Trust Perceptions in the Medical Context: A Qualitative Approach
to Outlining Determinants of Trust in AAL Technology’:, in Proceedings of the 8th International
Conference on Information and Communication Technologies for Ageing Well and e-Health, Online
Streaming, Germany 2022, pp. 244–253. doi: 10.5220/0011058300003188.
[52] C. Maidhof, M. Ziefle, and J. Offermann, ‘Exploring Privacy: Mental Models of Potential Users of
AAL Technology’:, in Proceedings of the 8th International Conference on Information and
Communication Technologies for Ageing Well and e-Health, Online Streaming, Germany, 2022, pp.
93–104. doi: 10.5220/0011046200003188.
[53] MAXQDA. VERBI. (2018). Accessed: Feb. 1, 2022. Available:
https://www.maxqda.com/blogpost/how-to-cite-maxqda
[54] “AI Now” https://ainowinstitute.org (accessed: May 4, 2022).
[55] M. Bucchi and B. Trench, Eds., Routledge handbook of public communication of science and
technology, Second edition. London ; New York: Routledge, Taylor & Francis Group, 2014.
AUTHORS
Alexander Hick received his BSc in Psychology from Tilburg University and holds an MSc in Philosophy from the University of Edinburgh. He is currently working at RWTH Aachen University as an Early-Stage Researcher (ESR) within the Marie Skłodowska-Curie Actions (MSCA) Innovative Training Network's (ITN) VisuAAL project. VisuAAL is a European Marie Skłodowska-Curie graduate programme in which the Chair of Communication Science at RWTH Aachen University collaborates with partners from Stockholm University, Trinity College Dublin, TU Vienna, and the Universidad de Alicante. His focus is on the acceptance of artificial intelligence in the health care sector and the perception of AI-based technology by different stakeholder groups.
Martina Ziefle is Professor of Communication Science and head of the eHealth
research group at RWTH Aachen University. She studied psychology at the universities
of Göttingen and Würzburg. She completed her studies with distinction at the University
of Würzburg. At the University of Fribourg, Switzerland, Martina Ziefle received her
doctorate with summa cum laude. Her research focuses on the interface between humans
and technology, considering different usage contexts and user requirements. Especially
in the sensitive area of eHealth technologies, user diversity and technology acceptance play a decisive role.
Beyond her teaching and research tasks, Martina Ziefle is involved in a number of third-party funded projects with public and industrial sponsors, which deal with the topics of communication and interaction between humans and technology, technology acceptance, and user diversity.