Primary school students' perceptions of artificial intelligence – for good or bad

Susanne Walan 1

International Journal of Technology and Design Education (2025) 35:25–40
https://doi.org/10.1007/s10798-024-09898-2

Accepted: 18 April 2024 / Published online: 3 May 2024
© The Author(s) 2024

Susanne Walan
susanne.walan@kau.se
1 Department of Environmental and Life Sciences, Karlstad University, Karlstad 651 88, Sweden
Abstract
Since the end of 2022, global discussions on Artificial Intelligence (AI) have surged, influencing diverse societal groups, such as teachers, students and policymakers. This case study focuses on Swedish primary school students aged 11–12. The aim is to examine their cognitive and affective perceptions of AI and their current usage. Data, comprising a pre-test, focus group interviews, and post-lesson evaluation reports, were analysed using a fusion of Mitcham's philosophical framework of technology with a behavioural component, and the four basic pillars of AI literacy. Results revealed students' cognitive perceptions encompassing AI as both a machine and a concept, with or without human attributes. Affective perceptions were mixed, with students expressing positive views on AI's support in studies and practical tasks, alongside concerns about rapid development, job loss, privacy invasion, and potential harm. Regarding AI usage, students initially explored various AI tools, emphasising the need for regulations to slow down and contemplate consequences. This study provides insights into primary school students' perceptions and use of AI, serving as a foundation for further exploration of AI literacy in education contexts and as input for policymakers, who should listen to children's voices.
Keywords Affective perceptions · Artificial intelligence · Cognitive perceptions · Primary school students · Use of AI
Introduction
During the last year, a new socioscientific issue (SSI) has become one of the most discussed in society, in all kinds of organisations, not least in education. Although artificial intelligence (AI) has existed for quite some time, the release of ChatGPT to the public in November 2022 increased awareness of AI. In a short period, ChatGPT became a tool used worldwide. According to different websites (e.g., Shewale, 2023), more than 180 million users have utilized ChatGPT, provided by OpenAI, between its release and December 2023.
In the media, at least in Sweden, there have been almost daily reports during the last year about the technical revolution of AI, with comments about its benefits as well as its expected dangers. From an international perspective, it is an important issue, and leaders in society have voiced the need for regulations on AI development. For instance, the prime minister of the UK invited leaders from all over the world to a safety summit about AI in November 2023 to discuss international regulations (gov.uk, 2023). During the summit, the first agreement was signed by representatives from companies and governments from the 28 participating countries. More recently, the European Union has decided on an AI Act to regulate the use of AI (European Council, 2023).
Many questions arise about safety and the worry of jobs being lost, but also about how AI can support us in many ways, perhaps even help us find solutions to the climate crisis. However, so far there seem to be few studies, if any, reporting on how young people perceive AI. They are the ones who will live in the future, when AI is likely to have an even greater impact on society than today. The UN Convention on the Rights of the Child was adopted by the UN General Assembly at the end of 1989 and entered into force in September 1990. The Convention on the Rights of the Child (United Nations, 1989) is a legally binding international agreement that states that children are individuals with their own rights, not the possessions of parents or other adults. It contains 54 articles, all of which are equally important and form a whole. However, four basic principles must always be considered when dealing with matters concerning children:
Article 2) All children have the same rights and equal value.
Article 3) The best interests of the child must be taken into account in all decisions
concerning children.
Article 6) All children have the right to life and development.
Article 12) All children have the right to express their opinion and have it respected.
It could be argued that, for example, Article 3 is of interest when making decisions about AI. UNICEF and the World Economic Forum also claim that AI will impact children in many ways, and they call for partners to build solutions that uphold child rights and take into account opportunities as well as risks in the future AI age (UNICEF, 2023). Already in 2001, Shier argued that children should be part of decision-making, based on the Convention on the Rights of the Child. He proposed a model in different steps: first, that children are listened to; second, that they are supported in expressing their views; third, that their views are taken into account; fourth, that they are involved in decision-making; and finally, that they share power and responsibility for decision-making.
Hence, to consider children and listen to their voices about AI, I have in this study focused on how young people perceive AI; more specifically, on what Swedish primary school students aged 11–12 know and think about AI. Since public use of AI has also increased during the last year, it was also of interest to find out whether young people already use AI at primary school level, and if so, how. The following research questions were posed:

1. What are primary school students' cognitive and affective perceptions of AI?
2. If primary school students already use AI, how do they use it?
Background – AI history in short, from launch to being part of education

Even though AI is on the agenda all over the world, a brief overview of what it is and a short history of its development is presented as follows.
AI was introduced already during the 1950s, and a proposed definition was:

every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. (McCarthy et al., 1955, p. 2)
Other examples of definitions of AI presented in research include its characterisation as a specialised field within computer science. This field is dedicated to the development of smart machines capable of executing tasks that generally necessitate human intellect, including but not limited to visual understanding, voice recognition, decision-making processes, and translating languages (Russell & Norvig, 2009; Marcus & Davis, 2019). Machine Learning (ML), on the other hand, is a specific area within AI that emphasises the creation of algorithms and statistical models that allow machines to progressively enhance their performance on a particular task by learning from data, without the need for explicit programming (Brauner, Hick, Philipsen & Ziefle, 2023).
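The distinction drawn above can be illustrated with a minimal sketch (my own illustration, not taken from the study or the cited works): instead of a programmer writing explicit rules, a model derives its behaviour from labelled examples. Here, a toy one-nearest-neighbour classifier labels a new point by copying the label of its closest training example.

```python
# Toy illustration of "learning from data": no rule says which region is
# "small" or "large"; the answer comes entirely from the labelled examples.

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`.

    `train` is a list of ((x, y), label) pairs.
    """
    def dist2(a, b):
        # Squared Euclidean distance (no square root needed for comparison).
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    closest = min(train, key=lambda pair: dist2(pair[0], point))
    return closest[1]

# Labelled training data: points near the origin are "small",
# points near (10, 10) are "large".
examples = [((0, 1), "small"), ((1, 0), "small"),
            ((9, 10), "large"), ((10, 9), "large")]

print(nearest_neighbour(examples, (2, 2)))  # nearest to the "small" cluster
print(nearest_neighbour(examples, (8, 8)))  # nearest to the "large" cluster
```

Adding more labelled examples improves the classifier without changing a single line of code, which is the essence of the ML definition quoted above.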
A well-known example from the early days of AI is the chatbot ELIZA, which was created during the 1960s (Potts et al., 2021). This chatbot could converse with humans and has been described as the first program able to pass the Turing test, signifying that it could engage in conversation in an intelligent and natural way (Haenlein & Kaplan, 2019). However, a lot of development has taken place since then. Another well-known example is Google's AlphaGo, which from 2015 defeated top professional players of the board game Go (Haenlein & Kaplan, 2019). Nowadays, AI is used in many fields such as voice assistants, text and image generation, self-driving cars, human-robot interactions, healthcare, etc. (e.g., Corea, 2019; Kulida & Lebedev, 2020; Onnasch & Roesler, 2020; Su & Yang, 2022).
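The kind of rule-based conversation ELIZA carried out can be sketched as simple pattern matching. The snippet below is an illustrative reconstruction, not Weizenbaum's actual script: a few hand-written regular-expression rules reflect the user's own words back, which is what made ELIZA feel conversational.

```python
import re

# ELIZA-style responder (toy reconstruction, not the original script).
# Each rule pairs a pattern with a response template; the captured text
# from the user's sentence is echoed back inside the reply.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def respond(sentence: str) -> str:
    """Return a reply for `sentence` using the first matching rule."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I feel worried about AI"))  # Why do you feel worried about AI?
print(respond("Hello there"))              # Please tell me more.
```

No understanding is involved: the program only rearranges the user's input, which is why ELIZA's apparent intelligence was so debated.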
The launch of ChatGPT in late 2022 caused a lot of debate in the education sector. Globally, the initial apprehension was that students might exploit ChatGPT and similar AI tools to cheat on their assignments, thereby devaluing the significance of learning evaluation, certification, and qualifications (Anders, 2023). Some educational institutions prohibited the use of ChatGPT, while others cautiously embraced the new technology (Tlili et al., 2023). Numerous schools and universities, for example, adopted a forward-thinking stance, asserting that instead of trying to ban such tools, students and staff should be guided to use AI tools effectively, ethically, and transparently (Russell Group, 2023). UNESCO has presented Guidance for generative AI in education and research (2023) to support educators, students, and researchers in dealing with access to this kind of AI in education. Furthermore, the guidance suggests the development of suitable rules and policies and recommends crucial measures for government bodies to control the application of generative AI. It also introduces models and specific instances for policy creation and instructional planning that allow the ethical and efficient utilization of this technology in education. Lastly, it urges the global community to ponder the deep, long-term effects of generative AI on our comprehension of knowledge and the determination of educational content, techniques, and results, as well as our approach to evaluating and authenticating learning (UNESCO, 2023).
From a science education perspective, the use of AI was presented in the review by Jia et al. (2024). They concluded that AI has played an important role in science teaching and learning, especially in the early stages of education. However, despite their thorough review, no articles were included that had actually investigated primary school students' perceptions of AI from cognitive and affective perspectives. Rather, the studies reported on the use of AI to stimulate learning and how it affected attitudes towards technology.

In addition to the findings from the review by Jia et al. (2024), it has been argued that children need to learn about AI. Yang (2022) argued that children should learn about AI already from the early years and suggested a curriculum design covering why, what and how this could be implemented. The same idea was highlighted by Holmes et al. (2019). They discussed AI education and argued that it should be classified into either Learning with AI or Learning about AI. The latter is the focus of this study, which tries to find out what primary school students know about AI, but also their attitudes towards AI.
AI literacy in education

What kind of knowledge do students need to understand AI? Based on a review of publications about AI in education and literacy, Ng et al. (2021) concluded that there are four parts that should serve as the basis for AI literacy in education, namely to:

● know about and understand how AI works;
● be able to use and apply AI;
● evaluate and create AI;
● consider AI and ethics.

Based on this definition of AI literacy, the previous arguments about the need to include AI in education, not only as tools but also for students to learn about AI, are reasonable. This is important not only for young students but also for the public, as will be presented in the following section.
Public perceptions of AI

As indicated in the introduction, several questions have been raised about the use of AI, and some think that besides the opportunities there are also risks. Some examples of public perceptions of AI are presented as follows. The World Economic Forum (2022) reported that the areas where people think AI improves their lives are mainly education, entertainment and transportation. When it comes to public perceptions of AI as dangerous, it has, for instance, been argued by researchers (Hick & Ziefle, 2022) that public perceptions of AI are in some respects influenced by science fiction movies with intelligent robots that take over the world. Less dangerous, but still perceived as problematic, fear of replacement and the risk of people losing their jobs is another concern raised by people (Smith & Anderson, 2014). This may well be the case, but again, it is also argued that new jobs will be created (World Economic Forum, 2023). In addition, Brauner et al. (2023) found in their study that people see both benefits and possible dangers with AI. The participants in their study were not worried about their future on the labor market. Another finding in their study was that people think it is good that AI is not influenced by emotions, and is hence more trustworthy. This was also found by Cismariu and Gherhes (2019) and Liu and Tao (2022). Finally, Brauner et al. (2023) argued that education about AI is necessary for the general public, to enable people to evaluate the benefits and barriers of AI.
Theoretical framework

Mitcham's philosophical framework of technology (1994) has been used by several researchers in technology education (e.g., Ankiewicz, 2019; Blom & Abrie, 2021; Su & Ding, 2022; Svenningsson, 2020). This framework presents technology in four different manifestations:

1. Objects: Technology as material objects, ranging from kitchenware to computers.
2. Knowledge: This includes recipes, rules, theories, and intuitive "know-how".
3. Activities: This involves the design, construction, and use of technological objects.
4. Volition: This pertains to knowing how to use technology and understanding its consequences.
The studies by Blom and Abrie (2021), Su and Ding (2022), and Svenningsson (2020) all used Mitcham's typology of technology to analyse students' perceptions of technology. They found that students often have a limited understanding of technology, primarily associating it with objects and activities. This understanding often overlooks the knowledge and volition aspects of technology. However, there are variations across different contexts. For instance, while South African and Swedish students frequently associated technology with modern electrical objects, Chinese students described technology from various aspects, including its features, production, function, operation, and use. Despite the limited perceptions, the studies suggest that students have the potential to describe technology more comprehensively using all four aspects of Mitcham's typology. This indicates a need for educational interventions to broaden students' understanding of technology beyond just objects and activities.
One example of the use of Mitcham's framework has been presented by Ankiewicz (2019). He developed Mitcham's framework further, arguing that a behavioural component of students' attitudes towards technology needed to be added. In this study, I will use the developed model of Mitcham's framework presented by Ankiewicz (2019), even though this is a qualitative study and the framework has previously mostly been used in quantitative research settings. Thereby, this study can serve as a new way of using the framework compared to how it has been used before.
To the best of my knowledge, the analysis of primary students' perceptions of Artificial Intelligence (AI) is not a widely explored area, and there are no existing frameworks specifically designed for this purpose. One potential approach could be to employ the developed model of Mitcham's framework as presented by Ankiewicz (2019). Another approach could be to use the concept of digital literacy, which has been extensively defined and utilised (Audrin & Audrin, 2022; Tinmaz, Lee & Fanea-Ivanovici, 2022). However, the application of digital literacy as a theoretical framework presents challenges due to the multitude of definitions and the lack of explicit inclusion of affective aspects of individuals' perceptions. An alternative strategy could be to adopt the AI literacy framework proposed by Ng et al. (2021), which is based on four fundamental components. It might also be feasible to integrate the model introduced by Ankiewicz (2019) with the foundational elements of AI literacy as outlined by Ng et al. (2021). Consequently, my intention is to utilise the model depicted in Fig. 1 for data analysis and discussion of the results in this study.
Method
Research context
In this case study, a collaboration was established with a primary school where the science and technology teachers teaching grades five and six (students aged 11–12 years)
were interested in working with AI as an SSI theme for some weeks from March to May 2023.

Fig. 1 The fusion of the theoretical model presented by Ankiewicz (2019) with the four fundamental components of AI literacy presented by Ng et al. (2021). Arrows indicate the relationships between the components: arrows pointing in both directions show that the components are considered similar, while arrows pointing in one direction only show an influence in that direction alone.

The reasons for this interest were all the news about ChatGPT, the teachers' own
curiosity in learning about AI, but also their wish to explore how AI could be taught to their students. The school is a compulsory school with students aged 6–12. It is situated in a municipality in the middle of Sweden with about 12,000 inhabitants, and the school has about 430 students. Some of the teachers had previously been involved in research projects with a nearby university, and based on these established contacts, the idea of conducting a case study about students' perceptions of AI was agreed between the teachers and the researcher (author). Before starting any activities with the students, all ethical concerns were taken into account. Hence, information letters and consent forms were sent to the students and their parents. All of the students were allowed and willing to participate in the study. The students were informed that all of them would take part in all activities, but that it was not necessary to be involved in any data collection. Furthermore, information was also provided that the participants would be kept anonymous, that data would be safely stored, and that it was possible to withdraw consent to participate at any time during the study. All ethical steps taken were based on the ethical guidelines for scientific research recommended by the Swedish Research Council (2017).
The next step was to find out what the students already knew and thought about AI. One of the teachers designed a pre-test with only five questions to get an idea of the students' general starting point. The reason for making a test with only a few questions was that this could be enough to find out what the students already knew and thought about AI. The questions concerned whether they had ever used any AI, their positive and negative thoughts about AI, and their explanations of what AI is, expressed both in written text and in a drawing illustrating their understanding of AI. Thereafter, the teachers started activities with the students. The activities were inspired by lesson plans created by researchers at Mid Sweden University for the purpose of teaching students in this age group about AI. The lesson plans can be found on the website https://www.miun.se/mot-mittuniversitetet/samverkan/run/barnensuniversitet/ai/.

However, this website is unfortunately only accessible in Swedish. Therefore, a brief summary of the lesson plans is presented here in Table 1.
All of the lessons included different kinds of practical exercises, and the content from the lesson plans presented on the website was divided into several lessons. In total, there were 10 lessons, each lasting 40 min. After these lessons, the primary school students used ChatGPT to create different questions for practice before tests they were going to have in different school subjects.

Table 1 Overview of lesson plans about AI
Lesson 1: Presentation of what AI is. Examples: recommendations on YouTube, TikTok and Instagram; virtual assistants such as Siri, Alexa, Google Assistant and ChatGPT; self-driving cars; face recognition; and Google Translate. Discussions about the good and bad sides of the examples.
Lesson 2: How a computer communicates. AI is based on algorithms. What an algorithm is. Machine learning.
Lesson 3: Ethical dilemmas.
Lesson 4: Human, machine or in between. Biohacking.

The final activity for the students was a trip to a nearby university, where they spent one day on activities related to AI. Half of the day, they worked with a combination of ChatGPT and Dall-E to create stories. Here they were trained in the importance of
writing appropriate prompts. The students also met a person working at the university who is an expert in programming with a special interest in AI. He held a short lecture about AI, lasting about 15 min. During the other half of the day, the students worked on an art-based activity in which, in groups of three to four, they were asked to create a collage presenting the kind of AI they would like to have in the future.

At the end of the project, focus group interviews were held with 12 of the participating primary school students (one girl and one boy randomly picked from each of the classes). The focus groups lasted about 40 min each. The interviews were semi-structured, with questions found in Appendix I.
In addition, at the end of the project, the teachers asked the students to write short evaluation reports, and 30 reports were sent to the researcher (author). In the reports, the students were asked to respond to two questions: What have you learnt about AI? and How do you feel about AI? The reason that only 30 reports were collected was that it was the end of the semester, and with many other things going on, such as national exams and the upcoming summer holiday, reminding students to write these evaluations was not a top priority.
The researcher also discussed with the teachers the possibility of giving the students the same test again as at the beginning, to see whether there had been any changes in the students' knowledge about and attitudes towards AI. However, given the many tests going on and the stress at the end of the semester, we agreed that the interviews and the evaluation reports that could be collected were sufficient as post-intervention data.
Participants, data collection and analysis
The participants of this study have already been mentioned in the research context. To clarify further, a total of 60 primary school students aged 11–12 participated in the data collection. However, evaluation reports were only collected from 30 of these students due to practical issues, and 12 of the students participated in the focus group interviews that took place after the activities. The interviews were audio-recorded and transcribed. Quotes from students in the focus group interviews are presented in the results section as "I" for interview, followed by a number (I1–I12). Quotes from the evaluation reports are presented as "E" for evaluation report, followed by a number (E1–E30).
Data were analysed using two different approaches. The data from the pre-test were mainly analysed using descriptive statistics, while data from the focus group interviews and the evaluation reports were analysed using the model presented in Fig. 1, hence the fusion of the Ankiewicz (2019) model and the foundational elements of AI literacy as outlined by Ng et al. (2021).
The data in this study consist of a pre-test, interviews, and evaluation reports. However, no research question was posed to compare before and after the activities. The purpose is not to evaluate whether and what students learn from the lessons, but rather to obtain a richer picture of their perceptions of and experiences with AI. Data from the activity in which the students worked with collages at the university are not included in this paper, since the focus of that particular activity was more on the students' ideas about the kind of support they wished for from AI in the future.
Results
Primary school students’ cognitive perceptions of AI
Data from the pre-test, showing how the primary school students responded to the question of where it is possible to find AI in society, revealed that 13% responded that they did not know. The rest of the students listed that AI can be found on the Internet, in cars, phones, social media, hospitals, and indeed almost everywhere. The pre-test also included a request that the students draw a picture showing what AI looks like. Even though 13% had responded that they did not know where to find AI in society, all of the students made drawings featuring either a laptop or a robot. Some students drew both. Two of the students included a picture of a brain in their drawings, a brain connected to a laptop. Examples of drawings are presented in Fig. 2.
Data from the focus group interviews showed that the students had cognitive perceptions of what AI is similar to those in the pre-test. The students mentioned AI as an information tool that could be found both through Google and ChatGPT. They also mentioned Snapchat and self-driving cars, referring in particular to the brand Tesla. The ideas of AI being a robot, or that it could be found in computers, were also presented during the interviews. One example given was AI robots being in the service of people, for instance by doing the shopping if you are not able to go to the shop yourself because of illness. Some examples of students' comments during the interviews on what AI is:
AI is not a real person that sits there and write when you chat. It is a robot. (I2)
Fig. 2 Three examples of drawings created by the students, illustrating their cognitive perceptions of AI
Well, AI is like Google, Google translate and Snapchat has an AI, but it is not so good.
(I7)
AI, it’s in self-driven cars, Musk you know, Tesla. (I8)
In addition, there were comments that referred to AI as a brain, a digital brain in computers, that is able to think for itself. One comment was that AI is more like a human; it can make up things on its own. These perceptions were more emphasised during the interviews than in the pre-test. The students also talked about AI not having any feelings, as a difference compared to humans. Some examples from the students' comments:

It's [AI] like a human, it can make things up, on its own. (I5)

We have learnt a lot about how AI works. They [AIs] are very clever, but they don't have any feelings. They don't think about consequences. (I3)

It is like a brain, a digital one, inside a computer. It can think by itself. (I9)
The evaluation reports did not provide much information about the students' perceptions of what AI is. Instead, the students mostly wrote things like "I have learnt a lot about AI". Still, a few quotes are presented as examples:

I have learnt that AI is much more than a robot. (E15)

I have learnt about programming and ChatGPT. (E20)

Summarising the students' cognitive perceptions of AI, they described it both as a machine (such as robots, computers, phones and self-driving cars) and in terms of its functionalities. Additionally, there were notions of AI having or lacking human attributes. Some viewed AI as humanlike, suggesting it can think on its own or portraying it as a brain. On the other hand, there were comments about AI differing from humans, particularly in the perception that AI lacks any feelings.
Primary school students' affective perceptions of AI

In the pre-test, students were prompted to articulate both positive and negative perspectives on AI. They were also asked to state whether they considered themselves predominantly positive or negative towards AI. Out of the 60 students, approximately 16% indicated uncertainty regarding what they found positive about it. The remaining students provided positive comments, highlighting AI's capacity to assist in text writing, its accessibility, its utility in perilous situations, and its potential for facilitating learning.

In the focus group interviews, the students talked about positive aspects of AI based on how it can be used. The things mentioned were support in their studies and AI robots helping with practical things such as shopping. Two examples of comments:

If you are ill, and cannot go and buy food, an AI robot could do it for you. That's good. (I6)

It is positive that it can help us in our studies, to practice before exams. (I7)
A comment from the evaluation reports was that:
It has been fun! I find it interesting and cool with AI. (E25)
All 60 students responded regarding negative aspects of AI in the pre-test. None of them wrote that they did not know, or that they did not think of any negative effects. The risks mentioned included possibilities for bad use, that people could become lazy, and that AI could make mistakes.

During the interviews, students discussed the risks more than the opportunities, and they mentioned negative aspects similar to those in the pre-test. However, they also added that they were afraid and found it a little bit scary that development is going too fast, and that AI could spy on people and even kill. Furthermore, they mentioned the risk of people losing their jobs, and that AI robots could develop and become mean. The last comment related to something they had seen in movies. Some examples of comments:
It feels like it is going a little bit too fast and there is a risk that we will become lazy.
(I2)
It is bad if it (AI) starts to spy on people. If it knows what you are thinking. Like for
instance, maybe you plan for a birthday gift, you want to keep that as a secret. You
want to keep your privacy. (I7)
First, you think that it could be good, like if an AI robot take care of your old grandma. I don't want that, it's not personal, I want to visit and call my grandma myself. Otherwise, I would get a bad conscience. (I5)
I have seen movies when AI, or robots take over the world. What if that really hap-
pens? If you ask an AI to save the environment and take away the cause of the prob-
lems, then it would probably kill us all. (I4)
The evaluation reports were mainly filled with comments from the students that they found the development of AI scary, and comments similar to those from the interviews were found. One example:

It has been interesting to learn more about AI. Interesting, but also scary. It seems as it is going very fast and people can lose their jobs and we don't really know what is going to happen, and what if it becomes smarter than people? (E13)
Still, even though the students seemed mainly negative about AI, when explicitly asked whether they were more in favor of, or more against, AI, 75% of the students reported in the pre-test that they were positive. Out of the 60 students, 13% did not know and 12% wrote that they were negative. During the interviews, the students were asked the same question, and one of the groups was, despite the risks they had talked about, positive. The other group reported that they felt more scared and therefore were negative, mostly towards the speed of the development.
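As a consistency note, the reported shares line up with whole-student counts out of 60. A minimal sketch of that back-calculation in Python (the underlying counts of 45, 8, and 7 are assumptions inferred from the rounded percentages, not values reported in the data):

```python
# Back-calculate the rounded pre-test percentages from assumed raw counts.
# The counts below are an assumption inferred from the rounded figures
# in the text (75%, 13%, 12% of n = 60), not reported data.
n = 60
counts = {"positive": 45, "did not know": 8, "negative": 7}
assert sum(counts.values()) == n  # all 60 students answered

percentages = {k: round(100 * v / n) for k, v in counts.items()}
print(percentages)  # {'positive': 75, 'did not know': 13, 'negative': 12}
```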
Summarising the students' affective perceptions about AI, students discussed positive aspects such as AI's support in studies and practical tasks, but also expressed concerns about AI's negative impacts, including fears of rapid development, job loss, privacy invasion, and potential harm. Overall, students exhibited a mix of positive and negative sentiments towards AI.
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Primary school students’ use of AI and their recommendations for the future
In the pre-test, 68% of the students reported having used some kind of AI, 2% did not know,
and the remaining 30% responded negatively to this question. Throughout the project, stu-
dents utilised ChatGPT in lessons about AI at school and when crafting stories at the univer-
sity. In the story creation session, they also employed the AI tool Dall-E to generate pictures.
As previously mentioned, many students found it interesting and enjoyable to learn about AI and to experiment with various tools. One student raised concerns about others' behavior, specifically regarding the risk of using ChatGPT for cheating in school. However, students also expressed apprehensions about the future and called for regulations, which was highlighted in interviews and in some evaluation reports. Additionally, students argued that they have limited influence on decisions. Two comments exemplify this sentiment:
Since students can cheat, they have started to forbid the use of ChatGPT, at least at the
school where my brother is studying. (I9)
There is not much we can do. There was some researcher who did not want to work
with this anymore. He thought that things were going too fast. They should move
slower. They should think about consequences before they become exalted over what
they can do. They should take it easy and calculate on risks. But, what can we do?
Nothing. (I4)
Summarising aspects related to students’ use of AI, they were initially exploring various AI
tools. Notably, students discussed future behavior and the necessity for regulations, primar-
ily to slow down and contemplate consequences.
Discussion
In this study, primary school students' perceptions of AI have been presented, both cognitive and affective, as well as how they have started to use some tools, primarily ChatGPT, hence behavioral aspects. The theoretical framework, based on Mitcham's (1994) philosophical framework of technology, set the foundation for analysing students' perceptions, with the modification made in Ankiewicz's (2019) expanded model. In addition, the AI literacy components presented by Ng et al. (2021) have been included as part of the analyses, and these are elaborated on further in this discussion section.
In terms of cognition, primary school students displayed an awareness of AI, perceiving it both as a machine (including robots, computers, phones, and self-driven cars) and as a tool applicable in various situations. This aligns not only with the technological aspects outlined in Mitcham's framework (1994) but also corresponds to the adjusted model proposed by Ankiewicz (2019). When viewed through the lens of AI literacy (Ng et al., 2021), the cognitive aspect is deemed synonymous with comprehending how AI functions. While students in this study indicated learning about AI, it cannot be definitively asserted that they have grasped its operational mechanisms, as such insights are not explicitly evident in the collected data. Still, it might be argued that steps have been taken for students to start learning about AI, suggested in previous studies as one important aspect of education (Holmes et al., 2019; Yang, 2022).
In terms of the affective perceptions, a nuanced perspective was revealed, with students expressing both positive and negative sentiments towards AI. Positive aspects included AI's support in studies and practical tasks, aligning with public perceptions reported by the World Economic Forum (2022). The students' negative perceptions encompassed concerns about rapid development, job loss, privacy invasion and potential harm. These perceptions were also in line with findings reported in previous studies (Hick & Ziefle, 2022; Smith & Anderson, 2014). For instance, some students referred to science fiction movies and wondered whether this was something that could really happen in the future if AI were to take over. In total, the results showed the emotional complexity and the multifaceted nature of students' affective perceptions of AI.
When analysing the affective perceptions through the lens of AI literacy (Ng et al., 2021), I make connections with the ethical perspective proposed by Ng and colleagues. However, the ethical perspective is also connected to the cognitive domain, just as the affective domain is affected by the cognitive; the dimensions are intertwined. Lack of knowledge may also cause worries, and this can influence decisions on ethical aspects.
Therefore, as argued by, for instance, Brauner et al. (2023), education about AI is necessary to enable people to evaluate the benefits and barriers of AI. To be able to evaluate and create AI, even more education is needed, and this was not part of the lessons the primary school students in this project faced.
However, the students were able to use some AI tools, which I interpreted as being part of the behavioral component in Ankiewicz's (2019) model, and also as being able to use and apply AI as suggested in the basic foundation of AI literacy (Ng et al., 2021). In addition to the actual use, the students in this study also made suggestions for policymakers and AI developers, hence suggestions for other people's behavior, as the students emphasised the need for regulations and the importance of responsible AI use. In this respect, steps have started to be taken, for instance by the European Union (2023). In the future, policymakers and AI developers should perhaps also listen to the voices of children and take into account the Convention on the Rights of the Child, recommendations from previous research (Shier, 2001) and UNICEF (2023). AI will most certainly impact children in many ways.
Limitations and conclusions
This study took place in Sweden, with only a small number of primary school students. It could be argued that the data collection is missing a post-test to elaborate on what the students learned during the project. However, as stated earlier, practical issues made this impossible to conduct. Still, the idea was not to evaluate what kind of knowledge the students developed during the project, but to use all collected data to create an overall picture of the students' perceptions of AI. As a researcher, I did not participate during the lessons, except for the ones taking place at the university. This is a limitation, since I cannot say if, or how, the students were influenced by their teachers. In a future study, this limitation should be considered, and researchers should also observe what is going on during the whole process to be able to identify factors that may influence the students. On the other hand, there
are other factors to consider as well. Are the children talking about AI at home? What are their parents saying? Still, the study contributes valuable insights into primary school students' perceptions and use of AI, providing a basis for further exploration of AI literacy in educational contexts.
Appendix I
Questions for the focus group interviews
1. I know that you have worked with a project about AI, can you please tell me about it?
What have you been doing?
2. What have you learnt about AI? What is it and how does it work?
3. How did you feel about working with the project?
4. How did you feel about AI?
5. Can you tell me more about things you find positive with AI?
6. Can you tell me more about things you find negative with AI?
7. What do you think about AI and the future? How should we use it? Or, should we avoid
using it?
8. Do you use any kind of AI yourself, if so, what do you use and for what purpose?
9. Anything else you would like to tell me about the project or AI?
Acknowledgements Thank you to the primary school students who volunteered to be part of this study.
Acknowledgements also to the teachers who collaborated and worked with lessons about AI as well as with
AI during the project.
Funding No external funding was received for this project.
Open access funding provided by Karlstad University.
Data availability Not applicable.
Code availability Not applicable.
Declarations
Conflict of interest The author declares that there is no conflict of interest.
Ethical approval and informed consent All procedures performed with human subjects were in accordance with the ethical standards of the Swedish Research Council (SRC). Informed consent was obtained from all participants in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons
licence, and indicate if changes were made. The images or other third party material in this article are
included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article’s Creative Commons licence and your intended use is not permitted
by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the
copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Anders, B. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking? Cell Press. https://doi.org/10.1016/j.patter.2023.100694
Ankiewicz, P. (2019). Alignment of the traditional approach to perceptions and attitudes with Mitcham's philosophical framework of technology. International Journal of Technology and Design Education, 29, 329–340. https://doi.org/10.1007/s10798-018-9443-6
Audrin, C., & Audrin, B. (2022). Key factors in digital literacy in learning and education: A systematic literature review using text mining. Education and Information Technologies, 27, 7395–7419. https://doi.org/10.1007/s10639-021-10832-5
Blom, N., & Abrie, A. L. (2021). Students' perceptions of the nature of technology and its relationship with science following an integrated curriculum. International Journal of Science Education, 43(11), 1726–1745. https://doi.org/10.1080/09500693.2021.1930273
Brauner, P., Hick, A., Philipsen, R., & Ziefle, M. (2023). What does the public think about artificial intelligence? A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science, 5. https://doi.org/10.3389/fcomp.2023.1113903
Cismariu, L., & Gherhes, V. (2019). Artificial intelligence, between opportunity and challenge. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 10(4), 40–55. https://doi.org/10.18662/brain/04
Corea, F. (2019). AI knowledge map: How to classify AI technologies. In An introduction to data (pp. 25–29). (Vol. 50 of Studies in Big Data). Springer, Cham. https://doi.org/10.1007/978-3-030-04468-8_4
European Council (EC) (2023). Artificial intelligence act: Council and parliament strike a deal on the first rules for AI in the world. Retrieved December 16, 2023, from https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
Government United Kingdom (2023). AI safety summit 2023. Retrieved December 16, 2023, from https://www.gov.uk/government/topical-events/ai-safety-summit-2023
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14. https://doi.org/10.1177/000812561986492
Hick, A., & Ziefle, M. (2022). A qualitative approach to the public perception of AI. International Journal on Cybernetics & Informatics (IJCI), (4), 1–17. https://doi.org/10.5121/ijci.2022.110401
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education. Center for Curriculum Redesign. Retrieved December 16, 2023, from https://curriculumredesign.org/wp-content/uploads/AIED-Book-Excerpt-CCR.pdf
Jia, F., Sun, D., & Looi, C. (2024). Artificial intelligence in science education (2013–2023): Research trends in ten years. Journal of Science Education and Technology, 33, 94–117. https://doi.org/10.1007/s10956-023-10077-6
Kulida, E., & Lebedev, V. (2020). About the use of artificial intelligence methods in aviation. In 13th International Conference on Management of Large-Scale System Development (MLSD), 1–5. Retrieved April 4, 2024, from https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9247822&casa_token=c8t2OOc7wLMAAAAA:LGacxrsWI3sNCoU-TfAMoe3L5sl2rOlU97xUwilDHysI8P9sDUBkxIscAp2EXyh3IKmINXsK-a0&tag=1
Liu, K., & Tao, D. (2022). The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Computers in Human Behavior, 127, 107026. https://doi.org/10.1016/j.chb.2021.107026
Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon Books.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
Mitcham, C. (1994). Thinking through technology. The University of Chicago Press.
Ng, D. T. K., Leung, J. K. L., Chu, K. W. S., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509. https://doi.org/10.1002/pra2.487
Onnasch, L., & Roesler, E. (2020). A taxonomy to structure and analyze human–robot interaction. International Journal of Social Robotics, 13, 833–849. https://doi.org/10.1007/s12369-020-00666-5
Potts, C., Ennis, E., Bond, R., Mulvenna, M., McTear, M., Boyd, K., Broderick, T., Malcolm, M., Kuosmanen, L., Nieminen, H., Vartiainen, A.-K., Kostenius, C., Cahill, B., Vakaloudis, A., McConvey, G., & O'Neill, S. (2021). Chatbots to support mental wellbeing of people living in rural areas: Can user groups contribute to co-design? Journal of Technology in Behavioral Science. https://doi.org/10.1007/s41347-021-00222-6
Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Prentice Hall.
Russell Group (2023). Russell Group principles on the use of generative AI tools in education. https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
Shewale, R. (2023). ChatGPT statistics: Detailed insights on users (2023). Demandsage. https://www.demandsage.com/chatgpt-statistics/
Shier, H. (2001). Pathways to participation: Openings, opportunities and obligations. Children & Society, 15, 107–117. https://doi.org/10.1002/chi.617
Smith, A., & Anderson, J. (2014). AI, robotics, and the future of jobs. Pew Research Center, 6, 51. https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/
Su, X., & Ding, B. A. (2022). A phenomenographic study of Chinese primary school students' conceptions about technology. International Journal of Technology and Design Education. https://doi.org/10.1007/s10798-022-09742-5
Su, J., & Yang, W. (2022). Artificial intelligence in early childhood education: A scoping review. Computers and Education: Artificial Intelligence, 3, 100049. https://doi.org/10.1016/j.caeai.2022.100049
Svenningsson, J. (2020). The Mitcham score: Quantifying students' descriptions of technology. International Journal of Technology and Design Education, 30, 995–1014. https://doi.org/10.1007/s10798-019-09530-8
Swedish Research Council (2017). Good research practice. Retrieved December 16, 2023, from https://www.vr.se/english/analysis/reports/ourreports/2017-08-31-good-research-practice.html
Tinmaz, H., Lee, Y. T., Fanea-Ivanovici, M., & Baber, H. (2022). A systematic review on digital literacy. Smart Learning Environments, 9(1), 1–18. https://doi.org/10.1186/s40561-022-00204-y
Tlili, A., Shehata, B., Agyemang Adarkwah, M., Bozkurt, A., Hickey, D. T., Huang, R., & Brighter Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(15), 1–24. https://doi.org/10.1186/s40561-023-00237-x
UNESCO (2023). Guidance for generative AI in education and research. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
UNICEF (2023). Children and AI. Where are the opportunities and risks? Retrieved December 16, 2023, from https://www.unicef.org/innovation/sites/unicef.org.innovation/files/2018-11/Children and AI_Short Verson %283%29.pdf
United Nations (1989). Convention on the rights of the child. Retrieved December 16, 2023, from https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-rights-child
World Economic Forum (2022). 5 charts that show what people around the world think about AI. Retrieved December 16, 2023, from https://www.weforum.org/agenda/2022/01/artificial-intelligence-ai-technology-trust-survey/
World Economic Forum (2023). These are the jobs most likely to be lost – and created – because of AI. Retrieved December 16, 2023, from https://www.weforum.org/agenda/2023/05/jobs-lost-created-ai-gpt/
Yang, W. (2022). Artificial intelligence education for young children: Why, what, and how in curriculum design and implementation. Computers and Education: Artificial Intelligence, 3, 100061. https://doi.org/10.1016/j.caeai.2022.100061
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.