Accepted: 18 April 2024 / Published online: 3 May 2024
© The Author(s) 2024
Susanne Walan
susanne.walan@kau.se
1 Department of environmental and life sciences, Karlstad University, Karlstad 651 88, Sweden
Primary school students’ perceptions of artificial intelligence – for good or bad
Susanne Walan¹
International Journal of Technology and Design Education (2025) 35:25–40
https://doi.org/10.1007/s10798-024-09898-2
Abstract
Since the end of 2022, global discussions on Artificial Intelligence (AI) have surged, influencing diverse societal groups, such as teachers, students and policymakers. This case study focuses on Swedish primary school students aged 11–12. The aim is to examine their cognitive and affective perceptions of AI and their current usage. Data, comprising a pre-test, focus group interviews, and post-lesson evaluation reports, were analysed using a fusion of Mitcham’s philosophical framework of technology with a behavioural component, and the four basic pillars of AI literacy. Results revealed students’ cognitive perceptions encompassing AI as both a machine and a concept with or without human attributes. Affective perceptions were mixed, with students expressing positive views on AI’s support in studies and practical tasks, alongside concerns about rapid development, job loss, privacy invasion, and potential harm. Regarding AI usage, students initially explored various AI tools, emphasising the need for regulations to slow down and contemplate consequences. This study provides insights into primary school students’ perceptions and use of AI, serving as a foundation for further exploration of AI literacy in education contexts and as considerations for policymakers, who should listen to children’s voices.
Keywords Affective perceptions · Artificial intelligence · Cognitive perceptions · Primary school students · Use of AI
Introduction
During the last year, a new socioscientific issue (SSI) has become one of the most discussed in society in all kinds of organisations, not least in education. Although artificial intelligence (AI) has existed for quite some time, the release of ChatGPT to the public in November 2022 increased awareness of AI. In a short period, ChatGPT became a tool used worldwide. According to different websites (e.g., Shewale, 2023), more than
180 million users had used ChatGPT, provided by OpenAI, between its release and December 2023.
In the media, at least in Sweden, there have been almost daily reports during the last year about the technical revolution of AI, with comments about its benefits as well as its expected dangers. From an international perspective, it is an important issue, and leaders in society have voiced the need for regulations on AI development. For instance, the prime minister of the UK invited leaders from all over the world to a safety summit about AI in November 2023 to discuss international regulations (gov.uk, 2023). During the summit, the first agreement was signed by representatives from companies and governments from the 28 participating countries. More recently, the European Union has decided on an AI Act to regulate the use of AI (European Council, 2023).
Many questions arise about safety and worries about jobs being lost, but also about how AI can support us in many ways, maybe even helping us to find solutions to the climate crisis. However, so far there seem to be few, if any, studies reporting on how young people perceive AI. They are the ones who will live in the future, with AI likely having an even greater impact on society than today. The UN Convention on the Rights of the Child was adopted by the UN General Assembly at the end of 1989 and entered into force in September 1990. The Convention on the Rights of the Child (United Nations, 1989) is a legally binding international agreement that states that children are individuals with their own rights, not the possessions of parents or other adults. It contains 54 articles, all of which are equally important and form a whole. However, four basic principles must always be considered when dealing with matters concerning children:
Article 2) All children have the same rights and equal value.
Article 3) The best interests of the child must be taken into account in all decisions
concerning children.
Article 6) All children have the right to life and development.
Article 12) All children have the right to express their opinion and have it respected.
It could be argued that, for example, Article 3 is of interest when making decisions about AI. UNICEF and the World Economic Forum also claim that AI will impact children in many ways, and they ask for partners to build solutions that uphold child rights and take into account opportunities as well as risks in the future AI age (UNICEF, 2023). Already in 2001, Shier argued that children should be part of decision-making based on the Convention on the Rights of the Child. He proposed a model in different steps: first, that children are listened to; second, that they are supported in expressing their views; third, that their views are taken into account; fourth, that they are involved in decision-making; and finally, that they share power and responsibility for decision-making.
Hence, to consider children and to listen to their voices about AI, in this study I have focused on how young people perceive AI; more specifically, on what Swedish primary school students aged 11–12 years know and think about AI. Since public use of AI has also increased during the last year, I was also interested in finding out whether young people already use AI at primary school level, and if so, how. The following research questions were posed:
1. What are primary school students’ cognitive and affective perceptions of AI?
2. If primary school students already use AI, how do they use it?
Background – AI history in short, from launch to being part of education
Even though AI is on the agenda all over the world, a brief overview of what it is and a short history of its development is presented as follows.
AI was introduced already during the 1950s, and a proposed definition was:
every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. (McCarthy et al., 1955, p. 2)
Other examples of definitions of AI presented in research include its characterisation as a specialised field within computer science. This field is dedicated to the development of smart machines capable of executing tasks that generally necessitate human intellect, including but not limited to visual understanding, voice recognition, decision-making processes, and translating languages (Russell & Norvig, 2009; Marcus & Davis, 2019). Machine Learning (ML), on the other hand, is a specific area within AI that emphasises the creation of algorithms and statistical models that allow machines to progressively enhance their efficiency in a particular task by learning from data, without the need for explicit programming (Brauner, Hick, Philipsen & Ziefle, 2023).
A well-known example from the early days of AI is the chatbot ELIZA, which was created during the 1960s (Potts et al., 2021). This chatbot could converse with humans and was the first program able to pass the Turing test, signifying that it could engage in conversation in an intelligent and natural way (Haenlein & Kaplan, 2019). However, much development has taken place since then. Another well-known example is from 2015, when Google’s AI managed to defeat a human champion at the board game Go (Haenlein & Kaplan, 2019). Nowadays, AI is used in many fields such as voice assistants, text and image generation, self-driving cars, human-robot interactions, healthcare, etc. (e.g., Corea, 2019; Kulida & Lebedev, 2020; Onnasch & Roesler, 2020; Su & Yang, 2022).
The launch of ChatGPT in late 2022 caused a lot of debate in the education sector. Globally, the initial apprehension was that students might exploit ChatGPT and similar AI tools to cheat on their assignments, thereby devaluing the significance of learning evaluation, certification, and qualifications (Anders, 2023). Some educational institutions prohibited the use of ChatGPT, while others cautiously embraced the new technology (Tlili et al., 2023). Numerous schools and universities, for example, adopted a forward-thinking stance, asserting that instead of trying to ban such tools, students and staff should be guided to use AI tools effectively, ethically, and transparently (Russell Group, 2023). UNESCO has presented Guidance for generative AI in education and research (2023) to support educators, students, and researchers in how to deal with access to this kind of AI in education. Furthermore, the guidance suggests the development of suitable rules and policies and recommends crucial measures for government
bodies to control the application of generative AI. It also introduces models and specific instances for policy creation and instructional planning that allow the ethical and efficient utilisation of this technology in education. Lastly, it urges the global community to ponder the deep, long-term effects of generative AI on our comprehension of knowledge and the determination of educational content, techniques, and results, as well as our approach to evaluating and authenticating learning (UNESCO, 2023).
From a science education perspective, the use of AI was presented in the review by Jia et al. (2024). They concluded that AI has played an important role in science teaching and learning, especially in the early stages of education. However, despite their thorough review, no included articles had actually investigated primary school students’ perceptions of AI from cognitive and affective perspectives. Rather, the studies reported on the use of AI to stimulate learning and on how it affected attitudes towards technology.
In addition to the findings from the review by Jia et al. (2024), it has been argued that children need to learn about AI. Yang (2022) argued that children should learn about AI already from their early years and suggested a curriculum design including why, what and how this could be implemented. The same idea was highlighted by Holmes et al. (2019). They discussed AI education and argued that it should be classified into either Learning with AI or Learning about AI. The latter is the focus of this study, which tries to find out what primary school students know about AI, as well as their attitudes towards it.
AI literacy in education
What kind of knowledge do students need to understand AI? Based on a review of publications about AI in education and literacy, Ng et al. (2021) concluded that four parts should serve as the basis for AI literacy in education, namely to:
1. know about and understand how AI works;
2. be able to use and apply AI;
3. evaluate and create AI;
4. consider AI and ethics.
Based on this definition of AI literacy, the previous arguments about the need to include AI in education, not only as a set of tools but also as something for students to learn about, are reasonable. This is of importance not only for young students, but also for the public, as will be presented in the following section.
Public perceptions of AI
As indicated in the introduction, several questions have been raised about the use of AI, and some think that besides the opportunities there are also risks. Some examples of public perceptions of AI are presented as follows. The World Economic Forum (2022) reported that the areas where people think AI improves their lives are mainly education, entertainment and transportation. When it comes to public perceptions of AI as
dangerous, researchers (Hick & Ziefle, 2022) have argued, for instance, that public perceptions of AI are in some respects influenced by science fiction movies with intelligent robots that take over the world. Less dangerous, but still perceived as problematic, fear of replacement and the risk of people losing their jobs is another concern raised by people (Smith & Anderson, 2014). This may well be the case, but again, it is also argued that new jobs will be created (World Economic Forum, 2023). In addition, Brauner et al. (2023) found in their study that people see both benefits and possible dangers with AI. The participants in their study were not worried about their future on the labour market. Another finding was that people think it is good that AI is not influenced by emotions, and hence more trustworthy. This was also found by Cismariu and Gherhes (2019) and Liu and Tao (2022). Finally, Brauner et al. (2023) argued that education about AI is necessary for the general public, to enable people to evaluate the benefits and barriers of AI.
Theoretical framework
Mitcham’s philosophical framework of technology (1994) has been used by several
researchers in technology education (e.g., Ankiewicz, 2019; Blom & Abrie, 2021; Su &
Ding, 2022; Svenningsson, 2020). This framework presents technology in four different
manifestations:
1. Objects: Technology as material objects, ranging from kitchenware to computers.
2. Knowledge: This includes recipes, rules, theories, and intuitive “know-how”.
3. Activities: This involves the design, construction, and use of technological objects.
4. Volition: This pertains to knowing how to use technology and understanding its
consequences.
The studies by Blom and Abrie (2021), Su and Ding (2022), and Svenningsson (2020) all
used Mitcham’s typology of technology to analyse students’ perceptions of technology.
They found that students often have a limited understanding of technology, primarily
associating it with objects and activities. This understanding often overlooks the aspects
of knowledge and volition in technology. However, there are variations across different
contexts. For instance, while South African and Swedish students frequently associated
technology with modern electrical objects, Chinese students described technology from
various aspects, including its features, production, function, operation, and use. Despite
the limited perception, the studies suggest that students have the potential to describe
technology more comprehensively using all four aspects of Mitcham’s typology. This
indicates a need for educational interventions to broaden students’ understanding of
technology beyond just objects and activities.
One example of the use of Mitcham’s framework has been presented by Ankiewicz (2019), who further developed the framework and argued that a behavioural component covering students’ attitudes towards technology needed to be added. In this study, I use the developed model of Mitcham’s framework presented by Ankiewicz (2019), even though this is a qualitative study and the framework has previously been used mostly in quantitative research settings. Thereby, this study can serve as a new way of using the framework compared to how it has been used before.
To the best of my knowledge, the analysis of primary students’ perceptions of Artificial Intelligence (AI) is not a widely explored area, and there are no existing frameworks specifically designed for this purpose. One potential approach could be to employ the developed model of Mitcham’s framework as presented by Ankiewicz (2019). Another approach could be to use the concept of digital literacy, which has been extensively defined and utilised (Audrin & Audrin, 2022; Tinmaz, Lee, Fanea-Ivanovici, 2022). However, the application of digital literacy as a theoretical framework presents challenges due to the multitude of definitions and the lack of explicit inclusion of affective aspects of individuals’ perceptions. An alternative strategy could be to adopt the AI literacy framework proposed by Ng et al. (2021), which is based on four fundamental components. It might also be feasible to integrate the model introduced by Ankiewicz (2019) with the foundational elements of AI literacy as outlined by Ng et al. (2021). Consequently, my intention is to utilise the model depicted in Fig. 1 for data analysis and discussion of the results in this study.
Method
Research context
Fig. 1 In this figure, the theoretical model presented by Ankiewicz (2019) is combined with the four fundamental components of AI literacy presented by Ng et al. (2021). (Arrows indicate the relationships between the components: arrows pointing in both directions show that the components are considered similar, while arrows pointing in one direction show that one component influences the other in that direction only)

In this case study, collaboration was established with a primary school where the science and technology teachers teaching grades five and six (students aged 11–12 years) were interested in working with AI as an SSI theme over several weeks from March to May 2023. The reasons for this interest were all the news about ChatGPT, the teachers’ own curiosity about AI, and an interest in exploring how AI could be taught to their students. The school is a compulsory school with students aged 6–12 years old. The school
is situated in a municipality in the middle of Sweden with about 12,000 inhabitants, and this school has about 430 students. Some of the teachers had previously been involved in research projects with a nearby university, and based on these established contacts, the idea of conducting a case study about students’ perceptions of AI was agreed between the teachers and the researcher (author). Before starting any activities with the students, all ethical concerns were taken into account. Hence, information letters and consent forms were sent to the students and their parents. All of the students were allowed and willing to participate in the study. The students were informed that all of them would take part in all activities, but that it was not necessary to be involved in any data collection. Furthermore, information was also provided that the participants would be kept anonymous, that data would be safely stored, and that it was possible to withdraw consent to participate at any time during the study. All the ethical steps taken were based on the ethical guidelines for scientific research recommended by the Swedish Research Council (2017).
The next step was to find out what the students already knew and thought about AI. One of the teachers designed a pre-test with only five questions to get an idea of the students’ general starting point. The reason for making a test with only a few questions was that this could be enough to find out what the students already knew and thought about AI. The questions asked whether they had ever used any AI, what their positive and negative thoughts about AI were, and what AI is, which they explained both in written text and by drawing a picture of their understanding of AI. Thereafter, the teachers started activities with the students. The activities were inspired by lesson plans created by researchers at Mid Sweden University for the purpose of teaching students in this age group about AI. The lesson plans can be found on the website https://www.miun.se/mot-mittuniversitetet/samverkan/run/barnensuniversitet/ai/. However, this website is unfortunately only accessible in Swedish. Therefore, a brief summary of the lesson plans is presented here in Table 1.
All of the lessons include different kinds of practical exercises, and the content from the lesson plans presented on the website was separated into several lessons. In total, there were 10 lessons, each lasting 40 min. After these lessons, the primary school students used ChatGPT to create questions for practice before tests they were going to have in different school subjects. The final activity for the students was a trip to a nearby university where
Table 1 Overview of lesson plans about AI

Lesson  Content
1       Presentation of what AI is. Examples: recommendations on YouTube, TikTok and Instagram; virtual assistants such as Siri, Alexa, Google Assistant and ChatGPT; self-driving cars; face recognition; and Google Translate. Discussions about the good and bad aspects of the examples.
2       How a computer communicates. AI is based on algorithms. What an algorithm is. Machine learning.
3       Ethical dilemmas.
4       Human, machine or in between. Biohacking.
they spent one day on activities related to AI. For half of the day, they worked with a combination of ChatGPT and Dall-E to create stories. Here, they were trained in the importance of writing appropriate prompts. The students also met a person working at the university who is an expert in programming with a special interest in AI. He held a short lecture about AI, lasting about 15 min. During the other half of the day, the students worked on an art-based activity in which, in groups of three to four, they were asked to create a collage presenting the kind of AI they would like to have in the future.
At the end of the project, focus group interviews were held with 12 of the participating primary school students (one girl and one boy randomly picked from each of the classes). The focus groups lasted about 40 min each. The interviews were semi-structured, with the questions found in Appendix I.
In addition, at the end of the project the teachers asked the students to write short evaluation reports, and 30 reports were sent to the researcher (author). In the reports, the students were requested to write their responses to two questions: What have you learnt about AI? and How do you feel about AI? The reason that only 30 reports were collected was that it was the end of the semester, and reminding students to write these evaluations was not a top priority with many other things going on, such as national exams and the upcoming summer holiday.
The researcher also discussed with the teachers the possibility of giving the students the same test again as at the beginning, to examine whether there had been any changes in the students’ knowledge about and attitudes towards AI. However, given the many tests going on and the stress at the end of the semester, we agreed that the interviews and the evaluation reports that could be collected were sufficient as post data collection.
Participants, data collection and analysis
The participants of this study have already been mentioned in the research context. To clarify further, a total of 60 primary students aged 11–12 years participated in the data collection. However, due to practical issues, evaluation reports were only collected from 30 of these students, and 12 of the students participated in the focus group interviews that took place after the activities. The interviews were audio-recorded and transcribed. Quotes from students in the focus group interviews are presented in the results section as “I” for interview, followed by a number from 1 to 12. Quotes from the evaluation reports are presented as “E” for evaluation report, followed by a number, hence E1–E30.
Data were analysed using two different approaches. The data from the pre-test were mainly analysed using descriptive statistics, while data from the focus group interviews and the evaluation reports were analysed through the model presented in Fig. 1, hence using the fusion of the Ankiewicz (2019) model and the foundational elements of AI literacy as outlined by Ng et al. (2021).
The data in this study consist of a pre-test, interviews, and evaluation reports. However, no research question was posed to compare students before and after the activities. The purpose is not to evaluate whether and what students learn from the lessons, but rather to obtain a richer picture of their perceptions of and experiences with AI. Data from the activity in which the students worked with collages at the university are not included in this paper, since
the focus of that particular activity was more on the students’ ideas about the kind of support they wished for from AI in the future.
Results
Primary school students’ cognitive perceptions of AI
Data from the pre-test, showing how the primary school students responded to the question of where it is possible to find AI in society, revealed that 13% responded that they did not know. The rest of the students listed that AI can be found on the Internet, in cars, phones, social media, hospitals, and actually almost everywhere. The pre-test also asked the students to draw a picture showing what AI looks like. Even though 13% had responded that they did not know where to find AI in society, all of the students made drawings of either a laptop or a robot. Some students drew both. Two of the students included a picture of a brain in their drawings, a brain connected to a laptop. Examples of drawings are presented in Fig. 2.
Data from the focus group interviews showed that the students had cognitive perceptions of what AI is similar to those in the pre-test. The students mentioned AI as an information tool that could be found both through Google and ChatGPT. They also mentioned Snapchat and self-driven cars, referring in particular to the brand Tesla. The ideas of AI being a robot, or of it being found in computers, were also presented during the interviews. One example given was AI robots being in the service of people, for instance by doing the shopping if you are not able to go to the shop yourself because of illness. Some examples of students’ comments during the interviews on what AI is:
AI is not a real person that sits there and write when you chat. It is a robot. (I2)
Fig. 2 Three examples of drawings created by the students, illustrating their cognitive perceptions of AI
Well, AI is like Google, Google translate and Snapchat has an AI, but it is not so good.
(I7)
AI, it’s in self-driven cars, Musk you know, Tesla. (I8)
In addition, there were comments that referred to AI as a brain, a digital brain in computers, that is able to think for itself. One comment was that AI is more like a human; it can make up things on its own. These perceptions were more emphasised during the interviews than in the pre-test. The students also talked about AI as not having any feelings, as a difference compared to humans. Some examples from the students’ comments:
It’s [AI] like a human, it can make things up, on its own. (I5)
We have learnt a lot about how AI works. They [AIs] are very clever, but they don’t have any feelings. They don’t think about consequences. (I3)
It is like a brain, a digital one, inside a computer. It can think by itself. (I9)
The evaluation reports did not provide much information about the students’ perceptions of what AI is. Instead, the students mostly wrote things like “I have learnt a lot about AI”. Still, a few quotes will be presented as examples:
I have learnt that AI is much more than a robot. (E15)
I have learnt about programming and ChatGPT. (E20)
Summarising the students’ cognitive perceptions of AI, they described it both as a machine (such as robots, computers, phones and self-driven cars) and in terms of its functionalities. Additionally, there were notions of AI having or lacking human attributes. Some viewed AI as humanlike, suggesting it can think on its own or portraying it as a brain. On the other hand, there were comments about AI differing from humans, particularly in the perception that AI lacks any feelings.
Primary school students’ affective perceptions of AI
In the pre-test, students were prompted to articulate both positive and negative perspectives on AI. They were also asked to state whether they considered themselves predominantly positive or negative towards AI. Out of the 60 students, approximately 16% indicated uncertainty regarding what they found positive about it. The remaining students provided positive comments, highlighting AI’s capacity to assist in text writing, its accessibility, its utility in perilous situations, and its potential for facilitating learning.
In the focus group interviews, the students talked about positive aspects of AI based on how it can be used. The things mentioned were support in their studies and AI robots helping with practical things such as shopping. Two examples of comments:
If you are ill, and cannot go and buy food, an AI robot could do it for you. That’s good. (I6)
It is positive that it can help us in our studies, to practice before exams. (I7)
A comment from the evaluation reports was that:
It has been fun! I find it interesting and cool with AI. (E25)
All 60 students responded regarding negative aspects of AI in the pre-test. None of them wrote that they did not know, or that they could not think of any negative effects. The risks mentioned included, for instance, possibilities for bad use, that people could become lazy, and that AI could make mistakes.
During the interviews, the students discussed the risks more than the opportunities, and they mentioned negative aspects similar to those in the pre-test. However, they also added that they were afraid and found it a little bit scary that development is going too fast, and that AI could spy on people and even kill. Furthermore, they mentioned the risk of people losing their jobs, and that AI robots could develop and become mean. The last comment related to something they had seen in movies. Some examples of comments:
It feels like it is going a little bit too fast and there is a risk that we will become lazy.
(I2)
It is bad if it (AI) starts to spy on people. If it knows what you are thinking. Like for
instance, maybe you plan for a birthday gift, you want to keep that as a secret. You
want to keep your privacy. (I7)
First, you think that it could be good, like if an AI robot takes care of your old grandma. I don’t want that, it’s not personal, I want to visit and call my grandma myself. Otherwise, I would get a bad conscience. (I5)
I have seen movies when AI, or robots, take over the world. What if that really happens? If you ask an AI to save the environment and take away the cause of the problems, then it would probably kill us all. (I4)
The evaluation reports were mainly filled with comments from the students that they found the development of AI scary, and similar comments as during the interviews were found. One example:
It has been interesting to learn more about AI. Interesting, but also scary. It seems as if it is going very fast and people can lose their jobs and we don’t really know what is going to happen, and what if it becomes smarter than people? (E13)
Still, even though it seems as if the students were mainly negative about AI, when explicitly asked whether they were more in favor of, or more against, AI, 75% of the students reported in the pre-test that they were positive. Out of the 60 students, 13% did not know and 12% wrote that they were negative. During the interviews, the students were asked the same question, and one of the groups was, despite the risks they had talked about, positive. The other group reported that they felt more scared and were therefore negative, mostly towards the speed of the development.
Summarising the students’ affective perceptions of AI, the students were both positive and negative. They discussed positive aspects such as AI’s support in studies and practical tasks, but also expressed concerns about AI’s negative impacts, including fears of rapid development, job loss, privacy invasion, and potential harm.
Primary school students’ use of AI and their recommendations for the future
In the pre-test, 68% of the students reported having used some kind of AI, 2% did not know, and the remaining 30% responded negatively to this question. Throughout the project, students utilised ChatGPT in lessons about AI at school and when crafting stories at the university. In the story creation session, they also employed the AI tool Dall-E to generate pictures.
As previously mentioned, many students found it interesting and enjoyable to learn about AI and experiment with various tools. One student raised concerns about others’ behavior, specifically regarding the risk of using ChatGPT for cheating in school. However, students also expressed apprehensions about the future and called for regulations, which was highlighted in interviews and some evaluation reports. Additionally, students argued that they have limited influence on decisions. Two comments exemplify this sentiment:
Since students can cheat, they have started to forbid the use of ChatGPT, at least at the
school where my brother is studying. (I9)
There is not much we can do. There was some researcher who did not want to work with this anymore. He thought that things were going too fast. They should move slower. They should think about consequences before they become excited over what they can do. They should take it easy and calculate the risks. But, what can we do? Nothing. (I4)
Summarising aspects related to students’ use of AI, they were initially exploring various AI tools. Notably, students discussed future behavior and the necessity for regulations, primarily to slow down and contemplate consequences.
Discussion
In this study, primary school students’ perceptions of AI have been presented, both cognitive and affective, as well as how they have started to use some tools, primarily ChatGPT, hence behavioral aspects. The theoretical framework, based on Mitcham’s (1994) philosophical framework of technology, set the foundation for analysing students’ perceptions, with the modification made in Ankiewicz’s (2019) expanded model. In addition, the AI literacy components presented by Ng et al. (2021) have been included as part of the analyses, which will be elaborated on further here in the discussion section.
In terms of cognition, primary school students displayed an awareness of AI, perceiving it both as a machine (including robots, computers, phones, and self-driven cars) and as a tool applicable in various situations. This aligns not only with the technological aspects outlined in Mitcham’s (1994) framework but also corresponds to the adjusted model proposed by Ankiewicz (2019). When viewed through the lens of AI literacy (Ng et al., 2021), the cognitive aspect is deemed synonymous with comprehending how AI functions. While students in this study indicated learning about AI, it cannot be definitively asserted that they have grasped its operational mechanisms, as such insights are not explicitly evident in the collected data. Still, it might be argued that steps have been taken for students to start learning about AI, which previous studies have suggested is one important aspect in education (Holmes et al., 2019; Yang, 2022).
In terms of the affective perceptions, a nuanced perspective was revealed, with students expressing both positive and negative sentiments towards AI. Positive aspects included AI’s support in studies and practical tasks, aligning with public perceptions reported by the World Economic Forum (2022). The students’ negative perceptions encompassed concerns about rapid development, job loss, privacy invasion and potential harm. These perceptions were also in line with findings reported in previous studies (Hick & Ziefle, 2022; Smith & Anderson, 2014). For instance, some students referred to science fiction movies and wondered whether this was something that could really happen in the future if AI were to take over. In total, the results showed the emotional complexity and the multifaceted nature of students’ affective perceptions of AI.
When analysing the affective perceptions through the lens of AI literacy (Ng et al., 2021), I make connections with the ethical perspective proposed by Ng and colleagues. However, the ethical perspective is also connected to the cognitive domain, just as the affective domain is affected by the cognitive. The dimensions are intertwined. Lack of knowledge may also cause worries, and this can influence decisions on ethical aspects.
Therefore, as argued by, for instance, Brauner et al. (2023), education about AI is necessary to enable people to evaluate the benefits and barriers of AI. To be able to evaluate and create AI, even more education is needed, and this was not part of the lessons the primary school students in this project faced.
However, the students were able to use some AI tools, which I interpreted as being part of the behavioral component in Ankiewicz’s (2019) model and as being able to use and apply AI as suggested in the AI literacy basic foundation (Ng et al., 2021). In addition to the actual use, the students in this study also made suggestions for policy makers and AI developers, hence suggestions for other people’s behavior, as the students emphasised the need for regulations, stressing the importance of responsible AI use. In this respect, steps have started to be taken, for instance by the European Union (2023). In the future, policy makers and AI developers should perhaps also listen to the voices of children and take into account the Convention on the Rights of the Child (United Nations, 1989), recommendations from previous research (Shier, 2001), and UNICEF (2023). AI will most certainly impact children in many ways.
Limitations and conclusions
This study took place in Sweden, with only a small number of primary school students. It could be argued that the data collection is missing a post-test to elaborate on what the students learned during the project. However, as earlier stated, practical issues made this impossible to conduct. Still, the idea was not to evaluate what kind of knowledge the students developed during the project, but to use all collected data to create an overall picture of the students’ perceptions of AI. As a researcher, I did not participate during the lessons, except for the ones taking place at the university. This is a limitation, since I cannot say if, or how, the students were influenced by their teachers. In a future study, this limitation should be considered, and researchers should also observe what is going on during the whole process to be able to identify factors that may influence the students. On the other hand, there are other factors to consider as well. Are the children talking about AI at home? What are their parents saying? Still, this study can serve as a contribution of knowledge about students’ perceptions and use of AI. Overall, the study contributes valuable insights into primary school students’ perceptions and use of AI, providing a basis for further exploration of AI literacy in educational contexts.
Appendix I
Questions for the focus group interviews
1. I know that you have worked with a project about AI, can you please tell me about it?
What have you been doing?
2. What have you learnt about AI? What is it and how does it work?
3. How did you feel about working with the project?
4. How did you feel about AI?
5. Can you tell me more about things you find positive with AI?
6. Can you tell me more about things you find negative with AI?
7. What do you think about AI and the future? How should we use it? Or, should we avoid
using it?
8. Do you use any kind of AI yourself, if so, what do you use and for what purpose?
9. Anything else you would like to tell me about the project or AI?
Acknowledgements Thank you to the primary school students who volunteered to be part of this study.
Acknowledgements also to the teachers who collaborated and worked with lessons about AI as well as with
AI during the project.
Funding No external funding for this project.
Open access funding provided by Karlstad University.
Data availability Not applicable.
Code availability Not applicable.
Declarations
Conict of interest The author declares that there is no conict of interest.
Ethical approval and informed consent All procedures performed with human subjects were in accordance with the ethical standards of the Swedish Research Council (SRC). Informed consent was obtained from all participants in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons
licence, and indicate if changes were made. The images or other third party material in this article are
included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article’s Creative Commons licence and your intended use is not permitted
by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the
copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Anders, B. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking? Patterns. Cell Press. https://doi.org/10.1016/j.patter.2023.100694
Ankiewicz, P. (2019). Alignment of the traditional approach to perceptions and attitudes with Mitcham’s philosophical framework of technology. International Journal of Technology and Design Education, 29, 329–340. https://doi.org/10.1007/s10798-018-9443-6
Audrin, C., & Audrin, B. (2022). Key factors in digital literacy in learning and education: A systematic literature review using text mining. Education and Information Technologies, 27, 7395–7419. https://doi.org/10.1007/s10639-021-10832-5
Blom, N., & Abrie, A. L. (2021). Students’ perceptions of the nature of technology and its relationship with science following an integrated curriculum. International Journal of Science Education, 43(11), 1726–1745. https://doi.org/10.1080/09500693.2021.1930273
Brauner, P., Hick, A., Philipsen, R., & Ziefle, M. (2023). What does the public think about artificial intelligence? A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science, 5. https://doi.org/10.3389/fcomp.2023.1113903
Cismariu, L., & Gherhes, V. (2019). Artificial intelligence, between opportunity and challenge. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 10(4), 40–55. https://doi.org/10.18662/brain/04
Corea, F. (2019). AI knowledge map: How to classify AI technologies. In An introduction to data (pp.
25–29). (Vol. 50 of Studies in Big Data). Springer, Cham. https://doi.org/10.1007/978-3-030-04468-8_4
European Council (EC) (2023). Artificial intelligence act: Council and parliament strike a deal on the first rules for AI in the world. Retrieved December 16, 2023, from https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
Government United Kingdom (2023). AI safety summit 2023. Retrieved December 16, 2023, from https://
www.gov.uk/government/topical-events/ai-safety-summit-2023
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14. https://doi.org/10.1177/0008125619864925
Hick, A., & Ziefle, M. (2022). A qualitative approach to the public perception of AI. International Journal on Cybernetics & Informatics (IJCI), 11(4), 1–17. https://doi.org/10.5121/ijci.2022.110401
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education. Center for Curriculum Redesign. Retrieved December 16, 2023, from https://curriculumredesign.org/wp-content/uploads/AIED-Book-Excerpt-CCR.pdf
Jia, F., Sun, D., & Looi, C. (2024). Artificial intelligence in science education (2013–2023): Research trends in ten years. Journal of Science Education and Technology, 33, 94–117. https://doi.org/10.1007/s10956-023-10077-6
Kulida, E., & Lebedev, V. (2020). About the use of artificial intelligence methods in aviation. In 13th International conference on management of large-scale system development (MLSD), 1–5. Retrieved April 4, 2024, from https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9247822&casa_token=c8t2OOc7wLMAAAAA:LGacxrsWI3sNCoU-TfAMoe3L5sl2rOlU97xUwilDHysI8P9sDUBkxIscAp2EXyh3IKmINXsK-a0&tag=1
Liu, K., & Tao, D. (2022). The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Computers in Human Behavior, 127, 107026. https://doi.org/10.1016/j.chb.2021.107026
Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon Books.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
Mitcham, C. (1994). Thinking through technology: The path between engineering and philosophy. The University of Chicago Press.
Ng, D. T. K., Leung, J. K. L., Chu, K. W. S., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509. https://doi.org/10.1002/pra2.487
Onnasch, L., & Roesler, E. (2020). A taxonomy to structure and analyze human–robot interaction. International Journal of Social Robotics, 13, 833–849. https://doi.org/10.1007/s12369-020-00666-5
Potts, C., Ennis, E., Bond, R., Mulvenna, M., McTear, M., Boyd, K., Broderick, T., Malcolm, M., Kuosmanen, L., Nieminen, H., Vartiainen, A-K., Kostenius, C., Cahill, B., Vakaloudis, A., McConvey, G., & O’Neill, S. (2021). Chatbots to support mental wellbeing of people living in rural areas: Can user groups contribute to co-design? Journal of Technology in Behavioral Science. https://doi.org/10.1007/s41347-021-00222-6
Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Prentice Hall.
Russell Group (2023). Russell Group principles on the use of generative AI tools in education. https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
Shewale, R. (2023). ChatGPT statistics: Detailed insights on users (2023). Demandsage. https://www.demandsage.com/chatgpt-statistics/
Shier, H. (2001). Pathways to participation: Openings, opportunities and obligations. Children & Society, 15, 107–117. https://doi.org/10.1002/chi.617
Smith, A., & Anderson, J. (2014). AI, robotics, and the future of jobs. Pew Research Center. https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/
Su, X., & Ding, B. A. (2022). A phenomenographic study of Chinese primary school students’ conceptions
about technology. International Journal of Technology and Design Education. https://doi.org/10.1007/
s10798-022-09742-5
Su, J., & Yang, W. (2022). Artificial intelligence in early childhood education: A scoping review. Computers and Education: Artificial Intelligence, 3, 100049. https://doi.org/10.1016/j.caeai.2022.100049
Svenningsson, J. (2020). The Mitcham score: Quantifying students’ descriptions of technology. International Journal of Technology and Design Education, 30, 995–1014. https://doi.org/10.1007/s10798-019-09530-8
Swedish Research Council (2017). Good research practice. Retrieved December 16, 2023, from https://
www.vr.se/english/analysis/reports/ourreports/2017-08-31-good-research-practice.html
Tlili, A., Shehata, B., Agyemang Adarkwah, M., Bozkurt, A., Hickey, D. T., Huang, R., & Brighter Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(15), 1–24. https://doi.org/10.1186/s40561-023-00237-x
Tinmaz, H., Lee, Y. T., Fanea-Ivanovici, M., & Baber, H. (2022). A systematic review on digital literacy.
Smart Learning Environments, 9(1), 1–18. https://doi.org/10.1186/s40561-022-00204-y
UNESCO (2023). Guidance for generative AI in education and research. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
UNICEF (2023). Children and AI. Where are the opportunities and risks? Retrieved December 16, 2023, from https://www.unicef.org/innovation/sites/unicef.org.innovation/files/2018-11/Children and AI_Short Verson %283%29.pdf
United Nations (1989). Convention on the rights of the child. Retrieved December 16, 2023, from https://
www.ohchr.org/en/instruments-mechanisms/instruments/convention-rights-child
World Economic Forum (2022). 5 charts that show what people around the world think about AI. Retrieved December 16, 2023, from https://www.weforum.org/agenda/2022/01/artificial-intelligence-ai-technology-trust-survey/
World Economic Forum (2023). These are the jobs most likely to be lost – and created because of AI.
Retrieved December 16, 2023, from https://www.weforum.org/agenda/2023/05/jobs-lost-created-ai-gpt/
Yang, W. (2022). Artificial intelligence education for young children: Why, what, and how in curriculum design and implementation. Computers and Education: Artificial Intelligence, 3, 100061. https://doi.org/10.1016/j.caeai.2022.100061
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1.
2.
3.
4.
5.
6.
Terms and Conditions
Springer Nature journal content, brought to you courtesy of Springer Nature Customer Service Center
GmbH (“Springer Nature”).
Springer Nature supports a reasonable amount of sharing of research papers by authors, subscribers
and authorised users (“Users”), for small-scale personal, non-commercial use provided that all
copyright, trade and service marks and other proprietary notices are maintained. By accessing,
sharing, receiving or otherwise using the Springer Nature journal content you agree to these terms of
use (“Terms”). For these purposes, Springer Nature considers academic use (by researchers and
students) to be non-commercial.
These Terms are supplementary and will apply in addition to any applicable website terms and
conditions, a relevant site licence or a personal subscription. These Terms will prevail over any
conflict or ambiguity with regards to the relevant terms, a site licence or a personal subscription (to
the extent of the conflict or ambiguity only). For Creative Commons-licensed articles, the terms of
the Creative Commons license used will apply.
We collect and use personal data to provide access to the Springer Nature journal content. We may
also use these personal data internally within ResearchGate and Springer Nature and as agreed share
it, in an anonymised way, for purposes of tracking, analysis and reporting. We will not otherwise
disclose your personal data outside the ResearchGate or the Springer Nature group of companies
unless we have your permission as detailed in the Privacy Policy.
While Users may use the Springer Nature journal content for small scale, personal non-commercial
use, it is important to note that Users may not:
use such content for the purpose of providing other users with access on a regular or large scale
basis or as a means to circumvent access control;
use such content where to do so would be considered a criminal or statutory offence in any
jurisdiction, or gives rise to civil liability, or is otherwise unlawful;
falsely or misleadingly imply or suggest endorsement, approval , sponsorship, or association
unless explicitly agreed to by Springer Nature in writing;
use bots or other automated methods to access the content or redirect messages
override any security feature or exclusionary protocol; or
share the content in order to create substitute for Springer Nature products or services or a
systematic database of Springer Nature journal content.
In line with the restriction against commercial use, Springer Nature does not permit the creation of a
product or service that creates revenue, royalties, rent or income from our content or its inclusion as
part of a paid for service or for other commercial gain. Springer Nature journal content cannot be
used for inter-library loans and librarians may not upload Springer Nature journal content on a large
scale into their, or any other, institutional repository.
These terms of use are reviewed regularly and may be amended at any time. Springer Nature is not
obligated to publish any information or content on this website and may remove it or features or
functionality at our sole discretion, at any time with or without notice. Springer Nature may revoke
this licence to you at any time and remove access to any copies of the Springer Nature journal content
which have been saved.
To the fullest extent permitted by law, Springer Nature makes no warranties, representations or
guarantees to Users, either express or implied with respect to the Springer nature journal content and
all parties disclaim and waive any implied warranties or warranties imposed by law, including
merchantability or fitness for any particular purpose.
Please note that these rights do not automatically extend to content, data or other material published
by Springer Nature that may be licensed from third parties.
If you would like to use or distribute our Springer Nature journal content to a wider audience or on a
regular basis or in any other manner not expressly permitted by these Terms, please contact Springer
Nature at
onlineservice@springernature.com
... Artificial Intelligence (AI) is a branch of computer science that enables machines to mimic human intelligence in completing specific tasks (Rathore et al., 2023). AI encompasses machine learning, natural language processing, and computer vision, allowing systems to learn from data, recognize patterns, and make decisions automatically (Walan, 2024). Various tasks previously performed by humans can now be replaced by AI, particularly repetitive tasks such as industrial production, chatbot-based customer service, and data analysis (Chamunyonga et al., 2020;Henry et al., 2021). ...
... Artificial Intelligence (AI) in education encompasses various technologies designed to enhance the effectiveness of learning and educational management, such as Augmented Reality (AR) and Virtual Reality (VR), which provide interactive and immersive learning experiences; Learning Management Systems (LMS), which assist in managing materials and learning evaluations; and Adaptive Learning (AL), which adjusts materials based on individual student needs. Additionally, Intelligent Tutoring System (ITS) enables automated learning guidance, Natural Language Processing (NLP) supports students' educational needs with chatbots and virtual assistants, while Automated Assessment Systems accelerate the evaluation process through automated analysis (Rathore et al., 2023;Walan, 2024). ...
... While the advantages of AI are widely acknowledged, there remains a notable need for more thorough research focused on its application in primary education. Current studies predominantly emphasize higher education and professional training, which means that the exploration of AI's impact on primary school learning is an area that warrants further investigation (Chamunyonga et al., 2020;Delgado et al., 2020;Walan, 2024). Consequently, it's clear that a deeper investigation is necessary to fully grasp AI's effectiveness, the challenges it poses, and the opportunities it can provide within the realm of primary education. ...
Article
Full-text available
This study aims to analyze the trends, opportunities, and challenges of implementing Artificial Intelligence (AI) in primary education based on a systematic literature review. The research method employed the Systematic Literature Review (SLR) approach with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Data collection techniques involved searching references from reputable journal databases such as Scopus, Web of Science, ERIC, Google Scholar, IEEE Xplore, Taylor & Francis Online, and Wiley Online Library, resulting in 45 studies (2016-2024) selected based on relevance to AI in primary education. The study results indicate that (1) AI trends in primary education are dominated by Augmented Reality (AR), Virtual Reality (VR), Learning Management Systems (LMS), and Adaptive Learning Systems; (2) Opportunities for AI in primary education include the development of more effective and measurable AI technology innovations and their widespread application to improve student learning outcomes; and (3) Challenges in implementing AI in primary education include technological disparities, high costs, uneven infrastructure, lack of teacher training, and insufficient technological and budget support, and student data privacy. The study concludes that AI in primary education has the potential to enhance learning quality through AR, VR, LMS, and Adaptive Learning, yet still faces challenges related to technological disparities, costs, infrastructure, and teacher training. This study offers insights for policymakers and educators to strengthen teacher competence, improve infrastructure, and ensure equitable AI access in schools.
... Although scientific interest in the field of artificial intelligence in education has been increasing in recent years, it has been observed that very little research has been conducted on artificial intelligence in the context of primary school students (Martínez-Comesaña et al., 2023;Walan, 2024;Zawacki-Richter et al., 2019). It is thought that each study in this field will be an important step in contributing to the accumulation of knowledge in the relevant Oruç,Korkmaz,& Kurt 584 literature, informing the public about current technologies and evaluating the potential benefits and risks of these technologies. ...
... Student feedback can help to better design AI-supported educational tools and better adapt to student needs. Therefore, addressing student views on artificial intelligence is necessary for the effective use of this technology in education (Walan, 2024). ...
... When the studies on the use of artificial intelligence at the primary school level are examined, it is noteworthy that the studies dealing with student views are quite limited (Walan, 2024). It is thought that every study to be conducted for this purpose will be an important step in terms of contributing to the accumulation of knowledge in the relevant literature and integrating artificial intelligence technologies into primary school education in a healthy way. ...
Article
Full-text available
The aim of this study is to examine primary school students' views on artificial intelligence. Phenomenology design, one of the qualitative research methods, was used in the study. The study was conducted with 25 fourth grade students. The participants of the study were determined using the criterion sampling method, one of the purposeful sampling methods. The data were collected using a structured interview form and content analysis technique was applied to analyze the data. The results of the study showed that primary school students generally associate AI with technology, science, education, art and daily life. Students define artificial intelligence as human-designed robots and tools that provide information and help in every field. Stating that artificial intelligence has both positive and negative effects, the students emphasized knowledge acquisition and increasing creativity among the positive effects, while they expressed health problems, ethical and privacy concerns among the negative effects. They also stated that the use of AI in the classroom supports learning and development but can create reliability and ethical issues. It is recommended to conduct studies evaluating the long-term effects of artificial intelligence and to better understand the perceptions of artificial intelligence of students in different age groups.
... The studies included are Aure and Cuenca (2024; S1), Qureshi (2023; S2), Valový and Buchalcevova (2023;S3), and Walan (2024;S4). The studies use several GAI tools, mainly ChatGPT, to enhance learning. ...
... By offering a non-judgmental and supportive interface, GAI tools encourage students to express their concerns freely, facilitating a safe space for learning without fear of negative consequences. Moreover, Walan (2024) notes that GAI tools offer emotional support by recognizing and responding to signs of distress or disengagement among students. ...
... These shortcomings can lead to responses that are either irrelevant or incorrect, disrupting the learning process and potentially causing confusion among learners. Furthermore, Valový and Buchalcevova (2023) and Walan (2024) highlighted critical gaps in GAI's personalization and emotional intelligence. Valový and Buchalcevova (2023) discuss how, despite being designed to adapt to individual learning profiles, GAI tools fail to deliver a genuinely personalized learning experience, especially in recognizing and adapting to each learner's unique emotional and cognitive states. ...
Article
Full-text available
Despite advances in educational technology, the specific ways in which Generative Artificial Intelligence (GAI) and Large Language Models cater to learners’ nuanced cognitive and emotional needs are not fully understood. This mini-review methodically describes GAI’s practical implementations and limitations in meeting these needs. It included journal and conference papers from 2019 to 2024, focusing on empirical studies that employ GAI tools in educational contexts while addressing their practical utility and ethical considerations. The selection criteria excluded non-English studies, non-empirical research, and works published before 2019. From the dataset obtained from Scopus and Web of Science as of June 18, 2024, four significant studies were reviewed. These studies involved tools like ChatGPT and emphasized their effectiveness in boosting student engagement and emotional regulation through interactive learning environments with instant feedback. Nonetheless, the review reveals substantial deficiencies in GAI’s capacity to promote critical thinking and maintain response accuracy, potentially leading to learner confusion. Moreover, the ability of these tools to tailor learning experiences and offer emotional support remains limited, often not satisfying individual learner requirements. The findings from the included studies suggest limited generalizability beyond specific GAI versions, with studies being cross-sectional and involving small participant pools. Practical implications underscore the need to develop teaching strategies leveraging GAI to enhance critical thinking. There is also a need to improve the accuracy of GAI tools’ responses. Lastly, deep analysis of intervention approval is needed in cases where GAI does not meet acceptable error margins to mitigate potential negative impacts on learning experiences.
... When these studies are examined, it can be said that there are studies involving stakeholders at the primary school level such as children, classroom teachers, and parents. These include studies examining metaphorical perceptions of artificial intelligence (Aktaş, 2021; Demirtaş & Türksoy, 2023; Kalemkuş & Kalemkuş, 2025; Saçan, Tozduman Yaralı & Kavruk, 2022), an experimental study on teaching AI-themed socioscientific issues (Soydemir Bor & Alkış Küçükaydın, 2021), a mixed-methods study investigating teachers' concerns about artificial intelligence (Özdemir, 2023), studies on classroom and subject teachers (Avcı, Kula & Haşlaman, 2019; Fissore, Floris, Conte & Sacchet, 2024; Özer, Sancar Yazıcı, Akgül & Yıldırım, 2023; Parlak, Yurdakul, Kalender & Üngör, 2023; Küçükkara, Ünal & Sezer, 2024; Şanlı, Ateş, Bayburtlu, Bektaş & Özdemir, 2023), a study on school principals and teachers (Demir Dülger & Gümüşeli, 2023), studies on primary school students' perceptions of artificial intelligence (Chai, Lin, Jong, Dai, Chiu & Qin, 2021; Lin, Chai, Jong, Dai, Guo & Qin, 2021; Oruç, Korkmaz & Kurt, 2024; Walan, 2024), a study on remedial applications of artificial intelligence in lessons (Kotsis, 2024), studies on primary school students (Chen, Qiu, Li, Zhang, Wu, Zeng & Huang, 2023; Pardamean, Suparyanto, Cenggoro, Sudigyo & Anugrahana, 2022), and a quantitative study on classroom teachers' attitudes toward artificial intelligence (Aksakal, Emre & Özbek, 2024). ...
Article
The aim of this study is to reveal classroom teachers' perspectives on integrating artificial intelligence into their lessons, whether they use artificial intelligence, and, if so, how they use it. The study was conducted using phenomenology, one of the qualitative research designs. The study group consists of 13 classroom teachers working in the province of Afyonkarahisar during the 2023-2024 academic year. The study group was determined using criterion sampling and snowball sampling, both purposeful sampling techniques. Semi-structured interview questions prepared by the researchers were used to collect the data, which were analysed using content analysis. The results showed that more than half of the teachers competent in technology had used artificial intelligence at least once, and that teachers who used it in their classrooms held both positive and negative views. When the classroom practices of teachers who used artificial intelligence in the teaching-learning process were examined, it was concluded that they concentrated mostly on designing presentations, or having students design them. All teachers stated that they planned to make use of AI in the future. It was concluded that artificial intelligence benefits both teachers and students but also gives rise to concerns and problems among teachers. Based on the interviews, it was suggested that teachers feel unfamiliar with artificial intelligence and that in-service training on the subject should be provided.
... Stating that the idea of creating intelligent machines and artificial intelligence dates back to the 14th century, Humble and Mozelius (2019) emphasise that the field of artificial intelligence in education has had a significant impact over the last 25 years. Walan (2024) argues that debates on artificial intelligence intensified in late 2022 and affected various segments of society. Liang (2020) defines artificial intelligence as "a collection of information technologies based on machine learning" and states that its application in the field of education is gaining ground and will have a profound impact on education reform. ...
... Another important challenge is the taboo that still surrounds the use of AI in education. For certain sectors of society, the use of AI in education is perceived as impersonal and potentially harmful to students' cognitive and social development (Walan, 2024; Sanusi et al., 2024). This stigma is fuelled by a mistaken perception of AI as a cold, dehumanised technology that could strip the teaching-learning process of its human and emotional component. ...
Chapter
Full-text available
In recent years, Artificial Intelligence (AI) has transformed a vast range of disciplines, and education has not been immune to this disruption. In the specific field of teaching English as a foreign language, the deployment of advanced technologies has given rise to more personalised, adaptive, and efficient approaches. For this reason, this chapter aims to reflect theoretically on the potential of AI in teaching English as a foreign language, analysing its benefits, the challenges it poses for educators and students, and the future horizon of this technology in language teaching. The applications of AI in English teaching have revolutionised learning methods and possibilities, providing an educational experience that responds to students' individual needs in a personalised, immersive, and effective way. Despite the many benefits AI offers for English teaching, its implementation faces a series of significant challenges: the lack of adequate infrastructure and technological resources in schools poses a critical obstacle. The integration of AI-based tools requires not only devices such as computers and tablets, but also robust and stable connectivity that allows the use of online platforms and advanced applications.
... The emotional complexity and multifaceted nature of affective perceptions of AI have also been observed in studies involving students aged 11-12 years. These perceptions include positive emotions, associated with the perceived support provided by AI in learning and completing tasks, as well as negative emotions, stemming from concerns about AI's potential negative impacts -such as fears related to its rapid development, job loss, and privacy issues (Walan, 2024). For this reason, it is essential that teachers themselves are well-informed and pedagogically equipped to address their own dilemmas and concerns regarding this all-pervasive topic. ...
Article
Full-text available
The contemporary educational paradigm, which brings learning outcomes and competencies to the foreground, puts special emphasis on digital competencies. The relevance of their development is visible in a series of strategies and initiatives at the global and national level. The application of AI and robotics poses a number of pedagogical challenges to teachers, with the use of robots in education being one of the latest trends. The paper discusses the perceptions of students of the University of Belgrade's Faculty of Education (Serbia) about robots. The aim of the research was to determine how future preschool and primary school teachers perceive robots and their pedagogical implications, in order to create opportunities for improving teaching on the use of robots in an educational setting. Students perceive robots in two dominant functions: educational and assistive. A statistically significant difference in the attitudes of future preschool and primary school teachers was observed regarding the reasons for choosing the robot they drew. Preschool teachers gave primacy to the cognitive domain, while primary school teachers found it difficult to judge which domain was dominant. Misconceptions about robots were observed among some of the respondents, and these were further analyzed. The most dominant function of the robot was the educational one, and its predominant appearance was in animal form. Most of the respondents did not draw elements that would indicate emotions in the depicted robots. However, the drawings of robots in animal form included clear positive emotions. The obtained results can be a significant predictor of the way in which future preschool and primary school teachers will use robots in their teaching and educational work with children and students. They can also give professors at faculties of education useful guidelines for modifying the syllabuses used for building students' digital competencies.
... Despite the valuable insights, significant research gaps remain, suggesting areas for further exploration. Notably, there is a lack of longitudinal studies on how privacy attitudes evolve as AI plays a larger role in daily life, necessitating long-term research [49], [50]. Additionally, more research on cultural comparisons is needed to understand global perspectives on AI and privacy, which is crucial for developing inclusive privacy policies [51], [52]. ...
Preprint
Full-text available
This systematic literature review investigates perceptions, concerns, and expectations of young digital citizens regarding privacy in artificial intelligence (AI) systems, focusing on social media platforms, educational technology, gaming systems, and recommendation algorithms. Using a rigorous methodology, the review started with 2,000 papers, narrowed down to 552 after initial screening, and finally refined to 108 for detailed analysis. Data extraction focused on privacy concerns, data-sharing practices, the balance between privacy and utility, trust factors in AI, transparency expectations, and strategies to enhance user control over personal data. Findings reveal significant privacy concerns among young users, including a perceived lack of control over personal information, potential misuse of data by AI, and fears of data breaches and unauthorized access. These issues are worsened by unclear data collection practices and insufficient transparency in AI applications. The intention to share data is closely associated with perceived benefits and data protection assurances. The study also highlights the role of parental mediation and the need for comprehensive education on data privacy. Balancing privacy and utility in AI applications is crucial, as young digital citizens value personalized services but remain wary of privacy risks. Trust in AI is significantly influenced by transparency, reliability, predictable behavior, and clear communication about data usage. Strategies to improve user control over personal data include access to and correction of data, clear consent mechanisms, and robust data protection assurances. The review identifies research gaps and suggests future directions, such as longitudinal studies, multicultural comparisons, and the development of ethical AI frameworks.
... The results show that students perceive that AI can be used effectively in the teaching-learning process and in academic management processes, but should not be used in exam- and placement-related processes. Walan's (2024) study focused on Swedish primary school students aged 11-12 and examined their cognitive and emotional perceptions of AI and their current use of it. ...
Article
Full-text available
The main goal of this study is to reveal, through metaphors, gifted primary school students' perceptions of artificial intelligence, one of the popular concepts of recent times. The study used the phenomenological design, which falls within the scope of qualitative research, and was conducted with gifted primary school students receiving education at a Science and Art Center in Türkiye. The 104 gifted primary school students participating in the research were 9-14 years old; 53 were boys and 51 were girls. The participants were in the 3rd, 4th, 5th, 6th, 7th, and 8th grades of primary education. The purposive sampling method was chosen for the research. When the students' answers were examined, a total of 36 different categories emerged from the relevant metaphors. Detailed examination of these categories shows that the concept of artificial intelligence is represented by different metaphors. According to the findings, primary school students likened the concept of artificial intelligence to different metaphors such as a robot, a smiling robot, and a robot vacuum cleaner. As a result, it was revealed that gifted students generally hold positive opinions about the concept of artificial intelligence.
Article
This study investigates student perceptions of artificial intelligence (AI) implementation and its implications for academic integrity within Kazakhstan’s higher education system. Through a quantitative survey methodology, data was collected from 840 undergraduate students across three major Kazakhstani universities during May 2024. The research examined patterns of AI usage, ethical considerations, and attitudes toward academic integrity in the context of emerging AI technologies. The findings reveal widespread AI adoption among students, with 90% familiar with ChatGPT and 65% utilizing AI tools at least weekly for academic purposes. Primary applications include essay writing (35%), problem-solving (25%), and idea generation (18%). Notably, while 57% of respondents perceived no significant conflict between AI usage and academic integrity principles, 96% advocated for establishing clear institutional policies governing AI implementation. The study situates these findings within Kazakhstan’s broader AI development strategy, particularly the AI Development Concept 2024-2029, while drawing comparisons with international regulatory frameworks from the United States, China, and the European Union. The research concludes that effective integration of AI in higher education requires balanced regulatory approaches that promote innovation while preserving academic integrity standards.
Article
Full-text available
The use of artificial intelligence has played an important role in science teaching and learning. The purpose of this study was to fill a gap in the current review of research on AI in science education (AISE) in the early stage of education by systematically reviewing existing research in this area. This systematic review examined the trends and research foci of AI in the science of early stages of education. This review study employed a bibliometric analysis and content analysis to examine the characteristics of 76 studies on Artificial Intelligence in Science Education (AISE) indexed in Web of Science and Scopus from 2013 to 2023. The analytical tool CiteSpace was utilized for the analysis. The study aimed to provide an overview of the development level of AISE and identify major research trends, keywords, research themes, high-impact journals, institutions, countries/regions, and the impact of AISE studies. The results, based on econometric analyses, indicate that AISE has experienced increasing influence over the past decade. Cluster and timeline analyses of the retrieved keywords revealed that AI in primary and secondary science education can be categorized into 11 main themes, and the chronology of their emergence was identified. Among the most prolific journals in this field are the International Journal of Social Robotics, Educational Technology Research and Development, and others. Furthermore, the analysis identified that institutions and countries/regions located primarily in the United States have made the most significant contributions to AISE research. To explore the learning outcomes and overall impact of AI technologies on learners in primary and secondary schools, content analysis was conducted, identifying five main categories of technology applications. This study provides valuable insights into the advancements and implications of AI in science education at the primary and secondary levels.
Article
Full-text available
Introduction: Artificial Intelligence (AI) has become ubiquitous in medicine, business, manufacturing and transportation, and is entering our personal lives. Public perceptions of AI are often shaped either by admiration for its benefits and possibilities, or by uncertainties, potential threats and fears about this opaque and perceived as mysterious technology. Understanding the public perception of AI, as well as its requirements and attributions, is essential for responsible research and innovation and enables aligning the development and governance of future AI systems with individual and societal needs. Methods: To contribute to this understanding, we asked 122 participants in Germany how they perceived 38 statements about artificial intelligence in different contexts (personal, economic, industrial, social, cultural, health). We assessed their personal evaluation and the perceived likelihood of these aspects becoming reality. Results: We visualized the responses in a criticality map that allows the identification of issues that require particular attention from research and policy-making. The results show that the perceived evaluation and the perceived expectations differ considerably between the domains. The aspect perceived as most critical is the fear of cybersecurity threats, which is seen as highly likely and least liked. Discussion: The diversity of users influenced the evaluation: People with lower trust rated the impact of AI as more positive but less likely. Compared to people with higher trust, they consider certain features and consequences of AI to be more desirable, but they think the impact of AI will be smaller. We conclude that AI is still a “black box” for many. Neither the opportunities nor the risks can yet be adequately assessed, which can lead to biased and irrational control beliefs in the public perception of AI. The article concludes with guidelines for promoting AI literacy to facilitate informed decision-making.
Article
Full-text available
The recent emergence of ChatGPT has led to multiple considerations and discussions regarding the ethics and usage of AI. In particular, the potential exploitation in the educational realm must be considered, future-proofing curriculum for the inevitable wave of AI-assisted assignments. Here, Brent Anders discusses some of the key issues and concerns.
Article
Full-text available
Artificial Intelligence (AI) technologies have been progressing constantly and being more visible in different aspects of our lives. One recent phenomenon is ChatGPT, a chatbot with a conversational artificial intelligence interface that was developed by OpenAI. As one of the most advanced artificial intelligence applications, ChatGPT has drawn much public attention across the globe. In this regard, this study examines ChatGPT in education, among early adopters, through a qualitative instrumental case study. Conducted in three stages, the first stage of the study reveals that the public discourse in social media is generally positive and there is enthusiasm regarding its use in educational settings. However, there are also voices who are approaching cautiously using ChatGPT in educational settings. The second stage of the study examines the case of ChatGPT through lenses of educational transformation, response quality, usefulness, personality and emotion, and ethics. In the third and final stage of the study, the investigation of user experiences through ten educational scenarios revealed various issues, including cheating, honesty and truthfulness of ChatGPT, privacy misleading, and manipulation. The findings of this study provide several research directions that should be considered to ensure a safe and responsible adoption of chatbots, specifically ChatGPT, in education.
Article
Full-text available
Since the Dartmouth workshop on Artificial Intelligence coined the term, AI has been a topic of ever-growing scientific and public interest. Understanding its impact on society is essential to avoid potential pitfalls in its applications. This study employed a qualitative approach to focus on the public’s knowledge of, and expectations for AI. We interviewed 25 participants in 30-minute interviews over a period of two months. In these interviews we investigated what people generally know about AI, what advantages and disadvantages they expect, and how much contact they have had with AI or AI based technology. Two main themes emerged: (1) a dystopian view about AI (e.g., “the Terminator”) and (2) an exaggerated or utopian attitude about the possibilities and abilities of AI. In conclusion, there needs to be accurate information, presentation, and education on AI and its potential impact in order to manage the expectations and actual capabilities.
Article
Full-text available
The purpose of this study is to discover the main themes and categories of the research studies regarding digital literacy. To serve this purpose, the databases of WoS/Clarivate Analytics, Proquest Central, Emerald Management Journals, Jstor Business College Collections and Scopus/Elsevier were searched with four keyword-combinations and final forty-three articles were included in the dataset. The researchers applied a systematic literature review method to the dataset. The preliminary findings demonstrated that there is a growing prevalence of digital literacy articles starting from the year 2013. The dominant research methodology of the reviewed articles is qualitative. The four major themes revealed from the qualitative content analysis are: digital literacy, digital competencies, digital skills and digital thinking. Under each theme, the categories and their frequencies are analysed. Recommendations for further research and for real life implementations are generated.
Article
Full-text available
Although some research has been conducted on students’ conceptions of technology, little of it has examined primary school students’ conceptions of technology in China. This research investigated Chinese primary school students’ (aged 9–12) conceptions of technology as regards their understanding of (a) the concept of technology, (b) the impact of technology on human life and nature, and (c) the relationship between technology and science. Phenomenography was adopted as the methodological framework for this study. A total of 63 primary school students were chosen as participants to probe their conceptions of technology through picture/photo eliciting activities and semi-structured personal interviews conducted via website video. It was found that the primary school students defined technology from diverse perspectives, including the dimensions of its attributes, production, operation and use, and function, with most of them regarding technology as a double-edged sword. It was also found that they lack a comprehensive and rational understanding of the concept of technology and cannot properly understand the relationship between science and technology. This study contributes to a better understanding of primary school students’ conceptions of technology in mainland China and beyond, thus providing an empirical basis for improving technology education policy, curriculum, instruction, and assessment in the future for China and other countries.
Article
Full-text available
Artificial intelligence (AI) education has posed fundamental challenges to early childhood education (ECE), including (1) why AI is necessary and appropriate for learning in the early years, (2) what is the subset of key AI ideas and concepts that can be learned by children, and (3) how to engage children in a meaningful experience that allows them to acquire these fundamental AI concepts. This report from the ECE field discusses the key considerations for developing an AI curriculum for young children. These key considerations altogether present an innovative pedagogical model for AI literacy education in early childhood. This model argues that AI literacy is an organic part of digital literacy for all citizens in an increasingly intelligent society. The core AI knowledge that can be explored with young children is: Using large amounts of data input, AI algorithms can be continuously trained to identify patterns, make predictions, and recommend actions, even though with limitations. Based on the theoretical notions of learning-by-making and pedagogy-as-relational, an embodied, culturally responsive approach should be used to enable young children's exploration with AI technologies. Finally, an exemplary curriculum named “AI for Kids” is introduced to demonstrate this pedagogical model and explain how educators can provide children culturally responsive inquiry opportunities to interact with and understand AI technologies. The synthesis of knowledge regarding “Why”, “What”, and “How” to do with AI education for young children informs a new way to engage children in STEM and understanding the digital world.