Evaluation of an Online Intervention to Teach Artificial
Intelligence With LearningML to 10-16-Year-Old Students
Juan David Rodríguez-García
INTEF
Madrid, Spain
juanda.rodriguez@intef.educacion.es
Jesús Moreno-León
Programamos
Sevilla, Spain
jesus.moreno@programamos.es
Marcos Román-González
Universidad Nacional de Educación a Distancia
Madrid, Spain
mroman@edu.uned.es
Gregorio Robles
Universidad Rey Juan Carlos
Madrid, Spain
grex@gsyc.urjc.es
ABSTRACT
The inclusion of articial intelligence (AI) in education is increas-
ingly highlighted by international organizations and governments
around the world as a cornerstone to enable the adoption of AI in
society. That is why we have developed
LearningML
, aiming to pro-
vide a platform that supports educators and students in the creation
of hands-on AI projects, specically based on machine learning
techniques. In this investigation we explore how a workshop on
AI and the creation of programming projects with
LearningML
im-
pacts the knowledge on AI of students between 10 and 16 years. 135
participants completed all phases of the learning experience, which
due to the COVID-19 pandemic had to be performed online. In order
to assess the AI knowledge we created a test that includes dierent
kinds of questions based on previous investigations and publica-
tions – resulting in a reliable assessment instrument. Our ndings
show that the initiative had a positive impact on participants’ AI
knowledge, being the enhancement especially important for those
learners who initially showed less familiarity with the topic. We ob-
serve, for instance, that while previous ideas on AI revolve around
the term robot, after the experience they do around solve and prob-
lem. Based on these results we suggest that
LearningML
can be seen
as a promising platform for the teaching and learning of AI in K-12
environments. In addition, researchers and educators can make use
of the new instrument we provide to evaluate future educational
interventions.
CCS CONCEPTS
• Social and professional topics → Computing education; K-12 education; Computational thinking.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
SIGCSE ’21, March 17–20, 2021, Toronto, Canada
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00
https://doi.org/10.1145/1122445.1122456
KEYWORDS
artificial intelligence, machine learning, computational thinking, K-12, assessment
ACM Reference Format:
Juan David Rodríguez-García, Jesús Moreno-León, Marcos Román-González, and Gregorio Robles. 2021. Evaluation of an Online Intervention to Teach Artificial Intelligence With LearningML to 10-16-Year-Old Students. In SIGCSE ’21: ACM SIGCSE Technical Symposium, March 17–20, 2021, Toronto, Canada. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/1122445.1122456
1 INTRODUCTION
“Ditch the algorithm” or “The algorithm stole my future” are some of the messages that can be heard in the protests around England in which, at the time of writing this paper, students challenge the A-levels grades provided by a predictive assessment system. This is just one example, although a very illustrative one, of how society is becoming aware of the potential impact that artificial intelligence (AI) systems can have on their lives. And this also indicates that society as a whole, from policy makers to service users, is probably still unprepared.
Organizations, such as UNESCO, and governments around the world are developing policies, strategic plans, and other initiatives highlighting the challenges, opportunities and impact of AI in education [39, 45]. Furthermore, the great success achieved by artificial neural networks and machine learning (ML) development in recent years has dramatically changed the view that educators, AI researchers and the general public have of AI [24], yielding a growing interest in AI education [34].
Consequently, new tools intended to facilitate the learning and teaching of ML fundamentals at K-12 levels have recently been developed. However, we have found some inconveniences that hinder the adoption of those tools in classroom scenarios. Thus, to overcome these drawbacks, we have designed and developed LearningML [19]¹, a platform to learn ML fundamentals.
In this paper we investigate whether children, with no previous knowledge about AI or ML, can learn the basics of ML through hands-on activities with LearningML. To do so, we conducted an online workshop. In particular, the research questions (RQs) we address are the following:
1 https://learningml.org
RQ1 - Instructional validity of LearningML: Can children improve their knowledge about ML fundamentals when performing hands-on activities with LearningML?
RQ2 - Face validity of LearningML: How do children perceive LearningML after developing some hands-on projects with it?
RQ3 - Perception of AI: Do children change the perception they have about AI after having performed hands-on activities with LearningML?
Throughout the paper, we consider children to be in the age
range from 10 to 16 years.
The paper is organized as follows. Related research can be found in Section 2. A brief description of LearningML is given in Section 3. The assessment instrument, the experimental procedure, and the text network analysis technique are presented in Section 4. Section 5 offers the results. Discussion follows in Section 6, including the threats to the validity of our results. Finally, conclusions are drawn in Section 7.
2 RELATED WORK
The concept of AI literacy is emerging as a new set of competences necessary for a future in which AI transforms the way that we live [28]. Considering the perception of AI of both students [12, 13] and teachers [27], and taking into account ethical issues raised by AI [1, 8], can greatly help to effectively develop curricular content aimed at reaching AI literacy [7, 25, 28, 41, 49]. Along with programming and unplugged activities, AI contents could contribute to foster computational thinking [36], and could add new dimensions to existing computational thinking frameworks [30, 46].
One of the most promising initiatives promoting AI literacy is AI4K12², whose main goal is to organize the knowledge that every child should have about AI. AI4K12 has developed a framework aimed at guiding AI content creators. It is grounded in five big ideas [44]: i) perception, ii) representation and reasoning, iii) computers can learn from data, iv) natural interaction, and v) societal impact.
The third idea, “computers can learn from data”, is intimately related to ML, since the latter encompasses a family of algorithms and techniques aimed at solving problems we do not have algorithms for, but for which we do have relevant data to learn patterns from [43]. These techniques belong to one of the following types: supervised learning, unsupervised learning and reinforcement learning [2].
In general, two approaches are found in the literature [32] about ML education in school. The first one focuses on revealing the steps of training, learning and evaluating [9], followed in supervised ML techniques to build ML models able to recognize patterns. Some of the tools designed according to this approach [11, 19, 23, 26, 43, 50] also allow exporting the model to a programming platform (e.g., Scratch, MIT App Inventor, Snap! or Python) to build ML-based applications. These tools hide the ML algorithm required in the learning step in a black box [11, 19, 26, 50], or in the best case they only allow handling a few relevant parameters controlling the ML algorithm [9, 23, 43]. Many instructional units following this approach have been proposed [18, 21, 42, 47, 50, 51], some of which make use of one of these tools [18, 47, 50, 51].
2 http://AI4K12.org
The second approach aims to get into the essence of ML algorithms and explain how they work by programming them [17, 22, 31, 40, 48]. Since the focus is placed on the ML algorithm itself, there are works that deal with unsupervised [17, 48], supervised [31, 40], and even reinforcement ML [22].
Although the second approach helps to reach a deeper insight into the way computers learn from data, it is not as easy to start with. Indeed, due to the complexity of the algorithms, advanced mathematical knowledge and programming skills are required. Hence, these kinds of activities are not suitable to be taught at early ages, nor are they, in our opinion, the best option to start learning ML fundamentals.
That is why LearningML follows the first approach. In the next section, the main characteristics of the platform are presented. A more detailed description can be found in [19].
3 THE LEARNINGML PLATFORM
LearningML is an educational web platform designed and developed by the authors to help non-specialists easily learn ML fundamentals. “Low floor, high ceiling and wide walls” [4] is our design principle, since successful programming platforms such as Scratch [41] have shown that this is a suitable strategy to engage learners and, in particular, young learners. This means that the platform has to be very easy to start with (low floor), provide opportunities to create increasingly complex projects over time (high ceiling), and offer the possibility to support different types of projects (wide walls).
LearningML has been released under the GNU Affero GPL free (as in freedom) license, encouraging everyone interested in it to use, study or improve the software and to participate in its development. The platform is freely (as in gratis) available on the web in several languages to everyone who wants to get into the world of ML.
The platform oers, on the one hand an ML editor where users
can build text and image recognition models, and on the other hand,
a programming interface where applications that use such models
can be developed.
The ML editor reveals in a single screen the first three steps of supervised ML: training, learning and evaluating. The user gathers and labels the example data; then, s/he launches the ML algorithm, and a model able to recognize new data is built from the dataset. Finally, the model can be tested and evaluated by feeding it with new data. The ML editor can be used in an interactive and iterative way: in order to improve the model, data can be added or removed as needed. These steps can be repeated as many times as needed, until a model that performs well enough is obtained. This helps the user gain insight and develop intuition about the ML process. Although at this time the inner workings of the ML algorithms are hidden from the user (i.e., they are used as black boxes), we are looking for strategies to uncover them in a future release.
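The gather-label/train/evaluate cycle the editor exposes can be sketched in a few lines. The paper does not reveal which algorithms run in the browser, so the following is only a minimal illustration, assuming a hypothetical nearest-centroid classifier over bag-of-words features:

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(examples):
    """'Learning' step: build one centroid (summed counts) per label."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(vectorize(text))
    return centroids

def classify(model, text):
    """'Evaluating' step: label whose centroid is most similar to the input."""
    v = vectorize(text)
    return max(model, key=lambda label: cosine(model[label], v))

# 'Training' step: the learner gathers and labels example data.
examples = [
    ("I love this film", "positive"),
    ("what a wonderful movie", "positive"),
    ("I hate this film", "negative"),
    ("what a terrible movie", "negative"),
]
model = train(examples)
print(classify(model, "a wonderful film"))   # → positive
```

As in the editor, the loop is iterative: adding or removing labeled examples and retraining changes the centroids, and feeding the model new inputs shows whether it generalizes.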
LearningML does not depend on any external ML service, since ML algorithms run locally in the browser. This is one of the main differences with the tools presented in Section 2.
Once the learner completes the model, s/he can launch the programming interface from the ML editor and develop an application that uses it. This programming interface is a Scratch fork with some new blocks aimed at dealing with text and image ML models.
Registering as a user on the platform is not required to get a full experience with LearningML. This helps to “lower the floor”, since anyone can start developing an ML project as soon as the platform has been rendered in the web browser. Both the dataset and the created code can be downloaded locally and retrieved again for later use. This is another of the differences with the tools presented in Section 2.
Learners can register if they wish. Registered users can save their projects in the cloud, share them, and copy projects shared by other LearningML members.
Finally, a website³ is maintained to promote the LearningML platform, provide documentation about it, supply guided activities and other curricular content on AI and ML, and feed a blog with related content.
4 METHODOLOGY
4.1 Assessment Instrument
“The Computational Thinking and Artificial Intelligence School” is a project led by INTEF⁴ aimed at offering Spanish teachers tools and resources to incorporate computational thinking and AI into their classrooms.
Some of the authors were commissioned to investigate the impact of this project. One of the instruments they developed was a test intended to assess students’ knowledge of AI and ML. The questions selected were taken from other available tests, such as [16], the Machine Learning for Kids website⁵, a MOOC on AI, and previous research by the KGBL3⁶ group [18, 19].
A total of 14 questions, showing the greatest statistical reliability and fidelity, were chosen as the starting point to develop the assessment instrument used in our research. In addition, several questions intended to describe the sample and an open question asking for a definition of AI were added to the pre- and post-test. Finally, other questions regarding how children perceive the platform were included in the post-test. The pre- and post-test are provided in the replication package of the paper⁷.
4.2 Pre-experimental study design
The original plan for the investigation was to organize several in-person workshops in different primary and secondary schools. Due to the coronavirus lockdown, however, we had to perform an online experiment.
We announced our intention to conduct the online experiment on specialized education websites and social networks. In particular, we made a call for primary and secondary school teachers, trainers and parents interested in participating with their students or children.
3 https://learningml.org
4 INTEF stands for Instituto Nacional de Tecnologías de la Educación y Formación del Profesorado, the unit of the Spanish Ministry of Education and Vocational Training responsible for the integration of ICT and Teacher Training in the non-university educational stages.
5 https://machinelearningforkids.co.uk/
6 KGBL3 stands for KinderGarten and Beyond and LifeLong Learning
7 https://github.com/kgblll/kgblll-ReplicationPackage-2021-SIGCSE.git
We then organized a preliminary webinar⁸ to explain all the details of the research. 63 teachers and parents filled out the registration form and accepted the terms to participate as tutors. They contributed a total of 494 children.
Thereafter, all the instructions that had to be followed by students were delivered to their tutors by email. Each tutor also received a range of codes for their group of participants. These codes allowed us to identify and match the answers from each student in the pre- and post-test. No personal data was requested except for gender, which was optional.
From June 1st to June 7th, children had to respond to the pre-test online. On June 8th we held a second online webinar⁹ introducing AI and supervised ML. During the webinar we also presented the main features of LearningML and showed how to develop ML text and image recognition projects with the platform.
After this training webinar, which could be watched on demand as many times as desired, children were instructed to tinker with LearningML and to develop their own ML project. We included several guided activities¹⁰ on the LearningML website to support students during the process.
For instance, in one of these activities students learn how to use the ML editor to take and label some photos of themselves while wearing different fashion accessories, such as caps, hats or sunglasses. Then they are guided to build an ML model able to recognize when they are wearing each of those elements. Once the ML model is working well, the activity shows how to use the programming interface to create an application in which a character appears with the same accessory that the user is wearing at any given moment. LearningML allows direct access to the computer webcam both to take the pictures needed to train the model and to get the user’s image when the application is running.
Only after attending the online seminar and creating an ML project of their own could participants fill in the post-test. This final test was available online from June 9th to 22nd and was identical to the pre-test, except for the additional questions about the perception of the platform that were added to assess its face validity.
4.3 Text Network Analysis
The open question in which we ask participants to provide a definition of AI can be used to study the improvements of the students. For instance, here are two examples that show the differences between the definitions given by the same participants in the pre- and post-test: (i) “[I do] Not [know] much [about AI]” became “The programming that we want to put into a machine so that it acts like a human”; and (ii) “It is something that we do not control” became “artificial intelligence is that a machine is capable of solving problems”.
Aiming to compare the differences between the answers provided by students in the pre- and post-test on this open question, we performed a text network analysis. To this end we used InfraNodus, an open-source tool enabling the visualization of texts as a network showing the most relevant topics, their relations, and the structural gaps between them to help generate new ideas [38].
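The core of this kind of analysis is a word co-occurrence graph ranked by betweenness centrality. The following is a minimal sketch of the same idea, not the InfraNodus implementation: it assumes a hypothetical sliding-window co-occurrence over tokenized definitions and a small stopword list, since the exact parameters are not specified in the paper.

```python
from collections import defaultdict, deque

STOPWORDS = {"a", "an", "the", "that", "is", "it", "to", "of", "can"}

def cooccurrence_graph(texts, window=2):
    """Link words appearing within `window` tokens of each other."""
    adj = defaultdict(set)
    for text in texts:
        words = [w for w in text.lower().split() if w not in STOPWORDS]
        for i, w in enumerate(words):
            for u in words[i + 1:i + window + 1]:
                if u != w:
                    adj[w].add(u)
                    adj[u].add(w)
    return adj

def betweenness(adj):
    """Brandes' algorithm for betweenness centrality (unweighted, undirected)."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(adj, 0)     # shortest-path counts from s
        dist = dict.fromkeys(adj, -1)
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:                      # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w] / 2     # undirected graph: halve double count
    return bc

# Toy definitions in the spirit of the post-test answers.
definitions = [
    "machines that solve problems",
    "a computer that can solve problems",
    "robots that learn to solve problems",
]
bc = betweenness(cooccurrence_graph(definitions))
print(max(bc, key=bc.get))   # → solve
```

The word that bridges the most contexts ends up with the highest betweenness, which is what node size encodes in the InfraNodus graphs discussed in Section 5.3.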
8 https://youtu.be/27oDM08Hsv4
9 https://youtu.be/6yga0cilxo0
10 https://web.learningml.org/actividades/
5 RESULTS
Of the 494 students registered in the online research, 469 completed
the pre-test, 184 the post-test, and 162 did both. Among these, 8
said to be older than 16, so we started with 154 subjects who met
the research requirements. Finally we ltered those students out
who did not answer all questions. This gave us a total of 135 valid
participants.
Of the 135 valid subjects, 76 were boys, 55 girls, and 4 of them
did not provide information of their gender. These numbers are in
line with the gender gap in STEM engagement [33].
Regarding the course level, 47 were primary and 88 were secondary school students. Table 1 shows the age distribution of the participants. The larger number of students aged 15-16 suggests that teachers in the higher levels of secondary education are more interested in the teaching of AI and ML contents.
Table 1: Age distribution of our sample.
Age 10-11 11-12 12-13 13-14 14-15 15-16
# Learners 24 23 3 6 15 64
108 out of the 135 learners declared having some previous programming experience; 105 had used Scratch, while 3 said they had programmed in other languages. The remaining 27 did not have programming experience.
The reliability analysis, carried out on the valid sample of 135 subjects, yields an internal consistency of 0.6 for the pre-test and 0.7 for the post-test. As this is a pre-experimental exploratory study, the reliability of the test is sufficient [37].
5.1 RQ1: Instructional validity
To answer RQ1 (Can children improve their knowledge about ML fundamentals when performing hands-on activities with LearningML?), we created two variables computed as the sum of the scores in the 14 questions aimed at measuring ML knowledge in the pre- and post-test. Ten of the questions were multiple choice with one correct answer, so they were scored with 0 or 1 point. The remaining 4 questions were Likert-style, and were scored with 0, 0.25, 0.50, 0.75 or 1 point, according to their proximity to the right answer. Hence, the variation range for both variables is between 0 and 14.
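The scoring scheme can be sketched as follows. This is only an illustration with hypothetical answer keys (the actual questions are in the paper’s replication package); the partial-credit rule assumed here is distance to the correct option on a 5-point Likert scale.

```python
def score_test(mc_answers, mc_key, likert_answers, likert_key):
    """Total score: 1 point per correct multiple-choice answer, plus
    partial credit (0, 0.25, 0.5, 0.75 or 1 point) for each Likert item
    by distance to the keyed option on a 5-point scale."""
    mc_score = sum(1 for a, k in zip(mc_answers, mc_key) if a == k)
    likert_score = sum(1 - abs(a - k) * 0.25
                       for a, k in zip(likert_answers, likert_key))
    return mc_score + likert_score

# Hypothetical student: 8 of 10 MC items correct, Likert answers on a 1-5 scale.
mc_key = ["b", "a", "d", "c", "a", "b", "c", "d", "a", "b"]
student_mc = ["b", "a", "d", "c", "a", "b", "c", "d", "c", "c"]
likert_key = [5, 1, 4, 5]
student_likert = [5, 2, 4, 3]
print(score_test(student_mc, mc_key, student_likert, likert_key))   # → 11.25
```

The resulting variable therefore ranges from 0 (all wrong) to 14 (all 14 items fully correct), matching the range reported above.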
The count, mean, standard deviation, minimum, first quartile, median, third quartile, and maximum values are shown in the first two columns of Table 2. As can be seen, there was an increase in the results, as the mean in the pre-test was 9.230, while the mean in the post-test was 10.370.
Since the difference between the two conditions is not normally distributed, according to the Shapiro-Wilk test (W=0.959, p-value=0.0004), we performed a Wilcoxon signed-rank test (p-value=1.902e-9), which indicates that the null hypothesis of equality of means is rejected and shows significant differences between pre- and post-test.
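The paper does not include the analysis script; with SciPy, the same test sequence (check normality of the paired differences, then fall back to the non-parametric test) would look like this. The scores below are synthetic, generated only to make the snippet self-contained, and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic paired scores on the 0-14 scale (illustrative, not real data).
pre = rng.normal(9.2, 2.3, 135).clip(0, 14)
post = (pre + rng.gamma(2.0, 0.6, 135)).clip(0, 14)

diff = post - pre
w, p_norm = stats.shapiro(diff)          # normality of the differences
if p_norm < 0.05:
    # Differences not normal: use the non-parametric paired test.
    res = stats.wilcoxon(pre, post)
else:
    res = stats.ttest_rel(pre, post)
print(f"p-value: {res.pvalue:.3g}")
```

With the study’s own data, this procedure yields the Shapiro-Wilk and Wilcoxon results reported above.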
In addition, the effect size [10] was 0.486, which is considered a moderate effect [14] and, consequently, indicates that, according to the “influence barometer” [20], the educational intervention has fulfilled the desired goals.
Table 2: Pre-test and post-test results.
Full sample First quartile
Pre Post Pre Post
Count 135 135 34 34
Mean 9.230 10.370 6.200 8.221
Std 2.310 2.400 1.312 2.518
Min 3.500 4.250 3.500 4.250
25% 7.875 8.750 5.063 5.938
Median 9.250 10.750 6.625 8.250
75% 11.000 12.250 7.250 10.375
Max 14.000 14.000 7.750 12.500
Table 3: Answers regarding the face validity of LearningML
Q17 Q18
Totally agree 71 28
Agree 43 44
Neither agree nor disagree 11 42
Disagree 5 19
Strongly disagree 5 2
Even though we designed the intervention for learners with no
previous experience with AI, the results in the pre-test seemed
to indicate that either some of the participants had certain prior
knowledge of this discipline or that they had received some help to
answer the questions.
If we consider only those learners that had a pre-test score in the first quartile (score < 7.875), the results can be found in the last two columns of Table 2. Again, a Wilcoxon signed-rank test (p=0.0001) indicates a significant difference between pre- and post-test. In this case, the computed effect size rises to 1.007, considered a large effect according to [14]. This result reveals a higher impact of the intervention on participants with less previous AI knowledge.
It is also worth noting that we found similar results in the statistical analysis for learners with and without programming experience: the computed effect size was 0.498 for learners with previous programming experience and 0.4317 for those without. Although these results may seem counterintuitive at first sight, they are in line with previous investigations positing that AI literacy is independent of computational literacy [28].
5.2 RQ2: Face validity
Results for RQ2 (How do children perceive LearningML after developing some hands-on projects with it?) are offered in Table 3, where we provide the number of children choosing each of the Likert-style options for questions Q17 (Did you find LearningML a useful application to learn about Artificial Intelligence?) and Q18 (Was it easy for you to use LearningML to program an application with Artificial Intelligence?).
The main challenge when designing and developing LearningML was to build a platform that non-experienced users could work with easily while learning ML fundamentals. These results support that our design goal has been achieved, as more than half of the participants (53%) perceived the platform as easy or very easy to use, with only 15.5% of respondents finding it difficult (14%) or very difficult (1.5%). Furthermore, 84.4% of participants agree that LearningML is a useful application to learn about AI.

Figure 1: Visual representation of the main topics and influential keywords in AI definitions provided by participants in the pre-test.
5.3 RQ3: Perception of AI
The text network analysis performed on the open question where learners gave their own definition of AI, both in the pre-test and post-test, has provided us the input to answer RQ3 (Do children change the perception they have about AI after having performed hands-on activities with LearningML?).
It must be noted that, in order to reveal non-obvious topics and relationships, we removed the terms human and machine from the analysis, as they are constantly repeated in most of the definitions.
Figures 1 and 2 are graph images that present a visual representation of the main topics and influential keywords of the AI definitions provided by participants in the pre- and post-test. The communities of words that are closely related, called contextual clusters or themes, are displayed in different colors. Words that appear in different contexts, on the contrary, are placed far away from each other. The size of a node indicates the number of different themes or contexts that the node connects, which is called its betweenness centrality.
Figure 2: Visual representation of the main topics and influential keywords in AI definitions provided by participants in the post-test.

As shown in Figure 1, the most influential words in the pre-test network were computer, learn and robot. In fact, there are multiple definitions that revolve around this last term, such as the following: “It’s about what robots can do”, “The intelligence of the robots”, “It is a robot that thinks for itself, I mean that nobody controls it”, “Something that knows a lot, like robots, but depends on a person because they are machines”. This is something we expected, because that is the way movies and social media tend to present AI to the public.
On the contrary, as shown in Figure 2, robot did not appear among the most influential elements of the network in the post-test. In fact, its betweenness centrality is only 0.02, while in the pre-test graph it was 0.2. We can also see that a new cluster emerges in the post-test, with solve and problem as the main elements of that theme. The following examples illustrate the influence of these nodes in some of the definitions: “It is the science that seeks to create machines that solve problems that require intelligence”, “It is the ability of a machine to solve problems or recognize a text in which a characteristic of intelligence is needed”, “It is everything that has to do with making a machine capable of solving problems that need intelligence”.
Based on the structure of the text network graphs, InfraNodus is also able to identify the discourse structure. The metrics of the analysis indicate that the discourse structure in the pre-test is diversified, since the most influential words are distributed among different communities. This means that the discourse has several topics, that each topic has a relatively high number of nodes in the graph, and that topics are somewhat connected. On the contrary, the structure of the discourse in the post-test is focused. In this case, communities are present but not as easily detectable, since the most influential words are concentrated around one of the topics [38].
Therefore, the results show that before the intervention there was a myriad of ideas on what AI is, probably influenced by the perception of AI promoted by the media. After the learning experience the definitions of AI are more similar to each other and include terms that are closer to the computer science discipline.
6 DISCUSSION
The main outcome of this work is that it offers evidence that LearningML enables young learners to create their own ML models and to make use of these models in their own programming projects in an easy and affordable way. What is more, learners can use LearningML to solve problems that are important to them and their community. The connection with learners’ interests and ideas is one of the keys that explain the success of Scratch [5] and, therefore, we have tried to imitate “its simultaneous simplicity and power” that “engage and excite students in the first place” [29]. In the near future we plan to add new features to LearningML to allow users to dive deeper into the ML algorithms and to use ML models in other programming languages, which we think will further enhance their learning experience.
When it comes to teachers, LearningML offers a solution that works right out of the box. This is especially important in educational settings, since it allows educators to focus on the pedagogic and curricular aspects of the learning experience, saving them from managing accounts in AI cloud services, or dealing with the pricing and limitations of the different plans these services provide. These kinds of issues are discussed in detail in [19]. At this moment, nonetheless, LearningML only works online, as it requires a connection with the server that hosts it, but the roadmap of the platform includes an offline version that we hope will be available soon. On the other hand, educators can easily adapt the learning experience presented in this paper so it can be deployed in face-to-face scenarios, such as summer camps. There are well-documented success cases that share insight on how to achieve broad goals of the computer science community such as “broadening participation by underrepresented groups and/or increasing learning” that could be taken into account [15].
Regarding policy making, the results show that young learners between 10 and 16 years old are able to learn about AI. However, there is a need for more research regarding pedagogical approaches and the development of educational resources in terms of the age and prior knowledge of learners.
Finally, from the researchers’ point of view, perhaps the most interesting feature of LearningML is the possibility of sharing the ML models and the projects created by the users. This empowers the creation of a large-scale repository of learners’ activity and creations that researchers may use for their own studies, in a similar way that a dataset from Blackbox [6] has enabled the investigation of common mistakes in student data [3]. Furthermore, such a repository would allow longitudinal studies to inspect learners’ progression over time, as other researchers have done with a Scratch dataset [35].
6.1 Threats to validity
As with all empirical research, ours has some threats to validity that must be taken into account.

Our pre-experimental design has a clear drawback: since the intervention was conducted online, many aspects of the process could not be controlled. For instance, the authors had no way of knowing whether children were helped by their parents while filling out the tests.
We cannot be sure that all those who filled out the post-test also attended the training seminar, although the number of views of the training seminar (390) before the post-test was opened supports the hypothesis that most of them did.
We instructed participants to watch the webinar and create their own LearningML project before responding to the post-test. Although we cannot be sure participants followed these instructions, this assumption is consistent with the data collected.
Nor can we verify that participants completed the task of developing a full ML project to tinker with LearningML. However, although it was not a requirement, many teachers and parents sent us interesting projects created by their students¹¹, so we believe this task was predominantly performed by participants.
As a result of these threats, some biases may emerge. We would expect more solid results in a more controlled environment.
7 CONCLUSION
Our research has found some evidence supporting the hypothesis that ML fundamentals can be taught to children aged 10 to 16 through hands-on activities with LearningML. These results are in line with other works addressing the same problem, which have been presented as related work. LearningML has proven to be effective in helping young learners learn ML fundamentals. In comparison to other AI learning tools and platforms, it is easier to start using (e.g., the platform is stand-alone and does not require registering with any third-party service, as the other tools demand). As a result, young learners found it useful, attractive and easy to use. In addition, we designed an assessment instrument aimed at measuring AI knowledge that shows sufficient statistical reliability and fidelity.
Due to the COVID-19 pandemic we had to conduct our intervention online. This design made full control of some conditions impossible, so the results could be affected by unwanted biases. However, some hints, such as the number of views of the training webinar or the learners’ projects sent to us after the intervention, seem to indicate that a large part of the participants followed the instructions delivered to their tutors.
The results of this work encourage us to continue developing LearningML by adding more activities and resources and by exploring strategies to unbox and explain the ML algorithms used to recognize data. We also look forward to including new types of problems, such as the recognition of sounds and numbers.
REFERENCES
[1] Sanah Ali, Blakeley H Payne, Randi Williams, Hae Won Park, and Cynthia Breazeal. 2019. Constructionism, Ethics, and Creativity: Developing Primary and Middle School Artificial Intelligence Education. In International Workshop on Education in Artificial Intelligence K-12 (EDUAI’19).
[2] Ethem Alpaydin. 2020. Introduction to machine learning. MIT Press.
[3] Amjad Altadmri and Neil C.C. Brown. 2015. 37 Million Compilations: Investigating Novice Programming Mistakes in Large-Scale Student Data. In Proceedings of the 46th ACM Technical Symposium on Computer Science Education (Kansas City, Missouri, USA) (SIGCSE ’15). Association for Computing Machinery, New York, NY, USA, 522–527. https://doi.org/10.1145/2676723.2677258
[4] Karen Brennan and Mitchel Resnick. 2012. New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 annual meeting of the American Educational Research Association, Vancouver, Canada, Vol. 1. 25.
¹¹ https://bit.ly/2FPGxAB
An Online Intervention to Teach Artificial Intelligence with LearningML SIGCSE ’21, March 17–20, 2021, Toronto, Canada
[5] Karen Brennan and Mitchel Resnick. 2013. Stories from the Scratch Community: Connecting with Ideas, Interests, and People. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education (Denver, Colorado, USA) (SIGCSE ’13). Association for Computing Machinery, New York, NY, USA, 463–464. https://doi.org/10.1145/2445196.2445336
[6] Neil Christopher Charles Brown, Michael Kölling, Davin McCall, and Ian Utting. 2014. Blackbox: A Large Scale Repository of Novice Programmers’ Activity. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (Atlanta, Georgia, USA) (SIGCSE ’14). Association for Computing Machinery, New York, NY, USA, 223–228. https://doi.org/10.1145/2538862.2538924
[7] Harald Burgsteiner, Martin Kandlhofer, and Gerald Steinbauer. 2016. IRobot: Teaching the Basics of Artificial Intelligence in High Schools. In AAAI. 4126–4127.
[8] Emanuelle Burton, Judy Goldsmith, Sven Koenig, Benjamin Kuipers, Nicholas Mattei, and Toby Walsh. 2017. Ethical considerations in artificial intelligence courses. AI Magazine 38, 2 (2017), 22–34.
[9] Michelle Carney, Barron Webster, Irene Alvarado, Kyle Phillips, Noura Howell, Jordan Griffith, Jonas Jongejan, Amit Pitaru, and Alexander Chen. 2020. Teachable Machine: Approachable Web-Based Tool for Exploring Machine Learning Classification. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–8.
[10] Jacob Cohen. 1992. Things I have learned (so far). In Annual Convention of the American Psychological Association, 98th, Aug 1990, Boston, MA, US. American Psychological Association.
[11] Stefania Druga. 2018. Growing up with AI: Cognimates: from coding to teaching machines. Ph.D. Dissertation. Massachusetts Institute of Technology.
[12] Stefania Druga, Sarah T Vu, Eesh Likhith, and Tammy Qiu. 2019. Inclusive AI literacy for kids around the world. In Proceedings of FabLearn 2019. 104–111.
[13] Stefania Druga, Randi Williams, Cynthia Breazeal, and Mitchel Resnick. 2017. "Hey Google is it OK if I eat you?" Initial Explorations in Child-Agent Interaction. In Proceedings of the 2017 Conference on Interaction Design and Children. 595–600.
[14] Paul D Ellis. 2010. The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge University Press.
[15] Barbara Ericson and Tom McKlin. 2012. Effective and Sustainable Computing Summer Camps. In Proceedings of the 43rd ACM Technical Symposium on Computer Science Education (Raleigh, North Carolina, USA) (SIGCSE ’12). Association for Computing Machinery, New York, NY, USA, 289–294. https://doi.org/10.1145/2157136.2157223
[16] Julian Estevez, Gorka Garate, and Manuel Graña. 2019. Gentle introduction to artificial intelligence for high-school students using Scratch. IEEE Access 7 (2019), 179027–179036.
[17] Julian Estevez, Gorka Garate, JM Guede, and Manuel Graña. 2019. Using Scratch to Teach Undergraduate Students’ Skills on Artificial Intelligence. arXiv preprint arXiv:1904.00296 (2019).
[18] Juan David Rodríguez García, Jesús Moreno León, Marcos Román González, and Gregorio Robles. 2019. Developing computational thinking at school with machine learning: an exploration. In 2019 International Symposium on Computers in Education (SIIE). IEEE, 1–6.
[19] Juan David Rodríguez García, Jesús Moreno-León, Marcos Román-González, and Gregorio Robles. 2020. LearningML: A Tool to Foster Computational Thinking Skills Through Practical Artificial Intelligence Projects. Revista de Educación a Distancia 20, 63 (2020).
[20] John Hattie. 2012. Visible learning for teachers: Maximizing impact on learning. Routledge.
[21] Tom Hitron, Yoav Orlev, Iddo Wald, Ariel Shamir, Hadas Erel, and Oren Zuckerman. 2019. Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–11.
[22] Sven Jatzlau, Tilman Michaeli, Stefan Seegerer, and Ralf Romeike. 2019. It’s not Magic After All – Machine Learning in Snap! using Reinforcement Learning. In 2019 IEEE Blocks and Beyond Workshop (B&B). IEEE, Memphis, TN, USA, 37–41.
[23] Ken Kahn, Yu Lu, Jingjing Zhang, Niall Winters, and Ming Gao. 2020. Deep learning programming by all. (2020).
[24] Ken Kahn and Niall Winters. 2020. Constructionism and AI: A history and possible futures. (2020).
[25] Martin Kandlhofer, Gerald Steinbauer, Sabine Hirschmugl-Gaisch, and Petra Huber. 2016. Artificial intelligence and computer science in education: From kindergarten to university. In 2016 IEEE Frontiers in Education Conference (FIE). IEEE, 1–9.
[26] Dave Lane. 2018. Explaining Artificial Intelligence. Hello World 4 (2018), 44–45.
[27] Annabel Lindner and Ralf Romeike. 2019. Teachers’ Perspectives on Artificial Intelligence. In ISSEP 2019 – 12th International Conference on Informatics in Schools: Situation, Evaluation and Perspectives, Local Proceedings. 22–29.
[28] Duri Long and Brian Magerko. 2020. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–16.
[29] David J. Malan and Henry H. Leitner. 2007. Scratch for Budding Computer Scientists. In Proceedings of the 38th SIGCSE Technical Symposium on Computer Science Education (Covington, Kentucky, USA) (SIGCSE ’07). Association for Computing Machinery, New York, NY, USA, 223–227. https://doi.org/10.1145/1227310.1227388
[30] Joyce Malyn-Smith, Fred Martin A. Lee, Shuchi Grover, Michael A. Evans, and Sarita Pillai. 2018. Developing a Framework for Computational Thinking from a Disciplinary Perspective. In Proceedings of the International Conference on Computational Thinking Education 2018. 182–186.
[31] Radu Mariescu-Istodor and Ilkka Jormanainen. 2019. Machine Learning for High School Students. In Proceedings of the 19th Koli Calling International Conference on Computing Education Research. 1–9.
[32] Lívia S Marques, Christiane Gresse von Wangenheim, and Jean CR Hauck. 2020. Teaching Machine Learning in School: A Systematic Mapping of the State of the Art. Informatics in Education 19, 2 (2020).
[33] Allison Master, Sapna Cheryan, Adriana Moscatelli, and Andrew N Meltzoff. 2017. Programming experience promotes higher STEM motivation among first-grade girls. Journal of Experimental Child Psychology 160 (2017), 92–106.
[34] Antonio-José Moreno-Guerrero, Jesús López-Belmonte, José-Antonio Marín-Marín, and Rebeca Soler-Costa. 2020. Scientific Development of Educational Artificial Intelligence in Web of Science. Future Internet 12, 8 (2020), 124.
[35] Jesús Moreno-León, Gregorio Robles, and Marcos Román-González. 2016. Examining the Relationship between Socialization and Improved Software Development Skills in the Scratch Code Learning Environment. J. UCS 22, 12 (2016), 1533–1557.
[36] Jesús Moreno-León, Gregorio Robles, Marcos Román-González, and Juan David Rodríguez García. 2019. Not the same: a text network analysis on computational thinking definitions to study its relationship with computer programming. Revista Interuniversitaria de Investigación en Tecnología Educativa (2019).
[37] Jum C Nunnally and IH Bernstein. 1978. Psychometric Theory. McGraw-Hill, New York.
[38] Dmitry Paranyushkin. 2019. InfraNodus: Generating insight using text network analysis. In The World Wide Web Conference. 3584–3589.
[39] Francesc Pedro, Miguel Subosa, Axel Rivas, and Paula Valverde. 2019. Artificial intelligence in education: Challenges and opportunities for sustainable development. (2019).
[40] Rubens Lacerda Queiroz, Fábio Ferrentini Sampaio, Cabral Lima, and Priscila Machado Vieira Lima. 2020. AI from concrete to abstract: demystifying artificial intelligence to the general public. arXiv preprint arXiv:2006.04013 (2020).
[41] Alpay Sabuncuoglu. 2020. Designing One Year Curriculum to Teach Artificial Intelligence for Middle School. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education. 96–102.
[42] Bawornsak Sakulkueakulsuk, Siyada Witoon, Potiwat Ngarmkajornwiwat, Pornpen Pataranutaporn, Werasak Surareungchai, Pat Pataranutaporn, and Pakpoom Subsoontorn. 2018. Kids making AI: Integrating Machine Learning, Gamification, and Social Context in STEM Education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). IEEE, 1005–1010.
[43] Danny Tang. 2019. Empowering Novices to Understand and Use Machine Learning With Personalized Image Classification Models, Intuitive Analysis Tools, and MIT App Inventor. Ph.D. Dissertation. Massachusetts Institute of Technology.
[44] David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn. 2019. Envisioning AI for K-12: What should every child know about AI? In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 9795–9799.
[45] Ilkka Tuomi et al. 2018. The impact of artificial intelligence on learning, teaching, and education. Luxembourg: Publications Office of the European Union (2018).
[46] Jessica Van Brummelen, Judy Hanwen Shen, and Evan W Patton. 2019. The Popstar, the Poet, and the Grinch: Relating Artificial Intelligence to the Computational Thinking Framework with Block-based Coding. In Proceedings of International Conference on Computational Thinking Education. 160–161.
[47] Henriikka Vartiainen, Matti Tedre, and Teemu Valtonen. 2020. Learning machine learning with very young children: Who is teaching whom? International Journal of Child-Computer Interaction (2020), 100182.
[48] Xiaoyu Wan, Xiaofei Zhou, Zaiqiao Ye, Chase K Mortensen, and Zhen Bai. 2020. SmileyCluster: supporting accessible machine learning in K-12 scientific discovery. In Proceedings of the Interaction Design and Children Conference. 23–35.
[49] Randi Williams, Hae Won Park, Lauren Oh, and Cynthia Breazeal. 2019. Popbots: Designing an artificial intelligence curriculum for early childhood education. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 9729–9736.
[50] Abigail Zimmermann-Niefield, Shawn Polson, Celeste Moreno, and Benjamin Shapiro. 2020. Youth Making Machine Learning Models for Gesture-Controlled Interactive Media. In Proceedings of the Interaction Design and Children Conference. 63–74.
[51] Abigail Zimmermann-Niefield, Makenna Turner, Bridget Murphy, Shaun K Kane, and R Benjamin Shapiro. 2019. Youth learning machine learning through building models of athletic moves. In Proceedings of the 18th ACM International Conference on Interaction Design and Children. 121–132.