A review of the state of the art of the use of Machine Learning and Artificial Intelligence by educational portals and OER repositories (white paper)
Irene-Angelica Chounta
University of Tartu | chounta@ut.ee
Table of Contents
1 Introduction ......................................................................................................................... 2
1.1 Purpose of this report .................................................................................................. 3
2 Artificial Intelligence and Machine Learning: Definitions and Historical overview ............ 4
3 AI and ML approaches in education: an overview of the state of the art .......................... 6
3.1 Student modeling and personalized feedback ............................................................ 7
3.2 Recommendation systems ........................................................................................ 10
3.3 Computational tools to support teachers ................................................................. 12
3.4 Examples of use of AI and ML technologies in the European K-12 sector ............... 14
4 Challenges .......................................................................................................................... 17
4.1 Privacy and data protection challenges .................................................................... 17
4.2 Technical challenges .................................................................................................. 18
4.3 Pedagogical Challenges .............................................................................................. 19
5 Conclusion .......................................................................................................................... 20
5.1 Looking ahead: Taking the next step to promote AI and ML integration in European schools ……………………………………………………. 21
Title: A review of the state of the art of the use of Machine Learning and Artificial Intelligence by educational portals and OER repositories (white paper)
Author: Irene-Angelica Chounta, University of Tartu, chounta@ut.ee
Abstract
In this report, we provide an overview of the most prominent Artificial Intelligence (AI) and Machine
Learning (ML) practices used in educational contexts focusing on Open Educational Resources (OER)
and educational portals aiming to support K-12 education. To that end, we will provide definitions and
descriptions of relevant terms, a short historical overview of Artificial Intelligence (AI) and Machine
Learning (ML) in education and an overview of the goals and common practices of the use of
computational methods (AI and ML) in educational contexts. We will present the state of the art with respect to the adaptation and use of computational methods in educational portals and OERs, and we will discuss the potential benefits and open challenges that may arise from these practices. This report concludes with future directions that could support local structures and directives to move forward with the integration of computational approaches into existing OER and educational portals.
1 Introduction
The rapid adoption of technological advancements, along with the (big) data revolution in education, has provided numerous learning opportunities for stakeholders (that is, educators, learners, curriculum developers, program directors, learning content creators
and researchers). For example, the adoption of technology in education has provided us
with new tools and methods to support learning and teaching. At the same time, users of
educational technologies can become part of large learning communities or communities
of practice and learn together with peers who are physically located in different places.
With the increased popularity of online courses, it soon became evident that there is a
need for educational resources and learning content – from textbooks to videos and
podcasts – that would be openly available and freely accessible for every stakeholder, that
could be re-distributed and re-used in different settings and that could be adapted to meet
the needs and interests of different learners. This need for suitable resources and content
led to the emerging concept of Open Educational Resources (OER) and educational
repositories as a means of scaffolding educational transformation (Butcher, 2015).
The OER movement created manifold opportunities for learners, teachers and educational
institutions since they gained access to a plethora of resources and content. Learning
materials can nowadays be easily distributed, shared and adapted to meet learners’ needs
and interests. Data richness offers new opportunities for testing analytical approaches to
facilitate and inform new or revisited paradigms. Thus, there is a unique opportunity to
employ cutting-edge computational approaches to address fundamental pedagogical
challenges such as: how to adaptively guide students and how to provide appropriate
scaffolding to facilitate learning and to improve learning outcomes.
At the same time, in order to benefit from the aforementioned technological advances in education, we need to address certain challenges that these advances have raised. With the amount of learning material increasing, we need to ensure that materials are accessible to the general population. Most importantly, it is necessary to set quality standards for the learning material we distribute, and we have to ensure that these materials meet those quality standards. On the other hand, due to the vast amount of learning content that is offered, learners face the challenge of efficiently and effectively choosing learning material to accommodate their requirements. This indicates a need for
tools to support learners when choosing appropriate learning content.
However, providing material to learners alone is not enough. In the digital age, we want
learners to acquire the kind of competencies and skills – the so-called 21st century skills
(Bell, 2010) – that will lead them beyond the acquisition of inert knowledge and that will
support them in using new technologies and digital resources in an efficient and effective
way in order to achieve their goals. Therefore, besides materials, we have to provide
learners with the tools that will support reflection and self-regulation, scaffold
collaboration and communication, foster information and digital literacy, and promote creativity and critical thinking. Finally, we should raise awareness regarding challenges
that learning in the digital age may bring along, such as data ownership, privacy and
security.
1.1 Purpose of this report
In this report we aim to identify and review best practices in the use of Artificial
Intelligence (AI) and Machine Learning (ML) technologies in educational research,
focusing on the context of educational portals, OER repositories and K-12 education. We
will elaborate on the state of the art of AI and ML in education, and we will also provide a vision of how AI and ML can be promoted and used to better serve teachers and learners. In the next sections we will discuss the expected benefits from the use of
computational methods in education and the potential risks and opportunities such
practices may entail. We will attempt to identify the relationships between open and
reusable learning content, data ownership, technical challenges and privacy aspects.
Finally, we will conclude with a discussion about future directions regarding the use of
computational approaches in OER and educational platforms as well as how we could
promote and orchestrate common actions from stakeholders in order to support
advancements on a European level.
2 Artificial Intelligence and Machine Learning:
Definitions and Historical overview
In computer science, we use the term Artificial Intelligence (AI) to describe
computational technologies that allow machines (that is, computers) to act and take
decisions imitating human behavior and intelligence (McCarthy, 1998). Machine
Learning (ML) is a subfield of Artificial Intelligence in which statistical methods and computational algorithms are used to teach machines, through examples and experimentation with data, how to perform specific tasks (Michalski, Carbonell, &
Mitchell, 2013).
Research regarding the use of AI and ML in education has been ongoing since the late
'70s and '80s when the first computer-assisted instruction (CAI) and intelligent tutoring
systems (ITSs) were developed (Nwana, 1990). In the early days, AI methods were
employed in two ways:
a) to design and facilitate interactive learning environments that would support
learning by doing. In such environments, students received guidance to learn through
their experiences while they were experimenting with interactive computer artifacts.
An example of this research line is the work of Seymour Papert, who introduced an
educational programming language called LOGO for teaching geometry (Papert,
1980);
b) to design and implement tutoring systems, that is, computational systems that “imitate” human tutors and gradually support students in mastering skills by adapting
instruction with respect to the student’s knowledge state (Corbett, Koedinger, &
Anderson, 1997).
Thus, we could say that AI was originally used either as a learning tool itself, so that
students could learn by experimenting with AI algorithms or as a technology to support
the personalization of learning environments and the adaptation of instruction to learner's
needs and personal goals.
On one hand, Papert’s constructionist approach was originally criticized since it
demanded radical changes to the educational status quo (Nwana, 1990). However, it
strongly influenced modern educational programming languages and programming
environments that are nowadays used widely, such as Scratch (https://scratch.mit.edu/), or educational approaches that explore the use and programming of robots as a means of facilitating and promoting STEM education (https://robohub.org/work-play-and-stem-how-robotics-will-bridge-the-gap/).
On the other hand, the practice of Intelligent Tutoring Systems (ITSs) was widely adopted in K-12 and higher education, demonstrating promising results and outcomes. A meta-analysis comparing the effectiveness of human tutors with that of ITSs showed that ITSs are almost as effective as human tutors with respect to problem-solving and reading activities (VanLehn, 2011). ITSs’ success is largely due to the fact that such systems are capable of tracking student performance and of choosing appropriate content for practicing skills and fostering knowledge tailored to the individual student’s needs. To achieve that, ITSs use student models that are implemented using AI and ML methods and that are based on the idea of “mastery learning”: that is,
the student is asked to continue solving problems or answering questions about a concept
until s/he has mastered it. Only then will the student be guided to move forward to other
concepts. We will provide an overview of state-of-the-art student modeling methods used in ITSs, alongside best practices, in the following paragraphs. However, despite the positive outcomes demonstrated by the use of ITSs, their practice has been criticized for not considering the social aspects of the learning process, for the lack of social interaction and
for not promoting the acquisition of social skills. For ITSs, the student is considered an
individual unit who learns while practicing with learning material. Nonetheless, learning
is not only about providing knowledge and mastering skills, but also about integrating
students in a structured society of information and knowledge (Hawkins, Sheingold,
Gearhart, & Berger, 1982).
The rise of e-learning practices in online courses and massive open online courses (MOOCs) addresses the aforementioned criticism, since such courses provide the opportunity to learn in a social arena, thus allowing students to interact with instructors and their peers. At the same
time, we aim to promote personalization of online learning and to provide learners with
unique, adaptive and personalized feedback and support. Furthermore, due to the large number of data traces produced from learners’ online activities, there is a need for cutting-edge computational approaches for processing and analyzing learners’ activity as captured by logfiles and for providing adaptive feedback that addresses learners’ needs and considers individual learner characteristics. In this context, AI and ML methods promise to fill this gap and offer solutions (https://www.mckinsey.com/featured-insights/artificial-intelligence/the-role-of-education-in-ai-and-vice-versa).
3 AI and ML approaches in education: an overview of the
state of the art
There are three main research lines that explore the use of AI and ML methods in the
context of computational and online learning systems and environments:
1. Student modeling and personalized feedback: Research in this field studies the use of
computational methods to assess student's background knowledge, to model student's
knowledge state and to predict student's performance. Student models are commonly
used in educational platforms and intelligent tutoring systems in order to provide
indications about a student’s knowledge state. These indications are then used to guide their learning activity by providing adaptive, personalized feedback, tailored to the learner’s needs and personal characteristics, following the example of the effective practice of the one-to-one tutoring paradigm (Bloom, 1984).
2. Recommendation systems: Research in this field focuses on intelligent methods for
choosing appropriate study materials, learning activities, examples and exercises with
respect to the learner’s needs and interests.
3. Computational tools to support teachers: Research in this field studies the use of
computational tools to support either teachers’ training (by providing evidence or assessments regarding teachers’ competencies and skills, as well as recommendations about how to improve their pedagogies) or teachers’ practice (usually by providing indications from students’ activities at the individual or the classroom level through applications commonly known as teacher dashboards).
These research lines usually intertwine and complement each other. Here, we provide an
overview of typical AI and ML methods used in each area in educational research in
general and in OER and educational portals in particular. We also discuss some examples
of AI and ML technologies integration into the European K-12 sector.
3.1 Student modeling and personalized feedback
Student models are typically used in Intelligent Tutoring Systems and learning platforms
in order to model and assess student's performance. Most student models are either
cognitive or statistical. Cognitive models represent the structure of the internal reasoning
system which students use to solve problems while statistical models use latent factors
that explain observed performance data in order to predict student's performance. AI
methods, such as Bayesian networks, or machine learning algorithms, such as logistic
regression, are used to design and train these models.
One of the most well-established practices in student modeling is the use of cognitive
models in cognitive tutors. By cognitive tutors, we refer to the ITSs that were established
by Carnegie Learning (https://www.carnegielearning.com/) and are based on the ACT-R (Adaptive Control of Thought-Rational) theory of human cognition (Anderson, 1996).
Most cognitive student models are based on the notion of mastery learning. Only when a
student has mastered a concept, will s/he be guided to move forward to other concepts
(Corbett et al., 1997). Mastery learning is in line with the notion of learning curves, that is, how many opportunities a student needs in order to master a skill or knowledge
component. In order to assess mastery, ITSs use student models that predict the
performance of students on various steps of a learning activity. Based on these
predictions, the tutoring system chooses what kind of content or scaffolding to provide to
students and if further practice is necessary (Chounta, Albacete, Jordan, Katz, &
McLaren, 2017).
Typically, cognitive student models predict step outcomes – that is, whether a student will carry out a step-task correctly or not – based on the skills involved in this step and the student’s prior practice. There are two main AI/ML modeling approaches that dominate
cognitive modeling:
1. Bayesian Knowledge Tracing (BKT) models
Bayesian Knowledge Tracing models use data that reflect student’s performance to
predict the probability of a student knowing a skill at a given time (d Baker, Corbett,
& Aleven, 2008). The response from the model is binary; this means that the model predicts either positively (for example, the student knows the skill and therefore the student will answer correctly) or negatively (for example, the student does not know the skill and therefore the student will not answer correctly). To compute this
probability, the model takes into account four modeling parameters on every practice
opportunity: a) the probability that a student knows the answer before the activity
even begins; b) the Guess parameter that indicates the probability that the student
will guess the correct answer; c) the Slip parameter that indicates the probability that
the student will slip and give an incorrect answer although s/he knows the correct
one; d) the probability of the student learning the skill. Bayesian Knowledge Tracing
models are implemented as dynamic Bayesian networks (Yudelson, Koedinger, &
Gordon, 2013). In order to fit the modeling parameters, the BKT models are trained using data from the student’s prior practice (a minimal sketch of the BKT update is given after this list).
2. Logistic regression models
Logistic regression models use student-specific and problem-specific parameters in
order to predict student performance. For example, the Additive Factors Analysis
Model (AFM) – introduced into ITS research by Cen et al. (Cen, Koedinger, &
Junker, 2008) - predicts the likelihood of a student correctly completing a step as a
linear function of student parameters (the student’s proficiency), knowledge
components or skill parameters (the difficulty of the knowledge components or skill
involved in certain questions or tasks) and the learning rates of skills. AFM considers
the frequency of prior practice and exposure to skills. In addition to AFM, the
Performance Factors Analysis Model (PFM) (Pavlik Jr, Cen, & Koedinger, 2009)
9
considers whether prior practice was successful (that is, how many times a student
answered correctly or incorrectly) and the Instructional Factors Analysis Model
(IFM) (Chi, Koedinger, Gordon, Jordan, & VanLehn, 2011) also considers the “tells” (that is, how many times the tutor gave away the answer of the next step directly instead of eliciting it).
Apart from ITSs, these student models are used in open online courses, such as the courses implemented as part of the OER Open Learning Initiative by Carnegie Mellon University (http://oli.cmu.edu/), in order to assess student’s performance in learning activities and to provide recommendations.
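To make these two modeling approaches more concrete, the following minimal sketch implements the standard BKT update for a single skill in Python. The parameter values, the variable names and the mastery threshold mentioned in the comments are illustrative assumptions and are not taken from any specific tutoring system.

```python
# Minimal sketch of a Bayesian Knowledge Tracing (BKT) update for one skill.
# All parameter values are illustrative assumptions, not estimates from real data.
P_INIT = 0.3   # p(L0): probability the student knows the skill before practice
P_LEARN = 0.1  # p(T): probability of learning the skill at each opportunity
P_GUESS = 0.2  # p(G): probability of a correct answer without knowing the skill
P_SLIP = 0.1   # p(S): probability of an incorrect answer despite knowing the skill


def predict_correct(p_known: float) -> float:
    """Probability that the next observed answer is correct."""
    return p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS


def update(p_known: float, correct: bool) -> float:
    """Revise the estimate of p(skill known) after one observed answer."""
    if correct:
        evidence = p_known * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_known) * P_GUESS)
    else:
        evidence = p_known * P_SLIP
        posterior = evidence / (evidence + (1 - p_known) * (1 - P_GUESS))
    # Account for the chance that the student learned the skill at this opportunity.
    return posterior + (1 - posterior) * P_LEARN


p_known = P_INIT
for step, answer_correct in enumerate([False, True, True, True], start=1):
    print(f"step {step}: p(correct) = {predict_correct(p_known):.2f}")
    p_known = update(p_known, answer_correct)
# A mastery-learning loop would keep offering practice until, e.g., p_known >= 0.95.
print(f"final p(known) = {p_known:.2f}")
```

In the same spirit, the sketch below computes an AFM-style prediction for a single step: the logit is the sum of the student’s proficiency and, for each skill attached to the step, the skill’s difficulty plus its learning rate multiplied by the number of prior practice opportunities. The proficiency, the skill parameters and the toy step-to-skill mapping are invented for illustration.

```python
import math

# Illustrative AFM-style prediction for one student attempting one step.
# The proficiency, skill parameters and step-to-skill mapping are assumptions.
theta = 0.4                                                # student proficiency
beta = {"fractions": -0.8, "common_denominator": -0.3}     # skill easiness/difficulty
gamma = {"fractions": 0.15, "common_denominator": 0.25}    # learning rate per skill
opportunities = {"fractions": 6, "common_denominator": 2}  # prior practice counts
step_skills = ["fractions", "common_denominator"]          # skills involved in this step


def afm_probability() -> float:
    """sigmoid(theta + sum over the step's skills of (beta_k + gamma_k * opportunities_k))."""
    logit = theta + sum(beta[k] + gamma[k] * opportunities[k] for k in step_skills)
    return 1.0 / (1.0 + math.exp(-logit))


print(f"Predicted probability of a correct step: {afm_probability():.2f}")
```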
In other educational settings, like for example in MOOCs, student modeling extends
beyond modeling individual student’s performance. Since the social aspects of learning
are particularly important for MOOCs, it is common to use ML techniques, such as block
modeling and social network analysis to model students with respect to their social profile
or to model the knowledge flow as a proxy for learning (Hecking, Chounta, & Hoppe,
2017). Similarly, in collaborative learning scenarios that aim to support K-12 learning
activities, time-series analysis has been used as a tool for assessing the quality of
collaboration between peers (Chounta & Avouris, 2015). These assessments can be
provided to the teachers in order to help them orchestrate learning activities more
efficiently and to support them in providing appropriate feedback to students (Chounta &
Avouris, 2016).
Recently, there has been increasing interest in using Natural Language Processing (NLP) either to analyze textual content produced during learning activities or to assess communication that is facilitated by discussion boards or chat tools (Ferschke, Yang, Tomar, & Rosé, 2015; Yang, Wen, Kumar, Xing, & Rose, 2014). A combination of ML
methods, such as NLP and logistic regression or dynamic Bayesian networks, is used in
dialogue-based tutoring systems that aim to guide students through adaptive lines of
reasoning and support them in conceptualized learning, aiming especially at K-12 education (Albacete et al., 2018; VanLehn et al., 2007).
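As a minimal illustration of how NLP features and logistic regression can be combined to analyze learner-generated text (for example, flagging discussion-board posts that express confusion), the sketch below uses scikit-learn; the posts and labels are invented for the example and are not drawn from any of the systems cited above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy example: classify discussion-board posts as expressing confusion or not.
# The posts and labels below are invented for illustration only.
posts = [
    "I do not understand how to add fractions with different denominators",
    "Great explanation, the example with the pizza slices really helped",
    "I am completely lost on this week's assignment",
    "Thanks, the video made the concept very clear",
]
labels = [1, 0, 1, 0]  # 1 = post expresses confusion, 0 = it does not

# TF-IDF features feed a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = ["I still don't get the common denominator step"]
print(model.predict(new_post))        # predicted label
print(model.predict_proba(new_post))  # class probabilities
```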
In the context of educational portals and Learning Management Systems (LMS), ML and
data-mining approaches are used to analyze student practices, to assess student
performance and provide feedback to learners. The computational methods used for the
analysis of learning activities and processes can be summed up in three categories
(Suthers et al., 2015):
a) network-analytics methods (usually employing approaches stemming from social network analysis and graph theory; a minimal sketch follows this list) (Chounta, Hecking, Hoppe, & Avouris, 2014; Martínez, Dimitriadis, Rubia, Gómez, & de la Fuente, 2003);
b) process-oriented activity analytics (for example, sequential-pattern mining
techniques, time-series analysis, classification and clustering methods) (Bannert,
Reimann, & Sonnenberg, 2014; Reimann, 2009; Tarus, Niu, & Kalui, 2018; Ziebarth,
Chounta, & Hoppe, 2015); and
c) content analysis using text-mining and NLP methods (Ferschke, Howley, et al.,
2015; Mu, Stegmann, Mayfield, Rosé, & Fischer, 2012; Wen, Yang, & Rosé, 2014).
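As a rough illustration of category (a) above, the sketch below builds a small interaction network from forum reply events and computes degree centrality as a simple indicator of how connected each learner is; the interaction data are invented, and the networkx library is assumed to be available.

```python
import networkx as nx

# Toy network analytics over learner interactions (category (a) above).
# Each tuple is a "student A replied to student B" event; the data are invented.
reply_log = [
    ("alice", "bob"), ("carol", "alice"), ("bob", "alice"),
    ("dave", "alice"), ("carol", "bob"), ("alice", "carol"),
]

graph = nx.DiGraph()
graph.add_edges_from(reply_log)

# Degree centrality as a rough indicator of how connected each learner is.
centrality = nx.degree_centrality(graph)
for student, score in sorted(centrality.items(), key=lambda item: -item[1]):
    print(f"{student}: {score:.2f}")
```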
The use of computational methods derived from AI and ML in order to analyze the learning process and to improve the learning outcomes is commonly described by the term “Learning Analytics” (Siemens, 2013). Personalized feedback that is informed by
learning analytics is offered through student dashboards using visualizations and graph
representations (Verbert, Duval, Klerkx, Govaerts, & Santos, 2013).
The report of the Joint Research Centre (JRC), the European Commission’s science and
knowledge service, published in 2016 (Ferguson et al., 2016) provides an extensive
overview of the state of the art with respect to the implementation, adaptation and
adoption of learning analytics for education and training. Furthermore, it discusses what
actions should be taken with respect to European policies in order to further empower the
use of computational methods and tools (such as learning analytics) in education and
consequently enrich educational practices in Europe. This report strongly focuses on
learning analytics and not necessarily in the context of OER or K-12 sector but
nonetheless, it provides insight with respect to the technological trends in education, and
the open issues that we still need to address, both on the policy level as well as on the
training ground.
3.2 Recommendation systems
Another application of AI and ML methods in education is in designing and implementing
recommendation systems. In recent years, recommendation (or recommender) systems have been widely used in technology-enhanced learning, e-learning and online learning in order to provide recommendations to learners about learning materials and learning content that will assist them in learning efficiently (Verbert et al., 2012). At the same time, it is crucial that these recommendations are tailored to their personal, individual learning needs (Salehi, Kamalabadi, & Ghoushchi, 2013). This is particularly important
nowadays with the information overload that learners are facing due to the vast amount
of information and content available online. Recommendation systems that aim to support learning activities take into account the learner’s preferences (based, for example, on the learner’s ratings) in order to provide material close to the learner’s interests, as well as the learner’s prior activity and knowledge level in order to provide material that will meet the learner’s needs (Tarus, Niu, & Mustafa, 2018).
The most common recommendation techniques used in educational contexts, as described
in (Burke, 2007) and (Adomavicius & Tuzhilin, 2005), are:
a) Collaborative filtering. The learner receives recommendations based on the learner’s
similarity with other learners regarding their preferences. Similarity of preferences is
computed based on learners’ ratings (a minimal sketch of this technique follows this list);
b) Content-based recommendations. The learner receives as recommendations items that have content similar to the ones the learner visited or rated positively in the past.
c) Social-network recommendations. The learner receives recommendations based on
her/his profile and his/her social status.
d) Knowledge-based or ontology-based recommendations. In this case, the learner
receives as recommendations items that relate to the learner’s needs and items that
relate with other items the learner has viewed. Therefore, an ontology-based
recommendation system needs to maintain explicit descriptions of concepts in the
learning domain that the learner is studying and in addition, the system needs to
maintain a representation of the learner’s knowledge.
e) Hybrid recommendations. The recommendations are provided using a combination of
aforementioned recommendation methods.
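To make technique (a) above more concrete, the sketch below implements a toy user-based collaborative filter: it computes cosine similarity between learners’ rating vectors and scores unrated resources by similarity-weighted ratings. The learners, resources and ratings are invented for illustration.

```python
import numpy as np

# Toy user-based collaborative filtering over a learner x resource rating matrix.
# Ratings (1-5, 0 = not rated) are invented for illustration.
resources = ["video_fractions", "quiz_fractions", "reading_decimals", "game_geometry"]
ratings = np.array([
    [5, 4, 0, 0],   # learner 0: has not seen the last two resources
    [4, 5, 5, 1],   # learner 1: similar tastes to learner 0
    [1, 0, 5, 4],   # learner 2: rather different tastes
], dtype=float)


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def recommend(target: int) -> str:
    """Recommend the unrated resource with the highest similarity-weighted rating."""
    others = [o for o in range(len(ratings)) if o != target]
    sims = [cosine(ratings[target], ratings[o]) for o in others]
    scores = {}
    for idx, resource in enumerate(resources):
        if ratings[target, idx] == 0:  # only score resources the learner has not rated
            scores[resource] = sum(s * ratings[o, idx] for s, o in zip(sims, others))
    return max(scores, key=scores.get)


# Learner 0 should be recommended the resource liked by the similar learner 1.
print(recommend(target=0))
```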
According to a recent literature review carried out by Tarus, Niu, and Mustafa (2018),
ontology-based recommendation systems are the most popular systems used in
educational contexts. They have been used in various e-learning scenarios in order to
recommend additional reading material to students (Sosnovsky, Hsiao, & Brusilovsky,
2012), to suggest learning goals to learners based on their knowledge state (Capuano,
Gaeta, Ritrovato, & Salerno, 2014) or to assist teachers in the design of learning scenarios
by recommending appropriate teaching-learning techniques (Mota, de Carvalho, & Reis,
2014).
In the context of OER and open repositories, recommendation systems are used to
generate recommendations of learning resources and materials using content features and
ratings as well as learner-specific features, such as the learner’s history and background
knowledge. For example, Ruiz-Iniesta et al. (2014) proposed a knowledge-based strategy to recommend educational resources from an OER for a computer science major (Ruiz-Iniesta, Jimenez-Diaz, & Gómez-Albarrán, 2014). To do that, they used keywords describing the domain learning topics covered by the OER. An evaluation of their system suggested that learners were able to find resources faster. Additionally, the recommendations that learners received were satisfactory with respect to their interest
and existing knowledge. Similarly, Shelton et al. (2010) proposed and evaluated an
ontology-based recommendation system (“Folksemantic”) and a recommendation
functionality that can be used in order to provide personalized recommendations in
OERs (Shelton, Duffin, Wang, & Ball, 2010).
Zhuhadar and Nasraoui (2010) proposed a hybrid recommender system that aimed to
personalize user experience on the HyperManyMedia OER (Zhuhadar & Nasraoui, 2010).
This repository contains and offers educational content consisting of online courses,
textbooks, multimedia and other learning materials at Western Kentucky University. The recommendations of learning resources were provided to learners based on the resources’ content and with respect to the learner’s interests.
3.3 Computational tools to support teachers
Recent reports show the benefits of OER use: for example, courses using OER material achieve cost savings while demonstrating the same or even better learning outcomes than courses using traditional textbooks. However, these reports also reveal significant barriers to implementation.
Many teachers and instructors are not familiar with OER and do not know how to use them; many teachers said that OER take too much time to implement, while for some disciplines no good-quality repository with accompanying material, such as tests or homework, is even available (https://www.insidehighered.com/digital-learning/article/2018/11/16/north-dakota-audit-reports-significant-cost-savings-after-oer).
Thus, the need for computational approaches to support teachers and instructors is evident. In
particular, AI and ML approaches aim to support teachers in two ways:
1. By assisting teachers’ practice.
Such tools use information about students’ activity in order to provide teachers with indications or evidence and to support them in student assessment. Teachers can benefit from the accumulation and analysis of students’ data traces by receiving input
about an individual learner, a learning group or a whole class (Sergis et al., 2017). This
information is usually aggregated, processed and visualized using computational
approaches and delivered to teachers through teacher dashboards.
Teacher dashboards typically present informative statistics and visualizations of
“meaningful” student activity, that is student actions that may indicate either learning
or some kind of disruption of the learning process, either on the class or the individual
level (Dyckhoff, Zielke, Bültmann, Chatti, & Schroeder, 2012). Teacher dashboards
that build on Intelligent Tutoring Systems (ITSs) present information with respect to
skills mastery, such as the number of students who have mastered each skill, the number of skills each student has mastered, or comparisons between students and mastery levels per skill, and so on (Xhakaj, Aleven, & McLaren, 2017). Some dashboards
provide feedback to teachers regarding student’s practice by comparing the activity of
students to either ideal solutions or to other students’ practices (Constantino-González
& Suthers, 2007). Others provide “snapshots” of students’ work to teachers, along with textual content coming from students’ activity (Voyiatzaki & Avouris, 2014). Teacher
dashboards allow teachers to be confident in their assessments and the decisions they
make (Van Leeuwen, Janssen, Erkens, & Brekelmans, 2015). However, related studies
show that the confidence of teachers decreases when their workload increases. When this happens, dashboards might add to the workload, especially if they offer visualizations or indicators that need to be further interpreted. To address this issue,
some dashboards provide automatic assessments of student’s performance (Chounta &
Avouris, 2016) or explicit alerts of potential problems (Holstein, McLaren, & Aleven, 2017). Such dashboards aim to assist teachers in adapting to students’ needs more easily and quickly, and to support them in deciding whether an intervention (and potentially what kind of intervention) is necessary (a minimal sketch of this kind of per-skill aggregation is given after this list).
2. By supporting teachers’ training.
In order to assist teachers and help them create learning activities and assess learning
outcomes, computational approaches focus on identifying and providing information
about the level of difficulty of learning activities and suggestions for more appropriate
activities or on exploring teacher performance data to pinpoint the characteristics of
effective teaching (Agudo-Peregrina, Iglesias-Pradas, Conde-González, & Hernández-
García, 2014; Slavuj, Meštrović, & Kovačić, 2017).
Other platforms, such as eDidaktikum (https://edidaktikum.ee/), aim to facilitate teachers’ training by using
learning analytics and competency models to support teachers in acquiring
competences necessary for teaching. The platform of eDidaktikum offers tools for
teachers to organize materials and learning activities, but most importantly, it offers
tools to associate materials with competencies and to track how students’ competencies
evolve over time. Specifically, eDidaktikum implements a competency model for teacher training to support teachers in tracking their own practice and learning.
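As a minimal sketch of the per-skill aggregation mentioned in point 1 above, the example below summarizes student-model estimates into the kind of class-level overview a teacher dashboard could display. The students, skills, estimates and the 0.95 mastery threshold are illustrative assumptions, and the pandas library is assumed.

```python
import pandas as pd

# Toy teacher-dashboard aggregation: per-skill mastery counts from student-model output.
# Students, skills, estimates and the 0.95 mastery threshold are illustrative assumptions.
estimates = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s3", "s3"],
    "skill":   ["fractions", "decimals", "fractions", "decimals", "fractions", "decimals"],
    "p_known": [0.97, 0.62, 0.91, 0.98, 0.99, 0.40],
})

MASTERY_THRESHOLD = 0.95
estimates["mastered"] = estimates["p_known"] >= MASTERY_THRESHOLD

# One row per skill: how many students have mastered it and the class-average estimate,
# the kind of summary a dashboard could show to help a teacher decide where to intervene.
summary = estimates.groupby("skill").agg(
    students_mastered=("mastered", "sum"),
    avg_p_known=("p_known", "mean"),
)
print(summary)
```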
One of the most common criticisms of AI and ML approaches is that they are data-driven and not theoretically grounded in pedagogical reasoning (Duval, 2011; Gašević,
Dawson, & Siemens, 2015). In addition, their results can be interpreted in more than
one way leading to misunderstandings and misinterpretations (Spada, Meier, Rummel,
& Hauser, 2005). Current findings (Peña-López, 2016) point towards the need to use technology in order to support and complement – and not to replace – the classroom teacher. Therefore, it is important that any computational approach should be
understood and adopted by both learners and teachers.
3.4 Examples of use of AI and ML technologies in the European K-12 sector
Recent reviews and studies point out that schools rarely adopt or widely use the technology
that researchers develop for educational purposes. This gap is attributed either to teachers
having other issues of higher priority than the ones addressed by educational technologies, or to technology being perceived – or, sometimes, described – as awkward to use (Donaldson,
Ntarmos, & Portelli, 2017). A detailed report on educational platforms used in K-12 and their
outcomes - published as part of the 2016 JRC Science for Policy Report (Ferguson et al., 2016)
- shows that either the “clever” educational tools are rarely used in the classroom consistently or there is little or no information at all about their actual use and outcomes. Furthermore, AI and ML technologies are mostly used “implicitly”, that is, as part of the backbone of educational platforms. In most cases, AI and ML technologies are considered a black box and they are
seldom visible (if visible at all) to the end-user.
Knewton (http://www.knewton.com) – one of the biggest adaptive learning platforms – has been used by more than 10 million students globally, either as standalone technology or as a component of custom-made
local platforms. Its user base is diversified geographically (North and South America, Europe,
Africa, Asia, Australia) but also with respect to the education level (K-12, higher education, corporate training, vocational education). Even though there have been some indications of improvements in students’ retention and academic achievement, there were concerns about the way these indications were obtained (Ferguson et al., 2016). Furthermore, there were concerns
and criticism regarding the way corporate platforms like Knewton can be adapted to educational contexts (https://www.insidehighered.com/news/2013/01/25/arizona-st-and-knewtons-grand-experiment-adaptive-learning).
A successful example of integrating adaptive and personalized learning spaces into K-12
school classrooms is the European-funded initiative of Go-Lab (https://www.golabz.eu/) and its successor, Next-Lab (http://nextlab.golabz.eu/).
Go-Lab is an online experimentation platform that implements virtual labs and that aims to
support inquiry learning and to promote innovative and interactive teaching methods in primary
and secondary schools. The platform uses learning analytics (in the form of apps) to provide
information to teachers and students about their progress and knowledge status. Content is
offered in more than 50 languages and teachers can share material and participate in teacher
training and other social events (de Jong, Sotiriou, & Gillet, 2014; Gillet, De Jong, Sotirou, &
Salzmann, 2013; Sergis et al., 2017).
Third Space Learning, an online platform that specializes in math teaching and school leadership, uses AI technologies to support teachers in improving their practices. To do that, the platform monitors lessons and provides real-time alerts to teachers regarding their teaching practice. Third Space Learning was used in Pakeman primary school, suggesting improvements in students’ achievement in math (https://thirdspacelearning.com/blog/secrets-pupil-premium-award-winning-school-pakeman-primary/), and also in collaboration with University College London in order to point out what makes a teaching strategy successful (https://www.theguardian.com/technology/2016/dec/26/could-online-tutors-and-artificial-intelligence-be-the-future-of-teaching).
With respect to measuring learning outcomes, many teachers use specific-purpose applications (such as Pear Deck and EdPuzzle, or even the game-like environment Kahoot) in order to monitor students’ achievement, and they believe that data-driven computational approaches can lead to innovative, sustainable and, most importantly, positive changes in the classroom (https://www.edsurge.com/news/2016-10-25-not-just-numbers-how-educators-are-using-data-in-the-classroom).
Other initiatives, such as IBM Watson Education along with Edmodo and Scholastic, aim to use AI in order to improve learning outcomes and students’ achievement specifically for K-12. The idea is to use AI as personalized, adaptive learning assistants that, on the one hand, will support students in acquiring the skills directed by the curriculum and, on the other hand, will help teachers understand each student’s individual learning process and accordingly create personalized material (https://www.ibm.com/blogs/watson/2018/06/using-ai-to-close-learning-gap/). It is still, however, not clear, to the best of our knowledge, how frequently this technology is used in the European K-12 sector and to what end.
The New Media Consortium (NMC) / Consortium for School Networking (CoSN) Horizon
Report for 2017 (Freeman, Becker, & Cummins, 2017) provides a detailed roadmap of trending
innovative practices and technologies for K-12, as well as their projected short- and mid-term impact within the next five years. Based on the findings of the NMC/CoSN Horizon report, as
well as the JRC Science for Policy report, it is evident that it takes a long time for new technologies to be actually adopted in the classroom. In this sense, it is hard to predict how new, “smart” technologies will impact the educational sector in practice. Nowadays, technological breakthroughs are happening too fast for everyone to follow. This, however, seems not to hold for young people, who are exposed to the widespread, everyday use of technology from an early age and who were for this reason named the “Digital Natives”
(Tapscott & Barry, 2009). Nonetheless, the familiarity that students often show with digital
media, devices and so on, does not presuppose that they fully understand their operation or the
fundamental technological principles on which they operate. Consequently, this means that the
familiarity of students with technology can become the source of misconceptions,
misinterpretations and, especially in the context of OER, misinformation and
misrepresentation. Therefore, on the one hand it is important to integrate technological
advancements in the school context and ensure that students’ knowledge of the new digital
world moves from superficial to a deep learning experience. On the other hand, the knowledge
and experience of students with technology could be used as a challenge for teachers and other
educational stakeholders to create the conditions for a new learning paradigm.
4 Challenges
The OER movement, along with the new opportunities and promising potential described in the paragraphs above, also introduced challenges that might potentially hinder the growth and impact of the movement. Such challenges include the lack of awareness among content creators about copyright issues; the lack of quality standards and quality control mechanisms that will ensure the appropriateness of open content; and the lack of sustainability mechanisms in OER initiatives that will propel them forward (Hylén, 2006). However, here
we will focus on challenges and potential pitfalls towards the integration of computational
methods. In particular, we will focus on three challenges:
1. Privacy and data protection challenges
2. Technical challenges
3. Pedagogical challenges
4.1 Privacy and data protection challenges
AI and ML methods in educational contexts strongly rely on keeping detailed records of learners’ personal data (demographics and history) and activity data (as recorded through some educational platform) and on using these data in order to provide adaptive and personalized support, tailored to the learner’s individual needs.
However, the unsolicited and non-transparent collection and use of private data has often
been criticized and it has raised both ethical and legal considerations. Especially since the
EU’s General Data Protection Regulation (GDPR, https://eugdpr.org/) came into effect in May 2018 to provide a common ground for data privacy laws across Europe and to protect the individual privacy of EU citizens, we should put emphasis on two directions:
1. Protecting and preserving the privacy of end users. By the term “end users”, we refer
to the main stakeholders of OER, open repositories and educational platforms:
teachers, learners and content creators.
2. Ensuring the ethical collection, use and sharing of data for research purposes under
the rules and regulations imposed by GDPR and ethical research guidelines. To that
end, researchers need to communicate the purposes of their research and potential
implications to stakeholders. Stakeholders should be aware what data is collected,
what the collected data will be used for, by whom it will be used, for how long it will
be stored, and how they (stakeholders) can maintain control over their data over time.
Additionally, stakeholders’ informed consent has to be provided before any
collection of data takes place.
In order to further ensure the protection of users’ privacy, the technical infrastructure that will implement the computational approaches should be carefully designed in terms of supporting the use of anonymized data and providing secure storage of user data.
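As one minimal sketch of what such a design could include, the example below replaces learner identifiers with salted one-way hashes before an activity record is stored or shared for analysis. The record fields and the salt are illustrative assumptions; in a real deployment the salt would have to be managed as a protected secret, and hashing alone does not exhaust the anonymization and security measures GDPR may require.

```python
import hashlib

# Toy pseudonymization of learner identifiers in activity logs.
# The salt and record fields are illustrative assumptions.
SALT = "replace-with-a-secret-salt"


def pseudonymize(student_id: str) -> str:
    """Replace a student identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + student_id).encode("utf-8")).hexdigest()[:16]


log_record = {"student_id": "maria.tamm@example.edu", "resource": "fractions_quiz", "score": 0.8}
safe_record = {**log_record, "student_id": pseudonymize(log_record["student_id"])}
print(safe_record)
```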
4.2 Technical challenges
The vision for open and accessible education through the distribution and reuse of learning
content advocates for global, cloud-based data infrastructures that will provide centralized
access to learning materials and learning resources (such as the Open Discovery Space CIP PSP initiative, http://www.opendiscoveryspace.eu). To the best of our knowledge, this is still an ongoing effort (according to
the initiative’s website, up to this day, more than 2500 European schools use the school
innovation toolkit) that aims to provide access to shared content as well as to train
stakeholders (mainly teachers and school leaders) in designing innovation plans, designing
educational activities and producing and assessing educational content.
However, at the same time these centralized repositories should also facilitate user
experience and practice by integrating “intelligent” support for multiple aspects of user
activity, such as personalization of the learning environment, adaptive instruction,
personalized recommendations, and tailored feedback. A potential challenge may arise from
the complexity and high cost of maintaining systems that can provide computational power
for such demanding tasks. To that end, we could also consider cloud-based solutions that
will choose computational approaches with respect to efficiency from an open and
centralized repository of AI and ML tools. An example of such a centralized repository is the LearnSphere initiative (http://learnsphere.org/) that combines educational data from multiple data sources and analytical methods from various online workbenches.
4.3 Pedagogical Challenges
OERs provide the opportunity for learners to gain access to high-quality learning materials
that they can access from their own private space. This empowers the individual learner but
at the same time it may raise pedagogical challenges regarding social aspects of learning and
the role of the individual as a member of a community of practice. It has been argued that in
OERs individual learners may “find themselves adrift in an ocean of information, struggling
to solve ill-structured problems, with little clear idea of how to solve them, or how to
recognize when they have solved them” (Shum & Ferguson, 2012). By using AI and ML
methods to support adaptation and personalization, we may be reinforcing the isolation of the individual learner and putting community-building practices at risk. Therefore, our
computational approaches should on one hand focus on personalization and adaptation to
address the learner’s needs but on the other hand we should consider scaffolding
communication and fostering collaboration and the acquisition of social skills that will
empower learning in a social arena.
Another pedagogical challenge that we have to address is how to support content creators in
designing high-quality content and content distributors in locating and accessing high-quality content. To address this challenge, it is crucial to identify what we mean by “high-quality”. In short, a good OER should meet the following guidelines (Kawachi, 2014; Shank, 2013):
a. It should be easily discoverable, accessible in multiple formats and multiple
locations, and include transcripts/subtitles if needed;
b. It should provide accurate and relevant information without errors, have clearly described content, and be free-standing (not assuming knowledge of other resources);
c. It should be free of copyright and it should not require any license for educational reuse or modification/adaptation of the materials;
d. It should be easy to modify, easy to navigate and of good production quality;
e. It should provide scaffolding and encourage active learning and opportunities for
practicing and testing students’ understanding of the material.
In order to assess relevance of information, semantic search approaches such as concept
matching and natural language processing have been widely used. Similarly, accessibility
guidelines that were originally developed to ensure accessibility of web content (like for
example, the Web Content Accessibility Guidelines 2.0, WCAG 2.0, https://www.w3.org/WAI/standards-guidelines/wcag/) are now used to assess
whether OER content is easily accessible. Furthermore, OER-specific search engines have also
been brought together to support stakeholders in retrieving good-quality material.
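As a rough illustration of this kind of relevance matching, the sketch below ranks resource descriptions against a learner query using TF-IDF cosine similarity; the descriptions are invented, and TF-IDF is only a simple stand-in for the concept-matching and semantic-search approaches mentioned above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy relevance matching between a learner query and OER descriptions.
# The descriptions are invented; TF-IDF similarity stands in for richer semantic search.
descriptions = [
    "Interactive lesson on adding and subtracting fractions for grade 5",
    "Video lecture introducing photosynthesis and plant biology",
    "Worksheet with practice problems on fractions and decimals",
]
query = ["how to add fractions with unlike denominators"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(descriptions)
query_vector = vectorizer.transform(query)

# Rank resources by similarity to the query.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, text in sorted(zip(scores, descriptions), reverse=True):
    print(f"{score:.2f}  {text}")
```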
However, we should keep in mind that quality of content largely depends on the needs and
perceptions of the end-user. In this sense, quality assurance and validation cannot come from
a top-down approach that will perform a centralized quality control (http://www.oerafrica.org/healthoer/AfricanHealthOERVisionStatement/tabid/955/Default.aspx). On the contrary, such
quality control should take into account the end-users, engage them proactively in providing
feedback and suggestions for enhancements, support them in adapting the content themselves
and sharing it with the community. In other words, quality control can be envisioned as a
bottom-up approach, empowered by big data, AI and ML technologies and crowd-sourcing
models.
5 Conclusion
In this report, we provide an overview of the most prominent AI and ML practices used in
educational contexts focusing on OER and educational portals aiming to support K-12
education. To that end, we provided definitions and descriptions of the terms that were
referenced in this paper, a short historical overview of AI and ML in education and an
overview of the goals and common practices of the use of computational methods (AI and
ML) in educational contexts.
The benefits of adopting AI and ML computational methods in educational portals and OERs are threefold:
a) Delivering to stakeholders the tools to choose, effectively and efficiently, appropriate resources and content that meet their standards;
b) Allowing the adaptation and personalization of learning content and also the
adaptation and personalization of the learning environment itself in order to address
the characteristics of the individual learner;
c) Providing the means to learners to gain and maintain an overview of their practices
and gain control not only over the resources they are using but also with respect to
their activity and performance through monitoring and adaptive feedback.
Future efforts should support local structures and directives to move forward with the integration of computational approaches into existing OER and educational portals. As
aforementioned, evidently there is a need for cloud-based, large infrastructures that will offer
centralized access not only to learning materials but also to user data and computational
tools. An “intelligent” infrastructure will support the scaling of computational methods in
OER and educational platforms, bring together approaches from different educational
contexts and methodological traditions and will enable the emergence of new technological
advances.
However, in order for such an infrastructure to be successfully and consistently implemented
and used, there is a need for a methodological framework that will encourage and orchestrate stakeholders in undertaking common actions that would support the integration and use of AI
and ML in OER and educational portals on a large scale. This will require the support of
different entities, such as educational and academic institutions that will provide quality
assurance with respect to the pedagogical value and benefits and policy makers on a national
and EU level. This is a crucial issue since attempting to create a common infrastructure that
would allow sharing of learning materials, learning data and analytical tools across different
educational settings and beyond national borders would entail not only pedagogical and
technological but also legal and ethical challenges.
5.1 Looking ahead: Taking the next step to promote AI and ML integration
in European schools
In order to achieve successful integration of AI and ML technologies in the classroom, we
first and foremost have to bridge the gap between the rapid research and technological
advancements in AI and ML and the slow – or even no – adaptation of such technologies in
schools. One step towards that direction could be:
a. to support stakeholders (teachers, students, school directors, policy makers) in familiarizing themselves with AI and ML research;
b. to demonstrate to stakeholders that AI tools are not black boxes but actual products of rigorous scientific research; and
c. to carry out long-term initiatives (such as hands-on workshops) where teachers and students will be guided on how to properly adopt AI and ML technologies in their classrooms without disruptions.
A second step would be to involve stakeholders when designing cutting-edge computational
tools. It is a common feeling among teachers, and a valid concern, that there is “a big disconnect … between the technology we receive versus the tech we want” and that “it’s not about the data, but how do we apply it. The reason why this technology sucks is because we don’t do good design” (https://www.edsurge.com/news/2018-09-26-what-can-machine-learning-really-predict-in-education). Therefore, in order to address such concerns, it is necessary that we
involve stakeholders (teachers, school leaders, students and policy makers) when designing
educational technology. This can be achieved by using a socio-technical participatory
approach where technology experts along with stakeholders identify prominent needs and
requirements and then work together to develop a solution for addressing these needs. A
similar approach was followed by the European-funded project SHEILA (http://sheilaproject.eu/) that aimed to build a policy development framework that would promote formative assessment and personalized learning.
In the case of using AI and ML to facilitate OER in the European K-12 sector, we envision a project whose purpose will be to create complementary sociotechnical-pedagogical prototypes and designs-in-practice that help a) learners to become critical-constructive and reflective thinkers in cultures of sharing and participation and b) teachers to become the “process designers” of the critical thinkers. In a digital networked world, teaching is not
only seen as the creation of conditions for enabling learning; most importantly, teaching is
creating conditions for learning through discovering, understanding, and critically assessing
material that can be found openly. Borrowing the acronym from the Bring Your Own Device (BYOD) movement (Bennett & Tucker, 2012), which refers to workers who use their own personal devices in their workplace, we envision the BYOR movement: Bring Your Own Resources. In this context, students are expected to retrieve relevant learning
resources for meaningful activities designed by teachers, who are also responsible for
ensuring that the learning objectives have been met. The role of AI and ML in this
project would be threefold:
1. to support students in retrieving appropriate learning materials and provide feedback
that could be used for improvement;
2. to assist teachers in assessing the students’ materials and provide further suggestions
to students with respect to quality control;
3. to implement a crowd-sourcing-based infrastructure that will communicate students’ and teachers’ findings and suggestions in order to create a bottom-up quality control approach that can be shared and re-used by stakeholders with similar needs.
We foresee that such an effort could substantially promote the vision of open education
where learners can freely access resources and practice competencies that will help them acquire the skills needed for functioning in the digital age, teachers can ensure that the learning objectives are achieved, and OER end-users can contribute actively to setting the bar with respect to quality control and standards.
References
Adomavicius, G., & Tuzhilin, A. (2005). Toward the next generation of recommender systems: A survey of the
state-of-the-art and possible extensions. IEEE Transactions on Knowledge & Data Engineering, (6), 734–749.
Agudo-Peregrina, Á. F., Iglesias-Pradas, S., Conde-González, M. Á., & Hernández-García, Á. (2014). Can we
predict success from log data in VLEs? Classification of interactions for learning analytics and their
relation with performance in VLE-supported F2F and online learning. Computers in Human Behavior,
31, 542–550.
Albacete, P., Jordan, P. W., Lusetich, D., Chounta, I.-A., Katz, S., & McLaren, B. M. (2018). Providing Proactive
Scaffolding During Tutorial Dialogue Using Guidance from Student Model Predictions. In Artificial
Intelligence in Education (AIED 2018).
Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American Psychologist, 51(4), 355.
Bannert, M., Reimann, P., & Sonnenberg, C. (2014). Process mining techniques for analysing patterns and
strategies in students’ self-regulated learning. Metacognition and Learning, 9(2), 161–185.
Bell, S. (2010). Project-based learning for the 21st century: Skills for the future. The Clearing House, 83(2), 39–43.
Bennett, L., & Tucker, H. (2012). Bring your own device. ITNow, 54(1), 24–25.
Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.
Burke, R. (2007). Hybrid web recommender systems. In The adaptive web (pp. 377–408). Springer.
Butcher, N. (2015). A basic guide to open educational resources (OER). Commonwealth of Learning (COL).
Capuano, N., Gaeta, M., Ritrovato, P., & Salerno, S. (2014). Elicitation of latent learning needs through learning
goals recommendation. Computers in Human Behavior, 30, 663–673.
https://doi.org/10.1016/j.chb.2013.07.036
Cen, H., Koedinger, K., & Junker, B. (2008). Comparing two IRT models for conjunctive skills. In International
Conference on Intelligent Tutoring Systems (pp. 796798). Springer.
Chi, M., Koedinger, K. R., Gordon, G. J., Jordon, P., & VanLahn, K. (2011). Instructional factors analysis: A
cognitive model for multiple instructional interventions.
Chounta, I.-A., Albacete, P., Jordan, P., Katz, S., & McLaren, B. M. (2017). The “Grey Area”: A computational
approach to model the Zone of Proximal Development. In European Conference on Technology
Enhanced Learning (pp. 316). Springer.
Chounta, I.-A., & Avouris, N. (2015). Towards a time series approach for the classification and evaluation of
collaborative activities. Computing and Informatics, 34(3), 588614.
Chounta, I.-A., & Avouris, N. (2016). Towards the real-time evaluation of collaborative activities: Integration of
an automatic rater of collaboration quality in the classroom from the teacher’s perspective. Education
and Information Technologies, 21(4), 815835.
Chounta, I.-A., Hecking, T., Hoppe, H. U., & Avouris, N. (2014). Two Make a Network: Using Graphs to Assess
the Quality of Collaboration of Dyads. In Collaboration and Technology (pp. 5366). Springer.
Retrieved from http://link.springer.com/chapter/10.1007/978-3-319-10166-8_5
Constantino-González, M. de los Á., & Suthers, D. D. (2007). An approach for coaching collaboration based on
difference recognition and participation tracking. In H. U. Hoppe, H. Ogata, & A. Soller (Eds.), The Role
of Technology in CSCL (pp. 87113). Springer US. Retrieved from
http://link.springer.com/chapter/10.1007/978-0-387-71136-2_6
Corbett, A. T., Koedinger, K. R., & Anderson, J. R. (1997). Intelligent tutoring systems. In Handbook of Human-
Computer Interaction (Second Edition) (pp. 849874). Elsevier.
d Baker, R. S., Corbett, A. T., & Aleven, V. (2008). More accurate student modeling through contextual estimation
of slip and guess probabilities in bayesian knowledge tracing. In Intelligent Tutoring Systems (pp. 406
415). Springer.
de Jong, T., Sotiriou, S., & Gillet, D. (2014). Innovations in STEM education: the Go-Lab federation of online
labs. Smart Learning Environments, 1(1), 3.
Donaldson, P., Ntarmos, N., & Portelli, K. (2017). A Systematic Review of the Potential of Machine Learning
and Data Science in Primary and Secondary Education.
Duval, E. (2011). Attention Please!: Learning Analytics for Visualization and Recommendation. In Proceedings
of the 1st International Conference on Learning Analytics and Knowledge (pp. 917). New York, NY,
USA: ACM. https://doi.org/10.1145/2090116.2090118
Dyckhoff, A. L., Zielke, D., Bültmann, M., Chatti, M. A., & Schroeder, U. (2012). Design and Implementation
of a Learning Analytics Toolkit for Teachers. Educational Technology & Society, 15(3), 5876.
Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., … Vuorikari, R. (2016). Research evidence on the use of learning analytics: Implications for education policy.
Ferschke, O., Howley, I., Tomar, G., Yang, D., Liu, Y., & Rosé, C. P. (2015). Fostering discussion across communication media in massive open online courses. International Society of the Learning Sciences, Inc. [ISLS].
Ferschke, O., Yang, D., Tomar, G., & Rosé, C. P. (2015). Positive impact of collaborative chat participation in an edX MOOC. In International Conference on Artificial Intelligence in Education (pp. 115–124). Springer.
Freeman, A., Becker, S. A., & Cummins, M. (2017). NMC/CoSN Horizon Report: 2017 K–12 Edition. The New Media Consortium.
Gašević, D., Dawson, S., & Siemens, G. (2015). Let's not forget: Learning analytics are about learning. TechTrends, 59(1), 64–71.
Gillet, D., De Jong, T., Sotiriou, S., & Salzmann, C. (2013). Personalised learning spaces and federated online labs for STEM education at school. In Global Engineering Education Conference (EDUCON), 2013 IEEE (pp. 769–773). IEEE.
Hawkins, J., Sheingold, K., Gearhart, M., & Berger, C. (1982). Microcomputers in schools: Impact on the social life of elementary classrooms. Journal of Applied Developmental Psychology, 3(4), 361–373.
Hecking, T., Chounta, I. A., & Hoppe, H. U. (2017). Role modelling in MOOC discussion forums. Journal of Learning Analytics, 4(1), 85–116.
Holstein, K., McLaren, B. M., & Aleven, V. (2017). Intelligent tutors as teachers' aides: Exploring teacher needs for real-time analytics in blended classrooms. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (pp. 257–266). ACM.
Hylén, J. (2006). Open educational resources: Opportunities and challenges. Proceedings of Open Education, 49–63.
Kawachi, P. (2014). Quality assurance guidelines for open educational resources: TIPS framework. Commonwealth Educational Media Centre for Asia (CEMCA).
Martínez, A., Dimitriadis, Y., Rubia, B., Gómez, E., & de la Fuente, P. (2003). Combining qualitative evaluation and social network analysis for the study of classroom social interactions. Computers & Education, 41(4), 353–368. https://doi.org/10.1016/j.compedu.2003.06.001
McCarthy, J. (1998). What is artificial intelligence?
Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (2013). Machine learning: An artificial intelligence approach. Springer Science & Business Media.
Mota, D., de Carvalho, C. V., & Reis, L. P. (2014). OTILIA – An architecture for the recommendation of teaching-learning techniques supported by an ontological approach. In Frontiers in Education Conference (FIE), 2014 IEEE (pp. 1–7). IEEE.
Mu, J., Stegmann, K., Mayfield, E., Rosé, C., & Fischer, F. (2012). The ACODEA framework: Developing segmentation and classification schemes for fully automatic analysis of online discussions. International Journal of Computer-Supported Collaborative Learning, 7(2), 285–305.
Nwana, H. S. (1990). Intelligent tutoring systems: An overview. Artificial Intelligence Review, 4(4), 251–277.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc.
Pavlik Jr, P. I., Cen, H., & Koedinger, K. R. (2009). Performance Factors Analysis – A New Alternative to Knowledge Tracing. Online Submission.
Peña-López, I. (2016). Innovating Education and Educating for Innovation. The Power of Digital Technologies and Skills.
Reimann, P. (2009). Time is precious: Variable- and event-centred approaches to process analysis in CSCL research. International Journal of Computer-Supported Collaborative Learning, 4(3), 239–257. https://doi.org/10.1007/s11412-009-9070-z
Ruiz-Iniesta, A., Jimenez-Diaz, G., & Gómez-Albarrán, M. (2014). A semantically enriched context-aware OER recommendation strategy and its application to a computer science OER repository. IEEE Transactions on Education, 57(4), 255–260.
Salehi, M., Kamalabadi, I. N., & Ghoushchi, M. B. G. (2013). An effective recommendation framework for personal learning environments using a learner preference tree and a GA. IEEE Transactions on Learning Technologies, 6(4), 350–363.
Sergis, S., Sampson, D. G., Rodríguez-Triana, M. J., Gillet, D., Pelliccione, L., & de Jong, T. (2017). Using educational data from teaching and learning to inform teachers' reflective educational design in inquiry-based STEM education. Computers in Human Behavior.
Shank, J. D. (2013). Interactive open educational resources: A guide to finding, choosing, and using what's out there to transform college teaching. John Wiley & Sons.
Shelton, B. E., Duffin, J., Wang, Y., & Ball, J. (2010). Linking open course wares and open education resources: Creating an effective search and recommendation system. Procedia Computer Science, 1(2), 2865–2870.
Shum, S. B., & Ferguson, R. (2012). Social Learning Analytics, 24.
Siemens, G. (2013). Learning analytics: The emergence of a discipline. American Behavioral Scientist, 57(10), 1380–1400.
Slavuj, V., Meštrović, A., & Kovačić, B. (2017). Adaptivity in educational systems for language learning: A review. Computer Assisted Language Learning, 30(1–2), 64–90.
Sosnovsky, S., Hsiao, I.-H., & Brusilovsky, P. (2012). Adaptation "in the Wild": Ontology-based personalization of open-corpus learning material. In European Conference on Technology Enhanced Learning (pp. 425–431). Springer.
Spada, H., Meier, A., Rummel, N., & Hauser, S. (2005). A New Method to Assess the Quality of Collaborative Process in CSCL. In Proceedings of the 2005 Conference on Computer Support for Collaborative Learning: Learning 2005: The Next 10 Years! (pp. 622–631). Taipei, Taiwan: International Society of the Learning Sciences. Retrieved from http://dl.acm.org/citation.cfm?id=1149293.1149375
Suthers, D., Wise, A., Schneider, B., Shaffer, D., Hoppe, H. U., & Siemens, G. (2015). Learning Analytics of and in Mediational Processes of Collaborative Learning. In Proceedings of the 11th International Conference on Computer Supported Collaborative Learning (Gothenburg, Sweden) (pp. 26–30). ICLS.
Tapscott, D., & Barry, B. (2009). Grown up digital: How the net generation is changing your world (Vol. 200). McGraw-Hill, New York.
Tarus, J. K., Niu, Z., & Kalui, D. (2018). A hybrid recommender system for e-learning based on context awareness and sequential pattern mining. Soft Computing, 22(8), 2449–2461.
Tarus, J. K., Niu, Z., & Mustafa, G. (2018). Knowledge-based recommendation: A review of ontology-based recommender systems for e-learning. Artificial Intelligence Review, 50(1), 21–48.
Van Leeuwen, A., Janssen, J., Erkens, G., & Brekelmans, M. (2015). Teacher regulation of cognitive activities during student collaboration: Effects of learning analytics. Computers & Education, 90, 80–94.
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.
VanLehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rosé, C. P. (2007). When are tutorial dialogues more effective than reading? Cognitive Science, 31(1), 3–62.
Verbert, K., Duval, E., Klerkx, J., Govaerts, S., & Santos, J. L. (2013). Learning analytics dashboard applications. American Behavioral Scientist, 57(10), 1500–1509.
Verbert, K., Manouselis, N., Ochoa, X., Wolpers, M., Drachsler, H., Bosnic, I., & Duval, E. (2012). Context-aware recommender systems for learning: A survey and future challenges. IEEE Transactions on Learning Technologies, 5(4), 318–335.
Wen, M., Yang, D., & Rosé, C. P. (2014). Linguistic Reflections of Student Engagement in Massive Open Online Courses. In ICWSM.
Xhakaj, F., Aleven, V., & McLaren, B. M. (2017). Effects of a teacher dashboard for an intelligent tutoring system on teacher knowledge, lesson planning, lessons and student learning. In European Conference on Technology Enhanced Learning (pp. 315–329). Springer.
Yang, D., Wen, M., Kumar, A., Xing, E. P., & Rose, C. P. (2014). Towards an integration of text and graph clustering methods as a lens for studying social interaction in MOOCs. The International Review of Research in Open and Distributed Learning, 15(5).
Yudelson, M. V., Koedinger, K. R., & Gordon, G. J. (2013). Individualized Bayesian Knowledge Tracing Models. In H. C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), Artificial Intelligence in Education (Vol. 7926, pp. 171–180). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-39112-5_18
Zhuhadar, L., & Nasraoui, O. (2010). A hybrid recommender system guided by semantic user profiles for search in the e-learning domain. Journal of Emerging Technologies in Web Intelligence, 2(4).
Ziebarth, S., Chounta, I.-A., & Hoppe, H. U. (2015). Resource access patterns in exam preparation activities. In Design for Teaching and Learning in a Networked World (pp. 497–502). Springer, Cham.
Intelligent tutoring systems (ITSs) are commonly designed to enhance student learning. However, they are not typically designed to meet the needs of teachers who use them in their classrooms. ITSs generate a wealth of analytics about student learning and behavior, opening a rich design space for real-time teacher support tools such as dashboards. Whereas real-time dashboards for teachers have become popular with many learning technologies, we are not aware of projects that have designed dashboards for ITSs based on a broad investigation of teachers' needs. We conducted design interviews with ten middle school math teachers to explore their needs for on-the-spot support during blended class sessions, as a first step in a user-centered design process of a real-time dashboard. Based on multi-methods analyses of this interview data, we identify several opportunities for ITSs to better support teachers' needs, noting that the analytics commonly generated by existing teacher support tools do not strongly align with the analytics teachers expect to be most useful. We highlight key tensions and tradeoffs in the design of such real-time supports for teachers, as revealed by "Speed Dating" possible futures with teachers. This paper has implications for our ongoing co-design of a real-time dashboard for ITSs, as well as broader implications for the design of ITSs that can effectively collaborate with teachers in classroom settings.