Cognitive Tutors:
The End of Boredom and Confusion?
Term Paper
Lukas Gebhard
Submitted on 01/08/2018
Albert-Ludwigs-Universität Freiburg
Department of Computer Science
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/.

Abstract

In the field of e-learning, many approaches have been tried to create adaptive digital
learning environments. Based on interactions with the student, these systems make infer-
ences about individual pre-knowledge, aptitude, affect and/or progress, and dynamically
adapt the student’s instruction accordingly. Cognitive tutors are an example of such
systems. Based on a model of human cognition, a cognitive tutor tracks the process of
knowledge acquisition and can thus select appropriate teaching contents. Keeping track
of the student’s cognitive states also allows it to identify common mistakes and to guide
the student through exercises by giving contextual hints and feedback. In this article, I
outline the general concept of cognitive tutors. In particular, I focus on cognitive tutors
based on the prominent ACT-R theory to demonstrate how a cognitive tutor can be
implemented. Then I move on to analyse the potential of cognitive tutors. I point out
that, while they are good at adapting to individual students, they have some remarkable
shortcomings. Most notably, their use is limited to teaching how to solve well-defined
and well-structured problems. Moreover, they are highly domain-specific and their de-
velopment is expensive. As a conclusion, a cognitive tutor’s benefit largely depends on
the actual use case.
Contents

1 Introduction
2 Theoretical Foundations
  2.1 Types of Knowledge
  2.2 ACT-R
3 Concept
  3.1 Definition
  3.2 Architecture
  3.3 Cognitive Tutors based on ACT-R
    3.3.1 The Domain Model
    3.3.2 The Student Model
    3.3.3 The Tutoring Model
    3.3.4 The User Interface
4 Discussion
  4.1 Empirical Results
  4.2 Range of Application
  4.3 Development Costs
  4.4 Discrepancies with Constructivism
  4.5 Adaptivity of Instruction
5 Conclusions
Bibliography
1 Introduction
For practical reasons, teachers in traditional classroom settings cannot adapt teaching
to each student individually. Of course, they can vary teaching methods, provide exer-
cises of various difficulty levels, and at times even consider some students individually.
However, as they have to take care of many things in a lesson (e.g., classroom manage-
ment, presenting new learning contents, answering questions, assessing students, giving
instructions), most of the time they need to generalise instruction to some extent. This
leads to undesirable situations. For example, some students may be bored because they
are already familiar with the presented topics, others may not understand what is going on due to a lack of pre-knowledge, and still others may be overwhelmed because they require a slower learning pace.
Intelligent tutoring systems (ITS) are a promising approach to tackle this problem. These
digital learning environments adapt to each student individually by accounting for their
pre-knowledge, aptitude and/or emotions and thus allowing them to learn at their own
pace. Today, there is a wide range of ITSs, focussing on different application areas,
based on various theories, and implemented using all kinds of techniques.
A prominent type of ITS is known as the cognitive tutor. What makes cognitive tutors
particularly interesting is that they model cognition, which seems reasonable in the con-
text of knowledge acquisition. Their development dates back to the mid 80s [5] when
John R. Anderson, a researcher at the U.S. university Carnegie Mellon, evaluated his
cognitive theory ACT [2]. As Anderson and his colleagues extended and improved ACT,
they also continued to develop cognitive tutors based on that theory. By 2003, their
cognitive tutors were brought to more than 1400 schools [20], and today they are still
being improved and distributed by Carnegie Learning, Inc. [8], a spin-off company of
Carnegie Mellon [20].
In this article we will have a closer look at cognitive tutors. More specifically, the main
objectives are:
1. To outline the general concept of cognitive tutors,
2. To explain how they work using the example of ACT-R-based tutors (ACT-R [3] is
the current version of ACT), and
3. To analyse their potential, especially with respect to their ability to adapt to students.
To that end, I first cover theoretical foundations in chapter 2. In chapter 3 I then outline
what cognitive tutors are and how they work. Finally, I discuss strengths and limitations
of cognitive tutors in chapter 4 and draw some conclusions in chapter 5.
2 Theoretical Foundations
Before actually turning towards cognitive tutors, in this chapter I first introduce some
theoretical basics needed to understand the following chapters.
2.1 Types of Knowledge
As any other learning environment, cognitive tutors are built to facilitate the acquisition
of knowledge. Since knowledge is a broad and unspecific term, many attempts have
been made to provide a classification of it (e.g., [16] and [1]). In the field of educational psychology, it is common to distinguish three types of knowledge [1]: declarative knowledge, procedural knowledge, and conditional knowledge. The terms are explained in Table 2.1.
Type          Description                                      Example

Declarative   The mental representation of factual             Knowing that Paris is the
              information ("knowing that").                    capital of France.

Procedural    The mental representation of skills              Knowing how to solve a
              ("knowing how").                                 system of linear equations.

Conditional   The understanding of when and where the          Knowing when a citation is
              application of declarative or procedural         required when writing a
              knowledge is appropriate ("knowing when          research article.
              and where").

Table 2.1: A common classification of knowledge (based on Alexander et al. [1]).
In addition to the above three terms, metacognitive knowledge - knowledge about knowl-
edge - is of key importance with respect to education. This is because successful learning
is based on efficient learning strategies, and these strategies themselves must be learned
as well. Metacognitive knowledge comprises all three of the above knowledge types [15].
For example, for learning to be successful it is useful to know that prior knowledge influ-
ences reading comprehension (declarative metacognitive knowledge). Moreover, it can
be beneficial to know how to summarise a text (procedural metacognitive knowledge).
In addition, it is important to know when it is actually a good idea to summarise a text
to better understand it (conditional metacognitive knowledge).
2.2 ACT-R
ACT-R is a comprehensive model of human cognition. As such it can serve as the under-
lying theory of cognitive tutors. In section 3.3, I will illustrate the concept of cognitive
tutors at the example of ACT-R-based cognitive tutors. For that reason, in this section
I give a high-level overview of ACT-R, mostly based on Anderson et al. [4].
Originally, ACT-R consisted of "a theory of the nature of human knowledge, a theory
of how this knowledge is deployed, and a theory of how that knowledge is acquired"
[3]. Meanwhile, it "has evolved into a theory that consists of multiple modules but also
explains how these modules are integrated to produce coherent cognition" [4]. Each of
these modules is concerned with a certain aspect of cognition, such as the memory, the
processing of visual information, the management of intentions, and the controlling of
movements, to name only a few.
One of the most fundamental assumptions of ACT-R is that knowledge can be divided
into declarative and procedural knowledge. In contrast to the threefold classification of
knowledge outlined in section 2.1, ACT-R does not explicitly consider conditional knowl-
edge. This is not contradictory, though, as ACT-R resides on a lower level of abstraction.
That is, ACT-R models how information is processed, stored, and retrieved in the brain, while the threefold classification rather emphasises high-level characteristics of each type of knowledge.
Furthermore, ACT-R assumes a centralised organisation of the modules. More precisely,
each module exchanges information with a central component called the production system, but does not directly exchange information with other modules.¹
Figure 2.1 visualises the interplay of core components in version 5.0 of ACT-R. Each
component is believed to be associated with a certain brain area which is specified in
parentheses. The following list provides high-level descriptions of the components shown
in the figure.
Buffers To communicate with the production system, each module has an interface
called the buffer. "[T]he content of any buffer is limited to a single declarative
unit of knowledge, called a chunk. Thus, only a single memory can be retrieved
at a time or only a single object can be encoded from the visual field." [4]
The declarative module This module implements a structured storage of declarative knowledge. Each chunk is associated with a level of activation. The declarative buffer always contains the most highly activated chunk. The activation of a chunk in turn depends on its relevance to the current context and its usefulness in the past. On a more abstract level, the declarative module "takes the form of a semantic net linking propositions, images, and sequences by associations" [12]. Thus, in ACT-R declarative knowledge not only comprises single facts but also an understanding of complex relationships and concepts.

¹ Anderson et al. [4] admit that this clearly over-simplifies the structure of the brain, as brain areas are directly connected with each other.

Figure 2.1: The interplay of core components in ACT-R 5.0. Image source: Anderson et al. [4]
The visual module The task of this module is to visually identify objects.
The manual module This module controls the hands.
The intentional module This module keeps track of goals and intentions. With that,
it is also required for problem solving. The process of problem solving can be seen
as the maintenance of a list of goals which have to be reached. While trying to
reach one of the goals, one may identify a sub-goal which is then appended to the
list. On reaching a goal, it is removed. Eventually, the overall problem is solved
as soon as the list is empty.
The production system As noted earlier, the production system is ACT-R’s central
component. By coordinating all of the modules, it achieves coherent behaviour
and thus implements procedural knowledge. This works as follows. At the core
lies a set of so-called production rules which are basically condition-action pairs.
A condition specifies a pattern of the buffer contents whereas an action represents
a modification of these contents. If a matching pattern is detected in the buffers,
the corresponding action is selected as a candidate for execution. However, in
each cycle only one action is executed.² To be able to take the most appropriate
action, each production rule is associated with a continuously varying value (similar
to the activation levels of chunks). These values are called utilities and learned
from experience. The production system always selects the production rule that
has the highest utility and executes the corresponding action. Thereby, it modifies
the buffer contents, and with that influences the processing that occurs in the
modules. Together with the continuous adjustment of utilities, it thus achieves
coherent behaviour, that is, procedural knowledge.
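The rule-selection cycle just described can be sketched in a few lines of code. The following is a purely illustrative Python sketch, not part of any real ACT-R implementation; all names are invented, and buffer contents are simplified to single strings standing in for chunks:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

# A buffer holds at most one chunk; here a chunk is simplified to a string.
Buffers = Dict[str, str]

@dataclass
class ProductionRule:
    name: str
    condition: Callable[[Buffers], bool]  # pattern over buffer contents
    action: Callable[[Buffers], None]     # modification of buffer contents
    utility: float                        # learned from experience

def run_cycle(rules: List[ProductionRule], buffers: Buffers) -> Optional[str]:
    """One production cycle: among all rules whose condition matches the
    buffers, fire the one with the highest utility."""
    candidates = [r for r in rules if r.condition(buffers)]
    if not candidates:
        return None
    best = max(candidates, key=lambda r: r.utility)
    best.action(buffers)
    return best.name

# Toy example: two rules compete for the same goal; utility decides.
rules = [
    ProductionRule("retrieve-fact",
                   lambda b: b.get("goal") == "answer",
                   lambda b: b.update(retrieval="Paris"),
                   utility=0.9),
    ProductionRule("give-up",
                   lambda b: b.get("goal") == "answer",
                   lambda b: b.update(retrieval="unknown"),
                   utility=0.2),
]
buffers = {"goal": "answer"}
fired = run_cycle(rules, buffers)  # "retrieve-fact" wins on utility
```

In a full model, utilities would be adjusted continuously from experience, which is what makes the selection adaptive rather than fixed.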
With regard to instructional design, it is important to note how ACT-R explains the
acquisition of skills. First, the learner takes the instruction for the task and converts
it into declarative knowledge. Then, they convert this knowledge into production rules,
thus creating procedural knowledge. After that, they do not need the declarative rep-
resentation any more but can instead carry out the task based on the production rules,
which leads to better performance.
² ACT-R models time as a sequence of cycles. In each cycle, the modules concurrently update the contents of their buffers and then the production system executes an action correspondingly.
3 Concept
Now that we have covered the basics, in this chapter we turn towards the concept of cog-
nitive tutors. Concretely, after defining what cognitive tutors actually are (section 3.1),
we have a look at their general architecture (section 3.2). Finally, in section 3.3 I outline
how cognitive tutors work using the example of ACT-R-based tutors.
3.1 Definition
Cognitive tutors are "problem solving environments constructed around cognitive models
of the knowledge students are acquiring" [11]. According to this definition, a cognitive
tutor is a special type of the more general concept of an intelligent tutoring system (ITS).
The latter simply refers to "any computer program that contains some intelligence and
can be used in learning" [14].
The term ’cognitive tutor’ was coined by researchers of the Advanced Computer Tutoring
Project at the U.S. university Carnegie Mellon in the late 80s and early 90s [5]. Their
main focus of work was to design instruction "with reference to a cognitive model of the
competence that the student is being asked to learn".
As a side note, Cognitive Tutor® is a registered trademark of the U.S. company Carnegie Learning, Inc. [7].
3.2 Architecture
As noted in section 3.1, a cognitive tutor is a special type of an ITS. Therefore, it is
not surprising that its high-level architecture is mostly identical to the architecture of an
ITS, which traditionally entails four components [14], [18]:
The domain model encodes the knowledge which is to be learned. There are
various possibilities to encode that knowledge. For instance, it can be represented
as a curriculum, an ontology, a set of constraints, or a set of rules [18].
The student model reflects "the student’s cognitive and affective states and their
evolution as the learning process advances" [18].
The tutoring model chooses tutoring strategies and actions based on input from
both the domain model and the student model.
The user interface interacts with the student and incorporates the actual learning environment.
Figure 3.1 sketches the high-level structure of a typical ITS. The domain, student, and
tutoring model exchange information among each other. Together, they make up the
programme logic and provide the data for the user interface to display.
Figure 3.1: The traditional architecture of an ITS. Image source: Nkambou et al. [18]
What makes a cognitive tutor a special type of an ITS is how these components and
their interactions are implemented. This is demonstrated in the following section.
3.3 Cognitive Tutors based on ACT-R
A cognitive tutor is a special type of an ITS in the sense that its domain model is based
on some kind of a cognitive model. In this section, I outline the concept of cognitive
tutors with respect to the cognitive model presented by Corbett et al. [9][10], which
builds upon ACT-R (see section 2.2).
3.3.1 The Domain Model
As mentioned in section 2.2, ACT-R models problem solving as the maintenance of a
hierarchy of goals which have to be reached one by one to eventually solve the problem.
A goal can be reached by effectively applying production rules. Correspondingly, the
domain model of a cognitive tutor is made up by a set of production rules, representing
"a complete, executable model of procedural knowledge in the domain" [9].
As an example, we consider the (very simple) problem of solving a linear equation of the form ax = b with constants a, b > 0 and variable x > 0, where a, x, b ∈ ℝ. The following pseudo-code production rules could represent the domain model:
(1) IF a != 1 THEN
adopt goal isolate_x;
(2) IF goal == isolate_x THEN
divide equation by a;
drop goal isolate_x;
adopt goal get_result;
(3) IF goal == get_result THEN
set result to right side of equation;
drop goal get_result;
In practice, domain experts work together with cognitive psychologists to derive such
a set of production rules "from a cognitive task analysis of the domain knowledge and
reasoning strategies students are learning and applying in problem solving" [10].
Apart from being a source of expert knowledge, the domain model should be able to
"solve the problems posed to students in the many ways that students solve them" [10].
To that end, it even includes so-called buggy rules [10] which support the tutoring model
in detecting common mistakes. For instance, we could extend the above example by the
following buggy rule:
(4) IF goal == isolate_x THEN
multiply equation by a;
drop goal isolate_x;
adopt goal get_result;
If the student applies this production rule, the domain model notifies the tutoring model
which may then tell the user interface to display an error-specific hint such as ’to isolate
the variable, you have to divide the equation by a, not to multiply it’.
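This diagnosis step can be sketched concisely: the student's step is matched against both correct and buggy rules, and a buggy match yields error-specific feedback. The following Python sketch is illustrative only; the rule table and messages are invented for this example:

```python
# Hypothetical rule table: (current goal, observed action) -> (kind, feedback).
RULES = {
    ("isolate_x", "divide by a"):   ("correct", None),
    ("isolate_x", "multiply by a"): ("buggy",
        "To isolate the variable, you have to divide the equation by a, "
        "not to multiply it."),
}

def diagnose(goal, action):
    """Classify a student's step; steps matching no rule get generic feedback."""
    kind, feedback = RULES.get((goal, action), ("unknown",
        "This step is not on a recognised solution path."))
    return kind, feedback

kind, hint = diagnose("isolate_x", "multiply by a")
# kind == "buggy"; hint carries the error-specific message
```

A real domain model would match against executable production rules rather than a lookup table, but the principle is the same: buggy rules turn common mistakes into recognisable, addressable events.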
3.3.2 The Student Model
To track the student’s progress, the student model makes use of a technique known as Bayesian knowledge tracing [9]. Basically, the rationale is to create an overlay of the production rules to be learned, where each rule is associated with the estimated probability that the student has already learned it. Each time the student applies a production
rule, the corresponding probability is updated depending on whether they applied it cor-
rectly or not. Based on these probabilities, the tutoring model can dynamically choose
appropriate exercises.
Bayesian knowledge tracing is a specialisation of the concept of hidden Markov models
[23]. The model makes the following simplifying assumptions:
Each production rule is either in the learned or in the unlearned state. That is, it
is either contained or not contained in the student’s procedural knowledge.
Production rules cannot make transitions from the learned to the unlearned state.
In other words, procedural knowledge is never forgotten.
Each production rule r is parametrised as explained in Table 3.1.

Name         Term     Description

Initial      p(L0)_r  The probability that rule r is in the learned state prior
                      to the first opportunity to apply it.

Acquisition  p(T)_r   The probability that rule r transits from the unlearned
                      to the learned state at an opportunity to apply it.

Slip         p(S)_r   The probability of making a mistake when applying rule r
                      from the learned state.

Guess        p(G)_r   The probability of correctly applying rule r if it is in
                      the unlearned state.

Table 3.1: The parameters in Bayesian knowledge tracing. For each production rule r, these four parameters have to be set individually. Source: Adaptation of Corbett et al. [9]
Based on these values, the probability p(L_n)_r^s that rule r is in the learned state of student s after n opportunities to apply it is recursively given as

    p(L_n)_r^s = p(L_{n-1} | E_n)_r^s + (1 − p(L_{n-1} | E_n)_r^s) · p(T)_r ,

where E_n is a binary random variable representing the n-th evidence, that is, whether s correctly applied r in step n (E_n = correct) or not (E_n = wrong). Consequently, p(L_{n-1} | E_n)_r^s is the posterior probability that r had already been in the learned state right before s had the n-th opportunity to apply it. We can compute this posterior via Bayesian inference. Concretely, if s correctly applies r at the n-th opportunity, we have

    p(L_{n-1} | E_n = correct)_r^s
        = p(L_{n-1} ∧ E_n = correct)_r^s / p(E_n = correct)_r^s
        = p(L_{n-1})_r^s · (1 − p(S)_r)
          / (p(L_{n-1})_r^s · (1 − p(S)_r) + (1 − p(L_{n-1})_r^s) · p(G)_r).

Similarly, if s fails to correctly apply r at step n, we have

    p(L_{n-1} | E_n = wrong)_r^s
        = p(L_{n-1} ∧ E_n = wrong)_r^s / p(E_n = wrong)_r^s
        = p(L_{n-1})_r^s · p(S)_r
          / (p(L_{n-1})_r^s · p(S)_r + (1 − p(L_{n-1})_r^s) · (1 − p(G)_r)).
As a side note, instead of fitting the four parameters shown in Table 3.1 per skill only,
they can also be fit per skill and per student. This yields a more accurate model which
explicitly accounts for differences between students [23], [13].
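The update defined by the equations above fits in a few lines of code. The following Python sketch implements one knowledge-tracing step; the parameter values used in the example are invented purely for illustration:

```python
def bkt_update(p_learned, p_transit, p_slip, p_guess, correct):
    """One Bayesian knowledge tracing step: return p(L_n) given
    p(L_{n-1}) and the n-th evidence (correct or wrong application)."""
    if correct:
        joint = p_learned * (1.0 - p_slip)                 # p(L_{n-1} and E_n = correct)
        marginal = joint + (1.0 - p_learned) * p_guess     # p(E_n = correct)
    else:
        joint = p_learned * p_slip                         # p(L_{n-1} and E_n = wrong)
        marginal = joint + (1.0 - p_learned) * (1.0 - p_guess)
    posterior = joint / marginal                           # p(L_{n-1} | E_n)
    # Account for a possible unlearned-to-learned transition at this step.
    return posterior + (1.0 - posterior) * p_transit

# Example with made-up parameters: the estimate rises after a correct step
# and falls after a wrong one.
p0 = 0.3  # p(L_0) for some rule
p_after_correct = bkt_update(p0, p_transit=0.1, p_slip=0.1, p_guess=0.2, correct=True)
p_after_wrong = bkt_update(p0, p_transit=0.1, p_slip=0.1, p_guess=0.2, correct=False)
```

Because learned rules are assumed never to be forgotten, the estimate can only drop through the evidence term, never through a learned-to-unlearned transition.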
3.3.3 The Tutoring Model
Without the tutoring model, the other two models are not of much use. In a process
called model tracing, the tutoring model leverages the domain model and the student
model to create an adaptive learning environment. This adaptivity is the result of several mechanisms:
The tutoring model guides the student through the solution space. More precisely,
it keeps the student "within a specified tolerance of an acceptable solution path"
[14]. For example, a tutoring model with minimum error tolerance would reject
any non-applicable action and have the user interface inform the student about
their mistake immediately. The student can then correct their mistake.
It offers situation-specific guidance. For instance, if the student requests a hint,
the tutoring model could identify a production rule applicable in the given situation
and suggest to perform the corresponding action.
It takes the student’s progress into account. In particular, it can select exercises
covering production rules that the student model considers unlikely to be in the
learned state.
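As an illustration of the last point, exercise selection can be driven directly by the student model's probability estimates. The following Python sketch is hedged: the rule names, probability values, and exercise-to-rule mapping are all invented, and a real tutoring model would use richer selection criteria:

```python
# Estimated probabilities that each production rule is in the learned state
# (as produced by Bayesian knowledge tracing; values invented here).
p_learned = {"divide_both_sides": 0.92,
             "collect_terms": 0.35,
             "expand_brackets": 0.61}

# Which production rules each available exercise gives practice on.
exercises = {"ex1": ["divide_both_sides"],
             "ex2": ["collect_terms", "expand_brackets"],
             "ex3": ["expand_brackets"]}

def next_exercise(exercises, p_learned):
    """Pick the exercise whose least-mastered covered rule has the lowest
    estimated probability of being in the learned state."""
    return min(exercises,
               key=lambda ex: min(p_learned[r] for r in exercises[ex]))

chosen = next_exercise(exercises, p_learned)
# "ex2" is chosen because it covers collect_terms, the weakest rule.
```

This is one simple selection policy; the point is only that the overlay of probabilities gives the tutoring model a quantitative basis for such choices.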
3.3.4 The User Interface
The three components described above all work behind the scenes. The actual interaction
with the user happens through a separate component, the user interface. It has several functions:
It explains the domain and how to solve problems in the domain. For instance, it
may provide learning material in the form of hypertext, show step-by-step solutions
for sample problems, or visualise the domain knowledge using figures or animations.
It receives the student’s answers to problems at the granularity of production rules.
Thus, the tutoring component can evaluate each solution step separately.
It provides support on demand. For example, on request it may give a context-
specific hint to guide the student into the right direction.
If the student deviates too much from an acceptable solution path, it rejects their input.
It makes the learning progress transparent. This does not only support self-
assessment but also adds an element of gamification to the system. For instance,
the APT LispTutor, a cognitive tutor teaching the programming language Lisp,
shows a progress bar for each skill to be learned [11] (see Figure 3.2 for a screenshot).
Figure 3.2: The ’skill meter’ in the APT LispTutor interface. Image source: Corbett et al. [11]
4 Discussion
After having seen what cognitive tutors are and how they work, in this chapter I highlight
some of their strengths and limitations.
4.1 Empirical Results
To the best of my knowledge, so far cognitive tutors have only been evaluated within
the framework of blended learning curricula. Most notably, the U.S. Department of Education conducted a meta-study [22] on the effectiveness of high school maths curricula involving a cognitive tutor developed by Carnegie Learning, Inc., as compared to traditional curricula. Altogether, the authors of the meta-study reviewed a total of 27 studies
but only considered 6 of them to meet their institution’s evidence standards. These 6
studies include a total of about 2,500 students from 39 schools. The empirical results
suggest that it is unclear if Carnegie Learning’s curricula improve student performance.
More specifically, one of the evaluated studies showed a statistically significant negative
effect, another one showed a statistically significant positive effect, while the remaining
four studies were found to show indeterminate effects.
However, cognitive tutors are not restricted to blended learning environments. They can
also be used for autonomous learning, or in the context of standalone online courses.
In particular, they may prove beneficial in countries with a severe shortage of qualified teachers, just as any ITS.¹ From this viewpoint, what counts is whether cognitive tutors are effective in the absence of human instruction. This has yet to be investigated in detail.
4.2 Range of Application
By definition, cognitive tutors are problem solving environments. Typical application
areas are well-defined and highly structured domains of subjects such as programming
[11], genetics [10], and geometry [5].
However, there are numerous cases where problem solving is difficult to formalise or can-
not be formalised at all. These include problems that are not solved yet, such as ’how to
tackle climate change’, as well as tasks with high degrees of freedom, such as ’develop
a programme that can play chess’. Since such problems require creativity and can often
¹ See Nye [19] for a review of trends and approaches for educational technology in a global context.
be solved in infinitely many ways, it is not feasible to develop domain models for them.
Beginners are likely to be overwhelmed by unstructured problems, but advanced students
should be given the opportunity to practice solving such problems. Consequently, cog-
nitive tutors might be a good choice for students with low to intermediate knowledge in
the domain but not for advanced ones.
Apart from that, as cognitive tutors are restricted to problem solving (and thus focus
on the acquisition of procedural and conditional knowledge), they are not suitable for
the teaching of declarative knowledge. To the best of my knowledge, so far no-one
has made an attempt to explicitly implement a cognitive model for the learning of facts
and their complex relationships. As a consequence, today it does not make much sense
to use cognitive tutors for teaching topics such as photosynthesis, the Vietnam War,
capitalism, or permaculture.
4.3 Development Costs
Converting domain knowledge into a domain model (e.g., a set of production rules) is a
time-consuming task [6], even though there are software tools that facilitate this work
[17]. In addition, the resulting domain model is highly specific for the domain it was
created for and cannot be transferred to another domain. For example, usually a domain
model built for teaching a certain programming language cannot be used for teaching
another one.
On the other hand, just as any other ITS, a cognitive tutor has the potential to save
a tremendous amount of time once it is implemented. It can take over lots of tasks a
human teacher would otherwise have to carry out. Therefore, depending on the scenario,
high development costs may be negligible as compared to the added value. Furthermore,
data mining seems to be a promising approach to further reduce the amount of expert
knowledge needed to create a domain model [6].
4.4 Discrepancies with Constructivism
On a more fundamental level, the four-component architecture of ITSs (see section 3.2)
can be criticised for not being compatible with the theory of constructivism [21]. Con-
structivists argue that learning is an active process involving individual interpretation
of information and integration of that information into already existing knowledge. To
some degree this contradicts the typical architecture of ITSs. This is because they eval-
uate learning progress based on a predefined set of knowledge, the domain model. This
implies that after successful learning each student should have internalised a copy of this
domain model as part of their procedural knowledge. However, according to construc-
tivism, each student would rather construct their own knowledge representation, as a
result of individual interpretation and integration into pre-knowledge.
4.5 Adaptivity of Instruction
Finally, we return to the article’s central question: Are cognitive tutors the end of bore-
dom and confusion? Or, more precisely, to what extent are cognitive tutors adaptive to
the student? The answer depends on the tutor’s actual design. As an example, I assess
the adaptivity of ACT-R-based tutors (see section 3.3) here.
Most notably, ACT-R-based tutors enable students to learn at their own pace. By applying Bayesian knowledge tracing (see subsection 3.3.2), they are able to choose exercises based on individual learning progress. In addition, they can account for pre-knowledge and intelligence (especially if the model parameters are fit per student), for example by adjusting the number of exercises a student needs to work through. Furthermore, through model tracing (see subsection 3.3.3) ACT-R-based tutors provide individual
guidance. They offer problem-specific feedback and hints and prevent students from
drifting away from correct solutions.
However, ACT-R-based tutors also have some shortcomings with respect to adaptivity.
For one, while they are able to track the student’s learning progress, they cannot identify
their current affect towards the task. This could be beneficial to increase learning
motivation, though, for example by giving empathic feedback. For another, ACT-R-
based tutors lack adaptivity with respect to the degrees of freedom associated with
exercises. This is simply because they cannot be used to teach how to solve problems
with high degrees of freedom, as argued in section 4.2.
5 Conclusions
In this article, I have outlined the concept of cognitive tutors and evaluated their poten-
tial with the example of tutors based on ACT-R. To wrap it up, within the framework
of problem solving cognitive tutors are a promising approach to provide individualised
instruction without a human element. Based on findings from cognitive sciences, they
are able to quantify learning progress with fine granularity, give problem-specific feed-
back and guide the student through the problem space. However, when considering to
use or to develop a cognitive tutor, one has to bear in mind that its range of application
is quite limited and its development expensive.
Where the future is concerned, it is not clear yet if cognitive tutors or another type of ITS
will eventually prevail. Perhaps various approaches will be merged together to produce
some kind of a universal ITS. Or maybe different types of ITSs will be used depending
on the application area. Whatever the case, I hope that further research in the field of
e-learning promotes and facilitates the access to high-quality education worldwide.
Bibliography

[1] P. A. Alexander, D. L. Schallert, and V. C. Hare. “Coming to Terms: How Researchers in Learning and Literacy Talk about Knowledge”. In: Review of Educational Research 61.3 (1991), pp. 315–343.
[2] J. R. Anderson. The Architecture of Cognition. Harvard University Press, 1983.
[3] J. R. Anderson and C. Lebiere. The Atomic Components of Thought. Lawrence
Erlbaum Associates, 1998.
[4] J. R. Anderson et al. “An Integrated Theory of the Mind”. In: Psychological Review
111.4 (2004), pp. 1036–1060.
[5] J. R. Anderson et al. “Cognitive Tutors: Lessons Learned.” In: The Journal of
Learning Sciences 4 (1995), pp. 167–207.
[6] T. Barnes and J. Stamper. “Toward the Extraction of Production Rules for Solving
Logic Proofs”. In: International Conference of Artificial Intelligence in Education.
2007, 11–20.
[7] Carnegie Learning, Inc. Copyright Information.url:https://www.carnegiele (visited on 2018-06-03).
[8] Carnegie Learning, Inc. Product Overview.url:https://www.carnegielearn (visited on 2018-06-12).
[9] A. T. Corbett and J. R. Anderson. “Knowledge Tracing: Modeling the Acquisi-
tion of Procedural Knowledge”. In: User Modeling and User-Adapted Interaction
(1994), pp. 253–278.
[10] A. T. Corbett et al. “A Cognitive Tutor for Genetics Problem Solving: Learning
Gains and Student Modeling”. In: Journal of Educational Computing Research
42.2 (2010), pp. 219–239.
[11] A. Corbett, M. McLaughlin, and K. C. Scarpinatto. “Modeling Student Knowledge:
Cognitive Tutors in High School and College”. In: User Modeling and User-Adapted
Interaction. Kluwer Academic Publishers, 2000, pp. 81–108.
[12] R. Culatta and G. Kearsley. ACT-R (John Anderson).url:http://www.instr (visited on 2018-06-03).
[13] M. Eagle et al. “Exploring Learner Model Differences Between Students”. In: Lec-
ture Notes in Computer Science. 2017.
[14] R. Freedman. “What is an Intelligent Tutoring System?” In: Intelligence 11.3
[15] J. E. Jacobs and S. G. Paris. “Children’s Metacognition about Reading: Issues in
Definition, Measurement, and Instruction”. In: Educational Psychologist 22.3-4
(1987), pp. 255–278.
[16] T. de Jong and M. G. Ferguson-Hessler. “Types & Qualities of Knowledge”. In:
Educational Psychologist 31.2 (1996), pp. 105–113.
[17] T. Murray. “Authoring Intelligent Tutoring Systems: An Analysis of the State of
the Art”. In: International Journal of Artificial Intelligence in Education 10 (1999),
pp. 98–129.
[18] R. Nkambou, R. Mizoguchi, and J. Bourdeau. Advances in Intelligent Tutoring
Systems. Springer, 2010.
[19] B. D. Nye. “Intelligent Tutoring Systems by and for the Developing World: A Re-
view of Trends and Approaches for Educational Technology in a Global Context”.
In: International Journal of Artificial Intelligence in Education 25 (2015), pp. 177–
[20] Pittsburgh Advanced Cognitive Tutor Center. PACT Center @ Carnegie Mellon
University.url: (visited on 2018-06-12).
[21] J. Self. “Theoretical Foundations for Intelligent Tutoring Systems”. In: Journal of
Artificial Intelligence in Education 1.4 (1990), pp. 3–14.
[22] U.S. Department of Education, Institute of Education Sciences, What Works Clear-
inghouse. High School Mathematics Intervention Report: Carnegie Learning Cur-
ricula and Cognitive Tutor R
. 2013.
[23] M. V. Yudelson, K. R. Koedinger, and G. J. Gordon. “Individualized Bayesian
Knowledge Tracing Models”. In: Artificial Intelligence in Education. Ed. by H. C.
Lane et al. Springer Berlin Heidelberg, 2013, 171–180.