The ASSISTment Builder: Supporting the Life
Cycle of Tutoring System Content Creation
Leena Razzaq, Jozsef Patvarczki, Shane F. Almeida, Manasi Vartak,
Mingyu Feng, Neil T. Heffernan, and Kenneth R. Koedinger
Abstract—Content creation is a large component of the cost of creating educational software. Estimates are that approximately
200 hours of development time are required for every hour of instruction. We present an authoring tool designed to reduce this cost as
it helps to refine and maintain content. The ASSISTment Builder is a tool designed to effectively create, edit, test, and deploy tutor
content. The Web-based interface simplifies the process of tutor construction to allow users with little or no programming experience to
develop content. We show the effectiveness of our Builder at reducing the cost of content creation to 40 hours for every hour of
instruction. We describe new features that work toward supporting the life cycle of ITS content creation through maintaining and
improving content as it is being used by students. The Variabilization feature allows the user to reuse tutoring content across similar
problems. The Student Comments feature provides a way to maintain and improve content based on feedback from users. The Most
Common Wrong Answer feature provides a way to refine remediation based on the users’ answers. This paper describes our attempt
to support the life cycle of content creation.
Index Terms—Computer uses in education, e-learning tools, adaptive and intelligent educational systems, authoring tools.
1 INTRODUCTION
Although intelligent tutors have been shown to produce
significant learning gains in students [1], [8], few
intelligent tutoring systems (ITSs) have become commer-
cially successful. The high cost of building intelligent tutors
may contribute to their scarcity and a significant part of that
cost concerns content creation. Murray [13] asked why there
are not more ITSs and proposed that a major part of the
problem was that there were few useful tools to support ITS
creation. In 2003, Murray et al. [14] reviewed 28 authoring
systems for learning technologies. Unfortunately, they
found that there are very few authoring systems that are
of “release quality,” let alone commercially available. Two
systems that seem to have “left the lab” stage of develop-
ment are worth mentioning: ASPIRE [10], an authoring
tool for Constraint-Based Tutors [11], and Carnegie Learn-
ing [3] for their work on creating an authoring tool for
Cognitive Tutors by focusing on creating a graphical user
interface for writing production rules. Writing production
rules is naturally a difficult software engineering task, as
flow of control is hard to follow in production systems.
Murray, after looking at many authoring tools [13], said,
“A very rough estimate of 300 hours of development time
per hour of online instruction is commonly used for the
development time of traditional computer-assisted instruc-
tion (CAI).” While building intelligent tutoring systems is
generally agreed to be much harder, Anderson et al. [2]
suggested that building the Cognitive Tutor required a ratio
of development time to instruction time of at least 200:1.
We hope to lower the skills needed to author tutoring
system content to the point that normal classroom teachers
can author their own content. Our approach is to allow users
to create example-tracing tutors [7] via the Web to reduce the
amount of expertise and time it takes to create an intelligent
tutor, thus reducing the cost. The goal is to allow both
educators and researchers to create tutors without even basic
knowledge of how to program a computer. Toward this end,
we have developed the ASSISTment System: a Web-based
authoring, tutoring, and reporting system.
Worcester Polytechnic Institute (WPI) and Carnegie
Mellon University (CMU) were funded by the Office of
Naval Research (which funded much of the CMU effort to
build Cognitive Tutors) to explore ways to reduce the cost
associated with creating cognitive model-based tutors used
in tutoring systems [7]. In the past, ITS content has been
authored by programmers who need PhD-level experience
in AI computer programming as well as a background in
cognitive psychology. The attempt to build tools that open
the door to nonprogrammers led to Cognitive Tutor
Authoring Tools (CTATs) [1] which the last two authors
of this paper had a hand in creating.
The ASSISTment System emerged from CTAT and
shares some common features, with the ASSISTment
System’s main advantage of being completely Web-based.
Over time, tutoring content may grow and become
difficult to maintain. The ASSISTment System contains
tutoring for over 3,000 problems and is growing every day
as teachers and researchers build content regularly. As a
result, quality control can become a problem. We attempted
to address this problem by adding features to help maintain
and refine content as it is being used by students, supporting
the life cycle of content creation.

L. Razzaq, J. Patvarczki, S.F. Almeida, M. Vartak, M. Feng, and N.T. Heffernan are with the Computer Science Department, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA 01609. E-mail: {leenar, patvarcz, almeida, mvartak, mfeng, nth}@wpi.edu.
K.R. Koedinger is with Carnegie Mellon University, Pittsburgh, PA 15213-3891. E-mail: koedinger@cmu.edu.
Manuscript received 28 Dec. 2008; revised 20 Mar. 2009; accepted 23 Apr. 2009; published online 7 May 2009.
Digital Object Identifier no. 10.1109/TLT.2009.23.
While template-based authoring has been done in the past
[16], we believe that the ASSISTment System has some novel
features. In this paper, we describe the ASSISTment Builder
which is used to author math tutoring content and present
our estimate of content development time per hour of
instruction time. We also describe our efforts to incorporate
variabilization into the Builder. With our server-based
system, we are attempting to support the whole life cycle of
content creation that includes error correction and debug-
ging as well. We present our work toward easing the
maintenance, debugging, and refining of content.
2 THE ASSISTMENT SYSTEM
The ASSISTment System is joint research conducted by
Worcester Polytechnic Institute and Carnegie Mellon Uni-
versity and is funded by grants from the US Department of
Education, the National Science Foundation, and the Office
of Naval Research. The ASSISTment System’s goal is to
provide cognitive-based assessment of students while
providing tutoring content to students.
The ASSISTment System aims to assist students in
learning the different skills needed for the Massachusetts
Comprehensive Assessment System (MCAS) test (or other
state tests) while at the same time assessing student
knowledge to provide teachers with fine-grained assess-
ment of their students; it assists while it assesses. The
system assists students in learning different skills through
the use of scaffolding questions, hints, and messages for
incorrect answers (also known as buggy messages) [19].
Assessment of student performance is provided to teachers
through real-time reports based on statistical analysis.
Using the Web-based ASSISTment System is free and only
requires registration on our Website; no software need be
installed. Our system is primarily used by middle- and
high-school teachers throughout Massachusetts who are
preparing students for the MCAS tests. Currently, we have
over 3,000 students and 50 teachers using our system as part
of their regular math classes. We have had over 30 teachers
use the system to create content.
Cognitive Tutor [2] and the ASSISTment System are built
for different anticipated classroom use. Cognitive Tutor
students are intended to use the tutor two class periods a
week. Students are expected to proceed at their own rate
letting the mastery learning algorithm advance them through
the curriculum. Some students will make steady progress,
while others will be stuck on early units. There is value in this
in that it allows students to proceed at their own paces. One
downside from the teachers’ perspective could be that they
might want to have their class all do the same material on the
same day, so they can assess their students. ASSISTments
were created with this classroom use in mind: the expectation
is that teachers use the system once every two weeks as part
of their normal classroom instruction, more as a formative
assessment system and less as the primary means of
assessing students. Cognitive Tutor
advances students only after they have mastered all of the
skills in a unit. We know that some teachers use some
features to automatically advance students to later lessons
because they might want to make sure all the students get
some practice on Quadratics, for instance.
We think that no one system is “the answer” but that they
have different strengths and weaknesses. If the student uses
the computer less often, there comes a point where the
Cognitive Tutor may be behind on what a student knows,
and seem to move along too slowly to teachers and students.
On the other hand, ASSISTments do not automatically offer
mastery learning, so if students struggle, it does not
automatically adjust. It is assumed that the teacher will
decide if a student needs to go back and look at a topic again.
We are attempting to support the full life cycle of content
authoring with the tools available in the ASSISTment
System. Teachers can create problems with tutoring, map
each question to the skills required to solve them, bundle
problems together in sequences that students work on, view
reports on students’ work, and use tools to maintain and
refine their content over time.
2.1 Structure of an ASSISTment
Koedinger et al. [7] introduced example-tracing tutors
which mimic cognitive tutors but are limited to the scope
of a single problem. The ASSISTment System uses a further
simplified example-tracing tutor, called an ASSISTment,
where only a linear progression through a problem is
supported which makes content creation easier and more
accessible to a general audience.
An ASSISTment consists of a single main problem, or
what we call the original question. For any given problem,
assistance to students is available either in the form of a
hint sequence or scaffolding questions. Hints are messages
that provide insights and suggestions for solving a specific
problem, and each hint sequence ends with a bottom-out
hint which gives the student the answer. Scaffolding
problems are designed to address specific skills needed to
solve the original question. Students must answer each
scaffolding question in order to proceed to the next
scaffolding question. When students finish all of the
scaffolding questions, they may be presented with the
original question again to finish the problem. Each
scaffolding question also has a hint sequence to help the
students answer the question if they need extra help.
Additionally, messages called buggy messages are provided
to students if certain anticipated incorrect answers are
selected or entered. For problems without scaffolding, a
student will remain in a problem until the problem is
answered correctly and can ask for hints which are
presented one at a time. If scaffolding is available, the
student will be programmatically advanced to the first
scaffolding problem in the event of an incorrect answer on
the original question.
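To make this structure concrete, here is a minimal sketch in Python of an ASSISTment as plain data: an original question, an ordered hint sequence ending in a bottom-out hint, buggy messages keyed by anticipated wrong answers, and a linear list of scaffolding questions. The class and field names are our own illustration, not the system's actual schema.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Question:
        text: str
        correct_answers: List[str]
        hints: List[str] = field(default_factory=list)  # the last hint is the bottom-out hint
        buggy_messages: Dict[str, str] = field(default_factory=dict)  # wrong answer -> message

    @dataclass
    class Assistment:
        original: Question
        scaffolds: List[Question] = field(default_factory=list)  # linear progression only

        def feedback(self, question: Question, answer: str) -> str:
            """The tutor's response to an answer on the given question."""
            if answer in question.correct_answers:
                return "correct"
            if question is self.original and self.scaffolds:
                return "advance to the first scaffolding question"
            return question.buggy_messages.get(answer, "try again, or ask for a hint")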
Hints, scaffolds, and buggy messages together help create
ASSISTments that are structurally simple but can address
complex student behavior. The structure and the supporting
interface used to build ASSISTments are simple enough so
that users with little or no computer science and cognitive
psychology background can use it easily. Fig. 1 shows an
ASSISTment being built on the left; what the student sees
is shown on the right. Content authors can easily enter
question text, hints, and buggy messages by clicking on the
appropriate field and typing; formatting tools are also
provided for easily bolding, italicizing, etc. Images and
animations can also be uploaded in any of these fields.
The Builder also enables scaffolding within scaffold
questions, although this feature has not often been used in
our existing content. In the past, the Builder allowed
different lines of scaffolds for different wrong answers but
we found that this was seldom used and seemed to
complicate the interface causing the tool to be harder to
learn. We removed support for different lines of scaffolding
for wrong answers but plan to make it available for an
expert mode in the future. In creating an environment that
is easy for content creators to use, we realize that there is a
trade-off between ease of use and having a more flexible
and complicated ASSISTment structure. However, we think
that the functionality that we do provide is sufficient for the
purposes of most content authors.
2.1.1 Skill Mapping
We assume that students may know certain skills and rather
than slowing them down by going through all of the
scaffolding first, ASSISTments allow students to try to
answer questions without showing every step. This differs
Fig. 1. The Builder and associated student screen.
from Cognitive Tutors [2] and Andes [20] which both ask the
students to fill in many different steps in a typical problem.
We prefer our scaffolding pattern as it means that students
get through items that they know faster and spend more
time on items they need help on. It is not unusual for a single
Cognitive Tutor Algebra Word problem to take 10 minutes
to solve, while filling in a table of possibly dozens of
substeps, including defining a variable, writing an equation,
filling in known values, etc. We are sure, in circumstances
where the student does not know these skills, that this is very
useful. However, if the student already knows most of the
steps, this may not be pedagogically useful.
The ASSISTment Builder also supports the mapping of
knowledge components, which are organized into sets
known as transfer models. We use knowledge components
to map certain skills to specific problems to indicate that a
problem requires knowledge of that skill. Mapping between
skills and problems allows our reporting system to track
student knowledge over time using longitudinal data
analysis techniques [4].
In April 2005, our subject matter expert helped us to
make up knowledge components and tag all of the existing
eighth grade MCAS items with these knowledge compo-
nents in a 7-hour-long “coding session.” Content authors
who are building eighth grade items can then tag their
problems in the Builder with one of the knowledge
components for eighth grade. Tagging an item with a
knowledge component typically takes 2-3 minutes. The cost
of building a transfer model can be high initially, but the
cost of tagging items is low.
We currently have more than 20 transfer models available
in the system with up to 300 knowledge components each.
See [18] for more information about how we constructed our
transfer models. Content authors can map skills to problems
and scaffolding questions as they are building content. The
Builder will automatically map problems to any skills that
its scaffolding questions are marked with.
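As a rough sketch of that rule (the function and tag names below are our own, not the system's schema), a problem's effective skill set is the union of its own knowledge-component tags and those of its scaffolding questions:

    def effective_skills(problem_tags, scaffold_tags):
        """A problem is automatically mapped to every skill its scaffolds are marked with."""
        skills = set(problem_tags)
        for tags in scaffold_tags:
            skills |= set(tags)
        return skills

    # e.g., an item tagged with one skill whose scaffolds carry their own tags
    print(effective_skills({"pythagorean-theorem"},
                           [{"squaring"}, {"square-root"}, {"equation-solving"}]))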
2.2 Problem Sequences
Problems can be arranged in problem sequences in the
system. The sequence is composed of one or more sections,
with each section containing problems or other sections.
This recursive structure allows for a rich hierarchy of
different types of sections and problems.
The section component, an abstraction for a particular
ordering of problems, has been extended to implement our
current section types and allows for new types to be added
in the future. Currently, our section types include “Linear”
(problems or subsections are presented in linear order),
“Random” (problems or subsections are presented in a
pseudorandom order), and “Choose Condition” (a single
problem or subsection is selected pseudorandomly from a
list, the others are ignored).
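The recursive section structure lends itself to a small composite sketch; the three orderings below mirror the section types just described, though the code is our own illustration rather than the system's implementation:

    import random

    class Problem:
        def __init__(self, name):
            self.name = name
        def flatten(self, rng):
            return [self.name]

    class Section:
        """A section orders its children, which may be problems or other sections."""
        def __init__(self, kind, children):
            self.kind, self.children = kind, children
        def flatten(self, rng):
            children = list(self.children)
            if self.kind == "Random":
                rng.shuffle(children)              # pseudorandom order
            elif self.kind == "Choose Condition":
                children = [rng.choice(children)]  # pick one; the others are ignored
            # "Linear" keeps the authored order
            return [p for child in children for p in child.flatten(rng)]

    # A sequence in the style of Fig. 2: pretest, one randomly chosen condition, posttest.
    sequence = Section("Linear", [
        Section("Linear", [Problem("pretest-1"), Problem("pretest-2")]),
        Section("Choose Condition", [Problem("scaffolding-condition"),
                                     Problem("hints-condition")]),
        Section("Linear", [Problem("posttest-1"), Problem("posttest-2")]),
    ])
    print(sequence.flatten(random.Random(0)))

The "Choose Condition" type is what makes the randomized experiments described next possible.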
We are interested in using the ASSISTment system to
find the best ways to tutor students, and being able to easily
build problem sequences helps us run randomized
controlled experiments. Fig. 2 shows a problem
sequence that has been arranged to run an experiment that
compares giving students scaffolding questions to allowing
them to ask for hints. (This is similar to an experiment
described in [17].) Three main sections are presented in
linear order: pretest, experiment, and posttest sections.
Within the experiment section, there are two conditions and
students will randomly be presented with one of them.
2.3 Teacher Reports
The various reports that are available on students’ work are
valuable tools for teachers. Teachers can see how their
students are doing on individual problems or on complete
assignments. They can also see how their students are
performing on each skill. These reports allow teachers to
determine where students are having difficulties and they
can adapt their instruction to the data found in the reports.
For instance, Fig. 3 shows an item report which shows
teachers how students are doing on individual problems.
Teachers can tell at a glance which students are asking for
too many bottom-out hints (cells are colored in yellow).
Teachers can also see what students have answered for each
question, whether the answer was correct, what percent of
the class got the answer correct, and individual students’
percent correct for the whole problem set.
2.4 Cost-Effective Content Creation
The ASSISTment Builder’s interface, shown in Fig. 1, uses
common Web technologies such as HTML and JavaScript,
allowing it to be used on most modern browsers. The
Builder allows a user to create example-tracing tutors
composed of an original question and scaffolding questions.
In the next section, we evaluate this approach in terms of
usability and decreased creation time of content.
2.4.1 Methodology
We wished to create new 10th grade math tutoring content
in addition to our existing eighth grade math content. In
September 2006, a group of nine WPI undergraduate
students, most of whom had no computer programming
experience, began to create 10th grade math content as part
of an undergraduate project focused on relating science and
technology to society. Their goal was to create as much
10th grade content as possible for this system.
All content was first approved by the project’s subject
matter expert, an experienced math teacher. We also gave
the content authors a 1 hour tutorial on using the ASSIST-
ment Builder where they were trained to create scaffolding
questions, hints, and buggy messages. Creating images and
animations was also demonstrated.
Fig. 2. A problem sequence arranged to conduct an experiment.
We augmented the Builder to track how long it takes
authors to create an ASSISTment. This ignores the time
it takes authors to plan the ASSISTment and work with their
subject matter expert, as well as any time spent making images
and animated gifs. All of this time can be substantial, so we
cannot claim to have tracked all time associated with
creating content.
Once we know how many ASSISTments authors have
created, we can estimate the amount of content tutoring time
created by using the previously established number that
students spend about 2 minutes per ASSISTment [5]. This
number is averaged from data from thousands of students.
This will give us a ratio that we can compare against the
literature suggesting a 200:1 ratio [2].
2.4.2 Results
The nine undergraduate content authors worked on their
project over three seven-week terms. During the first term,
Term A, authors created 121 ASSISTments with no
assistance from the ASSISTment team other than meeting
with their subject matter expert to review the pedagogy.
Since we know from prior studies [5] that students being
tutored by the ASSISTment system spend an average of
2 minutes per ASSISTment, the content authors created
242 minutes, or a little over 4 hours of content. The log files
were analyzed to determine that authors spent 79 minutes
(standard deviation = 30 minutes), on average, to create an
ASSISTment. In the second seven weeks, Term B, the
authors created 115 additional ASSISTments at a rate
of 55 minutes per ASSISTment. This increased rate of
creation was statistically significant (p < 0.01), suggesting
that the authors were becoming faster at creating content.
Looking for other learning curves, we noticed that in Term A,
each ASSISTment was edited, on average, over the space of
four days, while in Term B, the content authors were only
editing an ASSISTment over the space of three days on
average. This rate was statistically significantly faster than in
Term A. Table 1 shows these results.
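As a quick check on how these figures translate into a ratio of development time to instruction time, using the previously established 2 minutes of instruction per ASSISTment [5]:

    Term A: 79 minutes of authoring per 2 minutes of instruction, or about 40:1.
    Term B: 55 minutes of authoring per 2 minutes of instruction, or about 27.5:1.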
It appears that we have created a method for creating
intelligent tutoring content much more cost effectively. We
did this by building a tool that reduces both the skills
needed to create content as well as the time needed to do so.
This produced a ratio of development time to online
instruction time of about 40:1 and the development time
does decrease slightly as authors spend more time creating
content. The determination of whether the ASSISTments
Fig. 3. An item report tells teachers how students are doing on individual problems.
TABLE 1
Experiment Results
created by our undergraduate content authors produce
significant learning is work in progress. However, our
subject matter expert was satisfied that the content created
was of good quality.
3 VARIABILIZATION
An important limitation of the example-tracing tutor
framework used by the present ASSISTment system is the
inability of example-tracing tutors to generalize over similar
problems [7]. A direct result of this drawback is that
separate example-tracing tutors must be created
for each individual problem regardless of similarities in
tutoring content. This process is not only tedious and time-
consuming, but it also increases the opportunity for errors
on the part of content creators. In our present
system, about 140 (out of approximately 2,000) commonly
used ASSISTments are “morphs”—ASSISTments which
have been generated by subtly modifying (e.g., changing
numerical quantities) existing ASSISTments.
Pavlik and Anderson [15] have reported that learners,
particularly beginners, need practice at closely spaced
intervals, while McCandliss et al. [9] and others claim that
beginners benefit from practice on closely related pro-
blems. Applying these results to a tutoring system requires
a significant body of content addressing the same skill
sets. However, the time and effort required to generate
morphs has been an important limitation on the amount of
content created in the ASSISTment system. Through the
addition of the variabilization feature—use of variables to
create parameterized templates of ASSISTments—to the
ASSISTment builder, we seek to extend our content-
building tools to facilitate the reuse of tutoring content
across similar problems.
3.1 Implementation
The variabilization feature of the ASSISTment builder
enables the creation of parameterized template ASSISTments.
Variables are used as parameters in the template ASSISTment
and are evaluated while creating instances of the template
ASSISTment—ASSISTments where variables and their
functions are assigned values.
Our current implementation of variabilization associ-
ates variables with individual ASSISTments. Since an
ASSISTment is made of the main problem, scaffold
problems, answers, hints, and buggy messages, this
implementation allows a broad use of variables. Each
variable associated with an ASSISTment has a name and
one or more values. These values may be numerical or
may include text related to the problem statement.
Depending on the degree of flexibility required, mathematical
functions, such as those that randomly generate numbers or
perform complex arithmetic, can be used in variable values.
We also provide the option of defining relationships
between variables in two ways. The first way is to define
values of variables in terms of variables that have already
been defined. If variables called x and y have already been
defined, then we can define a new variable z to be equal to a
function involving x and y, for instance, x*y. The other way
to define a relationship is to create what are called sets of
variables. Values of variables in a set are picked together
while evaluating them. For example, in a Pythagorean
Theorem problem, having the lengths of the three sides of a
right-angled triangle as variables in a set, we can associate
certain values of the variables like 3-4-5 or 5-12-13 to
represent the lengths of the sides of right triangles.
We now give an example of the process involved in
generating a template-variabilized ASSISTment and then
creating instances of it. The number of
possible values for the variables dictates the number of
instances of an ASSISTment that can be generated. The first
step toward creating a template-variabilized ASSISTment
from an existing ASSISTment is determining the possible
variables in the problem.
After identifying possible variables, these variables are
created through the variables widget and used throughout
the ASSISTment. A variable has a unique name and one or
more values associated with it. A special syntax in the form of
%v{variable-name} is used to refer to variables throughout
the Builder environment. Functions of these variables can be
used in any part of the ASSISTment including the problem
body by using the syntax %v{function()}. For example,
%v{sqrt(a^2 + b^2)} could be used to calculate the length
of the hypotenuse of a right triangle. Additional variables can
be introduced to make the problem statement grammatically
correct, such as delimiters and pronouns.
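A minimal sketch of how this substitution might work (our own illustration in Python; the Builder's actual parser and function library are richer): each %v{...} reference is replaced by a variable's value, or by an evaluated function of the variables.

    import math
    import re

    def instantiate(template, values):
        """Replace each %v{...} with a variable's value or an evaluated
        function of the variables (a sketch, not the Builder's real parser)."""
        def expand(match):
            expr = match.group(1)
            if expr in values:                     # plain reference, e.g. %v{a}
                return str(values[expr])
            # function of variables, e.g. %v{sqrt(a^2 + b^2)}; '^' is exponentiation
            result = eval(expr.replace("^", "**"), {"sqrt": math.sqrt}, dict(values))
            return format(result, "g")
        return re.sub(r"%v\{([^}]*)\}", expand, template)

    template = ("The base of a ladder is %v{a} feet from a wall and its top rests "
                "%v{b} feet up the wall. How long is the ladder? "
                "(Bottom-out hint: the ladder is %v{sqrt(a^2 + b^2)} feet long.)")
    print(instantiate(template, {"a": 3, "b": 4}))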
Fig. 4 shows an existing ASSISTment addressing the
Pythagorean Theorem with candidates for variables marked.
This ASSISTment is commonly encountered by students
using our system and it contains 13 hints, eight buggy
messages, one main problem, and four scaffold problems.
Generation of variables in the system is simple and
follows the existing format of answers and hints. Main-
taining consistency with other elements of the Builder
Fig. 4. A variabilized ASSISTment on the Pythagorean Theorem. Variables have been introduced for various parts of the problem including
numerical values and parts of the problem statement.
tools minimizes the learning time for content creators. In
the Pythagorean Theorem ASSISTment (shown in Fig. 4),
we can make use of the set feature of variables to make
sure that the correct values of the three sides of the
triangle are picked together.
Once variables have been generated and introduced into
problems, scaffold questions, answers, hints, and buggy
messages as required, it is possible to create multiple
instances of this ASSISTment using the Create button.
The number of generated ASSISTments depends on the
number of values specified in the sets. Our system performs
content validation to check if variables have been correctly
generated and used, and alerts the content creator to any
mistakes. The main advantage of variabilization lies in the
fact that once a template-variabilized ASSISTment is created,
new ASSISTments including their scaffolds, answers, hints,
and buggy messages can be generated instantly.
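Continuing the sketch above, a set of variables whose values must be picked together, such as Pythagorean triples, can be modeled as parallel value lists; each index yields one instance, so the number of generated ASSISTments equals the number of values in the set (the triples variable is hypothetical):

    # Pythagorean-triple set: the i-th values of a and b are always picked together.
    triples = {"a": [3, 5, 8], "b": [4, 12, 15]}   # 3-4-5, 5-12-13, 8-15-17
    for i in range(len(triples["a"])):             # one instance per value in the set
        print(instantiate(template, {name: vals[i] for name, vals in triples.items()}))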
Our preliminary studies of variabilization, comparing
the time required to generate five morphs using traditional
morphing techniques (e.g., copy and paste) as opposed to
generating five morphs using variabilization, indicate that
in the former case, the average time required to create one
morph is 20.18 (std 9.05) minutes, while in the latter case,
this time is 7.76 minutes (std 0.56). Disregarding the
ordering effect introduced due to repeated exposure to
the same ASSISTment, this indicates a speedup by a factor
of 2.6. Further studies are being done to assess the impact
that variabilization can have in reducing content creation
time. It is important to note that speedup heavily depends
on the number of ASSISTments generated since creating
one template-variabilized ASSISTment requires 38.8 (std
2.78) minutes, on average, as opposed to 20.18 (std 9.05)
minutes for a morphed ASSISTment. However, the var-
iabilized ASSISTment can be used to produce multiple
instances of the ASSISTment, while the morph is essentially
a single ASSISTment.
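A simple break-even check, under the simplifying assumption that producing additional instances from a finished template takes negligible time beyond the 38.8 minutes of template authoring:

    cost of n morphs: about 20.18 x n minutes
    cost of n instances from one template: about 38.8 minutes
    break-even: 38.8 / 20.18 is about 1.9, so a template pays for itself by the second instance

For five instances, 5 x 20.18 = 100.9 minutes versus 38.8 minutes, which amortizes to the 7.76 minutes per instance and the speedup factor of 2.6 reported above.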
4 REFINING AND MAINTAINING CONTENT
The ASSISTment project is also interested in easing the
maintenance of content in the system. Because of the large
number of content developers and teachers creating content
and the large amount of content currently stored in the
ASSISTment system, maintenance and quality assurance
become more difficult.
4.1 Maintaining Content through
Student Comments
We have implemented a way to find and correct errors in
our content by allowing users to comment on issues. As
seen in Fig. 5, students using the system can comment on
issues they find as they are solving problems.
Content creators can see a list of comments and address
problems that have been pointed out by users.
We assigned an undergraduate student to address the
issues found in comments. He reported working on these
issues over five weeks, approximately 8 hours a week,
scanning through the comments made since the system was
implemented. There were a total of 2,453 comments, and the
student went through 216 comments during this time and
85 ASSISTments were modified to address issues brought
up by students.
This means that about 45 percent of the
comments that the undergraduate student reviewed were
important enough that he decided to take action. We
originally thought that many students would not take
commenting seriously and the percentage of comments that
were not actionable would be closer to 95 percent, so we
were pleased with this relatively high number of useful
comments.
Given that the undergraduate student worked for 8 hours
a week addressing comments, he estimates that 80 percent of
that time was spent editing the ASSISTments. Since he
edited a total of 102 ASSISTments (including
problems brought up by professors) over the five-week
period, on average, editing an ASSISTment took a little
under 20 minutes.
Many comments were disregarded because they were
either repeats of earlier comments (ranging from a couple of
repeats to 20 hits) or had nothing to do with
the purpose of the commenting system.
During his analysis, the undergraduate student categor-
ized the comments as shown in Table 2.
It was useful, when starting to edit an ASSISTment
because of a comment, to find other comments related to
that problem that might lead to subsequent corrections.
In addition, there was one special type of comment that
pointed out visual problems from missing HTML code
(included in the Migration issues). These indicated strange
text behavior (i.e., words in italic, bolded, colored, etc.)
because of unclosed HTML tags or too many breaks.
In a nutshell, we believe that this account underscores the
importance of the commenting system in maintaining and
improving a large body of content such as we have in the
ASSISTment system.
4.2 Refining Remediation
There is a large literature on student misconceptions, and
ITS developers spend large amounts of time developing
buggy libraries [21] to address common student errors,
which requires expert domain knowledge as well as
cognitive science expertise. We were interested in finding
areas where students seemed to have common misconcep-
tions that we had inadvertently neglected to address with
buggy messages.
If a large percentage of students were answering
particular problems with the same incorrect answer, we
Fig. 5. Students can comment on spelling mistakes, math errors, or
confusing wording.
could determine that a buggy message was needed to
address this common misconception. In this way, we are
able to refine our buggy messages over time. Fig. 6 shows a
screenshot of a feature we constructed to find and show the
most common incorrect answers. In this shot, it is apparent
that the most common incorrect answer is 5, answered by
20 percent of students. We can easily address this by adding
a buggy message, as shown in Fig. 6.
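As a sketch of the underlying analysis (the function name and threshold below are our own illustration, not the system's actual code), the feature amounts to tallying incorrect responses per problem and flagging any answer given by a sizable share of students as a candidate for a new buggy message:

    from collections import Counter

    def most_common_wrong_answers(responses, correct, threshold=0.10):
        """Tally wrong answers to one problem; return (answer, share) pairs given
        by at least `threshold` of responding students, most common first."""
        wrong = Counter(a for a in responses if a not in correct)
        total = len(responses)
        return [(ans, n / total) for ans, n in wrong.most_common()
                if n / total >= threshold]

    # Any answer flagged here (e.g., "5") is a candidate for a new buggy message.
    print(most_common_wrong_answers(
        ["5", "7", "5", "25", "5", "7", "5", "5", "7", "7"], correct={"7"}))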
5 CONCLUSIONS AND CONTRIBUTIONS
In this paper, we have presented a description of our
authoring tool that grew out of the CTAT [7] authoring
tool. When CTAT was initially designed (by the last two
authors of this paper as well as Vincent Aleven), it was
mainly thought of as a tool to author cognitive rules.
CTAT supports the authoring of both example-tracing
tutors, which do not require computer programming but
are problem-specific, and cognitive tutors, which require
AI programming to build a cognitive model of student
problem solving but support tutoring across a range of
problems. Writing rules is time-intensive. CTAT allowed
authors to first demonstrate the actions that the model was
supposed to be able to “model-trace” with CTAT’s
Behavior Recorder. This enabled users to author a tutor
by demonstration, without programming.
It turned out that the demonstrations that CTAT recorded
sometimes seemed like good tutors on their own, and that
we might never have to write rules for the actions. The
CTAT example-tracing tutors mimic a cognitive tutor, in
that they could give buggy messages and hint messages.
When funding for ASSISTments was given by the US
Department of Education, it made sense to create a new
version of a simplified CTAT, which we call the ASSISTment
Builder. This builder is a simplification of the CTAT
example-tracing tutors in that they no longer support the
writing of production rules at all and only allow a single
directed line of reasoning. Is this a good design decision? We
are not sure. There are many things ASSISTments are not
good for (such as telling which solution strategy a student
used) but the data presented in this paper suggest that they
are much easier to build than cognitive tutors. They both
take less time to build and also require a lower threshold of
entry: learning to be a rule-based programmer is very hard,
and the skill set is not common, as very few professional
programmers have ever written a rule-based program (i.e.,
in a language like JESS, http://www.jessrules.com/jess/).
What don’t we know that we would like to know? It
would be nice to do an experiment that pitted the CTAT
rule-based tutors against ASSISTments, gave both teams an
equal amount of money, and saw which produced better
tutoring. By better tutoring, we mean which performs better
TABLE 2
Categorization of Comments on
Issues with ASSISTment Content
Fig. 6. Common wrong answers for problems are shown to help with remediation.
on a standard pretest/posttest type of analysis to see if
students learn more from either system. We assume that the
rule-based cognitive tutor would probably lead to better
learning, but it will cost more to get the same amount of
content built. How much better does the system have to be
to justify the cost? There are several works where
researchers built two different systems to compare them
[6], [12]. One work where researchers built two different
systems and tried to make claims about which one is better
is Kodaganallur et al.’s work [6]. They built a model-tracing tutor
and a constraint-based tutor, and expressed the opinion that
the constraint-based tutor was easier to build but they
thought it would not be as effective at increasing learning.
However, they did not collect student data to substantiate
the claim of better learning from the model-tracing tutors.
We need more studies like this to help figure out if
example-tracing tutors/ASSISTments are very different
from model-tracing tutors in terms of increasing student
learning. The obvious problem is that few researchers have
the time to build two different tutoring systems.
There is clearly a trade-off between the complexity of
what a tool can express and the amount of time it takes to
learn to use a tool. Very simple Web-based answering
systems (like www.studyisland.com) sit at the “easy to use
end” in that they only allow simple question-answer drill-
type activities. Imagine that is on the left. At the other
extreme, to the far right, are Cognitive Tutors, which are very
hard to learn to create content with, but offer
greater flexibility in creating different types of tutors.
Where do we think ASSISTments sit on this continuum?
We think that ASSISTments is very close to the Web-based
drill-type systems but just to the right. We think that
CTAT-created example-tracing tutors sit a little bit to the right of
ASSISTments but still clearly on the left end of the scale.
Where do other authoring tools sit on this spectrum?
Carnegie Learning researchers Blessing et al. are putting a
nice GUI onto the tools to create rule-based tutors [3]; their
tool probably sits just to the left of rule-based tutors. It is much
harder to place other authoring tools onto this spectrum,
but we guess that ASPIRE [10], a system to build
constraint-based tutors, sits just to the left of Blessing’s
tool, based upon the assumption that constraint-based
tutors are easier to create than cognitive rule-based tutors,
but still require some programming.
We think that there is a huge open middle ground in this
spectrum that might be very productive for others to look
at. The difference is what level of programming is required
by the user. Maybe it is possible to come up with a
programming language simple enough for most authors
that gives some reasonable amount of flexibility so that a
broader range of tutors could be built that would be better
for student learning.
In summary, we think that some of the good aspects of
the ASSISTment Builder and associated authoring tools
include: 1) they are completely Web-based and simple
enough for teachers to create content themselves; 2) they
capture some of the aspects of Cognitive Tutors (i.e., bug
messages, hint messages, etc.) but at less cost to the author;
and 3) they support the full life cycle of tutor creation and
maintenance with tools to show when buggy messages
need to be added, tools to get feedback from users, and,
of course, reports for teachers. We make no
claim that these are the optimal set of features, only that
we think they represent a reasonable
complexity versus ease-of-use trade-off.
ACKNOWLEDGMENTS
The authors would like to thank all of the people associated
with creating the ASSISTment system listed at www.
ASSISTment.org including investigators Kenneth Koedin-
ger and Brian Junker at Carnegie Mellon. They would also
like to acknowledge funding from the US Department of
Education, the National Science Foundation, the US Office
of Naval Research, and the Spencer Foundation. All of the
opinions expressed in this paper are solely those of the
authors and not those of our funding organizations.
REFERENCES
[1] V. Aleven, J. Sewall, B. McLaren, and K. Koedinger, “Rapid
Authoring of Intelligent Tutors for Real-World and Experimental
Use,” Proc. Int’l Conf. Advanced Learning Technologies (ICALT ’06),
pp. 847-851, 2006.
[2] J.R. Anderson, A.T. Corbett, K.R. Koedinger, and R. Pelletier,
“Cognitive Tutors: Lessons Learned,” The J. Learning Sciences,
vol. 4, no. 2, pp. 167-207, 1995.
[3] S. Blessing, S. Gilbert, S. Ourada, and S. Ritter, “Lowering the Bar
for Creating Model-Tracing Intelligent Tutoring Systems,” Proc.
13th Int’l Conf. Artificial Intelligence Education, R. Luckin and
K. Koedinger, eds., pp. 443-450, 2007.
[4] M. Feng, N.T. Heffernan, and K.R. Koedinger, “Predicting State
Test Scores Better with Intelligent Tutoring Systems: Developing
Metrics to Measure Assistance Required,” Proc. Eighth Int’l Conf.
Intelligent Tutoring Systems, M. Ikeda, K.D. Ashley, and T.-W. Chan,
eds., pp. 31-40, 2006.
[5] N.T. Heffernan, T.E. Turner, A.L.N. Lourenco, M.A. Macasek, G.
Nuzzo-Jones, and K.R. Koedinger, “The ASSISTment Builder:
Towards an Analysis of Cost Effectiveness of ITS Creation,” Proc.
Int’l Florida Artificial Intelligence Research Soc. Conf. (FLAIRS ’06),
2006.
[6] V. Kodaganallur, R.R. Weitz, and D. Rosenthal, “A Comparison of
Model-Tracing and Constraint-Based Intelligent Tutoring Para-
digms,” Int’l J. Artificial Intelligence Education, vol. 15, pp. 117-144,
2005.
[7] K.R. Koedinger, V. Aleven, N.T. Heffernan, B. McLaren, and M.
Hockenberry, “Opening the Door to Non-Programmers: Author-
ing Intelligent Tutor Behavior by Demonstration,” Proc. Seventh
Ann. Intelligent Tutoring Systems Conf., pp. 162-173, 2004.
[8] K.R. Koedinger, J.R. Anderson, W.H. Hadley, and M.A. Mark,
“Intelligent Tutoring Goes to School in the Big City,” Int’l J.
Artificial Intelligence Education, vol. 8, pp. 30-43, 1997.
[9] B. McCandliss, I.L. Beck, R. Sandak, and C. Perfetti, “Focusing
Attention on Decoding for Children with Poor Reading Skills:
Design and Preliminary Tests of the Word Building Intervention,”
Scientific Studies Reading, vol. 7, no. 1, pp. 75-104, 2003.
[10] A. Mitrovic, P. Suraweera, B. Martin, K. Zakharov, N. Milik, and J.
Holland, “Authoring Constraint-Based Tutors in ASPIRE,” Proc.
Eighth Int’l Conf. Intelligent Tutoring Systems, pp. 41-50, June 2006.
[11] A. Mitrovic, M. Mayo, P. Suraweera, and B. Martin, “Constraint-
Based Tutors: A Success Story,” Proc. 14th Int’l Conf. Industrial Eng.
Applications Artificial Intelligence Expert Systems (IEA/AIE 2001),
L. Monostori, J. Vancza, and M. Ali, eds., pp. 931-940, June 2001.
[12] A. Mitrovic, K. Koedinger, and B. Martin, “A Comparative
Analysis of Cognitive Tutoring and Constraint-Based Modeling,”
Proc. Int’l Conf. User Modeling, pp. 313-322, 2003.
[13] T. Murray, “Authoring Intelligent Tutoring Systems: An Analysis
of the State of the Art,” Int’l J. Artificial Intelligence Education,
vol. 10, pp. 98-129, 1999.
[14] T. Murray, S. Blessing, and S. Ainsworth, Authoring Tools for
Advanced Technology Learning Environments. Kluwer, 2003.
[15] P.I. Pavlik and J.R. Anderson, “Practice and Forgetting Effects on
Vocabulary Memory: An Activation-Based Model of the Spacing
Effect,” Cognitive Science, vol. 29, no. 4, pp. 559-586, 2005.
[16] S. Ramachandran and R. Stottler, “A Meta-Cognitive Computer-
Based Tutor for High-School Algebra,” Proc. World Conf. Educa-
tional Multimedia, Hypermedia, and Telecomm., D. Lassner and
C. McNaught, eds., pp. 911-914, 2003.
[17] L. Razzaq and N.T. Heffernan, “Scaffolding vs. Hints in the
Assistment System,” Proc. Eighth Int’l Conf. Intelligent Tutoring
Systems, Ikeda, Ashley, and Chan, eds., pp. 635-644, 2006.
[18] L. Razzaq, N. Heffernan, M. Feng, and Z. Pardos, “Developing
Fine-Grained Transfer Models in the ASSISTment System,
J. Technology, Instruction, Cognition, Learning, vol. 5, no. 3,
pp. 289-304, 2007.
[19] L. Razzaq, N. Heffernan, K. Koedinger, M. Feng, G. Nuzzo-Jones,
B. Junker, M. Macasek, K. Rasmussen, T. Turner, and J.
Walonoski, “Blending Assessment and Instructional Assistance,”
Intelligent Educational Machines within the Intelligent Systems
Engineering Book Series, N. Nedjah, L. deMacedo Mourelle,
M.N. Borges, and N.N. Almeida, eds., pp. 23-49, Springer, 2007.
[20] K. VanLehn, C. Lynch, K. Schulze, J.A. Shapiro, R. Shelby, L.
Taylor, D. Treacy, A. Weinstein, and M. Wintersgill, “The Andes
Physics Tutoring System: Lessons Learned,” Int’l J. Artificial
Intelligence Education, vol. 15, no. 3, pp. 1-47, 2005.
[21] K. VanLehn, Mind Bugs: The Origins of Procedural Misconceptions.
MIT Press, 1990.
Leena Razzaq received the MS degree in
computer science from Worcester Polytechnic
Institute. She is currently working toward the
PhD degree in computer science at the same
university. She is interested in intelligent tutoring
systems, human-computer interaction, and user
modeling. She is a member of the ASSISTment
Project in the role of content director, where
she has been in charge of authoring tutoring
content. She spends a large amount of time in
middle schools in the Worcester area, helping teachers to use the
system in their classrooms, and running randomized controlled studies
to determine the best tutoring practices. Her research is focused on
studying how different tutoring strategies in intelligent tutoring systems
affect students of varying abilities and how to adapt tutoring systems to
individual students.
Jozsef Patvarczki received the BS degree
from Budapest Tech, Hungary, and the MS
degree from the University of Applied Sciences,
Germany. He is currently working toward the
PhD degree in computer science at Worcester
Polytechnic Institute. His primary interests lie in
the areas of scalability, load-balancing, net-
works, intelligent tutoring systems, and grid
computing, particularly, load modeling and
performance tuning. He has also worked in
the areas of database layout advisor and Web-based applications. His
research also contributed to the infrastructure design of the ASSIST-
ment system and produced a robust 24/7 system.
Shane F. Almeida received the BS degree in
computer science and the MS degrees in
computer science and electrical and computer
engineering from Worcester Polytechnic Insti-
tute. He was previously a researcher and a lead
developer for the ASSISTment Project. While a
graduate student, he received funding from the
US National Science Foundation and the US
Department of Education. He now develops
software for wireless controller technology and
products for mobile devices while continuing to consult for the
ASSISTment Project.
Manasi Vartak is an undergraduate student at
Worcester Polytechnic Institute majoring in
computer science and mathematics. She plans
to graduate in 2010. She worked on a semester-
long independent study project involving the
ASSISTment System. As a part of this project,
she added a “template” feature to the ASSIST-
ment System which enables the rapid genera-
tion of isomorphic content while also increasing
flexibility by allowing the system to interface with
third party software.
Mingyu Feng received the BS and MS degrees
in computer science from Tianjin University,
China. She is currently working toward the
PhD degree in computer science at Worcester
Polytechnic Institute. Her primary interests lie in
the areas of intelligent tutoring systems, parti-
cularly, student modeling and educational data
mining. She has also worked in the area of
cognitive modeling and psychometrics.
Her research has contributed to the design and
evaluation of educational software, has developed computing techni-
ques to address problems in user learning, and has produced basic
results on tracking student learning of mathematical skills.
Neil T. Heffernan received the BS degree from
Amherst College, Massachusetts. He then
taught middle school in inner city Baltimore,
Maryland, for two years, after which he received
the PhD degree in computer science from
Carnegie Mellon University, building intelligent
tutoring systems. He currently works with teams
of researchers, graduate students, and tea-
chers to build the ASSISTment System, a Web-
based intelligent tutor that is used by more than
3,000 students as part of their normal mathematics classes. He has
more than 50 peer-reviewed publications and has received more than
$8 million in funding on more than a dozen different grants from the US
National Science Foundation, the US Department of Education, the US
Army, the Spencer Foundation, the Massachusetts Technology
Transfer Center, and the US Office of Naval Research.
Kenneth R. Koedinger received the MS degree
in computer science from the University of
Wisconsin in 1986 and the PhD degree in
psychology from Carnegie Mellon University
(CMU) in 1990. He is a professor of human-
computer interaction and psychology at Carne-
gie Mellon University. He has authored more
than 190 papers and has won more than
16 major grants. He is a cofounder of Carnegie
Learning, a company marketing advanced edu-
cational technology, and directs the Pittsburgh Science of Learning
Center (see LearnLab.org).
... While not fully an ITS, ASSISTments is an adaptive tutor that places the instructor in an important role within its system [14]. Supporting such a design philosophy, the ASSISTments Builder was created to simplify the content authoring process for teachers and instructors [35]. To accomplish this, the builder provided easy integration of problem-help features such as hints and scaffolds, while also allowing skill mapping of knowledge components. ...
... To accomplish this, the builder provided easy integration of problem-help features such as hints and scaffolds, while also allowing skill mapping of knowledge components. Furthermore, it managed to match the lower end of CTAT's 50 hour estimate of development for one hour's worth of instructional content despite using a GUI instead of a programmatic interface [35]. The ASSISTments Builder also supported problem variabilization directly through its GUI, allowing for easier variation of problem content [35]. ...
... Furthermore, it managed to match the lower end of CTAT's 50 hour estimate of development for one hour's worth of instructional content despite using a GUI instead of a programmatic interface [35]. The ASSISTments Builder also supported problem variabilization directly through its GUI, allowing for easier variation of problem content [35]. ...
Preprint
Full-text available
Involving subject matter experts in prompt engineering can guide LLM outputs toward more helpful, accurate, and tailored content that meets the diverse needs of different domains. However, iterating towards effective prompts can be challenging without adequate interface support for systematic experimentation within specific task contexts. In this work, we introduce PromptHive, a collaborative interface for prompt authoring, designed to better connect domain knowledge with prompt engineering through features that encourage rapid iteration on prompt variations. We conducted an evaluation study with ten subject matter experts in math and validated our design through two collaborative prompt-writing sessions and a learning gain study with 358 learners. Our results elucidate the prompt iteration process and validate the tool's usability, enabling non-AI experts to craft prompts that generate content comparable to human-authored materials while reducing perceived cognitive load by half and shortening the authoring process from several months to just a few hours.
... To evaluate the effectiveness of the EGRec model, we utilized three datasets: ASSISTments 2009 [31], ASSISTments 2015 [32], and XueTangX [33]. These experiments aimed to compare EGRec with multiple baseline models across various evaluation metrics and to understand the contribution of each component to the overall performance of the EGRec model through ablation studies. ...
... • ASSISTments 2009 [31]: This dataset contains interactions from 4,151 students and 325,673 records covering 110 knowledge points. It is rich in interaction data, allowing us to effectively evaluate knowledge tracing models. ...
Preprint
Full-text available
Massive Open Online Courses (MOOCs) provide abundant learning resources but also overwhelm learners with their sheer volume, leading to challenges such as data sparsity and cold-start issues in conventional recommendation systems. To address these challenges, we propose EGRec, a novel course recommendation model that combines knowledge graphs and Heterogeneous Graph Attention Networks to improve recommendation precision, diversity, and relevance. By integrating multimodal data, EGRec captures intricate semantic relationships between courses and knowledge points, enabling personalized and context-sensitive recommendations. Extensive experiments on real MOOCs datasets demonstrate that EGRec significantly outperforms traditional models, highlighting its potential to enhance tailored learning experiences.
... Both cognitive tutors and CBM systems use static approaches for building problem domain, which means that in practice these systems can be built only by high-qualified experts who thoroughly understand the domain and possess adequate programming knowledge and skills (Razzaq et al., 2009;Stamper et al., 2011;Stein et al., 2013). Using artificial intelligence methods, such as data mining and machine learning, knowledge base can be built dynamically. ...
Preprint
Today's software industry requires individuals who are proficient in as many programming languages as possible. Structured query language (SQL), as an adopted standard, is no exception, as it is the most widely used query language to retrieve and manipulate data. However, the process of learning SQL turns out to be challenging. The need for a computer-aided solution to help users learn SQL and improve their proficiency is vital. In this study, we present a new approach to help users conceptualize basic building blocks of the language faster and more efficiently. The adaptive design of the proposed approach aids users in learning SQL by supporting their own path to the solution and employing successful previous attempts, while not enforcing the ideal solution provided by the instructor. Furthermore, we perform an empirical evaluation with 93 participants and demonstrate that the employment of hints is successful, being especially beneficial for users with lower prior knowledge.
... While adult tutoring is particularly effective, it is cost prohibitive, therefore not available ubiquitously. There have been several studies where ITS have demonstrated success when used amongst k-12 learners in controlled settings where all participants engage in a fixed amount of activity [1,14,16,25,26]. For example, one of the largest randomized trials of educational tools in action was, PAT. ...
Preprint
Full-text available
This paper aims to uncover needs of adult learners when using pedagogical technologies such as intelligent tutoring systems. Further, our aim with this work is to understand the usability challenges when deploying tutors at scale within the adult learning audience. As educational technologies become more ubiquitous within k-12 education, this paper aims to bridge the gap in understanding on how adult users might utilize intelligent tutors. In pursuit of this, we built four intelligent tutors, and deployed them to 110 classrooms at a state technical college for an entire academic year. Following this deployment, we conducted focus groups amongst users to gather data to understand how learners perceived the optional educational technology during their academic journey. We further analyzed this data using foundational HCI methodologies to extract leanings and design recommendations on how developers might craft educational technologies for adoption at scale for the adult learning population.
... This motivated the creation of content authoring tools (CATs) to facilitate ITS creation. The ASSISTment Builder [30] was developed to support content authoring in a math ITS and enabled a development ratio of 40:1, that is, 40 hours of authoring per hour of instruction. For model tracing-based ITSs, example tracing [1] has proven to be an effective technique that, depending on the domain, enables development ratios between 50:1 and 100:1. ...
Chapter
Full-text available
Conversational tutoring systems (CTSs) offer learning experiences through interactions based on natural language. They are recognized for promoting cognitive engagement and improving learning outcomes, especially in reasoning tasks. Nonetheless, the cost of authoring CTS content is a major obstacle to widespread adoption and to research on effective instructional design. In this paper, we discuss and evaluate a novel type of CTS that leverages recent advances in large language models (LLMs) in two ways: First, the system enables AI-assisted content authoring by inducing an easily editable tutoring script automatically from a lesson text. Second, the system automates the script orchestration in a learning-by-teaching format via two LLM-based agents (Ruffle & Riley) acting as a student and a professor. The system allows for free-form conversations that follow the ITS-typical inner and outer loop structure. We evaluate Ruffle & Riley's ability to support biology lessons in two between-subject online user studies (N = 200) comparing the system to simpler QA chatbots and a reading activity. Analyzing system usage patterns, pre/post-test scores, and user experience surveys, we find that Ruffle & Riley users report high levels of engagement and understanding and perceive the offered support as helpful. Even though Ruffle & Riley users require more time to complete the activity, we did not find significant differences in short-term learning gains over the reading activity. Our system architecture and user study provide various insights for designers of future CTSs. We further open-source our system to support ongoing research on effective instructional design of LLM-based learning technologies.
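As a rough picture of the orchestration described above, the sketch below alternates two agents over a tutoring script. The real system drives both roles with LLM calls, which are stubbed out here; all script text and the learner answer are invented.

# Schematic two-agent loop in the spirit of the learning-by-teaching
# format described above. The LLM calls are replaced by stubs.
script = [
    "Ask the learner to explain photosynthesis in their own words.",
    "Probe deeper: where do the light reactions take place?",
]

def student_agent(script_item):
    # Stand-in for the LLM "student" that asks to be taught.
    return f"Student agent: {script_item}"

def professor_agent(learner_answer):
    # Stand-in for the LLM "professor" that monitors and gives feedback.
    return f"Professor agent: feedback on {learner_answer!r}"

for item in script:                          # outer loop: advance the script
    print(student_agent(item))               # inner loop: elicit an explanation,
    learner_answer = "(free-form explanation typed by the learner)"
    print(professor_agent(learner_answer))   # then respond with feedback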
Article
Knowledge Tracing (KT) aims to predict students' future performance based on their past exercises and additional information in educational settings. KT has received significant attention because it facilitates personalized experiences in educational situations. Autoregressive modeling of the sequence of past exercises has proven effective for this task. One of the primary challenges of autoregressive modeling for knowledge tracing is effectively representing the anterior (pre-response) and posterior (post-response) states of learners across exercises. Existing methods often employ complex model architectures to update learner states using question and response records. In this study, we propose a novel perspective on the knowledge tracing task by treating it as a generative process, consistent with the principles of autoregressive models. We demonstrate that knowledge states can be directly represented through autoregressive encodings of a question-response alternate sequence, where the model generates the most probable representation in hidden-state space by analyzing historical interactions. This approach underpins our framework, termed Alternate Autoregressive Knowledge Tracing (AAKT). Additionally, we incorporate supplementary educational information, such as question-related skills, into our framework through an auxiliary task, and include extra exercise details, such as response time, as additional inputs. Our proposed framework is implemented using advanced autoregressive technologies from Natural Language Generation (NLG) for both training and prediction. Empirical evaluations on four real-world KT datasets indicate that AAKT consistently outperforms all baseline models in terms of AUC, ACC, and RMSE. Furthermore, extensive ablation studies and visualized analyses validate the effectiveness of key components in AAKT.
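The representational move at the heart of this framework, flattening (question, response) pairs into a single alternating sequence that an autoregressive model reads left to right, can be sketched as follows. The token scheme is invented for illustration; the real model also folds in skills and response times.

# Sketch of the alternating question-response encoding AAKT builds on.
history = [("q17", 1), ("q3", 0), ("q17", 1)]   # (question, correct?) pairs

NUM_Q = 100          # hypothetical question-vocabulary size

def encode(history):
    """Flatten pairs into one alternate sequence: q, r, q, r, ...
    so an autoregressive model can read pre- and post-response states."""
    seq = []
    for q, r in history:
        seq.append(int(q[1:]))          # question token
        seq.append(NUM_Q + r)           # response token, offset past questions
    return seq

print(encode(history))   # [17, 101, 3, 100, 17, 101]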
Article
Full-text available
This study examined the reading skills of children who have deficient decoding skills in the years following the first grade and traced their progress across 20 sessions of a decoding skills intervention called Word Building. Initially, the children demonstrated deficits in decoding, reading comprehension, and phonemic awareness skills. Further examination of decoding attempts revealed a pattern of accurate decoding of the first grapheme in a word, followed by relatively worse performance on subsequent vowels and consonants, suggesting that these children were not engaging in full alphabetic decoding. The intervention directed attention to each grapheme position within a word through a procedure of progressive minimal pairing of words that differed by one grapheme. Relative to children randomly assigned to a control group, children assigned to the intervention condition demonstrated significantly greater improvements in decoding attempts at all grapheme positions and also demonstrated significantly greater improvements in standardized measures of decoding, reading comprehension, and phonological awareness. Results are discussed in terms of the consequences of not fully engaging in alphabetic decoding during early reading experience, and the self-teaching role of alphabetic decoding for improving word identification, reading comprehension, and phonological awareness skills.
Article
Full-text available
Authoring tools for Intelligent Tutoring Systems are especially valuable if they not only provide a rich set of options for the efficient authoring of tutoring systems but also support controlled experiments in which the added educational value of new tutor features is evaluated. The Cognitive Tutor Authoring Tools (CTAT) provide both. Using CTAT, real-world "Example-Tracing Tutors" can be created without programming. CTAT also provides various kinds of support for controlled experiments, such as administration of different experimental treatments, logging, and data analysis. We present two case studies in which Example-Tracing Tutors created with CTAT were used in classroom experiments. The case studies illustrate a number of new features in CTAT: Use of Macromedia Flash MX 2004 for creating tutor interfaces, extensions to the Example-Tracing Engine that allow for more flexible tutors, a Mass Production facility for more efficient template-based authoring, and support for controlled experiments.
Chapter
Full-text available
Middle school mathematics teachers are often forced to choose between assisting students' development and assessing students' abilities because of the limited classroom time available. To help teachers make better use of their time, a web-based system called the Assistment system was created to integrate assistance and assessment by offering instruction to students while providing the teacher a more detailed evaluation of their abilities than is possible under current approaches. An initial version of the Assistment system was created and used in May 2004 with approximately 200 students; over 1,000 students currently use it once every two weeks. The hypothesis is that Assistments can assist students while also assessing them. This chapter describes the Assistment system and some preliminary results.
Book
Researchers and educational software developers have talked about building authoring tools for intelligent tutoring systems (ITSs), adaptive and knowledge-based instructional systems, and other forms of advanced-technology learning environments (ATLEs) ever since these forms of educational software were introduced in the 1970s. The technical complexity and high development costs of these systems contrast sharply with the common picture of education as a "home grown" activity performed by individual teachers and trainers who craft lessons tailored for each situation. There have been two primary reasons to create authoring tools for ATLEs: to reduce development cost, and to allow practicing educators to become more involved in their creation. The goal of creating usable authoring tools dovetails with the recent trend toward interoperability and reusability among these systems. We use the phrase "advanced-technology learning environment" to refer to educational software on the leading edge of practice and research. These systems go beyond traditional computer-based instruction or educational simulations by providing one or more of the following benefits:
- Providing rich, or even "immersive," interfaces so that students can "learn by doing" in realistic and meaningful contexts.
- Dynamically adapting the interface or content to the student's goals, skill level, or learning style.
- Providing expert hints, explanations, or problem-solving guidance.
- Allowing "mixed-initiative" tutorial interactions, where students can ask questions and have more control over their learning.
Conference Paper
Constraint-based modelling (CBM) was proposed in 1992 as a way of overcoming the intractable nature of student modelling. Originally, Ohlsson viewed CBM as an approach to developing short-term student models. In this talk, I will illustrate how we have extended CBM to support both short- and long-term models, and developed a methodology for using such models to make various pedagogical decisions. In particular, I will present several successful constraint-based tutors built for various procedural and non-procedural domains. I will illustrate how constraint-based modelling supports learning and meta-cognitive skills, and present a current project within the Intelligent Computer Tutoring Group.
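In CBM, domain knowledge is expressed as constraints, each pairing a relevance condition (when the constraint applies) with a satisfaction condition (what a correct solution must then exhibit), and the student model records which constraints a student violates. The toy algebra constraint and step encoding below are invented to illustrate the shape, not drawn from any of the tutors mentioned.

# A CBM constraint pairs a relevance condition with a satisfaction
# condition; a violation (relevant but unsatisfied) triggers feedback.
constraint = {
    "relevance": lambda step: step.get("moved_term_across_equals", False),
    "satisfaction": lambda step: step.get("flipped_sign", False),
    "feedback": "When you move a term across the equals sign, flip its sign.",
}

step = {"moved_term_across_equals": True, "flipped_sign": False}

if constraint["relevance"](step) and not constraint["satisfaction"](step):
    print(constraint["feedback"])   # the violated constraint drives tutoring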
Chapter
Intelligent tutoring systems are computer programs that use artificial intelligence techniques to interact with students and to provide experiences tailored to students' immediate needs. Originally, the term included only systems that modeled the student's developing knowledge, but today the term often includes systems that adapt to the pattern of a student's activity and to estimates of what the student knows.
Article
Algebra is an important high-school subject and serves as a gateway to higher mathematics education. Yet many students struggle with algebra and are left behind. This paper describes a computer-based algebra tutor, designed for use in tandem with classroom instruction, that provides adaptive practice with problem-solving skills. The tutor is designed to reify the problem-solving process for the student and bring it to their awareness. This is accomplished in a problem-solving environment that supports each problem with helper sub-problems organized in a meta-cognitive framework. The tutor also adapts instruction to provide optimal challenge to each student and adapts the interface to offer help suggestions to students who have difficulty solving problems. The tutor is currently being evaluated.
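One plausible way to picture the helper sub-problem structure described above; the field names and fallback policy here are invented for illustration, not taken from the tutor itself.

# Rough illustration of a problem backed by helper sub-problems, in the
# spirit of the tutor described above.
problem = {
    "question": "Solve 3x + 5 = 20",
    "answer": "5",
    "sub_problems": [   # meta-cognitive ordering: plan first, then execute
        {"hint": "What operation isolates 3x?", "answer": "subtract 5"},
        {"hint": "3x = 15. What is x?", "answer": "5"},
    ],
}

def tutor(problem, response):
    """If the top-level answer is wrong, fall back to the first sub-problem."""
    if response == problem["answer"]:
        return "Correct!"
    first = problem["sub_problems"][0]
    return f"Let's break it down: {first['hint']}"

print(tutor(problem, "4"))   # a wrong answer triggers the first helper step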