Kehrer, P., Kelly, K., & Heffernan, N. (2013). Does immediate feedback while doing homework improve learning? In Boonthum-Denecke & Youngblood (Eds.), Proceedings of the Twenty-Sixth International Florida Artificial Intelligence Research Society Conference (FLAIRS 2013), St. Pete Beach, Florida, May 22-24, 2013. AAAI Press, pp. 542-545.
Does Immediate Feedback While Doing Homework Improve Learning?
Paul Kehrer, Kim Kelly & Neil Heffernan
Worcester Polytechnic Institute
Much of the literature surrounding the effectiveness of intelligent
tutoring systems has focused on the type of feedback students
receive. Current research suggests that the timing of feedback
also plays a role in improved learning. Some researchers have
shown that delaying feedback might lead to a “desirable
difficulty”, where students’ performance while practicing is
lower, but they in fact learn more. Others using Cognitive Tutors
have suggested delaying feedback is bad, but those students were
using a system that gave detailed assistance. Many web-based homework systems give only correctness feedback (e.g., WebAssign). Should such systems give immediate feedback, or might it be better for that feedback to be delayed? It is hypothesized
that immediate feedback will lead to better learning than delayed
feedback. In a randomized controlled crossover-“within-subjects”
design, 61 seventh grade math students participated. In one
condition students received correctness feedback immediately,
while doing their homework, while in the other condition, the
exact same feedback was delayed, to when they checked their
homework the next day in class. The results show that when
given feedback immediately students learned more than when
receiving the same feedback delayed.
The field of Intelligent Tutoring Systems (ITS) has had a
long history (Anderson et al. 1995, Koedinger et al. 1997,
Corbett et al. 1997). Recently, VanLehn (2011) claimed that ITS can be nearly as effective as human tutors. VanLehn also concluded that Computer-Aided Instruction (CAI) is not as effective as ITS. The distinction lies in the type and granularity of feedback provided. ITS provide fine-grained, detailed, and specific feedback and tutoring, often at the step or sub-step level. In contrast, CAI provides immediate feedback on the answer only. Past research has focused predominantly on the use of these systems in the classroom, not as homework support.
Some studies have shown the effectiveness of ITS used in the context of a Web-Based Homework Support (WBHS) system (Mendicino et al. 2009; Singh et al. 2011; Bonham et al. 2003). Similarly, VanLehn et al. (2005) have shown
significant learning gains in students using the Andes
Physics tutoring system in place of traditional homework.
However, these learning gains are the result of
sophisticated feedback rather than correctness-only
feedback. Kelly et al. (submitted) showed that correctness-only feedback with unlimited attempts to self-correct results in significant learning gains compared to no feedback at all. However, this feedback was provided while students
were completing their homework. Does immediacy of this
type of feedback matter?
Timing of Feedback
In addition to the type of feedback affecting efficacy,
timing of feedback has also been studied. Shute (2007)
summarizes the inconsistencies in the research on
immediate versus delayed feedback and concludes that
both types of feedback have pros and cons. Much of the
research cited in her analysis was conducted in laboratory
settings or within the context of a classroom. However, the
reality is that students in America are given homework
every night and traditionally receive feedback the
following day. ITS, as WBHS, provide an opportunity for
students to receive feedback immediately, while doing
their homework instead of waiting. But does this
immediacy of feedback impact learning in the unique case
of homework?
The current research question is: do students learn more when they receive correctness feedback as they work on their homework than when they get the same feedback the next day? Given that the quality of this feedback is limited compared to previous studies, one might wonder whether it is critical for students to receive feedback immediately. We seek to determine not only whether there is a difference in learning gains, but also how large an effect the immediacy of feedback has when used in a real educational setting.
Current Study
The present study used ASSISTments, an intelligent tutoring system capable of providing scaffolding and tutoring. Because this study focuses on the effectiveness of correctness-only feedback, the tutoring features were turned off.
Experimental Design
A total of 65 seventh grade students in a suburban middle
school in MA participated in this study as part of their
regular math homework and Pre-Algebra math class. The
topics covered during this study included surface area and
volume of 3-dimensional figures.
A pre-test was administered for each topic. The pre-test
consisted of one question for each sub-topic included in the
lesson. For example, the lesson on surface area of 3-
dimensional figures actually had four sub-topics that were
being taught: surface area of a pyramid, surface area of a
cone given the slant height, surface area of a cone given
the height, and surface area of a sphere. The lesson on
volume of 3-dimensional figures had five sub-topics,
which included: volume of a pyramid, volume of a cone,
volume of a sphere, volume of a compound figure, and
given the volume of a figure find the missing value of a
side. All of the study materials including the data can be
found in Kelly (2012).
The accompanying homework assignments were
completed using ASSISTments, a web-based tutoring
system. Students were accustomed to using the program
for nightly homework. The homework was designed using triplets, or groups of three questions that were morphologically similar to the questions on the pre-test. There were three questions in a row for each of the primary topics. Additional challenge questions relating to the topic were also included in the homework to maintain ecological validity.
Post-tests for each topic were also administered. There
was one question for each sub-topic and they were
morphologically similar to the questions on the pre-test and
homework assignments. Therefore, the tests on surface
area had four questions while the tests on volume had five.
Students were blocked based on prior knowledge into two
conditions, immediate feedback and delayed feedback. To
do this, overall performance in ASSISTments was used to
rank students. Pairs of students were taken and each was
randomly assigned to either of the conditions. Students in
the immediate feedback condition were given correctness
feedback immediately on each question as they completed
their homework. Students in the delayed feedback
condition completed their homework on a worksheet but
were given the same feedback the next day.
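The blocking procedure described above can be sketched as follows. This is a minimal illustration, not the study's actual code: it assumes a list of student identifiers already sorted by overall ASSISTments performance, takes adjacent pairs, and randomly assigns one member of each pair to each condition.

```python
import random

def blocked_assignment(students_by_rank, rng=random):
    """Pair students ranked by prior knowledge and randomly assign
    one member of each pair to each feedback condition."""
    conditions = {}
    for i in range(0, len(students_by_rank) - 1, 2):
        pair = list(students_by_rank[i:i + 2])
        rng.shuffle(pair)  # random assignment within the matched pair
        conditions[pair[0]] = "immediate"
        conditions[pair[1]] = "delayed"
    return conditions
```

Because assignment is randomized within matched pairs, each condition receives one student from every adjacent pair of the prior-knowledge ranking, keeping the groups balanced.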
At the start of the study, all students were pre-tested in
class, which was part of the typical routine in this
classroom. They were then formally instructed on surface
area of 3-dimensional figures. That night, they completed a
related homework assignment. Students in the delayed
feedback condition completed their assignment on a
worksheet, receiving NO feedback. Students in the
immediate feedback condition completed their homework
using ASSISTments, which immediately told them whether their response was correct. In the case of an incorrect response, students were given unlimited attempts to correct their answer. A correct response was required to move on to the next question; therefore, students could ask for the correct response if needed by pressing the “Show hint 1 of 1” button. It is important to note that when tutoring features are active, this button would provide a hint. However, to explore correctness-only feedback, here this button provided the correct response.
The following day, all students reviewed their
homework. Students in the delayed feedback condition
used ASSISTments to enter their answers from their
worksheet, providing them the same correctness-only
feedback and unlimited attempts to self-correct that was
given in the experimental condition. Students in the
immediate feedback condition reviewed their responses
using the item report in ASSISTments. The item report
shows students which questions they answered incorrectly
and what response they initially gave. They were
encouraged to look back over their responses and work. To
end class, all students were then given a post-test on
surface area of 3-dimensional figures.
The study was replicated the following week with
students switching conditions and with a new topic. Again,
students were pre-tested during class and formally
instructed on volume of 3-dimensional figures. That night,
students completed their homework in the opposite
condition. Specifically, students who had received
immediate feedback now completed the homework on a
worksheet, without feedback and those who had received
delayed feedback now used ASSISTments to receive
feedback immediately. The next day, in class, students
reviewed their homework. Students in the delayed
feedback condition used ASSISTments to receive
correctness feedback and those in the immediate feedback
condition used the ASSISTments item report to review
incorrect responses. A post-test was then given.
Results
Data from 61 students were included in the data analysis.
Students were excluded from the analysis if they were
absent for any part of the study (n=4). A two-tailed t-test analysis of the pre-tests showed that students were evenly distributed across conditions for both assignments (Surface Area: Immediate Feedback M=14, SD=16; Delayed Feedback M=13, SD=13; p=0.82. Volume: Immediate Feedback M=3, SD=0.7; Delayed Feedback M=5, SD=0.9; p=0.25).
While between-subjects analyses are common, this study was conducted to provide a within-subjects analysis. Results showed that when students received immediate feedback (M=60, SD=27) they performed better than when receiving the same feedback delayed (M=51, SD=30); this difference, however, was only marginally significant (t(60)=2.1). A paired t-test analysis of the pre-test scores shows that students had significantly more background knowledge of Surface Area (M=14, SD=14) than Volume (M=4, SD=8) (t(60)=3.9, p<.0001). Therefore, relative gain
scores were calculated and analyzed to determine if there
was in fact increased learning as a result of immediate
feedback when the potential for growth was accounted for.
To calculate the relative gain score, for each student, we took his/her gain score and divided it by the number of points he/she could have gained (total number of questions minus pre-test score). For example, if a student scored 1 correct on the pre-test out of 5 questions, and later scored 3 on the post-test, the relative gain score was (3-1)/(5-1) = 50%. We had one student with a negative gain score (she had one correct on the pre-test but zero correct on the post-test); the resulting negative score was included.
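As a concrete sketch of this computation (with hypothetical scores, not the study's data):

```python
def relative_gain(pre_score, post_score, total_questions):
    """Gain score divided by the points the student could still have gained."""
    return (post_score - pre_score) / (total_questions - pre_score)
```

The example from the text, relative_gain(1, 3, 5), gives 0.5 (50%); a student who drops from one correct to zero correct gets a negative value, which, as noted, was retained in the analysis.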
A paired t-test of relative gains shows that students
learned 12% more when given immediate feedback
(M=67%, SD=26), than delayed feedback (M=55%,
SD=32), (t(60)=2.501, p=0.015). The effect size is 0.37
with a 95% confidence interval of 0.05 to 0.77.
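The within-subjects comparison above can be sketched as follows. This is a minimal illustration with made-up scores (the actual study data are in Kelly (2012)), and it computes the effect size on the difference scores; the paper does not state which effect-size formula was used.

```python
import math
import statistics

def paired_t_and_effect_size(immediate, delayed):
    """Paired t statistic and a Cohen's-d-style effect size
    for within-subjects scores (df = n - 1)."""
    diffs = [a - b for a, b in zip(immediate, delayed)]
    n = len(diffs)
    sd = statistics.stdev(diffs)         # sample SD of the differences
    mean_diff = statistics.mean(diffs)
    t = mean_diff / (sd / math.sqrt(n))  # paired t statistic
    d = mean_diff / sd                   # effect size on difference scores
    return t, d
```

A paired design like this one gains power because each student serves as his or her own control: only the per-student differences between conditions enter the test statistic.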
We were curious to know whether the effect of condition was experienced similarly across the two math topics, so we compared the post-tests alone, the absolute gain scores, and the relative gain scores for each topic, and found similar patterns of immediate feedback being more effective than delayed feedback, but with an expected lower level of significance that was no longer reliable. See Table 2.
Contributions and Discussion
This study adds to the delayed versus immediate feedback
debate by exploring a critical context that has been ignored
in previous research. Specifically, immediate feedback,
while students complete homework leads to better learning
than waiting until the next day to receive that same
feedback. This is an extremely important situation to
consider as it applies to almost every student in America.
While further research is needed comparing different types of feedback, assessment, and control conditions, this study moves the debate in a new direction with respect to delay.
Discussion and Future Research
There has been some controversy about whether and when
immediate feedback is good especially surrounding
performance on immediate tasks versus delayed
assessments. However, in the context of homework
support, the goal is immediate learning gains that prepare
the student for the next lesson. In this ecologically valid
setting, it’s very difficult to measure retention or to deliver
a valid delayed assessment because other learning occurs
after the intervention.
One area that should be explored further with this
“overnight delay” is task transfer. For instance, according to Lee (1992), immediate feedback did worse than delayed feedback on far-transfer tasks. The lack of self-correction and error analysis was thought to explain these findings. Similarly, Mathan and Koedinger (2003) argued that “feedback could prevent important secondary skills from being exercised.”
These secondary skills include error detection, error
correction and metacognitive skills. The author discusses
the need to “check your work”. However, middle school
students are still learning how to check their work and
what it means to find errors. Quite often strong students
aren’t even aware they made a mistake unless it’s pointed
out. Similarly, many students don’t know what it means to
check their work. We would argue that providing
correctness-only feedback actually promotes these skills
because it requires students to self-correct in order to move
on. They are responsible for detecting their error and
correcting it. Additionally, students begin to recognize the
types of errors they make repeatedly and learn to check
specifically for those.
Merrill et al. (1995) argue that a benefit of human tutors is that they do not intervene when learning might occur through the mistake. In the present study, the timing of the feedback allows students to make that mistake and, like a human tutor, the system simply tells them that the answer is not quite right. Students must then detect their error and correct it, much as they would with a human tutor.
Table 2: Mean and standard deviation (in parentheses) post-test scores, absolute gain scores, and relative gain scores for both topics. Effect sizes and significance levels included.

                    Immediate    Delayed     Effect Size & p-value
Surface Area:
  Post Test         77% (22)     66% (29)    0.37, p=0.11
  Absolute Gains    64% (26)     53% (31)    0.36, p=0.14
  Relative Gains    61% (26)     50% (29)    0.38, p=0.13
Volume:
  Post Test         63% (24)     54% (27)    0.37, p=0.14
  Absolute Gains    61% (23)     48% (29)    0.42, p=0.08
  Relative Gains    61% (26)     50% (29)    0.39, p=0.13
The results of the current study largely support our
hypothesis that immediate feedback does improve learning
compared to delayed feedback. As expected, students who were told whether their answers were correct and were able to fix them as they completed their homework learned more than students who completed the homework on a worksheet and then went through the exact same routine to receive their feedback the following day.
There are many possible explanations for why this
happens. Perhaps students show more effort while doing
their homework the first time as opposed to the next day
after they have already done the work. Without immediate
feedback, students practice the skill incorrectly and must
then re-condition their thinking once feedback is given.
This process takes more time. Our intuition for this result
is that immediate feedback helps to correct misconceptions
in student learning as soon as they are made. In the
delayed feedback condition, it is possible for a student to
reinforce a misconception of the content by making the
same mistake over and over without being corrected by
ASSISTments’ immediate feedback. Future research
should focus on which aspects of feedback make it more
effective to further establish the role timing plays in the
delivery of that feedback.
The controversy over “Is math homework valuable?”
A second considerable contribution provided by this paper
addresses the question of the value of homework supported
by computers. The most comprehensive meta-analysis of
homework has been done by Cooper et al. (2006), which
points out many criticisms of homework in the US. It is possibly the case that many students are wasting their time with homework, which tarnishes the practice of assigning math homework. In a review of 69 studies, 50 showed a positive correlation supporting the benefits of homework, but a full 19 were negative. To quote Cooper et al. (2006): “No strong evidence was found for an association between the homework-achievement link.” We offer this study as one that is able not only to show an overall positive effect of homework, but also to show a benefit for computer-supported homework. Cooper et al. (2006) complained of the lack of randomized controlled trials in these homework studies, particularly those that had the unit of assignment the same as the unit of analysis. Our study uses strong methodology to provide such an example. We found that intelligent tutoring systems can be
a perfect vehicle to demonstrate the value of homework
support as this study certainly shows that computer
supported homework leads to improved learning gains.
Acknowledgments
This work was funded in part by NSF GK12 grant 0742503; NSF grant 0448319; IES grants R305C100024 and R305A120125; and the Bill and Melinda Gates Foundation Next Generation Learning Challenge.
References
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences, 4, 167-207.
Bonham, S.W., Deardorff, D.L., & Beichner, R.J. (2003).
Comparison of student performance using Web- and paper- based
homework in college-level physics. Journal of Research in
Science Teaching, 40(10), 1050-1071.
Cooper, H., Robinson, J. C., & Patall, E. A. (2006). Does
homework improve academic achievement? A synthesis of
research, 1987-2003. Review of Educational Research, 76, 1-62.
Corbett, A., Koedinger, K.R., & Anderson, J.R. (1997).
Intelligent Tutoring Systems. In M. Helander, T.K. Landauer &
P. Prahu (Eds.) Handbook of Human-Computer Interaction,
Second Edition (pp.849-874). Amsterdam: Elsevier Science.
Kelly, K. (2012) Study Materials. Accessed 11/18/12.
Koedinger, K. R., Anderson, J. R., Hadley, W. H., & Mark, M. A. (1997). Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8, 30-43.
Lee, A.Y. (1992). Using tutoring systems to study learning: An
application of HyperCard. Behavior Research Methods,
Instruments, & Computers, 24(2), 205-212.
Mathan, S. & Koedinger K. (2003). Recasting the Feedback
Debate: Benefits of Tutoring Error Detection and Correction
Skills. Proc. of the International AIED Conference 2003, July 20-
24, Sydney, Australia.
Mendicino M., Razzaq, L., & Heffernan, N., (2009). A
comparison of traditional homework to computer-supported
homework. Journal of Research on Technology in Education,
41(3), 331-358.
Merrill, S. K., Reiser, B. J., Merrill, D. C., & Landes, S. (1995). Tutoring: Guided learning by doing. Cognition and Instruction, 13(3), 315-372.
Shute, V. J. (2007). Focus on formative feedback. Princeton, NJ: Educational Testing Service.
Singh, R., Saleem, M., Pradhan, P., Heffernan, C., Heffernan, N., Razzaq, L., Dailey, M., O'Connor, C., & Mulcahy, C. (2011). Feedback during web-based homework: The role of hints. In Biswas et al. (Eds.), Proceedings of the Artificial Intelligence in Education Conference 2011. Springer, LNAI 6738, 328.
VanLehn, K., Lynch, C., Schulze, K., Shapiro, J., Shelby, R.,
Taylor, L., Treacy, D., Weinstein, A. & Wintersgill, M. (2005).
The Andes physics tutoring system: Five years of evaluations.
Artificial Intelligence in Education, (pp. 678-685) Amsterdam,
Netherlands: IOS Press.
VanLehn, K. (2011). The relative effectiveness of human
tutoring, intelligent tutoring systems, and other tutoring systems.
Educational Psychologist, 46(4), 197-221.
... Similarly, Winstone et al. (2017) also advised researchers to avoid using technical terms in the feedback given as it would hinder students from understanding the feedback. Since feedback can be given in a corrective nature (Shute 2008), it should be given in a timely manner to avoid reinforcement of mistakes (Brookhart 2011;Kehrer et al. 2013). ...
Time is a difficult topic for many elementary students. We evaluated the effect of the feedback incorporated in online Cognitive Diagnostic Assessment (CDA) on students’ achievement in this topic by conducting a sequential explanatory mixed method case study which involved 125 Grade Five students from the six classrooms in six Malaysian elementary schools. Simple feedback and detailed feedback were delivered to the participants in the control and experimental groups, respectively as intervention through the online CDA. The findings indicated that the detailed feedback was more effective than the simple feedback in enhancing the participants’ achievement in the topic of time. The findings of the follow-up interviews suggested that the comprehensiveness of the feedback, the usefulness of the feedback and the ability of feedback to engage students were the main factors which explained the effectiveness of the detailed feedback. This study sheds light on the advantages of incorporating detailed feedback in the online CDA. Thus, teachers may be encouraged to use it as a classroom assessment tool for providing instant detailed feedback in supporting students’ learning of the topic of Time.
... Feedback delays can cause formative evaluations to be useless. Some studies have addressed through experimentation that immediate feedback leads to better learning than a delayed one (Kehrer, Kelly & Heffernan, 2013). In this sense, MOOCs usually take place in fast paced contexts, and hence, deadlines times are usually tight. ...
Full-text available
Peer assessment activities might be one of the few personalized assessment alternatives to the implementation of auto-graded activities at scale in Massive Open Online Course (MOOC) environments. However, teacher's motivation to implement peer assessment activities in their courses might go beyond the most straightforward goal (i.e., assessment), as peer assessment activities also have other side benefits, such as showing evidence and enhancing the critical thinking, comprehension or writing capabilities of students. However, one of the main drawbacks of implementing peer review activities, especially when the scoring is meant to be used as part of the summative assessment, is that it adds a high degree of uncertainty to the grades. Motivated by this issue, this paper analyses the reliability of all the peer assessment activities performed as part of the MOOC platform of the Spanish University for Distance Education (UNED) UNED-COMA. The following study has analyzed 63 peer assessment activities from the different courses in the platform, and includes a total of 27,745 validated tasks and 93,334 peer reviews. Based on the Krippendorff's alpha statistic, which measures the agreement reached between the reviewers, the results obtained clearly point out the low reliability, and therefore, the low validity of this dataset of peer reviews. We did not find that factors such as the topic of the course, number of raters or number of criteria to be evaluated had a significant effect on reliability. We compare our results with other studies, discuss about the potential implications of this low reliability for summative assessment, and provide some recommendations to maximize the benefit of implementing peer activities in online courses.
... In the navigation condition, in addition to the map of the 3D environment and the location of the target, participants also got an immediate feedback about their position on the model which is missing for the map on paper. Consistent with the results of the current study, previous studies show that immediate feedback improves learning in education (Kehrer, Kelly, & Heffernan, 2013) and medical education (Garner, Gusberg, & Kim, 2014). In this respect, this additional information (i.e., the current location information) in the navigation display may be another reason for the performance improvement of the participants in the navigation condition. ...
Navigation control skills of surgeons become very critical for surgical procedures. Strategies improving these skills are important for developing higher-quality surgical training programs. In this study, the underlying reasons of the navigation control effect on performance in a virtual reality-based navigation environment are evaluated. The participants’ performance is measured in conditions: navigation control display and paper-map display. Performance measures were collected from 45 beginners and experienced residents. The results suggest that navigation display significantly improved performance of the participants. Also, navigation was more beneficial for beginners than experienced participants. The underlying reason of the better performance in the navigation condition was due to lower number of looks to the map, which causes attention shifts between information sources. Accordingly, specific training scenarios and user interfaces can be developed to improve the navigation skills of the beginners considering some strategies to lower their number of references to the information sources.
... It is also possible to benchmark these findings against the results of similar studies, which have a mean effect size of 0.43 (Lipsey, et al., 2012), showing the clear strength of providing immediate correctness feedback as an intervention. Kehrer, Kelly & Heffernan (2013) replicated the positive effects of immediate correctness feedback observed in Kelly, Heffernan, Heffernan, et al.'s original work (2013). Similar hypotheses examining the efficacy of feedback within ASSISTments have led to numerous publications over the past decade. ...
Full-text available
Large-scale randomized controlled experiments conducted in authentic learning environments are commonly high stakes, carrying extensive costs and requiring lengthy commitments for all-or-nothing results amidst many potential obstacles. Educational technologies harbor an untapped potential to provide researchers with access to extensive and diverse subject pools of students interacting with educational materials in authentic ways. These systems log extensive data on student performance that can be used to identify and leverage best practices in education and guide systemic policy change. Tomorrow's educational technologies should be built upon rigorous standards set forth by the research revolution budding today.
... Experimental comparisons can therefore be used within the platform to evaluate the relative value of crowdsourced alternatives, just as they are used to adaptively improve and personalize other components of educational technology (Williams et al. 2014). The promise of this approach is reinforced by numerous studies within ASSISTments that have already identified large positive effects on student learning, by varying factors like the type of feedback provided on homework (Mendicino et al. 2009;Kelly et al. 2013;Kehrer et al. 2013). A series of similar experiments currently serve as a proof of concept for various iterations of teachersourcing and learnersourcing elaborate feedback. ...
Full-text available
Due to substantial scientific and practical progress, learning technologies can effectively adapt to the characteristics and needs of students. This article considers how learning technologies can adapt over time by crowdsourcing contributions from teachers and students – explanations, feedback, and other pedagogical interactions. Considering the context of ASSISTments, an online learning platform, we explain how interactive mathematics exercises can provide the workflow necessary for eliciting feedback contributions and evaluating those contributions, by simply tapping into the everyday system usage of teachers and students. We discuss a series of randomized controlled experiments that are currently running within ASSISTments, with the goal of establishing proof of concept that students and teachers can serve as valuable resources for the perpetual improvement of adaptive learning technologies. We also consider how teachers and students can be motivated to provide such contributions, and discuss the plans surrounding PeerASSIST, an infrastructure that will help ASSISTments to harness the power of the crowd. Algorithms from machine learning (i.e., multi-armed bandits) will ideally provide a mechanism for managerial control, allowing for the automatic evaluation of contributions and the personalized provision of the highest quality content. In many ways, the next 25 years of adaptive learning technologies will be driven by the crowd, and this article serves as the road map that ASSISTments has chosen to follow.
... About a dozen universities now are using ASSISTments to run studies of some sort. The tool has been used to do randomized controlled trials reported in over 18 peerreviewed publications (Broderick et al. 2012;Heffernan et al. 2012a, b;Kehrer et al. 2013;Kelly et al. 2013a;Kelly et al. 2013b;Kim et al. 2009;Mendicino et al. 2009;Ostrow and Heffernan 2014a, b;Pardos et al. 2011;Razzaq et al. 2005Razzaq et al. , 2009Razzaq et al. 2007;Razzaq et al. 2008;Razzaq and Heffernan 2006;2009;Sao Pedro et al. 2009;Shrestha et al. 2009;Singh et al. 2011;Walonoski and Heffernan 2006). Those studies were generally about comparing different ways of giving feedback to students, and measuring learning on a posttest. ...
Full-text available
The ASSISTments project is an ecosystem of a few hundred teachers, a platform, and researchers working together. Development professionals help train teachers and recruit them to participate in studies. The platform and these teachers help researchers (sometimes explicitly and sometimes implicitly) simply by using content the teacher selects. The platform, hosted by Worcester Polytechnic Institute, allows teachers to write individual ASSISTments (composed of questions with answers and associated hints, solutions, web-based videos, etc.) or to use pre-built ASSISTments, bundle them together in a problem set, and assign these to students. The system gives immediate feedback to students while they are working and provides student-level data to teachers on any assignment. The word "ASSISTments" blends tutoring "assistance" with "assessment" reporting to teachers and students. While originally focused on mathematics, the platform now has content from many other subjects (e.g., science, English, statistics). Due to the large library of mathematics content, however, it is mostly used by math teachers. Over 50,000 students used ASSISTments last school year (2013–14), and this number has been doubling each year for the last 8 years. The platform allows any user, mostly researchers, to create randomized controlled trials in the content, which has helped us use the tool in over 18 published and an equal number of unpublished studies. The data collected by the system has also been used in a few dozen peer-reviewed data mining publications. This paper will not seek to review these publications; instead we will share why ASSISTments has been successful and what lessons were learned along the way. The first lesson learned was to build a platform for learning sciences, not a product focused on a math topic. That is, ASSISTments is a tool, not a curriculum.
A second lesson learned is expressed by the mantra "Put the teacher in charge, not the computer." This second lesson is about building a flexible system that allows teachers to use the tool in concert with the classroom routine. Once teachers are using the tool, they are more likely to want to participate in research studies. These lessons were born from the design decisions about what the platform supports and does not support. In conclusion, goals for the future will be presented.
... 157–159 Available research suggests that immediate and contextualized feedback is superior to traditional approaches for promoting behavior change 160 as well as improving learning. 161,162 These technologies also enhance the possibility of scaling surveillance, prevention, and intervention efforts in ways that have been unthinkable with conventional face-to-face programs. Further, these devices can capitalize on small but frequent intervention doses at timing that is optimized for the individual user. ...
The 2013 Pennington Biomedical Research Center's Scientific Symposium focused on the treatment and management of pediatric obesity and was designed to (i) review recent scientific advances in the prevention, clinical treatment, and management of pediatric obesity, (ii) integrate the latest published and unpublished findings, and (iii) explore how these advances can be integrated into clinical and public health approaches. The symposium provided an overview of important new advances in the field, which led to several recommendations for incorporating the scientific evidence into practice. The science presented covered a range of topics related to pediatric obesity, including the role of genetic differences, epigenetic events influenced by in utero development, pre-pregnancy maternal obesity status, and maternal nutrition and maternal weight gain on developmental programming of adiposity in offspring. Finally, the relative merits of various behavioral approaches targeted at pediatric obesity were covered, together with the specific roles of pharmacotherapy and bariatric surgery in pediatric populations. In summary, pediatric obesity is a very challenging problem that is unprecedented in evolutionary terms; one which has the capacity to negate many of the health benefits that have contributed to the increased longevity observed in the developed world. International Journal of Obesity accepted article preview online, 25 March 2014; doi:10.1038/ijo.2014.49.
Clinicians frequently complain that many graduate medical students do not know sufficient anatomy to safely and effectively assess and treat patients. Although the downgrading of anatomy relative to newer basic sciences is often blamed, there is evidence that students rapidly forget anatomy. However, there are a number of ways instructors can foster long-term retention of anatomy, the most powerful involving intertwining clinical and anatomical information and assessing in-depth processing. Assisting this process is 'triaging' the curriculum so it contains only clinically engaged anatomy. Students are far more likely to remember information which they consider to be relevant to their future vocation. Therefore, teaching only anatomy which is likely to be useful in a clinical context tends to improve long-term retention of anatomy by medical students. Other helpful techniques include incorporating surface and radiological anatomy in a vertically integrated curriculum, reciprocal peer teaching, and employing clinically qualified instructors.
Technical Report
Distilling knowledge on the theme of personalized learning with ICT. This article was written as part of the pilot 'Destilleren', commissioned by NRO. 28 September 2018. Marian Habermehl (Oberon), Ditte Lockhorst (Oberon), Wilfried Admiraal (ICLON, Universiteit Leiden), Liesbeth Kester (Educatie, Universiteit Utrecht)
Conference Paper
Intelligent tutoring systems have been developed to help students learn independently. However, students who are poor self-regulated learners often struggle to use these systems because they lack the skills necessary to learn independently. The field of psychology has extensively studied self-regulated learning and can provide strategies to improve learning; however, few of these strategies involve the use of technology. The present proposal reviews three elements of self-regulated learning (motivational beliefs, help-seeking behavior, and meta-cognitive self-monitoring) that are essential to intelligent tutoring systems. Future research is suggested that addresses each element in order to develop self-regulated learning strategies in students while they are engaged in learning mathematics within an intelligent tutoring system.
This study compared learning for fifth grade students in two math homework conditions. The paper-and-pencil condition represented traditional homework, with review of problems in class the following day. The Web-based homework condition provided immediate feedback in the form of hints on demand and step-by-step scaffolding. We analyzed the results for students who completed both the paper-and-pencil and the Web-based conditions. In this group of 28 students, students learned significantly more when given computer feedback than when doing traditional paper-and-pencil homework, with an effect size of 0.61. The implications of this study are that, given the large effect size, it may be worth the cost and effort to give Web-based homework when students have access to the needed equipment, such as in schools that have implemented one-to-one computing programs.
Conference Paper
Prior work has shown that computer-supported homework can lead to better results than traditional paper-and-pencil homework. This study about learning from homework compared immediate feedback with tutoring versus a control condition where students got feedback the next day in math class. After analyzing eighth grade students who participated in both conditions, it was found that they gained significantly more (effect size 0.40) with computer-supported homework. This result has practical significance as it suggests an effective improvement over the widely used paper-and-pencil homework. The main result is followed by a second set of studies to better understand it: is it due to the timeliness of feedback or quality tutoring? Keywords: evaluation of CAL systems, intelligent tutoring systems, interactive learning environments, secondary education, teaching/learning strategies
This paper reviews the corpus of research on feedback, with a particular focus on formative feedback—defined as information communicated to the learner that is intended to modify the learner's thinking or behavior for the purpose of improving learning. According to researchers in the area, formative feedback should be multidimensional, nonevaluative, supportive, timely, specific, credible, infrequent, and genuine (e.g., Brophy, 1981; Schwartz & White, 2000). Formative feedback is usually presented as information to a learner in response to some action on the learner's part. It comes in a variety of types (e.g., verification of response accuracy, explanation of the correct answer, hints, worked examples) and can be administered at various times during the learning process (e.g., immediately following an answer, after some period of time has elapsed). Finally, there are a number of variables that have been shown to interact with formative feedback's success at promoting learning (e.g., individual characteristics of the learner and aspects of the task). All of these issues will be discussed in this paper. This review concludes with a set of guidelines for generating formative feedback.
In this article, research conducted in the United States since 1987 on the effects of homework is summarized. Studies are grouped into four research designs. The authors found that all studies, regardless of type, had design flaws. However, both within and across design types, there was generally consistent evidence for a positive influence of homework on achievement. Studies that reported simple homework–achievement correlations revealed evidence that a stronger correlation existed (a) in Grades 7–12 than in K–6 and (b) when students rather than parents reported time on homework. No strong evidence was found for an association between the homework–achievement link and the outcome measure (grades as opposed to standardized tests) or the subject matter (reading as opposed to math). On the basis of these results and others, the authors suggest future research.
Individualized instruction significantly improves students' pedagogical and motivational outcomes. In this article, we seek to characterize tutorial behaviors that could lead to these benefits and to consider why these actions should be pedagogically useful. This experiment examined university students learning LISP programming with the assistance of a tutor. Tutoring sessions were audiotaped, allowing us to analyze every verbal utterance during the sessions and thereby to identify the conversational events that lead to pedagogical success. This discourse analysis suggests that tutors are successful because they take a very active role in leading the problem solving by offering confirmatory feedback and additional guidance while students are on profitable paths and error feedback after mistakes. However, tutors carefully structure their feedback to allow students to perform as much of the work as possible while the tutor ensures that problem solving stays on track. These results suggest the types of strategies tutors employ to facilitate guided learning by doing.
This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. "No tutoring" refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by the granularity of their user interface interaction into answer-based, step-based, and substep-based tutoring systems. Most intelligent tutoring systems have step-based or substep-based granularities of interaction, whereas most other tutoring systems (often called CAI, CBT, or CAL systems) have answer-based user interfaces. It is widely believed that as the granularity of tutoring decreases, the effectiveness increases. In particular, when compared to no tutoring, the effect sizes of answer-based tutoring systems, intelligent tutoring systems, and adult human tutors are believed to be d = 0.3, 1.0, and 2.0 respectively. This review did not confirm these beliefs. Instead, it found that the effect size of human tutoring was much lower: d = 0.79. Moreover, the effect size of intelligent tutoring systems was 0.76, so they are nearly as effective as human tutoring.
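The effect sizes quoted across these studies (0.40, 0.61, 0.76, 0.79) are standardized mean differences (Cohen's d): the gap between the treatment and control group means divided by a pooled standard deviation. As a reference point, here is a minimal sketch of that computation; the function name and the sample scores are illustrative, not taken from any of the cited studies.

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: standardized mean difference between two groups of scores."""
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    n1, n2 = len(treatment), len(control)
    # Sample variances (n - 1 denominator), combined into a pooled SD
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd
```

For example, `cohens_d([4, 5, 6], [2, 3, 4])` returns 2.0: both groups have a standard deviation of 1, and the means differ by 2 points. An effect size of 0.76 thus means the average tutored student scored about three quarters of a standard deviation above the average untutored student.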
Homework gives students an opportunity to practice important college-level physics skills. A switch to Web-based homework alters the nature of feedback received, potentially changing the pedagogical benefit. Calculus- and algebra-based introductory physics students enrolled in large paired lecture sections at a public university completed homework of standard end-of-the-chapter exercises using either the Web or paper. Comparison of their performances on regular exams, conceptual exams, quizzes, laboratory, and homework showed no significant differences between groups; other measures were found to be strong predictors of performance. This indicates that the change in medium itself has limited effect on student learning. Ways in which Web-based homework could enable exercises with greater pedagogical value are discussed. © 2003 Wiley Periodicals, Inc. J Res Sci Teach 40: 1050–1071, 2003
HyperCard was used to develop a simplified tutoring system whose principles were based on a learning theory, and a genetics tutoring system was evaluated experimentally. Learning was studied by examining immediate versus delayed feedback after an error was made. Such tutoring systems aid in psychological studies of learning, because experimental variables can be easily manipulated. HyperCard provides a good vehicle for tutoring system development, since it requires no extensive programming skills.