Review of Educational Research
March 2008, Vol. 78, No. 1, pp. 153–189
DOI: 10.3102/0034654307313795
© 2008 AERA. http://rer.aera.net
Focus on Formative Feedback
Valerie J. Shute
Florida State University
This article reviews the corpus of research on feedback, with a focus on for-
mative feedback—defined as information communicated to the learner that is
intended to modify his or her thinking or behavior to improve learning.
According to researchers, formative feedback should be nonevaluative, sup-
portive, timely, and specific. Formative feedback is usually presented as infor-
mation to a learner in response to some action on the learner’s part. It comes
in a variety of types (e.g., verification of response accuracy, explanation of the
correct answer, hints, worked examples) and can be administered at various
times during the learning process (e.g., immediately following an answer, after
some time has elapsed). Finally, several variables have been shown to inter-
act with formative feedback’s success at promoting learning (e.g., individual
characteristics of the learner and aspects of the task). All of these issues are
discussed. This review concludes with guidelines for generating formative
feedback.
KEYWORDS: formative feedback, learning, performance.
It is not the horse that draws the cart, but the oats.
—Russian proverb
Feedback used in educational contexts is generally regarded as crucial to improv-
ing knowledge and skill acquisition (e.g., Azevedo & Bernard, 1995; Bangert-
Drowns, Kulik, Kulik, & Morgan, 1991; Corbett & Anderson, 1989; Epstein et al.,
2002; Moreno, 2004; Pridemore & Klein, 1995). In addition to its influence on
achievement, feedback is also depicted as a significant factor in motivating learn-
ing (e.g., Lepper & Chabay, 1985; Narciss & Huth, 2004). However, for learning,
the story on feedback is not quite so rosy or simple.
According to Cohen (1985), feedback "is one of the more instructionally powerful and least understood features in instructional design" (p. 33). In support of
this claim, consider the hundreds of research studies published on the topic of feed-
back and its relation to learning and performance during the past 50 years (for
excellent historical reviews, see Bangert-Drowns et al., 1991; Kluger & DeNisi,
1996; Kulhavy & Stock, 1989; Kulhavy & Wager, 1993; Mory, 2004; Narciss &
Huth, 2004). Within this large body of feedback research, there are many conflict-
ing findings and no consistent pattern of results.
Definition of Formative Feedback
Formative feedback is defined in this review as information communicated to
the learner that is intended to modify his or her thinking or behavior for the pur-
pose of improving learning. And although the teacher may also receive student-
related information and use it as the basis for altering instruction, I focus on the
student (or more generally, the “learner”) as the primary recipient of formative
feedback herein.
The premise underlying most of the research conducted in this area is that good
feedback can significantly improve learning processes and outcomes, if delivered
correctly. Those last three words—“if delivered correctly”—constitute the crux of
this review.
Goals and Focus
The dual aims of this article are to (a) present findings from an extensive liter-
ature review of feedback to gain a better understanding of the features, functions,
interactions, and links to learning and (b) apply the findings from the literature
review to create a set of guidelines relating to formative feedback. The overarch-
ing goal is to identify the features of formative feedback that are the most effective
and efficient in promoting learning, and to determine under what conditions that
learning support holds. This is not an easy task. The vast literature reveals dozens
of feedback types that have been subjected to experimental scrutiny—for exam-
ple, accuracy of the solution, topic contingent, response contingent, attribute iso-
lation, worked examples, hints, and partial solutions. However, different studies
report disparate findings regarding the same feedback variable. In addition, forma-
tive feedback variables have been shown to interact with other variables, such as
student achievement level, task level, and prior knowledge.
This review focuses on task-level feedback as opposed to general summary
feedback. Task-level feedback typically provides more specific and timely (often
real-time) information to the student about a particular response to a problem or
task compared to summary feedback and may additionally take into account the
student’s current understanding and ability level. For instance, a struggling student
may require greater support and structure from a formative feedback message com-
pared to a proficient student. Summary information is useful for teachers to mod-
ify instruction for the whole class and for students to see how they are generally
progressing. The intended audience for this article includes: educators (e.g., teach-
ers and administrators) seeking to improve the quality of student learning in the
classroom using well-crafted feedback, cognitive psychologists and instructional
system designers interested in researching and developing more effective learning
environments, graduate students in search of meaningful research to pursue, and
others who are interested in harnessing the power of feedback to support teaching
and learning—in the classroom, workplace, or even the home.
Some of the major questions addressed in this review include: What are the
most powerful and efficient types of formative feedback, and under what condi-
tions do these different types of feedback help a learner revise a skill or improve
his or her understanding? What are the mechanisms by which feedback facilitates
the transformation of rudimentary skills into the competence of a more expert
state? Answers to these questions can facilitate the design and development of
teacher-delivered or automated feedback to support learning.
This article begins with a summary of the methods used to accomplish the lit-
erature review, followed by an extensive review of formative feedback research,
which makes up the bulk of the article. Afterward, I showcase four important feed-
back articles, each associated with a theoretically and empirically based model of
formative feedback. I conclude with specific recommendations for using forma-
tive feedback that are supported by the current literature review and discuss future
research in the area.
Method
Procedure
Seminal articles in the feedback literature were identified (i.e., from sites that
provide indices of importance such as CiteSeer), and then collected. The bibliog-
raphy compiled from this initial set of research studies spawned a new collection-
review cycle, garnering even more articles, and continuing iteratively throughout
the review process.
The following online databases were employed in this search–collection effort:
• ERIC, a database on educational reports, evaluations, and research from the Educational Resources Information Center, consisting of the Resources in Education Index and the Current Index to Journals in Education.
• PsycINFO, from the American Psychological Association, which carries citations and summaries of scholarly journal articles, book chapters, books, and dissertations in psychology and related disciplines.
• PsycARTICLES, a source of full-text, peer-reviewed scholarly and scientific articles in psychology. The database covers general psychology and specialized, basic, applied, clinical, and theoretical research. It contains articles from 56 journals (45 published by the American Psychological Association and 11 from allied organizations).
• Academic Search Premier, a multidisciplinary full-text database offering information in many areas of academic study, including computer science, engineering, physics, language and linguistics, and so forth.
• MasterFILE Premier, designed specifically for public libraries and covering a broad range of disciplines, including general reference, business, education, health, general science, and multicultural issues.
In addition to these databases, online catalogs were used at the libraries of the
Educational Testing Service and University of Pennsylvania to access their elec-
tronic collections of journals and research studies. Google Scholar was also
employed—a Web site providing peer-reviewed papers, theses, books, abstracts,
and articles from academic publishers, professional societies, preprint repositories,
universities, and other scholarly organizations—to search for and acquire specific
references.
Inclusion Criteria
The focus of the search was to access full-text documents using various search terms or keywords such as feedback, formative feedback, formative assessment, instruction, learning, computer-assisted/based, tutor, and performance.
The search was not limited to a particular date range, although slight preference
was given to more recent research. In all, approximately 170–180 articles, disser-
tations, abstracts, books, and conference proceedings were collected. From this
larger set, a total of more than 100 documents met the criteria for inclusion in the
literature review. The inclusion criteria consisted of topical relevance, use of exper-
imental design, and meta-analytic procedures. The majority of the documents were
journal articles (103), followed by books and book chapters (24), conference pro-
ceedings (10), and “other” (e.g., research reports; 4).
Literature Review
There have been hundreds of articles written about feedback and its role in
knowledge and skill acquisition. Many of these articles describe the results from
experimental tests examining different features of feedback, and several represent
important historical reviews (a few going back to the early 1900s, such as Kluger
& DeNisi, 1996; Kulhavy & Stock, 1989; Mory, 2004). Despite the plethora of
research on the topic, the specific mechanisms relating feedback to learning are
still mostly murky, with very few (if any) general conclusions. Researchers who
have tackled the tough task of performing meta-analyses on the feedback data
use descriptors such as “inconsistent,” “contradictory,” and “highly variable” to
describe the body of feedback findings (Azevedo & Bernard, 1995; Kluger &
DeNisi, 1996). Ten years later those descriptors still apply.
Feedback has been widely cited as an important facilitator of learning and per-
formance (Bandura, 1991; Bandura & Cervone, 1983; Fedor, 1991; Ilgen, Fisher,
& Taylor, 1979), but quite a few studies have reported that feedback has either no
effect or debilitating effects on learning (for examples of nonfacilitative effects of
feedback on learning, see Bangert-Drowns et al., 1991; Kluger & DeNisi, 1996;
Mory, 2004). In fact, about one third of the total studies reviewed in two landmark
meta-analyses (i.e., Bangert-Drowns et al., 1991; Kluger & DeNisi, 1996) demon-
strate negative effects of feedback on learning. For instance, feedback that is con-
strued as critical or controlling (Baron, 1993) often thwarts efforts to improve
performance (Fedor, Davis, Maslyn, & Mathieson, 2001). Other features of feed-
back that tend to impede learning include: providing grades or overall scores indi-
cating the student’s standing relative to peers, and coupling such normative
feedback with low levels of specificity (i.e., vagueness) (Butler, 1987; Kluger &
DeNisi, 1998; McColskey & Leary, 1985; Wiliam, 2007; Williams, 1997). In addi-
tion, when a student is actively engaged in problem solving and interrupted by
feedback from an external source, this too has been shown to inhibit learning
(Corno & Snow, 1986). In line with the definition in this review, feedback that has
negative effects on learning is not formative.
Feedback Purposes
The main aim of formative feedback is to increase student knowledge, skills,
and understanding in some content area or general skill (e.g., problem solving),
and there are multiple types of feedback that may be employed toward this end
(e.g., response specific, goal directed, immediately delivered). In addition to vari-
ous formats of feedback, there are different functions. According to Black and
Wiliam (1998), there are two main functions of feedback: directive and facilita-
tive. Directive feedback is that which tells the student what needs to be fixed or
revised. Such feedback tends to be more specific compared to facilitative feedback,
which provides comments and suggestions to help guide students in their own revi-
sion and conceptualization. The next section describes some of the ways feedback
may exert influences on student learning.
Cognitive Mechanisms and Formative Feedback
There are several cognitive mechanisms by which formative feedback may be
used by a learner. First, it can signal a gap between a current level of performance
and some desired level of performance or goal. Resolving this gap can motivate
higher levels of effort (Locke & Latham, 1990; Song & Keller, 2001). That is, for-
mative feedback can reduce uncertainty about how well (or poorly) the student
is performing on a task (Ashford, 1986; Ashford, Blatt, & VandeWalle, 2003).
Uncertainty is an aversive state that motivates strategies aimed at reducing or man-
aging it (Bordia, Hobman, Jones, Gallois, & Callan, 2004). Because uncertainty is
often unpleasant and may distract attention away from task performance (Kanfer
& Ackerman, 1989), reducing uncertainty may lead to higher motivation and more
efficient task strategies.
Second, formative feedback can effectively reduce the cognitive load of a
learner, especially novice or struggling students (e.g., Paas, Renkl, & Sweller,
2003; Sweller, Van Merriënboer, & Paas, 1998). These students can become cog-
nitively overwhelmed during learning due to high performance demands and thus
may benefit from supportive feedback designed to decrease the cognitive load. In
fact, Sweller et al. (1998) provided support for this claim by showing how the pre-
sentation of worked examples reduces the cognitive load for low-ability students
faced with a complex problem-solving task. Moreno (2004) provided additional
support using explanatory feedback to support novice learners.
Finally, feedback can provide information that may be useful for correcting
inappropriate task strategies, procedural errors, or misconceptions (e.g., Ilgen
et al., 1979; Mason & Bruning, 2001; Mory, 2004; Narciss & Huth, 2004). The
corrective function appears to be especially powerful for feedback that is more specific (Baron, 1988; Goldstein, Emanuel, & Howell, 1968), as described next.
Feedback Specificity
Feedback specificity is defined as the level of information presented in feedback
messages (Goodman, Wood, & Hendrickx, 2004). In other words, specific (or
elaborated) feedback provides information about particular responses or behaviors
beyond their accuracy and tends to be more directive than facilitative.
Several researchers have reported that feedback is significantly more effective
when it provides details of how to improve the answer rather than just indicating
whether the student’s work is correct or not (e.g., Bangert-Drowns et al., 1991;
Pridemore & Klein, 1995). Feedback lacking in specificity may cause students to
view it as useless, frustrating, or both (Williams, 1997). It can also lead to uncer-
tainty about how to respond to the feedback (Fedor, 1991) and may require greater
information-processing activity on the part of the learner to understand the intended
message (Bangert-Drowns et al., 1991). Uncertainty and cognitive load can lead
to lower levels of learning (Kluger & DeNisi, 1996; Sweller et al., 1998) or even
reduced motivation to respond to the feedback (Ashford, 1986; Corno & Snow,
1986).
In an experiment testing feedback specificity and its relationship to learning,
Phye and Sanders (1994) tested two types of feedback (i.e., general advice vs. spe-
cific feedback, the latter providing the learner with the correct answer). Students
were assigned to one of the two learning conditions and received either general
advice or specific feedback as part of a verbal analogy problem-solving task. In
line with the research cited previously, they found that the more specific feedback
was clearly superior to general advice on a retention task. However, they found no
significant differences between feedback types on a transfer task. They caution
against assuming that procedures that enhance performance during acquisition
(e.g., providing specific feedback) will necessarily enhance transfer to new tasks.
In summary, providing feedback that is specific and clear, for conceptual and
procedural learning tasks, is a reasonable, general guideline. However, this may
depend on other variables, such as learner characteristics (e.g., ability level, moti-
vation) and different learning outcomes (e.g., retention vs. transfer tasks). In addi-
tion, the specificity dimension of formative feedback itself is not very “specific”
as described in the literature. More focused feedback features are now reviewed.
Features of Formative Feedback
In an excellent historical review on feedback, Kulhavy and Stock (1989) reported
that effective feedback provides the learner with two types of information: verifi-
cation and elaboration. Verification is defined as the simple judgment of whether
an answer is correct, and elaboration is the informational aspect of the message, pro-
viding relevant cues to guide the learner toward a correct answer. Researchers
appear to be converging toward the view that effective feedback should include ele-
ments of both verification and elaboration (e.g., Bangert-Drowns et al., 1991;
Mason & Bruning, 2001). These features are now described in more detail.
Verification
Confirming whether an answer is correct or incorrect can be accomplished in
several ways. The most common way involves simply stating “correct” or “incor-
rect.” More informative options exist—some of which are explicit and some more
implicit. Among explicit verifications, highlighting or otherwise marking a
response to indicate its correctness (e.g., with a checkmark) can convey the infor-
mation. Implicit verification can occur when, for instance, a student’s response
yields expected or unexpected results (e.g., within a simulation). This review
focuses more on explicit than implicit feedback as it is more readily subject to
experimental controls.
Elaboration
Feedback elaboration has even more variations than verification. For instance,
elaboration can (a) address the topic, (b) address the response, (c) discuss the par-
ticular error(s), (d) provide worked examples, or (e) give gentle guidance. The first
three types of elaborated feedback are more specific and directive, and the last two
types are more general and facilitative.
Elaborated feedback usually addresses the correct answer, may explain why the
selected response is wrong, and may indicate what the correct answer should be.
There seems to be growing consensus that one type of elaboration, response-specific feedback, enhances student achievement, particularly learning efficiency, more than other types of feedback, such as simple verification or "answer until correct" (e.g., Corbett & Anderson, 2001; Gilman, 1969; Mory,
2004; Shute, Hansen, & Almond, 2007). However, as is discussed in a later sec-
tion, feedback specificity has been shown to affect performance by way of an inter-
action with learners’ goal orientations.
Feedback Complexity and Length
Although more specific feedback may be generally better than less specific
feedback (at least under certain conditions), a related dimension to consider is
length or complexity of the information. For example, if feedback is too long or
too complicated, many learners will simply not pay attention to it, rendering it use-
less. Lengthy feedback can also diffuse or dilute the message. Feedback complex-
ity thus refers to how much and what information should be included in the
feedback messages.
Many research articles have addressed feedback complexity, but only a few
have attempted to array the major variables along a dimension of complexity
(but see Dempsey, Driscoll, & Swindell, 1993; Mason & Bruning, 2001;
Narciss & Huth, 2004). I have aggregated information from their respective lists
into a single compilation (see Table 1), arrayed generally from least to most com-
plex information presented. Terms appearing in the “feedback type” column are
used throughout the remainder of this article.
If formative feedback is to serve a corrective function, even in its simplest form
it should (a) verify whether the student’s answer is right or wrong and (b) provide
information to the learner about the correct response (either directive or facilita-
tive). Studies that have examined the type and amount of information in feedback,
however, have shown inconsistent results (see Kulhavy, 1977, and Mory, 2004, for
summaries of the range of results). Specific findings on the feedback complexity
issue are described next.
No Effect of Feedback Complexity
Schimmel (1983) performed a meta-analysis on feedback as used in computer-
based instruction (CBI) and programmed (scripted) instruction. He analyzed the
results from 15 experimental studies and found that the amount of information (i.e.,
feedback complexity) was not significantly related to feedback effects. He also
found that feedback effects were significantly larger in computer-based than in
programmed instruction.
Sleeman, Kelly, Martinak, Ward, and Moore (1989) examined conflicting find-
ings in the literature concerning the diagnosis and remediation of students’ errors.
They noted that few studies have systematically compared the effects of different
styles of error-based feedback, and of those that have, the results are inconclusive.
For instance, Swan (1983) found that a conflict approach (pointing out errors
made by students and demonstrating their consequences, classified in Table 1 as
“bugs/misconceptions”) was more effective than reteaching (classified in Table 1
as “topic contingent” feedback), but Bunderson and Olsen (1983) found no differ-
ence between these two feedback approaches.
To untangle these conflicting findings, Sleeman et al. (1989) conducted three studies that explicitly compared error-specific or model-based remediation (MBR; classified in Table 1 as "bugs/misconceptions") with simply reteaching the algebra content (topic-contingent feedback).
TABLE 1
Feedback types arrayed loosely by complexity

No feedback: Refers to conditions where the learner is presented a question and is required to respond, but there is no indication as to the correctness of the learner's response.
Verification: Also called "knowledge of results" or "knowledge of outcome." It informs the learners about the correctness of their responses (e.g., right–wrong, or overall percentage correct).
Correct response: Also known as "knowledge of correct response." Informs the learner of the correct answer to a specific problem, with no additional information.
Try again: Also known as "repeat-until-correct" feedback. It informs the learner about an incorrect response and allows the learner one or more attempts to answer it.
Error flagging: Also known as "location of mistakes." Error flagging highlights errors in a solution, without giving the correct answer.
Elaborated: General term relating to the provision of an explanation about why a specific response was correct or not; may allow the learner to review part of the instruction. It may or may not present the correct answer (see below for six types of elaborated feedback).
Attribute isolation: Elaborated feedback that presents information addressing central attributes of the target concept or skill being studied.
Topic contingent: Elaborated feedback providing the learner with information relating to the target topic currently being studied. May entail simply reteaching the material.
Response contingent: Elaborated feedback that focuses on the learner's specific response. It may describe why the incorrect answer is wrong and why the correct answer is correct. This does not use formal error analysis.
Hints/cues/prompts: Elaborated feedback guiding the learner in the right direction (e.g., a strategic hint on what to do next, a worked example, or a demonstration). Avoids explicitly presenting the correct answer.
Bugs/misconceptions: Elaborated feedback requiring error analysis and diagnosis. It provides information about the learner's specific errors or misconceptions (e.g., what is wrong and why).
Informative tutoring: The most elaborated feedback (from Narciss & Huth, 2004); this presents verification feedback, error flagging, and strategic hints on how to proceed. The correct answer is not usually provided.
MBR bases its feedback on a model of student errors, whereas reteaching simply shows students a correct procedure and answer without addressing specific errors. Their results showed that both MBR (the more complex approach) and reteaching (the simpler approach) were more effective than no tutoring; however, MBR was not more effective than reteaching. Sleeman et al. discussed these results in terms of the stability of errors and their relevance to educational practice and to intelligent tutoring systems. Although the studies were carried out with human tutors, the results suggest that, for feedback in the algebra domain when taught procedurally, feedback based on simply reteaching content is as effective as feedback based on more expensive error analyses.
Negative Effects of Feedback Complexity
Kulhavy, White, Topp, Chan, and Adams (1985) similarly examined the feed-
back complexity issue. They tested a group of college undergraduates who read a
2,400-word passage, responded to 16 multiple-choice questions about it, and
received one of four types of feedback, increasing in complexity, following their
responses. Feedback complexity was systematically varied. The lowest level was
simply correct answer feedback, and the most complex feedback included a com-
bination of verification, correct answer, and an explanation about why the incorrect
answer was wrong with a pointer to the relevant part of the text passage where the
answer resided. The main finding was that complexity of feedback was inversely
related to both ability to correct errors and learning efficiency (i.e., the ratio of feed-
back study time to posttest score). Specifically, Kulhavy et al. showed that more
complex versions of feedback had a small effect on students’ ability to correct their
own errors, and the least complex feedback (i.e., correct answer) demonstrated
greater learner benefits in terms of efficiency and outcome than complex feedback.
In summary, the inconclusive findings on feedback complexity suggest that
there may be other mediating factors involved in the relationship between forma-
tive feedback and learning. For instance, instead of feedback complexity, a more
salient facet of feedback may be the nature and quality of the content, such as pro-
viding information about learning goals and how to attain them.
Goal-Directed Feedback and Motivation
Goal-directed feedback provides learners with information about their progress
toward a desired goal (or set of goals) rather than providing feedback on discrete
responses (i.e., responses to individual tasks). Research has shown that whether a learner remains motivated and engaged depends on a close match between the learner's goals and his or her expectations that these goals can be met (Fisher & Ford, 1998; Ford, Smith, Weissbein, Gully, & Salas, 1998). If goals are set so high that they
are unattainable, the learner will likely experience failure and become discouraged.
When goals are set so low that their attainment is certain, success loses its power
to promote further effort (Birney, Burdick, & Teevan, 1969).
According to Malone (1981), there are certain features that goals must have to
make them challenging for the learner. For example, goals must be personally
meaningful and easily generated, and the learner must receive performance feed-
back about whether the goals are being attained. Hoska (1993) classified goals as
being of two types: acquisition (i.e., to help the learner acquire something desirable)
and avoidance (i.e., to help the learner avoid something undesirable). Moreover,
acquisition and avoidance goals can be either external or internal.
Motivation has been shown to be an important mediating factor in learners’ per-
formance (Covington & Omelich, 1984), and feedback can be a powerful motiva-
tor when delivered in response to goal-driven efforts. Some researchers suggest
that the learner’s goal orientation should be considered when designing instruc-
tion, particularly when feedback can encourage or discourage a learner’s effort
(Dempsey et al., 1993). Goal orientation describes the manner in which people are
motivated to work toward different kinds of goals. The idea is that individuals hold
either a learning or a performance orientation toward tasks (e.g., Dweck, 1986). A
learning orientation is characterized by a desire to increase one’s competence by
developing new skills and mastering new situations with the belief that intelligence
is malleable. In contrast, performance orientation reflects a desire to demonstrate
one’s competence to others and to be positively evaluated by others, with the belief
that intelligence is innate (Farr, Hofmann, & Ringenbach, 1993).
Research has shown that the two types of goal orientation differentially influ-
ence how individuals respond to task difficulty and failure (Dweck & Leggett,
1988). That is, learning orientation is characterized by persistence in the face of
failure, the use of more complex learning strategies, and the pursuit of challenging
material and tasks. Performance orientation is characterized by a tendency to with-
draw from tasks (especially in the face of failure), less interest in difficult tasks,
and the tendency to seek less challenging material and tasks on which success is
likely. Consistent with these labels, research has generally shown that learning ori-
entation is associated with more positive outcomes and performance orientation is
related to either equivocal or negative outcomes (e.g., Button, Mathieu, & Zajac,
1996; Fisher & Ford, 1998; VandeWalle, Brown, Cron, & Slocum, 1999).
One way to influence a learner’s goal orientation (e.g., to shift from a focus on
performing to an emphasis on learning) is via formative feedback. Hoska (1993)
showed how goal-orientation feedback can modify a learner’s view of intelligence
by helping a learner see that (a) ability and skill can be developed through prac-
tice, (b) effort is critical to increasing this skill, and (c) mistakes are part of the skill-
acquisition process. Feedback can also serve as a cognitive support mechanism,
described next.
Formative Feedback as Scaffolding
Like training wheels, scaffolding enables learners to do more advanced activi-
ties and to engage in more advanced thinking and problem solving than they could
without such help. Eventually, high-level functions are gradually turned over to the
students as the teacher (or computer system) removes the scaffolding and fades
away (see Collins, Brown, & Newman, 1989; Graesser, McNamara, & VanLehn,
2005). For instance, Graesser et al. (2005) described a theoretically based approach
to facilitating explanation-centered learning via scaffolding, including (a) peda-
gogical agents that scaffold strategies, metacognition, and explanation construc-
tion; (b) computer coaches that facilitate answer generation to questions that
require explanations by using mixed-initiative dialogue; and (c) modeling and
coaching students in constructing self-explanations. Their systems (i.e., Point&Query,
AutoTutor, and iSTART) built with these components have shown promising
results in tests of learning gains and improved learning strategies.
In their book How People Learn, Bransford, Brown, and Cocking (2000)
describe how psychological theories and insights can be translated into actions and
practices. In relation to feedback, they suggest a goal-directed approach to learn-
ing using scaffolding (or scaffolded feedback) that (a) motivates the learner’s inter-
est related to the task, (b) simplifies the task to make it more manageable and
achievable, (c) provides some direction to help the learner focus on achieving the
goal, (d) clearly indicates the differences between the learner’s work and the stan-
dard or desired solution, (e) reduces frustration and risk, and (f) models and clearly
defines the expectations (goals) of the activity to be performed.
Conventional wisdom suggests that facilitative feedback (providing guidance
and cues, as illustrated in the research cited previously) would enhance learning
more than directive feedback (providing corrective information), yet this is not nec-
essarily the case. In fact, some research has shown that directive feedback may actu-
ally be more helpful than facilitative—particularly for learners who are just learning
a topic or content area (e.g., Knoblauch & Brannon, 1981; Moreno, 2004). Because
scaffolding relates to the explicit support of learners during the learning process, in
an educational setting, scaffolded feedback may include models, cues, prompts,
hints, partial solutions, and direct instruction (Hartman, 2002). Scaffolding is grad-
ually removed as students gain their cognitive footing; thus, directive feedback may
be most helpful during the early stages of learning. Facilitative feedback may be
more helpful later, and the question is: When? According to Vygotsky (1987), exter-
nal scaffolds can be removed when the learner develops more sophisticated cogni-
tive systems, where the system of knowledge itself becomes part of the scaffold for
new learning. The issue of feedback timing is now discussed in more detail.
Timing
It was my teacher’s genius, her quick sympathy, her loving tact which made
the first years of my education so beautiful. It was because she seized the right
moment to impart knowledge that made it so pleasant and acceptable to me.
—Helen Keller
Similar to the previously mentioned feedback variables (e.g., complexity and
specificity), there are also conflicting results in the literature relating to the timing
of feedback and the effects on learning outcome and efficiency. Researchers have
been examining the effects of immediate versus delayed feedback on learning for
decades (e.g., Clariana, 1999; Jurma & Froelich, 1984; Pound & Bailey, 1975;
Prather & Berry, 1973; Reddy, 1969). The feedback timing literature concerns whether feedback should be delivered immediately or after a delay. "Immediately" may
be defined as right after a student has responded to an item or problem or, in the
case of summative feedback, right after a quiz or test has been completed.
“Delayed” is usually defined relative to immediate, and such feedback may occur
minutes, hours, weeks, or longer after the completion of some task or test.
Regardless of the unit of time, the effects of the feedback timing variable are
mixed. Again, although there appears to be no consistent main effect of timing,
there are interactions involving the timing of feedback and learning. Some
researchers have argued for immediate feedback as a means to prevent errors being
encoded into memory, whereas others have argued that delayed feedback reduces
proactive interference, thus allowing the initial error to be forgotten and the cor-
rect information to be encoded with no interference (for more on this debate, see
Kulhavy & Anderson, 1972).
Support for Delayed Feedback
Researchers who support using delayed feedback generally adhere to what is
called the interference-perseveration hypothesis proposed by Kulhavy and
Anderson (1972). This asserts that initial errors do not compete with to-be-learned
correct responses if corrective information is delayed. This is because errors are
likely to be forgotten and thus cannot interfere with retention.
The superiority of delayed feedback, referred to as the delay-retention effect
(DRE), was supported in a series of experiments by Anderson and colleagues (e.g.,
Kulhavy & Anderson, 1972; Surber & Anderson, 1975), comparing the accuracy
of responses on a retention test with the accuracy of responses on an initial test.
Although many studies in the literature do not support the DRE (e.g., Kippel, 1974;
Newman, Williams, & Hiller, 1974; Phye & Baller, 1970), delayed feedback has
often been shown to be as effective as immediate feedback.
Schroth (1992) presented the results from an experiment that investigated the
effects of delayed feedback and type of verbal feedback on transfer using a concept-
formation task. The four conditions of delayed feedback were: 0 s, 10 s, 20 s,
and 30 s. The verbal feedback conditions were (a) correct–incorrect (verification
feedback), (b) correct–nothing (i.e., where “nothing” means that no feedback was
presented if the student solved an item incorrectly), and (c) nothing–incorrect (i.e.,
no feedback was presented if the student answered correctly). All participants were
tested 7 days after an initial learning trial. The finding relevant to this article is that
although delayed feedback slowed the rate of initial learning, it facilitated trans-
fer after the delay.
Support for Immediate Feedback
Supporters of immediate feedback theorize that the earlier corrective informa-
tion is provided, the more likely it is that efficient retention will result (Phye &
Andre, 1989). The superiority of immediate over delayed feedback has been
demonstrated for the acquisition of verbal materials, procedural skills, and some
motor skills (Anderson, Magill, & Sekiya, 2001; Brosvic & Cohen, 1988; Corbett
& Anderson, 1989, 2001; Dihoff, Brosvic, Epstein, & Cook, 2003).
Corbett and Anderson (2001) have been using immediate feedback successfully
in their programming and mathematics tutors for almost two decades (see
Anderson, Corbett, Koedinger, & Pelletier, 1995). For instance, they used their
ACT Programming Tutor to examine differential timing effects on students’ learn-
ing. The study involved four feedback conditions, the first three of which offered
the student different levels of control over error feedback and correction: (a) imme-
diate feedback and immediate error correction (i.e., the tutor intervened as soon as
students made errors and forced them to correct the error before moving on), (b)
immediate error flagging and student control of error correction, (c) feedback on
demand and student control of error correction, and (d) no-tutor condition and no
step-by-step problem-solving support (the control condition). The immediate feed-
back group with greatest tutor control of problem solving yielded the most efficient
learning (i.e., the first condition). These students completed the tutor problems
fastest, and their performance on criterion tests was equivalent to that of the other
groups (excluding the control group). Furthermore, questionnaires showed no sig-
nificant differences in terms of preference among the tutor conditions. This study
demonstrated that immediate error feedback helped with immediate learning.
Azevedo and Bernard (1995) conducted a meta-analysis on the literature con-
cerning the effects of feedback on learning from CBI. They noted that despite the
widespread acceptance of feedback in computerized instruction, empirical support
for particular types of feedback information has been inconsistent and contradic-
tory. Effect size calculations were performed on 22 CBI studies comparing feed-
back versus no feedback relating to immediate outcomes. This resulted in a mean
weighted effect size of 0.80. The results from 9 studies employing delayed out-
come conditions resulted in a mean weighted effect size of just 0.35. This provides
support for the strength of feedback in relation to immediate outcome administra-
tions, at least in CBI.
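For readers less familiar with meta-analytic quantities, the following is a minimal sketch of how a mean weighted effect size of this kind is commonly computed. It illustrates the general technique rather than Azevedo and Bernard's exact procedure; the sample-size-based weights shown are an assumption made for illustration only.

\[
d_i = \frac{\bar{X}_{\text{feedback},\,i} - \bar{X}_{\text{no feedback},\,i}}{s_{\text{pooled},\,i}},
\qquad
\bar{d}_w = \frac{\sum_{i=1}^{k} w_i\, d_i}{\sum_{i=1}^{k} w_i},
\qquad w_i \propto n_i ,
\]

where \(d_i\) is the standardized mean difference for study \(i\), \(s_{\text{pooled},\,i}\) is that study's pooled standard deviation, \(n_i\) is its sample size, and \(k\) is the number of studies entering the average (22 for the immediate-outcome comparisons and 9 for the delayed-outcome comparisons reported above).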
Conjoining Feedback Timing Findings
A preliminary conclusion derived from both the Schroth (1992) and Corbett and
Anderson (2001) findings is that delayed feedback may be superior for promoting
transfer of learning, especially in relation to concept-formation tasks, whereas
immediate feedback may be more efficient, particularly in the short run and for pro-
cedural skills (i.e., programming and mathematics). This proposition has some sup-
port. For instance, Schmidt, Young, Swinnen, and Shapiro (1989) conducted an
experiment that provided verification feedback following a set of trials relating to
a relatively simple ballistic-timing task. Feedback was delivered on one of four schedules: after every trial, after every 5 trials, after every 10 trials, or after every 15 trials. During the acquisition phase, when feedback was present, all groups showed
general improvements in performance across practice, although those in the longer
length conditions showed worse performance relative to the shorter length condi-
tions. In a delayed test, they found an inverse relation between the timing variable
(1, 5, 10, 15 trials between feedback) and error rates. That is, longer delays between
feedback episodes resulted in relatively poorer performance during acquisition but
better retention compared with shorter delay conditions.
Mathan and Koedinger (2002) reviewed various studies on the timing of feed-
back and concluded that the effectiveness of feedback depends not on the main
effect of timing but on the nature of the task and the capability of the learner. They
called for further exploration on possible interactions involving timing effects and
optimal ways to match feedback (type and timing) to learning tasks and students’
individual needs or characteristics (e.g., Schimmel, 1988; Smith & Ragan, 1999).
One such interaction reported in the literature concerns feedback timing and task
difficulty. That is, if the task is difficult, then immediate feedback is beneficial, but
if the task is easy, then delayed feedback may be preferable (Clariana, 1999). This
is similar to the ideas presented earlier in the Formative Feedback as Scaffolding
subsection.
Summary of Feedback Timing Results
Research investigating the relationship of feedback timing to learning and per-
formance reveals inconsistent findings. One interesting observation is that many
field studies demonstrate the value of immediate feedback (see Kulik & Kulik,
1988), whereas many laboratory studies show positive effects of delayed feedback
(see Schmidt & Bjork, 1992; Schmidt et al., 1989). One way to resolve the inconsis-
tency is by considering that immediate feedback may activate both positive and neg-
ative learning effects. For instance, the positive effects of immediate feedback can
be seen as facilitating the decision or motivation to practice and providing the
explicit association of outcomes to causes. On the negative side, immediate feedback may foster reliance on information that will not be available during transfer and may promote less careful or mindful behavior. If this supposition is true, the positive and
negative effects of immediate feedback could cancel each other out. Alternatively,
either the positive or negative effects may come to the fore, depending on the exper-
imental context. A similar argument could be made for delayed feedback effects on
learning. For example, on the positive side, delayed feedback may encourage learn-
ers’ engagement in active cognitive and metacognitive processing, thus engender-
ing a sense of autonomy (and perhaps improved self-efficacy). But on the negative
side, delaying feedback for struggling and less motivated learners may prove to be
frustrating and detrimental to their knowledge and skill acquisition.
Feedback and Other Variables
So far, formative feedback types and timing have been discussed in relation to
their effects on learning. This section examines other variables that may interact
with feedback features, such as learner ability level, response certitude, goal ori-
entation, and normative feedback.
Learner Level
As alluded to in the Timing subsection of this review, some research has sug-
gested that low-achieving students may benefit from immediate feedback, whereas
high-achieving students may prefer or benefit from delayed feedback (Gaynor,
1981; Roper, 1977). Furthermore, when testing different types of feedback, Clariana
(1990) has argued that low-ability students benefit from receipt of correct response
feedback more than from try again feedback. Hanna (1976) also examined student
performance in relation to different feedback conditions: verification, elaboration,
and no feedback. The verification feedback condition produced the highest scores
for high-ability students and elaborated feedback produced the highest scores for
low-ability students. There were no significant differences between verification and
elaborated feedback for middle-ability students, but both of these types of feedback
were superior to no feedback. These findings support the research and suppositions
presented earlier in the Scaffolding subsection.
Response Certitude
Kulhavy and Stock (1989) examined feedback and response certitude issues
from an information-processing perspective. That is, they had students provide
confidence judgments (“response certitude” ratings) following each response to
various tasks. They hypothesized that when students are certain their answer is cor-
rect, they will spend little time analyzing feedback, and when students are certain
their answer is incorrect, they will spend more time reviewing feedback. The impli-
cations of this are straightforward; that is, provide more elaborated feedback for
students who are more certain that their answer is wrong and deliver more con-
strained feedback for those with high certitude of correct answers. Although their
own research supported their hypotheses, other studies did not replicate the find-
ings. For instance, Mory (1994) tried to replicate the response certitude findings
and found that although there were differences in the amount of feedback study
time, there was no significant learning effect for feedback tailored to response cer-
titude and correctness.
Goal Orientation
Davis, Carson, Ammeter, and Treadway (2005) reported the results of a study
testing the relationship between goal orientation and feedback specificity on per-
formance using a management decision-making task. In short, they found that
feedback specificity (low, moderate, and high levels) had a significant influence
on performance for individuals who were low on learning orientation (i.e., high
feedback specificity was better for learners with low learning orientation). They
also reported a significant influence of feedback specificity on performance for
persons high in performance orientation (i.e., this group also benefited from more
specific feedback). The findings support the general positive effects of feedback
on performance and suggest the use of more specific feedback for learners with
either high-performance or low-learning goal orientations.
Normative Feedback
According to research cited in Kluger and DeNisi (1996), when feedback is pro-
vided to students in a norm-referenced manner that compares the individual’s per-
formance with that of others, people who perform poorly tend to attribute their
failures to lack of ability, expect to perform poorly in the future, and demonstrate
decreased motivation on subsequent tasks (i.e., similar to learners with a perfor-
mance orientation, described earlier). McColskey and Leary (1985) examined the
hypothesis that the harmful effects of failure might be lessened when failure is
expressed in self-referenced terms—that is, relative to the learner’s known level of
ability as assessed by other measures. In their study, learners received feedback
indicating that they did well or poorly on an anagram test, and this feedback was
described as either norm-referenced (comparing the individual’s performance with
that of others) or as self-referenced (comparing performance with other measures
of the individual’s ability). They found that, compared to norm-referenced feed-
back, self-referenced feedback resulted in higher expectancies regarding future
performance and increased attributions to effort (e.g., “I succeeded because
I worked really hard”). Attributions to ability (e.g., “I succeeded because I’m
smart”) were not affected. The main implication is that low-achieving students
should not receive normative feedback but should instead receive self-referenced
feedback—focusing their attention on their own progress.
This review has presented research findings covering the gamut of formative
feedback variables. As with earlier reviews, this one has unearthed mixed findings
regarding learning effects—whether examining feedback specificity, timing of
feedback, and so on. The next section presents four influential feedback research
studies that have attempted to integrate disparate findings into preliminary theo-
ries (or models) through large literature reviews, meta-analyses, or both. The arti-
cles summarized are Kluger and DeNisi (1996), Bangert-Drowns et al. (1991),
Narciss and Huth (2004), and Mason and Bruning (2001).
Toward a Framework of Formative Feedback
To understand the world, one must not be worrying about one’s self.
—Albert Einstein
Kluger and DeNisi (1996)
Kluger and DeNisi (1996) examined and reported on the effects of feedback
interventions (FIs) on performance from multiple perspectives and spanning
decades of research—back to Thorndike’s classic research in the early 1900s.
Kluger and DeNisi conducted an extensive review of the literature, performed a
meta-analysis on reported experimental findings, and constructed a preliminary
theory based on a number of variables, or moderators. Their preliminary feedback
intervention theory (FIT) offers a broad approach to investigating FI effects,
including feedback moderators such as praise, written or verbal feedback, task nov-
elty and complexity, time constraints, and types of tasks such as physical, mem-
ory, knowledge, and vigilance tasks. The basic premise underlying FIT is that FIs
change the locus of a learner’s attention among three levels of control: (a) task
learning, (b) task motivation, and (c) metatask processes (see Figure 1).
The general pattern of results from Kluger and DeNisi’s (1996) large meta-
analysis of FI studies was consistent with the argument that all else being equal, FI
cues affect performance by changing the locus of attention. The lower in the hier-
archy the FI-induced locus of attention is, the stronger is the benefit of an FI for
performance. In other words, formative feedback that focuses the learner on
aspects of the task (i.e., the lower part of Figure 1) promotes learning and achieve-
ment compared to FIs that draw attention to the self (i.e., the upper box in Figure 1),
which can impede learning. This phenomenon is described in the Normative
Feedback section in this article.
FIT consists of five basic arguments: (a) behavior is regulated by comparisons
of feedback to goals or standards, (b) goals or standards are organized hierarchi-
cally, (c) attention is limited and therefore only feedback–standard gaps (i.e., dis-
crepancies between actual and desired performance) that receive attention actively
participate in behavior regulation, (d) attention is normally directed to a moderate
level of the hierarchy; and (e) FIs change the locus of attention and therefore affect
behavior. These arguments are interdependent, and each consecutive argument is
built on the preceding argument.
Specific results from Kluger and DeNisi's (1996) meta-analysis showed that four moderators (feedback variables) demonstrated significant relationships with d (effect size) at p < .01: (a) discouraging FIs reduce FI effects; (b) velocity FIs (i.e., "self-referenced" feedback that addresses a change from the learner's prior performance) and (c) correct-response FIs increase FI effects; and (d) FI effects on the performance of physical tasks are lower than FI effects on cognitive tasks.
Six more moderators became significant after excluding biased studies from
the meta-analysis. Of those six, three were shown to reduce FI effects:
(a) praise, (b) FIs threatening self-esteem, and (c) orally delivered FIs (from the
instructor). FIs that provide frequent messages enhance FI effects, and FI effects
are stronger for memory tasks and weaker for more procedural tasks. Finally, other
variables showing significance at p < .05 include the following: (a) computerized
FIs yield stronger effects than noncomputerized FIs, (b) FIs in the context of com-
plex tasks yield weaker effects than for simpler tasks, and (c) FIs are more effec-
tive with a goal-setting intervention than in the absence of goal setting. Figure 2
summarizes the main findings. This figure represents my interpretation (and cate-
gorization) based on data presented in the Kluger and DeNisi (1996) article.
One important finding from these results concerns the attenuating effect of
praise on learning and performance, although this has been described elsewhere
in the literature in terms of a model of self-attention (Baumeister, Hutton, & Cairns,
1990), attributions of effort (Butler, 1987), and control theory (Waldersee &
Luthans, 1994). Also, Balcazar, Hopkins, and Suarez (1985) reported that praise
was not as widely effective a reinforcer as previously believed.
FIGURE 1. Abstract hierarchy of processing.
Perhaps the most surprising finding that emerged from the Kluger and DeNisi
(1996) meta-analysis is that in more than one third of the 607 cases (effect sizes),
FIs reduced performance. Furthermore, most of the observed variability cannot be
explained by sampling or other errors.
In conclusion, and as the authors observe in a later paper on the topic (Kluger
& DeNisi, 1998), FIs may be viewed as double-edged swords, cutting both ways.
Care should be taken to know which interventions increase performance and under
which conditions.
Bangert-Drowns et al. (1991)
Bangert-Drowns et al. (1991) examined 40 research studies on feedback using
meta-analysis techniques. They examined such variables as type of feedback, tim-
ing of feedback, and error rates in terms of their respective effect sizes. This widely
cited article describes both behavioral and cognitive operations that occur in learn-
ing. The basic idea is that to direct behavior, a learner needs to be able to monitor
physical changes brought about by the behavior. That is, learners adjust their cognitive operations, and thus their activity, by adapting to new information and matching it against their own expectations about performance. They emphasize that
any theory that depicts learning as a process of mutual influence between
learners and their environments must involve feedback implicitly or explic-
itly because, without feedback, mutual influence is by definition impossible.
Hence, the feedback construct appears often as an essential element of theo-
ries of learning and instruction. (p. 214)
To make this point more concrete, imagine trying to learn something new in the
absence of any feedback (explicit or implicit).
Most of the variables Bangert-Drowns et al. (1991) analyzed comprised text-
based feedback, which they organized into a five-stage model. This model describes
the state of learners as they move through a feedback cycle and emphasizes the
construct of mindfulness (Salomon & Globerson, 1987). Mindfulness is “a reflec-
tive process in which the learner explores situational cues and underlying mean-
ings relevant to the task involved” (Dempsey et al., 1993, p. 38).
The five stages are depicted in Figure 3 and are similar to other learning cycles (e.g.,
Gibbs, 1988; Kolb, 1984), particularly in relation to the importance of reflection.

FIGURE 3. Five-stage model of the learner during a feedback cycle.
As described by Bangert-Drowns et al. (1991, p. 217), the five states of the
learner receiving feedback include:
1. The initial or current state of the learner. This is characterized by the degree
of interest, goal orientation, degree of self-efficacy, and prior relevant
knowledge.
2. Search and retrieval strategies. These cognitive mechanisms are activated
by a question. Information stored in the context of elaborations would be
easier to locate in memory because of more pathways providing access to
the information.
3. The learner makes a response to the question. In addition, the learner feels
some degree of certainty about the response and thus has some expectation
about what the feedback will indicate.
4. The learner evaluates the response in light of information from the feedback.
The nature of the evaluation depends on the learner’s expectations about
feedback. For instance, if the learner was sure of his or her response and the
feedback confirmed its correctness, the retrieval pathway may be strength-
ened or unaltered. If the learner was sure of the response and feedback indi-
cated its incorrectness, the learner may seek to understand the incongruity.
Uncertainty about a response with feedback confirmation or disconfirmation
is less likely to stimulate deep reflection unless the learner was interested in
acquiring the instructional content.
5. Adjustments are made to relevant knowledge, self-efficacy, interests, and goals as a result of the response evaluation. These adjusted states, with subsequent experiences, determine the next “current” state (a minimal code sketch of this cycle follows the list).
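A minimal sketch of this cycle follows (in Python; the state fields, certainty values, and update rules are hypothetical simplifications for illustration, not part of Bangert-Drowns et al.'s model):

```python
# Minimal sketch of the five-stage feedback cycle described above; the state
# fields and update rules are hypothetical simplifications for illustration.
from dataclasses import dataclass

@dataclass
class LearnerState:              # Stage 1: current state of the learner
    interest: float
    self_efficacy: float
    knowledge: dict              # item -> stored answer

def answer(state, item):         # Stages 2-3: search/retrieve, then respond with some certainty
    response = state.knowledge.get(item, "guess")
    certainty = 0.9 if item in state.knowledge else 0.2
    return response, certainty

def evaluate(state, item, response, certainty, correct_answer):
    # Stage 4: evaluate the response in light of the feedback message.
    correct = response == correct_answer
    if certainty > 0.5 and not correct:
        print(f"Incongruity on '{item}': learner seeks to understand the error.")
    # Stage 5: adjust knowledge and self-efficacy; this becomes the next "current" state.
    state.knowledge[item] = correct_answer
    state.self_efficacy += 0.05 if correct else -0.05
    return state

learner = LearnerState(interest=0.7, self_efficacy=0.6, knowledge={"2+2": "4"})
for item, key in [("2+2", "4"), ("7*8", "56")]:
    resp, cert = answer(learner, item)
    learner = evaluate(learner, item, resp, cert, key)
print(learner.self_efficacy)     # the adjusted state feeds the next cycle
```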
Overall, Bangert-Drowns et al.’s (1991) meta-analysis found generally weak
effects of feedback on achievement. More specifically (but not surprisingly), the
authors found that verification feedback (correct–incorrect) resulted in lower
effect sizes compared to correct response feedback (i.e., providing the correct
answer). Also, using a pretest within a study significantly lowered effect sizes,
as did uncontrolled presearch availability of answers (i.e., ability to locate an
answer before responding to a question). These last two findings may arise because pretests and presearch availability can act as “advance organizers” that support short-term retention but undermine overall feedback effects in studies that employ them.
The main conclusion from Bangert-Drowns et al.’s (1991) meta-analysis and five-stage model is that feedback can promote learning if it is received
mindfully. Conversely, feedback can inhibit learning if it encourages mindlessness,
as when the answers are made available before learners begin their memory search,
or if the feedback message does not match students’ cognitive needs (e.g., too easy,
too complex, too vague).
Narciss and Huth (2004)
Narciss and Huth (2004) outlined a conceptual framework for the design of for-
mative feedback. This framework is based on the body of research relating to elab-
orated feedback types. Cognitive task and error analyses served as the basis for the
design of the framework. The impact of the feedback on learning and motivation
was ultimately examined in two computer-based learning experiments. The results
of these studies showed that systematically designed formative feedback has pos-
itive effects on achievement and motivation.
In general, Narciss and Huth (2004) asserted that the design and development of effective formative feedback must take into consideration the instructional context as well as characteristics of the learner, particularly for complex learning tasks. The conceptual framework for the design of formative feedback is
depicted in Figure 4 (modified from the original).
Each of the three factors is examined in more detail in the following:
1. Instruction. The instructional factor or context consists of three main
elements: (a) the instructional objectives (e.g., learning goals or standards
relating to some curriculum), (b) the learning tasks (e.g., knowledge items,
cognitive operations, metacognitive skills), and (c) errors and obstacles
(e.g., typical errors, incorrect strategies, sources of errors).
2. Learner. Information concerning the learner that is relevant to feedback
design includes (a) learning objectives and goals; (b) prior knowledge, skills,
and abilities (e.g., domain dependent, such as content knowledge, and
domain independent, such as metacognitive skills); and (c) academic moti-
vation (e.g., one’s need for academic achievement, academic self-efficacy,
and metamotivational skills).
3. Feedback. The feedback factor consists of three main elements: (a) the con-
tent of the feedback (i.e., evaluative aspects, such as verification, and infor-
mative aspects, such as hints, cues, analogies, explanations, and worked-out
examples), (b) the function of the feedback (i.e., cognitive, metacognitive,
and motivational), and (c) the presentation of the feedback components
(i.e., timing, schedule, and perhaps adaptivity considerations).
Narciss and Huth (2004) contend that adapting the content, function, and presentation format of the feedback message should be driven by considerations of the instructional goals and learner characteristics to maximize the informative value of the feedback. Specific steps for generating effective formative feedback include selecting and specifying learning objectives (concrete learning outcomes), identifying learning tasks that match those outcomes, and, after conducting cognitive task and error analyses, specifying information (i.e., formative feedback) that addresses specific, systematic errors or obstacles.

FIGURE 4. Factors interacting with feedback to influence learning.
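To illustrate how the three factors and these steps might fit together, here is a hedged sketch (the field names, the sample task, and the error-to-feedback mapping are hypothetical assumptions, not the authors' implementation):

```python
# Hypothetical sketch of the instruction / learner / feedback factors; the
# field names and the error-to-feedback mapping are illustrative assumptions.

instruction = {
    "objective": "add fractions with unlike denominators",
    "task": "1/2 + 1/3",
    "typical_errors": {
        # error pattern -> elaborated information addressing it
        "added numerators and denominators":
            "Hint: rewrite both fractions with a common denominator "
            "before adding the numerators.",
    },
}

learner = {"prior_knowledge": "low", "self_efficacy": "low"}

def design_feedback(observed_error):
    content = {
        "verification": "Not quite.",  # evaluative component
        "elaboration": instruction["typical_errors"].get(
            observed_error, "Compare your steps with a worked example."),
    }
    # Function and presentation are adapted to learner characteristics.
    function = "cognitive + motivational" if learner["self_efficacy"] == "low" else "cognitive"
    presentation = {"timing": "immediate" if learner["prior_knowledge"] == "low" else "delayed",
                    "schedule": "stepwise"}
    return {"content": content, "function": function, "presentation": presentation}

print(design_feedback("added numerators and denominators"))
```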
Mason and Bruning (2001)
Mason and Bruning (2001) reviewed the literature on feedback that is delivered
via CBI systems and presented a theoretical framework intended to help design-
ers, developers, and instructors build their own CBI tools. Mason and Bruning’s
theoretical framework, depicted in Figure 5, is based on research that has exam-
ined type of feedback and level of elaboration in relation to student achievement
level, task complexity, timing of feedback, and prior knowledge. The general rec-
ommendation they have drawn from the framework is that immediate feedback for
students with low achievement levels in the context of either simple (lower level)
or complex (higher level) tasks is superior to delayed feedback, whereas delayed
feedback is suggested for students with high achievement levels, especially for
complex tasks.
FIGURE 5. Feedback variables for decision making in computer-based instruction.
Adapted from “Providing Feedback in Computer-Based Instruction: What the
Research Tells Us” by B. J. Mason and R. Bruning, 2001, Center for Instructional
Innovation, University of Nebraska–Lincoln. Copyright 2001 by B. J. Mason and
R. Bruning. Reprinted with permission.
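The general recommendation above can be distilled into a small decision rule; the sketch below is illustrative only, and the labels are assumptions rather than part of Mason and Bruning's framework:

```python
# Toy decision rule distilled from the recommendation above: immediate feedback
# for low-achieving students on either simple or complex tasks; delayed feedback
# for high-achieving students, especially on complex tasks. Labels are assumptions.

def feedback_timing(achievement: str, task_complexity: str) -> str:
    if achievement == "low":
        return "immediate"        # holds for both lower- and higher-level tasks
    # High achievers: delayed feedback is suggested, most strongly for complex tasks.
    return "delayed"

print(feedback_timing("low", "complex"))    # -> immediate
print(feedback_timing("high", "complex"))   # -> delayed
```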
The following research supports Mason and Bruning’s (2001) framework. First,
significant learning gains often show up in response to various types of elaboration
feedback (e.g., Clariana, 1990; Morrison, Ross, Gopalakrishnan, & Casey, 1995;
Pridemore & Klein, 1991, 1995; Roper, 1977; Waldrop, Justen, & Adams, 1986).
Second, research conducted in classroom settings seems to suggest that response-
contingent feedback enhances student achievement more than other types of feed-
back (e.g., Whyte, Karolick, Neilsen, Elder, & Hawley, 1995). Third, Mason and
Bruning reported that the level of feedback complexity has been shown to both influ-
ence and not influence learning, and this lack of effect may be due to interactions
involving other variables, such as the nature of the topic and the type of skill mea-
sured (e.g., Hodes, 1985; Park & Gittelman, 1992). In cases where the level of feed-
back complexity has been shown to affect learning, more elaborative information
tends to produce increased understanding (Gilman, 1969; Pridemore & Klein, 1991;
Roper, 1977; Waldrop et al., 1986; Whyte et al., 1995). For instance, although ver-
ification feedback did not improve learning, correct response, response contingent,
and a combination of the other levels of feedback have been shown to significantly
improve student learning (e.g., Gilman, 1969). This may be due to the extra infor-
mation available in elaboration feedback that allows students to correct their own
errors or misconceptions. Information on the correctness of an answer (i.e., verifi-
cation feedback) does not have much utility for learning.
Summary and Discussion
In general, formative feedback should address the accuracy of a learner’s
response to a problem or task and may touch on particular errors and misconcep-
tions (Azevedo & Bernard, 1995; Birenbaum & Tatsuoka, 1987; Cheng, Lin, Chen,
& Heh, 2005; Cohen, 1985; Kulhavy, 1977; Sales, 1993; Sleeman et al., 1989), the
latter representing more specific or elaborated types of feedback. Formative feed-
back should also permit the comparison of actual performance with some estab-
lished standard of performance (Johnson & Johnson, 1993).
In technology-assisted instruction, similar to classroom settings, formative
feedback comprises information—whether a message, display, and so on—pre-
sented to the learner following his or her input (or on request, if applicable) with
the purpose of shaping the perception, cognition, or action of the learner (e.g.,
Moreno, 2004; Schimmel, 1983; Wager & Wager, 1985). The main goal of forma-
tive feedback—whether delivered by a teacher or computer, in the classroom or
elsewhere—is to enhance learning, performance, or both, engendering the forma-
tion of accurate, targeted conceptualizations and skills. Such feedback may be used
in conjunction with low- or medium-stakes assessments, include diagnostic com-
ponents, and even be personalized for the learner (Albertson, 1986; Azevedo &
Bernard, 1995; Narciss & Huth, 2004; VanLehn, 1982).
Formative feedback might be likened to “a good murder” in that effective and
useful feedback depends on three things: (a) motive (the student needs it), (b)
opportunity (the student receives it in time to use it), and (c) means (the student is
able and willing to use it). However, even with motive, opportunity, and means,
there is still large variability of feedback effects on performance and learning,
including negative findings that have historically been ignored in the literature (see
Kluger & DeNisi, 1996).
Despite this variability, several meta-analyses found that feedback generally
improves learning, ranging from about .40 SD (Guzzo, Jette, & Katzell, 1985) to
.80 SD and higher (Azevedo & Bernard, 1995; Kluger & DeNisi, 1996) compared
to control conditions. But there remain major gaps in the feedback literature, par-
ticularly in relation to interactions among task characteristics, instructional contexts,
and student characteristics that potentially mediate feedback effects. Therefore,
although there is no simple answer to the “what feedback works” query, there are
some preliminary guidelines that can be formulated based on the findings reported
in this review.
Recommendations and Guidelines for Formative Feedback
Tables 2, 3, 4, and 5 present suggestions or prescriptions based on the current
review of the formative feedback literature. These are intended to provide a point
of departure for more comprehensive and systematic prescriptions in the future.
Equivocal findings are not presented, and the references are not exhaustive, but
representative. The tables differ in terms of formative feedback guidelines for
(a) things to do (Table 2), (b) things to avoid (Table 3), (c) timing issues (Table 4),
and (d) learner characteristics (Table 5).
Future Research
One reason studies examining formative feedback effects are so inconsistent may
be a function of individual differences among motivational prerequisites (e.g., intrin-
sic motivation, beliefs, need for academic achievement, academic self-efficacy, and
metacognitive skills). In fact, Vygotsky (1987) noted that the study of psychology
had been damaged by the separation of the intellectual from the motivational and
emotional (or affective) aspects of thinking. Crafting and delivering formative feed-
back may help bridge these “aspects of thinking” and enhance learning. This seems
to be supported by a growing number of researchers (e.g., Goleman, 1995; Mayer &
Salovey, 1993, 1997; Picard et al., 2004) who have argued that emotional upsets can
interfere with mental activities (e.g., anxious, angry, or depressed students do not
learn). Thus, one intriguing area of future research is to systematically examine the
relationship(s) between affective components in feedback and outcome performance.
And although there have been inroads in the area, according to Picard et al. (2004),
extending cognitive theory to explain and exploit the role of affect in learning is still
in its infancy.
In general, and as suggested by Schwartz and White (2000) cited earlier, we need
to continue taking a multidimensional view of feedback where situational and indi-
vidual characteristics of the instructional context and learner are considered along
with the nature and quality of a feedback message. Narciss and Huth (2004) noted,
and I strongly agree, that function, content, and mode of feedback presentation are
important facets and should be considered separately as well as interactively with
learner characteristics and instructional variables. Cognitive task and error analyses
may be used to match formative feedback components to (a) learning objectives,
(b) skills needed for the mastery of the task, and (c) typical errors or incorrect strate-
gies. However, such expensive analyses and methods may not, in fact, be necessary
to promote learning (e.g., see the No Effect of Feedback Complexity subsection in
this article, specifically the Sleeman et al., 1989, findings).
TABLE 2
Formative feedback guidelines to enhance learning (things to do)

Prescription: Focus feedback on the task, not the learner.
Description and references: Feedback to the learner should address specific features of his or her work in relation to the task, with suggestions on how to improve (e.g., Butler, 1987; Corbett & Anderson, 2001; Kluger & DeNisi, 1996; Narciss & Huth, 2004).

Prescription: Provide elaborated feedback to enhance learning.
Description and references: Feedback should describe the what, how, and why of a given problem. This type of cognitive feedback is typically more effective than verification of results (e.g., Bangert-Drowns et al., 1991; Gilman, 1969; Mason & Bruning, 2001; Narciss & Huth, 2004).

Prescription: Present elaborated feedback in manageable units.
Description and references: Provide elaborated feedback in small enough pieces so that it is not overwhelming and discarded (Bransford et al., 2000; Sweller et al., 1998). Presenting too much information may not only result in superficial learning but may also invoke cognitive overload (e.g., Mayer & Moreno, 2002; Phye & Bender, 1989). A stepwise presentation of feedback offers the possibility to control for mistakes and gives learners sufficient information to correct errors on their own.

Prescription: Be specific and clear with the feedback message.
Description and references: If feedback is not specific or clear, it can impede learning and can frustrate learners (e.g., Moreno, 2004; Williams, 1997). If possible, try to link feedback clearly and specifically to goals and performance (Hoska, 1993; Song & Keller, 2001).

Prescription: Keep feedback as simple as possible but no simpler (based on learner needs and instructional constraints).
Description and references: Simple feedback is generally based on one cue (e.g., verification or hint) and complex feedback on multiple cues (e.g., verification, correct response, error analysis). Keep feedback as simple and focused as possible. Generate only enough information to help students and not more. Kulhavy et al. (1985) found that feedback that was too complex did not promote learning compared to simpler feedback.

Prescription: Reduce uncertainty between performance and goals.
Description and references: Formative feedback should clarify goals and seek to reduce or remove uncertainty in relation to how well learners are performing on a task, and what needs to be accomplished to attain the goal(s) (e.g., Ashford et al., 2003; Bangert-Drowns et al., 1991).

Prescription: Give unbiased, objective feedback, written or via computer.
Description and references: Feedback from a trustworthy source will be considered more seriously than other feedback, which may be disregarded. This may explain why computer-based feedback is often better than human-delivered feedback in some experiments, in that perceived biases are eliminated (see Kluger & DeNisi, 1996).

Prescription: Promote a “learning” goal orientation via feedback.
Description and references: Formative feedback can be used to alter goal orientation, from a focus on performance to a focus on learning (Hoska, 1993). This can be facilitated by crafting feedback emphasizing that effort yields increased learning and performance, and mistakes are an important part of the learning process (Dweck, 1986).

Prescription: Provide feedback after learners have attempted a solution.
Description and references: Do not let learners see answers before trying to solve a problem on their own (i.e., presearch availability). Several studies that have controlled presearch availability show a benefit of feedback, whereas studies without such control show inconsistent results (Bangert-Drowns et al., 1991).
TABLE 3
Formative feedback guidelines to enhance learning (things to avoid)

Prescription: Do not give normative comparisons.
Description and references: Feedback should avoid comparisons with other students, directly or indirectly (e.g., “grading on the curve”). In general, do not draw attention to “self” during learning (Kluger & DeNisi, 1996; Wiliam, 2007).

Prescription: Be cautious about providing overall grades.
Description and references: Feedback should note areas of strength and provide information on how to improve, as warranted and without overall grading. Wiliam (2007) summarized the following findings: (a) students receiving just grades showed no learning gains, (b) those getting just comments showed large gains, and (c) those with grades and comments showed no gains (likely due to focusing on the grade and ignoring comments). Effective feedback relates to the content of the comments (Butler, 1987; McColskey & Leary, 1985).

Prescription: Do not present feedback that discourages the learner or threatens the learner’s self-esteem.
Description and references: This prescription is based not only on common sense but also on research reported in Kluger and DeNisi (1996), which cites a list of feedback interventions that undermine learning because they draw focus to the “self” and away from the task at hand. In addition, do not provide feedback that is either too controlling or critical of the learner (Baron, 1993; Fedor et al., 2001).

Prescription: Use “praise” sparingly, if at all.
Description and references: Kluger and DeNisi (1996), Butler (1987), and others have noted that use of praise as feedback directs the learner’s attention to “self,” which distracts from the task and consequently from learning.

Prescription: Try to avoid delivering feedback orally.
Description and references: This also was addressed in Kluger and DeNisi (1996). When feedback is delivered in a more neutral manner (e.g., written or computer delivered), it is construed as less biased.

Prescription: Do not interrupt the learner with feedback if the learner is actively engaged.
Description and references: Interrupting a student who is immersed in a task (trying to solve a problem or task on his or her own) can be disruptive to the student and impede learning (Corno & Snow, 1986).

Prescription: Avoid using progressive hints that always terminate with the correct answer.
Description and references: Although hints can be facilitative, they can also be abused, so if they are employed to scaffold learners, provisions to prevent their abuse should be made (e.g., Aleven & Koedinger, 2000; Shute, Woltz, & Regian, 1989). Consider using prompts and cues (i.e., more specific kinds of hints).

Prescription: Do not limit the mode of feedback presentation to text.
Description and references: Exploit the potential of multimedia to avoid cognitive overload due to modality effects (e.g., Mayer & Moreno, 2002) and do not default to presenting feedback messages as text. Instead, consider alternative modes of presentation (e.g., acoustic, visual).

Prescription: Minimize use of extensive error analyses and diagnosis.
Description and references: In line with findings by Sleeman et al. (1989) and VanLehn et al. (2005), the cost of conducting extensive error analyses and cognitive diagnosis may not provide sufficient benefit to learning. Furthermore, error analyses are rarely complete and not always accurate, and thus are helpful only in a subset of circumstances.
TABLE 4
Formative feedback guidelines in relation to timing issues

Prescription: Design the timing of feedback to align with the desired outcome.
Description and references: Feedback can be delivered (or obtained) either immediately or delayed. Immediate feedback can help fix errors in real time, producing greater immediate gains and more efficient learning (Corbett & Anderson, 2001; Mason & Bruning, 2001), but delayed feedback has been associated with better transfer of learning (e.g., Schroth, 1992).

Prescription: For difficult tasks, use immediate feedback.
Description and references: When a student is learning a difficult new task (where “difficult” is relative to the learner’s capabilities), it is better to use immediate feedback, at least initially (Clariana, 1990). This provides a helpful safety net for the learner so she does not get bogged down and frustrated (Knoblauch & Brannon, 1981).

Prescription: For relatively simple tasks, use delayed feedback.
Description and references: When a student is learning a relatively simple task (again, relative to capabilities), it is better to delay feedback to prevent feelings of feedback intrusion and possibly annoyance (Clariana, 1990; Corno & Snow, 1986).

Prescription: For retention of procedural or conceptual knowledge, use immediate feedback.
Description and references: In general, there is wide support for use of immediate feedback to promote learning and performance on verbal, procedural, and even tasks requiring motor skills (Anderson et al., 2001; Azevedo & Bernard, 1995; Corbett & Anderson, 1989, 2001; Dihoff et al., 2003; Phye & Andre, 1989).

Prescription: To promote transfer of learning, consider using delayed feedback.
Description and references: According to some researchers (e.g., Kulhavy et al., 1985; Schroth, 1992), delayed may be better than immediate feedback for transfer task performance, although initial learning time may be depressed. This needs more research.
TABLE 5
Formative feedback guidelines in relation to learner characteristics

Prescription: For high-achieving learners, consider using delayed feedback.
Description and references: Similar to the Clariana (1990) findings cited in Table 4, high-achieving students may construe a moderate or difficult task as relatively easy and hence benefit by delayed feedback (see also Gaynor, 1981; Roper, 1977).

Prescription: For low-achieving learners, use immediate feedback.
Description and references: The argument for low-achieving students is similar to the one above; however, these students need the support of immediate feedback in learning new tasks they may find difficult (see Gaynor, 1981; Mason & Bruning, 2001; Roper, 1977).

Prescription: For low-achieving learners, use directive (or corrective) feedback.
Description and references: Novices or struggling students need support and explicit guidance during the learning process (Knoblauch & Brannon, 1981; Moreno, 2004); thus, hints may not be as helpful as more explicit, directive feedback.

Prescription: For high-achieving learners, use facilitative feedback.
Description and references: Similar to the above, high-achieving or more motivated students benefit from feedback that challenges them, such as hints, cues, and prompts (Vygotsky, 1987).

Prescription: For low-achieving learners, use scaffolding.
Description and references: Provide early support and structure for low-achieving students (or those with low self-efficacy) to improve learning and performance (e.g., Collins et al., 1989; Graesser et al., 2005).

Prescription: For high-achieving learners, verification feedback may be sufficient.
Description and references: Hanna (1976) presented findings that suggest that high-achieving students learn more efficiently if permitted to proceed at their own pace. Verification feedback provides the level of information most helpful in this endeavor.

Prescription: For low-achieving learners, use correct response and some kind of elaboration feedback.
Description and references: Using the same rationale as with supplying scaffolding to low-achieving students, the prescription here is to ensure low-achieving students receive a concrete, directive form of feedback support (e.g., Clariana, 1990; Hanna, 1976).

Prescription: For learners with low learning orientation (or high performance orientation), give specific feedback.
Description and references: As described in the study by Davis et al. (2005), if students are oriented more toward performance (trying to please others) and less toward learning (trying to achieve an academic goal), provide feedback that is specific and goal directed. Also, keep the learner’s eye on the learning goal (Hoska, 1993).
In line with the question concerning the value added of error analyses and more
diagnostic types of formative feedback, controlled evaluations are needed, system-
atically testing the effects of feedback conditions (as listed in Table 1) on learning
combined with a cost–benefit analysis. Some obvious costs include development
time for specifying the feedback types and reading time for feedback by the student.
Benefits relate to improvements in learning outcome and efficiency, as well as pos-
sible self-regulatory skills and affective variables. Information about the learner
would be collected to examine possible interactions. The hypotheses are that
(a) more complex formative feedback types (e.g., involving extensive and expen-
sive error analyses) do not yield proportionately greater learning gains and (b) feed-
back can be made more effective if it can adapt to the needs of learners—cognitive
and noncognitive characteristics—as well as to different types of knowledge and
skills. The general question is: What level of feedback complexity yields the most
bang for the buck?
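Such a cost–benefit comparison could be prototyped very simply; in the following sketch, every number is an invented placeholder rather than an empirical result:

```python
# Hypothetical cost-benefit comparison of feedback conditions; every number
# below is an invented placeholder, not an empirical result.

conditions = {
    # condition: (learning gain in SD units, development hours, reading minutes per student)
    "verification only":        (0.20,  5, 0.5),
    "correct response + hint":  (0.45, 20, 1.5),
    "full error diagnosis":     (0.50, 80, 3.0),
}

def value_per_cost(gain, dev_hours, read_minutes, n_students=100):
    # Benefit per unit of total cost (developer time plus pooled student reading time).
    total_cost_hours = dev_hours + n_students * read_minutes / 60
    return gain / total_cost_hours

for name, (gain, dev, read) in conditions.items():
    print(f"{name}: {value_per_cost(gain, dev, read):.4f} SD per hour")
```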
Table 5 provides guidelines for linking a few learner characteristics to different
feedback types. Future research may examine (a) additional learner characteristics
and (b) links between different types of knowledge and feedback types. For
instance, feedback to support fact learning (declarative knowledge) could reiterate
definitions or provide the learner with a handy mnemonic technique; feedback to
support conceptual knowledge could provide examples, counterexamples, and big
pictures; and feedback to improve procedural knowledge could involve demonstra-
tions, solution paths (complete or partial), and so forth. Ultimately, information
about the learner, combined with information about desired outcomes, may inform
the development of adaptive formative feedback. Various feedback types could
be generated and incorporated into a program (or generated on the fly based on
formative feedback models) and then accessed and delivered according to the
characteristics of the learner in conjunction with the nature of the task and instruc-
tional goals.
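As a purely illustrative sketch of this idea, the following code maps knowledge type and a learner characteristic to a feedback message; the templates and selection rules are hypothetical assumptions that would need empirical grounding:

```python
# Hypothetical adaptive feedback selector: knowledge type plus a learner
# characteristic drive the choice of feedback type. Templates and rules are
# illustrative assumptions, not a validated model.

TEMPLATES = {
    "declarative": "Restate the definition and offer a mnemonic.",
    "conceptual":  "Give an example, a counterexample, and the big picture.",
    "procedural":  "Demonstrate a partial solution path for the learner to complete.",
}

def select_feedback(knowledge_type, achievement, response_correct):
    if response_correct:
        if achievement == "high":
            return "Verification: correct."          # may be sufficient (see Table 5)
        return "Verification plus a brief note on why the answer is correct."
    elaboration = TEMPLATES[knowledge_type]
    directive = achievement == "low"                 # low achievers: explicit, directive support
    style = "directive" if directive else "facilitative (hints, cues, prompts)"
    timing = "immediate" if directive else "delayed"
    return f"[{timing}, {style}] {elaboration}"

print(select_feedback("procedural", "low", response_correct=False))
```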
In closing, the goal of this review is to summarize research findings relating to
formative feedback to serve as the foundation for a variety of future educational
products and services. As evidenced throughout, there is no “best” type of forma-
tive feedback for all learners and learning outcomes. However, formative feedback
has been shown in numerous studies to improve students’ learning and enhance
teachers’ teaching to the extent that the learners are receptive and the feedback is
on target (valid), objective, focused, and clear.
References
Albertson, L. M. (1986). Personalized feedback and cognitive achievement in com-
puter assisted instruction. Journal of Instructional Psychology, 13(2), 55–57.
Aleven, V., & Koedinger, K. R. (2000). Limitations of student control: Do students
know when they need help? In G. Gauthier, C. Frasson, & K. VanLehn (Eds.),
Proceedings of the 5th International Conference on Intelligent Tutoring Systems, ITS
2000 (pp. 292–303). Berlin: Springer-Verlag.
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive
tutors: Lessons learned. Journal of the Learning Sciences, 4, 167–207.
Ashford, S. J. (1986). Feedback-seeking in individual adaptation: A resource perspec-
tive. Academy of Management Journal, 29, 465–487.
Ashford, S. J., Blatt, R., & VandeWalle, D. (2003). Reflections on the looking glass:
A review of research on feedback-seeking behavior in organizations. Journal of
Management, 29, 773–799.
Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback in
computer-based instruction. Journal of Educational Computing Research, 13(2),
111–127.
Balcazar, F. E., Hopkins, B. L., & Suarez, Y. (1985). A critical, objective review of
performance feedback. Journal of Organizational Behavior Management, 7(3-4),
65–89.
Bandura, A. (1991). Social theory of self-regulation. Organizational Behavior and
Human Decision Processes, 50, 248–287.
Bandura, A., & Cervone, D. (1983). Self-evaluation and self-efficacy mechanisms gov-
erning the motivational effects of goal systems. Journal of Personality and Social
Psychology, 45, 1017–1028.
Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The
instructional effect of feedback in test-like events. Review of Educational Research,
61, 213–238.
Baron, R. A. (1988). Negative effects of destructive criticism: Impact on conflict, self-
efficacy, and task performance. Journal of Applied Psychology, 73, 199–207.
Baron, R. A. (1993). Criticism (informal negative feedback) as a source of perceived
unfairness in organizations: Effects, mechanisms, and countermeasures. In R.
Cropanzano (Ed.), Justice in the workplace: Approaching fairness in human
resource management (pp. 155–170). Hillsdale, NJ: Lawrence Erlbaum.
Baumeister, R. F., Hutton, D. G., & Cairns, K. J. (1990). Negative effects of praise on
skilled performance. Basic & Applied Social Psychology, 11(2), 131–149.
Birenbaum, M., & Tatsuoka, K. K. (1987). Effects of “on-line” test feedback on the
seriousness of subsequent errors. Journal of Educational Measurement, 24(2),
145–155.
Birney, R. C., Burdick, H., & Teevan, R. C. (1969). Fear of failure. New York: Van
Nostrand-Reinhold.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in
Education: Principles, Policy & Practice, 5(1), 7–74.
Bordia, P., Hobman, E., Jones, E., Gallois, C., & Callan, V. J. (2004). Uncertainty
during organizational change: Types, consequences, and management strategies.
Journal of Business and Psychology, 18, 507–532.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain,
mind, experience, and school (Rev. ed.). Washington, DC: National Academies
Press.
Brophy, J. E. (1981). Teacher praise: A functional analysis. Review of Educational
Research, 51, 5–32.
Brosvic, G. M., & Cohen, B. D. (1988). The horizontal vertical illusion and knowledge
of results. Perceptual and Motor Skills, 67(2), 463–469.
Bunderson, V. C., & Olson, J. B. (1983). Mental errors in arithmetic skills: Their diag-
nosis in precollege students (Report No. NSF SED 80-12500). Provo, UT: WICAT
Education Institute.
Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects
of different feedback conditions on motivational perceptions, interest, and perfor-
mance. Journal of Educational Psychology, 79(4), 474–482.
Button, S. B., Mathieu, J. E., & Zajac, D. M. (1996). Goal orientation in organizational
research: A conceptual and empirical foundation. Organizational Behavior and
Human Decision Processes, 67, 26–48.
Cheng, S. Y., Lin, C. S., Chen, H. S., & Heh, J. S. (2005). Learning and diagnosis of
individual and class conceptual perspectives: An intelligent systems approach using
clustering techniques. Computers & Education, 44(3), 257–283.
Clariana, R. B. (1990). A comparison of answer-until-correct feedback and knowledge-
of-correct-response feedback under two conditions of contextualization. Journal of
Computer-Based Instruction, 17(4), 125–129.
Clariana, R. B. (1999, February). Differential memory effects for immediate and
delayed feedback: A delta rule explanation of feedback timing effects. Paper pre-
sented at the Association of Educational Communications and Technology annual
convention, Houston, TX.
Cohen, V. B. (1985). A reexamination of feedback in computer-based instruction:
Implications for instructional design. Educational Technology, 25(1), 33–37.
Collins, C., Brown, J., & Newman, S. (1989). Cognitive apprenticeship: Teaching the
crafts of reading, writing, and mathematics. In L. Resnick (Ed.), Knowing, learning,
and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale,
NJ: Lawrence Erlbaum.
Corbett, A. T., & Anderson, J. R. (1989). Feedback timing and student control in the
LISP intelligent tutoring system. In D. Bierman, J. Brueker, & J. Sandberg (Eds.),
Proceedings of the Fourth International Conference on Artificial Intelligence and
Education (pp. 64–72). Amsterdam, Netherlands: IOS Press.
Corbett, A. T., & Anderson, J. R. (2001). Locus of feedback control in computer-based
tutoring: Impact on learning rate, achievement and attitudes. In Proceedings of ACM
CHI 2001 Conference on Human Factors in Computing Systems (pp. 245-252). New
York: Association for Computing Machinery Press.
Corno, L., & Snow, R. E. (1986). Adapting teaching to individual differences among
learners. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed.,
pp. 605–629). New York: Macmillan.
Covington, M. V., & Omelich, C. L. (1984). Task-oriented versus competitive learn-
ing structures: Motivational and performance consequences. Journal of Educational
Psychology, 76, 1038–1050.
Davis, W. D., Carson, C. M., Ammeter, A. P., & Treadway, D. C. (2005). The interac-
tive effects of goal orientation and feedback specificity on task performance. Human
Performance, 18, 409–426.
Dempsey, J., Driscoll, M., & Swindell, L. (1993). Text-based feedback. In J. Dempsey
& G. Sales (Eds.), Interactive instruction and feedback (pp. 21–54). Englewood
Cliffs, NJ: Educational Technology Publications.
Dihoff, R. E., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2003). The role of feedback
during academic testing: The delay retention test revisited. The Psychological Record,
53, 533–548.
Dweck, C. S. (1986). Motivational processes affecting learning. American
Psychologist, 41, 1040–1048.
Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and
personality. Psychological Review, 95, 256–273.
Epstein, M. L., Lazarus, A. D., Calvano, T. B., Matthews, K. A., Hendel, R. A., Epstein,
B. B., et al. (2002). Immediate feedback assessment technique promotes learning and
corrects inaccurate first responses. The Psychological Record, 52, 187–201.
Farr, J. L., Hofmann, D. A., & Ringenbach, K. L. (1993). Goal orientation and action con-
trol theory: Implications for industrial and organizational psychology. In C. L. Cooper
& I. T. Robertson (Eds.), International review of industrial and organizational psy-
chology (pp. 193–232). New York: John Wiley.
Fedor, D. B. (1991). Recipient responses to performance feedback: A proposed model
and its implications. Research in Personnel and Human Resources Management, 9,
73–120.
Fedor, D. B., Davis, W. D., Maslyn, J. M., & Mathieson, K. (2001). Performance
improvement efforts in response to negative feedback: The roles of source power
and recipient self-esteem. Journal of Management, 27(1), 79–97.
Fisher, S. L., & Ford, J. K (1998). Differential effects of learner effort and goal orien-
tation on two learning outcomes. Personnel Psychology, 51, 397–420.
Ford, J. K., Smith, E. M., Weissbein, D. A., Gully, S. M., & Salas, E. (1998).
Relationships of goal orientation, metacognitive activity, and practice strategies with
learning outcomes and transfer. Journal of Applied Psychology, 83, 218–233.
Gaynor, P. (1981). The effect of feedback delay on retention of computer-based math-
ematical material. Journal of Computer-Based Instruction, 8(2), 28–34.
Gibbs, G. (1988). Learning by doing: A guide to teaching and learning methods.
London: Further Education Unit.
Gilman, D. A. (1969). Comparison of several feedback methods for correcting errors
by computer-assisted instruction. Journal of Educational Psychology, 60(6),
503–508.
Goldstein, I. L., Emanuel, J. T., & Howell, W. C. (1968). Effect of percentage and
specificity of feedback on choice behavior in a probabilistic information-processing
task. Journal of Applied Psychology, 52, 163–168.
Goleman, D. (1995). Emotional intelligence. New York: Bantam.
Goodman, J., Wood, R. E., & Hendrickx, M. (2004). Feedback specificity, exploration,
and learning. Journal of Applied Psychology, 89, 248–262.
Graesser, A. C., McNamara, D., & VanLehn, K. (2005). Scaffolding deep comprehen-
sion strategies through AutoTutor and iSTART. Educational Psychologist, 40,
225–234.
Guzzo, R. A., Jette, R. D., & Katzell, R. A. (1985). The effects of psychologically based
intervention programs on worker productivity: A meta-analysis. Personnel Psychology,
38, 275–291.
Hanna, G. S. (1976). Effects of total and partial feedback in multiple-choice testing
upon learning. Journal of Educational Research, 69(5), 202–205.
Hartman, H. (2002). Scaffolding and cooperative learning. In Human learning and
instruction (pp. 23–69). New York: City College, University of New York.
Hodes, C. L. (1985). Relative effectiveness of corrective and noncorrective feedback in
computer assisted instruction on learning and achievement. Journal of Educational
Technology Systems, 13(4), 249–254.
Hoska, D. M. (1993). Motivating learners through CBI feedback: Developing a posi-
tive learner perspective. In V. Dempsey & G. C. Sales (Eds.), Interactive instruction
and feedback (pp. 105–132). Englewood Cliffs, NJ: Educational Technology
Publications.
Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feed-
back on behavior in organizations. Journal of Applied Psychology, 64, 349–371.
Johnson, D., & Johnson, R. (1993). Cooperative learning and feedback in technology-
based instruction. In J. Dempsey & G. Sales (Eds.), Interactive instruction and feed-
back (pp. 133–157). Englewood Cliffs, NJ: Educational Technology Publications.
Jurma, W. E., & Froelich, D. L. (1984). Effects of immediate instructor feedback on
group discussion participants. Central States Speech Journal, 35(3), 178–186.
Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integra-
tive/aptitude-treatment interaction approach to skill acquisition. Journal of Applied
Psychology, 74, 657–690.
Kippel, G. M. (1974). Information feedback schedules, interpolated activities, and
retention. Journal of Psychology, 87, 245–251.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on perfor-
mance: A historical review, a meta-analysis, and a preliminary feedback interven-
tion theory. Psychological Bulletin, 119(2), 254–284.
Kluger, A. N., & DeNisi, A. (1998). Feedback interventions: Toward the understand-
ing of a double-edged sword. Current Directions in Psychological Science, 7, 67–72.
Knoblauch, C. H., & Brannon, L. (1981). Teacher commentary on student writing: The
state of the art. Freshman English News, 10(2), 1–4.
Kolb, D. (1984). Experiential learning. Englewood Cliffs, NJ: Prentice Hall.
Kulhavy, R. W. (1977). Feedback in written instruction. Review of Educational
Research, 47, 211–232.
Kulhavy, R. W., & Anderson, R. C. (1972). Delay-retention effect with multiple-choice
tests. Journal of Educational Psychology, 63(5), 505–512.
Kulhavy, R. W., & Stock, W. (1989). Feedback in written instruction: The place of
response certitude. Educational Psychology Review, 1(4), 279–308.
Kulhavy, R. W., & Wager, W. (1993). Feedback in programmed instruction: Historical
context and implications for practice. In J. Dempsey & G. Sales (Eds.), Interactive
instruction and feedback (pp. 3–20). Englewood Cliffs, NJ: Educational Technology
Publications.
Kulhavy, R. W., White, M. T., Topp, B. W., Chan, A. L., & Adams, J. (1985). Feedback
complexity and corrective efficiency. Contemporary Educational Psychology, 10(3),
285–291.
Kulik, J. A., & Kulik, C. C. (1988). Timing of feedback and verbal learning. Review of
Educational Research, 58(1), 79–97.
Lepper, M. R., & Chabay, R. W. (1985). Intrinsic motivation and instruction:
Conflicting views on the role of motivational processes in computer-based educa-
tion. Educational Psychologist, 20(4), 217–230.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting & task performance.
Englewood Cliffs, NJ: Prentice Hall.
Malone, T. W. (1981). Toward a theory of intrinsically motivating instruction.
Cognitive Science, 5(4), 333–370.
Mason, B. J., & Bruning, R. (2001). Providing feedback in computer-based instruc-
tion: What the research tells us. Center for Instructional Innovation, University of
Nebraska–Lincoln: 14. Retrieved June 1, 2006, from http://dwb.unl.edu/Edit/MB/
MasonBruning.html
Mathan, S. A., & Koedinger, K. R. (2002). An empirical assessment of comprehension
fostering features in an intelligent tutoring system. In S. A. Cerri, G. Gouarderes, &
F. Paraguacu (Eds.), Intelligent Tutoring Systems, 6th International Conference, ITS
2002 (Vol. 2363, pp. 330–343). New York: Springer-Verlag.
Mayer, J. D., & Salovey, P. (1993). The intelligence of emotional intelligence. Intelligence,
17(4), 433–442.
Mayer, J. D., & Salovey, P. (1997). What is emotional intelligence? In P. Salovey &
D. Sluyter (Eds.), Emotional development and emotional intelligence: Implications
for educators (pp. 3–31). New York: Basic Books.
Mayer, R. E., & Moreno, R. (2002). Aids to computer-based multimedia learning.
Learning and Instruction, 12(1), 107–119.
McColskey, W., & Leary, M. R. (1985). Differential effects of norm-referenced and
self-referenced feedback on performance expectancies, attribution, and motivation.
Contemporary Educational Psychology, 10, 275–284.
Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explana-
tory versus corrective feedback in discovery-based multimedia. Instructional Science,
32, 99–113.
Morrison, G. R., Ross, S. M., Gopalakrishnan, M., & Casey, J. (1995). The effects of
feedback and incentives on achievement in computer-based instruction. Contemporary
Educational Psychology, 20(1), 32–50.
Mory, E. H. (1994). Adaptive feedback in computer-based instruction: Effects of
response certitude on performance, feedback-study time, and efficiency. Journal of
Educational Computing Research, 11(3), 263–290.
Mory, E. H. (2004). Feedback research review. In D. Jonassen (Ed.), Handbook of
research on educational communications and technology (pp. 745–783). Mahwah,
NJ: Lawrence Erlbaum.
Narciss, S., & Huth, K. (2004). How to design informative tutoring feedback for multi-
media learning. In H. M. Niegemann, D. Leutner, & R. Brunken (Eds.), Instructional
design for multimedia learning (pp. 181–195). Munster, NY: Waxmann.
Newman, M. I., Williams, R. G., & Hiller, J. H. (1974). Delay of information feedback
in an applied setting: Effects on initially learned and unlearned items. Journal of
Experimental Education, 42(4), 55–59.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional
design: Recent developments. Educational Psychologist, 38, 1–4.
Park, O., & Gittelman, S. S. (1992). Selective use of animation and feedback in com-
puter-based instruction. Educational Technology Research and Development, 40(4),
27–38.
Phye, G. D., & Andre, T. (1989). Delayed retention effect: Attention, perseveration, or
both? Contemporary Educational Psychology, 14(2), 173–185.
Phye, G. D., & Baller, W. (1970). Verbal retention as a function of the informativeness
and delay of informative feedback: A replication. Journal of Educational
Psychology, 61(5), 380–381.
Phye, G. D., & Bender, T. (1989). Feedback complexity and practice: Response pat-
tern analysis in retention and transfer. Contemporary Educational Psychology,
14(2), 97–110.
Phye, G. D., & Sanders, C. E. (1994). Advice and feedback: Elements of practice for
problem solving. Contemporary Educational Psychology, 19(3), 286–301.
Picard, R. W., Papert, S., Bender, W., Blumberg, B., Breazeal, C., Cavallo, D., et al.
(2004). Affective learning—A manifesto. BT Technology Journal, 22(4), 253–269.
Pound, L. D., & Bailey, G. D. (1975). Immediate feedback less effective than delayed
feedback for contextual learning? Reading Improvement, 12(4), 222–224.
Prather, D. C., & Berry, G. A. (1973). Delayed versus immediate information feedback
on a verbal learning task controlled for distribution of practice. Education, 93(3),
230–232.
Pridemore, D. R., & Klein, J. D. (1991). Control of feedback in computer-assisted
instruction. Educational Technology Research and Development, 39(4), 27–32.
Pridemore, D. R., & Klein, J. D. (1995). Control of practice and level of feedback in
computer-based instruction. Contemporary Educational Psychology, 20, 444–450.
Reddy, W. B. (1969). Effects of immediate and delayed feedback on the learning of
empathy. Journal of Counseling Psychology, 16(1), 59–62.
Roper, W. J. (1977). Feedback in computer assisted instruction. Programmed Learning
and Educational Technology, 14(1), 43–49.
Sales, G. C. (1993). Adapted and adaptive feedback in technology-based instruction.
In J. V. Dempsey & G. C. Sales (Eds.), Interpretive instruction and feedback
(pp. 159–175). Englewood Cliffs, NJ: Educational Technology Publications.
Salomon, G., & Globerson, T. (1987). Skill may not be enough: The role of mindful-
ness in learning and transfer. International Journal of Educational Research, 11(6),
623–637.
Schimmel, B. J. (1983, April). A meta-analysis of feedback to learners in computerized
and programmed instruction. Paper presented at the annual meeting of the American
Educational Research Association, Montréal. (ERIC Document Reproduction
Service No. 233708).
Schimmel, B. J. (1988). Providing meaningful feedback in courseware. In D. Jonassen
(Ed.), Instructional designs for microcomputer courseware (pp. 183–195). Hillsdale,
NJ: Lawrence Erlbaum.
Schmidt, R. A., & Bjork, R. A. (1992). New conceptualizations of practice: Common
principles in three paradigms suggest new concepts for training. Psychological
Science, 3(4), 207–217.
Schmidt, R. A., Young, D. E., Swinnen, S., & Shapiro, D. C. (1989). Summary knowl-
edge of results for skill acquisition: Support for the guidance hypothesis. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 15(2), 352–359.
Schroth, M. L. (1992). The effects of delay of feedback on a delayed concept forma-
tion transfer task. Contemporary Educational Psychology, 17(1), 78–82.
Schwartz, F., & White, K. (2000). Making sense of it all: Giving and getting online
course feedback. In K. W. White & B. H. Weight (Eds.), The online teaching guide:
A handbook of attitudes, strategies, and techniques for the virtual classroom
(pp. 57–72). Boston: Allyn & Bacon.
Shute, V. J., Hansen, E. G., & Almond, R. G. (2007). An assessment for learning sys-
tem called ACED: Designing for learning effectiveness and accessibility. ETS
Research Report No. RR-07-26, Princeton, NJ.
Shute, V. J., Woltz, D. J., & Regian, J. W. (1989, May). An investigation of learner dif-
ferences in an ITS environment: There’s no such thing as a free lunch. Paper pre-
sented at the 4th International Conference on Artificial Intelligence and
Education–AI-ED ‘89, Amsterdam, Holland.
Sleeman, D. H., Kelly, A. E., Martinak, R., Ward, R. D., & Moore, J. L. (1989). Studies
of diagnosis and remediation with high school algebra students. Cognitive Science,
13, 551–568.
Smith, P. L., & Ragan, T. J. (1999). Instructional design (2nd ed.). Upper Saddle River,
NJ: Prentice Hall.
Song, S. H., & Keller, J. M. (2001). Effectiveness of motivationally adaptive computer-
assisted instruction on the dynamic aspects of motivation. Educational Technology
Research and Development, 49(2), 5–22.
Surber, J. R., & Anderson, R. C. (1975). Delay-retention effect in natural classroom
settings. Journal of Educational Psychology, 67(2), 170–173.
Swan, M. B. (1983). Teaching decimal place value. A comparative study of conflict and
positively-only approaches. Research Report No. 31, University of Nottingham,
Shell Centre for Mathematical Education.
Sweller, J., Van Merriënboer, J., & Paas, F. (1998). Cognitive architecture and instruc-
tional design. Educational Psychology Review, 10, 251–296.
VandeWalle, D., Brown, S. P., Cron, W. L., & Slocum, L. W. (1999). The influence of
goal orientation and self-regulation tactics on sales performance: A longitudinal field
test. Journal of Applied Psychology, 84, 249–259.
VanLehn, K. (1982). Bugs are not enough: Empirical studies of bugs, impasses and
repairs in procedural skills. Journal of Mathematical Behavior, 3(2), 3–71.
VanLehn, K., Lynch, C., Schulze, K., Shapiro, J. A., Shelby, R., Taylor, L., et al.
(2005). The Andes physics tutoring system: Lessons learned. International Journal
of Artificial Intelligence in Education, 15(3). Retrieved May 22, 2006, from
http://www.andes.pitt.edu/Pages/AndesLessonsLearnedForWeb.pdf
Vygotsky, L. S. (1987). The collected works of L.S. Vygotsky. New York: Plenum.
Wager, W., & Wager, S. (1985). Presenting questions, processing responses, and pro-
viding feedback in CAI. Journal of Instructional Development, 8(4), 2–8.
Waldersee, R., & Luthans, F. (1994). The impact of positive and corrective feedback
on customer service performance. Journal of Organizational Behavior, 15(1),
83–95.
Waldrop, P. B., Justen, J. E., & Adams, T. M. (1986). A comparison of three types of
feedback in a computer-assisted instruction task. Educational Technology, 26(11),
43–45.
Whyte, M. M., Karolick, D. M., Neilsen, M. C., Elder, G. D., & Hawley, W. T. (1995).
Cognitive styles and feedback in computer-assisted instruction. Journal of
Educational Computing Research, 12(2), 195–203.
Wiliam, D. (2007). Keeping learning on track: Classroom assessment and the regula-
tion of learning. In F. K. Lester Jr. (Ed.), Second handbook of mathematics teaching
and learning (pp. 1053-1098). Greenwich, CT: Information Age Publishing.
Williams, S. E. (1997, March). Teachers’ written comments and students’ responses:
A socially constructed interaction. Proceedings of the annual meeting of the
Conference on College Composition and Communication, Phoenix, AZ. Retrieved
December 24, 2007, from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/
content_storage_01/0000019b/80/16/a8/1e.pdf
Author
VALERIE J. SHUTE is currently an associate professor in the Educational Psychology
and Learning Systems Department at Florida State University, Tallahassee, FL 32306;
e-mail vshute@fsu.edu. She joined Florida State University in 2007 and teaches grad-
uate students in the Instructional Systems program. Her research interests relate to
cognitive modeling/diagnosis, instructional system design, formative/stealth assess-
ment to support learning, and innovative evaluation methodologies, typically using
the structure and power of Bayes net technology to support the various efforts.