Does Discovery-Based Instruction Enhance Learning?
Louis Alfieri, Patricia J. Brooks, and
Naomi J. Aldrich
City University of New York
Harriet R. Tenenbaum
Kingston University
Discovery learning approaches to education have recently come under scrutiny (Tobias & Duffy, 2009),
with many studies indicating limitations to discovery learning practices. Therefore, 2 meta-analyses were
conducted using a sample of 164 studies: The 1st examined the effects of unassisted discovery learning
versus explicit instruction, and the 2nd examined the effects of enhanced and/or assisted discovery versus
other types of instruction (e.g., explicit, unassisted discovery). Random effects analyses of 580 comparisons revealed that outcomes were favorable for explicit instruction when compared with unassisted discovery under most conditions (d = −0.38, 95% CI [−.44, −.31]). In contrast, analyses of 360 comparisons revealed that outcomes were favorable for enhanced discovery when compared with other forms of instruction (d = 0.30, 95% CI [.23, .36]). The findings suggest that unassisted discovery does
not benefit learners, whereas feedback, worked examples, scaffolding, and elicited explanations do.
Keywords: discovery learning, explicit instruction, scaffolding
Supplemental materials: http://dx.doi.org/10.1037/a0021017.supp
The average student will be unable to recall most of the factual
content of a typical lecture within fifteen minutes after the end of
class. In contrast, interests, values, and cognitive skills are all likely to
last longer, as are concepts and knowledge that students have acquired
not by passively reading or listening to lectures but through their own
mental efforts. (Bok, 2006, pp. 48–49)
Over the past several decades, conventional explicit instruction
has been increasingly supplanted by approaches more closely
aligned with constructivist concepts of exploration, discovery, and
invention (i.e., discovery learning), at least in part because of an
appreciation of which learning outcomes are most valuable (Bok,
2006). Allowing learners to interact with materials, manipulate variables, explore phenomena, and attempt to apply principles affords them opportunities to notice patterns, discover underlying causalities, and learn in ways that are seemingly more robust.
Such self-guided learning approaches, as Piaget (1952, 1965, 1980) proposed, place the child/learner at the center of the learning process as he or she attempts to make sense of the world. From an
ecological perspective, people learn many complex skills without
formal instruction through participation in daily activities and
observation of others (Rogoff, 1990). Indeed, in cultures without
institutionalized formal education, complex skills and modes of
thought are learned in the absence of explicit, verbal teaching.
Nonetheless, debate remains concerning the limitations of discov-
ery learning (Bruner, 1961; Kirschner, Sweller, & Clark, 2006;
Klahr & Nigam, 2004; Mayer, 2004; Sweller, Kirschner, & Clark,
2007; Tobias & Duffy, 2009). Pedagogical and cognitive concerns
have led to some disagreement as to what constitutes effective
discovery learning methods and how and when such methods
should be applied. Two recent review articles (Kirschner et al.,
2006; Mayer, 2004) have outlined some of the problems associated
with various discovery-based instructional methods; however, no
systematic meta-analysis has been conducted on this literature. For
instance, it is unclear (a) whether the process of discovering information on one's own needs to be taught to learners (Ausubel,
1964; Bruner, 1961), (b) to what extent discovery tasks should be
structured (Mayer, 2004), (c) which types of tasks are within the
realm of discovery methods (Klahr & Nigam, 2004), and (d)
whether the working memory demands of discovery-learning sit-
uations jeopardize the efficacy of the instruction (Kirschner et al.,
2006). In the current meta-analyses, we evaluate these concerns.
A Definition of Discovery Learning
Before proceeding, it is necessary to reflect on the wide range of
instructional conditions that have been included under the rubric of
discovery learning. Because methods employing discovery learn-
ing involve a wide variety of intended accomplishments during the
acquisition of the target content, a definition of discovery learning
is needed. However, myriad discovery-based learning approaches are presented within the literature without a precise definition (Klahr & Nigam, 2004).

This article was published Online First November 15, 2010.
Louis Alfieri, Patricia J. Brooks, and Naomi J. Aldrich, Department of Psychology, College of Staten Island and the Graduate Center, City University of New York; Harriet R. Tenenbaum, School of Social Science, Kingston University, London, England.
The research reported is based on Louis Alfieri's doctoral dissertation submitted to the Doctoral Program in Cognition, Brain, and Behavior at the City University of New York. Preliminary results were presented at the biennial meeting of the Society for Research in Child Development, Denver, Colorado, April 2009. Purchase of software was supported by a Student-Faculty Research Technology Grant from the College of Staten Island/City University of New York awarded to Patricia J. Brooks.
Correspondence concerning this article should be addressed to Louis Alfieri or Patricia J. Brooks, Department of Psychology, College of Staten Island/City University of New York, 2800 Victory Boulevard, 4S-103, Staten Island, NY 10314. E-mail: alfieri_psych@hotmail.com or patricia.brooks@csi.cuny.edu
Journal of Educational Psychology, 2011, Vol. 103, No. 1, 1–18. © 2010 American Psychological Association. 0022-0663/10/$12.00 DOI: 10.1037/a0021017

Learning tasks considered to be
within the realm of discovery learning range from implicit pattern
detection (Destrebecqz, 2004; Jiménez, Méndez, & Cleeremans,
1996) to the elicitation of explanations (Chi, de Leeuw, Chiu, &
LaVancher, 1994; Rittle-Johnson, 2006), and from working
through manuals (Lazonder & van der Meij, 1993) to conducting
simulations (Stark, Gruber, Renkl, & Mandl, 1998). What exactly constitutes a discovery-learning situation has seemingly yet to be determined by the field as a whole. At times, the discovery condition
seems less influenced by the learning methods and more by the
comparison methods. That is, when a comparison group has re-
ceived some greater amount of explicit instruction, whatever the
type or degree, investigators often refer to the other group as a
discovery group because it has been assisted less during the
learning process.
A review of the literature suggests that discovery learning
occurs whenever the learner is not provided with the target infor-
mation or conceptual understanding and must find it independently
and with only the provided materials. Within discovery-learning
methods, there is an opportunity to provide the learners with
intensive or, conversely, minimal guidance, and both types can
take many forms (e.g., manuals, simulations, feedback, example
problems). The extent to which the learner is provided with assistance seems to be contingent on how difficult the target information is to discover with less assistance and on the instructional methodologies to which the condition is being compared. Common to all of
the literature, however, is that the target information must be
discovered by the learner within the confines of the task and its
material.
Concerns and Warnings About Discovery Learning
As early as the 1950s, research had begun to investigate the
effects of discovery learning methods in comparison with other
forms of instruction. Bruner (1961) and others (Ausubel, 1964;
Craig, 1965; Guthrie, 1967; Kagan, 1966; Kendler, 1966; Kersh,
1958, 1962; Ray, 1961; Scandura, 1964; Wittrock, 1963; Worthen,
1968) advocated learning situations that elicited explanations or
self-guided comprehension from learners and that provided oppor-
tunities for learners to gain insights into their domains of study.
Bruner emphasized that such discovery-based learning could enhance the entire learning experience, while also cautioning that such discovery could not occur a priori or without at least some base of knowledge in the domain in question. Although Bruner's
article has often been cited as support for discovery learning, many
have seemingly ignored his warnings (i.e., the limitations of such
an approach to instruction).
Recently, Mayer (2004) argued that pure, unassisted discovery-
learning practices should be abandoned because of a lack of
evidence that such practices improve learning outcomes. Through
a review of the literature, he illustrated that unassisted discovery-
learning tasks did not help learners discover problem-solving rules,
conservation strategies, or programming concepts. Mayer empha-
sized that although constructivist-based approaches might be ben-
eficial to learning under some circumstances, unassisted discovery
learning does not seem advantageous because of its lack of struc-
ture. He further emphasized that unassisted discovery-learning
tasks involving hands-on activities, even with large group discus-
sions, do not guarantee that learners will understand the task or that
they will come into contact with the to-be-learned material.
Furthermore, Klahr (2009) and others (Clark, 2009; Mayer,
2009; Rosenshine, 2009; Sweller, 2009) have emphasized that
there are times when more explicit instruction or at least directive
guidance is optimal. Although Klahr's concerns centered on teaching the control of variables strategy (CVS), his arguments regarding
instructional times, feedback, instructional sequences, and gener-
alization of skills emphasize that in certain situations some amount
of direct instruction is advantageous. In the case of CVS, Klahr has
argued that learners might have difficulty arriving at the proper
strategy of holding all other variables constant while manipulating
only one. He has explained that such scientific problem solving,
although commonplace to cognitive scientists who have a great
understanding of the cognitive processes involved in such a task,
might not arise simply by asking novice learners to figure out how
to use the provided materials. Even if such a strategy is reached
and implemented by learners, it might require a great deal of time,
which could have been saved through direct teaching of the CVS.
Klahr has suggested that perhaps it would be more time efficient to
instruct learners directly on how to implement CVS and then to
give them ample opportunities to practice it. Moreover, direct
instruction in CVS learning tasks might be necessary because the
manipulation of the materials alone does not provide sufficient
feedback; learners are not presented with any indication of short-
comings in their strategies if they fail to manipulate only one
variable at a time. Klahr has argued that by explicitly teaching learners about the cognitive processes involved in problem solving and the ways in which scientists go about uncovering causal factors, instructors will empower learners to use these skills; learners' understandings can then be strengthened by activities that afford them opportunities to practice these skills in a domain of interest and, consequently, to discover knowledge in that domain by doing so.
Similarly, Sweller et al. (2007) have emphasized the usefulness
of worked examples over other forms of instruction. They have
suggested that instructors should provide a complete problem
solution for learners to study and practice for themselves. They
have argued that such a learning technique would be superior to
less guided forms of instruction because of the limited capacity of
working memory. Although that claim is addressed in a subsequent
section, it is noteworthy that the encouragement to use worked
examples is similar to Klahr’s (2009) suggestion to demonstrate
CVS to learners and then to provide them with opportunities for
practice.
Direct Instruction and Construction
The example of teaching CVS directly, as described by Klahr
(2009), illustrates the variability of what is meant by direct in-
struction. Klahr did not suggest lecture-type instructional situa-
tions. Instead, he suggested some degree of guidance as to what
learners should expect as evidence of successful learning and then
giving them opportunities to practice using such skills on their
own. This suggestion is not unique to Klahr but has been raised by
a number of researchers on both sides of the debate (Clark, 2009;
Herman & Gomez, 2009; Kintsch, 2009; Pea, 2004; Rosenshine,
2009; Sweller et al., 2007; Wise & O’Neill, 2009). Although
Klahr’s arguments might not be appropriate in all domains or for
all learning tasks, his suggestions to employ direct instruction as a
basis for subsequent discovery address some of the concerns that
discovery-learning tasks lack structure and, therefore, overwhelm
the learner’s cognitive workspace.
Note also that Klahr (2009) did not position direct instruction in
opposition to constructivism in that he asserted that learners should
be provided with opportunities to manipulate materials directly. In
a way, Klahr might be helping to unite constructivism and more
direct forms of instruction by emphasizing that sometimes, as in
the case of CVS, direct instruction will facilitate constructivist
learning by reducing task ambiguities and learning times while
improving process comprehension and potential generalization.
More generally, Klahr’s suggestions to provide some amount of
direct instruction might reduce the cognitive demands of discovery
tasks by familiarizing learners with the processes involved, as we
discuss below.
Cognitive Factors
At the most basic level, memory is enhanced when learning
materials are generated by the learner in some way; this is com-
monly referred to as the generation effect (Slamecka & Graf,
1978). The robust effect is that materials generated or even merely
completed by learners are remembered more often and/or in
greater detail than materials provided by an instructor. This effect
is often presented as evidence that discovery learning is efficacious because such learning involves discovering and generating general principles or explanations of domain-specific patterns on one's own (Chi et al., 1994; Crowley & Siegler, 1999; Schwartz & Bransford, 1998). Therefore, the expectation is that discovery-based approaches, because of the requirement that learners construct their own understandings and
consequently the content, should yield greater learning, comprehension, and/or retention. Note, however, that the majority of tasks used in generation-effect studies are simple (e.g., recalling a word), unlike much of the research on discovery learning, which involves more complex tasks such as CVS.
Cognitive Load Theory and Concerns
With regard to the cognitive processes involved in discovery
learning, Mayer (2003) emphasized that discovery-based peda-
gogy works best in promoting meaningful learning when the
learner strives to make sense of the presented materials by select-
ing relevant incoming information, organizing it into a coherent
structure, and integrating it with other organized knowledge. How-
ever, to select, organize, and integrate high-level information in a
task-appropriate way is quite demanding of learners. Both Sweller
(1988) and Rittle-Johnson (2006) have emphasized that because
discovery learning relies on an extensive search through problem-
solving space, the process taxes learners’ limited working-memory
capacity and frequently does not lead to learning. In addition,
learners need the ability to monitor their own processes of atten-
tion to relevant information (Case, 1998; Kirschner et al., 2006).
This would seem to require learners to have considerable meta-
cognitive skills, and it is unlikely that all learners, in particular
children, would have such skills (Dewey, 1910; Flavell, 2000;
Kuhn & Dean, 2004). Thus, learning by discovery seems to require
a greater number of mental operations, as well as better executive
control of attention, in comparison with learning under a more
directive approach. Furthermore, cognitive load theory suggests
that the exploration of complex phenomena or learning domains
imposes heavy loads on working memory detrimental to learning
(Chandler & Sweller, 1991; Kirschner et al., 2006; Paas, Renkl, &
Sweller, 2003; Sweller, 1988, 1994).
Predictions
The cognitive demands involved in discovery-based pedagogies make them seem daunting and suggest a number of predictions.
For example, young learners (i.e., children) might be least likely to
benefit from such methods (Case, 1998; Kirschner et al., 2006;
Mayer, 2004) compared with their older counterparts. Younger
learners would have comparatively limited amounts of organized, preexisting knowledge and schemas with which to integrate new information effectively. Children also have more limited working memory capacities (Kirschner et al., 2006) and less experience in using the cognitive processes outlined by Mayer (2004) and others.
Furthermore, they lack the metacognitive skills required to monitor
their cognitive processes (Flavell, 2000; Kuhn & Dean, 2004).
Issues of Guidance Within the Debate Between
Constructivist Instruction and Explicit Instruction
Of course constructivism does not assert that all learning should
be unaided (Hmelo-Silver, Duncan, & Chinn, 2007; Schmidt,
Loyens, van Gog, & Paas, 2007; Spiro & DeSchryver, 2009).
Nonetheless, although guidance has been an important component
of instruction on both sides of the debate concerning constructivist
instruction (Tobias & Duffy, 2009), there remains a remarkable
number of discovery-based instructional tasks that are largely
unassisted. As Duffy (2009) has explained, explicit instruction
advocates seemingly intend for their students to reach their learn-
ing objectives in the most efficient ways possible, whereas con-
structivism advocates emphasize learners’ motivation and tend to
provide guidance or feedback only when learners prompt it
through inquiry.
An illustration of these different standpoints can be found in the correspondence of Fletcher (2009) with Schwartz, Lindgren, and Lewis (2009), in which Fletcher claimed that more direct forms of instruction work better when learners have little prior knowledge.
In response, Schwartz et al. provided the example of children
having to learn to tie their shoes without having ever seen a shoe
before. They argued that in such a case, hands-on exploration
would be optimal so that the children could familiarize themselves
with the layout of the shoe, its laces, and so forth. However,
because these children have never seen a shoe before, one might
argue just the opposite: to understand the utility of having shoes
tied, children should be provided explicitly with the task objective
and a means for achieving the goal.
Because their intentions and learning objectives are different
(Schwartz et al., 2009), the ways in which the explicit instruction
and constructivism camps understand learning situations are dif-
ferent (Duffy, 2009; Kuhn, 2007). However, both camps have
tended to include some forms of guidance within instructional
designs (Tobias & Duffy, 2009), and in the current analyses, it is
our intention to determine which types of enhancement are best.
Enhanced-discovery methods include a number of techniques from
feedback to scaffolding (Rosenshine, 2009), and many studies
have been conducted that have employed different forms and
degrees of guidance during learning tasks.
We conducted two meta-analyses because of the ambiguity
within the literature as to what constitutes a discovery-learning
method and how and when such methods should be applied. In the
first meta-analysis, we compared unassisted discovery-learning
methods (e.g., teaching oneself, completing practice problems,
conducting simulations) with more explicit instruction. In the
second meta-analysis, we compared enhanced discovery-learning
methods (e.g., guided discovery, elicited self-explanation) with a
variety of instructional conditions, including unassisted discovery
as well as explicit instruction.
Method
Literature Search
Articles examining different types of discovery learning were
identified through a variety of sources. The majority of the articles
were identified using PsycINFO, ERIC, and Dissertation Abstracts
International computerized literature searches. Studies were also
identified from citations in articles. The selection criterion for the
first meta-analysis was that studies had to test directly for differ-
ences between an explicit training or instruction condition (ex-
plicit) and a condition in which unassisted discovery learning
occurred, which was operationally defined as being provided with
no guidance or feedback during the learning task. The selection
criterion for the second meta-analysis was that the study included
a condition in which discovery learning was operationally defined
as being provided with guidance in the learning task, along with a
comparison condition. In other words, in the first meta-analysis,
we evaluated the effects of unassisted discovery-learning condi-
tions versus explicit instruction, whereas in the second meta-
analysis, we evaluated the effects of guided or enhanced
discovery-learning conditions versus other forms of instruction.
Exclusion criteria precluded the use of several potentially relevant studies. First, articles with unclear statistical information or those based solely on qualitative data were not included. Because we did not want to perform simply a sign test, we did not include articles that did not provide usable statistical information. However, before discarding any articles, authors were
information. However, before discarding any articles, authors were
contacted for information that could be included in the meta-
analysis. Second, articles needed to include comparable conditions
that consistently differed in the type of instruction. Those comparing conditions that were fundamentally different or that were conflated prior to testing could not be included.
Units of Analysis and Data Sets
Two units of analysis were considered separately: studies and comparisons. Studies as a unit of analysis referred to individual experiments with different participants; thus, multiple experiments reported within a single article were treated as separate studies if they involved different participants.
Comparisons were also used as a unit of analysis. Analysis at the level of comparisons refers to counting each individual statistical comparison as an independent contribution; articles that ran many comparisons therefore had more weight in the overall computation of the effect than those that ran fewer. Because many potentially moderating variables differed between comparisons, only one moderator
(i.e., publication rank) could be tested using studies as the unit of
analysis. All other moderators were analyzed at the level of com-
parisons. Although multiple comparisons reported for a single
sample violate assumptions of independence, analysis at this level
was required to test for effects of moderating variables.
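The comparison-level pooling described above can be illustrated with standard inverse-variance weighting of standardized mean differences, under which precise comparisons (and articles contributing more of them) carry more weight. This is a minimal fixed-effect sketch for illustration only, not the Comprehensive Meta-Analysis program's implementation; the function names and sample values are hypothetical, and the variance formula is the usual large-sample approximation for Cohen's d.

```python
import math

def d_variance(d, n1, n2):
    # Large-sample variance of Cohen's d for two independent groups.
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def pooled_d(comparisons):
    # Inverse-variance pooling: each comparison is weighted by
    # 1/variance, so precise comparisons (larger samples) count more.
    weights = [1.0 / d_variance(d, n1, n2) for d, n1, n2 in comparisons]
    total_w = sum(weights)
    mean_d = sum(w * d for w, (d, _, _) in zip(weights, comparisons)) / total_w
    se = math.sqrt(1.0 / total_w)
    return mean_d, (mean_d - 1.96 * se, mean_d + 1.96 * se)  # 95% CI

# Hypothetical comparisons: (d, n_group1, n_group2).
comparisons = [(-0.50, 30, 30), (-0.20, 60, 60), (-0.40, 25, 25)]
mean_d, ci = pooled_d(comparisons)
```

Note that the pooled estimate falls between the individual effect sizes, pulled toward the most precisely estimated comparison; a random effects model (as used in the analyses reported here) additionally adds a between-studies variance component to each weight.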
Variables Coded From Studies as Possible Moderators
for the Meta-Analyses
Six moderators were used for blocking purposes in both meta-
analyses. See Table 1 for the complete listing of the categories of
each moderator. Publication rank was the first moderator to be
considered. Studies from top-ranked journals were compared with
studies from other sources. Top-ranked journals included any
journal with an impact factor ≥ 1.5 on the basis of the 2001 listings
of impact factors. All other journal publications that ranked below
1.5 were coded as second-tier journal articles. Studies published in
book chapters were coded separately, and studies included in
dissertations or unpublished works (e.g., conference poster presentations) were coded separately.

Table 1
Categories of Each Moderator

Moderator              Categories
Publication rank       Journal impact factor ≥ 1.5; journal impact factor < 1.5; book chapters; unpublished/dissertations
Domain                 Math/numbers; computer skills; science; problem solving; physical/motor skills; verbal/social skills
Age                    Children: ≤12 years of age; adolescents: between 12 and 18 years of age; adults: ≥18 years of age
Dependent measure      All post-test scores, error rates, rates of error detection; acquisition scores; reaction time scores; self-ratings; peer ratings; mental effort/load ratings
Unassisted discovery   Unassisted, teaching oneself, practice problems; invention; other: matched guidance/probes in both discovery and comparison conditions; simulation; work with a naïve peer
Enhanced discovery     Generation; elicited explanation; guided discovery
Comparison condition   Direct teaching; feedback; worked examples with solutions provided; baseline; unassisted: no exposure nor explanation; enhanced: unassisted discovery or textbook only; explanations provided; other: study-specific condition

Although impact factors have
increased in the intervening years, the rank ordering of journals has
changed very little.
Second, the domains of the studies were considered. The fol-
lowing domains were coded for: (a) math/numbers, (b) computer
skills, (c) science, (d) problem solving, (e) physical/motor skills,
and (f) verbal/social skills. Next, the ages of participants were
coded. Participants were considered children if they were 12 years
of age or younger, adolescents if they were between 13 and 17
years of age, and adults if they were 18 years of age or older. If the
same statistical test included a range of ages, the mean age of the
sample was used for coding purposes. If the exact ages were not
provided but their grade levels were, participants were coded as
children through sixth grade, as adolescents from seventh to 12th
grades, and as adults thereafter.
The dependent variable was the next moderator considered.
Post-tests were assessments administered after the learning phases.
These scores included a variety of assessment types from pure
post-test scores to improvement scores, with previous assessments
used as baseline measures on tasks ranging from error detection/
correction to content recall, depending on the domain in question.
Acquisition scores included measurements of learning, success, or
failed attempts/errors during the learning phases. Reaction time
scores reflected the amount of time employed to arrive at the target
answer. Self-ratings included ratings by learners of their own
motivation levels, competencies, or other aspects of the learning
tasks. Peer ratings included ratings by observing peers or other
learners in regard to the learners’ competencies or other aspects of
the learning tasks. Mental effort/load ratings reflected scores determined by the experimenters, who calculated mental load as a function of the amount of information being considered, the number of variables to be manipulated, the number of possible solutions, and so forth that learners had to manage to complete the task successfully.
The fifth moderator to be considered was the type of discovery
learning condition employed. The types of discovery learning for
the first meta-analysis, comparing explicit with unassisted discovery learning conditions, included the following: unassisted, invention, matched probes, simulation, and work with a naïve peer. The
unassisted conditions included the learners’ investigation or ma-
nipulation of relevant materials without guidance, the learners
teaching themselves through trial-and-error or some other means,
and/or the learners attempting practice problems. The invention
conditions included tasks that required learners to invent their own
strategies or to design their own experiments. The matched probes
conditions included hints in the form of probe questions or mini-
mal types of feedback, which were provided to learners in both the
unassisted-discovery conditions and the explicit-instruction condi-
tions. For example, Morton, Trehub, and Zelazo (2003, Experi-
ment 2) asked 6-year-old children to decide whether a disembod-
ied voice was happy or sad and either provided them with
uninformative general instructions (i.e., unassisted discovery–
matched probes condition) or explicit instructions to attend to the
tone of voice (i.e., comparison condition–feedback condition). As
both groups of children were provided with feedback as to whether
they were correct or incorrect, this minimal form of feedback was
considered to be a matched probe. The simulation conditions
included computer-generated simulations that required learners to
manipulate components or to engage in some type of practice to
foster comprehension. The work with a naïve peer conditions were
those that paired learners with novice or equal learning partners.
The types of discovery learning for the second meta-analysis
were considered to be enhanced forms of discovery learning meth-
ods and included generation, elicited explanations, and guided
discovery conditions. Generation conditions required learners to
generate rules, strategies, images, or answers to experimenters’
questions. Elicited explanation conditions required that learners
explain some aspect of the target task or target material, either to
themselves or to the experimenters. The guided discovery condi-
tions involved either some form of instructional guidance (i.e.,
scaffolding) or regular feedback to assist the learner at each stage
of the learning tasks.
Lastly, the type of comparison condition was investigated. Di-
rect teaching conditions included the explicit teaching of strate-
gies, procedures, concepts, or rules in the form of formal lectures,
models, demonstrations, and so forth and/or structured problem
solving. Feedback conditions took priority over other coding and
included any instructional design in which experimenters re-
sponded to learners’ progress to provide hints, cues, or objectives.
Conditions of worked examples included provided solutions to
problems similar to the targets. Baseline conditions included de-
signs in which learners were not given the basic instructions
available to the discovery group, learners were asked to complete
an unrelated task that required as much time as the discovery
group’s intervention, or learners were asked to complete pre- and
post-tests only with a time interval matched to the discovery
group’s. The explanations provided conditions were those in
which explanations were provided to learners about the target
material or the goal task. The other category comprised conditions (one comparison in the analysis of unassisted discovery and two in the analysis of enhanced discovery) that were largely experiment-specific, in that they could not fairly be assigned any other code because the instructional manipulation involved only a minimal change in design.
Comparison conditions for the second meta-analysis included
all of the above except for feedback conditions. Also, the baseline
conditions for the second meta-analysis differed slightly in that
such conditions in the second meta-analysis more often involved
designs in which learners were asked to teach themselves either
through physical manipulations or through textbook learning (i.e.,
similar to the unassisted-discovery conditions of the first meta-
analysis), and designs in which only pre- and post-tests were
administered with interceding time intervals matched to the dis-
covery group.
Reliability on Moderators
Moderator codes were developed jointly by the four authors, who selected codes to capture the full range of conditions completely yet concisely. Reliability on all moderators for both meta-analyses was consistently high, with an overall kappa of .87. All disagreements were resolved through discussion of how best to classify the variable in question, both within the context of the study and for the purposes of analysis.
DISCOVERY-BASED INSTRUCTION
Computation and Analysis of Effect Sizes
Given the great variety of discovery learning designs and the
variety of undetermined factors involved in any potential effects, a
random effects model was used in all analyses in the Comprehen-
sive Meta-Analysis Version 2 (CMA) program (Borenstein,
Hedges, Higgins, & Rothstein, 2005). A random effects model is
appropriate when participant samples and intervention factors can-
not be presumed to be functionally equivalent. Consequently, the studies cannot be presumed to share a common effect size, because estimates may differ as a result of any number of between-study factors. However, in the current meta-
analyses, we report overall results from both fixed and random
effects models and then present subsequent results only from the
random effects model.
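Concretely, a random effects model adds a between-study variance component to each study's weight. A standard formulation (the generic random-effects equations, not specific to CMA's implementation) is:

```latex
\bar{d} = \frac{\sum_i w_i^{*} d_i}{\sum_i w_i^{*}},
\qquad
w_i^{*} = \frac{1}{v_i + \hat{\tau}^{2}},
```

where $v_i$ is the within-study sampling variance of effect size $d_i$ and $\hat{\tau}^{2}$ is the estimated between-study variance; setting $\hat{\tau}^{2} = 0$ recovers the fixed effects model.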
Effect sizes. Computation formulae included within the CMA
program allowed for direct entry of group statistics to calculate
effect sizes for each test-by-test comparison. When the only sta-
tistics available were F values and group means, DSTAT (John-
son, 1993) allowed us to convert those statistics to a common
metric, g, which represents the difference in standard deviation
units. More specifically, g is computed by calculating the differ-
ence of the two means divided by the pooled standard deviation of
the two samples (e.g., the difference between two groups’ mean
reaction times, divided by the pooled standard deviation). Those g
scores and other group statistics were then entered into the CMA
program. For analyses at the level of studies, overall g statistics
were calculated in DSTAT before entry into the CMA program.
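The computation described above can be sketched as follows. This is a minimal illustration of the standardized mean difference; the function and variable names are ours for illustration, not DSTAT's or CMA's:

```python
from math import sqrt

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Difference between two group means in pooled-standard-deviation units (g)."""
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Example: group means of 105 and 100, both SDs 10, n = 20 per group
g = standardized_mean_difference(105, 10, 20, 100, 10, 20)
print(round(g, 2))  # prints 0.5
```

With equal standard deviations the pooled SD equals the common SD, so the 5-point mean difference translates to half a standard deviation unit.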
Because g values may “overestimate the population effect size” when samples are small (Johnson, 1993, p. 19), Cohen’s d values are reported here as calculated by the CMA program. Cohen’s ds between 0.20 and 0.50 indicate a small effect size, ds between 0.50 and 0.80 indicate a medium effect, and ds greater than 0.80 indicate a large effect (Cohen, 1988). Of course, the effect size alone does not determine significance; we determined the significance of effect sizes on the basis of the p values of the resultant Z scores.
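The overestimation arises because the pooled standard deviation is itself estimated from small samples. The standard small-sample (Hedges) correction, sketched here on the assumption that DSTAT and CMA apply an equivalent form, shrinks g by a factor just below 1:

```latex
d \approx \left(1 - \frac{3}{4(n_1 + n_2) - 9}\right) g
```

For two groups of 10, for example, the factor is $1 - 3/71 \approx .958$, and the adjustment vanishes as total sample size grows.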
Post Hoc Comparisons
After grouping the effect sizes by a particular moderator and
finding significant heterogeneity among different levels of the
same moderator, each level was compared with all others within
the CMA program, indicated by Q, to determine whether the effect
sizes between the groups were significantly different from one
another. Post hoc p values were adjusted for the number of
comparisons conducted. For example, post hoc comparisons of the
domain categories required 15 comparisons and consequently led
to a set alpha level of .003 for levels to be considered significantly
different from one another.
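The adjustment in the example above follows from dividing the familywise alpha across all pairwise comparisons (a Bonferroni-style correction, which matches the .003 figure reported above). With six domain categories:

```latex
\binom{6}{2} = 15
\quad\Rightarrow\quad
\alpha_{\text{per comparison}} = \frac{.05}{15} \approx .003
```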
Results
The effect sizes comparing discovery conditions with other
forms of instruction were analyzed in four separate meta-analyses,
two at the level of studies and two at the level of comparisons.
Table 2 displays the results overall for each of the meta-analyses
and includes results for both fixed and random effects models.
Effect sizes were coded so that a negative effect size indicates that
participants in the compared instructional conditions evidenced
greater learning than participants in discovery conditions, whereas
a positive effect size indicates that participants in the discovery
conditions evidenced greater learning than participants in the com-
pared instructional conditions. Similarly, the effect sizes for the
dependent measures of reaction times and mental effort/load were
coded so that scores lower in number (i.e., faster reaction times,
less mental effort), which reflect better performance, would con-
sequently lead to positive effect sizes when the discovery group
outperformed the comparison group.
Moderators
An advantage of quantitative meta-analytic techniques is the
ability to examine potential moderators of relations with ample
statistical power. In the present meta-analyses, the following po-
tential moderators were investigated: publication rank, domain,
age of participants, dependent variable, type of discovery condi-
tion, and type of compared instructional condition. Whenever
heterogeneity of variance was indicated (Johnson, 1989), moder-
ators were tested for each of the meta-analyses. Post hoc p values
were used to determine statistical significance. All moderators for
both meta-analyses were examined using statistical comparisons as
Table 2
Summary of Effect Sizes

Level of analysis      Cohen’s d   95% CI         Z       p (Z)   N       Q         df (Q)   p (Q)
Unassisted discovery
  Studies, fixed       −0.30       [−.36, −.25]   −10.62  .00     5,226   522.11    107      .00
  Studies, random      −0.38       [−.50, −.25]   −5.69   .00     5,226
  Comparisons, fixed   −0.30       [−.32, −.27]   −23.08  .00     25,986  3,490.42  579      .00
  Comparisons, random  −0.38       [−.44, −.31]   −11.40  .00     25,986
Enhanced discovery
  Studies, fixed       0.26        [.20, .32]     8.39    .00     4,243   260.14    55       .00
  Studies, random      0.30        [.15, .44]     4.10    .00     4,243
  Comparisons, fixed   0.24        [.21, .26]     18.61   .00     25,925  2,037.19  359      .00
  Comparisons, random  0.30        [.23, .36]     9.12    .00     25,925
ALFIERI, BROOKS, ALDRICH, AND TENENBAUM
the unit of analysis, assuming independence, except for publication
rank, which was examined at the level of studies.
Unassisted Discovery
Overall effects. A total of 580 comparisons from 108 studies compared unassisted discovery learning with more explicit teaching methods. Table 3 lists each sample. With the random effects analysis, the 108 studies had a mean effect size of d = −0.38 (95% CI [−.50, −.25]), indicating that explicit teaching was more beneficial to learning than unassisted discovery. This constitutes a small but meaningful effect size (p < .001). The effects are highly heterogeneous across the studies, Q(107) = 522.11, p < .001.
Such heterogeneity is to be expected given the diversity of re-
search methods, participant samples, and learning tasks. To ad-
dress issues of publication bias, we calculated fail-safe Ns both at
the level of comparisons and at the level of studies with alphas set
to .05, two-tailed. At the level of comparisons, 3,588 unpublished
studies would be needed to alter the results so that the effect would
no longer be statistically significant. At the level of studies, 3,551
unpublished studies would be needed to reduce the effect to
nonsignificance.
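As a sketch of how such fail-safe Ns are conventionally derived (Rosenthal's summed-Z approach; CMA's exact implementation may differ in detail):

```latex
N_{fs} = \left(\frac{\sum_{i=1}^{k} Z_i}{Z_{\alpha}}\right)^{2} - k
```

where $k$ is the number of effects in the analysis and $Z_{\alpha}$ is the critical value for the chosen alpha and tails (1.96 for two-tailed $\alpha = .05$). $N_{fs}$ estimates how many unretrieved null-result studies (averaging $Z = 0$) would be needed to render the combined result nonsignificant.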
Moderators. First, using studies as the unit of analysis, the type of publication moderated the findings, Q(3) = 10.86, p < .05. Articles in first-tier journals (d = −0.67) evidenced larger effect sizes in favor of explicit instruction than did articles in second-tier publications (d = −0.24). Post hoc comparisons revealed that these mean effect sizes were significantly different from one another, Q(1) = 10.20, p < .008. Effect sizes from book chapters (d = −0.12) and unpublished works (d = −0.01) did not reach significance.
The domain was also found to moderate effect sizes, Q(5) = 91.75, p < .001. In the domains of math (d = −0.16), science (d = −0.39), problem solving (d = −0.48), and verbal and social skills (d = −0.95), participants evidenced less learning in the unassisted-discovery conditions than in the explicit conditions. Post hoc comparisons indicated that the mean effect size favoring explicit conditions within the verbal/social skills domain was significantly greater than within the domains of math, Q(1) = 50.03, p < .001; computer skills, Q(1) = 58.17, p < .001; science, Q(1) = 22.65, p < .001; problem solving, Q(1) = 18.35, p < .001; and physical/motor skills, Q(1) = 14.87, p < .001. The mean effect size favoring explicit conditions within the domain of problem solving was also significantly greater than within the domains of math, Q(1) = 13.65, p < .001, and computer skills, Q(1) = 28.29, p < .001. Lastly, the mean effect size favoring explicit conditions in the domain of science was significantly greater than within the domain of computer skills, Q(1) = 16.64, p < .001.
The next moderator investigated was participant age, which also moderated the findings, Q(2) = 12.29, p < .01. Effect sizes for all age groups showed significant advantages for more explicit instruction over unassisted discovery. Post hoc comparisons revealed that the mean effect size for adolescents (d = −0.53) was significantly greater than the mean effect size for adults (d = −0.26), Q(1) = 10.41, p < .001. The type of dependent variable was also found to moderate the findings, Q(5) = 37.38, p < .001. Measures of post-test scores (d = −0.35), acquisition scores (d = −0.95), and time to solution (d = −0.21) favored participants in explicit conditions. Post hoc comparisons indicated that the measure of acquisition scores led to significantly greater effect sizes in favor of explicit conditions than did the measures of post-test scores, Q(1) = 31.41, p < .001; time to solution, Q(1) = 23.84, p < .001; and self-ratings, Q(1) = 15.89, p < .001.
The type of unassisted-discovery condition moderated the findings, Q(4) = 10.02, p < .05, but post hoc comparisons failed to reveal any reliable differences. Performances were better under explicit conditions than they were in conditions in which learners worked with a naïve peer (d = −0.47), engaged in unassisted discovery (d = −0.41), and engaged in invention tasks (d = −0.34). Next, we investigated the explicit conditions to which unassisted-discovery conditions were compared. The type of explicit condition moderated the findings, Q(5) = 32.31, p < .001. Participants in unassisted discovery fared worse than participants in comparison conditions of direct teaching (d = −0.29), feedback (d = −0.46), worked examples (d = −0.63), and explanations provided (d = −0.28). Post hoc comparisons revealed that effect sizes for direct teaching and worked examples were significantly different from one another, Q(1) = 18.98, p < .001, indicating that worked examples outperformed unassisted discovery to a greater extent than direct teaching did. Post hoc comparisons also revealed that feedback, Q(1) = 9.15, p < .003, and worked examples, Q(1) = 13.70, p < .001, benefited learners more than having no exposure with pre- and post-tests only.
Overall, the findings indicate that explicit-instructional condi-
tions lead to greater learning than do unassisted-discovery condi-
tions. The lack of significant differences between the mean effect
sizes of the unassisted-discovery conditions helps to illustrate that
claim (see Tables 1–5 in the supplemental materials).
Enhanced Discovery
Overall effects. A total of 360 comparisons from 56 studies
compared enhanced discovery learning (i.e., generation, elicited
explanation, or guided discovery) with other types of instructional
methods. Table 4 lists each sample. With the random effects
analysis, the 56 studies had a mean effect size of d = 0.30 (95% CI [.15, .44]), indicating that enhanced-discovery methods led to greater learning than did comparison methods of instruction. This constitutes a small but meaningful effect size (p < .001). The effects are highly heterogeneous across the studies, Q(55) = 260.14, p < .001. Again, such heterogeneity is to be expected
given the diversity of research methods, participant samples, and
learning tasks. To address issues of publication bias, we calculated
fail-safe Ns both at the level of comparisons and at the level of
studies with alphas set to .05, two-tailed. At the level of compar-
isons, 4,138 unpublished studies would be needed to reduce the
effects to nonsignificance. At the level of studies, 960 unpublished
studies would be needed to reduce effects to nonsignificance.
Moderators. First, using studies as the unit of analysis, the type of publication moderated the findings, Q(2) = 18.66, p < .001. Articles in first-tier journals (d = 0.35) and second-tier journals (d = 0.40) generally favored enhanced-discovery conditions, whereas data sets from unpublished studies and dissertations did not (d = −0.54). Post hoc comparisons revealed that although the effect sizes derived from first-tier and second-tier journal articles were not significantly different, Q(1) = 0.10, ns, the mean
Table 3
Samples Included in the Unassisted Discovery Meta-Analysis
Author(s)  Year  Discovery n  Comparison n  Cohen’s d  Domain  Age group  Journal rank
Alibali 1999 26 29.25 0.89 Math/numbers Children Journal 1.5
Anastasiow et al. 1970 6 6 0.06 Math/numbers Children Journal 1.5
Bannert 2000 37 35 0.74 Computer skills Adults Journal 1.5
Belcastro 1966 189 189 0.26 Math/numbers Adolescents Journal 1.5
Bobis et al. (Experiment 1) 1994 15 15 1.07 Math/numbers Children Journal 1.5
Bobis et al. (Experiment 2) 1994 10 10 1.11 Math/numbers Children Journal 1.5
Bransford & Johnson (Experiment 1) 1972 10 10 0.63 Verbal/social skills Adolescents Journal 1.5
Bransford & Johnson (Experiment 2) 1972 17 17.5 0.60 Verbal/social skills Adults Journal 1.5
Bransford & Johnson (Experiment 4) 1972 9 11 0.50 Verbal/social skills Adolescents Journal 1.5
Brant et al. 1991 33 35 0.55 Science Adults Journal 1.5
Brown et al. (Experiment 3) 1989 21 16 0.17 Problem solving Children Journal 1.5
Butler et al. 2006 34 28 0.01 Math/numbers Children Unpublished/dissertation
Cantor et al. 1982 24 24 0.46 Math/numbers Children Journal 1.5
Carroll (Experiment 1) 1994 16.8 16.8 0.89 Math/numbers Adolescents Journal 1.5
Carroll (Experiment 2) 1994 12 12 2.05 Math/numbers Adolescents Journal 1.5
Charney et al. 1990 20 45 0.33 Computer skills Adults Journal 1.5
Craig 1965 30 30 0.11 Math/numbers Adults Journal 1.5
Danner & Day 1977 20 20 0.86 Science Adolescents Journal 1.5
Destrebecqz (Experiment 1) 2004 20 20 0.56 Problem solving Adults Journal 1.5
Destrebecqz (Experiment 2) 2004 12 12 2.36 Problem solving Adults Journal 1.5
Elias & Allen 1991 37.86 34.43 0.01 Problem solving Children Journal 1.5
Elshout & Veenman (Experiment 1) 1992 4.5 4.25 0.19 Science Adults Journal 1.5
Elshout & Veenman (Experiment 2) 1992 4.4 5 0.24 Science Adults Journal 1.5
Fender & Crowley (Experiment 2) 2007 12 12 1.04 Science Children Journal 1.5
Guthrie 1967 18 18 0.64 Problem solving Adults Journal 1.5
Hendrickson & Schroeder 1941 30 30 0.32 Physical/motor skills Adolescents Journal 1.5
Hendrix 1947 13 13.5 0.51 Math/numbers Adults Journal 1.5
Hodges & Lee 1999 8 8.5 0.39 Physical/motor skills Adults Journal 1.5
Howe et al. (Experiment 2) 2005 36 36 0.43 Science Children Journal 1.5
Howe et al. (Experiment 3) 2005 36 36 0.29 Science Children Journal 1.5
Jackson et al. 1992 36 24 0.23 Math/numbers Children Journal 1.5
Jiménez et al. 1996 6 6 0.00 Verbal/social skills Adults Journal 1.5
Kalyuga et al. (Experiment 1) 2001 9 8 0.78 Math/numbers Adults Journal 1.5
Kalyuga et al. (Experiment 2) 2001 9 8 0.28 Math/numbers Adults Journal 1.5
Kalyuga et al. (Experiment 1) 2001 12 12 0.53 Computer skills Adults Journal 1.5
Kalyuga et al. (Experiment 2) 2001 12 12 0.70 Computer skills Adults Journal 1.5
Kamii & Dominick 1997 16.29 16.71 0.21 Math/numbers Children Journal 1.5
Kelemen 2003 12 11 0.82 Science Children Journal 1.5
Kersh 1958 16 16 0.18 Math/numbers Adults Journal 1.5
Kersh 1962 10 10 0.50 Math/numbers Adolescents Journal 1.5
King 1991 8 7.5 0.58 Problem solving Children Journal 1.5
Kittell 1957 45 43.5 0.78 Verbal/social skills Children Journal 1.5
Klahr & Nigam 2004 52 52 1.14 Science Children Journal 1.5
Kuhn & Dean 2005 12 12 1.18 Science Children Journal 1.5
Lawson & Wollman 1976 16 16 0.82 Science Adolescents Journal 1.5
Lazonder & van der Meij 1993 30 34 0.67 Computer skills Adults Journal 1.5
Lazonder & van der Meij 1994 21 21 0.05 Computer skills Adults Journal 1.5
Lazonder & van der Meij 1995 25 25 0.44 Computer skills Adults Journal 1.5
Lee & Thompson 1997 66 64 0.92 Computer skills Adults Journal 1.5
Leutner (Experiment 1) 1993 16 16 0.09 Problem solving Adolescents Journal 1.5
Leutner (Experiment 2) 1993 19 19 0.36 Problem solving Adults Journal 1.5
Leutner (Experiment 3) 1993 20 20 0.38 Problem solving Adolescents Journal 1.5
McDaniel & Pressley (Experiment 1) 1984 16.6 17.6 1.21 Verbal/social skills Adults Journal 1.5
McDaniel & Pressley (Experiment 2) 1984 21 21 1.06 Verbal/social skills Adults Journal 1.5
McDaniel & Schlager (Experiment 1) 1990 31 29.5 0.00 Problem solving Adults Journal 1.5
McDaniel & Schlager (Experiment 2) 1990 60 60 0.42 Problem solving Adults Journal 1.5
Messer et al. (Experiment 1) 1993 14 13 0.32 Science Children Journal 1.5
Messer et al. (Experiment 2) 1993 18 20 1.14 Science Children Journal 1.5
Messer, Mohamedali, & Fletcher 1996 21 20 0.34 Problem solving Children Journal 1.5
Messer, Norgate, et al. (Experiment 1) 1996 11.75 10.5 0.89 Science Children Journal 1.5
Messer, Norgate, et al. (Experiment 2) 1996 16 15 0.43 Science Children Journal 1.5
Morton et al. (Experiment 2) 2003 15.29 16.14 2.19 Verbal/social skills Children Journal 1.5
Mwangi & Sweller (Experiment 1) 1998 9 9 0.46 Math/numbers Children Journal 1.5
effect size from unpublished works and dissertations differed from both the mean effect size from first-tier journals, Q(1) = 9.65, p < .003, and the mean effect size from second-tier journals, Q(1) = 21.59, p < .001.
Domain was also found to moderate the findings, Q(5) = 65.53, p < .001. In the domains of math (d = 0.29), computer skills (d = 0.64), science (d = 0.11), physical/motor skills (d = 1.05), and verbal and social skills (d = 0.58), participants evidenced more learning in the enhanced-discovery conditions than in the comparison conditions. Post hoc comparisons indicated that the mean effect size in the physical/motor domain was significantly greater than the effect sizes in the domains of math, Q(1) = 34.59, p < .001; science, Q(1) = 41.67, p < .001; and problem solving, Q(1) = 15.73, p < .001. Also, the mean effect size for the domain of computer skills was significantly greater than the effect sizes in the domains of math, Q(1) = 12.14, p < .001, and science, Q(1) = 18.65, p < .001.
The next moderator, participant age, also influenced the findings, Q(2) = 10.68, p < .01. Post hoc comparisons revealed that the mean effect size for adults was significantly greater than the effect size for children, Q(1) = 7.64, p < .01. Although superficially there was a greater difference between the mean effect sizes of adults and adolescents, that difference was not found to be significant because of the larger variance within the adolescents (95% CI [−.04, .33]). Next, the type of dependent variable was found to moderate the findings, Q(4) = 64.60, p < .001. Measures of post-test scores (d = 0.28), acquisition scores (d = 0.54), and self-ratings (d = 1.25) favored participants in enhanced-discovery conditions over participants in comparison conditions, whereas measures of reaction times (d = −0.72) favored participants in
Table 3 (continued)
Author(s)  Year  Discovery n  Comparison n  Cohen’s d  Domain  Age group  Journal rank
Nadolski et al. 2005 11 12 0.09 Problem solving Adults Journal 1.5
O’Brien & Shapiro 1977 15 15 0.15 Math/numbers Adults Journal 1.5
Paas 1992 13 15 2.25 Math/numbers Adolescents Journal 1.5
Paas & Van Merriënboer 1994 30 30 0.77 Problem solving Adults Journal 1.5
Pany & Jenkins 1978 6 6 1.93 Verbal/social skills Children Journal 1.5
Peters 1970 30 30 0.25 Math/numbers Children Journal 1.5
Pillay (Experiment 1) 1994 10 20 1.09 Problem solving Adolescents Journal 1.5
Pillay (Experiment 2) 1994 10 20 0.78 Problem solving Adolescents Journal 1.5
Pine et al. 1999 14 14 0.74 Science Children Journal 1.5
Quilici & Mayer (Experiment 1) 1996 27 54 0.92 Math/numbers Adults Journal 1.5
Quilici & Mayer (Experiment 2) 1996 18 18 1.69 Math/numbers Adults Journal 1.5
Radziszewska & Rogoff 1991 20 20 1.25 Problem solving Children Journal 1.5
Rappolt-Schlichtmann et al. 2007 27 37 0.61 Science Children Journal 1.5
Reinking & Rickman 1990 45 15 1.09 Verbal/social skills Children Journal 1.5
Rieber & Parmley 1995 25 27.5 0.65 Science Adults Journal 1.5
Rittle-Johnson 2006 21 21.5 0.23 Math/numbers Children Journal 1.5
Salmon et al. 2007 16 16 1.66 Verbal/social skills Children Journal 1.5
Scandura (Experiment 2) 1964 23 23 0.00 Math/numbers Children Journal 1.5
Shore & Durso 1990 60 60 0.14 Verbal/social skills Adults Journal 1.5
Shute et al. 1989 10 10 0.42 Math/numbers Adults Book chapter
Siegel & Corsini 1969 12 12 0.90 Problem solving Children Journal 1.5
Singer & Gaines 1975 19 18 0.27 Physical/motor skills Adults Journal 1.5
Stark et al. 1998 15 15 0.54 Math/numbers Adults Journal 1.5
Strand-Cary & Klahr 2008 29 32 0.85 Science Children Journal 1.5
Sutherland et al. 2003 12 11.5 0.10 Verbal/social skills Children Journal 1.5
Swaak et al. 2004 67 55 0.56 Science Adolescents Journal 1.5
Swaak et al. 1998 21 21 0.44 Science Adults Journal 1.5
Sweller et al. (Experiment 1) 1990 16 16 0.20 Math/numbers Adolescents Journal 1.5
Sweller et al. (Experiment 3) 1990 12 12 1.78 Math/numbers Adolescents Journal 1.5
Tarmizi & Sweller (Experiment 3) 1988 10 10 0.20 Math/numbers Adolescents Journal 1.5
Tarmizi & Sweller (Experiment 4) 1988 10 10 0.28 Math/numbers Adolescents Journal 1.5
Tarmizi & Sweller (Experiment 5) 1988 10 10 0.71 Math/numbers Adolescents Journal 1.5
Trafton & Reiser 1993 20 20 0.39 Computer skills Adults Journal 1.5
Tunteler & Resing 2002 18 18 2.19 Problem solving Children Journal 1.5
van der Meij & Lazonder 1993 13 12 1.03 Computer skills Adults Journal 1.5
van hout Wolters 1990 24 24 0.54 Science Adolescents Book chapter
Veenman et al. 1994 15 14 0.49 Science Adults Journal 1.5
Ward & Sweller (Experiment 1) 1990 21 21 1.07 Science Adolescents Journal 1.5
Ward & Sweller (Experiment 2) 1990 16 16 1.52 Science Adolescents Journal 1.5
Ward & Sweller (Experiment 3) 1990 17 17 0.25 Science Adolescents Journal 1.5
Ward & Sweller (Experiment 4) 1990 15 15 0.42 Science Adolescents Journal 1.5
Ward & Sweller (Experiment 5) 1990 15.5 15.5 0.47 Science Adolescents Journal 1.5
Wittrock 1963 67 75 0.84 Verbal/social skills Adults Journal 1.5
Worthen 1968 216 216 0.08 Math/numbers Children Journal 1.5
Zacharia & Anderson 2003 13 13 4.62 Science Adults Journal 1.5
comparison conditions over participants in enhanced-discovery conditions. Post hoc comparisons indicated that the measure of post-test scores led to significantly greater effect sizes in favor of participants in enhanced-discovery conditions than did the measure of self-ratings, Q(1) = 29.68, p < .001. Comparisons also indicated that the mean effect size derived from reaction time measures was significantly different (i.e., significantly opposite in effect size direction) from both the mean effect size derived from acquisition scores, Q(1) = 10.19, p < .001, and the mean effect size derived from post-tests, Q(1) = 31.61, p < .001. Lastly, the
Table 4
Studies Included in the Enhanced Discovery Meta-Analysis
Author(s)  Year  Discovery n  Comparison n  Cohen’s d  Domain  Age group  Journal rank
Amsterlaw & Wellman 2006 12 12 1.11 Verbal/social skills Children Journal 1.5
Anastasiow et al. 1970 6 6 0.08 Math/numbers Children Journal 1.5
Andrews 1984 25 28 1.27 Science Adults Journal 1.5
Bielaczyc et al. 1995 11 13 0.95 Computer skills Adults Journal 1.5
Bluhm 1979 20 17 1.44 Science Adults Journal 1.5
Bowyer & Linn 1978 312 219 0.20 Science Children Journal 1.5
Butler et al. 2006 32 31 0.02 Math/numbers Children Unpublished/dissertation
Chen & Klahr 1999 30 30 0.07 Science Children Journal 1.5
Chi et al. 1994 14 10 0.94 Science Adolescents Journal 1.5
Coleman et al. 1997 14 14 0.61 Science Adults Journal 1.5
Crowley & Siegler 1999 57 57 0.25 Problem solving Children Journal 1.5
Debowski et al. 2001 24 24 1.07 Computer skills Adults Journal 1.5
Denson 1986 45 34 0.10 Science Adults Unpublished/dissertation
Foos et al. (Experiment 1) 1994 78 90 0.53 Science Adults Journal 1.5
Foos et al. (Experiment 2) 1994 25 25 0.71 Science Adults Journal 1.5
Gagné & Brown 1961 11 11 1.41 Math/numbers Adolescents Journal 1.5
Ginns et al. (Experiment 1) 2003 10 10 0.67 Computer skills Adults Journal 1.5
Ginns et al. (Experiment 2) 2003 13 13 0.67 Math/numbers Adolescents Journal 1.5
Grandgenett & Thompson 1991 72 71 0.05 Computer skills Adults Journal 1.5
Greenockle & Lee 1991 20 20 0.48 Physical/motor skills Adults Journal 1.5
Hiebert & Wearne 1993 24 21.25 0.70 Math/numbers Children Journal 1.5
Hirsch 1977 61 76 0.56 Math/numbers Adolescents Journal 1.5
Howe et al. (Experiment 1) 2005 31 30 0.15 Science Children Journal 1.5
Howe et al. (Experiment 2) 2005 35 36 0.15 Science Children Journal 1.5
Howe et al. (Experiment 3) 2005 35.5 36 0.34 Science Children Journal 1.5
Jackson et al. 1992 12 24 0.01 Math/numbers Children Journal 1.5
Kasten & Liben 2007 34 99 0.42 Problem solving Children Journal 1.5
Kersh 1958 16 16 0.12 Math/numbers Adults Journal 1.5
Kersh 1962 10 10 0.10 Math/numbers Adolescents Journal 1.5
Kuhn et al. 2000 21 21 0.29 Science Adolescents Journal 1.5
Lamborn et al. 1994 113 113 1.06 Verbal/social skills Adolescents Journal 1.5
Murphy & Messer 2000 41 40.5 0.46 Science Children Journal 1.5
Mwangi & Sweller (Experiment 3) 1998 12 12 0.04 Math/numbers Children Journal 1.5
Öhrn et al. 1997 11 12 0.99 Science Adults Journal 1.5
Olander & Robertson 1973 190 184 0.02 Math/numbers Children Journal 1.5
Peters 1970 30 30 0.09 Math/numbers Children Journal 1.5
Pillow et al. 2002 15 15 0.44 Verbal/social skills Children Journal 1.5
Pine & Messer 2000 40 44 0.55 Science Children Journal 1.5
Pine et al. 1999 14 14 0.35 Science Children Journal 1.5
Ray 1961 45 45 0.44 Math/numbers Adolescents Journal 1.5
Reid et al. 2003 20 18 0.16 Science Adolescents Journal 1.5
Rittle-Johnson 2006 22 21 0.19 Math/numbers Children Journal 1.5
Rittle-Johnson et al. 2008 36 18 0.81 Problem solving Children Journal 1.5
Scandura (Experiment 1) 1964 23 23 0.00 Math/numbers Children Journal 1.5
Singer & Pease 1978 16 16 2.62 Physical/motor skills Adults Journal 1.5
Stark et al. 2002 27 27 0.94 Math/numbers Adults Journal 1.5
Stull & Mayer (Experiment 1) 2006 51 52.5 0.60 Science Adults Unpublished/dissertation
Stull & Mayer (Experiment 2) 2006 38 39 1.14 Science Adults Unpublished/dissertation
Stull & Mayer (Experiment 3) 2006 33 32.5 1.10 Science Adults Unpublished/dissertation
Tarmizi & Sweller (Experiment 2) 1988 12 12 0.08 Math/numbers Adolescents Journal 1.5
Tenenbaum et al. 2008 32 30.5 0.20 Verbal/social skills Children Journal 1.5
Tuovinen & Sweller 1999 16 16 0.67 Computer skills Adults Journal 1.5
Vichitvejpaisal et al. 2001 40 40 0.28 Science Adults Journal 1.5
Zhang et al. (Experiment 1) 2004 13 13.67 0.16 Computer skills Adolescents Journal 1.5
Zhang et al. (Experiment 2) 2004 14 16 0.36 Computer skills Adolescents Journal 1.5
Zimmerman & Sassenrath 1978 119.67 119.67 0.51 Math/numbers Children Journal 1.5
mean effect size for self-ratings, which favored enhanced discovery, was found to be significantly different from (i.e., opposite in direction to) the mean effect size for mental effort/load, which showed trends favoring other forms of instruction.
The type of enhanced-discovery condition used also moderated the findings, Q(2) = 65.00, p < .001. Elicited explanation (d = 0.36) and guided discovery (d = 0.50) favored enhanced discovery, whereas generation (d = −0.15) favored other instructional methods. Post hoc comparisons indicated that, indeed, generation conditions were significantly different in their effect sizes when compared with both elicited explanation, Q(1) = 33.20, p < .001, and guided discovery, Q(1) = 57.43, p < .001, but the effect sizes for elicited explanation and guided discovery did not differ from one another. Next, we investigated the instructional conditions to which enhanced-discovery conditions were compared, but the type of comparison condition failed to moderate the findings, Q(4) = 9.12, p = .06. With the exception of worked examples (d = 0.06, ns), all other comparison conditions indicated significantly superior performances in the enhanced-discovery conditions.
Overall, results seem to favor enhanced-discovery methods over
other forms of instruction. However, the dependent measure and
the type of enhanced discovery employed affected the outcome
assessments (see Tables 6–10 in the supplemental materials).
Discussion
In the first meta-analysis, our intention was to investigate under
which conditions unassisted discovery learning might lead to bet-
ter learning outcomes than explicit-instructional tasks. However,
more explicit-instructional tasks were found to be superior to
unassisted-discovery tasks. Moreover the type of publication, the
domain of study, the age of participants, the dependent measure,
the type of unassisted-discovery task, and the comparison condi-
tion all moderated outcomes. Post hoc comparisons revealed that
on average, publications in first-tier journals showed greater ben-
efits for explicit-instructional tasks than did publications in
second-tier journals. Among the variety of different domains in
which more explicit instruction was found to benefit learners,
verbal and social learning tasks seemed to favor explicit instruc-
tion most, followed by problem solving and science. Adolescents
were found to benefit significantly more from explicit instruction
than did adults. Analysis of dependent measures indicated that
learners’ acquisition scores showed a greater detriment under
discovery conditions than did post-test scores, time to solution, and
self-ratings. Although the type of unassisted-discovery task mod-
erated trends favoring explicit instruction, unassisted tasks, tasks
requiring invention, and tasks involving collaboration with a naïve
peer were all found to be equally detrimental to learning. Analyses
of the types of explicit instruction in the comparison conditions
indicated that worked examples benefited learners more than direct
teaching and also indicated that feedback and providing explana-
tions are useful aids to learning. The finding that worked examples
evidenced greater learning than did unassisted discovery is ex-
pected given the worked-example effect (Sweller et al., 2007).
However, the finding that worked examples benefitted learners to
a greater extent than did direct teaching was unexpected.
In the second meta-analysis, we investigated under which con-
ditions enhanced forms of discovery-learning tasks might be ben-
eficial. This meta-analysis showed better learning for enhanced-
discovery instructional methods, with the type of publication, the
domain, the age of participants, the dependent measure, and the
type of enhanced-discovery task moderating the findings. Unpub-
lished studies and dissertations were found to show disadvantages
for enhanced-discovery conditions, whereas first- and second-tier
journal articles favored enhanced discovery. Of the different task
domains, physical/motor skills, computer skills, and verbal and
social skills benefited most from enhanced discovery. Because of
concerns that the domain category of physical/motor skills might
be dominating the overall analysis of enhanced discovery, those 24
comparisons were removed, and analyses were run again. The
removal of physical/motor skills from the overall analyses under
the random effects model only reduced the mean effect size
slightly (i.e., from d = 0.30 to d = 0.25). Consequently, we
retained the category of physical/motor skills within our analyses.
Analyses revealed that adult participants benefited more from
enhanced discovery than did children. Of the three types of enhanced
discovery, the generation method of enhanced discovery failed to
produce learning benefits over other instructional methods, which
was unexpected given the typical benefits reported as the genera-
tion effect (Bertsch, Pesta, Wiscott, & McDaniel, 2007; Slamecka
& Graf, 1978). It should be noted that the advantage of other forms
of instruction over generation also led to the finding that unpub-
lished studies and dissertations showed an advantage for other
forms of instruction over enhanced discovery. This was because four
of the five studies sampled from unpublished works or dissertations
employed generation conditions. Although
the meta-analysis indicated that the type of comparison condition
did not moderate the results, note that enhanced discovery was
generally better than both direct teaching and explanations pro-
vided. Thus, the construction of explanations or participation in
guided discovery is better for learners than being provided with an
explanation or explicitly taught how to succeed on a task, in
support of constructivist claims. Analysis of the dependent mea-
sure indicated that although learners’ post-test and acquisition
scores benefited from enhanced-discovery tasks, reaction times did
not. This suggests that learners may take more time to find prob-
lem solutions or to perform target responses when engaged in
enhanced-discovery tasks.
In regard to the large mean effect size for the category of
comparison conditions labeled other, note that this category
included only two comparisons; they were retained to ensure
complete coverage of comparison conditions even though they did
not fit into the other categories. The participants in the first
other comparison condition were
asked the same questions that were asked of the elicited explana-
tions group, but the elicited explanations condition required par-
ticipants to provide a specific target answer before proceeding to
the next question, and the comparison condition did not. The
participants in the second other comparison condition were asked
to discuss how/why things balance on a beam within a group
without input from the experimenter and were compared with
participants who were asked to explain to the experimenter who
guided the learner with subsequent questions toward the target
explanation.
The moderating effect of age across the two meta-analyses did
not follow the expected pattern of results. First, it was the
adolescent age group, rather than the children as had been
predicted, that benefited least from unassisted-discovery conditions.
Although enhanced-discovery conditions led to better learning
outcomes for all age groups, adults seemed to benefit from
enhanced-discovery tasks more so than children. Interestingly, the
adolescents tended to benefit least and the adults tended to benefit
most from both unassisted-discovery tasks and enhanced-
discovery tasks. One might speculate that the negative trend
among adolescents could reflect a general lack of motivation or
lack of domain-relevant knowledge (Mayer, 2009). However, if
the trend was the result of a lack of domain-relevant knowledge,
one might expect to see even larger deficits in children. With
regard to the adults, perhaps their greater domain-relevant
knowledge helped them to succeed on unassisted-discovery tasks to a
greater extent than the adolescents. It is also possible that the tasks
used in the enhanced-discovery studies were more appropriate for
adult learners (e.g., having participants explain the strategies they
were using to solve problems) than for young learners. Organizing
guidance to facilitate discovery requires sensitivity to the learner’s
zone of proximal development (Pea, 2004; Vygotsky, 1962) if it is
to be maximally useful.
Implications for Teaching
The results of the first meta-analysis indicate that unassisted
discovery generally does not benefit learning. Although direct
teaching is better than unassisted discovery, providing learners
with worked examples or timely feedback is preferable. Whereas
providing well-timed, individualized feedback to all learners might
be impossible (e.g., in a classroom setting), providing such feed-
back on homework assignments seems possible and worthwhile.
Students might also benefit from having worked examples pro-
vided on those homework assignments, when the content allows
for it. Furthermore, the second meta-analysis suggests that teach-
ing practices should employ scaffolded tasks that have support in
place as learners attempt to reach some objective, and/or activities
that require learners to explain their own ideas. The benefits of
feedback, worked examples, scaffolding, and elicited explanation
can be understood to be part of a more general need for learners to
be redirected, to some extent, when they are mis-constructing.
Feedback, scaffolding, and elicited explanations do so in more
obvious ways through an interaction with the instructor, but
worked examples help lead learners through problem sets in their
entireties and perhaps help to promote accurate constructions as a
result. Although our suggestions are conservative as to how to
apply the current findings, we suspect and hope that these analyses
will be influential in subsequent designs, both instructional and
empirical.
Theoretical Implications
Perhaps the inferior outcomes of unassisted-discovery tasks
should not be surprising; Hake (2004) referred to such methods as
extreme modes of discovery and pointed out that methods with
almost no teacher guidance will, of course, be inferior to more
guided methods. It does not seem that many researchers on either
side of the argument would disagree with such a claim (Tobias &
Duffy, 2009). Nonetheless, it seems that many of Mayer’s (2004)
concerns are justified. Unassisted-discovery tasks appear inferior
to more instructionally guided tasks, whether explicit instruction or
enhanced discovery. Mayer’s concern that unassisted-discovery
tasks do not lead learners to construct accurate understandings of
the problem set illustrates the potential disconnect between activity
and constructivist learning. As Mayer has pointed out, it has been
the accepted practice to consider hands-on activities as equivalent
to constructivism, but active instructional methods do not always
lead to active learning, and passive methods do not always lead to
passive learning (Mayer, 2009).
Recently, Chi (2009) outlined the theoretical and behavioral
differences between learning tasks that require the learner to be
active and learning tasks that require the learner to be constructive,
and she emphasized that the two are not one and the same. Although
a meta-analysis of Chi’s claims would be optimal to support her
outline, she has nonetheless provided tentative explanations that
are useful and that agree, to some extent, with the points of
Mayer (2004). She explained that although activities
requiring hands-on active participation from learners guarantee a
level of engagement greater than passive reception of information,
these activities do not guarantee that learners will be engaged to
the extent necessary to make sense of the materials for themselves.
From Chi’s perspective, learning activities entailing true construc-
tivism should require learners not only to engage in the learning
task (e.g., manipulate objects or paraphrase) but also to construct
ideas that surpass the presented information (e.g., to elaborate,
predict, reflect). Chi’s emphasis that constructivism should require
learners to achieve these higher order objectives—similar to those
outlined by Fletcher (2009) that include analysis, evaluative abil-
ities, and creativity—illustrates that the objectives of constructiv-
ism are at least, in part, present within the learning activity itself.
Perhaps the completely unguided discovery activities objected
to by Mayer (2004) were too ambiguous to allow learners to
transcend the mere activity and to reach the level of constructivism
intended. Through more guided tasks, the learner is potentially
liberated from high demands on working memory and executive
functioning abilities (Chi, 2009; Kirschner et al., 2006; Mayer,
2003; Rittle-Johnson, 2006; Sweller, 1988; Sweller et al., 2007)
and can therefore direct his/her efforts toward more creative pro-
cesses (e.g., inference, integration, and reorganization) as outlined
by both Chi (2009) and Fletcher (2009). Our finding that genera-
tion is not an optimal form of enhanced discovery may illustrate
this claim. The generation conditions required learners to generate
rules, strategies, or images or to answer questions about the infor-
mation, but there was little consistency in the extent to which
learners had to go beyond the presented information to do so. Of
the three types of enhanced discovery, generation required the least
engagement of learners with respect to the types of activities that
Chi identified as constructive.
The finding that enhanced forms of discovery are superior to
unassisted forms also calls into question ecological perspectives of
learning inherent within discovery pedagogy and perhaps con-
structivism more generally. Although it seems reasonable to expect
learners to be able to construct their own understandings with
minimal assistance because they do so on a daily basis in the
context of everyday activities, perhaps the content and context of
formal education are extraordinary (Geary, 2008) and conse-
quently require more assistance to arrive at accurate construc-
tions, understandings, and solutions (Sweller et al., 2007). It is
also possible that people often learn what they do within daily
life activities through forms of guided participation (Rogoff,
1990).
The Potential of Teaching Discovery
In light of the previous discussion of Mayer (2004) and Chi
(2009), we should return to the possibility that it might serve
educators and students alike to spend time learning the procedures
of discovery (Ausubel, 1964; Bielaczyc, Pirolli, & Brown, 1995;
Bruer, 1993; Dewey, 1910; Karpov & Haywood, 1998; King,
1991; Kozulin, 1995; Kuhn, Black, Keselman, & Kaplan, 2000).
Teaching learners first to be discoverers (e.g., how to navigate the
problem-solving space, use limited working memory capacities
efficiently, and attend to relevant information) could prepare them
(Bruner, 1961) for active learning demands, as outlined by Chi,
and perhaps provide some of the needed curricular focus and
necessary structure to discovery tasks, as emphasized by Mayer
(2004). Furthermore, by having learners better familiarized with
the processes of discovery, the cognitive load demands (Kirschner
et al., 2006; Rittle-Johnson, 2006; Sweller, 1988) might be re-
duced. Consequently, this might allow learners to engage with the
learning tasks not only in active ways but also constructively (i.e.,
in the ways outlined by Chi, 2009) to allow them to go beyond the
presented information. Bruner (1961, p. 26) emphasized that dis-
covery encourages learners to be constructivists and that practice
in discovering teaches the learner how best to acquire information
to make it more readily available. Again, Bruner implied that the
act of discovering is one that requires practice to be of value.
Bruner (1961) also warned that the learner’s mind has to be
prepared for discovery. The preparation that Bruner emphasized
was not merely an existing knowledge base regarding the domain
of study; he also emphasized that learning by discovery does not
necessarily involve the acquisition of new information. Bruner
claimed that discovery was more often the result of a learner
gaining insights that transform his/her knowledge base through
new ways of organizing the previously learned information. Fur-
thermore, the prepared mind for Bruner was one with experience
in discovery itself:
It goes without saying that, left to himself, the child will go about
discovering things for himself within limits. It also goes without
saying that there are certain forms of child rearing, certain home
atmospheres that lead some children to be their own discoverers more
than other children. (Bruner, 1961, p. 22)
Bruner (1961), like Vygotsky (1962), suggested that the narra-
tive of teaching is a conversation that is appropriated by the learner
who can subsequently use that narrative to teach himself/herself.
Bruner emphasized that opportunities for discovery might facili-
tate this process. Consequently, it seems reasonable to conclude
that discovery might itself be a scripted tool (i.e., a narrative) for
making sense of materials on one’s own (Arievitch & Stetsenko,
2000; Kozulin, 1995; Stetsenko & Arievitch, 2002; Wertsch,
1981). The steps and procedures of that script are not innate to the
learner but need to be presented by teachers or parents, as empha-
sized by Bruner, because they are part of a culture (e.g., the culture
of formal education). Thus, if learning through discovery is supe-
rior to other forms of instruction, then it might serve educators and
students alike to spend time learning the procedures of discovery
(Ausubel, 1964; Bielaczyc et al., 1995; Bruer, 1993; Dewey, 1910;
Karpov & Haywood, 1998; King, 1991; Kozulin, 1995; Kuhn et
al., 2000). Generally, teaching the procedures of discovery to
learners might provide some of the needed curricular focus and
necessary structure to discovery instructional methods (concerns
raised by Mayer, 2004). It might also reduce the cognitive de-
mands of discovery learning tasks and make such methods more
easily employed (a concern raised by Kirschner et al., 2006;
Sweller et al., 2007).
Although we have suggested teaching learners how to discover,
we do not mean to imply that we have arrived at some oversim-
plified strategy for discovery that can bridge all domains or learn-
ing tasks. On the contrary, directly instructing learners on
problem-solving skills, analogies, and other cognitive processes
should not be expected to lead learners to generalize those skills to
all other areas of learning (Klahr, 2009; Sweller et al., 2007; Wise
& O’Neill, 2009). However, providing ample opportunities for
learners to discover when and where those processes are appro-
priate could lead learners to such discovery-based constructivism
only after those processes have been taught directly within the
contexts of their appropriate domains.
More generally, teaching students how to be constructive learners
might begin with more basic preparation. Perhaps many learners
are not prepared for such activities, and educational reform needs to
focus first at the level of reading comprehension to teach students
how to make sense of new information (Herman & Gomez, 2009)
because domain-relevant information might be essential for suc-
cessful construction of novel understandings during instruction,
particularly in ill-structured domains (Rosenshine, 2009; Spiro &
DeSchryver, 2009). Herman and Gomez (2009, p. 70) have out-
lined several reading support tools designed to help students
understand science texts in meaningful and useful ways. Although
these tools need first to be taught explicitly, they could provide
self-guidance while reading science texts thereafter. Perhaps
similar reading support tools need to be developed for other texts
as well, so that students come to view textbooks as helpful
resources in their environments with which they can interact in
meaningful ways to reach objectives (the definition of learning
proposed by Gresalfi & Lester, 2009). These tools
could establish foundations for learning that might not be readily
generalizable from the moment that they are mastered but can be
after practice, after experience in different contexts, and in the
presence of scaffolding and feedback (Wise & O’Neill, 2009).
Conclusion
Overall, the effects of unassisted-discovery tasks seem limited,
whereas enhanced-discovery tasks requiring learners to be actively
engaged and constructive seem optimal. On the basis of the current
analyses, optimal approaches should include at least one of the
following: (a) guided tasks that have scaffolding in place to assist
learners, (b) tasks requiring learners to explain their own ideas and
ensuring that these ideas are accurate by providing timely feed-
back, or (c) tasks that provide worked examples of how to succeed
in the task. Opportunities for constructive learning might not
present themselves when learners are left unassisted. Perhaps the
findings of these meta-analyses can help to move the debate away
from issues of unassisted forms of discovery and toward a fruitful
discussion and consequent empirical investigations of how scaf-
folding is best implemented, how to provide feedback in classroom
settings, how to create worked examples for varieties of content,
and when during the learning task direct forms of instruction
should be provided.
References
References marked with an asterisk indicate studies included in the
meta-analysis.
*Alibali, M. W. (1999). How children change their minds: Strategy change
can be gradual or abrupt. Developmental Psychology, 35, 127–145.
doi:10.1037/0012-1649.35.1.127
*Amsterlaw, J., & Wellman, H. M. (2006). Theories of mind in transition:
A microgenetic study of the development of false belief understanding.
Journal of Cognition and Development, 7, 139–172. doi:10.1207/
s15327647jcd0702_1
*Anastasiow, N. J., Sibley, S. A., Leonhardt, T. M., & Borich, G. D.
(1970). A comparison of guided discovery, discovery and didactic
teaching of math to kindergarten poverty children. American Educa-
tional Research Journal, 7, 493–510.
*Andrews, J. D. W. (1984). Discovery and expository learning compared:
Their effects on independent and dependent students. Journal of Edu-
cational Research, 78, 80–89.
Arievitch, I. M., & Stetsenko, A. (2000). The quality of cultural tools and
cognitive development: Gal’perin’s perspective and its implications.
Human Development, 43, 69–92. doi:10.1159/000022661
Ausubel, D. P. (1964). Some psychological and educational limitations of
learning by discovery. The Arithmetic Teacher, 11, 290–302.
*Bannert, M. (2000). The effects of training wheels and self-learning
materials in software training. Journal of Computer Assisted Learning,
16, 336 –346. doi:10.1046/j.1365-2729.2000.00146.x
*Belcastro, F. P. (1966). Relative effectiveness of the inductive and de-
ductive methods of programming algebra. Journal of Experimental
Education, 34, 77–82.
Bertsch, S., Pesta, B. J., Wiscott, R., & McDaniel, M. A. (2007). The
generation effect: A meta-analytic review. Memory & Cognition, 35,
201–210.
*Bielaczyc, K., Pirolli, P. L., & Brown, A. L. (1995). Training in self-
explanation and self-regulation strategies: Investigating the effects of
knowledge acquisition activities on problem solving. Cognition and
Instruction, 13, 221–252. doi:10.1207/s1532690xci1302_3
*Bluhm, W. J. (1979). The effects of science process skill instruction on
preservice elementary teachers’ knowledge of, ability to use, and ability
to sequence science process skills. Journal of Research in Science
Teaching, 16, 427–432. doi:10.1002/tea.3660160509
*Bobis, J., Sweller, J., & Cooper, M. (1994). Demands imposed on
primary-school students by geometric models. Contemporary Educa-
tional Psychology, 19, 108 –117. doi:10.1006/ceps.1994.1010
Bok, D. (2006). Our underachieving colleges: A candid look at how much
students learn and why they should be learning more. Princeton, NJ:
Princeton University Press.
Borenstein, M., Hedges, L., Higgins, J., & Rothstein, H. (2005). Compre-
hensive Meta-Analysis Version 2. Englewood, NJ: Biostat.
*Bowyer, J. B., & Linn, M. C. (1978). Effectiveness of the science
curriculum improvement study in teaching scientific literacy. Journal of
Research in Science Teaching, 15, 209–219. doi:10.1002/
tea.3660150304
*Bransford, J. D., & Johnson, M. K. (1972). Contextual prerequisites for
understanding: Some investigations of comprehension and recall. Jour-
nal of Verbal Learning and Verbal Behavior, 11, 717–726. doi:10.1016/
S0022-5371(72)80006-9
*Brant, G., Hooper, E., & Sugrue, B. (1991). Which comes first the
simulation or the lecture? Journal of Educational Computing Research,
7, 469–481.
*Brown, A. L., Kane, M. J., & Long, C. (1989). Analogical transfer in
young children: Analogies as tools for communication and exposition.
Applied Cognitive Psychology, 3, 275–293. doi:10.1002/
acp.2350030402
Bruer, J. T. (1993). Schools for thought: A science of learning in the
classroom. Cambridge, MA: MIT Press.
Bruner, J. S. (1961). The act of discovery. Harvard Educational Review,
31, 21–32.
*Butler, C., Pine, K., & Messer, D. J. (2006, September). Conceptually and
procedurally based teaching in relation to children’s understanding of
cardinality. Paper presented at the British Psychological Society Devel-
opmental Section Conference, Royal Holloway University of London,
Egham, Surrey.
*Cantor, G. N., Dunlap, L. L., & Rettie, C. S. (1982). Effects of reception
and discovery instruction on kindergarteners’ performance on probabil-
ity tasks. American Educational Research Journal, 19, 453–463.
*Carroll, W. M. (1994). Using worked examples as an instructional support
in the algebra classroom. Journal of Educational Psychology, 86,
360–367. doi:10.1037/0022-0663.86.3.360
Case, R. (1998). The development of conceptual structures. In D. Kuhn &
R. S. Siegler (Eds.), Handbook of child psychology: Cognition, percep-
tion, and language (Vol. 2, pp. 745– 800). New York, NY: Wiley.
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of
instruction. Cognition and Instruction, 8, 293–332. doi:10.1207/
s1532690xci0804_2
*Charney, D., Reder, L., & Kusbit, G. W. (1990). Goal setting and
procedure selection in acquiring computer skills: A comparison of
tutorials, problem solving, and learner exploration. Cognition and In-
struction, 7, 323–342. doi:10.1207/s1532690xci0704_3
*Chen, Z., & Klahr, D. (1999). All other things being equal: Acquisition
and transfer of the control of variables strategy. Child Development, 70,
1098–1120. doi:10.1111/1467-8624.00081
Chi, M. T. H. (2009). Active-constructive-interactive: A conceptual frame-
work for differentiating learning activities. Topics in Cognitive Science,
1, 73–105. doi:10.1111/j.1756-8765.2008.01005.x
*Chi, M. T. H., de Leeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting
self-explanations improves understanding. Cognitive Science, 18,
439–477.
Clark, R. E. (2009). How much and what type of guidance is optimal for
learning from instruction? In S. Tobias & T. M. Duffy (Eds.), Construc-
tivist theory applied to instruction: Success or failure? (pp. 158 –183).
New York, NY: Taylor & Francis.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences
(2nd ed.). Hillsdale, NJ: Erlbaum.
*Coleman, E. B., Brown, A. L., & Rivkin, I. D. (1997). The effect of
instructional explanations on learning from scientific texts. The Journal
of the Learning Sciences, 6, 347–365. doi:10.1207/s15327809jls0604_1
*Craig, R. C. (1965). Discovery, task completion, and the assignment as
factors in motivation. American Educational Research Journal, 2, 217–
222.
*Crowley, K., & Siegler, R. S. (1999). Explanation and generalization in
young children’s strategy learning. Child Development, 70, 304–316.
doi:10.1111/1467-8624.00023
*Danner, F. W., & Day, M. C. (1977). Eliciting formal operations. Child
Development, 48, 1600–1606. doi:10.2307/1128524
*Debowski, S., Wood, R. E., & Bandura, A. (2001). Impact of guided
exploration and enactive exploration on self-regulatory mechanisms and
information acquisition through electronic search. Journal of Applied
Psychology, 86, 1129–1141. doi:10.1037/0021-9010.86.6.1129
*Denson, D. W. (1986). The relationships between cognitive styles, method
of instruction, knowledge, and process skills of college chemistry stu-
dents (Unpublished doctoral dissertation, University Microfilms No.
87–05059). University of Southern Mississippi, Hattiesburg, MS.
*Destrebecqz, A. (2004). The effect of explicit knowledge on sequence
learning: A graded account. Psychologica Belgica, 44, 217–247.
Dewey, J. (1910). How we think. Boston, MA: D. C. Heath. doi:10.1037/
10903-000
Duffy, T. M. (2009). Building line of communication and a research
agenda. In S. Tobias & T. M. Duffy (Eds.), Constructivist theory applied
to instruction: Success or failure? (pp. 351–367). New York, NY:
Taylor & Francis.
*Elias, M. J., & Allen, G. J. (1991). A comparison of instructional methods
for delivering a preventive social competence/social decision making
program to at risk, average, and competent students. School Psychology
Quarterly, 6, 251–272. doi:10.1037/h0088819
*Elshout, J. J., & Veenman, M. V. J. (1992). Relation between intellectual
ability and working method as predictors of learning. Journal of Edu-
cational Research, 85, 134 –143.
*Fender, J. G., & Crowley, K. (2007). How parent explanation changes
what children learn from everyday scientific thinking. Journal of
Applied Developmental Psychology, 28, 189–210. doi:10.1016/j
.appdev.2007.02.007
Flavell, J. H. (2000). Development of children’s knowledge about the
mental world. International Journal of Behavioral Development, 24,
15–23. doi:10.1080/016502500383421
Fletcher, J. D. (2009). From behaviorism to constructivism. In S. Tobias &
T. M. Duffy (Eds.), Constructivist theory applied to instruction: Success
or failure? (pp. 242–263). New York, NY: Taylor & Francis.
*Foos, P. W., Mora, J. J., & Tkacz, S. (1994). Student study techniques and
the generation effect. Journal of Educational Psychology, 86, 567–576.
doi:10.1037/0022-0663.86.4.567
*Gagné, R. M., & Brown, L. T. (1961). Some factors in the programming
of conceptual learning. Journal of Experimental Psychology, 62, 313–
321. doi:10.1037/h0049210
Geary, D. C. (2008). Whither evolutionary educational psychology? Edu-
cational Psychologist, 43, 217–226. doi:10.1080/00461520802392240
*Ginns, P., Chandler, P., & Sweller, J. (2003). When imagining informa-
tion is effective. Contemporary Educational Psychology, 28, 229–251.
doi:10.1016/S0361-476X(02)00016-4
*Grandgenett, N., & Thompson, A. (1991). Effects of guided programming
instruction on the transfer of analogical reasoning. Journal of Educa-
tional Computing Research, 7, 293–308.
*Greenockle, K. M., & Lee, A. (1991). Comparison of guided and discov-
ery learning strategies. Perceptual and Motor Skills, 72, 1127–1130.
doi:10.2466/PMS.72.4.1127-1130
Gresalfi, M. S., & Lester, F. (2009). What’s worth knowing in mathemat-
ics? In S. Tobias & T. M. Duffy (Eds.), Constructivist theory applied to
instruction: Success or failure? (pp. 264 –290). New York, NY: Taylor
& Francis.
*Guthrie, J. T. (1967). Expository instruction versus a discovery method.
Journal of Educational Psychology, 58, 45–49. doi:10.1037/h0024112
Hake, R. R. (2004, August). Direct instruction suffers a setback in Cali-
fornia—Or does it? Paper presented at the 129th National AAPT Meet-
ing, Sacramento, CA.
*Hendrickson, G., & Schroeder, W. H. (1941). Transfer of training in
learning to hit a submerged target. Journal of Educational Psychology,
32, 205–213. doi:10.1037/h0056643
*Hendrix, G. (1947). A new clue to transfer of training. The Elementary
School Journal, 48, 197–208. doi:10.1086/458927
Herman, P., & Gomez, L. M. (2009). Taking guided learning theory to
school. In S. Tobias & T. M. Duffy (Eds.), Constructivist theory applied
to instruction: Success or failure? (pp. 62–81). New York, NY: Taylor
& Francis.
*Hiebert, J., & Wearne, D. (1993). Instructional tasks, classroom dis-
course, and students’ learning in second-grade arithmetic. American
Educational Research Journal, 30, 393–425.
*Hirsch, C. R. (1977). The effects of guided discovery and individualized
instructional packages on initial learning, transfer, and retention in
second-year algebra. Journal for Research in Mathematics Education, 8,
359–368. doi:10.2307/748407
Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding
and achievement in problem-based and inquiry learning: A response to
Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42,
99–107.
*Hodges, N. J., & Lee, T. D. (1999). The role of augmented information
prior to learning a bimanual visual-motor coordination task: Do instruc-
tions of the movement pattern facilitate learning relative to discovery
learning? British Journal of Psychology, 90, 389–403. doi:10.1348/
000712699161486
*Howe, C., McWilliam, D., & Cross, G. (2005). Chance favours only the
prepared mind: Incubation and the delayed effects of peer collaboration.
British Journal of Psychology, 96, 67–93. doi:10.1348/
000712604X15527
*Jackson, A. C., Fletcher, B. C., & Messer, D. J. (1992). When talking
doesn’t help: An investigation of microcomputer-based group problem
solving. Learning and Instruction, 2, 185–197. doi:10.1016/0959-
4752(92)90008-A
*Jiménez, L., Méndez, C., & Cleeremans, A. (1996). Comparing direct and
indirect measures of sequence learning. Journal of Experimental Psy-
chology: Learning, Memory, and Cognition, 22, 948–969. doi:10.1037/
0278-7393.22.4.948
Johnson, B. (1989). DSTAT: Software for the meta-analytic review of
research literature. Hillsdale, NJ: Erlbaum.
Johnson, B. (1993). DSTAT 1.10 software for the meta-analytic review of
research literature: Upgrade documentation. Hillsdale, NJ: Erlbaum.
Kagan, J. (1966). Learning, attention, and the issue of discovery. In L. S.
Shulman & E. R. Keislar (Eds.), Learning by discovery: A critical
appraisal (pp. 151–161). Chicago, IL: Rand McNally.
*Kalyuga, S., Chandler, P., & Sweller, J. (2001). Learner experience and
efficiency of instructional guidance. Educational Psychology, 21, 5–23.
doi:10.1080/01443410124681
*Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001). When
problem solving is superior to studying worked examples. Journal of
Educational Psychology, 93, 579–588. doi:10.1037/0022-0663.93.3.579
*Kamii, C., & Dominick, A. (1997). To teach or not to teach algorithms.
Journal of Mathematical Behavior, 16, 51–61. doi:10.1016/S0732-
3123(97)90007-9
Karpov, Y. V., & Haywood, H. C. (1998). Two ways to elaborate Vy-
gotsky’s concept of mediation: Implications for instruction. American
Psychologist, 53, 27–36. doi:10.1037/0003-066X.53.1.27
*Kastens, K. A., & Liben, L. S. (2007). Eliciting self-explanations im-
proves children’s performance on a field-based map skills task. Cogni-
tion and Instruction, 25, 45–74.
*Kelemen, D. (2003). British and American children’s preferences for
teleo-functional explanations of the natural world. Cognition, 88, 201–
221. doi:10.1016/S0010-0277(03)00024-6
Kendler, H. H. (1966). Reflections on the conference. In L. S. Shulman &
E. R. Keislar (Eds.), Learning by discovery: A critical appraisal (pp.
171–176). Chicago, IL: Rand McNally.
*Kersh, B. Y. (1958). The adequacy of “meaning” as an explanation for the
superiority of learning by independent discovery. Journal of Educa-
tional Psychology, 49, 282–292. doi:10.1037/h0044500
*Kersh, B. Y. (1962). The motivating effect of learning by directed
discovery. Journal of Educational Psychology, 53, 65–71. doi:10.1037/
h0044269
*King, A. (1991). Effects of training in strategic questioning on children’s
problem-solving performance. Journal of Educational Psychology, 83,
307–317. doi:10.1037/0022-0663.83.3.307
Kintsch, W. (2009). Learning and constructivism. In S. Tobias & T. M.
Duffy (Eds.), Constructivist theory applied to instruction: Success or
failure? (pp. 223–241). New York, NY: Taylor & Francis.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41, 75–86. doi:10.1207/s15326985ep4102_1
15
DISCOVERY-BASED INSTRUCTION
*Kittell, J. E. (1957). An experimental study of the effect of external
direction during learning on transfer and retention of principles. Journal
of Educational Psychology, 48, 391–405. doi:10.1037/h0046792
Klahr, D. (2009). “To every thing there is a season, and a time to every
purpose under the heavens”: What about direct instruction? In S. Tobias
& T. M. Duffy (Eds.), Constructivist theory applied to instruction:
Success or failure? (pp. 291–310). New York, NY: Taylor & Francis.
*Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in
early science instruction: Effects of direct instruction and discovery
learning. Psychological Science, 15, 661–667. doi:10.1111/j.0956-
7976.2004.00737.x
Kozulin, A. (1995). The learning process: Vygotsky’s theory in the mirror
of its interpretations. School Psychology International, 16, 117–129.
doi:10.1177/0143034395162003
Kuhn, D. (2007). Is direct instruction an answer to the right question? Educational Psychologist, 42, 109–113.
*Kuhn, D., Black, J., Keselman, A., & Kaplan, D. (2000). The development of cognitive skills to support inquiry learning. Cognition and Instruction, 18, 495–523. doi:10.1207/S1532690XCI1804_3
Kuhn, D., & Dean, D. (2004). Connecting scientific reasoning and causal
inference. Journal of Cognition and Development, 5, 261–288. doi:
10.1207/s15327647jcd0502_5
*Kuhn, D., & Dean, D. (2005). Is developing scientific thinking all about learning to control variables? Psychological Science, 16, 866–870. doi:10.1111/j.1467-9280.2005.01628.x
*Lamborn, S. D., Fischer, K. W., & Pipp, S. (1994). Constructive criticism
and social lies: A developmental sequence for understanding honesty
and kindness in social interactions. Developmental Psychology, 30,
495–508. doi:10.1037/0012-1649.30.4.495
*Lawson, A. E., & Wollman, W. T. (1976). Encouraging the transition
from concrete to formal cognitive functioning—An experiment. Journal
of Research in Science Teaching, 13, 413–430. doi:10.1002/
tea.3660130505
*Lazonder, A. W., & van der Meij, H. (1993). The minimal manual: Is less really more? International Journal of Man-Machine Studies, 39, 729–752. doi:10.1006/imms.1993.1081
*Lazonder, A. W., & van der Meij, H. (1994). Effect of error information
in tutorial documentation. Interacting with Computers, 6, 23–40. doi:
10.1016/0953-5438(94)90003-5
*Lazonder, A. W., & van der Meij, H. (1995). Error-information in tutorial documentation: Supporting users’ errors to facilitate initial skill learning. International Journal of Human-Computer Studies, 42, 185–206. doi:10.1006/ijhc.1995.1009
*Lee, M. O. C., & Thompson, A. (1997). Guided instruction in LOGO
programming and the development of cognitive monitoring strategies
among college students. Journal of Educational Computing Research,
16, 125–144.
*Leutner, D. (1993). Guided discovery learning with computer-based simulation games: Effects of adaptive and non-adaptive instructional support. Learning and Instruction, 3, 113–132. doi:10.1016/0959-4752(93)90011-N
Mayer, R. E. (2003). Learning and instruction. Upper Saddle River, NJ:
Prentice Hall.
Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59, 14–19. doi:10.1037/0003-066X.59.1.14
Mayer, R. E. (2009). Constructivism as a theory of learning versus constructivism as a prescription for instruction. In S. Tobias & T. M. Duffy (Eds.), Constructivist theory applied to instruction: Success or failure? (pp. 184–200). New York, NY: Taylor & Francis.
*McDaniel, M. A., & Pressley, M. (1984). Putting the keyword method in context. Journal of Educational Psychology, 76, 598–609. doi:10.1037/0022-0663.76.4.598
*McDaniel, M. A., & Schlager, M. S. (1990). Discovery learning and transfer of problem-solving skills. Cognition and Instruction, 7, 129–159. doi:10.1207/s1532690xci0702_3
*Messer, D. J., Joiner, R., Loveridge, N., Light, P., & Littleton, K. (1993). Influences on the effectiveness of peer interaction: Children’s level of cognitive development and the relative ability of partners. Social Development, 2, 279–294. doi:10.1111/j.1467-9507.1993.tb00018.x
*Messer, D. J., Mohamedali, M. H., & Fletcher, B. (1996). Using computers to help pupils tell the time, is feedback necessary? Educational Psychology, 16, 281–296. doi:10.1080/0144341960160305
*Messer, D. J., Norgate, S., Joiner, R., Littleton, K., & Light, P. (1996).
Development without learning? Educational Psychology, 16, 5–19. doi:
10.1080/0144341960160101
*Morton, J. B., Trehub, S. E., & Zelazo, P. D. (2003). Sources of inflexibility in 6-year-olds’ understanding of emotion in speech. Child Development, 74, 1857–1868. doi:10.1046/j.1467-8624.2003.00642.x
*Murphy, N., & Messer, D. (2000). Differential benefits from scaffolding
and children working alone. Educational Psychology, 20, 17–31. doi:
10.1080/014434100110353
*Mwangi, W., & Sweller, J. (1998). Learning to solve compare word
problems: The effect of example format and generating self-
explanations. Cognition and Instruction, 16, 173–199. doi:10.1207/
s1532690xci1602_2
*Nadolski, R. J., Kirschner, P. A., & Van Merriënboer, J. J. G. (2005). Optimizing the number of steps in learning tasks for complex skills. British Journal of Educational Psychology, 75, 223–237. doi:10.1348/000709904X22403
*O’Brien, T. C., & Shapiro, B. J. (1977). Number patterns: Discovery versus reception learning. Journal for Research in Mathematics Education, 8, 83–87. doi:10.2307/748536
*Öhrn, M. A. K., Van Oostrom, J. H., & Van Meurs, W. L. (1997). A
comparison of traditional textbook and interactive computer learning of
neuromuscular block. Anesthesia & Analgesia, 84, 657–661. doi:
10.1097/00000539-199703000-00035
*Olander, H. T., & Robertson, H. C. (1973). The effectiveness of discovery and expository methods in the teaching of fourth-grade mathematics. Journal for Research in Mathematics Education, 4, 33–44. doi:10.2307/749022
*Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology, 84, 429–434. doi:10.1037/0022-0663.84.4.429
Paas, F. G. W. C., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38, 1–4. doi:10.1207/S15326985EP3801_1
*Paas, F. G. W. C., & Van Merriënboer, J. J. G. (1994). Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86, 122–133. doi:10.1037/0022-0663.86.1.122
*Pany, D., & Jenkins, J. R. (1978). Learning word meanings: A comparison of instructional procedures. Learning Disability Quarterly, 1, 21–32. doi:10.2307/1510304
Pea, R. D. (2004). The social and technological dimensions of scaffolding
and related theoretical concepts for learning, education, and human
activity. The Journal of the Learning Sciences, 13, 423–451. doi:
10.1207/s15327809jls1303_6
*Peters, D. L. (1970). Discovery learning in kindergarten mathematics. Journal for Research in Mathematics Education, 1, 76–87. doi:10.2307/748854
Piaget, J. (1952). The origins of intelligence in children (M. Cook, Trans.).
New York, NY: International Universities Press. doi:10.1037/11494-000
Piaget, J. (1965). Science of education and the psychology of the child. In
H. E. Gruber & J. J. Voneche (Eds.), The essential Piaget (pp. 695–725).
New York, NY: Basic Books.
Piaget, J. (1980). The psychogenesis of knowledge and its epistemological
significance. In M. Piattelli-Palmarini (Ed.), Language and learning (pp.
23–54). Cambridge, MA: Harvard University Press.
*Pillay, H. K. (1994). Cognitive load and mental rotation: Structuring
orthographic projection for learning and problem solving. Instructional
Science, 22, 91–113. doi:10.1007/BF00892159
*Pillow, B. H., Mash, C., Aloian, S., & Hill, V. (2002). Facilitating
children’s understanding of misinterpretation: Explanatory efforts and
improvements in perspective taking. Journal of Genetic Psychology,
163, 133–148. doi:10.1080/00221320209598673
*Pine, K. J., & Messer, D. J. (2000). The effect of explaining another’s actions on children’s implicit theories of balance. Cognition and Instruction, 18, 35–51. doi:10.1207/S1532690XCI1801_02
*Pine, K. J., Messer, D. J., & Godfrey, K. (1999). The teachability of children with naïve theories: An exploration of the effects of two teaching methods. British Journal of Educational Psychology, 69, 201–211. doi:10.1348/000709999157671
*Quilici, J. L., & Mayer, R. E. (1996). Role of examples in how students
learn to categorize statistics word problems. Journal of Educational
Psychology, 88, 144–161. doi:10.1037/0022-0663.88.1.144
*Radziszewska, B., & Rogoff, B. (1991). Children’s guided participation in planning imaginary errands with skilled adult or peer partners. Developmental Psychology, 27, 381–389. doi:10.1037/0012-1649.27.3.381
*Rappolt-Schlichtmann, G., Tenenbaum, H. R., Koepke, M. F., & Fischer, K. (2007). Transient and robust knowledge: Contextual support and the dynamics of children’s reasoning about density. Mind, Brain, and Education, 1, 98–108. doi:10.1111/j.1751-228X.2007.00010.x
*Ray, W. E. (1961). Pupil discovery vs. direct instruction. Journal of
Experimental Education, 29, 271–280.
*Reid, D. J., Zhang, J., & Chen, Q. (2003). Supporting scientific discovery
learning in a simulation environment. Journal of Computer Assisted
Learning, 19, 9–20. doi:10.1046/j.0266-4909.2003.00002.x
*Reinking, D., & Rickman, S. S. (1990). The effects of computer-mediated
texts on the vocabulary learning and comprehension of intermediate-
grade readers. Journal of Reading Behavior, 22, 395–411.
*Rieber, L. P., & Parmley, M. W. (1995). To teach or not to teach?
Comparing the use of computer-based simulations in deductive versus
inductive approaches to learning with adults in science. Journal of
Educational Computing Research, 13, 359–374.
*Rittle-Johnson, B. (2006). Promoting transfer: Effects of self-explanation
and direct instruction. Child Development, 77, 1–15. doi:10.1111/j.1467-
8624.2006.00852.x
*Rittle-Johnson, B., Saylor, M., & Swygert, K. E. (2008). Learning from
explaining: Does it matter if mom is listening? Journal of Experimental
Child Psychology, 100, 215–224. doi:10.1016/j.jecp.2007.10.002
Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context. New York, NY: Oxford University Press.
Rosenshine, B. (2009). The empirical support for direct instruction. In S. Tobias & T. M. Duffy (Eds.), Constructivist theory applied to instruction: Success or failure? (pp. 201–220). New York, NY: Taylor & Francis.
*Salmon, K., Yao, J., Berntsen, O., & Pipe, M. (2007). Does providing props during preparation help children to remember a novel event? Journal of Experimental Child Psychology, 97, 99–116. doi:10.1016/j.jecp.2007.01.001
*Scandura, J. M. (1964). An analysis of exposition and discovery modes of
problem solving instruction. Journal of Experimental Education, 33,
149–159.
Schmidt, H. G., Loyens, S. M. M., van Gog, T., & Paas, F. (2007). Problem-based learning is compatible with human cognitive architecture: Commentary on Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42, 91–97.
Schwartz, D. L., & Bransford, J. D. (1998). A time for telling. Cognition
and Instruction, 16, 475–522. doi:10.1207/s1532690xci1604_4
Schwartz, D. L., Lindgren, R., & Lewis, S. (2009). Constructivism in an age of non-constructivist assessments. In S. Tobias & T. M. Duffy (Eds.), Constructivist theory applied to instruction: Success or failure? (pp. 34–61). New York, NY: Taylor & Francis.
*Shore, W. J., & Durso, F. T. (1990). Partial knowledge in vocabulary acquisition: General constraints and specific detail. Journal of Educational Psychology, 82, 315–318. doi:10.1037/0022-0663.82.2.315
*Shute, V. J., Glaser, R., & Raghavan, K. (1989). Inference and discovery
in an exploratory laboratory. In P. L. Ackerman, R. J. Sternberg, & R.
Glaser (Eds.), Learning and individual differences: Advances in theory
and research (pp. 279 –326). New York, NY: Freeman.
*Siegel, A. W., & Corsini, D. A. (1969). Attentional differences in children’s incidental learning. Journal of Educational Psychology, 60, 65–70. doi:10.1037/h0026672
*Singer, R. N., & Gaines, L. (1975). Effects of prompted and problem-
solving approaches on learning and transfer of motor skills. American
Educational Research Journal, 12, 395–403.
*Singer, R. N., & Pease, D. (1978). Effect of guided vs. discovery learning
strategies on initial motor task learning, transfer, and retention. Research
Quarterly, 49, 206–217.
Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory, 4, 592–604. doi:10.1037/0278-7393.4.6.592
Spiro, R. J., & DeSchryver, M. (2009). Constructivism. In S. Tobias & T. M. Duffy (Eds.), Constructivist theory applied to instruction: Success or failure? (pp. 106–123). New York, NY: Taylor & Francis.
*Stark, R., Gruber, H., Renkl, A., & Mandl, H. (1998). Instructional effects
in complex learning: Do objective and subjective learning outcomes
converge? Learning and Instruction, 8, 117–129. doi:10.1016/S0959-
4752(97)00005-4
*Stark, R., Mandl, H., Gruber, H., & Renkl, A. (2002). Conditions and effects of example elaboration. Learning and Instruction, 12, 39–60. doi:10.1016/S0959-4752(01)00015-9
Stetsenko, A., & Arievitch, I. (2002). Teaching, learning and development: A post-Vygotskian perspective. In G. Wells & G. Claxton (Eds.), Learning for life in the twenty-first century: Sociocultural perspectives on the future of education (pp. 84–96). London, England: Blackwell.
*Strand-Cary, M., & Klahr, D. (2008). Developing elementary science skills: Instructional effectiveness and path independence. Cognitive Development, 23, 488–511. doi:10.1016/j.cogdev.2008.09.005
*Stull, A. T., & Mayer, R. E. (2006, July). Three experimental comparisons of learner-generated versus author-provided graphic organizers. Poster presented at the 28th Annual Conference of the Cognitive Science Society, Vancouver, British Columbia, Canada.
*Sutherland, R., Pipe, M., Schick, K., Murray, J., & Gobbo, C. (2003).
Knowing in advance: The impact of prior event information on memory
and event knowledge. Journal of Experimental Child Psychology, 84,
244–263. doi:10.1016/S0022-0965(03)00021-3
*Swaak, J., de Jong, T., & Van Joolingen, W. R. (2004). The effects of
discovery learning and expository instruction on the acquisition of
definitional and intuitive knowledge. Journal of Computer Assisted
Learning, 20, 225–234. doi:10.1111/j.1365-2729.2004.00092.x
*Swaak, J., Van Joolingen, W. R., & De Jong, T. (1998). Supporting simulation-based learning: The effects of model progression and assignments on definitional and intuitive knowledge. Learning and Instruction, 8, 235–252. doi:10.1016/S0959-4752(98)00018-8
Sweller, J. (1988). Cognitive load during problem solving: Effects on
learning. Cognitive Science, 12, 257–285. doi:10.1207/
s15516709cog1202_4
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4, 295–312. doi:10.1016/0959-4752(94)90003-5
Sweller, J. (2009). What human cognitive architecture tells us about
constructivism. In S. Tobias & T. M. Duffy (Eds.), Constructivist theory
applied to instruction: Success or failure? (pp. 127–143). New York,
NY: Taylor & Francis.
*Sweller, J., Chandler, P., Tierney, P., & Cooper, M. (1990). Cognitive
load as a factor in the structuring of technical material. Journal of
Experimental Psychology: General, 119, 176–192. doi:10.1037/0096-
3445.119.2.176
Sweller, J., Kirschner, P. A., & Clark, R. E. (2007). Why minimally guided
teaching techniques do not work: A reply to commentaries. Educational
Psychologist, 42, 115–121.
*Tarmizi, R. A., & Sweller, J. (1988). Guidance during mathematical problem solving. Journal of Educational Psychology, 80, 424–436. doi:10.1037/0022-0663.80.4.424
*Tenenbaum, H. R., Alfieri, L., Brooks, P. J., & Dunne, G. (2008). The effects of explanatory conversations on children’s emotion understanding. British Journal of Developmental Psychology, 26, 249–263. doi:10.1348/026151007X231057
Tobias, S., & Duffy, T. M. (Eds.). (2009). Constructivist theory applied to
instruction: Success or failure? New York, NY: Taylor & Francis.
*Trafton, J. G., & Reiser, B. J. (1993). The contributions of studying
examples and solving problems to skill acquisition. In The Proceedings
of the 1993 Conference of the Cognitive Science Society (pp. 1017–
1022). Hillsdale, NJ: Erlbaum.
*Tunteler, E., & Resing, W. C. M. (2002). Spontaneous analogical transfer
in 4-year-olds: A microgenetic study. Journal of Experimental Child
Psychology, 83, 149–166. doi:10.1016/S0022-0965(02)00125-X
*Tuovinen, J. E., & Sweller, J. (1999). A comparison of cognitive load
associated with discovery learning and worked examples. Journal of
Educational Psychology, 91, 334–341. doi:10.1037/0022-0663.91.2.334
*van der Meij, H., & Lazonder, A. W. (1993). Assessment of the minimalist approach to computer user documentation. Interacting with Computers, 5, 355–370. doi:10.1016/0953-5438(93)90001-A
*van Hout-Wolters, B. H. A. M. (1990). Selecting and cueing key phrases in instructional texts. In H. Mandl, E. De Corte, N. Bennett, & H. F. Friedrich (Eds.), Learning and instruction, European research in an international context: Vol. 2.2. Analysis of complex skills and complex knowledge domains (pp. 181–197). New York, NY: Pergamon Press.
*Veenman, M. V. J., Elshout, J. J., & Busato, V. V. (1994). Metacognitive
mediation in learning with computer-based simulations. Computers in
Human Behavior, 10, 93–106. doi:10.1016/0747-5632(94)90031-0
*Vichitvejpaisal, P., Sitthikongsak, S., Preechakoon, B., Kraiprasit, K., Parakkamodom, S., Manon, C., & Petcharatana, S. (2001). Does computer-assisted instruction really help to improve the learning process? Medical Education, 35, 983–989.
Vygotsky, L. (1962). Thought and language. Cambridge, MA: MIT Press.
doi:10.1037/11193-000
*Ward, M., & Sweller, J. (1990). Structuring effective worked examples.
Cognition and Instruction, 7, 1–39. doi:10.1207/s1532690xci0701_1
Wertsch, J. (1981). The concept of activity in Soviet psychology. Armonk, NY: Sharpe.
Wise, A. F., & O’Neill, K. (2009). Beyond more versus less. In S. Tobias
& T. M. Duffy (Eds.), Constructivist theory applied to instruction:
Success or failure? (pp. 82–105). New York, NY: Taylor & Francis.
*Wittrock, M. C. (1963). Verbal stimuli in concept formation: Learning by
discovery. Journal of Educational Psychology, 54, 183–190. doi:
10.1037/h0043782
*Worthen, B. R. (1968). A study of discovery and expository presentation:
Implications for teaching. Journal of Teacher Education, 19, 223–242.
doi:10.1177/002248716801900215
*Zacharia, Z., & Anderson, O. R. (2003). The effects of an interactive computer-based simulation prior to performing a laboratory inquiry-based experiment on students’ conceptual understanding of physics. American Journal of Physics, 71, 618–629. doi:10.1119/1.1566427
*Zhang, J., Chen, Q., Sun, Y., & Reid, D. J. (2004). Triple scheme of learning support design for scientific discovery learning based on computer simulation: Experimental research. Journal of Computer Assisted Learning, 20, 269–282. doi:10.1111/j.1365-2729.2004.00062.x
*Zimmermann, M. J., & Sassenrath, J. M. (1978). Improvement in arithmetic and reading and discovery learning in mathematics (SEED). Educational Research Quarterly, 3, 27–33.
Received October 21, 2009
Revision received July 28, 2010
Accepted July 28, 2010