7 Metacognition and Self-Regulated Learning in Student-Centered Learning Environments

Roger Azevedo, Reza F. Behnagh, Melissa Duffy, Jason M. Harley, and Gregory Trevors
The ubiquity and widespread use of Student-Centered Learning
Environments (SCLEs) pose numerous challenges for learners. Learning
with these non-linear, multi-representational, open-ended learning
environments typically involves the use of numerous self-regulatory
processes such as planning, reflection, and metacognitive monitoring and
regulation (Azevedo, 2005, 2007, 2008, 2009; Greene & Azevedo, 2009,
2010; Moos & Azevedo, 2008; Veenman, 2007; White & Frederiksen,
2005; Zimmerman, 2008). Unfortunately, learners do not always monitor
and regulate these processes during learning with SCLEs, which limits
these environments’ potential and effectiveness as educational tools to
enhance learning about complex and challenging topics and domains.
Metacognition and self-regulation comprise a set of key processes that
are critical for learning about conceptually-rich domains with SCLEs such
as open-ended hypermedia environments, multi-agent tutoring systems,
serious games, and other hybrid systems. We emphasize that learning
with SCLEs involves a complex set of interactions between cognitive,
metacognitive, motivational, and affective processes (Aleven, Roll,
McLaren, & Koedinger, 2010; Azevedo, Moos, Johnson, & Chauncey,
2010; Biswas, Jeong, Kinnebrew, Sulcer, & Roscoe, 2010; Graesser
& McNamara, 2010; White, Frederiksen, & Collins, 2009; Winne &
Nesbit, 2009). Current interdisciplinary research provides evidence that
learners of all ages struggle when learning about these conceptually rich
domains with SCLEs. To briefly summarize, this research indicates that
learning with SCLEs is particularly difficult because it requires students
to monitor and regulate several aspects of their learning. For example,
regulating one’s learning involves analyzing the learning context, setting
and managing meaningful learning goals, determining which learning
and problem-solving strategies to use, assessing whether the strategies are
effective in meeting the learning goals, monitoring and making accurate
judgments regarding one’s emerging understanding of the topic and
contextual factors, and determining whether there are aspects of the
learning context that could be used to facilitate learning. During self-
regulated learning (SRL), students need to deploy several metacognitive
processes to determine whether they understand what they are learning,
and perhaps modify their plans, goals, strategies, and efforts in relation to
dynamically changing contextual conditions. In addition, students must
also monitor, modify, and adapt to fluctuations in their motivational and
affective states, and determine how much social support (if any) may be
needed to perform the task. Also, depending on the learning context,
instructional goals, perceived task performance, and progress made
towards achieving the learning goal(s), they may need to modify certain
aspects of their cognition, metacognition, motivation, and affect. As such,
we argue that metacognition and self-regulation play a critical role in
learning with SCLEs.
In this chapter, we provide an overview of SRL with SCLEs, describe
assumptions and commonalities across several leading models, describe the
assumptions and components of a leading information-processing model
of SRL, provide examples and definitions of key specific metacognitive
monitoring processes and regulatory skills used when learning with SCLEs,
provide specific examples of how models of metacognition and SRL have
been embodied in four contemporary SCLEs, and provide implications
for the future of SCLEs that focus on metacognition
and SRL.
Self-Regulated Learning in Student-Centered Learning
Environments
The complex nature of metacognitive and self-regulatory processes
can be illustrated with an example of learning with a multi-
agent, adaptive, hypermedia learning environment such as MetaTutor.
Typically, a student is asked to learn about the human circulatory system
for two hours with the system. The environment contains several dozen
illustrations and hundreds of paragraphs containing thousands of words
with corresponding static diagrams. Each of these representations of
information is organized in some fashion, similar to sections and sub-
sections of book chapters, thus allowing students to navigate freely
throughout the environment. Imagine a self-regulated student who
analyzes the learning situation, sets meaningful learning goals, and
determines which strategies to use based on the task conditions. The
student may also generate motivational beliefs based on prior experience
with the topic and learning environment, success with similar tasks,
contextual constraints (e.g., provision of scaffolding and feedback by an
artificial pedagogical agent), and contextual demands (e.g., a time limit
for completion of the task). During the course of learning, the student may
assess whether particular strategies are effective in meeting his learning
sub-goals, evaluate his emerging understanding of the topic, and make
the necessary adjustments regarding his knowledge, behavior, effort, and
other aspects of the learning context. Ideally, the self-regulated learner
will make adaptive adjustments, based on continuous metacognitive
monitoring and control related to the standards for the particular
learning task and these adjustments will facilitate decisions regarding
when, how, and what to regulate (Pintrich, 2000; Schunk, 2001; Winne
& Hadwin, 1998, 2008; Winne & Nesbit, 2009; Zimmerman, 2008;
Zimmerman & Schunk, 2011). Depending on the task with the learning
environment and sometime after the learning session, the learner may
make several cognitive, motivational, and behavioral attributions that
affect subsequent learning (Pintrich, 2000; Schunk, 2001). This scenario
represents an idealistic approach to self-regulating one’s learning with
an SCLE. Unfortunately, the typical learner does not engage in these
complex adaptive cognitive and metacognitive processes during learning
with SCLEs (see Azevedo & Witherspoon, 2009; Biswas et al., 2010). As
such, the educational potential of these environments is severely limited.
Overview of SRL Models
Self-regulated learning (SRL) theories attempt to model how cognitive,
metacognitive, motivational, and emotional processes and contextual
factors influence the learning process (Pintrich, 2000; Winne, 2001;
Winne & Hadwin, 1998, 2008; Zimmerman, 2000, 2008). Although
there are important differences between various theoretical definitions,
self-regulated learners are generally characterized as active and efficient
at managing their own learning through monitoring and strategy use
(Boekaerts, Pintrich, & Zeidner, 2000; Butler & Winne, 1995; Efklides,
2011; Greene & Azevedo, 2007; Pintrich, 2000; Winne, 2001; Winne &
Hadwin, 1998, 2008; Zimmerman, 2001). Students are self-regulated to
the degree that they are metacognitively, motivationally, and behaviorally
active participants in their learning (Zimmerman, 1986).
SRL has also been described as a constructive process wherein learners
set goals on the basis of both their past experiences and their current
learning environments (Pintrich, 2000). These goals become the criteria
toward which regulation aims. In essence, SRL mediates the relations
between learner characteristics, context, and performance (Pintrich,
2004). Pintrich (2000) organized SRL research using a taxonomy focusing
on the phases and areas of self-regulation. The phases include (1) task
identification and planning, (2) monitoring, (3) control of learning strategies,
and (4) reaction and reflection. The various areas in which self-regulation
can occur fall into four broad categories: cognition, motivation, behavior,
and context. By crossing phases and areas, Pintrich presented a four-by-
four grid wherein various research findings and theoretical constructs can
be categorized. For example, feeling of knowing (FOK) is a monitoring
process within the area of cognition, whereas changing or re-negotiating
a task with a pedagogical agent, teacher, or peer represents the enactment
of a context-control strategy.
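As a purely illustrative aid, the phase-by-area structure of the taxonomy can be sketched as a simple lookup table. The example placements and the Python representation below are our own hypothetical illustrations, not part of Pintrich's original presentation.

```python
# Illustrative sketch of Pintrich's (2000) phase-by-area taxonomy of SRL.
# The constructs placed in each cell are hypothetical examples, not an
# authoritative coding of the taxonomy.

PHASES = ["planning", "monitoring", "control", "reaction_reflection"]
AREAS = ["cognition", "motivation", "behavior", "context"]

# (phase, area) -> example construct or strategy
taxonomy_examples = {
    ("planning", "cognition"): "activating prior domain knowledge",
    ("monitoring", "cognition"): "feeling of knowing (FOK)",
    ("control", "cognition"): "selecting a learning strategy (e.g., summarizing)",
    ("control", "context"): "re-negotiating the task with an agent, teacher, or peer",
    ("reaction_reflection", "motivation"): "making attributions about success or failure",
}

def describe(phase: str, area: str) -> str:
    """Return an example construct for one cell of the grid, if one is listed."""
    return taxonomy_examples.get((phase, area), "no example listed for this cell")

if __name__ == "__main__":
    print(describe("monitoring", "cognition"))  # feeling of knowing (FOK)
    print(describe("control", "context"))       # re-negotiating the task ...
```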
Pintrich’s (2000) taxonomy helps researchers organize the many lines
of current SRL research and gives some general information regarding
how they might relate. This is particularly relevant for understanding the
nature of SRL during learning with SCLEs. Different models of SRL focus
on specific cells or groups of cells within Pintrich’s (2000) taxonomy. For
example, Winne and Hadwin’s (1998, 2008) model of SRL, based on the
information processing theory (IPT), complements the work of Pintrich
and others by more specifically outlining the cognitive processes that
occur during learning, as well as re-conceptualizing some of the phases
(Winne, 2001). This affords a different perspective on SRL. However,
given the number of SRL models currently in existence, the question is
how these contributions aid in understanding learning with SCLEs. More
specifically, this chapter addresses how these processes influence students’
learning with SCLEs and how SCLEs can be designed to support and foster
students’ SRL processes.
Theoretical Framework: Information-Processing Theory of SRL
Self-regulated learning (SRL) involves actively constructing an
understanding of a topic or domain by using strategies and goals;
monitoring and regulating certain aspects of cognition, behavior, and
motivation; and modifying behavior to achieve the desired goal(s)
(see Boekaerts et al., 2000; Pintrich, 2000; Zimmerman & Schunk,
2001). Though this definition of SRL is commonly used, the field of
SRL consists of various theoretical perspectives that make different
assumptions and focus on different constructs, processes, and phases (see
Azevedo et al., 2010; Dunlosky & Lipko, 2007; Metcalfe & Dunlosky,
2009; Pintrich, Wolters, & Baxter, 2000; Schunk, 2005; Winne &
Hadwin, 2008; Zimmerman, 2008). We further specify SRL as a concept
superordinate to metacognition that incorporates both metacognitive
monitoring (i.e., knowledge of cognition or metacognitive knowledge)
and metacognitive control (involving the skills associated with the
regulation of metacognition), as well as processes related to manipulating
contextual conditions and planning for future activities within a learning
episode. SRL is based on the assumption that learners exercise agency by
consciously monitoring and intervening in their learning.
Most of the contemporary research on SRL with SCLEs (e.g., see
special issues by Azevedo, 2005; Greene & Azevedo, 2010; Clarebout,
Horz, & Elen, 2009; Zumbach & Bannert, 2006) has drawn on Winne
and colleagues’ (Butler & Winne, 1995; Winne, 2001; Winne & Hadwin,
1998, 2008) Information Processing Theory (IPT) of SRL. This theory
suggests a four-phase model of SRL. The goal of this section is to
explicate the basics of the model so as to emphasize the linear, recursive,
and adaptive nature of SRL (see Greene & Azevedo, 2007, for a recent
review).
Winne and Hadwin (1998, 2008) propose that learning occurs in
four basic phases: (1) task definition, (2) goal-setting and planning,
(3) studying tactics, and (4) adaptations to metacognition. Winne and
Hadwin’s SRL model differs from the majority of other SRL models in
that they hypothesize that information processing occurs within each
phase. Using the acronym COPES, they describe each of the four phases
in terms of the interactions between a learner’s conditions, operations,
products, evaluations, and standards. All of the terms except operations
are kinds of information used or generated during learning. It is within
this cognitive architecture, comprised of COPES, that the work of each
phase is completed. Thus, their model complements other SRL models by
introducing a more complex description of the processes underlying each
phase. It should be noted that Winne and Hadwin’s model is similar to
other models which focus on the underlying cognitive and metacognitive
processes, accuracy of metacognitive judgments, and control processes
used to achieve particular learning goals (e.g., see Hacker, Dunlosky, &
Graesser, 2009).
Cognitive and task conditions are the resources available to the person
and the constraints inherent to the task or environment. Cognitive
conditions include beliefs, dispositions and styles, motivation, domain
knowledge, knowledge of the current task, and knowledge of study tactics
and strategies. Task conditions are external to the person, and include
resources, instructional cues, time, and the local context. Thus, in Winne
and Hadwin’s model, motivation and context are subsumed in conditions.
Conditions influence both the standards and the actual operations a
person performs. They represent the characteristics of the learner and of
the context that set the stage for initially deciding how to proceed with a
task (e.g., generating a plan, activating relevant prior domain knowledge).
External task conditions, such as the amount of time available to solve a
set of problems or the accessibility of relevant instructional resources,
may also influence a learner's ability to monitor and regulate learning.
Standards are multi-faceted criteria that the learner believes are the
optimal end state of whatever phase is currently running, and they include
both metrics and beliefs. For example, in the task definition phase, a learner
might examine a list of learning goals set by an artificial pedagogical agent
for a learning task and develop task standards including what needs to
be learned (metrics). The learner may also develop beliefs about the act
of studying itself, such as the depth of understanding required, or how
difficult the task will be. The model uses a bar graph to illustrate how a
learner actively determines criteria for “success” in terms of each aspect
of the learning task, where each bar represents a different standard with
varying qualities or degrees. The overall profile of these standards from
phase one constitutes the learner’s goal. These standards or goals are used
to determine the success of any operations the learner might perform
within each phase. One of the most challenging aspects of understanding
the role of standards is that they are internally represented in a learner’s
cognitive system and are rarely accessible to the learning environment
during learning.
Operations are the information manipulation processes that students
use during learning, which include searching, monitoring, assembling,
rehearsing, and translating. These processes are also known as SMART
processes (Winne, 2001). These SMART processes are cognitive in
nature, rather than metacognitive, and as such they result only in cognitive
products, or information, for each phase. For example, the product of
phase one is a definition of the task, whereas the product of phase
three might be the ability to recall a specific piece of information for
a test. These products are then compared with the standards by way
of monitoring. Through monitoring, a learner compares products with
standards to determine if their objectives for a given phase have been
met, or if further work remains to be done. These comparisons are called
cognitive evaluations, and they become important when, for example,
a student detects a poor fit between products and standards and as a
result, enacts control over the learning operations to refine the product,
revise the conditions and standards, or both. This is the object-level
focus of monitoring. However, monitoring also has a meta-level, or
metacognitive, focus. For example, a learner may
believe that a particular learning task is easy, and thus translate this belief
into a standard in phase two. However, in iterating through phase three,
perhaps the learning product is consistently evaluated as unacceptable
in terms of object-level standards. This may initiate metacognitive
monitoring that determines that this meta-level information, in this case
regarding the actual difficulty of this task, does not match the previously
set standard that the task is easy. At this point, a metacognitive control
strategy might be initiated to modify (or to update) that particular
standard (e.g., “this task is difficult”), which might, in turn, affect other
standards created during phase two (goal setting). These changes to goals
from phase two may include a review of past material or the learning of a
new learning strategy. Thus, the model is a “recursive, weakly sequenced
system” (Winne & Hadwin, 1998) where the monitoring of products
and standards within one phase can lead to updates of products from
previous phases. The inclusion of monitoring and control in the cognitive
architecture allows these processes to influence each phase of SRL.
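To make the COPES cycle described above more concrete, the following minimal sketch (our own illustration, not Winne and Hadwin's formalism or code) represents standards and products as attribute-value profiles, implements monitoring as a comparison that yields evaluations, and shows how poor fits might trigger control actions such as revising a standard.

```python
# Minimal, hypothetical sketch of the COPES cycle: monitoring compares products
# against standards, and poor fits trigger control (re-applying operations or
# revising standards). Illustrative only; not Winne and Hadwin's model code.

from dataclasses import dataclass, field

@dataclass
class Standard:
    criterion: str   # e.g., "depth_of_understanding"
    target: float    # desired level on a 0-1 scale

@dataclass
class Product:
    attributes: dict = field(default_factory=dict)  # e.g., {"depth_of_understanding": 0.4}

def monitor(product: Product, standards: list) -> dict:
    """Compare a product against standards; return an evaluation per criterion."""
    evaluations = {}
    for s in standards:
        achieved = product.attributes.get(s.criterion, 0.0)
        evaluations[s.criterion] = {"achieved": achieved, "target": s.target,
                                    "met": achieved >= s.target}
    return evaluations

def control(evaluations: dict, standards: list) -> list:
    """Simple control rules: re-study unmet criteria; revise a badly missed standard."""
    actions = []
    for s in standards:
        e = evaluations[s.criterion]
        if not e["met"]:
            actions.append(f"re-apply operations targeting '{s.criterion}' (e.g., re-read, summarize)")
            if s.target - e["achieved"] > 0.5:
                actions.append(f"revise standard '{s.criterion}' (task is harder than believed)")
    return actions

if __name__ == "__main__":
    standards = [Standard("depth_of_understanding", 0.8), Standard("coverage_of_subgoals", 0.6)]
    product = Product({"depth_of_understanding": 0.3, "coverage_of_subgoals": 0.7})
    for action in control(monitor(product, standards), standards):
        print(action)
```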
Overall, while there is no typical cycle, most learning involves
recycling through the cognitive architecture until a clear definition of the
task has been created (phase one), followed by the production of learning
goals and the best plan to meet them (phase two), which leads to the
enactment of strategies to begin learning (phase three). The products of
learning, for example, an understanding of pulmonary circulation, are
compared against standards including the overall accuracy of the product,
the learner’s beliefs about what needs to be learned, and other factors
like efficacy and time constraints. If the product does not adequately fit
the standard, then further learning operations are initiated, perhaps with
changes to conditions such as setting aside more time for studying. Finally,
after the main processes of learning have occurred, learners may decide
to further alter their beliefs, motivation, and strategies that make up SRL
(i.e., phase four of the model). These changes can include the addition or
deletion of conditions or operations, as well as minor (tuning) and major
(restructuring) changes to the ways conditions cue operations (Winne,
2001). The output, or performance, is the result of recursive processes
that cascade back and forth, altering conditions, standards, operations,
and products as needed.
Lastly, Winne and Nesbit (2009) state that certain hypotheses can be
postulated when adopting a model of SRL. First, before committing to a
goal, a learner must recognize the features of the learning environment
that affect the odds of success. Second, if such features are recognized,
then they need to be interpreted, a choice must be made (e.g., set a goal),
and the learner needs to select amongst a set of learning strategies that
may lead to successful learning. Third, if these first two conditions are
satisfied, the learner must have the capability to apply these learning
strategies. Fourth, if these three conditions are met, the learner must be
motivated to put forth the effort required to apply the selected learning
strategies. In sum, this is a macro-level model that elegantly accounts for
the linear, recursive, and adaptive nature of SRL with SCLEs.
Micro-Level Model of SRL as an Event
Azevedo, Greene, Moos, and colleagues, following Winne’s model,
have provided a detailed analysis of the cognitive and metacognitive
processes used by learners of all ages when using several SCLEs including
hypermedia, simulations, intelligent tutoring systems, and multi-agent
learning environments (see Azevedo, Greene, & Moos, 2007; Azevedo,
Moos, Greene, Winters, & Cromley, 2008; Azevedo, Cromley, Moos,
Greene, & Winters, 2011; Azevedo & Witherspoon, 2009; Greene &
Azevedo, 2007, 2009, 2010). Their analyses of SRL processes during
learning with SCLEs are of particular relevance since they treat SRL as
an event. Their analyses of hundreds of concurrent think-aloud protocols
and other process data (e.g., log-files) provide detailed evidence of the
micro-level processes that can augment Winne and Hadwin’s (1998, 2008)
model. In general, these processes include planning, monitoring, strategy
use, handling of task difficulty and demands, and interest activities. In this
section, we describe definitions and examples of metacognitive processes
typically used with SCLEs and then present how their monitoring processes
and corresponding judgments are addressed by regulatory processes.
Monitoring Processes during Learning with SCLEs
In this section, we present several monitoring processes, which we have
identified in our studies on SRL with hypermedia. Although many of
these processes are likely context-independent, applicable to learning
with various SCLEs, some are most appropriately applied to learning
with hypermedia, in situations where learners have control over which
content, in which modality, they access at any given moment. As previously
mentioned, Winne and colleagues’ model provides a macro-level
framework for the cyclical and iterative phases of SRL. The data presented
in this section provides the micro-level details that can interface Winne’s
model. In particular, we present the eight metacognitive monitoring
processes we have identified as essential to promoting students’ SRL
with hypermedia. Some of these monitoring processes include valence,
positive (+) or negative (–), which indicates the learners’ evaluation of the
content, their understanding, progress, or familiarity with the material.
For example, a learner might state that the current content is either
appropriate (content evaluation +) or inappropriate (content evaluation
–), given their learning goals. Depending on the valence of the evaluation
(and the accuracy of the metacognitive judgment), learners may then
choose which metacognitive regulatory process to deploy in order to
address the result of that judgment (e.g., set a new goal, summarize the
content).
Feeling of knowing (FOK) is when the learner is aware of having (+) or
not having (–) read, heard, or inspected something in the past and having
(+) or not having (–) some familiarity with the material. For example, a
learner may be familiar with a particular static external representation
of the circulatory system showing oxygenated and deoxygenated blood
paths. Judgment of learning (JOL) is when a learner becomes aware that
he/she does (+) or does not (–) know or understand something he/she
read, inspected, or heard. For example, a learner states that he does
not understand the explanation of the concept of homeostasis verbally
described to him by a pedagogical agent. Another important monitoring
process is monitoring use of strategies (MUS). In MUS, the learner
acknowledges that a particular learning strategy he has employed was
either useful (+) or not useful (–). An example of a learner monitoring
use of strategies is: “Yeah, drawing it really helps me understand how
blood flows throughout the heart.” Self-test (ST) is when a learner poses
a question to himself to assess his understanding of the content and
determine whether to proceed with additional content or to re-adjust his/
her use of strategies. An example of a learner self-testing is: “Ok, how
do lower-level organisms support a pond’s ecosystem?” In monitoring
progress toward goals (MPTG), learners assess whether previously set goals
have been met (+) or not met (–), given time constraints. This monitoring
process includes a learner comparing the goals set for the learning task
with those that he/she has already accomplished and those that still need
to be addressed in the remainder of the session; in practice, however, this
process rarely occurs during a learning session. A related metacognitive
process is time monitoring (TM),
which involves the learner becoming aware of the remaining time which
was allotted for the learning task. An example of a learner monitoring
his time is: “I still have 30 minutes, that’s plenty of time.” Content
evaluation (CE) is when the learner monitors the appropriateness (+) or
inappropriateness (–) of the current learning content (e.g., text, diagram,
animation or any other type of static and dynamic external representation
of information), given their pre-existing overall learning goal and sub-
goals. An example of a learner evaluating the content is: “This section,
which includes a description of the cycles of blood flow through the heart
and lungs and a labeled diagram of the heart is important and helpful
for me to understand the different components of the heart.” Finally,
evaluation of adequacy of content (EAC) is similar to CE, in that learners
are monitoring the learning content, given their learning goals, but in this
process, learners evaluate learning content they have not yet navigated
toward. An example of a learner evaluating the adequacy of content is:
“Do they have a picture of the blood flow through the heart?” In sum,
these are just a few of the relevant metacognitive monitoring processes
used by students during learning with SCLEs. It should be highlighted,
based on our previous discussions of models of SRL, that these as well as
other metacognitive processes play a role in facilitating and supporting
students’ SRL with SCLEs.
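For readers who work with trace or think-aloud data, the eight monitoring processes and their valence can be organized as a small coding scheme. The sketch below is our own illustrative Python representation of the labels defined above, not the authors' actual coding instrument or software.

```python
# Illustrative coding scheme for the eight metacognitive monitoring processes
# described above, each of which can carry a positive (+) or negative (-) valence.
# Hypothetical sketch for organizing think-aloud or log codes.

from enum import Enum
from dataclasses import dataclass

class Monitoring(Enum):
    FOK = "feeling of knowing"
    JOL = "judgment of learning"
    MUS = "monitoring use of strategies"
    ST = "self-test"
    MPTG = "monitoring progress toward goals"
    TM = "time monitoring"
    CE = "content evaluation"
    EAC = "evaluation of adequacy of content"

@dataclass
class MonitoringEvent:
    process: Monitoring
    valence: str     # "+" or "-"
    utterance: str   # the coded think-aloud segment

    def label(self) -> str:
        return f"{self.process.name}{self.valence}"

if __name__ == "__main__":
    event = MonitoringEvent(Monitoring.JOL, "-",
                            "I don't understand the explanation of homeostasis.")
    print(event.label())  # JOL-
```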
Self-Regulation Based on Metacognitive Monitoring Processes
In this section, we describe the learner’s application of these eight
monitoring processes within the context of self-regulation with
hypermedia. The processes described in this section are based on
empirical findings (e.g., Azevedo et al., 2010). For each monitoring
process, we provide the aspects of the learning environment that are
evaluated by learners and illustrate them using examples of task and
cognitive conditions, which may lead to the various monitoring processes,
as well as examples of appropriate control mechanisms which might be
deployed following the evaluations. Feeling of knowing (FOK) is used
when the learner is monitoring the correspondence between his or
her own pre-existing domain knowledge and the current content. The
learner’s domain knowledge and the learning resources are the aspects
of the learning situation being monitored when a learner engages in
FOK. If a learner recognizes a mismatch between his pre-existing domain
knowledge and the learning resources, more effort should be expended
in order to align the domain knowledge and the learning resources.
Following more effortful use of the learning material, a learner is more
likely to experience/generate more positive FOKs. However, if a learner
experiences familiarity with some piece of material, a good self-regulator
will attempt to integrate the new information with existing knowledge by
employing knowledge elaboration (KE). Often, a learner will erroneously
sense a positive FOK toward material, and quickly move on to other
material, with several misconceptions still intact. In contrast to FOK,
judgment of learning (JOL) is used when the learner is monitoring the
correspondence between his own emerging understanding of the domain
and the learning resources. Similar to FOK, when engaging in JOL, a
learner is monitoring his domain knowledge and the learning resources. If
a learner recognizes that his emerging understanding of the material is not
congruent with the material (i.e., the learner is confused by the material),
more effort should be applied to learn the material. A common strategy
deployed after a negative JOL is re-reading previously encountered
material. In order to capitalize on re-reading, a good self-regulator
should pay particular attention to elements in a passage, animation, or
illustration that confused him. When a learner expresses a positive JOL,
he might self-test to confirm that the knowledge is as accurate as the
evaluation suggests. As with FOK, learners often over-estimate their
emerging understanding and progress too quickly to other material. When
monitoring use of strategies (MUS), a learner is monitoring the efficacy
of recently used learning strategies, given his expectations for learning
results. MUS encompasses a learner’s monitoring of learning strategies,
expectations of results, and domain knowledge. By noting the learning
strategies used during a learning task and the resulting change in domain
knowledge, learners can compare their emergent knowledge with their
learning expectations and engage in SRL to make changes to the strategies
employed accordingly. For example, many learners will begin a learning
episode by taking copious amounts of notes, then realize that the learning
outcomes from this strategy are not as high as they would have expected.
Good self-regulators will then make alterations to their strategy of note-
taking such as employing more efficient methods (making bullet points,
outlines, or drawings), or even abandon this strategy for another, more
successful strategy (e.g., summarizing). However, if a learner realizes that
a particular strategy has been especially helpful to his learning, he should
continue to employ this strategy during the learning session. Learners
self-test (ST) to monitor their emerging understanding of the content; the
aspects of the learning situation being monitored are their domain
knowledge and their expectations of the content. While tackling difficult
material, learners should occasionally assess their level of understanding
of the material by engaging in self-testing. If the results of this self-test
are positive, the learner can progress to new material, but if the learner
recognizes, through this self-test, that his emergent understanding of the
current material is not congruent with what is stated in the material, he
should revisit the content to better comprehend it. When monitoring
progress toward goals, a learner is monitoring the fit between his learning
results and previously set learning goals for the session. The aspects of the
learning situation which are monitored during MPTG are the learner’s
domain knowledge, his expectations of results, and the learning goals.
Closely related to time monitoring, MPTG is an essential monitoring
activity that learners should use to stay “on-track” to their completion of
the learning task. A learner may be able to generate several critical sub-
goals for his learning task, but if he does not monitor the completion or
incompletion of these sub-goals, the sub-goal generation SRL strategy will
be inadequate. If a learner monitors progress toward his goals and
realizes that he has accomplished only one of his three sub-goals after 80
percent of the time devoted to the learning task has elapsed, a good self-
regulator will revisit the remaining sub-goals and decide which is most
important to pursue next. In time monitoring, the learner is monitoring
the task condition of time, with respect to their pre-existing learning
goals. These learning goals can be either the global learning goal defined
before engaging in the learning task, or sub-goals created by the learner
during the learning episode. If the learner recognizes that very little time
remains and few of the learning goals have been accomplished, he should
make adaptations to the manner in which the material is being tackled.
For example, if a learner has been reading a very long passage for several
minutes and realizes that he has not accomplished the learning goals, a
good self-regulator will begin scanning the remaining material related to
the goals he has not yet reached.
When learners engage in content evaluation, they are monitoring the
appropriateness or inappropriateness of the learning material that they
are currently reading, hearing, or viewing with regard to the overall
learning goal or sub-goal(s) they are currently pursuing. In contrast to
content evaluation, evaluation of adequacy of content relates to the
learner’s assessment of the appropriateness of available learning content,
rather than content currently being inspected. The aspects of the learning
situations monitored in both of these processes are the learning resources
and the learning goals. The learner should remain aware of whether
learning goals and learning resources are complementary. If a learner
evaluates a particular piece of material as particularly appropriate given
their learning goal, he should direct more cognitive resources toward
this material (or navigate toward this material), and persist in reading
or inspecting the content in order to achieve this goal. Conversely, if a
particular piece of content is evaluated as inappropriate with respect to
a learning goal, a good self-regulator will navigate away from (or not
at all toward) this content to seek more appropriate material. In sum,
these monitoring processes and corresponding regulatory processes are
based on studies examining the role of self-regulatory processes deployed
by learners during learning with open-ended hypermedia learning
environments. They also play a critical role during learning with other
SCLEs described in the next section.
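The regulatory responses described in this section follow a broadly "if judgment, then control action" pattern. The sketch below condenses a few of those pairings into simple rules; the mappings are drawn from the prose above and are illustrative rather than an exhaustive or validated regulation model.

```python
# Hypothetical condition-action rules linking monitoring judgments (with valence)
# to candidate control strategies, condensed from the descriptions above.
# Illustrative only; not an exhaustive or validated model of regulation.

CONTROL_RULES = {
    ("FOK", "-"): ["expend more effort to align prior knowledge with the content"],
    ("FOK", "+"): ["integrate new information via knowledge elaboration (KE)"],
    ("JOL", "-"): ["re-read the confusing passage, diagram, or animation"],
    ("JOL", "+"): ["self-test to confirm the positive judgment"],
    ("MUS", "-"): ["switch to a more effective strategy (e.g., summarizing)"],
    ("MPTG", "-"): ["revisit remaining sub-goals and prioritize the most important"],
    ("TM", "-"): ["scan remaining material related to unmet goals"],
    ("CE", "-"): ["navigate away to seek more goal-relevant content"],
    ("CE", "+"): ["persist with the current content; allocate more resources to it"],
}

def suggest_controls(process: str, valence: str) -> list:
    """Return candidate control strategies for a given monitoring judgment."""
    return CONTROL_RULES.get((process, valence), ["no rule listed for this judgment"])

if __name__ == "__main__":
    print(suggest_controls("JOL", "-"))
```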
Examples of SRL in Several SCLEs
In this section, we exemplify how SRL has been embodied in several
contemporary SCLEs designed to detect, track, support and foster students’
metacognitive monitoring and control processes during learning. The aim
is to illustrate how different researchers have conceptualized assumptions
and models in the architecture of their SCLEs. The examples include
Azevedo and colleagues’ MetaTutor, a multi-agent, adaptive hypermedia
learning environment; Biswas and colleagues’ Betty’s Brain, an agent-
based environment for teaching middle-school students about ecology;
White and Frederiksen’s multi-agent ThinkerTools environment for inquiry
learning; and Lester and colleagues’ Crystal Island, a narrative-based and
inquiry-oriented serious game learning environment for science.
MetaTutor
MetaTutor is a multi-agent, adaptive hypermedia learning environment,
which presents challenging human biology science content. The primary
goal underlying this environment is investigating how SCLEs can
adaptively scaffold SRL and metacognition within the context of learning
about complex biological content. MetaTutor is grounded in a theory
of SRL that views learning as an “active, constructive process whereby
learners set goals for their learning and then attempt to monitor, regulate,
and control” their cognitive and metacognitive processes in the service of
those goals (Pintrich, 2000, p. 453). More specifically, MetaTutor adopts
Winne and Hadwin’s IPT model and is based on several theoretical
assumptions of SRL that emphasize the role of cognitive, metacognitive
(conceptualized as being subsumed under SRL, cf. Veenman, 2007),
motivational, and affective processes. Moreover, learners must regulate
their cognitive and metacognitive processes in order to integrate multiple
informational representations available from the system (Azevedo, 2008,
2009; Azevedo et al., 2011; Mayer, 2005). While all students have the
potential to regulate, few students do so effectively, possibly due to
inefficient or lacking cognitive or metacognitive strategies, knowledge, or
control (Dunlosky & Bjork, 2008; Pressley & Hilden, 2006; Veenman,
2007).
Embedded in MetaTutor are a multitude of features that embody and
foster SRL. Four pedagogical agents guide students through the two-hour
learning session and prompt students to engage in planning, monitoring,
and strategic learning behaviors. The system uses natural language
processing to allow learners to express metacognitive monitoring and
control processes. For example, learners can type that they do not
understand a paragraph and can also use the interface to summarize a
static illustration related to the circulatory system. In addition, the agents
can provide feedback and engage in a tutorial dialogue in an attempt to
scaffold students’ selection of appropriate learning sub-goals, accuracy
of metacognitive judgments, and use of particular learning strategies.
Additionally, MetaTutor collects information from learners’ interactions with
the system to provide adaptive feedback on their deployment of SRL
behaviors. For example, students can be prompted to self-assess their
understanding (i.e., system-initiated JOL) and are then administered a
brief quiz. Results from the self-assessment and quiz allow pedagogical
agents to provide adaptive feedback according to the calibration between
students’ confidence of comprehension and their actual quiz performance.
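As an illustration of the calibration logic just described, a system can compare a learner's confidence judgment with quiz performance and branch its feedback accordingly. The sketch below is a minimal example under our own assumptions (hypothetical thresholds and messages); MetaTutor's actual feedback rules are more sophisticated and are not reproduced here.

```python
# Minimal sketch of calibration-based feedback: compare a learner's confidence
# (system-initiated JOL) with quiz performance and select feedback.
# Hypothetical thresholds and messages; not MetaTutor's actual implementation.

def calibration_feedback(confidence: float, quiz_score: float,
                         threshold: float = 0.25) -> str:
    """confidence and quiz_score are proportions in [0, 1]."""
    bias = confidence - quiz_score
    if abs(bias) <= threshold:
        return "Your confidence matches your performance; continue with your plan."
    if bias > 0:
        return ("You were more confident than your quiz score warrants; "
                "try re-reading and summarizing this section before moving on.")
    return "You knew more than you thought; consider moving on to a new sub-goal."

if __name__ == "__main__":
    print(calibration_feedback(confidence=0.9, quiz_score=0.4))  # overconfident case
```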
The design layout also supports SRL processes. As depicted on the
right in Figure 7.1, an embedded palette provides the opportunity for
learners to initiate an interaction with the system according to the SRL
process selected (e.g., take notes). Overall, and in line with its theoretical
foundations, MetaTutor supports and fosters a variety of SRL processes
including: prior knowledge activation, goal setting, evaluation of learning
strategies, integrating information across representations, content
evaluation, summarization, note-taking, and drawing. Importantly, it also
scaffolds specific metacognitive processes such as judgments of learning,
feelings of knowing, and monitoring progress towards goals.
Figure 7.1 Screenshot and brief descriptions of MetaTutor learning environment

Some aspects of the espoused theoretical models of SRL are yet to be
implemented. Initially, the theoretical and empirical foci have been on
cognitive, metacognitive, and behavioral learning processes. Thus, this
SCLE does not extensively incorporate the motivational and affective
dimensions of SRL into its design. Moving forward, the varieties and
regulation of learners’ affective processes, the affective qualities of
human-agent interaction, and how the system and learners’ self-regulation
influence the activation, awareness, and protection of motivation will
be areas of interest with important implications for SRL theory and
instructional design.
Betty’s Brain
Betty’s Brain (Biswas, Leelawong, Schwartz, & Vye, 2005; Biswas et al.,
2010; Leelawong & Biswas, 2008) is an agent-based learning environment
developed to help students learn about complex topics in middle school
science classrooms. Learning takes place by students performing a
knowledge construction task in which they teach a virtual agent, called
Betty, using a visual representation called a causal map. The causal map
includes concepts and causal links between pairs of concepts in the
relevant science domain, like ecology and thermoregulation. Students can
access the science content, which is available in hypertext, to identify the
relationships between the concepts during their learning task. They can
also ask Betty questions about the cause-and-effect relationships they just
created in the causal map to see if she understands what she has been
taught, to which Betty responds by explaining her chain of reasoning using
text and animation schemes. Betty’s understanding can also be checked by
asking her to take quizzes, which are administered by the Mentor Agent,
Mr Davis, in the learning environment. Mr Davis grades Betty’s responses
based on a hidden “expert” concept map built into the system,
which is not visible to the student or Betty (Biswas et al., 2010). In the
event that Betty makes a mistake, students can browse the hypertext
content or get hints from the Mentor Agent, Mr Davis, to teach Betty the
correct causal relationship (see Figure 7.2).
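To illustrate the kind of representation a causal map involves, concepts can be treated as nodes in a directed graph with signed causal links, and a query can be answered by chaining link signs along a path. The sketch below is a simplified illustration under our own assumptions (the concept names and the reasoning procedure are hypothetical), not the actual Betty's Brain implementation.

```python
# Simplified sketch of a causal map: signed, directed links between concepts,
# plus a naive chain-of-reasoning query that multiplies link signs along one path.
# Illustrative only; not the Betty's Brain implementation.

CAUSAL_LINKS = {
    # (cause, effect): +1 means "increases", -1 means "decreases"
    ("algae", "dissolved_oxygen"): +1,
    ("dissolved_oxygen", "fish_population"): +1,
    ("waste", "bacteria"): +1,
    ("bacteria", "dissolved_oxygen"): -1,
}

def explain_effect(cause, effect, links=CAUSAL_LINKS, visited=None):
    """Return (sign, path) for one causal chain from cause to effect, or None."""
    visited = visited or set()
    if (cause, effect) in links:
        return links[(cause, effect)], [cause, effect]
    for (src, dst), sign in links.items():
        if src == cause and dst not in visited:
            rest = explain_effect(dst, effect, links, visited | {dst})
            if rest:
                return sign * rest[0], [cause] + rest[1]
    return None

if __name__ == "__main__":
    result = explain_effect("waste", "fish_population")
    if result:
        sign, path = result
        direction = "increases" if sign > 0 else "decreases"
        print(f"{' -> '.join(path)}: waste {direction} fish_population")
```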
According to Kinnebrew, Biswas, and Sulcer (2010), one of the
goals of the Betty’s Brain project was to find out the degree to which
the agents’ metacognitive and SRL prompts could help improve student
learning. They also developed methods to identify and interpret students’
learning strategies based on the trace of their interactions with the system.
Moreover, Biswas and his research team sought to determine students’
acceptance of the strategies recommended by the agents, and how the
feedback provided by the agents influenced their learning activities.
Betty’s Brain utilizes learning-by-teaching and social constructive
learning frameworks (Schunk, 2005; Zimmerman & Schunk, 2001) and
helps students learn science and mathematics topics in a self-directed
and open-ended setting (Kinnebrew et al., 2010). In the learning-by-
teaching method, students take the role of teaching a virtual student,
Betty, which is believed to lead to the development of sophisticated
metacognitive strategies in students. Kinnebrew et al. (2010) also indicate
that this approach is a less threatening way for students to assess their
own understanding. The theory underlying the design of Betty’s Brain
is the model of self-regulation proposed by Pintrich (2002), where he
differentiates between two major aspects of metacognition for learners:
metacognitive knowledge and metacognitive control. Kinnebrew et
al. (2010) further explain their adoption of Pintrich’s (2002) model
by classifying the knowledge construction strategies into information
seeking and information structuring, and the monitoring strategies into
checking (querying or quizzing to test the correctness of one’s causal
map) and probing (asking for explanations and identifying errors). They
also believe that Betty’s Brain supports five types of activities: reading
(the hypermedia content), editing (teaching concepts to Betty), querying
(asking Betty questions), explaining (prompting Betty to explain her
reasoning), and quizzing (having Betty take quizzes). Two interactive
factors of the Betty’s Brain environment are believed to support students’
self-regulation of learning: the visual shared representation used to teach
Betty and the shared responsibility, which refers to the joint responsibility
of teaching and learning between the student and Betty (Biswas, Roscoe,
Jeong & Sulcer, 2009).
Figure 7.2 Betty’s Brain system with the query window (Kinnebrew, Biswas, Sulcer, & Taylor, in press)

According to Biswas and Sulcer (2010), the detailed log files collected
by the Betty’s Brain learning environment, which record quizzes taken,
resources visited, queries made, and explanations requested, contain a great
deal of irrelevant data, which is hard to interpret for pedagogical purposes and
for deciding on the type of feedback that the agent should give students
to assist them in scaffolding their learning. They also argue that screen
recording videos contain a great deal of irrelevant and distracting detail,
which makes the analysis difficult. They attribute the problem to the
fact that Betty’s Brain is an open-ended learning environment, in which
students have many different ways to solve the learning task. However,
detailed log file data has the potential to provide interesting trace data
regarding students’ metacognitive calibration during the learning session,
and can also be used to investigate how students set goals, monitor their
learning, use different strategies, and remedy their lack of understanding.
These data sources further assist researchers, designers, and teachers in
better understanding when and how to scaffold students’ learning and
provide timely and appropriate feedback.
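As a simple illustration of how such trace data might be summarized, one can count the five activity types mentioned above across a sequence of logged events to characterize a student's activity profile. The sketch below assumes a made-up log format; the actual Betty's Brain log schema and the team's sequence-mining methods are more elaborate.

```python
# Hypothetical sketch: summarize a student's activity profile from logged events.
# Assumes a made-up log format of (timestamp_seconds, action) tuples;
# the actual Betty's Brain log schema is different.

from collections import Counter

ACTIVITY_TYPES = {"read", "edit_map", "query", "explain", "quiz"}

def activity_profile(events):
    """Count the five activity types described above, ignoring other events."""
    counts = Counter(action for _, action in events if action in ACTIVITY_TYPES)
    return {activity: counts.get(activity, 0) for activity in sorted(ACTIVITY_TYPES)}

if __name__ == "__main__":
    log = [(12, "read"), (95, "edit_map"), (140, "query"),
           (150, "explain"), (300, "quiz"), (310, "read")]
    print(activity_profile(log))
    # {'edit_map': 1, 'explain': 1, 'query': 1, 'quiz': 1, 'read': 2}
```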
ThinkerTools: Scaffolding Self-Regulated Learning to Promote
Collaborative Inquiry
The ThinkerTools Research Group has developed a suite of SCLEs
that aim to promote collaborative inquiry and SRL by scaffolding
these processes through student–agent interactions. The multi-agent
systems developed by this group include SCI-WISE (Shimoda, White,
& Frederiksen, 2002; White, Shimoda, & Frederiksen, 1999), Web of
Inquiry, and Inquiry Island (White & Frederiksen, 2005; White et al.,
2002, 2009). These systems are successors to the original ThinkerTools
Force and Motion software (White, 1993), which aimed to help improve
young students’ understanding of Newtonian physics principles using
interactive simulations and reflective learning.
ThinkerTools uses collaborative inquiry as a platform for SRL and
metacognitive development. Following the inquiry cycle, groups of
students are guided through the process of question and hypothesis
development, experimentation, modeling, application, and evaluation,
which then leads back to the generation of new questions in a cyclical
manner (White & Frederiksen, 1998, 2005). To effectively support this
type of learning, White et al. (2009) argue that metacognitive skills are
critical. Given their emphasis on promoting the forethought, performance,
and self-reflection phases of SRL in collaborative environments (White &
Frederiksen, 2005), the design of ThinkerTools is grounded in a social-
cognitive model of SRL (e.g., Schunk & Zimmerman, 1998; Zimmerman
& Schunk, 2001). Several features embedded in these systems serve to
support this SRL framework.
To begin with, the intelligent agents within these systems are designed
to provide explicit models of cognitive, metacognitive, and social
processes. A team of agents, referred to as software advisors, is available
to offer students advice and strategies that prompt them to engage in
SRL processes such as planning, monitoring, reflecting, and revising
throughout each stage of inquiry (White et al., 2009). Additional features
of these programs include goal sliders, project journals, progress reports
and research notebooks, which allow students to track and assess their
progress and task products, such as models or questions they have created
(Shimoda et al., 2002; White & Frederiksen, 2005).
Figure 7.3 SCI-WISE interface: screenshots of the project journal, meeting room, and software advisor (White, Shimoda, & Frederiksen, 1999)

In the more recent developments, such as Inquiry Island and SCI-WISE
(see Figure 7.3), students can improve agent interactions by modifying
advice settings, such as the content and timing of the advice that they
receive (Shimoda et al., 2002; White & Frederiksen, 2005). The Inquiry
Island curriculum has also extended its pedagogical approach by providing
opportunities for students to adopt advisor responsibilities through role-
play (beyond the use of the system)—a process that encourages students
to internalize the self-regulatory skills modeled by the agents. Combined,
these features allow students to build and modify their own models of
SRL and apply them to new contexts (White & Frederiksen, 2005).
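The advice settings mentioned above (the content and timing of advisor prompts) can be thought of as a small, learner-editable configuration. The sketch below is purely hypothetical and does not reflect the actual SCI-WISE or Inquiry Island interfaces; the field names and timing options are our own assumptions.

```python
# Purely hypothetical sketch of learner-editable advisor settings
# (content and timing of advice); not the actual SCI-WISE/Inquiry Island design.

from dataclasses import dataclass

@dataclass
class AdvisorSettings:
    advisor: str               # e.g., "Planning Advisor"
    topics: tuple              # which kinds of advice to show
    timing: str                # "on_request", "phase_start", or "periodic"
    interval_minutes: int = 0  # only used when timing == "periodic"

    def should_advise(self, event: str, minutes_since_last: int) -> bool:
        if self.timing == "on_request":
            return event == "help_requested"
        if self.timing == "phase_start":
            return event == "phase_started"
        return minutes_since_last >= self.interval_minutes  # periodic timing

if __name__ == "__main__":
    settings = AdvisorSettings("Planning Advisor",
                               ("goal setting", "hypothesis development"),
                               timing="phase_start")
    print(settings.should_advise("phase_started", minutes_since_last=0))  # True
```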
Empirical findings suggest that these inquiry-based systems serve as
effective metacognitive tools. For instance, results from an inquiry test
revealed that fifth-grade students who used Inquiry Island demonstrated
significantly higher scores than a comparison group that did not participate
(White & Frederiksen, 2005). Moreover, in terms of SRL, those who
used Inquiry Island and engaged in subsequent role-play activities showed
significant improvements in metacognitive knowledge compared with
those who did not participate. This finding, along with analyses of
student dialogues, interview responses, and answers to test questions
suggest that when ThinkerTools programs are coupled with role-playing
activities, students demonstrate improved understanding of the purpose
and applications of metacognition and SRL (White & Frederiksen, 2005;
White et al., 2009).
ThinkerTools learning outcomes have typically been assessed using
data from interviews, observations, metacognitive knowledge tests, and
analysis of artifacts, such as research projects. While this provides useful
information, these types of SCLEs could also benefit from finer grained
analysis by incorporating trace methodologies to capture SRL processes
as they unfold over time (admittedly, this becomes more challenging in
collaborative environments). This could include assessing micro-level
metacognitive and SRL processes, such as judgments of learning and
feelings of knowing. Furthermore, affective and motivational dimensions
of SRL do not appear to be directly integrated into agents’ metacognitive
expertise. Still, the ThinkerTools curriculum represents an important
shift toward authentic learning environments by providing students with
opportunities to internalize SRL processes through system modifications
and role-play. Such an approach differs from other SCLEs in that it
provides students with more autonomy to design and apply their personal
model of SRL.
Crystal Island
Crystal Island is an innovative SCLE that deploys a rich, narrative-
centered, inquiry-based approach to teaching eighth-grade microbiology
(McQuiggan, Rowe, Lee, & Lester, 2008; Nietfeld, Hoffman,
McQuiggan, & Lester, 2008). An additional distinguishing feature of
Crystal Island is its game-like environment, which benefits both from the
developed storyline and the use of Valve Software’s Source™ engine, the
same engine used by the Half-Life 2 game. Narrative-centered learning
environments (NLEs) combine story contexts and pedagogical support
strategies, including the use of artificial intelligence (AI) techniques, to
deliver engaging, educational experiences where both the narrative and
educational content are tailored to students’ actions, metacognitive and
affective states, and abilities (McQuiggan et al., 2008) (see Figure 7.4).

Figure 7.4 Crystal Island narrative-centered learning environment (Rowe, Shore, Mott, & Lester, 2010)
Crystal Island has been used to investigate the roles and interactions
of a variety of processes and phenomena in a narrative-centered learning
environment, including inquiry-based learning, human–computer
interaction, affect, engagement, presence, perception of control and a
number of metacognitive and SRL elements and strategies (see McQuiggan
et al. 2008; McQuiggan, Goth, Ha, Rowe & Lester, 2007a; McQuiggan,
Hoffman, Nietfeld, Robinson, & Lester, 2008a; McQuiggan, Lee, &
Lester, 2007; Mott & Lester, 2006; Nietfeld et al. 2008; Robinson,
McQuiggan, & Lester, 2010).
Crystal Island’s ability to foster SRL (Butler & Winne, 1995; Greene &
Azevedo, 2007; Pintrich, 2000; Winne & Hadwin, 1998; Zimmerman,
2000) has been investigated by looking at goal orientations (Elliot &
Dweck, 1988; Elliot & McGregor, 2001), situational interest (Schraw &
Lehman, 2001) and metacognitive and SRL strategies such as note taking
and metacognitive monitoring (Butler & Winne, 1995; Winne, 2001). The
majority of this research comes from recent studies, such as McQuiggan
et al. (2008b), which examined several types of note taking and their
relationship with outcome and post-test measures. They found that students
who took in-game, hypothesis-type notes (containing a possible solution
to the problem in the Crystal Island storyline) performed significantly
better on post-test measures than students who did not. Furthermore,
hypothesis-type notes were significantly and positively correlated with
students’ self-efficacy scores on Bandura’s (2006) Self-Efficacy for SRL
scale. Another SRL strategy Crystal Island deployed was metacognitive
monitoring, where participants were prompted at 90-second intervals to
evaluate their progress towards accomplishing their goals. Nietfeld and
colleagues (2008) found that students’ evaluations were highly correlated
with their performance, including score and the number of actions and goals
completed, and negatively correlated with the number of guesses. Goals were generated by
the game, the most superordinate of which was to identify the source of
an outbreak on the island. Nietfeld et al. (2008) found that presenting the
goal of the game as learning-oriented rather than performance-oriented
led learners to report significantly higher levels of interest towards the
game.
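The 90-second monitoring prompt described above can be illustrated with a simple scheduler that pairs each self-evaluation with the learner's actual progress at that moment, so that calibration can be examined afterwards. This is our own toy sketch with simulated inputs, not Crystal Island's implementation.

```python
# Toy sketch of interval-based metacognitive monitoring prompts: every 90 seconds
# the learner rates progress, and the rating is stored alongside actual progress.
# Hypothetical; not Crystal Island's code.

PROMPT_INTERVAL_SECONDS = 90

def collect_monitoring_judgments(session_seconds, progress_at, self_rating_at):
    """progress_at and self_rating_at are callables mapping time -> value in [0, 1]."""
    records = []
    t = PROMPT_INTERVAL_SECONDS
    while t <= session_seconds:
        records.append({"time": t,
                        "self_rating": self_rating_at(t),
                        "actual_progress": progress_at(t)})
        t += PROMPT_INTERVAL_SECONDS
    return records

if __name__ == "__main__":
    # Simulated learner: actual progress grows linearly; self-ratings are optimistic.
    records = collect_monitoring_judgments(
        session_seconds=360,
        progress_at=lambda t: min(1.0, t / 600),
        self_rating_at=lambda t: min(1.0, t / 450),
    )
    for r in records:
        print(r)
```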
These initial results highlight Crystal Island’s capacity as an environment
designed to foster SRL and metacognitive skills and processes through
various aspects of its narrative and game-like architecture. Future
directions for Crystal Island may include expanding the scope of the
SRL and metacognitive processes and strategies measured and evaluated
as well as moving from measuring to scaffolding students’ SRL and
metacognitive skill development. One approach to doing this, which
would be in line with Crystal Island’s narrative-based approach, would
be to have the non-playable characters (NPCs) act as in-game SRL and
metacognitive tutors. In this capacity these agents would try to help the
student’s avatar solve the problem “they” collectively face by equipping
him/her with cognitive and metacognitive problem-solving skills. For
example, a brief tutorial of effective note taking could be provided at
the beginning of the game and followed up throughout the serious game
with NPCs prompting note-taking as well as evaluating the quality of the
notes (e.g. verbatim, summary, unrelated). NPCs could also evaluate the
effectiveness of students’ confidence ratings in the monitoring task as well
as prompt them to monitor their feeling of knowing, use of strategies
(e.g. note taking, summarization, help seeking), and progress towards
sub-goals. Other elements of SRL and metacognition, such as planning,
sub-goal setting and prior-knowledge activation could also be integrated
in the form of NPC plot-relevant dialogue. Indeed, Crystal Island has
demonstrated that it is a unique and promising potential test bed for SRL
and metacognitive theory and research.
Conclusions and Future Implications
This chapter has highlighted the importance of metacognition and SRL
when using SCLEs for learning complex and challenging topics and
domains. We continue to advocate that these processes are key to learning
and that SCLEs need to be able to detect, track, model, support, and foster
these processes during learning episodes. It would be unwise to expect
that all SCLEs have these capabilities for several reasons. However, it is
important to highlight that SCLEs need to be designed based on sound
assumptions, frameworks, models, and theories of metacognition and
SRL. A principled, theoretically-based foundation is the key to the design
of these systems in order for them to support and foster students’ SRL. We
have provided an overview of the SRL assumptions commonly accepted
by researchers from various theoretical orientations, provided an in-
depth description of Winne and Hadwin’s model, described specific key
metacognitive monitoring and self-regulatory processes related to learning
with several SCLEs, and exemplified the embodiment of metacognition
and SRL by providing examples of four contemporary SCLEs.
Future work in the area of SRL and SCLEs needs to address several
outstanding issues. First, issues related to the learning context need to
be clearly described and accounted for by the learner and the learning
environment. In this category, several variables of interest need to be
addressed:
1 the learning goal(s) (e.g., provision of a challenging learning goal(s),
self- or other-generated goal(s));
2 the accessibility of instructional resources (e.g., accessibility to these
resources to facilitate goal attainment, engaging in help-seeking
behavior and scaffolding while consulting resources);
3 dynamic interactions between the learner and other external or in-
system regulating agents (e.g., pedagogical agents’ role(s), levels of
interaction, scaffolding, feedback, embodiment of modeling and
scaffolding, and fading metaphor behaviors); and
4 the role of assessment in enhancing performance, learning,
understanding, and problem solving (e.g., the type of assessment,
the timing of assessment, and whether it targets metacognitive knowledge and
regulatory skills, or conditional knowledge for the use of SRL skills).
The second set of issues is related to the learners’ cognitive and
metacognitive SRL knowledge and skills. Several issues need to be
addressed, including:
1 What self-regulatory strategies are students knowledgeable about?
How much practice have they had in successfully using them?
2 How familiar are students with the tasks they are being asked to
complete? Are they familiar with the various aspects of the context
and learning system they are being asked to use?
3 What are students’ levels of prior knowledge? What impact will it
have on a learner’s ability to self-regulate?
4 Do students have the necessary declarative, procedural, and
conditional knowledge essential to regulate their learning? Will the
learning system offer opportunities for learning about these complex
processes? Will the environment provide opportunities for students
to practice and receive feedback about these opportunities?
5 What are students’ self-efficacy, interest, task value, and goal
orientations, which may influence their ability to self-regulate?
6 Are students able to monitor and regulate their emotional states
during learning?
The third set of issues relates to the characteristics and features of the SCLE. Several questions need to be addressed, including:
1 What are the instructional goal(s) and structure of the system (e.g., using open-ended hypermedia to acquire a mental model of a science topic, engaging in a tutorial dialogue to refine a misconception, or using inquiry strategies to understand a particular scientific phenomenon)?
2 What is the role of multiple representations (e.g., what kinds of
external representations are afforded by the environment? How
many types of representations exist? Are they associated with each
other (to facilitate integration) or are they embedded in some random
fashion (potentially causing extraneous cognitive load)? Are the
representations static (e.g., diagrams), dynamic (e.g., animations), or
both? Are students allowed to construct their own representations?
If so, are they used (by the system or some other external regulating
agent) to assess emerging understanding? Or, are they just artifacts
that may show the evolution of students’ understanding, problem
solving, learning, etc.? Or is the purpose for learners to off-load their representations and thereby free working memory resources?);
3 What are the types of interactivity between the learner and the SCLE (and other contextually embedded external agents)? Are there different levels of learner control? Is the system purely learner-controlled, and therefore reliant on the learner's ability to self-regulate, or is it adaptive, externally regulating and supporting students' SRL through complex AI algorithms that provide SRL scaffolding and feedback?
4 What types of scaffolding exist? For example, what is the role of external regulating agents? Do they provide cognitive and metacognitive strategies? Do they play different roles (e.g., scaffolding, modeling)? Is their role to monitor or model students' emerging understanding, and does it facilitate knowledge acquisition, provide meaningful feedback, and successfully scaffold learning? Are these scaffolding strategies what we would expect from artificial pedagogical agents? Do the levels of scaffolding remain constant during learning, fade over time, or fluctuate? When do these agents intervene, and how do they deliver their interventions (e.g., verbally, through conversation, gestures, facial expressions, or a dialogue system)? (One possible form such an adaptive, fading scaffolding policy might take is sketched after this list.)
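To illustrate the kind of adaptive, fading scaffolding policy that items 3 and 4 point towards, the sketch below shows one simple way an external regulating agent might decide when and how strongly to intervene. The threshold values, intervention levels, and function names are hypothetical assumptions for illustration, not a description of any existing system's algorithms.

```python
from dataclasses import dataclass
from enum import IntEnum

class ScaffoldLevel(IntEnum):
    NONE = 0   # full learner control: the learner self-regulates without support
    HINT = 1   # light support: a brief metacognitive prompt (e.g., "How sure are you about that?")
    MODEL = 2  # heavy support: the agent models the strategy step by step

@dataclass
class LearnerState:
    monitoring_accuracy: float  # 0-1, e.g., calibration of confidence judgments
    strategy_use_rate: float    # 0-1, proportion of opportunities where a strategy was used
    successful_sessions: int    # recent sessions in which sub-goals were met without agent help

def choose_scaffold(state: LearnerState) -> ScaffoldLevel:
    """Select an intervention level; support fades as self-regulation improves."""
    if state.successful_sessions >= 3:
        return ScaffoldLevel.NONE   # fade: hand control back to the learner
    if state.monitoring_accuracy < 0.5 or state.strategy_use_rate < 0.3:
        return ScaffoldLevel.MODEL  # model the strategy for a struggling learner
    return ScaffoldLevel.HINT       # otherwise prompt lightly, then step back

# A struggling learner early in the task would receive modeling,
# whereas a consistently successful learner would be left to self-regulate.
print(choose_scaffold(LearnerState(0.4, 0.2, 0)).name)  # MODEL
print(choose_scaffold(LearnerState(0.8, 0.7, 4)).name)  # NONE
```

Even a policy this simple embodies the modeling, scaffolding, and fading metaphor: support is heaviest when monitoring and strategy use are weak, and it is withdrawn as the learner demonstrates independent regulation.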
In sum, there are endless possibilities for the future design of SCLEs
that embody metacognition and SRL.
References
Aleven, V., Roll, I., McLaren, B., & Koedinger, K. (2010). Automated,
unobtrusive, action-by-action assessment of self-regulation during learning
with an intelligent tutoring system. Educational Psychologist, 45(4), 224–233.
Azevedo, R. (2005). Computers as metacognitive tools for enhancing learning.
Educational Psychologist, 40(4), 193–197.
Azevedo, R. (2007). Understanding the complex nature of self-regulated learning
processes in learning with computer-based learning environments: An
introduction. Metacognition & Learning, 2(2/3), 57–65.
Azevedo, R. (2008). The role of self-regulation in learning about science
with hypermedia. In D. Robinson & G. Schraw (Eds.), Recent innovations
in educational technology that facilitate student learning (pp. 127–156).
Charlotte, NC: Information Age Publishing.
Azevedo, R. (2009). Theoretical, methodological, and analytical challenges in the
research on metacognition and self-regulation: A commentary. Metacognition
& Learning, 4, 87–95.
Azevedo, R., & Witherspoon, A. M. (2009). Self-regulated learning with hypermedia. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 319–339). New York: Routledge.
Azevedo, R., Greene, J. A., & Moos, D. C. (2007). The effect of a human agent's external regulation upon college students' hypermedia learning. Metacognition and Learning, 2(2–3), 67–87.
Azevedo, R., Johnson, A., Chauncey, A., & Burkett, C. (2010). Self-regulated
learning with MetaTutor: Advancing the science of learning with MetaCognitive
tools. In M. Khine & I. Saleh (Eds.), New science of learning: Computers,
cognition, and collaboration in education (pp. 225–247). Amsterdam: Springer.
Azevedo, R., Johnson, A. M., Chauncey, A., & Graesser, A. (2011). Use of hypermedia to assess and convey self-regulated learning. In B. Zimmerman & D. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 102–121). New York: Routledge.
Azevedo, R., Cromley, J. G., Moos, D. C., Greene, J. A., & Winters, F. I. (2011). Adaptive content and process scaffolding: A key to facilitating students' learning with hypermedia. Psychological Test and Assessment Modeling, 53(1), 106–140.
Azevedo, R., Moos, D. C., Greene, J. A., Winters, F. I., & Cromley, J. G. (2008).
Why is externally-facilitated regulated learning more effective than self-
regulated learning with hypermedia? Educational Technology Research and
Development, 56(1), 45–72.
Azevedo, R., Moos, D. C., Johnson, A. M., & Chauncey, A. D. (2010). Measuring
cognitive and metacognitive regulatory processes during hypermedia learning:
Issues and challenges. Educational Psychologist, 45, 210–223.
Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (pp. 307–337). Greenwich, CT: Information Age Publishing.
Biswas, G., & Sulcer, B. (2010). Visual exploratory data analysis methods to characterize student progress in intelligent learning environments. In Proceedings of the 2010 International Conference on Technology for Education (T4E) (pp. 114–121). Mumbai, India.
Biswas, G., Jeong, H., Kinnebrew, J., Sulcer, B., & Roscoe, R. (2010). Measuring self-regulated learning skills through social interactions in a teachable agent environment. Research and Practice in Technology-Enhanced Learning, 5(2), 123–152.
Biswas, G., Leelawong, K., Schwartz, D., & Vye, N. (2005). Learning by teaching: A new agent paradigm for educational software. Applied Artificial Intelligence, 19(3), 363–392.
Biswas, G., Roscoe, R., Jeong, H., & Sulcer, B. (2009). Promoting self-regulated learning skills in agent-based learning environments. Proceedings of the 17th International Conference on Computers in Education. Hong Kong: Asia-Pacific Society for Computers in Education.
Boekaerts, M., Pintrich, P., & Zeidner, M. (Eds.). (2000). Handbook of self-regulation. San Diego, CA: Academic Press.
Butler, D., & Winne, P. (1995). Feedback and self-regulated learning: A theoretical
synthesis. Review of Educational Research, 65(3), 245–281.
Clarebout, G., Horz, H., & Elen, J. (2009). The use of support devices in electronic
learning environments. Computers in Human Behavior, 25(4), 793–794.
Dunlosky, J., & Bjork, R. (Eds.). (2008). Handbook of metamemory and memory. New York: Taylor & Francis.
Dunlosky, J., & Lipko, A. R. (2007). Metacomprehension: A brief history and how to improve its accuracy. Current Directions in Psychological Science, 16(4), 228–232.
Efklides, A. (2011). Interactions of metacognition with motivation and affect
in self-regulated learning: The MASRL model. Educational Psychologist, 46,
6–25.
Elliot, A., & McGregor, H. A. (2001). A 2 × 2 achievement goal framework.
Journal of Personality and Social Psychology, 80, 501–519.
Elliot, E. S., & Dweck, C. S. (1988). Goals: An approach to motivation and achievement. Journal of Personality and Social Psychology, 54(1).
Graesser, A. C., & McNamara, D. S. (2010). Self-regulated learning in learning
environments with pedagogical agents that interact in natural language.
Educational Psychologist, 45, 234–244.
Greene, J. A., & Azevedo, R. (2007). A theoretical review of Winne and Hadwin’s
model of self-regulated learning: New perspectives and directions. Review of
Educational Research, 77, 334–372.
Greene, J. A., & Azevedo, R. (2009). A macro-level analysis of SRL processes and
their relations to the acquisition of sophisticated mental models. Contemporary
Educational Psychology, 34, 18–29.
Greene, J. A., & Azevedo, R. (2010). The measurement of learners' self-regulated cognitive and metacognitive processes while using computer-based learning environments. Educational Psychologist, 45(4), 203–209.
Hacker, D., Dunlosky, J., & Graesser, A. (Eds.). (2009). Handbook of metacognition in education. New York: Routledge.
Kinnebrew, J., Biswas, G., & Sulcer, B. (2010). Measuring self-regulated learning
skills through social interactions in a teachable agent environment. AAAI Fall
Symposium on Cognitive and Metacognitive Educational Systems (MCES),
Arlington, VA.
Kinnebrew, J., Biswas, G., Sulcer, B., & Taylor, R. (in press). Investigating self-
regulated learning in teachable agent environments. In R. Azevedo & V. Aleven
(Eds.), International Handbook of Metacognition and Learning Technologies.
Amsterdam, The Netherlands: Springer.
Leelawong, K., & Biswas, G. (2008). Designing learning by teaching agents: The Betty's Brain system. International Journal of Artificial Intelligence in Education, 18(3), 181–208.
Mayer, R. E. (2005). Cognitive theory of multimedia learning. In R. E. Mayer
(Ed.), The Cambridge handbook of multimedia learning (pp. 31–48). New
York: Cambridge University Press.
McQuiggan, S., Lee, S., & Lester, J. (2007). Early prediction of student frustration. In A. Paiva, R. Prada, & R. Picard (Eds.), Affective computing and intelligent interaction (pp. 698–709). Berlin, Germany: Springer.
McQuiggan, S., Rowe, J., Lee, S., & Lester, J. (2008). Story-based learning: The impact of narrative on learning experiences and outcomes. In B. Woolf, E. Aïmeur, R. Nkambou, & S. Lajoie (Eds.), Intelligent tutoring systems (pp. 530–539). Berlin, Germany: Springer.
McQuiggan, S., Goth, J., Ha, E., Rowe, J., & Lester, J. (2008a). Student note-taking in narrative-centered learning environments: Individual differences and learning effects. In B. Woolf, E. Aïmeur, R. Nkambou, & S. Lajoie (Eds.), Intelligent tutoring systems (pp. 510–519). Berlin, Germany: Springer.
McQuiggan, S., Hoffman, K. L., Nietfeld, J. L., Robinson, J. L., & Lester, J.
(2008b). Examining self-regulated learning in a narrative-centered learning
environment: An inductive approach to modeling metacognitive monitoring.
In Proceedings of the ITS’08 Workshop on Metacognition and Self-Regulated
Learning in Educational Technologies, Montreal, Canada.
Metcalfe, J., & Dunlosky, J. (2009). Metacognition: A textbook for cognitive, educational, life span & applied psychology. Thousand Oaks, CA: Sage.
Moos, D. C., & Azevedo, R. (2008). Exploring the fluctuation of motivation and use of self-regulatory processes during learning with hypermedia. Instructional Science, 36(3), 203–231.
Mott, B., & Lester, J. (2006). Narrative-centered tutorial planning for inquiry-based learning environments. In M. Ikeda, K. Ashley, & T. W. Chan (Eds.), Proceedings of the Intelligent Tutoring Systems Conference (pp. 675–684). Berlin, Germany: Springer.
Nietfeld, J., Hoffman, K., McQuiggan, S., & Lester, J. (2008). Self-regulated learning in a narrative-centered learning environment. Proceedings of the World Conference on Educational Multimedia, Hypermedia, and Telecommunications. Vienna, Austria.
Paris, S. G., & Paris, A. H. (2001). Classroom applications of research on self-
regulated learning. Educational Psychologist, 36(2), 89–101.
Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). San Diego, CA: Academic Press.
Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing. Theory Into Practice, 41(4), 219–225.
Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.
Pintrich, P. R., Wolters, C., & Baxter, G. (2000). Assessing metacognition and self-
regulated learning. In G. Schraw & J. Impara (Eds.), Issues in the measurement
of metacognition (pp. 43–97).
Pressley, M., & Hilden, K. (2006). Cognitive strategies. In D. Kuhn & R. S. Siegler
(Eds.), Handbook of child psychology: Volume 2: Cognition, perception, and
language (6th edn., pp. 511–556). Hoboken, NJ: Wiley.
Robinson, J., McQuiggan, S., & Lester, J. (2010). Developing empirically based student personality profiles for affective feedback models. In V. Aleven, J. Kay, & J. Mostow (Eds.), Intelligent tutoring systems (pp. 285–295). Berlin, Germany: Springer.
Rowe, J., Shores, L., Mott, B., & Lester, J. (2010). Integrating learning and
engagement in narrative-centered learning environments. Intelligent Tutoring
Systems, Lecture Notes in Computer Science, 6095, 166–177.
Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions for future research. Educational Psychology Review, 13(1), 23–52.
Schunk, D. (2001). Social cognitive theory of self-regulated learning. In B.
Zimmerman & D. Schunk (Eds.), Self-regulated learning and academic
achievement: Theoretical perspectives (pp. 125–152). Mahwah, NJ: Erlbaum.
Schunk, D. (2005). Self-regulated learning: The educational legacy of Paul R.
Pintrich. Educational Psychologist, 40, 85–94.
Schunk, D., & Zimmerman, B. (Eds.). (1998). Self-regulated learning: From
teaching to self-reflective practice. New York: Guilford.
Shimoda, T., White, B., & Frederiksen, J. (2002). Student goal orientation in
learning inquiry skills with modifiable software advisors. Science Education,
86, 244–263.
Veenman, M. (2007). The assessment and instruction of self-regulation in
computer-based environments: A discussion. Metacognition and Learning, 2,
177–183.
White, B. Y. (1993). Causal models, conceptual change and science education.
Cognition and Instruction, 10, 1–100.
White, B. Y., & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16, 3–118.
White, B. Y., & Frederiksen, J. R. (2005). A theoretical framework and approach
for fostering metacognitive development. Educational Psychologist, 40, 211–
233.
White, B. Y., Frederiksen, J. R., & Collins, J. (2009). The interplay of scientific inquiry and metacognition. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 175–205). New York: Routledge.
White, B., Frederiksen, J., Frederiksen, T., Eslinger, E., Loper, S., & Collins, A.
(2002). Inquiry Island: Affordances of a multi-agent environment for scientific
inquiry and reflective learning. In P. Bell, R. Stevens & T. Satwicz (Eds.),
Proceedings of the Fifth International Conference of the Learning Sciences
(ICLS). Mahwah, NJ: Erlbaum.
White, B. Y., Shimoda, T. A., & Frederiksen, J. R. (1999). Enabling students to
construct theories of collaborative inquiry and reflective learning: Computer
support for metacognitive development. International Journal of Artificial
Intelligence in Education, 10, 151–182.
Winne, P. H. (2001). Self-regulated learning viewed from models of information
processing. In B. Zimmerman & D. Schunk (Eds.), Self-regulated learning and
academic achievement: Theoretical perspectives (pp. 153–189). Mahwah, NJ:
Erlbaum.
Winne, P., & Hadwin, A. (1998). Studying as self-regulated learning. In D. Hacker,
J. Dunlosky, & A. Graesser (Eds.), Metacognition in educational theory and
practice (pp. 227–304). Mahwah, NJ: Erlbaum.
Winne, P., & Hadwin, A. (2008). The weave of motivation and self-regulated
learning. In D. Schunk & B. Zimmerman (Eds.), Motivation and self-regulated
learning: Theory, research, and applications (pp. 297–314). Mahwah, NJ:
Erlbaum.
Winne, P. H., & Nesbit, J. C. (2009). Supporting self-regulated learning with cognitive tools. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education. New York: Routledge.
Zimmerman, B. (1986). Becoming a self-regulated learner: Which are the key sub-
processes? Contemporary Educational Psychology, 11, 307–313.
Zimmerman, B. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San Diego, CA: Academic Press.
Zimmerman, B. (2001). Theories of self-regulated learning and academic
achievement: An overview and analysis. In B. Zimmerman & D. Schunk (Eds.),
Self-regulated learning and academic achievement: Theoretical perspectives (pp.
1–37). Mahwah, NJ: Erlbaum.
Zimmerman, B. (2008). Investigating self-regulation and motivation: Historical
background, methodological developments, and future prospects. American
Educational Research Journal, 45(1), 166–183.
Zimmerman, B. J., & Schunk, D. H. (Eds.). (2001). Self-regulated learning and academic achievement: Theoretical perspectives. Mahwah, NJ: Erlbaum.
Zimmerman, B. J., & Schunk, D. H. (Eds.). (2011). Handbook of self-regulation of learning and performance. New York: Routledge.
Zumbach, J., & Bannert, M. (2006). Analyzing (self-)monitoring in computer
assisted learning. Journal of Educational Computing Research, 35(4), 315–
317.