Psychological Test and Assessment Modeling, Volume 52, 2010 (4), 472-490
Bootstrapping learner’s self-regulated
learning
Philip H. Winne1
Abstract
Self-regulating learners can be characterized as learners who actively research what they do to
learn and how well their goals are achieved by variations in their approaches to learning. Extensive
research on how and how well learners understand and apply the scientific method demonstrates
that they encounter significant challenges in designing and validly interpreting experiments. I
juxtapose these two views to make a case that learners need significant support to carry out a pro-
gressive program of research to make self-regulated learning productive. One key to this endeavor
is gathering data that can accurately and systematically reflect how learning unfolds. I describe a
software system called nStudy that is designed to do this, and I speculate on how software systems
like nStudy can play powerful roles in improving learning and simultaneously advancing learning
science.
Key words: learning technologies, metacognition, process feedback, self-regulated learning
1 Correspondence concerning this article should be addressed to: Philip H. Winne, PhD, Faculty of
Education, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada; email: winne@sfu.ca
My 2-year old niece’s language is blossoming. Every waking hour is densely populated
with talk about her toys, her family’s and our dogs and their behavior, food, her doll, and
scores of other topics. She is engaging in extensive – some days it seems excessive –
deliberate practice (Ericsson, Krampe, & Tesch-Römer, 1993). Her practice is not
merely repeating routines but a directed exploration of forms for language, contexts that
shape its use, as well as psychomotor and kinesthetic information that helps her commu-
nicate with prosody and clear articulation.
In a very natural and culturally supported way, my niece is intensely engaged in a pro-
gram of personal experimentation with language. Often it is trial and error and at other
times, given her developmental stage, it is an approximately hypothetico-deductive activ-
ity. Her parents, my wife and I, and others with relatively more advanced language skills,
in varying ways and with varying immediacy of effects, provide frequent, timely and
encouraging formative and summative feedback. We play roles as human instruments –
we assess data she generates in operationalizing her hypotheses about complex elements
and relationships that govern language. We display not merely our readings of these data
but interact in ways that help her extend her perceptions about the correctness, appropri-
ateness and communication “value” of her utterances. In a few more years, except for
common errors (e.g., some instances of subject-verb agreement, grammatically correct
use of I and me), my niece likely will have achieved expertise. She is fortunate to live in
a world that actively and nearly constantly supports her personal research program.
Various researchers, including me (e.g., Winne, 2010; Zimmerman, 2008), maintain that
skills for learning, like skills in language, can be instructed with positive effects. Hun-
dreds of research studies demonstrate that learners who are taught simple tactics and
strategic methods for carrying out academic tasks outperform peers from whom such
instruction is withheld. One clear and powerful example is the self-regulated strategy
development model for writing (e.g., Harris & Graham, 1999). A recent meta-analysis
found that students taught this strategy produced writing that was better than peers’ writ-
ing by a weighted mean effect size of 1.14 (Graham & Perin, 2007). Such studies also
demonstrate that students participating in these experiments’ control groups (or “business
as usual” comparison groups) have regrettably underdeveloped learning skills.
Why does this difference arise? In my view, as in other professional fields, teaching practices in schools typically lag behind the leading edge of research. Moreover,
students in schools too infrequently receive direct and effective instruction in learning
skills. On the presumption that teaching can make a difference, this is a logical deduction
given findings like those just cited. If teaching practices kept apace with research, com-
parison groups in research would not fare as poorly as they do because learners would
have compensatory skills to apply to their tasks.
In the context I attribute to schools, students likely engage in personal programs of re-
search about how to learn more effectively. For example, after being taught a few first-
letter mnemonics – for example, Chief SOH CAH TOA for the trigonometric relations of
sine, cosine and tangent defined by the ratio of a triangle’s opposite, adjacent and hy-
potenuse sides; or ROY G BIV for colors of the spectrum in order from longer to shorter
wavelength – it seems likely that students experiment with similar mnemonics given the
reasonably quick and reliable success such tactics offer. But the positive results of ex-
perimental educational psychology demonstrate that these personally framed research
programs, to the extent they are operative, are not sufficiently progressive.
I posit two causes for this result. First, while the school environment provides many
occasions for students to experiment with learning skills, students probably struggle to do
this. The numerous, heterogeneous and rapidly sequenced objectives that are addressed
in a day’s instruction focus on mastery of curriculum, not inquiry about and deliberate
practice of learning skills. Second, teachers are insufficiently educated about learning
skills that they could teach learners (which is not a fault of teachers but of those who
design and certify teacher education and professional development programs). The vol-
ume and pace of prescribed curricula push instruction and guided practice of learning
skills out of the practical bounds of lessons and of teachers’ capacities to provide exten-
sive, timely and tailored feedback about students’ uses of learning skills. Students them-
selves have underdeveloped skills for monitoring what they do (e.g., Winne & Jamieson-
Noel, 2002) and what they do is not well matched to what research would prescribe
(Winne & Jamieson-Noel, 2003).
Despite this less than optimal environment, students carry out personalized enquiries
about how to learn better than they do at present. In an investigation of what teachers and
students perceived about cognitive and metacognitive features of lessons, Marx and I
observed that fifth grade students often referred to nascent learning skills they claimed to
use or were “working on” (Winne & Marx, 1982). But it is not only students’ learning
environments that hinder successful deliberate practice of learning skills. Students, like most people, are not naturally skilled experimenters (Zimmerman, 2007).
In the following sections, I first overview challenges people face in developing and learning from experiments in general, and relate how these factors are predicted to hamper students in the programs of research they carry out on becoming more effective learners, that is, bumps on a path to becoming productive, self-regulating learners. Second, given this context, I describe software we have developed that, with some extension, may help students overcome some of these challenges and improve their capabilities to deliberately practice learning skills on a path to becoming expert learners. Finally, I offer
suggestions for realizing these predicted gains in everyday schooling.
Learning skills of scientific inquiry
In her comprehensive synthesis of research on the development of scientific thinking
skills across the developmental spectrum from mid-elementary grades to adulthood,
Zimmerman (2007) defined scientific thinking as
the application of the methods or principles of scientific inquiry to reasoning or prob-
lem-solving situations, and involves the skills implicated in generating, testing and
revising theories, and in the case of fully developed skills, to reflect on the process of
knowledge acquisition and change …. Participants engage in some or all of the com-
ponents of scientific inquiry, such as designing experiments, evaluating evidence and
making inferences in the service of forming and/or revising theories about the phe-
nomenon under investigation. (p. 173)
In a preview of her synthesis, she observed:
Sufficient research has been compiled to corroborate the claim that investigation
skills and relevant domain knowledge “bootstrap” one another, such that there is an
interdependent relationship that underlies the development of scientific thinking.
However, as is the case for intellectual skills in general, the development of the com-
ponent skills of scientific thinking “cannot be counted on to routinely develop”
(Kuhn & Franklin, 2006, p. 974). (p. 173)
Bumps on a road to becoming a skilled researcher
Klahr’s (2005) model of scientific discovery as dual search describes three overarching
processes in conducting a scientific inquiry: searching for hypotheses, searching for
experimental designs to generate data for testing hypotheses, and evaluating evidence
afforded by data. In pursuing these activities, children and adult experimenters often
suffer shortcomings and fail to surmount them. I draw and synthesize findings from
Zimmerman’s (2007) review; citations to primary research relating to these can be found
there.
In the general population, and particularly among less developmentally advanced or
younger experimenters, people sometimes struggle or fail to:
1. Frame discriminating hypotheses.
2. Explore all relevant combinations of variables they hypothesize or are told have
bearing on outcomes.
3. Design experiments involving multiple variables to test a specific (targeted) hypothe-
sis. Rather, variables and their combinations are chosen on the basis of availability
(in the environment or the mind), representativeness, salience, or other inconsistent
and weak heuristics.
4. Vary a causal factor that produces a positive outcome, thus reproducing positive
results that yield confounded interpretations about true causes.
5. Design experiments to investigate the same causal system when a hypothesis is
framed as a positive versus a negative relation.
6. Design experiments to test rival hypotheses or hypotheses that are capable of discon-
firming a held theory.
7. Seek or attend to evidence that could disconfirm hypotheses. Alternatively, evidence
is conceptually integrated with the hypothesis such that disconfirming evidence is
judged not relevant.
8. Avoid overgeneralizing mere covariation as sufficient grounds for inferring causa-
tion.
9. Avoid tendencies to discount, ignore or even misrepresent evidence that challenges
background knowledge or preference.
10. Avoid mistakes of reasoning when data are outliers and, more generally, make use of
statistical tools and concepts when considering patterns in data.
11. Take account of and compensate for their beliefs about the epistemological status of
claims that influence non-scientific forms of reasoning.
12. Keep records of their enquiries and consult them. Memory for important features of
experimental enquiries is fragile and biased.
13. Develop skills involved in valid and efficient scientific reasoning on their own, i.e.,
without prompts and guidance.
14. Apply skills of scientific reasoning in social domains as well as they do in physical
science domains.
15. Cope with inconsistent ways they use valid rules of scientific reasoning over multiple
experiments.
16. Treat experimentation as a subject of study beyond the topic(s) directly explored in
experiments; that is, engage with experimentation metacognitively.
The good news is that student experimenters likely can overcome these challenges if they
experience well-designed instruction that provides deliberate practice, that is, extensive,
spaced trials with accurate and timely knowledge-of-results feedback. When the focus of
students’ experiments is becoming a better learner, they also need formative feedback
(Hattie & Timperley, 2007; Shute, 2008) and, particularly, they need process feedback
that describes variables they manipulated to change how they learn (Butler & Winne,
1995). The next section examines this process of personal experimentation to improve
learning, called self-regulated learning.
Self-regulated learning
In brief (see Zimmerman & Schunk, 2010, for elaboration), self-regulated learning (SRL)
is a dynamic blend of two metacognitive operations, metacognitive monitoring and meta-
cognitive control. Together, these metacognitive operations change how activities, like
learning, are carried out so that goals can be met and met in a more satisfying way. In
exercising SRL, learners attempt to discover “what works” in striving to develop knowl-
edge and skills, as well as “what works better.” A model of SRL Hadwin and I developed
(Winne & Hadwin, 1998; see also Greene & Azevedo, 2007) is tightly coupled to the
proposition that learners are agents. Agents gather data about factors in their environment
– the internal environment of cognition and motivation, and the external environment –
then set goals and devise plans to reach those goals. As work on a task unfolds, monitor-
ing may identify discrepancies between goals and achievements, and metacognitive
monitoring may identify differences between plans for achieving goals and processes
enacted. Learners then may exercise metacognitive control in at least three ways: revis-
ing goals, adapting plans or changing operations.
More specifically, the Winne and Hadwin (1998) model segments SRL into four weakly
sequenced and recursive phases. In phase one, learners construct an idiosyncratic profile
of features in the environment surrounding a learning task. This profile blends cognitive,
motivational and affective data. In phase two, learners set goals and design a plan to
achieve them in the context of the environment they perceive. In phase three, learners call on
tactics and strategies to move toward goals. Modest adaptations may be made on the fly.
In phase four, which is optional, learners consider whether and how to change elements of the preceding three phases.
Over time, the self-regulating learner’s aim is to improve learning by (a) adapting the
environment and (b) acquiring and improving methods for learning. In this sense, self-
regulating learners experiment with how they learn so that learning becomes successively
more effective and more satisfying (Winne, 1997, 2006). However, as noted earlier, there
are significant challenges to doing such research. And, empirical findings document that
learners are not expert at learning. This chicken-and-egg dilemma may account for why
forging a productive personal program of SRL research is a very difficult task for stu-
dents in schools (see also Winne & Hadwin, in press).
Can learners be supported to break this cycle? In this article, I first summarize features of
software we have designed to research and support online learning. In the course of this
summary, I describe how the software gathers data that can be used to assemble a picture
of how a learner learns. Then, I describe features we plan to add to this software that
make it into a tool learners can use to carry out a personal program of research to im-
prove learning. In this latter section, I describe how these plans are intended to help
learners become better at researching their learning, that is, how the software can scaf-
fold improvements in self-regulated learning.
nStudy
Software has potential to be a powerful tool to help learners research learning and a boon
to “professional” researchers investigating learning and SRL (Nesbit & Winne, 2008;
Winne, 2006; Winne & Nesbit, 2009). So far, however, few software technologies have
been designed to help learners pull up learning by their bootstraps. This is one of the
goals of the nStudy project.
nStudy (Winne, Hadwin & Beaudoin, 2010; see also http://learningkit.sfu.ca/lucb/celc-
2009-nStudy.pdf) is a web application, that is, software that runs on an internet server
and displays its products in a web browser’s window. We designed nStudy to serve three
purposes. First, it is a tool learners can use to study information online. The information
can be of any kind that is formatted using the hypertext markup language (HTML). As
learners study, nStudy records extensive, fine-grained, time-stamped data about opera-
tions they apply to selections of information. As is described in the next sections,
nStudy’s second purpose is to gather these data so that researchers can characterize how
learners learn without suffering shortfalls of self-reports that arise due to imperfections
of human memory (see Winne, Zhou & Egan, 2010). Third, these same data are raw
materials learners need to carry out a progressive, personal program of research on SRL.
An extension to nStudy is being designed that will allow learners to access and analyze
these data to support individualized “N = me” research on learning.
Interventions and traces
HTML content that is viewed in nStudy can be material that an instructor or researcher
designs, or it can be information available anywhere in the Internet. In the former case,
researchers can present particular information, for example, headings or “self-check”
questions that operationalize a treatment condition that is the topic of an experiment. Or,
information can be formatted in ways that operationalize variables of interest; for exam-
ple, a causal system can be described as text or as a graphical display of factors. In this
way, nStudy can mimic many kinds of treatments studied in conventional experiments.
nStudy is also an instrument that gathers traces of learners’ cognition and metacognition.
What are traces? They are records of behavior, a form of performance assessment, that
provide grounds for inferring a learner’s cognitive and metacognitive activities (see
Winne et al., 2010).
To generate a trace datum, a learner carries out an observable activity on or with infor-
mation. For example, suppose a window provides a list of files and the learner double
clicks a particular item to open it. Setting aside the possibility that this is random behav-
ior, it is plausible to infer the learner (a) is searching for some target information in par-
ticular, (b) forecasts the target information is available in this particular file based on the
file’s title and/or the name of the directory in which it is located, and (c) opens the file
because the target information is judged to be not fully or accurately stored in memory,
or handily available. The specifics of (c) can have several explanations; for example, the learner judges the target information is (i) not in memory; (ii) stored in memory but not retrievable, according to a feeling of knowing; or (iii) perhaps stored in memory, but easier to "retrieve" by opening the file. Double-clicking a particular file traces these
features of cognition.
In nStudy, when a learner opens a file-like item, for example, a bookmark to a web page,
nStudy logs extensive, fine-grained, time-stamped information that, in the aggregate,
traces the cognitive event. Some of the information logged includes: the title of the win-
dow containing the item that was double-clicked, the time that container window was
opened or made active, the title of the item that was opened, the time the double-click
was effected, and the window that opened. Time data can provide grounds for interpret-
ing the cognitive load involved in, for example, locating the target item. The title of the
item double-clicked provides semantic information for estimating kinds of information
the learner seeks. The log of these data is constructed nearly simultaneously with the
cognitive event. The learner need not describe what was done but merely acts, thus
avoiding vagueness or misspoken descriptions. The context for this action, while not
necessarily a full representation of the state of the learner’s working memory, is well
specified.
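nStudy's actual log schema is not reproduced in this article; purely as an illustration, a single time-stamped trace record carrying the fields just listed might be represented as follows (all names in this sketch are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceEvent:
    """One time-stamped trace record (hypothetical schema, not nStudy's actual format)."""
    action: str                  # e.g., "open_item" for a double-click that opens a window
    item_title: str              # semantic cue about the information the learner seeks
    container_title: str         # title of the window holding the double-clicked item
    container_opened_at: datetime
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    opened_window: str = ""      # title of the window that the action opened

# Example: a learner double-clicks a bookmark titled "Darwin's theory"
event = TraceEvent(
    action="open_item",
    item_title="Darwin's theory",
    container_title="Library",
    container_opened_at=datetime(2010, 3, 4, 10, 15, tzinfo=timezone.utc),
    opened_window="Browser: Darwin's theory",
)

# Latency between making the container window active and acting on the item
# can ground inferences about, for example, cognitive load while locating the target.
search_latency = event.occurred_at - event.container_opened_at
```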
nStudy’s browser window and bookmarks
Learners access information on a web page by entering a uniform resource locator
(URL) into an address field, for example, http://mypage.domainname. Once the web
page is displayed, nStudy provides a variety of tools learners can use to operate on its
information. One tool guides the learner to bookmark a web page. To create a bookmark,
the learner generates a title for the bookmark and, optionally, can enter a description of
the web page.
The simple act of titling a bookmark generates trace data that support inferences about at least four cognitive events. First, choosing to create a bookmark signals that the learner forecasts that information on this web page may have value in the future. Second, a bookmark preserves a way to re-access that information and thus traces planning. Third, assigning a title to a bookmark to replace nStudy's default title of "untitled – year.month.day.hour.minute" signals that the learner classifies the information in this web page as belonging to a category (e.g., "Darwin's theory") or as fitting a task in which this information might play a role (e.g., "arguments: evolution"). Fourth, the semantic con-
tent of the title makes observable at least some of the standards the learner used in meta-
cognitively monitoring the web page’s information.
Tags
nStudy allows learners to tag information in a web page by selecting text, then mod-
clicking2 to pop up a contextual menu from which a tag can be chosen (see Figure 1).
This links the selected text to the tag. A generic tag titled Highlight is always available at
the top of the popup menu. Below that is a further option, Tags… that opens another
window (not shown) where the learner can review all tags that have been constructed so
far, and assign one of those or create a new tag for the target information. The next five
items in the popup menu are the five most recently used tags; in Figure 1, these are: Can
do, Can’t do, Fallacy, funny and is this a learning objective. Once information is tagged,
a background corresponding to the tag’s color is created (the color can be set by the
learner as a preference), and the tagged information, which we call a quote, is copied to
the panel at left in Figure 1. For example, “Describe the major scientific ideas …” is the
first quote tagged with “Can do.” Double clicking on a quote in that panel scrolls the text
to show the quote in its surrounding context.
2 A mod-click is a right-click when running Firefox and nStudy under the Windows operating systems
and a control-click under the Macintosh operating system.
Figure 1:
The nStudy browser window running inside a Firefox window showing its table of quotes and
linking tools
Selecting text for tagging traces that the learner is metacognitively monitoring that information. The tag chosen or created identifies the standards the learner used in metacognitive monitoring.
The five most recent tags might be inferred to identify a current set of discriminations the
learner is considering about the information being read in the web page.
Notes
Figure 2 shows a basic note. Learners can link notes to quotes in a web page and to
bookmarks representing entire web pages. Each note is automatically assigned a default
title of untitled that the learner can change. The text selected for annotation is automati-
cally quoted in the linked note’s Quote field along with a link to the web page in which
that quote appears.
Figure 2:
A basic note in nStudy
Notes are web forms that operationalize schemas for annotation. Learners choose a
schema from the dropdown list, Select Form. An option in this dropdown list is to create
a new form. Figure 3 shows the editor that learners (and researchers) can use to tailor an
existing note form or to create an entirely new schema for annotating information. Modi-
fied or new note forms can be a one-off design or they can be saved for future use. Various kinds of fields are provided for adapting forms. They can be added to a note by a drag-and-drop operation. Properties of a field, such as its label(s) or the end points of a slider, are set in the Set Properties tab at the far right of the editor's tabs bar.
As with nStudy’s other tools, making a note traces several key cognitive and metacogni-
tive events. For reasons of limited space, just a few are elaborated. As with tags, the act
of creating a note reflects metacognitive monitoring of information in relation to a goal.
The quote selected in a browser is elaborated by information the learner enters into fields
in the note. Customizing a note form or creating a new form traces a major metacognitive
event: an available schema is judged inappropriate and a new schema is constructed.
Figure 3:
An nStudy note showing a form and the editor for modifying and creating new forms that
operationalize new schemas for notes
Terms
A learner can create terms (see Figure 4) for concepts that are central to the topic being
studied. All terms use one form. Making a term traces metacognitive monitoring of in-
formation that the learner judges to be a term, and assembles that new term and its linked
quote with information input to the Description field.
In any nStudy browser window, note window, and term window, nStudy scans the text
for terms. Terms it identifies are posted in that window under Terms Used in the left
panel (e.g., see Figures 1 and 2). Double-clicking a term opens the term in a new window
like Figure 4. This traces metacognitive monitoring about the retrievability of a term’s
description.
Figure 4:
An nStudy term
The Termnet
nStudy also traces when a learner examines how terms relate to one another in a display
called the Termnet (see Figure 5). Links between two terms, A and B, are created if A’s
description uses B or vice versa. The link represents a literal “in terms of …” relation. The
Termnet shows terms and in-terms-of links for every window the learner has open and can
be viewed by clicking an icon in the toolbar of any one of those windows. The click traces a
metacognitive judgment about a goal to examine how terms in the window in which the
click is made relate to other terms in the set of windows that are open. In the Termnet win-
dow, any particular term can be reviewed by double clicking it. This traces a metacognitive
judgment of utility in reviewing a term, presumably because its description cannot be re-
trieved from memory. A term of interest can be found in a complex graphical display by
entering it in the search field of the Termnet’s toolbar. As with the double-click of a term,
this traces a term on which the learner is currently focusing attention.
Figure 5:
An nStudy Termnet showing terms used in all windows currently open
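As a rough sketch of how the literal "in terms of …" relation just described could be derived from term descriptions (my reconstruction under simple assumptions, not nStudy's implementation; the naive substring matching is purely illustrative):

```python
def build_termnet(terms: dict[str, str]) -> set[frozenset[str]]:
    """Link term A to term B when A's description mentions B, or vice versa.
    `terms` maps each term to its description."""
    links = set()
    for a, desc_a in terms.items():
        for b, desc_b in terms.items():
            if a != b and (b.lower() in desc_a.lower() or a.lower() in desc_b.lower()):
                links.add(frozenset({a, b}))  # undirected "in terms of" link
    return links

terms = {
    "natural selection": "Differential survival and reproduction driven by variation.",
    "variation": "Differences among individuals on which natural selection can act.",
}
print(build_termnet(terms))  # one link: natural selection <-> variation
```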
Library
A display of all the information items a learner has created can be viewed in nStudy’s
Library table (see Figure 6). Items in the table can be filtered by: (a) type of information
item by selecting options in the dropdown menu Show, (b) folders into which items are
organized by selecting a folder in the left panel, and (c) tags applied by selecting a tag in
the left panel. By clicking on a column in the table, items can be sorted according to
several kinds of metadata including those shown in Figure 6 as well as creator, last editor
(in a shared workspace) and more. When learners filter and sort items, standards being
used to search for information are traced. The learner also can search for a particular item
by its content, title or both by entering text in the Search field. The learner can operate
on one or a set of several items by selecting them, mod-clicking to expose a contextual
menu and choosing a desired operator. As in other windows, double clicking an item
opens its window. This traces a metacognitive expectation that the learner holds about
information recorded in the item being opened.
Figure 6:
nStudy’s Library and Operators
Analyzing nStudy logs of studying events
Trace data that nStudy gathers unobtrusively and on the fly as learners engage with their
tasks are readily examined using conventional methods such as counts and averages.
These are data and statistics that researchers can use to examine SRL and, pending a
planned extension to nStudy, that learners will be able to examine as process feedback.
Beyond this conventional information, additional descriptions about how studying is
carried out can be gleaned by: (a) taking into account temporal features of trace data and
(b) examining patterns among traces that can yield quantitative indexes and pictorial
views of how learners study (see Winne, Zhou & Egan, 2010).
Extending counts: Conditional IF-THEN pairs and patterns of studying events
Suppose a form for an nStudy note, a “goal” form, provides fields for learners to describe
goals for learning – Goal, Evidence of success, Need to work on … – plus a slider to rate
the degree to which each goal is achieved. A count of notes a learner makes using this
form indexes the number of goals set. While number of goals set is a common measure
of goal setting, nStudy’s log of studying events allows more penetrating examinations of
the role of goals in learning. By scanning backward along a timeline of logged events, it
is possible to identify at least two important IF-THEN relationships: contexts that precede
when a learner (a) sets a goal and (b) revisits previously set goals to adjust estimates of
success. For contexts deemed similar (using an index of structural equivalence described
in Winne, Jamieson-Noel & Muis, 2002), these relationships can be expressed as a con-
ditional probability: given a particular context, IF X & Y & …, THEN what is the probabil-
ity the learner sets or revisits a goal, which nStudy logs as creating or revisiting any note
that uses the goal form.
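As a minimal sketch, assuming events have already been labeled with a context identifier (for example, via the structural equivalence index cited above) and using hypothetical action names, such a conditional probability could be estimated from the log like this:

```python
from collections import Counter

def p_goal_given_context(events: list[dict], context_id: str) -> float:
    """Estimate P(goal note created or revisited | context) from a time-ordered log.
    Each event is assumed to carry a pre-computed `context` label and an `action` field;
    both names are illustrative, not nStudy's actual fields."""
    in_context = [e for e in events if e["context"] == context_id]
    if not in_context:
        return 0.0
    actions = Counter(e["action"] for e in in_context)
    hits = actions["create_goal_note"] + actions["revisit_goal_note"]
    return hits / len(in_context)

log = [
    {"context": "C1", "action": "create_goal_note"},
    {"context": "C1", "action": "tag_quote"},
    {"context": "C2", "action": "revisit_goal_note"},
]
print(p_goal_given_context(log, "C1"))  # 0.5
```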
The binary IF-THEN architecture of studying events can be extended in ways that allow
the measurement of more complex patterns – learning strategies if you will. This kind of
analysis begins by building a transition matrix that can be translated into a graph of tran-
sitions (see Figure 7). Briefly, in a transition matrix, each tally in a cell corresponds to a
transition from an event designated in a row (IF) to the event that follows it (THEN) in a
column. For example, in Figure 7, A is followed by B so a tally is recorded in cell A,B.
Event B is now the context for a subsequent event, D. And so on.
Sequence of traces: A B D B C E D B C E D A C E D A B C F …

Transition matrix (rows = IF event, columns = the THEN event that follows; each tally marks one transition):

      A     B     C     D     E     F
A           //    /
B                 ///   /
C                             ///   /
D     //    //
E                       ///
F
Figure 7:
A trace sequence, its transition matrix and a graph of the pattern of traces
The transition matrix can be transformed to picture the pattern of transitions as illus-
trated. Properties of this graph can be quantified in terms of, for example, the degree of
regularity in the pattern, the degree to which one pattern is “geometrically” congruent to
another, and whether specific nodes or small neighborhoods of the graph play the same
structural role relative to the graph as a whole (see Winne, Jamieson-Noel & Muis,
2002). Hadwin, Nesbit, Jamieson-Noel, Code and Winne (2007) illustrate using this kind
of analysis to describe levels and forms of learners’ cognitive and metacognitive activi-
ties.
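To make the tallying concrete, the short sketch below (my own illustration in Python, not nStudy code) rebuilds the transition matrix for the trace sequence shown in Figure 7:

```python
from collections import defaultdict

def transition_matrix(sequence: list[str]) -> dict[str, dict[str, int]]:
    """Tally IF -> THEN transitions between consecutive trace events."""
    matrix = defaultdict(lambda: defaultdict(int))
    for prior, following in zip(sequence, sequence[1:]):
        matrix[prior][following] += 1
    return {row: dict(cols) for row, cols in matrix.items()}

trace = "A B D B C E D B C E D A C E D A B C F".split()
for row, cols in sorted(transition_matrix(trace).items()):
    print(row, cols)
# A {'B': 2, 'C': 1}
# B {'D': 1, 'C': 3}
# C {'E': 3, 'F': 1}
# D {'B': 2, 'A': 2}
# E {'D': 3}
```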
Software supports for SRL as a program of personal research on
learning
Assume for the sake of argument that: (a) goal setting boosts achievement (it does; see Mor-
gan, 1985) and (b) contexts can be operationally defined in a form that software can
recognize by a researcher’s analysis or, with very large samples, by software data mining
methods (they can; see Zhou, Xu, Nesbit & Winne, 2010). In this situation, nStudy might
be extended to intervene in several ways designed to promote self-regulated learning as a
personal program of research. For example, if the learner does not consistently (a) set
goals when goal-setting contexts arise or (b) revisit goals in a context where this is ap-
propriate, alerting the learner to this situation may help overcome a production defi-
ciency, that is, a context (IF) where a useful studying tactic (THEN) was not engaged.
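Building on the conditional probability sketched earlier, one way such an alert might be triggered is illustrated below; the threshold and minimum number of opportunities are arbitrary values for illustration, not parameters of nStudy:

```python
def flag_production_deficiency(p_goal_action: float, n_opportunities: int,
                               threshold: float = 0.3, min_cases: int = 5) -> bool:
    """Flag a context where a useful tactic (setting or revisiting a goal) is rarely
    engaged despite repeated opportunities. Threshold and minimum case count are
    illustrative values only."""
    return n_opportunities >= min_cases and p_goal_action < threshold

if flag_production_deficiency(p_goal_action=0.1, n_opportunities=12):
    print("Prompt: you rarely set or revisit goals in this studying context.")
```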
Equipped with these kinds of information and extensions to nStudy (which are not yet
developed but planned), learners may be helped to smooth several of the bumps previ-
ously described that people encounter when they research causal systems, such as how
they learn. Here are some conjectures that speak directly to issues 1, 2, 3, 7 and 12 previ-
ously listed as bumps on the road to becoming a skilled researcher. In the following
scenarios, I use the label nStudy+ to refer to a future version of nStudy that is being
designed but has not yet been implemented.
First, with information about contexts that are and are not very likely to lead to an in-
crease in estimates of goal success, the learner might be prompted to frame more dis-
criminating hypotheses about kinds of contexts that are more or less a boon to improving
knowledge (bump #1). A method for describing these contexts in terms of learning
events logged by nStudy+ could be a neighborhood of study events displayed in a graph
like Figure 7. A shortcoming of this display is that it characterizes context in terms of
operations the learner applies to study information but omits characteristics of the infor-
mation studied.
Second, for contexts in which the conditional probability of an upward revision of goal
success is small, nStudy+ might invite the learner to investigate the termnet for each
context. This invitation involves considering which key concepts, and what about their conceptual configuration, might be impeding progress. This further dis-
criminates grounds for hypotheses about effects and extends exploration of variables
beyond operations applied to affect studying to include the semantic content of what was
studied (bump #2).
Third, reminding the learner that particular operations were applied to particular informa-
tion provides a fuller account than just counting the frequency of studying activities or
showing only the content on which operations, like tagging, were applied. nStudy+ sets
the stage for the learner to design an experiment – with some guidance by nStudy+ –
about multiple variables that affect the growth of and confidence in knowledge (bump
#3). Because nStudy+ has available all instances of contexts and associated termnets, the
learner benefits from a complete sample rather than what is available to forgetful, biased
and erroneously reconstructive memory. This, for example, helps the learner seek evi-
dence that can disconfirm as well as support hypotheses (bump #7) based on a complete
record of data rather than a sample (bump #12).
Research targets for the near- and moderate-range future
The proposition I advance is that, to be successful, learners need to learn more than sub-
jects they study in school. They need to develop skills that allow them to carry out a
personalized, progressive program of research on how to learn. Such development is
researched today under the banner of self-regulated learning. Becoming a productive
self-regulated learner is challenging. First, learners lack sufficient and valid data about
how they learn. Their memories of learning experiences can be invalid because factors of
human memory lead to biased, incomplete and erroneously reconstructed representations
of data and relations among data. Second, learners (and people in general) face a number
of challenges – bumps on the road – to doing penetrating and valid research.
Instruments like nStudy can gather, sort, analyze and display information of the kind, and
in ways, that learners need to form and test hypotheses about learning, that is, to self-
regulate learning. This can supplement the typical school environment (Winne, 2006) in
areas where schools are not (yet) well equipped.
It would be naïve to believe that simply installing powerhouse software could be suffi-
cient to achieve the challenging goals described in this paper. Beyond having tools,
learners must be motivated to use them and have a context that invites using them. The
hope that natural consequences associated with improved learning skills will be enough
may collapse if systemic features of education foster performance goal orientations,
value grades more than deep understanding, and foreground an entity view of learning
where ability is inelastic (see Wigfield & Cambria, 2010). Moreover, I conjecture that
succeeding as a self-regulated learner entails adopting an epistemological stance that
learning is a topic of inquiry because it is a skill that can be honed with deliberate prac-
tice. Under this view, research in educational psychology widens from inquiries into “the
way things are” in an experimental condition or new school curriculum to studies of “the
way learners make things” (Winne & Nesbit, 2010). With multifaceted support, such as
conjectured for nStudy+, learners themselves can be tapped as a powerful resource for
improving their education. The data they generate simultaneously can leverage signifi-
cant advances in learning science by providing much more complete, authentic and lon-
gitudinal records of how learners go about building knowledge and skills (Winne, 2006).
References
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical
synthesis. Review of Educational Research, 65, 245-281.
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363-406.
Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99, 445-476.
Greene, J. A., & Azevedo, R. (2007). A theoretical review of Winne and Hadwin's model of self-regulated learning: New perspectives and directions. Review of Educational Research, 77, 334-372.
Hadwin, A. F., Nesbit, J. C., Jamieson-Noel, D. L., Code, J., & Winne, P. H. (2007). Examining trace data to explore self-regulated learning. Metacognition and Learning, 2, 107-124.
Harris, K. R., & Graham, S. (1999). Programmatic intervention research: Illustrations from the
evolution of self-regulated strategy development. Learning Disability Quarterly, 22, 251-
262.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research,
77, 81-113.
Klahr, D. (2005). A framework for cognitive studies of science and technology. In M. Gor-
man, R. D. Tweney, D. C. Gooding, & A. P. Kincannon (Eds.), Scientific and technologi-
cal thinking (pp. 81-95). Mahwah, NJ: Lawrence Erlbaum.
Morgan, M. (1985). Self-monitoring of attained subgoals in private study. Journal of Educa-
tional Psychology, 77, 623-630.
Nesbit, J. C., & Winne, P. H. (2008). Tools for learning in an information society. In T. Wil-
loughby & E. Wood (Eds.), Children's learning in a digital world (pp. 173-195). Oxford,
UK: Blackwell Publishing.
Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78, 153-
189.
Wigfield, A., & Cambria, J. (2010). Students’ achievement values, goal orientations, and
interest: Definitions, development, and relations to achievement outcomes. Developmen-
tal Review, 30, 1-35.
Winne, P. H. (1997). Experimenting to bootstrap self-regulated learning. Journal of Educa-
tional Psychology, 89, 397-410.
Winne, P. H. (2006). How software technologies can improve research on learning and bol-
ster school reform. Educational Psychologist, 41, 5-17.
Winne, P. H. (2011). Cognitive and metacognitive factors in self-regulated learning. In B. J.
Zimmerman and D. H. Schunk (Eds.), Handbook of Self-Regulation of Learning and Per-
formance (pp. 15-32). New York: Routledge.
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J.
Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice
(pp. 277-304). Mahwah, NJ: Lawrence Erlbaum Associates.
Winne, P. H., & Hadwin, A. F. (in press). nStudy: Tracing and supporting self-regulated
learning in the Internet. In R. Azevedo & V. Aleven (Eds.), International handbook of
metacognition and learning technologies. New York: Springer.
Winne, P. H., & Jamieson-Noel, D. L. (2002). Exploring students’ calibration of self-reports
about study tactics and achievement. Contemporary Educational Psychology, 27, 551-
572.
Winne, P. H., & Jamieson-Noel, D. L. (2003). Self-regulating studying by objectives for
learning: Students’ reports compared to a model. Contemporary Educational Psychology,
28, 259-276.
Winne, P. H., Jamieson-Noel, D. L., & Muis, K. (2002). Methodological issues and advances
in researching tactics, strategies, and self-regulated learning. In P. R. Pintrich & M. L.
Maehr (Eds.), Advances in motivation and achievement: New directions in measures and
methods (Vol. 12, pp. 121-155). Greenwich, CT: JAI Press.
Winne, P. H., & Marx, R. W. (1982). Students’ and teachers’ views of thinking processes for
classroom learning. Elementary School Journal, 82, 493-518.
Winne, P. H., & Nesbit, J. C. (2009). Supporting self-regulated learning with cognitive tools.
In D. J. Hacker, J. Dunlosky & A. C. Graesser (Eds.), Handbook of metacognition in edu-
cation (pp. 259-277). New York: Routledge.
Winne, P. H., & Nesbit, J. C. (2010). The psychology of school performance. Annual Review
of Psychology, 61, 653-678.
Winne, P. H., Zhou, M., & Egan, R. (in press). Assessing self-regulated learning skills. In G.
Schraw (Ed.), Assessment of higher-order thinking skills. New York: Routledge.
Zhou, M., Xu, Y., Nesbit, J. C., & Winne, P. H. (2010, in press). Sequential pattern analysis
of learning logs: Methodology and applications. In C. Romero, S. Ventura, S. R. Viola,
M. Pechenizkiy & R. de Baker (Eds.), Handbook of educational data mining. New York:
Routledge.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical back-
ground, methodological developments, and future prospects. American Educational Re-
search Journal, 45, 166-183.
Zimmerman, C. (2007). The development of scientific thinking skills in elementary and
middle school. Developmental Review, 27, 172-223.
Zimmerman, B. J., & Schunk, D. H. (Eds.). (2010, in press). Handbook of self-regulation of learning and performance. New York: Routledge.