Interactive Metacognition:
Monitoring and Regulating a Teachable Agent
Daniel L. Schwartz, Catherine Chase, Doris B. Chin, Marily Oppezzo, Henry Kwong
School of Education, Stanford University
Sandra Okita
Teachers College, Columbia University
Gautam Biswas, Rod Roscoe, Hogyeong Jeong, & John Wagster
Department of Engineering, Vanderbilt University
Corresponding Author:
Daniel L. Schwartz
485 Lasuen Mall
Stanford University
Stanford, CA 94305-3096
Daniel.Schwartz@Stanford.edu
(650) 736-1514
Metacognition involves monitoring and regulating thought processes to make sure
they are working as effectively as possible (Brown, 1987; Flavell, 1976; Winne, 2001).
Good teachers are highly metacognitive (Lin, Schwartz, & Hatano, 2005). They reflect
on their expertise and instruction, and they refine their pedagogy accordingly. Good
teachers are also metacognitive in a less conventional sense of the term. They monitor
student understanding and they regulate the processes students use to learn and solve
problems (Shulman, 1987). Thus, good teachers apply metacognition to other people’s
thoughts. The proposal of this chapter is that asking children to teach and apply
metacognition to others can help them learn both content knowledge and metacognitive
skills. A strong version of this proposal, consistent with Vygotsky (1978), would be that
metacognition develops first on the external plane by monitoring others, and then turns
inward to self-monitoring. The chapter does not test this claim. Instead, it shows that
having students teach a specially designed computer agent leads to metacognitive
behaviors that increase content learning and hint at improving metacognition more
generally.
To differentiate self-directed metacognition and other-directed metacognition, we
term the latter “interactive metacognition.” Learning-by-teaching is an instructional
method that is high on interactive metacognition – tutors anticipate, monitor, regulate,
and more generally, interact with their tutees’ cognition. Research on learning-by-
teaching has found that teaching another person is an effective way to learn. For instance,
when people prepare to teach pupils who will take a test, they learn more than when
they prepare to take the test themselves (Annis, 1983; Bargh & Schul, 1980; Biswas et al.,
2001; cf. Renkl, 1995). Moreover, during the act of teaching, tutors learn by clarifying
the confusions of their tutees (Craig, Sullins, Witherspoon, & Gholson, 2006; Palincsar &
Brown, 1984; Uretsi, 2000) and by engaging in reflective knowledge building (Roscoe &
Chi, 2008). Interestingly, when tutors slip into “lecturing mode” and no longer engage in
interactive metacognition, they learn less (Chi, Roy, & Hausmann, 2008; Fuchs, Fuchs,
Bentz, Phillips, & Hamlett, 1994; Graesser, Person, & Magliano, 1995; Roscoe & Chi,
2007).
The interactive quality of other-directed metacognition can help resolve two
psychological challenges: balancing the dual-task demands of metacognition; and,
rallying the motivation to engage in metacognition. Metacognition puts a dual-task load
on working memory. During metacognition, people need (1) to think their problem-
solving thoughts, and they simultaneously need (2) to monitor and regulate their thinking
about those thoughts. When learning or problem solving becomes difficult, there can be
less free capacity for metacognition. For example, when first learning to drive a car with
a manual transmission, people may be less likely to monitor their knowledge of the cars
behind them. Teaching can help alleviate the dual-task demand of metacognition. The
tutee has the responsibility of problem solving, which frees up resources for the tutor’s
metacognition. Gelman and Meck (1983), for example, found that young children could
monitor errors in adult counting better than their own counting, when the counting task
reached the edge of the children’s abilities (cf. Markman, 1977). In this case, interactive
metacognition was a form of distributed cognition (King, 1998; Kirsh, 1996), where the
adult took on the burden of problem solving and the child took on the burden of
monitoring that problem solving.
The distribution of tasks in interactive metacognition can help students improve
their own metacognition, because they can focus on monitoring and regulating cognition
per se. For example, in a series of studies by Okita (2008), elementary school children
learned tricks for mentally solving complex arithmetic problems. In half of the cases,
students practiced problem solving on their own. In the other half of the cases, students
took turns. On one turn, they would try to solve a problem, and on the next turn, they
would monitor a computer agent solving a problem. The children had to stop the agent if
they thought there was a mistake. Students who monitored the agent demonstrated a U-
shaped curve in their own problem solving. When first monitoring the agent, students
subsequently became slower and less accurate in their own problem solving. Over time,
however, the children sped up and became more accurate compared to students who
never monitored the agent. Presumably, by monitoring the agent, the students were
learning to monitor themselves, which caused a temporary drop in efficiency, but a better
payoff in the long run, because they improved their own cognition.
The second challenge of metacognition is motivational. Because metacognition
takes extra work, people will tend to “get by” if they can, rather than take the extra
cognitive effort needed to go beyond “good enough” (Martin & Schwartz, accepted).
Students often skim readings, because they think it is not worth checking their
understanding. Teachers, however, are responsible for their students’ performance, not to
mention their own display of competence. This increase in responsibility can motivate
teachers to engage in interactive metacognition, which may be one reason that tutors
learn more when preparing to teach than simply studying for themselves (e.g., Annis,
1983).
This chapter reviews research on Teachable Agents to demonstrate that it is
possible to use computer learning environments to produce the cognitive and
motivational benefits of interactive metacognition. Teachable Agents (TA) are learning-
by-teaching environments where students explicitly teach an intelligent computer agent.
The chapter begins with an introduction of the TA, “Betty’s Brain,” followed by a
description of how Betty elicits interactive metacognitive behaviors. The chapter then
shows that teaching Betty improves children’s content learning and their abilities to use
the same sort of reasoning as Betty. Finally, the chapter examines students’ learning
choices to determine whether they begin to internalize interactive metacognition.
A Technology for Applying Interactive Metacognition
This section explains how the Teachable Agent software naturally engages
metacognition during learning. Betty’s Brain, the TA shown in Figure 1 and the focus of
this chapter, was designed for knowledge domains where qualitative causal chains are a
useful structural abstraction (e.g., the life sciences). Students teach Betty by creating a
concept map of nodes connected by qualitative causal links; for example, ‘burning fossil
fuels’ increases ‘carbon dioxide’. Betty can answer questions based on how she was
taught. For instance, Betty includes a simple query feature. Using basic artificial
intelligence reasoning techniques (see Biswas, Leelawong, Schwartz, Vye, & TAG-V,
2005), Betty animates her reasoning process as she answers questions. In Figure 1, Betty
uses the map she was taught to answer the query, “What happens to ‘heat radiation’ if
‘garbage’ increases?” Students can trace their agent’s reasoning, and then remediate their
agents’ knowledge (and their own) if necessary. As described below, there are many
feedback features that help students monitor their agent’s understanding. A version of the
Betty’s Brain environment and classroom management tools can be found at
<aaalab.stanford.edu/svBetty.html>. Betty is not meant to be the only means of
instruction, but rather, she provides a way to help students organize and reason about the
content they have learned through other classroom lessons.
[Figure 1 about here – Betty’s Brain]
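To make the mechanics of this query feature concrete, the following is a minimal sketch of qualitative causal-chain reasoning over a concept map, written in Python. The node names, the link signs, and the `query` function are illustrative assumptions, not the actual Betty's Brain implementation (see Biswas et al., 2005, for the real reasoning techniques).

```python
# Each link carries a qualitative sign: +1 for "increases", -1 for "decreases".
# The map below is a toy example; only the fossil-fuels link comes from the text.
concept_map = {
    "burning fossil fuels": [("carbon dioxide", +1)],
    "carbon dioxide": [("greenhouse effect", +1)],
    "greenhouse effect": [("heat radiation", -1)],  # hypothetical links
}

def query(cmap, source, target, sign=+1, visited=None):
    """If `source` increases, what happens to `target`?
    Returns +1 (increase), -1 (decrease), or None (no causal path found)."""
    if visited is None:
        visited = {source}
    if source == target:
        return sign
    for nxt, link_sign in cmap.get(source, []):
        if nxt in visited:
            continue  # avoid looping around cycles in the map
        result = query(cmap, nxt, target, sign * link_sign, visited | {nxt})
        if result is not None:
            return result  # report the first causal chain found
    return None

print(query(concept_map, "burning fossil fuels", "heat radiation"))  # -1
```

The real system animates each step of such a chain and must arbitrate among multiple paths between two concepts; this sketch simply reports the first chain it finds.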
In reality, when students work with Betty, they are programming in a high-level,
graphical language. However, Betty’s ability to draw inferences gives the appearance of
sentient behavior. Betty also comes with narratives and graphical elements to help
support the mindset of teaching; for example, students can customize their agent’s
appearance and give it a name. (“Betty’s Brain” is the name of the software, not a
student’s specific agent.) Betty can also take quizzes, play games, and even comment on
her own knowledge. Ideally, the TA can enlist students’ social imagination so they will
engage in the processes of monitoring and regulating their agent’s knowledge.
A key element of Betty is that she externalizes thought processes. Betty literally
makes her thinking visible. Thus, students are applying metacognition to the agent’s
thinking and that thinking is in an easily accessible format.
Monitoring One’s Own Thoughts in an Agent
For students to practice metacognition on their agent, they need to view Betty as
exhibiting cognitive processes. This section shows that students do treat their agent as
sentient, which leads them to take responsibility for monitoring and regulating their
agents’ knowledge. It then shows that Betty’s knowledge is a fair representation of the
students’ own knowledge, which shortens the distance between monitoring the agent and
monitoring themselves.
The Agent Elicits Interactive Metacognitive Behaviors
When programming and debugging their agents, students are also monitoring and
regulating their agents’ knowledge and reasoning. A study with 5th-graders demonstrated
that students treat their agents as having and using knowledge. By this age, children know
the computer is not really alive, but they suspend disbelief enough to treat the computer
as possessing knowledge and feelings (e.g., Reeves and Nass, 1996; Turkle, 2005).
Students monitor their agents’ failures and share responsibility, which leads them to
revise their own understanding so they can teach better.
The study used the Triple-A-Challenge Gameshow, which is an environment
where multiple TAs, each taught by a different student, can interact and compete with
one another (Figure 2). Students can log on from home to teach their agents, chat with
other students, and eventually have their agents play in a game. During game play, (1) the
game host poses questions to the agents; (2) the students choose a wager that their agent
will answer correctly; (3) the agents answer based on what they have been taught; (4) the
host reveals the correct answer; and finally, (5) wager points are awarded. In addition to
boosting engagement, the wagering feature was intended to lead students to think through
how their agent would answer the question, thereby monitoring their agent’s
understanding. The Gameshow was developed to make homework more interactive,
social, and fun. In this study, however, the focus was on student attitudes towards Betty
during game play, and students were videotaped as they worked alone.
[Figure 2 about here – Gameshow podium]
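As a concrete reading of steps (1) through (5), the sketch below scores a single wager round. The gain-the-wager/lose-the-wager rule is an illustrative assumption; the chapter says only that wager points are awarded.

```python
def play_round(agent_answer, correct_answer, wager, score):
    """One Gameshow round: the agent answers from what it was taught, the
    host reveals the correct answer, and the student's wager is resolved."""
    return score + wager if agent_answer == correct_answer else score - wager

score = 100
score = play_round("increase", "increase", wager=20, score=score)  # agent right: 120
score = play_round("decrease", "increase", wager=10, score=score)  # agent wrong: 110
print(score)
```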
The study included two conditions. In both, students received a text passage on
the mechanisms that sustain a fever, and they taught their TA about these concepts. The
treatment difference occurred when playing the Gameshow. In the TA condition, the
agents answered six questions, and the graphical character in the Gameshow represented
the agent. In the Student condition, the students answered the questions, and the character
represented the student. To capture students’ thoughts and feelings towards the agent,
students in both groups thought aloud.
In the TA condition, students treated their agents as having cognitive states.
Students’ attributions of cognitive states were coded as being directed to themselves,
their agents, or both. Examples of self-attributions include, “It’s kind of confusing to
me,” “I have a really good memory,” and “No, actually, I don’t know.” Examples of
agent-attributions include, “He doesn’t know it,” and “He knows if shivering
increases….” Sometimes, a single statement could include both self and agent
attributions; for example, “I’m pretty sure he knows this one,” and, “I guess I’m smarter
than him.”
During game play, students in both treatments made about two cognitive state
attributions per question. For the TA condition, over two-thirds of these attributions were
towards the agent or a combination of agent and student. Thus, students treated the agent
as a cognitive entity, and in fact, they sometimes confused who was doing the thinking,
as in the case of one boy, who stated, “’cause I don’t… ‘cause he doesn’t know it.”
The TA students also took an “intentional stance” (Dennett, 1989) towards their
agents, by apportioning responsibility to the agent for success and failure. They could
have behaved as though all successes and failures were theirs, because the agent is simply
a program that generates answers from a map the student had created, but they did not.
Table 1 indicates the number of attribution-of-credit statements made in response to
successful and unsuccessful answers. Examples of success attributions include, “I’m
glad I got it right” (self), “He got it right!” (agent), or “We got it!” (both). Examples of
failure attributions include, “I didn’t teach him right” (self), “He said large increase when
it was only increase” (agent), or “Guess we were wrong” (both).
[Table 1 about here – attributions of sentience]
As the table shows, students in the TA condition liberally attributed responsibility
to the agent. Importantly, the TA condition exhibited more attention to failure, which is a
key component of monitoring (e.g., Zimmerman & Kitsantas, 2002). They made nearly
three times as many remarks in a failure situation relative to the Student condition. The
attributions were spread across themselves and their agents. In addition to acknowledging
failure, they often made remarks about flaws in their teaching such as, “Whoa. I really
need to teach him more.” Thus, at least by the verbal record, the TA condition led the
students to monitor and acknowledge errors more closely than the Student condition.
The study also demonstrated that the students were sufficiently motivated by
teaching to engage in the extra work that metacognition often entails. After completing
the round of game play, students were told the next round would be more difficult. They
were given the opportunity to revise their maps and re-read the passage in preparation.
While all the children in the TA condition chose to go back and prepare for the next
round, only two-thirds of the Student condition prepared. Of those who did prepare, the
TA students spent significantly more time at it. The protocol data from the game play
help indicate one possible reason. The Student condition exhibited nearly zero negative
responses to failure (e.g., "Ouch!"). Given an unsuccessful answer, the Student condition
averaged 0.02 negative affective responses. In contrast, the TA condition averaged 0.62
expressions of negative affect. Much of this negative affect was regarding their agent’s
feelings. For example, one student said “Poor Diokiki… I’m sorry Diokiki” when his
agent, Diokiki, answered a question incorrectly. The TA students felt responsibility for
their agents’ failures, which may have caused them to spend more time preparing for the
next round of game play.
Overall, these data indicate that the children treated their agents as if they were
sentient, which had subsequent effects on student learning behaviors. In reality, the
children were “playing pretend.” They knew their agent was not a sentient being.
Regardless, their play involved the important features of metacognition – thinking about
mental states and processes, noticing and taking responsibility for mistakes, and
experiencing sufficient affect that it is worth the effort to do something about the
mistakes when given a chance to revise. Working with another, in this case an agent one
has taught, can lead to more metacognitive behaviors than completing a task oneself.
The Agent’s Knowledge Reflects the Student’s Knowledge
Schoenfeld (1987), discussing the importance of monitoring, states that “… the
key to effective self-regulation is being able to accurately self-assess what is known and
not known.” In Betty, students are assessing what their agent does and does not know.
The agent’s knowledge is a reflection of their own knowledge, so that working with the
agent indirectly entails working on an externalized version of their own knowledge. This
was demonstrated by correlating the test scores of the students and their agents.
Betty can be automatically tested on the complete population of questions in a
concept map. By using a hidden expert map that generates the correct answers, the
program can successively test Betty on all possible questions of the form, “If node <X>
increases, what happens to node <Y>?” The results produce an APQ Index (all possible
questions) that summarizes the overall test performance of the TA.
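As a sketch of how such an index might be computed, one can enumerate every ordered pair of nodes and compare the student-taught map's answer with the hidden expert map's answer. The equal weighting of all questions is an assumption; the chapter does not detail the exact scoring.

```python
def apq_index(student_map, expert_map, nodes):
    """Fraction of all ordered node pairs (X, Y) on which the student-taught
    map and the expert map agree about "If X increases, what happens to Y?".
    Assumes the `query` function from the earlier concept-map sketch is in scope."""
    pairs = [(x, y) for x in nodes for y in nodes if x != y]
    agree = sum(query(student_map, x, y) == query(expert_map, x, y)
                for x, y in pairs)
    return agree / len(pairs)
```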
A study with 30 sixth-grade students compared the agents’ APQ indices with how
well students did on their own paper-and-pencil tests. Students completed three
cumulative units by teaching their agents about global warming and climate change. At
the end of each unit, the agents were tested to derive an APQ Index, and students took a
short answer, paper-and-pencil test. In the paper-and-pencil test, half of the items
comprised TA-like questions, in the sense that they depended on causal chaining and
nodes in Betty’s map. The other half comprised Non-TA questions in the sense that they
depended on content that was not captured in Betty’s nodes. The Non-TA questions
helped to determine whether Betty correlated with student knowledge more broadly, and
not just questions that Betty could answer.
[Table 2 about here – APQ x Student test scores]
Table 2 indicates that the TA scores were positively correlated with students’ test
scores. These correlations compare favorably with the correlations between students’
scores on the TA-like questions and the Non-TA questions for each unit test (Test 1: r = .47; Test 2: r = .46; and Test 3: r = .14). Thus, the APQ Index correlated better with
student performance on the TA-like and Non-TA questions than these two types of paper-
and-pencil items correlated with each other. (The low correlations for Test 3 are due to a
badly worded TA-like question.) Conceivably, with further development and evaluation,
it will be possible to test agents instead of students, thereby saving valuable instructional
time.
The correlation of student and agent performance indicates that when students
monitor their agent’s knowledge, for example, by asking it a question, they are likely to
be monitoring a fair externalization of their own knowledge. This helps to dissolve the
gap between self and other, so that the task of working with the agent is a proxy for the
task of reflecting upon their own knowledge.
Adopting the Cognition of the Agent
Given that students treat the TA as exhibiting mental states and the TA reflects
the student’s knowledge, the next question is whether these have any effect on student
learning. Ideally, by monitoring another’s cognition, one can pick up the other person’s
style of reasoning. Siegler (1995), for example, found that young children learned
number conservation more effectively when prompted to explain the experimenter’s
reasoning rather than their own. Betty reasons by making inferences along causal chains.
When students teach Betty, they learn to simulate her causal reasoning for themselves.
Learning to simulate Betty’s cognition about a situation is different from learning
to simulate the situation itself. When people reason about a situation itself, they often
create a mental model that helps them imagine the behavior of the situation and make
predictions (Gentner & Stevens, 1983; Glenberg et al., 2004; Zwaan &
Radvansky, 1998). For example, when reasoning about how gears work, people can
create and simulate an internal image of the gears to solve problems (Schwartz & Black,
1996). To run their mental model, people imagine the forces and movements of the gears,
and they observe the resulting behaviors in their mind’s eye. With Betty, students create
a mental model of the agent’s reasoning. So, rather than simulating forces and spatial
displacements, the students learn to simulate chains of declarative reasoning. This way,
Betty’s cognition becomes internalized as a way of reasoning for the student.
Relevant data come from the preceding study where two classes of sixth graders
learned about global warming. Over two weeks, students learned the mechanisms of the
greenhouse effect, the causes of greenhouse gasses, and finally, the effects of global
warming. Both classes completed hands-on activities, saw film clips, received lectures,
and completed relevant readings. At regular points, students were asked to create concept
maps to organize their learning, and they all learned how to model causal relations using
a concept map. The difference was that one class was assigned to the Betty condition;
these students used the Betty software to make their concept maps. Figure 3 shows a
finished “expert” version of a map created on the Betty system. The other class was
assigned to the Self condition; these students used Inspiration, a popular, commercial
concept-mapping program.
[Figure 3 about here – Global Warming Map]
Students in both conditions received multiple opportunities for feedback with an
important difference. In the Betty condition, agents answered the questions, and the
feedback was directed towards the agents. In the Self condition, the students answered
the questions, and the feedback was directed towards them. This difference occurred
across several feedback technologies. For example, the agents took quizzes or the
students took quizzes. For homework, the agents answered questions in the Gameshow or
the students answered the questions in the Gameshow. Thus, the main difference
between conditions was that in the Betty condition, the learning interactions revolved
around the task of teaching and monitoring the agent, whereas in the Self condition, the
learning interactions revolved around the task of creating a concept map and answering
questions and monitoring oneself.
[Figure 4 about here – accuracy by inference chain length]
The students in the Betty condition adopted Betty’s reasoning style. After each
unit – mechanisms, causes, effects – all the students completed short-answer, paper-
pencil tests. The tests included questions that required short, medium, or long chains of
causal inference. An example of a short-chain question involved answering why warmer
oceans increase sea level. An example of a long-chain question involved detailing a
causal bridge that spanned from an increase in factories to the effects on polar bears.
Figure 4 shows that over time the Betty students separated from the Self students in their
abilities to complete longer chains of inference. After the first unit, the two groups
overlapped, with the Betty students showing a very modest advantage for the longer
inferences. After the second unit, the TA students showed a strong advantage for the
medium-length inferences. By the final unit, the TA students showed an advantage for
short, medium, and long inferences.
This study used intact classes, so the results are promissory rather than conclusive.
Nevertheless, the steady improvement in length of causal inference is exactly what one
would expect the Betty software to yield, because this is what the agent’s reasoning
models and enforces. The interactive metacognition of teaching and monitoring Betty’s
reasoning and accuracy helped students internalize her style of thinking, which in this
case, is a positive outcome because her reasoning involved causal chaining.
Regulating Cognition for the Agent
In addition to monitoring cognition, metacognition involves taking steps to guide
cognition, or as it is often termed, "regulating" cognition (Azevedo & Hadwin, 2005;
Brown, 1987; Butler & Winne, 1995; Pintrich, 2002; Schraw, Crippen, & Hartley, 2006).
Regulating another can help students learn to regulate for themselves.
Thus far, Betty’s features supported monitoring, but there were few features to
help students decide what to do if they detected a problem. For example, one student’s
agent was performing poorly in the Gameshow and the student did not know how to fix it.
The Gameshow was not designed to address this situation. Fortunately, another student
used the Gameshow's chat feature to provide support: "Dude, the answer is right there in
the reading assignment!”
To help students learn to self-regulate their thinking, Betty comes in a self-
regulated learning (SRL) version. For example, when students add incorrect concepts or
links, Betty can spontaneously reason and remark that the answer she is deriving does not
seem to make sense. This prompts students to reflect on what they have just taught Betty
and to appreciate the value of checking understanding. SRL Betty also includes Mr.
Davis, a mentor agent shown in Figure 5. Mr. Davis complements the teaching narrative,
because he grades Betty’s quizzes and gives her feedback on her performance. This
feedback is in the form of motivational support (e.g., “Betty, your quiz scores are
improving”), as well as strategies to help the students improve Betty’s knowledge (e.g.,
“Betty, ask your teacher to look up the resources on quiz questions that you have got
wrong …”).
[Figure 5 – Mr. Davis]
SRL Betty implements regulation goals specified in Zimmerman’s (1989) list of
self-regulation strategies. The SRL system monitors for specific patterns of interaction,
and when found, Betty or Mr. Davis provides relevant suggestions (also see Jeong et al.,
2008). Table 3 provides a sample of triggering patterns and responses used by the SRL
system; there are many more than those shown in Table 3.
[Table 3 about here – SRL Patterns and Responses]
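As an illustration of how such triggers might be detected, the sketch below checks the SELF-ASSESSMENT pattern from Table 3: repeated quiz requests with no intervening map edits. The action codes follow Table 5; the detection rule and the wording of the prompt are assumptions, not the deployed system's logic.

```python
def check_self_assessment(action_log):
    """Fire a prompt if the last two quiz requests ('Q') were not separated
    by any map edit ('M'); otherwise return None."""
    quiz_positions = [i for i, a in enumerate(action_log) if a == "Q"]
    if len(quiz_positions) >= 2:
        between = action_log[quiz_positions[-2] + 1 : quiz_positions[-1]]
        if "M" not in between:
            return ("Betty: Are you sure I understand what you taught me? "
                    "Please ask me some questions to make sure I got it right.")
    return None

print(check_self_assessment(list("RMQMQ")))  # None: the map was edited between quizzes
print(check_self_assessment(list("RMQQ")))   # prompt fires: no edits between quizzes
```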
In sum, SRL Betty is an adaptive tutoring system, except that students are the
tutors, and the system adapts to target metacognitive needs specifically. The
metacognitive support is integrated into the teaching narrative through computer
characters that take the initiative to express opinions, make requests, and provide relevant
prompts to encourage further interactive metacognition. In the following, the first sub-
section shows that SRL support helps students learn science content. The second sub-
section introduces a new machine learning methodology for analyzing student choices.
The methodology is used to identify high-level interaction patterns that indicate
metacognitive strategies. It is then used to evaluate whether students developed
metacognitive strategies that they continued to use on their own, even when the SRL
features were turned off.
Self-Regulation Support Improves Student Learning
The self-regulation support in SRL Betty helps students learn science content
better. Fifty-one 5th-grade students learned about interdependence in a river ecosystem
with a special focus on the relations between fish, macroinvertebrates, plants, and algae.
The students worked over seven class periods starting with the food chain, then
photosynthesis and respiration and finally the waste cycle. To help the students learn,
there were quizzes and reading resources built into the system. (In the Gameshow studies
described earlier in the chapter, the students received the nodes, and their task was to
determine the links. In this study, the students had to decide which nodes to include in
their maps based on the reading, so they could develop strategies for identifying key concepts.)
The study had three conditions: Regulated Learning by Teaching (RT); Learning
by Teaching (LT); and Intelligent Coaching (IC). The RT condition used SRL Betty, per
Table 3. Students could also submit Betty to take a quiz, and Mr. Davis provided
metacognitive tips about resources and steps the students could use to teach Betty better.
Mr. Davis did not dictate specific changes to Betty’s knowledge, for example, to add a
particular concept or change a link. Instead, he suggested strategies for improving Betty’s
knowledge (e.g., “Check if Betty understands after you have taught her something new”).
In the LT condition, students worked with Betty and the mentor agent, but
without the SRL support. Betty did not provide prompts for regulating how she was
taught, and Mr. Davis provided direct instructions for how to fix the concept map after a
quiz. For example, Mr. Davis might tell students “to consider how macroinvertebrates
might affect algae and add an appropriate link.”
The final Intelligent Coach (IC) condition was identical to the LT condition,
except that students used the software to make concept maps of their own knowledge.
There was no teaching cover story. Instead of asking Betty to answer a question, students
could ask Mr. Davis to answer a question using the concept map or to explain how the
map gave a certain answer. Thus, students got the same information and animations as in
the LT condition, except they thought it was their map that Mr. Davis was analyzing
instead of Betty’s thinking.
In addition to the initial seven days of learning, the study included a second
learning phase that measured transfer. Six weeks after completing the river ecosystem
unit, students left their original conditions to spend five class periods learning about the
land-based nitrogen cycle. All the students worked with a basic Betty version. There were
on-line reading resources; Betty could answer questions; and, students could check how
well Betty did on quizzes. However, there was no extra support, such as how to improve
Betty’s map or their teaching. The logic of this phase was that if students had developed
good metacognitive strategies, they would be more prepared to learn the new content on
their own (Bransford & Schwartz, 1999).
The students' final concept maps from the main and transfer phases were scored
for the inclusion of correct nodes and links based on the reading materials. Table 4 holds
the average scores. Overall, both conditions that involved teaching did better than the
Intelligent Coach condition, with no interactions by time. This means that the Learning-
by-Teaching condition did better than the Intelligent Coach condition, even though the
only treatment difference between these two conditions was whether students thought
they were teaching and monitoring Betty (LT), instead of being monitored by Mr. Davis
(IC). This result reaffirms the findings from the global warming study using a tighter
experimental design. If students believe they are teaching an agent, it leads to superior
learning even when they are using the same concept mapping tool and receiving
equivalent feedback.
In a separate study not reported here, an Intelligent Coaching condition included
self-regulated learning support, similar to the Regulated Teaching condition. (Mr. Davis
gave prompts for how to improve the concept map by consulting resources, checking the
map by asking queries, etc.). In that study, the IC+Regulated support condition did no
better than an IC condition, whereas the RT condition did. So, despite similar levels of
metacognitive prompting, the prompting was more effective when directed towards
monitoring and regulating one’s agent. This result also supports the basic proposition
that teaching effectively engages metacognitive behaviors, even compared to being told
to use those metacognitive behaviors for one’s self.
[Table 4 about here – Concept Map Scores]
Post-hoc analyses of the main learning phase indicate that the extra
metacognitive support of the RT treatment led to better initial learning than the LT
condition in which students did not receive any guidance on regulation. However, once
students lost the extra support in the transfer phase, they performed about the same as the
LT students. By these data, self-regulation support helped students learn when it was
available, but it is not clear that the extra support yielded lasting metacognitive skills
compared to only teaching Betty. As described next, however, there were some modest
differences in how the RT students went about learning in the transfer phase, even though
these did not translate into significant learning differences.
Adopting Metacognitive Learning Choices from an Agent
Metacognition, besides helping people think more clearly, can also help people make
choices about how to use learning resources in their environment. For example, to study
for the California Bar exam, many students order the BAR/BRI materials
(www.barbri.com). These materials comprise nearly a cubic meter of readings, reviews,
outlines, practice tests, videotapes, as well as live local lectures, workshops and on-line
tutorials. Across the materials, the content is highly redundant. Rather than plowing
through all the materials, these well-educated adults often choose the presentation format
and activities that they feel suit their learning needs and preferences for a particular topic.
Their learning is driven by their choices of what, when, and how to learn. Outside of
classrooms that exert strict control, this is often the case. People make choices that
determine their learning. For younger students, metacognitive instruction should help
children learn to make effective learning choices.
[Table 5 about here – Possible student choices]
This section introduces a new data mining methodology for examining learning
choices. The goal is to be able to identify choice patterns that reflect effective
metacognition. Ideally, once these patterns have been identified, adaptive technologies
can monitor for these patterns and take appropriate actions. This is a useful endeavor,
because current adaptive computer systems depend on strict corridors of instruction in
which students can make few choices (except in the unrelated sense of choosing an
answer to a problem). If students do not have chances to make choices during learning, it
is hard to see how they can develop the metacognition to make effective learning choices.
If the current methodology (or others) is successful, it will be possible to use more
choice-filled learning environments, like virtual worlds, without sacrificing the benefits
of adaptive technologies for helping students to improve.
To explore the data mining methodology, it was applied to the log files from the
preceding study. The question was whether the methodology could help reveal whether
the RT students exhibited unique patterns of learning choices during the initial learning
phase when the metacognitive support was in play, and whether these patterns carried
over to the transfer phase when the support was removed. That is, did the students in the
RT condition internalize the metacognitive support so they exhibited effective
metacognitive patterns once the support was removed?
In the study, students could make a number of choices about which activities to pursue. Table 5 summarizes the possibilities. For example, one student read the resources, and then made a number of edits to the map. Afterwards, the student submitted the map to a quiz, made some more edits, and then asked a pair of questions of the map. In raw form, the logfile sequence is somewhat overwhelming: R M M M A M M M M M M M M M Q M M A A.
To make sense of these complex choice sequences, a new data mining methodology analyzed the log files (Li & Biswas, 2002; Jeong & Biswas, 2008). The methodology automated the derivation of a Hidden Markov Model (HMM). An HMM represents the
probabilities of transitioning between different “aggregated” choice states (Rabiner, 1989).
An aggregated choice state represents common choice patterns that comprise sequences
of individual choices to transition from one activity to another. HMM is useful for
identifying high-level choice patterns, much in the way that factor analysis is useful for
identifying clusters of survey items that reflect a common underlying psychological
property.
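The sketch below shows the general shape of such an analysis using the off-the-shelf hmmlearn library; the authors' own system used a custom Bayesian derivation (Li & Biswas, 2002), so the library call, the sample sessions, and the hope that three hidden states align with the patterns described next are all assumptions.

```python
import numpy as np
from hmmlearn import hmm  # CategoricalHMM; older hmmlearn versions call this MultinomialHMM

# Activity codes from Table 5, mapped to integers for the HMM.
ACTIONS = {"M": 0, "R": 1, "Q": 2, "A": 3, "E": 4, "C": 5}

# Each student session is one observed choice sequence (illustrative data).
sessions = [list("RMMMAMMMMMMMMMQMMAA"), list("MMQMMAEMQ")]
X = np.concatenate([[ACTIONS[a] for a in s] for s in sessions]).reshape(-1, 1)
lengths = [len(s) for s in sessions]

# Fit three hidden states, hoping they aggregate into patterns such as
# Basic Map Building, Map Probing, and Map Tracing.
model = hmm.CategoricalHMM(n_components=3, n_iter=200, random_state=0)
model.fit(X, lengths)

print(model.transmat_)      # transition probabilities between hidden states
print(model.emissionprob_)  # which choices each hidden state tends to emit
```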
The HMM analysis generated three choice patterns that could be interpreted as
increasing in metacognitive sophistication: Basic Map Building; Map Probing; and, Map
Tracing. The Basic Map Building pattern involves editing the map, submitting the map
for a quiz, and occasionally referring to the reading resources. It reflects a basic and
important metacognitive strategy. Students work on their maps, check the map with a
quiz to see if there are errors, and occasionally look back at the readings. Students may
order these choices in different ways, but HMM analysis captured that students frequently
transitioned among these choices.
In the Map Probing pattern, students edit their maps, and then they ask a question
of their map to check for specific relations between two concepts (e.g., if fish increase,
what happens to algae?). This pattern exhibits a more proactive, conceptually driven
strategy. Students are targeting specific relations rather than relying on the quiz to
identify errors, and students need to formulate their own questions to check their maps.
Finally, the Map Tracing pattern captures when students ask Betty or Mr. Davis
(depending on the system) to explain the steps that led to an answer. When Betty or Mr.
Davis initially answers a question during Map Probing, the agents only state the answer
and show the paths they followed. To see whether a specific link within the path
produced an increase or decrease, students have to request an explanation. (Map Tracing
can only occur after Map Probing.) These decomposing explanations are particularly
useful when maps become complex, and there are multiple paths between two concepts.
Map Tracing is a sophisticated metacognitive strategy, because it involves decomposing a
chain of reasoning step-by-step, even after the answer has been generated in Map Probing.
[Figure 6 about here. HMM transition probabilities]
Figure 6 shows the complete set of transitional probabilities from one state to
another broken out by condition and phase of the study. The figure is complex, so the
following discussion will narrow the focus to Map Tracing.
Multiplying the transition probabilities yields a rough estimate of the proportion
of time students spent in a specific activity state. This is important, because just looking
at a single transition can be misleading. For example, in the main phase of the study, the
IC and RT conditions transitioned from Map Probing into Map Tracing at the same rate.
Nevertheless, the IC condition spent much less time Map Tracing. The IC students rarely
transitioned from Map Building into Map Probing, and Map Probing is a necessary
precursor to Map Tracing.
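One simple way to turn transition probabilities into time-in-state estimates is to compute the stationary distribution of the fitted transition matrix, as sketched below. The matrix values are illustrative, not the chapter's estimates, and the chapter does not specify the exact computation behind its proportions.

```python
import numpy as np

transmat = np.array([[0.80, 0.15, 0.05],   # illustrative transition matrix over
                     [0.30, 0.50, 0.20],   # (Map Building, Probing, Tracing)
                     [0.20, 0.40, 0.40]])

# The stationary distribution pi satisfies pi = pi @ transmat; it is the left
# eigenvector of the transition matrix associated with eigenvalue 1.
evals, evecs = np.linalg.eig(transmat.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
print(dict(zip(["Map Building", "Map Probing", "Map Tracing"], pi.round(3))))
```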
In the first phase of the study, students in all three conditions spent a significant
proportion of their time in Basic Map Building. However, the RT (Regulated Teaching)
students more often transitioned into Map Probing and Map Tracing. Their version of the
software included two features to make this happen. First, Betty would not take a quiz if
students had not checked her reasoning by asking her a question. This forced students to
enter the Map Probing activity. Second, Betty and Mr. Davis suggested that the students
ask Betty to explain her reasoning, so the students could trace her reasoning and look for
errors. As a result, the proportion of effort spent in Map Probing and Tracing was twice
as great for the RT condition compared to the other two conditions. Presumably, this
contributed to the superior content learning, as indicated by Table 4.
The metacognitive strategies practiced in the initial learning phase transferred
somewhat when students had to learn the nitrogen cycle on their own. At transfer, when
all students had to learn the nitrogen cycle without any special feedback or tips, the
differences between conditions were much smaller. However, there was a “telling”
difference that involved transitions into Map Tracing. The RT students, who had received
regulation support, were twice as likely as the LT students to use Map Tracing. And, the
LT students, who had taught Betty, were twice as likely to use Map Tracing as the IC
students. As ratios, the differences are quite large, though in terms of absolute amount of
time spent Map Tracing, they are relatively small. Nevertheless, the strategic use of Map
Tracing can greatly help monitor lengthy chains of reasoning. These differences may help
explain why the LT and RT treatments learned more at posttest. These students were
more likely to check how their agent was reaching its conclusion, which conceivably,
could have caused the superior learning.
At this point, it is only tentative that the self-regulation support in Betty affected
students’ learning at transfer via the learning choices they made. This HMM analysis
aggregated across students and sessions within a condition. Thus, it is not possible to do
statistical tests. Deriving patterns through HMM is a new approach to understanding
students' metacognitive learning choices, and it is still being developed. The main
promise of analyzing these patterns is that it can help improve the design of interactive,
choice-filled environments for learning. By identifying better and worse interactive
patterns for learning, it should be possible to design the computer system to identify those
patterns in real-time and provide adaptive prompts to (a) move students away from
ineffective metacognitive patterns, and (b) encourage them to use effective patterns. Thus,
an important new step will be to correlate choice patterns with specific learning outcomes,
so it is possible to determine which choice patterns do indeed lead to better learning.
CONCLUSION
The chapter’s leading proposal is that teaching another person, or in this case an
agent, can engage productive metacognitive behaviors. This interactive metacognition
can lead to better learning, and ideally, if given sufficient practice, students will
eventually turn the metacognition inward.
The first empirical section demonstrated that students do take their agent’s
behavior as cognitive in nature, and that the agent’s reasoning is correlated with the
students’ own knowledge. Thus, when students work with their agent, they are engaging
in metacognition. It is interactive metacognition directed towards another. The second
empirical section demonstrated that monitoring an agent can lead to better learning,
because students internalize the agent’s style of reasoning. In the final empirical section,
the Teachable Agent was enhanced to include support for regulating the choices that
students make to improve learning. Again, the results indicated that working with an
agent led to superior content learning, especially with the extra metacognitive support in
place. Moreover, students who taught an agent showed near transfer in learning a new topic
several weeks later.
An analysis of students’ learning choices indicated that the students who had
taught agents exhibited a more varied repertoire of choices for improving their learning.
They also exhibited some modest evidence of transferring these metacognitive skills by
choosing to check intermediate steps within a longer chain of inference.
It is informative to contrast Betty with other technologies designed as objects-to-
think-with (Turkle, 2007). Papert (1980), for example, proposed that the programming
language Logo would improve children’s abilities to plan. Logo involved programming
the movement of a graphical “turtle” on the computer screen. Evidence did not support
the claim that Logo supported planning (Pea & Kurland, 1984). One reason might be that
students had to plan the behavior of the turtle, but the logical flow of the program did not
resemble human planning itself. For example, the standard programming construct of a
“do-loop” involves iterating through a cycle and incrementing a variable until a criterion
is reached. The execution of the logic of this plan does not resemble many human
versions of establishing and managing a plan. Therefore, programming in Logo is an
interactive task, but it is not a task where one interacts with mental states or processes. In
contrast, the way Betty reasons through causal chains is similar enough to human
reasoning that programming Betty can be treated as working with her mental states.
Students can internalize her cognitive structure, and eventually turn their thinking about
her cognitive structures into thinking about their own.
ACKNOWLEDGMENTS
This material is based upon work supported by the National Science Foundation
under grants EHR-0634044, EHR-0633856, SLC-0354453, and by the Department of
Education under grant IES R305H060089. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the granting agencies.
REFERENCES
Annis, L. (1983). The processes and effects of peer tutoring. Human Learning, 2, 39-47.
Azevedo, R. & Hadwin, A.F. (2005). Scaffolding self-regulated learning and
metacognition – Implications for the design of computer-based scaffolds.
Instructional Science, 33, 367-379.
Bargh, J. A., & Schul, Y. (1980). On the cognitive benefits of teaching. Journal of
Educational Psychology, 72, 593-604.
Biswas, G., Schwartz, D. L., et al. (2001). Technology support for complex problem
solving. In K. D. Forbus & P. J. Feltovich (Eds.), Smart machines in education:
The coming revolution in educational technology (pp. 71-97). Menlo Park, CA:
AAAI Press.
Biswas, G., Leelawong, K., Schwartz, D., Vye, N., & TAG-V. (2005). Learning by
teaching: A new agent paradigm for educational software. Applied Artificial
Intelligence, 19, 363-392.
Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with
multiple implications. In A. Iran-Nejad & P. D. Pearson (Eds.), Review of
Research in Education , 24, 61-101. Washington DC: American Educational
Research Association.
Brown, A. (1987). Metacognition, executive control, self-regulation and other more
mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition,
motivation and understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.
Brown, A.L., Bransford, J.D., Ferrara, R.A., & Campione, J.C. (1983). Learning,
remembering, and understanding. In J.H. Flavell and E.M. Markman (Eds.),
Handbook of child psychology (4th ed.). Cognitive Development (Vol.3, pp.515-
529). New York: Wiley.
Butler, D., & Winne, P. (1995). Feedback and self-regulated learning: A theoretical
synthesis. Review of Educational Research, 65(3), 245–281.
Chi, M. T. H., Roy, M., & Hausmann, R. G. M. (2008). Observing tutorial dialogues
collaboratively: Insights about human tutoring effectiveness from vicarious
learning. Cognitive Science, 32, 301-341.
Craig, S. D., Sullins, J., Witherspoon, A., & Gholson, B. (2006). The deep-level-
reasoning-question effect: The role of dialogue and deep-level-reasoning
questions during vicarious learning. Cognition and Instruction, 24, 565-591.
Dennett, D. (1989). The intentional stance. Cambridge, MA: MIT Press.
Fantuzzo, J., Riggio, R., Connelly, S., & Dimeff, L. (1989). Effects of reciprocal peer
tutoring on academic achievement and psychological adjustment: A componential
analysis. Journal of Educational Psychology, 81(2), 173-177.
Flavell, J.H. (1976). Metacognitive aspects of problem solving. In L.B. Resnick (Ed.),
The nature of intelligence. NJ: L. Erlbaum.
Fuchs, L., Fuchs, D., Bentz, J., Phillips, N., & Hamlett, C. (1994). The nature of student
interactions during peer tutoring with and without prior training and experience.
American Educational Research Journal, 31, 75-103.
Gelman, R., & Meck, E. (1983). Preschoolers' counting: Principles before skill.
Cognition, 13, 343-359.
Gentner, D., & Stevens, A. (Eds.) (1983). Mental Models. Hillsdale, NJ: Erlbaum.
Glenberg, A. M., Gutierrez, T., Levin, J. R., Japuntich, S., & Kaschak, M. P. (2004).
Activity and imagined activity can enhance young children's reading
comprehension. Journal of Educational Psychology, 96, 424-436.
Graesser, A.C., Person, N., & Magliano, J. (1995). Collaborative dialog patterns in
naturalistic one-on-one tutoring. Applied Cognitive Psychology, 9, 359-387.
Jeong, H., & Biswas, G. (2008). Mining student behavior models in learning-by-
teaching environments. Proceedings of the First International Conference on
Educational Data Mining (pp. 127-136), Montreal, Canada.
Jeong, H., Gupta, A., Roscoe, R., Wagster, J., Biswas, G., & Schwartz, D. (2008). Using
hidden Markov models to characterize student behavior patterns in computer-
based learning-by-teaching environments. In B. Woolf et al. (Eds.), Intelligent
Tutoring Systems: 9th International Conference (LNCS Vol. 5091, pp. 614-625),
Montreal, Canada.
King, A. (1998). Transactive peer tutoring: Distributing cognition and metacognition.
Educational Psychology Review, 10(1), 57-74.
King, A., Staffieri, A., & Adelgais, A. (1998). Mutual peer tutoring: Effects of
structuring tutorial interaction to scaffold peer learning. Journal of Educational
Psychology, 90, 134-152.
Kirsh, D. (1996). Adapting the environment instead of oneself. Adaptive Behavior, 4(3-
4), 415-452.
Li, C. & Biswas, G. (2002). A Bayesian Approach for Learning Hidden Markov Models
from Data. Special issue on Markov Chain and Hidden Markov Models,
Scientific Programming, 10, 201-219.
Lin, X. D., Schwartz, D. L., & Hatano, G. (2005). Toward teachers' adaptive
metacognition. Educational Psychologist, 40, 245-256.
Markman, E. (1977). Realizing that you don't understand: Elementary school children's
awareness of inconsistencies. Child Development, 48, 986-992.
Martin, L. & Schwartz, D. L. (accepted pending revisions). Prospective adaptation in the
use of representational tools. Cognition and Instruction.
Okita, S. Y. (2008). Learn wisdom by the folly of others: Children learning to self-
correct by monitoring the reasoning of projective pedagogical agents (Doctoral
dissertation, Stanford University, 2008). Dissertation Abstracts International.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering
and comprehension-monitoring activities. Cognition and Instruction, 2, 117-175.
Papert, S. A. (1980). Mindstorms: Children, computers, and powerful ideas. NY: Basic
Books.
Pea, R. D., & Kurland, D. M. (1984). On the cognitive effects of learning computer
programming. New Ideas in Psychology, 2, 137-168.
Pintrich, P.R. (2002). The Role of Metacognitive Knowledge in Learning, Teaching, and
Assessing. Theory into Practice, 41 (4), 219-225.
Rabiner, L. R. (1989). A Tutorial on Hidden Markov Models and Selected Applications
in Speech Recognition. Proceedings of the IEEE, 77(2), 257-286.
Reeves, B., & Nass, C. (1996). The Media Equation. NY: Cambridge University Press.
Renkl, A. (1995). Learning for later teaching: An exploration of mediational links
between teaching expectancy and learning results. Learning and Instruction, 5,
21-36.
Roscoe, R. & Chi, M. (2007). Understanding tutor learning: Knowledge-building and
knowledge-telling in peer tutors' explanations and questions. Review of
Educational Research, 77, 534-574.
Roscoe, R. D. & Chi, M. (2008). Tutor learning: The role of instructional explaining and
responding to questions. Instructional Science, 36, 321-350.
Schoenfeld, A.H. (1987). What’s all the fuss about metacognition? In A.H. Schoenfeld
(Ed.), Cognitive science and mathematics education (pp.189-215). Hillsdale, NJ:
Erlbaum.
Schraw, G., Crippen, K.J., & Hartley, K. (2006). Promoting Self-Regulation in Science
Education: Metacognition as Part of a Broader Perspective on Learning. Research
in Science Education, 36, 111–139.
Schwartz, D. L. & Black, J. B. (1996). Shuttling between depictive models and abstract
rules: Induction and fallback. Cognitive Science, 20, 457-497.
Schwartz, D. L., Blair, K. P., Biswas, G., Leelawong, K., & Davis, J. (2007). Animations
of thought: Interactivity in the teachable agents paradigm. In R. Lowe & W.
Schnotz (Eds). Learning with Animation: Research and Implications for Design
(pp. 114-40). UK: Cambridge University Press.
Sherin, M.G. (2002). When teaching becomes learning. Cognition and Instruction, 20,
119-150.
Siegler, R. S. (1995). How does change occur? A microgenetic study of number
conservation. Cognitive Psychology, 28, 225-273.
Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard
Educational Review, 57(1), 1-22.
Turkle, S. (2005). The second self: Computers and the human spirit, twentieth
anniversary edition. Cambridge, MA: MIT Press.
Turkle, S. (Ed) (2007) Evocative Objects: Things We Think With. Cambridge, MA: MIT
Press.
Uretsi, J. A. R. (2000). Should I teach my computer peer? Some issues in teaching a
learning companion. In G. Gauthier, C. Frasson, & K. VanLehn (Eds.), Intelligent
Tutoring Systems (pp. 103-112). Berlin: Springer-Verlag.
Vygotsky, L.S. (1978). Mind in Society: The Development of Higher Psychological
Processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds. & Trans.).
Cambridge, MA: Harvard University Press.
Wagster, J., Tan, J., Wu, Y., Biswas, G., Schwartz, D. (2007). Do Learning by Teaching
Environments with Metacognitive Support Help Students Develop Better
Learning Behaviors? In D. S. McNamara & J. G. Trafton (Eds.), Proceedings of
the 29th Annual Cognitive Science Society (pp. 695-700). Austin, TX: Cognitive
Science Society.
Winne, P. H. (2001). Self-regulated learning viewed from models of information
processing. In B. Zimmerman & D. Schunk (Eds.), Self-regulated learning and
academic achievement: Theoretical perspectives (pp. 153–189). Mahwah, NJ:
Erlbaum.
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J.
Hacker, J. Dunlosky, & A. Graesser (Eds.), Metacognition in educational theory
and practice (pp. 277–304). Hillsdale, NJ: Erlbaum.
Zimmerman, B. J. & Kitsantas, A. (2002). Acquiring writing revision and self-regulatory
skill through observation and emulation. Journal of Educational Psychology, 94,
660-668.
Zimmerman, B.J. (1989). A Social Cognitive View of Self-Regulated Learning. Journal
of Educational Psychology, 81, 329-339.
Zimmerman, B.J. (1990). Self-regulating academic learning and achievement: The
emergence of a social cognitive perspective. Educational Psychology Review, 2,
173-201.
Zwaan R.A, & Radvansky G.A. (1998). Situation models in language comprehension and
memory. Psychological Bulletin, 123, 162-185.
TABLES
Table 1. Average Number of Attributions to Success and Failure (standard errors of the mean in parentheses).

                     Attributions for Success                  Attributions when Failed
Condition            Self      Agent     Both      Total       Self      Agent     Both      Total
TA Answers           .17(.12)  .27(.12)  .00(.00)  .44(.16)    .54(.13)  .47(.21)  .66(.19)  1.67(.28)*
Student Answers      .53(.10)  n/a       n/a       .53(.10)    .65(.22)  n/a       n/a       .65(.22)

Note: * p < .05 – comparison of condition means.
Table 2. Correlations between Students' Agents (APQ Index) and Students' Test Scores.

                              Student Test Scores
              All Questions            TA-like Questions        Non-TA Questions
APQ Index     Test 1  Test 2  Test 3   Test 1  Test 2  Test 3   Test 1  Test 2  Test 3
Test 1        .60**   -       -        .51**   -       -        .56**   -       -
Test 2        -       .66**   -        -       .47*    -        -       .66**   -
Test 3        -       -       .34      -       -       .12      -       -       .48*

Note: ** p < .01; * p < .05. Correlations between TA-like and Non-TA questions
are .47, .46, and .14 for Test 1, Test 2, and Test 3, respectively.
Table 3. Sample of Interactive Patterns and Responses by the SRL System.

Regulation Goal: MONITORING THROUGH EXPLANATION
Pattern Description: Multiple requests for Betty to give an answer, but no request for an explanation.
Betty Response: "Let's see, you have asked me a lot of questions, but you have not asked for my explanations lately. Please make me explain my answers so you will know if I really understand."
Mr. Davis Response: "Without asking Betty to explain her answers, you may not know whether she really understands the chain of events that you have been trying to teach her. Click on the Explain button to see if she explains her answer correctly."

Regulation Goal: SELF-ASSESSMENT
Pattern Description: Repeated quiz requests, but no updates have been made to the map.
Betty Response: "Are you sure I understand what you taught me? Please ask me some questions to make sure I got it right. I won't take the quiz otherwise. Thanks for teaching me about rivers!"
Mr. Davis Response: "You have not taught Betty anything new. Please spend some time teaching her new links and concepts, and make sure she understands by asking her questions. Then she can take the quiz again. If you need help learning new things, check the resources."

Regulation Goal: TRACKING PROGRESS
Pattern Description: The most recent quiz score is significantly worse than the previous one.
Betty Response: "I would really like to do better. Please check the resources, teach me, and make sure I understand by asking me questions that are on the quiz. My explanation will help you find out why I am making mistakes in my answers. Also, be sure to check out the new tips from Mr. Davis."
Mr. Davis Response: "Betty did well on the last quiz. What happened this time? Maybe you should try rereading some of the resources and asking Betty more questions so that you can make sure she understands the material."

Regulation Goal: SETTING LEARNING GOALS
Pattern Description: Betty is asked a question that she cannot answer for the second time.
Betty Response: "I just don't know the relationships yet; maybe we should ask Mr. Davis what we need to learn."
Mr. Davis Response: "I've seen this kind of difficulty with teaching some of my own students in the past. You should try looking for missing link connections or links that are in the wrong direction."
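
To make the logic behind Table 3 concrete, the sketch below shows one way such pattern-response rules could be encoded. This is a hypothetical illustration in Python, not the actual Betty's Brain implementation; every name in it (check_patterns, RULES, the trigger functions) is invented, and the single-letter activity codes anticipate Table 5 below.

# Hypothetical sketch of the Table 3 pattern-response rules: each rule
# pairs a trigger over a log of activity codes (see Table 5) with a
# canned conversational response. Not drawn from the system's code.

def no_explanation_requested(log):
    """Monitoring through explanation: several recent requests for
    answers (A) with no request for an explanation (E)."""
    recent = log[-5:]
    return recent.count("A") >= 3 and "E" not in recent

def quiz_without_map_edits(log):
    """Self-assessment: a repeated quiz request (Q) with no map
    edits (M) since the previous quiz."""
    if log.count("Q") < 2:
        return False
    last_q = len(log) - 1 - log[::-1].index("Q")
    before = log[:last_q]
    prev_q = len(before) - 1 - before[::-1].index("Q")
    return "M" not in log[prev_q + 1:last_q]

RULES = [
    (no_explanation_requested,
     "Betty: Please make me explain my answers so you will know "
     "if I really understand."),
    (quiz_without_map_edits,
     "Betty: Are you sure I understand what you taught me? Please "
     "ask me some questions to make sure I got it right."),
]

def check_patterns(log):
    """Return the responses whose triggers match the activity log."""
    return [message for trigger, message in RULES if trigger(log)]

# Three answer requests in a row with no explanation request fires
# the monitoring-through-explanation rule.
print(check_patterns(["M", "R", "A", "A", "A"]))

The actual triggers are presumably richer (the third row of Table 3, for instance, compares successive quiz scores), but the essential structure is the same: each row pairs a detectable behavior pattern with responses voiced by Betty and Mr. Davis.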
Table 4. Average concept map scores at the end of the main treatment (river ecosystems)
and the transfer treatment (land nitrogen cycle).

                                    Main Learning (1st)     Transfer for Learning (2nd)
Condition                           Map Score, M (SE)       Map Score, M (SE)
(RT) Regulated Teaching ^1          31.8 ^3,6 (1.5)         32.6 ^4 (2.9)
(LT) Learning-by-Teaching ^2        25.8 (1.6)              31.8 ^5 (3.0)
(IC) Intelligent Coach              22.4 (1.5)              22.6 (2.9)

Note: Superscripts mark significance levels. Overall treatment means greater than IC:
^1 p < .01; ^2 p < .05. Post-hoc comparisons within each study phase, greater than IC:
^3 p < .001; ^4 p < .05; ^5 p < .1; greater than LT: ^6 p < .05.
Table 5. Possible choices of activities in the SRL Betty system.

Activity Name                 Student Actions
Edit Map (M)                  Adding, modifying, or deleting concepts and links
Resource Access (R)           Accessing the resources
Request Quiz (Q)              Submitting the map to take a quiz
Ask Query (A)                 Asking Betty or the Mentor to use the map to answer a question
Request Explanation (E)       Asking Betty or the Mentor to explain an answer to a query
Continue Explanation (C)      Asking for a more detailed explanation
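
Sequences over this six-letter alphabet are the raw material for the behavior analyses reported in Figure 6. As a minimal sketch, assuming an invented raw-event vocabulary (the actual log format is not shown here), recoding a session and tallying first-order transitions might look like this in Python:

# Hypothetical sketch: recode raw interface events into the Table 5
# activity alphabet and tally adjacent-pair transitions. The raw
# event names are invented for illustration.
from collections import Counter

EVENT_TO_CODE = {
    "add_concept": "M", "add_link": "M", "delete_link": "M",
    "open_resource_page": "R",
    "submit_quiz": "Q",
    "ask_betty_question": "A",
    "ask_for_explanation": "E",
    "ask_for_more_detail": "C",
}

def encode_session(events):
    """Map a session's raw events to Table 5 activity codes."""
    return [EVENT_TO_CODE[e] for e in events if e in EVENT_TO_CODE]

def transition_counts(codes):
    """Count adjacent pairs of activities, e.g., R followed by M."""
    return Counter(zip(codes, codes[1:]))

session = ["open_resource_page", "add_link", "ask_betty_question",
           "ask_for_explanation", "submit_quiz"]
codes = encode_session(session)          # ['R', 'M', 'A', 'E', 'Q']
print(transition_counts(codes))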
FIGURE CAPTIONS
Figure 1. The Teachable Agent Named Betty. The student has (a) named his agent
“Bob” instead of Betty, (b) customized Bob’s look, (c) taught Bob about global warming,
and (d) asked Bob what happens to heat radiation if garbage increases.
Figure 2. Triple-A-Challenge Gameshow. Students log on for homework. After
teaching their agents, the agents play with one another. A host asks questions of each
agent. Students wager on whether they think their agent will give the right answer. The
agents respond based on what the students taught them. There is a chat window so
students can communicate with one another during the game.
Figure 3. Target Knowledge Organization for Global Warming Curriculum.
Figure 4. Effects of Betty versus Self. Each test included questions that required
short, medium, or long chains of causal inference to answer correctly. With more
experience across the lesson units, Betty students showed an increasing advantage on
questions requiring longer causal inferences. The Self condition used the concept
mapping software Inspiration instead of Betty.
Figure 5. Adding Self-Regulated Learning to Betty's Brain. The student has
submitted Betty's map for a quiz given by Mr. Davis, and the results are shown in the
bottom panel. Mr. Davis and Betty provide tips and encouragement for engaging in
metacognitive behaviors.
Figure 6. Transitional Probabilities between Aggregated Choice States. Each state,
derived through Hidden Markov Model statistical learning, represents a pattern of choices
that forms a common cluster of learning activities. The numbers beside the arrows
indicate the probability that students would transition from one state to another.
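
The states in Figure 6 come from fitting a Hidden Markov Model to such activity sequences. As a rough sketch of that analysis step, assuming the third-party hmmlearn library (its CategoricalHMM class) and fabricated data rather than the study's actual sequences:

# Rough sketch of the Figure 6 analysis step: fit a hidden Markov
# model to integer-coded activity sequences and inspect the learned
# state-to-state transition probabilities. Uses the third-party
# hmmlearn package; the sequences below are fabricated examples.
import numpy as np
from hmmlearn import hmm

# Table 5 activity codes mapped to integers:
# M=0, R=1, Q=2, A=3, E=4, C=5.
sequences = [
    [1, 0, 0, 3, 4, 2],   # read, edit, edit, query, explain, quiz
    [0, 2, 0, 3, 3, 2],
    [1, 1, 0, 3, 4, 5, 2],
]
X = np.concatenate(sequences).reshape(-1, 1)
lengths = [len(s) for s in sequences]

model = hmm.CategoricalHMM(n_components=3, n_iter=100, random_state=0)
model.fit(X, lengths)

# Rows index the current hidden state; columns give the probability
# of moving to each next state -- the numbers on the arrows in a
# diagram like Figure 6.
print(np.round(model.transmat_, 2))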
FIGURES
[Figures 1-6: images not reproduced in this text version; see the figure captions above.]
This study examined the effects of previous training and experience in peer tutoring on the nature of student interactions. Sixteen classrooms were assigned randomly to two treatments: with and without previous training and experience in peer tutoring. Peer-tutoring teachers taught students a structured, interactional, explanatory verbal rehearsal routine that incorporated step-by-step feedback. Peer tutoring was implemented on a mathematics operations curriculum twice weekly for 10 weeks. Each teacher had identified an average achiever and a low achiever to serve, respectively, as the tutor and the tutee during peer-tutoring generalization sessions. Videotapes were analyzed at three levels: microlevel quantifications, global ratings, and transcripts of representative dyads. Across levels of analysis and across operations and applications content, experienced dyads provided explanations in a more interactional style that incorporated sounder instructional principles. As revealed in the transcripts, however, the nature of student explanations in both conditions was primarily algorithmic rather than conceptual.