An analysis of internal and external feedback in self-regulated learning activities mediated by self-regulated learning tools and open learner models
Chih‑Yueh Chou* and Nian‑Bao Zou
Abstract
In self‑regulated learning (SRL), students organize, monitor, direct, and regulate their
learning. In SRL, monitoring plays a critical role in generating internal feedback and
thus adopting appropriate regulations. However, students may have poor SRL pro‑
cesses and performance due to their poor monitoring. Researchers have suggested
providing external feedback to facilitate better student SRL. However, SRL involves
many meta‑cognitive internal processes that are hidden and difficult to observe
and measure. This study proposed an SRL model to illustrate the relationship among
external SRL tools, internal SRL processes, internal feedback, and external feedback.
Based on the model, this study designed a system with SRL tools and open learner models (OLMs) to assist students in conducting SRL, including self-assessing their initial
learning performance (i.e. perceived initial performance and monitoring of learning
performance) after listening to a teacher’s lecture, being assessed by and receiving
external feedback from the OLM (i.e. actual performance) in the system, setting target
goals (i.e. desired performance) of follow‑up learning, conducting follow‑up learning
(i.e. strategy implementation), and evaluating their follow‑up learning performance (i.e.
perceived outcome performance and strategy outcome monitoring). These SRL tools
also externalize students’ internal SRL processes and feedback, including perceived
initial, desired, and perceived outcome performances, for investigation. In addition, this
study explores the impact of external feedback from the OLM on students’ internal SRL
processes and feedback. An evaluation was conducted to record and analyze students’
SRL processes and performance, and a questionnaire was administered to ask students
about their SRL processes. There are three main findings. First, the results showed that
students often have poor internal SRL processes and poor internal feedback, including
poor self‑assessment, inappropriate target goals, a failure to conduct follow‑up learn‑
ing, and a failure to achieve their goals. Second, the results revealed that the SRL tools
and external feedback from the OLM assisted most students in SRL, including monitor‑
ing their learning performance, goal‑setting, strategy implementation and monitoring,
and strategy outcome monitoring. Third, some students still required further support
for SRL.
Chou and Zou Int J Educ Technol High Educ (2020) 17:55
https://doi.org/10.1186/s41239-020-00233-y

*Correspondence: cychou@saturn.yzu.edu.tw
Department of Computer Science and Engineering, Yuan Ze University, Taoyuan, Taiwan, ROC
Keywords: Self‑regulated learning, Computer tools, Internal feedback, External
feedback, Open learner model
Introduction
SRL and feedback
In self-regulated learning (SRL), students organize, monitor, direct, and regulate their
learning. SRL involves many meta-cognitive strategies, processes, and skills, such as self-
assessment, goal-setting, strategic planning, monitoring, and help-seeking (Lee et al.
2019; Panadero 2017; Winne 2011; Zimmerman 1990, 2001, 2002; Zimmerman and
Schunk 1989). Figure 1 displays the SRL model proposed by Butler and Winne (1995)
and re-interpreted by Nicol and Macfarlane-Dick (2006). In the model, SRL involves a
self-oriented feedback cycle in which students set goals and strategies; monitor their
goals, strategies, and performance; and generate internal feedback to regulate their
knowledge, beliefs, goals, and strategies. A student is a cognitive system in which an
academic task, set by a teacher or the student, triggers the SRL. Based on a student’s
domain knowledge, strategy knowledge, and motivational beliefs, the student inter-
prets the meaning and requirement of the task, sets goals for the desired performance,
chooses and applies learning tactics and strategies to reach the goals, evaluates the out-
come learning performance (i.e., perceived performance) after applying the tactics and
strategies, and generates internal feedback to conduct appropriate regulation (see parts
1 to 5 in Fig. 1) (Nicol and Macfarlane-Dick 2006). Students may have individual differences in domain knowledge, strategy knowledge, and motivational beliefs that affect SRL behaviors and performance (Musso et al. 2019; Winne 1996). For example, students need
domain knowledge to correctly judge the difficulty of the task and the needed effort to
learn the task; some students may be unconfident in the task or lack motivation so that
they set a goal of just passing the exam; students may lack knowledge in how to choose,
apply, and regulate tactics and strategies, leading to poor SRL behaviors. Studies have
found that high-achieving students have good SRL behaviors, frequently apply SRL strat-
egies and develop good SRL processes whereas low-achieving students have poor SRL
behaviors, rarely apply SRL strategies and develop poor SRL processes (Burns et al. 2018; Nota et al. 2004; Winne 1996; Zimmerman and Schunk 1989).

Fig. 1 An SRL and feedback model
The monitoring of tactics and strategies and of the discrepancies between the goals and outcome plays an important role in SRL (Griffin et al. 2013; Winne 1996). Monitoring is a metacognitive process for generating internal feedback to the student to determine whether the regulation of knowledge, beliefs, goals, tactics and strategies is required (see Fig. 1). When a student identifies a discrepancy between the goals and out-
come through monitoring, the student may generate internal feedback (i.e., self-regulat-
ing metacognitive control or actions) to reduce the discrepancy (Butler and Winne 1995;
Hattie and Timperley 2007; Winne 1996). For example, if a student recognizes that the
outcome performance is lower than the goals, the student may change tactics and strat-
egies to reach the goals (i.e., internal feedback). However, students may generate poor
internal feedback that does not benefit learning. For example, students may lack strategy
knowledge and thus change to adopt ineffective tactics and strategies (i.e., poor inter-
nal feedback); students may also decrease their goals without further learning (i.e., poor
internal feedback) due to a lack of motivation or limits on time and learning resources.
In addition, a possible reason for poor internal feedback may be that students have inac-
curate monitoring, and, thus, appropriate regulation is not adopted. For example, stu-
dents may have a poor self-assessment (i.e., calibration, judgement of learning, perceived
performance, monitoring of learning performance) and may over-estimate their learning
performance so that they stop learning (i.e., poor internal feedback) before they master
the task (Chou et al. 2015).
Researchers have proposed providing students with external feedback from teach-
ers, teaching assistants, peers, or systems to facilitate better student SRL behaviors and
performance (Azevedo et al. 2007; Azevedo and Hadwin 2005; Butler and Winne 1995; Chou et al. 2015, 2018; Lai and Hwang 2016; Lin et al. 2016; Müller and Seufert 2018; Panadero et al. 2019; Schraw 2007; Shyr and Chen 2018). In general, external feedback
is intended to facilitate students’ monitoring, deliver information to students about their
learning (i.e., externally observable outcome performance), and provide opportunities
for students to generate internal feedback to close the gap between their goals and cur-
rent performance (see parts 6 and 7 in Fig. 1) (Nicol and Macfarlane-Dick 2006). That
is, external feedback forms a scaffolding or external regulation to assist students in
reflecting on and monitoring whether a discrepancy exists between the current perfor-
mance and a goal of desired performance and to regulate their learning to reduce this
discrepancy if it exists (Azevedo et al. 2007; Devolder et al. 2012; Hattie and Timperley 2007; Roll et al. 2014). Researchers have proposed four levels of external feedback, namely, the task level, process level, self-regulation level, and self-level feedback (Hattie and Timperley 2007). Task and process level feedback are domain-related feedback that addresses students' tasks, solutions, and understanding of domain knowledge.
Task level feedback informs how well tasks are understood or performed. For example,
outcome feedback describes whether or not the tasks are correct; however, it informs
the current performance but does not inform how to self-regulate (Zhou 2012). Process level feedback cues the process of finishing tasks or correcting errors. For instance, hint feedback prompts students on how to finish tasks, and corrective feedback aims to guide students to find and correct their errors; that is, process level feedback provides regula-
tion information. Self-regulation level feedback addresses how students monitor, direct,
and regulate their learning, such as through self-assessments, goal-setting, and regula-
tion actions. For example, feedback on self-assessment assists students in reflecting
on whether their self-assessments are accurate and whether to regulate their self-assess-
ments or not. Self-level feedback expresses a personal evaluation, such as “good effort”.
However, self-level feedback contains less task-related or meta-cognitive information
and leads to smaller learning gains but affects students' motivation and beliefs (Hattie and Timperley 2007).
Recently, intelligent tutoring systems have detected students' learning and SRL behaviors and then provided adaptive external feedback as scaffolding support to assist students in conducting SRL, such as self-assessment, goal-setting, planning, and help-seeking (Aleven et al. 2006, 2016; Azevedo et al. 2010; Chen 2009; Chen et al. 2019; Chou et al. 2015, 2018, 2019; Harley et al. 2017; Nussbaumer et al. 2015; Roll et al. 2011a, b; Su
2020). However, there are various combinations of different types of external feedback
and scaffolds (such as prompts, advice, information, tools, training, and guiding ques-
tions), SRL processes (such as self-assessments, goal-setting, planning, and help-seek-
ing), content domain, and context (such as traditional classrooms, online learning, and
blended learning); therefore, the effectiveness of different types of external feedback and scaffolds in different SRL processes and contexts requires further investigation (Devolder et al. 2012).
Studies have found that external feedback from the open learner model (OLM) promotes better student self-assessments (Chou et al. 2015; Mitrovic and Martin 2007). The term OLM refers to the practice of opening learner models (also called student models), which are built by intelligent tutoring systems to provide adaptive tutoring and are normally hidden from students (Brusilovskiy 1994; Conati and Kardan 2013; Desmarais and Baker 2012; Holt et al. 1994; Woolf 2008), to the students themselves in order to increase the accuracy of learner models and the interaction between the system and students (Bull 2004; Self 1988). OLMs can be built from different agents' perspectives, such as systems, teachers, peers, and students themselves, and can be designed to enable students to view, self-assess, and edit their OLMs or to collaborate and negotiate with systems or teachers to build their OLMs (Bull 2004, 2016; Bull and Kay 2016; Bull et al. 1995, 2013; Chou et al. 2015). OLMs are usually designed
to promote students’ meta-cognitive processes, such as self-assessments, self-monitor-
ing, reflection, and planning (Bull et al. 2006; Bull and Kay 2013; Chou et al. 2015, 2017;
Long and Aleven 2017; Mitrovic and Martin 2007). In other words, OLMs assist stu-
dents in reflecting on their skills and knowledge, identifying their weaknesses, and plan-
ning their future learning through different visualizations, such as tables, charts, radar
charts, skill meters, concept maps, word clouds, or networks (Bull et al. 2016; Demmans
and Bull 2015). For example, if students over-estimate their outcome learning perfor-
mance, external feedback from the OLM will highlight discrepancies between the stu-
dents’ perceived performance and actual performance to assist students in reflecting on
their monitoring of outcome learning performance and generating internal feedback to
modify their perceived performance. Students may ignore, modify, or accept the exter-
nal feedback from the OLM (Chou et al. 2015). In particular, how students' internal SRL
processes and feedback run and whether and how external feedback on self-assessment
from the OLM affects these internal SRL processes and feedback remain unclear. There-
fore, this study aims to investigate students’ internal SRL processes and feedback and
how they are impacted by the external feedback from the OLM.
Measurements of SRL
SRL involves many meta-cognitive processes that are hidden inside students and thus
are difficult to observe and measure. Researchers have proposed different SRL measure-
ments based on different SRL models, such as models that characterize and measure
SRL as an attitude or an event (Winne and Perry 2000). Some self-reported question-
naires, such as the Motivated Strategies for Learning Questionnaire (MSLQ, Pintrich
et al. 1993), and interviews are designed for measuring students' SRL attitudes, such as
value, expectancy, cognitive and metacognitive strategies, and resource management
strategies, by asking them about their meta-cognition, motivation, and actions when
facing different situations (Winne and Perry 2000). However, these self-reported SRL
measurements are characterized as more static and large-grained assessments and rely
heavily on students’ perspectives and beliefs (Panadero etal. 2016; Rovers etal. 2019;
Winne and Perry 2000). On the other hand, some SRL measurements, such as thinking
aloud and traces of students’ observable SRL indicators, focus on students’ small-grained
SRL events and dynamic processes, particularly meta-cognitive monitoring (conditions)
and control (actions) (Winne and Perry 2000). The thinking-aloud measurement asks
students to self-report their internal cognitive and meta-cognitive processes when they
are learning. Students’ observable SRL indicators in computer based learning environ-
ments can be recorded for traces and measurement (Winne 2010). Recently, many SRL
studies in computer based learning environments have applied traces to measure stu-
dents’ SRL behaviors (Azevedo etal. 2010; Chen 2009; Chou etal. 2015, 2018, 2019; Har-
ley etal. 2017; Jansen etal. 2020; Long and Aleven 2017; Nussbaumer etal. 2015; Roll
etal. 2011a, b). However, most studies focus on traces of students’ external operations
on the system, and hence students’ internal SRL processes are still unclear. Research-
ers suggested designing tools as scaffolding to assist students in conducting SRL and
to externalize their internal SRL processes for observation and measurement (Garcia et al. 2018; Järvelä et al. 2015; Manlove et al. 2007; Panadero et al. 2016; Pérez-Álvarez et al. 2018; Roll et al. 2014; Winne and Hadwin 2013). In general, tools are designed
to facilitate student cognitive and meta-cognitive processes and reduce their cognitive
load, such as memory or computation (Azevedo et al. 2010; Lajoie 1993; Winne and
Nesbit 2009). Researchers have found that students with scaffolding of computer SRL
tools develop better SRL processes and performance than students without such tools
(Manlove et al. 2007). In addition, researchers have suggested that tools can be designed
to make students’ metacognition more explicit and expand their capacity by providing
external representation, interactivity, and distributed cognition (Pakdaman-Savoji et al. 2019). Therefore, this study proposes SRL tools to assist students in conducting SRL and
to externalize their internal SRL processes for observation and measurement.
Purposes of this study
This study has two purposes:
#1: Investigating students’ internal SRL processes and feedback by designing SRL tools
to assist students in conducting SRL and to externalize their internal SRL processes for
observation and measurement. This study proposed an SRL internal and external model (named SRL-IE) to illustrate the relationship among external SRL tools, internal SRL processes, internal feedback, and external feedback (Fig. 2). The SRL-IE model is based on the SRL model proposed by Butler and Winne (1995) and re-interpreted by Nicol and Macfarlane-Dick (2006) to illustrate the role of internal and external feedback in SRL (Fig. 1). The SRL-IE model also integrates the self-regulatory cycle model, which was
proposed to assist students in developing SRL through the following four phases of SRL
activities: self-evaluation and monitoring (i.e., initial self-assessment); goal setting and
strategic planning; strategy implementation and monitoring (i.e., follow-up learning);
and strategic outcome monitoring (i.e., follow-up self-assessment) (Zimmerman et al.
1996). Most SRL studies in computer based learning environments have been conducted
in the context of online learning or intelligent tutoring systems and covered all learning
activities. This study was conducted to assist students in conducting SRL after listening to a teacher's lecture. This study developed an intelligent computer assisted learning sys-
tem to provide SRL tools, based on the SRL-IE model, to assist students in conducting
SRL by listening to a teacher’s lecture, self-assessing their initial learning performance
(i.e., perceived initial performance) based on their domain knowledge, strategy knowl-
edge, and motivational beliefs, setting target goals of desired performance, conducting
follow-up learning by applying tactics and strategies, and self-assessing their outcome
performance (i.e., perceived outcome performance) after follow-up learning (see parts 1
to 6 in Fig. 2). In addition, these SRL tools were designed to externalize students' internal processes and feedback for measurement and investigation (parts 3 to 6 in Fig. 2). Fur-
thermore, the externalization of students’ internal processes and feedback can be fur-
ther applied to detect students’ poor SRL processes and internal feedback and to provide
appropriate external feedback and intervention in future studies.

Fig. 2 The SRL and feedback model (SRL-IE) adopted in this study
#2: Investigating whether and how SRL tools and external feedback from the OLM pro-
mote students’ internal SRL processes and feedback. is study designed SRL tools not
only to externalize students’ internal SRL processes for observation and measurement
but also to assist students in SRL by facilitating their SRL processes (parts 3 to 6 in Fig.2)
and by reducing their cognitive load by providing external representation, interactivity,
and distributed cognition (Azevedo etal. 2010; Lajoie 1993; Pakdaman-Savoji etal. 2019;
Winne and Nesbit 2009). ere are different levels of external feedback, specifically,
task, process, self-regulation, and self-level feedback (Hattie and Timperley 2007). is
study focused on providing OLMs as external feedback on self-assessment (i.e., a type of
self-regulation level feedback) to assist students in SRL (see parts 7 and 8 in Fig.2). e
OLMs were integrated with SRL tools so that actual performance from OLM, perceived
initial performance from the initial self-assessment, desired performance of target goals,
and perceived outcome performance from the follow-up self-assessment were recorded,
presented, and compared in the same format. The OLMs are designed as formative feed-
back to arouse students’ appropriate internal feedback to regulate their SRL processes
and to thus improve their learning performance. Particularly, after receiving the OLMs,
students were allowed to modify their self-assessments (i.e., perceived performance) to
investigate whether and how the external feedback from the OLMs triggers their inter-
nal feedback to modify their perceived performance.
The rest of the paper is organized as follows. The second section presents an intelligent computer assisted learning system with SRL tools and external feedback from the OLMs to support SRL. The third section reports the results of an evaluation of the system. The fourth section discusses the evaluation results. The final section presents the conclusions.
A system with SRL tools and OLMs to support SRL
An intelligent computer assisted learning system, called Open Learner Models for Self-
Regulated Learning (OLM-SRL), was designed and implemented based on the SRL-IE
model to support SRL. The system provides SRL tools to enable students to engage in SRL through nine stages of learning activities across two classes in a round. The system also offers OLMs as a form of external feedback to promote student SRL. The nine stages of learning activities and related system support were designed based on the four phases of the self-regulatory cycle model, namely, self-evaluation and monitoring, goal setting and strategic planning, strategy implementation and monitoring, and strategic outcome monitoring (Zimmerman et al. 1996) (Table 1), which match parts 3 to 6 of the SRL-IE model shown in Fig. 2.
The details of the system support of the nine stages are presented as follows:
Stage 1: Teacher lecture.
Teachers lecture to instruct students on the concepts.
After listening to the teacher lecture, the students self-evaluate and monitor their ini-
tial learning performance. The self-evaluation and monitoring phase covers stages 2, 3, and 4. First, students self-assess their mastery levels to generate perceived initial performance after the teacher's lecture (stage 2). Researchers have suggested providing students with opportunities to regularly and conveniently record their perceived performance to facilitate their metacognitive monitoring (Winne and Nesbit 2009). The OLM-SRL system provides a self-assessment tool to facilitate students' metacognitive monitoring to generate internal feedback on perceived performance (i.e. judgments of learning, JOLs). In addition, the self-assessment tool externalizes and records students' perceived performance. Then, students participate in an initial system assessment to
assess their mastery levels (stage 3). After the initial system assessment, the OLM of the
initial system assessment is provided as a form of external feedback on self-assessment
to assist students in reflecting on their mastery levels. Students can compare their OLMs
from self-assessments (i.e. perceived performance) and system assessments (i.e. actual
performance) to enhance the calibration of their self-assessments (Butler and Winne
1995). Studies have found that external feedback from OLMs promotes better student
self-assessment (Chou et al. 2015; Mitrovic and Martin 2007). After receiving exter-
nal feedback from the OLM, students can reflect on and modify their self-assessments
(stage 4). The modifications of the self-assessments are recorded to analyze the impact of
the OLM on the students’ self-assessments, that is, whether the external feedback from
the OLM facilitates students’ internal feedback to modify their perceived performance.
In this phase, students monitor and reflect on their learning and identify unfamiliar con-
cepts for further learning.
Stage 2: Initial student self-assessment for generating perceived initial performance.
After the teacher lecture, the students are asked to self-assess their mastery levels of
the concepts taught in the lecture using the OLM-SRL system. For example, Fig. 3 shows
the tool for self-assessment in a course that contains 10 meta-concepts and 36 concepts.
Students fill out their mastery levels from 0 to 100% in the concept fields with the white
background. The concepts with the gray background have not yet been taught and therefore do not need self-assessment.
Table 1 Stages of learning activities and system support based on the self-regulatory cycle

Self-regulatory cycle                     Stage
                                          #1: Teacher lecture
Self-evaluation and monitoring            #2: Initial student self-assessment for generating perceived initial performance
                                          #3: Initial system assessment for assessing actual initial performance
                                          #4: Modified initial student self-assessment for reflecting on perceived initial performance
Goal setting and strategic planning       #5: Student goal-setting for generating desired performance
Strategy implementation and monitoring    #6: Student follow-up learning after class
Strategic outcome monitoring              #7: Follow-up student self-assessment for generating perceived outcome performance
                                          #8: Follow-up system assessment for assessing actual outcome performance
                                          #9: Modified follow-up student self-assessment for reflecting on perceived outcome performance
Students' mastery levels of the meta-concepts are computed by the weighted accumulation of the mastery levels of the related concepts. The weights of the concepts in the meta-concepts were previously inputted by the teachers according to their expert domain knowledge. For instance, a meta-concept contains two concepts with weights of 0.6 and 0.4. If a student self-assesses his/her mastery levels of the two concepts as 80% and 50%, the system computes the student's mastery level of the meta-concept as 68% (80% * 0.6 + 50% * 0.4). Similarly, students' overall mastery levels are computed by the weighted accumulation of the mastery levels of all meta-concepts. The system presents a student's OLMs, which are built from students or the system, in radar charts in which each axis indicates the mastery level of a meta-concept or a concept (Fig. 4). Radar charts are effective visualization tools for presenting and comparing multivariate data (Chambers et al. 2018; Saary 2008).

Fig. 3 Self-assessment tool
Fig. 4 An OLM for representing students' mastery levels of meta-concepts
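The weighted accumulation just described reduces to a weighted sum; a minimal sketch (our own illustration, not code from the OLM-SRL system):

```python
def weighted_mastery(levels, weights):
    """Weighted accumulation of mastery levels (both lists aligned by concept)."""
    assert len(levels) == len(weights)
    return sum(level * weight for level, weight in zip(levels, weights))

# The paper's example: two concepts weighted 0.6 and 0.4,
# self-assessed at 80% and 50% mastery.
print(round(weighted_mastery([0.80, 0.50], [0.6, 0.4]), 2))  # -> 0.68, i.e. 68%
```

The same function applies one level up, where the overall mastery level accumulates the weighted mastery levels of all meta-concepts.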
Stage 3: Initial system assessment for assessing actual initial performance.
Students answer questions in the system to assess their initial mastery levels of the
concepts after listening to the lecture (Fig. 5).
These questions were designed by teachers to assess the mastery levels of those concepts. The students' mastery levels of concepts in the OLM from the system assessment are computed based on the correctness of the student answers and the weighted relationship of
the questions and concepts (Hwang 2003). The mastery level for student Sk of concept Cj is computed as follows:

$$\mathrm{Mastery}(S_k, C_j) = \frac{\sum_{i=1}^{n} \mathrm{weight}(Q_i, C_j) \times \mathrm{answer}(S_k, Q_i)}{\sum_{i=1}^{n} \mathrm{weight}(Q_i, C_j)} \qquad (1)$$

where n indicates the number of questions. Students need to master the concepts related to a question so that they can correctly answer the question. The weight(Qi, Cj) value denotes the influence of the mastery level of concept Cj on the correctness of the answer to question Qi. If a student fails to answer a question, his/her assessed mastery levels of the related concepts will decrease. The value of answer(Sk, Qi) is either 1 or 0, representing whether student Sk correctly answers question Qi or not, respectively. For example, Table 2 lists a weighted relationship of six questions and five concepts. Students need to master concepts C1 and C2 so that they can correctly answer question Q1. If students do not master concept C2, they will have a high possibility of incorrectly answering question Q1. Assuming that the answer status of student Sk for the six questions is 1, 0, 1, 1, 1, and 0, the mastery level for student Sk of concept C1 is computed as follows:

$$\mathrm{Mastery}(S_k, C_1) = \frac{0.2 \times 1 + 0.3 \times 0 + 0 \times 1 + 0.1 \times 1 + 0 \times 1 + 0 \times 0}{0.2 + 0.3 + 0 + 0.1 + 0 + 0} = 0.5$$
Fig. 5 System assessment interface
Table 2 Sample of weighted relationships of questions and concepts

        C1     C2     C3     C4     C5
Q1      0.2    0.8    0      0      0
Q2      0.3    0.5    0.2    0      0
Q3      0      0.4    0      0.2    0.4
Q4      0.1    0      0.4    0.3    0.2
Q5      0      0.1    0.5    0      0.4
Q6      0      0      0      0.3    0.7
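As an illustration (our own code, using the Table 2 weights and the answer vector 1, 0, 1, 1, 1, 0 from the text), Eq. (1) can be implemented and checked as follows:

```python
# Rows = questions Q1..Q6, columns = concepts C1..C5 (Table 2).
WEIGHTS = [
    [0.2, 0.8, 0.0, 0.0, 0.0],
    [0.3, 0.5, 0.2, 0.0, 0.0],
    [0.0, 0.4, 0.0, 0.2, 0.4],
    [0.1, 0.0, 0.4, 0.3, 0.2],
    [0.0, 0.1, 0.5, 0.0, 0.4],
    [0.0, 0.0, 0.0, 0.3, 0.7],
]
ANSWERS = [1, 0, 1, 1, 1, 0]  # answer(Sk, Qi): 1 = correct, 0 = incorrect

def mastery(concept: int) -> float:
    """Eq. (1): weight earned on correctly answered questions / total weight."""
    earned = sum(row[concept] * a for row, a in zip(WEIGHTS, ANSWERS))
    total = sum(row[concept] for row in WEIGHTS)
    return earned / total

print([round(mastery(c), 3) for c in range(5)])
# -> [0.5, 0.722, 0.818, 0.625, 0.588], the values reported in the text
```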
Therefore, the system can compute the mastery levels for student Sk of the five concepts as 0.5, 0.722, 0.818, 0.625, and 0.588. The system presents the student's mastery levels of the concepts in the form of a radar chart representing the OLM of the system assessment in the same format as the OLM of the student self-assessment. Thus, the system enables students to compare their perceived performance and actual performance by comparing their OLMs between the self-assessment and system assessment (Fig. 6). The OLM of the initial system assessment is provided as a form of external feedback on self-assessment to assist students in reflecting on their mastery levels of the concepts and self-assessment.

Fig. 6 OLMs from the self-assessment (orange) and system assessment (blue)
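The paper does not publish the charting code; purely as an illustration, a comparable radar chart could be produced with matplotlib as follows (the concept labels and self-assessment values are invented; the system-assessment values reuse the Eq. (1) example):

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["C1", "C2", "C3", "C4", "C5"]     # hypothetical concepts
perceived = [0.8, 0.9, 0.7, 0.6, 0.8]       # student self-assessment
actual = [0.5, 0.722, 0.818, 0.625, 0.588]  # system assessment (Eq. 1)

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
ax = plt.subplot(polar=True)
for values, name in [(perceived, "self-assessment"), (actual, "system assessment")]:
    closed = values + values[:1]  # repeat the first point to close the polygon
    ax.plot(angles + angles[:1], closed, label=name)
    ax.fill(angles + angles[:1], closed, alpha=0.1)
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)
ax.legend(loc="lower right")
plt.show()
```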
Stage 4: Modified initial student self-assessment for reflecting on the perceived ini-
tial performance.
After investigating the results of the initial system assessment, students can reflect
on their learning and modify their self-assessed mastery levels in the initial self-
assessment. This stage provides students with an opportunity to generate internal feedback to modify their initial self-assessments (i.e., perceived initial performance). The modification record of student self-assessments externalizes whether students
modify their self-assessment (i.e., internal feedback) and reveals the impact of the
external feedback from the OLM on students’ perceived performance.
Stage 5: Student goal setting for generating desired performance.
During the goal setting and strategic planning phase, students set their target mas-
tery levels of the concepts in the next class as their goals of desired performance for
follow-up learning. The goal setting tool is similar to the self-assessment tool. The goal setting tool is designed to facilitate students to generate internal feedback on their desired performance. The tool also externalizes and records students' goals of desired performance. Students fill out the target mastery levels of the concepts, and the system presents OLMs based on their targets and enables them to compare their OLMs from the self-assessment (i.e., perceived performance), system assessment (i.e., actual performance), and target goals (i.e., desired performance) (Fig. 7). This
study focused on investigating perceived, actual, and desired performances; thus, the
system did not provide tools for strategic planning. The tools for strategic planning will be implemented in the next version of the system.

Fig. 7 Open learner models from the self-assessment (orange), system assessment (blue), and target (green)
Stage 6: Student follow-up learning after class.
During the strategy implementation and monitoring phase, the system enables stu-
dents to conduct follow-up learning after class to achieve their target goals for mastery
levels by reviewing their OLMs and the questions and answers of the system assessment.
Students can click on a meta-concept, such as operators, in the OLM of the initial sys-
tem assessment (Fig. 8a) to investigate their mastery levels of related concepts (Fig. 8b). Students can further click on a concept to investigate the related questions of the system assessment (Fig. 9). Each color block represents a question. Green indicates a correctly
answered question, yellow indicates a not-yet-answered question, and red indicates an
incorrectly answered question. Students can click on a color block to investigate the
question, their answers, and the correct answer (i.e., outcome feedback at the task level).
Accordingly, the OLMs provide external feedback to promote students’ ability to find
and refine their imperfect knowledge (i.e., internal feedback) by showing the incorrect
and correct answers in the hierarchy of meta-concepts, concepts, and questions. Regarding incorrect answers, students may seek help from peers, teaching assistants, or teachers. Help-seeking is a type of internal feedback in the SRL process to determine whether to seek help and whom to seek help from (Karabenick 2011).

Fig. 8 Open learner model of the mastery levels of meta-concepts and concepts
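As a sketch of the comparison this review supports (entirely our own illustration; the function, threshold, and data are hypothetical), concepts whose perceived mastery exceeds actual mastery can be flagged for follow-up review:

```python
def concepts_to_review(perceived: dict, actual: dict, margin: float = 0.1) -> list:
    """Flag concepts the student rates more than `margin` above the system's estimate."""
    return [c for c in actual if perceived.get(c, 0.0) - actual[c] > margin]

perceived = {"operators": 0.90, "loops": 0.60, "arrays": 0.50}
actual = {"operators": 0.50, "loops": 0.65, "arrays": 0.45}
print(concepts_to_review(perceived, actual))  # -> ['operators']
```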
The strategic outcome monitoring phase covers stages 7, 8, and 9. Students self-assess
their mastery levels of concepts to generate perceived outcome performance (stage 7).
Students then participate in a follow-up system assessment (stage 8) after the follow-up
learning, and the system provides the OLM of the follow-up system assessment as exter-
nal feedback on actual outcome performance to assist students in reflecting on their
mastery levels. Finally, students can modify their self-assessments of perceived outcome
performance (stage 9). The modifications can reveal the impact of external feedback
from the OLM of actual outcome performance on students’ perceived outcome perfor-
mance. In this phase, students also reflect on their execution and the outcome of the fol-
low-up learning. The reflection helps students perform better in the next self-regulatory
cycle.
Stage 7: Follow-up student self-assessment for generating perceived outcome
performance.
At the beginning of the next class, students again self-assess their mastery levels of the
learned concepts after the follow-up learning. The tool is the same as that of the initial student self-assessment. The system presents an OLM from the follow-up self-assess-
ment for displaying students’ perceived outcome performance to help students reflect on
their learning after the follow-up learning. In addition, students can compare their per-
ceived outcome performance with their desired performance goals to reflect on whether
they achieved their goals or not.
Stage 8: Follow-up system assessment for assessing actual outcome performance.
Students undergo a follow-up system assessment by answering questions in the system
for assessing their actual outcome performance. The questions of the follow-up system assessment are similar to those of the initial system assessment. The system presents an
OLM from the follow-up system assessment as external feedback on the actual outcome
performance to assist students in reflecting on whether they have achieved their target
mastery levels.
Stage 9: Modified follow-up student self-assessment for reflecting on perceived out-
come performance.
Fig. 9 Questions related to a concept
After investigating the OLM of the follow-up system assessment, students can reflect
on their learning and modify their self-assessed mastery levels of concepts in the follow-
up self-assessment. The modification record of student follow-up self-assessments exter-
nalizes whether students modify their perceived outcome performance and reveals the
impact of external feedback from the OLM of actual outcome performance on students’
perceived outcome performance.
In sum, the system generates multiple OLMs from different viewpoints of perceived
initial performance, actual initial performance, desired performance, perceived outcome
performance, and actual outcome performance in each round and enables students to
compare these OLMs to reflect on their learning.
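To make the five per-round viewpoints concrete, a minimal data-structure sketch (our own; the class and field names are hypothetical) might look like:

```python
from dataclasses import dataclass

# Each mastery model maps concept name -> mastery level in [0, 1].
@dataclass
class RoundOLMs:
    perceived_initial: dict   # stages 2/4: (modified) initial self-assessment
    actual_initial: dict      # stage 3: initial system assessment
    desired: dict             # stage 5: target goals
    perceived_outcome: dict   # stages 7/9: (modified) follow-up self-assessment
    actual_outcome: dict      # stage 8: follow-up system assessment
```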
Evaluation
Methods
An evaluation was performed to explore the following two research questions.
Research question #1: How do students conduct internal SRL processes and feedback
during the four phases of the self-regulatory cycle model?
Research question #2: Do SRL tools and external feedback from the OLM assist stu-
dents in SRL?
As mentioned in the Introduction, SRL is usually measured as events or attitudes
(Winne and Perry 2000). SRL events can be measured by traces of observable SRL indi-
cators, and SRL attitudes can be measured by self-reported questionnaires. This study
developed SRL tools to externalize students’ internal SRL processes and feedback during
the four phases of the self-regulatory cycle model, including the initial self-assessment
(i.e., perceived initial performance), goal setting of the desired performance, follow-up
learning by applying tactics and strategies, and follow-up self-assessment (i.e., perceived
outcome performance) (parts 3 to 6 in Fig. 2). To address research question #1, students'
SRL internal processes and feedback were recorded through SRL tools, and their SRL
performances were also recorded for analysis. To address research question #2, students’
reactions of whether they modify their self-assessment after receiving external feedback
on their self-assessment from the OLM were recorded and analyzed. In addition, a ques-
tionnaire was designed to ask students whether the SRL tools and the external feedback
from the OLMs assisted them.
The participants were 69 undergraduates who enrolled in a programming course to
learn basic programming knowledge and skills. Students used the OLM-SRL system to
engage in 4 rounds of SRL activities over five weeks (Table 3). The class involved three hours per week in a computer classroom in which each student had a computer. The content included 6 meta-concepts and 16 concepts of computer programming. The sys-
tem assessment adopted program-output-prediction problems to ask the students to
predict the output of a program. The students needed syntax and semantic knowledge
and program tracing skills to correctly solve the program-output-prediction problems.
Previous studies have revealed that novices lack program tracing skills, which leads to
poor programming performance (Chou and Sun 2013; Perkins et al. 1986; Vainio and Sajaniemi 2007). The system assessment for each week had six questions. The students' data in the system were recorded to analyze their SRL behaviors. In the sixth week, the
students were asked to take a program-output-prediction examination (Exam 1) and a
programming examination (Exam 2) to evaluate their final outcome learning perfor-
mance. Exam 1 included 12 related program-output-prediction questions. Exam 2 asked
the students to write five programs to solve five problems. Because some SRL processes
are beyond the system's externalization and records, the students were asked to complete a
questionnaire that contained six items scored on a 7-point Likert scale and six yes-or-no
items to investigate their feelings about and SRL experience of the system.
Results
Four students missed some activities and their data were excluded; thus, data for 65 stu-
dents were included in the evaluation.
Research question #1: How do students conduct internal SRL processes and feedback
during the four phases of the self-regulatory cycle model?
To address research question #1, this study explored how the students perceived their
performance, how they set their target goals of their desired performance, whether they
conducted follow-up learning, how they perceived their outcome performance after fol-
low-up learning, whether they improved their performance after follow-up learning, and
whether they achieved their target goals.
How did students perceive their performance?
e correlation of the students’ internal SRL processes and feedback and learning per-
formance was calculated to explore the relationship between them. Table4 lists the cor-
relation of the overall mastery levels shown in the student self-assessment (i.e. perceived
performance), system assessment (i.e. actual performance), and target goals (i.e. desired
performance) with the scores on Exam 1 and Exam 2. e results revealed that most of
the students’ initial and follow-up system assessments were significantly positively cor-
related with their scores on Exams 1 and 2. is finding showed that the system assess-
ments actually reflected the students’ outcome learning performance. It is interesting
that the correlation of the system assessments (program-output-prediction questions)
and Exam 2 (programming problems) was higher than that of the system assessments
and Exam 1 (program-output-prediction questions). e results revealed that program-
output-prediction is a key ability for programming. In contrast, student self-assessment
and target goals had a low correlation with their scores on Exam 1 and Exam 2. e
Table 3 Scheme of the evaluation

1st Week: R1-S1, R1-S2, R1-S3, R1-S4, R1-S5, R1-S6
2nd Week: R1-S7, R1-S8, R1-S9, R2-S1, R2-S2, R2-S3, R2-S4, R2-S5, R2-S6
3rd Week: R2-S7, R2-S8, R2-S9, R3-S1, R3-S2, R3-S3, R3-S4, R3-S5, R3-S6
4th Week: R3-S7, R3-S8, R3-S9, R4-S1, R4-S2, R4-S3, R4-S4, R4-S5, R4-S6
5th Week: R4-S7, R4-S8, R4-S9
6th Week: Exam 1, Exam 2, Questionnaire

R1-R4: 1st-4th round; S1: Teacher lecture; S2: Initial student self-assessment; S3: Initial system assessment; S4: Modified initial student self-assessment; S5: Student goal setting; S6: Student follow-up learning after class; S7: Follow-up student self-assessment; S8: Follow-up system assessment; S9: Modified follow-up student self-assessment
students’ self-assessment did not reflect their actual outcome learning performance, and
the results indicated that the students’ self-assessments were poor.
e data of students’ self-assessment, system assessment, and target goals were
recorded to analyze students’ internal SRL processes and feedback. A comparison of
the results of the initial self-assessment and the initial system assessment indicated
that the students had higher mastery levels on the self-assessment in 75% of the stu-
dent records and had higher mastery levels on the system assessment in 25% of the
student records (Table5). e results showed that the students often tended to over-
estimate their mastery levels.
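The record-level comparisons reported here and in Tables 5-11 amount to classifying each paired record and tallying percentages; a minimal sketch (our own, with made-up pairs):

```python
from collections import Counter

def classify(first: float, second: float) -> str:
    """Compare two overall mastery levels from one student record."""
    if first > second:
        return "first higher"
    if first < second:
        return "first lower"
    return "same"

# Hypothetical (self-assessment, system assessment) pairs:
pairs = [(0.8, 0.5), (0.6, 0.7), (0.5, 0.5), (0.9, 0.4)]
counts = Counter(classify(a, b) for a, b in pairs)
print({k: f"{100 * v / len(pairs):.0f}%" for k, v in counts.items()})
# -> {'first higher': '50%', 'first lower': '25%', 'same': '25%'}
```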
How did students set their target goals of desired performance?
A comparison of the students’ target goals and modified initial self-assessment showed
that the students set their target mastery levels at the same level as in their self-assess-
ments in 8% of the student records, which is higher than in their self-assessments in 66%
of the student records and lower than in their self-assessments in 28% of the student
records (Table6). e target mastery levels are the goals for follow-up learning; thus, it
is inappropriate to set target mastery levels that are lower than the current mastery lev-
els. e results revealed that some students set inappropriate target goals.
Table 4 Correlation of student self-assessment, system assessment, target goals, and performance

Round                                          1st        2nd        3rd        4th
Initial self-assessment            Exam 1    − 0.013     0.090      0.058      0.001
                                   Exam 2     0.196      0.094      0.061      0.028
Initial system assessment          Exam 1     0.192      0.278*     0.175      0.342**
                                   Exam 2     0.207+     0.358**    0.355**    0.561***
Modified initial self-assessment   Exam 1     0.002      0.061      0.054      0.030
                                   Exam 2     0.237+     0.177      0.150      0.171
Target                             Exam 1    − 0.051     0.031      0.081     − 0.026
                                   Exam 2     0.000      0.026      0.095     − 0.021
Follow-up self-assessment          Exam 1     0.052      0.069      0.204     − 0.026
                                   Exam 2     0.043     − 0.131     0.028     − 0.053
Follow-up system assessment        Exam 1     0.215+     0.167      0.236+     0.315*
                                   Exam 2     0.335**    0.378**    0.406**    0.490***
Modified follow-up self-assessment Exam 1     0.025      0.104      0.144     − 0.026
                                   Exam 2     0.063      0.013      0.017     − 0.053

Exam 1: program-output-prediction examination; Exam 2: programming examination
+ p < 0.1; * p < 0.05; ** p < 0.01; *** p < 0.001
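Correlations of the kind reported in Table 4 can be computed with SciPy; a minimal sketch (our own illustration with made-up per-student values, not the study's data):

```python
from scipy.stats import pearsonr

# Hypothetical per-student values: system-assessment mastery vs. exam score.
system_assessment = [0.50, 0.72, 0.81, 0.62, 0.58, 0.90, 0.40]
exam_scores = [55, 70, 85, 60, 65, 95, 45]

r, p = pearsonr(system_assessment, exam_scores)
print(f"r = {r:.3f}, p = {p:.3f}")  # the significance markers in Table 4 follow p
```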
Table 5 Comparison of initial self-assessment and initial system assessment

                               R1     R2     R3     R4     Total
Self-assessment was higher     66%    66%    86%    83%    75%
System assessment was higher   34%    34%    14%    17%    25%
Did students conduct follow‑up learning?
The system records revealed that 43% of the students had reviewed the questions and answers of the initial system assessment during follow-up learning, whereas 57% of the students had not done so. The results revealed that more than half of the students had not conducted follow-up learning after the class.
How did students perceive their outcome performance after follow-up learning?
A comparison of the results of the follow-up self-assessment and follow-up system
assessment showed that the students had the same mastery levels in the two assess-
ments in 1% of the student records, had higher mastery levels in the self-assessment
in 63% of the student records, and had higher mastery levels in the system assessment
in 37% of the student records (Table 7). The results indicated that the students often
tended to overestimate their mastery levels.
Did students improve their performance after follow-up learning?
A comparison of the initial system assessment and follow-up system assessment
revealed that the students retained the same mastery levels in 7% of the student
records, increased their mastery levels in 61% of the student records, and decreased
their mastery levels in 32% of the student records (Table 8). The results showed that students decreased their mastery levels of concepts in the follow-up system assessment compared with the initial system assessment in approximately one-third of the student records. These students required assistance in follow-up learning.
Table 6 Comparison of the modified initial self-assessment and target goal

                                        R1     R2     R3     R4     Total
Same                                    20%    6%     6%     0%     8%
Target is higher than self-assessment   46%    63%    58%    97%    66%
Target is lower than self-assessment    34%    31%    35%    3%     28%
Table 7 Comparison of the follow-up self-assessment and follow-up system assessment

                              R1     R2     R3     R4     Total
Same                          3%     0%     0%     0%     1%
Self-assessment is higher     40%    63%    69%    78%    63%
System assessment is higher   57%    37%    31%    22%    37%
Table 8 Comparison of the follow-up system assessment and initial system assessment

           R1     R2     R3     R4     Total
Same       23%    5%     2%     0%     7%
Increase   55%    55%    65%    69%    61%
Decrease   22%    40%    34%    31%    32%
Did students achieve their target goals?
A comparison of the student target goals and follow-up student self-assessment showed
that the students considered that they had achieved their target goals for the mastery
levels in 1% of the student records, outperformed their target goals in 31% of the student
records, and failed to achieve their target goals in 68% of the student records (Table 9). The results revealed that students considered that they had failed to achieve their target goals for their mastery levels in more than two-thirds of the student records. A comparison of the student target goals and follow-up system assessment showed that the students achieved their target goals for their mastery levels in 1% of the student records, outperformed their target goals in 30% of the student records, and failed to achieve their target goals in 69% of the student records (Table 10). The results indicated that the students often failed to achieve their target goals for their mastery levels.
Research question #2: Do SRL tools and external feedback from the OLM assist stu-
dents in SRL?
To address research question #2, this study explored how the students responded to
the external feedback from the OLM, and this study adopted a questionnaire to ask the
students about their feelings concerning their SRL experience of the system.
How did the students respond to the external feedback from the OLM?
The results of comparing the initial self-assessment and the modified initial self-assessment indicated that the students retained the same mastery levels in 34% of the student records, increased their mastery levels in 27% of the student records, and decreased their mastery levels in 38% of the student records (Table 11). The result indicated that the external feedback from the OLM promoted the ability of most (66%) students to generate internal feedback to modify their initial self-assessments (i.e., their perceived initial performances).
Table 9 Comparison of the follow-up self-assessment and target goal

                             R1     R2     R3     R4     Total
Achieved the goal            3%     0%     0%     0%     1%
Outperformed the goal        28%    37%    28%    32%    31%
Failed to achieve the goal   69%    63%    72%    68%    68%
Table 10 Comparison of the follow-up system assessment and target goal

                             R1     R2     R3     R4     Total
Achieved the goal            3%     0%     0%     0%     1%
Outperformed the goal        51%    35%    17%    18%    30%
Failed to achieve the goal   46%    65%    83%    82%    69%
Table 11 Comparison of the initial self-assessment and modified initial self-assessment

             R1     R2     R3     R4     Total
Same         92%    31%    14%    0%     34%
Modified     8%     69%    86%    100%   66%
  Increase   8%     37%    57%    8%     27%
  Decrease   0%     32%    29%    92%    38%
In the analysis of the modified initial self-assessment records of the students who
had different mastery levels between the initial self-assessment and the initial sys-
tem assessment, the results of the chi-squared test revealed that the students tended
to modify their initial self-assessment according to the difference between the initial
system assessment and the initial self-assessment (χ² = 10.568, p < 0.01) (Table 12). That is, when the students found that their mastery levels in the initial system assess-
ment were lower than their mastery levels in the initial self-assessment, they tended
to decrease their mastery levels in the modified initial self-assessment. Otherwise,
they tended to increase their mastery levels in the modified initial self-assessment.
The results indicated that the external feedback from the OLM of the initial system
assessment assisted the students in reflecting on their mastery levels (i.e. perceived
performance). However, after investigating the OLM of the initial self-assessment and
initial system assessment, in 34% of the student records, the students did not modify
their initial self-assessment, although their results in the initial self-assessment were
different from their results in the initial system assessment.
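The reported statistic can be reproduced from the counts in Table 12; a minimal sketch assuming SciPy:

```python
from scipy.stats import chi2_contingency

# Rows: initial system assessment minus initial self-assessment (positive, negative);
# columns: direction of the modification (same, positive, negative); see Table 12.
observed = [[23, 26, 15],
            [66, 45, 85]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
# -> chi2 = 10.568, dof = 2, p = 0.0051 (< 0.01), matching the reported result
```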
Students’ feelings aboutandSRL experience ofthesystem
This study provided SRL tools as scaffolding to promote students to engage in SRL processes, including initial self-assessment, goal-setting, follow-up learning, and a follow-up self-assessment after follow-up learning. This study also provided OLMs as external feedback to assist the students in SRL. Part 1 of the questionnaire was designed to ask the students whether these SRL processes through SRL tools and external feedback were beneficial to them. Table 13 lists the results of part 1 of the
questionnaire. In total, 91% of the students agreed (strongly agree, agree, and some-
what agree) that the initial self-assessment helped them reflect on their learning and
understanding of concepts (item #1). In total, 89% of the students considered that
the OLM of the initial system assessment helped them understand their learning and
understanding of concepts (item #2). In total, 77% of the students expressed that set-
ting target goals for mastery levels of concepts prompted them to study hard (item
#3). In total, 82% of the students consulted the results of the initial system assessment
to set their target goals for follow-up learning (item #4). In total, 85% of the students
agreed that the follow-up self-assessment helped them reflect on their follow-up learning (item #5). In total, 89% of the students considered that the follow-up system assessment helped them reflect on their follow-up learning (item #6).
Table 12 Distribution of the students’ initial system assessment, initial self-assessment,
andmodied initial self-assessment
Modied initial self-assessment—initial self-assessment Total
Same Positive Negative
Initial system assessment—initial self‑assessment
Positive 23 26 15 64
Negative 66 45 85 196
Total 89 71 100 260
Part 2 of the questionnaire was designed to ask the students about their SRL experi-
ence with the system. Table 14 lists the results of part 2 of the questionnaire. Of the
students, 91% had identified their unfamiliar concepts by comparing the OLMs of the
initial self-assessment and the initial system assessment (item #7), and 69% modified
their initial self-assessment after investigating the OLM of the initial system assessment
(item #8). In brief, the external feedback from the OLM of the initial system assessment
helped students generate internal feedback to reflect on their mastery levels of concepts
and identify unfamiliar concepts. In total, 83% of the students expressed that they had
investigated the questions and correct answers of the initial system assessment during
follow-up learning after class (i.e. strategy implementation and monitoring, item #9),
but the system records revealed that only 43% of the students had reviewed the ques-
tions and answers of the initial system assessment during follow-up learning. The results revealed that many students poorly monitored their follow-up learning.
Table 13 Results of part 1 of the questionnaire

7: strongly agree; 6: agree; 5: somewhat agree; 4: neutral; 3: somewhat disagree; 2: disagree; 1: strongly disagree

                                                                   7      6      5      4      3      2      1
#1. The initial self-assessment helped me reflect on my           11     36     12     4      2      0      0
learning and understanding of concepts                            17%    55%    18%    6%     3%     0%     0%
#2. The open learner model of the initial system assessment       15     29     14     6      1      0      0
helped me understand my learning and understanding of concepts    23%    45%    22%    9%     2%     0%     0%
#3. Setting target goals for mastery levels of concepts           10     24     16     11     3      1      0
prompted me to study hard                                         15%    37%    25%    17%    5%     2%     0%
#4. I consulted the results of the initial system assessment      10     24     19     10     1      1      0
to set target goals for follow-up learning                        15%    37%    29%    15%    2%     2%     0%
#5. The follow-up self-assessment helped me reflect on my         9      30     16     6      4      0      0
follow-up learning                                                14%    46%    25%    9%     6%     0%     0%
#6. The follow-up system assessment helped me reflect on my       12     32     14     5      2      0      0
follow-up learning                                                18%    49%    22%    8%     3%     0%     0%
Table 14 Results of part 2 of the questionnaire

                                                                                    Yes         No
#7. I identified unfamiliar concepts through comparing the open learner models     59 (91%)    6 (9%)
of the initial self-assessment and initial system assessment
#8. I modified my initial self-assessment after investigating the open learner     45 (69%)    20 (31%)
model of the initial system assessment
#9. I investigated the questions and correct answers of the initial system         54 (83%)    11 (17%)
assessment during follow-up learning after class
#10. I sought help from the teacher, teaching assistants, or classmates            51 (78%)    14 (22%)
regarding the incorrectly answered questions in the initial system assessment
during follow-up learning after class
#11. I found that I had not conducted any follow-up learning when I performed      28 (43%)    37 (57%)
the follow-up self-assessment
#12. I found that I had failed to achieve my target goals for mastery levels       38 (58%)    27 (42%)
when I performed the follow-up self-assessment
In total, 78% of the students expressed that they had sought help from the teacher, teaching assistants, or classmates regarding incorrectly answered questions in the initial system assessment during follow-up learning after class (item #10). That is, most students applied help-
seeking strategies to improve their learning. In total, 43% of the students found that
they had not conducted any follow-up learning when they performed the follow-up self-
assessment (i.e. strategy monitoring, item #11). In total, 58% of the students found that
they had failed to achieve their target goals for mastery levels when they performed the
follow-up self-assessment (i.e., strategy outcome monitoring, item #12). In brief, the fol-
low-up self-assessment helped students reflect on their follow-up learning by monitor-
ing their strategy implementation and outcome. Approximately half of the students had
not conducted any follow-up learning and failed to achieve their target goals.
Discussion
Students often have poor SRL processes and poor internal feedback
Regarding research question #1, “How do students conduct internal SRL processes and
feedback during the four phases of the self-regulatory cycle model?” the results revealed
that students often have poor SRL processes and poor internal feedback.
First, students' self-assessment is poor; that is, their monitoring of their learning performance is poor. The results showed that students' self-assessments failed to reflect their learning performance (Table 4), and students often tended to overestimate their mastery levels (Tables 5 and 7). These results are consistent with previous studies (Chou et al. 2015; Dunning et al. 2004; Stone 2000). The students' overestimated self-assessments might
deceive them and discourage them from further learning. Second, some students set
inappropriate target goals. Ideally, students should set their target mastery levels after
follow-up learning higher than their current mastery levels, as they conduct follow-up
learning to improve their mastery levels. However, the results revealed that some students set inappropriate targets for follow-up learning (Table 6), with their target mastery levels lower than those in their self-assessment. These students might lack the confidence or motivation to improve their mastery levels. Third, students often fail to conduct follow-up learning. Students should conduct follow-up learning to improve their mastery levels, but the system records and questionnaire results (item #11 in Table 14) indicated
that approximately half of the students did not conduct follow-up learning. Fourth, stu-
dents often fail to achieve their target goals for mastery levels. Although students set
their goals for mastery levels after follow-up learning, the results revealed that they often failed to achieve those goals (Table 10).
Accordingly, students often have poor SRL processes and poor internal feedback, and these lead to poor learning performance. The students had poor monitoring of their
learning performance (i.e., poor self-assessment) and often overestimated their mastery
levels of concepts; therefore, they did not conduct further learning (i.e., poor internal
feedback) to improve their mastery levels. In addition, after monitoring their mastery
levels of concepts, some students did not set appropriate target goals (i.e., poor internal
feedback) to improve their mastery levels. Furthermore, the students often failed to con-
duct follow-up learning (i.e., poor internal feedback) and to achieve their target goals for
the mastery levels. As a result, the students were often unaware of unfamiliar concepts
and failed to improve their mastery levels of concepts.
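To make these patterns concrete, the following minimal Python sketch (our illustration, not the system's implementation; the field names and the one-level overestimation margin are assumptions) shows how the four problems enumerated above could be detected from recorded SRL data:

from dataclasses import dataclass

@dataclass
class SRLRound:
    perceived_initial: float   # self-assessed mastery level before external feedback
    actual_initial: float      # system-assessed mastery level
    desired: float             # target goal set for follow-up learning
    actual_outcome: float      # system-assessed mastery level after follow-up learning
    followed_up: bool          # whether any follow-up learning was recorded

def detect_poor_srl(r: SRLRound, margin: float = 1.0) -> list[str]:
    """Flag the four poor SRL patterns discussed above."""
    issues = []
    if r.perceived_initial - r.actual_initial > margin:
        issues.append("overestimated self-assessment")
    if r.desired <= r.perceived_initial:   # target not higher than perceived level
        issues.append("inappropriate target goal")
    if not r.followed_up:
        issues.append("no follow-up learning")
    if r.actual_outcome < r.desired:
        issues.append("target goal not achieved")
    return issues

# A student who overrates themselves, sets a low goal, and skips follow-up:
print(detect_poor_srl(SRLRound(5, 3, 4, 3, False)))  # flags all four issues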
SRL tools and external feedback from the OLM assist most students in SRL
Regarding research question #2, “Do SRL tools and external feedback from the OLM
assist students in SRL?” the results showed that SRL tools and external feedback from
the OLMs assisted most students in SRL.
First, SRL tools externalize internal meta-cognitive SRL processes and feedback and
engage students in SRL. The questionnaire results revealed that the SRL tools of the initial self-assessment (item #1 in Table 13) and the follow-up self-assessment (item #5) helped most students reflect on their learning (i.e. monitoring of learning performance). In addition, setting target goals for mastery levels of concepts prompted most students to study hard (i.e. goal-setting and learning motivation, item #3). Furthermore, the follow-up self-assessment prompted most students to become aware of their lack of follow-up learning (i.e. strategy implementation and monitoring, item #11 in Table 14) and failure to
achieve their target goals (i.e. strategy outcome monitoring, item #12).
Second, the external feedback of the OLM of the system assessment helped most stu-
dents reflect on their learning, set target goals, and conduct follow-up learning. The questionnaire results showed that the OLM of the system assessment assisted most students in reflecting on their learning and mastery levels of concepts (i.e., monitoring of initial and outcome performance, items #2 and #6). Most students identified unfamiliar concepts by comparing the OLMs of the self-assessment and system assessment (item #7) and modified their initial self-assessment after investigating the OLM of the initial system assessment (item #8 and Tables 11 and 12). In addition, most students consulted
the results of the initial system assessment to set their target goals for follow-up learn-
ing (item #4). Furthermore, most students had investigated the questions and correct
answers of the initial system assessment (i.e., strategy implementation, item #9) and had
sought help from the teacher, teaching assistants, or classmates regarding their incor-
rectly answered questions (i.e., help-seeking strategy, item #10) during follow-up learn-
ing after class.
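As a simple illustration of this comparison step, the following hypothetical Python sketch (the concept names, numeric mastery levels, and one-level margin are our assumptions, not the system's actual data model) flags concepts where the system assessment falls below the self-assessment:

def unfamiliar_concepts(self_assessed, system_assessed, margin=1):
    """Return concepts whose system-assessed mastery falls below the
    self-assessed level by at least `margin`, i.e. candidates for
    follow-up learning."""
    return [c for c in system_assessed
            if self_assessed.get(c, 0) - system_assessed[c] >= margin]

perceived = {"recursion": 4, "arrays": 5, "loops": 5}   # self-assessment OLM
actual = {"recursion": 2, "arrays": 4, "loops": 5}      # system assessment OLM
print(unfamiliar_concepts(perceived, actual))  # ['recursion', 'arrays']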
In sum, SRL tools and external feedback from the OLMs assisted most students in
SRL, particularly in reflecting on their mastery levels of concepts and unfamiliar con-
cepts (i.e., self-evaluation and monitoring of their learning performance), setting tar-
get goals for follow-up learning (i.e., goal-setting), conducting follow-up learning (i.e.,
strategy implementation), determining whether to seek help (i.e., strategy planning and
implementation), and being aware of their execution (i.e., strategy monitoring) and out-
come of follow-up learning (i.e., strategy outcome monitoring).
Some students still need further support for SRL
This study offers SRL tools and external feedback from OLMs to assist students in SRL. The evaluation results showed that the SRL tools and external feedback from the OLMs are beneficial to most students but ineffective for some. Researchers have argued that students' individual differences influence the effects of SRL support, such as prompts and external feedback (Wong et al. 2019). Some students still need further sup-
port for SRL. We suggest providing further support for SRL as follows.
First, adding SRL negotiation mechanisms between students and the system would enhance the impact of external feedback and regulate students' poor SRL processes and internal feedback. This study provides SRL tools and external feedback and leaves
choices for the students, but some students did not utilize the SRL tools or respond to
the external feedback. For example, some students did not generate internal feedback
to modify their self-assessment based on the external feedback from the OLM of the
initial system assessment or they even modified their self-assessment in opposition to
the external feedback (Table 12). Studies have found that students with poor SRL skills often make poor choices when they control learning or system functions and thus attain poor learning performance (Clark and Mayer 2008; Scheiter and Gerjets 2007; Vandewaetere and Clarebout 2011; Young 1996). The system could add mechanisms to negotiate with students to reach a consensus on the self-assessment. Such negotiation enables a co-regulation process that supports students and regulates their ineffective SRL behaviors (Chou et al. 2015, 2018; Hadwin et al. 2011). The system can adopt different negotiation strategies with fewer or more concessions to favor the system or the students (Chou et al. 2015). If students have poor internal SRL processes and feedback, the system could adopt a negotiation strategy with fewer concessions. Previous studies have confirmed that negotiation between the system and students promotes student SRL processes, such as self-assessment, choice of the next learning content, goal setting, and help-seeking (Bull and Kay 2013; Chen et al. 2019; Chou et al. 2015,
2018, 2019). If students’ self-assessments vary widely from the system assessment and
students do not modify their self-assessments after external feedback from the OLM of
the system assessment, the system could remind them to note the substantial difference
between the self-assessment and system assessment and suggest that they modify their
self-assessments.
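A minimal sketch of such a concession-based proposal is given below (an illustration under our own assumptions, not the negotiation mechanism of Chou et al. (2015); the weighting scheme and tolerance are hypothetical):

def negotiate_level(student_level, system_level, concession=0.3, tolerance=0.5):
    """Propose a consensus mastery level. `concession` weights the
    student's value; a lower value concedes less to the student and
    thus favors the system assessment."""
    if abs(student_level - system_level) <= tolerance:
        return float(student_level)  # close enough: accept the self-assessment
    return concession * student_level + (1 - concession) * system_level

# A student claims level 5 while the system assesses level 2; with few
# concessions the proposal stays near the system's estimate:
print(negotiate_level(5, 2, concession=0.2))  # 2.6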
Second, adding different external feedback on, and SRL tools for, different SRL processes would assist students in SRL. Researchers have proposed four levels of external feedback: task level, process level, self-regulation level, and self-level feedback (Hattie and Timperley 2007). This study focuses on self-regulation level feedback on self-assessments. However, SRL involves many meta-cognitive processes, such as goal-setting and strategy choice and application. The results of this study revealed that some students set inappropriate goals for mastery levels that are lower than their current mastery levels (Table 6). The system could therefore add self-regulation level feedback on goal-setting, such as SRL prompts, or mechanisms to negotiate with students to set appropriate target goals (Chou et al. 2019). For example, the system can prompt students to set target goals that are higher than their current mastery levels. In addition, this study enabled students to review the questions and answers of the initial system assessment during follow-up learning, but the system records and questionnaire results (item #11 in Table 14) revealed that approximately half of the students did not conduct follow-up learning. Furthermore, over half of the students failed to achieve their goals (Table 10 and item #12). Students might lack the motivation or strategies to conduct follow-up learning. Therefore, the system should add further support for learning strategies, SRL prompts, and SRL tools for follow-up learning. For example, the system could add reminder mechanisms, such as emails or messages, to prompt students to conduct follow-up learning. In addition, the system could provide further learning materials and recommend that students conduct follow-up learning by reading materials related to unfamiliar concepts. The system could provide further similar questions for exercises and a dashboard (Matcha et al. 2020) to visualize and reveal students' follow-up learning behaviors
to prompt students to conduct follow-up learning. Finally, the system provides task level outcome feedback indicating whether each question is answered correctly but does not provide process level feedback to assist students in correcting their errors. The results of this study showed that many students sought help from teachers, teaching assistants, or classmates about their incorrectly answered questions (item #10 in Table 14). The system could add help-seeking support mechanisms that provide process level feedback, such as corrective feedback and explanation feedback, to help students understand why their answers are correct or incorrect (Chou et al. 2011). These mechanisms might help students improve their performance. The system could also add online help-seeking mechanisms to assist students in seeking help online from teachers, teaching assistants, or classmates.
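The rule-based prompts suggested above might look like the following sketch (illustrative Python; the message wording, data fields, and thresholds are our assumptions):

def srl_prompts(current_level, target_level, followup_minutes, goal_achieved):
    """Return self-regulation level prompts for goal-setting and
    follow-up learning."""
    prompts = []
    if target_level <= current_level:
        prompts.append("Your target mastery level is not higher than your "
                       "current level; consider raising it.")
    if followup_minutes == 0:
        prompts.append("No follow-up learning has been recorded; review the "
                       "questions you answered incorrectly.")
    if not goal_achieved:
        prompts.append("Your target goal has not been reached; try further "
                       "exercises or seek help from the teacher.")
    return prompts

for p in srl_prompts(current_level=3, target_level=2,
                     followup_minutes=0, goal_achieved=False):
    print(p)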
Conclusion
This study proposes an SRL internal and external model (SRL-IE) to illustrate the relationship among external SRL tools, internal SRL processes, internal feedback, and external feedback. The model assists in investigating SRL and in designing SRL tools and external feedback. Based on the model, this study designs an intelligent computer assisted learning system that offers SRL tools to engage students in SRL, including self-assessment of their mastery levels (i.e., monitoring of learning performance), setting goals for their mastery levels for follow-up learning, conducting follow-up learning to achieve their goals (i.e., strategy implementation and monitoring), and reflecting on and regulating their learning (i.e., strategy outcome monitoring). The SRL tools also externalize students' internal SRL processes and feedback for investigation. The results of the system records and questionnaire revealed that students often have poor SRL processes and internal feedback, including poor self-assessment, inappropriate target goals, a failure to conduct follow-up learning, and a failure to achieve their goals. The externalization of students' internal processes and feedback could be further applied in future studies to develop adaptive regulation mechanisms that detect students' poor SRL processes and internal feedback and provide appropriate external feedback and intervention.
The system provides OLMs as a form of external self-regulation level feedback on self-assessment to assist students in SRL. Multiple OLMs are generated from the different viewpoints of perceived initial performance, actual initial performance, desired performance, perceived outcome performance, and actual outcome performance in each round of SRL, enabling students to compare these OLMs to reflect on their SRL. The results revealed
that the SRL tools and external feedback from the OLMs assisted most students in SRL,
including monitoring of their learning performance, goal-setting, strategy implementa-
tion and monitoring, and strategy outcome monitoring. However, some students who
did not effectively respond to the external feedback need further support for SRL. Fur-
ther investigation of how to assist these students is needed.
Acknowledgements
Not applicable.
Authors’ contributions
C‑YC was responsible for writing the paper, project administration, conceptualization, methodology design, and
investigation. N‑BZ was responsible for software development, investigation, and formal analysis. Both authors read and
approved the final manuscript.
Funding
Not applicable.
Availability of data and materials
The data used in this study are confidential.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the
Center for Taiwan Academic Research Ethics Education.
Competing interests
The authors declare that they have no competing interests.
Received: 30 June 2020 Accepted: 16 September 2020
References
Aleven, V., Mclaren, B., Roll, I., & Koedinger, K. (2006). Toward meta‑cognitive tutoring: A model of help seeking with a
cognitive tutor. International Journal of Artificial Intelligence in Education, 16(2), 101–128.
Aleven, V., Roll, I., McLaren, B. M., & Koedinger, K. R. (2016). Help helps, but only so Much: Research on help seeking with
intelligent tutoring systems. International Journal of Artificial Intelligence in Education, 26, 205–223.
Azevedo, R., Greene, J. A., & Moos, D. C. (2007). The effect of a human agent’s external regulation upon college students’
hypermedia learning. Metacognition and learning, 2(2–3), 67–87.
Azevedo, R., & Hadwin, A. F. (2005). Scaffolding self‑regulated learning and metacognition–Implications for the design of
computer‑based scaffolds. Instructional Science, 33(5), 367–379.
Azevedo, R., Johnson, A., Chauncey, A., & Burkett, C. (2010). Self‑regulated learning with MetaTutor: Advancing the sci‑
ence of learning with MetaCognitive tools. In: New science of learning (pp. 225–247). Springer, New York, NY.
Brusilovskiy, P. L. (1994). The construction and application of student models in intelligent tutoring systems. Journal of
Computer and Systems Sciences International, 32(1), 70–89.
Bull, S. (2004). Supporting Learning with Open Learner Models. In Proceedings of 4th Hellenic Conference with International
Participation: Information and Communication Technologies in Education, Athens, Greece. Keynote.
Bull, S. (2016). Negotiated learner modelling to maintain today’s learner models. Research and Practice in Technology
Enhanced Learning, 11(1), 1–29.
Bull, S., Ginon, B., Boscolo, C., & Johnson, M. (2016). Introduction of learning visualisations and metacognitive support in
a persuadable open learner model. In Proceedings of the sixth international conference on learning analytics & knowl-
edge (pp. 30–39). ACM.
Bull, S., & Kay, J. (2013). Open learner models as drivers for metacognitive processes. In International handbook of metacognition and learning technologies (pp. 349–365). New York: Springer.
Bull, S., & Kay, J. (2016). SMILI: a Framework for interfaces to learning data in open learner models, learning analytics and
related fields. International Journal of Artificial Intelligence in Education, 26(1), 293–331.
Bull, S., Johnson, M. D., Alotaibi, M., Byrne, W., & Cierniak, G. (2013). Visualising multiple data sources in an independ‑
ent open learner model. In H. C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), Artificial intelligence in education (pp.
199–208). Berlin Heidelberg: Springer.
Bull, S., Pain, H., & Brna, P. (1995). Mr. Collins: A collaboratively constructed, inspectable student model for intelligent
computer assisted language learning. Instructional Science, 23(1–3), 65–87.
Bull, S., Quigley, S., & Mabbott, A. (2006). Computer‑based formative assessment to promote reflection and learner
autonomy. Engineering Education, 1(1), 8–18.
Burns, E. C., Martin, A. J., & Collie, R. J. (2018). Adaptability, personal best (PB) goals setting, and gains in students’
academic outcomes: A longitudinal examination from a social cognitive perspective. Contemporary Educational
Psychology, 53, 57–72.
Butler, D. L., & Winne, P. H. (1995). Feedback and self‑regulated learning: A theoretical synthesis. Review of Educational
Research, 65(3), 245–281.
Chambers, J. M., Cleveland, W. S., Kleiner, B., & Tukey, P. A. (2018). Graphical methods for data analysis. New York: CRC Press.
Chen, C. M. (2009). Personalized E‑learning system with self‑regulated learning assisted mechanisms for promoting learn‑
ing performance. Expert Systems with Applications, 36(5), 8816–8829.
Chen, Z. H., Lu, H. D., & Chou, C. Y. (2019). Using game‑based negotiation mechanism to enhance students’ goal setting
and regulation. Computers & Education, 129, 71–81.
Chou, C. Y., Chih, W. C., Tseng, S. F. & Chen, Z. H. (2019). Simulatable Open Learner Models of Core Competencies for Set‑
ting Goals for Course Performance. In Proceedings of the 27th International Conference on Computer in Education (ICCE
2019), pp. 93–95. Kenting, Taiwan.
Chou, C. Y., Huang, B. H., & Lin, C. J. (2011). Complementary machine intelligence and human intelligence in virtual teach‑
ing assistant for tutoring program tracing. Computers & Education, 57(4), 2303–2312.
Chou, C. Y., Lai, K. R., Chao, P. Y., Lan, C. H., & Chen, T. H. (2015). Negotiation based adaptive learning sequences: Combining
adaptivity and adaptability. Computers & Education, 88, 215–226.
Chou, C. Y., Lai, K. R., Chao, P. Y., Tseng, S. F., & Liao, T. Y. (2018). A negotiation‑based adaptive learning system for regulating
help‑seeking behaviors. Computers & Education, 126, 115–128.
Chou, C. Y., & Sun, P. F. (2013). An educational tool for visualizing students’ program tracing processes. Computer Applica-
tions in Engineering Education, 21(3), 432–438.
Chou, C. Y., Tseng, S. F., Chih, W. C., Chen, Z. H., Chao, P. Y., Lai, K. R., et al. (2017). Open student models of core competen‑
cies at the curriculum level: Using learning analytics for student reflection. IEEE Transactions on Emerging Topics in
Computing, 5(1), 32–44.
Clark, R. C., & Mayer, R. E. (2008). Who’s in Control? Guidelines for e‑Learning Navigation. In E-learning and the science of
instruction: Proven guidelines for consumers and designers of multimedia learning (pp. 309–338). San Francisco: Pfeiffer.
Conati, C., & Kardan, S. (2013). Student modeling: Supporting personalized instruction, from problem solving to explora‑
tory open ended activities. AI Magazine, 34(3), 13–26.
Demmans Epp, C., & Bull, S. (2015). Uncertainty representation in visualizations of learning analytics for learners: Current approaches and opportunities. IEEE Transactions on Learning Technologies, 8(3), 242–260.
Desmarais, M. C., & Baker, R. S. (2012). A review of recent advances in learner and skill modeling in intelligent learning
environments. User Modeling and User-Adapted Interaction, 22(1–2), 9–38.
Devolder, A., van Braak, J., & Tondeur, J. (2012). Supporting self‑regulated learning in computer‑based learning environ‑
ments: Systematic review of effects of scaffolding in the domain of science education. Journal of Computer Assisted
Learning, 28(6), 557–573.
Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self‑assessment implications for health, education, and the workplace.
Psychological Science in the Public Interest, 5(3), 69–106.
Garcia, R., Falkner, K., & Vivian, R. (2018). Systematic literature review: Self‑regulated learning strategies using e‑learning
tools for computer science. Computers & Education, 123, 150–163.
Griffin, T. D., Wiley, J., & Salas, C. R. (2013). Supporting effective self‑regulated learning: The critical role of monitoring. In
International handbook of metacognition and learning technologies (pp. 19–34). Springer, New York, NY.
Hadwin, A. F., Järvelä, S., & Miller, M. (2011). Self‑regulated, co‑regulated, and socially shared regulation of learning. In
Handbook of self-regulation of learning and performance (pp. 65–84).
Harley, J. M., Taub, M., Azevedo, R., & Bouchet, F. (2017). Let’s set up some subgoals: Understanding human‑pedagogical
agent collaborations and their implications for learning and prompt and feedback compliance. IEEE Transactions on
Learning Technologies, 11(1), 54–66.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Holt, P., Dubs, S., Jones, M., & Greer, J. (1994). The state of student modeling. In Student Modeling: the Key to Individualized
Knowledge-Based Instruction. (Greer, J. & McCalla, G. I. Eds.) (pp. 3–35), Springer, Berlin.
Hwang, G. J. (2003). A conceptual map model for developing intelligent tutoring systems. Computers & Education, 40(3),
217–235.
Jansen, R. S., van Leeuwen, A., Janssen, J., Conijn, R., & Kester, L. (2020). Supporting learners’ self‑regulated learning in Mas‑
sive Open Online Courses. Computers & Education, 146, 103771.
Järvelä, S., Kirschner, P. A., Panadero, E., Malmberg, J., Phielix, C., Jaspers, J., et al. (2015). Enhancing socially shared regula‑
tion in collaborative learning groups: Designing for CSCL regulation tools. Educational Technology Research and
Development, 63(1), 125–142.
Karabenick, S. A. (2011). Methodological and assessment issues in research on help seeking. In Handbook of Self-regula-
tion of Learning and Performance, pp. 267–281.
Lai, C.‑L., & Hwang, G.‑J. (2016). A self‑regulated flipped classroom approach to improving students’ learning performance
in a mathematics course. Computers & Education, 100, 126–140.
Lajoie, S. P. (1993). Computer environments as cognitive tools for enhancing learning. In Computers as cognitive tools, pp.
261–288.
Lee, D., Watson, S. L., & Watson, W. R. (2019). Systematic literature review on self‑regulated learning in massive open
online courses. Australasian Journal of Educational Technology, 35(1), 28–41.
Lin, J. W., Lai, Y. C., Lai, Y. C., & Chang, L. C. (2016). Fostering self‑regulated learning in a blended environment using group
awareness and peer assistance as external scaffolds. Journal of Computer Assisted Learning, 32(1), 77–93.
Long, Y., & Aleven, V. (2017). Enhancing learning outcomes through self‑regulated learning support with an Open Learner
Model. User Modeling and User-Adapted Interaction, 27(1), 55–88.
Manlove, S., Lazonder, A. W., & de Jong, T. (2007). Software scaffolds to promote regulation during scientific inquiry learn‑
ing. Metacognition and Learning, 2(2–3), 141–155.
Matcha, W., Gasevic, D., & Pardo, A. (2020). A systematic review of empirical studies on learning analytics dashboards: A
self‑regulated learning perspective. IEEE Transactions on Learning Technologies, 13(2), 226–245.
Mitrovic, A., & Martin, B. (2007). Evaluating the effect of open student models on self‑assessment. International Journal of
Artificial Intelligence in Education, 17(2), 121–144.
Müller, N. M., & Seufert, T. (2018). Effects of self‑regulation prompts in hypermedia learning on learning performance and
self‑efficacy. Learning and Instruction, 58, 1–11.
Musso, M. F., Boekaerts, M., Segers, M., & Cascallar, E. C. (2019). Individual differences in basic cognitive processes and self‑
regulated learning: Their interaction effects on math performance. Learning and Individual Differences, 71, 58–70.
Nicol, D. J., & Macfarlane‑Dick, D. (2006). Formative assessment and self‑regulated learning: A model and seven principles
of good feedback practice. Studies in Higher Education, 31(2), 199–218.
Nota, L., Soresi, S., & Zimmerman, B. J. (2004). Self‑regulation and academic achievement and resilience: A longitudinal
study. International Journal of Educational Research, 41(3), 198–215.
Nussbaumer, A., Hillemann, E. C., Gütl, C., & Albert, D. (2015). A competence‑based service for supporting self‑regulated
learning in virtual environments. Journal of Learning Analytics, 2(1), 101–133.
Pakdaman‑Savoji, A., Nesbit, J., & Gajdamaschko, N. (2019). The conceptualisation of cognitive tools in learning and technology: A review. Australasian Journal of Educational Technology, 35(2), 1–24.
Panadero, E. (2017). A review of self‑regulated learning: Six models and four directions for research. Frontiers in Psychology,
8, 422.
Panadero, E., Broadbent, J., Boud, D., & Lodge, J. M. (2019). Using formative assessment to influence self‑and co‑regulated
learning: The role of evaluative judgement. European Journal of Psychology of Education, 34(3), 535–557.
Panadero, E., Klug, J., & Järvelä, S. (2016). Third wave of measurement in the self‑regulated learning field: When measure‑
ment and intervention come hand in hand. Scandinavian Journal of Educational Research, 60(6), 723–735.
Pérez‑Álvarez, R., Maldonado‑Mahauad, J., & Pérez‑Sanagustín, M. (2018, September). Tools to support self‑regulated
learning in online environments: literature review. In European Conference on Technology Enhanced Learning (pp.
16–30). Springer, Cham.
Perkins, D. N., Hancock, C., Hobbs, R., Martin, F., & Simmons, R. (1986). Conditions of learning in novice programmers.
Journal of Educational Computing Research, 2(1), 37–55.
Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies
for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813.
Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2011a). Metacognitive practice makes perfect: Improving students’
self‑assessment skills with an intelligent tutoring system. In International Conference on Artificial Intelligence in Educa-
tion (pp. 288–295). Springer, Berlin, Heidelberg.
Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2011b). Improving students’ help‑seeking skills using metacognitive
feedback in an intelligent tutoring system. Learning and Instruction, 21(2), 267–280.
Roll, I., Wiese, E. S., Long, Y., Aleven, V., & Koedinger, K. R. (2014). Tutoring self‑and co‑regulation with intelligent tutoring
systems to help students acquire better learning skills. Design Recommendations for Intelligent Tutoring Systems, 2,
169–182.
Rovers, S. F., Clarebout, G., Savelberg, H. H., de Bruin, A. B., & van Merriënboer, J. J. (2019). Granularity matters: Comparing
different ways of measuring self‑regulated learning. Metacognition and Learning, 14(1), 1–19.
Saary, M. J. (2008). Radar plots: a useful way for presenting multivariate health care data. Journal of Clinical Epidemiology,
61(4), 311–317.
Scheiter, K., & Gerjets, P. (2007). Learner control in hypermedia environments. Educational Psychology Review, 19(3),
285–307.
Schraw, G. (2007). The use of computer‑based environments for understanding and improving self‑regulation. Metacog-
nition and Learning, 2(2), 169–176.
Self, J. (1988). Bypassing the intractable problem of student modeling. In International Conference of Intelligent Tutoring
Systems, Montreal, Canada, pp. 18–24.
Shyr, W. J., & Chen, C. H. (2018). Designing a technology‑enhanced flipped learning system to facilitate students’ self‑
regulation and performance. Journal of Computer Assisted Learning, 34(1), 53–62.
Stone, N. J. (2000). Exploring the relationship between calibration and self‑regulated learning. Educational Psychology
Review, 12(4), 437–475.
Su, J. M. (2020). A rule‑based self‑regulated learning assistance scheme to facilitate personalized learning with adaptive
scaffoldings: A case study for learning computer software. Computer Applications in Engineering Education, 28(3),
536–555.
Vainio, V., & Sajaniemi, J. (2007). Factors in novice programmers’ poor tracing skills. ACM SIGCSE Bulletin, 39(3), 236–240.
Vandewaetere, M., & Clarebout, G. (2011). Can instruction as such affect learning? The case of learner control. Computers &
Education, 57, 2322–2332.
Winne, P. H. (1996). A metacognitive view of individual differences in self‑regulated learning. Learning and Individual Dif-
ferences, 8(4), 327–353.
Winne, P. H. (2010). Improving measurements of self‑regulated learning. Educational Psychologist, 45(4), 267–276.
Winne, P. H. (2011). A cognitive and metacognitive analysis of self‑regulated learning. In Handbook of self-regulation of
learning and performance (pp. 15–32).
Winne, P. H., & Hadwin, A. F. (2013). nStudy: Tracing and supporting self‑regulated learning in the Internet. In International
handbook of metacognition and learning technologies (pp. 293–308). Springer, New York, NY.
Winne, P. H., & Nesbit, J. C. (2009). Supporting Self‑Regulated Learning with Cognitive Tools. In Handbook of metacognition
in education, p. 259.
Winne, P. H., & Perry, N. E. (2000). Measuring self‑regulated learning. In Handbook of self-regulation (pp. 531–566). Aca‑
demic Press.
Wong, J., Baars, M., Davis, D., Van Der Zee, T., Houben, G. J., & Paas, F. (2019). Supporting self‑regulated learning in online
learning environments and MOOCs: A systematic review. International Journal of Human-Computer Interaction,
35(4–5), 356–373.
Woolf, B. P. (2008). Building Intelligent Interactive Tutors: Student-centered Strategies for Revolutionizing e-learning. Boston:
Morgan Kaufmann Publishers.
Young, J. D. (1996). The effect of self‑regulated learning strategies on performance in learner controlled computer‑based
instruction. Educational Technology Research and Development, 44(2), 17–27.
Zhou, M. (2012). From “Self‑Tested” to “Self‑Testing”: a review of self‑assessment systems for learning. In S. Graf, F. Lin, & R.
McGreal (Eds.), Intelligent and adaptive learning systems: Technology enhanced support for learners and teachers (pp.
119–132). Hershey, PA: Information Science Reference.
Zimmerman, B. J. (1990). Self‑regulated learning and academic achievement: An overview. Educational Psychologist, 25(1),
3–17.
Zimmerman, B. J. (2001). Theories of self‑regulated learning and academic achievement: An overview and analysis. Self-
Regulated Learning and Academic Achievement: Theoretical Perspectives, 2, 1–37.
Zimmerman, B. J. (2002). Becoming a self‑regulated learner: An overview. Theory into Practice, 41(2), 64–70.
Zimmerman, B. J., Bonner, S., & Kovach, R. (1996). Developing self-regulated learners: Beyond achievement to self-efficacy.
New York: American Psychological Association.
Zimmerman, B. J., & Schunk, D. H. (1989). Self-regulated learning and academic achievement: Theory, research, and practice.
New York: Springer.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.