Analysing feedback processes in an online teaching
and learning environment: an exploratory study
Anna Espasa · Julio Meneses
Published online: 10 June 2009
Springer Science+Business Media B.V. 2009
Abstract Within the constructivist framework of online distance education the feedback
process is considered a key element in teachers’ roles because it can promote the regulation
of learning. Therefore, faced with the need to guide and train teachers in the kind of
feedback to provide and how to provide it, we establish three aims for this research:
identify the presence of feedback according to the regulation of learning required;
characterise this feedback according to content (i.e. the meaning of feedback); and, finally,
to explore possible relationships between feedback and the results of the teaching and
learning process (i.e. students’ satisfaction and final grades). The results for a sample of
186 students, taking nine courses at the Open University of Catalonia, are discussed in the
light of feedback, which is considered a central element in university teaching practice in
online environments. We conclude that, in general, the presence of feedback is associated
with improved levels of performance and higher levels of satisfaction with the general
running of the course.
Keywords Distance education · Regulation · Feedback · Formative assessment · Higher education · Online environment · ICT
Despite the fact that an essentially behaviourist pedagogical approach predominated in the
early days of distance education, the breakthrough in cognitive learning theories combined
with advances in the information and communication technologies (ICT) led to their
gradual introduction into teaching and learning processes (Garrison and Anderson 2003).
A. Espasa · J. Meneses
Department of Psychology and Education, Open University of Catalonia, Rambla del Poblenou, 156, 08018 Barcelona, Spain

A. Espasa is the corresponding author.
High Educ (2010) 59:277–292
DOI 10.1007/s10734-009-9247-4
The first computer-assisted courses, based on the use of simulations and multimedia
applications, created the conditions necessary to be able—under the growing influence of
the constructivist approach—to take advantage of new opportunities for the development
and consolidation of online teaching and learning environments.
Taking this perspective—which is that adopted in this research—distance education is
conducted within the framework of a community whose ultimate goal is the co-construction of knowledge through asynchronous interactions between students and teachers in relation to content or learning tasks (Scardamalia and Bereiter 1994). Learning, therefore,
would be based on combining two basic psychological and complementary processes: one
that is interpersonal in nature, sustained in interaction, confrontation and negotiation in
regard to contributions from the participants in the educational activity, and another,
intrapersonal process, based on individual cognitive reflection.
In keeping with this perspective, the process of teaching and learning in online educational environments is usually based on assignments performed within the framework of continuous learning assessment (Macdonald and Twining 2002). This type of evaluation, complemented by traditional summative, or final, assessment (Morgan and O'Reilly 1999), is integrated into the teaching and learning process, where students, as proposed by Vygotsky (1978), receive the help and support of the teacher and of their peers, which helps them progress in their learning. In this evaluative context, feedback processes facilitate the regulation of learning and enable students to measure their performance against their aims (Allal 1979, 1988; Nicol and Macfarlane-Dick 2006). Feedback in the specific context of formative learning assessment is the object of study of this article (Shute 2008; Yorke 2003); such feedback is indispensable for adult learners in asynchronous teaching and learning environments because it allows students to become progressively more autonomous in their learning.
Feedback as a promoter of regulation of learning
In recent decades, researchers have shown increasing interest in formative feedback in teaching environments. For example, Chickering and Gamson (1991) and Chickering and Ehrmann (2008) highlighted feedback as one of the key elements of quality teaching in higher education. However, most studies conducted in this area do not provide empirical results, rarely go beyond theoretical formulations, and do not analyse the specific characteristics of feedback that promote the regulation of learning. This is the case, for example, with Nicol and Macfarlane-Dick (2006), who proposed seven principles for good feedback, and Gibbs and Simpson (2004), whose interest was in the importance of feedback as an influential mechanism in learning. For this reason, we want to help reduce the empirical gap concerning which characteristics of feedback promote the regulation of learning (see Allal 1979, 1988, 1993 and Kramarski and Zeichner 2001 regarding the benefits of this type of feedback; and see Ley and Young 2001 for a discussion of the importance of feedback as an instigator of regulation).
A teacher's influence is crucial in fostering students' self-regulation in a virtual environment (Williams and Hellman 2004). Given the different perspectives from which self-regulated learning processes have been studied (for a review see, among others, Butler and Winne 1995; Boekaerts 1997; Zimmerman 1995), our research specifically focuses on the transition between external and internal regulation (Vermunt and Verloop 1999; Vermunt and Verschaffel 2000). In this sense, according to our conceptualisation of learning mentioned above (as an interpersonal and intrapersonal process), our approach to the regulation of learning centres specifically on feedback as an external source (an interpersonal process) which promotes the internal regulation, or self-regulation, of learning (an intrapersonal process).
Following Allal (1979, 1988, 1993), we can identify three kinds of formative assessment in learning environments, each related to a different type of regulation: firstly, continuous assessment throughout the entire teaching and learning process, involving interactive regulation that includes various forms of help, among them feedback, in the educational process; secondly, regular formative assessment, which requires retroactive, compensatory regulation that seeks to improve results in order to achieve objectives during the teaching and learning process; and finally, assessment involving proactive regulation, which is intended to consolidate the skills acquired by the student in relation to future learning.
Considering the design of educational practices and given the asynchronous nature and
written textuality of online environments—as analysed in this study—in online teaching
and learning contexts there should be a confluence of the three forms of regulation, which
we can situate, respectively, throughout the whole formative process, at the end of each
assignment, and at the end of the entire educational process. In this context, the presence of
feedback becomes a relevant factor in promoting the regulation of learning and this is why
we are going to analyse it in our research. In accordance with each of the three types of
regulation described above, and taking into account the characteristics of the online teaching and learning process, three forms of feedback can be defined as follows (see Fig. 1):
– The resolution of student doubts about learning content during the period of realisation of assignments within the course (interactive regulation).
– The communication of results for each assignment according to pre-determined objectives, including strategies to improve the learning process (retroactive and proactive regulation).
Fig. 1 Feedback (FB) and type of regulation in online teaching and learning processes. Source: Adapted from Allal (1979, 1988, 1993)
– The communication of final assessment results on completion of the teaching process (proactive regulation). Proactive regulation includes the enhancing component of the learning process, which we consider present throughout the learner's academic life. We situate it at the end of the teaching and learning process because, according to Allal (1979, 1988), its aim is to anticipate future training activities, and it is geared towards the consolidation and deepening of the student's competences.
Authors such as Collis et al. (2001) or Macdonald (2001) refer to feedback centred on
the communication of learning results in an online environment. Their studies described a
range of feedback strategies to communicate results; for example, feedback can be offered
individually—tailored to the work of each student—or in groups (by means of a general
communication to an online classroom), or by providing a model answer for students
against which they can check their own work. We will review some of these feedback
methods in the results section below.
Characterising feedback according to content (semantic dimension)
The research produced on feedback from the 1980s and 1990s and in more recent times (see, for example: Bangert-Drowns et al. 1991; Cohen 1985; Miller 2009; Rice et al. 1994; Shute 2008) identifies feedback content (i.e. feedback meaning) as one of the aspects that must be considered when analysing feedback processes; this is why we focus our attention on this dimension. However, before going into it in depth, we conceptualise feedback in a global sense. In online learning environments three general feedback dimensions have been proposed by Narciss and her colleagues (Narciss 2004, 2008; Narciss et al. 2004; Narciss and Huth 2004, 2006). The first is the functional dimension, which refers to the specific role of feedback in the framework of educational activity; these authors identify three functions: cognitive, metacognitive and motivational (Narciss 2008). For the purposes of our research we focus on the cognitive (related to the learning content) and metacognitive (related to self-reflection about how to learn) functions of feedback, leaving its motivational function somewhat to one side. As both functions (cognitive and metacognitive) are present throughout the entire teaching and learning process, we do not centre our analysis on this dimension. The second dimension is the structural one, which refers to the form feedback takes in a specific context (i.e. where it is given, who gives it, the moment when it is given). Finally, the semantic dimension refers to the feedback content, or the significance of statements made in the feedback. As we will explain below, the semantic dimension is the focus of our research.
As Narciss pointed out, these feedback characteristics can be complemented by a further two dimensions directly related to the receiver and the context: a student's individual circumstances (for example, previous knowledge or learning styles), and the characteristics of the instructional context, including learning objectives and learning activities, and the errors and obstacles that hinder learning.
Taking this multi-dimensional conceptualisation of feedback into account, we focus our attention on the semantic dimension. A review of the literature (see, for example: Kulhavy and Stock 1989; Mason and Brunning 2001; Mory 2004; Narciss 2004; Tunstall and Gipps 1996) suggests that it is made up of four sub-dimensions:
– Information on errors made. For example: ''answers 2 and 4 are incorrect, please review and resubmit''.
– Information about the correct answer or final solution. For example: ''the answer is incorrect, it should be 6.26''.
– Information about guidelines and strategies to improve work. For example: ''Review the second part of the study material again to better understand orientation within organisations''.
– Information about additional resources as an aid to future learning. For example: ''If you would like to learn more about the subject of orientation within organisations, consult the Educaweb web page''.
According to Kulhavy and Stock (1989), the first two sub-dimensions make up the verification component of feedback because they allow students to obtain information on the correctness of their response. The latter two sub-dimensions, linked to improving the assignment in hand and providing more in-depth subject matter information, belong to the elaboration component of feedback because they allow students to obtain information on how to improve the learning process. According to Kulhavy and Stock (1989) and Mason and Brunning (2001), feedback must integrate information for both verification and elaboration in order to ensure the success of the teaching and learning process.
It is within this conceptual framework that we establish three objectives for this research: (1) to analyse the presence of different types of feedback at the various moments of the teaching and learning process; (2) to characterise this feedback according to content (i.e. the semantic dimension); and (3) to explore possible relationships between feedback and learning outcomes (students' final grades and satisfaction). After exploring these three issues in the light of the data collected, we discuss the importance of including feedback as a central feature in teaching practice for university teachers working in online environments.
In order to accomplish these objectives, exploratory research was conducted between February and June 2005 among a non-random sample of 186 students from the Universitat Oberta de Catalunya (UOC, Open University of Catalonia) graduate programmes. Although our first intention was to design and construct a random sample from the whole population of students, internal requirements prevented the researchers from contacting students individually to ask for their collaboration. Instead, we were forced to recruit participants from a limited number of classes, developing an intentional sample of self-selected participants. Participants were thus recruited from nine selected courses belonging to seven different graduate programmes available in 2004–2005 at the UOC (see Table 1). As has been pointed out above, the students were informed and invited to take part in the study by the teaching staff, who published a standard open letter asking for their collaboration on their own (electronic) class board.
The final sample was composed of 186 students, implying an overall response rate of 28.31%, with slight variations between courses (see Table 1). Participants are, in any case, roughly comparable to the overall demographics of the UOC's graduate students, showing an average age of 34.17 years (SD = 7.84) and a female population of 60.30%. Most students (88.60%) were enrolled in more than one course, with just 13.50% of them repeating the course in which they were invited to take part in the study.
[Footnote: Although the government of the UOC was apparently interested in this research, the long, difficult and extremely bureaucratic process involved led us to dismiss the idea of constructing a random sample. Nevertheless, it is important to note that, in any case, the results of this study should not be taken beyond their exploratory nature.]
Given the exploratory nature of this research, an electronic ad hoc questionnaire was developed and administered in the last week of the course, following common recommendations in the literature (see, among others, Andrews et al. 2003; Best and Krueger 2004; Evans and Mathur 2005; Fox et al. 2003; Fowler 2001; Leung and Kember 2005).
Besides the common demographic information, the students were asked to assess the kind of feedback they were receiving in the course, bearing in mind the three kinds of regulation of learning discussed earlier: in response to a question they had asked, after an assignment during the course, and after the final assignment (summative assessment). Whenever feedback was identified, the participants were asked to rate its nature through four independent Likert-type agreement items (totally disagree, disagree, neither agree nor disagree, agree, totally agree) related to its significance in the learning process (semantic dimension): information about mistakes or errors they made, correct answers to questions, orientations or guidelines to improve their learning, and additional resources for further learning.
Finally, additional information about educational outcomes was also collected, asking students to provide their final grades in the selected course and to rate their overall satisfaction with the course on another Likert-type scale (very dissatisfied, dissatisfied, neither dissatisfied nor satisfied, satisfied, very satisfied).
In spite of the initial exploratory nature of the research, the results presented in this paper include inferential statistics in an attempt to reach conclusions beyond the immediate data alone. After descriptive explorations, bivariate analyses were conducted to make inferential judgments and test relationships between feedback and learning outcomes (i.e. final course grades and satisfaction).

Table 1 Sample

Course                                               Participants   Students enrolled   Response rate (%)
Technical engineering                                     27              132                20.45
Representation and processing of knowledge                24               67                35.82
Introduction to macroeconomics                            15               86                17.44
Interculturality and education                            19               60                31.67
Professional orientation                                  23               72                31.94
Logic                                                     11               71                15.49
Applied statistics                                        11               56                19.64
Fundamentals of search and recovery of information        13               60                21.67
Data analysis II                                          43               53                81.13
Total (N)                                                186              657                28.31

Source: Author
Given the Likert-type nature of the items, neither those concerning the three kinds of feedback considered in this study nor the satisfaction item form any kind of scale, and they are not intended to be summed or aggregated. Instead, they are treated as discrete items in standard tests for the analysis of categorical variables (Agresti 2002). Using Pearson's chi-square test (Liebetrau 1983), we try to reject the null hypothesis of independence between pairs of variables (P ≤ 0.05).
Whenever the independence hypothesis is rejected, the strength of association between the pair of variables is assessed with Cramer's V (Cramer 1999), which takes values between 0 and 1 (the latter representing a perfect association, as in other typical measures of association in the social sciences). Additionally, the interested reader will find standardised adjusted residuals for further inspection.
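The procedure just described (Pearson's chi-square test of independence, Cramér's V as the measure of association, and standardised adjusted residuals for cell-level inspection) can be sketched in a few lines of Python. This is an illustrative implementation only, not the authors' analysis code, and the 2 x 2 contingency table below is hypothetical:

```python
import math

def chi2_and_cramers_v(table):
    """Pearson chi-square statistic and Cramer's V for an r x c contingency
    table of observed counts. V = sqrt(chi2 / (n * (min(r, c) - 1))) ranges
    from 0 (independence) to 1 (perfect association)."""
    r, c = len(table), len(table[0])
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(r)) for j in range(c)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(r):
        for j in range(c):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2, math.sqrt(chi2 / (n * (min(r, c) - 1)))

def adjusted_residuals(table):
    """Standardised adjusted residuals: the number of standard deviations by
    which each cell deviates from its expected value under independence.
    Absolute values beyond roughly 1.96 flag statistically significant cells."""
    r, c = len(table), len(table[0])
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(r)) for j in range(c)]
    n = sum(row_totals)
    return [[(table[i][j] - row_totals[i] * col_totals[j] / n)
             / math.sqrt((row_totals[i] * col_totals[j] / n)
                         * (1 - row_totals[i] / n) * (1 - col_totals[j] / n))
             for j in range(c)]
            for i in range(r)]

# Hypothetical table: feedback received (rows) by dissatisfied/satisfied (columns)
observed = [[10, 17],     # no feedback
            [12, 140]]    # feedback received
chi2, v = chi2_and_cramers_v(observed)   # chi2 ~ 18.06, V ~ 0.32
residuals = adjusted_residuals(observed)
```

The P-value would then be read from the chi-square distribution with (r - 1)(c - 1) degrees of freedom; in practice a library routine such as SciPy's scipy.stats.chi2_contingency performs the full test in one call.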
Below we present our main results on the presence of feedback in online teaching and
learning environments, its characterisation according to the content (semantic dimension)
and its relationship with learning outcomes.
The presence of feedback in online environments
We identified three basic kinds of feedback corresponding to the three kinds of regulation
described in our theoretical framework, as follows: interactive regulation (response to
questions about course content); retroactive regulation (following an assignment); and
finally, proactive regulation (after final assignment). It is necessary to take into consideration—as we do in our analysis of teaching practices—that not all the students in this
research will necessarily have received all the types of feedback, given that the educational
model of the university does not limit students to any single approach to undertaking their
courses of study (see Table 2).
Regarding the first type of feedback, 40% of the students reported having sent a question during the continuous assessment process, and 97.4% of these reported having received an answer. Furthermore, as Table 2 shows, almost half the questions were answered both in the classroom and via the student's personal mailbox. It may seem surprising that—in an
asynchronous interaction environment between teacher and student—fewer than half the
students interviewed had sent a question on learning content. Teachers usually appreciate
questions being asked in the online classroom, as it encourages joint knowledge building
among students (to place the concept of peer learning in context, see Boud et al. 1999).
Almost all the students (96.8%) had completed assignments during the course and as such received retroactive feedback. Although the university's evaluation model allows students the option of only taking the final assignment (summative assessment), it also encourages students to participate in continuous and periodic assignments, with the aim of monitoring learning throughout the entire education process rather than merely taking the final result into account. Although most students (84.2%) reported having received post-assessment feedback, on more than half of the occasions (60%) the feedback was offered only in the classroom and was probably not, as a consequence, adapted to the specific needs of the student, at least if we compare it to the feedback given after asking about a doubt (13.1%). However, this particular teaching practice must be considered as complementing the former.
[Footnote: Standardised adjusted residuals are easily interpretable as the number of standard deviations above or below the average by which a cell deviates from its expected value. A zero value is expected whenever the specific column and row are independent, and the sign indicates the direction of the relationship. Additionally, values beyond ±1.9 indicate a statistically significant association.]
Finally, although all the students had done the final assessment assignment—as required
by the university’s evaluation system—just over half (57%) reported having received
feedback afterwards, indicating a clear drop in feedback by teachers in comparison with
the other two kinds of feedback (see Table 2). Although initially this feedback could be
considered as of secondary importance because it is given at the end of the process, it
should, in fact, be awarded equal importance in the educational activity, since it is a
summative assessment of learning and an overall evaluation of the completion of the initial
learning objectives that facilitates future planning.
The content of feedback in online environments (semantic dimension)
As indicated above, we also analysed the three kinds of feedback in terms of their content
or semantic dimension.
In evaluating the content of feedback as a response to a question about learning content,
around three quarters of students (71.2%) agreed or totally agreed that the feedback
received guided the correction of errors (see Table 3). As would be expected in a formative
feedback process, approximately half the students (53.8%) had received information on the
correct answer, while around two-thirds reported that they were given information on how
to improve their work (70%) and on how to obtain further information to complement
learning (65.7%). This type of feedback is clearly aimed at helping the student to regulate
their learning process because the feedback is made up of both the components we previously introduced: verification (gives the resolution of the doubt and the correct answer)
and elaboration (gives information about how to improve their work in order to achieve
learning objectives).
Table 2 Feedback received

                                          No feedback   Classroom and   Personal mailbox   Classroom    Total (n)
                                              (%)       mailbox (%)         only (%)        only (%)
Feedback during continuous assessment
  process (response to a doubt)               2.6           46.1              38.2            13.1      100% (76)
Feedback after realising an assignment
  during the course                          15.0           17.2               7.8            60.0      100% (180)
Feedback after final assignment              42.5           14.5               3.8            39.2      100% (186)

Source: Author. To interpret this table, it must be taken into account that, of the total sample, 40.9% of the students reported having sent a question during the course, 96.8% had to do an assignment during the course, and all the students did a final assignment (as the assessment system of the university requires).

In evaluating feedback following an assignment, the characterisation based on student opinion presents us with a profile which is very close to the self-assessment model of feedback that is typical in online environments (see Table 4). Most students agreed or totally agreed that this kind of feedback was basically aimed at providing the correct answer (83.1%)
or providing information on errors (69.4%), rather than at improving their work (51.8%), or
how to take learning deeper (47.9%). In this kind of feedback—known as a ‘model answer’ in
online teaching and learning environments (see, among others, Collis et al. 2001 and Macdonald 2001)—the teacher simply provides the correct answer for students to make their own
comparative evaluation. In other words, the teacher gives students the responsibility for
taking advantage of feedback within the learning process using their own initiative.
Finally, focusing on the content of feedback after the final assessment (see Table 5),
around half the students (51.6%) considered that this provided the correct answer or
information on errors (42.1%). However, approximately a quarter of the students disagreed
or totally disagreed that it had given them information on how to improve the work done
during the course (27.4%) or how to locate more resources in order to deepen learning
(27.7%). Bearing in mind that this type of feedback is given at the end of the teaching and
learning process, it can be concluded that teaching practices do not seem to be aimed at
general improvement or involving students in consolidation and further in-depth study that
goes beyond the course’s objectives.
The relationship between feedback and learning outcomes
In the UOC’s education model the student is at the centre of the teaching and learning
process and is responsible for building knowledge with the help and guidance of a teacher.
Table 3 Evaluation of feedback during continuous assessment process (response to a doubt)

                                  Totally        Disagree   Neither agree      Agree    Totally      Total (n)
                                  disagree (%)      (%)     nor disagree (%)    (%)     agree (%)
Information on errors made             6.1          6.1          16.7           40.9      30.3       100% (66)
Information on correct answer         15.4          6.2          24.6           32.3      21.5       100% (65)
Information on how to
  improve work done                    7.1          5.8          17.1           41.4      28.6       100% (70)
Information on further
  learning on the subject              5.7          5.7          22.9           37.1      28.6       100% (70)

Source: Author
Table 4 Evaluation of feedback after realising an assignment during the course

                                  Totally        Disagree   Neither agree      Agree    Totally      Total (n)
                                  disagree (%)      (%)     nor disagree (%)    (%)     agree (%)
Information on errors made             6.3          7.6          16.7           37.5      31.9       100% (144)
Information on correct answer          1.4          4.2          11.3           42.3      40.8       100% (142)
Information on how to
  improve work done                    7.2         12.2          28.8           35.3      16.5       100% (139)
Information on further
  learning on the subject              5.6         13.4          33.1           30.3      17.6       100% (142)

Source: Author
We explore the relationship between feedback given by the teacher and learning outcomes
in terms of students’ final academic performance and satisfaction with the general
development of the course.
In accordance with the methodology outlined above, we found statistically significant differences in performance and satisfaction between students, classified into two groups according to whether or not they had received feedback after doing assignments and after their final assessment. In the case of feedback during the continuous assessment process, with which we began the results section of this research, it is not statistically possible to compare two groups because practically all of the students (97.4%) fell into the first group, i.e. they had received feedback.
Regarding the possible relationship between the provision of feedback and students' final performance (see Table 6), students who had received feedback after assignments achieved better academic results (χ2 = 13.229, P = 0.010; V = 0.272, P = 0.010). As can be seen in the grade percentages in the upper part of Table 6 (compare with the corrected residuals per cell), the percentage of fail and sufficient grades was significantly higher (48.1%) among those who had not received feedback. Among those who had received feedback, the percentage of students who gained good, very good and excellent grades (78.9%) was significantly higher.
A significant relationship therefore exists between feedback received after assignments and student results. However, as would be expected (see lower half of Table 6), the relationship between feedback received after the final assessment and student results was not significant at the established level of confidence (χ2 = 6.632, P = 0.157). In other words, there is no association between feedback received after completing the teaching and learning process and the final grade. These results are coherent with the time sequence established in the university's educational model, whereby feedback received after the issue of a final grade can have no retroactive influence on that grade.
In analysing the relationship between the provision of feedback and student satisfaction (see Table 7), a positive association was found between student satisfaction with the general functioning of the course and feedback received after performing assignments (χ2 = 16.602, P = 0.002; V = 0.309, P = 0.002) and after the final assessment (χ2 = 25.159, P = 0.000; V = 0.375, P = 0.000). However, the association for feedback received after the final assignment was slightly stronger.
In relation to feedback after an assignment (upper half of Table 7) the percentage of
students who were dissatisfied or very dissatisfied was lower (4.1%) among those who had
received feedback after undertaking these tests than among those who had not received
Table 5 Evaluation of feedback after final assessment
Neither agree nor
disagree (%)
agree (%)
Total (n)
Information on errors made 13.7 14.7 29.5 29.5 12.6 100% (95)
Information on correct
10.5 10.5 27.4 33.7 17.9 100% (91)
Information on how to
improve work done
10.5 16.9 38.9 26.3 7.4 100% (95)
Information on further
learning on the subject
11.7 16.0 37.2 22.3 12.8 100% (94)
Source: Author
286 High Educ (2010) 59:277–292
Table 6 Feedback received by final course grades
Columns: Fail | Sufficient | Good | Very good | Excellent | Association
No feedback after completing an assignment during the course: 11.1% (2.8) | 37.0% (2.0) | 29.6% (-1.5) | 18.5% (-1.2) | 3.7% (-0.2) | F=13.229 (4 d.f.) (P=0.010); V=0.272 (P=0.010)
Feedback after completing an assignment during the course: 1.3% (-2.8) | 19.7% (-2.0) | 44.7% (1.5) | 29.6% (1.2) | 4.6% (0.2)
Total: 2.8% | 22.3% | 42.5% | 27.9% | 4.5% | n=179
No feedback after final assignment: 6.3% (1.6) | 27.8% (1.6) | 40.5% (-0.3) | 21.5% (-1.7) | 3.8% (-0.3) | F=6.632 (4 d.f.) (P=0.157); Cramér's V not applicable
Feedback after final assignment: 1.9% (-1.6) | 17.9% (-1.6) | 42.5% (0.3) | 33.0% (1.7) | 4.7% (0.3)
Total: 3.8% | 22.2% | 41.6% | 28.1% | 4.3% | n=185
Source: Author. The total percentages differ between the two types of feedback because not all students chose to undertake assignments during the course
Table 7 Feedback received by satisfaction with the general functioning of the course
Columns: Very dissatisfied | Dissatisfied | Neither satisfied nor dissatisfied | Satisfied | Very satisfied | Association
No feedback after completing an assignment during the course: 14.8% (3.5) | 3.7% (0.3) | 18.5% (1.7) | 48.1% (-1.1) | 14.8% (-1.4) | F=16.602 (4 d.f.) (P=0.002); V=0.309 (P=0.002)
Feedback after completing an assignment during the course: 1.4% (-3.5) | 2.7% (-0.3) | 8.2% (-1.7) | 59.9% (1.1) | 27.9% (1.4)
Total: 3.4% | 2.9% | 9.8% | 58.0% | 25.9% | n=174
No feedback after final assignment: 7.7% (2.8) | 7.7% (2.8) | 15.4% (2.4) | 51.3% (-1.5) | 17.9% (-2.2) | F=25.159 (4 d.f.) (P=0.000); V=0.375 (P=0.000)
Feedback after final assignment: 0% (-2.8) | 0% (-2.8) | 5.0% (-2.4) | 62.4% (1.5) | 32.7% (2.2)
Total: 3.4% | 3.4% | 9.5% | 57.5% | 26.3% | n=179
Source: Author. The total percentages differ between the two types of feedback because not all students chose to undertake an assignment during the course
feedback (18.5%). In the case of feedback after the final assessment (lower half of Table 7),
more of the students who had received this type of feedback (95.1%) than of those who had
not (69.2%) were satisfied or very satisfied with the general functioning of the course.
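The association analysis reported above can be illustrated with a short sketch. This is not the authors' code, and treating the reported "F" statistic as a Pearson chi-square test of independence is our inference; the counts below are reconstructed from the row percentages and group sizes implied by the upper half of Table 7 (no feedback: n = 27; feedback: n = 147), so the sketch reproduces the reported values.

```python
# Chi-square test of independence on a 2 x 5 contingency table, Cramér's V
# as effect size, and adjusted standardized residuals (the kind of values
# shown in parentheses in Tables 6 and 7).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [4, 1, 5, 13, 4],    # no feedback after an assignment (n = 27)
    [2, 4, 12, 88, 41],  # feedback after an assignment (n = 147)
])

chi2, p, dof, expected = chi2_contingency(observed)

n = observed.sum()
k = min(observed.shape) - 1          # min(rows, cols) - 1
cramers_v = np.sqrt(chi2 / (n * k))  # Cramér's V

# Adjusted standardized residuals indicate which cells drive the association
row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
adj_resid = (observed - expected) / np.sqrt(
    expected * (1 - row_tot / n) * (1 - col_tot / n)
)

print(f"chi2 = {chi2:.3f} ({dof} d.f.), p = {p:.3f}")  # chi2 = 16.602, p = 0.002
print(f"Cramér's V = {cramers_v:.3f}")                 # V = 0.309
print(np.round(adj_resid, 1))  # 3.5 in the 'very dissatisfied / no feedback' cell
```

Note that Cramér's V is only interpreted when the chi-square test is significant, which is why Table 6 reports it as "not applicable" for the non-significant final-assignment comparison (P=0.157).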
Discussion and conclusions
In regard to online environments, studies by Dunn et al. (2003), Lou et al. (2003) and
Williams et al. (2006) are worthy of mention: although markedly theoretical, they began
the shift from feedback studies in the face-to-face context to feedback studies applied in
online teaching and learning environments. Our research, by taking an empirical focus,
attempts to advance beyond discussion and prescription regarding feedback in fully online
educational environments and its semantic dimension, and, moreover, to justify the need
for feedback by making clear the positive association between feedback and student
satisfaction and performance.
In relation to types of feedback, as we have observed, feedback offered during the
continuous assessment process (answering students' doubts) is the most widespread form of
feedback in online classrooms. From the viewpoint of the feedback's semantic dimension,
our results allow us to conclude that this feedback is basically characterised by information
on how to improve work and how to take learning further. That is, in this kind of feedback
the elaboration component is more often present than the verification one. We could
therefore conclude that this type of feedback fulfils a formative or regulatory role: it not
only provides a solution, but also helps to improve a student's work. In accordance with our
theoretical framework, feedback during the continuous assessment process is feedback that
fosters interactive regulation of the teaching and learning process.
Feedback given after an assignment is the second most common type of feedback, and is
more present than feedback provided after the final assessment. Although both kinds of
feedback are necessary, as indicated above, regulation of learning in online environments
is more retroactive than proactive and more oriented towards error correction than towards
the consolidation or furthering of learning. Both types of feedback basically provide
information about errors made and give the correct answer, rather than information about
how to improve work; the main feedback component is therefore the verification one. For
that reason, these two kinds of feedback, which communicate results, cannot be considered
formative in comparison with feedback provided during a continuous assessment process
(Perrenoud 1998), given that they concentrate more on the errors made than on giving
information about how to improve work. To this end, the results obtained allow us to
affirm that, even though the techno-pedagogical design of the courses studied is based on a
continuous assessment process, this design does not in itself contain the formative
component that would allow students to improve their learning process.
However, despite the fact that the assessment's formative character is of little significance
in the specific case of feedback after an assignment, the results obtained show a statistical
relationship between feedback and learning results (students' satisfaction and final grades).
This allows us to claim the relevance of feedback in favouring self-regulatory competences
within distance teaching and learning practices.
Finally, we point out some of the limitations of our work, from which we identify future
lines of research. On the one hand, as explained in the method section, the results we have
obtained are only meaningful within the frame of the courses analysed. It would therefore
be interesting to carry out similar research with a larger sample made up of different types
of courses. On the other hand, taking into account the feedback conceptualisation proposed
by Narciss (Narciss 2004, 2008; Narciss et al. 2004; Narciss and Huth 2004, 2006), other
dimensions that define feedback processes could be analysed, for example, the structural
dimension (i.e. the characteristics of feedback within a specific context) or the
motivational function of feedback (i.e. the characteristics of feedback when it promotes
students' motivation).
To conclude, we would like to emphasise that, despite the evidence found in the literature
reviewed highlighting the relevance of feedback in online environments, more teacher
training should be given on this topic. In other words, the training of university teachers
for asynchronous and written contexts should take into account the development of
strategies that provide teachers with knowledge of the types and characteristics of
feedback (Egan and Akdere 2005; Goodyear et al. 2001; Williams 2003). Feedback, as a
tool to promote the regulation of learning, could be the key to good teaching practice,
especially in online environments.
References

Agresti, A. (2002). Categorical data analysis (2nd ed.). Hoboken, NJ: Wiley.
Allal, L. (1979). Stratégies d'évaluation formative: Conceptions psycho-pédagogiques et modalités d'application. In L. Allal, J. Cardinet, & P. Perrenoud (Eds.), L'évaluation formative dans un enseignement différencié. Berne: Peter Lang.
Allal, L. (1988). Vers un élargissement de la pédagogie de maîtrise: Processus de régulation interactive, rétroactive et proactive. In M. Huberman (Ed.), Assurer la réussite des apprentissages scolaires? Les propositions de la pédagogie de maîtrise (pp. 86–126). Neuchâtel: Delachaux & Niestlé.
Allal, L. (1993). L'évaluation formative des processus d'apprentissage: Le rôle des régulations métacognitives. In R. Hivon (Ed.), L'évaluation des apprentissages (pp. 57–74). Sherbrooke, Québec: Éditions du CRP.
Andrews, D., Nonnecke, B., & Preece, J. (2003). Electronic survey methodology: A case study in reaching hard to involve internet users. International Journal of Human–Computer Interaction, 16(2), 185–210.
Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213–238.
Best, S. J., & Krueger, B. S. (2004). Internet data collection. Thousand Oaks, CA: Sage.
Boekaerts, M. (1997). Self-regulated learning: A new concept embraced by researchers, policy makers, educators, teachers, and students. Learning and Instruction, 7(2), 161–186.
Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment & Evaluation in Higher Education, 24(4), 413–426. doi:10.1080/0260293990240405.
Butler, D., & Winne, P. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281.
Chickering, A., & Ehrmann, S. C. (2008). Implementing the seven principles: Technology as lever. The TLT Group. Accessed 3 June 2009.
Chickering, A. W., & Gamson, Z. F. (1991). Applying the seven principles to good practice in undergraduate education. San Francisco: Jossey-Bass.
Cohen, V. B. (1985). A reexamination of feedback in computer-based instruction: Implications for instructional design. Educational Technology, 25(1), 33–37.
Collis, B., de Boer, W., & Slotman, K. (2001). Feedback for web-based assignments. Journal of Computer Assisted Learning, 17(3), 306–313. doi:10.1046/j.0266-4909.2001.00185.x.
Cramér, H. (1999). Mathematical methods of statistics. Princeton: Princeton University Press.
Dunn, L., Morgan, C., O'Reilly, M., & Parry, S. (2003). The student assessment handbook: New directions in traditional & online assessment. London: Routledge Falmer.
Egan, T., & Akdere, M. (2005). Clarifying distance education roles and competencies: Exploring similarities and differences between professional and student-practitioner perspectives. American Journal of Distance Education, 19(2), 87–103. doi:10.1207/s15389286ajde1902_3.
Evans, J. R., & Mathur, A. (2005). The value of online surveys. Internet Research, 15(2), 195–219.
Fowler, F. J. (2001). Survey research methods (3rd ed.). Thousand Oaks, CA: Sage.
Fox, J., Murray, C., & Warm, A. (2003). Conducting research using web-based questionnaires: Practical, methodological, and ethical considerations. International Journal of Social Research Methodology, 6(2), 167–180. doi:10.1080/13645570210142883.
Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London: MPG Books.
Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3–31.
Goodyear, P., Salmon, G., Spector, M., Steeples, C., & Tickner, S. (2001). Competences for online teaching: A special report. Educational Technology Research and Development, 49(1), 65–72.
Kramarski, B., & Zeichner, O. (2001). Using technology to enhance mathematical reasoning: Effects of feedback and self-regulation learning. Educational Media International, 38(2–3), 78–81.
Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1(4), 279–308. doi:10.1007/BF01320096.
Leung, D., & Kember, D. (2005). Comparability of data gathered from evaluation questionnaires on paper and through the internet. Research in Higher Education, 46(5), 571–591.
Ley, K., & Young, D. B. (2001). Instructional principles for self-regulation. Educational Technology Research and Development, 49(2), 93–103.
Liebetrau, A. M. (1983). Measures of association. Beverly Hills: Sage.
Lou, Y., Dedic, H., & Rosenfield, S. (2003). A feedback model and successful e-learning. In S. Naidu (Ed.), Learning & teaching with technology: Principles and practices (pp. 249–269). London: Kogan Page.
Macdonald, J. (2001). Exploiting online interactivity to enhance assignment development and feedback in distance education. Open Learning, 16(2), 179–189. doi:10.1080/02680510120050334.
Macdonald, J., & Twining, P. (2002). Assessing activity-based learning for a networked course. British Journal of Educational Technology, 33(5), 603–618. doi:10.1111/1467-8535.00295.
Mason, J., & Bruning, R. (2001). Providing feedback in computer-based instruction: What the research tells us. Center for Instructional Innovation, University of Nebraska–Lincoln. Accessed 20 Nov 2008.
Miller, T. (2009). Formative computer-based assessment in higher education: The effectiveness of feedback in supporting student learning. Assessment & Evaluation in Higher Education, 34(2), 181–192.
Morgan, C., & O'Reilly, M. (1999). Assessing open and distance learners. London: Kogan Page.
Mory, E. H. (2004). Feedback research revisited. In D. Jonassen (Ed.), Handbook of research on educational communications and technology (pp. 745–785). Mahwah, NJ: Lawrence Erlbaum Associates.
Narciss, S. (2004). The impact of informative tutoring feedback and self-efficacy on motivation and achievement in concept learning. Experimental Psychology, 51(3), 214–228.
Narciss, S. (2008). Feedback strategies for interactive learning tasks. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (AECT). New Jersey, USA: Lawrence Erlbaum.
Narciss, S., & Huth, K. (2004). How to design informative tutoring feedback for multimedia learning. In H. Niegemann, R. Brünken, & D. Leutner (Eds.), Instructional design for multimedia learning (pp. 181–195). Münster: Waxmann.
Narciss, S., & Huth, K. (2006). Fostering achievement and motivation with bug-related tutoring feedback in a computer-based training for written subtraction. Learning and Instruction, 16(4), 310–322.
Narciss, S., Körndle, H., Reimann, G., & Müller, K. (2004). Feedback-seeking and feedback efficiency in web-based learning: How do they relate to task and learner characteristics? In P. Gerjets, P. A. Kirschner, J. Elen, & R. Joiner (Eds.), Instructional design for effective and enjoyable computer-supported learning. Proceedings of the first joint meeting of the EARLI SIGs Instructional Design and Learning and Instruction with Computers (pp. 377–388). Tübingen: Knowledge Media Research Center.
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good practice. Studies in Higher Education, 31(2), 199–218.
Perrenoud, P. (1998). From formative evaluation to a controlled regulation of learning processes: Towards a wider conceptual field. Assessment in Education, 5(1), 85–102. doi:10.1080/0969595980050105.
Rice, M., Mousley, J., & Davis, R. (1994). Improving student feedback in distance education: A research report. In T. Evans & D. Murphy (Eds.), Research in distance education (pp. 52–62). Geelong (Australia): Deakin University Press.
Scardamalia, M., & Bereiter, C. (1994). Computer support for knowledge-building communities. Journal of the Learning Sciences, 3(3), 265–283. doi:10.1207/s15327809jls0303_3.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
Tunstall, P., & Gipps, C. (1996). Teacher feedback to young children in formative assessment: A typology. British Educational Research Journal, 22(4), 389–404. doi:10.1080/0141192960220402.
Vermunt, J., & Verloop, N. (1999). Congruence and friction between learning and teaching. Learning and Instruction, 9, 257–280. doi:10.1016/S0959-4752(98)00028-0.
Vermunt, J. D., & Verschaffel, L. (2000). Process-oriented teaching. In R. J. Simons, J. van der Linden, & T. Duffy (Eds.), New learning. Dordrecht: Kluwer.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.
Williams, P. (2003). Roles and competencies for distance education programs in higher education institutions. American Journal of Distance Education, 17(1), 45–57. doi:10.1207/S15389286AJDE1701_4.
Williams, P., & Hellman, C. (2004). Differences in self-regulation for online learning between first- and second-generation college students. Research in Higher Education, 45(1), 71–82.
Williams, D., Howell, S., & Hricko, M. (2006). Online assessment, measurement and evaluation: Emerging practices. USA: Information Science Publishing.
Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education, 45, 477–501. doi:10.1023/A:1023967026413.
Zimmerman, B. J. (1995). Self-regulation involves more than metacognition: A social cognitive perspective. Educational Psychologist, 30(4), 217–221. doi:10.1207/s15326985ep3004_8.
... Online environments are conducive for developing collaborative and creative authentic assessments (McVey, 2016). Nevertheless, several studies have highlighted challenging aspects of assessment, especially homework and formative assessment in online teaching (Amasha et al., 2018;Eichler & Peeples, 2013;Espasa & Meneses, 2010;Tinoca & Oliveira, 2013). Anderson (1998) links teachers' choice of assessment strategies to their centeredness of teaching, thus teaching and assessment components are highly Tinoca & Oliveira, 2013) proposed a conceptual framework for assessment in an online environment based on four dimensions: authenticity, consistency, transparency, and practicality. ...
... These platforms enable formative assessment by collecting student evidence and providing them with feedback. Moreover, timely feedback is associated with higher levels of student performance and satisfaction (Espasa & Meneses, 2010). This is crucial for students to achieve their learning goals, and for teachers to reflect on their online practices (Faber et al., 2017). ...
... Given the overwhelming ERT context, teachers in our study were challenged in the 'new' learning environment due to insufficient prior exposure and training, and as such teachers lacked the time and skills necessary to implement more authentic and student-centered assessments. This further highlight concerns about online assessment (Amasha et al., 2018;Eichler & Peeples, 2013;Espasa & Meneses, 2010;Tinoca & Oliveira, 2013), and confirm findings by Anderson (1998) linking the choice of assessment tools to the use of student-centered teaching methods. As shown in Table 3, 82% of teachers stated they promptly returned all assignments to their students and provided individualized useful feedback. ...
Full-text available
The COVID-19 pandemic necessitated school closures globally, resulting in an abrupt move to online/distance teaching or emergency remote teaching (ERT). Teachers and students pivoted from face-to-face engagement to online environments, thus impacting curriculum, pedagogy, and student outcomes across a variety of disciplines. In this paper, the authors focus on science/STEM teachers’ experiences with online teaching and learning in a Canadian context during the pandemic. Qualitative and quantitative data were collected through an online questionnaire administered to 75 Grade 1–12 science/STEM teachers in a Canadian province in May–July 2020. Through the TPACK framework and self-efficacy theory, the authors explore i) curriculum planning and implementation in online settings, ii) assessment practices and their effectiveness, and iii) student outcomes, as observed by the teachers. Results indicate that teachers used a variety of platforms, and choice of platform was mainly due to user-friendliness and interactivity, or administrative decision making. Despite teachers organizing online lessons during ERT, gaps were identified in teachers’ TPACK framework and self-efficacy, thus impacting their curriculum development, pedagogical approaches, and assessment practices. In general, teaching strategies included pre-recorded videos and self-directed learning in which teachers assigned specific tasks for students to perform independently. Teachers prioritized subject content and covering curriculum objectives over creative and student-centered pedagogical approaches. Assessment techniques employed were viewed by teachers as unauthentic and generally ineffective. Moreover, teachers reported difficulties addressing student needs and abilities, resulting in challenges providing equitable and inclusive online teaching. Finally, online teaching was viewed negatively by most teachers, in terms of student engagement and outcomes.
... Previous research focusing on online curriculum delivery has highlighted the importance of building online learning communities that are collaborative, in which students are constantly and actively engaged (Espasa and Meneses 2010;Kuo et al. 2014). Cross (1998, 4) states that learning communities involves "groups of people engaged in intellectual interaction for the purpose of learning". ...
Full-text available
This article is vested on the need for higher education educators to be reflective on their practices in order to configure effective ways to interact with the students and knowledge for specific courses. It is uncontested that education systems globally are under constant pressure to respond to the changing needs of societies. The outbreak of COVID-19 has reminded us that the complexity of education needs responsive practices to facilitate effective teaching and learning across all levels of schooling globally. All over the world, the normative ways of teaching and learning evolved drastically in the first quarter of the 2020 academic year when teachers and students found online offerings to be the dominant option available as a sequel to the pandemic conditions. In South Africa specifically, students and teachers were thrust into virtual teaching and learning situations with the majority of them having no preparation for this shift. This article presents an auto-ethnographical account of the knowledge gaps in the teaching and learning of mathematics education in a first-year education course in an online space. We used auto-ethnography to discuss our experiences of teaching limits and continuity. We argue that teaching the topic on an online platform constrain student teachers' procedural thinking, conceptual development, and demonstration of their thought processes during mathematics learning and assessment. We also discuss our experiences of developing assessment tasks for the topic and how students identified cheating mechanisms to answer questions in assessments.
... Despite the acknowledged impact of feedback on learning and engagement [1], Massive Open Online Courses (MOOCs) represent a learning context where delivering timely and personalized feedback regards an ongoing challenge [2], [3]. Specifically, the wide instructor-learners ratio renders difficult the manual provision of interventions that satisfy the learners' needs [4]. ...
... In addition, Price points out that the tone and word selection of feedback comments are essential for feedback effectiveness, as distance learners have limited opportunities to understand a teacherís sense of humor and commenting style (Price, 1997). To sum up, it can be concluded that feedback in the distance learning process: -helps improve studentsí learning process, thereby allowing them to achieve their goals and progress (Adcroft, 2011); -motivates students to improve their present and future performance, while also highlighting their strengths and weaknesses (Weaver, 2006); -creates opportunities for a dialogue between the teacher and the student (Beaumont et al., 2011); -promotes reflection and self-regulation, thereby encouraging the autonomy of learners (Espasa & Meneses, 2010). Based on literature review, it can be concluded that effective feedback is important for both the classroom and distance learning processes. ...
Full-text available
In the distance learning process, teachers, students, parents and institutions must continue the teaching and learning process despite the various limitations. During the face-to-face learning process, instructions, concepts and feedback can be verbally communicated within a relatively short period of time; while teachers in the distance learning process must briefly express their thoughts in written form so that every student can clearly understand what is being done. It is not always an easy task. One of the challenges of the distance learning process is to find ways how to provide feedback to students in a timely and meaningful way to help them improve their performance, actively engage in the learning process, and not to lose the link between a student and a teacher. The article, using theoretical (scientific and methodological literature analysis) research methods, analyzes the concept and theoretical models of the distance learning process, describes the preconditions for feedback and the importance of feedback in the teaching and learning process. By analysing the importance of feedback, suggestions for improving the distance learning process have been developed.
... ese findings confirm the importance of encouraging student-instructor interaction to promote active learning. Similarly, in a quantitative study of 186 online graduates, the study [59] discovered a significant statistical link between instructor feedback on completed assignments and learning outcomes, as measured by student satisfaction and overall grades. ese results stress the importance of student-instructor contact in student performance and further highlight the importance of satisfaction in online learning. ...
Full-text available
While online learning has always faced criticism regarding social issues such as the lack of engagement, interaction, and communication, the COVID-19 social distancing has increased the public’s concern and criticism. Therefore, the purpose of this research was to investigate and model the social factors affecting students’ satisfaction with online learning. The proposed research model was constructed based on the extensive review of relevant literature to determine the critical social factors examined and validated in this research. The data were collected from a total of 258 students using a quantitative method approach. A structural equation modeling technique was utilized in analyzing the obtained data. The findings of the study reveal that all proposed social factors namely social presence, social interaction, social space, social identity, social influence, and social support were found to significantly affect the students’ satisfaction with online learning. The examined factors account for about 56% of the total variance in students’ satisfaction. Several suggestions and recommendations are provided in line with study limitations.
Full-text available
MARWAN, SAMIHA ABDELRAHMAN MOHAMMED. Investigating Best Practices in the Design of Automated Hints and Formative Feedback to Improve Students' Cognitive and Affective Outcomes. (Under the direction of Thomas W. Price). Timely support is essential for students to learn and improve their performance. However , in large programming classrooms, it is hard for instructors to provide real-time support (such as hints) for every student. While researchers have put tremendous effort into developing algorithms to generate automated programming support, few controlled studies have directly evaluated its impact on students' performance, learning and affective outcomes. Additionally, while some studies show that automated support can improve students' learning , it is unclear what specific design choices make them more or less effective. Furthermore, few, if any, prior studies have investigated how well these results can be replicated in multiple learning contexts. Inspired by educational theories and effective human feedback, my dissertation has the goal of designing and evaluating different design choices of automated support, specifically next-step hints and formative feedback, to improve students' cognitive and affective outcomes in programming classrooms. In this thesis I present five studies that attempt to overcome limitations in existing forms of automated support to improve students' outcomes, specifically hints and formative feedback. Hints may be ineffective when they: 1) are hard to interpret, 2) fail to engage students to reason critically about the hint, and 3) fail to guide students to effectively seek help. In Study 1, I addressed the first two challenges by evaluating the impact of adding textual explanations to hints (i.e. explaining what the hint was suggesting), as well as adding self-explanation prompts to hints (i.e. asking students to reflect on how to use the hint). 
I found that hints with these two design features together increased learners' learning as evidenced by the increase in their performance on future isomorphic programming tasks (without hints available). In Study 2, I tackled the third challenge in two phases. First, I created a preliminary taxonomy of unproductive help-seeking behaviors during programming. Then, using this taxonomy, I designed and evaluated a novel user interface for requesting hints that subtly encourages students to seek help with the right frequency, estimated with a data-driven algorithm. This led to an improvement in students' help-seeking behavior. In Study 3, I replicated my first two studies in an authentic classroom setting, across several weeks, with a different population, to investigate the consistency and generalizability of my results. I found that hints with textual explanations and self-explanation prompts improved students’ programming performance, and increased students’ programming efficiency in homework tasks, but the effectiveness of hints was not uniform across problems. Formative feedback is effective when it is immediate, specific, corrective and positive. Learning theories, and empirical human tutoring studies show that such elements of feed- back can improve both students’ cognitive and affective outcomes. While many automated feedback systems have some of these feedback elements, few have them all (such as provid- ing only corrective feedback but not encouraging positive feedback), and those were only evaluated on a small set of short programming tasks. In Study 4, I tackled this gap in re- search by developing an adaptive immediate feedback (AIF) system, using expert-authored rules, that provides students with immediate positive and corrective feedback on their progress while programming. I found that the AIF system improves students’ performance, engagement in programming, and intentions to persist in computer science. 
Lastly, in Study 5 I developed a hybrid data-driven algorithm to generate feedback that can be easily scaled across different programming tasks, with high accuracy and low expert effort. I then used this algorithm to design an improved version of the AIF system (i.e. AIF 3.0), with a more granular feedback level. In Study 5, I deployed and evaluated the AIF 3.0 system in an authentic CS0 classroom study over several weeks. I found that the AIF 3.0 system improved students’ performance and the proportion of students who fully completed the programming tasks, indicating increased persistence. Studies 1, 2, and 4 are laboratory studies, while Studies 3 and 5 are classroom studies, all conducted with iSnap, a block-based programming environment. The contributions of this thesis include: 1) the discovery of effective design choices for automated hints, 2) the design of adaptive immediate feedback systems, leveraging an expert-authored and hybrid data-driven models, 3) an empirical evaluation of the impact of automated hints and formative feedback on learners’ cognitive, and affective outcomes, and lastly 4) replication evaluations of hints and feedback in authentic classroom settings, suggesting consistent effects across different populations and learning contexts. These contributions inform researchers’ knowledge of challenges of automated support designs using either data-driven or expert-authored models, as well as challenges in classroom studies for open-ended programming tasks; and how they can affect students’ outcomes, which overall can guide future research directions in computing education and human- computer interaction areas.
Full-text available
The second edition of E-Learning in the 21st Century provides a coherent, comprehensive, and empirically-based framework for understanding e-learning in higher education. Garrison draws on his decades of experience and extensive research in the field to explore the technological, pedagogical, and organizational implications of e-learning. Most importantly, he provides practical models that educators can use to realize the full potential of e-learning. This book is unique in that it focuses less on the long list of ever-evolving technologies and more on the search for an understanding of these technologies from an educational perspective.
Networking provides the means to deliver enhancements to assignment development and feedback which have not previously been possible for ODL courses. This paper describes the experimental introduction and evaluation of network-delivered model answers and peer review as additional formative feedback on assessment. The enhancements assisted students in attuning themselves to the writing demands of the course, and there may be particular times in a course when they are of most value. Use of a network allowed for delivery within a controlled time frame, whilst providing an interactive environment for debate on alternative perspectives.
Online Assessment, Measurement and Evaluation: Emerging Practices provides a view of the possibilities and challenges facing online educators and evaluators in the 21st century. As technology evolves and online measurement and assessment follow, the book uses established evaluation principles to employ new tools in evaluation systems that support stakeholders, clarify values and definitions of the evaluation methods, encourage thought about important questions, and refresh readers' memories of contexts and backgrounds. It also adheres to evaluation standards of feasibility, propriety, utility, and accuracy in order to help participants realize that technical issues and methods are only worthwhile when they are in the service of helping people make thoughtful choices.
Chapter 3 applies basic methods of inference to two-way contingency tables. It shows how to construct confidence intervals for association parameters (such as the odds ratio) in 2-by-2 tables, presents chi-squared tests of independence in two-way contingency tables, shows residual analyses and other ways to follow up chi-squared tests, and presents more powerful methods for tables with ordered classifications. It also discusses small-sample methods for tests and confidence intervals, such as Fisher's exact test.
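The contingency-table methods summarised above can be illustrated with a short sketch in Python using SciPy. The 2-by-2 counts below are hypothetical, chosen only to show a chi-squared test of independence, a Wald-type 95% confidence interval for the odds ratio, and Fisher's exact test for small samples.

```python
import math
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 contingency table (rows: groups, columns: outcomes)
table = [[30, 10],
         [15, 25]]

# Chi-squared test of independence (Yates' continuity correction by default)
chi2, p, dof, expected = chi2_contingency(table)

# Sample odds ratio with a 95% Wald confidence interval on the log scale
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Fisher's exact test, appropriate when expected cell counts are small
_, p_exact = fisher_exact(table)

print(f"chi2={chi2:.2f} (df={dof}), p={p:.4f}")
print(f"odds ratio={odds_ratio:.2f}, 95% CI [{ci_lo:.2f}, {ci_hi:.2f}]")
print(f"Fisher's exact p={p_exact:.4f}")
```

The log-scale confidence interval reflects the fact that the sampling distribution of the log odds ratio is approximately normal, with standard error given by the square root of the sum of reciprocal cell counts.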
Feedback is an essential construct for many theories of learning and instruction, and an understanding of the conditions for effective feedback should facilitate both theoretical development and instructional practice. In an early review of feedback effects in written instruction, Kulhavy (1977) proposed that feedback’s chief instructional significance is to correct errors. This error-correcting action was thought to be a function of presentation timing, response certainty, and whether students could merely copy answers from feedback without having to generate their own. The present meta-analysis reviewed 58 effect sizes from 40 reports. Feedback effects were found to vary with control for presearch availability, type of feedback, use of pretests, and type of instruction and could be quite large under optimal conditions. Mediated intentional feedback for retrieval and application of specific knowledge appears to stimulate the correction of erroneous responses in situations where its mindful (Salomon & Globerson, 1987) reception is encouraged.