Continuous assessment and support for
learning: an experience in educational
innovation with ICT support
in higher education
César Coll, María J. Rochera,
Rosa M. Mayordomo, Mila Naranjo
Dept. of Developmental & Educational Psychology,
University of Barcelona
Spain
Dr. César Coll. Dpto. de Psicología Evolutiva y de la Educación, Universidad de Barcelona, Passeig de la Vall
d'Hebron, 171. 08035 Barcelona, Spain. E-mail: ccoll@ub.edu
© Education & Psychology I+D+i and Editorial EOS (Spain)
Electronic Journal of Research in Educational Psychology, N. 13, Vol 5 (3), 2007. ISSN: 1696-2095. pp. 783-804
Abstract

In this article we present and discuss an integrated system of continuous assessment (ISCA) in higher education, designed to collect multiple sources of evidence of students' knowledge and abilities, and to facilitate the monitoring and support of their learning processes. Inspired by a socio-constructivist approach, which assumes a close relation between teaching, learning and assessment, this system combines different types of activities, organized around blocks of broad themes, and aimed at gathering information on content comprehension as well as its application and functional use in authentic contexts.

The educational innovation experience within which this system was developed and applied took place during the 2005-2006 academic year. It was carried out in three class groups of "Educational Psychology", a mandatory course in the Bachelor's degree in Psychology, based on ECTS (European Credit Transfer System) credits, using a student-centered teaching methodology with the support of information and communication technology (ICT). The experience was planned and developed by the consolidated group for teaching innovation in educational psychology (GIDPE) at the University of Barcelona. Results support a positive assessment of students' academic achievement, as well as their satisfaction with participation in the experience.

Two conclusions are worth mentioning. The first is that the ISCA proved to be an effective instrument, useful for gathering evidence of learning processes and for administering and managing different forms of educational help to students during these processes. The second is that the strength and usefulness of the ISCA lie in the integration of its options and criteria as a whole, rather than in the application of any one criterion or option separately.

Key words: continuous assessment, educational support, higher education, educational innovation, authentic assessment, information and communication technologies.
Received: 07-23-07 Initial acceptance: 09-25-07 Final acceptance: 10-04-07
Introduction

In recent years, higher education has evolved toward the incorporation of new evaluation systems, alternatives to traditional ones, under names such as "authentic assessment", "performance assessment" or "alternative assessment" (Ahumada, 2005; Biggs, 2005; Birenbaum et al., 2006; Díaz Barriga, 2006). These systems share a new way of understanding the assessment process insofar as they focus on learning situations drawn from real life and on significant, relevant, complex problems which require students to demonstrate a much broader set of knowledge, skills and attitudes than can be displayed through oral or written exams with brief or extended responses.
On the other hand, the need to identify generic or transversal competencies as well as profession-specific competencies has been one of the challenges and objectives put forward within the framework of the European convergence process. Competency here is understood to mean an adequately learned ability to perform a task, function or role relating to a particular work context, in this case the area of Educational Psychology, which integrates knowledge, skills and attitudes (Roe, 2003; de la Fuente et al., 2005; VV.AA., 2005). Identification of the role and tasks which an educational psychologist performs, comprehension of educational psychology texts, application of psychoeducational knowledge to educational situations and cases, cooperative work, and regulation of individual and group work and learning are some of the competencies which students of Educational Psychology should acquire.
The process of European convergence has also prompted the implementation of teaching methodologies centered on students' autonomous work. To this end, students are considered to need competencies for regulating individual and group work: establishing learning goals, planning courses of action, selecting suitable strategies and resources, and persisting in the resolution, review and reorientation of tasks in order to meet predetermined objectives. As numerous studies have shown (Torrano & González, 2004), self-regulation is a complex process in which diverse factors intervene, including cognitive and metacognitive, affective, motivational and volitional ones (Pintrich, 2000); the same can be said of the other competencies mentioned. In this context, continuous evaluation systems offer teachers the chance to follow students' learning processes with precision and to gather multiple sources of evidence of the results attained and the degree to which competencies have been developed (Delgado et al., 2005). From our perspective, the central question consists of designing and incorporating into university teaching evaluation systems which not only facilitate gathering this evidence, but which teachers can also use to adequately support students in acquiring and using competencies for the autonomous regulation of their individual and group learning processes (Boekaerts, 1999; Allal & Wegmuller, 2004). In summary, the teacher's follow-up, tutoring and support of students' work is of great importance and doubtlessly constitutes one of the fundamental elements for success in teaching and learning processes.
The need to use evaluation for pedagogical ends, without overlooking or undervaluing the importance of final credentials, has been highlighted by numerous authors (see, for example, Schunk & Zimmerman, 1998; Coll & Onrubia, 1999; Wiliam, 2000; Broadfoot & Black, 2004; McDonald, 2006). This perspective emphasizes not only "assessment of learning", but also, and especially, "assessment for learning" (Birenbaum et al., 2006), accentuating the developmental function of assessment (Nunziati, 1990; Allal, 1991) and the importance of providing students with information about their own learning process, as well as possible ways of improving it.
In this context, several studies have focused on applying ICT to managing and driving the assessment of student learning in higher education (for example, Lara, 2001, 2003; Rodríguez, 2002). A good share of these studies is oriented toward the use of ICT as an instrument for assessing learning. In the study presented here, however, ICT is used rather as support for a continuous assessment system: with developmental purposes, as support for students' reflection on and regulation of their learning process, and with formative purposes, as support for the teacher's tutorial work on students' learning.
Starting from a perspective that relates assessment to educational help in promoting learning, the present study has three objectives: (i) to introduce and discuss the fundamental criteria and options which underpin an integrated system of continuous assessment (ISCA) in higher education; (ii) to illustrate this system by describing an experience in teaching innovation supported by a case analysis methodology and by the use of ICT incorporated into continuous assessment; and (iii) to introduce and discuss some particularly important results of this experience from the point of view of continuous assessment.
Designing a continuous assessment system integrated into learning activities: options and criteria
The experience of incorporating an integrated system of continuous assessment into a given university teaching practice forms part of a broader teaching innovation project [1] developed by the Teaching Innovation Group in Educational Psychology at the University of Barcelona [2]. The experience was carried out over the 2005-2006 academic year in three experimental groups, with a total of 186 students, in the "Educational Psychology" course, a required core subject in the 5th semester of the Bachelor's program in Psychology at the University of Barcelona. The subject was designed in ECTS credits (European Credit Transfer System), using a case analysis and problem-solving methodology and with the support of the technological tools offered by the Moodle [3] virtual platform. Use of this platform enabled a blended teaching and learning context which combines face-to-face and distance work, as well as the use of resources that allow students to reflect on their work and their learning and the teacher to guide and oversee this process.
The assessment system is based on a theoretical perspective linked to socio-cultural constructivism (Coll, Martín & Onrubia, 2001), according to which assessment, educational help and learning are closely related. From this perspective, assessment is considered a fundamental instrument by which the teacher can regulate his or her teaching activity along the way and by which the student can regulate his or her own learning process (Mauri & Rochera, 1997). As explained below, in order to fulfill this role, assessment activities are inserted into teaching and learning activities, organized and sequenced around broad thematic areas, and teachers are encouraged to provide follow-up, support and tutoring to students during the development of assessment activities.
[1] "L'ensenyament de la psicologia de l'educació des de la perspectiva de la convergència europea: una proposta basada en el treball de l'alumne i en l'ús de les noves tecnologies de la informació i la comunicació" [Teaching Educational Psychology from the perspective of European convergence: a proposal based on students' work and the use of ICT] (Reference 2003 MQD 00149. Director: C. Coll. 2003 call for grants to fund projects for improving the quality of teaching at Catalan universities).
[2] http://www.ub.edu/grintie/
[3] The Moodle platform (http://moodle.org) is distributed under an open-source license (GNU Public License) and, due to its flexibility, can generate diverse settings for teaching and learning.
Integration of assessment activities in the students’ learning activities
We understand assessment as an element inherent in the teaching and learning process and as an instrument at the service of this process. Two reasons justify this statement: (1) the situations and activities used for identifying and assessing what students have learned constitute the nexus between the teaching process laid out by the teacher and the knowledge construction processes performed by students (Coll, Martín & Onrubia, 2001); and (2) assessment activities must be coherent with the other elements which make up the teaching and learning process, especially with the objectives and activities presented throughout this process (Wiliam, 2000; Hargreaves, Earl & Schmidt, 2002; Dochy, 2004; Norton, 2004). From this perspective, if we seek to assess not only students' conceptual knowledge but also their skills in real contexts (Shepard, 2000), it is necessary to integrate assessment into the very learning process that students carry out while they perform teaching and learning activities.
In line with these criteria, in this innovation experience teaching and learning activities are at the same time assessment activities. Activities are not designed around single topic units, but rather around thematic blocks, each of which connects one or more topics. Each thematic block proposes a set of continuous assessment activities which require students to produce different products in a complex case analysis or problem-solving situation. Furthermore, at the end of each thematic block students fill out individual and group self-evaluation reports about their own working and learning process. At the same time, continuous assessment activities are planned in such a way as to facilitate the teacher's follow-up of the students' work process, through written reports returned to students and follow-up tutoring based on the assessment results from each thematic block. All these aspects are presented in greater detail in the sections which follow.
In this way the continuous evaluation system seeks to fulfill its pedagogical claims: on one hand, helping teachers to make decisions which improve their teaching practice as it relates to student learning, and to adjust their educational assistance as a function of the progress, difficulties or setbacks which students experience (formative assessment); on the other hand, helping students to make decisions aimed at improving their learning activity (developmental assessment).
Organization and sequencing of assessment activities around thematic blocks
In order to encourage students to approach knowledge in a more functional, global way, it seemed appropriate to organize the work of this academic course into broad content units. Thematic blocks are content groupings or nuclei which are meaningful in themselves, and learning them can contribute decisively to developing the competencies of Educational Psychology. Within the framework of each thematic block there are teaching and learning activities aimed at understanding the knowledge and applying it in simulated real contexts. These activities, as indicated above, are at the same time assessment activities which allow teachers to collect information on the extent to which students understand the content and on their ability to use what they have learned.
Four thematic blocks were established; each is addressed through the presentation and resolution of a case or problem typical of the demands and tasks of school psychology: fulfilling the functions and tasks of a school guidance counselor at a secondary school (thematic block 1); preparing a talk addressed to parents on the relationship between intelligence, learning strategies and school performance, within the framework of a "Parenting School" (thematic block 2); preparing an interview with the teacher of a child who is showing lack of interest and motivation for learning (thematic block 3); and finally, helping teachers in the process of attending to diversity in the classroom (thematic block 4).
Assessment activities follow a single sequence, with minor variations, across the four thematic blocks. As seen in Table 1, the sequence includes different evaluation activities aimed at gathering information not only about students' comprehension of the content, but also, and especially, about their "performance" ability. In sum, the sequence is organized so that students can demonstrate an increasingly expert approach to, and resolution of, the cases or problems presented as they progress in their comprehension and assimilation of the content covered in the thematic block. Additionally, this organization allows the teacher to offer, within the framework of this sequence, a set of diverse aids, direct and indirect, in person or through ICT, aimed at improving the students' learning process.
Table 1. Sequence of evaluation activities and diversity of educational helps

Integrated system of continuous assessment (ISCA) in higher education

Initial evaluation activities (in each thematic block):
-Initial responses, individual and/or group, to case analyses.
Associated educational helps:
-Helps aimed at raising students' awareness of the initial definition of the situation, and at creating a common definition shared between teachers and students. Students get a first representation of the case or problem and become aware of the need for a deeper understanding than their current knowledge provides. The teacher collects information about students' prior knowledge and their initial representation of the case or problem, thus obtaining a baseline and an anchor point for teaching.

Process evaluation activities (in each thematic block):
-Glossaries.
-Conceptual maps.
Associated educational helps:
-Helps aimed at the control, evaluation and improvement of learning: follow-up and tutoring of the work process in face-to-face situations or through the communication tools provided by the Moodle platform.

Final evaluation activities (in each thematic block):
-Final responses and their comparison with initial responses.
-Individual and group completion of self-evaluation questionnaires for each thematic block.
Associated educational helps:
-Preparation of written reports on the evaluation results for each thematic block, including information about the correction criteria and the degree of goal attainment, with guidance for improving learning.
-Tutoring sessions with feedback of results, in face-to-face and online situations.
Carrying out these activities places the student in simulated real contexts which allow him or her to relate theory to practice and to use the acquired knowledge in a contextualized fashion, while at the same time encouraging the attainment of certain competencies required by the professional practice of an educational psychologist. However, the potential of such case analysis or problem-solving situations to enable the development of professional competencies will only be realized to the extent that students are provided with the educational assistance necessary for them to successfully address or resolve the case or problem in question. This assistance can be facilitated through the use of ICT (Mauri, Colomina & Rochera, 2006).
In this sense, several conditions must be met in order to address the cases or problem situations which form the backbone of the thematic blocks. These include: the learning of significant, core knowledge; the performance of individual and group tasks linked to solving the case or problem; the collection of information on the students' learning process and the feeding back of an assessment to them; as well as follow-up and help from the teacher at different stages in the process.
Teacher support, follow-up and tutoring during the completion of assessment activities
Taken as a whole, the sequence of assessment activities just described provides students with opportunities for acting autonomously in real, complex situations and problems (even if in a simulated context): planning courses of action, deciding what knowledge must be used and how to use it in resolving the case or problem, comparing the initial, tentative resolution with the final one, reflecting on the course of action followed, and thereby reorienting the learning process itself. In our experience, however, students are unlikely to learn to make optimal decisions if they do not receive the necessary support and help at specific moments in the process, especially in the initial stages, and if this support and help does not evolve, being gradually reduced and withdrawn as students' ability to work and learn autonomously increases. In this context, assessment activities become privileged occasions for teachers to provide ongoing support to the students' work and learning process as needed.
In order to obtain evidence of the learning process which students are following, teachers use different instruments and resources enabling them to provide follow-up and support to individual and group work while assessment activities are under way, whether directly or indirectly, in person or online. On one hand, the teacher plans and carries out a series of in-person sessions, some mandatory and some optional, for each thematic block; these facilitate observation of the students' production process. Over the course of these sessions, small groups of four to six students address the resolution of the case, the construction of a glossary, and the elaboration of a conceptual map. In this way they plan the resolution of tasks, share and exchange meanings, identify difficulties and propose solutions. At the same time, the teacher can follow their process in some detail and offer different types of support (further explaining the instructions, providing additional information, resolving doubts, etc.), all aimed at encouraging conscious, reflective and self-regulating activity in students.
The didactic guide for each block, the mandatory readings, reading guidelines and support materials (topic outlines, further readings, tutorials for constructing conceptual maps, etc.) were among the indirect aids offered to students in each thematic block; all of these were permanently available in the course's virtual classroom, built on the Moodle platform. The virtual classroom (see Figure 1) also offers a set of online spaces and technological resources that students can use to plan and regulate their own learning process (note-taking, automatic activity records, guidelines for reflection, detailed planning calendars for the work sessions, etc.). It also allows the teacher to carry out continuous assessment supported by multiple sources of evidence (group and individual activities and tasks, online activity records, contributions to the general subject forum, to the small-group forums and to the collaborative editor, etc.) and to provide constant assistance, follow-up and guidance to the learning process as deemed appropriate from the evidence gathered (online tutorials; assignments returned, corrected and assessed; follow-up and intervention in the small-group forums, the general forum or the collaborative editor, etc.).
Figure 1. Main screen of the Educational Psychology virtual classroom
These technological resources and virtual spaces facilitate observation of joint knowledge construction processes among students, processes which might otherwise remain inaccessible to the teacher. Obviously, monitoring these work and communication spaces means additional work and a considerable time investment for teachers and students alike, as seen in the Results section. However, this is compensated for by a significant improvement in the gathering of evidence of students' progress and difficulties in the learning process, and by the "expansion" of channels for tutoring and support, which is very difficult to attain in teaching and learning activities done exclusively in person (Onrubia, 2005).
One especially interesting and useful resource for promoting learning through assessment is the preparation and delivery, at the end of each thematic block, of a report on learning results. This report is organized into the following sections: (i) assessment criteria which take into account how well assignments were completed (corresponding to an assessment scale of A, B, C, D); (ii) a detailed assessment of the work in relation to the criteria; (iii) an evaluation of the responses to the group self-assessment questionnaire; (iv) an evaluation of the responses to the individual self-assessment questionnaire, with guidelines for revising and improving the activity; and (v) a proposal of in-person and online tutorial sessions for discussing the report.
Table 2 summarizes the set of criteria that make up the global assessment system which was designed. The first column presents the basic options of the assessment system, and the second the criteria, resources and instruments which give shape to each of them.
Table 2. Options, criteria and resources in the integrated, continuous evaluation system

Basic option: Continuous evaluation supported by multiple sources of evidence
-Embedding evaluation activities in students' learning activities.
-Evaluation with formative and developmental purposes: actions aimed at improving teaching assistance and the regulation of learning.

Basic option: Sequencing evaluation activities within each thematic block and across the thematic blocks themselves
-Theoretical-practical integration: the thematic blocks.
-Organization of evaluation activities around broad thematic blocks which are approached through the analysis and resolution of cases or problems.
-The combination of activities aimed at understanding the content and at applying it in simulated contexts.
-The combination of individual and group activities.
-Students' elaboration of different products in each thematic block: initial case resolution (initial diagnostic evaluation); glossary of concepts and conceptual map (formative evaluation during the process); final case resolution and reflection on the elaboration process (final evaluation).
-Gradual increase in autonomy in the elaboration of products across successive thematic blocks.
-Use of ICT as a resource for collaboration among students: collaborative work spaces.

Basic option: The teacher's support, follow-up and tutoring during the realization of evaluation activities
-Observation and tutoring during the performance of evaluation activities, in both mandatory and optional face-to-face sessions.
-Follow-up and evaluation of the individual and group self-evaluation questionnaires completed at the end of each thematic block.
-The support of ICT as a resource for continuous evaluation: use of different virtual spaces in the Moodle virtual classroom (general forum, small-group forums, collaborative editor, automatic records, messaging and online tutoring, etc.) in order to facilitate students' work and its ongoing supervision and support.
-The teacher's preparation of written reports addressed to students at the end of each thematic block: criteria for the correction and evaluation of the assigned products, evaluation of answers to the individual and group self-evaluation questionnaires, and proposed guidelines for revising and improving one's learning.
-Follow-up tutoring sessions, in person or online, to feed back the results of the evaluation.
Results
Results from the experience show improvement in students' final performance in the course (N = 186), both in the number of students who passed at first attempt and in the mean and distribution of grades. Table 3 shows that 90.81% of students passed the subject, and 75.8% did so with a grade of A or B [4].
Table 3. Performance of students in the experimental groups (grade received)

Grade / Number of students / Percentage
A: 23 (12.36%)
B: 118 (63.44%)
C: 28 (15.05%)
Fail: 2 (1.07%)
Dropped out or did not sit the final exam: 15 (8.06%)
Total: 186 (100%)
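As a quick arithmetic check, the percentages in Table 3 and the pass rates quoted in the text follow directly from the raw grade counts. A minimal sketch of that computation (minor rounding differences from the published figures are possible, since the article appears to truncate some values):

```python
# Grade counts reported in Table 3 (N = 186).
counts = {"A": 23, "B": 118, "C": 28, "Fail": 2, "No show/drop out": 15}
total = sum(counts.values())

# Percentage of students in each grade category.
percentages = {grade: round(100 * n / total, 2) for grade, n in counts.items()}

# Pass rate (A, B or C) and share of A or B grades among all enrolled students.
pass_rate = round(100 * (counts["A"] + counts["B"] + counts["C"]) / total, 2)
a_or_b = round(100 * (counts["A"] + counts["B"]) / total, 2)

print(total, percentages, pass_rate, a_or_b)
```

The A-or-B share comes out at about 75.8%, matching the text; the pass rate computed this way is about 90.9%, within rounding of the 90.81% reported.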
In order for students to evaluate their participation in the experience, a 28-item questionnaire was prepared (23 items on a scale of 1 to 5, and 5 short-answer items). The following aspects were addressed: the type of activities and tasks used in this methodology; the thematic block structure; the different types of aid offered for orienting and facilitating learning; the methodology of case analysis and resolution; the types of cases presented; the use of different resources from the Moodle platform; how these resources contributed to different learning processes; cooperative work in small groups; the continuous assessment system; activities for reviewing the planning of one's individual work; the amount of time required; and a global evaluation of the approach and realization of the course. This questionnaire was answered individually and anonymously by a total of 115 students at the end of the year. Although an exhaustive analysis of all questionnaire items was carried out, here we present only those related directly to the topic of continuous assessment as an instrument for optimizing pedagogical assistance. Let us recall, in this respect, that students' degree of satisfaction with the learning processes in which they participate is normally considered one of the fundamental dimensions to consider for improving the quality of education (González, 2006).

[4] In prior school years, when an assessment system based on an equivalent final exam was used, the percentage of students passing at first attempt usually fell between 60% and 70%. Since it is impossible to obtain exactly equivalent data for the different groups, it is inadvisable to calculate the statistical significance of the differences between these percentages and those presented in Table 3.
First, we present students' evaluation of some aspects of the experience. Figure 2 shows the responses to the following question: "Taking into account all the aspects considered throughout the questionnaire, your global assessment of the approach and realization of this course is: not satisfactory; minimally satisfactory; neutral; quite satisfactory; very satisfactory". More than half the participants (59%) rate the approach and realization of the course as "quite satisfactory". The fact that only 1% of students make a global assessment of "not satisfactory" is especially interesting.
[Pie chart: very satisfactory 33%; quite satisfactory 59%; satisfactory 7%; unsatisfactory 1%.]
Figure 2. Global assessment of the course approach and realization.
César Coll et al.
-796- Electronic Journal of Research in Educational Psychology, N. 13, Vol. 5 (3), 2007. ISSN: 1696-2095, pp. 783-804
As Figure 3 shows, the usefulness of the continuous evaluation system is generally rated very positively. The question in this case was: "Rate the degree to which you consider that the continuous evaluation system in this course has helped you: to work more continuously and systematically; to read systematically and in depth; to heighten your interest and motivation; to better regulate your learning process; to improve communication with the teacher; to improve the meaningfulness of your learning" (assessment scale: very little, a little, some, quite a bit, very much). Of all these, "to read systematically and in depth" (very little 0%; a little 0%; some 1.74%; quite a bit 31.30%; very much 66.96%) and "to work more continuously and systematically" (very little 0%; a little 0%; some 0.87%; quite a bit 27.83%; very much 71.30%) are those most often mentioned. "To increase participation in class" (very little 2.61%; a little 10.43%; some 33.04%; quite a bit 34.78%; very much 19.13%) and "to heighten your interest and motivation" (very little 1.74%; a little 7.83%; some 24.35%; quite a bit 44.35%; very much 21.74%) are those least supported when assessing the usefulness of the evaluation system.
[Stacked-bar chart: for each item, the percentage of responses from "very little" to "very much". Items: working more continuously and systematically; reading more systematically and in depth; increasing your interest and motivation; increasing your class participation; better regulation of your learning process; better communication with the teacher; making learning more meaningful.]
Figure 3. Assessment of the continuous evaluation system.
One of the key aspects for recognizing how teachers exercise their educational influence throughout the process is to look at the different kinds of help they offer to students and how these are valued by the latter (see Figure 4). The question asked of students was: "Different types of help and support were made available during the course in order to accompany, to guide and to facilitate learning. Rate the degree to which you consider that each of these actually benefited your learning: didactic guide; mandatory readings; reading guidelines; support material; group discussion sessions with the teacher; Moodle platform; face-to-face communication with the teacher; final reports" (assessment scale: very little, a little, some, quite a bit, very much). The first interesting result here is that almost all helps were rated very positively. The most highly rated are the "mandatory readings" (very little 0%; a little 0%; some 1.74%; quite a bit 42.61%; very much 52.17%), the "group discussion sessions with the teacher" (very little 0%; a little 2.61%; some 8.70%; quite a bit 47.83%; very much 40.87%) and the "support materials" (very little 0.87%; a little 0%; some 15.65%; quite a bit 60.87%; very much 22.61%). Lower ratings were given to the "Moodle platform" overall (very little 14.78%; a little 20.87%; some 32.17%; quite a bit 23.48%; very much 6.96%), the "reading guidelines" (very little 6.09%; a little 14.78%; some 32.17%; quite a bit 39.13%; very much 7.83%) and the "didactic guide" (very little 3.48%; a little 9.57%; some 29.57%; quite a bit 39.13%; very much 17.39%).
[Stacked-bar chart: for each item, the percentage of responses from "very little" to "very much", plus "n/a". Items: didactic guide; required reading; reading guidelines; support material; discussions with the teacher; Moodle platform; face-to-face communication with the teacher; final reports.]
Figure 4. How much the different types of help contributed to learning.
Finally, we cannot overlook the "cost" involved in participating in this experience. Figure 5 shows the responses to this question: "Taking the whole semester into account, estimate the weekly hours that, on average, you have dedicated to working on this subject in small groups (not counting mandatory hours of class attendance)". As for the number of weekly hours required for study and work, most students report that they spend, in addition to classroom hours, about 10 hours: half on individual work and half on group work. In percentages, 52.7% say they spent between 6 and 10 hours on average per week, 30.43% between 11 and 15 hours, and 5.22% between 16 and 20 hours. One noteworthy finding is that most students (63%) affirm that they could only adequately keep up with two simultaneous subjects demanding the same amount of work and dedication they devoted to this one; 15% consider that they could keep up with three, and 6% with four. Taking these data as a reference and adding mandatory classroom hours, the total time students dedicate to course work averages about 200 hours. The initial design predicted a total of 185 hours, so the course demands should be slightly reduced in order for students' reported hours of work to coincide with the design predictions.
[Bar chart: percentage of students reporting average weekly workloads of 0-5h, 6-10h, 11-15h, and 16-30h.]
Figure 5. Estimate of hours spent weekly on individual and small-group work.
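The ~200-hour total quoted above can be reproduced with back-of-the-envelope arithmetic. The 15-week semester length and the 3.5 weekly classroom hours below are illustrative assumptions not stated in the article:

```python
# Rough reconstruction of the ~200-hour total workload estimate.
weeks = 15             # assumed semester length (not given in the article)
personal_hours = 10    # reported average weekly individual + group work
classroom_hours = 3.5  # assumed mandatory weekly classroom hours (not given)
total_hours = weeks * (personal_hours + classroom_hours)
# total_hours == 202.5, close to the ~200 hours reported (the design predicted 185)
```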
For their part, teachers report that this type of course design and development involves a considerable increase in their teaching workload. They point to the demands arising from implementing the continuous evaluation system and from following up and supporting students' individual and group work (supervision of student contributions, on average three or four times per week; follow-up and tutoring, in the virtual classroom and in person, of the process of completing assignments and of the work turned in; etc.).
Conclusions
Results from this experience show that continuous evaluation activities can be useful instruments for collecting multiple, diverse evidence of students' learning and for providing well-suited educational helps which encourage the attainment of learning.
In our experience, the potential usefulness of continuous evaluation activities lies in the set of options, criteria and resources which support the system as a whole, more than in the use of any one of these elements considered in isolation. Integrating evaluation activities within the framework of learning activities, organizing them around broad thematic blocks, combining activities for assessing knowledge comprehension with others involving its application in complex, relevant real-life situations, and increasing the possibilities for offering continuous follow-up and support for students' learning process and its results, all combine to generate an optimal context for improved learning.
Whether the continuous evaluation system really takes its place as an instrument that promotes learning depends on fulfilling a series of conditions, both educational and institutional. On one hand, the use of student knowledge, skills and attitudes should be encouraged through the design of situations that simulate real, complex problems. These situations should promote a process of reflection that extends from the retrieval of prior knowledge, as prompted by the initial formulation of the case, through to its final formulation, after successive revisions. A continuous evaluation system with these characteristics requires high levels of student involvement and effort, which are only reached, and especially maintained, when students manage to attribute meaning to what they are learning and to the situations in which they are learning it (Coll, 2004). In the present experience, results indicate that students found it meaningful to involve themselves in resolving cases that simulate common situations faced by school psychologists in their professional practice.
Along these lines, results show that students gave lesser value and meaning to certain instruments specifically designed for encouraging and regulating learning, such as the individual and group self-evaluation questionnaires. One factor which helps explain the low value attributed to these questionnaires is that the dominant evaluation culture in higher education encourages students to invest more in activities which "count" more toward their final class grade, such as resolving the case-problem and elaborating the glossaries and conceptual maps, as opposed to answering self-evaluation questionnaires whose relevance to the final grade was perceived to be considerably less or even null.
Regarding the low global value which students assign to the Moodle platform, it is best to consider this in the light of other, more specific results obtained from the same questionnaire. These results point to a higher value for ICT as a resource for continuous access to the activities and materials of a problem situation, and a lower value as a resource for communication with the teacher and classmates. These results may be interpreted more properly if it is borne in mind that the Moodle platform was used in the experience as support for in-person teaching, in the framework of a hybrid teaching-learning context. In our opinion, the pedagogical and didactic value of certain uses of ICT, such as communication or collaborative learning, may increase significantly if effective conditions are created, different from those commonly existing in face-to-face situations.
On the other hand, in order for the continuous evaluation system to fulfill its function as a support for improved learning, it is not enough to create optimal conditions for promoting students' involvement in carrying out evaluation activities. In addition, in the work that students perform individually or in groups, in person or using ICT, the teacher's tutoring, follow-up, and support emerge as the fundamental elements for continuous evaluation to succeed.
Finally, given the increased volume of work involved in implementing an evaluation system such as the one presented here, one must insist on the need to improve the conditions under which university teaching takes place, including how teaching hours are defined and counted, the recognition given to teaching as compared to other duties of university faculty members, and the number of students per class group. These and other institutional conditions are essential to ensure the introduction, effectiveness and sustainability of continuous evaluation in higher education and, in doing so, to make progress in improving the quality of university teaching.
References
Ahumada, P. (2005). La evaluación auténtica: un sistema para la obtención de evidencias y
vivencias de los aprendizajes. [Authentic evaluation: a system for gathering learning
evidence and experiences.] Perspectiva Educacional, 45, 11-24.
Allal, L. (1991). Vers une pratique de l'évaluation formative. [Towards a practice of formative evaluation.] Brussels: De Boeck.
Allal, L., & Wegmuller, E. (2004). Finalités et fonctions de l'évaluation. [Purposes and functions of evaluation.] Educateur (special issue 04), 4-7.
Biggs, J. (2006). Calidad del aprendizaje universitario. [Quality of university learning.] Madrid: Narcea.
Birenbaum, M., Breuer, K., Cascallar, E., Dochy, F., Dori, Y., Ridgway, J., Wiesemes, R. & Nickmans, G. (2006). A learning integrated assessment system. Educational Research Review, 1, 61-67.
Boekaerts, M. (1999). Self-regulated learning: where we are today. International Journal of
Educational Research, 31, 445-457.
Broadfoot, P. & Black, P. (2004). Redefining assessment? The first ten years of “Assessment
in Education”. Assessment in Education, 11 (1), 7-27.
Coll, C. (2004). Esfuerzo, ayuda y sentido en el aprendizaje escolar. [Effort, help and meaning in school learning.] Aula de Innovación Educativa, 120, 36-43.
Coll, C., Martín, E. & Onrubia, J. (2001). La evaluación del aprendizaje escolar: dimensiones psicológicas, pedagógicas y sociales. [Evaluation of school learning: psychological, pedagogical and social dimensions.] In C. Coll, J. Palacios & A. Marchesi, Desarrollo psicológico y educación (pp. 549-572). Madrid: Alianza Editorial.
Coll, C. & Onrubia, J. (1999). Evaluación de los aprendizajes y atención a la diversidad. [Evaluation of learning and attention to diversity.] In C. Coll (Coord.), Psicología de la instrucción. La enseñanza y el aprendizaje en la educación secundaria (pp. 141-168). Barcelona: Horsori / ICE de la UB.
De la Fuente, J., Justicia, F., Casanova, P.F. & Trianes, M.V. (2005). Perceptions about the construction of academic and professional competencies in psychologists. Electronic Journal of Research in Educational Psychology, 3 (1), 3-34.
Delgado, A.M., Borge, R., García, J., Oliver, R. & Salomón, L. (2005). Competencias y diseño de la evaluación continua y final en el Espacio Europeo de Educación Superior. Programa de Estudios y Análisis (EA2005-0054). [Competencies and design of continuous and final evaluation in the European Space for Higher Education. Studies and Analysis Program.] Madrid: Ministry of Education and Science, Dirección General de Universidades.
Díaz, F. (2006). La evaluación auténtica centrada en el desempeño: una alternativa para evaluar el aprendizaje y la enseñanza. [Authentic evaluation, focused on performance: an alternative for evaluating learning and teaching.] In F. Díaz (Coord.), Enseñanza situada: vínculo entre la escuela y la vida (pp. 125-163). México: McGraw-Hill.
Dochy, F. (2004). Assessment engineering: aligning assessment, learning and instruction.
Keynote presentation. Accessed on May 31, 2006 from
http://www.assessment2004.uib.no/keynotes/dochy.page
González, I. (2006). Dimensions for evaluating university quality in the European Space for
Higher Education. Electronic Journal of Research in Educational Psychology, 4 (3),
445-468.
Hargreaves, A., Earl, L. & Schmidt, M. (2002). Perspectives on alternative assessment reform. American Educational Research Journal, 39 (1), 69-95.
Lara, S. (2001). La evaluación formativa en la Universidad a través de Internet: aplicaciones informáticas y experiencias prácticas. [Formative evaluation at university through the Internet: computer applications and practical experiences.] Pamplona: Eunsa, Ediciones de la Universidad de Navarra.
Lara, S. (2003). La evaluación formativa a través de Internet. [Formative evaluation through the Internet.] In M. Cebrián, Enseñanza virtual para la innovación universitaria (pp. 105-117). Madrid: Narcea.
Mauri, T., Colomina, R. & Rochera, M.J. (2006). Análisis de casos con TIC en la formación inicial del conocimiento profesional experto del profesorado. [Case analysis with ICT in the initial training of teachers' expert professional knowledge.] Revista Interuniversitaria de Formación del Profesorado, 20 (3), 57, 219-232.
Mauri, T. & Rochera, M.J. (1997). Aprender a regular el propio aprendizaje. [Learning to
regulate one’s own learning.] Aula de Innovación Educativa, 67, 48-52.
McDonald, R. (2006). The use of evaluation to improve practice in learning and teaching. Innovations in Education and Teaching International, 43 (1), 3-13.
Norton, L. (2004). Using assessment criteria as learning criteria: a case study in psychology.
Assessment and Evaluation in Higher Education, 29 (6), 687-702.
Nunziati, G. (1990). Pour construire un dispositif d'évaluation d'apprentissage. [Building a system for the evaluation of learning.] Cahiers Pédagogiques, 280, 47-64.
Onrubia, J. (2005). Aprender y enseñar en entornos virtuales: actividad conjunta, ayuda pedagógica y construcción del conocimiento. [Learning and teaching in virtual environments: joint activity, pedagogical help, and knowledge construction.] RED. Revista de Educación a Distancia, número monográfico II. Accessed on May 28, 2005 from http://www.um.es/ead/red/M2/
Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts,
P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451-502). San
Diego, CA: Academic Press.
Rodríguez, M.J. (2002). Aplicación de las TIC a la evaluación de alumnos universitarios. [Applying ICT to the evaluation of university students.] Ediciones de la Universidad de Salamanca. Accessed on Sept. 29, 2007 from http://www.usal.es/~teoriaeducacion/rev_numero_06_2/n6_02_art_rodriguez_conde.htm
Roe, R. (2003). ¿Qué hace competente a un psicólogo? [What makes a psychologist competent?] Papeles del Psicólogo, 86. Accessed on May 31, 2006 from http://www.cop.es/papeles/vernumero.asp?id=1108
Schunk, D.H. & Zimmerman, B.J. (Eds.) (1998). Self-regulated learning: From teaching to self-reflective practice. New York: The Guilford Press.
Shepard, L.A. (2000). The role of assessment in a learning culture. Educational Researcher, 29 (7), 4-14.
Torrano, F. & González, M.C. (2004). Self-regulated learning: Current and future directions. Electronic Journal of Research in Educational Psychology, 2 (1), 1-34.
VV.AA. (2005). Libro Blanco. Título de grado en Psicología. [White Paper: Undergraduate degree in Psychology.] ANECA.
Wiliam, D. (2000). Integrating summative and formative functions of assessment. Keynote address, First Annual Conference of the European Association for Educational Assessment. Prague, Czech Republic.
... At the university level, under teacher-centered methodologies, the assessment traditionally targeted concepts mastery by students, but with the outreach of competencybased education [26][27][28][29], some form of continuous assessment (CA) is usually run to provide a grade that reflects the level of competencies achievement [30][31][32]. CA paradigm finds a wide variety of forms to be implemented at the university level considering their outcomes (exams, tests, projects, assignments, portfolio, essays, presentations, etc.) [30][31][32][33][34]. On the other hand, CA has been extensively used in schools [35][36][37][38], potentially due to its formative feature. Within this context and related to getting data on the students' learning process, Elliott/Resing/Beckmann [36] distinguish between dynamic testing and dynamic assessment, the former being of particular interest for academic researchers in psychology with a focus on the study of reasoning and problem-solving, and the latter for those having a practitioner orientation and tending to be particularly concerned with exploring the ways by which assessment data can inform educational practice. ...
... At the university level, under teacher-centered methodologies, the assessment traditionally targeted concepts mastery by students, but with the outreach of competencybased education [26][27][28][29], some form of continuous assessment (CA) is usually run to provide a grade that reflects the level of competencies achievement [30][31][32]. CA paradigm finds a wide variety of forms to be implemented at the university level considering their outcomes (exams, tests, projects, assignments, portfolio, essays, presentations, etc.) [30][31][32][33][34]. On the other hand, CA has been extensively used in schools [35][36][37][38], potentially due to its formative feature. ...
... DCDA embraces the idea that each assessment and output is itself an input into the learning process, and we must check if ulterior outputs reflect that the assessment of prior topics does match with the degree of achievement of competencies that have already been assessed. The DCDA system/method is a novelty that combines the known CA paradigm, which has been widely considered in the literature [30][31][32][33]42], with a system based on taking into consideration the chains of topics that relate to each other in a discrete dynamical sense to confirm or reassess the level of competencies achieved. ...
Article
Full-text available
Learning is a non-deterministic complex dynamical system where students transform inputs (classes, assignments, personal work, gamification activities, etc.) into outcomes (acquired knowledge, skills, and competencies). In the process, students generate outputs in a variety of ways (exams, tests, portfolios, etc.). The result of these outputs is a grade aimed at measuring the (level of) competencies achieved by each student. We revisit the relevance of continuous assessment to obtain this grading. We simultaneously investigate the generated outputs in different moments as modifiers of the system itself, since they may reveal a variation of the level of competencies achievement previously assessed. This is a novelty in the literature, and a cornerstone of our methodology. This process is called a Dynamical Continuous Discrete assessment, which is a form of blended assessment that may be used under traditional or blended learning environments. This article provides an 11-year perspective of applying this Dynamical Continuous Discrete assessment in a Mathematics class for aerospace engineering students, as well as the students’ perception of continuous assessments.
... In fl uid education, course evaluation through a continuous assessment strategy facilitates measurement of learners' competences at any time. This approach promotes learner self-regulation through continuous evaluation (Coll et al. 2007), looking for new ways to facilitate teachers to follow students' learning process, gathering multiple outcomes while supporting them in acquiring and using autonomous regulation competencies. Similarly, students have continuous information about their learning process and results, with possible opportunities, resources, and ways of improving them. ...
Book
Universities have traditionally aimed to instill in their students the ability to interpret information as well as the joy of learning. However, today’s universities are challenged with the need to also incorporate technological advances without forsaking the solid principles at their foundation. Furthermore, modern society’s demand for a university-educated workforce is increasing, while the demand for unskilled jobs is decreasing. Universities now face the challenge of training many students with vast diff erences in background, previous knowledge, and study motivation. Supporting student learning with digital resources A huge amount of digital resources is now available for administrative as well as pedagogical support and enhancement in higher education. Today’s students, who were born in the 2000s, now expect modern universities to provide an appropriate digital infrastructure for teaching and learning. Medical education must adapt to many new and diff erent healthcare contexts, including digitalized healthcare systems and digital-generation students in a hyper-connected world. Educational design needs to be adapted to the target learners, setting, and available resources. While the use of technology was already widespread in medical education, the Covid-19 pandemic accelerated the need for more fl exible, personalized, and collaborative learning. Filetti S.,† Grani G.,‡ Murat G.,†† Saso L.‡‡ † Unitelma Sapienza University ‡ Department of Translational and Precision Medicine, Sapienza University of Rome †† STITCH—Sapienza Information-Based Technology InnovaTion Center for Health, Sapienza University of Rome ‡‡ Faculty of Pharmacy and Medicine, Sapienza University of Rome Innovative Medical Education in the Digital Era 1 Innovative Medical Education in the Digital Era 2 For this reason, educators and lecturers are expected to eff ectively incorporate digital tools into their teaching. Simply being an expert in a certain academic fi eld is no longer suffi cient. 
Today’s lecturers need both pedagogical competence in order to help as many students as possible pass exams, as well as adeptness in the use of digital resources. This competence is referred to as technological, pedagogical, and content knowledge (TPACK). In order to achieve this goal, we must abolish (or at least adapt) some old-fashioned teaching methods, including chalkboards, practice groups, and lecture- based lessons. The traditional format of the lecture may be enriched using live streaming or be made available on demand. Digital tools may be used to add interactive components to traditional lectures. In the fi rst chapters of this e-book, Liapi provides an overview of the technological tools available to modern teachers, such as multimedia approaches, simulation, gamifi cation, artifi cial intelligence application, and virtual learning environments and augmented reality. As discussed by Riggio and Durante, in-person traditional lectures may be enhanced by the availability of online resources (e.g., e-books, videos, podcasts) before the lecture, that may be transformed in a series of active learning, student-centered activities (the so-called fl ipped classroom). Digitalization further promotes interaction, thus boosting exchanges and educational collaboration between disciplines, professions, and universities worldwide. The Covid pandemic forced many courses to be moved online only and the experience of language teaching is explored by Markovina and Krasilnikova, looking at the experience of both students and teachers regarding tools, expectations, advantages, and limitations. Another example of digital interaction is the modernization of the anatomical theater teaching tool. Tuebingens’ Sectio chirurgica is a free interactive lecture produced by the Department of Anatomy at Tübingen University, which is targeted to both students and professionals. Surgeries are broadcast live via the internet. 
Surgeons of diff erent disciplines explain and perform various procedures on anatomical specimens with the aim of applying anatomical knowledge to a clinical context while demonstrating the importance of theoretical knowledge. One of Sectio chirurgica’s most important features is viewer interaction. Viewers are invited to contribute to the live stream by chat, by completing a quiz, via “second stream screens” or by contacting the dedicated hotline. Viewers actively infl uence surgical activity by digital means, thus representing a digital reinvention of traditional teaching methods (“chalk and talk”) still practiced at universities today. However, as compared to a traditional lecture, Sectio chirurgica has proven to be more easily understandable and entertaining. It also transmits the same anatomical knowledge and more clinical insights. It is discussed in detail by Shiozawa and Hirt. Digital instruments have the general aim to simplify the application of active learning. Research has shown that active learning, supported by eff ective use of pedagogical digital resources, can both improve student results and support inclusiveness. The fl ipped classroom, active learning classroom (also in combination with the traditional classroom), problem-based learning (PBL), student response systems, and digital exams are some examples of common resources/methods that may benefi t from digital support. Innovative Medical Education in the Digital Era 3 In particular, PBL, a constructive, self-directed, collaborative, and contextual learning method, has been shown to be eff ective when students are on campus and study full-time, but entirely online courses may be provided for master-level courses or adult learners who are working professionals. Many online courses resort to a teacher-led traditional design. 
However, while there is no one-size-fi ts-all solution for student-centered online learning, successful examples include small-scale blended courses with synchronous online discussions, middle-scale fully online courses with individual project work and no synchronous communication, and large-scale massive open online courses (MOOCs) based on PBL principles. Verstegen and De Nooijer report three examples of these diff erent-scale courses, along with their actual application and results. Virtual reality and virtual fl ipped classroom may be applied successfully both in pre-clinical and clinical teaching. Cytometry and cell culture basics may be delivered using a virtual reality environment (which also reduces costs and plastic waste), both for students and for professionals needing continuing professional development (as reported by Baus and colleagues in their chapter). On the other hand, clinical training may be particularly benefi tted by the application of new technologies. As reported by Pecoraro and colleagues, artifi cial intelligence is able to improve the teaching of radiology (providing diagnostic clues on real CT and MRI scans, suggesting anatomical segmentation, reporting discrepancies and inter-observer variability among trainees). Simulation is useful for the initial training of clinical abilities as well as for the maintenance of lifelong professional competence throughout one’s career. Medical simulation can use equipment (e.g., high-fi delity mannequins), virtual reality (serious games), or standardized patients. In recent years, technological progress has made simulation-based teaching one of the most appealing educational resources. As discussed by Lubrano, Bloise and Bertazzoni, simulation provides a safe environment for emergency medicine training (the fi rst time is never on the patient), and off ers proactive, controlled, reproducible, and standardized training based on feedback and debriefi ng. 
The human input of teachers and coordinators is needed to create realistic medical histories that immerse learners in the simulation scenario. A skilled instructor should guide the scenario and evaluate performance. This educational technique has several advantages: it allows learners to play an active role, its use of debriefing stimulates cognitive effort, and it promotes long-term learning of correct clinical case management. Simulation in teams also allows students to develop team-working skills, practice leadership roles, and optimize group dynamics in order to achieve the best possible outcome, thereby also effectively practicing multitasking. Furthermore, simulation training can be a major factor in reducing work stress. Simulation may also be enhanced by team-based competition, which taps into the natural instinct of teams to excel and may motivate students to study and prepare harder before the competition. A team-based competition organized by Sapienza showed a marked improvement in standard of care and technical knowledge over time, due to the sharing of knowledge between students who had participated in previous editions and their younger colleagues.

Teaching humanities

The decision to pursue medical studies is often influenced by humanitarian considerations formulated in moral terms (e.g., the duty to help others). Digitization may help shape and enhance the moral intuitions and judgments of medical students. Critics maintain that online education cannot match the instant feedback and sense of community provided by face-to-face courses, and that the use of simulation reduces human interaction. Medical education in the digital era will therefore pose important challenges for building empathy in medical practice. McFarland proposes restoring the role of the humanities in the medical curriculum, explaining both why and how.
Socaciu and Gibea explain how ethics may be taught using new technologies: students may be immersed in clinical scenarios posing ethical dilemmas, applying some of the tools already discussed, such as virtual reality, group activities, and gamification. Finally, the organization of clinical learning activities may also be updated, as shown by the experience of Operemos, a management platform described by Guadalajara, Esteban, Lopez-Fernandez and García-Olmo. Even mobility programs may be rethought and reorganized to include a preparatory online session and a shorter period in person, as proposed by Calés Bourdet. The publisher's role must also undergo a complete rebuild. As medical publishers, Allison and Grillo discuss the evolution of the e-book from 1971 to date. To support case-based and problem-based learning, content also needs to be refreshed: students now need to engage with the content, with video and audio support, and to receive instant feedback on activities. The McGraw Hill approach is detailed in the last chapter of this book.

Principles of modern medical education

Innovative medical education curricula may be developed according to the following principles:

⊲ Interactivity. Active learning implies a shift from a teacher-centered class to a student-centered approach that will increase curiosity, boost engagement, and lead to better learning and comprehension. Educational technology should promote interactivity in all teaching settings.

⊲ Bidirectionality. Students should be allowed to apply their knowledge to challenging problems in a setting that promotes collaboration with peers and continuous bidirectional feedback, between educators and students and peer to peer.

⊲ Blendedness. New technologies should be integrated with traditional methods. Online lectures, VPs, and online games must complement traditional lectures, bedside teaching, and group simulations in a comprehensive curriculum.

⊲ Transnationality.
Since web-based platforms allow for international cooperation, medical curricula should be transnational and promote contributions from different universities. This would enable homogeneity of training across European countries and improve understanding of cultural diversity.

⊲ Up-to-dateness. The ability to record and broadcast lectures that learners may attend from home at a time of their choosing should not encourage recycling material from year to year. Materials should be carefully checked for currency and refreshed continuously.
... Educational innovation mediated by ICT is a notable topic in the scientific literature (Coll et al., 2007; Hernández, 2015; Hidalgo-Arango and Pérez-Caballero, 2018; Losada Iglesias et al., 2012; Muñoz-Cano et al., 2012; Portuguez-Castro and Gómez-Zermeño, 2020; Said-Hung et al., 2017). Although it is fair to note that studies addressing "educational innovation and ICT" in an integrated way are still scarce, all of the studies above recognize ICT as a driver of educational innovation. ...
Article
Research has shown that institutional conditions are fundamental to fostering educational innovation with ICT. However, articulating appropriate strategies for this requires examining technological leadership as well as aspects of management and appropriation. This study therefore aims to evaluate the Institutional Conditions for promoting Educational Innovation with ICT (CIIETIC) from the teachers' perspective. A mixed-methods approach was deployed, applying a heterogeneous comparative case study. The data (processed with SPSS and Atlas.Ti) were collected through an online survey of 154 teachers from 4 Latin American universities. The CIIETIC across the participating universities were found to be similar, with specific differences. Three critical success factors were revealed: participation in professional learning communities with ICT; ongoing ICT training and updating; and the implementation and equipping of laboratories with Internet access at the universities. Thus, implementing educational innovation with ICT in Latin American universities requires institutional conditions that prioritize these critical success factors, drawing on the formative potential of the actors in the educational process and recognizing the potential of research on Higher Education in the region, thereby allowing the updating, access, learning, innovation and use of ICT to be contextualized.
... Some instructors do not have a positive attitude towards the implementation of CA due to a lack of training, support and encouragement from university management. This tragic state of affairs provides a learning opportunity for us in setting guidelines for the implementation of CA. Coll et al. (2007) discuss the lessons learned and the value added by introducing CA, using Moodle as the delivery platform. By organising and sequencing assessment activities around thematic blocks, they found that CA enhances learning and is "a well-suited instrument for fostering the attainment of learning" (Coll et al. 2007, 799). ...
Preprint
Many universities, including open distance education institutions, are currently investigating the introduction of a continuous assessment framework. When introducing continuous assessment in an academic department, it is necessary to ensure that all participants understand the concepts related to such a framework and the implications of introducing it. This paper reviews studies on continuous assessment in the literature to identify the advantages and disadvantages of such a framework. The lessons learned are used to derive guidelines on the importance of feedback and to develop a model that can be used to plan and introduce continuous assessment, taking into account the different environmental factors that will affect the introduction.
... Students' participation in the learning process itself [57][58][59][60][61] is one way to improve the learning process, and active methodologies [62][63][64][65] should be used in conjunction with continuous learning assessment [66][67][68][69][70][71]. ...
Article
Active educational methodologies encourage students to take an active role in their own learning, enhance cooperative work, and develop a collective understanding of the subject as a shared learning space. Cloud Computing both supports the learning space and transforms it, by serving as a link between the active methodology and students' learning activities. In this research, a Cloud Computing system is used in conjunction with an active methodology to recognize and manage individual, group, and collective evidence of the students' work. The key hypothesis of this work is that if evidence management is made clear, and evidence is consistently and gradually presented to students, their level of involvement will increase and their learning outcomes will improve. The model was implemented in a first-year university subject using the active Flipped Classroom methodology, and the individual, group and collective evidence was worked with constantly throughout the implementation of a teamwork method.
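The abstract does not specify how the Cloud Computing system represents evidence. As a rough illustration of the individual/group/collective roll-up it describes, the following sketch (all class, method, and student names are hypothetical, not from the study) records scores per student and aggregates them at the three levels:

```python
from collections import defaultdict

class EvidenceLog:
    """Hypothetical sketch: record learning evidence per student and
    roll it up at the individual, group, and collective (class) level."""

    def __init__(self, groups):
        self.groups = groups              # student id -> group name
        self.scores = defaultdict(list)   # student id -> list of scores

    def record(self, student, score):
        self.scores[student].append(score)

    def individual(self, student):
        marks = self.scores[student]
        return sum(marks) / len(marks)

    def group(self, name):
        members = [s for s, g in self.groups.items() if g == name]
        return sum(self.individual(m) for m in members) / len(members)

    def collective(self):
        students = list(self.scores)
        return sum(self.individual(s) for s in students) / len(students)

# Illustrative data: two teams of two students each.
log = EvidenceLog({"ana": "g1", "ben": "g1", "eva": "g2", "tom": "g2"})
for student, score in [("ana", 8), ("ana", 10), ("ben", 6),
                       ("eva", 7), ("tom", 9)]:
    log.record(student, score)
```

The point made in the abstract is that figures like these should be surfaced to students consistently and gradually; the sketch only shows where such figures would come from, not how a real platform would present them.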
... Continuous evaluation activities can be useful instruments for collecting multiple, diverse evidence of students' learning and for providing well-suited educational helps which encourage the attainment of learning (Coll et al., 2007). The results of the study carried out by Rosario (2012) ...
... It also makes it possible to develop new transversal skills in handling computer interfaces (Herzog & Katzlinger, 2017). Finally, it is a practical tool that makes it easy to add and multiply continuous assessments and formative evaluations (Coll et al., 2007). Each year a template for the reports of practicals (TP) 1, 2 and 3 is provided to the students. ...
Article
Every year 900 to 1000 students register for the teaching unit (UE) MONOD Integrative Biology of Organisms. The Biology-Ecology department has chosen to favour practical teaching in order to promote learning. This 5-ECTS unit thus comprises 30 h of practicals (TP) and 10.5 h of tutorials (TD) for 9 h of lectures (CM). The challenge of this unit is to make students aware of all the facets of biology while having them acquire, through practice, a number of basic skills specific to biology (recognizing and classifying organisms with dedicated methods, etc.) and more transversal skills: writing an observation report, drawing a manipulation diagram, respecting a schedule. The first hurdle is therefore acquiring many skills, at different levels, at the same time. Added to this are the number and diversity of student profiles, but also the number and diversity of teacher profiles. Knowing how to write a report on observations and experiments is an essential skill for a scientist and follows the same logic as writing a publication. The objective of our study was therefore to optimize this learning and to compensate for the lack of uniformity in both the form and the deadlines of teachers' corrections of the reports. Moreover, teachers' corrections are often misunderstood and badly received. Self- and peer assessment seem to be methods that allow students to better understand what is expected. In self-assessment, students face only themselves and do not have to worry about the image they project. Peer assessment establishes a learner-to-learner relationship with no hierarchy; on the other hand, fear of the other's gaze may introduce a bias, and anonymity is a way to overcome it. Using this method via a Moodle-like computer interface allows more controlled standardization and easier configuration of distribution, anonymity and grading. It does, however, require students to have a good prior command of office tools.
Our objective was therefore to use the workshop tool on the Moodle platform, both for self-assessment of the first two practicals (TP1 and TP2) and for peer assessment of TP3, followed by a classical assessment by the teacher for TP4. We received support from the Faculty of Sciences of the University of Montpellier through the ICTE 2016-2017 call for projects. Our results concern students' appreciation of this tool, as well as the evolution of grades during the year and in comparison with previous cohorts that did not have this system. The tool can only last if it has the support of both students and teachers. Our results show that the majority of students are in favour of the tool if it is used early in the learning process. Feedback from teachers is more heterogeneous, especially regarding the fact that this assessment counts towards the unit's average. We also comment on the tool itself, the advantages and disadvantages of the different parameters tested, and the management time its use requires. Finally, we propose improvements to optimize its use: implementation and maintenance are quite demanding in operation, and awareness-raising and training among teachers would make it possible to take full advantage of the tool.
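Moodle's workshop module performs submission allocation internally; the core step it automates, distributing reports so that no student anonymously reviews their own work, can be sketched as a random derangement. Function and variable names below are illustrative, not Moodle's API:

```python
import random

def assign_peer_reviews(student_ids, seed=None):
    """Map each reviewer to the author whose report they will grade,
    guaranteeing no student is assigned their own submission.
    Requires at least two students."""
    rng = random.Random(seed)
    authors = list(student_ids)
    while True:  # rejection sampling: retry until the shuffle is a derangement
        reviewers = authors[:]
        rng.shuffle(reviewers)
        if all(r != a for r, a in zip(reviewers, authors)):
            return dict(zip(reviewers, authors))

# Illustrative roster of four anonymized student ids.
pairs = assign_peer_reviews(["s01", "s02", "s03", "s04"], seed=7)
```

With a fixed seed the allocation is reproducible, which helps when the grades count towards the unit's average and may be contested. Rejection sampling terminates quickly, since roughly 37% of random shuffles are derangements regardless of class size.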
... The use of the new Information and Communication Technologies (ICT), as well as methodologies focused on students' autonomous work [3], is fundamental to improving teaching and learning [4]. The use of ICT improves the academic performance of groups of students who receive innovative teaching methods [5]. ...
Chapter
Port and Coastal Engineering is a subject in the third year of the degree in Civil Engineering (Civil Constructions and Hydrology specialty) taught at the Polytechnic School of Algeciras. As the subject has a very specific syllabus, there is no textbook (or even an appropriate set of references) covering all the topics of the course. Moreover, many students, for different reasons (repeaters, Erasmus students, work obligations, ...), cannot attend the regular classes, and for several years this lack of attendance has led such students to abandon the subject. In addition, understanding the basic concepts is even harder for Erasmus students enrolled in the subject because of the language barrier. To address this problem, a teaching innovation project has been launched that consists of creating a set of assessable tasks, covering the different topics addressed, to be completed by the students. In this way, students are motivated to keep up to date with the course contents. A comparison of the latest year's results with those of the preceding five years is provided. Finally, an analysis of the significance of the improvement is also presented.
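The chapter does not state which significance test was used. One common way to compare pass rates between a cohort and earlier years is a two-proportion z-test, sketched here with illustrative numbers (not the study's data):

```python
from math import sqrt, erf

def pass_rate_z_test(passed_a, n_a, passed_b, n_b):
    """Two-sided two-proportion z-test for a change in pass rate
    between cohort A and cohort B. Returns (z, p_value)."""
    p_a, p_b = passed_a / n_a, passed_b / n_b
    pooled = (passed_a + passed_b) / (n_a + n_b)   # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))        # standard normal CDF
    return z, 2 * (1 - phi)

# Illustrative only: 40/100 passed before the innovation, 60/100 after.
z, p_value = pass_rate_z_test(40, 100, 60, 100)
```

For the illustrative numbers the test rejects the null at the 5% level; with real cohort sizes and rates, the same call would quantify whether the observed improvement is larger than chance fluctuation between years.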
... The ideas of Coll, Rochera, Naranjo & Mayordomo (2007) converge in the same direction, recognizing assessment as a continuous, formative, regulatory, authentic, participatory and social experience; that is, as an integrated system, coherent with and connected to teaching and learning activities. ...
Thesis
This research studies mathematics teaching practices and the value teachers place on solving problems set in real-world contexts. The general purpose of this work was to determine what importance teachers give to the process of mathematization in teaching and assessment practices at the Basic Cycle level of Secondary Education in a Metropolitan Area of Montevideo. The study was conducted using a mixed methodology, quantitative and qualitative, which draws on the strengths of each technique and is enriched by combining the two. This complementarity improved understanding of the reality studied and allowed greater interpretive richness. Within this mixed methodology, the first phase used a closed, self-administered questionnaire of 48 items in a rating-scale format. The questionnaire was administered to 41 mathematics teachers from the five public secondary schools (liceos) in the area studied, with the aim of surveying teachers' conceptions about the use of problems set in real-world contexts for both teaching and assessment. In a second phase, semi-structured interviews were conducted with 6 teachers to explore their conceptions in depth. The results show that the teachers who participated in the research expressed an inclination to use constructivist methods in teaching and learning processes, but most of them maintain instrumentalist practices. Moreover, the findings of this study confirm that the teachers show no reflection on the concept of mathematization and its implications for teaching practices; consequently, they do not develop horizontal mathematization processes, although aspects that promote vertical mathematization were evident both in their discourse and in their practice.
Article
This article outlines the effect of the collaborative educational tool ViLLE on learning business mathematics in higher education. ViLLE validates students' answers during the assessment process and provides immediate feedback, enabling students to receive guidance on the correctness of their answers. The study uses the learning results of the business mathematics course at Turku University of Applied Sciences. The effect of the ViLLE tool is investigated in a quasi-experimental study, with the 2013 course as the control group and the 2014 course as the treatment group. The research confirmed that use of the ViLLE tool improved students' learning of business mathematics.
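ViLLE's validation mechanism is not detailed in the abstract. A minimal sketch of the general idea, automatic checking of a numeric answer with immediate feedback (the function name, tolerance, and exercise are assumptions, not ViLLE's implementation), might look like:

```python
def check_answer(submitted, expected, tolerance=0.01):
    """Return (is_correct, feedback) for a numeric exercise answer."""
    if abs(submitted - expected) <= tolerance:
        return True, "Correct."
    return False, f"Not quite: the expected value is about {expected:.2f}."

# Business-mathematics style exercise: simple interest on 1000
# at 5% per year over 2 years is 1000 * 0.05 * 2 = 100.
ok, feedback = check_answer(100.0, 1000 * 0.05 * 2)
```

Immediate feedback of this kind is what lets students correct themselves during, rather than after, the exercise, which is the mechanism the quasi-experimental study credits for the improved learning.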
Article
This article examines classroom assessment reform from four perspectives: technological, cultural, political, and postmodern. Each perspective highlights different issues and problems in the phenomenon of classroom assessment. The technological perspective focuses on issues of organization, structure, strategy, and skill in developing new assessment techniques. The cultural perspective examines how alternative assessments are interpreted and integrated into the social and cultural context of schools. The political perspective views assessment issues as being embedded in and resulting from the dynamics of power and control in human interaction; here assessment problems are caused by inappropriate use, political and bureaucratic interference, or institutional priorities and requirements. Last, the postmodern perspective is based on the view that in today's complex and uncertain world, human beings are not completely knowable and that "authentic" experiences and assessments are fundamentally questionable. Using a semi-structured interview protocol, teachers were asked about their personal understanding of alternative forms of assessment; how they had acquired this understanding; how they integrated changes into their practices; what these practices looked like; what successes and obstacles they encountered during implementation; and what support systems had been provided for them.
Article
Introduction. Evaluating the competencies required for professional practice is a matter of particular current interest. Its importance lies in the improvements that can be made in both preparatory and ongoing training and development processes. This paper summarizes results obtained from a recent investigation of this issue. Method. A total of 76 subjects of varying profiles participated, differing in the degree they had earned, when they had completed their studies, their current professional position, and their level of professional experience. All of them completed an online version of the Escala para la Evaluación de la Formación Psicológica recibida por los profesionales [Scale for Evaluating Training in Psychology Received by Practicing Professionals], version 1.00 (De la Fuente, 2003). We performed descriptive analyses and analyses of variance on the data obtained. Results. The academic and professional competencies identified are developed, or constructed, in both developmental environments, although not in proper balance, i.e. there is not always adequate coordination between the two environments. In general, subjects feel that a greater number of competencies are constructed in the applied-professional context. Most factual knowledge (knowing) is constructed in the degree program environment, while construction of procedural knowledge (know-how) occurs in the applied environment. Discussion. We consider this line of work to be quite beneficial in evaluating the quality of training received. By taking a close-up look at the current situation we have been able to discern the perceptions of students, teachers and practitioners. This input is quite valuable for redesigning preparatory and ongoing training processes for future psychologists.
Article
The completion of the first ten years of this journal is an occasion for review and reflection. The main issues that have been addressed over the ten years are summarized in four main sections: Purposes, International Trends, Quality Concerns and Assessment for Learning. Each of these illustrates the underlying significance of the themes of principles, policy and practice, which the journal highlights in its subtitle. The many contributions to these themes that the journal has published illustrate the diversity and complex interactions of the issues. They also illustrate that, across the world, political and public pressures have had the effect of enhancing the dominance of assessment so that the decade has seen a hardening, rather than any resolution, of its many negative effects on society. A closing section looks ahead, arguing that there is a move to rethink more radically the practices and priorities of assessment if it is to respond to human needs rather than to frustrate them.
Article
Introduction: This paper seeks to establish basic dimensions on which to construct a system of indicators for evaluating university quality, from the perspective of pupils, and within the framework offered by the European Space for Higher Education. Method: The population for this study was defined as the set of students enrolled at the Universities of Salamanca (USAL) and Cordova (UCO) in the academic year 2004/2005, a total of 45,751 students. The two institutions are located in diverse geographies within the territory of Spain, and each has a distinct identity. The sample obtained comprised a total of 1167 subjects, stratified as a function of branch of specialization: health sciences, humanities, legal-social studies, and technical programs. A questionnaire was designed to collect information using a protocol of scale-based evaluation items. Application of factorial analysis allowed us to establish basic dimensions from which to determine different evaluation indicators. Results: Results yielded a total of 14 factors. Those which stand out as most powerful and useful are student satisfaction, academic and professional competencies, evaluation of academic performance, virtual teaching and the advising process. Discussion: The various tests demonstrate that quality is defined first and foremost by student satisfaction, a result which ratifies the logic behind models of institutional evaluation being implemented in Spain. © 2018, Education & Psychology (E&P), I+D+i. All rights reserved.
Chapter
In Chapter 2, Self-Regulated Learning: Present and Future of Research, González and Torrano (Spain) comment on the profound changes seen over the past 30 years in the Psychology of Education. Self-regulated learning has become a central topic for research and one of the primary axes of educational practice. Since the publication of Zimmerman and Schunk (1989), Self-Regulated Learning and Academic Achievement: Theory, Research, and Practice, numerous investigations of self-regulated learning have been undertaken. Drawing on these and more recent publications, the present objective is to bring together the main issues being addressed in the scope of self-regulated learning. In addition, guidelines for future research are also proposed.
Article
Current research on goal orientation and self-regulated learning suggests a general framework for examining learning and motivation in academic contexts. Moreover, there are some important generalizations that are emerging from this research. It seems clear that an approach-mastery goal orientation is generally adaptive for cognition, motivation, learning, and performance. The roles of the other goal orientations need to be explored more carefully in empirical research, but the general framework of mastery and performance goals seems to provide a useful way to conceptualize the academic achievement goals that students may adopt in classroom settings and their role in facilitating or constraining self-regulated learning. There is much theoretical and empirical work to be done, but the current models and frameworks are productive and should lead to research on classroom learning that is both theoretically grounded and pedagogically useful.
Article
Evaluation has become an everyday activity for many of us, whether of our practice in the classroom, the student experience, institutional performance or funded projects. This article looks at the main aspects of evaluation and adopts a systematic approach by asking why, what, for whom, by whom, when and how to evaluate. The emphasis is on why we want to evaluate and its use in decision making rather than on how we are going to carry it out. The assumption is also that we want to involve as many stakeholders in the evaluation as possible so as to promote greater ownership of the whole process.
Article
In this paper it is argued that the current trend of making assessment criteria more explicit in higher education may have a deleterious effect on students' learning. Helping students to concentrate on assessment criteria paradoxically means that they may take a strategic approach and end up focusing on the superficial aspects of their assessment tasks, rather than engaging in meaningful learning activity. One solution might be to re-conceptualize assessment criteria as 'learning criteria' using Biggs' principle of constructive alignment in curriculum development and delivery. To illustrate how this can work in practice, a case study is presented detailing the development of a counselling psychology module over several years to progressively incorporate a text-based adaptation of the problem-based learning approach. Student evaluations of the approach are presented together with some examples of feedback given on students' work to demonstrate the effects on students' understanding and functioning knowledge.
Article
1. Executive summary

The following position paper sets out to inform policy makers, educators, and funding bodies about the state of the art, the possibilities, and the needs for innovation in assessment. The position paper is divided into the following sections:

• Why current assessment systems fail learners and teachers
This section describes the shortcomings of current assessment practices for both learners and teachers. Current assessments focus on assessment of learning rather than assessment for learning. They are limited in scope and lead to teaching for assessment, not teaching for learning. They ignore individual learner differences. Current assessment practices also tend to be uneconomical and prevent teachers from developing teaching skills as part of their continuous professional development, as assessments can harden into teaching 'straitjackets'. In general, current assessment practices do not fit the needs and demands of today's information and knowledge societies. Learning in today's knowledge and information society requires learners to become problem solvers and creative thinkers in all subjects and areas; these needs are currently not reflected.

• The need for fundamental change
Re-thinking assessment forms part of a larger drive to effect change across the curriculum. While modern societies have changed dramatically with the advent of technological change and the development of information technology systems, most schools still rely on teaching according to an out-of-date information transmission model. Current assessment practices fail to address the needs of today's learners and of the modern, complex and globalised societies they are part of. Teachers need support in changing their current practices in order to assess learners in ways that reflect the future demands that will be placed upon them.

This position paper is written by members of the European Association for Research on Learning and Instruction.
The opinions and conclusions expressed in this paper represent the views of the author(s) and should not be seen as an official standpoint by EARLI as an organisation.