
Serrano Angulo, J. and Cebrián, M. (2011). Study of the impact on student learning using the eRubric tool and peer assessment. In A. Méndez-Vilas (Ed.), Education in a technological world: communicating current and emerging research and technological efforts. Formatex Research Center.



Study of the impact on student learning using the eRubric tool and peer assessment
J. Serrano Angulo and M. Cebrián de la Serna1
1 Málaga University, Faculty of Education. Campus Teatinos s/n, 29071 Málaga, Spain
The current study aims to tackle the impact of self and peer assessment on learning at university, and the internalisation of
competences when students use eRubrics to assess class tasks and projects. In this context eRubrics are used as a tool, a
technique and a method that can help teachers to focus on the students’ learning, hence, closer to the new methodological
principles of the European Space. This study focuses on the current use of technology to assess learning, and more
specifically the use of eRubrics. It will look at peer assessment and all the other required methodologies to promote the
development of self-regulating competences among students. Likewise, the present study aims to find out, based on
contrasting results, the requirements for the implementation of the new European methodological principles. A number of
peer assessments and their comparison to teacher's assessments were analysed for three academic years consecutively.
This was done based on the presentation of class tasks and projects, where 70 university students assessed their peers and
the resulting assessments were compared to those provided by the teacher during their last academic year. Results include
a total of 994 eRubrics, which were used to assess the presentation of 14 class groups. The final results highlighted that
students internalised quality criteria, and gradually gained more practice with the eRubric methodology. During the three
years the eRubric and the teaching methodologies also improved, as did the instruments required for their analysis.
Keywords: Peer assessment, eRubric, Formative Assessment and Project-Based Learning.
1. Introduction
Assessment has re-emerged as a major concern for many professors involved in innovation and educational improvement projects at university level, owing to the new methodological changes promoted by the European Space. Assessment has always been a recurring issue; the recent changes, however, focus on competence assessment on the one hand and, on the other, demand greater student participation throughout the entire process. These changes have turned assessment into one of the main areas of innovation and current research. Assessment has thus become one of the most important aspects of teaching practice, and one where both students and teachers try to give their best; yet despite their efforts, misunderstandings and unsatisfactory situations may arise on both sides [1][2][3]. This may happen even when students have clear and precise
criteria and a formative assessment is developed. Furthermore, these problems may still persist even when students raise
their voices and get more involved in their own learning process. To avoid these problems, a series of procedures and
strategies should be carried out in order to assess the scope and limits of this greater student participation in the
assessment process.
Teachers face the assessment process from a new teaching approach focused on students’ learning, where attention is
drawn to students’ workload and the credits assigned to the subjects. This new approach, together with the new
resources and possibilities offered by educational technologies (online learning, etc), is shaping new methodological
possibilities [4][5][6][7][8][9] and practices that require new competences from both teachers and students [10].
In general, these changes require a greater responsibility from students so that they can assume a more defined role in
both the learning and the assessment processes [11][12]. This is why this model advocates a broader understanding and
cooperation between students and teachers, in order to make assessment criteria, function and scope more transparent.
This idea is closer to formative than to summative assessment, as it enables meaningful learning and allows students to check on their own learning. However, formative assessment is not always possible, especially when there is a large group of
students. Therefore, teachers are encouraged to try different methods and techniques in different contexts in order to
reach a better understanding with students. In this way students aim to understand and internalise the assessment criteria
and quality standards, while teachers aim to find out the real scope of their students’ learning process. There are two
techniques that can particularly help achieve these aims:
a) On the one hand, peer assessment and self assessment are carried out by both teachers and students; they are not exclusive to either party but become innovative methods and models that allow both teachers and students to develop professional competences [13][14][15][16]. The teacher carries out the final evaluation and marking. Peer assessment and self assessment must be understood as a methodology through which students acquire better knowledge of, and commitment to, the assessment process, but they should not count towards the final mark and evaluation.
Peer assessment refers to both peer correction and peer feedback. The two aspects can be and must be combined, as
they help to achieve class cohesion [17]. On the other hand, self-assessment aims for students to identify the criteria or
standards to be implemented in their activities and to assess them according to such criteria [18]. The importance and
implementation of such criteria differs according to teachers or students [19]; hence it would be interesting to research
Education in a technological world: communicating current and emerging research and technological efforts
A. Méndez-Vilas (Ed.)
their understanding of the process of criteria and assessment. This framework helps us understand the acquisition of
these competences by students and their attainment with new teaching methodological models. This methodology is
particularly interesting because it requires students to take more and more responsibility for their own learning process, making them increasingly involved and motivated. After all, the ability shown by students when internalising peer assessment criteria is, in itself, a form of learning to learn, or lifelong learning: acquiring and applying standards that will later be used with other peers in the professional world.
b) On the other hand, the eRubric is a tool, an equally innovative technique and methodology, which allows for the
development of professional competences. At the same time it requires students to carry out ongoing self assessment
when identifying the criteria or standards to be met in their tasks, as well as applying such criteria and standards.
Therefore, the eRubric is likely to elicit greater participation from students during the assessment process, by learning,
sharing and internalising indicators and assessment criteria. These indicators and assessment criteria are established by
the teacher and contrasted with students’ learning evidence by means of an ongoing debate. This way students are
allowed to manage their own learning process. The eRubric also facilitates the peer and self-assessment methodology,
and offers students guidelines to be aware of and differentiate between the learning development process when creating
criteria (their meaning, their relationship with the meaning of standards, their link to professional competences, etc) and
when implementing these criteria [13]. The term eRubric is not frequently used in our immediate context, nor in university teaching practice, although the concept is widely present in the form of assessment matrices and in the practice of this tool [20]. By contrast, there is very little experience with students using eRubrics to carry out peer assessment. In English-speaking contexts, some scholars seem to find the term somewhat confusing and belonging to a very narrow semantic field, defining eRubrics as a technology assessment scale [21]. There is far more published literature on eRubrics in this technological context than in the context of peer assessment.
The process of criteria and assessment making by teachers and students is a strategy that helps us understand the
acquisition of competences by students and, at the same time, experience new methodological models - peer assessment
-, techniques and tools - eRubrics - to assess professional competences. The rise of technology in university teaching in general, and in the assessment process in particular, is unstoppable. The time has therefore come to assess
experiences derived from the actions promoted by universities within the new European Space policy. The specialised
literature shows some evidence of the benefits of formative assessment, peer assessment and the use of techniques and
tools such as the eRubric [22][23][24][25]. However, it is still unknown how such reflection and the internalisation of assessment criteria and standards occur in students when the Internet and specialised software - such as eRubrics - are involved.
2. Methodology
The experience analysed here took place in an elective subject over three consecutive academic years (2007-2008, 2008-2009 and 2009-2010) with fourth-year Pedagogy degree students. Group size varied between 30 and 65 students depending on the year. During that period, the methodology used always consisted of project-based learning with formative assessment using eRubrics. The teacher used two eRubrics to assess the competences required during the course. The percentages and requirements of each eRubric were modified in order to adapt them to each situation, depending on the number of students per group.
On the one hand, the first eRubric (40% of the total mark) addressed student participation in class and during the teacher’s office hours used to follow up group projects, by means of a “learning contract” in which students’ responsibilities were set out and student participation in peer assessment was encouraged. The task consisted of designing a professional
project to produce and assess teaching materials for a particular educational case. On the other hand, the second eRubric
(50% of the total mark), aimed to assess all the competences necessary for the project. In addition to these two
eRubrics, an individual test was also required from each student to show his/her knowledge and command of the project
(10% of the total mark).
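The marking scheme described above combines the two eRubrics and the individual test with a 40/50/10 weighting. A minimal sketch of this combination follows; the component names and sample scores are hypothetical, and only the weights come from the text:

```python
# Weighting of the three assessment components described in the text.
WEIGHTS = {
    "erubric_participation": 0.40,  # first eRubric: participation / learning contract
    "erubric_project": 0.50,        # second eRubric: project competences
    "individual_test": 0.10,        # individual test on the project
}

def final_mark(scores):
    """Combine component scores (each on a 0-10 scale) into the final course mark."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Hypothetical student: 8.0 on participation, 7.0 on the project, 9.0 on the test.
print(final_mark({"erubric_participation": 8.0,
                  "erubric_project": 7.0,
                  "individual_test": 9.0}))  # 0.4*8 + 0.5*7 + 0.1*9 = 7.6
```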
The teacher carried out an overall evaluation of each project during his/her follow-up throughout the whole year and
also based on the final presentation at the end of the year. To do so, the teacher used a diary to write down the different
aspects of group work and the office hours required by each project. Students assessed the presentation of the project,
hence the presentation competences were the ones used to elaborate a comparative study between peer assessment and
teacher assessment (e.g. oral competence, use of technology in the presentation, etc.), as will be explained below.
While all competences were designed and explained by the teacher, indicators were created and agreed on by groups as
an exercise at the beginning of each year. Also, the eRubric criteria were discussed, and those aspects that seemed
ambiguous to students were clarified. Likewise, the importance of internalising criteria and standards was discussed at
the beginning of each year, together with the importance of peer assessment in view of the professional world. In fact,
an assessment exercise driven by reasons other than professional implementation of the criteria would be wrong.
The general methodology of the course involved different strategies:
Lessons led by the teacher at the beginning of each unit; using group dynamics to discuss the theoretical
background of the programme contents.
Debate sessions and activities (outlines, summaries, reports, comments and projects) carried out by students
either individually, in groups or in peers. They worked in the classroom using examples of all the materials
previously presented by the teacher.
Group project. Students carried out the design, experimentation and assessment of a particular material within
a specific teaching context.
Office hours and online tutoring sessions. These were of crucial importance, as they enabled the teacher to use the eRubric to accompany students’ learning process in their projects. A learning contract was drawn up for each project and group, specifying the meaning of each eRubric indicator for that particular project.
Additional activities. Several complementary activities were presented in order to look in depth at specific
topics by means of conference summaries. Students also collaborated with other students who were studying
different subjects and who attended different universities (depending on the opportunities available throughout
the year).
Course stages:
The methodology developed gradually over the three years, with different stages and paces within each academic year; it was finally organised into three stages gathered in the course agenda. For each stage there was an open forum on a variety of topics.
The stages were as follows:
The first stage (approximately one month), where the teacher presents the topics and group dynamics are
carried out as a group.
The second stage (almost the whole length of the course), where each group selects one project from
those presented by the teacher and draws up a contract with the teacher to carry it out. At this point,
group tutor sessions and teamwork are especially important, the latter through the group ePortfolio and
the eRubric.
The third and final stage, where each group introduces their project as a conference paper. The
presentation is assessed by the teacher and the rest of the students. This is called peer assessment and is
widely mentioned in the pedagogical literature [2].
2.1. Instruments
The analysis instrument consisted of using the same eRubric throughout the course, together with an ePortfolio for each
group which kept recorded evidence of learning, experiences and presented materials. The format of the rubrics used in
peer assessment during presentations was paper-based. The text of the rubric has changed over the three years due to
debates and discussions at the beginning of each year, and to its adjustment to students' projects. Likewise, new features were introduced, such as a 0-10 scale to score each evidence item in the rubric. Given that each competence included four evidence items, the total score for a competence ranged from 0 to 40, which increases the possibilities for conducting correlational analyses.
Students’ projects were aimed at designing teaching materials adapted to a certain educational level and knowledge area, where the final user’s profile and context were key considerations. Such materials had to come with an assessment design aimed at a specific context and users, and required at least one pilot test to check the assessment
design proposal. Designing any teaching material in a technological format (multimedia, software, web pages, etc.)
requires technical competence and time that the students did not have. Therefore they were given the chance to adapt
some already existing teaching materials to a particular context; or to use materials designed by the teacher’s research
group. The Faculty of Education of the University of Malaga has a special department and two technicians available to
assist these specialised teaching processes, as well as with the appropriate equipment to re-edit technological materials
(audio editing equipment, video, multimedia, etc.).
The eRubric instrument was modified throughout the three years and finally retained the following variables:
Descriptive variables:
Who assesses (numbered name)
Who is being assessed (numbered name)
Exam mark and final grade
Observations and incidents recorded
Competence variables of the eRubric:
1. Technology used (this section assesses the use of technology in the presentation of the project). 5% in relation to
the rest of the competences.
No visual support at all (ranging from 0-10)
Poor or scarce visual support (0-10)
Satisfactory visual support (0-10)
Different visual support formats, satisfactorily presented and complementary to one another (posters, presentations,
pamphlets, etc.) (0-10)
The term satisfactory here refers to the ideal practice taught by the subject of Educational Technology in previous
years and reinforced by means of examples, templates, presentations, posters, etc.
2. Presentation of the project (this section assesses the effort made to present the project to others). 10% in relation to
the rest of the competences.
The presentation is not comprehensible due to lack of coherently organised information (ranging from 0-10)
The presentation is difficult to follow due to sequential and logical failure (0-10)
The presentation is clearly comprehensible as information follows a logical sequence (0-10)
The information is presented following a logical sequence and in a European language other than Spanish (0-10)
3. Adaptation of the design and assessment. 15% in relation to the rest of the competences.
No assessment design is presented (ranging from 0-10)
The assessment design presented is not 100% coherent with the assessment objectives (0-10)
The assessment design presented is 100% coherent with the assessment objectives (0-10)
The assessment design presented is coherent, creative and full of instrumental and graphic resources (0-10)
4. Self-assessment of the whole process (this section assesses the constraints and limitations encountered in the
process, together with the required resources and competences to develop the project both individually and as a group.
It also assesses the steps taken to alleviate the constraints as well as the abilities developed to achieve the objectives).
20% in relation to the rest of the competences.
No group self-assessment is presented (ranging from 0-10)
The group self-assessment presented is poorly argued (0-10)
Errors and achievements of the group are clearly presented (0-10)
Errors and achievements are identified and their causes explained (0-10)
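The structure described above (four competences, each with a relative weight and four evidence descriptors scored 0-10, so that a competence totals 0-40) can be sketched as a simple data structure. This is an illustration only, not the authors' implementation; the identifier names are ours:

```python
# Sketch of the eRubric described above: four competences, each with a relative
# weight and four evidence descriptors, each descriptor scored on a 0-10 scale.
ERUBRIC = {
    "technology_used": {"weight": 0.05, "descriptors": 4},
    "project_presentation": {"weight": 0.10, "descriptors": 4},
    "design_and_assessment": {"weight": 0.15, "descriptors": 4},
    "self_assessment": {"weight": 0.20, "descriptors": 4},
}

def competence_total(evidence_scores):
    """Sum the four evidence scores (each 0-10) of one competence -> range 0-40."""
    assert len(evidence_scores) == 4, "each competence has four descriptors"
    assert all(0 <= s <= 10 for s in evidence_scores), "scores are on a 0-10 scale"
    return sum(evidence_scores)

print(competence_total([7, 8, 6, 9]))  # 30
```

The 0-40 competence totals are what make the correlational analyses mentioned in section 2.1 possible.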
2.2. Analysis Techniques
The data for the present study were taken from the analysis of the third of the three academic years studied between 2007 and 2010. Throughout these years the methodology, the eRubric and the study instruments improved. In the
last year there was a total of 70 students and 14 groups, producing a total of 994 assessed eRubrics. The data were
organised as follows: assessor identifier (no. of student and teacher); no. of the group assessed; data from each of the
competences and their assessment by the group, and student assessor's exam grade.
The data analysis computed the assessment of each competence provided by each participant and the sum of all assessments received by each group. Then, the difference between the teacher's and each student's assessment of each group was calculated for each of the competences. In this way the distributions and differences of the teacher and student
assessments are obtained. It is worth noting that when the teacher's assessment lies at an extreme of the scale, the sign of the difference is constrained: if the teacher awards the maximum score, students' differences can only be negative, and if the teacher awards the minimum, they can only be positive; when the teacher's assessment is not extreme, the difference can be positive or negative.
The scale used to assess each indicator in each competence ranged from 0 to 10, as this is the scale students were most familiar with, being the one used for grading in the Spanish educational system. The means and standard deviations of the distribution of the differences between student and teacher assessments were calculated for each competence and for the total of all competences. The purpose was to find out which competences show the greatest differences between student and teacher assessment, and whether or not these differences carry over to the total; one competence may be assessed at its lowest and another at its highest, in which case the total assessment average would be similar to the teacher's.
This method allows us to detect all the differences, regardless of whether they arise in the assessment of a single competence or across all competences. Additionally, the average of these differences in peer assessment was calculated for each student. This allows us to distinguish those students who, on average, over-assess in relation to the teacher from those who under-assess, and finally those whose assessment is similar to the teacher's. Students' attendance at the activities was also taken into account, differentiating between the assessments provided by regular attendees and by irregular attendees.
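The procedure above (per-student averages of student-minus-teacher differences, used to distinguish over-assessors, under-assessors and students close to the teacher) can be sketched as follows. The scores here are made up, and the ±2-point band for "similar" is our assumption for illustration; the study's real data comprised 994 eRubrics:

```python
from statistics import mean

# Hypothetical totals (0-40) given to three groups: the teacher's assessment
# and each student's peer assessment of the same groups.
teacher = {1: 30, 2: 28, 3: 35}
students = {
    "s01": {1: 27, 2: 26, 3: 34},  # tends to under-assess
    "s02": {1: 33, 2: 32, 3: 38},  # tends to over-assess
    "s03": {1: 30, 2: 29, 3: 34},  # close to the teacher
}

def mean_difference(scores):
    """Average of (student - teacher) over the groups this student assessed."""
    return mean(scores[g] - teacher[g] for g in scores)

def classify(diff, threshold=2.0):
    """Label a student by their average difference (the threshold is an assumption)."""
    if diff <= -threshold:
        return "under-assesses"
    if diff >= threshold:
        return "over-assesses"
    return "similar to teacher"

for sid, scores in students.items():
    d = mean_difference(scores)
    print(sid, round(d, 2), classify(d))
```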
Data and variable analysis:
Differences between teacher and student assessment of competences
Differences between teacher and student assessment of competences as compared to students’ exam grades
Changes detected in peer assessment over time as compared to teacher assessment. The aim is to find out
whether the internalisation of the criteria improves in line with a more frequent use and practice of the eRubric
methodology and peer assessment.
Differences with other variables such as attendance
3. Results
Graphic 1 shows the average of the differences between the total assessments of each group provided by each student
and by the teacher respectively. In the first two groups, it can be observed how students under-assess in relation to the
teacher, as they seem to be more demanding with their peers. By contrast, in the following groups there is a tendency to over-assess in relation to the teacher. There might be various reasons for this: students may want to favour their peers,
they may have more reference information as they assess more and more groups, etc. In the competence analysis
(graphic 2) the tenth group is the most irregular, as it has a positive difference in the first three competences and a very
negative one in the fourth competence. This explains why the total of peer assessments for this group is similar to the
teacher’s assessment (graphic 1). Another aspect to be highlighted is the fact that students under-assess their peers in
relation to the teacher in the fourth competence. It can also be observed that, except for the first group and the unusual case of group 10, the average of the differences in the assessment of each competence generally stays within five points.
If we look at the averages of the differences between the assessments provided by each student and those provided by the teacher (see Chart 1), we find that 19.14% of students under-assess their peers in relation to the teacher, by an average of 3 points or more out of 40, whereas 10.63% of students over-assess by an average of 3 points or more, with these average differences reaching 6.4 points out of 40. Likewise, for 42.55% of students the average of the differences ranges between -1.9 and 1.9.
Graphic 1. Averages of the differences in the total assessment of each group by each student and by the teacher.
Graphic 2. Competence analysis
Averages of the differences    Percentage of students
-4 to -3                       19.14
-3 to -2                       14.89
-2 to 1.9                      42.55
2 to 2.9                       12.76
3 to 6.5                       10.63
Chart 1. Averages of students’ differences and percentages.
According to our data, the averages of these differences tend to go up. This may be due to students’ learning process
when applying the eRubric criteria over time, as well as to more psychological factors inherent to every assessor. Once
a first moment of “tension and responsibility to assess others” (where students might be tempted to carry out a more
demanding assessment) has passed, they might eventually apply the assessment criteria more calmly and rationally,
taking into account the shown evidence. Likewise, as projects are presented and students apply criteria to different
situations, they are learning to internalise the criteria. Additionally, students might improve their projects over time due
to them analysing their peers’ presentations. However, the teacher’s assessments do not seem to reflect this possibility: except for the first group (which has the lowest grade), the rest of the groups have high grades regardless of their presentation order (the ones with the highest grades are 3, 5, 6, 11 and 14).
The eRubric methodology and technique applied to peer assessment poses a huge professional challenge for both
students and teachers. Students are gradually learning to use it by means of regular practice, which contributes to them
better internalising criteria and assessment standards.
Some competences were easier to learn than others. For instance, students’ assessments agreed less with the teacher’s in the self-assessment competence: the analysis of the constraints and limitations encountered, and of the resources and competences required to develop the project both individually and as a group. We therefore understand that we must invest
more effort in these types of competences and pay constant attention to the follow-up of projects, demanding ongoing
analysis from students by using techniques such as elaborated learning diaries. In an increasingly globalised world, our
institutions will also be globalised [26] and will have to offer professional training for an equally globalised future [27].
This implies that from basic educational training, students should have more involvement in self assessment and in the
exchange of quality criteria and standards. In other words, we need our students to discuss these criteria and standards
when working with other students from their own or different universities, just like they would in the professional
world. Likewise, students need to share and generalise assessment criteria from other institutions. This higher level of
commitment from students towards the teaching and learning processes, together with globalisation and standards exchange, places assessment at the centre of attention. Likewise, the learning of quality criteria and standards shared with
other institutions requires the creation and research of online tools and services to assist teachers and students in
competence assessment, peer-assessment and self-assessment.
Acknowledgements The results of the present study were extracted from the project “Federated e-rubric service to assess university learning”, nº EDU2010-15432 (Official State Bulletin, BOE 31/12/2009). MINISTRY OF RESEARCH. DEPARTMENT OF
References
[1] Álvarez Méndez, J.M. Evaluar para conocer, examinar para excluir. Madrid: Morata; 2001.
[2] Brown, S. and Glasner, A. Evaluar en la universidad. Problemas y nuevos enfoques. Madrid: Narcea; 2003.
[3] Lara, S. La evaluación formativa en la universidad a través de Internet: aplicaciones informáticas y experiencias prácticas. Pamplona: Eunsa; 2001.
[4] Cabero, J. Formación del profesorado universitario en estrategias metodológicas para la incorporación del aprendizaje en red en el Espacio Europeo de Educación Superior. Píxel-Bit. Revista de Medios y Educación. 2006; 27: 11-29.
[5] Salinas, J. Hacia un modelo de educación flexible: elementos y reflexiones. In F. Martínez and M. P. Prendes (coords.), Nuevas Tecnologías y Educación. Madrid: Pearson/Prentice Hall; 2004: 145-170.
[6] Martínez, F. and Prendes, M.P. Redes de comunicación en la enseñanza. Las nuevas perspectivas del trabajo corporativo. Barcelona: Paidós; 2003.
[7] Gallego, M. Comunicación didáctica del docente universitario en entornos presenciales y virtuales. Revista Iberoamericana de Educación. 2008; 46: 1.
[8] Baelo, R. and Cantón, I. Las TIC en las universidades de Castilla y León. Comunicar. 2010; 35, V. XVIII: 159-166.
[9] Santos Guerra, M. La evaluación, un proceso de diálogo, comprensión y mejora. Archidona: Aljibe; 1993.
[10] Tejedor, J. and García-Valcarcel, A. Competencias de los profesores para el uso de las TIC en la enseñanza. Análisis de sus
conocimientos y actitudes. Revista Española de Pedagogía. 2006; 233: 21-44.
[11] Zabalza Beraza, M.A. La enseñanza universitaria: el escenario y sus protagonistas. Madrid, Narcea: 2002.
[12] Zabalza Beraza, M.A. Competencia docentes del profesorado universitario. Calidad y desarrollo profesional. Madrid, Narcea;
[13] Orsmond, P., Merry, S. & Reiling, K. The use of student derived marking criteria in peer and self assessment. Assessment and
Evaluation in Higher Education. 2000; 25-1: 23–38.
[14] Hamrahan,S. & Isaacs, G. Assessing Self- and Peer assessment: the students’ views. Higher Education Research &
Development. 2001; 20: 1.
[15] Prins, F., At. All. Formative peer assessment in a CSCL environment: a case study. Assessment & Evaluation in Higher
Education. 2005; 30-4: 417–444.
[16] Vickerman, P. H. Student perspectives on formative peer assessment: an attempt to deepen learning? Assessment & Evaluation
in Higher Education. 2008; 1–10.
[17] Boud, D. The challenge of problem-based learning. London, Kogan Page; 1997.
[18] Boud, D. Understandind learning ar work. London, Rutledge; 2000.
[19] Lapham, A. & Webster, R. Evaluación realizada por los compañeros: motivaciones, reflexión y perspectivas de futuro. En
Brown, S. & Glaser, A.; Evaluar en la universidad. Problemas y nuevos enfoques. Madrid, Nancea; 2003:203-210.
[20] Agra, Mª; Doval, I.; Gewerc, A.; Martínez, E.; Montero, L. and Raposo, M. Trabajar con ePortafolios. Docencia, investigación e innovación en la universidad. 2010 (CD, ISBN: 978-84-693-3740-0).
[21] Hafner, J.C. and Hafner, P.H. Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer-group rating. International Journal of Science Education. 2003; 25-12: 1509-1528.
[22] Hafner, J.C. and Hafner, P.H. Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer-group rating. International Journal of Science Education. 2003; 25-12: 1509-1528.
[23] Osana, H. and Seymour, J.R. Critical thinking in preservice teachers: a rubric for evaluating argumentation and statistical reasoning. Educational Research and Evaluation. 2004; 10/4-6: 473-498.
[24] Andrade, H. and Du, Y. Student perspectives on rubric-referenced assessment. Practical Assessment, Research & Evaluation. 2005; 10: 3.
[25] Marcos, L., Tamez, R. and Lozano, A. Aprendizaje móvil y desarrollo de habilidades en foros asíncronos de comunicación. Comunicar. 2009; 33-XVII: 93-110.
[26] Lefrere, P. Competing Higher Education Futures in a Globalising World. European Journal of Education. 2007; 42-2.
[27] Brown, P., Green, A. and Lauder, H. High skills: Globalization, competitiveness, and skill formation. Oxford, Oxford University Press; 2001.