Journal of Technology and Science Education
J. Technol. Sci. Educ. Vol: 1 nº 1, 2011
ISSN 2013-6474
FOLLOW-UP AND FEEDBACK PROCESSES IN THE EHEA
Pablo del Canto, Isabel Gallego, José Manuel López, Esunly Medina, Francisco Mochón, Javier Mora, Angélica
Reyes, Eva Rodríguez, Esther Salami, Eduard Santamaría, Miguel Valero
Departament d’Arquitectura de Computadors, Universitat Politècnica de Catalunya
Barcelona, Spain
miguel.valero@upc.edu
Abstract
In this paper, we describe the work carried out by a group of professors to implement follow-up and
feedback processes for the activities students perform throughout the first academic years of their
Engineering studies. This project falls within the EHEA (European Higher Education Area) framework.
Our results show that these processes are key to enhancing student learning, that they can have an
impact on academic performance, and that they can be optimized so that professors can afford the time
they require.
Keywords European higher education area, feedback, continuous assessment, student centered learning
1 ABOUT THE FOLLOW-UP AND FEEDBACK
We understand follow-up as the process by which the professor supervises the work carried out by students
and takes decisions according to the results of that monitoring. As for feedback, we see it as the
information that helps students become aware of their own learning progress (or lack of it).
In fact, follow-up and feedback are two sides of the same coin and often appear together. For
example, the professor typically asks students to carry out a task and they do it. Afterwards, he/she corrects
the results and gives them back to the students so that they know what to improve. We thus find
follow-up by the professor and feedback for the student in the same activity.
However, they are conceptually different processes and one can appear without the other. For instance, the
professor can promote teamwork in which each student receives a considerable amount of feedback from
classmates without the professor having to supervise the task, which could even take place outside the
classroom. The opposite case would be one in which the professor collects the results of the tasks
simply to follow them up, without giving the student any feedback at all.
In any case, whatever shape follow-up and feedback processes take, whether together or in isolation,
both are key elements of a quality teaching organization. A case in point: timely feedback is one of
the seven principles of quality teaching according to [1] and has an important impact on the student's
learning process [2]. Moreover, the professor's follow-up of students' work is especially relevant in the
EHEA scenario, in which our students must devote a considerable amount of time and effort outside the
classroom according to the European Credit Transfer System (ECTS).
On the other hand, these processes can imply a significant cost in terms of the professor's dedication,
especially when group sizes are large. Thereby, any strategy to implement follow-up and feedback
processes efficiently is always of great interest.
In this article, we describe the strategies we currently use to implement these processes in
first-year Engineering subjects, paying special attention to keeping the professor's effort at an
acceptable level. Section 2 presents our work framework. In section 3, we present a model that has been
used as a guideline to introduce the changes necessary to adapt our subjects to the EHEA requirements.
Sections 4 to 8 describe in detail the parts of the model and the implementation most closely related
to follow-up and feedback. Finally, in sections 9 and 10 we present the results and conclusions of our
work.
2 THE CONTEXT OF OUR WORK
Our work takes place in the context of two subjects taught during the first academic year of
Telecommunications Technical Engineering (nowadays Degree in Telecommunications Engineering) at the
Escola d’Enginyeria de Telecomunicació i Aeroespacial de Castelldefels (EETAC) of the Universitat Politècnica de
Catalunya (UPC). The two subjects in particular are: Introduction to Computing (IC from now on), which is
taught during the first term and Programming Project (ProP from now on), which is taught during the second
term. In these two subjects the student has to acquire basic notions of computer architecture and
develop the capacity to build medium-sized applications in the C# programming language.
Both subjects are worth 6 ECTS each. Since the UPC assigns 25 h of workload per ECTS, our students have to
devote a total of 150 hours per subject which, divided over the 15 teaching weeks of a term, equals 10 h of
weekly student work. Of these 10 h per week, 3 are tuition (a single session per week) and the rest
correspond to tasks that students carry out outside the classroom.
The UPC framework also applies to professors: 1 ECTS implies a recognition of 11 h of professor's work per
group of students. The group size can be chosen according to the nature of the subject (groups of 80, 40 or
20 students). Of each professor's stated working hours, 7 must be tuition and the rest (up to 11) can be
devoted to other tasks, such as follow-up activities and feedback. In each subject there is flexibility in
choosing the group size and the number of lectures.
In our subjects we use the Project Based Learning (PBL) methodology [3]. At the beginning of each academic
year, students are faced with a project that they have to carry out as a team. The activities done
throughout the course are thus designed according to the learning needs of students as they develop their
project. In this context, we have chosen groups of 20 students and a small number of lectures
(approximately 7.5 h per ECTS). Therefore, the professor's weekly dedication is 4.4 hours per group of 20
students: 3 h for the weekly tuition session and 1.4 h for other tasks, such as follow-up and feedback.
Table 1 summarizes the time devoted to our subjects by both students and professors.
Table 1: Summary of the time devoted by students and professors

            1 ECTS     6 ECTS      15 weeks         Class
Student     25 hours   150 hours   10 hours/week    3 hours
Professor   11 hours   66 hours    4.4 hours/week   3 hours
This discussion is very relevant to the topic of this article, since one of the strategies to guarantee
efficient follow-up and feedback is to reduce the time spent on other tasks (in our case, by reducing the
expository sessions, or lectures). Certainly, 1.4 hours per week may not be sufficient for these
tasks but, at least, part of this time is recognized in the professor's official workload.
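The workload figures above follow from simple arithmetic on the framework constants. The sketch below reproduces them; the constants come from the text, while the function name is purely illustrative:

```python
# Workload arithmetic behind Table 1. Constants come from the text; the
# function name is illustrative.
ECTS_PER_SUBJECT = 6
WEEKS_PER_TERM = 15
STUDENT_HOURS_PER_ECTS = 25    # UPC workload assignment per ECTS
PROFESSOR_HOURS_PER_ECTS = 11  # recognized professor hours per ECTS and group
TUITION_HOURS_PER_WEEK = 3     # one 3-hour session per week

def weekly_hours(hours_per_ects):
    """Total hours per ECTS spread over the teaching weeks of one term."""
    return hours_per_ects * ECTS_PER_SUBJECT / WEEKS_PER_TERM

student_weekly = weekly_hours(STUDENT_HOURS_PER_ECTS)      # 10.0 h/week
professor_weekly = weekly_hours(PROFESSOR_HOURS_PER_ECTS)  # 4.4 h/week

# Hours left each week once the classroom session is subtracted:
print(round(student_weekly - TUITION_HOURS_PER_WEEK, 1))    # 7.0 (homework)
print(round(professor_weekly - TUITION_HOURS_PER_WEEK, 1))  # 1.4 (follow-up)
```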
Another important element in our work context is the use of a virtual campus called ATENEA (UPC's
institutional campus), which is based on Moodle [4]. This is an essential tool to distribute
information and material to students and, at the same time, to implement the follow-up and feedback
processes, as we explain further on in this paper.
Finally, we would like to emphasize that, besides the specific objectives of computing as a discipline, our
subjects are also responsible for contributing to the development of some generic competences
set by the UPC and by the EETAC itself. The 7 generic competences established by the UPC are:
1. Initiative and innovation
2. Sustainability and social commitment
3. Effective written and oral communication
4. Teamwork
5. Competent use of information resources
6. Autonomous learning
7. Third (foreign) language
And the two generic competences we add at the EETAC are the following:
8. Project work
9. Competent use of lab resources
In our subjects, we deal with competences 3, 4, 6 and 8, which fit naturally into the PBL teaching scheme,
as it is in this type of organization that students work autonomously in groups and have to regularly
report the results of their projects through written documents and oral presentations.
3 OUR TEACHING MODEL
Any method, strategy or teaching tool (such as the ones presented here to implement follow-up and feedback)
must suit the teaching model that inspires the organization of the subject itself. In our case, the
follow-up and feedback processes are aligned with the teaching model that we are developing to cope with
the EHEA requirements. In this section, we describe the basic aspects of this model.
The proposed teaching model, which is learning centered, can be summarized in the following sentence
(inspired by John Cowan [5]):
Prepare an activity program from which the student cannot escape without learning, make sure that
the student does such activities and if he/she reaches the end of the activity program, pass him/her.
This motto has inspired us to work by following the 9 criteria shown in Figure 1.
Follow-up and feedback processes are closely related to several of these criteria. Specifically,
criterion 4 refers to immediate feedback mechanisms; these mechanisms are based on the deliverables
produced in students' assignments (criterion 3), and they allow setting up recovery plans for students
who fall behind, or motivation plans for those who are well ahead (criterion 5). Students' opinions
(criterion 6) are also an object of follow-up and feedback. Finally, the assessment methodology
(criterion 9) can be partially fed by the follow-up and feedback results. In the following sections we
describe in detail how we implemented the criteria most related to follow-up and feedback in our subjects.
1. Clearly define the learning objectives (what your students should be able to do at the
end of the subject).
2. State in detail what your students should do in class and, most of all, outside the
classroom (especially outside, not because it is more important but because we do not
have the habit of planning homework carefully, and, obviously, because we cannot
observe this work directly).
3. State deliverables (results of the programmed activities which show whether the task
has been carried out or not, whether it was done well or badly, and whether the student
works regularly).
4. Establish immediate feedback mechanisms (based on the deliverables).
5. Prepare specific actions for students who have more difficulties (and also for the ones
who are ahead).
6. Plan a systematic gathering of both students' and professors' opinions throughout the
course and use these data as a trigger for a continuous improvement process.
7. Make sure that your activities plan contains achievable steps but with an ambitious
target.
8. Use cooperative learning and PBL strategies to motivate students to follow the course.
9. Design the assessment criteria as a motivation for students to meet the course
objectives and do the planned activities.
Fig. 1: The 9 criteria which characterize our model for adapting subjects to the EHEA
4 ACTIVITIES AND DELIVERABLES PLANNING
As already mentioned, our students should devote approximately 10 hours per week to the course tasks.
Most of these hours (7 h) correspond to activities outside the classroom, namely studying the topics of
the course (which are not explained in the classroom), doing exercises, organizing and attending group
meetings for the project's tasks, etc. In the classroom, we have one 3 h session per week in which students
perform tasks involving interaction with their classmates and with the professor (group exercises, teamwork
meetings, etc.). From time to time, the professor gives brief explanations, but these are closely tied to
the students' specific needs, which are detected precisely through the follow-up processes being carried out.
Accordingly, every week we publish in ATENEA a detailed plan of tasks (both inside and outside the
classroom), together with the specific materials needed to carry them out. Usually, each task has an
associated result which we call a deliverable and which is a key element in organizing follow-up and
feedback. Some examples of these deliverables are the following:
- A list of doubts (on paper) that each student brings to class as a result of his/her homework.
- A list of doubts (on paper) that each group produces in class and hands in to the professor after
having tried to solve each other's doubts as a team.
- A self-assessment report that each student has to post in ATENEA based on the results of an exercise
done outside the classroom.
- A project planning document that each group has to develop and post (in electronic format) in ATENEA.
- The computer program code resulting from the project carried out by each team.
- The result of an individual exam on basic knowledge carried out in class.
Student's work in the first subject studied here (IC) is carefully planned and deliverables are very
frequent (approximately 35 throughout the course). Work is less closely monitored in the second subject
(ProP) and there are fewer deliverables (around 20). In any case, one of the professor's duties is to keep
track of these deliveries during the course because, to pass the subject, students must hand in 80% of the
deliverables, as explained in section 8. The status of each student's deliveries can be visualized on a
webpage, available to all students in ATENEA, which looks as shown in Fig. 2.
Fig. 2: Webpage in which the fulfillment of student’s deliveries can be visualized.
Professors register each student's deliveries on this webpage. This resource thus plays two roles.
On the one hand, it makes clear that the professor knows which deliveries have been made and which have not
(in other words, students perceive that the professor controls the situation). On the other hand, the
progress of the course itself is visually available to students; in this way, if most students fulfill
their tasks, the ones who do not feel pressure to do so and rush to keep up (just as when the herd moves,
all move). Naturally, if most students are not doing their job, the effect is exactly the opposite (if the
herd does not move, no one does). Precisely to fight this potential negative effect, the first deliverables
of the course are very simple and are done in the classroom, so that the first marks in the follow-up grid
are practically guaranteed for every student.
Each of the tasks in the activity plan is assigned an estimated amount of time to be devoted by the
student; adding up all the estimated times should yield the number of hours assigned by the ECTS.
Students are instructed not to dedicate significantly more time than requested to each task. If they have
not finished within the estimated time, they should stop, reflect on their doubts and clearly identify
them to be discussed in class with both their peers and the professor (and move on to other tasks of this
subject or others). In addition, every week professors collect individual information about the time
devoted by each student to the weekly tasks. This monitoring of students' time allows us to detect bad
estimations of the time required by each task, as well as the difficulties a particular student may be
having. Moreover, this systematic gathering of data usually makes students more aware of the importance of
how they spend their time.
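The two checks just described (estimates adding up to the ECTS workload, and reported times flagging bad estimates) can be sketched as follows; the task names and all the hour figures are illustrative assumptions, not taken from the actual course plan:

```python
# Hypothetical weekly activity plan with estimated times in hours; the task
# names and figures are illustrative, not taken from the actual course plan.
weekly_plan = [
    ("classroom session", 3.0),
    ("study the week's topics", 3.0),
    ("individual exercises", 2.0),
    ("project group meeting", 2.0),
]

total_estimated = sum(hours for _, hours in weekly_plan)
# The estimates should add up to the 10 h/week implied by the ECTS assignment.
assert total_estimated == 10.0

def estimate_deviation(reported, estimated):
    """Relative deviation of a student's reported time from the estimate."""
    return (reported - estimated) / estimated

# A large positive deviation across many students suggests the estimate was
# too low (or that a particular student is having difficulties).
print(estimate_deviation(4.5, 3.0))  # 0.5
```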
5 TYPOLOGY OF STUDENT’S DELIVERABLES ACCORDING TO THE REQUIRED FEEDBACK
Obviously, the professor's work cannot be limited to recording students' deliveries. This handing in or
posting of documents must be the basis for providing students with suitable feedback to improve. To
organize an efficient feedback system, it has been very useful for us to classify deliverables into three
types.
For Type 1 deliverables, the professor only has to verify that they have been produced; it is not necessary
to check whether they are done well or badly. An example of this kind of deliverable is the weekly sheet of
doubts that each student brings to class to share with his/her classmates. The professor only checks that
each student has his/her own sheet of doubts but does not collect them, because students need them for
their group discussion.
For Type 2 deliverables, besides verifying that the tasks have been done, the professor must check that
they have a minimum level of quality. Otherwise, they are rejected and the student (or the group) has to
do them all over again. For example, each student regularly has to fill in a self-assessment report using
a precise format (which includes a question where the student has to indicate the errors committed during
the exercise). The professor then checks that every student has posted the report in ATENEA, that the
proper format has been used and that all the questions have been answered. If this is not the case, the
deliverable is rejected and the professor indicates to the student the reasons for this decision.
Type 3 deliverables require a careful assessment of their contents. An example is the first draft of the
project (a computer program). The professor has to download each group's project from ATENEA and submit it
to a very detailed evaluation according to quality criteria he/she has already set. The feedback, in this
case, consists of a report with the assessment results, which each group must take into account before
attempting a second version of the project.
Obviously, Type 3 deliverables require considerably more of the professor's time than Types 1 and 2.
Fortunately, in our case most deliverables are of the latter types: in IC we have 6 deliverables of Type 1,
30 of Type 2 and 7 of Type 3, while in ProP we have 3 of Type 1, 22 of Type 2 and 10 of Type 3.
Furthermore, all professors know in which weeks Type 3 deliverables are due, so we can plan ahead and
reserve some hours in our agendas for those weeks (as if we had a meeting or some classes). In conclusion,
good planning of feedback tasks helps professors keep their timing under control.
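The planning value of the typology can be sketched numerically. The deliverable counts below are the ones reported above for IC and ProP, but the minutes assumed per deliverable are purely illustrative assumptions, not measured figures from the course:

```python
# Rough planning aid for professor feedback time per course. The deliverable
# counts per type are those given in the text for IC and ProP; the minutes
# per deliverable and type are illustrative assumptions.
COST_MINUTES = {1: 5, 2: 15, 3: 90}  # assumed minutes per deliverable

def feedback_minutes(counts_by_type):
    """counts_by_type maps deliverable type (1, 2, 3) to its count."""
    return sum(COST_MINUTES[t] * n for t, n in counts_by_type.items())

ic = {1: 6, 2: 30, 3: 7}     # Introduction to Computing
prop = {1: 3, 2: 22, 3: 10}  # Programming Project

print(feedback_minutes(ic) / 60)    # total hours of feedback work for IC
print(feedback_minutes(prop) / 60)  # total hours of feedback work for ProP
```

Under these assumed costs, the few Type 3 deliverables dominate the total, which is why knowing in advance in which weeks they fall makes the workload manageable.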
On the other hand, providing feedback via ATENEA helps keep the information well organized and accessible
to everyone, which ultimately improves efficiency. Fig. 3 shows an example of feedback for a Type 3
deliverable. In the column called Comentari, the professor has written a text (not fully displayed in the
figure) with the details of the assessment (the same assessment for every member of the group, because it
is a teamwork deliverable).
Figure 3: The virtual campus facilitates deliverable management; in particular, it allows professors to
make detailed comments in a well-organized manner that is easily accessible to the students involved
6 SELF-ASSESSMENT AND PEER-ASSESSMENT
Students themselves can help professors produce feedback by stating whether their own work is correct
(self-assessment) or by assessing other students' work (peer-assessment), always following criteria
previously set by the professor. Besides helping the professor, self-assessment and peer-assessment are
strategies that also contribute to developing important abilities, such as self-criticism or the
evaluation of peers [6, 7].
In our subjects we regularly use self-assessment and peer-assessment strategies. The following lines
present some examples.
Frequently, students have to do homework exercises that have only one possible solution. Besides solving
each exercise, the student must compare his/her solution with the correct one, which can be found in
ATENEA. Since there is only one correct answer, any difference between the student's answer and the
official one points to a likely mistake on the student's side. By comparing the answers, students must
identify their own errors and fill in a self-assessment report stating which exercises went well, so-so
or badly, and which mistakes they made. As already explained, a self-assessment report is a Type 2
deliverable and the professor simply verifies that the form has been properly filled in. This activity
enhances students' awareness of their progress. Moreover, if the professor finds a self-assessment report
in which the student states that he/she has made many mistakes, this valuable information can be used to
take the necessary measures, such as those described in section 7.
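Because each exercise has exactly one correct answer, the comparison step can be sketched mechanically; the exercise names and answer values below are illustrative assumptions:

```python
# Sketch of the self-assessment comparison described above: each exercise has
# exactly one correct answer, so any mismatch is almost certainly an error.
# The exercise names and answer values are illustrative.
official_answers = {"ex1": 42, "ex2": "O(n log n)", "ex3": 7}
student_answers  = {"ex1": 42, "ex2": "O(n^2)",     "ex3": 7}

report = {ex: ("ok" if student_answers[ex] == official_answers[ex]
               else "review: answer differs from the official solution")
          for ex in official_answers}
mistakes = sum(1 for verdict in report.values() if verdict != "ok")

print(mistakes)  # 1 -> many mistakes could trigger the measures of section 7
```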
Peer-assessment is most suitable for deliverables with multiple correct answers. Two illustrative cases
follow.
As stated before, students must carry out a computer project that meets a series of requirements and
specific quality criteria in the form of grading rubrics [8].
Each group must deliver a first draft of the project, which is evaluated by the professor according to the
quality criteria (this is an example of a Type 3 deliverable, as mentioned in section 5).
The final version of the project must be handed in on the last day of class. In this last session, each
group assesses the quality of the projects carried out by two other groups. This activity lasts one hour
(half an hour per project). Each group produces, for each evaluation, a report assessing their peers'
project according to the quality criteria; in particular, each computer program has to pass a series of
conveniently specified tests. This report includes both a qualitative assessment and a quantitative grade
for each quality criterion.
As a result of this activity, the professor obtains two evaluations per project. If both assessments
reasonably coincide and the professor's own perception of the project goes along the same lines, the final
mark is generated by averaging the marks given by the evaluators. In case of discrepancy, the professor
makes an additional assessment to decide the final mark. In any case, this process makes evaluating the
final version of the projects inexpensive in terms of the professor's time.
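The aggregation rule just described can be sketched as follows; note that the agreement threshold is an illustrative assumption, since the text does not quantify what counts as a discrepancy:

```python
# Sketch of the final-mark rule described above: average the two peer marks
# when they reasonably coincide; otherwise the professor makes an additional
# assessment. The agreement threshold is an illustrative assumption.
AGREEMENT_THRESHOLD = 1.0  # maximum accepted gap between marks (over 10)

def final_project_mark(peer_mark_a, peer_mark_b):
    """Return the averaged mark, or None when the professor must decide."""
    if abs(peer_mark_a - peer_mark_b) <= AGREEMENT_THRESHOLD:
        return (peer_mark_a + peer_mark_b) / 2
    return None  # discrepancy: an additional assessment is needed

print(final_project_mark(7.5, 8.0))  # 7.75
print(final_project_mark(4.0, 8.5))  # None
```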
We also use peer-assessment to evaluate the oral presentations that students give during the course. In
particular, each group (usually of 3 students) must learn 3 topics from the subject's syllabus which are
necessary to carry out the project. These topics are learned using the puzzle technique [9]: each member
of the group studies one of the three topics. After this autonomous learning, each student meets the
students from other groups who have been assigned the same topic. This classroom activity allows students
to share doubts and deepen their learning of the topic together. Then, at home, each student prepares a
10-minute oral presentation to explain the assigned topic to his/her teammates. This presentation is
recorded with screen-capture software (including the voice of the speaker); in other words, it can exhibit
most of the quality criteria for oral presentations, except those related to body language.
Every student must post his/her oral presentation in ATENEA and watch his/her teammates' presentations.
Besides learning the contents of the other two presentations, students must evaluate their quality
according to criteria already established by the professor (related to structure, verbal language, images
used, contents, etc.). The assessment report is also posted in ATENEA so that each student can prepare an
improved version of the oral presentation the following week, using the comments and evaluations received
from peers. This definitive version is assessed by the professor using the same quality criteria (this is
also a Type 3 deliverable, which requires more effort on the professor's side).
As we can see, the two examples of peer-assessment follow a common pattern: both consist of evaluating
products for which there is no correct version to compare against (a computer program and an oral
presentation). Each of these products has a preliminary delivery that receives feedback to help students
produce a better version to be re-submitted for evaluation (and this second assessment is the one taken
into account for the final marking). In the case of the computer program, the first evaluation is carried
out by the professor, because it is essential to give very precise indications to help groups redirect
badly oriented projects; the final version is the one submitted to peer-assessment. In the case of the
oral presentation, the evaluation process is reversed: the first draft is evaluated by peers, who can give
very reasonable recommendations on the formal aspects of the presentation which, in turn, help each
student carry out the final version (assessed by the professor).
7 PLAN FOR ADVANCED AND SLOWER STUDENTS
The follow-up of deliverables provides professors with valuable information that allows them to intervene
to improve the process. This is especially important for the slower students; that is to say, the
intervention clearly aims at helping students who have shown greater difficulties. In the following lines
we explain how we organize the plan for slower students.
The first thing to be decided is which information will be used to identify the students who may need
help. For example, in the first-term subject we take as a basis the results of a self-assessment carried
out during week 3 and an individual exercise in test format (not counting towards the final mark) done in
class in session 4. From the results of these two exercises, we classify the students in our course into
the following three categories:
- Students who follow the course properly (they have obtained good results in both deliveries).
Through ATENEA, the professor sends these students a message congratulating them on the good job they are
doing. They are also informed (in the message or in the classroom) about their possible minor errors.
- Students who have obtained bad results and are not handing in the deliverables and/or are dedicating
little time to the subject (professors are aware of this thanks to the time follow-up described in
section 4). These students are sent a message informing them of their bad results and reminding them that
if they do not meet the deliverable deadlines or do not devote the estimated time to the assignments, they
will not pass the subject. They are also reminded of the course requirement that students must hand in at
least 80% of the deliverables to pass, as explained in section 8.
- Students who, although doing their tasks and devoting the estimated time to them, have obtained bad
results. These students are sent a message in which the professor recognizes their effort and encourages
them to keep going. Moreover, an interview with each student is scheduled during the professor's office
hours. In this appointment, the professor insists on the possibility of passing the subject if the student
keeps working hard. Usually, the student is faced with an exercise to solve during the interview so that
the professor can identify exactly what is not understood and answer any questions. Students are also
usually asked to do a reinforcement task that they have to bring to class to confirm that they no longer
have doubts.
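The three-way classification above can be sketched as a simple decision rule; the boolean inputs and the action strings are illustrative paraphrases of the interventions described in the text:

```python
# Sketch of the three-way classification described above, based on the early
# deliverable results and the time follow-up. Inputs and strings are
# illustrative paraphrases, not an exact transcription of the messages sent.
def intervention(good_results, hands_in_deliverables, devotes_time):
    """Return the action taken for one student after the week 3-4 exercises."""
    if good_results:
        return "congratulate (and point out possible minor errors)"
    if not hands_in_deliverables or not devotes_time:
        return "warn: 80% of deliverables and the estimated time are required"
    return "encourage and schedule an office-hours interview"

print(intervention(True, True, True))
print(intervention(False, False, True))
print(intervention(False, True, True))
```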
In week 7 there is a written test in which the professor can observe the impact of the intervention
described above and, in any case, determine whether extra help measures are still necessary for some
students.
About this plan we would like to make the following observations:
- All professors were already doing something very similar, but in a poorly planned way. The important
thing about the plan is that it has become a routine for all professors and groups; therefore, we can be
more efficient and reach more students.
- A plan like the one described requires individual information. In the first editions of the IC subject,
the week 4 exercise was a group activity, which is very useful for students, but the need for reliable
individual information made us re-design it as an individual task.
- It is crucial to intervene at the right moment, neither before nor after, so that the interviews
professors have with the appointed students can be efficient. In our case, we cannot intervene before
week 4 because we have hardly tackled the core content of the subject and we do not yet have sufficient
reliable individual information. If we intervened in week 6 or later, the amount of material involved in
students' doubts would be large and, therefore, the professor's time devoted to identifying and clarifying
doubts during meetings with students would be significantly greater. In week 4, a short interview can be
enough to detect and resolve key doubts.
- Obviously, the professor's office hours are used by students who want a meeting on their own initiative.
But, as we have seen, professors also use office hours to schedule interviews with slower students.
It is also important to deal with students who can do more and who therefore also need special attention.
In this case we do not have such a systematized intervention plan. Basically, for the first part of the
first subject, we have a set of programming exercises that we call "interesting": a series of tasks which
pose a more stimulating challenge than the ones regularly done throughout the course and which each
professor uses at his/her own discretion when a student with greater capacities or a special interest in
the subject is detected.
8 MARKING METHODOLOGY
The marking methodology is a key element of any teaching organization, as it determines to a large extent
what the student is going to do (and what he/she is not going to do). In this section, we describe the
marking methodology of our subjects, focusing especially on how it relates to the follow-up and feedback
processes.
Table 2 shows the four components of the marking methodology used in our subjects. Students can obtain up
to 2 points out of 10 in the overall mark just by submitting all the deliverables of the subject on time,
regardless of their quality. However, deliverables that show a complete lack of interest on the student's
side are rejected (for example, when he/she has not even used the official deliverable format). Moreover,
a student who has not completed 80% of the course deliverables cannot pass the subject, because the
expected work is simply not being done.
Table 2: Marking methodology components
Deliverables (20%): Reward or penalty for the student's continuous effort from the first week of class.
Projects (40%): The student's learning is oriented towards the project's needs; his/her investment of effort leads to an achievable objective in co-operation with peers.
Basic knowledge (30%): Co-operative work does not exclude the individual responsibility to acquire basic knowledge.
Attitude (10%): Includes the aspects related to the professor's subjective perception of the student's work.
The project's mark is basically the same for all group members, and its weight in the overall mark is
proportional to the time devoted to the project throughout the course (this is important in order to give this
work the value it really has). The basic knowledge comprises 8 types of essential exercises that each student
must be able to do individually in an exam. As the basic knowledge is crucial for the subject, a student who
does not master at least 7 of these 8 essential topics cannot pass the subject (no matter how good the
project is or whether all course deliverables have been submitted). To verify the basic knowledge, professors
schedule three tests throughout the course: in week 7 (an opportunity to show the acquisition of the first 4
topics), in week 14 (all topics can be attempted) and in the final exams week (the last chance to demonstrate
the acquisition of the basic knowledge). Obviously, once students succeed in one type of these essential
exercises, they do not need to repeat it in the following tests.
Professors assign the attitude mark according to their own criteria, with a certain degree of subjectivity.
Frequently, we use this component of the marking methodology to reward students who stand out from
their peers for several reasons: for example, they have been group leaders, they have attended classes
regularly, or they have been rigorous in the peer-assessment processes.
Naturally, the deliverables component (20% of the course mark) is the one most closely related to the
follow-up and feedback processes described throughout this paper. Our philosophy, then, is clear: we use
part of the student's mark to encourage him/her to do all the assignments, so that professors can organize
the follow-up and feedback mechanisms accordingly (aiding slower students, etc.), as previously described.
Furthermore, the marking method states very clearly that there is no way to pass the subject other than
doing the work: if, in the end, the teamwork (the project) or the individual work (basic knowledge
acquisition) lacks quality, the student will not pass the course.
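The combination of the four weighted components and the two pass/fail gates (80% of deliverables submitted, at least 7 of the 8 essential topics mastered) can be summarized in a short script. This is a hypothetical sketch, not the authors' actual grading tool: it assumes each component is marked on a 0-10 scale and, as a simplification, signals a failed gate by returning 0.0.

```python
# Hypothetical sketch of the marking rules described above.
# Assumptions (not stated in the paper): project and attitude marks
# are on a 0-10 scale, and a failed gate is reported as a 0.0 mark.

def final_mark(deliverables_done, deliverables_total,
               project, basic_topics_passed, attitude):
    """Combine the four components: deliverables 20%, project 40%,
    basic knowledge 30%, attitude 10%."""
    # Gate 1: fewer than 80% of the deliverables submitted -> cannot pass.
    if deliverables_done < 0.8 * deliverables_total:
        return 0.0
    # Gate 2: fewer than 7 of the 8 essential topics mastered -> cannot pass.
    if basic_topics_passed < 7:
        return 0.0
    # Deliverables count for timely submission regardless of quality:
    # up to 2 points out of 10 (i.e. 20% of the overall mark).
    deliverables = 10.0 * deliverables_done / deliverables_total
    basic = 10.0 * basic_topics_passed / 8
    return round(0.2 * deliverables + 0.4 * project
                 + 0.3 * basic + 0.1 * attitude, 2)

# Example: all 10 deliverables on time, project 7.0, 8/8 topics, attitude 6.0
print(final_mark(10, 10, 7.0, 8, 6.0))  # prints 8.4
```

Note how the gates dominate the weighted sum: a student with an excellent project but only 6 mastered topics, or with fewer than 80% of deliverables, fails regardless of the other components.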
9 EVALUATION OF RESULTS
To evaluate the results of the follow-up and feedback processes described in this paper, we can use
indicators of academic performance, drop-out rates and student satisfaction. These indicators have to be
assessed bearing in mind that our subjects are taught during the first year of engineering studies, where
the passing mark is 5.00.
One of the first indicators on which the follow-up and feedback processes can have an impact is the number
of drop-outs in the subject. In our context, the students who leave coincide with those who do not reach the
80% of delivered assignments required by our marking methodology. In fact, the students who give up the
course do so early, in the first weeks, as soon as they realize that they lack the time needed to carry out the
tasks or that skipping the set assignments will not allow them to follow the course properly.
The drop-out rate in academic year 2009-2010 (first term) was around 8% in our subjects. This figure
contrasts with the 30%-40% drop-out rates observed 3 or 4 years earlier, before the processes described in
this work were implemented.
As for the academic performance rate (number of students passing over number of enrolled students), it is
now around 70%, whereas 3 or 4 years ago it was between 40% and 50%. On the other hand, 20% of the
students who do not give up but still do not pass the subject are those who fail the basic knowledge
acquisition tests, a requirement of this subject, as carefully described in section 8.
Our students express their level of satisfaction with several elements of the subject through a final
questionnaire, in which they rate their agreement with a set of statements on a scale from 1 (do not agree at
all) to 5 (completely agree). Figure 5 shows the results of the questionnaire for the IC subject in academic
year 2009-2010 (first term).
1. In this subject I have learnt things that I consider useful for my training: 4.46
2. Professor's guidance has made my learning process easier: 3.52
3. Course material is well-prepared and suitable: 3.67
4. I knew at all times what I had to do (both in the classroom and outside it): 3.46
5. I have always felt very well-informed on my progress (or lack of it) in the course: 4.19
6. Teamwork has been very helpful for me: 3.78
7. Course assessment is suitable: 3.56
8. This course has helped me organize my own time better: 3.21
9. The development and assessment of course videos is good training for producing better oral presentations in the future: 3.42
10. I think it is appropriate that part of the course materials is in English: 3.61
Figure 5: Results of the satisfaction questionnaire that students fill in at the end of the
course. They correspond to the IC subject, academic year 2009-2010, first term. In total, 52
students answered (out of 81 enrolled)
These data (especially question 5) indicate that the follow-up and feedback processes described here are
effective in the students' view.
Finally, it is important to assess the cost for professors of implementing the follow-up and feedback
processes. In this sense, taking into account the time-dedication data described in section 3, a full-time
professor should be in charge of two groups of 20 students each, totalling 8.8 hours of recognized
dedication per week (the equivalent of 8 hours of weekly class in the previous model). Our experience
indicates that all the follow-up and feedback work described in this paper (plus the remaining tasks, such as
preparing materials, planning activities, etc.) can be done in that amount of time plus the 6 weekly hours of
tutoring assigned to a full-time contract at our University.
In conclusion, the time devoted to teaching in a full-time position (around 14 hours per week) is more than
enough to keep the system described above running properly and to obtain satisfactory results in terms of
students' academic performance and participants' opinions.
10 CONCLUSIONS
The conclusions of the present work can be summarized in the following claims:
1. The follow-up and feedback processes must be at the service of a teaching model that guides professors'
efforts. In our case, the model presented in Figure 2 has been the central element around which we
have directed all our efforts towards adapting our previous subjects to the EHEA.
2. The follow-up and feedback processes developed here have contributed to increasing our students'
academic performance, as shown by the decrease in drop-outs and the increase in the percentage of
passing students. This result is especially remarkable considering that it has coincided in time with
the lowering of the average entry mark required to access our telecommunication degrees.
3. The experience and the use of specific tools and methods (a digital campus, self-assessment, peer-
assessment, etc.) have allowed us to optimize the effort devoted to follow-up and feedback, so that
the time dedicated to teaching fits within a professor's full-time position (approximately 14 hours
weekly), which is more than sufficient to keep the described system running.