Making a Difference: 2005 Evaluations and Assessment Conference. 30 November-1 December,
Sydney.
Self and Peer Assessment for Group Work in Large Classes
Darrall Thompson & Ian McGregor
University of Technology, Sydney, Sydney, Australia
Group learning tasks are now ubiquitous in formal university education, but assessment processes for
large classes have too often counteracted the benefits of peer learning. These benefits have been
identified in the educational literature in vital areas of graduate attribute development, but there is a
fundamental problem when the assessment of group work disregards the quality and level of
individual contributions. This paper briefly outlines examples of paper-based self and peer assessment
systems used to address this problem for large classes and lists possible reasons for their failure.
Examples of the implementation of an online system are described and comparisons from two
faculties drawn from student comments and online data. The authors conclude that there are crucial
advantages of this online approach that resulted in evidence of attribute development and a responsible
approach by students to the self and peer assessment process.
Introduction
In the current context of higher education, the development of graduate attributes, such as
communication skills, interpersonal skills and leadership, is of prime importance (Barrie, 2004).
Group work can be a very potent vehicle for the development of these attributes, but the
implementation of this peer learning strategy requires a subtle approach to the design of
learning tasks and assessment processes (Boud, Cohen & Sampson, 2001).
There are many issues in the design of successful group work strategies in formal courses,
including group formation and monitoring the groups’ progress. However, the issue of
‘unfair’ assessment for group tasks, particularly for large classes, is significant in student
feedback surveys and identified in the educational literature as a cause for concern (Boud,
Cohen & Sampson, 2001; Freeman & McKenzie, 2001). For example, Boud, Cohen and
Sampson argue,
assessment is the single most powerful influence on learning in formal
courses and, if not designed well, can easily undermine the positive
features of an important strategy in the repertoire of teaching and learning
approaches. (Boud, Cohen & Sampson, 2001, p. 67)
In the design of group tasks lecturers have often avoided the issue of group assessment by
requiring individual submissions from group work activities. This method undermines the
important design criterion that group tasks should require interdependence, and it tends to
promote non-collaboration, plagiarism and the students' adoption of a surface approach to
their development of group work attributes (Barkley, Cross & Major, 2005; Davis, 1993).
This paper is a result of the authors’ separate attempts to address the problem of group task
assessment for large classes. A brief critique of paper-based approaches to self and peer
assessment of group work is followed by examples of the use of an online system in an
undergraduate design subject and a business subject at the University of Technology, Sydney
(UTS). The comparison and analysis of the online implementations is then discussed,
including similarities and differences between the design and business results, presented in
tabular form. Tables 1 and 2 show evidence of the responsible and reflective approach that students have
taken to the self and peer assessment process. This is supported by evidence of attribute
development found in focus group and staff comments. The conclusions drawn in this paper
focus on the advantages of the online system that may be generalisable to other university
contexts where large classes involve group-based assessment of learning activities.
The Fundamental Group Work Assessment Problem
Even in well-designed group work projects, giving all group members the same mark
regardless of their input appears to be unacceptable from the students' perspective and can
cause deep resentment to build. The following quote explains the problem concisely:
Each student in a group had received the same assessment mark regardless
of individual performance or group involvement. Students, attempting to
establish themselves in a new learning environment, complained bitterly
about the uneven workload, the lack of specific skills by some group
members, and the non-involvement of others, which had left them with a
sense of inequity, unfairness and considerable discontent. (Wilson, 2001)
University teachers are not in a position to accurately monitor the contributions of all
group members in the vast majority of group work projects for large classes. In educational
systems where a percentage assessment mark is central to the calculation of grades and other
awards, it would be beneficial to provide a fair and accurate way of adjusting the assessment
of group tasks to reflect individuals’ contributions.
Self and peer assessment for group work is a process whereby the students are given the
responsibility to rate themselves and their peers according to the levels of contribution to the
group task, using criteria that accurately describe the range of possible contributions (Cheng
& Warren, 2000). A vast range of methods is available, but many university teachers
continue to give the same group mark to every member of the group regardless of the level
and quality of individual contributions.
Examples of Self and Peer Assessment Using Paper-Based Systems
The following section is a description of both authors’ attempts to use paper-based self and
peer assessment processes for large classes, with an analysis of the reasons for their failure to
distribute marks according to individuals’ contributions.
Example A: Faculty of Design, Architecture and Building - Paper-Based Approach
Context. Ninety-five students were in groups of four or five to design communication
solutions for community groups that could not afford the assistance of professional designers.
The student groups presented completed projects to community ‘clients’.
Result. Groups were given a percentage mark for their group work, which they then had to
share as a 60% contribution to their final individual mark for the whole subject. Each student
was given a piece of paper stating three criteria, and a twenty minute period in which to assess
their own and each other’s contributions to the group task. Without exception, they gave each
other an equal share of the group mark, even though it was clear that uneven contributions
had been obvious throughout the project work.
Example B: Faculty of Business - Paper-Based Approach
Context. Two hundred students in groups of three to six. Each group of students developed a
Strategic Business Plan and then implemented it in competition with the other groups in their
tutorial. This group work component represented 30% of the assessment of the subject, 15%
for the Business Plan and 15% for their implementation of the Business Plan as documented
in a final group presentation to their tutor.
Result. The students almost always rated each other equally even when there had been
feedback from students during the semester that some were not doing their 'fair share' of
work or that a conscientious student was doing more than a fair share.
Discussion of Feedback in Examples from Two Faculties
Whilst the authors wanted to use self and peer assessment as both summative and formative
feedback in the group tasks described, the results were disappointing. An example of the
paper-based system used in the Faculty of Business can be found in Appendix A.
The practicality of administering both paper-based processes with reasonably large student
cohorts was onerous and a serious disincentive from the lecturer’s perspective. Collecting self
and peer feedback at the end of a subject was also very difficult where students were absent or
where part-time tutors were employed. Analysis of both lecturer and student comments from both
Faculties yielded a list of possible reasons for the failure of both paper-based systems in
producing a fair distribution of marks.
Possible Reasons for Unsatisfactory Results from the Paper-Based Approaches
There are a number of possible explanations for why the paper-based approach did not
achieve the desired goals. These include:
• lack of time for students to reflect after the submission of work;
• a small number of self and peer assessment criteria that did not address the range of
possible contributions to the group task;
• lack of anonymity, arising from the practicality of self and peer feedback happening
when all the students were together;
• lecturers' reluctance to perform large data inputs and complex conversions of the
group marks into individual marks;
• lack of an explicit mathematical calculation method, leaving the process open to
subjective assessment by the lecturer;
• difficulty of distribution and collection of self and peer assessment forms, particularly
with part-time courses and lecturers;
• lack of clear explanation by teaching staff, which may have caused students to consider
self and peer assessment a 'farce'.
The last point listed relates partly to a misunderstanding of the two-stage process involved in
the group task assessment. The first stage is the assessment of the group task by the lecturer
according to the learning objectives and assessment criteria for the task. The second stage is
the process by which students rate their own and their peers' levels and qualities of
contribution to the achievement of the group mark, using a different set of criteria. The
following section explains how this second process works with examples from a PowerPoint
presentation developed to explain the system to staff and students.
Examples of Self and Peer Assessment Using an Online System
It was the realisation that paper-based systems were not viable for large classes that led to
one of the authors' (Thompson) involvement in the development of an online Self and Peer Assessment Resource
Kit (SPARK) at UTS in 1998. It was released in 2000 as an ‘open source’ project and
information is available at http://www.educ.dab.uts.edu.au/darrall/SPARKsite/. This online
system was based on early academic research into group work assessment (Goldfinch &
Raeside, 1990; Goldfinch, 1994), and was developed by a multi-disciplinary team of
academics at UTS. The ratings process is achieved by students accessing a web interface with
pop-up rating scales against a set of agreed group work criteria, as shown in Figure 1.
Figure 1. Screenshot of SPARK rating screen
When each of the students has rated themselves and their colleagues in the group against
each criterion, SPARK calculates an 'adjustment factor', called a Self & Peer Assessment
(SPA) factor that is multiplied by the group mark to produce an individual mark for each
group member.
A PowerPoint slide presentation was developed to explain the effect of this second stage of
the assessment process. It uses an analogy in which the group mark given by the lecturer is
represented by a larger or smaller apple. It is then explained that the students divide the apple
between them according to criteria relating to the range of possible contributions to the group
task. An important feature is that students have to rate their own performance as well as
that of each of their peers in the group.
For example, the apples shown in Figure 2 are designated to represent the assessments
given to two differently performing groups. It is clear that a member of Group A (80% mark)
or Group B (60% mark) who has been credited by his or her peers with making an
outstanding contribution to their group’s final result deserves a higher mark than the other
members of his or her group. Similarly a member of either group who contributed minimally
to the group result deserves a lower mark than the percentage given by the lecturer for the
group overall.
Figure 2. PowerPoint slide showing apple analogy to explain the two-stage assessment process
An important feature of the ratings process shown in Figure 1 is that ratings criteria
express a broad range of possible contributions and relate to the development of good
teamwork skills.
The rating scale from -1 to +3 used in Figure 1 was designated as follows:
3 = a major contribution on a particular criterion
2 = average contribution ('average' here means much the same as the rest of the group)
1 = below average contribution
0 = no contribution
-1 = a detrimental contribution
The plan view of the apple shown in Figure 3 illustrates how the factor produced by the
SPARK software, multiplied by the group mark, has altered the individual marks of each
student in Group A. In this group of five students, two students have received considerably
higher marks than other group members and two have received less than the group mark
given by the lecturer for Group A's result (80%).
Figure 3. PowerPoint slide showing the comparison between individual students’ results after the self
and peer assessment process.
The comparison of the final division of marks for both groups A and B shows that one
student in Group A has received a higher percentage than two students in Group B because of
the factors produced by the criteria-based self and peer assessment process.
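To make the arithmetic of this comparison concrete, a minimal worked sketch follows. The group marks (80% for Group A, 60% for Group B) follow the Figure 2 example, but the adjustment factors are assumed values chosen for illustration, not actual SPARK output.

# Illustrative arithmetic for the two-stage assessment process (Python).
# Group marks follow the Figure 2 example; the SPA factors are assumed
# values for illustration, not actual SPARK output.
group_a_mark = 80.0
group_b_mark = 60.0

# individual mark = SPA factor x group mark
outstanding_in_group_b = 1.30 * group_b_mark   # 78.0
minimal_in_group_a = 0.90 * group_a_mark       # 72.0

# A highly rated member of the weaker group (78.0) finishes above a
# minimally contributing member of the stronger group (72.0).
print(outstanding_in_group_b, minimal_in_group_a)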
In the following section we will show actual results of using the SPARK system in two
faculties at the University of Technology, Sydney.
Example C: Faculty of Design, Architecture and Building - Online System
Context. Ninety students in 18 groups of four to six engaging in a largely self-directed online
group work task. Each of the groups had the task of researching and 'becoming' a given
designer, and then debating, both online and live, with another group representing a designer
with opposing views. This group task was part of a larger subject, “Introduction to Typography”, and was
allocated 20% of the total mark for the subject. The group work online task had to be
completed without a specific allocation of class time, other than two half-hour lecture slots in
explanation of the online project, and a two-hour session for the live face-to-face debate by
students at the end of the project. Apart from this, all information and communication
occurred online through the integrated use of three online systems, including the SPARK
software for assessment of the groups’ contributions to both online submissions and the live
debate. The data collected for the case study included server access data, student
questionnaires, SPARK ratings and factors, student focus groups, student comments and staff
interviews.
The group work criteria used in this task for self and peer ratings were nine in all:
• Helping the group to function well as a team
• Level of participation in the online debate project
• Performing tasks efficiently
• Bringing things together for a good team result
• Helping to decide who does what when
• Contributing to the cogency of written submissions
• Doing research and finding references
• Suggesting ideas
• Understanding what was required
These criteria were rated using a scale of plus 3 to minus 1, as described for Figure 1. Students
were able to rate themselves and each other for a period of one week after the online
submissions and final live debate were completed.
Example D: Faculty of Business - Online System
Context. Two hundred and twenty-five students in groups of three to six in the same subject
as described earlier in Example B: Faculty of Business - Paper-Based Approach.
Seven criteria were used in SPARK for this subject as the basis for rating each student’s
contribution to the group’s performance. They were:
• Understanding what is required
• Suggesting ideas
• Performing tasks effectively
• Organising the team and ensuring things get done
• Level of enthusiasm & participation
• Helping the group to function well as a team
• Doing a fair share of the work.
A rating scale of 0-5 was used for each criterion, with the following legend:
0 = No contribution
1 = Well below average for the team
2 = Below average for the team
3 = Average for the team
4 = Above average for the team
5 = Well above average for the team.
Students were given one week after their group presentation to complete the online ratings
process.
Calculating Individual Marks from the Group Mark
In both of the examples described above, the factors were then calculated within the online
system and multiplied by the group mark to produce individual marks. There are actually two
different SPARK factors generated. The first is derived from the self ratings divided by the
average of the group's ratings (SA/PA), and the second is a combination of all ratings (SPA),
which is the factor multiplied by the group mark.
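Since the paper does not reproduce SPARK's internal formulas, the following Python sketch is a plausible reconstruction rather than the actual implementation. It assumes the SPA factor is the square root of the ratio between a student's total ratings and the group's average total (a moderating square root appears in some published descriptions of SPARK), and that SA/PA is the self-rating total divided by the mean of the peer-rating totals; the names and ratings are invented.

from math import sqrt
from statistics import mean

# ratings[rater][ratee] = one rating per criterion (invented data)
ratings = {
    "ann": {"ann": [3, 2, 2], "ben": [2, 2, 1], "cho": [3, 3, 3]},
    "ben": {"ann": [3, 3, 2], "ben": [2, 2, 2], "cho": [3, 3, 2]},
    "cho": {"ann": [2, 3, 3], "ben": [1, 2, 2], "cho": [2, 3, 3]},
}
members = list(ratings)

def total_received(student):
    # Sum of all ratings (self and peer) received by a student.
    return sum(sum(ratings[rater][student]) for rater in members)

group_average = mean(total_received(s) for s in members)

def spa_factor(student):
    # Assumed formula: square root of the student's total relative to
    # the group average total.
    return sqrt(total_received(student) / group_average)

def sa_pa_factor(student):
    # Self-rating total divided by the mean of the peer-rating totals;
    # a value below 1 suggests the student underrated themselves.
    self_total = sum(ratings[student][student])
    peer_totals = [sum(ratings[r][student]) for r in members if r != student]
    return self_total / mean(peer_totals)

group_mark = 70.5  # mark awarded by the lecturer for the group task
for s in members:
    print(s, round(sa_pa_factor(s), 2), round(spa_factor(s), 2),
          round(spa_factor(s) * group_mark, 1))

With these invented ratings, a student whose peers rate them highly but who rates themselves modestly shows an SA/PA below 1 while still receiving an individual mark above the group mark, which is the pattern reported in Table 1.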
Comparison between these factors, shown in Table 1, indicates that those students who
rated well against the criteria and received higher marks than the group mark were not
overrating their own contributions; in fact, quite the contrary. Of the eleven students whose
marks rose by between 5% and 10% due to the SPA factor, nine underrated their own
contributions when compared to the average of their peers' ratings of their contributions.
Table 1. Students whose contributions were highly rated by their team, underrated themselves

SA/PA Factor    SPA Factor    Group Mark    Group Work Mark after SPA Adjustment Applied
1               1.07          70.5          75.4
1               1.07          66.5          71.2
1.11            1.07          71.5          76.5
1               1.07          80.5          86.1
1               1.07          86            92.0
0.97            1.08          73.5          79.4
0.98            1.08          70.5          76.1
0               1.09          66.5          72.5
1.09            1.09          80.5          87.7
1.01            1.1           75.5          83.1
0.94            1.11          76            84.4
1               1.11          71            78.8
This is a surprising result but one that has been a consistent feature of SPARK
implementations. It shows that students are being genuinely reflective about their own
contributions and not giving themselves the highest rating on all criteria. The opposite case of
underperforming group members overrating their contribution was also documented. An
example of students’ responsible approaches to the self and peer assessment process is seen in
Table 2.
Table 2. Ratings entered for each of the rating values, showing responsible use of the -1 rating

Rating    Description                    Percent
-1        Detrimental contribution       0.4%
0         No contribution                1.5%
1         Below average contribution     13%
2         Average contribution           29%
3         Above average contribution     56%
Total                                    100%
Table 2 shows responsible use of the -1 rating (used when a contribution on a particular
criterion is thought to have been detrimental to the group's process), as it was only used 12
times in a total of 2,986 ratings entered. The 0 and 1 ratings shown were, however, used
sufficiently to show that many group members were being genuinely reflective about their
own and their peers’ levels of contributions.
Comments collected in student focus groups and staff interviews after the ratings process
highlight some of the benefits they found in using SPARK. The students emphasised how the
anonymity of the online system encouraged their reflection. For example,
‘… it was interesting because before I used SPARK, when I thought about
this particular group experience, I just kind of thought some of us did a lot
of work and some of us didn’t. (Focus Group 3)
And then I looked at it and I tended to be more in the ‘did a lot of work’
group, and then I looked at it and there were all the questions and I went
through them, and it made me think about different angles of the group
work. Like the one question, one of the questions down the bottom—how
did everyone work at structuring the team and making sure that the team
was ….do you know what I mean…(Focus Group 3)
And I sort of had to give myself a really low mark for that because I just
sort of agreed to take on all this work without saying maybe we should
share it around and try to make an effort to get the team involved. And just
sort of taking it all on myself. (Focus Group 3)
Staff interviews completed after the group ratings process and calculation of marks
emphasised how SPARK provided a fair and viable result for staff using the system for large
classes.
‘… SPARK is wonderful because over the last few years, well, ever since
I’ve been teaching, it’s always limited the way we do group work because
every student gets the same mark. SPARK has enabled us to get students to
actually do self and peer assessment, which has enabled us to get fair
marks for group work. Which is just a huge thing for students.’ (Staff
member A)
‘… I think that idea of the students being able to put in relatively
confidential assessment of each other and to rate each other against criteria
is actually a really, really valuable thing and I think it worked well when I
look at the grades that came back.’ (Staff member B)
‘… one email I received from a student in the most recent semester
specifically commented that SPARK was good. However, another email
from a student in the same semester had quite a different view and
challenged the validity of the SPARK process. The problems seem mainly
to relate to clearly communicating to students how SPARK works and once
they understand it there is little argument that it is a reasonably fair process
for assessing individual student’s contributions to group work.’ (Staff
member C)
Discussion of the Online Examples from Design and Business
The early implementations in the Design, Architecture and Building (DAB) and Business
Faculties were approached differently in relation to rating scales and the use of the SPA
factor to 'individualise' the group marks. This did not appear to alter the approach of
students to the process, or significantly affect the analysis of ratings and factors from both
Faculties.
The authors had both experienced challenges from students to the use of SPARK and
found that these exchanges were valuable in enabling further explanation and clarification of
the system. In the design case, this led to the design of a PowerPoint presentation illustrated in
Figure 2, which is now used by all the lecturers using the SPARK system to explain the
process.
Students in both Faculties appeared to treat the self and peer assessment process seriously,
which had not been the case with paper-based approaches. This may have been due to the
confidentiality afforded by the online system and the opportunity for students to input
ratings over a period after the completion of the group task.
A comparison was made between the Design students and the Business students with regard to
the overrating of their own contributions relative to their peers' ratings, which can be seen
in Table 3 below.
Table 3. Design and Business students' overrating of their own contributions compared to the mainly
peer rated SPA factor

Design Subject           Business Subject
SA/PA      SPA           SA/PA      SPA
1.61       0.79          2.35       0.68
1.27       0.89          2.22       0.64
1.23       0.86          2.04       0.62
1.23       0.99          1.97       0.69
1.22       0.98          1.75       0.74
1.2        0.93          1.6        0.82
1.17       0.91          1.54       0.79
1.17       0.98          1.49       0.71
1.16       0.85          1.49       0.68
1.12       1             1.48       0.83
1.12       1.06          1.48       0.87
1.12       0.93          1.45       0.85
Average
1.22       0.93          1.74       0.74
It is interesting to note that all the Business students who gave themselves very high ratings
scored relatively low average Self and Peer Assessment (SPA) factors. The gap is much larger
for the Business students, although this is partly due to the different rating scales used (-1 to
3 in the Design subject and 0-5 in the Business subject). For most of the other comparisons,
there were no significant differences in the students' use of SPARK ratings, with both sets
of students using very low ratings sparingly.
Potential Improvements to the Online System
As students rating themselves very highly does have some impact on the final SPA factor, it
would be useful to have a Peer Assessment factor based only on the ratings from the other
group members, as sketched below. This would also enhance the feedback given to
students by providing them with a Self Assessment/Peer Assessment (SA/PA) factor, a Self
and Peer Assessment (SPA) factor and a Peer Assessment (PA) factor.
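A minimal sketch of the proposed peer-only factor follows, reusing the invented ratings structure from the earlier sketch. As with the SPA formula above, the exact calculation (including the moderating square root) is an assumption, not SPARK's documented method.

from math import sqrt
from statistics import mean

def pa_factor(student, ratings):
    # Peer Assessment factor: same spirit as the assumed SPA factor,
    # but every student's self ratings are excluded throughout.
    members = list(ratings)
    def peer_total(s):
        return sum(sum(ratings[r][s]) for r in members if r != s)
    return sqrt(peer_total(student) / mean(peer_total(s) for s in members))

# e.g. pa_factor("ann", ratings) with the ratings dictionary shown earlier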
The other area where improvement could be made is in relation to feedback provided to
students about ratings for each of the group work criteria used in the ratings process. It may
also be interesting to give students the opportunity within the SPARK system to comment
about their own and their peers’ contributions.
Conclusions
The benefits of group work as a learning and teaching strategy seem to be enhanced, in part,
through students understanding that they will be fairly treated in the assessment process. The
responsibility that students exercised in the ratings process, in both Faculty examples offered
here, implies that a more careful and reflective evaluation of their group engagement is
achieved compared with the paper-based examples quoted.
There were a number of positive features of the online system that could make it attractive
for general application in any faculty where large classes use group-based assessment tasks.
The data from the online system facilitated evaluations of subtle group dynamics and the
underrating and overrating of individual contributions, providing excellent formative
feedback. The ease with which anonymous data was gathered remotely, at times convenient to
the student groups, encouraged both staff and students to engage in the process. The
allowance for entering or re-entering ratings up to a week after the group submissions gave
students ample time to reflect. Explicit criteria for rating of contributions focused the
reflection beyond vague generalisations about other group members. Student feedback about
the self and peer assessment process contained evidence of deeper approaches to their
reflections about the value of group work and the attributes they were developing through it.
The negative features were also important and highlight the fact that there is no such thing
as an online ‘solution’. Where staff were not confident in explaining the self and peer
assessment process student groups tended to treat the system in a ‘surface’ manner. Students
who misunderstood the self and peer assessment process blamed the online system for
producing an imbalance in marking. When criteria for individual ratings were not explained
as contributing to the important graduate attributes of working in teams, students questioned
the relevance of the process to their learning in the particular discipline. Where tutors did not
stay aware of the number of members in individual groups (particularly where students had
unenrolled), the online assessment was skewed because marks were shared between fewer
group members.
However, both authors found SPARK to be a better way of assessing individual
contributions to group work for large classes than the paper-based systems they had
previously used. Given that the benefits of group work can easily subside through a range of
motivational and other factors within the learning environment, it is hoped that the examples
in this paper may serve to encourage the appropriate use of online systems in group work
assessment for large classes.
Notes on Contributors
Working as a designer and senior lecturer (fifteen years in London, followed by fourteen
years in Australia) has given Darrall Thompson a broad and extensive view of both the
design profession and design education. He has over thirty journal articles and conference
publications and a research Masters degree in design education. He was involved with the
original multidisciplinary group of academics at UTS that researched the SPARK process as
part of a government grant.
In 2003, Ian McGregor was appointed to the position of Lecturer in the School of
Management in the Faculty of Business at UTS. He completed an MSc at London Business
School. His areas of expertise include strategic planning, marketing, feasibility studies, cost-
benefit analysis and ecological economics.
Address for correspondence. Darrall Thompson, Faculty of Design Architecture and
Building, University of Technology Sydney, GPO Box 123, Broadway, Sydney NSW 2007
Australia. Email: darrall.thompson@uts.edu.au
References
Barkley, E. F., Cross, K. P., & Major, C. H. (2005). Collaborative learning techniques: A handbook for
college faculty. San Francisco: Jossey-Bass.
Barrie, S. C. (2004). A research-based approach to generic graduate attributes policy. Higher
Education Research and Development, 23(3), 261-275.
Boud, D., Cohen, R., & Sampson, J. (Eds.) (2001). Peer learning in higher education: Learning from
and with each other. London: Kogan Page.
Cheng, W., & Warren, M. (2000). Making a difference: Using peers to assess individual students'
contributions to a group project. Teaching in Higher Education, 5(2).
Davis, B. G. (1993). Tools for teaching. San Francisco: Jossey-Bass.
Freeman, M., & McKenzie, J. (2001). Aligning peer assessment with peer learning for large classes:
The case for an online self and peer assessment system. In D. Boud, R. Cohen, & J. Sampson
(Eds.), Peer learning in higher education: Learning from and with each other. London: Kogan
Page.
Goldfinch, J. (1994). Further developments in peer assessment of group projects. Assessment and
Evaluation in Higher Education, 19(1), 29-35.
Goldfinch, J., & Raeside, R. (1990). Development of a peer assessment technique for obtaining
individual marks on a group project. Assessment and Evaluation in Higher Education, 15(3),
21-31.
Wilson, J. (2001). Teaching and learning discussion paper. Unpublished manuscript, University of
Technology, Sydney.
Appendix A: Example of Paper Based System used in the Business Faculty
21193: Introduction to Corporate Strategy
GROUP PARTICIPATION RATING FORM
This form will assist your lecturer in making an assessment of the participation of each member of your group.
Please complete this form and submit it to your Workshop Lecturer before the end of the last session.
YOUR STUDENT NAME (completing form): ………………………………………………………
SELF ASSESSED OVERALL CONTRIBUTION RATING: ………………/100 (50% equal to an even
contribution)
GROUP MEMBERS:
1. STUDENT NAME: …………………………………………………………………………………
OVERALL CONTRIBUTION RATING: ………………/100 (50% equal to an even contribution)
2. STUDENT NAME: …………………………………………………………………………………
OVERALL CONTRIBUTION RATING: ………………/100 (50% equal to an even contribution)
3. STUDENT NAME: …………………………………………………………………………………
OVERALL CONTRIBUTION RATING: ………………/100 (50% equal to an even contribution)
4. STUDENT NAME: …………………………………………………………………………………
OVERALL CONTRIBUTION RATING: ………………/100 (50% equal to an even contribution)
ADDITIONAL INFORMATION QUESTIONS:
1. For each member of your group, write their name against one of the following frequency graphs,
noting the quality of the student's contribution to business simulation decision making. (Included
under quality are factors such as whether contributions operated as a stimulus to discussion and
contributed to the general success of the group.)
Low quality High quality
(A) Frequent contributors
……………………………………………………………………………………
1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
(B) Fairly frequent contributors
.………………………………………………………………………………………
1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
(C) Infrequent contributors
………………………………………………………………………………………
1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
2. Rate the frequency and effectiveness of each group member's attempts to contribute effectively
to the decisions on the business simulation.
Low quality (No idea) High quality (Mentor)
………………………………………………………………………………………
1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
3. Rate the quality of each group member's preparation for decisions on the business simulation:
Poor (Did nothing) Fairly effective Excellent
………………………………………………………………………………………
1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
………………………………………………………………………………………………
4. Further comments on the value of individual group member’s contributions: