Collaborative learning through formative peer review: pedagogy,
programs and potential
Harald Søndergaard (a)(*) and Raoul A. Mulder (b)

(a) Department of Computing and Information Systems, The University of Melbourne, Melbourne, Victoria 3010, Australia
(b) Department of Zoology, The University of Melbourne, Melbourne, Victoria 3010, Australia
(*) Corresponding author. Email: harald@unimelb.edu.au

(Received 15 May 2012; final version received 23 August 2012)

Computer Science Education, 2012, 1-25, iFirst article. ISSN 0899-3408 print / ISSN 1744-5175 online. © 2012 Taylor & Francis. http://dx.doi.org/10.1080/08993408.2012.728041
We examine student peer review, with an emphasis on formative
practice and collaborative learning, rather than peer grading.
Opportunities to engage students in such formative peer assessment
are growing, as a range of online tools become available to manage
and simplify the process of administering student peer review. We
consider whether pedagogical requirements for student peer review
are likely to be discipline-specific, taking computer science and
software engineering as an example. We then summarise what
attributes are important for a modern generic peer review tool, and
classify tools according to four prevalent emphases, using currently
available, mature tools to illustrate each. We conclude by identifying
some gaps in current understanding of formative peer review, and
discuss how online tools for student peer review can help create
opportunities to answer some of these questions.
Keywords: student peer assessment; peer reviewing; peer review tools;
collaborative learning
1. Introduction
Judging from a growing literature, there is a steadily increasing interest in
student participatory activities in higher education. This interest has been
driven by educational research identifying the positive role of collaborative
learning, as well as by the availability of better computer-based tools
to support collaborative learning activities.
The use of computer-based tools offers advantages beyond facilitating
the administration of peer review. For instance, many such tools offer
learning analytics. Understanding patterns of use can help to suggest new
practices (akin to online shopping tools, which enable vendors to
understand buyers and their habits and personalise and adapt user
interfaces in response). For instance, PeerWise (Denny, Luxton-Reilly, &
Hamer, 2008) is a tool that manages students’ contributions to (and
subsequent use of) repositories of drill and practice questions. Its web-
based technology not only enables wide functionality (such as students’
ability to rate questions, discuss suggested solutions, and follow favourite
question-designers), but also leaves a seam of data for instructors and
researchers to mine, for example, to gauge student understanding or
identify student misconceptions.
PeerWise also exemplifies learning tools that are built around student-
created artefacts. Such tools have dual potential. Firstly, learning (by
others) is a primary purpose of the constructed learning objects (here drill
questions). But learning (and probably at a higher level) also results from
the production process itself, thus benefiting the producer. Both the
consumers and the producers learn.
The idea that students can benefit from trying out the instructor's role
and contributing learning materials resonates with ideas in ''active
learning’’, ‘‘learning by doing’’ and ‘‘learning by teaching’’. Referring
to Sfard’s (1998) distinction between (and proposed conjunction of)
‘‘acquisition’’ and ‘‘participation’’ models of learning, Collis and Moonen
(2005) see contribution-oriented pedagogy as ‘‘focusing on a practical
application of the participation model’’ and ‘‘an essential complement to
existing instructionist approaches that relate to the acquisition model’’.
With the contribution approach, ‘‘participation is not enough; the learner
must also contribute to make a difference’’. As examples, Collis and
Moonen mention students finding and sharing material on the web,
creating tutorial papers on specific topics, constructing drill questions,
conducting online task-directed discussions and performing peer assessment activities.
Collis and Moonen (2006) use the term ‘‘contributing student
pedagogy’’ and provide examples of such pedagogy from both higher
education and professional contexts. Hamer et al. (2008) use the same
term (and the acronym CSP) for pedagogy with student involvement in
activities that traditionally have been part of the instructor’s role, such as
the creation of drill questions, demonstrations, test data, visualisations
and other teaching materials, as well as the provision of feedback on
student work, and even grading. In defining CSP, Hamer et al. capture
the kind of learning that takes place in a supportive, collaborative
environment, rather than a competitive one: ‘‘A pedagogy that
encourages students to contribute to the learning of others and to value
the contributions of others’’ (Hamer et al., 2008). Such pedagogy sees a
class as a learning community with shared goals and interests. It relies on
the individual student’s recognition of the class as a group of fellow
travellers on a shared journey and acceptance of the assumption that
collaboration can work in everybody’s favour.
Hamer, Luxton-Reilly, Purchase, and Sheard (2011) identify and
discuss a number of computer-based tools for CSP. To fix the scope of
their survey, Hamer et al. (2011) consider the definition of CSP in greater
detail. Here we make two observations about CSP, as defined by Hamer
et al. (2008, 2011). First, although the range of activity is large, it does not
always involve computers. For example, students may be involved in
presentations, building models that are used in demonstrations, or
organising conferences (Gruba & Søndergaard, 2001). Second, it is much
clearer how to encourage contribution than how to encourage the valuing of
others’ contributions. Contribution can usually be encouraged by awarding
marks for this task. The ‘‘valuing’’ component, however, is critical in the
definition, and it is predicated on a certain classroom culture. We discuss
this in Section 2.4 in the context of community building.
In this article we focus on student peer assessment activity and tools
that support it. We ask whether there is a need for tools that are
specialised to particular disciplines, or whether we are likely to see
convergence towards a few universal student peer assessment tools. In
Section 2 we discuss student peer assessment and reviewing. We list
several benefits and problems, and stress the importance of coherence
with learning philosophy and learning design. In Section 3 we take the
perspective of computer science and software engineering, asking whether
there are requirements for computer-supported peer review activity that
are specific to the discipline. We stress the importance of peer review in
software engineering practice. We also provide pointers to articles
reporting use of peer assessment in computing and software engineering.
In Section 4 we consider the case of generic online tools that support the
peer assessment process. We distinguish between four emphases in peer
reviewing, and provide an example of a well-subscribed, mature tool for
each emphasis. The coverage is not intended to be complete but
complements an earlier survey (Luxton-Reilly, 2009). In Section 5 we
summarise research on the impact of student peer assessment, discuss
some controversies around educational research methodology, summarise
some gaps in current understanding of formative peer review, and offer
concluding thoughts.
2. Student peer assessment and reviewing
peer n. person who is an equal
assess v. [...] originally a frequentative form of Latin assidere sit,
especially as assistant judge or assessor, literally meaning ‘‘to sit
beside another’’.
Chambers dictionary of etymology
The Latin origin of ‘‘assessment’’ may have a strong participatory
flavour, but in tertiary education, assessment and feedback are
traditionally rather private activities. Assessment, whether formative
or summative, is usually performed by an instructor, and feedback
(when provided) tends to be one-way communication, not a cyclical,
iterative process. When feedback provision does involve dialogue, it is
usually a private communication between student and instructor. In this
tradition, a student has little access to other students’ thinking and work,
beyond what can be gleaned in the classroom or through group projects.
Students' norm referencing is limited to observing fellow students in class
and comparing marks.
There are, however, signs of change in the assessment space. There
appears to be a growing interest in variations of meaningful assessment and
feedback. In particular, peer assessment is gaining popularity as a way of
enriching feedback and at the same time giving students more opportunity
to develop important affective, social, communication and judgement skills.
By student peer assessment we mean a process in which students assess
other students’ work, often against explicit criteria, and usually providing
feedback in the process. We use the term ‘‘peer grading’’ for the variant in
which a student assigns a mark or grade to a peer’s work and this mark
has some impact on the peer’s overall result. By ‘‘peer reviewing’’ we mean
the variant in which generated feedback is used in a purely formative way,
although the peer reviewing activity itself may be graded.
In Section 5 we consider some of the extensive research literature on
student peer assessment and its effects. While studies (and meta-analysis
in particular) are complicated by the fact that ‘‘peer assessment’’ covers
many different types of practice, there are nevertheless plenty of evidence-
based studies that show improvements in specific attributes and
competencies when peer assessment is used. We refer to Falchikov
(2007, 132–134) for pointers and a concise summary. In the next section
we list a number of plausible benefits of student peer assessment.
2.1. Advantages of peer assessment
The potential benefits of peer assessment are many:
. Peer assessment increases the amount of diverse, timely feedback
received by a student.
. It can improve the quality of work submitted for assessment.
Students perceive benefit in peer assessment of their work, and the
process can help improve the standard of their work (e.g. Reily,
Finnerty, & Terveen, 2009).
. It fosters learning autonomy and helps students develop important
skills related to evaluation, diagnosis, summary and professional
communication (Falchikov, 2007). These are skills that are highly
valued in the workplace.
. It deepens learning, as the process of systematically measuring and
judging someone else's work requires particularly good understanding
of the subject matter. Moreover, it allows students to learn
from other students’ successes and weaknesses (Race, Brown, &
Smith, 2005).
. It invites reflection on one's own work and learning (Pearce, Mulder, &
Baik, 2009). Students’ occasional access to the work of peers is
valuable, not only because it enables peer learning; it also allows
students to better gauge their own learning and progress.
. It supports development of affective and social skills, including
empathy, diplomacy, assertiveness, learning how to negotiate and to
give and receive criticism (Topping, 1998).
. It offers an alternative channel for student engagement and
participation. A student who may be quiet in class can turn into a
valuable contributor, with much to say, when put in the reviewer’s
role.
2.2. Concerns about peer assessment
Of course there are also problems and pitfalls in the use of peer
assessment. The most common concern is with reliability and validity of
peer assessment, but there are other concerns as well. Hamer, Ma, and
Kwong (2005) give the following list of issues of possible concern:
(1) mechanisms for distributing assignments and collecting reviews;
(2) maintaining validity and reliability in the grading;
(3) motivating students to complete the reviews;
(4) minimising the influence of ‘‘rogue’’ reviewers;
(5) ensuring anonymity of reviewer and/or the student being reviewed;
(6) detecting and preventing plagiarism; and
(7) dealing with grading disputes.
However, (1) and (5) have become non-issues with the advent of modern
online tools for administering peer assessment (Hamer et al., 2005). Items
(3) and (4) can be addressed in various ways, but primarily by making the
review itself an assessment component which attracts marks. If the peer
reviewing process is considered valuable for its ability to develop critical-
judgement capability and other higher-order cognitive skills, then it
should be assessed. Item (6) is not specific to a peer assessment regime.
On closer scrutiny, the most challenging concerns stem from the use of
peer grading (Gehringer, 2001; Hamer et al., 2005). Topping (1998)
summarises 31 studies of validity and reliability of peer assessment,
noting that a ‘‘majority of studies (18) suggest that peer assessment is of
adequate reliability and validity’’ but ‘‘a substantial minority (7) found
the reliability and validity of peer assessment unacceptably low in
particular projects’’. Falchikov and Goldfinch (2000) found, based on 48
studies, that peer and instructor ratings were highly correlated, in
particular when ratings were based on well understood criteria. It would
be sensible to propose that validity and reliability are closely linked to
student maturity, but Falchikov and Goldfinch found no support for that
proposition. Navalta and Lyons (2010) analysed 45 articles submitted to a
journal that encourages postgraduate student submissions and uses both
students and academics as reviewers. For 33 student-submitted articles,
student reviewers and academics returned very similar decisions, while for
the remaining articles there was a near-significant difference, the student
reviewers being on the lenient side.
In a peer-review-of-writing context, Cho and Schunn (2007) and Cho,
Schunn, and Wilson (2006) have found that once the number of peer
reviews exceeds 3, peer reviewing is, in aggregate, highly reliable, and at
least as valid as instructor ratings. In a more general setting of knowledge
management, Cho, Chung, King, and Schunn (2008) suggest that a higher
number of (non-expert) peer reviews will not only compensate for lack of
expertise, but will produce more useful feedback, owing to the closer
cognitive proximity of peers, combined with experts’ tendency to
overestimate non-experts’ potential performance.
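One standard way to see why aggregating several raters improves reliability (a general illustration only, not necessarily the statistical model used in the studies above) is the Spearman-Brown prophecy formula: if single peer ratings agree with each other at an average level r, the reliability of the mean of k independent ratings is

\[ r_k = \frac{k\,r}{1 + (k-1)\,r} \]

so that, for example, a modest r = 0.3 gives an aggregate reliability of about 0.63 for k = 4 reviewers and 0.72 for k = 6.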
So reliability and validity may not be too problematic. (However,
perceptions also matter, and somewhat paradoxically, Cho et al., 2006
have observed low estimates of reliability and validity by students, while
concurrently observing high actual values.) Of course, items (2) and (7)
are non-issues when grading is not involved. In our own experience, it is
important to grade the reviewing activity itself, but preferable to avoid
‘‘students grading students’’. Peer grading may introduce a degree of
discomfort and/or an unwanted sense of competition amongst students,
jeopardising the collaborative potential. Of course, variants of peer
grading are possible. Students can provide grades that are indicative but
do not really count, as in the setup of Gibbs (1999).
An additional issue is whether peer feedback is comparable in quality
to that of instructors. Patchan, Charney, and Schunn (2009) extracted
over 1400 comment segments in an undergraduate history subject using
Scaffolded Writing and Reviewing in the Discipline (SWoRD) (see
Section 4.4) to compare student comments with those of writing and
content instructors. Apart from the expected instructor biases towards
specificity in writing and content respectively, the study showed that
comments offered by students were remarkably similar to those of
instructors. The only notable difference was that peer reviewers provided
significantly more praise (Patchan et al., 2009).
Finally, some students may object to peer assessment because it is new
to them, or because they believe it is the instructor’s responsibility. It is
clearly important that the rationale for the activity is well explained to
students before it is launched.
2.3. The authors’ experience
In our view, it makes little sense to discuss whether peer assessment is a
good strategy or not without considering the context within which it is
used. We have found that it works particularly well in a pedagogical
setting of collaborative learning. The potential benefits are easily
explained to students, and we have enjoyed a high degree of buy-in
from classes where peer reviewing has been carefully integrated into an
overall collaborative learning plan and aligned with other assessment
components (Pearce et al., 2009; Søndergaard, 2009). Staged project work
(which has its own benefits) lends itself particularly well to integrated peer
assessment. The value of feedback is diminished when feedback is used
simply in conclusion to assessment tasks, as an ‘‘after the event’’ activity.
A staged project is useful because it allows feedback to be produced and
digested for a project that is still in progress. With staged project work,
where each stage depends on the successful completion of those before it,
feedback on one stage becomes important input, and help, for the next
stage. This gives added meaning and value to feedback and encourages
reflection. Student peer assessment, amongst other things, helps reiterate
that such ''inter-stage'' feedback naturally works under strict time
constraints: above all, it must be timely.
2.4. Pedagogical alignment and learning community
As mentioned earlier, Hamer et al. (2011) define ‘‘contributing student’’
pedagogy as one in which students produce artefacts designed to
contribute to other students’ learning, and in which students are
encouraged to value such contributions. Student peer reviewing not
only fits the definition; separated from a pedagogical philosophy that
emphasises ‘‘contributing’’, ‘‘valuing’’ and ‘‘collaborating’’ it may not
work particularly well. It probably pays off best where a classroom
culture of mutual help, trust and respect has been established. In turn,
peer reviewing, like other participatory pedagogies, can contribute to the
building of a strong learning community.
The development of learning community is an important aspect of
CSP (Hamer et al., 2008). Naturally, we wish for students to feel at home
in their class, to see it as a community, something one may ‘‘belong to’’
and feel engaged with. Fortunately, modes of instruction are available
which aim to build community and exploit social activity to improve
learning. One purpose of collaborative learning is to turn a class into an
environment which students feel comfortable being part of, and to use the
generated trust and goodwill to enhance learning. Cross (1998) suggests
that learning community building not only enhances learning, but that its
aims are well aligned with an emerging, more sophisticated view of the
nature of knowledge. A focus on learning communities is, to quote Cross,
more than an academic diversion. It is providing an alternative view for
some of the most prevalent criticisms of our educational systems -
egalitarianism versus hierarchies, collaboration versus competitiveness, and
active participation versus passive absorption. The current wave of interest
in learning communities is not [...] just nostalgia for the human touch, or
just research about the efficacy of small-group learning, but a fundamental
revolution in epistemology (Cross, 1998).
The potential role of peer reviewing activity in building a supportive
learning community should not be under-estimated, but success requires
attention to pedagogical context and careful communication with
students to make their role clear.
3. Peer assessment in the technical disciplines
The ability to read and grasp, to summarise succinctly, to analyse and
diagnose, to evaluate and compare, and to communicate ideas and
opinions eloquently is critical in the professional lives of engineers,
analysts and programmers. The engineering profession uses peer
reviewing extensively, as a proven quality assurance method. Clearly
students of engineering and information technologies need reasonable
exposure to this kind of activity.
Like other engineering disciplines, the software engineering industry
employs standards and processes for the review of designs and software,
including team review, Fagan inspection, walkthrough and scenario-
based evaluation. The typical software engineer spends considerably more
time reading and maintaining software than writing new code. Trytten
(2005) notes that software engineering students get plenty of practice
writing code, but very little practice reading and comprehending others’
code. Reading and writing code are rather different activities, and each
has great vocational value.
In fact, it can be argued that familiarity with peer review is particularly
important in the computing and software engineering field. Working
software engineers soon learn the lesson that software testing has its
limitations and that peer review is an invaluable complementary tool.
Peer reviews find mistakes in requirements specifications, documentation
and manuals: problems that no amount of testing will help solve. A peer
reviewer can point to poor program structure or coding style, or can
identify a lack of clarity in code, problems which in the long run may be far
more costly than software bugs, owing to their greater impact on
maintainability. Moreover, peer review can meaningfully be applied early
in a project, with the potential to discover problems early, at a point when
effective testing is still infeasible. Wiegers (2002) suggests that there are
even more important (but not easily quantifiable) benefits of peer
reviewing in software engineering. These benefits come from process
improvements that will prevent errors in future products. Process
improvements in turn rely on the development of shared expectations
and understanding among production team members, possibly the most
important by-product of a peer reviewing culture.
There is a solid body of literature on the use of student peer reviewing
in computing, software engineering and information systems education,
mostly in the form of experience reports and accounts of student
perceptions of the value of peer reviewing. This covers a broad range of
subjects. Table 1 illustrates this diversity, without any claim of
completeness. The table may be of help to readers who wonder whether
peer assessment has been tried and is suitable in their particular area.
Most of the tools referred to in the table are not discussed in this article.
Table 1 provides pointers, and some of the tools mentioned (OASYS,
OPAS, PeerGrader and Praktomat) are covered by Luxton-Reilly (2009).
Before we turn to a discussion of generic peer assessment tools, and
criteria for usefulness, it is worth considering whether the technical
disciplines have requirements for a peer reviewing tool that differ from
those of other disciplines. As discussed in Section 4, we consider
customisability an essential requirement, and given sufficient flexibility
and customisability, it is hard to imagine that a generic peer review tool
will not be able to serve the needs of computing/engineering instructors.
As an example, consider the special needs of the code review.
(1) A tool used for code review should offer direct support for the
classification of comments: syntax error, semantic error, logical
flaw, performance issue, violation of standard, poor layout and so
on. There is no reason why this cannot be handled by flexible
rubric design features (see the sketch after this list).
(2) A code review tool should have the ability to handle different file
formats, accept multiple files and support different types of
documents and programs. As we shall see, these are features of
some current systems.
(3) Ideally a code review tool should provide the ability to annotate,
to link comments very closely to the site where they apply. Again,
this is within the capabilities of current technology.
(4) The most challenging and unique aspect of the evaluation of code
is that, ideally, it requires integration with an extensive toolbox. A
software reviewer does not simply read and think but will, early in
the process, want to place the code under review in a suitable
programming environment. This will enable the reviewer to use
language-specific tools for compilation, testing, fuzzing, profiling
and debugging. In an educational setting, such an environment
may also include specialised similarity detection tools like Moss
(http://theory.stanford.edu/~aiken/moss/).
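Returning to point (1) above, the following minimal sketch (our own illustration, not drawn from any of the tools discussed) shows how a flexible rubric facility might encode comment categories and criteria for a code review; the category names, field structure and example comment are hypothetical.

    # A hypothetical code-review rubric expressed as plain data; a real tool's
    # flexible rubric designer could capture something equivalent.
    CODE_REVIEW_RUBRIC = {
        "comment_categories": [
            "syntax error", "semantic error", "logical flaw",
            "performance issue", "violation of standard", "poor layout",
        ],
        "criteria": [
            {"name": "correctness", "prompt": "Does the code meet its specification?"},
            {"name": "clarity", "prompt": "Is the code easy to read and maintain?"},
            {"name": "style", "prompt": "Does the code follow the agreed coding standard?"},
        ],
    }

    def make_comment(category, location, text):
        """One classified, line-anchored review comment (cf. points (1) and (3))."""
        assert category in CODE_REVIEW_RUBRIC["comment_categories"], "unknown category"
        return {"category": category, "location": location, "text": text}

    # Example usage:
    print(make_comment("poor layout", "sort.py:42",
                       "Deeply nested loop; consider extracting a helper function."))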
Table 1. Some examples of the use of peer reviewing in computing-related subjects.

Subject | Tool used | Class size and context | Reference
Algorithms and data structures | CAROUSEL | Small/medium; first year | Hübscher-Younger and Narayanan (2003)
Algorithms and data structures | Paper based? | Large | Machanick (2005)
Algorithms and data structures | OPAS | Medium; third year | Trahasch (2004)
Compilers | PRAZE | Small; third year, staged project | Søndergaard (2009)
Computing foundations | Own, unnamed | Large class of first-year students and service students across disciplines | De Raadt, Toleman, and Watson (2005)
Design | Paper based | Medium; writing task | Settle, Wilcox, and Settle (2011)
HCI | Paper based | Large; third year, review based on lab demonstration | Purchase (2000)
Operating systems | PeerGrader | No detail | Gehringer (2001)
Programming | PeerGrader | No detail | Gehringer (2001)
Programming | OSBLE/PCR | Small; first year | Hundhausen, Agarwal, and Trevisan (2011)
Programming | Own, unnamed | Medium; first year | Reily et al. (2009)
Programming | OASYS | Large; first year, Unix shell programming | Sitthiworachart and Joy (2003, 2004)
Programming | Manual | Large; mostly first year, service students | Trytten (2005)
Programming | Manual | Medium; Java programming, data structures | Turner et al. (2008)
Programming | Praktomat | Medium/large; first year | Zeller (2000)
Software engineering | Paper based | Small; second/third year | Anewalt (2005)
Software engineering | PeerGrader | No detail | Gehringer (2001)
Software engineering | Manual | Small; fourth year, project focus, UML and Java | Garousi (2010)
Web programming | MyPeerReview | Small; undergraduate | Hämäläinen, Hyyrynen, Ikonen, and Porras (2011)
We see the last point as the only serious challenge to existing systems.
This is not to say that the other points do not provide challenges in the
educational setting. For example, preserving reviewee anonymity with a
peer review system that insists on single-file submission may require extra
care, as some file bundling tools, including Unix’s tar in default mode,
will wrap source identification together with the bundled files. These are,
of course, technical problems for which solutions can easily be found.
4. Tool support for managing student peer assessment
The administrative burden associated with building and managing
supportive learning communities by means of formative peer review has
been greatly diminished with the advent of modern online peer review
tools. Many of the early online tools owe their existence to academics
from computer science; hence it is not surprising that experimental
student peer assessment has had a solid uptake in that discipline.
In the following we review some currently available systems for
formative peer review, and summarise the capabilities of different
packages. In his review, Luxton-Reilly (2009) provided information on
some 18 tools available at that time. These were classified into generic
systems (which are highly configurable and support peer review activities
in a range of disciplines), domain-specific systems intended to support
activities in a particular domain, and context-specific systems written for
use in a specific course and bound to that context.
The market for such tools is undergoing rapid change, witness the fact
that many tools listed in Luxton-Reilly’s review now appear to be defunct
or no longer available, while some significant new tools have become
available. In reviewing new platforms, we follow Luxton-Reilly (2009) in
intentionally omitting tools that have as their sole purpose to facilitate
summative, rather than formative assessment (group work and/or grade
adjustment). We also omit those encompassing entire learning management
systems, such as OSBLE (Hundhausen, Agrawal, Fairbrother, &
Trevisan, 2009).
What are the desirable attributes of a modern peer review tool?
Fortunately, the needs of instructors and students are closely aligned:
pedagogically, both teachers and students need tools that will help deliver
improved learning experiences, and they have a common interest in tools
that make the process of administering and participating in peer review
simple and intuitive.
With these needs in mind, the following are attributes that we believe
are essential if modern peer review tools are to be used easily and
effectively.
Automation. A successful peer review tool must be able to automate
fundamental aspects of the peer review process. These include the
anonymisation of student work, the distribution of work between
reviewers and reviewees, and notifications to administrators and students
about aspects of the review process.
Simplicity. Systems should be simple and easy to use for both
instructors and students. This means an intuitive and attractive user
interface, integration with student record and learning management
systems, and the availability of instructions, help and other support.
Customisability. Because learners and teachers have tremendously
diverse needs, peer review tools should be flexible enough to allow them
to be configured for varying needs. At a minimum this means the ability
to upload any kind of document for review and to create individualised
review rubrics tailored to the learning objectives of the subject employing
peer review.
Accessibility. Student peer review tools should ideally be free, web-
based (24/7 access) and globally available to maximise accessibility and
ensure equitable opportunities for participation. Growing use of mobile
computing platforms can be expected to generate demand for mobile
versions.
In addition to these fundamental attributes, we discuss below some
further attributes that, while not essential to every subject employing peer
review, may be of particular interest to subjects with certain class
structures or pedagogical objectives.
Rule-based review distribution. In many classes, students are organised
into sub-units such as groups or topic areas, and an instructor may wish
to organise the peer review process so that particular reviewer-reviewee
pairings are avoided or mandated. For instance, if a group works
collaboratively on the same topic but group members submit assessable
work individually, it may be desirable to ensure that reviewers of this
work come from outside the group. By pairing dissimilar topic areas,
instructors can maximise the potential for learning by ensuring that each
reviewer is exposed to a new topic area different to their own. Also,
mismatched pairings reduce the potential for plagiarism between
reviewers and reviewees, because the content of the material being
reviewed is not pertinent to that being prepared by the reviewer.
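As an illustration of how such rules can be enforced, the sketch below (our own, not the distribution algorithm of any particular tool) assigns each submission a fixed number of reviewers while excluding self-review and same-group pairings; the group structure and reviewer count are assumptions for illustration.

    import random
    from collections import defaultdict

    def assign_reviewers(submissions, group_of, reviews_per_submission=3, seed=0):
        """Randomly assign reviewers to submissions, avoiding self-review and
        same-group pairings, and roughly balancing the load across reviewers.

        submissions: list of student ids (one submission each)
        group_of:    dict mapping student id -> group id
        """
        rng = random.Random(seed)
        load = defaultdict(int)          # reviews assigned to each student so far
        assignment = {}                  # submission author -> list of reviewers

        for author in submissions:
            # Eligible reviewers: not the author, not in the author's group.
            eligible = [s for s in submissions
                        if s != author and group_of[s] != group_of[author]]
            rng.shuffle(eligible)                    # random tie-breaking
            eligible.sort(key=lambda s: load[s])     # prefer lightly loaded reviewers
            chosen = eligible[:reviews_per_submission]
            for reviewer in chosen:
                load[reviewer] += 1
            assignment[author] = chosen
        return assignment

    # Example: six students in three groups, three reviews per submission.
    groups = {"amy": 1, "ben": 1, "cui": 2, "dev": 2, "eve": 3, "fay": 3}
    print(assign_reviewers(list(groups), groups))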
Reviewer training/calibration. One commonly discussed issue with peer
review is that some students have concerns about the reliability and
validity of feedback provided by peers (Cho et al., 2006). Some tools
include an explicit calibration step in which each potential reviewer is
tested against a reference document to gauge their ability to correctly
identify issues that need addressing. Reviewers are not able to participate
in peer review unless they have ‘‘graduated’’ with the minimum required
set of competencies.
Similarity checking. It is increasingly common for work submitted by
students to be tested for originality by subjecting it to similarity checking.
Not surprisingly, there is also growing interest in software that can
provide the same originality checks for work that is submitted for peer
review.
Reporting tools. The best tools also have effective reporting features
that allow students to compare their reviews to those of others, and allow
instructors to monitor the progress of the review process and access
submitted work.
Tools that were developed with a context-specific purpose generally
perform poorly against the criteria above, and for many, it is difficult
to determine whether they are still currently active. We refer the reader
to Luxton-Reilly’s (2009) review and do not consider them further
here.
It is perhaps not surprising that even very capable tools tend to
perform strongly in particular areas and less well in others. In Table 2, we
highlight four broad areas of emphasis in terms of the criteria above, and
highlight a mature web-based tool that exemplifies each approach. Below,
we briefly describe these four web-based reciprocal peer review systems.
4.1. Calibrated Peer Review (CPR)
Calibrated Peer Review (Russell, 2004; Russell, Chapman, & Wegner,
1998) differs from most other peer review systems in its strong emphasis
on training or ‘‘calibration’’ of reviewers in the skills of reviewing. A CPR
assignment consists of three stages. In the first text entry stage, a student
explores online material on an assignment topic and submits text based
on this material in response to a question. During the subsequent
calibration and review stage, students evaluate example texts, also known
as calibration essays, by answering questions about them. A student’s
response is compared to a reference set of correct answers, and the
student can access these for purposes of feedback and comparison. Until
students attain a minimum standard of competence in evaluation
(measured as a percentage of ‘correct’ answers against the rubric), they
are unable to graduate to reviewing their peers’ work. Once they have
mastered the calibration, they are permitted to evaluate texts submitted
by peers, and their own text entry. In the final stage, each student receives
feedback both on the quality of their reviews, as well as a summary of
evaluations by other students of their text.
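A minimal sketch of this gating logic, under the assumption that calibration is scored as the percentage of rubric answers matching the instructor's reference set (the threshold value is our own illustrative choice, not CPR's actual parameter):

    def calibration_score(student_answers, reference_answers):
        """Percentage of calibration questions answered in agreement with the reference set."""
        assert len(student_answers) == len(reference_answers) and reference_answers
        matches = sum(s == r for s, r in zip(student_answers, reference_answers))
        return 100.0 * matches / len(reference_answers)

    def may_review_peers(student_answers, reference_answers, threshold=75.0):
        """A student 'graduates' to reviewing peers only above the competence threshold."""
        return calibration_score(student_answers, reference_answers) >= threshold

    # Example: 8 of 10 calibration answers match the reference set -> 80% -> may review.
    print(may_review_peers(list("ABBACDADBA"), list("ABBACDADCC")))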
The emphasis on reviewer calibration is beneficial both for reducing
variance among reviewers in their proficiency and, in the process,
fostering confidence among authors in the expertise of their reviewers. On
the other hand, calibration may not be suited to all assignments (for
example, where there is no ‘‘correct’’ answer), and might also discourage
creativity in some contexts. Structuring within a class is not possible in
CPR, so review of peer performance in any context other than written
work is excluded.
Table 2. Different emphases in approaches to formative student peer review, and an example of a capable software tool for each.

Training-oriented: CPR (Calibrated Peer Review); Russell (2004)
Description: Peer and self-assessment with explicit development of reviewing skills through calibration and repetition.
Main advantages: Ability to calibrate reviewers before they engage in peer review; maturation of understanding of criteria; online tutorial.
Main drawbacks: Calibration and reviews must be on same topic; emphasis on fact-based learning; separate interface and login from learning management system.
Automation: yes. Simplicity: +; user guide. Customisability: review rubrics (e.g. creation of multiple question types and response formats). Accessibility: online (Internet Explorer or Netscape Navigator only).
Distribution rules: no. Reviewer calibration: yes. Reviewee feedback: no. Plagiarism check: no.
Web site: http://cpr.molsci.ucla.edu/

Similarity checking-oriented: PeerMark; iParadigms (2011)
Description: Anonymous or attributed peer review linked to Turnitin assignments.
Main advantages: Integration with plagiarism detection; customisable forms; assignment library; flexible assignment of work.
Main drawbacks: Inability to specify distribution rules; does not accommodate group structuring within class; separate site/login; no demo version.
Automation: yes. Simplicity: ++; user guide. Customisability: review rubrics (e.g. creation of multiple question types and response formats). Accessibility: online; institutional/departmental licence.
Distribution rules: yes. Reviewer calibration: no. Reviewee feedback: no. Plagiarism check: yes.
Web site: http://turnitin.com/en_us/products/peermark

Customisation-oriented: PRAZE; Mulder and Pearce (2007)
Description: Flexible and customisable peer review system allowing pairing rules.
Main advantages: Customisability of all aspects of the review process; works from within LMS with user recognition.
Main drawbacks: In-house product, no stand-alone version yet.
Automation: yes. Simplicity: ++; user guide. Customisability: review rubrics (e.g. creation of multiple question types and response formats). Accessibility: online; free.
Distribution rules: yes. Reviewer calibration: no. Reviewee feedback: yes. Plagiarism check: no.
Web site: https://mlt.unimelb.edu.au/about/praze/

Writing skills-oriented: SWoRD; Cho and Schunn (2007)
Description: Web-based reciprocal system emphasising development of writing skills.
Main advantages: Iterative and cyclical (multi-phase) review process; feedback to reviewers.
Main drawbacks: Review criteria minimally customisable and generally restricted to writing (flow, logic and insight); separate site/login; inflexible distribution.
Automation: yes. Simplicity: +; user guide. Customisability: review rubrics (e.g. creation of multiple question types and response formats). Accessibility: online; free.
Distribution rules: no. Reviewer calibration: no. Reviewee feedback: no. Plagiarism check: no.
Web site: http://www.lrdc.pitt.edu/schunn/SWoRD
This tool is best suited to content learning and coaching
of students unfamiliar with peer review, by means of iterative exposure to
key criteria.
4.2. PeerMark
The Turnitin system is best known for its plagiarism detection tool. More
recently, peer review functionality has also been enabled for Turnitin
assignments, through the ‘‘PeerMark’’ component (iParadigms, 2011).
Instructors may upload their own assignment or select one from an online
library. Students then submit articles on the assignment topic (the format
of uploaded work is currently restricted to MS Word documents).
Reviewers are assigned to articles in one of three ways: automatically
(random), manually by the subject administrator, or open (students select
articles for review that interest them). Reviews are completed in an
interface that allows the review rubric and document under review to be
simultaneously viewed. The text being reviewed can be marked and
referenced in the review, enabling students to match rubric comments in
the review with the relevant point in their document.
PeerMark is particularly attractive for subjects in which similarity
checking matters (and which already employ Turnitin assignments), and
for which review material is submitted in MS Word document format.
The library of pre-written review assignments will also appeal to
instructors who may be new to online peer review. However, the software
is not designed to deal with non-standard assignment formats, or those in
which students are allocated to different topics within a particular
assignment.
4.3. Peer Review from A to Z for Education (PRAZE)
Peer Review from A to Z for Education (Mulder & Pearce, 2007) is a
sophisticated online system that facilitates flexible management of all
aspects of peer review. It allows staff to set up, customise and manage a
peer review process within a subject, so that students can then
anonymously review each other's work, send and receive feedback on
their work, and/or carry out a peer self-review of group work. Students
can complete online sign-up to assignment topics, create their own, or be
allocated to topics by administrators. In principle, there is no limit on
what can be reviewed (traditional written assignments, files of any format,
external works or events such as URLs, artwork or oral presentations).
Administrators can specify rules that govern the distribution of work (for
example to mandate or eliminate the possibility of reciprocal review
among group members), and can enable involvement by both peers and
teachers in the review process. After the review process, an author can
complete a feedback form that is returned to the reviewer. Review and
feedback forms can be easily designed from a menu of drag-and-drop
elements, or drawn and modified from a library of existing forms.
Assignment management is facilitated by integrated summary tables. A
desktop widget provides updates on the status of the review process.
The main strengths of the PRAZE platform are its flexible and
powerful customisation features. To the best of our knowledge, it is the
only peer review system currently available that is capable of managing
any type of peer review. Staff often seek to employ more than one stage
and type of review assignment within a unit of study (for example, peer
review of a group-generated document, followed by within-group
assessment of member performance), and management of this process
within a single system is easier and more elegant than employing several
different programs.
Another feature of this platform is the ability for documents to be
submitted by groups, but reviewed by individuals, enabling higher
reviewer:reviewee ratios. This is particularly important in the light of
analysis (Cho et al., 2006; Cho & Schunn, 2007) suggesting that the
aggregate ratings of at least 4 peers on a piece of writing can be both
highly reliable and as valid as instructor ratings. To date, the platform has
not been universally accessible, but at the time of writing a universally
accessible version is near completion.
4.4. Scaffolded Writing and Reviewing in the Discipline (SWoRD)
The primary goal of the review process implemented by SWoRD (Cho &
Schunn, 2007) is to improve student writing skills by means of repeated
review and feedback cycles. Students learn to assess written work
according to key attributes: flow, argument logic and insight (before
embarking on reviewing, students can participate in a calibration exercise,
but this is optional). An author first submits written work which is
distributed to five or six peers (reviewers), who grade the article according
to the three key attributes and offer advice on how to improve it. The
author may then revise the article, which is sent back to the same
reviewers for final review. Authors rate reviews on their accuracy and
helpfulness. Composite ratings for each reviewer from the five or six
authors are computed to determine a grade, which provides an incentive
for reviewers to invest in the task. The rubric can be changed to include
criteria other than the three key default attributes, but criteria are limited
to three dimensions. Scaffolded Writing and Reviewing in the Discipline
keeps track of reviewer behaviour and is able to provide a reviewer with
sophisticated feedback, such as diagrams that help identify systematic
differences with other reviewers.
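The following sketch illustrates the kind of computation involved, under our own simplifying assumptions (SWoRD's actual grading and diagnostic models are more sophisticated): a reviewer's grade is taken as the mean of the accuracy and helpfulness ratings received from authors, and a crude ''systematic difference'' signal is the reviewer's mean deviation from the other reviewers of the same papers.

    from statistics import mean

    def reviewer_grade(author_ratings):
        """Composite reviewer grade: mean of the accuracy/helpfulness ratings
        given by the five or six authors this reviewer served."""
        return mean(author_ratings)

    def systematic_bias(reviewer, scores_by_paper):
        """Mean signed deviation of one reviewer's scores from the average of
        the other reviewers of the same papers (positive = more generous)."""
        deviations = []
        for scores in scores_by_paper.values():   # scores: dict reviewer -> score for one paper
            if reviewer not in scores or len(scores) < 2:
                continue
            others = [s for r, s in scores.items() if r != reviewer]
            deviations.append(scores[reviewer] - mean(others))
        return mean(deviations) if deviations else 0.0

    # Example (hypothetical data): reviewer "r1" tends to score half a point above peers.
    scores = {"paper_a": {"r1": 4.5, "r2": 4.0, "r3": 3.9},
              "paper_b": {"r1": 3.8, "r2": 3.2, "r3": 3.5}}
    print(systematic_bias("r1", scores))   # about +0.5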
Scaffolded Writing and Reviewing in the Discipline is particularly well
suited to assignments in which the development of writing skills is critical.
It is less suited to content learning via peer review than some other
systems (such as CPR), and its restrictive design for criteria means that it
is not ideal for assignments that do not involve writing per se. Like most
other systems, it does not accommodate group structure, and work can
only be randomly distributed.
4.5. Other approaches
Crespo, Pardo, and Kloos (2004) take the CPR approach as a starting
point, but where CPR uses calibration to evaluate a student’s ability to
act as reviewer, Crespo et al. (2004) use calibration to assess a student’s
mastery of content material, with the purpose of finding an optimal
matching of reviewers to reviewees. The underlying thinking is that the
pairing of like students has limited benefit; that, instead, the best
combinations involve a proficient reviewer guiding a less proficient
reviewee, or else a less knowledgeable reviewer being exposed to the work
of a proficient reviewee.
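A minimal sketch of one such matching strategy, assuming calibration yields a single mastery score per student (the specific pairing rule, strongest with weakest, is our illustration rather than Crespo et al.'s published algorithm):

    def pair_by_mastery(mastery):
        """Pair the most proficient students (as reviewers) with the least proficient
        (as reviewees), so that every pairing mixes levels of mastery.

        mastery: dict mapping student id -> calibration score
        Returns a list of (reviewer, reviewee) pairs.
        """
        ranked = sorted(mastery, key=mastery.get, reverse=True)  # strongest first
        half = len(ranked) // 2
        strong, weak = ranked[:half], ranked[half:]
        # Strongest reviews weakest, second strongest reviews second weakest, and so on.
        return list(zip(strong, reversed(weak)))

    # Example with hypothetical calibration scores:
    scores = {"amy": 92, "ben": 55, "cui": 78, "dev": 64, "eve": 40, "fay": 85}
    print(pair_by_mastery(scores))
    # [('amy', 'eve'), ('fay', 'ben'), ('cui', 'dev')]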
Table 1 listed a number of peer assessment tools, some early, some
recent. Some are no longer under active development, while others are
defunct. Many, while no longer in use, have been important progressive
steps in the development of better tools. While it is outside the scope of
this article to discuss these tools exhaustively, we do mention WebCom
(Silva & Moreira, 2003) and PECASSE (Gouli, Gogoulou, & Grigoriadou,
2008), not only because they are interesting systems, but also
because the two cited articles contain many further references for the
interested reader. For other pointers we refer to Luxton-Reilly (2009).
Some learning management tools offer integrated peer review features.
OSBLE (Hundhausen et al., 2009) is a system built specifically to support
a studio-based model of teaching and learning, and it embraces peer
review as a critical component. Its peer review features include a
possibility for ‘‘issue voting’’, that is, for team members to cast votes on
issues that have been brought up in reviews.
4.6. What the future holds
Given the diversity of ways in which peer review can be employed, it is
interesting to consider whether the next decade will see a contraction or
expansion of the suite of tools available for student peer review.
Ultimately, the choice of software will depend on a multitude of factors
such as (1) the learning objectives of the peer review exercise, (2) whether
or not the teacher’s institution supports/promotes a particular tool, (3)
how much customisation is required and can be accommodated by the
tool and (4) the subject administrator’s level of experience and willingness
to experiment. Academics are famously individualistic, so it seems no
more likely that we will see a move towards adoption of a single universal
peer review tool than we would see universal adoption of a single personal
desktop computer system or web browser. Nonetheless, it is also clear
that users are converging on a limited set of tools that are well-designed,
user-friendly and integrated with common learning management systems.
A system such as PeerMark appears particularly well poised to offer
users an appealingly broad range of integrated tools. Of course there is a
financial cost associated with its use. It is conceivable that PeerMark will
act as a driver of competition and the development of progressively
better, freely accessible tools.
There are striking similarities in functionality between, on the one hand,
tools developed for conference management, such as Easychair
(www.easychair.org), or for reviewing associated with academic journals, such as
ScholarOne (scholarone.com), and, on the other hand, tools for the
management of student peer review. Efficient, anonymous distribution of
material for review is central to all, yet the mature tools available for
student peer review were all developed quite independently of other
platforms. The parallel development of such conceptually similar tools may
suggest a lack of appreciation by developers of the respective platforms of
the broader potential of electronically managed peer review (or perhaps
alternatively a lack of commercial opportunity or the difficulty of catering
for specialised ‘‘niche’’ requirements in different sectors). Nevertheless,
tools designed for peer review processes outside the education setting are
now highly sophisticated, and offer a rich source of ideas for improved
functionality of student peer review programmes. For instance, reviewers
using Easychair can ‘‘bid’’ for review tasks, which might enable better
reviewer/reviewee matching and offer a clever solution to the occasional
problem of student peers not performing their assigned reviews. An
exchange of ideas between conference management tools and student peer
review tools may be another important driver of the progress of both.
5. Concluding thoughts
The availability of online tools to manage the process of formative
student peer review will undoubtedly encourage the use of peer
assessment across a range of disciplines. This will result in more extensive
practical experience with peer review. It will also re-focus many questions
asked about formative peer review and its benefits.
Early research on the effect of student peer assessment was based on
measures of student and instructor perceptions. Most research articles
that relate to student peer assessment fall in this category. These articles
leave the question of actual value open.
The value of feedback itself is not in question, assuming feedback is
timely and of reasonable quality. Hence the results of Cho et al. (2006),
discussed in Section 2.2, are important, because they justify the general
idea of involving peers in feedback provision, even when we restrict
attention to the feedback receiver.
The situation is less clear when it comes to the benefits for the
feedback provider. In Section 2.1 we listed many potential benefits, such
as more reflection, the development of analysis and evaluation skills, of
communication skills, of affective and social skills, as well as deeper
learning of subject matter. Many of these potentials may be realised with
the use of peer assessment, if the activity is well aligned with other
assessment activity and congruent with the overall learning design and
teaching philosophy behind a subject. However, no doubt there are many
ideas that are plausible and yet fail to work. Hence we need to ask to
what extent the suggested benefits can be demonstrated rigorously, and
how research can inform further development of peer assessment
practice.
Such questions, however, are difficult to answer. There is an extensive
literature on the use of student peer assessment in secondary and tertiary
education, including careful analysis and meta-analysis (Boud, Cohen, &
Sampson, 2001; Boud & Falchikov, 2007; Brown & Glasner, 1999;
Falchikov, 2005; Kollar & Fischer, 2010; Liu & Carless, 2006; Sluijsmans,
Dochy, & Moerkerke, 1999; Topping, 1998). Shute (2008) provides a
relatively recent review of the body of research on formative feedback.
But meta-analysis is complicated by the fact that many variations of peer
assessment/review are used across many different disciplines. Topping
(1998) proposes a typology of peer assessment, which includes 17 different
parameters, and others have since added to the list. The different
parameters all appear relevant to questions about efficacy and how
specific mechanisms in the peer assessment setup might advance different
desirable outcomes (or not). The high dimensionality of the peer
assessment design space is not just academic speculation; it reflects
practice. There really is considerable variation in how peer assessment
activity is conducted.
One response to this situation has been a call for more disciplined use
of experimental methods, for finer dissection of the peer assessment
process, for better isolation of variables and greater use of controlled
experiments (Strijbos & Sluijsmans, 2010; Topping, 2010). However, it is
also recognised that, owing to the large number of variables, such a
research programme has a time horizon that is quite distant, and
meaningful results arrive very slowly, at best. Van Zundert, Sluijsmans,
and van Merriënboer (2010) lament that, after so much research, ''it is still
impossible to make claims about what exactly constitutes effective peer
assessment’’. Strijbos and Sluijsmans (2010) acknowledge the magnitude
of the research task, as well as the difficulties; in particular ‘‘ecologically
valid research settings complicate the inclusion of a genuine control group
...''. They recommend that research in the area use greater variety in
research designs, research instruments and analytic techniques.
More radical responses are suggested in some critique of traditional
education research. In objecting to a call for greater use of randomised
experiments in educational research, Olson (2004) points out that
an educational trial is fundamentally different from, say, a medical trial, and
that methods from physical and biological sciences do not translate easily
to the education setting. As an example, the use in medical research of
double-blind experiments (to eliminate a Hawthorne effect) is impossible
in educational research, as one cannot, in an educational trial, hide the
actual ‘‘treatment’’ from the trial’s participants. ‘‘Not only do the
teachers and students know what is being done to them, they may either
believe or doubt that what is being done is worthwhile’’ (Olson, 2004). In
educational trials, context is generally much more complex, and harder to
screen for than in medicine, and maintaining uniform ‘‘treatment’’
conditions is virtually impossible. Moreover, assumptions about simple
cause-effect relations may often be justified in medicine, but rarely apply
in education where confounding factors include the goals, beliefs and
intentions of teachers and learners, factors that may not even remain
constant during a trial (Olson, 2004).
Reeves (2006) mounts similar objections against mainstream educational
technology research. He calls for a re-orientation towards different
research paradigms, notably educational design research (van den Akker,
Branch, Gustafson, Nieveen, & Plomp, 1999; van den Akker, Gravemeijer,
McKenney, & Nieveen, 2006), with its greater emphasis on
domain-specificity (as opposed to the development of context-free
theories), educational design practice, in vivo experimentation, iterative
design/research cycles and theory/practice dialectics.
Noting that, in the education field, research has had much less impact
on practice than is the case in, say, medicine or engineering, Burkhardt
(2006) suggests that education research should take inspiration from
research in the engineering disciplines and focus more on impact (new or
better products and processes), since we should expect practical goal-
oriented activity to be a better driver for new insights and theory-
building, compared to mere hypothesis generation and testing.
Below, we highlight two questions of interest, and opportunities for
addressing them, with the hope that they will spark interest in research
aimed at better understanding the impact of peer review on learning
outcomes.
. How do feedback providers benefit from peer review? We know that
students report positive experiences with peer review, and that
students perceive that there are learning benefits, including from the
provision of feedback. However, most of the purported benefits are
hard to define and/or isolate and involve subtle but important shifts
in perception. Nevertheless, there is now a wide variety of survey
instruments available to measure changes in such attributes as
critical thinking, active learning, student interaction and collabora-
tion, higher order thinking and engagement. Two of many possible
approaches include:
Longitudinal: before/after implementation of peer review. Ideally
this involves randomised involvement by some but not all students
in a class, as otherwise it is difficult to eliminate confounds.
Partitioning students into intervention and control groups is always
complicated, in particular because of the ethical issues involved with
providing students with unequal opportunities.
Cross-sectional: paired subjects matched for subject areas and
learning objectives, which either did or did not implement peer review.
A major issue here is that it is difficult to control for other
confounding variables. However, large-scale meta-analysis might
overcome this.
. Is peer review effective at improving the quality of student work?
Measuring improvements in the quality of student work is difficult
because grading may be subtly or subconsciously influenced by the
instructor’s awareness that students have participated in peer
review. One approach for objectively measuring whether work has
improved is to ask external expert assessors to grade mixed
collections of pre- and post-review versions of assignments using a
standardised rubric, blind to the review status of each piece of work
(a minimal sketch of such a setup is given after this list).
Ideally, such studies should be accompanied by content analysis to
demonstrate a direct link between the content of reviews, the
reviewee’s responses to them, and quality of work.
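To make these designs concrete, the following is a minimal sketch in Python of the supporting logistics, under the assumption that submissions and rubric scores are available as simple in-memory records. All function and variable names (allocate_groups, blinded_manifest, compare_versions) are invented for illustration and do not correspond to any of the tools discussed in this article.

import random
from statistics import mean

# Illustrative sketch only: all identifiers below are invented for this
# example and do not refer to any particular peer review tool.

def allocate_groups(student_ids, seed=42):
    """Randomly split a class into a peer-review (intervention) group and a
    control group, for a before/after (longitudinal) design."""
    rng = random.Random(seed)
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def blinded_manifest(submissions, seed=7):
    """Prepare a grading manifest in which external assessors see only
    anonymous codes, hiding both identity and review status.

    `submissions` is a list of (student_id, version, text) tuples, where
    version is 'pre' or 'post' (before/after peer review). The key linking
    codes back to students is kept separately until grading is complete."""
    rng = random.Random(seed)
    order = list(submissions)
    rng.shuffle(order)
    manifest, key = [], {}
    for i, (student_id, version, text) in enumerate(order):
        code = "S{:03d}".format(i)
        manifest.append((code, text))       # what the assessor receives
        key[code] = (student_id, version)   # withheld until grading is done
    return manifest, key

def compare_versions(scores, key):
    """Compare mean rubric scores of pre- and post-review versions, once the
    blind assessors have returned their scores (code -> rubric score)."""
    pre = [s for code, s in scores.items() if key[code][1] == "pre"]
    post = [s for code, s in scores.items() if key[code][1] == "post"]
    return mean(pre), mean(post)

if __name__ == "__main__":
    students = ["stu{}".format(i) for i in range(1, 21)]
    intervention, control = allocate_groups(students)
    print(len(intervention), "students allocated to the peer-review group")

A script of this kind handles only the logistics of allocation and blinding; the substance of the study still rests on the rubric, the external assessors and the accompanying content analysis.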
In this article we have reviewed the motivation for, and the use of,
student peer assessment and reviewing, with special consideration to
computer science and software engineering. We currently see four main
categories of tools: training-oriented, similarity checking-oriented,
customisation-oriented and writing skills-oriented. We have summarised
four mature online peer review systems, one from each category, and
reviewed the tools against general criteria for effectiveness.
With this article we have also aimed to persuade the uninitiated
instructor that student peer assessment is a worthwhile exercise. We have
provided links that may help in getting started. Table 1 provided some
pointers to the published experience with the use of peer reviewing
in computing-related classes. More pointers to general discipline-agnostic
or writing-oriented advice are easily found on the world-wide web.
We conclude by noting that student peer review, like any other
learning tool, offers no guarantee of success unless it is embedded in a
well-designed, reflective and responsive curriculum. Nevertheless, within
such a curriculum, contributing student pedagogy and the technological
tools that support it have great potential to maximise learning outcomes.
Acknowledgement
We wish to thank the reviewers for their many helpful suggestions and pointers, including
a suggestion to consider the potential of educational design research in the study of
formative peer review.
References
Anewalt, K. (2005). Using peer review as a vehicle for communication skill development
and active learning. Journal of Computing in Small Colleges, 21(2), 148–155.
Boud, D., Cohen, R., & Sampson, J. (Eds.). (2001). Peer learning in higher education.
London: Kogan Page.
Boud, D., & Falchikov, N. (Eds.). (2007). Rethinking assessment in higher education:
Learning for the longer term. Oxon: Routledge.
Brown, S., & Glasner, A. (Eds.). (1999). Assessment matters in higher education: Choosing
and using diverse approaches. Buckingham: Society for Research into Higher
Education, and Open University Press.
Burkhardt, H. (2006). From design research to large-scale impact: Engineering research
in education [chapter 10]. In J. van den Akker, K. Gravemeijer, S. McKenney, & N.
Nieveen (Eds.), Educational design research. London: Routledge.
Cho, K., Chung, T.R., King, W.R., & Schunn, C. (2008). Peer-based computer-supported
knowledge refinement: An empirical investigation. Communications of the ACM, 51,
83–88.
Cho, K., & Schunn, C.D. (2007). Scaffolded writing and rewriting in the discipline: A web-
based reciprocal peer review system. Computers and Education, 48, 409–426.
Cho, K., Schunn, C.D., & Wilson, R.W. (2006). Validity and reliability of scaffolded peer
assessment of writing from instructor and student perspectives. Journal of Educational
Psychology, 98, 891–901.
Collis, B., & Moonen, J. (2005). Contribution-oriented pedagogy. In C. Howard, J.V.
Boettcher, L. Justice, K.D. Schenk, P.L. Rogers, & G.A. Berg (Eds.), Encyclopedia of
distance learning (pp. 415–422). Harrisburg, PA: Information Science Reference.
Collis, B., & Moonen, J. (2006). The contributing student: Learners as co-developers of
learning resources for reuse in web environments. In D. Hung & M.S. Khine (Eds.),
Engaged learning with emerging technologies (pp. 49–67). Dordrecht: Springer.
Crespo, R.M., Pardo, A., & Kloos, C. (2004). An adaptive strategy for peer review. In
Proceedings of the 34th ASEE/IEEE Frontiers in Education Conference (FIE2004)
(pp. F3F-7–F3F-13). Washington, DC: IEEE Computer Society.
Cross, K.P. (1998). Why learning communities? Why now? About Campus, 3(3), 4–11.
Denny, P., Luxton-Reilly, A., & Hamer, J. (2008). Student use of the PeerWise system. In
Proceedings of the 13th Annual SIGCSE Conference on Innovation and Technology in
Computer Science Education (pp. 73–77). New York, NY: ACM.
de Raadt, M., Toleman, M., & Watson, R. (2005). Electronic peer review: A large cohort
teaching themselves? In H. Goss (Ed.), Proceedings of ASCILITE 2005 (pp. 159–168).
Retrieved September 25, 2012, from www.ascilite.org.au/conferences/brisbane05/
blogs/proceedings/17_de Raadt.pdf.
Falchikov, N. (2005). Improving assessment through student involvement. New York, NY:
RoutledgeFalmer.
Falchikov, N. (2007). The place of peers in learning and assessment. In D. Boud & N.
Falchikov (Eds.), Rethinking assessment in higher education: Learning for the longer
term (pp. 128–143). Oxon: Routledge.
Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A
meta-analysis comparing peer and teacher marks. Review of Educational Research, 70,
287–322.
Garousi, V. (2010). Applying peer reviews in software engineering education: An
experiment and lessons learned. IEEE Transactions on Education, 53, 182–193.
Gehringer, E.F. (2001). Electronic peer review and peer grading in computer-science
courses. In Proceedings of the 32nd SIGCSE Technical Symposium on Computer
Science Education (pp. 139–143). Proceedings published as SIGCSE Bulletin 33(1).
Gibbs, G. (1999). Using assessment strategically to change the way students learn. In S.
Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and
using diverse approaches (pp. 41–53). Buckingham: Society for Research into Higher
Education, and Open University Press.
Gouli, E., Gogoulou, A., & Grigoriadou, M. (2008). Supporting self-, peer-, and
collaborative-assessment in e-learning: The case of the PEer and Collaborative
ASSessment Environment (PECASSE). Journal of Interactive Learning Research,
19(4), 615–647.
Gruba, P., & Søndergaard, H. (2001). A constructivist approach to communication skills
instruction in computer science. Computer Science Education, 11, 203–219.
Hämäläinen, H., Hyyrynen, V., Ikonen, J., & Porras, J. (2011). Applying peer-review for
programming assignments. International Journal for Information Technologies and
Security, 1, 3–17.
Hamer, J., Cutts, Q., Jackova, J., Luxton-Reilly, A., McCartney, R., Purchase, H., . . .
Sheard, J. (2008). Contributing Student Learning. SIGCSE Bulletin, 40(4), 194–212.
Hamer, J., Luxton-Reilly, A., Purchase, H.C., & Sheard, J. (2011). Tools for
‘‘contributing student learning’’. Inroads, 2, 78–91.
Hamer, J., Ma, K.T.K., & Kwong, H.H.F. (2005). A method of automatic grade
calibration in peer assessment. In A. Young & D. Tolhurst (Eds.), Proceedings of the
Seventh Australasian Computing Education Conference (ACE2004), volume 42 of
Conferences in Research and Practice in Information Technology (pp. 67–72). http://
crpit.com/Vol42.html.
Hübscher-Younger, T., & Narayanan, N.H. (2003). Constructive and collaborative
learning of algorithms. In Proceedings of the 34th SIGCSE Technical Symposium on
Computer Science Education (pp. 6–10). New York, NY: ACM.
Hundhausen, C., Agrawal, A., Fairbrother, D., & Trevisan, M. (2009). Integrating
pedagogical code reviews into a CS1 course. In Proceedings of the 40th SIGCSE
Technical Symposium on Computer Science Education (pp. 117–122). New York, NY:
ACM.
Hundhausen, C., Agarwal, P., & Trevisan, M. (2011). Online vs face-to-face pedagogical
code reviews: An empirical comparison. In Proceedings of the 42nd SIGCSE Technical
Symposium on Computer Science Education (pp. 117–122). New York, NY: ACM.
iParadigms. (2011). Turnitin instructor user manual. Chapter 3: PeerMark. Oakland, CA:
iParadigms.
Kollar, I., & Fischer, F. (2010). Peer assessment as collaborative learning: A cognitive
perspective.
Learning and Instruction, 20, 344–348. Commentary to Learning and
Instruction’s special issue on peer assessment.
Liu, N., & Carless, D. (2006). Peer feedback: The learning element of peer assessment.
Teaching in Higher Education, 11, 279–290.
Luxton-Reilly, A. (2009). A systematic review of tools that support peer assessment.
Computer Science Education, 19, 209–232.
Machanick, P. (2005). Peer assessment for action learning of data structures and algorithms.
In A. Young & D. Tolhurst (Eds.), Proceedings of the Seventh Australasian Computing
Education Conference (ACE2004), volume 42 of Conferences in Research and Practice in
Information Technology (pp. 73–82). http://crpit.com/Vol42.html.
Mulder, R., & Pearce, J. (2007). PRAZE: Innovating teaching through peer review. In
R.J. Atkinson, C. McBeath, S.K.A. Soong, & C. Cheers (Eds.), Proceedings of
ASCILITE 2007 (pp. 727–736). Retrieved September 25, 2012, from www.ascilite.
org.au/conferences/singapore07/procs/mulder.pdf.
Navalta, J.W., & Lyons, T.S. (2010). Student peer review decisions on submitted
manuscripts are as stringent as faculty peer reviewers. Advances in Physiology
Education, 34, 170–173.
Olson, D.R. (2004). The triumph of hope over experience in the search for ‘‘what works’’:
A response to Slavin. Educational Researcher, 33(1), 24–26.
Patchan, M.M., Charney, D., & Schunn, C.D. (2009). A validation study of students’ end
comments: Comparing comments by students, a writing instructor, and a content
instructor. Journal of Writing Research, 1, 124–152.
Pearce, J., Mulder, R., & Baik, C. (2009). Involving students in peer review. Melbourne,
Australia: Centre for the Study of Higher Education, The University of Melbourne.
Retrieved September 25, 2012, from www.cshe.unimelb.edu.au/resources_teach/
teaching_in_practice/docs/Student_Peer_Review.pdf
Purchase, H.C. (2000). Learning about interface design through peer assessment.
Assessment and Evaluation in Higher Education, 25, 341–352.
Race, P., Brown, S., & Smith, B. (2005). 500 tips on assessment (2nd ed). Oxon:
RoutledgeFalmer.
Reeves, T. (2006). Design research from a technology perspective. In J. van den Akker, K.
Gravemeijer, S. McKenney, & N. Nieveen (Eds.), Educational design research
(Chapter 5). London: Routledge.
Reily, K., Finnerty, P., & Terveen, L. (2009). Two peers are better than one: Aggregating
peer reviews for computing assignments is surprisingly accurate. In Proceedings of the
ACM 2009 International Conference on Supporting Group Work (pp. 115–124).
New York, NY: ACM.
Russell, A.A. (2004). Calibrated Peer Review – a writing and critical thinking
instructional tool. In Proceedings of the AAAS Conference on Invention and Impact:
Building Excellence in Undergraduate Science, Technology, Engineering and Mathe-
matics (STEM) Education (pp. 67–71). Washington, DC: American Association for
the Advancement of Science.
Russell, A.A., Chapman, O.L., & Wegner, P.A. (1998). Molecular science: Network-
deliverable curricula. Journal of Chemical Education, 75, 578–579.
Settle, A., Wilcox, C., & Settle, C. (2011). Engaging game design students using peer
evaluation. In Proceedings of ACM SIGITE’11 (pp. 73–78). New York, NY: ACM.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one.
Educational Researcher, 27, 4–13.
Shute, V.J. (2008). Focus on formative feedback. Review of Educational Research, 78(1),
153–189.
Silva, E., & Moreira, D. (2003). WebCoM: A tool to use peer review to improve student
interaction. ACM Journal on Educational Resources in Computing, 3(1). Article 3.
Sitthiworachart, J., & Joy, M. (2003). Web-based peer assessment in learning
computer programming. In Proceedings of the Third IEEE International Conference
on Advanced Learning Technologies (ICALT 03) (pp. 180–184). Washington, DC:
IEEE Computer Society.
Sitthiworachart, J., & Joy, M. (2004). Effective peer assessment for learning computer
programming. In Proceedings of the Ninth Annual SIGCSE Conference on Innovation
and Technology in Computer Science Education (pp. 122–126). New York, NY: ACM.
Sluijsmans, D., Dochy, F., & Moerkerke, G. (1999). Creating a learning environment
by using self-, peer- and co-assessment. Learning Environments Research, 1, 293–319.
Søndergaard, H. (2009). Learning from and with peers: The different roles of student peer
reviewing. In Proceedings of the 14th Annual SIGCSE/SIGCUE Conference on
Innovation and Technology in Computer Science Education (pp. 31–35). New York,
NY: ACM.
Strijbos, J.-W., & Sluijsmans, D. (2010). Unravelling peer assessment: Methodological,
functional, and conceptual developments. Learning and Instruction, 20, 265–269.
Introduction to Learning and Instruction’s special issue on peer assessment.
Topping, K. (1998). Peer assessment between students in colleges and universities. Review
of Educational Research, 68, 249–276.
Topping, K. (2010). Methodological quandaries in studying process and outcomes in peer
assessment. Learning and Instruction, 20, 339–343. Commentary to Learning and
Instruction’s special issue on peer assessment.
Trahasch, S. (2004). From peer assessment towards collaborative learning. In Proceedings
of the 34th ASEE/IEEE Frontiers in Education Conference (FIE2004) (pp. F3F-16–
F3F-20). Washington, DC: IEEE Computer Society.
Trytten, D.A. (2005). A design for team peer code review. In Proceedings of the 36th
SIGCSE Technical Symposium on Computer Science Education (pp. 455–459).
Proceedings published as SIGCSE Bulletin 37 (1).
Turner, S.A., Quintana-Castillo, R., Pérez-Quiñones, M.A., & Edwards, S.H. (2008).
Misunderstandings about object-oriented design: Experiences using code reviews.
SIGCSE Bulletin, 40(1), 97–101.
van den Akker, J., Branch, R., Gustafson, K., Nieveen, N., & Plomp, T. (Eds.). (1999).
Design approaches and tools in education and training. Dordrecht: Kluwer.
van den Akker, J., Gravemeijer, K., McKenney, S., & Nieveen, N. (Eds.). (2006).
Educational design research. London: Routledge.
Van Zundert, M., Sluijsmans, D., & van Merriënboer, J. (2010). Effective peer assessment
processes: Research findings and future directions. Learning and Instruction, 20, 270–
279.
Wiegers, K.E. (2002). Peer reviews in software: A practical guide. Boston: Addison-
Wesley.
Zeller, A. (2000). Making students read and review code. In Proceedings of the Fifth
Annual SIGCSE/SIGCUE Conference on Innovation and Technology in Computer
Science Education (pp. 89–92). New York, NY: ACM.