Incorporating Student Peer Review and Feedback
into the Assessment Process.
Jack McGourty
Fu Foundation School of Engineering and Applied Science
Columbia University
Peter Dominick
NSF Gateway Coalition for Undergraduate Engineering Education
Drexel University
Richard R. Reilly
Wesley J. Howe School of Technology Management
Stevens Institute of Technology
Abstract - The paper provides a detailed discussion of the
design, application, and results of a computer-based
approach used to solicit student self and peer assessment
and feedback on nine learning outcomes linked to ABET
2000. Several issues will be addressed including: the
efficacy of student self and peer review, correlation with
faculty ratings, faculty and student acceptance, and process
management.
Introduction
In spite of the growing number of team-based projects being
used in colleges and universities, the application of student
peer review processes to support assessment activities has
been limited. Furthermore, quantified and systematic methods
for peer evaluation and feedback are either non-existent or
require additional resources that are commonly unavailable.
This paper discusses the application and results of a simple,
automated, and quantified assessment and feedback
methodology successfully used with hundreds of engineering
students at several institutions, including New Jersey
Institute of Technology, Ohio State University, and Stevens
Institute of Technology.
Rationale
Student outcome assessment has become a primary focus for
higher education institutions in today’s competitive
environment. There is a great deal of pressure from both
industry and academic accreditation entities to incorporate a
broader set of student learning outcomes and sound
assessment techniques into new courses and other
educational programs. For example, the Accreditation Board
for Engineering and Technology (ABET) has incorporated
eleven student learning outcomes and assessment as a key
criterion in its Engineering Criteria 2000. According to the
ABET criteria, the focus of an institution’s assessment
efforts should be on the measurement of student learning
outcomes in a systematic and valid manner.
In general, an outcome-driven measurement system provides
critical information to educators on the effectiveness of the
design, delivery, and direction of an educational project,
activity, or program. Few educational institutions have a
comprehensive system for measuring program results in
terms of student learning outcomes. And of those that do,
the focus has been on the traditional set of student outcomes
in specific technical and basic science areas. However, both
industry and ABET are now pressing programs to prepare
students for the professional work environment they will
encounter after graduation. Newly graduated engineers will need
competencies beyond the traditional knowledge of science
and basic engineering principles. As professionals, they will
require skills to help them function in multidisciplinary
teams, work with complex systems of products and services,
and strive towards continuous self-learning.
To accomplish this skill development, faculty have
increasingly turned to the use of cooperative learning
techniques in the classroom. [1,2,3] These techniques
promote interdependence amongst students, encourage
interaction, require information sharing and typically include
the use of team-based projects as cornerstones of the learning
process. One of the keys to making cooperative learning
successful is the shifting of the student’s role from a passive
receiver of information into an active participant. In a
cooperative learning environment, students themselves are
often in the best position to provide one another with
meaningful feedback regarding both their technical and
interpersonal performance. In spite of this fact,
applications of peer review processes in the classroom remain
limited. Furthermore, quantified and systematic methods of
peer evaluation and feedback are rarely used and typically
require time and resources that are not readily available.
How does a Formal Peer Feedback System
Impact Students’ Performance?
A formal peer feedback approach provides students and
educators with many important benefits. First, introducing a
systematic peer review system helps to reinforce key learning
objectives. The behaviorally specific information contained
in the survey helps to define and make salient to the student
what is required in order to perform effectively. [4] This is
especially important when considering the eleven ABET EC
2000 student learning outcomes. Second, the fact that the
information is presented as part of a formal feedback system
sends a strong message to students that performance should
be improved. This message, in and of itself, can often
encourage people to evaluate their own performance and
establish improvement goals. [5] Recent research on the use
of peer feedback systems suggests that students are likely to
demonstrate changes in behavior and skill acquisition
simply by completing the feedback instrument. For instance,
Dominick, Reilly & McGourty found that students who
completed a peer feedback instrument, but did not actually
receive feedback, were just as likely to improve their
performance as students who actually received feedback. [6]
This is not to suggest that feedback does not add value.
Providing feedback to the student is a critical component of
the peer review process. The students that receive timely and
detailed feedback are in a better position to have meaningful
performance improvement discussions with their instructor
and peers. They will also have a valuable performance record
that makes it easier to track their performance over longer
periods of time. Finally, the feedback can be of great value to
the instructor for teaching and assessing course outcomes.
By reviewing feedback results, instructors can better tailor
their teaching activities to specific needs identified for
individual students and the total class. Doing so is likely to
promote even greater learning and improvement by students.
The Team Developer
One approach that has been successfully used in the
classroom is a behavior-oriented survey called the Team
Developer. [7] Team Developer is designed to provide each
student with developmental feedback regarding his or her
effectiveness on several specific cognitive and behavioral
skills. Student team members rate both themselves and
their teammates on items designed to identify skills based
on behaviors that have been found to be important for
practicing engineers. Each student receives a developmental
feedback report that presents self and team ratings on each
survey item and highlights overall strengths and areas for
development. Gaps between self-perceptions and the
perceptions of others are clearly shown. Specific suggestions
for development, keyed to the behavioral areas, are provided
to assist team members in developing action plans based on
their personal feedback.
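The gap computation behind such a feedback report is straightforward. As a minimal sketch (assuming a 1-5 rating scale and hypothetical item names; this is not the actual Team Developer implementation):

```python
# Hypothetical sketch of a self-vs-peer gap report; not the actual
# Team Developer implementation. Ratings use an assumed 1-5 scale.

def gap_report(self_ratings, peer_ratings):
    """For each survey item, average the peer ratings and report the
    gap between self-perception and peers' perceptions."""
    report = {}
    for item, self_score in self_ratings.items():
        peers = peer_ratings[item]
        peer_avg = sum(peers) / len(peers)
        report[item] = {
            "self": self_score,
            "peer_avg": round(peer_avg, 2),
            # positive gap = student rates self higher than peers do
            "gap": round(self_score - peer_avg, 2),
        }
    return report

# Example: one student, two illustrative items, three teammates' ratings
self_ratings = {"Applies logic in solving problems": 4,
                "Shares information with the team": 5}
peer_ratings = {"Applies logic in solving problems": [4, 3, 4],
                "Shares information with the team": [3, 3, 4]}
print(gap_report(self_ratings, peer_ratings))
```

Items with large positive gaps are exactly the self/other discrepancies the report is meant to surface.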
The Team Developer approach provides students and
educators with many important benefits. First, feedback is
based on the observation of specific behaviors rather than
subjective overall impressions. Thus, the student can act on
the information more readily. Second, students can use the
reports to plan their own development activities and monitor
progress over time. Third, the feedback reports allow
educators to tailor their teaching activities to specific needs
identified for individual students and the total class.
Development of Competency Based
Assessment Items
One of the keys to the successful use of a peer feedback
system, such as Team Developer, is the clear articulation of
the competencies and behaviors that are to be assessed. A
competency is a bundle of knowledge, skills and abilities
that relate to successful performance. Another way to think of
a competency is as a learning outcome associated with a
course or a program. Therefore, when designing a peer
assessment instrument, one of the first questions to answer
is, “What are the main competencies that relate to effective
performance in this course and/or program?” For instance,
the competency Analytical Skills could be defined as
follows:
Applies logic in solving problems and
analyzes problems from different points of
views. Translates academic theory into
practical applications and recognizes
interrelationships among problems and
issues.
Once a definition of the competency is established, the next
step is to further define it with illustrative and specific
behavioral examples. It is these behaviors that will form the
content of the peer feedback instrument. Defining
competencies in behavioral terms makes it far more likely
that people will be able to provide valid assessments and
also increases the likelihood that feedback will be actionable.
For our Analytical Skills competency, some sample
behaviors include:
Analyzes problems from different points of view
Recognizes interrelationships among problems
and issues
Applies logic in solving problems
Scales down information to what is important
While breaking competencies down into specific behavioral
examples is essential, it is also important to take a broader
strategic perspective as well. Ideally, one should be able to
identify the ways in which competencies for a particular
course or program relate back to broader institutional
objectives and also to outside accreditation criteria such as
ABET EC 2000. Doing so helps to ensure that the
instrument provides meaningful feedback to students and is
an integrated part of an institution’s overall assessment
strategy and program.
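One lightweight way to make this traceability concrete is to record, for each competency, its behavioral items and the accreditation outcomes it maps to. The sketch below is illustrative only; the ABET outcome linkages shown are assumptions, not the instrument's actual mapping:

```python
# Hypothetical mapping of competencies to behavioral survey items and
# ABET EC 2000 Criterion 3 outcomes (a-k). Illustrative, not the
# instrument's actual content or linkage.
competency_map = {
    "Analytical Skills": {
        "abet_outcomes": ["a", "e"],  # assumed linkage, for illustration
        "behaviors": [
            "Analyzes problems from different points of view",
            "Recognizes interrelationships among problems and issues",
            "Applies logic in solving problems",
            "Scales down information to what is important",
        ],
    },
    "Teamwork": {
        "abet_outcomes": ["d"],  # assumed linkage, for illustration
        "behaviors": ["Shares information openly with team members"],
    },
}

# Traceability check: every competency should link to at least one
# accreditation outcome and be defined by observable behaviors.
for name, spec in competency_map.items():
    assert spec["abet_outcomes"] and spec["behaviors"], name
print("all competencies traceable")
```

A table like this also doubles as the source from which the survey items themselves can be generated, keeping instrument and assessment strategy in sync.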
Using the Computer for Instrument
Administration
One of the biggest challenges many faculty face when
attempting to implement a peer feedback process is finding the
time to collect, tabulate, and then disseminate information. A
computerized format like Team Developer’s helps to
eliminate many of these obstacles. Using a computerized
survey means that data can be collected and analyzed quickly
and that there can be a fast turnaround time for providing
feedback. This also means that more time can be spent
reviewing information and ensuring that the feedback process
is a meaningful one for students and instructors.
Of course using a computerized survey assumes that students
have access to computers and the appropriate level of
computer literacy. While these issues are not likely to be
problems for most of today’s students, they are factors to
consider when implementing the feedback process. Another
factor to consider is that while an automated system is
ultimately a time-saver, there is usually an initial need for
more up-front time to ensure that the process operates
smoothly and efficiently. A peer evaluation process will be
more effective if the instructor takes the time to discuss the
process with the students. Students need to be aware of the
rationale for receiving feedback from peers. Additionally,
they must understand how the competencies being measured
are linked to the course objectives.
Implementing a Feedback Process
Even a well designed and automated peer feedback process
will fail to produce meaningful results if it is not
implemented with care. To begin with, feedback providers
and recipients should be made aware of how the instrument
was developed and most importantly, how the information
they provide will be used (for example, developmental
versus evaluative). One of the advantages of Team Developer
is that feedback can be collected and provided in ways that
ensure confidentiality. Ensuring confidentiality is often an
important factor, especially for people who are new to the
peer feedback process.
In terms of encouraging improvement, there should ideally
be more than one administration of the instrument during the
time that peers are working with one another. Two
administrations of the instrument (at the midpoint and end of
a semester) have worked best with Team Developer. The
midpoint assessment should be conducted after the students
have had sufficient time to observe skills and behaviors of
their fellow team members. By the end of the semester, the
students have adequate opportunity to react to the peer
feedback they receive and to implement improvement efforts.
Another factor that can help to make the most out of the peer
feedback process is instructor involvement. For instance, at
Stevens Institute of Technology, instructors typically make
themselves available to students who wish to discuss the
feedback they have received. [7] In some cases, instructors
have facilitated discussions amongst peers in order to help
them better understand the feedback they have provided to
one another. Another way to strengthen the impact of a peer
feedback process is requiring students to prepare
development plans based upon their feedback.
Using Peer Feedback to Reinforce and Assess
ABET 2000 Learning Outcomes
Several undergraduate and graduate engineering programs
have incorporated the Team Developer into their curriculum.
For example, New Jersey Institute of Technology (NJIT)
used the Team Developer process to assess undergraduate
engineering students working on team design projects. [8]
The process at NJIT involved students rating self and peers
on 48 behaviorally specific items relating to nine core
learning outcomes: Analytical Thinking; Communication
Skills; Creative Problem Solving; Project Management;
Research Skills; Self-Learning; Systems Thinking;
Teamwork; and Technical Competence. Each of these
outcomes was linked to ABET EC 2000 Criterion 3 (a-k).
In another example, Team Developer was incorporated into
capstone graduate classes at Stevens Institute of Technology.
In these classes, students worked in teams of six to seven
members to solve business cases and make decisions
regarding a simulated technology-based organization.
Students rated themselves and their peers in four learning
outcome areas - Collaboration, Communication, Decision
Making, and Project Management.
In both cases, self and peer assessments were based on the
observations of specific behaviors of students working on
classroom projects throughout the semester. Students
engaged in frequent interaction while working on these
projects, thus providing good opportunity to observe specific
skills and behaviors of each other. Students were provided
with feedback on how they were rated by their peers on each
of the nine learning outcomes. Students were encouraged to
use the reports to plan their own development activities and
monitor progress over time. Faculty also received
information regarding how their classes fared in the
aggregate ratings of the nine learning outcomes as perceived
by their students.
Results
At NJIT, data was gathered from three separate sections with
a total of 158 students participating. Results from the Team
Developer were correlated with faculty ratings on the nine
learning outcomes and grades given at the end of class. [9]
Faculty rated student teams on each of the nine learning
outcomes. These team ratings, given by the faculty, were
significantly correlated with students’ average team peer-
ratings across all learning outcomes. Results clearly
demonstrate that student peer ratings are consistent with the
overall perceptions of faculty.
One interesting finding is the variation found among
correlations between self and peer ratings and students’
grades in freshman engineering design classes (Table 1).
There are several potential reasons for the low relationships
between grades and self and peer ratings, especially in
communication and teamwork. Through interviews, we
found that faculty were still generating the majority of their
grading on technical competency, with less attention on the
cognitive and behavioral skills measured by the Team
Developer. This finding was substantiated through student
course surveys in which students rated the extent to which
the nine learning outcomes were emphasized during the
course.
Table 1. Correlations of Peer/Self Ratings with Grades by
Learning Outcome

Learning Outcome           Peer-Grade   Self-Grade
Analytical Thinking        .32*         .35*
Communication              .22          .38*
Creative Problem Solving   .32*         .35*
Project Management         .39**        .31*
Research Skills            .59**        .41**
Self-Learning              .60**        .54**
Teamwork                   .22          .21
Technical Competence       .35*         .23
Systems Thinking           .38*         .28
(* p < .05, ** p < .01)
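The peer-grade and self-grade figures in Table 1 are ordinary Pearson correlations. A minimal sketch, using made-up ratings and grades rather than the NJIT data:

```python
# Pearson correlation between average peer ratings and course grades.
# The data below are illustrative, not the NJIT dataset.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative: five students' average peer ratings vs. grades (0-4 scale)
peer = [3.2, 3.8, 2.9, 4.1, 3.5]
grade = [3.0, 3.7, 2.5, 3.9, 3.3]
print(round(pearson_r(peer, grade), 2))
```

In practice one would also test each coefficient for significance against n - 2 degrees of freedom, as the asterisks in Table 1 indicate.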
In the Stevens example, data was gathered from six separate
classes with a total of 178 students involved in the Team
Developer process. The Team Developer was administered
twice in each class. The first administration occurred during
the middle of the semester and was used to provide
developmental feedback to class participants on four team-
related learning outcomes: Collaboration, Communication,
Decision Making, and Project Management. The second
administration occurred during the final week of class and
feedback was delivered to students during their final
presentations.
The results demonstrated that students improved in all four
areas (Table 2). Specifically, a series of paired t-tests found
significant differences between the means for the first
administration versus the second administration for learning
outcomes measured as well as the overall average across the
four areas combined.
Table 2. Average Peer Ratings for Students by Learning
Outcome (Rating Scale: 1 = Never to 5 = Always)

Learning Outcome    Time 1   Time 2   t value
Collaboration       3.71     3.83     2.68**
Communication       3.55     3.74     4.67**
Decision Making     3.46     3.65     5.31**
Proj. Management    3.51     3.72     4.62**
Overall             3.55     3.74     6.64**
(** p < .01)
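The comparisons in Table 2 are standard paired t-tests on each student's Time 1 and Time 2 ratings. A minimal sketch with hypothetical ratings (not the Stevens data):

```python
# Paired t-test sketch; the ratings below are illustrative only.
import math

def paired_t(before, after):
    """t statistic for paired samples: mean difference divided by the
    standard error of the differences (df = n - 1)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Five students' Collaboration peer ratings at midterm vs. end of semester
time1 = [3.4, 3.6, 3.8, 3.5, 3.7]
time2 = [3.6, 3.9, 3.9, 3.8, 3.8]
print(round(paired_t(time1, time2), 2))
```

The resulting t value is then compared against the t distribution with n - 1 degrees of freedom to obtain the significance levels reported in Table 2.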
Conclusions
As engineering educators increase their attention to
developing and assessing student learning outcomes, peer
evaluation processes will play an integral part in the
education process. First, our studies show that students can
play an active role in their own development and
assessment. However, these self-assessment skills need to be
developed by providing a structured process to facilitate their
learning. Secondly, peer feedback processes can have an
impact on developing student learning outcomes as
prescribed by ABET EC 2000. An educational tool such as
the Team Developer can support an institution's objectives in
the development and assessment of these critical student
learning outcomes. Team Developer data collected on
hundreds of students show that student self and peer ratings
can be consistent with faculty perceptions of student
performance. Additionally, when the process is administered
multiple times, individual team members improve
significantly on learning outcomes after peer feedback.
References
1. McGourty, J. (1994). Designing and teaching team courses
for technology-based students. Proceedings of the American
Society for Engineering Education, Edmonton, Canada.
2. Hillborn, R.B. (1994). Team learning for engineering
students. IEEE Transactions on Education, 37, 207-211.
3. Kernaghan, J.A., & Cooke, R.A. (1986). The contribution
of the group process to successful project planning in R&D
settings. IEEE Transactions on Engineering Management,
33, 134-140.
4. London, M., & Smither, J.W. (1995). Can multi-source
feedback change perceptions of goal accomplishment, self-
evaluations, and performance-related outcomes? Theory-
based applications and directions for research. Personnel
Psychology, 48, 803-839.
5. Locke, E.A., & Latham, G.P. (1990). A theory of goal
setting and task performance. Englewood Cliffs, NJ:
Prentice Hall.
6. Dominick, P.G., Reilly, R.R., & McGourty, J. (1997). The
effects of peer feedback on team member behavior. Group
and Organization Management, 22, 508-520.
7. McGourty, J. (1996). Promoting collaborative behavior in
the classroom: A field investigation. Paper presented at the
Eleventh Annual Conference of the Society for Industrial
and Organizational Psychology, San Diego, California.
8. McGourty, J., Sebastian, C., & Swart, W. (1998).
Developing a comprehensive assessment program for
engineering education. Journal of Engineering Education (in
press).
9. McGourty, J., Sebastian, C., & Reilly, R. (1997).
Incorporating student peer review and feedback into the
assessment process. Paper presented at the Best Assessment
Processes in Engineering Education: A Working
Symposium, sponsored by the National Science Foundation,
ABET, and Rose-Hulman Institute of Technology, April
1997, Terre Haute, Indiana.