INNOVATIVE GRADING FOR DESIGN EXERCISES
- A CASE STUDY FROM AEROSPACE ENGINEERING
Gillian Saunders-Smits, Joris Melkert
Faculty of Aerospace Engineering, Delft University of Technology, The Netherlands
G.N.Saunders@tudelft.nl; J.A.Melkert@tudelft.nl
ABSTRACT
Grading group design exercises on an individual basis is
difficult for lecturers to carry out. This becomes even
more complicated if many different lecturers are
involved. One way of ensuring uniform assessment is
the use of rubrics. This article describes an innovative
form of grading using rubrics in the capstone design
project of the Faculty of Aerospace Engineering at
Delft University of Technology. The article addresses
the advantages of using rubrics, the development of a
rubric specific to this exercise, the experiences so far
and its reliability, and draws conclusions on the
suitability of rubrics for grading group design
exercises.
Keywords: Assessment, rubrics, group design
exercises.
INTRODUCTION
Over the last decade the use of problem-based
learning, and in particular project-based education,
has increased in engineering education (de Graaff &
Kolmos, 2003). This teaching format has become
extremely popular in the discipline of design. Design
is considered the discipline in which all other mono-
disciplines are brought together in an integrated
way. Project-based education is a format that allows
students to prove and improve their ability in these
synthesizing skills.
Teaching design in a project-based education format
has the added advantage that students can be taught
more skills than just core engineering skills. Skills
such as working in teams, oral and written
communication, the ability for life-long learning,
and time and workload management can also be
included in a natural fashion. A third advantage is
that the educational situation mirrors industry
practice, making students better prepared for the
workplace.
Although at many universities across the world design
education was already carried out by having
students design an object in small groups, the
introduction of project-based education brought
about a paradigm shift. Before its introduction,
assessment took place on the basis of the quality of
the design or the product only, and therefore often
resulted in a group grade.
In the current climate of ever more stringent
accreditation criteria for assessment, enshrined in
law, an ever increasing need for accurate and fair
individual grading is becoming apparent. Examples
of such policies are that in Denmark all work
must be graded on an individual basis (Aalborg
University, 2011) and that in the Netherlands and
Belgium an institute must score a mark of 'sufficient'
or higher on the assessment part of its
accreditation review or its accreditation will be
cancelled forthwith (Eerste Kamer, 2010).
Individual grading is therefore rapidly becoming the
norm. In addition, an individual student will be
rewarded for the work he or she has done within the
project and will not be penalized when other team
members have not contributed to the work.
This paper describes the solution implemented at the
Faculty of Aerospace Engineering at Delft University
of Technology in the Netherlands to give individual
grades in a reliable way, making use of the
assessment format of rubrics (Stevens and Levi,
2005). The paper discusses the set-up of the
exercise, the theory behind rubrics as a
grading method, the implementation of rubrics in the
exercise, the reliability of the created rubric as a
grading tool, and the experiences with the use of the
tool and lessons learned.
DESIGN/SYNTHESIS EXERCISE
The BSc curriculum of the Faculty of Aerospace
Engineering at Delft University of Technology is very
design-oriented and aims to recreate the design life
cycle (Kamp, 2011). Design projects and design
education are a core part of the curriculum. These
culminate in the third-year Design/Synthesis Exercise
(DSE), which gives students a chance to apply the
analysis techniques learned in their more
fundamental courses.
With this design project the Bachelor curriculum is
concluded. It can be considered the capstone design
experience referred to by the Accreditation Board for
Engineering and Technology (Engineering
Accreditation Commission, 2000).
The exercise serves as the final proof of competence of
a BSc student, not dissimilar to an apprenticeship
piece from the era of the guilds. Design topics
include aircraft designs, spacecraft designs, the design
of space missions and the design of earth observation
missions, including the design of the necessary
hardware such as satellites (Melkert, 2010 and
earlier).
The exercise has been running for more than 10
years, has been well received in several
accreditations (Faculty of Aerospace Engineering,
2002 and QANU, 2008) and in 2003 won the Ritsema-
van Eck award for excellence in teamwork by both
staff and students.
LEARNING OBJECTIVES
The exercise has the following learning objectives.
At the end of the exercise the student must be able
to:
- Design a multi-disciplinary (sub)system or
  inventive arrangement of system elements using
  techniques from systems engineering and taking
  into account societal, environmental and ethical
  considerations.
- Autonomously acquire additional knowledge
  required for obtaining the solution to the design
  problem posed.
- Communicate their design and its process to their
  peers, the aerospace engineering academic staff
  and informed third parties.
- Function as a member of a team and be able to
  reflect on their performance in such a team.
SET UP
These learning objectives are achieved by having
students work in a practice-mirroring design
environment (Brügemann et al., 2005). The DSE is a
ten-week, full-time activity for groups of ten
students, taking half a semester to complete. The
study load is 15 credits in the European Credit
Transfer System, which equates to a workload of
400 hours per student.
Translated to a working environment, this means
that a group of students in the exercise carries out a
workload equivalent to 2.5 FTE. The exercise is
organized twice a year and handles approximately
300 students per year in total.
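For readers who want to check this arithmetic, the short Python sketch below reproduces it; the figure of 1600 productive hours per FTE per year is our assumption, not a number given in the paper.

  # Workload arithmetic for one DSE group: a minimal sketch.
  # Assumption (not from the paper): 1 FTE ~ 1600 productive hours/year.
  STUDENTS_PER_GROUP = 10
  HOURS_PER_STUDENT = 400        # 15 ECTS, as stated in the text

  group_hours = STUDENTS_PER_GROUP * HOURS_PER_STUDENT   # 4000 hours
  fte_years = group_hours / 1600                         # = 2.5 FTE-years
  print(f"Group effort: {group_hours} hours, about {fte_years:.1f} FTE-years")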
Students work together in groups of ten, in large
project rooms, each hosting 4-8 design teams. Each
student design team is supported throughout the
project by a principal tutor and two coaches from
the aerospace engineering faculty, each with a
different aerospace field of specialism to ensure the
multi-disciplinarity of the design. The principal tutor
is responsible for the design assignment. All design
groups have different design assignments to work on.
This makes the designs challenging for the students,
but also hard to compare and thus to grade.
During the exercise, the whole process of designing
is addressed, from the list of requirements up to the
presentation of the design. Typical aspects of real
design processes, such as decision making,
optimization and coping with conflicting
requirements are therefore encountered. Acquiring
experience often means going through iterative
processes, so design decisions must be continuously
reviewed to make sure that the design requirements
are met. During the exercise, the educational staff
reviews the students’ decision processes and overall
management of the project. Aspects of design
methodology and design management are also
reviewed.
The educational staff also provides technical
assistance for aspects of the projects where the
students lack sufficient background. This means that
the staff plays several roles throughout the project:
at one time they are the client, at another the
expert in the field, and at yet another the teacher
who grades the students. The students have to
distinguish between these roles, which may be
confusing in the beginning.
The exercise provides the students with the
experience of working in a team for an extended
period of time. This means that the students must
learn to co-operate, schedule and meet targets,
manage the workload, and solve conflicts in a group
setting.
ASSESSMENT
The assessment of the students' design work and of
the design process takes place throughout the whole
duration of the exercise. Each student receives
an individual grade for the exercise. This grade is
given by the principal tutor and the coaches. The
grade consists of a group component (40%), reflecting
the quality of the design and the process and
communication of the group as a whole, and an
individual component (60%), relating to the
individual's understanding of the design and the
methods used, the quality of their individual
contribution, and their effort, communication skills
and team-working skills.
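To make the weighting concrete, here is a minimal Python sketch of how such a grade could be combined; the paper specifies only the 40/60 split, so the function and the mapping of rubric scores onto a grade are illustrative assumptions.

  # Minimal sketch of the 40% group / 60% individual grade weighting.
  # Only the weights come from the paper; the rest is illustrative.
  def final_grade(group: float, individual: float) -> float:
      """Combine the group (40%) and individual (60%) components."""
      return 0.4 * group + 0.6 * individual

  print(final_grade(8.0, 7.5))   # about 7.7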
The team of coaches meets with the students in both
a scheduled and a non-scheduled way. At least once
per week there are planned progress meetings.
Furthermore there are three formal reviews
throughout the exercise (baseline review, mid-term
review and final review). On the basis of these
meetings and reviews the coaches are required to
formally assess and grade the students twice
during the exercise.
The first grade, handed out after the mid-term
review, only serves as feedback to the students; the
second grade is given at the end of the exercise and
is the final grade. In addition, the students are
required to perform a peer and self-evaluation, the
results of which serve as input for the coaches in the
coaching and grading process (Andernach and
Saunders-Smits, 2006 and van den Bogaard &
Saunders-Smits, 2007).
In the past, principal tutors and their coaches were
given limited instruction with regard to grading the
exercise. It was assumed that each staff member was
more than qualified to judge designs. This was
mainly possible because of the initially small set-up
of the exercise (8-10 teams per year), run by a small
group of experienced, well-attuned lecturers.
As the number of students studying aerospace
engineering increased from a first-year intake of 150
students per year in 1996 to more than 500 in 2011,
so did the number of project groups and tutors. The
need for a more uniform, easy-to-implement and
more reliable system arose. This has led to the
introduction of rubrics as an assessment tool.
RUBRICS
Rubrics are an assessment tool for assessing and
giving feedback on a student's performance in
papers, essays, projects, presentations and other
open-ended assignments (Stevens and Levi, 2005).
A rubric is effectively a set of criteria and standards
directly linked to the learning objectives. Rubrics
are intended to ensure accurate, fair and universal
grading. They have the added advantage of providing
students with instant, detailed feedback, and their
use saves lecturers time otherwise spent on detailed
grading and feedback.
The reliability of rubrics as a measurement scale is
well accepted and widespread use of them is made
in the United States of America. There are many
reliable websites, listed in Stevens and Levi (2005),
in which educators share their rubrics. It is for these
reasons that rubrics were selected as the new
assessment tool for the DSE.
RUBRICS DEVELOPED IN DSE
For the DSE a comprehensive set of rubrics was
developed, split into two sets: one set of rubrics for
the group work, resulting in the group component of
the grade, and a set of rubrics relating to the
individual student's contribution and ability.
Based on the learning objectives of the exercise a
set of criteria was developed. The final list of these
criteria, also known as dimensions, is given in table
1. They can be divided into criteria for the group
performance and for the individual performance.
For each of these criteria a 5-point scale was
developed, describing the desired level for each
point of the scale. As a starting point, a set of
rubrics was used that had been developed at the
United States Air Force Academy for their capstone
Engineering 410/430 course, see table 2. The
developed rubrics are given in table 4 in the
appendix to this paper.
IMPLEMENTATION
To ensure fair and uniform use of the rubrics, it was
decided that the grading meetings would from then
on be chaired by one of the six members of the
organizing committee, thus ensuring uniform
application of the rubrics across the board. These
meetings would typically last one hour.
TRIAL RUN
The initial set of rubrics consisted of 40 criteria for
the group component of the grade and 14 criteria for
the individual grades. This set was road-tested in the
spring semester of 2010, after which design
iterations took place, reducing the number of
criteria for the group component to 30 and the
number of criteria for the individual component to
12. This was done based on the feedback given by
the 10 groups of lecturers who took part in the trial.
Changes included:
- merging similar criteria into one criterion,
- shifting criteria from the group to the individual
  grade,
- creating higher-level criteria to avoid over-
  focusing on certain criteria, such as the reporting
  criteria, and to avoid doubling up by using the
  same criterion twice, once for the group and once
  for the individual.
Design
- Originality of the solution
- Consistency of the design
- The quality of each sub design
- Interface management between sub designs (i.e.
  is the input of one subsystem consistent with the
  output of a connected subsystem)
- Feasibility analysis of the final design
  (requirements check)
- Sensitivity analysis
- Trade-off & motivation
- Market and/or cost analysis
- Sustainability awareness (the level to which
  students are aware of the impact of their design,
  not whether the design is sustainable!)
- Risk
- Budget management (e.g. mass, power, money)

Process
- Communication within the group
- Use of resources (e.g. other members of staff,
  coaches, computer resources, facilities, use of
  team members, library, external contacts,
  museums, company visits and contacts etc.)
- Integrated use of systems engineering
- Internal quality procedure
- Integration of sub disciplines (i.e. the different
  aerospace disciplines)
- Show of unity during reviews

Communication
- Weekly meetings
- Loose staff and/or external contacts
- Dealing with feedback in meetings and reviews
- Coherence and completeness of report
- Academic reproducibility of the results
- Consistency of terminology and symbols
- Quality and use of references in reports
- Conclusions & recommendations in reports
- Representativeness during presentations
- Structure and coherence of the presentations
- Contents of presentations
- Coherence between presentations & reports
- Ability to answer staff questions

Individual component
- Quality of technical work done
- Physics basis behind the design
- Dealing with feedback
- Showing of understanding of subject matter
- Ability to answer staff questions
- Identifiable output / job performance
- Attitude
- Initiative
- Management of resources
- Communication within group and towards staff
- Coherence and completeness of individual
  contribution to report
- Academic reproducibility of individual
  contribution to report
Table 1: Grading criteria
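To make the structure of the rubric concrete, the sketch below encodes a few of these dimensions as a plain Python data structure and averages the 1-5 item scores per scale. The representation and names are illustrative assumptions; the paper does not describe a software implementation.

  # Illustrative sketch: a rubric as a data structure, with per-scale averages.
  # Items are scored on the 5-point scale of table 4 (1 = Poor ... 5 = Excellent).
  from statistics import mean

  RUBRIC = {
      "Design": ["Originality of the solution", "Consistency of the design",
                 "Sensitivity analysis"],       # 3 of the 11 design criteria
      "Process": ["Communication within the group", "Internal quality procedure"],
  }

  def scale_scores(item_scores):
      """Average the 1-5 item scores within each scale."""
      return {scale: mean(item_scores[item] for item in items)
              for scale, items in RUBRIC.items()}

  example = {"Originality of the solution": 3, "Consistency of the design": 4,
             "Sensitivity analysis": 4, "Communication within the group": 5,
             "Internal quality procedure": 4}
  print(scale_scores(example))   # Design about 3.67, Process 4.5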
EXPERIENCES WITH THE USE OF RUBRICS
In November 2010 the new grading system using the
rubrics was formally introduced. All staff and
students were formally informed of the changes
through meetings and course manuals.
The experiences overall were very positive. Staff
members overwhelmingly indicated that they felt
this system was much better, both in quality and in
speed. It allowed them to be fair to each student
and gave them the feeling of being more objective.
They also found it much easier to give the students
individual feedback, both in terms of the content of
the feedback and its acceptance.
This was especially the case for the often less
experienced coaches, typically PhD candidates and
post-docs, for whom the DSE is often their first
teaching experience while their own design
experience is still limited. Staff members also felt
that having these detailed criteria allowed them to
steer a group more effectively when it went off
track and to observe the students with the criteria
in mind.
From a student point of view we have noticed a
much greater acceptance of grades. In the past there
would often be discussion on whether every tutor
used the same criteria or applied them equally.
These discussions have disappeared. Students also
feel that the grades are much better motivated and
that sufficient feedback is given.
RELIABILITY
There was some concern that upon introduction of
the new grading method a significant difference in
the level of the grades would occur. To see if this
was the case, the grades from the DSE run in the
spring of 2009 were compared to the grades in the
fall of 2010.
For this an independent t-test was carried out. The
average grade in the fall of 2010 was 8.13 with an SE
of 0.94, compared to 8.06 in the spring of 2009 with
an SE of 0.25, with no significant difference found
(t(60.82) = -0.738, p > 0.05). However, the effect
size (r = 0.09) is small, so more research is needed
over the next few years to see if this conclusion can
become more substantive.
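Readers wishing to repeat this style of analysis can do so with a few lines of SciPy; the sketch below uses made-up placeholder grades, not the actual 2009/2010 DSE data. The non-integer degrees of freedom reported above suggest an unequal-variance (Welch) t-test, which equal_var=False selects, and r is computed with the t-to-r conversion given in Field (2005).

  # Sketch of the grade comparison: Welch's t-test plus effect size r.
  # The two grade arrays are placeholders, NOT the real DSE grades.
  import numpy as np
  from scipy import stats

  grades_2009 = np.array([8.0, 8.1, 7.9, 8.2, 8.0, 8.1, 7.8, 8.3])  # placeholder
  grades_2010 = np.array([8.3, 7.8, 8.5, 8.0, 8.2, 7.9, 8.4, 8.1])  # placeholder

  t, p = stats.ttest_ind(grades_2010, grades_2009, equal_var=False)

  # Welch-Satterthwaite degrees of freedom.
  v1, v2 = grades_2009.var(ddof=1), grades_2010.var(ddof=1)
  n1, n2 = len(grades_2009), len(grades_2010)
  df = (v1/n1 + v2/n2)**2 / ((v1/n1)**2/(n1-1) + (v2/n2)**2/(n2-1))

  # Effect size r = sqrt(t^2 / (t^2 + df)), as in Field (2005).
  r = np.sqrt(t**2 / (t**2 + df))
  print(f"t({df:.2f}) = {t:.3f}, p = {p:.3f}, r = {r:.2f}")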
There was also an interest in seeing whether the
scales used (Design, Process, Communication and
Individual Contribution) were reliable. To judge
this, Cronbach's alpha, which measures the
reliability of a scale, was calculated for each of the
scales using the scores of the fall 2010 DSE.
For ability scales such as grades, values of Cronbach's
alpha of more than 0.7 are deemed reliable (Field,
2005). The results can be seen in table 3.
Scale                     Cronbach's alpha    N
Design                    0.78 (see below)    6
Process                   0.90                6
Communication             0.88                6
Individual Contribution   0.91                54
Table 3: Cronbach's alpha scores of the rubric scales
As can be seen from table 3, the scales are very
reliable and well above the 0.7 mark. This means
these scales have a good underlying construct.
Some comments should be made on these results.
The calculations have been carried out using data
from only 6 groups, so more data points will need to
be added later to improve the reliability
calculations. A problem already occurred for the
Design scale, as all 6 groups received the same score
for originality of the solution, which leads to zero
variance for that scale item.
This means that the Cronbach's alpha for the Design
scale excluded originality of the solution from its
calculation. This would indicate that this scale item
does not measure anything, as it is not distinct
enough. To be sure of this, the item has been kept in
for one more exercise to see if this remains the
case. If so, it will be deleted.
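For completeness, Cronbach's alpha can be computed directly from the item-score matrix; the sketch below also drops zero-variance items first, mirroring the exclusion of the originality item described above. The score matrix is a made-up placeholder, not the fall 2010 data.

  # Sketch: Cronbach's alpha for one scale, dropping zero-variance items
  # (as happened for the originality item). Scores are placeholders.
  import numpy as np

  def cronbach_alpha(scores):
      """scores: rows = groups, columns = scale items (1-5 rubric scores)."""
      scores = scores[:, scores.var(axis=0, ddof=1) > 0]  # drop constant items
      k = scores.shape[1]
      item_var_sum = scores.var(axis=0, ddof=1).sum()
      total_var = scores.sum(axis=1).var(ddof=1)
      return k / (k - 1) * (1 - item_var_sum / total_var)

  # 6 groups x 4 items; the first column is constant, like the originality item.
  demo = np.array([[4, 3, 4, 5],
                   [4, 4, 4, 4],
                   [4, 2, 3, 3],
                   [4, 4, 5, 4],
                   [4, 3, 3, 4],
                   [4, 5, 4, 5]])
  print(f"alpha = {cronbach_alpha(demo):.2f}")   # about 0.79 for this toy data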
CONCLUSIONS AND RECOMMENDATIONS
Overall the implementation of the new grading
system has been very successful. Although initially it
was a lot of work, it pays itself back in ease of use.
Rubrics are an excellent tool to individually grade
students who carry out design work in groups with
diverse assignment topics but shared learning
objectives. Once the rubrics have been developed it
is easy to instruct new lecturers in the system, and
the resulting grades have high acceptance amongst
students.
Statistical analysis so far has found that there has
been no significant change in the average grade
given and that the scales used are reliable
instruments of measurement, although more
research is needed as the current data set is still
somewhat small.
The authors whole-heartedly recommend the use of
rubrics for open-ended design exercises, although
sufficient time must be taken to develop the rubrics
to the required level.
ACKNOWLEDGEMENTS
The development and implementation of the new
grading method was done in close cooperation with
our colleagues Mirjam Snellen and Erwin Mooij.
Without them this project would not be as successful
as it currently is, and we would like to express our
gratitude to them.
The authors would also like to acknowledge the
input, support and camaraderie of their fellow DSE
organizers, Vincent Brügemann, Erwin Mooij, Mirjam
Snellen and Nando Timmer, the current and past
Directors of Education at Aerospace Engineering, as
well as all participating staff and students in the
Design/Synthesis Exercise.
REFERENCES
Aalborg University (2011), Danish law on grading, available from
http://www.asb.dk/en/programmes/ausummeruniversity/faculty/academicinformation/coursework/,
last accessed on 16 May 2011.
Andernach, T. and Saunders-Smits, G.N. (2006), The use of
Teaching Assistants in Project Based Learning at Aerospace
Engineering, Proceedings of the 36th ASEE/IEEE Frontiers in
Education Conference, San Diego.
Table 2: Rubric developed for the Engineering 410/430 course at the
United States Air Force Academy, which served as a starting point
for our rubric development. (Table not reproduced here.)
Bogaard, M.E.D. van den and Saunders-Smits, G.N. (2007), Peer
and Self evaluations as means to improve the assessment of
project based learning, Proceedings of the 37th ASEE/IEEE
Frontiers in Education Conference, Milwaukee, USA.
Brügemann, V., Brummelen, H. van, Melkert, J., Kamp, A., Reith,
B., Saunders-Smits, G.N. and Zandbergen, B. (2005), An example
of active learning in Aerospace Engineering, in E. de Graaff, G.N.
Saunders-Smits and M.R. Nieweg (Eds), Research and Practice of
Active Learning in Engineering Education, Amsterdam University
Press, pp. 156-164.
Eerste Kamer (2010), Memorie van Toelichting Wetsvoorstel
Versterking besturing bij instellingen voor hoger onderwijs, de
collegegeldsystematiek en de rechtspositie van studenten,
available from
http://www.eerstekamer.nl/wetsvoorstel/31821_versterking_besturing_bij#p6,
last accessed 16 May 2011 (in Dutch).
Engineering Accreditation Commission (2000), Criteria for
Accrediting Engineering Programs, Accreditation Board for
Engineering and Technology Inc. (ABET), Baltimore.
Faculty of Aerospace Engineering (2002), Results of Assessment by
ABET and VSNU of Educational Program and Research Quality,
Period 1995-2000, Delft University of Technology, Delft.
Field, A. (2005), Discovering Statistics Using SPSS, 2nd edition,
SAGE Publications, London.
Graaff, E. de and Kolmos, A. (2003), Characteristics of
Problem-Based Learning, International Journal of Engineering
Education, Vol. 19, No. 5, pp. 657-662.
Kamp, A. (2011), Delft Integrated Engineering Curriculum,
Proceedings of the 7th International CDIO Conference, Technical
University of Denmark, Copenhagen, Denmark, June 20-23.
Melkert, J.A. (Ed.) (2010), Delft Aerospace Design Projects 2010,
Het Goede Boek, Huizen (and earlier editions).
QANU (2008), Aerospace Engineering - Assessment of degree
courses, Quality Assurance Netherlands Universities, Utrecht.
Stevens, D.D. and Levi, A.J. (2005), Introduction to Rubrics - An
Assessment Tool to Save Grading Time, Convey Effective Feedback
and Promote Student Learning, Stylus Publishing, Virginia.
APPENDIX: RUBRICS
Each criterion in the following rubrics is scored on a 5-point scale:
Poor (1 pt), Marginal (2 pts), Average (3 pts), Good (4 pts),
Excellent (5 pts).

Originality of the solution
- Poor: Solution copied from external sources, but not understood.
- Marginal: Solution copied from external sources, but reasonably well implemented.
- Average: Solution copied from external sources, and well implemented.
- Good: Solution is a combination of existing solutions and some own ideas.
- Excellent: Solution is well thought out, with hardly any use of existing solutions or with optimal use of existing solutions.

Consistency of the design
- Poor: Inconsistencies on system level are present.
- Marginal: The design is not consistent due to poorly designed or missing sub-systems.
- Average: Sub-system design is there but misses the detail to allow for a consistency check.
- Good: Design is consistent from top level to sub-system level.
- Excellent: Design is consistent from top level to detailed sub-system level.

The quality of each sub design
- Poor: Sub-system design is missing.
- Marginal: Sub-system design is not complete.
- Average: All critical sub-systems have been designed but details are missing.
- Good: All critical sub-systems have been designed with sufficient detail.
- Excellent: All sub-systems have been designed in great detail.

Interface management between sub designs (i.e. is the input of one subsystem consistent with the output of a connected subsystem)
- Poor: No interface management between sub designs. Design will not function.
- Marginal: Poor interface management between sub designs. Design will probably not function.
- Average: Interface management between sub designs has been done, but some interfaces were overlooked. Design will function most of the time.
- Good: Interface management has been properly done for all sub designs. Design will work.
- Excellent: Interface management has been done for sub designs and sub-sub designs.

Feasibility analysis of the final design (requirements check)
- Poor: Most top-level requirements (i.e. requirements set by the principal tutor) are not met.
- Marginal: Some top-level requirements are not met.
- Average: Only the top-level requirements are met.
- Good: All top-level requirements and most derived (by the student) requirements are met.
- Excellent: All top-level and derived requirements are met. The list of derived requirements is extensive and covers all sub-systems.

Sensitivity analysis
- Poor: No sensitivity analysis has been performed.
- Marginal: Only one or two off-nominal conditions have been studied to check the system's behaviour.
- Average: Only a few system parameters have been varied over a limited range to study the system sensitivity.
- Good: For the most important system parameters a detailed sensitivity analysis has been executed.
- Excellent: All system parameters have been addressed in a complete sensitivity analysis. Conclusions w.r.t. robustness have been drawn.

Trade-off & motivation
- Poor: A single concept has been presented with no trade-off.
- Marginal: A concept from 2-3 possible designs has been chosen, with limited motivation and/or incorrect trade-off criteria.
- Average: The best concept has been chosen as the result of a limited trade-off process using limited or incorrect trade-off criteria. Slightly changing the criteria could give a different outcome.
- Good: A complete and consistent trade-off process has been done for a limited number of potential design solutions.
- Excellent: A complete and consistent trade-off process has been done for a large number of potential design solutions.

Market and/or cost analysis
- Poor: No market and/or cost analysis performed.
- Marginal: In the design the market and/or cost analysis has been touched upon slightly.
- Average: A sufficient market and/or cost analysis has been performed.
- Good: The market and/or cost analysis shows all the relevant steps and has been a valuable tool for the design.
- Excellent: The market and/or cost analysis shows all the relevant steps in detail and has been a valuable tool for the design throughout the whole process.

Sustainability awareness (the level to which students are aware of the impact of their design on sustainability, not whether the design is sustainable!)
- Poor: Sustainability awareness has not been addressed.
- Marginal: Sustainability awareness has received some marginal attention.
- Average: Sustainability awareness has been addressed sufficiently.
- Good: Sustainability awareness has been addressed well.
- Excellent: Sustainability awareness has been addressed very well and has been integrated in the design throughout the whole process.

Risk
- Poor: There is no risk assessment made.
- Marginal: The topic of risk assessment has only been touched upon.
- Average: A reasonable risk assessment has been made.
- Good: There has been a thorough risk assessment.
- Excellent: Risk assessment has been integrated in the design throughout the whole design process.

Budget management (e.g. mass, power, money)
- Poor: There is no budget management performed at all.
- Marginal: The topic of budget management has been addressed, but only marginally. The added benefit for the design has not been made clear.
- Average: Sufficient care has been taken of the budget management.
- Good: Budget management has been taken into account well.
- Excellent: Budget management has been taken into account throughout the whole design process. This has resulted in a detailed budget overview.

Table 4a: Design Rubric
Communication within the group
- Poor: Communication skills ineffective. Little or no effort to improve communication procedures.
- Marginal: Communication skills ineffective. Effort is made to improve communication procedures.
- Average: Generally gets the point across. Tries to improve in weak areas.
- Good: Communication usually effective. Only minor improvements in communication procedures needed.
- Excellent: Communication very effective.

Use of resources (e.g. other members of staff, coaches, computer resources, facilities, use of team members, library, external contacts, museums, company visits, company contacts etc.)
- Poor: No resources have been used, or they have been wrongly used.
- Marginal: Only a few resources have been used, and not exhaustively.
- Average: Most resources have been used, but not fully exploited.
- Good: All resources have been used, and mostly efficiently.
- Excellent: All resources have been fully exploited in an efficient way.

Integrated use of systems engineering
- Poor: Systems engineering principles are missing.
- Marginal: Systems engineering principles have been used sparsely.
- Average: Systems engineering principles have been used, but not consistently.
- Good: In general, systems engineering principles have been used.
- Excellent: Systems engineering principles have been used at large and were fully integrated in the design process.

Internal quality procedure
- Poor: There was no internal quality procedure.
- Marginal: An internal quality procedure was established but not adhered to.
- Average: An internal quality procedure was established but not always adhered to.
- Good: An internal quality procedure was established and generally adhered to.
- Excellent: An internal quality procedure was established, adhered to and corrected where necessary.

Integration of sub disciplines (i.e. the different aerospace disciplines)
- Poor: There is no integration of the subdisciplines; all topics are stand-alone topics.
- Marginal: There is marginal integration of the subdisciplines; all topics seem to be more or less stand-alone topics.
- Average: The subdisciplines show a reasonable amount of integration.
- Good: The subdisciplines are integrated; there is a consistent coherence in the results achieved.
- Excellent: The subdisciplines are integrated well; there is a very consistent coherence in the results achieved.

Show of unity during reviews
- Poor: Group appears to be 10 individuals with different opinions.
- Marginal: Group members do not always share the same views.
- Average: Overall the group members present the work as a team, with some individual touches.
- Good: The group members present the work as a team.
- Excellent: Group acts as one individual in presentation, support and understanding.

Table 4b: Process Rubric
Weekly meetings
- Poor: Weekly meetings are held but no agenda and/or minutes are made.
- Marginal: Weekly meetings are held and token agenda and/or minutes are made.
- Average: Weekly meetings are held and to-the-point agenda and/or minutes are made.
- Good: Weekly meetings are held and to-the-point agenda and/or minutes are made, which are adhered to.
- Excellent: Weekly meetings are held, to-the-point agenda and/or minutes are made, and action points from meetings are followed up correctly.

Loose staff and/or external contacts
- Poor: Staff and/or others were never contacted outside of staff-instigated meetings when they should have been, and/or were improperly dealt with.
- Marginal: Staff and/or external contacts were seldom contacted outside of staff-instigated meetings when they should have been.
- Average: Staff and/or external contacts are occasionally contacted. Contact could have been better.
- Good: Staff and/or external contact is frequent and satisfactory.
- Excellent: Staff and/or external contact is frequent and to the point.

Dealing with feedback in meetings and reviews
- Poor: Feedback is not accepted by the group at all.
- Marginal: Feedback is accepted but ignored by the group.
- Average: Feedback is accepted by the group and an attempt is made to account for it.
- Good: Group shows serious interest in understanding the feedback and accounting for it.
- Excellent: Feedback is accepted by the group and is optimally used.

Coherence and completeness of report
- Poor: The report is a collection of individual contributions without coherence and consistency. Essential parts of the report are missing.
- Marginal: The report is a collection of individual contributions with some coherence and consistency. Not all parts of the report are present.
- Average: The report is a collection of individual contributions with reasonable coherence and consistency. Some minor parts of the report are missing.
- Good: The report is complete in the process and design description of the chosen design. The report is coherent and consistent, although some individual touches can be seen.
- Excellent: The report is complete in the process and design description, and provides the reader with sufficient material to cross-check the results and check the alternatives. The report is coherent and consistent, and appears to be written by one person.

Academic reproducibility of the results
- Poor: The results cannot be reproduced as they are false.
- Marginal: The results cannot be reproduced as data is missing.
- Average: Most results can be reproduced. Only some data is missing.
- Good: The results can be reproduced with little effort.
- Excellent: Results are fully reproducible.

Consistency of terminology and symbols
- Poor: No consistency in terminology and symbols. List of symbols is missing.
- Marginal: Some consistency in terminology and symbols. Many symbols are missing from the list of symbols.
- Average: Sufficient consistency in use of symbols and terminology. Most symbols are accounted for in the list of symbols.
- Good: Good consistency in use of symbols and terminology. Complete list of symbols.
- Excellent: Excellent consistency in use of terminology and symbols, with a complete list of symbols with clear explanations.

Quality and use of references in reports
- Poor: Hardly any references given in the report. References given are of poor quality.
- Marginal: Insufficient references are given. Quality of references should be improved.
- Average: Sufficient references are given, most of sufficient quality.
- Good: Appropriate use of references of good quality.
- Excellent: Excellent use of references of high quality.

Conclusions & recommendations in reports
- Poor: Conclusions and/or recommendations are missing.
- Marginal: Poorly formulated conclusions and recommendations, not based on evidence in the report.
- Average: Most conclusions and recommendations are present and based on evidence from the report. Some improvement needed.
- Good: All conclusions and recommendations are present and based on evidence from the report.
- Excellent: Well formulated and argued conclusions and recommendations, based on evidence from the report.

Representativeness during presentations
- Poor: Group appears uninterested. Their appearance is untidy. Team members contradict each other and cannot reach consensus.
- Marginal: Group's appearance is untidy. They try to express interest but could do better. Team members contradict each other but after discussion will reach consensus.
- Average: Group appearance is tidy and members appear interested, with room for some minor improvements. Team members are usually consistent with their answers but occasionally "slip".
- Good: Group has a unified look and shows interest. Team members give consistent answers, but do not always help each other out.
- Excellent: Group pays great attention during the presentation, has a unified look and comes across professionally. Team members give consistent answers, and add to each other in a supportive and structured way.

Structure and coherence of the presentations
- Poor: Structure and coherence are missing.
- Marginal: Presentation is a collection of individual presentations with not much coherence.
- Average: Presentation is mostly structured, but coherence is partly missing.
- Good: Presentation is structured and coherent, but "individual touches" can still be seen.
- Excellent: Presentation is very structured and coherent, as if made and given by a single person.

Contents of presentations
- Poor: Presentation lacks detail and does not support conclusions. Irrelevant information presented.
- Marginal: Presentation lacks detail; although the information is relevant, it is not sufficient to support conclusions.
- Average: Presentation lacks detail, and is barely enough to support conclusions.
- Good: Presentation has sufficient detail to support conclusions.
- Excellent: Presentation has the right level of detail to support the conclusions and to understand the recommendations.

Coherence between presentations & reports
- Poor: There is no coherence between report and presentation.
- Marginal: The coherence between report and presentation is poor and needs serious improvement.
- Average: The coherence between report and presentation is acceptable. Minor improvements needed.
- Good: The coherence between report and presentation is good.
- Excellent: The coherence between report and presentation is excellent.

Ability to answer staff questions
- Poor: Group is not able to answer staff questions as they do not understand the subject matter.
- Marginal: Group is barely able to answer staff questions due to poor understanding of the subject matter.
- Average: Group is generally able to answer staff questions, showing an average understanding of the subject matter.
- Good: Group is able to answer staff questions with some detail due to a good understanding of the subject matter.
- Excellent: Group is able to answer staff questions in detail due to an excellent understanding of the subject matter.

Table 4c: Communication Rubric
Quality of technical work done
- Poor: Work must be redone by others to meet standards.
- Marginal: Work must be redone or repaired to meet standards.
- Average: Quality of work is acceptable.
- Good: Work is of high quality. A producer.
- Excellent: Work is of exceptional quality.

Physics basis behind the design
- Poor: Shows no understanding of the physics behind the design question.
- Marginal: Shows only marginal understanding of the physics behind the design question.
- Average: Has proven a reasonable understanding of the physics behind the design question.
- Good: Shows a good understanding of the physics behind the design question, which has led to a good design.
- Excellent: Has understood the physics behind the design question completely. Based on this understanding, has come up with new insights leading to new, unexpected solutions.

Dealing with feedback
- Poor: Feedback is not accepted by the individual at all.
- Marginal: Feedback is accepted but ignored by the individual.
- Average: Feedback is accepted by the individual and an attempt is made to account for it.
- Good: Individual shows serious interest in understanding the feedback and accounting for it.
- Excellent: Feedback is accepted by the individual and is optimally used.

Showing of understanding of subject matter
- Poor: Student does not understand the subject matter.
- Marginal: Student shows poor understanding of the subject matter.
- Average: Student shows average understanding of the subject matter.
- Good: Student shows good understanding of the subject matter.
- Excellent: Student shows excellent understanding of the subject matter.

Ability to answer staff questions
- Poor: Student is not able to answer staff questions.
- Marginal: Student is barely able to answer staff questions.
- Average: Student is able to answer staff questions.
- Good: Student is able to answer staff questions with some detail.
- Excellent: Student is able to answer staff questions elaborately.

Identifiable output / job performance
- Poor: Hardly ever performs the assigned tasks.
- Marginal: Performs almost all assigned tasks.
- Average: Performs all assigned tasks.
- Good: Sometimes does more than required.
- Excellent: Consistently does more than required.

Attitude
- Poor: Negative attitude which adversely affects other company members or the project.
- Marginal: Negative attitude toward project and/or team.
- Average: Neutral attitude towards project and team.
- Good: Positive attitude toward project and team.
- Excellent: Positive and professional attitude which favorably influences other company members.

Initiative
- Poor: Lets others do the work; does the minimum he/she thinks is needed to get by.
- Marginal: Tends to watch others work. Gets involved only when necessary. Volunteers to help when it will look good.
- Average: Gets involved enough to complete tasks. Does his/her share.
- Good: Readily accepts tasks, sometimes seeks more work. Gets involved in the project.
- Excellent: Takes initiative to seek out work, concerned with getting the job done. Very involved in the project.

Management of resources
- Poor: Does little useful work in the group or outside it; wastes his/her time and that of others. Work is constantly late.
- Marginal: Wastes most of the group's time. Seldom seen doing productive work. Some tasks completed late.
- Average: Wastes some time in the group, but works hard when a deadline is near. Most tasks completed on time.
- Good: Uses time effectively in and out of the group. Completes all tasks on time.
- Excellent: Uses time effectively in and out of the group and works to get others to do the same. All tasks completed on or ahead of schedule.

Communication within group and towards staff
- Poor: Communication skills ineffective. Makes little or no effort to improve.
- Marginal: Communication skills ineffective. Makes an effort to improve.
- Average: Generally gets the point across. Tries to improve in weak areas.
- Good: Communication usually effective. Only minor improvements needed.
- Excellent: Communication very effective.

Coherence and completeness of individual contribution to report
- Poor: The contribution shows no coherence at all. Essential items that should be discussed are missing.
- Marginal: The contribution shows marginal coherence at best. Not all items that should be discussed are present.
- Average: The contribution shows moderate coherence. Only some minor items are missing.
- Good: The contribution is coherent. This part of the report is complete in the process and design description of the chosen design.
- Excellent: The contribution is coherent and all topics addressed are placed in a logical relation with respect to each other. This part of the report is complete in the process and design description, and provides the reader with sufficient material to cross-check the results and check the alternatives.

Academic reproducibility of individual contribution to report
- Poor: The results cannot be reproduced as they are false.
- Marginal: The results cannot be reproduced as data is missing.
- Average: Most results can be reproduced. Only some data is missing.
- Good: The results can be reproduced with little effort.
- Excellent: Results are fully reproducible.

Table 4d: Individual Rubric