PRACTICE NOTES SERIES Managing for impact

To rubrics or not to rubrics?
An experience using rubrics for monitoring, evaluating and learning in a complex project

Samantha Stone-Jovicich, CSIRO, August 2015
In this Practice Note I share our experience using an evaluation and monitoring approach called 'rubrics' to assess a complex and dynamic project's progress towards achieving its objectives. Rubrics are a method for aggregating qualitative performance data for reporting and learning purposes. In M&E toolkits and reports, rubrics looks very appealing. It appears capable of meeting accountability needs (i.e. collating evidence that agreed-upon activities, milestones, and outcomes have been achieved) whilst also contributing to enhanced understanding of what worked, what was less successful, and why. Rubrics also seems able to communicate all of this in the form of comprehensive, yet succinct, tables. Our experience using the rubrics method, however, showed that it is far more difficult to apply in practice. Nonetheless, its value-add for supporting challenging projects – where goal-posts are often shifting and unforeseen opportunities and challenges are continuously emerging – is also understated. In this Practice Note, I share the process of shaping this method into something that seems to be the right fit for the project (at the time of producing this note the project is ongoing and insights are still emerging).
Project context
The experiences shared in this Practice Note come from the Food Systems Innovation (FSI) initiative. FSI is a partnership between the Department of Foreign Affairs and Trade (DFAT), the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Australian Centre for International Agricultural Research (ACIAR). Launched in 2012, FSI's aim is to bring together Australian and international partners and expertise to help improve the impact of Australian-supported aid investments in agriculture and food systems in the Indo-Pacific region.
FSI is ambitious and complex. It brings together three Australian government agencies into a new mode of collaboration (i.e., partnership versus donor-client relationship) whilst also working closely with partners overseas and with global networks of food systems experts and other domain specialists. It aims to balance the planning and management of a detailed, collectively agreed-upon set of activities and associated outputs with flexibility to take on new opportunities and partner needs as they emerge. These, along with other aspects that make FSI a challenging project, have made it quite difficult for the FSI team to identify and develop effective monitoring, evaluation and learning (MEL) approaches and tools.

When we came across the rubrics approach, it seemed to tick many of the boxes we needed: the ability to monitor activities and outputs delivered (primarily quantitative information) and assess progress towards outcomes (largely qualitative evidence). An additional value-add was that it looked promising as a platform for fostering reflection and learning among the FSI team and partners. We have yet to test the latter, but in this Practice Note we share our experiences to date using rubrics as a monitoring and evaluation tool.
So what are rubrics?
Rubrics – often called 'evaluative rubrics' – is a qualitative assessment tool. Rubrics involves articulating and clarifying 'the things that matter' in a project or initiative, which can encompass aspects related to the performance, quality, usefulness, and effectiveness of project activities, services or products. These are in essence the things that are considered by those involved in the project as important to pay attention to. These are evaluated using a qualitative rating system (e.g. excellent, good, adequate, poor). Rubrics typically look like a table or a matrix.

There are many readily available rubrics guidelines and reports online. Some of the resources we used and found most useful are listed on the last page of this Practice Note.
Our experience with rubrics in FSI
We were first introduced to rubrics by an evaluation practitioner who was helping us reflect on why the approaches and tools we had developed were simply not working for us (an indicator-based framework and a home-grown narrative approach; see the Practice Note 'Journey to Fit for Purpose' for more details). Rubrics was proposed as a middle ground between the rigid, quantitative, accountability-oriented indicator M&E system, on the one hand, and the fluid, qualitative-heavy, process-focused, reflection-oriented Learning Trajectory approach, on the other. Below we describe our journey with rubrics.
Sussing rubrics out
A small team of us (evaluation practitioner included) spent half a day brainstorming whether the rubrics approach was worth pursuing (see Figure 1 below). This was a critical decision, as we had already invested a significant amount of time in the development of previous MEL tools only to find them failing. At this stage we were not only low in energy but also quite cognisant, and nervous, that FSI did not have a functioning MEL system to support the management and implementation teams.
So how did we go about making the decision? In our discussions, three questions dominated:

• Would rubrics allow us to collate, in a non-cumbersome and rapid way, the evidence needed to demonstrate that we were delivering on what we promised? – we thought yes.
• Equally critical, would it enable us to report on our outcomes, which, up to that point, we had been struggling to do? – rubrics' focus on outcomes confirmed this for us.
• And would the rubrics outputs (tables) be an effective way to communicate our achievements (as well as challenges and deviations from plans) to FSI's Steering Committee? – again we thought yes.
The rough road to a first draft of rubrics

Identifying the evaluation criteria. We began by looking at various guidelines available online on how to develop rubrics. The first thing these told us was that we had to revisit FSI's programme logic, particularly the outcome statements, which are at the heart of rubrics' evaluation process. For each outcome, rubrics requires that you clearly define the key qualities or changes that are deemed critical for your project's progression towards achieving its goals.
FIGURE 1 Brainstorming the potential value of using rubrics is a critical step. For us, this entailed scrawling on a blackboard a rough
illustration of what it would look like and how it would support different groups within FSI’s governance structure and revisiting the
project’s programme logic using coloured cards.
This sounds easy, but it's not. We found pinpointing which specific aspects of outcomes we wanted our evaluation to focus on an incredibly challenging task. It quickly became apparent to us that these would differ depending on the 'eye of the beholder' (i.e. each member of the FSI team and the Steering Committee would most likely define them differently). This is not a new insight – the guidelines on rubrics emphasise the importance of doing this using participatory, consensus-building processes. Our challenge was that we simply did not have the time to do this; we had to pursue the less optimal (but, in projects, typically more common) path of relying on a small set of team members – those responsible for FSI's MEL system – to define them. While far from ideal, we did have more than a year's worth of working closely as a team and liaising frequently with Steering Committee members. We did the best we could to ensure that the reworked outcome statements reflected (to the best of our knowledge) the aggregate perspectives of FSI members and partners. After many, many iterations we came up with a series of carefully worded outcome statements (re-labelled 'implementation and output performance statements' and 'primary outcome'; see Figure 2 below). These formed the basis for the next step – the evaluation criteria used in the rubrics tables.

We then created rubrics tables for each of the outcome statements (an example for 'learning events' is provided in Figure 3 below). We thought the hardest part was behind us. Not so. We found that developing the ratings that would be used to assess the evaluation criteria was equally difficult.
FIGURE 2 The FSI initiative's outcome statements were rewritten in such a way that the specific qualities and changes being targeted become explicit. These subsequently formed the evaluation criteria listed in the rubrics table (see further below).

ACTIVITIES AND OUTPUTS, with their IMPLEMENTATION AND OUTPUT PERFORMANCE STATEMENTS:

Learning events: Relevant, timely, appropriately designed learning events that involve Australian and/or in-country partners and their networks, and are perceived as worthwhile.

Knowledge products: Knowledge products are relevant, practice-based and practice-oriented, and aligned with current and emerging (ag, food and nutrition) international development thinking, practices and needs within Australia and overseas; collaboratively created, reader-friendly and audience-appropriate; and produced and delivered in a timely manner.

Expertise and practice networks: An expanded range of relevant experts and development practitioners who actively contribute to FSI, Australian and in-country partners' and their networks' international development discussions, designs, and practices in ways that are perceived as collaborative, salient, credible and useful.

In-country engagement: FSI creates opportunities for in-country partners to participate in reflection and learning in food systems innovation.

Outreach and external visibility: Activities and a web-based platform that bring together analyses, opinions, lessons, experiences and capacity-building resources derived from FSI and its Australian and regional partners; are easily accessed and easy to navigate; are audience-appropriate and reader-friendly; and are updated on a regular basis and in a timely fashion.

FSI governance: FSI key partners (DFAT, ACIAR, CSIRO) work together collaboratively, respecting the agreed-upon partnership principles, to collectively learn and manage FSI in a responsive and agile manner.

PRIMARY OUTCOME (shared across all activities and outputs): Enhanced knowledge-exchange, learning and networking among FSI partners and other stakeholders in Australia and overseas, thereby strengthening capacity to progress food systems innovation.
Dening the rangs categories. Guidelines on rubrics
provide very clear denions of each of the rang
categories (i.e., ‘poor’, ‘adequate’, ‘good’, ‘excellent’,
‘insucient evidence’). These are purposely generic
which meant that we had to reect on ‘so what do
these really mean for us in the context of FSI?’ We
came up with contextualised descripons of the
categories (see Figure 4 below). These were never
formally included in any of the rubrics reports; rather
they were used informally to help us pin-down what
we thought we needed to do next: provide a descripon
for each of the rang categories, for every evaluaon
criteria, across all outcome statements (an example
of what this meant for us is provided below). This is
where we made a costly mistake.
We spent weeks wring up the descripons for rangs
categories. Soon the rubrics became bigger than Ben Hur,
taking up pages and pages of tables lled with minuscule
wring (an example is provided in Figure 5). For us, the
proverbial ‘straw that broke the camel’s back’ came
with the feedback from team members: no-one could
agree on what constuted a ‘poor’, ‘adequate’, ‘good’,
and ‘excellent’ learning event in terms of its relevance,
for example. This was the case for the majority of the
other evaluaon criteria across dierent outcome
statements. We decided to do something we should
have done at the very start: consult an evaluaon prac-
oner who had extensive experience with rubrics. The
rst thing she told us (contrary to some of the online
guidelines we had consulted!) was that we did not need
to dene the rangs categories. “Let the people who
are aending the learning events (or involved in other
FSI acvies) tell you why they cked ‘poor’, ‘adequate,
‘good’ or ‘excellent’” she said. She added that, like us,
she had learned this lesson the hard way but that
years of trialling dierent approaches to rubrics had
demonstrated that the simpler method worked best.
FIGURE 3 Rubrics for FSI's Learning Events

Implementation and output performance statement: Learning events are relevant, timely, appropriately designed; involve Australian and/or in-country partners and their networks; are perceived as worthwhile; and are effective in progressing FSI's primary outcome (enhanced knowledge-exchange, learning and networking).

The table lists the evaluation criteria as rows (Relevant; Timely; Appropriately designed; Involve Australian and in-country partners and networks; Worthwhile; Enhance knowledge-exchange; Enhance learning; Enhance networking), with columns for the ratings (Poor, Adequate, Good, Excellent, Insufficient evidence) and a final column for the supporting evidence.
FIGURE 4 The ratings categories used in rubrics, generic descriptions, and our FSI-contextualised descriptions

Poor
Generic description: Performance is weak. Does not meet minimum expectations/requirements.
Translation (a.k.a. what this really meant for us within the context of FSI): What we don't want.

Adequate
Generic description: Performance is reasonable. Still major gaps or weaknesses.
Translation: Still not where we want to be; supporting the status quo and unlikely to contribute to new ways of thinking and practice that will improve ODA impacts overseas.

Good
Generic description: Performance is generally strong. Might have a few slight weaknesses, but nothing serious and they are being managed effectively.
Translation: Exposing partners, including in-country partners and their networks, to new ideas and practices which hopefully we see being picked up in their policies, programs and projects.

Excellent
Generic description: Performance is clearly very strong or exemplary. Any gaps or weaknesses are not significant and are managed effectively.
Translation: The ideal; all conditions needed for maximising space for innovation (individual to organisational learning, etc., to systemic changes) are being enhanced.

Insufficient evidence
Generic description: Evidence unavailable or of insufficient quality to determine performance.
Translation: Flags areas we need to be collecting more evidence on and/or simply can't because of time-lags or other factors.
Towards a more manageable and practical rubrics (where we are at now)

With all of the ratings category descriptions deleted, the rubrics tables became so much more user-friendly and useful. We were finally able to use them.

Gathering the evidence. We drew up a rubrics table for each learning event we held, i.e., for each workshop, training event and reflection session (we also produced rubrics tables for the other types of activities and products FSI was engaging in or producing). Rather than show the rubrics tables to participants of the events, we put together a very simple one-page 'survey' that we invited participants to fill out at the end of each event. Other sources of evidence that enabled us to complete the rubrics tables included: reflections from FSI team members who had led or participated in the event; unsolicited e-mails from participants about the event; and follow-on activities and 'wins' that we could trace back to having been triggered or influenced by the event.
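For teams that keep this kind of feedback in a spreadsheet or script rather than on paper, a minimal sketch of how one event's evidence might be recorded is shown below. The field names, ratings and example entries are hypothetical illustrations, not FSI's actual survey instrument.

```python
from dataclasses import dataclass, field

# Ratings used in the rubrics tables (left undefined; participants give them meaning).
RATINGS = ["poor", "adequate", "good", "excellent", "insufficient evidence"]

@dataclass
class SurveyResponse:
    """One participant's answer on the one-page end-of-event survey (hypothetical fields)."""
    criterion: str     # e.g. "relevant", "timely", "worthwhile"
    rating: str        # one of RATINGS
    comment: str = ""  # free-text explanation of why that rating was chosen

@dataclass
class EventEvidence:
    """Evidence bundle backing the rubrics table for a single learning event."""
    event_name: str
    survey_responses: list = field(default_factory=list)     # SurveyResponse objects
    team_reflections: list = field(default_factory=list)     # notes from the FSI team member leading the event
    unsolicited_feedback: list = field(default_factory=list) # e.g. follow-up e-mails from participants
    follow_on_wins: list = field(default_factory=list)       # activities traceable back to the event

# Example usage with made-up data:
workshop = EventEvidence("Workshop on Theories of Change")
workshop.survey_responses.append(
    SurveyResponse("relevant", "good", "Aligned with my current work needs."))
workshop.follow_on_wins.append("Participants X and Y redesigned their program after the event.")
```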
Reporting the rubrics. The report that emerged from the rubrics tables comprised three different layers of information (see Figures 6 and 7 below): (1) a series of individual tables for each activity and output (e.g. each learning event held), (2) a summary table that aggregates the data from the individual tables (e.g. one table that summarises all of the learning events held), and (3) a one-page rubrics table with a supporting narrative that distils the massive amount of information collated in the individual and aggregate tables.
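To make the three layers concrete, here is a minimal sketch (in Python, with fabricated ratings) of how individual event rubrics might be rolled up into the summary grid and a single overall rating. The 'most common rating' rule used here is only one plausible aggregation choice; in FSI the roll-up to the one-page summary was a considered judgement rather than a mechanical calculation.

```python
from collections import Counter

# Fabricated ratings: event -> {criterion: rating}; each dict stands in for one
# individual rubrics table (layer 1).
individual_rubrics = {
    "Workshop 1":       {"relevant": "good",      "timely": "adequate", "worthwhile": "good"},
    "Seminar":          {"relevant": "excellent", "timely": "good",     "worthwhile": "good"},
    "Training event 1": {"relevant": "good",      "timely": "good",     "worthwhile": "excellent"},
}

def summarise(rubrics):
    """Layer 2: collapse the individual tables into one rating per criterion
    (here, simply the most common rating given across events)."""
    criteria = {c for ratings in rubrics.values() for c in ratings}
    summary = {}
    for criterion in sorted(criteria):
        ratings = [r[criterion] for r in rubrics.values() if criterion in r]
        summary[criterion] = Counter(ratings).most_common(1)[0][0]
    return summary

summary_table = summarise(individual_rubrics)

# Layer 3: a single overall rating; the accompanying narrative is written by a person.
overall = Counter(summary_table.values()).most_common(1)[0][0]
print(summary_table)                # {'relevant': 'good', 'timely': 'good', 'worthwhile': 'good'}
print("OVERALL:", overall.upper())  # OVERALL: GOOD
```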
FIGURE 5 An example of our (erroneous) attempt to describe each ratings category for each evaluation criterion

RELEVANT
Poor: The majority of learning events are on topics or issues that are irrelevant or misaligned with key partners' and learning event participants' interests or needs, and with global/regional/country-specific development priorities and strategies/programs/projects.
Adequate: The majority of learning events are relevant for Australian (Canberra-based) partners and not for in-country partners; they meet partners' and participants' mandated or technical needs (e.g. training in a particular M&E system).
Good: The majority of learning events are not focused only on technical issues but also on current or emerging areas of development thinking and practice that are aligned with global/regional/country-specific development priorities and strategies (e.g. Aid4Trade, private-public partnerships).
Excellent: The majority of learning events are on current and emerging areas of development thinking and practice that are aligned with global/regional/country-specific development priorities and strategies, and make an explicit link to the specific development priorities and strategies/programs/projects/practices of participants' organisations and networks.
Insufficient evidence: Not enough evidence available to assess this objective.

TIMELY
Poor: The majority of learning events are out of sync (not timely) with partners' and participants' information and skills needs for enhancing their capacity to design and deliver effective strategies/programs/projects.
Adequate: The majority of learning events are 'hit-and-miss', i.e. some events are offered in a timely manner, whereby partners and participants can make use of the information, whereas others are out of sync.
Good: For the most part, learning events are offered at the right time, i.e. partners/participants are able to incorporate at least some of the information and learnings gained from the event to inform their strategies/programs/projects/practices.
Excellent: Learning events are offered at the right/appropriate time, i.e. most of the information and learnings can be used to inform partners' and participants' strategies/programs/projects/practices; and they include some development thinking/practices that are emerging or cutting-edge (i.e. not yet mainstreamed but likely to be), offering partners and participants a 'head start' or 'competitive edge' in the global development arena.
Insufficient evidence: Not enough evidence available to assess this objective.
We found getting from the summary rubrics tables (#2 in Figure 6) to the 'summary report' (#3) a challenge. The practitioner and online guidelines we consulted recommended that a consultative, participatory process be used (e.g., team members and partners spend a day together looking at all the individual and summary rubrics tables and collectively write up the summary report). Again, for the same reasons previously mentioned (i.e., time constraints, logistical challenges in getting everyone together), we had to come up with a different process. In our case this meant having the project leader look at the rubrics tables across all FSI activities, outputs and outcomes and summarise them into a single overarching table accompanied by a succinct narrative statement; and it had to fit on one page (see Figure 7 below).
FIGURE 6 FSI rubrics approach: an example for Learning Events (fabricated data)

Implementation and output performance statement: Learning events are relevant, timely, appropriately designed; involve Australian and/or in-country partners and their networks; are perceived as worthwhile; and are effective in progressing FSI's primary outcome (enhanced knowledge-exchange, learning and networking).

1. INDIVIDUAL RUBRICS FOR EACH EVENT. A table with the evaluation criteria as rows (Relevant; Timely; Appropriately designed; Involve Australian and in-country partners and networks; Worthwhile; Enhance knowledge-exchange; Enhance learning; Enhance networking), a ratings column and an evidence column. For example, the evidence recorded against 'Relevant' for one seminar included:
Feedback from participants who filled in the post-event evaluation form (N=13): aligned with my current work requirements or needs (85%); knowledge I gained can be used to improve my work (77%); structured in a way that supported my learning style (100%); benefits of attending the seminar outweighed the time away from the office (85%); will share the information I learnt at the seminar with colleagues (77%); I met people who have the potential to be valuable in my work (100%).
Selection of additional comments from participants: "I learn some new things in this presentation. I will use some of the things learned to my work." "The information presented was very useful for me as a practitioner and researcher. The example gave me a clear idea on how important it is to consider these issues."
Other evidence of success: since the training, participants X and Y have re-designed their program to incorporate concepts and practices from the event; participant Z wrote the following unsolicited e-mail: "I learned a lot from the event and was wondering if FSI will be offering a follow-on training course."
Sources of evidence: participants' responses to the evaluation feedback form; reflections from the FSI team member leading the event; feedback from collaborators and the FSI Steering Committee.

2. SUMMARY RUBRICS OF ALL EVENTS. A grid with the learning events held as rows (Workshop 1; Seminar; Lunch seminar; Reflection event; Workshop 2; Training event 1; Training event 2; Presentation; plus an OVERALL row) and the evaluation criteria as columns (Relevant; Timely; Appropriately designed; Involve partners; Worthwhile; Enhance knowledge; Enhance learning; Enhance networking), with each cell colour-coded by rating (Excellent, Good, Adequate, Insufficient information).

3. SUMMARY REPORT. Overall rating: GOOD. "A series of training events on private-public partnerships and Theories of Change have been delivered and well received. There is a growing demand from in-country programs for similar events and three are being planned in the next quarter."
This one-page summary report is what we communicated to our Steering Committee. All of the other, more detailed tables were included but (and this is important – we discuss why in the next section) they were filed away as appendices.
So what do we like about rubrics?
1. ITS MULTIPLE-FUNCTIONALITY
The main purpose of the rubrics is to enable evaluation of progress towards outcomes. It does this really well, but it is also quite effective as a monitoring tool for assessing the delivery of activities and outputs. For example, the summary tables (table #2 in Figure 6) provide an easy way to list all of the activities and outputs that have been achieved. It thus works well for accountability purposes, which is a desired, demanded and important aspect of M&E systems. Furthermore, the process of rating the evaluation criteria essentially captures qualitative information. Supplementing the 'tick-the-box' (or, in our case, 'colour-the-box') ratings categories with written and verbal feedback in the 'evidence' column provides another layer of information: what was it about those particular qualities or changes that was important? And why (or why not)? For us, these complementary narrative-based sources of evidence give the M&E system a performance management capability, which opens up the possibility of exploring the less tangible aspects and dynamics (e.g. knowledge brokering, trust building, co-ownership) that were facilitating (or, conversely, hindering) FSI's progress towards its goals.
2. NOT INDICATORS, BUT NOT TOO FAR OFF
Using indicators for measuring success is standard practice in many organisations and projects. So much so that it is arguably imperative that whatever MEL system you come up with has some capacity to 'hold conversations' with the existing assessment structures formally being used by the organisations or groups you are working with. What we liked about the rubrics was that the evaluation criteria are essentially indicators, but the process of rating them brings in additional information on the 'how', 'why', and 'so what'. Even the most die-hard advocates of quantitative indicators acknowledge the importance of being able to capture this more qualitative, process-level information.
FIGURE 7 Overarching summary report based on rubrics tables (fabricated data)

ACTIVITIES/OUTPUTS – RATING – COMMENTARY, EVIDENCE AND PROSPECTS

Learning events – Good: A series of training events on private-public partnerships and Theories of Change have been delivered and well received. It was a slow start but there is a growing demand from in-country programs for similar events and three are being planned in the next quarter.

Expertise and practice networks – Adequate: External expertise has been used in the areas of M&E, partnership development and impact investing. Engaging more readily accessible regional expertise has been a challenge, but tasks have been defined to strengthen these ties, with an immediate focus on linking to the inclusive agribusiness innovation round table.

In-country engagement – Excellent: The project has been building strong alliances and collaborations with partner projects in East Africa, South Asia, Indonesia and Timor-Leste. This is demonstrated by positive feedback and increasing demand for further activities from in-country partners. Two major events in South Asia are being planned that will further strengthen these relationships and enhance knowledge-exchange, learning and capacity-building.

Outreach and external visibility – Poor: The interactive web-based platform developed for the project is not being used. In the coming quarter, the site will be substantially redesigned to reflect its greater outreach orientation and encourage a wider set of stakeholders to engage with and contribute material.

FSI governance – Good: FSI has been effectively responding to numerous and frequent requests from FSI key partners and their overseas partners to collaborate. Partnership principles, as articulated in the Partnership Agreement, are being used to guide FSI governance processes. There is further work to be done to foster joint learning and ownership by all partners.
3. YOUR PICK OF PROCESSES TO CONSTRUCT RUBRICS

The rubrics approach is not wedded to any single process for developing them. One can choose a bottom-up, participatory route, a more top-down one, or somewhere in the middle. This makes it quite a flexible approach that can be developed in a context-appropriate way that is sensitive to time and funding constraints and other issues (for example, conflicts among project/program participants which may make it undesirable or unrealistic to pursue a participatory approach to building rubrics). In our case, time constraints and urgency dictated a less participatory approach to developing the rubrics.
4. INDUCTIVE, EMERGENT RATINGS PROCESS
The process of defining the outcome statements involves, to varying degrees, bringing together the different perspectives and interests of those involved in a project or initiative. But perhaps one of the greatest (and understated) value-adds of the rubrics is not just that it integrates diverse viewpoints but that it can do so in an inductive, emergent way (i.e. as one gets more and more feedback, one starts to get a better picture of what 'excellent', 'good', and 'poor' mean through the eyes of participants). This is particularly true with regard to the ratings. Rather than needing to define 'poor' upfront (which throws you into the conundrum we faced), it becomes defined over time as one accumulates feedback from a range of people and perspectives (e.g. different team members, external partners, participants in project-sponsored activities, etc.).

This is not insignificant. Let me share an example. Had we used the predefined ratings categories we had first come up with to assess the learning events, a good portion of them would have more or less bombed on the selected evaluation criteria. That is because our (the FSI team's) assumptions about what comprised a 'successful' learning event were heavily informed by the broader thinking underpinning the project: that to achieve innovation in food systems thinking and practices, our learning events had to go beyond the standard, run-of-the-mill offerings, i.e. training courses and seminars (in other words, transfer-of-knowledge events). We have since found, however, that participants who attended conventional learning events tended, overall, to rate them more positively than we would have. More importantly, the written comments that accompanied their ratings reveal a more nuanced, and complex, picture of the role these types of learning events play in contributing towards food systems innovation.

Having said this, it is not always easy to get participants or users to provide comments and thus help give meaning to the evaluation criteria. For example, we have not been able to set up a mechanism for capturing feedback – in a succinct, regular, and non-cumbersome manner – from readers of our reports, Practice Notes, and the dozens of other knowledge products we have produced. While this does not mean that we cannot use rubrics for assessing knowledge products, it does mean that our assessment is limited because it is being done through the eyes of project team members only.
5. SUCCINCT YET RICH: BRINGS SIMPLICITY TO THE COMPLEXITY OF PROJECTS WITHOUT BEING SIMPLISTIC

Finally, we liked the nested tables and the information they provide. They enable moving from the nitty-gritty to broad-level assessments. The one-page summary report that draws upon the detailed tables is perfect for a quick read on short flights between Brisbane and Canberra, or during the commute to work in Jakarta. For time-poor people with multiple competing demands (as most members serving on project management and steering committees are), this is essential. The more detailed tables can be included as appendices, allowing those interested in more detail access to the evidence base.
What have we found challenging?
1. THOSE *%! RATING CATEGORIES
We have already said enough on this issue. We highlight it again as our blunder of developing detailed descriptions for the ratings categories is apparently a very common mistake. And a very time-consuming one.
2. THE WHOLE IS MORE THAN SIMPLY THE SUM OF ITS PARTS

Because rubrics force you to break down the specific attributes and changes you want to assess, they focus your attention on different bits and pieces of a project. Don't get me wrong – this is very valuable (no one can argue that it is not important to know that a workshop you invested lots of money in, and lost considerable sleep over, was great at helping people build networks and enhance knowledge). However, while rubrics are great for assessing individual or particular activities and outputs in isolation, they are less effective at capturing the cumulative outcomes or impacts of a project's multiple dimensions, as they do not capture how these interact and what emerges as a result. Also, the more components you have, the more daunting the task of pulling the data together into tables becomes.
3. IT IS A LONG, TIME-CONSUMING PROCESS
It took us 10 months to get from initially assessing whether the rubrics approach was worth pursuing to developing a report. There are many reasons why it took so long. We were novices in this approach, which meant a steep learning curve and time-consuming mistakes along the way. We also aimed to cover over a year's worth of FSI activities and outputs; it was not a matter of developing a handful of rubrics tables but, rather, several dozen. Moreover, as highlighted in this Practice Note, rubrics is not just a table; it involves a whole series of steps and processes, from refining the outcome statements, to developing different tools, to gathering and collating feedback and other sources of evidence. Each of these bits and pieces that lead to the actual rubrics table takes considerable forethought and time. And finally (and perhaps most significantly), developing rubrics was only one of many, many other project activities that the team had to juggle at the same time. It was thus by necessity a piecemeal process that had to fit and be woven in amongst other project activities, the majority of which were of greater priority. That said, now that we have the basic structure of the rubrics table and report, along with the methods for collating supporting evidence, we anticipate that using the rubrics approach to assess future activities and outputs can be done relatively quickly.
In summary: if we had to do it all over again, what would we have done differently?
Refining the project outcome statements and defining the evaluation criteria
What we did: Two team members worked together to reword FSI's outcome statements as defined in the programme logic, and in the process define the evaluation criteria.
What we would have done differently (in an ideal world): A face-to-face participatory process whereby the whole FSI team, Steering Committee members and other key stakeholders jointly rework the outcome statements and define the evaluation criteria.
Why: Other practitioners' experience using rubrics has found that having everyone jointly articulate and agree upon, in an explicit and transparent manner, the evaluation criteria (i.e. what matters, what is important to focus on) increases interest in and ownership of the rubrics process and, ultimately, the assessments made.

Developing the ratings
What we did: We first defined, for each evaluation criterion, what each of the ratings categories was (i.e. detailed descriptions of what would constitute an 'excellent', 'good', 'adequate', and 'poor' learning event along each of the criteria identified).
What we would have done differently: Leave the ratings undefined (what we ended up doing, once we realised our mistake in attempting to fill them out); allow them to become defined over time as feedback from participants/users accumulates.
Why: Defining the ratings is incredibly time-consuming and produces lengthy, unreadable (and ultimately unusable) rubrics tables. It is impossible to develop a set of ratings descriptions that will be agreed upon by everyone (unless several days are spent, face-to-face, jointly developing them). They are rigid: a set of predetermined ratings descriptions does not allow for changes in how 'excellent', for example, may be perceived and understood by participants/users over the course of a project. Allowing participants/users to assign their own meaning provides a mechanism for different perspectives of what matters and why to emerge (arguably leading to more accurate and valid assessments). Other practitioners' experiences using undefined ratings have shown that this works just as well, if not better.
Gathering evidence
What we did: The evidence for the majority of FSI's activities and products is FSI team members' personal assessments and judgement calls. The exception is the learning events, for which we developed a concise survey to gather feedback from participants.
What we would have done differently: Develop other non-cumbersome, systematic methods for incorporating a broader set of opinions and perspectives for the other FSI activities.
Why: Currently, the FSI assessments provided through the rubrics are very internally focused. Different viewpoints from a broader range of people are required to get more diverse lines of evidence. This is needed for achieving a more transparent, shared, and valid assessment of how well a project is tracking.

Communicating the rubrics
What we did: First drafts of the rubrics-based report included all three levels of rubrics information (individual tables, summary tables, and the overarching summary report) in the main body of the report. We subsequently reduced the report to one page, filing all the detailed tables as appendices.
What we would have done differently: Focus on developing the one-pager and co-develop it with the Steering Committee.
Why: The look and feel of the report depends on your audience. In our case, our main audience has been the Steering Committee. Previous experiences with MEL reporting made it clear to us that whatever we produced had to be 'short and sweet'. The one-pager we produced (which is still in the works) was designed to meet the needs, and fit in with the busy schedules, of the Steering Committee.
So, to rubrics or not to rubrics?
While we are still experimenting with rubrics, we can share some emerging insights and words of wisdom.

1. GIVE YOURSELF PLENTY OF TIME TO PUT TOGETHER THE RUBRICS

As we learned the hard way, putting together the rubrics is neither quick nor easy. It is a multi-step process whose steps and time-frames vary depending on how well-developed the project's programme logic is, the degree of consensus among project members and other key stakeholders about the particularities of the outcomes most critically in need of being tracked, and competing demands from other project activities and priorities.
2. TAKE AN EMERGENT APPROACH TO DEFINING THE RATINGS USED TO ASSESS THE EVALUATION CRITERIA

Let those providing feedback – be it event participants or team members – define what 'poor', 'adequate', 'good', and 'excellent' mean. As a critical mass of assessments comes back, look for patterns or common themes among people's responses.
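One simple way to look for those patterns is to group the free-text comments by the rating that accompanied them, per criterion, and read each group as an emerging definition of what that rating means to participants. The sketch below (Python, with invented feedback records) illustrates the idea; it is an assumption about tooling, not part of the rubrics method itself.

```python
from collections import defaultdict

# Invented feedback records: (criterion, rating, comment) tuples gathered from event surveys.
feedback = [
    ("relevant", "excellent", "Directly linked to my programme's design work."),
    ("relevant", "good",      "Useful, though mostly background for me."),
    ("relevant", "excellent", "Exactly the thinking our country team needs right now."),
    ("timely",   "adequate",  "Came after we had already locked in our plan."),
]

# Group comments by criterion and rating so the meaning of 'excellent', 'good', etc.
# emerges from participants' own words rather than from predefined descriptions.
emergent_meanings = defaultdict(lambda: defaultdict(list))
for criterion, rating, comment in feedback:
    emergent_meanings[criterion][rating].append(comment)

for criterion, ratings in emergent_meanings.items():
    print(criterion)
    for rating, comments in ratings.items():
        print(f"  {rating}: {len(comments)} comment(s)")
        for c in comments:
            print(f"    - {c}")
```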
3. EXPERIMENT WITH IT UNTIL IT WORKS FOR YOU

The generic approach to developing rubrics outlined in most guidelines may or may not work for your project. You need to allow some time to experiment with different versions.
4. DON'T GO AT IT BLINDLY: BUILD ON THE EXPERIENCES OF OTHERS USING RUBRICS

There are some fantastic resources out there, including people with extensive experience applying rubrics in different formats and varying contexts, and modifying them along the way. It is worth investing in connecting with others who have actually applied rubrics and building on their lessons learned.
5. BE CAUTIOUS ABOUT TURNING RUBRICS INTO A JACK-OF-ALL-TRADES MEL TOOL

Rubrics can seem very attractive as THE monitoring and evaluation tool. In all likelihood, it needs to be complemented with other approaches. For example, almost fortuitously, we found that a monthly e-mail listing all of the activities and outputs that had been completed or were underway (with a few bullet-point details) is, by far, a more effective monitoring tool. We have yet to trial the rubrics' value for learning purposes, so the jury is still out on this one. But we think it has potential if it is complemented with other tools that have been developed explicitly for triggering reflection and learning.
Some useful resources
King, J., McKegg, K., Oakden, J. and Wehipeihana, N. (2013) Rubrics: A method for surfacing values and improving the credibility of evaluation. Journal of MultiDisciplinary Evaluation 9(21): 11. http://journals.sfu.ca/jmde/index.php/jmde_1/article/view/374

Oakden, J. and Weenink, M. (2015) What's on the rubric horizon: Taking stock of current practice and thinking about what is next. Presented at the ANZEA Conference, Auckland, 6–8 July 2015. http://pragmatica.nz/wp-content/uploads/pragmatica-nz/sites/326/150718-Oakden-Weenink-ANZEA-vxx.pdf

Oakden, J. (2013a) Evaluative rubrics: Passing fad or an important contribution to evaluation practice? Presented at the ANZEA Conference, Auckland. http://www.anzea.org.nz/wp-content/uploads/2013/10/Judy-Oakden-2013-Evaluative-rubrics-fad-or-here-to-stay.pdf

Oakden, J. (2013b) Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. BetterEvaluation, Melbourne, Victoria. http://betterevaluation.org/sites/default/files/Evaluation%20rubrics.pdf
About the author
Dr Samantha Stone-Jovicich is an anthropologist with the Commonwealth Scientific and Industrial Research Organisation (CSIRO). Her research focuses on understanding why innovative and creative approaches (such as social learning approaches) that have emerged to address global social-environmental challenges are so difficult to implement effectively. Her primary interest is strengthening science's contribution to on-the-ground impacts by rethinking scientists' roles, practices, and the communication of scientific knowledge. Prior to joining CSIRO in 2006, she was an applied researcher working with small-scale farmers and loggers in Brazil and other Latin American countries.
This series of Practice Notes facilitates shared learning and innovation to improve practice amongst research and development practitioners. The series is a deliverable of the Food Systems Innovation (FSI) initiative, a partnership between the Australian Government Department of Foreign Affairs and Trade (DFAT), the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Australian Centre for International Agricultural Research (ACIAR) and the Australian International Food Security Research Centre (AIFSRC).
LIMESTONE AVENUE, CAMPBELL ACT 2601
PO BOX 225, DICKSON ACT 2606 AUSTRALIA
TEL. +61 2 6276 6621 • ABN 41 687 119 230