Article

Rubrics: A Method for Surfacing Values and Improving the Credibility of Evaluation

Authors:
  • Pragmatica - a member of the Kinnect Group

Abstract

Background: The challenges of valuing in evaluation have been the subject of much debate: on what basis do we make judgments about performance, quality, and effectiveness? And according to whom? (Julnes, 2012b). The literature identifies many ways of carrying out assisted valuation (Julnes, 2012c). One way of assisting the valuation process is the use of evaluative rubrics. This practice-based article unpacks the lessons of a group of evaluators who have used evaluative rubrics to grapple with this challenge. Compared to their previous practice, evaluative rubrics have allowed them to surface and deal with values in a more transparent way. In their experience, when evaluators and evaluation stakeholders get clearer about values, evaluative judgments become more credible and warrantable. Purpose: To share practical lessons learned from working with rubrics. The full article is available at https://journals.sfu.ca/jmde/index.php/jmde_1/article/view/374

... Rubric development is a particularly valuable opportunity to surface implicit values and desired outcomes, generating shared understanding about what matters [24,25]. Because a rubric allows different levels of achievement (rather than only a yes or no checkbox), it is a structure for measuring not just the presence of an attribute but the quality of that attribute [26]. ...
... Rubrics can be valuable tools even in complex situations [25][26][27]. Rubrics are commonly used tools in program evaluation and accreditation processes. Requiring a program leader to commit to rating their program at a specific level on each item likely supports critical thinking about the current state of their program over time. ...
... The TEPA had a 4-point rating scale, but this scale was generic (there was no description of what each level of achievement might look like). For the PTEPA rubric, we developed instead an analytic rubric, in which each level of achievement ("scale point") is described specifically [25,26]. This choice was intended to enhance the reliability of ratings and provide more concrete descriptions of achievement in each area on the rubric. ...
Article
Full-text available
Given the insufficient number of well-qualified future physics teachers in the U.S., physics programs often seek guidance for how to address this national need. Measurement tools can provide such guidance, by both defining excellence in physics teacher education (PTE) and providing a means to measure progress towards excellence. This paper describes the development of such a measurement tool—the Physics Teacher Education Program Analysis rubric. The rubric was developed by identifying common features and practices at 8 “thriving” PTE programs, defined as large U.S. programs consistently graduating 5 or more future physics teachers in a year. The rubric consists of 89 items, each with 3 levels of achievement (developing, benchmark, exemplary), plus a not present level, which are organized into 6 standards. The rubric has demonstrated a variety of forms of validity, including a strong theoretical basis, empirical validation through program visits, and expert review. The rubric and its associated supporting materials are intended to help program leaders in using a process of continuous improvement and assessment to strengthen existing PTE programs or to establish new pathways for student licensure. The rubric also provides substantive opportunities for research, through further validation and development of the rubric, and by using rubric results to learn more about effective practices in physics teacher education.
... In parallel to checklists, evaluators have been making the "valu" part of "evaluation" explicit through rubrics for many years (Davidson, 2005). In fact, in some countries, the stakeholder-centered "rubric revolution" has led to a fundamental national change in how evaluators work with clients (Davidson, Wehipeihana, & McKegg, 2011; King et al., 2013). That change has been underway in the United States, too, though documentation in professional outlets for the U.S. evaluation community is sparse (see, e.g., Martens's (2018) review of the literature on rubric use in program evaluation, which found the largest number in American Journal of Evaluation: 10 articles in the last 25 years). ...
... The main advantage of using a rubric is that key evaluative criteria are defined as concretely as possible. Developing those descriptions is an opportunity to bring shared meaning to communicating about central ideas (King et al., 2013). Table 1 presents, in rubric form, the hypothetical item on stakeholder involvement. ...
... The interested reader is encouraged to peruse the referenced articles; those with a leading asterisk (*) are reports that describe design or use of a rubric. These range from overviews and syntheses (e.g., Dickinson & Adams, 2017; Martens, 2018) to details about richly complex rubrics (e.g., Clinton, 2014; Gajda, 2004; King et al., 2013). ...
Article
Full-text available
This brief report describes the conception, development, and use of a rubric in evaluating the feasibility of a new program. The evaluators searched for a meta-analytic tool to help organize ideas about what data to collect, and why, in order to create a detailed story of feasibility of implementation for the client. The main advantage of using the rubric-based tool is that it lays out key evaluative criteria that are defined as concretely as possible. The article gives a brief overview of the literature on the use of rubrics in evaluation, illustrates the use of a feasibility of implementation rubric as a tool for development, analysis, and reporting, and concludes with recommendations emergent from the use of the rubric.
... Although there are multiple ways to approach evaluative reasoning (e.g., see Schwandt, 2015), a common model comprises four steps: i) establishing criteria of merit, worth, or significance—dimensions of performance that are relevant and important to an evaluative judgement; ii) defining standards for each criterion, specifying "what the evidence would look like at different levels of performance" (Davidson, 2014); iii) gathering and analysing evidence of performance against the standards; and iv) synthesising the results into an overall judgement (Fournier, 1995). Evaluation rubrics (Davidson, 2005) offer a practical approach to support this process (King, McKegg, Oakden, & Wehipeihana, 2013). King and OPM (2018) described an approach to operationalising King's (2017) theoretical model. ...
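Read as a procedure, these four steps can be sketched in code. The following is only an illustrative sketch, not drawn from King et al. (2013) or the VfM framework: the criteria, the level labels, and the weakest-link synthesis rule are hypothetical placeholders for whatever a real evaluation would negotiate with its stakeholders.

```python
# Minimal sketch of the four-step evaluative reasoning model quoted above.
# All criteria, levels, ratings, and the synthesis rule are illustrative only.

# Step i: criteria of merit that matter to stakeholders (hypothetical examples)
CRITERIA = ["reach", "quality of delivery", "outcomes for participants"]

# Step ii: standards -- an ordered set of performance levels for each criterion
LEVELS = ["poor", "adequate", "good", "excellent"]

def synthesise(ratings: dict) -> str:
    """Step iv: combine per-criterion judgements into one overall judgement.

    This example uses a 'weakest link' rule (overall = lowest-rated criterion);
    a real rubric might weight criteria or rely on structured deliberation.
    """
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"No rating supplied for: {missing}")
    ranks = [LEVELS.index(ratings[c]) for c in CRITERIA]
    return LEVELS[min(ranks)]

# Step iii: evidence is gathered and each criterion is rated against the standards
example_ratings = {
    "reach": "good",
    "quality of delivery": "excellent",
    "outcomes for participants": "adequate",
}

print(synthesise(example_ratings))  # -> "adequate"
```

The point of the sketch is simply that steps i and ii fix the value structure before any evidence is examined in step iii, which is what makes the step iv judgement transparent and contestable.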
... This collaborative process was important because it facilitated a nuanced understanding of context, political economy, adaptations, and real-world functioning. The process of co-developing criteria and standards surfaced a range of values and perspectives among the different actors and provided a forum to incorporate and reconcile these within the more formally documented expectations (King et al., 2013). As this VfM evaluation was being undertaken for a DFID programme, the criteria and standards were subsequently reviewed and signed off by DFID, providing assurance of their face validity for the core purpose of the VfM assessment. ...
... After clarifying the criteria and standards, evidence that is relevant to those criteria and standards can be identified. This sequence of evaluation design helps ensure the validity of indicators and that the choice of evaluation methods is aligned with the context and values embedded in the programme (King et al., 2013). ...
... Although application of the VfM framework is limited, existing case studies report including monitoring and evaluation advisors, technical advisors, program management, foundation staff, and trustees (King & Allan, 2018; Kinnect Group & Foundation North, 2016). This suggests that in practice the VfM framework has been oriented toward primary user inclusion rather than a more broadly defined stakeholder group of "people who matter" (King et al., 2013) that includes those who are impacted upon by the intervention. ...
... Because it requires thinking through what each performance level means prior to data collection, participation in rubric design may also stimulate conversations about how key stakeholders would use findings under various result scenarios (Davidson, 2005). King et al. (2013) argue that rubrics also make evaluative judgements more transparent, thereby increasing the likelihood that findings will be used. ...
... Proponents of the VfM framework, and rubrics more generally, claim it is effective at strengthening stakeholder engagement in evaluation processes, enhancing transparency of judgements, building trust among stakeholders, and improving use of evaluation findings (King & Allan, 2018; King & Guimaraes, 2016; King et al., 2013; Oakden, 2013). King et al. (2013) reflect that compared to their previous approaches, rubrics enabled them to identify and deal with values more transparently. ...
Article
Value for Money (VfM) is an evaluative question about the merit, worth, and significance of resource use in social programs. Although VfM is a critical component of evidence-based programming, it is often overlooked or avoided by evaluators and decision-makers. A framework for evaluating VfM across the dimensions of economy, effectiveness, efficiency, and equity has emerged in response to limitations of traditional economic evaluation. This framework for assessing VfM integrates methods for engaging stakeholders in evaluative thinking to increase acceptance and utilization of evaluations that address questions of resource use. In this review, we synthesize literature on the VfM framework and position it within a broader theory of Utilization-Focused Evaluation (UFE). We then examine mechanisms through which the VfM framework may contribute to increased evaluation use. Finally, we outline avenues for future research on VfM evaluation.
... A body of practice-informed literature about the use of rubrics is emerging. This discusses what value there is in involving stakeholders in all stages of evaluative reasoning, including making evaluative judgements against individual criteria and synthesising to reach an overall judgement (Davidson, 2014a; King, McKegg, Oakden, & Wehipeihana, 2013; Oakden, 2013). Using rubrics with clients and stakeholders increases the transparency of how such judgements have been reached (King et al., 2013). ...
... Alkin et al. (2012) conclude that literature that operationalises ways to reach these judgements is sparse, although Davidson (2005) suggests there has been good progress made in evaluation-specific methodologies for synthesising. Davidson herself has been influential in building evaluation capability in these methodologies in Aotearoa New Zealand and in North America (King et al., 2013). Julnes (2012b) gives an overview of four different methods for synthesis. ...
Article
The questions of who values, with whom, in what ways, and under what conditions concern all evaluators but are explicitly considered by some theorists more than others. Theorists placed on the valuing branch of Christie and Alkin’s (2013) evaluation theory tree emphasise valuing in their conceptualisation of evaluation, but even among these theorists there is diversity in the ways in which valuing is considered and realised in evaluation practice. This article explores this diversity within one aspect of valuing—the valuing involved in reaching a warranted conclusion about the overall merit, worth, or significance of an evaluand. It considers the extent to which the literature discusses overall evaluative conclusions as an element of evaluation practice; the extent to which drawing such conclusions is seen as the responsibility of the evaluator or stakeholders; and the methods that may be used to reach a warranted evaluative conclusion. The author concludes that there has been little empirical research undertaken on the valuing involved in reaching a warranted conclusion about the overall merit, worth, or significance of an evaluand. Much of the literature is evaluators theorising from different epistemological positions. Thus, while the literature does not definitively inform evaluators of whether they should always reach an overall evaluative conclusion, who they should involve, and what methods they should use, this review does support evaluators to reflect on these issues in their practice, and to make deliberate, informed decisions about the making—or not—of overall conclusions or judgements in future evaluations.
... These definitions provide a framework to ensure the evaluation (a) is aligned with the programme design, (b) collects and analyses needed evidence using appropriate methods, (c) draws sound conclusions and (d) tells a clear performance story that answers the VFM question. These definitions can be set out in rubrics (Davidson, 2005; King et al., 2013), though this is not a requirement. What is mandatory is explicit evaluative reasoning, using values (i.e., what matters to people) as the basis of criteria to make evaluative judgements from evidence. ...
... Evaluative reasoning is the means by which criteria and metrics from economic evaluation can be combined with wider values and synthesised to reach an evaluative judgement (King, 2017). The process of developing VFM criteria and standards can also be intentionally used to foster stakeholder engagement and participation in evaluation, facilitating situational responsiveness, validity, and evaluation use (Davidson, 2005; Dickinson & Adams, 2017; King et al., 2013; Martens, 2018). Criteria and standards provide a focal point for engaging with evaluation users and stakeholders, facilitating negotiation and explicit agreement about the basis upon which judgements are made and the types of evidence that are needed and valued. ...
... While rubrics reduce personal subjectivity, they can still be affected by shared bias (Scriven, 1991). It is therefore important to guard against cultural biases and groupthink by involving an appropriate mix of stakeholders and perspectives (King et al., 2013) as well as relevant evidence and benchmarks where available (King & OPM, 2018). ...
Article
Evaluation and economics each have distinct approaches to valuing. These approaches are traditionally separated by disciplinary boundaries. However, they can and should be combined. Value for money (VFM), in particular, is a shared domain of the two disciplines, because it is an evaluative question about the economic problem of resource use. A theoretical and practical model for combining valuing approaches has been developed through doctoral research. This article presents and reflects on an example – an international development programme where VFM has been assessed using mixed methods (qualitative, quantitative and economic). Under this approach, evaluative reasoning provides the means for integrating economic values with other criteria and evidence. Deliberation with stakeholders strengthens the valuing process, enhancing validity, credibility and use.
... This served to reconnect and reenergise the team, so when planning the fourth grant application we focused first on tightening up the methodology section. One of the service user researchers shared his experiences of working with rubrics, a method of qualitatively evaluating services that could encompass multiple perspectives and outcomes (King et al., 2013). He knew of a local expert willing to support the project by developing our own rubric, so we added a methodology consultant to the research team and committed to an important change in the shape of the project. ...
... When things went awry: the project, part two. During the early stages of the project, we believed that the team had selected a research approach called a rubric, a subjective scoring tool that sets up different grades associated with different aspects of service delivery (King et al., 2013). We started with the development of an over-arching position that "supported housing was about the right services, delivered well, supporting recovery". ...
Article
Purpose – Co-production in the context of mental health research has become something of a buzzword to indicate a project where mental health service users and academics are in a research partnership. The notion of partnership where one party has the weight of academic tradition on its side is a contestable one, so in this paper the authors "write to understand" (Richardson and St Pierre, 2005); their purpose is to examine the experiences of working in a co-produced research project that investigated supported housing services for people with serious mental health problems. Design/methodology/approach – The authors set out to trouble the notion of co-produced research through a painfully honest account of the project, while at the same time recognising it as an idea whose time has come and suggesting a framework to support its implementation. Findings – Co-production is a useful, albeit challenging, approach to research. Originality/value – This paper is particularly relevant to researchers who are endeavouring to produce work that challenges the status quo through giving voice to people who are frequently silenced by the research process.
... A rough indicator of how important rubrics were to each publication is the number of times the word rubric appeared in the article. The frequency of use ranged from 1 time in a single article (Brandon, Smith, Ofir, & Noordeloos, 2014; Braverman, 2013; Petersen, 2002; Roberts-Gray, Gingiss, & Boerm, 2007) to 133 times in a single article (King, McKegg, Oakden, & Wehipeihana, 2013). Figure 8 shows a substantial increase since 2004 in both the number of times the term rubric was used in individual articles (blue line) and the dispersion of the 20 articles contained in this study (red line). ...
... Figure 11. Example of holistic generic rubric from "Evaluative Rubrics: A method for surfacing values and improving the credibility of evaluation," by J. King et al. (2013). Reprinted with permission. ...
Article
Rubrics are well-established tools used in a variety of educational settings, such as student assessment, teacher performance, and curriculum review. This study investigates the extent to which and how rubrics are being used in program evaluation. After exploring the background, or etymology, of the word rubric, a review of literature is conducted. Results reveal that rubric use in program evaluation is relatively rare, although increasing. Rubrics are predominantly used in education and health program evaluation to transform data from one form to another, to characterize organizational functioning, and to derive explicitly evaluative conclusions. Program evaluators use rubrics during the data collection and data analysis phases of a study, and to synthesize findings into conclusions. This paper is the first systematic study of the use of rubrics in program evaluation. It presents a picture of how program evaluation practitioners and scholars are using or discussing rubrics.
... As advocated by Davidson and Chianca (2016), for an evaluation to truly be evaluative, conclusions need to move beyond the "what" to the "so what" of evaluative judgments. As a useful next step for MATES Junior, we suggested developing evaluation rubrics (King et al., 2013) based on the theory of change and the priorities identified in Stage 4. We also used the discussion to raise concerns about particular design limitations (e.g. resources and capacity required to implement a robust experimental or quasi-experimental outcome evaluation). ...
Article
The evaluation models described in the literature may be interpreted as prescriptive and uniform approaches to practice but, in the real world, practitioners are likely to blend aspects of different models to achieve multiple goals. Despite the commonality of pluralistic approaches in evaluation practice, literature on theoretical integration is sparse. This article guides readers through a theoretically integrative evaluation design process and explicates how different theories informed design decisions. The process integrates program theory–driven and utilization-focused evaluation with evaluability assessment and eclectically draws on principles, methods, and tools from other models. This integrative approach to evaluation aims to increase process use for intended users through shared decision-making, organizational learning, and capacity building while simultaneously producing a robust and relevant evaluation design suited to stakeholder needs and the evaluation context. The authors describe the process utilizing a case example to contribute to the literature on theoretically integrative evaluation practice.
... The discipline of evaluation is underpinned by a logic of evaluative reasoning that enables judgments to be made from empirical evidence (Davidson, 2005; Scriven, 2012). Explicit evaluative reasoning (Schwandt et al., 2016, p. 1) enhances the credibility and use of evaluation by providing a transparent and agreed basis for making judgments (King et al., 2013). The key steps involved in explicit evaluative reasoning can be summarised as follows: ...
... • Criteria (in our case, element)
• Levels of how a specific criterion (element) is satisfied (e.g., poor, adequate, good, and excellent [36]).
It is possible that all criteria are defined at the same level, but they can also differ within one area. ...
Conference Paper
Full-text available
The primary focus of this paper is to propose a methodology for prioritizing the elements in the Digital Maturity Framework for Higher Education Institutions (DMFHEI) and assessing the digital maturity level (ML) of HEIs in Croatia. Developing the DMFHEI requires the application of a sophisticated methodology, which includes a set of methods, techniques, and instruments. Some of the analyses performed are qualitative, such as the comparison of similar frameworks and strategic documents, while others are quantitative, such as the Q-sorting method, focus groups, and multi-criteria decision-making methods. In the framework development phase, the well-known multi-criteria decision-making method, the analytic hierarchy process/analytic network process (AHP/ANP), was implemented to prioritize the main areas and elements identified in the framework. The results of prioritization are shown in this paper, as well as the influence of the area and element priorities on the general digital ML of the institution.
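The abstract does not reproduce the AHP calculation itself. As a rough, self-contained illustration of how AHP turns pairwise importance judgements into priority weights, here is a sketch using the common geometric-mean approximation of the principal eigenvector; the three area names and all comparison values are invented for the example and are not taken from the DMFHEI study.

```python
import numpy as np

# Illustrative AHP step: derive priority weights for three hypothetical framework
# areas from a reciprocal pairwise-comparison matrix (Saaty's 1-9 scale).
# A[i, j] > 1 means area i is judged more important than area j.
areas = ["ICT infrastructure", "Teaching and learning", "Leadership and planning"]
A = np.array([
    [1.0, 1 / 3, 1 / 2],
    [3.0, 1.0,   2.0],
    [2.0, 1 / 2, 1.0],
])

# Geometric-mean approximation of the principal eigenvector, normalised to sum to 1.
geo_means = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = geo_means / geo_means.sum()

# Basic consistency check: lambda_max and Saaty's consistency index (CI).
n = A.shape[0]
lambda_max = float((A @ weights / weights).mean())
consistency_index = (lambda_max - n) / (n - 1)

for area, w in zip(areas, weights):
    print(f"{area}: {w:.3f}")
print(f"CI = {consistency_index:.3f}")  # values near 0 indicate consistent judgements
```

In a full AHP/ANP application the same procedure would be repeated for the elements within each area, and the element weights combined with the area weights to obtain global priorities.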
... not just "more / less valuable", but, for example, "good enough / not good enough" and more specifically to address questions like "Was the change made quickly enough?". The use of evaluative rubrics (King, McKegg, Oakden, and Wehipeihana, 2013) is a very promising way to facilitate this process and provide the necessary interpretations of the levels of the variables in question. ...
Preprint
Full-text available
Accepted at JMDE. This version is pre copy-editing. One advantage of traditional logic models, in which the variables are ordered into a neat system of layers ("inputs", "outputs", etc.) in a strict hierarchical format, is that it is easy to see which variables are under "our" control (namely, all those with no "parents") and which variables are valued (namely, all those without "children"). This kind of format is too strict to be useful for accurately modelling a wide variety of project theories, so the problem arises: if we are to use a more flexible format, how can we show which variables we value, and which we control? This article introduces two symbols which are added to a project theory to mark variables we value ("♥") and variables we control ("▶"). We can call the resulting model a "Theory of Change".
... The use of evaluative rubrics (King, McKegg, Oakden, & Wehipeihana, 2013) is a very promising way to facilitate this process and provide the necessary interpretations of the levels of the variables in question. ...
Article
Full-text available
Background: This article addresses two problems. The first is the Flexibility Problem: If we are to use a more flexible format for theories of change than for traditional logic models, one in which we can no longer assume that we only value things which are at the end of causal chains, nor that we intervene on all the things at the beginning of causal chains, how then can we show which things we value, and which things we intervene on? The second is the Definition Problem: What is the difference between a theory showing the causal influences within and around a project and, more specifically, a theory of change for the project?
... This makes it possible to determine which of several ranked categories the object of assessment belongs to (Davidson, 2005). Evaluation rubrics are known especially from the education systems of the English-speaking world, where they are used for grading; but they are increasingly also being used in other parts of the evaluation field (King et al., 2013). ...
Article
Full-text available
A growing focus on evidence and documentation in the welfare professions has also reached statutory casework with children and young people in vulnerable positions. There is increasing attention to measurement tools that can support caseworkers' follow-up on a child's progression. The article presents a model of the elements, knowledge sources, and tools that can form part of a caseworker's follow-up. Using the Trivselslinealen (well-being ruler) as a case, it shows how different understandings can lead to several valid forms of use. It is argued that it is important to clarify how the concepts of progression and progression measurement are understood, so that the tools can be used appropriately and validly in a caseworker's follow-up and in local knowledge development.
... For example, there is a general and working logic of evaluation (Fournier, 1995, 2005; Scriven, 1991). Evaluation-specific methods, such as synthesis methods and criteria development methods, have been developed (Davidson, 2015; King, McKegg, Oakden, & Wehipeihana, 2013; Nunns & Roorda, 2010). A substantial knowledge base on different evaluation approaches exists (Alkin, 2013; Shadish, 1999; Stufflebeam & Coryn, 2014). ...
Article
Full-text available
Background: Despite consensus within the evaluation community about what is distinctive about evaluation, confusion among stakeholders and other professions abounds. The evaluation literature describes how those in the social sciences continue to view evaluation as applied social science and part of what they already know how to do, with the implication that no additional training beyond the traditional social sciences is needed. Given the lack of broader understanding of the specialized role of evaluation, the field struggles with how best to communicate about evaluation to stakeholders and other professions. Purpose: This paper addresses the need to clearly communicate what is distinctive about evaluation to stakeholders and other professions by offering a conceptual tool that can be used in dialogue with others. Specifically, we adapt a personnel evaluation framework to map out what is distinctive about what evaluators know and can do. We then compare this map with the knowledge and skill needed in a related profession (i.e., assessment) in order to reveal how the professions differ. Findings: We argue that using a conceptual tool such as the one presented in this paper with comparative case examples would clarify for outsiders the distinct work of evaluators. Additionally, we explain how this conceptual tool is flexible and could be extended by evaluation practitioners in a myriad of ways.
... A rubric is a tool used in educational and developmental contexts for defining and assessing what "good" and "effective" mean at different levels of performance in a complex domain with hard-to-measure constructs (King, McKegg, Oakden, & Wehipeihana, 2013; Oakden, 2013). They are also used for evaluating the effectiveness of particular interventions, with multiple levels of progress toward the end goals (Davidson, Wehipeihana & McKegg, 2011). ...
Chapter
Full-text available
The transformation of individuals and organizations is increasingly expressed as a strategic reality and intent by users of leadership development services (Harvard Business Publishing, 2018). The field of vertical leadership development (VLD) focuses on the semipredictable patterns of transformations in the ways people think and act in increasingly more complex and integrated ways (action logics) and is well-suited to interpreting, encouraging and measuring this new reality of strategic transformation. The field of VLD has enjoyed recent success and is gaining momentum around the globe in helping people address complex challenges. However, the growth of the field of VLD is potentially limited by biases in how the work is theorized and practiced, as well as how it is perceived and engaged by practitioners, clients, coaches, students, teachers and other end-users across the vast array of human contexts and cultures. In particular, we observe that both practitioners and clients, as well as the embedding contexts, are often based in conventional action logics. The result can be a lot of transformation talk but little transformation walk. Intentional, sustained organizational transformation “walk” requires a footing in post-conventional logics. In this chapter, we analyze these limitations and propose solutions tested in our research and practice. Our aim is increased inclusion, engagement and utility for vertical theory and practice, in support of the positive development of people and societies worldwide.
... Rubrics can be considered a rating table or matrix that provides scaled levels of achievement. They set out an agreed understanding and provide a transparent basis for making evaluative judgements about aspects of a program (King et al., 2013). Through the process of developing rubrics, stakeholders make it clear what is valued about a program. ...
Article
Full-text available
Health professionals deliver a range of health services to individuals and communities. The evaluation of these services is an important component of these programs and health professionals should have the requisite knowledge, attributes, and skills to evaluate the impact of the services they provide. However, health professionals are seldom adequately prepared by their training or work experience to do this well. In this article we provide a suitable framework and guidance to enable health professionals to appropriately undertake useful program evaluation. We introduce and discuss “Easy Evaluation” and provide guidelines for its implementation. The framework presented distinguishes program evaluation from research and encourages health professionals to apply an evaluative lens in order that value judgements about the merit, worth, and significance of programs can be made. Examples from our evaluation practice are drawn on to illustrate how program evaluation can be used across the health care spectrum.
... A rubric, which can be constructed in different formats (KING et al., 2013), is developed on the basis of criteria or, as we prefer to call them, dimensions of student performance defined for the effective completion of a task or set of stipulated tasks. The number of dimensions varies with the complexity of the task or with how much detail the teacher (alone or together with the students) wishes to propose in order to break down the quality of its execution. ...
Article
Full-text available
This article describes the experience of a workshop aimed at improving the assessment of academic performance through the use of rubrics (BROOKHART, 2013; FRANCIS, 2018; HOWELL, 2014), held with students of the Mathematics teaching degree (Licenciatura em Matemática) at a public university in the interior of Rio Grande do Sul. The topic is relevant because initial teacher education generally offers pre-service teachers few opportunities, over the course of their undergraduate studies, to reflect on and appropriate the meanings produced by assessment in teaching contexts. The creation of alternative spaces complementary to the regular classroom, such as those offered through workshops, can therefore help broaden undergraduates' experience and their capacity to become better-qualified assessors in their immediate roles (as students, through self-assessment or peer co-assessment) or future roles (as teachers). The results of the pilot workshop point to the need to expand collective opportunities for discussing assessment practices, and to equip pre-service teachers with theoretically grounded tools, with a view to improving assessment-related procedures, as exemplified by assessment through the use of rubrics.
... This hyper-focus on rubrics, numbers, and purported grades contrasted with the Grade 1-2 teachers who recognized that the use of rubrics and writing was not a good fit for their students (Chapman and Inman 2009; King et al. 2013). In using rubrics, they could not get a sense of student understanding. ...
Article
A makerspace is a place where people create artifacts while sharing ideas, equipment, and knowledge. In so doing, makers develop a range of knowledge and skills, such as creativity, problem-solving, collaboration, and self-regulation, to help them achieve their goals. These skills are broadly touted as key for learning and transferable across disciplines and making contexts. This article first reviews the state of play in the literature on assessing skill development. Secondly, it reports on the trial of an assessment framework developed through a literature review and implemented in a maker learning environment within an elementary school context. Finally, the article concludes with implications for practice.
Article
Surveys of two independent random samples of American Evaluation Association (AEA) members were conducted to investigate application of the logic of evaluation in their evaluation practice. This logic consists of four parts: (1) establish criteria, (2) set standards, (3) measure performance on criteria and compare to standards, and (4) synthesize into a value judgment. Nearly three-fourths (71.84% ± 5.98%) of AEA members are unfamiliar with this logic, yet a majority also indicate its importance and utility for evaluation practice. Moreover, and despite unfamiliarity with the four steps of the logic of evaluation, many AEA members identify evaluative criteria (82.41% ± 3.34%), set performance standards (60.55% ± 7.39%), compare performance to standards (62.14% ± 5.98%), and synthesize into an evaluative conclusion (75.00% ± 5.80%) in their evaluation practice. Much like the working logic of evaluation, however, application of the general logic varies widely.
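The abstract above does not say how the ± margins on these percentages were calculated. Purely for orientation, the sketch below shows the conventional 95% (Wald) margin of error for a sample proportion; the sample size used is a made-up placeholder, not the actual number of AEA respondents.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% (Wald) confidence interval for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical: a proportion of 0.7184 observed among 220 respondents
# (the survey's real sample size is not given in the abstract above).
p_hat, n = 0.7184, 220
print(f"{p_hat:.2%} ± {margin_of_error(p_hat, n):.2%}")  # -> 71.84% ± 5.94%
```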
Article
Public libraries are increasingly investing in makerspaces and seeking to evaluate their offerings. Defining success is the first step of the evaluation process yet proves difficult due to the heterogeneity of making participants and purposes and the individualized, self-directed nature of makerspace participation. This study identified and compared seven evaluative criteria that represented participants' and library definitions of success for one public library makerspace. Findings revealed that at least one criterion drawn from library objectives was relevant for each participant in the sample, yet none of the seven criteria was relevant for every participant. In an evaluation, drawing criteria exclusively from library objectives and/or applying criteria uniformly could underestimate the benefits of the makerspace. Drawing criteria from both library and participant perspectives and using individualized criteria that vary across the population could yield an assessment that reflects the breadth of purposes and benefits associated with the makerspace.
Article
Background: This article outlines the methods being used to evaluate a community-based public health intervention. This evaluation approach recognizes that not only is the intervention, Healthy Families NZ, complex, but the social systems within which it is being implemented are complex. Methods: To address challenges related to complexity, we discuss three developing areas within evaluation theory and apply them to an evaluation case example. The example, Healthy Families NZ, aims to strengthen the prevention system in Aotearoa/New Zealand to prevent chronic disease in 10 different geographic areas. Central to the evaluation design is the comparative case method, which recognizes that emergent outcomes are the result of 'configurations of causes'. 'Thick', mixed-data case studies are developed, with each case considered a view of a complex system. Qualitative Comparative Analysis is the analytical approach used to systematically compare the cases over time. Conclusions: This article describes an approach to evaluating a community-based public health intervention that considers the social systems in which the initiative is being implemented to be complex. The evaluation case example provides a unique opportunity to operationalize and test these methods, while extending their more frequent use within other fields to the field of public health.
Article
Rubrics are used by evaluators who seek to move evaluations from being mere descriptions of an evaluand (i.e., the programme, project or policy to be evaluated) to determining the quality and success of the evaluand. However, a problem for evaluators interested in using rubrics is that the literature relating to rubric development is scattered and mostly located in the education field, with a particular focus on teaching and learning. In this short article we review and synthesise key points from the literature about rubrics to identify best practice. In addition, we draw on our rubric teaching experience and our work with a range of stakeholders on a range of evaluation projects to develop evaluation criteria and rubrics. Our intention is to make this information readily available and to provide guidance to evaluators who wish to use rubrics to make value judgements as to the quality and success of evaluands.
Article
Evaluations of policies and programs often use a theory of change to articulate how the intervention is intended to function and the mechanisms by which it is supposed to generate outcomes. When an evaluation includes cost and efficiency considerations, economic and other concepts can be added to a theory of change to articulate a theory of value creation, describing the mechanisms by which the intervention should use resources efficiently and effectively and create sufficient value to justify the resource use. This paper introduces some theories of value creation that are often implicit in program designs. Making these theories explicit can support clearer evaluative thinking about value for money, including specification of criteria and standards that are aligned with the theory, methods of inquiry that test the theory, and well-reasoned judgements that answer evaluative questions about value for money. Implications for evaluation practice are discussed.
Article
Full-text available
This is a case report of an intervention using the Problem-based Learning (PBL) methodology in the post-holing format within a regular course of an undergraduate physiotherapy program. Given the pandemic context, the research was carried out remotely and sought, through a qualitative and descriptive analysis, to describe the effects of applying the methodology on the dimensions of student engagement and on teachers' perceptions. Although adapted for emergency remote teaching, care was taken to follow the constructs of the methodology faithfully, and the methodology was found to deliver what it proposes, fostering critical thinking, a research mindset, discussion, and the construction of collaborative work.
Research
Full-text available
In this Practice Note I share our experience using an evaluation and monitoring approach called 'rubrics' to assess a complex and dynamic project's progress towards achieving its objectives. Rubrics are a method for aggregating qualitative performance data for reporting and learning purposes. In M&E toolkits and reports, the rubrics method looks very appealing. It appears capable of meeting accountability needs (i.e. collating evidence that agreed-upon activities, milestones, and outcomes have been achieved) whilst also contributing to enhanced understanding of what worked, what was less successful, and why. Rubrics also seem able to communicate all of this in the form of comprehensive yet succinct tables. Our experience using the rubrics method, however, showed that it is far more difficult to apply in practice. Nonetheless, its value-add for supporting challenging projects, where goal-posts often shift and unforeseen opportunities and challenges continuously emerge, is also understated. In this Practice Note, I share the process of shaping this method into something that seems to be the right fit for the project (at the time of producing this note the project is ongoing and insights are still emerging).