The Validity and Reliability of Adaptive Comparative
Judgements in the Assessment of Graphical Capability
Dr Niall Seery, Jeffrey Buckley, Andrew Doyle and Dr Donal Canty
Department of Design and Manufacturing Technology
University of Limerick
Abstract
The valid and reliable assessment of capability is of paramount importance in education.
Operationalizing assessment practices of divergent problems can be particularly challenging due
to the variety of potential responses. This paper investigates the use of Adaptive Comparative
Judgements (ACJ) in the assessment of graphical capability. A cohort of undergraduate Initial
Technology Teacher Education (ITTE) students (N=128) participated in this study, which involved
completing a design task and subsequently assessing the work of their peers through ACJ and
criterion referenced assessment. The performance data from both methods were analysed,
identifying a high level of reliability for ACJ. Correlations between the criteria scores and ACJ
parameter values suggest its validity as an assessment mechanism; however, they also indicate the
potential for additional variables to be influencing holistic judgements.
Introduction
The ultimate aim of graphical education is the espousal of graphical capability. In an
educational context, however, this capacity is often externalized through the medium of design,
where the fluid nature of the design process often makes it difficult to explicitly identify criteria. It
is therefore important that the operationalization of assessment practices considers the overarching
principles of graphical capability. Delahunty, Seery and Lynch (2012), through a review of the
pertinent literature, offer a variety of aptitudes associated with graphical education which include
cognitive capacities such as spatial cognition and deductive reasoning, communication skills such
as modelling and graphicacy, designerly proficiencies such as ideation and problem solving, and
suggest consideration for the pertinent knowledge base. While these skills are not mutually
exclusive (for example, modelling could also be conceived as a designerly act depending on the
intent of the model), the broad categories form a conceptual model which assists in framing the
principles of graphical capability. These core principles appear to be graphicacy (both
communication and interpretation), design (having an understanding of the stages and functions of
design, being innovative, and being able to externalise ideas), and the pertinent knowledge base
(having a conceptual understanding of graphical principles), which are all underpinned by an
architecture of cognitive abilities such as fluid reasoning and spatial ability.
The assessment of graphical capability under these core principles requires a mechanism
which can appropriately reward capacity despite the inherent difficulty in the explicit observation
of criteria. Sadler (2009) highlights two critical problems with the use of criterion referenced
assessment in this situation: the sum of the criteria scores may not always reflect the intuitive or
holistic mark of the assessor, and important criteria may be missing from an assessment rubric,
including criteria that might set a particular piece of work apart as exemplary. Additionally,
making a judgment about a piece of work based on abstract or generic criteria can be quite
difficult.
The use of Adaptive Comparative Judgements (ACJ) (Pollitt, 2012) however affords a
mechanism which has previously been identified as a reliable approach for the assessment of
graphically orientated conceptual design tasks (Seery, Lane, & Canty, 2011). Based on
Thurstone's (1927) law of comparative judgement, ACJ can alleviate the issues with criterion
based assessment identified by Sadler (2009) as it is operationalized by judges making binary
judgments between two pieces of evidence. Multiple judgements on pairs of work ultimately result
in the generation of a rank order of the work. The issues identified with individual judgment are
avoided by having multiple judges assess the work, thus nullifying personal biases. The reliability
in the ACJ method stems from the adaptive nature of the software, in that specific pieces of work
are selected as pairs for adjudication when additional judgements are needed to reach a consensus
on their rank position. The ACJ method relies on a holistic judgment with overarching criteria
used to guide the assessor in making a professional judgment (Kimbell et al., 2009). Perhaps the
most significant aspect of ACJ lies in its capacity to facilitate adjudications on varying criteria.
While a judge may base an initial judgement on certain criteria, subsequent judgements may be
subjected to different criteria depending on the nature of the work.
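
To make the pairwise mechanism concrete, the following is a minimal illustrative sketch, in Python, of how binary judgements between pairs of portfolios can be converted into parameter values and a rank order using a simple Bradley-Terry style logistic model. This is not the ACJ software's actual estimation procedure, and the data and function names are hypothetical.

# Illustrative sketch only: estimating portfolio "parameter values" from
# binary pairwise judgements with a simple Bradley-Terry style model.
# This is NOT the ACJ software's algorithm; data and names are hypothetical.
import numpy as np

def fit_parameters(n_items, judgements, n_iter=500, lr=0.1):
    """judgements: list of (winner_index, loser_index) tuples."""
    theta = np.zeros(n_items)  # one latent quality parameter per portfolio
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for winner, loser in judgements:
            # Probability that the winner beats the loser under a logistic model
            p_win = 1.0 / (1.0 + np.exp(theta[loser] - theta[winner]))
            grad[winner] += 1.0 - p_win  # gradient of the log-likelihood
            grad[loser] -= 1.0 - p_win
        theta += lr * grad
        theta -= theta.mean()  # centre the scale so values are comparable
    return theta

# Hypothetical example: 4 portfolios and 5 judgements (winner, loser)
judgements = [(0, 1), (0, 2), (1, 2), (3, 2), (0, 3)]
theta = fit_parameters(4, judgements)
rank_order = np.argsort(-theta)  # highest parameter value first
print(theta, rank_order)

In an adaptive variant of this idea, each new pair would be selected where a further judgement is expected to be most informative, for example between portfolios whose current parameter values are close.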
Therefore, considering the capacity of ACJ to incorporate professional and holistic
judgements, the primary purpose of this study is to examine its validity and reliability in the
assessment of graphical capability.
Method
A cohort (N=128) of undergraduate Initial Technology Teacher Education (ITTE) students in
the 3rd year of their degree programme participated in this study as part of a Design and
Communication Graphics (DCG) module. All participants had completed three prerequisite
graphics education modules prior to this study. These modules focused on developing an
understanding of plane and descriptive geometry, with a particular emphasis on developing
competencies related to freehand sketching, parametric CAD modelling, technical drafting and
conceptual design.
The initial phase of the study involved each of the participants engaging with a thematic
conceptual design brief (Table 1). The brief required the participants to design an aid for an
elderly person(s) to enhance their quality of life. No explicit criteria were incorporated into the
brief except for a size limitation on the final portfolio; instead, students were required to evidence
their own understanding of graphical capability.
Table 1. Design brief utilized in the study
Brief:
Population pyramids for many developed countries highlight the reality of an aging population.
The inevitability of growing older brings with it many challenges to everyday activities. This calls
for new and innovative thinking to enrich the lives of our elderly and ensure facilitation of the
emotional, physiological, and social needs that guarantee an independent, dynamic and stimulated
life.
Reinforcing the link between technology and society;
Design and model a personal device/artefact that will enhance the quality of life for an elderly
person.
Criteria:
From a culmination of your knowledge and experience to date demonstrate evidence of graphical
capability
Upon completion of the design task, the second phase of the study required the participants to
assess the portfolios using two methods. Initially, all participants assessed the work in an ACJ
session. For this, participants each made 10 judgements on unique pairs of coursework.
Participants were instructed to make judgements based on evidence of graphical capability.
Subsequent to the ACJ session, each participant graded a randomized selection of
portfolios (mean = 14.67) on a ten-point scale (1 = lowest, 10 = highest) under criteria aligning
with the core principles of graphical capability previously discussed (Table 2). The average grade
received by each portfolio under each individual criterion was derived, as well as an average total
score across all criteria, to support comparisons with the ACJ data.
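
As a minimal sketch of the aggregation step just described, assuming a hypothetical data structure for the collected grades, the per-criterion means and overall average for a portfolio could be derived as follows.

# Illustrative sketch of the aggregation described above: the mean score a
# portfolio received under each criterion and its average across criteria.
# The data structure and values are hypothetical, not the study data.
from collections import defaultdict
from statistics import mean

CRITERIA = ["Communication", "Creativity", "Stages", "Functions", "Principles"]

# grades[(portfolio_id, criterion)] -> scores (1-10) awarded by different graders
grades = defaultdict(list)
grades[("P001", "Communication")] += [7, 8, 6]
grades[("P001", "Creativity")] += [5, 6, 7]
# ... remaining criteria and portfolios would be populated in the same way

def portfolio_summary(portfolio_id):
    per_criterion = {c: mean(grades[(portfolio_id, c)])
                     for c in CRITERIA if (portfolio_id, c) in grades}
    average = mean(per_criterion.values()) if per_criterion else None
    return per_criterion, average

print(portfolio_summary("P001"))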
Table 2. Grading system and codex used for data analysis

Code            Criteria
Communication   Rate overall how effectively the portfolio was communicated
Creativity      Rate how innovative or creative the design solution was
Stages          Rate how well the student defined the stages of the design approach
Functions       Rate the selection of appropriate functions (i.e. was the use of CAD/sketching/etc. appropriate for the stage of the design in which the student used them?)
Principles      Rate the evidence supporting the level of knowledge of graphical principles displayed
Findings
To analyse the data, it was first necessary to elicit the performance rank created by the ACJ
session. Each portfolio attained a specific parameter value based on the outcomes of the
judgements in which it was involved. The resulting rank (Figure 1) achieved a very high
interrater reliability of 0.961.
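
The paper does not detail how this reliability coefficient was computed. As illustrative context only, the sketch below shows a scale-separation reliability of the kind commonly reported for Rasch-style parameter estimates, calculated from the estimates and their standard errors; the values used are hypothetical.

# Context only: a scale-separation reliability commonly used with Rasch-style
# parameter estimates. This is not a description of the authors' calculation;
# the parameter values and standard errors below are hypothetical.
import numpy as np

def scale_separation_reliability(theta, std_errors):
    """theta: estimated parameter values; std_errors: their standard errors."""
    observed_var = np.var(theta, ddof=1)        # spread of the estimates
    error_var = np.mean(np.square(std_errors))  # average measurement error
    true_var = max(observed_var - error_var, 0.0)
    return true_var / observed_var

theta = np.array([-1.4, -0.2, 0.5, 1.1, 2.0])
std_errors = np.array([0.30, 0.25, 0.28, 0.30, 0.35])
print(round(scale_separation_reliability(theta, std_errors), 3))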
Figure 1. Portfolio parameter values and standard error bars indicating ACJ rank position
Subsequent to this, a preliminary graphical analysis was conducted to observe any underlying
relationships between the portfolios' ACJ rank positions and their performance on the grading
criteria. This involved graphing the mean score achieved for each criterion against the rank
positions. An example of this is shown in Figure 2, which illustrates a positive relationship
between a portfolio's rank position and the average score achieved across all grading criteria. A
similar positive trend emerged in all cases.
Figure 2. Mean 'average score' and ACJ rank position
To examine these relationships more explicitly, a correlational analysis was conducted
between the average scores for all criteria and the parameter values achieved by each portfolio. All
observed correlations were statistically significant at the p < 0.001 level, with moderate
correlations (r = .403 to r = .507) emerging between the parameter values and the grading criteria.
Correlations between the grading criteria ranged from high (r = .760) to very high (r = .956).
Table 3. Correlation matrix of performance variables

                ACJ Parameter  Creativity  Stages   Functions  Principles  Average
ACJ Parameter   _
Communication   .493**
Creativity      .403**         _
Stages          .484**         .760**      _
Functions       .465**         .735**      .817**   _
Principles      .504**         .764**      .847**   .933**     _
Average         .507**         .867**      .923**   .943**     .956**      _

**. Correlation is significant at the 0.001 level (2-tailed).
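
As a minimal sketch of the reported analysis, assuming two hypothetical arrays holding each portfolio's ACJ parameter value and its mean score across the criteria, the Pearson correlation could be computed as follows.

# Illustrative sketch of the correlational analysis: Pearson correlation
# between ACJ parameter values and mean criterion scores per portfolio.
# The arrays are hypothetical placeholders, not the study data.
import numpy as np
from scipy.stats import pearsonr

acj_parameter = np.array([1.8, -0.4, 0.9, -2.1, 0.3])  # one value per portfolio
average_score = np.array([7.2, 5.1, 6.4, 4.0, 5.9])    # mean across all criteria

r, p = pearsonr(acj_parameter, average_score)
print(f"r = {r:.3f}, p = {p:.4f}")  # two-tailed significance, as reported in Table 3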
Discussion and Conclusion
The results of this study are of particular interest in the assessment of graphical capability.
The use of ACJ proved highly reliable, achieving an interrater reliability of 0.961. This result
corroborates the findings of Seery et al. (2011), who achieved a similar score.
With respect to the validity of ACJ, the high correlations amongst all of the grading criteria
suggest that they are all aspects of the same construct, which is posited to be graphical capability.
However, as only moderate correlations are observable with the parameter value, a degree of
misalignment is present, suggesting that additional variables are contributing to rank position.
As no criterion correlated markedly more highly with the parameter than the others, no single
criterion appears to have been the sole focus of the judging cohort, which aligns with the holistic
nature of ACJ. It is posited that the grading criteria list omits critical elements associated with
the task, the inclusion of which would strengthen the correlation between the ACJ parameter and
the average criteria score. These could take the form of additional variables or a bifurcation of the
current variables. Ultimately, it appears that ACJ has the capacity to validly measure the construct
of graphical capability, as biases towards specific elements are not present; however, the question
of which additional variables influence its adjudications has now emerged.
References
Delahunty, T., Seery, N., & Lynch, R. (2012). The Growing Necessity for Graphical Competency.
In T. Ginner, J. Hallström, & M. Hultén (Eds.), PATT26 (pp. 144-152). Stockholm, Sweden:
PATT.
Kimbell, R., Wheeler, T., Stables, K., Shepard, T., Martin, F., Davies, D., … Whitehouse, G.
(2009). E-scape Portfolio Assessment: Phase 3 Report. London: Goldsmiths College.
Pollitt, A. (2012). Comparative Judgement for Assessment. International Journal of Technology
and Design Education, 22(2), 157-170.
Sadler, D. R. (2009). Transforming Holistic Assessment and Grading into a Vehicle for Complex
Learning. In G. Joughin (Ed.), Assessment, Learning and Judgement in Higher Education
(pp. 45-63). Netherlands: Springer.
Seery, N., Lane, D., & Canty, D. (2011). Exploring the Value of Democratic Assessment in
Design Based Activities of Graphical Education. In 118th Annual American Society for
Engineering Education Conference. Vancouver, British Columbia: American Society for
Engineering Education.
Thurstone, L. L. (1927). A Law of Comparative Judgement. Psychological Review, 34(4),
273-286.