Article
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

Considering the challenges associated with the teaching of engineering design, and recognizing potential differences in design values of individuals from various backgrounds, this study investigated the utility of adaptive comparative judgment (ACJ) as a method for informing the teaching and practice of engineering design. The authors investigated a series of research questions to examine the use of ACJ as a formative method for influencing an engineering student’s design decision-making as well as identifying/comparing design values of different engineering education stakeholders. The study results indicate that the ACJ process enabled students to gain insights for enhancing their designs through the critique of peer-work and the receipt of feedback on their own projects. Also, the results revealed similarities/differences between the ways instructors, students, and practicing engineers judged design projects. In light of these findings, it appears ACJ can be a valuable formative assessment tool for informing the practice/processes of education related to engineering design.


... Crafting effective interview questions is a skill that many students struggle with, often due to their limited experience and understanding of qualitative research methods (Doody & Noonan, 2013; Kvale & Brinkmann, 2009). LbE has demonstrated its effectiveness in helping students understand what "good" looks like when dealing with abstract design thinking concepts such as PoV statements (Bartholomew et al., 2020), incorporating geometric principles into design work (Seery et al., 2018), electronic portfolios (Seery et al., 2012), and engineering design in first-year engineering courses (Strimel et al., 2021). By providing structured examples for comparison, LbE can help students overcome challenges in formulating insightful and open-ended interview questions, which are crucial for gathering rich qualitative data to inform the design process. ...
... Researchers in Ireland (Seery et al., 2018) and the USA (Bartholomew et al., 2020; Mentzer et al., 2021; Strimel et al., 2018) have found that comparative judgement, in this case adaptive comparative judgement, can be a powerful learning tool when students were engaged as evaluators of the items (rather than experts or teachers) and used this experience as a learning exercise for their own work. Several studies (Bartholomew et al., 2020; Jackson et al., 2023; Mentzer et al., 2023; Strimel et al., 2021) have shown the value, promise, and positive gains associated with LbE, especially in open-ended design settings. ...
... This suggests that the practical implications of the LbE tool, although positive, may be limited. Nonetheless, this finding aligns closely with the findings of Bartholomew et al. (2020), and is consistent with other studies showing positive results with LbE as a treatment (Bartholomew, Mentzer, et al., 2019; Strimel et al., 2021). Notably, our findings extend this success to interview assignments conducted by students individually, a dimension previously unexplored in LbE research. ...
Article
Full-text available
This conceptual replication study, building upon Bartholomew (2020), addresses a notable gap in the literature by investigating the potential of using learning by evaluating (LbE) as an interview primer for individual assignments in design coursework. While peer feedback commonly involves both giving and receiving feedback, LbE uniquely focuses on the benefits of giving feedback to engage students actively in learning, reflection, and critical thinking. In this study, the LbE process is utilized to foster student insights that may be transferred to their own work in the preparation and conducting of qualitative research interviews. Conducted as a quasi-experimental study in an entry-level design thinking course with a large sample of undergraduate students (n = 325), this research explores specific ways that exposure to LbE as an evaluative process enhances students’ abilities to perform qualitative research interviews, even without explicit teacher feedback. Findings, consistent with previous studies on LbE, indicate that students exposed to the intervention prepared a higher proportion of open-ended questions in their interview guides, demonstrating improved preparation for each interview. However, the transfer of preparation to performance revealed that certain skills, such as asking probing questions and prompting for information, which lie beyond the scope of preparing an interview guide, may have contributed to the observation that interview lengths did not significantly improve. Methodologically, this study employed random sampling, and comparisons of treatment and control group interview guides and interview durations were conducted through independent samples t-tests. These findings suggest that LbE is an effective pedagogical strategy, specifically for individualized student work, expanding upon the findings of previous studies.
Educators and researchers may find this exploration of peer feedback through LbE on design thinking and qualitative research interviews to be of particular value as they seek to optimize the impact of peer feedback in enhancing student learning experiences.
... For learning purposes, LbE has shown promise in facilitating student learning and growth (Bartholomew et al., 2018a, 2018b, 2019; Canty et al., 2017; Sluijsmans et al., 1998a; Strimel et al., 2021). Additional work has shown that students' decision-making during the peer assessment process promotes critical thinking (Jackson et al., 2022; Sluijsmans et al., 1998a, 1998b) and an analytical approach to design (Nicol et al., 2014). ...
... Several recent studies have incorporated adaptive comparative judgment (ACJ) into STEM education settings, especially in project-based design thinking processes (Bartholomew et al., 2018a, 2018b, 2019, 2020; Dewit et al., 2021; Strimel et al., 2021). Bartholomew et al. (2018a, 2018b) examined ACJ to evaluate middle school students' learning through an open-ended problem assigned in a technology and engineering education course. ...
... A few studies delve into student-based ACJ and its educational benefits. Strimel et al. (2021) explored students' ACJ process on their peers' work, and results showed that students reflected on their own designs during the ACJ and thus gained ideas to implement in their future designs. Dewit et al. (2021) incorporated ACJ in a higher education context, a product-service-system design project for first-year master's students, and found that these students could incorporate analytical thinking and metacognition, which improved their learning experiences. ...
Article
Full-text available
Adaptive comparative judgment (ACJ) has been widely used to evaluate classroom artifacts with reliability and validity. In the ACJ experience we examined, students were provided a pair of images related to backpack design. For each pair, students were required to select which image could help them ideate better. Then, they were prompted to provide a justification for their decision. Data were from 15 high school students taking engineering design courses. The current study investigated how students’ reasoning differed based on their selection. Researchers analyzed the comments in two ways: (1) computer-aided quantitative content analysis and (2) qualitative content analysis. In the first analysis, we performed sentiment analysis and word frequency analysis using natural language processing. Based on the findings, we explored how the design thinking process was embedded in student reasoning, and whether the reasoning varied depending on the claim. Results from sentiment analysis showed that students tended to express stronger positive sentiment in shorter comments when providing reasoning for the selected design. In contrast, when providing reasoning for the items not chosen, results showed weaker negative sentiment with more detailed reasoning. Findings from word frequency analysis showed that students valued the function of a design as well as the user perspective, specifically convenience. Additionally, students took the aesthetic features of each design into consideration when identifying the better design in each pair. Within the engineering design thinking context, we found students empathize by identifying themselves as users, define users’ needs, and ideate products from provided examples.
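A minimal sketch can illustrate the kind of word-frequency pass the "computer-aided quantitative content analysis" above might begin with. This is an illustrative assumption, not the study's actual pipeline (which also applied sentiment analysis via natural language processing); the stop-word list and example comments are invented for demonstration.

```python
# Illustrative word-frequency analysis over student justification comments.
# Assumed sketch only; the study's actual NLP pipeline is not reproduced here.
import re
from collections import Counter

# A tiny, hypothetical stop-word list; real analyses use larger curated lists.
STOPWORDS = {"the", "a", "an", "is", "it", "to", "and", "of", "i", "this", "for", "more"}

def word_frequencies(comments):
    """Tokenize free-text comments and count content words (stop words removed)."""
    counts = Counter()
    for comment in comments:
        tokens = re.findall(r"[a-z']+", comment.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

# Invented example comments echoing the themes reported above
# (function, user perspective, convenience).
comments = [
    "This backpack looks more convenient for the user.",
    "The design is functional and convenient to carry.",
]
top_words = word_frequencies(comments).most_common(3)
```

On this toy input, "convenient" appears in both comments and surfaces as the most frequent content word, mirroring how the study identified "convenience" as a recurring student value.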
... This change has been driven by design being mandated by ABET as a core skill that graduates should be equipped with upon graduation. To foster the development of these skills, design projects have been incorporated into the first year of many engineering programs in addition to capstone senior design projects [1], [3]. ...
... ACJ is a holistic assessment approach which involves intentionally and adaptively pairing two items of work which are assessed by a number of individual judges to produce a rank order of performance within a group [3], [4], [11], [12]. The intentional and adaptive pairing of items of work is driven by an algorithm [13] which pairs work so as to maximize the information gained from each decision made by panellists, accelerating the achievement of a reliable rank order of performance [3], [4], [7], [11]. Software, such as RM Compare [14], uses this algorithm to automate the presentation of specific items of work to the judges. ...
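As a rough illustration of how a rank order can be estimated from pairwise judgments, the sketch below fits a simple Bradley-Terry model with a standard iterative update. This model is an assumed stand-in for exposition only; the adaptive pairing algorithm used by tools such as RM Compare is not reproduced here, and the item names are hypothetical.

```python
# Illustrative sketch: deriving a rank order from pairwise judgments via a
# Bradley-Terry model. This is NOT the proprietary ACJ algorithm; it only
# shows the general idea of turning win/loss comparisons into a ranking.

def bradley_terry_rank(items, judgments, iterations=100):
    """items: list of item ids; judgments: list of (winner, loser) pairs.
    Returns items sorted from strongest to weakest estimated quality."""
    strength = {i: 1.0 for i in items}
    wins = {i: 0 for i in items}
    for winner, _ in judgments:
        wins[winner] += 1
    for _ in range(iterations):
        new = {}
        for i in items:
            # Sum of 1/(s_i + s_j) over every comparison involving item i.
            denom = 0.0
            for w, l in judgments:
                if i in (w, l):
                    other = l if i == w else w
                    denom += 1.0 / (strength[i] + strength[other])
            new[i] = wins[i] / denom if denom else strength[i]
        # Rescale so strengths keep a fixed total (removes a free parameter).
        total = sum(new.values())
        strength = {i: v * len(items) / total for i, v in new.items()}
    return sorted(items, key=lambda i: strength[i], reverse=True)

# Hypothetical session: "A" beats "B" and "C"; "B" beats "C".
rank = bradley_terry_rank(["A", "B", "C"],
                          [("A", "B"), ("A", "C"), ("B", "C")])
```

With these three judgments the estimated ordering is A, then B, then C; adaptive systems differ mainly in choosing which pair to present next so that fewer judgments are needed to reach a reliable rank.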
Conference Paper
Full-text available
This Complete Research paper investigates the holistic assessment of creativity in design solutions in engineering education. Design is a key element in contemporary engineering education, given the emphasis on its development through the ABET criteria. As such, design projects play a central role in many first-year engineering courses. Creativity is a vital component of design capability which can influence design performance; however, it is difficult to measure through traditional assessment rubrics, and holistic assessment approaches may be more suitable for assessing the creativity of design solutions. One such holistic assessment approach is Adaptive Comparative Judgement (ACJ). In this system, student designs are presented to judges in pairs, and the judges are asked to select the item of work that they deem to have demonstrated the greatest level of a specific criterion or set of criteria. Each judge is asked to make multiple judgements where the work they are presented with is adaptively paired in order to create a ranked order of all items in the sample. The use of this assessment approach in technology education has demonstrated high levels of reliability among judges (~0.9) irrespective of whether the judges are students or faculty. This research aimed to investigate the use of ACJ to holistically assess the creativity of first-year engineering students' design solutions. The research also sought to explore the differences, if any, that would exist between the rank order produced by first-year engineering students and that produced by the faculty who regularly teach first-year students. Forty-six first-year engineering students and 23 faculty participated in this research. A separate ACJ session was carried out with each of these groups; however, both groups were asked to assess the same items of work.
Participants were instructed to assess the creativity of 101 solutions to a design task, a "Ping Pong problem," in which undergraduate engineering students had been asked to design a ping pong ball launcher to meet specific criteria. In both ACJ sessions, each item of work was included in at least 11 pairwise comparisons, with the maximum number of comparisons for a single item being 29 in the faculty ACJ session and 50 in the student ACJ session. The data from the ACJ sessions were analyzed to determine the reliability of using ACJ to assess the creativity of design solutions in first-year engineering education, and to explore whether the rankings produced from the first-year engineering students' ACJ session differed significantly from those of the faculty. The results indicate a reasonably high level of reliability in both sessions as measured by the Scale Separation Reliability (SSR) coefficient, SSRfaculty = 0.65 ± 0.02, SSRstudents = 0.71 ± 0.02. Further, a strong correlation was observed between the ACJ ranks produced by the students and faculty, both when considered in terms of the relative differences between items of work, r = .533, p < .001, and their absolute rank positions, ρ = .553, p < .001. These findings indicate that ACJ is a promising tool for holistically assessing design solutions in engineering education. Additionally, given the strong correlation between the ranks of students and faculty, ACJ could be used to include students in their own assessment to reduce the faculty grading burden, or to develop a shared construct of capability which could increase the alignment of teaching and learning.
... First-year engineering students are typically required to engage in team and problem-based activities through introductory coursework to support the development of design capabilities [1], [11]. This type of activity is typically assessed using rubrics, portfolios, and criterion-grading tools [2]-[6]. However, there are issues with assessing open-ended and divergent tasks in this manner, including reliability, teacher bias, excessive time investment, and timeliness of feedback [1]-[5]. Adaptive Comparative Judgement (ACJ) is a holistic assessment tool that can address such issues [1], [2], [5]-[7]. This paper will explore the capacity of ACJ to be used as an assessment and learning tool for first-year engineering students. ...
Conference Paper
Full-text available
Design projects are an important part of many first-year engineering programs. The desire to employ holistic assessment strategies to student work with open-ended and divergent responses has been widely noted in the literature. Holistic strategies can provide insight into the role of qualities (e.g., professional constructs) that are not typically conducive to standard assessment rubrics. Adaptive Comparative Judgement (ACJ) is an assessment approach that is used to assess design projects holistically. The assessment of projects using ACJ can be carried out by experts or students to scaffold their learning experience. This Work-in-Progress paper explores the use and benefits of ACJ for assessing design projects specifically focusing on first-year engineering students and educators. Further, conference attendees will be provided the opportunity throughout the conference to engage with the ACJ software to experience how this system can work in practice for assessing student design projects.
... Though the majority of studies initially limited the judges to trained graders/instructors, recent work has explored students' (or other untrained judges') competence as judges in ACJ (Rowsome et al., 2013; Jones and Alcock, 2014; Palisse et al., 2021). Findings suggest that, in many cases, students, and even out-of-class professionals (e.g., practicing engineers; see Strimel et al., 2021), can reach a consensus similar to that reached by trained judges or classroom teachers, suggesting a shared quality consensus across different judge groups. ...
... For the second, Canty (2012) describes how misfit statistics can be used to identify outlier judges; importantly, such judges could have made reasoned judgments but hold a different view of capability or learning than the majority of the cohort. Multiple studies use correlations between an ACJ rank and grades generated through the use of traditional rubrics as a measure of validity (Canty, 2012; Bartholomew et al., 2018a, 2018b, 2019b; Strimel et al., 2021). Based on these studies, while not explicit, an implicit suggestion is being made that the hypothesis that ACJ offers a valid measure of assessment could be falsified if non-significant or negative correlations were observed in these investigations. ...
... Zhang notes that additional research into the role of context-specific language in the ACJ assessment experience is needed; understanding the potentially reciprocal role between understanding content, using appropriate language, and providing feedback may provide insight into new ways of using ACJ as a learning and assessment tool. Strimel et al. (2020) utilized a mixed methods approach to gather both qualitative (student written comments and questionnaire) and quantitative data (ACJ data output). In this study, ACJ was used as a formative tool (by students) and a summative assessment approach (by students, instructors, and industry experts). ...
... The number of ACJ articles by year. Note: As the Strimel et al. (2020) and Buckley et al. (2020) articles were in the peer-review process at the time of publication preparation, they were included as 2019 articles (both submitted in 2019) in the figures in this systematized review. These articles have since been published online, during the 2020 calendar year, thus leading to the 2020 citation for each in Table 1 (locations where ACJ research has been conducted, 2016-2019). ...
Article
Full-text available
Adaptive Comparative Judgment (ACJ), an approach to the assessment of open-ended problems which utilizes a series of comparisons to produce a standardized score, rank order, and a variety of other statistical measures, has demonstrated high levels of reliability and validity and the potential for application in a wide variety of areas. Further, research into using ACJ, both as a formative and summative assessment tool, has been conducted in multiple contexts across higher education. This systematized review of ACJ research outlines our approach to identifying, classifying, and organizing findings from research with ACJ in higher education settings as well as overarching themes and questions that remain. The intent of this work is to provide readers with an understanding of the current state of the field and several areas of potential further inquiry related to ACJ in higher education.
... Bartholomew et al., 2021), and its adaptability across contexts (S. Huber et al., 2021; Strimel et al., 2021). In LbE, exemplars are adapted to individualized judgment sessions using web-based software, randomizing samples for pairwise comparison in such a way that each student makes a different set of comparisons, followed by a whole-class discussion. ...
Article
Full-text available
Learning by Evaluating (LbE) is an instructional approach that involves students making comparative judgements of pairs of artifacts, such as student work, portfolios, prototypes, or curated images related to a topic of instruction to enhance critical thinking and decision-making skills. Situated as a primer for learning, the efficacy of LbE stems from actively engaging students in the evaluation process, scaffolding their learning, fostering self-reflection in decision-making, and facilitating the transfer of acquired skills and concepts to academic contexts and project work. However, there is an opportunity to gain deeper insights into classroom integration of LbE and the factors that may influence the student experience. This study adopts a design-based research approach to analyze LbE within a secondary STEM education setting, with the objective of optimizing classroom integration. By analyzing student comments generated during LbE, the research explores factors shaping the students’ learning experiences, examining the extent to which students engage in informed decision-making, offer justifications, and express sentiments throughout the process. Additionally, the study explores how teachers strategically incorporate LbE into their classroom, aligning LbE sessions with curriculum objectives. Findings indicate a diverse pattern of student engagement, sentiments, and decision-making approaches across STEM classrooms. This study contributes to research on LbE by offering insights into the dynamics between teacher implementation and student engagement. The insights gained highlight the potential for refining the effectiveness of LbE within the classroom. Notably, the research emphasizes the significance of how LbE sessions are framed and strategically integrated to enrich the overall educational experience for both students and educators.
... Following the interactive discussion on setting up an ACJ assessment session, participants will be taken through the interface for ACJ (see Figure 1) and the process of making judgements and providing feedback on items of work. Feedback on items of work can be collated and provided to students to facilitate formative assessment [3]-[5]. Participants will then be invited to engage in an ACJ assessment session. ...
... A November 2021 search of the Journal of Engineering Education, for example, returns only a single article with the term "judgment" (search term: judg*) in the title or keywords: Jesiek et al.'s (2020) work on situational judgment as part of global engineering competency. A parallel search in the European Journal of Engineering Education also yielded only a single citation, which focused on using the process of adaptive comparative judgment in design pedagogy (Strimel et al., 2021). Consistent with the presence of judgment in the Civil Engineering BOK, the Journal of Civil Engineering Education (formerly the Journal of Professional Issues in Engineering Education and Practice) does include some empirical research studies related directly to engineering judgment. ...
Article
Full-text available
Engineering judgment is critical to both engineering education and engineering practice, and the ability to practice or participate in engineering judgment is often considered central to the formation of professional engineering identities. In practice, engineers must make difficult judgments that evaluate potentially competing objectives, ambiguity, uncertainty, incomplete information, and evolving technical knowledge. Nonetheless, while engineering judgment is implicit in engineering work and so central to identification with the profession, educators and practitioners have few actionable frameworks to employ when considering how to develop and assess this capacity in students. In this paper, we propose a theoretical framework designed to inform both educators and researchers that positions engineering judgment at the intersection of the cognitive dimensions of naturalistic decision-making, and discursive dimensions of identity. Our proposed theory positions engineering judgment not only as an individual capacity practiced by individual engineers alone but also as the capacity to position oneself within the discursive community so as to participate in the construction of engineering judgments among a group of professionals working together. Our theory draws on several strands of existing research to theorize a working framework for engineering judgment that considers the cognitive processes associated with making judgments and the inextricable discursive practices associated with negotiating those judgments in context. In constructing this theory, we seek to provide engineering education practitioners and researchers with a framework that can inform the design of assignments, curricula, or experiences that are intended to foster students’ participation in the development and practice of engineering judgment.
... Given the large number of studies already available exploring issues of reliability and validity, this paper instead focuses on recent calls to explore the potential use of comparative judgement as a learning tool rather than an assessment tool (e.g., Strimel et al., 2020). As yet, there has been minimal research looking at the role comparative judgement might play pedagogically, particularly in the context of mathematics. ...
Book
Full-text available
Leong, Y. H., Kaur, B., Choy, B. H., Yeo, J. B. W., & Chin, S. L. (Eds.). (2021). Proceedings of the 43rd Annual Conference of the Mathematics Education Research Group of Australasia (MERGA): Excellence in Mathematics Education: Foundations and Pathways. Singapore: MERGA. Available online at https://www.merga.net.au/Public/Publications/Annual_Conference_Proceedings/2021-MERGA-conference-proceedings.aspx
... For the second, Canty (2012) describes how misfit statistics can be used to identify outlier judges; importantly, such judges could have made reasoned judgments but hold a different view of capability or learning than the majority of the cohort. Multiple studies use correlations between an ACJ rank and grades generated through the use of traditional rubrics as a measure of validity (Canty, 2012; Seery et al., 2012; Bartholomew et al., 2018a, 2018b, 2019b; Strimel et al., 2021). Based on these studies, while not explicit, an implicit suggestion is being made that the hypothesis that ACJ offers a valid measure of assessment could be falsified if non-significant or negative correlations were observed in these investigations. ...
Article
Full-text available
There is a continuing rise in studies examining the impact that adaptive comparative judgment (ACJ) can have on practice in technology education. This appears to stem from ACJ being seen to offer a solution to the difficulties faced in the assessment of designerly activity which is prominent in contemporary technology education internationally. Central research questions to date have focused on whether ACJ was feasible, reliable, and offered broad educational merit. With exploratory evidence indicating this to be the case, there is now a need to progress this research agenda in a more systematic fashion. To support this, a critical review of how ACJ has been used and studied in prior work was conducted. The findings are presented thematically and suggest the existence of internal validity threats in prior research, the need for a theoretical framework and the consideration of falsifiability, and the need to justify and make transparent methodological and analytical procedures. Research questions now of pertinent importance are presented, and it is envisioned that the observations made through this review will support the design of future inquiry.
... Though the majority of studies initially limited the judges to trained graders/instructors, recent work has explored students' (or other untrained judges') competence as judges in ACJ (Rowsome et al., 2013; Jones and Alcock, 2014; Palisse et al., 2021). Findings suggest that, in many cases, students, and even out-of-class professionals (e.g., practicing engineers; see Strimel et al., 2021), can reach a consensus similar to that reached by trained judges or classroom teachers, suggesting a shared quality consensus across different judge groups. ...
Article
Full-text available
Adaptive comparative judgment (ACJ) is a holistic judgment approach used to evaluate the quality of something (e.g., student work) in which individuals are presented with pairs of work and select the better item from each pair. This approach has demonstrated high levels of reliability with less bias than other approaches, hence providing accurate values in summative and formative assessment in educational settings. Though ACJ itself has demonstrated significantly high reliability levels, relatively few studies have investigated the validity of peer-evaluated ACJ in the context of design thinking. This study explored peer evaluation, facilitated through ACJ, in terms of construct validity and criterion validity (concurrent validity and predictive validity) in the context of a design thinking course. Using ACJ, undergraduate students (n = 597) who took a design thinking course during Spring 2019 were invited to evaluate design point-of-view (POV) statements written by their peers. As a result of this ACJ exercise, each POV statement attained a specific parameter value, which reflects the quality of the POV statement. To examine construct validity, researchers conducted a content analysis, comparing the contents of the 10 POV statements with the highest scores (parameter values) and the 10 POV statements with the lowest scores, as derived from the ACJ session. For criterion validity, we studied the relationship between peer-evaluated ACJ and graders' rubric-based grading. To study concurrent validity, we investigated the correlation between peer-evaluated ACJ parameter values and grades assigned by course instructors for the same POV writing task. Then, predictive validity was studied by exploring whether peer-evaluated ACJ values for POV statements were predictive of students' grades on the final project.
Results showed that the contents of the statements with the highest parameter values were of better quality compared to the statements with the lowest parameter values. Therefore, peer-evaluated ACJ showed construct validity. Also, though peer-evaluated ACJ did not show concurrent validity, it did show moderate predictive validity.
... All final POVs, from both experimental and control groups, were extracted, collated, and anonymized for use in two additional ACJ sessions conducted at the conclusion of the course: (1) completed by all students, control and treatment (n = 550), and (2) completed by course instructors (n = 6). These two separate ACJ sessions were used to investigate the quality of student POVs by both (1) the students and (2) the instructors, as other research findings (e.g., Strimel et al., 2020) indicate differences in student and instructor perceptions during formative feedback and assessment experiences. The resulting statistics, derived from both the instructor and the student ACJ sessions, were used to investigate the guiding research question related to the impact of the LbE intervention on student performance. ...
Article
Full-text available
Traditional efforts around improving assessment often center on the teacher as the evaluator of work rather than the students. These assessment efforts typically focus on measuring learning rather than stimulating, promoting, or producing learning in students. This paper summarizes a study of a large sample of undergraduate students (n = 550) in an entry-level design-thinking course who engaged with Adaptive Comparative Judgment (ACJ), a form of assessment, as a learning mechanism. Following random assignment into control and treatment sections, students engaged in identical activities with the exception of a 20-minute intervention we call learning by evaluating (LbE). Prior to engaging in a Point Of View (POV) creation activity, treatment group students engaged in LbE by viewing pairs of previously collected POV statements through ACJ; in each case they viewed two POV statements side-by-side and selected the POV statement they believed was better. Following this experience, students created their own POV statements, and then the final POV statements, from both the control and treatment students, were collected and evaluated by instructors using ACJ. In addition, qualitative data consisting of student comments, collected during ACJ comparisons, were coded by the researchers to further explore the potential for students to use class knowledge while engaging in the LbE review of peer work. Both the quantitative and qualitative data sets were analyzed to investigate the impact of the LbE activity. Consistent with other ACJ research findings, significant positive learning gains were found for students who engaged in the intervention. Researchers also noted that these findings did not indicate the actual quality of the assignments, meaning that while students who engaged in the LbE intervention performed better than their peers, they were not necessarily “good” at the assignment themselves. Discussion of these findings and areas for further inquiry are presented.
Article
Full-text available
Adaptive comparative judgment (ACJ) has proven to be a valid, reliable, and feasible method for assessing student performance in open-ended design scenarios. In addition to the use of ACJ for purely assessment and evaluation, research has demonstrated an opportunity to identify the design values of judges involved with the ACJ process. The potential for ACJ, as a tool for understanding cultural design values, and potentially facilitating international collaboration, is intriguing. Therefore, this study established three panels of judges, from countries around the world, to assess one body of student work using the ACJ method. The similarities, differences, and findings from these assessment results were analyzed, revealing distinct design values, preferences, and differences for each group of judges from the different locations.
Article
Full-text available
Educational assessment has profound effects on the nature and depth of learning that students engage in. Typically, two core types are discussed within the pertinent literature: criterion- and norm-referenced assessment. However, another form, ipsative assessment, refers to the comparison between current and previous performance within a course of learning. This paper gives an overview of an ipsative approach to assessment that serves to facilitate an opportunity for students to develop personal constructs of capability and to provide a capacity to track competence-based gains both normatively and ipsatively. The study cohort (n = 128) consisted of undergraduate students in a Design and Communication Graphics module of an Initial Technology Teacher Education programme. Four consecutive design assignments were designed to elicit core graphical skills and knowledge. An adaptive comparative judgment method was employed to rank responses to each assignment, which were subsequently analysed from an ipsative perspective. The paper highlights the potential of this approach in developing students’ epistemological understanding of graphical and technological education. Significantly, this approach demonstrates the capacity of ACJ to track performance over time and explores this relative to student ability levels in the context of conceptual design.
Conference Paper
Full-text available
The type of assessment implemented within an educational course has profound effects on the nature and depth of learning that students engage in. Typically there are two core types discussed within the pertinent literature. These are criterion and norm referenced assessment. The nature and impact of these modes of assessment have been explored within a variety of learning contexts. However, another and often overlooked form of assessment is ipsative enquiry. This refers to the comparison, by the student, of current performance to previous performance within a course of learning. Ipsative assessment has been shown to increase motivational effects among students and promotes inclusivity in the learning process. Despite the potential benefits, its role in technology education contexts is under researched. This paper gives an overview of an ipsative approach to assessment that serves two functions. Firstly, to facilitate an opportunity for each student to develop a personal construct of what it means to be capable and secondly to provide a capacity to track their level of competence based gain both normatively and ipsatively. The study tracks the performance of a cohort of student teachers (N = 128) in a core graphics module during a year three semester of an Initial Technology Teacher Education (ITTE) programme. Four consecutive graphical design tasks, focusing on the application of graphical principles, were designed to elicit core graphical skills and knowledge. An adaptive comparative judgment method (see Pollitt (2012) and Kimbell (2012)) was employed by the students to rank the responses to each task. The paper highlights the potential of this approach in developing students' epistemological understanding of graphical and technological education, while tracking competence based gain through ipsative enquiry within the collective performance of their peers. Significantly, this approach demonstrates the capacity to track performance over time. 
The paper concludes with a discussion around the benefits of utilising ipsative assessment in design and technology education.
Article
Full-text available
While research into the effectiveness of open-ended problems has made strides in recent years, less has been done around the assessment of these problems. The large number of potentially correct answers makes this assessment difficult. Adaptive Comparative Judgment (ACJ), an approach based on assessors/judges working through a series of paired comparisons and selecting the better of two items, has demonstrated high levels of reliability and effectiveness with these problems. Research into using ACJ, both formative and summative, has been conducted at all grade levels within K-16 education (ages 5-18), with a myriad of findings. This paper outlines a systematic review process used to identify articles and synthesizes the findings from the included research around ACJ in K-16 education settings. The intent of this systematic review is to inform decision-makers weighing the potential for ACJ integration in educational settings with research-based findings around ACJ in K-16 educational settings. Further, this review also uncovers potential areas for future researchers to investigate ACJ and its implications in educational settings.
Article
Full-text available
While design-based pedagogies have increasingly been emphasized, the assessment of design projects remains difficult due to the large number of potentially “correct” solutions. Adaptive comparative judgment (ACJ), an approach based on assessors/judges working through a series of paired comparisons and selecting the better of two items, has demonstrated high levels of inter-rater reliability with design projects. Efforts towards using ACJ for assessing design have largely centered on summative assessment. However, evidence suggests that ACJ may be a powerful tool for formative assessment and design learning when undertaken by students. Therefore, this study investigated middle school students who participated in ACJ at the midpoint and conclusion of a design project, both receiving and providing feedback from/to their peers through the ACJ process. Findings demonstrated promise for using ACJ, as a formative assessment and feedback tool, to improve student learning and achievement.
Article
Full-text available
This article examines the use of an alternative form of assessment for engineering design projects called adaptive comparative judgment (ACJ). The researchers employed an ACJ tool to evaluate undergraduate engineering student design projects in an effort to examine its reliability, validity, and utility in comparison with traditional assessment techniques. The ACJ process employed multiple judges to compare the design artifacts of 16 first-year engineering majors. The authors conducted an analysis of the reliability and validity of the ACJ method compared to the traditional rubric used to evaluate the project and the performance data of each student's design prototype. For these design artifacts, ACJ demonstrated a strong alignment with traditional assessment methods (r_s = 0.79, p < 0.01). Yet, neither ACJ nor traditional assessment results were significantly correlated with the actual performance of the design prototype. Additionally, the findings indicate the amount of time each judge devotes to judging student work using ACJ does not significantly impact the reliability of their assessment.
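The rank-correlation analysis described in this abstract (a Spearman coefficient between ACJ ranks and rubric scores) can be illustrated with a short sketch. The project scores below are hypothetical, and the function is a plain-Python Spearman computation for the tie-free case, not the authors' analysis code:

```python
def spearman_rho(a, b):
    """Spearman rank correlation for two equal-length score lists (no ties)."""
    def rank(xs):
        # rank 1 = smallest value
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r

    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    # classic tie-free formula: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical ACJ parameter values and rubric scores for five projects
acj_params = [1.8, 0.4, -0.2, 0.9, -1.1]
rubric_scores = [92, 78, 80, 85, 60]
rho = spearman_rho(acj_params, rubric_scores)  # high positive agreement
```

A correlation near 1 indicates the two assessment methods ranked the projects in nearly the same order, which is the kind of alignment the abstract reports.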
Article
Full-text available
The emphasis on and implementation of P-12 engineering education has continued to gain momentum in the United States (Grubbs & Strimel, 2016). Yet, the struggle to keep the technology and engineering education (TEE) school subject strongly positioned in elementary and secondary schools endures (Starkweather, 2015), which de Vries (2015) believes may be a result of a lacking epistemic basis for the subject. For example, TEE worldwide can be viewed as deficient in agreed-upon fundamental concepts, laws, and principles that could put it on a distinguishable level with science and mathematics education (de Vries, 2015). Unlike these other school subjects, TEE has had a history of evolving, dynamic content. This content once involved woodworking and metalworking but now embraces topics such as computer-aided design, robotics, and control systems and fosters student abilities to tinker, design, create, critique, make, and invent (Starkweather, 2015). While increased focus on these more recent topics may be the main reason that a student chooses to become an engineer, engineering technologist, technician, computer scientist, or even a TEE teacher (see Figure 1), the subject continues to have a positioning problem (Starkweather, 2015). Strimel, Grubbs, and Wells (2016) offered an approach to harness the engineering momentum and realign TEE with post-secondary engineering-related studies. The purpose is to use engineering as the epistemic or knowledge base for the subject, thus establishing stable content and creating a better position for TEE within schools. As a result, more students could be exposed to coursework focused on engineering and technological literacy and, consequently, improve their capabilities to design, invent, innovate, and address societal problems. It can be difficult, however, for students to understand what they will experience as they leave high school TEE programs and enter post-secondary studies.
While all of the career fields listed in Figure 1 are important, this article focuses specifically on demystifying the transition from TEE to post-secondary engineering studies in an effort to outline some foundational content for the evolving TEE school subject and to provide a guide for teachers to use with students who may show interest in pursuing an engineering career.
Article
Full-text available
Feedback is an important part of design education. To better understand how feedback is provided to students on their engineering design work, we characterised and compared first-year engineering students’, undergraduate teaching assistants’, and educators’ written feedback on sample student design work. We created a coding scheme including two domains: Substance and Focus of feedback. Educators made more and longer comments than undergraduate teaching assistants, and undergraduate teaching assistants made more and longer comments than first-year students. The first-year students focused on giving specific directions in their feedback while educators and undergraduate teaching assistants asked thought-provoking questions. Students tended to make more comments about the ways that their peers had communicated their design work while educators and undergraduate teaching assistants made more comments about the design ideas presented in the sample work. This study offers implications for practice for supporting educators, undergraduate teaching assistants, and first-year engineering students to be able to provide feedback on design work.
Article
Full-text available
Feature article in Technology and Engineering Teacher (March 2017) on assessing open-ended design problems. An emphasis on ACJ, as a promising design-assessment approach, will strengthen our position and potential as a vital element of K-12 education moving forward.
Article
Full-text available
There is much support in the research literature and in the standards for the integration of engineering into science education, particularly the problem solving approach of engineering design. Engineering design is most often represented through design-based learning. However, teachers often do not have a clear definition of engineering design, appropriate models for teaching students, or the knowledge and experience to develop integrative learning activities. The purpose of this article is to examine definitions of engineering design and how it can be utilized to create a transdisciplinary approach to education to advance all students' general STEM literacy skills and 21st century cognitive competencies. Suggestions for educators who incorporate engineering design into their instruction will also be presented.
Article
Full-text available
A perceived inability to assess creative attributes of students’ work has often precluded creativity instruction in the classroom. The Consensual Assessment Technique (CAT) has shown promise in a variety of domains for its potential as a valid and reliable means of creativity assessment. Relying upon an operational definition of creativity and a group of raters experienced in a given domain, the CAT offers the field of engineering education an assessment method that has demonstrated discriminant validity for dimensions of creativity as well as for technical strength and aesthetic appeal. This paper reports on a web-based adaptation of the CAT for rating student projects developed during a weeklong engineering camp. Images of resulting scale models, technical drawings, and poster presentation materials were displayed on a website which was accessed by a team of seven independent raters. Online survey software featuring a series of Likert-type scales was used for ratings. The raters viewed project images on larger computer screens and used iPads to input their assessments. This effort extended the accessibility of the CAT to raters beyond limitations of geographic location.
Article
Full-text available
Transparency regarding criteria for success in assessment processes is challenging for most teachers. The context of this study is primary school technology education. With the purpose to establish what criteria for success teachers put forward during the act of assessment, think-aloud protocols were collected from five primary teachers during an assessment act. Results are based on content analysis of think-aloud protocols and quantitative measures of reliability in order to ascertain teachers’ motives for decision-making when assessing Year 5 pupils’ multimodal e-portfolios. Findings show consensus among these teachers, focusing on the execution of the task in relation to the whole, rather than to particular pieces of student work. The results confirm the importance of task design, where active learning in combination with active tutoring is an integral part, including provision of time and space for pupils to finish their work.
Article
Full-text available
Adaptive Comparative Judgement (ACJ) is a modification of Thurstone’s method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better. From many such comparisons a measurement scale is created showing the relative quality of students’ work; this can then be referenced in familiar ways to generate test results. The judges are asked only to make a valid decision about quality, yet ACJ achieves extremely high levels of reliability, often considerably higher than practicable operational marking can achieve. It therefore offers a radical alternative to the pursuit of reliability through detailed marking schemes. ACJ is clearly appropriate for performances like writing or art, and for complex portfolios or reports, but may be useful in other contexts too. ACJ offers a new way to involve all teachers in summative as well as formative assessment. The model provides strong statistical control to ensure quality assessment for individual students. This paper describes the theoretical basis of ACJ and illustrates it with outcomes from some of the authors’ trials.
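The scale construction this abstract describes, turning many binary "which is better?" decisions into relative quality estimates, can be sketched in simplified form as fitting a Bradley-Terry model to the judges' win/loss record. The item names below are invented, and the fitting loop is a basic minorisation-maximisation update; production ACJ engines use a Rasch formulation with adaptive pair selection, so treat this as an illustration of the idea only:

```python
import math
from collections import defaultdict

def bradley_terry(comparisons, n_iter=200):
    """Estimate a relative-quality parameter per item from pairwise judgments.

    comparisons: list of (winner, loser) item ids.
    Returns {item: log-strength}, normalised so strengths average out.
    """
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)
    opponents = defaultdict(list)  # one entry per comparison the item was in
    for w, l in comparisons:
        wins[w] += 1
        opponents[w].append(l)
        opponents[l].append(w)
    p = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new = {}
        for i in items:
            # standard MM update: p_i = wins_i / sum_j 1/(p_i + p_j)
            denom = sum(1.0 / (p[i] + p[j]) for j in opponents[i])
            new[i] = max(wins[i], 1e-9) / denom  # floor keeps winless items positive
        # normalise so the geometric mean strength is 1
        g = math.exp(sum(math.log(v) for v in new.values()) / len(new))
        p = {i: v / g for i, v in new.items()}
    return {i: math.log(v) for i, v in p.items()}

# Hypothetical judging session: A beats B twice, A beats C, B beats C
scale = bradley_terry([("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")])
```

Sorting items by the returned log-strengths reproduces the kind of relative-quality rank order that ACJ reports to teachers.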
Article
Full-text available
This paper is based on the premises that the purpose of engineering education is to graduate engineers who can design, and that design thinking is complex. The paper begins by briefly reviewing the history and role of design in the engineering curriculum. Several dimensions of design thinking are then detailed, explaining why design is hard to learn and harder still to teach, and outlining the research available on how well design thinking skills are learned. The currently most-favored pedagogical model for teaching design, project-based learning (PBL), is explored next, along with available assessment data on its success. Two contexts for PBL are emphasized: first-year cornerstone courses and globally dispersed PBL courses. Finally, the paper lists some of the open research questions that must be answered to identify the best pedagogical practices of improving design learning, after which it closes by making recommendations for research aimed at enhancing design learning.
Article
Full-text available
In this article I describe the context within which we developed project e-scape and the early work that laid the foundations of the project. E-scape (e-solutions for creative assessment in portfolio environments) is centred on two innovations. The first concerns a web-based approach to portfolio building, allowing learners to build their portfolios in real-time, directly from hand-held peripheral technologies in studios, workshops, laboratories, and from off-site settings. The second concerns the development of a radical web-based approach to the assessment of performance as captured in these portfolios. In many parts of the world, portfolios feature as part of school-based assessments—including those undertaken for school-leaving and certification purposes. In this setting assessment reliability is critical and (judged by practice in England & Wales) is typically far from satisfactory. The approach developed within e-scape has radically improved assessment reliability. Whilst these two innovations represent the most dramatic outcomes of the project, they arose from a set of principles held by the team of researchers in the Technology Education Research Unit (TERU) at Goldsmiths, University of London. And central to these principles is that both the portfolio and the assessment approaches should be embedded in a view of active learning, such that engagement with them has a positive impact on classroom practice. As the title suggests, this paper outlines the origins, underlying principles and early development of project e-scape.
Article
Full-text available
In the opening paper in this Special Edition I outlined the major issues that led to the establishment of project e-scape. The project was intended to develop systems and approaches that enabled learners to build real-time web-based portfolios of their performance (initially) in design & technology and additionally to build systems and approaches to facilitate the web-based assessment of those portfolios. The project was commissioned by the Qualifications and Curriculum Authority (QCA) with additional ‘buy-in’ from Awarding Bodies—who were seen by QCA as the leading beneficiaries of a successful project. The project was designed in three phases. I have outlined—in the Introduction to this Special Edition—the early exploratory work that we undertook within phase 1, the aim of which was to prove the viability of the concept. This was achieved, and QCA then commissioned phase 2 with a brief to build a working prototype system and run it through a national pilot-testing programme in 2006. Age 15 was the target age-group, aligning as closely as we could with the Awarding Body requirements for the General Certificate of Secondary Education (GCSE) that runs with age 16 learners. The successes of the phase 2 prototype—both as classroom activity and as reliable assessment—led QCA and Becta (the body responsible for funding ICT developments in schools) to commission phase 3 in which we explored the potential of the e-scape system for wider application. Specifically, we were required to demonstrate the transferability of the system to other curriculum areas beyond design & technology, and the scalability of the system if it were to be used for national assessment purposes, with hundreds of thousands of candidates. 
In this paper, I outline the approach that we adopted through the e-scape research; describe the major elements of the work both in terms of classroom/curriculum practice and in terms of new approaches to assessment; and analyse some of the key issues that arise from it.
Article
Full-text available
Libraries are increasingly called upon to demonstrate student learning outcomes and the tangible benefits of library educational programs. This study reviewed and compared the efficacy of traditionally used measures for assessing library instruction, examining the benefits and drawbacks of assessment measures and exploring the extent to which knowledge, attitudes, and behaviors actually paralleled demonstrated skill levels. An overview of recent literature on the evaluation of information literacy education addressed these questions: (1) What evaluation measures are commonly used for evaluating library instruction? (2) What are the pros and cons of popular evaluation measures? (3) What are the relationships between measures of skills versus measures of attitudes and behavior? Research outcomes were used to identify relationships between measures of attitudes, behaviors, and skills, which are typically gathered via attitudinal surveys, written skills tests, or graded exercises. Results provide useful information about the efficacy of instructional evaluation methods, including showing significant disparities between attitudes, skills, and information usage behaviors. This information can be used by librarians to implement the most appropriate evaluation methods for measuring important variables that accurately demonstrate students' attitudes, behaviors, or skills.
Article
Full-text available
( This reprinted article originally appeared in Psychological Review, 1927, Vol 34, 273–286. The following is a modified version of the original abstract which appeared in PA, Vol 2:527. ) Presents a new psychological law, the law of comparative judgment, along with some of its special applications in the measurement of psychological values. This law is applicable not only to the comparison of physical stimulus intensities but also to qualitative judgments, such as those of excellence of specimens in an educational scale. The law is basic for work on Weber's and Fechner's laws, applies to the judgments of a single observer who compares a series of stimuli by the method of paired comparisons when no "equal" judgments are allowed, and is a rational equation for the method of constant stimuli.
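As commonly stated in the psychometrics literature, Thurstone's law relates the observed proportion of judgments preferring one stimulus to the separation of the two stimuli on an underlying psychological scale (here S_1 and S_2 are scale values, sigma_1 and sigma_2 the discriminal dispersions, r their correlation, and z_12 the unit-normal deviate corresponding to the observed proportion p_12 preferring stimulus 1):

```latex
% General form: scale separation of stimuli 1 and 2
S_1 - S_2 = z_{12}\sqrt{\sigma_1^2 + \sigma_2^2 - 2 r \sigma_1 \sigma_2}

% Case V: equal dispersions and zero correlation, with the unit of
% measurement chosen so the common dispersion term is absorbed:
S_1 - S_2 = z_{12}, \qquad z_{12} = \Phi^{-1}(p_{12})
```

The Case V simplification is the form that underlies later paired-comparison scaling methods, including the comparative-judgment assessment approaches discussed elsewhere on this page.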
Conference Paper
Full-text available
Creativity is integral to design success, yet creativity is neither well defined nor easily evaluated. A goal of engineering design courses is to teach creativity, or at least creativity methods (e.g., brainstorming, random stimuli) that, when applied, should increase design creativity. However, it is unclear how creativity outcomes can be evaluated, or even whether creativity should be evaluated within an engineering design course, such as for a design project. In this paper, we discuss creativity and approaches to evaluating creativity using both the literature and our experiences with evaluating design project creativity. We also describe future work required to further understand and develop methods of evaluating design project creativity, with the aim of encouraging students to consciously work towards creative designs.
Conference Paper
Full-text available
Integrating more engineering contexts, introducing advanced engineering topics, addressing multiple ABET criteria, and serving under-represented student populations in foundation engineering courses are some of the opportunities realized by the use of a new framework for developing real-world client-driven problems. These problems are called model-eliciting activities (MEAs), and they are based on the models and modeling perspective developed in mathematics education. Through a NSF-HRD gender equity project that has funded the development, use, and study of MEAs in undergraduate engineering courses for increasing women's interest in engineering, we have found that the MEA framework fosters significant change in the way engineering faculty think about their teaching and their students. In this paper, we will present the six principles that guide the development of an MEA, detail our motivation for using the MEA framework to construct open-ended problems, and discuss the opportunities and challenges to creating, implementing, and assessing MEAs.
Article
Improving graphics education may begin with understanding best practices for providing, receiving, and improving formative feedback. Challenges related to anonymity, efficiency, and validity in peer critique settings all contribute to a difficult-to-implement process. This research investigates university-level computer graphics students engaged in adaptive comparative judgement (ACJ), as a formative learning, assessment, and feedback tool, during several open-ended graphics design projects. A control group of students wrote feedback on papers in small group critiques while the experimental group students participated in ACJ, acting as judges of peer work and providing and receiving feedback to, and from, their peers. Relationships between each approach and student achievement were explored. Further, this paper discusses the potential benefits and challenges of using ACJ as a formative assessment and peer feedback tool, as well as student impressions of both approaches toward peer formative assessment and feedback.
Chapter
The endeavor to support creative and innovative activities within the construct of testing, grading, and rewarding in a standardized, reliable, and equitable way is a significant challenge for every subject. Technology education supports the development of a critical and inquisitive disposition (Williams 2011), yet one can question the capacity to effectively and validly measure the capabilities that enact this disposition. This chapter highlights the importance of integrating professional judgment as a means of supporting a more effective assessment of the evidence and actions that allude to the characteristics of a technologically capable person. The chapter discusses the proximal and distal effects of using adaptive comparative judgment (ACJ) as a means of judging evidence of capability so as to demonstrate the validity of the assessment method while supporting the pragmatic requirements of formal education. The chapter also discusses critical aspects of the impact assessment practices have from the perspective of the teacher and the student. The chapter concludes by presenting ACJ as a central approach to effective assessment “as” learning.
Article
This study presents a comparative evaluation of analytical methods to allocate individual marks from a team mark. Only the methods that use or can be converted into some form of mathematical equations are analysed. Some of these methods focus primarily on the assessment of the quality of teamwork product (product assessment) while the others put greater emphasis on the assessment of teamwork performance (process assessment). The remaining methods try to strike a balance between product assessment and process assessment. To discuss the characteristics of these methods, graphical plots generated by the mathematical equations that collectively cover all possible team learning scenarios are discussed. Finally, a typical teamwork example is used to simplify the discussions. Although each of the methods discussed has its own merits for a particular application scenario, recent methods are relatively better in terms of a number of evaluation criteria.
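One widely used family of the analytical methods surveyed in this abstract scales the team mark by a normalised peer-assessment factor. The sketch below is a minimal illustration of that idea, with invented member names and scores (and a cap, since a factor above 1 could otherwise push a mark past full marks); it is not the specific formulation evaluated in the paper:

```python
def individual_marks(team_mark, peer_scores, cap=100.0):
    """Split a single team mark using normalised peer-assessment scores.

    Each member's weighting factor is their peer score divided by the
    team's mean peer score; the individual mark is team_mark * factor,
    capped at `cap`.

    peer_scores: {member: total peer-assessment score}
    """
    mean_score = sum(peer_scores.values()) / len(peer_scores)
    return {member: min(cap, team_mark * score / mean_score)
            for member, score in peer_scores.items()}

# Hypothetical team of three sharing a team mark of 70:
# the strongest contributor is lifted above 70, the weakest falls below it
marks = individual_marks(70, {"Ana": 50, "Ben": 40, "Cal": 30})
```

Methods of this kind are "process assessment" in the paper's terms: the peer scores measure teamwork contribution, while the shared team mark measures the product.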
Article
This paper presents the response of the technology teacher education programmes at the University of Limerick to the assessment challenge created by the shift in philosophy of the Irish national curriculum from a craft-based focus to design-driven education. This study observes two first year modules of the undergraduate programmes that focused on the development of subject knowledge and practical craft skills. Broadening the educational experience and perspective of students to include design based aptitudes demanded a clear aligning of educational approaches with learning outcomes. As design is a complex iterative learning process it requires a dynamic assessment tool to facilitate and capture the process. Considering the critical role of assessment in the learning process, the study explored the relevance of individual student-defined assessment criteria and the validity of holistic professional judgement in assessing capability within a design activity. The kernel of the paper centres on the capacity of assessment criteria to change in response to how students align their work with evidence of capability. The approach also supported peer assessment, where student-generated performance ranks provided an insight into not only how effectively they evidenced capability but also to what extent their peers valued it. The study investigated the performance of 137 undergraduate teachers during an activity focusing on the development of design, processing and craft skills. The study validates the use of adaptive comparative judgement as a model of assessment by identifying a moderate to strong relationship with performance scores obtained by two different methods of assessment. The findings also present evidence of capability beyond the traditional measures. Level of engagement, diversity, and problem solving were also identified as significant results of the approach taken. 
The strength of this paper centres on the capacity of student-defined criterion assessment to evidence learning, and concludes by presenting a valid and reliable holistic assessment supported by comparative judgements.
Article
Teaching engineering design through senior project or capstone engineering courses has increased in recent years. The trend toward increasing the design component in engineering curricula is part of an effort to better prepare graduates for engineering practice. This paper describes the standard practices and current state of capstone design education throughout the country as revealed through a literature search of over 100 papers relating to engineering design courses. Major topics include the development of capstone design courses, course descriptions, project information, details of industrial involvement, and special aspects of team-oriented design projects. An extensive list of references is provided.
Article
In this paper we report the results of an in-depth study of engineering student approaches to an open-ended design problem. To do this, verbal protocols were collected from 26 freshman (first year) and 24 senior (fourth year) engineering students as they designed a playground for a fictitious neighborhood. We analysed these protocols to document and compare the student design processes. The results show that the seniors produced higher quality designs. In addition, compared to the freshmen, the seniors gathered more information, considered more alternative solutions, transitioned more frequently between design steps and progressed further into the final steps of the design process.
Bartholomew, S. R., N. Mentzer, and M. D. Jones. 2019. "Learning by Evaluating." Paper presented at the 106th Mississippi Valley Technology Teacher Education Conference, Nashville, Tennessee.
Bartholomew, S. R., and G. J. Strimel. 2017. "Adaptive Comparative Judgment: A Better Way of Assessment." Techniques 92 (3): 44-49.
Investigating the Reliability of Adaptive Comparative Judgment
  • T Bramley
Bramley, T. 2015. "Investigating the Reliability of Adaptive Comparative Judgment." Cambridge Assessment 36.
Collaborative Design Decision-making as Social Process
  • C Campbell
  • W M Roth
  • A Jornet
Campbell, C., W. M. Roth, and A. Jornet. 2018. "Collaborative Design Decision-making as Social Process." European Journal of Engineering Education. doi:10.1080/03043797.2018.1465028.
What is Engineering's Place in STEM Certification?
  • N Fortenberry
Fortenberry, N. 2018. "What is Engineering's Place in STEM Certification?" ASEE Responds. National Science Teachers Association Blog.
Letter to Next Generation Science Standards Writing Committee
  • M H Hosni
Hosni, M. H. 2013, January 29. Letter to Next Generation Science Standards Writing Committee. https://www.asme.org/getmedia/25be935b-0b66-466e-acf1-e212b534a386/PS1301_ASME_Board_on_Education_Comments_on_the_Second_Public_Draft_of_the_Next_Generation_Science_Standards.aspx
Design Sketching: A Lost Skill
  • T R Kelley
Kelley, T. R. 2017. "Design Sketching: A Lost Skill." Technology & Engineering Teacher 76 (8): 8-13.
On 'Reliability' bias in ACJ
  • A Pollitt
Pollitt, A. 2015. On 'Reliability' bias in ACJ. Cambridge Exam Research. https://www.researchgate.net/publication/283318012_On_'Reliability'_bias_in_ACJ.
Are the Engineering Design Processes Used in Engineering and Technology Education Classrooms an Accurate Reflection of the Practices Used in Industry and Other Technical Fields?
  • E M Reeve
Reeve, E. M. 2016. "Are the Engineering Design Processes Used in Engineering and Technology Education Classrooms an Accurate Reflection of the Practices Used in Industry and Other Technical Fields?" Paper presented at the 103rd Annual Mississippi Valley Technology Teacher Education Conference, Rosemont, IL.
Adaptive Comparative Judgment as an Alternative to the Delphi Method for Establishing a Concept Inventory for Graphics
  • N Seery
  • T Delahunty
  • S Sorby
  • M Sadowski
Seery, N., T. Delahunty, S. Sorby, and M. Sadowski. 2018. "Adaptive Comparative Judgment as an Alternative to the Delphi Method for Establishing a Concept Inventory for Graphics." Paper presented at the American Society for Engineering Education Design Graphics Division Annual Conference, Kingston, Jamaica.