
Learning by evaluating (LbE) through adaptive comparative judgment

International Journal of Technology and Design Education (2022) 32:1191–1205
https://doi.org/10.1007/s10798-020-09639-1
Learning byevaluating (LbE) throughadaptive comparative
judgment
ScottR.Bartholomew1 · NathanMentzer2· MatthewJones3· DerekSherman4·
SwetaBaniya5
Accepted: 11 November 2020 / Published online: 21 November 2020
© Springer Nature B.V. 2020
Abstract
Traditional efforts around improving assessment often center on the teacher as the evaluator of work rather than the students. These assessment efforts typically focus on measuring learning rather than stimulating, promoting, or producing learning in students. This paper summarizes a study of a large sample of undergraduate students (n = 550) in an entry-level design-thinking course who engaged with Adaptive Comparative Judgment (ACJ), a form of assessment, as a learning mechanism. Following random assignment into control and treatment sections, students engaged in identical activities with the exception of a 20-minute intervention we call learning by evaluating (LbE). Prior to engaging in a Point Of View (POV) creation activity, treatment group students engaged in LbE by viewing pairs of previously-collected POV statements through ACJ; in each case they viewed two POV statements side-by-side and selected the POV statement they believed was better. Following this experience, students created their own POV statements, and the final POV statements from both the control and treatment students were collected and evaluated by instructors using ACJ. In addition, qualitative data consisting of student comments, collected during ACJ comparisons, were coded by the researchers to further explore the potential for the students to use class knowledge while engaging in the LbE review of peer work. Both the quantitative and qualitative data sets were analyzed to investigate the impact of the LbE activity. Consistent with other ACJ research findings, significant positive learning gains were found for students who engaged in the intervention. Researchers also noted that these findings did not indicate the actual quality of the assignments, meaning that while students who engaged in the LbE intervention performed better than their peers, they were not necessarily “good” at the assignment themselves. Discussion of these findings and areas for further inquiry are presented.
Keywords Adaptive comparative judgment · Assessment · Design-thinking · Learning by evaluating · Peer assessment
* Scott R. Bartholomew
scottbartholomew@byu.edu
... tional psychology and methodology. This conceptual replication study seeks to investigate the influence of engaging students in evaluating previous student work, as a primer to working on their project, on individual student work, whereas previously this was investigated in the context of small teams. Learning by evaluating (LbE) is a process coined by Bartholomew et al. (2020) that involves using comparative judgement with student exemplars within the human-centered design process to help students "design better." ...
... Crafting effective interview questions is a skill that many students struggle with, often due to their limited experience and understanding of qualitative research methods (Doody & Noonan, 2013; Kvale & Brinkmann, 2009). LbE has demonstrated its effectiveness in helping students understand what "good" looks like when dealing with abstract design thinking concepts such as PoV statements (Bartholomew et al., 2020), incorporating geometric principles into design work (Seery et al., 2018), electronic portfolios (Seery et al., 2012), and engineering design in first year engineering courses (Strimel et al., 2021). By providing structured examples for comparison, LbE can help students overcome challenges in formulating insightful and open-ended interview questions, which are crucial for gathering rich qualitative data to inform the design process. ...
... design in first year engineering courses (Strimel et al., 2021). By providing structured examples for comparison, LbE can help students overcome challenges in formulating insightful and open-ended interview questions, which are crucial for gathering rich qualitative data to inform the design process. By replicating and expanding on the insights of Bartholomew et al. (2020), this research aims to contribute to the ongoing discourse on the role of LbE in shaping students' readiness for interviews to inform the design thinking process. ...
Article
Full-text available
This conceptual replication study, building upon Bartholomew (2020), addresses a notable gap in the literature by investigating the potential of using learning by evaluating (LbE) as an interview primer for individual assignments in design coursework. While peer feedback commonly involves both giving and receiving feedback, LbE uniquely focuses on the benefits of giving feedback to engage students actively in learning, reflection, and critical thinking. In this study, the LbE process is utilized to foster student insights that may be transferred to their own work in the preparation and conducting of qualitative research interviews. Conducted as a quasi-experimental study in an entry-level design thinking course with a large sample of undergraduate students (n = 325), this research explores specific ways that exposure to LbE as an evaluative process enhances students' abilities to perform qualitative research interviews, even without explicit teacher feedback. Findings, consistent with previous studies on LbE, indicate that students exposed to the intervention prepared a higher proportion of open-ended questions in their interview guides, demonstrating improved preparation for each interview. However, the transfer of preparation to performance revealed that certain skills such as asking probing questions and prompting for information, beyond the scope of the preparation of an interview guide, may have contributed to the observation that interview lengths did not significantly improve. Methodologically, this study employed random sampling, and comparisons of treatment and control group interview guides and interview durations were conducted through independent-samples t-tests. These findings suggest that LbE is an effective pedagogical strategy, specifically for individualized student work, expanding upon findings of previous studies. Educators and researchers may find this exploration of peer feedback through LbE on design thinking and qualitative research interviews to be of particular value as they seek to optimize the impact of peer feedback in enhancing student learning experiences.
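As a concrete illustration of the comparison method this abstract describes, here is a minimal sketch of an independent-samples (Welch's) t-test in Python. The outcome measure, group sizes, and score distributions are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch: comparing treatment (LbE) and control groups with an
# independent-samples t-test, as described in the abstract above. The scores
# and group sizes here are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative outcome measure (e.g., proportion of open-ended questions
# in each student's interview guide)
control = rng.normal(loc=0.55, scale=0.15, size=160)
treatment = rng.normal(loc=0.65, scale=0.15, size=165)

# Welch's t-test: does not assume equal variances across sections
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```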
... As prior LbE research points out, students benefit from this practice in four distinct ways: 1) exposure to new ideas, 2) critical comparison and evaluation of pairings, 3) providing and receiving feedback, and 4) an increased understanding of assignment criteria. LbE has been shown to help students who are engaged in resolving open-ended design challenges to define not only what a 'good' solution looks like, but also to encourage them to embrace challenges and learn from the mistakes of others to foster motivation toward project goals (Bartholomew, Mentzer, Jones, et al., 2022). ...
... Past studies on LbE have focused on the efficacy of comparisons as a treatment (Bartholomew, Mentzer, Jones, et al., 2022; Mentzer et al., 2021), the quality of items used for pairwise comparison (S. Bartholomew et al., 2021), and its adaptability across contexts (S. ...
... LbE in practice builds on the use of CJ or ACJ in educational settings (S. Bartholomew, Mentzer, Jones, et al., 2022). Specifically, whereas research around ACJ and CJ has primarily focused on educational evaluation and assessment by teachers at the conclusion of a project, LbE positions the use of ACJ and CJ by students as an intentional primer for learning near the beginning of a project. ...
Article
Full-text available
Learning by Evaluating (LbE) is an instructional approach that involves students making comparative judgements of pairs of artifacts, such as student work, portfolios, prototypes, or curated images related to a topic of instruction to enhance critical thinking and decision-making skills. Situated as a primer for learning, the efficacy of LbE stems from actively engaging students in the evaluation process, scaffolding their learning, fostering self-reflection in decision-making, and facilitating the transfer of acquired skills and concepts to academic contexts and project work. However, there is an opportunity to gain deeper insights into classroom integration of LbE and the factors that may influence the student experience. This study adopts a design-based research approach to analyze LbE within a secondary STEM education setting, with the objective of optimizing classroom integration. By analyzing student comments generated during LbE, the research explores factors shaping the students’ learning experiences, examining the extent to which students engage in informed decision-making, offer justifications, and express sentiments throughout the process. Additionally, the study explores how teachers strategically incorporate LbE into their classroom, aligning LbE sessions with curriculum objectives. Findings indicate a diverse pattern of student engagement, sentiments, and decision-making approaches across STEM classrooms. This study contributes to research on LbE by offering insights into the dynamics between teacher implementation and student engagement. The insights gained highlight the potential for refining the effectiveness of LbE within the classroom. Notably, the research emphasizes the significance of how LbE sessions are framed and strategically integrated to enrich the overall educational experience for both students and educators.
... In general, assessment practices have improved over time (e.g., greater access to technology for facilitating assessment) (Robertson et al., 2019), but relatively little has changed in terms of students' participation in assessment processes: for the most part, in a linear sequence, students submit work, teachers evaluate this work and assign grades, and then the teacher moves to a new topic. This formulaic approach often results in assessment signaling the end of student learning rather than serving as a key step in the learning process (Bartholomew et al., 2020). Recent work with evaluation, a key element of assessment traditionally engaged in by the teacher, has demonstrated the potential of these evaluation activities to play a much larger role in students' learning. ...
... Students have specifically called out benefits of this approach such as its ability to help them gain confidence (Canty, 2012) and improve their own work. This process has been applied in a variety of fields and has been shown to have positive effects in a myriad of courses such as undergraduate design courses, English, Engineering, and Business (Bartholomew & Jones, 2020). ...
... With various LbE (and associated ACJ) research demonstrating the potential for enhancing student learning, questions around the potential to modify or enhance LbE have arisen (Bartholomew et al., 2020; Buckley, Kimbell, & Seery, 2022). Specifically, findings related to increased student achievement following LbE have been accompanied by questions into how the examples evaluated by the students may influence their subsequent learning and performance; in particular, Bartholomew & Yauney (2022) questioned the potential for improving LbE by intentionally varying the quality of work presented to students during LbE. ...
Article
Full-text available
Classroom research has demonstrated the capacity for significantly influencing student learning by engaging students in evaluation of previously submitted work as an intentional priming exercise for learning; we call this experience Learning by Evaluating (LbE). Expanding on current LbE research, we set forth to investigate the impact on student learning by intentionally differing the quality of examples evaluated by the students using adaptive comparative judgement. In this research, university design students (N = 468 students) were randomly assigned to one of three treatment groups; while each group evaluated previously collected student work as an LbE priming activity, the work evaluated by each group differed in quality. Using a three-group experimental design, one group of students only evaluated high quality examples, the second only evaluated low quality examples, and the third group of students evaluated a set of mixed-quality examples of the assignment they were about to work on. Following these LbE priming evaluations, students completed the assigned work and then their projects were evaluated to determine if there was a difference between student performance by treatment condition. Additional qualitative analysis was completed on student LbE rationales to explore similarities and differences in student cognitive judgments based on intervention grouping. No significant difference was found between the groups in terms of achievement, but several differences in group judgement approach were identified and future areas needing investigation were highlighted.
... Along with the written justification for the decisions, studies found that ACJ can be implemented as a meaningful assessment and feedback tool, with which students can improve learning and achievement (Bartholomew & Jones, 2022; Bartholomew et al., 2019). In this case, our research team has named the priming process Learning by Evaluating (LbE), which stimulates and promotes meaningful learning for students (Bartholomew, 2021; Bartholomew & Yauney, 2022; Bartholomew et al., 2020). Beginning student efforts through LbE builds foundational understanding and has been shown to produce meaningful learning for students (Bartholomew, 2021; Bartholomew & Yauney, 2022; Bartholomew et al., 2020). ...
... Several recent studies have incorporated adaptive comparative judgment (ACJ) into STEM education settings, especially in project-based design thinking processes (Bartholomew et al., 2018a, 2018b, 2019, 2020; Dewit et al., 2021; Strimel et al., 2021). Bartholomew et al. (2018a, 2018b) examined ACJ to evaluate middle school students' learning through an open-ended problem assigned in a technology and engineering education course. ...
Article
Full-text available
Adaptive comparative judgment (ACJ) has been widely used to evaluate classroom artifacts with reliability and validity. In the ACJ experience we examined, students were provided a pair of images related to backpack design. For each pair, students were required to select which image could help them ideate better. Then, they were prompted to provide a justification for their decision. Data were from 15 high school students taking engineering design courses. The current study investigated how students’ reasoning differed based on selection. Researchers analyzed the comments in two ways: (1) computer-aided quantitative content analysis and (2) qualitative content analysis. In the first analysis, we performed sentiment analysis and word frequency analysis using natural language processing. Based on the findings, we explored how the design thinking process was embedded in student reasoning, and if the reasoning varied depending on the claim. Results from sentiment analysis showed that students tend to reveal more strong positive sentiment with short comments when providing reasoning for the selected design. In contrast, when providing reasoning for those items not chosen, results showed a weaker negative sentiment with more detailed reasons. Findings from word frequency analysis showed that students valued the function of design as well as the user perspective, specifically, convenience. Additionally, students took aesthetic features of each design into consideration when identifying the best of the two pairs. Within the engineering design thinking context, we found students empathize by identifying themselves as users, define user’s needs, and ideate products from provided examples.
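For readers curious what the two comment analyses might look like in practice, below is a minimal sketch combining sentiment scoring and word-frequency counting. VADER is one common NLP sentiment tool, but the abstract does not name the library the researchers used, so treat the tooling and the sample comments as assumptions.

```python
# Minimal sketch of the two analyses described above: (1) sentiment scoring
# and (2) word-frequency counting on student justification comments.
# VADER is an assumed tool choice; the paper does not specify its library.
from collections import Counter
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

comments = [
    "This backpack is really convenient and works well for the user.",
    "The straps look uncomfortable and the pockets seem too small to be useful.",
]

sia = SentimentIntensityAnalyzer()
for comment in comments:
    score = sia.polarity_scores(comment)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{score:+.3f}  {comment}")

# Simple word-frequency count across all comments
words = Counter(w.lower().strip(".,") for c in comments for w in c.split())
print(words.most_common(5))
```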
... How this is described will relate to the purpose of the assessment. For example, Whitehouse and Pollitt (2012) asked assessors to make comparisons based on "evidence of a higher level of development" when using ACJ for summative assessment in a geography task, while Bartholomew et al. (2022) used ACJ formatively in the middle of a design activity, asking which piece of work was more "strong" in fulfilling specified criteria. By changing the assessment question from the assignment of a score against a criterion to a binary judgement, the process becomes much more reliable by capitalizing on Thurstone's (1927) Law of Comparative Judgment. ...
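To make the statistical footing of that last point explicit: under Thurstone's (1927) model each piece of work has a latent quality parameter, and a binary judgment is a noisy comparison of two such parameters. A sketch of the standard formulation (Case V), together with the logistic Bradley-Terry variant commonly used by comparative-judgement engines, follows; the notation v_A, v_B is ours, not the source's.

```latex
% Thurstone Case V: probability that work A is judged better than work B,
% given latent quality parameters v_A and v_B (unit-variance discriminal dispersions)
P(A \succ B) = \Phi\!\left(\frac{v_A - v_B}{\sqrt{2}}\right)

% Logistic (Bradley-Terry) variant commonly used in comparative-judgement engines
P(A \succ B) = \frac{1}{1 + e^{-(v_A - v_B)}}
```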
Article
Full-text available
The use of adaptive comparative judgement (ACJ) for assessment in technology education has been topical since its introduction to the field through the escape project coordinated by the Technology Education Research Unit in the United Kingdom. In the last decade, however, there has been an increasing volume of research examining how ACJ can and should be used in the technology classroom. This research has grown in volume to the point where there are now systematic reviews being conducted on the topic. There is a limitation in the use of ACJ within the field in that there does not exist an open-source tool to facilitate its widespread interrogation. Existing proprietary solutions exist and offer exceptional functionality and user experience, but they cannot be easily responsive to needs within the technology education community because they serve a much wider audience, and they cannot be easily used to experiment on algorithm optimization as to do so would be costly. In response to this need, an ACJ shinyapp has been developed. It is presented in this article from a technical perspective with a view that this description can afford needed transparency in the use of ACJ and that having such a tool now permits more systematic investigation into the impactful pedagogical usage of ACJ.
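Since this abstract discusses experimenting with algorithm optimization, a stripped-down sketch of the core loop such a tool implements may help: fit Bradley-Terry quality estimates to the judgments collected so far, then adaptively choose the next pair with the closest current estimates. This mirrors the general ACJ approach described in the literature; it is not the shiny app's actual algorithm, and all names and data here are illustrative.

```python
# Stripped-down sketch of an ACJ engine's core loop: fit Bradley-Terry quality
# estimates from judgments so far, then adaptively pick the closest-matched
# unjudged pair. Illustrative only; not the shiny app's actual algorithm.
import math
from itertools import combinations

def fit_bradley_terry(items, judgments, lr=0.1, epochs=200):
    """Estimate a quality parameter per item; judgments is a list of (winner, loser)."""
    v = {item: 0.0 for item in items}
    for _ in range(epochs):
        for winner, loser in judgments:
            p_win = 1.0 / (1.0 + math.exp(-(v[winner] - v[loser])))
            # Gradient ascent on the log-likelihood of this comparison
            v[winner] += lr * (1.0 - p_win)
            v[loser] -= lr * (1.0 - p_win)
    return v

def next_pair(v, judged):
    """Adaptive step: the unjudged pair with the smallest estimated quality gap."""
    unjudged = [p for p in combinations(v, 2) if frozenset(p) not in judged]
    return min(unjudged, key=lambda p: abs(v[p[0]] - v[p[1]]), default=None)

items = ["A", "B", "C", "D"]
judgments = [("A", "B"), ("C", "D"), ("A", "C")]
estimates = fit_bradley_terry(items, judgments)
print(next_pair(estimates, {frozenset(j) for j in judgments}))
```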
... They become adept at navigating and addressing complex design problems through this immersive educational experience. To augment the effectiveness of this model, researchers have integrated a novel assessment approach known as Learning by Evaluating (LbE) [8], [9], [10], [11], which derives its methodology from Adaptive Comparative Judgment (ACJ). ...
... Instead, our project is developing, refining, and testing a protocol in which students evaluate prior work to prime them for learning while designing, through what we call Learning by Evaluating (LbE) [1], [2]. This approach introduces two important changes to the currently practiced paradigm: 1) actively engaging students-in addition to the teacher-in the critique and evaluation process; and 2) performing this evaluation of example work prior to embarking on a design task, as opposed to review at the end. ...
... Through academic exchanges and interactions, students can broaden their horizons and deepen their knowledge, and focus on measuring learning rather than stimulating, promoting, or producing learning in students [27]. ...
... According to Bandura's social learning theory, learners can gain new skills, strategies and perspectives by observing others. In addition, evaluating the designs of their peers as well as observing them also contributes to their learning (Groenendijk et al. 2013) because playing an active role in the assessment process helps students to gain clear information about the evaluation criteria, thus helping them to perform their tasks better (Bartholomew et al. 2022). Therefore, increasing students' knowledge about evaluating designs has a key role in improving their creative skills (Lai & Hwang 2015). ...
Article
This study aimed to investigate the views of students enrolled in a desktop publishing course on the flipped classroom model adapted to a design course conducted in an online learning environment. The model was implemented over one semester, and at the end, semi-structured interviews were conducted with 65 volunteer students. Content analysis was used to analyse the students' views. It was determined that delivering course content through instructor-created videos had a positive effect on student views of the course. In addition, the students stated that doing assignments outside the classroom and evaluating them during the course contributed significantly to their learning of design. Finally, student views on the feasibility of conducting the course through traditional design teaching methods in an online learning environment were examined. The students stated that delivering the course in live online classes may have both positive and negative aspects.
Article
This study evaluates a new introductory equality, diversity, and inclusion activity deployed in a first-year undergraduate engineering programme. It investigates the overall effectiveness of the activity at improving student understanding of equality, diversity, and inclusion (EDI) topics and how the use of adaptive comparative judgement (ACJ) contributed to this understanding. It also explores student perceptions of learning about EDI and participating in ACJ sessions. After completing the activity, students participated in an online survey and a single focus group. Survey results were analysed quantitatively whilst focus group results were analysed qualitatively using reflexive thematic analysis (RTA). Although the activity was somewhat effective at introducing EDI topics, and ACJ had a broadly positive impact, more development work is needed; student engagement with the activity and with their peers needs to be improved, and the way ACJ is deployed and used needs to be optimized. This article was published open access under a CC BY licence: https://creativecommons.org/licences/by/4.0 .
Chapter
Full-text available
This paper concerns the process by which learners come to understand the meaning of quality in design & technology. This sense of quality is central to learners' development of autonomous technological capability, and equally to teachers' ability to make good judgements about the progress of their students. Underpinning it all is assessment, which is, after all, just about deciding what counts as 'good' and what as 'not good' or 'less good'. This is problematic because all sorts of different things might be good or, equally, bad. So it is a complex matter to be able to say that this piece of work is good, and that one also is good, but for quite different reasons. Or that this one (which contains X) is good, whilst that one (which also contains X) is poor. For students to develop this complex sense of 'good' it is they who must do the assessments, so that they can evolve their view about quality and endlessly share it, and refine it, within their community of practice. The approach is not criterion-based assessment, nor norm-based assessment, but is better described as construct-based assessment, in which learners hold a multi-dimensional construct (an explanatory variable that is not directly observable but that nevertheless is real and useful) of what counts as quality and that enables them to discriminate between one performance and another. This is especially important in performance disciplines where the fact that 'the whole is more than the sum of the parts' renders reductionist methods at best inefficient and at worst invalid.
Article
Full-text available
Adaptive Comparative Judgment (ACJ) is an assessment method that facilitates holistic, flexible judgments of student work in place of more quantitative or rubric-based methods. This method “requires little training, and has proved very popular with assessors and teachers in several subjects, and in several countries” (Pollitt 2012, p. 281). This research explores ACJ as a holistic, flexible, interdisciplinary assessment and research tool in the context of an integrated program that combines Design, English Composition, and Communications courses. All technology students at Polytechnic Institute at Purdue University are required to take each of these three core courses. Considering the interdisciplinary nature of the program’s curriculum, this research first explored whether three judges from differing backgrounds could reach an acceptable level of reliability in assessment using only ACJ, without the prerequisites of similar disciplinary backgrounds or significant assessment experience, and without extensive negotiation or other calibration efforts. After establishing acceptable reliability among interdisciplinary judges, analysis was also conducted to investigate differences in student learning between integrated (i.e., interdisciplinary) and non-integrated learning environments. These results suggest evaluators from various backgrounds can establish acceptable levels of reliability using ACJ as an alternative assessment tool to more traditional measures of student learning. This research also suggests technology students in the integrated/interdisciplinary environment may have demonstrated higher learning gains than their peers and that further research should control for student differences to add confidence to these findings.
Conference Paper
Full-text available
Understanding the best practices of providing, receiving, and improving the formative feedback process in design is critical to improving student creative graphics education. Situated in a university-level computer graphics course, this research studied the impact on student performance of engaging students in adaptive comparative judgment (ACJ), as a formative learning and assessment tool, during several open-ended design problems. Students participated in ACJ, acting as judges of peer work and providing and receiving feedback to, and from, their peers. This paper will examine the relationships between using ACJ and student achievement and will specifically consider the implications of situating ACJ in the midst of an open-ended graphic design project. Further, this paper will explore the potential of using ACJ as a formative assessment and feedback tool.
Article
Full-text available
While research into the effectiveness of open-ended problems has made strides in recent years, less has been done around the assessment of these problems. The large number of potentially-correct answers makes this assessment difficult. Adaptive Comparative Judgment (ACJ), an approach based on assessors/judges working through a series of paired comparisons and selecting the better of two items, has demonstrated high levels of reliability and effectiveness with these problems. Research into using ACJ, both formative and summative, has been conducted at all grade levels within K-16 education (ages 5-18), with a myriad of findings. This paper outlines a systematic review process used to identify articles and synthesizes the findings from the included research around ACJ in K-16 education settings. The intent of this systematic review is to inform decision-makers weighing the potential for ACJ integration in educational settings with research-based findings around ACJ in K-16 educational settings. Further, this review will also uncover potential areas for future researchers to investigate further into ACJ and its implications in educational settings.
Article
Considering the challenges associated with the teaching of engineering design, and recognizing potential differences in design values of individuals from various backgrounds, this study investigated the utility of adaptive comparative judgment (ACJ) as a method for informing the teaching and practice of engineering design. The authors investigated a series of research questions to examine the use of ACJ as a formative method for influencing an engineering student’s design decision-making as well as identifying/comparing design values of different engineering education stakeholders. The study results indicate that the ACJ process enabled students to gain insights for enhancing their designs through the critique of peer-work and the receipt of feedback on their own projects. Also, the results revealed similarities/differences between the ways instructors, students, and practicing engineers judged design projects. In light of these findings, it appears ACJ can be a valuable formative assessment tool for informing the practice/processes of education related to engineering design.
Article
Comparative Judgement (CJ) aims to improve the quality of performance-based assessments by letting multiple assessors judge pairs of performances. CJ is generally associated with high levels of reliability, but there is also a large variation in reliability between assessments. This study investigates which assessment characteristics influence the level of reliability. A meta-analysis was performed on the results of 49 CJ assessments. Results show that there was an effect of the number of comparisons on the level of reliability. In addition, the probability of reaching an asymptote in the reliability, i.e., the point where large effort is needed to only slightly increase the reliability, was larger for experts and peers than for novices. For reliability levels of .70, between 10 and 14 comparisons per performance are needed. This rises to 26 to 37 comparisons for a reliability of .90.
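The reliability statistic behind such thresholds in the CJ literature is typically a Rasch-style separation reliability, often reported as the Scale Separation Reliability (SSR). A sketch under the standard definition, with our notation (the abstract itself does not spell out the formula): the observed variance of the estimated quality parameters is corrected for the average measurement error.

```latex
% Scale Separation Reliability: observed variance of the estimated quality
% parameters \hat{v}, corrected for the mean squared standard error of estimation
SSR = \frac{\sigma_{\hat{v}}^{2} - \overline{SE^{2}}}{\sigma_{\hat{v}}^{2}}
```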
Article
Improving graphics education may begin with understanding best practices for providing, receiving, and improving formative feedback. Challenges related to anonymity, efficiency, and validity in peer critique settings all contribute to a difficult-to-implement process. This research investigates university-level computer graphics students engaged in adaptive comparative judgement (ACJ), as a formative learning, assessment, and feedback tool, during several open-ended graphics design projects. A control group of students wrote feedback on papers in small group critiques while the experimental group students participated in ACJ, acting as judges of peer work and providing and receiving feedback to, and from, their peers. Relationships between the paper-based group approach, the ACJ approach, and student achievement were explored. Further, this paper discusses the potential benefits, and challenges, of using ACJ as a formative assessment and peer feedback tool, as well as student impressions of both approaches toward peer formative assessment and feedback.
Book
This book examines the history of formative assessment in the US and explores its potential for changing the landscape of teaching and learning to meet the needs of twenty-first century learners. The author uses case studies to illuminate the complexity of teaching and the externally imposed and internally constructed contextual elements that affect assessment decision-making. In this book, Box argues effectively for a renewed vision for teacher professional development that centers around the needs of students in a knowledge economy. Finally, Box offers an overview of systemic changes that are needed in order for progressive teaching and relevant learning to take place.
Book
This book goes back to the basic purpose of assessment: to show you what your students know and are able to do. The 22 activities in this book will help your students become active, engaged, responsible, and caring learners. This “how to” book is filled with activities which will enable you to keep your students active and engaged, facilitate cooperative group projects without losing control, raise academic achievement, apply multiple intelligences in your classroom, and teach your students how to think.