Excellent French teaching is teaching that is effective, in other words Effectief in Frans. To achieve that goal we need strong French teachers, because as research teaches us, it is the teacher who makes the difference. However, research also tells us that the quality of French lessons and the level of French in primary school can be improved. This book offers practical tools to raise the level of your French lessons using instructional strategies that are proven to be effective, whether or not supported by educational technology.
The authors of this book look past technological hypes, which history has shown have at best failed to live up to their predictions. To give EdTech an ambitious and sustainable place in education, they advocate an evidence-informed approach. Just as in Wijze lessen. Twaalf bouwstenen voor effectieve didactiek, the authors start from key insights from scientific research on technology and education. For each of the twelve building blocks they describe points of attention and valuable opportunities offered by EdTech. They translate robust scientific research into concrete tools for classroom practice. As a teacher, you are at the centre of all the opportunities that EdTech offers. It is the didactics that determine the tool, not the other way around.
Excellent French teaching is teaching that is effective, in other words Effectief in Frans. To achieve that goal we need strong French teachers, because as research teaches us, it is the teacher who makes the difference. However, research also tells us that the quality of French lessons and the level of French in primary school can be improved. Do you, as a (future) teacher, want to raise the level of your French lessons using instructional strategies that are proven to be effective? Do you want to explore how technology can support this? Yes, yes, yes?! Then you have the right book in your hands. In this publication you will find scientific background on key effective instructional strategies, illustrated with very concrete practical examples for French lessons in primary school. You will also get an overview of technological tools that you can use, where meaningful, to support the effective instructional strategies.
Tests have been widely used for the assessment of learning in educational contexts. Recently, however, a growing body of research has shown that the practice of remembering previously studied information (i.e., retrieval practice) is more advantageous for long-term retention than restudying that same information; a phenomenon often termed the “testing effect.” The question remains, however, whether such practice can be useful to improve learning in actual educational contexts, and whether in these contexts specific types of tests are particularly beneficial. We addressed these issues by reviewing studies that investigated the use of retrieval practice as a learning strategy in actual educational contexts. The studies reviewed here adopted tests ranging from free recall to multiple choice, and involved participants ranging from elementary school children to medical school students. In general, their results are favorable to the use of retrieval practice in classroom settings, regardless of whether feedback is provided or not. Importantly, however, the majority of the reviewed studies compared retrieval practice to repeated study or to “no activity.” The results of the studies comparing retrieval practice to alternative control conditions were less conclusive, and a subset of them found no advantage for tests. These findings raise the question whether retrieval practice is more beneficial than alternative learning strategies, especially learning strategies and activities already adopted in classroom settings (e.g., concept mapping). Thus, even though retrieval practice emerges as a promising strategy to improve learning in classroom environments, there is not enough evidence available at this moment to determine whether it is as beneficial as alternative learning activities frequently adopted in classroom settings.
The development of students’ higher order learning is a critical component of education. For decades, educators and scientists have engaged in an ongoing debate about whether higher order learning can only be enhanced by building a base of factual knowledge (analogous to Bloom’s taxonomy) or whether higher order learning can be enhanced directly by engaging in complex questioning and materials. The relationship between fact learning and higher order learning is often speculated, but empirically unknown. In this study, middle school students and college students engaged in retrieval practice with fact questions, higher order questions, or a mix of question types to examine the optimal type of retrieval practice for enhancing higher order learning. In laboratory and K-12 settings, retrieval practice consistently increased delayed test performance, compared with rereading or no quizzes. Critically, higher order and mixed quizzes improved higher order test performance, but fact quizzes did not. Contrary to popular intuition about higher order learning and Bloom’s taxonomy, building a foundation of knowledge via fact-based retrieval practice may be less potent than engaging in higher order retrieval practice, a key finding for future research and classroom application.
Testing (having students recall material) and worked examples (having students study a completed problem) are both recommended as effective methods for improving learning. The two strategies rely on different underlying cognitive processes and thus may strengthen different types of learning in different ways. Across three experiments, we examine the efficacy of retrieval practice and worked examples for different learning goals and identify the factors that determine when each strategy is more effective. The optimal learning strategy depends on both the kind of knowledge being learned (stable facts vs. flexible procedures) and the learning processes involved (schema induction vs. memory and fluency building). When students’ goal was to remember the text of a worked example, repeated testing was more effective than repeated studying after a 1-week delay. However, when students’ goal was to learn a novel math procedure, the optimal learning strategy depended on the retention interval and nature of the materials. When long-term retention was not crucial (i.e., on an immediate test), repeated studying was more optimal than repeated testing, regardless of the nature of materials. When long-term retention was crucial (i.e., on a 1-week delayed test), repeated testing was as effective as repeated studying with nonidentical learning problems (that may enhance schema induction), but more effective than repeated studying with identical learning problems (that may enhance fluency building). Testing and worked examples are both effective ways to learn flexible procedures, but they do so through different mechanisms.
With the rise of large-scale academic assessment programs around the world, there is a need to better understand the factors predicting students’ achievement in these assessment exercises. This investigation into national numeracy assessment drew on ecological and transactional conceptualizing involving student, student/home, and school factors. Student factors comprised mathematics ability, gender, and year group. Student/home factors comprised mathematics tutoring, mathematics competition participation, computer support for mathematics, and practice mathematics tests. School factors included school-average mathematics ability, school-average practice mathematics tests and competition participation, and socioeducational status. These educational ecology factors were modeled as predictors of mathematics motivation. In turn, educational ecology factors and mathematics motivation were modeled as predictors of numeracy achievement. Data were drawn from N = 12,736 Australian elementary (Years 3 and 5) and secondary (Years 7 and 9) school students from 231 schools participating in a national numeracy assessment exercise. Multilevel structural equation modeling revealed that student and student/home factors (Level 1) and school factors (Level 2) explained significant variance in student- and school-level mathematics motivation. In turn, these factors explained significant variance in student- and school-level numeracy achievement. Findings hold implications for the nature, breadth, and depth of efforts aimed at improving mathematics motivation and numeracy achievement in large-scale assessment programs.
In-course assessment, such as midterms, quizzes or presentations, is often an integral part of higher education courses. These so-called intermediate assessments influence students’ final grades. The current review investigates which characteristics of intermediate assessment relate to these grades. In total, 88 articles were reviewed that examined the relationship between intermediate assessment and student grades. Four main characteristics were identified: the use of feedback, whether the assessment is mandatory, who is the assessor, and the reward students get for participating. Results indicate that corrective feedback leads to the most positive results, but elaborate feedback may benefit lower achieving groups. No difference in results was found for mandatory versus voluntary intermediate assessments. Peer assessment seemed to be beneficial, and rewarding students with course credit improves grades more than other rewards. Three scenarios are presented on how teachers can combine the different characteristics to optimise their intermediate assessment.
The science of learning has made a considerable contribution to our understanding of effective teaching and learning strategies. However, few instructors outside of the field are privy to this research. In this tutorial review, we focus on six specific cognitive strategies that have received robust support from decades of research: spaced practice, interleaving, retrieval practice, elaboration, concrete examples, and dual coding. We describe the basic research behind each strategy and relevant applied research, present examples of existing and suggested implementation, and make recommendations for further research that would broaden the reach of these strategies.
Despite widespread assertions that enthusiasm is an important quality of effective teaching, empirical research on the effect of enthusiasm on learning and memory is mixed and largely inconclusive. To help resolve these inconsistencies, we conducted a carefully-controlled laboratory experiment, investigating whether enthusiastic instructions for a memory task would improve recall accuracy. Scripted videos, either enthusiastic or neutral, were used to manipulate the delivery of task instructions. We also manipulated the sequence of learning items, replicating the spacing effect, a known cognitive technique for memory improvement. Although spaced study reliably improved test performance, we found no reliable effect of enthusiasm on memory performance across two experiments. We did, however, find that enthusiastic instructions caused participants to respond to more item prompts, leaving fewer test questions blank, an outcome typically associated with increased task motivation. We find no support for the popular claim that enthusiastic instruction will improve learning, although it may still improve engagement. This dissociation between motivation and learning is discussed, as well as its implications for education and future research on student learning.
Repeated retrieval practice is a powerful learning tool for promoting long-term retention, but students use this tool ineffectively when regulating their learning. The current experiments evaluated the efficacy of a minimal intervention aimed at improving students’ self-regulated use of repeated retrieval practice. Across 2 experiments, students made decisions about when to study, engage in retrieval practice, or stop learning a set of foreign language word pairs. Some students received direct instruction about how to use repeated retrieval practice. These instructions emphasized the mnemonic benefits of retrieval practice over a less effective strategy (restudying) and told students how to use repeated retrieval practice to maximize their performance—specifically, that they should recall a translation correctly 3 times during learning. This minimal intervention promoted more effective self-regulated use of retrieval practice and better retention of the translations compared to a control group that received no instruction. Students who experienced this intervention also showed potential for long-term changes in self-regulated learning: They spontaneously used repeated retrieval practice 1 week later to learn new materials. These results provide a promising first step for developing guidelines for teaching students how to regulate their learning more effectively using repeated retrieval practice.
Educational Impact and Implications Statement
Can we find inexpensive and easily adaptable modifications to teaching methods that positively impact student outcomes? These studies provide a positive answer to that question. The work is based on laboratory findings that frequent tests and frequent attempts to recall the same material (1) aid learning and memory, and (2) help students apply what they’ve learned to new problems. The present studies took place in large-enrollment college classes across four semesters. Within each semester two sections of an undergraduate course were taught in a highly similar fashion, primarily differing in the number of tests given and whether items that appeared on an earlier test were repeated on the final exam. In addition, some of the repeated items were repeated with identical wording, while other ‘repeated’ items tested the same concepts but with different wording. We found evidence that frequent testing and repetition of tested items can improve course performance by up to about 10%, though the results varied across the studies, so further work is needed to clarify why. We also observed that under some circumstances students did as well or even better on re-worded test items as they did when the item was repeated in exactly the same words.
The episodic context account of retrieval-based learning proposes that retrieval enhances subsequent retention because people must think back to and reinstate a prior learning context. Three experiments directly tested this central assumption of the context account. Subjects studied word lists and then either restudied the words under intentional learning conditions or made list discrimination judgments by indicating which list each word had occurred in originally. Subjects in both conditions experienced all items for the same amount of time, but subjects in the list discrimination condition were required to retrieve details about the original episodic context in which the words had occurred. Making initial list discrimination judgments consistently enhanced subsequent free recall relative to restudying the words. Analyses of recall organization and retrieval strategies on the final test showed that retrieval practice enhanced temporal organization during final recall. Semantic encoding tasks also enhanced retention relative to restudying but did so by promoting semantic organization and semantically based retrieval strategies during final recall. The results support the episodic context account of retrieval-based learning.
Understanding and optimizing spacing during learning is a central topic for research in learning and memory and has substantial implications for real-world learning. Spacing memory retrievals across time improves memory relative to massed practice: the well-known spacing effect. Most spacing research has utilized fixed (predetermined) spacing intervals. Some findings indicate advantages of expanding over equal spacing (e.g., Landauer & Bjork, 1978); however, evidence is mixed (e.g., Karpicke & Roediger, 2007), and the field has lacked an integrated explanation. Learning may instead depend on interactions of spacing with an underlying variable of learning strength that varies for learners and items, and it may be better optimized by adaptive adjustments of spacing to learners' ongoing performance. Two studies investigated an adaptive spacing algorithm, Adaptive Response-Time-based Sequencing or ARTS (Mettler, Massey & Kellman, 2011), that uses response time and accuracy to generate spacing. Experiment 1 compared adaptive scheduling with fixed schedules having either expanding or equal spacing. Experiment 2 compared adaptive schedules to two fixed "yoked" schedules that were copied from adaptive participants, equating average spacing across conditions. In both experiments, adaptive scheduling outperformed fixed conditions at immediate and delayed tests of retention. No evidence was found for differences between expanding and equal spacing. Yoked conditions showed that learning gains were due to adaptation to individual items and learners. Adaptive spacing based on ongoing assessments of learning strength yields greater learning gains than fixed schedules, a finding that helps to understand the spacing effect theoretically and has direct applications for enhancing learning in many domains.
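The abstract describes ARTS only at a high level (spacing driven by ongoing response time and accuracy). The sketch below is a minimal illustration of an adaptive scheduler in that spirit; the priority rule, the weighting constant, and all names are assumptions for illustration, not the published ARTS algorithm.

```python
class AdaptiveScheduler:
    """Toy adaptive spacing scheduler in the spirit of ARTS (illustrative only):
    items answered slowly or incorrectly come back sooner; items answered
    quickly and correctly are pushed further out. The priority rule and the
    weight below are arbitrary assumptions, not the published algorithm."""

    def __init__(self, items):
        # per-item record: trials elapsed since last presentation, last accuracy, last RT
        self.records = {item: {"age": None, "correct": None, "rt": None} for item in items}

    def update(self, item, correct, response_time):
        """Log the outcome of one retrieval attempt and age all other items."""
        self.records[item].update(age=0, correct=correct, rt=response_time)
        for other, rec in self.records.items():
            if other != item and rec["age"] is not None:
                rec["age"] += 1

    def next_item(self):
        """Return the item currently most in need of practice."""
        def priority(rec):
            if rec["age"] is None:                 # never practised yet: highest priority
                return float("inf")
            strength = 0.0 if not rec["correct"] else 1.0 / max(rec["rt"], 0.1)
            return rec["age"] - 5.0 * strength     # fast, correct answers lower priority
        return max(self.records, key=lambda it: priority(self.records[it]))
```

In this toy version an error or a slow response keeps an item near the front of the queue, while fast correct responses progressively widen its spacing, which is the qualitative behaviour the study attributes to adaptive scheduling.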
A robust finding within laboratory research is that structuring information as a test confers benefit on long-term retention-referred to as the testing effect. Although well characterized in laboratory environments, the testing effect has been explored infrequently within ecologically valid contexts. We conducted a series of 3 experiments within a very large introductory college-level course. Experiment 1 examined the impact of required versus optional frequent low-stakes testing (quizzes) on student grades, revealing students were much more likely to take advantage of quizzing if it was a required course component. Experiment 2 implemented a method of evaluating pedagogical intervention within a single course (thereby controlling for instructor bias and student self-selection), which revealed a testing effect. Experiment 3 ruled out additional exposure to information as an explanation for the findings of Experiment 2 and suggested that students at the college level, enrolled in very large sections, accept frequent quizzing well.
Matching phonemes (speech sounds) to graphemes (letters and letter combinations) is an important aspect of decoding (translating print to speech) and encoding (translating speech to print). Yet, many teacher candidates do not receive explicit training in phoneme-grapheme correspondence. Difficulty with accurate phoneme production and/or lack of understanding of sound-symbol correspondence can make it challenging for teachers to (a) identify student errors on common assessments and (b) serve as a model for students when teaching beginning reading or providing remedial reading instruction. For students with dyslexia, lack of teacher proficiency in this area is particularly problematic. This study examined differences between two learning conditions (massed and distributed practice) on teacher candidates’ development of phoneme-grapheme correspondence knowledge and skills. An experimental, pretest-posttest-delayed test design was employed with teacher candidates (n = 52) to compare a massed practice condition (one, 60-min session) to a distributed practice condition (four, 15-min sessions distributed over 4 weeks) for learning phonemes associated with letters and letter combinations. Participants in the distributed practice condition significantly outperformed participants in the massed practice condition on their ability to correctly produce phonemes associated with different letters and letter combinations. Implications for teacher preparation are discussed.
Testing in school is usually done for purposes of assessment, to assign students grades (from tests in classrooms) or rank them in terms of abilities (in standardized tests). Yet tests can serve other purposes in educational settings that greatly improve performance; this chapter reviews 10 other benefits of testing. Retrieval practice occurring during tests can greatly enhance retention of the retrieved information (relative to no testing or even to restudying). Furthermore, besides its durability, such repeated retrieval produces knowledge that can be retrieved flexibly and transferred to other situations. On open-ended assessments (such as essay tests), retrieval practice required by tests can help students organize information and form a coherent knowledge base. Retrieval of some information on a test can also lead to easier retrieval of related information, at least on delayed tests. Besides these direct effects of testing, there are also indirect effects that are quite positive. If students are quizzed frequently, they tend to study more and with more regularity. Quizzes also permit students to discover gaps in their knowledge and focus study efforts on difficult material; furthermore, when students study after taking a test, they learn more from the study episode than if they had not taken the test. Quizzing also enables better metacognitive monitoring for both students and teachers because it provides feedback as to how well learning is progressing. Greater learning would occur in educational settings if students used self-testing as a study strategy and were quizzed more frequently in class.
Concern that students in the United States are less proficient in mathematics, science, and reading than their peers in other countries has led some to question whether American students spend enough time in school. Instead of debating the amount of time that should be spent in school (and on schoolwork), this article addresses how the available instructional time might be optimally utilized via the scheduling of review or practice. Hundreds of studies in cognitive and educational psychology have demonstrated that spacing out repeated encounters with the material over time produces superior long-term learning, compared with repetitions that are massed together. Also, incorporating tests into spaced practice amplifies the benefits. Spaced review or practice enhances diverse forms of learning, including memory, problem solving, and generalization to new situations. Spaced practice is a feasible and cost-effective way to improve the effectiveness and efficiency of learning, and has tremendous potential to improve educational outcomes. The article also discusses barriers to adopting spaced practice, recent developments, and their possible implications.
Generative learning involves actively making sense of to-be-learned information by mentally reorganizing and integrating it with one’s prior knowledge, thereby enabling learners to apply what they have learned to new situations. In this article, we present eight learning strategies intended to promote generative learning: summarizing, mapping, drawing, imagining, self-testing, self-explaining, teaching, and enacting. First, we provide an overview of generative learning theory, grounded in Wittrock’s (1974) generative model of comprehension and reflected in more recent frameworks of active learning, such as Mayer’s (2014) select-organize-integrate (SOI) framework. Next, for each of the eight generative learning strategies, we provide a description, review exemplary research studies, discuss potential boundary conditions, and provide practical recommendations for implementation. Finally, we discuss the implications of generative learning for the science of learning, and we suggest directions for further research.
As an attempt to follow through on the claims made by proponents of intentional vocabulary learning, the present study set out to examine whether and how digital flashcards can be incorporated into a university course to promote the vocabulary learning of English language learners. The overall research findings underscore the value of learning vocabulary with digital flashcards as an alternative to more conventional resources, and draw attention to the relative merits of embedding digital flashcards in collaborative learning tasks in classroom settings. This article then concludes by considering practical implications for supporting intentional vocabulary learning with the use of digital flashcards.
Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.
Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions.
The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
The testing effect is a finding from cognitive psychology with relevance for education. It shows that after an initial study period, taking a practice test improves long-term retention compared to not taking a test and—more interestingly—compared to restudying the learning material. Boundary conditions of the effect that have received attention include the test format, retrieval success on the initial test, the retention interval, or the spacing of tests. Another potential boundary condition concerns the complexity of learning materials, that is, the number of interacting information elements a learning task contains. This insight is not new, as research from a century ago already had indicated that the testing effect decreases as the complexity of learning materials increases, but that finding seems to have been nearly forgotten. Studies presented in this special issue suggest that the effect may even disappear when the complexity of learning material is very high. Since many learning tasks in schools are high in element interactivity, a failure to find the effect under these conditions is relevant for education. Therefore, this special issue hopes to put this potential boundary condition back on the radar and provide a starting point for discussion and future research on this topic.
Van Gog and Sweller (2015) claim that there is no testing effect—no benefit of practicing retrieval—for complex materials. We show that this claim is incorrect on several grounds. First, Van Gog and Sweller’s idea of “element interactivity” is not defined in a quantitative, measurable way. As a consequence, the idea is applied inconsistently in their literature review. Second, none of the experiments on retrieval practice with worked-example materials manipulated element interactivity. Third, Van Gog and Sweller’s literature review omitted several studies that have shown retrieval practice effects with complex materials, including studies that directly manipulated the complexity of the materials. Fourth, the experiments that did not show retrieval practice effects, which were emphasized by Van Gog and Sweller, either involved retrieval of isolated words in individual sentences or required immediate, massed retrieval practice. The experiments failed to observe retrieval practice effects because of the retrieval tasks, not because of the complexity of the materials. Finally, even though the worked-example experiments emphasized by Van Gog and Sweller have methodological problems, they do not show strong evidence favoring the null. Instead, the data provide evidence that there is indeed a small positive effect of retrieval practice with worked examples. Retrieval practice remains an effective way to improve meaningful learning of complex materials.
The target articles in the special issue address a timely and important question concerning whether practice tests enhance learning of complex materials. The consensus conclusion from these articles is that the testing effect does not obtain for complex materials. In this commentary, I discuss why this conclusion is not warranted either by the outcomes reported in the target articles or by the available evidence from prior research. Importantly, the weight of the available evidence does not alter the prescription for teachers and students to use practice testing to enhance learning of complex materials. However, the special issue highlights the need for more empirical and theoretical work on test-enhanced learning for complex materials, to further examine when and why these effects may be limited and to inform efforts to optimize test-enhanced learning for educationally relevant materials and tasks.
Evidence for the superiority of guided instruction is explained in the context of our knowledge of human cognitive architecture, expert–novice differences, and cognitive load. Although unguided or minimally guided instructional approaches are very popular and intuitively appealing, the point is made that these approaches ignore both the structures that constitute human cognitive architecture and evidence from empirical studies over the past half-century that consistently indicate that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guidance of the student learning process. The advantage of guidance begins to recede only when learners have sufficiently high prior knowledge to provide “internal” guidance. Recent developments in instructional research and instructional design models that support guidance during instruction are briefly described.
A major decision that must be made during study pertains to the distribution, or the scheduling, of study. In this paper, we review the literature on the benefits of spacing, or spreading one's study sessions relatively far apart in time, as compared to massing, where study is crammed into one long session without breaks. The results from laboratory research provide strong evidence for this pervasive “spacing effect,” especially for long-term retention. The metacognitive literature on spacing, however, suggests that massing is the preferred strategy, particularly in young children. Reasons for why this is so are discussed as well as a few recommendations regarding how spacing strategies might be encouraged in real-world learning. While further research and applicability questions remain, the two fields—education and cognitive science—have made huge progress in recent years, resulting in promising new learning developments.
Students' self-reported study skills and beliefs are often inconsistent with empirically supported (ES) study strategies. However, little is known regarding instructors' beliefs about study skills and if such beliefs differ from those of students. In the current study, we surveyed college students' and instructors' knowledge of study strategies and had both groups evaluate the efficacy of learning strategies described in six learning scenarios. Results from the survey indicated that students frequently reported engaging in methods of studying that were not optimal for learning. Instructors' responses to the survey indicated that they endorsed a number of effective study skills but also held several beliefs inconsistent with research in learning and memory (e.g., learning styles). Further, results from the learning scenarios measure indicated that instructors were moderately more likely than students to endorse ES learning strategies. Collectively, these data suggest that instructors exhibited better knowledge of effective study skills than students, although the difference was small. We discuss several notable findings and argue for the improvement of both students' and instructors' study skill knowledge.
Background: Spaced-repetition and test-enhanced learning are two methodologies that boost knowledge retention. ALERT STUDENT is a platform that allows creation and distribution of Learning Objects named flashcards, and provides insight into student judgments-of-learning through a metric called "recall accuracy". This study aims to understand how the spaced-repetition and test-enhanced learning features provided by the platform affect recall accuracy, and to characterize the effect that students, flashcards and repetitions exert on this measurement. Methods: Three spaced laboratory sessions (s0, s1 and s2) were conducted with n = 96 medical students. The intervention employed a study task and a quiz task that consisted of mentally answering open-ended questions about each flashcard and grading recall accuracy. Students were randomized into study-quiz and quiz groups. On s0 both groups performed the quiz task. On s1 and s2, the study-quiz group performed the study task followed by the quiz task, whereas the quiz group only performed the quiz task. We measured differences in recall accuracy between groups/sessions, its variance components, and the G-coefficients for the flashcard component. Results: At s0 there were no differences in recall accuracy between groups. The experiment group achieved a significant increase in recall accuracy that was superior to the quiz group in s1 and s2. In the study-quiz group, increases in recall accuracy were mainly due to the session, followed by flashcard factors and student factors. In the quiz group, increases in recall accuracy were mainly accounted for by flashcard factors, followed by student and session factors. The flashcard G-coefficient indicated an agreement on recall accuracy of 91% in the quiz group, and of 47% in the study-quiz group. Conclusions: Recall accuracy is an easily collectible measurement that increases the educational value of Learning Objects and open-ended questions. This metric seems to vary in a way consistent with knowledge retention, but further investigation is necessary to ascertain the nature of such a relationship. Recall accuracy has educational implications for students and educators, and may contribute to delivering tailored learning experiences, assessing the effectiveness of instruction, and facilitating research comparing blended-learning interventions.
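To make the recall-accuracy measurement concrete, here is a minimal sketch of a flashcard with self-graded recall judgments collected after an open-ended quiz prompt. The class, field names, and the 0-100 grading scale are hypothetical illustrations, not the ALERT STUDENT data model.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Flashcard:
    """Minimal flashcard Learning Object with self-graded recall accuracy.
    Names and the 0-100 grading scale are illustrative assumptions."""
    question: str
    answer: str
    recall_grades: list = field(default_factory=list)  # one self-grade per quiz session

    def quiz(self, self_grade: float) -> None:
        """Student mentally answers the open-ended question, then grades how much
        of the stored answer they could recall (0 = nothing, 100 = everything)."""
        self.recall_grades.append(self_grade)

    def recall_accuracy(self) -> float:
        """Mean self-graded recall accuracy across the quiz sessions so far."""
        return mean(self.recall_grades) if self.recall_grades else 0.0

# Example: one card quizzed at three spaced sessions (s0, s1, s2)
card = Flashcard("What triggers insulin release?", "A rise in blood glucose")
for grade in (20, 60, 85):       # hypothetical judgments of learning
    card.quiz(grade)
print(card.recall_accuracy())    # 55.0
```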
Problem statement: Evaluation, an important step in educational settings, is usually understood as a process to measure what students know or what they have learned. A variety of methods can be used for assessment, and tests are one of the most important and widely used. While being tested, one may learn or retrieve previously learned information via mental processes that work on memory. This phenomenon is called the "testing effect." Despite some disadvantages, tests can also be used as learning materials. Here, we present our study on the testing effect in a classroom setting. Purpose of study: The purpose of this study was to investigate whether the testing effect occurs in a classroom setting when using a test consisting of multiple-choice and matching questions and a worksheet that summarizes the topic, and also to examine the effects of feedback and time. Methods: In this study, the testing effect was investigated in a college chemistry course, and 98 pre-service science teachers participated. A pretest-posttest control-group research design was followed to investigate the testing effect. A pretest with 100 short-answer questions was administered, and students were grouped according to their scores on that test. Seven groups (six experimental and one control) were constituted with the requirement that each group had the same average score on the pretest. An intervening test was applied to four groups (two of them received feedback immediately after the test), a worksheet that summarizes the topic was studied by two groups, and one group (the control group) had no additional activity. The same pretest was applied as a posttest to determine final retention. Three groups received this posttest a day later, and the other three experimental groups and the control group received it a week later. Final retention of previously learned information and the effects of testing, receiving feedback and restudying were investigated. Findings and Results: The results of this study showed that exposing students to supporting practices has a positive effect on retention of previously learned information regardless of the type of practice. Specifically, tests, which educational professionals frequently use to assess their students' learning, should be used to support teaching and learning processes instead of just to determine the level of learning. Conclusions and Recommendations: The results have important implications for classroom practice. That is, since much research supports the claim that testing has an important effect on students' retention of previously learned information, it should therefore be used to improve classroom practices and support teaching and learning processes.
The spacing effect refers to the frequently observed finding that distributing learning across time leads to better retention than massing it into one single study session. In the present study, we examined whether the spacing effect generalises to primary school vocabulary learning. To this aim, children from Grade 3 were taught the meaning of 15 new words using a massed procedure and 15 other new words using a spaced procedure. The 15 words in the massed condition were divided into three sets of five words, and each set was taught three times in one of three learning sessions. In the spaced condition, learning was distributed across the three sessions: all 15 words were practised once in each of the three learning sessions. At the retention tests after 1 week and after 5 weeks, we observed that the meaning of spaced words was remembered better than the meaning of massed words.
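As a concrete reading of the two teaching schedules, the sketch below lays out which words are practised in which session under the massed and spaced conditions; the word labels are placeholders, not the study's materials.

```python
# Placeholder word lists; the study used 15 real vocabulary items per condition.
massed_words = [f"massed_{i}" for i in range(1, 16)]
spaced_words = [f"spaced_{i}" for i in range(1, 16)]

sessions = {1: [], 2: [], 3: []}

# Massed condition: one set of five words per session, each set repeated three
# times within that single session (set 1 -> session 1, set 2 -> session 2, ...).
for session, start in zip(sessions, (0, 5, 10)):
    sessions[session].extend(massed_words[start:start + 5] * 3)

# Spaced condition: every word is practised once in each of the three sessions,
# so a word's three repetitions are spread across sessions instead of packed into one.
for session in sessions:
    sessions[session].extend(spaced_words)

for session, items in sessions.items():
    print(session, len(items))   # 30 items per session: 15 massed + 15 spaced repetitions
```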
In the current study, I examined the impact of periodic pop-quizzes on cumulative final-exam scores. Specifically, I compared the impact of using no quizzes, graded quizzes, and ungraded quizzes on final exam scores of introductory psychology students. Quizzed students also completed a survey with questions probing how the students felt about the inclusion of quizzes in the course. Students taking ungraded pop-quizzes outperformed students taking graded pop-quizzes and students taking no quizzes on the final exam. Students taking ungraded pop-quizzes felt positive about having quizzes in their classes. The current findings have implications for research on the mitigating impact of anxiety on test-enhanced learning (Tse and Pu, 2012; Hinze and Rapp, in press) and on pedagogical strategy selection for educators.
Marginal knowledge refers to knowledge that is stored in memory, but is not accessible at a given moment. For example, one might struggle to remember who wrote The Call of the Wild, even if that knowledge is stored in memory. Knowing how best to stabilize access to marginal knowledge is important, given that new learning often requires accessing and building on prior knowledge. While even a single opportunity to restudy marginal knowledge boosts its later accessibility (Berger, Hall, & Bahrick, 1999), in many situations explicit relearning opportunities are not available. Our question is whether multiple-choice tests (which by definition expose the learner to the correct answers) can also serve this function and, if so, how testing compares to restudying given that tests can be particularly powerful learning devices (Roediger & Karpicke, 2006). In four experiments, we found that multiple-choice testing had the power to stabilize access to marginal knowledge, and to do so for at least up to a week. Importantly, such tests did not need to be paired with feedback, although testing was no more powerful than studying. Overall, the results support the idea that one's knowledge base is unstable, with individual pieces of information coming in and out of reach. The present findings have implications for a key educational challenge: ensuring that students have continuing access to information they have learned.
This study examined whether practice testing with short-answer (SA) items benefits learning over time compared to practice testing with multiple-choice (MC) items and to rereading the material. More specifically, the aim was to test the retrieval effort and transfer-appropriate processing hypotheses by comparing retention tests with respect to practice testing format. To adequately compare SA and MC items, the MC items were corrected for random guessing. With a within-group design, 54 students (mean age = 16 years) first read a short text and then took four practice tests containing all three formats (SA, MC and statements to read), with feedback provided after each part. The results showed that both MC and SA formats improved short- and long-term memory compared to rereading. More importantly, practice testing with SA items was more beneficial for learning and long-term retention, providing support for the retrieval effort hypothesis. The use of corrections for guessing and educational implications are discussed.
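The abstract states that MC scores were "corrected for random guessing" without giving the formula; a standard correction for k-alternative items is shown below, though whether this exact variant was used in the study is an assumption.

```latex
% Correction-for-guessing score for k-alternative multiple-choice items:
% R = number correct, W = number wrong (omitted items are not penalised).
S_{\mathrm{corrected}} = R - \frac{W}{k - 1}
```

Under this rule a student who answers purely by guessing expects a corrected score of zero, which puts the MC format on a more comparable footing with the SA format.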
Though retrieving information typically results in improved memory on a subsequent test (the testing effect), Peterson and Mulligan (2013) outlined the conditions under which retrieval practice results in poorer recall relative to restudy, a phenomenon dubbed the negative testing effect. The item-specific-relational account proposes that this occurs when retrieval disrupts interitem relational encoding despite enhancing item-specific information. Four experiments examined the negative testing effect, showing the following: (a) The basic phenomenon is replicable in free recall; (b) it extends to category-cued recall; (c) it converts to a positive testing effect when the final test is recognition, a test heavily reliant on item-specific information; (d) the negative testing effect in recall, robust in a pure list design, reverses to a positive testing effect in a mixed-list design; and (e) more generally, the present testing manipulation interacts with experimental design, such that an initially negative effect becomes positive or an initially positive effect becomes larger as the design changes from pure-list to mixed-list. The breadth of results fits well within the item-specific-relational framework and provides evidence against 2 alternative accounts. Finally, this research indicates that the testing effect shares important similarities with the generation effect and other similar memory phenomena.
Engaging in a test over previously studied information can serve as a potent learning event, a phenomenon referred to as the testing effect. Despite a surge of research in the past decade, existing theories have not yet provided a cohesive account of testing phenomena. The present study uses meta-analysis to examine the effects of testing versus restudy on retention. Key results indicate support for the role of effortful processing as a contributor to the testing effect, with initial recall tests yielding larger testing benefits than recognition tests. Limited support was found for existing theoretical accounts attributing the testing effect to enhanced semantic elaboration, indicating that consideration of alternative mechanisms is warranted in explaining testing effects. Future theoretical accounts of the testing effect may benefit from consideration of episodic and contextually derived contributions to retention resulting from memory retrieval. Additionally, the bifurcation model of the testing effect is considered as a viable framework from which to characterize the patterns of results present across the literature.
Answering multiple-choice questions improves access to otherwise difficult-to-retrieve knowledge tested by those questions. Here, I examine whether multiple-choice questions can also improve accessibility to related knowledge that is not explicitly tested. In two experiments, participants first answered challenging general knowledge (trivia) multiple-choice questions containing competitive incorrect alternatives and then took a final cued-recall test with those previously tested questions and new related questions for which a previously incorrect answer was the correct answer. In Experiment 1, participants correctly answered related questions more often and faster when they had taken a multiple-choice test than when they had not. In Experiment 2, I showed that the more accurate and faster responses were not simply a result of previous exposure to those alternatives. These findings have practical implications for potential benefits of multiple-choice testing and implications for the processes that occur when individuals answer multiple-choice questions.
Presently, the most common approach to examining the testing effect is using a free recall form of retrieval practice. In this experiment, we compared free recall to other retrieval-based study strategies including practice quizzing, test-generation, and keyword. We also examined the possible benefit of coupling these retrieval-based strategies with free recall. A total of 338 undergraduates were randomly assigned to one of the nine conditions: a repeated retrieval (study-test) learning condition paired with one of the four retrieval-based strategies, a single retrieval (study-study) learning condition paired with a retrieval-based strategy, or a rehearsal (study-study-study) condition. Following a 7-day delay, students completed a test assessing retention of information learned. There was a significant interaction between learning condition (repeated vs. single retrieval practice) and type of retrieval-based strategy. Free recall and practice quizzing were the most effective types of retrieval practice, and coupling test-generation and practice quizzing with free recall led to significant benefits in performance.
This chapter provides a comprehensive review of the past decade of research on retrieval-based learning. It describes common approaches used to study retrieval practice and outlines theoretical accounts of retrieval-based learning. This chapter reviews research that has manipulated initial retrieval practice activities in a wide variety of ways, extended the benefits of retrieval practice to final assessments that measure educationally meaningful learning outcomes, and generalized retrieval-based learning across learner populations, to different types of materials, and to authentic educational contexts.
This essay reviews research on retrieval-based learning, which refers to the general finding that practicing active retrieval enhances long-term, meaningful learning. The idea that retrieval promotes learning has existed for centuries, and the first experiments demonstrating retrieval practice effects were carried out near the beginning of experimental research on learning and memory. Interest in retrieval practice was sporadic during the past century, but the topic has received intense interest in recent years as part of a broader movement to integrate research from cognitive science with educational practice. The essay provides a selective review of foundational research and contemporary work that has been aimed at deepening our theoretical knowledge about retrieval practice and integrating retrieval-based learning within educational activities and settings.
Retrieval practice improves memory for many kinds of materials, and numerous factors moderate the benefits of retrieval practice, including the amount of successful retrieval practice (referred to as the learning criterion). In general, the benefits of retrieval practice are greater with more than with less successful retrieval practice; however, learning items to a higher (vs. lower) criterion requires more time and effort. If students plan on relearning material in a subsequent study session, does the benefit of learning to a higher criterion during an initial session persist? In Session 1, participants studied and successfully recalled Swahili-English word pairs one, two, three, four, five, six, or seven times. In subsequent sessions, all of the pairs were relearned to a criterion of one correct recall at one-week intervals across four or five successive relearning sessions. Experiments 1 and 2 revealed that the substantial benefits of learning to a higher initial criterion during the first session do not persist across relearning sessions. This relearning-override effect was also demonstrated in Experiment 2 after a one-month retention interval. The implications of relearning-override effects are important for theory and for education. For theories of test-enhanced learning, they support the predictions of one theory and appear inconsistent with the predictions of another. For education, if relearning is to occur, using extra time to learn to a higher initial learning criterion is not efficient. Instead, students should devote their time to subsequent spaced relearning sessions, which produce substantial gains in recall performance.
Although many researchers acknowledge that Assessment for Learning can significantly enhance student learning, the factors facilitating or hindering its implementation in daily classroom practice are unclear. A systematic literature review was conducted to reveal prerequisites needed for Assessment for Learning implementation. Results identified prerequisites regarding the teacher, student, assessment and context. For example, teachers must be able to interpret assessment information on the spot, student engagement in the assessment process is vital, assessment should include substantial, constructive and focussed feedback, and the school should have a school-wide culture that facilitates collaboration and encourages teacher autonomy. The results of this review contribute to a better understanding of the multiple facets that need to be considered when implementing Assessment for Learning, from both a theoretical and a practical standpoint.
This study examines the effect of recognition-based retrieval practice on vocabulary learning in a university Chinese class. Students (N=26) were given practice retrieving new vocabulary (single or two-character words) in a series of simple form recognition tests administered over four weeks. The test sets consisted of target vocabulary that appeared in the previous week's lesson and distracter items drawn from upcoming vocabulary. Tests were group-administered via PowerPoint and students used a checklist response to indicate whether a given item had appeared in the previous week's material. Responses relied on episodic knowledge of previous exposure and required no processing of semantic information. Students were able to reliably identify the target items in the retrieval task, with performance on these items found to be superior to that for supplementary list control words on midterm and final vocabulary tests. The findings indicate that a focus on word forms can have a measurable effect on vocabulary learning in the classroom and underscore the efficacy of retrieval-based testing (the testing effect; Barcroft, 2007; Roediger & Karpicke, 2006) in facilitating vocabulary learning. The implications for recognition-based retrieval practice in vocabulary instruction in the Chinese classroom are discussed.
Audience Response Systems (ARS) are thought to be a good way of using technology to increase engagement in the classroom and have been widely adopted by many instructors seeking to improve academic performance through student engagement. While researchers have examined the degree to which they promote cognitive and non-cognitive learning outcomes in the classroom, most of their findings are largely mixed and inconclusive. This meta-analysis seeks to resolve the conflicting findings. Specifically, the meta-analysis compared classrooms that did, and did not use ARS-based technologies on different cognitive and non-cognitive learning outcomes to examine the potential effects of using ARS. Overall, we found small but significant effects of using ARS-based technologies on a number of desirable cognitive and non-cognitive learning outcomes. Further analysis revealed that knowledge domain, class size, and the use of clicker questions, are among factors that significantly moderated the summary effect sizes observed among the studies in the meta-analysis. These findings hold significant implication for the implementation of clicker-based technologies in the classroom.
Retrieval enhances long-term retention. However, reactivation of a memory also renders it susceptible to modifications as shown by studies on memory reconsolidation. The present study explored whether retrieval diminishes or enhances subsequent retroactive interference (RI) and intrusions. Participants learned a list of objects. Two days later, they were either asked to recall the objects, given a subtle reminder, or were not reminded of the first learning session. Then, participants learned a second list of objects or performed a distractor task. After another two days, retention of List 1 was tested. Although retrieval enhanced List 1 memory, learning a second list impaired memory in all conditions. This shows that testing did not protect memory from RI. While a subtle reminder before List 2 learning caused List 2 items to later intrude into List 1 recall, very few such intrusions were observed in the testing and the no reminder conditions. The findings are discussed in reference to the reconsolidation account and the testing effect literature, and implications for educational practice are outlined.
Education ideally should induce learning that lasts for years and more. A wealth of research indicates that, to achieve long-lasting retention, information must be practiced and/or tested repeatedly, with repeated practice well distributed over time. In this paper we discuss the behavioral, neuroimaging and neurophysiological findings related to the effect of distributed practice and testing as well as the resulting theoretical accounts. Distributed practice and testing appear to be powerful learning tools. We consider implications of these learning principles for educational practice.
Regular assessment is a vital part of effective teaching and learning. For this, the "one-minute paper" has been popular among faculty. While many teachers who have used it find huge benefits from it, there are also several weaknesses of this tool that are commonly reported by its users. This paper suggests the "daily quiz" as a better tool for assessing and promoting students' learning. The tool provides a better incentive setup that elicits a more sustained and serious response effort from the students, as well as a sharper focus in assessing cognitive learning. Furthermore, research results in the literature on the effects of frequent testing and the notion of "effortful retrieval" support the author's experience with the daily quiz. End-of-term surveys of students' opinions about the usefulness of the daily quiz also confirm the tremendous benefits of this tool. This paper compares the major characteristics of the "one-minute paper" versus the daily quiz, based on a survey of the literature as well as the author's experiences with these tools. This paper will be of interest to those who employ (or are planning to employ) frequent assessment in their classes. It gives them an analysis of the costs and benefits of two common assessment tools.