Article

The Value of Time Limits on Internet Quizzes


Abstract

This study evaluated 15-min time limits on 10-item multiple-choice quizzes delivered over the Internet. Students in a computer-assisted course in human development spent less time on quizzes and performed better on exams when they had time limits on their quizzes. We conclude that time limits are associated with better learning and exam performance because they reduce the opportunity to look up answers in lieu of learning the material.
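As an illustration of the mechanism the abstract describes, the sketch below shows one minimal way a server could enforce such a time limit. This is a hypothetical sketch: the names (QUIZ_TIME_LIMIT_S, quiz_start_times) and the in-memory store are ours, not the study's actual system.

```python
import time

QUIZ_TIME_LIMIT_S = 15 * 60  # the study's 15-minute limit, in seconds
quiz_start_times = {}        # hypothetical in-memory store: student_id -> start timestamp

def start_quiz(student_id: str) -> None:
    """Record a server-side start time the moment the quiz is opened."""
    quiz_start_times[student_id] = time.time()

def submission_accepted(student_id: str) -> bool:
    """Accept answers only while the time limit has not elapsed."""
    elapsed = time.time() - quiz_start_times[student_id]
    return elapsed <= QUIZ_TIME_LIMIT_S
```

Server-side timing matters here: a limit enforced only in the browser could be bypassed, defeating the purpose of discouraging look-ups.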


... However, the fact that using online course materials only approached significance and that the benefit carried over to some but not all of the tests suggests that the findings could be explained by other factors, such as inherent student interest in the material or something else altogether unrelated to the effectiveness of the online tools. Brothen and Wambach (2004) showed that taking online quizzes was actually related to poorer performance on exams if students used these quizzes to replace more effective learning techniques such as reading the chapters. They found that students used the textbook to help answer the quiz questions and stopped studying the material once the maximum was achieved on the quiz. ...
... Thus, whereas some studies have found that students using online study aids score higher on in-class exams (Bartini, 2008; Grimstad & Grabe, 2004; Van Camp & Baugh, 2014), others report that quizzing provided little to no improvement in course performance (Brothen & Wambach, 2004; Daniel & Broida, 2004). It cannot be assumed that including an online study tool will necessarily improve student learning. ...
... There are a number of reasons for this that deserve exploration. For example, one possibility is that students were using the online activities in ways that did not demand sufficiently deep processing of the information (e.g., Brothen & Wambach, 2004). Unfortunately, in real classrooms it is not feasible (or desirable) for instructors to look over their students' shoulders while they use online tools; this defeats the purpose of their use, that is, for students to practice the material on their own time in addition to processing it in class. ...
Article
Full-text available
Online quizzing tools cost both money (students) and time (faculty and students) to implement; if online quizzes boost class performance, then the extra cost can be justified. Although many studies have found that students who use online quizzes do better on tests than their classmates who do not, these studies are frequently confounded by a variety of factors. For example, some studies allow students to self-select into user and nonuser groups (e.g., Grimstad & Grabe, 2004), leaving open a question about good students being more likely to use online quizzing tools. Others have found that online quizzing produces only marginal and selective improvement in course performance (e.g., Bartini, 2008). In contrast to these more hesitant findings, other areas of the literature reveal robust evidence for a number of best practices for improving student learning (see Dunn, Saville, Baker, & Marek, 2013) such as testing and spacing effects. Our objective was to see if online study tools could produce these effects in real undergraduate classes (as opposed to within the laboratory). Undergraduate students enrolled in introductory psychology courses were provided access to publisher-provided content (Norton) and were required to use online quizzing. In the first experiment, the spacing of the online content was manipulated. In the second experiment, the requirement for completing online quizzing was manipulated within subjects. Online tools did not improve performance on in-class quizzes nor did they influence in-class exam performance. The findings of our study underscore the importance of creating online content that demonstrates improved student learning. (PsycINFO Database Record (c) 2015 APA, all rights reserved)
... Intellectify is a powerful website designed to challenge and entertain users by having them think about questions in various categories. A combination of innovative web design tools, machine learning, user-base design, and content integration is used to engage the audience while maintaining consistency [1], [2]. ...
... In the online Q&A space, the integration of machine learning adds a higher level of intelligence to Intellectify, enabling self-assessment and transforming content delivery. Through advanced algorithms, Intellectify dynamically adjusts test problems based on user performance to provide an enhanced learning experience [2]. ...
Conference Paper
Full-text available
Intellectify is a dynamic quiz website with five different categories, each with five points indicating the importance of the category. The platform uses machine learning to measure user experience and provide better insights. User journeys begin with accessing information on each topic, allowing for a wide range of learning experiences. Intellectify seamlessly integrates educational content with interactive quizzes to suit a wide audience of all ages. Combining visually appealing content creates a gamified environment that increases user engagement. Intellectify prioritizes analysis and dissemination of information, presenting itself as a versatile tool for users looking for quick and meaningful insights on topics. Focusing on accessibility and fun, the website becomes a useful resource for children and teens looking to expand their basic knowledge of social networking and chatting.
... They could be taken at any time, except during the weekly WMTP I course sessions. According to research conducted by Brothen and Wambach [24], timed electronic quizzes enhance exam performance more than untimed quizzes. In addition, timed quizzes were found to be associated with better learning because they reduce the opportunity to look up answers. ...
... For the quizzes, a duration of 10-15 minutes was set in accordance with the content, so that students were not given the opportunity to benefit from any outside source during the quiz. Similarly, the literature states that keeping quiz durations short and preventing access to course materials increases student performance [20], [24]. ...
Article
Full-text available
This research examined the effects of online quizzes on the music theory achievement of freshman music teaching students. For this purpose, the students who took the Western Music Theory and Practice I course were determined as the study group, and experimental research was conducted. A pre-assessment test was given to determine students' knowledge level of music theory, and the median value of the test was set as the cut-off point. Students scoring below the cut-off point were assigned to the experimental group, while those scoring above it formed the control group. During the semester, four online quizzes were given to the experimental group in addition to the midterm and final exams. Finally, a final test was administered to determine whether there was a significant difference between the groups. No significant difference was found between the two groups. However, the experimental group's scores were slightly higher than the control group's; thus the experimental group caught up with the success of the control group. Compared with the pre-assessment test scores, the students in the experimental group achieved a remarkable positive improvement over the process. The discussion includes recommendations about the use of online quizzes.
... Roediger and Karpicke (2006) discuss an extensive literature review about the positive effect of using tests to improve participants' retention. The testing effect (Kuo & Hirshman, 1996; Roediger & Karpicke, 2006; Rowland, 2014) is, in general, quite a robust phenomenon, detected both in laboratory settings and in field studies involving college students (see Brothen & Wambach, 2001, 2004; Collins et al., 2018; Glodowski et al., 2019; McDaniel et al., 2007; Roediger & Karpicke, 2006; Yong & Lim, 2016). For example, in arranging tests as recall of word lists, paired words (e.g., foot-shoe, or nonsense-word pairs such as GRAT-spoon), or prose (to simulate the material commonly used in college settings), many laboratory studies showed consistently better retention for participants who had the opportunity to be tested (in the form of self-recitation, elaboration of speculative questions or scenarios, multiple-choice questions, or short open answers) compared with participants who were required only to study the text. ...
... Several dimensions of feedback have been investigated to determine its impact on performance. For example, Brothen and Wambach (2004) analyzed whether or not the correct answers should be shown together with the incorrect answers. Other analyzed dimensions concerned (a) the different type of feedback content provided (Chase & Houmanfar, 2009), (b) whether the feedback should be immediately presented after each answer submission or at the end of the entire set of questions (Butler et al., 2007), (c) the frequency of feedback presentation, or (d) its localization with respect to the behavior targeted for change (just after it, or before a new opportunity to behave) (Aljadeff-Abergel et al., 2017;Sleiman et al., 2020). ...
Article
Full-text available
Modern societies need higher education systems that are strongly grounded in scientific knowledge and evidence-based teaching tools. This laboratory-based study extended Chase and Houmanfar (2009), examining the effects of basic and elaborate feedback on participants' performance, measured with two sets of tests administered after a college-level text was studied. The use of Tobii® Pro X2-60 eye trackers allowed for detection of students' orientation toward the displayed feedback. The eye-tracker results showed that participants spent more time reading their incorrect answers than their correct answers, but this lingering of their gaze on the screen did not correlate positively with performance on the final test. However, other factors, such as the feedback provision, need to be considered more closely. The experimental findings also demonstrated that the presence of feedback following the first test phase improved performance on the final test and that elaborate feedback had the largest effect when participants were naïve about the topic studied during the experimental session.
... In context of this paper, the definition of a formative assessment is limited to an assessment that aims to provide students with feedback on their knowledge state, in order to help them direct their future learning endeavors. Even though online formative assessments, also called web-based formative assessments (Henly 2003), web-based quizzes (Daniel and Broida 2004) and online quizzes (Dobson 2008), are positively perceived in higher education (Bälter et al. 2013;Marden et al. 2013) and, when available, used by a majority of students (Baartman 2008;Henly 2003;Horn and Hernick 2015), their exact outcomes vary significantly depending on their properties and context factors: these include time limits (Brothen and Wambach 2004), presentation settings (Daniel and Broida 2004), and number of allowed attempts (Marden et al. 2013). The repeated notion in the literature of a lack of studies focusing on the effects of formative assessments performed in authentic educational settings is therefore not surprising (Brame and Biel 2015;Carrillo-de-la-Peña et al. 2009;McDaniel et al. 2011;McDaniel et al. 2012), and studies that have been published on this topic do not always report unanimous results. ...
... All FAs had a 10-min time limit, as recommended by Brothen and Wambach (2004). No limits were imposed on access, and every access and submitted response was recorded. ...
Article
Full-text available
This study aimed to investigate the effects of using online formative assessments on students' learning achievements. Using a quasi-experimental study design with one control group (no formative assessments available) and two experimental groups receiving feedback in available online formative assessments (knowledge of the correct response – KCR, or elaborated feedback – EF), we investigated how feedback type, in combination with learning content complexity, affects students' learning achievements when used in vivo, in a digital signal processing university course. Data generated by the two experimental groups were additionally used to investigate differences in using online formative assessments based on feedback type. Study findings suggest online formative assessments are a very efficient educational intervention for this domain. The acquired data suggest that students quickly recognized the value of the formative assessments and that more than 90% of students used them extensively. Statistically significant improvements in learning achievements were observed in the KCR group compared to the control group (p < 0.01, Cohen's d between 0.691 and 1.080, depending on the learning content complexity), and in the KCR group compared to the EF group (p < 0.01, Cohen's d = 0.877, in the case of the most complex of the three learning contents used). No statistically significant differences were found in formative assessment usage between the two experimental groups, aside from the difference in time between consecutive formative assessment attempts, indicating students did make use of the available feedback. Reported results are significant for demonstrating the potential of online formative assessments in achieving desired learning outcomes in higher education, as well as for gaining insights into students' habits of using them.
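For readers unfamiliar with the effect sizes quoted above, Cohen's d is the standardized mean difference between two groups. A minimal sketch of the usual pooled-standard-deviation computation follows; the sample scores are invented, not the study's data.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Illustrative scores only (not the study's data):
kcr_scores = [78, 85, 90, 72, 88]
control_scores = [70, 75, 80, 68, 74]
print(cohens_d(kcr_scores, control_scores))
```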
... It is possible that students are looking up answers and not using the items to test their understanding. Brothen and Wambach (2004) determined that questions answered without a time limit resulted in study question performance that was nearly uncorrelated with examination scores and that a substantial correlation could be generated by requiring that students answer questions within 60 s. Answers requiring more than 60 s were automatically counted as incorrect. ...
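The 60-s rule in the excerpt above amounts to a simple scoring predicate; a hypothetical sketch (the helper name and signature are ours, not the authors'):

```python
def score_item(is_correct: bool, response_seconds: float, limit_seconds: float = 60.0) -> int:
    """Count an answer as correct only if it is both right and fast enough;
    slower answers are scored as incorrect, per the 60-s rule described above."""
    return 1 if is_correct and response_seconds <= limit_seconds else 0

# A correct answer given in 75 s still scores 0:
assert score_item(True, 75.0) == 0
assert score_item(True, 42.0) == 1
```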
... This study task could be given additional weight, but we are concerned with how students might respond to a task that represents a significant portion of their grades and that we cannot monitor in the online setting. As we explained in justifying the limitation imposed on question response time, researchers have found that students were likely to look up answers rather than rely on unaided understanding if given additional time (Brothen & Wambach, 2004). It appears that students are willing to subvert the intended benefits of study questions as a formative evaluation of understanding. ...
Article
Accurate identification of what a learner does not know is essential for efficient self-directed learning. The accuracy of this awareness, often described as calibration, has been operationalized in several ways. Calibration data are often collected in applied settings by having students predict a future examination score. This method is efficient but not a direct measurement of the awareness of specific strengths and weaknesses. Online technology allows a practical way to collect more specific, local data; that is, the accuracy of confidence ratings for individual assessment items. These two methods of estimating calibration, global and local, were contrasted as predictors of performance in an introductory college course. Both measures were demonstrated to be significant and unique predictors of future examination performance. Online study environments requiring certitude judgments for study questions and offering immediate opportunities for review may offer the means for improving the efficiency of self-directed learning.
... In view of these considerations, this study focuses on the performance of online activities by university students. These activities constitute a useful tool for the teacher, as they enhance students' understanding of the facts and concepts presented in class (Brothen and Wambach 2004), as well as for the students themselves, who can thereby test their comprehension of the class content. ...
... Thus, if the activities are intended to allow students to test their knowledge of the material and to become familiar with it, and not merely to help students memorize the results of the proposed tasks, a positive effect on results will be obtained (Brothen and Wambach 2001). The presentation of comments following students' responses to questions has been shown to have a positive impact on their results (Brothen and Wambach 2004; Ryan 2006). ...
Article
In blended learning, the internet acts as an instrument to complement traditional forms of instruction, in the belief that the incorporation of new information and communication technologies may lead to more efficient and effective education. This paper presents a study carried out in the University of Granada, during the first year of undergraduate courses, which considered a total of 1,128 students organized into 17 groups during the academic year 2009–2010 and focused on the students’ voluntary use of online learning activities. The results show that the students’ participation in these activities and the number of tasks completed both had a positive effect on the students’ final marks. The time employed in carrying out online tasks did not influence the results achieved but the marks obtained in such activities were a significant factor. In addition, the students’ respective background, rate of class attendance and interest in the subject were explanatory variables of the outcome.
... Moreover, the effect of quizzes on students' performance on exam questions about other class material was mixed at best, suggesting the data reflect an actual effect of completing the quizzes, rather than a general performance effect based on student ability or motivation. The success of this exploration is encouraging in light of the decline in reading compliance over time (Brothen & Wambach, 2004; Burchfield & Sappington, 2000; Clump et al., 2004; Sappington et al., 2002). Although we did not specifically examine mechanisms underlying the effectiveness of reading quizzes, there are several possible explanations. ...
... Additionally, researchers might examine in an integrative way which features of reading quizzes result in better comprehension of the material. Whereas past research has revealed multiple dimensions that influence quiz effectiveness (e.g., required vs. voluntary [Grimstad & Grabe, 2004], mastery-based vs. single-attempt [Brothen & Wambach, 2004]), researchers have explored these characteristics in isolation. The next step would be to systematically manipulate various combinations of quiz features and assess which combinations optimize student learning. ...
Article
Assigned textbook readings are a common requirement in undergraduate courses, but students often do not complete reading assignments or do not do so until immediately before an exam. This may have detrimental effects on learning and course performance. Regularly scheduled quizzes on reading material may increase completion of reading assignments and therefore course performance. This study examined the effectiveness of compulsory, mastery-based, weekly reading quizzes as a means of improving exam and course performance. Completion of reading quizzes was related to both better exam and course performance. The discussion includes recommendations for the use of quizzes in undergraduate courses.
... Quizzes were available to students from 1:00 p.m. on Friday afternoon until 11:59 p.m. Sunday night. Quizzes were not proctored, but several Blackboard® assignment settings were used to encourage retrieval of information as opposed to collaborative work or restudy (see Brothen & Wambach, 2004). Question order and multiple-choice answer order were randomized. ...
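Conceptually, the randomization settings described in the excerpt above behave like the following sketch. This is an illustration of the idea only, not Blackboard's actual implementation; the per-student seed and the question dictionary layout are assumptions.

```python
import random

def randomized_quiz(questions, student_id):
    """Shuffle question order and each question's answer options,
    seeded per student so every learner sees a different arrangement."""
    rng = random.Random(student_id)  # hypothetical per-student seed
    shuffled = rng.sample(questions, k=len(questions))
    quiz = []
    for q in shuffled:  # assumed shape: {"stem": str, "options": [str, ...]}
        options = list(q["options"])
        rng.shuffle(options)
        quiz.append({"stem": q["stem"], "options": options})
    return quiz
```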
Article
Full-text available
Background Undergraduate STEM instructors want to help students learn and retain knowledge for their future courses and careers. One promising evidence-based technique that is thought to increase long-term memory is spaced retrieval practice, or repeated testing over time. The beneficial effect of spacing has repeatedly been demonstrated in the laboratory as well as in undergraduate mathematics courses, but its generalizability across diverse STEM courses is unknown. We investigated the effect of spaced retrieval practice in nine introductory STEM courses. Retrieval practice opportunities were embedded in bi-weekly quizzes, either massed on a single quiz or spaced over multiple quizzes. Student performance on practice opportunities and a criterial test at the end of each course were examined as a function of massed or spaced practice. We also conducted a single-paper meta-analysis on criterial test scores to assess the generalizability of the effectiveness of spaced retrieval practice across introductory STEM courses. Results Significant positive effects of spacing on the criterial test were found in only two courses (Calculus I for Engineers and Chemistry for Health Professionals), although small positive effect sizes were observed in two other courses (General Chemistry and Diversity of Life). Meta-analyses revealed a significant spacing effect when all courses were included, but not when calculus was excluded. The generalizability of the spacing effect across STEM courses therefore remains unclear. Conclusions Although we could not clearly determine the generalizability of the benefits of spacing in STEM courses, our findings indicate that spaced retrieval practice could be a low-cost method of improving student performance in at least some STEM courses. More work is needed to determine when, how, and for whom spaced retrieval practice is most beneficial. The effect of spacing in classroom settings may depend on some design features such as the nature of retrieval practice activities (multiple-choice versus short answer) and/or feedback settings, as well as student actions (e.g., whether they look at feedback or study outside of practice opportunities). The evidence is promising, and further pragmatic research is encouraged.
... Online quizzes motivate students to complete reading assignments, increase participation in class discussions, and improve performance on exams [6,9,20]. The use of online quizzes outside class also gives teachers additional time for livelier, more in-depth discussions, higher-order thinking, and active learning activities in class [8,9,17]. ...
... They report that while web-based quizzes are not always as effective as in-class quizzes, time limits and randomized blocks were effective strategies. Brothen and Wambach (2004) also investigated the issue of time limits for online quizzes. Their research indicated that time limits improve performance by forcing students to prepare before the quiz, rather than looking up the answers after starting the quiz. ...
Article
Full-text available
Statistics is a difficult subject for many students, but it is also a requirement for most graduate-level business programs. Mastery of difficult subject matter is achieved through repetition. In this article, we review the process for setting up a multiattempt quiz system in an MBA-level introductory statistics course and investigate the impact it had on student behavior and learning. We analyzed a dataset of more than 3,000 quiz attempts to determine the impact repeated trials had on the time required to complete the quiz and the resulting score. We also examined the probability that students will make an additional attempt and students' bias toward round-number scores. We found that scores improved and that quiz durations decreased with repetition. We also discovered that students' preference for achieving round-number goals affects their willingness to make additional attempts.
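An analysis like the one described, aggregating scores and durations by attempt number, could be sketched as follows. The file name and column names are placeholders, not the authors' dataset.

```python
import pandas as pd

# Placeholder schema: student_id, attempt_no, score, duration_min
attempts = pd.read_csv("quiz_attempts.csv")

# With repetition, mean scores should rise and mean durations fall,
# mirroring the pattern the article reports.
summary = attempts.groupby("attempt_no")[["score", "duration_min"]].mean()
print(summary)

# Share of attempts landing exactly on round-number scores (e.g., 70, 80, 90):
round_share = (attempts["score"] % 10 == 0).mean()
print(f"round-number scores: {round_share:.1%}")
```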
... Numerous studies have been conducted on timed versus non-timed assessments [6]–[11]. Most of these research findings revealed a positive correlation between students' performance in the assessment and the amount of time they spent in online learning [12]–[14], while some found just the opposite [15]. Imposing a time limit helps students improve their performance in the assessment [16]–[18]; however, the statistical method applied to reach this conclusion has been questioned by educational psychologists [4]. A difficult calculation question that does not provide a formula adds additional stress to the student. ...
Article
Introduction The purpose of this study was to evaluate the amount of time spent per problem and the level of accuracy per problem, based on the presence or absence of a stressor. The impact on accuracy created by stress due to the lack of a formula prompt during an assessment is a major focus of this study. Methods Sixty-nine first-year pharmacy students were tested with four calculation questions (Qs) divided between two quizzes. The first quiz contained three multiple-choice questions (MCQs), Q1 to Q3, and no formulas to assist students. The second quiz contained one MCQ, Q4, and provided a formula to assist students. The degree of difficulty of Q1 to Q3 was set lower. Also, Q3 and Q4 were identical; the only difference was the inclusion of the formula to assist the student. The absence of the formula on the first quiz served as the stressor, which affected the average response time and level of accuracy. Analyses were performed to determine differences among groups of students based on their rate of accuracy and speed of response. Results The mean time to respond to the question with the formula was not significantly different from the mean time to respond to the question with no formula. While speed of response increased due to confidence in the formula provided, accuracy in response selection decreased. Conclusion The absence of cognitive stressors boosted student confidence and speed of response but reduced accuracy.
... Evidence suggests (Brothen and Wambach, 2004; Johnson and Kiviniemi, 2009), and our experience confirms, that online quizzes motivate students to complete assigned readings, increase participation in class discussion, and improve their performance in summative assessments. The adoption of developmental online quizzes as a formative learning activity has enabled us to engage students in collaborative learning activities such as group projects and classroom discussion. ...
... [14,15] When taking quizzes, students are encouraged to pay closer attention in class and to improve their understanding of the reading and presented materials. [16,17] Since 1987, Marsh has suggested that the purpose of student evaluation of teaching (SET) is to provide instructors and administration with feedback on their teaching and to inform personnel decisions. [18] Moreover, Yao and Grady emphasized that the purposes of SETs were to improve the quality of teaching and to collect information regarding instructors for use in hiring, tenure, and promotion decisions. ...
... 37-40; Holtzman 2008; McKeachie and Svinicki 2006, p. 93; Nguyen and McDaniel 2015). For example, some instructors may rely on the questions offered by the publishing company for the textbook used in their courses (e.g., Brothen and Wambach 2004; Marcell 2008). McKeachie and Svinicki (2006) and Nguyen and McDaniel (2015) advise against complete reliance on these types of questions because student performance on these questions may not be an accurate assessment of the students' understanding of the main course objectives. ...
Article
Full-text available
Several researchers have shown quizzes effectively support college student success; however, instructors can implement quizzes in multiple ways. We conducted a systematic review of the literature on quizzes in undergraduate courses using the PRISMA method (i.e., Moher et al. Public Libr Sci Med 6(7):e1000097, 2009. https://doi.org/10.1371/journal.pmed.1000097). We searched peer-reviewed journals in the ERIC database and included studies in which researchers manipulated a quiz (or some aspect of a quiz) and measured the effects on out-of-class preparation, class attendance, in-class participation, and/or performance on exams in the context of a traditional face-to-face undergraduate course. We used this body of literature to develop evidence-based recommendations for how instructors can program quizzes to improve college student behavior within their courses and promote overall student success in higher education. Limitations and areas for future research are discussed.
... For example, frequent testing was associated with greater delayed recall of presented information (Roediger & Karpicke, 2006), better retrieval of information (Carpenter & DeLosh, 2006), and higher exam scores not only for college students (Cone, 1990; Kling, McCorkle, Miller, & Reardon, 2005; Powell, 1977), but also for medical students (Larsen, Butler, & Roediger, 2013). Other research, however, shows a circumstantial benefit of frequent testing. For example, frequent testing was beneficial only in addition to a thorough review of the tested material (Brothen & Wambach, 2004). In summary, because there are mixed results in the literature about the effectiveness of frequent testing (Bell, Simone, & Whitfield, 2015), while previous studies propose that increasing exam frequency may have positive effects such as reducing SES-based achievement gaps (e.g., Pennebaker, Gosling, & Ferrell, 2013), frequent testing and SES will be important components of the current dissertation. ...
Thesis
Full-text available
Is online education an effective and viable alternative to face-to-face education? The purpose of this dissertation was to evaluate the effectiveness of online education at The University of Texas at Austin (UT-Austin). The dissertation focused on Synchronous Massive Online Courses (SMOCs) at The University of Texas at Austin since 2012. This dissertation analyzed the extent to which course effectiveness varies as a function of lecture environment, comparing SMOCs to similar face-to-face (FTF) courses. In total, 25,726 students across 53 courses at UT-Austin were included in analyses. Researchers compiled all relevant student and course data archived in university databases and merged that with course data compiled from archived course syllabi. Then, Hierarchical Linear Modeling was used to test how (a) final course grades vary as a function of lecture environment (SMOC or FTF), controlling for socioeconomic status, scholastic aptitude, and course exam frequency, (b) subsequent semester grades vary as a function of lecture environment (SMOC or FTF), controlling for socioeconomic status, scholastic aptitude, and course exam frequency, and (c) course completion rates vary as a function of lecture environment (SMOC or FTF), controlling for socioeconomic status, scholastic aptitude, and course exam frequency. The primary goal of this project was to examine the effectiveness of SMOCs in comparison to FTFs. Course effectiveness was operationally defined with three objective outcomes: final course grades, subsequent semester GPAs, and course completions. Findings show that there were no significant differences between SMOCs and FTFs on any of these objective measures. That is, SMOCs neither outperform nor underperform FTFs in final grades, subsequent semester GPAs, or course completions. Because previous studies propose that increasing exam frequency may reduce SES-based achievement gaps (e.g., Pennebaker, Gosling, & Ferrell, 2013), and there are some mixed results in the literature about the effectiveness of frequent testing (e.g., Bell, Simone, & Whitfield, 2015), a secondary goal of this dissertation focused on the interaction of SES and exam frequency in the context of course effectiveness outcomes. Exam frequency interacted with lecture environment; such that for FTFs, there was no substantial difference in final course grades by exam frequency; however, for SMOCs, students with more exams had higher final course grades than students with fewer exams. The highest final grades were earned by students in SMOCs that provided the highest exam frequencies (while accounting for control variables). Exam frequency also interacted with socioeconomic status (SES); such that for lower SES students, when exam frequencies are lower the probabilities of course completion are lower than when exam frequencies are higher; and when exam frequencies are higher, the probabilities of course completion are higher than when exam frequencies are lower. For higher SES students, the probabilities of course completion did not vary by exam frequency. Given these findings, increasing exam frequencies in course structures is recommended. Looking across a wide range of course topics and courses, and large number of students, this dissertation provides evidence that SMOCs are as effective as FTFs on objective course outcomes, both short- and long-term. This includes final course grades, subsequent semester GPAs, and course completion rates as course effectiveness measures. 
Economically, SMOCs are able to reach thousands of students while relying on fewer faculty and without the need for large classrooms. At the same time, this frees faculty to teach more, smaller upper-division courses. Although the results of the SMOC and FTF courses were generally similar, the additional payoffs of SMOCs make them a promising tool for the future of undergraduate education. If the high standard of educational course effectiveness is based on the traditional FTF course, then a comparable SMOC course meets that high standard.
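The dissertation's Hierarchical Linear Modeling could be approximated with a mixed-effects model. The sketch below uses statsmodels with made-up column names standing in for the stated variables (lecture environment, SES, aptitude, exam frequency, nested within courses); it is an illustration under those assumptions, not the dissertation's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: grade, env, ses, sat, exam_freq, course_id
df = pd.read_csv("courses.csv")

# Random intercept per course; fixed effects mirror the stated controls.
model = smf.mixedlm("grade ~ env + ses + sat + exam_freq",
                    data=df, groups=df["course_id"])
result = model.fit()
print(result.summary())
```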
... For example, many university courses are still conducted with the standard lecture delivery and tutorial system, with minimal formative assessments, such as mid-term written assignments and group projects. Online quizzes can offer avenues for students to monitor their progress and identify weaknesses in their specific knowledge [63]. This approach can also provide relevant and prompt feedback to help them direct their future study efforts [31,32]. ...
Article
Traditionally, parasitology courses have mostly been taught face-to-face on campus, but now digital technologies offer opportunities for teaching and learning. Here, we give a perspective on how new technologies might be used through student-centred teaching approaches. First, a snapshot of recent trends in the higher education is provided; then, a brief account is given of how digital technologies [e.g., massive open online courses (MOOCs), flipped classroom (FC), games, quizzes, dedicated Facebook, and digital badges] might promote parasitology teaching and learning in digital learning environments. In our opinion, some of these digital technologies might be useful for competency-based, self-regulated, learner-centred teaching and learning in an online or blended teaching environment.
... For students who have not read material ahead of time and are completing the quizzes by looking up the answers in the textbook (often linked as an e-book) as they respond, feedback could be seen as irrelevant. Brothen and Wambach (2004) have termed this the "quiz-to-learn" strategy, wherein students skip reading the textbook prior to the quiz and use the quiz itself as a way to learn the material. To try to minimize students' use of this strategy, Brothen and Wambach manipulated the amount of time students had to take online quizzes across two sections of a human development course. ...
Article
Full-text available
As online learning becomes more prevalent in higher education, faculty are likely to increasingly turn to textbook technology supplements (TTSs) as a tool to enhance student learning outside the classroom. In 3 experiments, we tested whether using TTS multiple-choice quizzes (Experiments 1 and 2) or essay responses (Experiment 2) improved student learning as measured by publisher-provided multiple choice assessments administered in class. In Experiment 1 (N = 75), using a between-subjects design, we found no significant difference in performance for either in-class quizzes, p = .88, or exam performance, p = .79, as a function of whether students were required to use a TTSs or not. In Experiment 2 (N = 173), using a within-subjects design, we found a small but significant (3.5%) benefit to student learning on content for which they completed online quizzes (p = .002, partial η2 = .06). Experiment 3 (N = 90), again using a within-subjects design, found no significant benefit to learning when students were required to write essay responses to online questions (p = .13, partial η2 = .03). We suggest that the behaviors faculty intend to promote with the use of such tools (e.g., repeated practice, effortful attention, etc.) may not match the behaviors that these types of tools reinforce. They may serve as vehicles for engaged learning practices for some students, but for others, they may represent another system that must be “worked” to satisfy a requirement or get a good score.
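For reference, the partial η² effect sizes reported in this abstract follow the standard definition:

\[
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
\]

so the partial η² = .06 in Experiment 2 indicates that roughly 6% of the variance (effect plus error) is attributable to the quizzing manipulation.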
... Additionally, this study proposes to incorporate on-line quizzes as an additional element to improve student reading compliance and student learning. On-line quizzes have been used in the prior literature investigating student learning (Brothen & Wambach, 2004;Daniel & Broida, 2004;Marchant, 2002;Maurer, 2006) with mixed results, and on-line quiz scores have been reported to significantly predict scores on subsequent assessments over the same material, such as exams (Anthis & Adams, 2002). However, no prior investigation has used the combination of both on-line and in-class daily reading quizzes over the same material. ...
Article
Full-text available
This study compared students’ daily in-class reading quiz scores in an introductory Child Development course across five conditions: control, reading guide only, reading guide and on-line practice quiz, reading guide and on-line graded quiz, and reading guide and both types of on-line quizzes. At the beginning of class, students completed a 5-item quiz over the assigned readings. With the exception of the control section, all students had access to an instructor-designed reading guide for each of the 20 assigned readings. Results revealed that reading guides significantly increased student learning as demonstrated by increased scores on the in-class reading quizzes, with marginal additional gains when practice quizzes were also utilized. The addition of on-line graded quizzes resulted in lower scores on in-class quizzes. Results held even after multiple subsidiary analyses controlling for time spent studying. These findings suggest that reading guides may be a valuable study aid for improving student learning.
... On the basis of our experience in face-to-face courses and best practices for designing and administering multiple-choice exams (e.g., Brothen & Wambach, 2004; Chronicle of Higher Education, 2010), we decided that 1-2 minutes per question would allow sufficient time for all students to complete the exams. So we set the time available for open-book quizzes (10 questions) at a maximum of 30 minutes to encourage students to read the textbook beforehand but still allow enough time to complete the quiz. ...
... This feedback helped students learn from their mistakes (Brothen & Wambach, 2001; Ryan, 2006; Johnson & Kiviniemi, 2009). Credit for passing an online quiz was awarded only when all of the questions, selected randomly from a question bank, were answered correctly in a limited amount of time (Brothen & Wambach, 2004; Hadsell, 2009; Johnson & Kiviniemi, 2009). Time limits on Internet quizzes reduced students' ability to use their textbooks to look up the answers as they worked through the quiz. ...
... Security of the online quiz was supported through the use of a 15-min time limit (1,4), and randomization of the questions was incorporated to discourage students from cheating on the quizzes. The potential for students to use books, notes, and each other as resources when taking online quizzes represents a possible drawback of using online quizzes in summative assessment. ...
Article
Full-text available
Review quizzes can provide students with feedback and assist in the preparation for in-class tests, but students often do not voluntarily use self-testing resources. The purpose of the present study was to evaluate if taking a mandatory online review quiz alters performance on subsequent in-class tests. During two semesters of a single-semester introductory anatomy and physiology course, students were required to complete brief online quizzes after each textbook chapter had been covered during lecture as well as the day before an in-class test. During the next two semesters, students were not required to take the online review quizzes. Overall scores on chapter specific in-class tests were higher (P < 0.05) during the semesters in which students took online review quizzes (82.9 ± 14.3%) compared with when they did not (78.7 ± 15.5%), but all in-class tests were not improved. Scores on comprehensive midterm examinations were higher (83.0 ± 12.9% vs. 78.9 ± 13.7%, P < 0.05) but not on final examinations (72.4 ± 13.8% vs. 71.8 ± 14.0%) between those with online review quizzes and those without, respectively. Overall scores on in-class tests and comprehensive examinations were higher (P < 0.05) during the semesters in which students took online review quizzes (83.4 ± 16.8%) compared with when they did not (80.3 ± 17.6%). These data suggest that an online review quiz taken the day before an in-class test increases performance on some in-class tests. However, online review quizzes taken after completion of each chapter do not consistently enhance performance on comprehensive examinations. Copyright © 2015 The American Physiological Society.
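The semester-level contrasts above (e.g., 82.9 ± 14.3% vs. 78.7 ± 15.5%, P < 0.05) are the kind of comparison an independent-samples t test captures. A minimal sketch with invented scores follows; the study's actual data and exact test are not reproduced here.

```python
from scipy import stats

# Invented percent scores standing in for the two sets of semesters.
with_quizzes = [84.0, 79.5, 91.2, 77.8, 88.3]
without_quizzes = [77.1, 80.3, 72.4, 79.9, 74.6]

t_stat, p_value = stats.ttest_ind(with_quizzes, without_quizzes)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```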
... The TEL literature concerning e-assessment tends to focus on implementations and their comparison to offline assessments (see for example Thelwall (2000), Sim, Holifield, and Brown (2004)); effects of particular settings or restrictions such as per-item time-limits (Brothen, 2012;Brothen & Wambach, 2004); or "effective approaches" to implementation (e.g. Nicol & Macfarlane-Dick, 2006). ...
... Another method of timing relates to quizzing and practice within eLearning. Timed quiz takers often take less time to complete quizzes than untimed quiz takers, and generally demonstrate greater retention of the course material, when assessed at a later time (Brothen & Wambach, 2004). Finally, color and contrast can be simple but powerful tools for making information quickly accessible and understandable to learners. ...
Article
Full-text available
Given the rapid growth of technology use in business and industry, there is an increasing need to research best practices for producing online learning materials. For behavior analysts, eLearning is a topic worth researching, as it can potentially merge the promise of Skinner’s teaching machines with the growing area of organizational behavior management (OBM). This paper presents an overview of some of the published research that addresses three components of generating palatable and effective eLearning: characteristics of eLearning users, the most critical components of instructional design, and how technology can be used effectively to enhance eLearning. The results show that using technology provides both aesthetic benefits through multimedia and visual effects, as well as instructional benefits that can maximize learner retention of the course material. Understanding the best practices for creating eLearning allows all of these benefits to be realized and put into action.
... Others have made textbook study-guide completion worth a part of the class grade (Dickson, Miller, & Devoley, 2005). Several instructors have used online quizzing to motivate students to read the book more often during the course of the semester (Brothen & Wambach, 2004;Daniel & Broida, 2004;Marchant, 2002). ...
Article
Full-text available
The authors constructed the Textbook Assessment and Usage Scale (TAUS) to measure students' textbook evaluations. They tested the scale in 6 introductory and 3 upper level classes. In Studies 1 and 2, the authors developed the TAUS, tested its psychometric properties, and determined which factors predicted how much students read the book and student exam scores. In Study 3, the authors used different introductory textbooks; and in Study 4, they used upper level classes. The authors found initial data to support the psychometric properties of the TAUS. Sex of student, student perceptions of the quality of visuals, pedagogical aids, photographs, writing, and course design predicted student text reading and exam scores. The number of significant predictors varied across textbooks and classes studied.
... Efforts to discourage these strategies, however, were very effective. So, whereas quizzing is generally effective, unproctored online quizzing may not be: Success is accomplished under very specific conditions (see also Brothen & Wambach, 2004). To test whether or not there is a "best way" to prepare for and take the online quizzes, Gurung (2004) compared the students who read the chapter and then took an online quiz with those students who just took the quiz with the book open looking for the answers for each question (students who guessed, copied, or took the quizzes in any other way were excluded). ...
... No performance differences were found. However, Brothen and Wambach (2004) compared timed quizzes with untimed quizzes and found that imposing a time limit improved performance. ...
Article
Full-text available
Computer-based instruction (CBI) has been growing rapidly as a training tool in organizational settings, but close attention to behavioral factors has often been neglected. CBI represents a promising instructional advancement over current training methods. This review article summarizes 12 years of comparative research in interactive computer-based instruction relevant to employee training techniques. The results demonstrate that CBI is an effective and viable training technique, and several areas in need of further examination are detailed.
... The education literature generally supports the premise that using online quizzes in a F2F course contributes positively to students' learning experiences and outcomes. Examples of such research include Norman et al. (2000), Brothen and Wambach (2004, 2007), Daniel and Broida (2004), Dunbar (2004), Bandy (2005), Bol et al. (2005), Marcus (2005), Schnusenberg (2005), Peng (2006, 2007), and Waite (2007). On a related theme, one line of research classifies the instruction modes currently used in business schools into three categories: (a) the campus mode, in which no Internet-based instructional technologies are applied; (b) the online mode; and (c) the hybrid mode (e.g., Terry, 2007). ...
Article
This paper examines applications of the web-enhanced instruction mode. Giving online quizzes does appear to be a better use of class seat time. Results of the embedded online assessment given in an undergraduate finance capstone course delivered in this instruction mode are analyzed. Students' learning achievement under this newer pedagogical method is evident. Most students perceive this method as beneficial to their studies in finance. Hence, university administrators should provide encouragement and incentives for finance faculty members to use Internet-based technologies in face-to-face instruction.
... The education literature generally supports the premise that using online quizzes in F2F course delivery contributes positively to students' learning experience and outcomes. Examples of such research include Norman et al. (2000), Brothen and Wambach (2004), Daniel and Broida (2004), Dunbar (2004), Bandy (2005), Bol et al. (2005), Marcus (2005), Schnusenberg (2005), Peng (2006), Brothen and Wambach (2007), and Waite (2007). ...
Article
Full-text available
The primary benefit of providing out-of-class online quizzes in a face-to-face class is to gain more in-class time. A study designed to investigate this issue was conducted during the Spring 2006 and Spring 2007 semesters. Thirty-one and 34 Corporate Finance undergraduate students from each semester, and 33 and 36 Investments undergraduate students from each semester participated in this study. Do students cheat whilst taking online versus in-class quizzes? Key results indicate no significant differences between online versus in-class administered quizzes. This finding alleviates concerns about student cheating and hence frees up in-class time for additional materials and interactions. The process of administering an online quiz is discussed in detail. The monetary cost of using a test generator program to create an online quiz is nominal in comparison with the licensing fee of any online course management software. Giving online quizzes does appear to be a better use of class seat time, and this pedagogical method is recommended to faculty delivering courses using face-to-face instructional design, especially those who are teaching corporate finance or investments.
Article
Full-text available
This study investigates the effectiveness of asynchronous online quizzes in improving student learning outcomes in higher education. Specifically, we compare the impact of two teaching methods - Synchronous Lecture and Asynchronous Tutorial pair (SLAT) versus Asynchronous Lecture and Synchronous Tutorial pair (ALST) - in delivering weekly quizzes to 70 undergraduate computer science students. Our results show that the SLAT outperformed the ALST method in enhancing students' academic performance after each learning unit. The findings highlight the potential of asynchronous quizzes as a valuable learning tool, particularly when combined with live lecture classes and asynchronous tutorials. These results have implications for educators looking to implement blended learning models that prioritize student engagement and academic achievement.
Article
Full-text available
In a company law course, the face-to-face lecture was flipped with the purpose of improving the lecture experience of students and their preparedness to engage in deep-learning seminar activities after the lecture. The flipped lecture consisted of a bundle of three tasks: a set of three pre-recorded videos narrating the lecture slides, an online graded quiz, and an oral feedback session on the quiz’s solutions. The students completed the first two tasks outside class but the third task in class. Data from a student survey and course grades were used to evaluate the performance of the flipped lecture compared with the face-to-face lecture. Although the results showed that, on average, there was only a slight difference in students’ preference and preparedness in favour of the flipped lecture, this difference turned out to be larger in magnitude and statistically significant when the sample was split by student GPA scores. The flipped lecture yielded greater benefits to Low-GPA students, compared with High-GPA students, especially when Low-GPA students watched the pre-recorded videos at least twice. Findings suggest that the group of students who need greater support in learning are the most likely to enjoy and profit from the flipped lecture.
Chapter
This chapter discusses how to use the revised Bloom's Taxonomy as a framework not only for creating learning objectives but also as a guide for selecting tools that reduce the use of discussion forums and support faculty in broadening their thinking about the variety of tools that can be used to address learning objectives. Tools include wikis, blogs, quizzes, journals, Web 2.0 tools, infographics, and videos. These tools reduce faculty dependence on discussion forums and help create manageable workloads.
Article
This article presents an experimental study of the assessment made by university students of their level of digital competence in the use of mobile devices such as smartphones, laptops and tablets. The study was part of an investigation into ubiquitous learning with mobile devices and is based on the analysis of responses from a sample of 203 university students at eleven European and Latin American universities. Participants were asked questions about their performance on a set of digital activities that tested various components of digital competence. The analysis methodology was based on Item Response Theory (IRT). The survey data was analysed by applying a statistical model to represent the probability of obtaining an affirmative answer to each activity proposed. This enabled us to identify the difficulty and discrimination parameters of each activity. As an outcome of the study, measures of latent digital competence for individual participants were derived. The results allowed us to describe how a number of devices and activities interacted. Understanding these types of interactions is necessary for the continued development of the evaluation of digital competence in students.
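The abstract does not name its exact IRT model, but when each activity carries both a difficulty and a discrimination parameter, the usual form is the two-parameter logistic (2PL) model:

\[
P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}
\]

where \( \theta_i \) is participant i's latent digital competence, \( b_j \) the difficulty of activity j, and \( a_j \) its discrimination.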
Research
Full-text available
PROCEEDINGS OF THE 11th INTERNATIONAL CONFERENCE ON MOBILE LEARNING 2015, Page 99
Article
Teachers of psychology and their students have been advised to allocate 1 min for each multiple-choice item on tests. Is this a realistic and useful rule? How much time do students actually take to complete multiple-choice tests and how do individual differences and item characteristics affect it? This article reports the results of four studies involving low and high stakes multiple-choice tests delivered online in proctored and unproctored environments. I provide test taking time data so that instructors can make better choices and give better advice to students about allocating time for tests. These results also suggest that students take about the same amount of time to answer multiple-choice questions as did students nearly a century ago.
Article
The trend to convert laboratory findings on the conditions associated with optimal memory into recommendations for teaching strategies and learning aids will harm students if the findings fail to generalize to students' usual learning environments. Moreover, it is likely that pedagogies function differently for students with different degrees of background knowledge, time, and interest in the subject matter; that some support activities will prevent students from honing their ability to learn from narrative material without guided learning; and that an overuse of learning aids will tax students' ability to use them effectively. We contrast two approaches to developing pedagogy, memory-first and pedagogical ecology, and explain how the human factors approach of pedagogical ecology could be a more satisfying model for the scholarship of teaching and learning. © 2009 Association for Psychological Science.
Article
This research evaluated an online study task intended to improve the study metacognition and examination performance of inexperienced college students. Existing research has commonly operationalized metacognition as the accuracy of examination score predictions. This research made use of the average discrepancy between rated confidence in individual study question responses and the accuracy of the responses to these questions as a second way to assess metacognition. Study question accuracy, average study question confidence discrepancy, and examination predictions were contrasted as predictors of examination performance. Study question accuracy and examination predictions were significant predictors of performance. The usefulness of the accuracy of confidence ratings in predicting examination performance may have been diminished by the decreasing use of the online study questions by those who performed most poorly on examinations. Those students who ended up scoring in the bottom third on course examinations made significantly less use of the study system after the first examination. The results indicate that performance on study questions offers a useful way to predict future examination performance. The accuracy of rated confidence did improve across examinations, possibly indicating that students become more aware of the accuracy of their understanding, and the variable may have potential value if student compliance with the proposed use of online study questions can be improved.
Article
This study investigates voluntary use of online study questions, the relationship of study question use to examination performance, and the relationship of aptitude to study question use following an initial phase during which students received course points either for passing mastery quizzes or for completing a designated number of study questions. The results indicate that (a) students who first received points for completing study questions later made greater voluntary use of study questions, (b) less able readers made less voluntary use of study questions than more able readers, and (c) less able readers performed better on course examinations when awarded course points for completing a required number of study questions rather than quizzes.
Article
Online quizzes were introduced into an undergraduate Exercise Physiology course to encourage students to read ahead and think critically about the course material before coming to class. The purpose of the study was to determine if the use of the online quizzes was associated with improvements in summative exam scores and if the online quizzes were valid predictors of summative exam performance. A retrospective analysis was performed on the course scores from three different groups of Exercise Physiology students. Students in group 1 completed the original version of the course, those in group 2 completed an updated version of the course that included more rigorous exam questions, and those in group 3 completed the same updated version of the course but with the addition of 10 required online quizzes. Results showed that the overall mean summative exam score from group 3 was significantly higher than that from group 2 (81.79 ± 8.26 and 78.72 ± 9.61, respectively). A significant positive correlation (r = 0.50) was also found between individual mean online quiz scores and individual mean exam scores for those students in group 3. It was concluded that the formative online quizzes did enhance summative exam performance and that the online quizzes were valid predictors of exam performance.
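The reported analysis pairs each student's mean online-quiz score with their mean summative-exam score and computes a Pearson correlation. The sketch below reproduces that computation on invented scores (the article reports only summary statistics, so none of these numbers are its data); note that statistics.correlation requires Python 3.10 or later.

    import statistics

    # Hypothetical paired means (percent) for eight students.
    quiz_means = [82, 75, 90, 68, 88, 79, 85, 72]
    exam_means = [80, 70, 86, 72, 90, 74, 83, 69]

    r = statistics.correlation(quiz_means, exam_means)  # Pearson's r
    print(f"r = {r:.2f}")
    print(f"exam mean: {statistics.mean(exam_means):.2f} "
          f"± {statistics.stdev(exam_means):.2f}")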
Article
A meta-analysis of findings from 108 controlled evaluations showed that mastery learning programs have positive effects on the examination performance of students in colleges, high schools, and the upper grades in elementary schools. The effects appear to be stronger on the weaker students in a class, and they also vary as a function of mastery procedures used, experimental designs of studies, and course content. Mastery programs have positive effects on student attitudes toward course content and instruction but may increase student time on instructional tasks. In addition, self-paced mastery programs often reduce the completion rates in college classes.
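For readers unfamiliar with how such syntheses are computed, here is a minimal fixed-effect sketch: each study's effect size is weighted by the inverse of its sampling variance, and the weighted mean is reported. The three studies below are hypothetical, not any of the 108 evaluations in the meta-analysis.

    # Fixed-effect (inverse-variance weighted) mean effect size.
    # Hypothetical inputs: standardized mean differences and their
    # sampling variances for three invented studies.
    effects   = [0.52, 0.31, 0.70]
    variances = [0.04, 0.02, 0.09]

    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    print(f"pooled effect size: {pooled:.2f}")  # 0.42 for these inputs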
Article
Feedback is an essential construct for many theories of learning and instruction, and an understanding of the conditions for effective feedback should facilitate both theoretical development and instructional practice. In an early review of feedback effects in written instruction, Kulhavy (1977) proposed that feedback’s chief instructional significance is to correct errors. This error-correcting action was thought to be a function of presentation timing, response certainty, and whether students could merely copy answers from feedback without having to generate their own. The present meta-analysis reviewed 58 effect sizes from 40 reports. Feedback effects were found to vary with control for presearch availability, type of feedback, use of pretests, and type of instruction and could be quite large under optimal conditions. Mediated intentional feedback for retrieval and application of specific knowledge appears to stimulate the correction of erroneous responses in situations where its mindful (Salomon & Globerson, 1987) reception is encouraged.
Article
The personalized system of instruction (PSI) replaces lectures with written materials and frequent testing and feedback to ensure that students master the material [1]. Student volunteer proctors typically provide most student-staff contact by correcting students' quizzes immediately and suggesting remediation. Numerous studies show that PSI is highly effective but necessitates a course structure that is difficult for instructors to manage. This study extends the work of PSI practitioners to computerize the testing procedures to provide feedback and guide further study. Students in our PSI introductory psychology course improved their subsequent performance on computer-based quizzes that gave feedback. They also improved their quiz performance as the term progressed. We discuss these results in the context of helping students improve as learners.
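A minimal sketch of the PSI test-feedback cycle the abstract describes, not the authors' actual system: quiz the student, return item-level feedback, and repeat until a mastery criterion is met. The mastery threshold, the simulated answer probability, and the attempt cap are all assumed values.

    import random

    MASTERY = 0.9        # assumed proportion-correct criterion
    MAX_ATTEMPTS = 10    # safety cap for this simulation

    def take_quiz(n_items=10):
        # Stand-in for a computer-scored quiz attempt: returns
        # per-item correctness for a simulated student.
        return [random.random() < 0.8 for _ in range(n_items)]

    for attempt in range(1, MAX_ATTEMPTS + 1):
        results = take_quiz()
        score = sum(results) / len(results)
        missed = [i for i, ok in enumerate(results) if not ok]
        print(f"attempt {attempt}: {score:.0%}, restudy items {missed}")
        if score >= MASTERY:
            break  # mastery reached; the student advances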
Article
Institutions of higher education currently must develop policies for accommodating students with disabilities. The most common practice is for instructors to provide "accommodations" in class procedures for students certified as disabled. This approach is difficult because instructors lack models of how to respond to requests for accommodation. In this article, we describe a personalized system of instruction (Keller, 1968), a mastery learning model (Bloom, 1976) for an introductory psychology course that makes accommodation simply part of what occurs in class on a regular basis.
Article
The phenomena of studying in academic domains are characterized and analyzed with special reference to the role of learning strategies. Characteristics peculiar to studying for academic purposes are described, and an autonomous learning model of studying is presented. Derived from selected theory and research, the major components of the model are study outcomes, study activities, course characteristics (including the nature of criterion performance), and student characteristics. In accord with the model, four principles hypothesized as determinants of the effectiveness of studying are proposed: specificity, generativity, executive monitoring, and personal efficacy. The kinds of additional evidence needed to verify the principles are specified.
Article
Computerized quizzes are becoming more available to psychology students and instructors. We hypothesized that students' ineffective use of such quizzes would predict poor course performance. Students in a personalized system of instruction life span human development course used 26 computerized, multiple-choice chapter quizzes to help them master the course textbook. Students who used a "prepare-gather feedback-restudy" strategy were more successful than students using quizzes to learn course material. We discuss these findings in the context of helping students become more effective learners.
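As an illustration only (the abstract does not describe the authors' coding scheme), a "prepare-gather feedback-restudy" pattern could be distinguished from quiz-first use in student event logs along the following lines; the event format and labels are assumptions.

    def classify(events):
        # events: a student's ordered actions, each 'study' or 'quiz'.
        first_quiz = events.index('quiz') if 'quiz' in events else -1
        prepared = first_quiz > 0                       # studied before quizzing
        restudied = 'study' in events[first_quiz + 1:]  # returned after feedback
        if prepared and restudied:
            return "prepare-gather feedback-restudy"
        return "quiz-to-learn"

    print(classify(['study', 'quiz', 'study', 'quiz']))  # prepare-...-restudy
    print(classify(['quiz', 'quiz', 'quiz']))            # quiz-to-learn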