REPLICATION
A Complete SMOCkery: Daily Online Testing
Did Not Boost College Performance
Daniel H. Robinson
Accepted: 29 November 2020
© The Author(s) 2021
Abstract
In an article published in an open-access journal, Pennebaker et al. (PLoS One, 8(11),
e79774, 2013) reported that an innovative computer-based system that included daily
online testing resulted in better student performance in other concurrent courses and a
reduction in achievement gaps between lower and upper middle-class students. This
article has had high impact, not only in terms of citations, but it also launched a
multimillion-dollar university project and numerous synchronous massive online courses
(SMOCs). In this study, I present a closer look at the data used in the Pennebaker et al.
study. As in many cases of false claims, threats to internal validity were not adequately
addressed. Student performance increases in other courses can be explained entirely by
selection bias, whereas achievement gap reductions may be explained by differential
attrition. It is hoped that the findings reported in this paper will inform future decisions
regarding SMOC courses. More importantly, our field needs watchdogs who expose such
unsupported extravagant claims, especially those appearing in pay-to-publish journals.
Keywords: SMOC · internal validity · selection bias · quackery · differential attrition
When it comes to improving student achievement, there is no limit to novel interventions,
treatments, and policies appearing in empirical, scientific journals. Unfortunately, fewer and
fewer of these recommendations are based on experimental methods where the researcher
randomly assigns students to treatment groups. In educational research journals, for example,
the trends indicate fewer intervention studies and more observational studies accompanied
by recommendations for practice (Hsieh et al. 2005; Reinhart et al. 2013; Robinson et al.
2007). Moreover, these recommendations for practice based on shaky evidence are often
repeated in later articles (Shaw et al. 2010).
https://doi.org/10.1007/s10648-020-09588-0
* Daniel H. Robinson
  daniel.robinson@uta.edu

College of Education, The University of Texas at Arlington, 504 Hammond Hall, Arlington, TX 76019, USA
Published online: 6 January 2021
Educational Psychology Review (2021) 33:1213–1220
Fortunately, most of the snake oil recommendations for improving education do not make it
to classrooms where they could actually do some damage. However, there are notable
exceptions where such educational quackery has been implemented and continued to damage
the already fragile reputation of educational research (e.g., Robinson and Bligh 2019;
Robinson and Levin 2019). In a highly cited article (117 in Google Scholar and 45 in Web
of Science as of November 25, 2020), Pennebaker et al. (2013) reported that an innovative
online testing system resulted in better student performance in other courses and a reduction in
achievement gaps between lower and upper middle-class students. This system is part of the
first synchronous massive online course (SMOC) that was launched in 2013 at the University
of Texas at Austin. News of the impressive results spread quickly, and the findings were
subsequently cited in several other articles. For example, Takooshian et al. (2016) stated:
An especially encouraging result was reported by University of Texas researchers who
compared the effectiveness of an online version of introductory psychology with a
traditional version (Pennebaker et al. 2013). Not only did psychology exam scores
increase by approximately half a letter grade when the course was taught online – the
socioeconomic achievement gap in course grades was cut in half. (p. 142)
Similarly, Straumsheim (2013) interpreted the results as follows:
As more and more of the coursework continued to shift toward digital, the data showed a
clear trend: Not only were students in the online section performing the equivalent of
half a letter grade better than those physically in attendance, but taking the class online
also slashed the achievement gap between upper, middle and lower-middle class
students in half, from about one letter grade to less than half of a letter grade… "We
are changing the way students are approaching the class and the way they study,"
Pennebaker said… "That's one thing that I'm actually most excited about… This project
could never have been built here at the university without heavy research behind it."
Originally, the professors hoped the class would attract 10,000 non-university students
willing to pay a few hundred dollars for the for-credit class. Indeed, the headline for a
Wall Street Journal article about the pair's innovation trumpeted "Online class aims to
earn millions." That hasn't happened. The class, offered each fall, still mostly consists of
regular University of Texas undergrads. And while Gosling believes the model will
eventually spread to other universities, as far as he knows it hasn't done so yet, perhaps
because of the expertise and hefty investment required. Still, the model has been so
successful the university has since developed SMOC versions of American government
and U.S. foreign policy classes. (Clay 2015, p. 54)
Despite the "hefty investment" required, based on the fantastic findings and press from the
Pennebaker et al. (2013) study, in 2016, the University of Texas at Austin named Pennebaker
the executive director of Project 2021, an initiative that was supposed to revamp undergraduate
education by "producing more online classes" (Dunning 2019, p. 1). The university initially committed
$16 million to Project 2021, which included monies for increasing the number of SMOC
production studios.
The first piece of the grand idea came from Professor James W. Pennebaker. He and a
colleague had brought software into a class that allowed professors to quiz students
during every class, and the data showed that learning disparities between students
decreased. They then created an online course, initially livestreamed from a studio using
greenscreens. It was called a "synchronous massive online course," or SMOC, and UT
was proud that it was the first. (Conway 2019, p. 1)
Pennebaker had also recently been awarded the APA Award for Distinguished Scientific
Applications of Psychology. This seemed to be a perfect example of a distinguished scientist
applying findings from psychology to improve undergraduate education, something that is
unfortunately rare (Dempster 1988). Also unfortunately, things did not turn out so well. By
2018, only two years into a five-year initiative, Project 2021 was suddenly dead and the
controversy surrounding it was covered in the Chronicle of Higher Education (Ellis 2019).
It seems the initiative didn't grow from student demand or from research showing a
definitive opportunity to serve students better. It grew from one professor who had a
success in one course, and from the outside momentum in the education world toward
digitized or "reimagined" learning experiences, of which data about learning outcomes
is actually pretty shaky. (Conway 2019, p. 1)
Indeed, the data are "shaky." The evidence used to support the SMOC was not based on a carefully
controlled comparison between the SMOC and face-to-face courses. Instead, the Pennebaker et al.
(2013) study was an ex post facto comparison of students who took a completely in-class version of
the introductory psychology course in 2008 with those who had the online quizzes in 2011. As
mentioned earlier, this "observational" approach is consistent with the latest trends in educational
research where researchers avoid random assignment of students to experimental conditions.
Despite the shaky evidence, the SMOC did not die along with Project 2021. On the contrary, the
production of SMOC courses at the University of Texas was ramped up. Compared to the 26 that
were produced in the 2015–2016 academic session, 90 were produced during 2018–2019, and over
29 were planned for Summer 2019 (Dunning 2019).
How could one article have such an impact? How was the University of Texas at Austin
duped into spending time and money on this bullsh-initiative? Undoubtedly, the extraordinary
claims had something to do with it. The notion that a single course could have a causal effect
of improving student performance in other courses, both the following semester and, incredibly,
the same semester, is simply amazing. The other claim of reducing achievement gaps
likely resonated with most educators who have been working on this problem for decades. But,
similar to the first claim, there are no known interventions that reduce achievement gaps.
Otherwise, we would be using them and would no longer have gaps.
Method
In this study, I examined these claims by taking a closer look at the data used in the Pennebaker et al.
(2013) article. As previously mentioned, threats to internal validity were not adequately addressed.
Thus, I simply looked at alternative reasons why the daily online testing students in 2011
experienced advantages over the traditional instruction students from three years earlier (2008).
Results
As with any comparison study that does not randomly assign students to experimental
conditions, one should first look for possible preexisting student differences that could explain
any subsequent performance differences. The first possible threat to internal validity I
examined was history. In other words, was there something that occurred between 2008 and 2011
that could explain the increase in GPA in the other courses? Grade inflation is certainly a
possibility that could account for some of the improved performance of students in 2011
compared with those in 2008. Indeed, the University of Texas at Austin undergraduate average
GPA had risen steadily from a few years before 2008 until a few years after 2011.
The actual difference between undergraduate average GPA in 2011 compared with 2008 is
0.07 (3.27 − 3.20). This difference, however, is considerably less than the differences in GPAs
reported by Pennebaker et al. (2013) of 0.11 and 0.12. Thus, although grade inflation could
partly explain the GPA differences between the 2008 and 2011 students, it cannot fully
account for the differences.
The next possible threat to internal validity I examined was selection bias. At most
universities, average GPAs differ among majors. For example, it is well known that education
majors typically have higher GPAs than do engineering majors. Thus, if one of the groups in a
comparison study has more students from an "easier" or "harder" major than the other group,
this preexisting difference could surface in any outcome variables that use the same measure
or a similar one. Indeed, Pennebaker et al. (2013) used student semester GPA as the main
outcome measure to gauge
whether the daily online testing led to better student performance in their other courses.
Now, the assumption here is that students typically take most of their courses in their major
area. In fact, at the University of Texas at Austin, students take only 42 hours (out of 120 total) of
core courses. The rest are in their major or minor areas and a handful of electives. Thus, if one
assumes that students take most of their courses in their major or closely related areas, then it
can also be assumed that their GPA for any given semester will reflect group differences that
exist according to major. In other words, grades in social work courses are typically higher
than those in natural science courses. Thus, we would expect a group that has more social work
students to have a higher GPA than a group with fewer such students. The opposite would be
true for a group with more business students.
I accessed the student major data for the 994 students who were enrolled in the introductory
psychology course at the University of Texas at Austin in the Fall semester of 2008 and for the
941 enrolled in 2011. Note that these totals are different from the 935 and 901, respectively,
that were reported in Pennebaker et al. (2013). Table 1 below shows the average GPAs for all
courses by subject area and year (2008 and 2011), along with the numbers of students in the
psychology courses who were majoring in those areas.
To get the expected GPA of the entire class simply based on student major, I multiplied the
number of students by the average GPA of the subject area courses to get a weighted number. I
then summed the weighted numbers and divided by the total number of students to get a
weighted average GPA for each group. This "major" effect size for the online testing group
over the traditional group (3.29 − 3.18 = 0.11) is almost identical to the advantages
reported by Pennebaker et al. (2013) for both the concurrent semester (3.07 − 2.96 = 0.11) and
the subsequent semester (3.10 − 2.98 = 0.12). Thus, the student performance increases can be
fully explained by selection bias: there were different proportions of students from majors that
naturally tend to have higher or lower grades in their major courses. With regard to internal
validity, when an alternative explanation exists that can account for an "experimental" effect,
then that experimental effect becomes bogus.
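For readers who wish to reproduce the arithmetic, the following minimal sketch (in Python, which is not part of the original article) computes the weighted-average GPA for each cohort from the major counts and subject-area GPAs reported in Table 1 below.

```python
# Minimal sketch: the semester GPA one would expect from student majors alone,
# using the major counts (N) and subject-area GPAs reported in Table 1.

# Each entry: major -> (number of introductory psychology students, average GPA in that subject area)
majors_2008 = {
    "Business": (108, 3.30), "Education": (56, 3.63), "Engineering": (84, 3.18),
    "Fine arts": (29, 3.51), "Communication": (59, 3.37), "Natural sciences": (304, 2.92),
    "Liberal arts": (313, 3.18), "Nursing": (10, 3.80), "Social work": (31, 3.70),
}
majors_2011 = {
    "Business": (92, 3.29), "Education": (93, 3.59), "Engineering": (41, 3.27),
    "Fine arts": (19, 3.56), "Communication": (38, 3.33), "Natural sciences": (263, 3.08),
    "Geosciences": (4, 2.77), "Liberal arts": (216, 3.22), "Nursing": (21, 3.84),
    "Social work": (23, 3.59), "Undergrad studies": (131, 3.44),
}

def weighted_gpa(majors):
    """Sum of N * subject-area GPA across majors, divided by total N."""
    total_n = sum(n for n, _ in majors.values())
    weighted_sum = sum(n * gpa for n, gpa in majors.values())
    return weighted_sum / total_n

gpa_2008 = weighted_gpa(majors_2008)  # ~3.18
gpa_2011 = weighted_gpa(majors_2011)  # ~3.29
print(f"2008: {gpa_2008:.2f}  2011: {gpa_2011:.2f}  difference: {gpa_2011 - gpa_2008:.2f}")
# Prints: 2008: 3.18  2011: 3.29  difference: 0.11
```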
Finally, as for the reduction in achievement gaps, Pennebaker et al. (2013) acknowledged
that the online testing courses were more rigorous due to daily quizzes. Typically, with
increased rigor comes increased drop rates. I decided to examine a third threat to internal
validity, differential attrition, that might explain the reduction in achievement gaps.
Differential attrition occurs when participants in one group drop out of the study at a higher rate than
other groups. For example, suppose a company that runs a fitness bootcamp claims that its
average participant loses 15 pounds by the end of the four-week camp. However, out of every
100 participants that show up on day one, an average of 80 fail to finish the entire bootcamp
due to its extreme rigor. Of the 100 people in the control group who did not participate, zero
drop out (no rigor) and thus remain at the end of the four weeks. Weight loss comparisons are
made between the 20 who finished the bootcamp and the 100 control group participants. Thus,
the bootcamp's claim is exaggerated. Whereas the completers might experience an impressive
weight loss, the average person who pays for the camp might not experience any weight loss.
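The bootcamp analogy can be made concrete with a short sketch (again in Python; the zero weight change assumed for dropouts is my illustrative assumption, not a figure from the article).

```python
# Illustrative sketch of differential attrition using the hypothetical bootcamp numbers above.
# Assumption (for illustration only): dropouts lose 0 pounds.

enrolled = 100           # participants who show up on day one
completers = 20          # 80 of 100 drop out because of the camp's rigor
loss_completer = 15.0    # average pounds lost by those who finish (the advertised claim)
loss_dropout = 0.0       # assumed weight change for dropouts

# What the company reports: the average among completers only
completers_only_estimate = loss_completer            # 15 pounds

# What the average enrollee actually experiences (an intent-to-treat style average)
average_enrollee = (completers * loss_completer
                    + (enrolled - completers) * loss_dropout) / enrolled

print(completers_only_estimate, average_enrollee)    # 15.0 vs. 3.0 pounds
```

The same arithmetic applies whenever the more rigorous condition loses more of its weakest participants.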
Similarly, in 2008 when the psychology course was less rigorous with no daily quizzes,
only 32 students dropped the course. Comparatively, in 2011 when the rigor was increased,
almost twice as many students (58) dropped. Students from lower SES families unfortunately
tend to drop courses at higher rates than do their richer counterparts. It is certainly possible that
many of these students who dropped were from the lower middle class. Thus, any analysis
would show a reduction in the performance differences between the lower and upper middle-class
students. This certainly is not as much of a "smoking gun" as the selection bias findings. But
does anyone actually believe that daily online testing would reduce achievement gaps?
Discussion
During the current pandemic in 2020, many colleges and universities are struggling to deliver
online instruction. Scholars and practitioners are arguing whether online instruction is just as
effective as face-to-face instruction. The encouraging findings reported by Pennebaker et al.
(2013) not only allowed some to conclude that online instruction may be equally effective, but
the suggestion that online may be more effective than face-to-face undoubtedly spurred efforts
to shift more and more instruction to online environments. But, as the present findings suggest,
such enthusiasm for online instruction may not be supported by the data.
Table 1  GPAs by major for both the 2008 and 2011 groups. Particular differences are highlighted in italic typeface in the published version.

                          2008                       2011
Major                 N      GPA    Weighted     N      GPA    Weighted
Business              108    3.30    356.40      92     3.29    302.68
Education             56     3.63    203.28      93     3.59    333.87
Engineering           84     3.18    267.12      41     3.27    134.07
Fine arts             29     3.51    101.79      19     3.56     67.64
Communication         59     3.37    198.83      38     3.33    126.54
Natural sciences      304    2.92    887.68      263    3.08    810.04
Geosciences           0      -         -         4      2.77     11.08
Liberal arts          313    3.18    995.34      216    3.22    695.52
Nursing               10     3.80     38.00      21     3.84     80.64
Social work           31     3.70    114.70      23     3.59     82.57
Undergrad studies     0      -         -         131    3.44    450.64
Totals                994    3.18   3163.14      941    3.29   3095.29
Are there any negative consequences of assuming that a SMOC version of a course
might be better than a face-to-face version? How many more SMOCs should the
University of Texas at Austin develop? Daily testing benefits are a robust phenomenon
in cognitive psychology (e.g., Roediger and Karpicke 2006) and no reasonable person
would argue against employing this strategy in any course. However, the benefit of
having students frequently retrieve newly learned information is only revealed during
later comprehensive testing such as a final exam. No one has ever claimed that frequent
testing can improve student performance in other courses. And there are certainly no
course-wide interventions that improve student performance in other concurrent courses!
Such unicorns have yet to be found. Similarly, reducing achievement gaps has been a
goal in education for over 50 years, ever since the Elementary and Secondary Education
Act of 1965. Sadly, very little progress has been made on this front. Daily online testing
is no magic bullet that will solve the problem.
This is certainly not the first time that findings published in a widely cited educational
research article have later been refuted. Recently, Urry et al. (in press) conducted a direct
replication of Mueller and Oppenheimer (2014), who had found that taking notes using a laptop
was worse for learning than taking notes by hand. The findings of Urry et al. refuted the earlier
claim, but not until the Mueller and Oppenheimer study had been cited 278 times (Web of
Science, as of November 25, 2020)!
In the early stages of the pandemic in 2020, US President Trump promoted the drug
hydroxychloroquine as an effective treatment of Covid-19. Unfortunately, there was then, and
remains today, absolutely no evidence that the drug improves outcomes for those afflicted with
Covid-19 (Jha 2020). In fact, some studies have shown that it causes more harm than good.
Yet, many Americans began taking the drug. This is understandable, given that so many
people have a hard time with the notion of scientific evidence. But can we as easily excuse
public research universities from making similar mistakes? Year after year, with the arrivals of
newly appointed provosts and presidents, universities tout their latest bullsh-initiatives that will
cost millions of dollars and promise to be game changers. Does anyone ever follow up to see if
such spending did any good? Should universities appoint watchdogs to ensure that money is
not wasted chasing such windmills?
Finally, what about the responsibilities of the scientific community? As previously
mentioned, the Pennebaker et al. (2013) study has been cited in several scientific publications
according to the Web of Science. How did it first get past an editor and reviewers? PLOS ONE
claims to be a peer-reviewed open-access scientific journal. From their website, they claim to
evaluate research on scientific validity, strong methodology, and high ethical standards.
They also report that the average time to the first editorial decision for any submitted paper is
12–14 days. Most reputable journals take much longer than this. During my time as an
associate editor for the Journal of Educational Psychology, I handled over 500 submissions.
The average number of days to the first editorial decision was over 30 days. The fact that
PLOS ONE is much faster may reflect a difference in the review process and the $1700
publication fee.
It is hoped that future incredible-sounding findings will be fully vetted during the review process
before appearing in widely available outlets. Perhaps authors should not be encouraged to
publish their work in strictly pay-to-publish journals. All members of the scientific community
need to consider using the strongest possible methods and carefully note study limitations.
Pennebaker et al. (2013) could have easily designed a randomized experiment to test the
effectiveness of the SMOCs. With almost one thousand students enrolling in the introductory
psychology course each semester, it would have been easy to randomly assign half of them to
either a SMOC or control, face-to-face section. Finally, we should all take care to only cite
studies that have scientific merit and not repeat bogus claims. If bogus claims do find their way
into journals, we have a duty to call out such claims.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which
permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and
indicate if changes were made. The images or other third party material in this article are included in the article's
Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included
in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or
exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy
of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Clay, R. A. (2015). SMOCs: the next great adventure. Monitor on Psychology, 46(7), 54.
Conway, M. (2019). Innovation ambitions gone awry at UT Austin. Nonprofit Quarterly. Retrieved from https://
nonprofitquarterly.org/innovation-ambitions-gone-awry-at-ut-austin/
Dempster, F. N. (1988). The spacing effect: a case study in the failure to apply the results of psychological
research. American Psychologist, 43(8), 627–634.
Dunning, S. (2019). After 2021: what the end of Project 2021 means for UTs innovation centers. The Daily
Texan. Retrieved from https://www.dailytexanonline.com/2019/03/13/after-2021-what-the-end-of-project-
2021-means-for-ut%E2%80%99s-innovation-centers
Ellis, L. (2019). How UT-Austin's bold plan for reinvention went belly up. The Chronicle of Higher Education.
Retrieved from https://www.chronicle.com/interactives/Project2021?cid=wsinglestory_hp_1a
Hsieh, P.-H., Hsieh, Y.-P., Chung, W.-H., Acee, T., Thomas, G. D., Kim, H.-J., You, J., Levin, J. R., &
Robinson, D. H. (2005). Is educational intervention research on the decline? Journal of Educational
Psychology, 97(4), 523–529.
Jha, A. (2020). Opinion: The snake-oil salesmen of the senate. The New York Times. Retrieved from https://
www.nytimes.com/2020/11/24/opinion/hydroxychloroquine-covid.html
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: advantages of longhand
over laptop note taking. Psychological Science, 25(6), 1159–1168. https://doi.org/10.1177/
0956797614524581.
Pennebaker, J. W., Gosling, S. D., & Ferrell, J. D. (2013). Daily online testing in large classes: boosting college
performance while reducing achievement gaps. PLoS One, 8(11), e79774. https://doi.org/10.1371/journal.
pone.0079774.
Reinhart, A. L., Haring, S. H., Levin, J. R., Patall, E. A., & Robinson, D. H. (2013). Models of not-so-good
behavior: yet another way to squeeze causality and recommendations for practice out of correlational data.
Journal of Educational Psychology, 105(1), 241–247.
Robinson, D. H., & Bligh, R. A. (2019). Educational muckrakers, watchdogs, and whistleblowers. In P.
Kendeou, D. H. Robinson, & M. McCrudden (Eds.), Misinformation and fake news in education (pp.
123–131). Charlotte, NC: Information Age Publishing.
Robinson, D. H., & Levin, J. R. (2019). Quackery in educational research. In J. Dunlosky & K. A. Rawson
(Eds.), Cambridge handbook of cognition and education (pp. 35–48). Cambridge: Cambridge University
Press.
Robinson, D. H., Levin, J. R., Thomas, G. D., Pituch, K. A., & Vaughn, S. R. (2007). The incidence of causal
statements in teaching and learning research journals. American Educational Research Journal, 44(2), 400–413.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: taking memory tests improves long-term
retention. Psychological Science, 17(3), 249–255.
Shaw, S. M., Walls, S. M., Dacy, B. S., Levin, J. R., & Robinson, D. H. (2010). A follow-up note on prescriptive
statements in nonintervention research studies. Journal of Educational Psychology, 102(4), 982–988.
Straumsheim, C. (2013). Don't call it a MOOC. Inside Higher Ed. Retrieved from https://www.insidehighered.
com/news/2013/08/27/ut-austin-psychology-professors-prepare-worlds-first-synchronous-massive-online
Takooshian, H., Gielen, U. P., Plous, S., Rich, G. J., & Velayo, R. S. (2016). Internationalizing undergraduate
psychology education: trends, techniques, and technologies. American Psychologist, 71(2), 136–147. https://
doi.org/10.1037/a0039977.
Urry, H. L., et al. (in press). Don't ditch the laptop yet: a direct replication of Mueller and Oppenheimer's (2014)
Study 1 plus mini-meta-analyses across similar studies. Psychological Science.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.