CHAPTER 12
THE INEVITABILITY OF EPIC FAIL:
EXPLODING THE CASTLE WITH SITUATED LEARNING
Stephen T. Slota, Ph.D., University of Connecticut
Michael F. Young, Ph.D., University of Connecticut
Deconstruction of previous large-scale efforts to leverage innovative technology toward school
improvement reveals a developmental trend of initial excitement followed by careful research, promising results,
and, tragically, some mutation that leads to an eventual loss of impact. This applies to all major educational
technology endeavors over the last half-century and is observable in any situation where designers and innovators
have close and direct involvement with teacher training but the innovation’s core theoretical foundations and key
concepts are gradually (or abruptly) shed during and post-implementation. For those of us who study and design
playful learning environments, this emphasizes a critical need to ask: “Why do technology-rich research
innovations always seem to fail once they grow beyond the control of the design team?”
In this closing chapter, we describe the three pillars of a concept we refer to as Epic Fail. Doing so will help
us identify applicable, overarching concerns with contemporary instructional design and provide suggestions that
can help address difficulties with dissemination, local customization, and modding beyond the scope of an original
designer’s control. Additionally, it will help us tie suggestions to non-game innovations in order to highlight how
such issues exist across educational subfields and how we, as educators, designers, and researchers, might be able
to make inevitable epic failures a bit less inevitable.
A Basis in Situated Cognition
As noted throughout this book, ecological psychology research suggests that students play games and
participate in online communities based on goals and intentions that arise in-the-moment through social
interactions with other players, in-game objects, and non-player characters (NPCs); in other words, interactions
that emerge on-the-fly and in the context of a narrative of play (see Chapter 1, Castle Upon a Hill). Yet, nearly all
standard empirical studies of instructional game development and implementation to date (i.e., horse-race, t-test
comparisons of playful learning vs. traditional classroom learning) have presumed that game design and playful
learning behave like pharmaceutical medications that are chemically identical and organized in pre-measured
dosages for all learners. Educational researchers have hyperfocused on game effects (measured via high-stakes or
other exams) by controlling for course content, learner characteristics, and value-added features (e.g., core
mechanics, visuals, narrative structure, etc.) under the assumption that fiddling with a particular independent
variable will yield a particular, predictable result.
But consider the logic underpinning this approach. Generally speaking, if one assumption is true (i.e.,
that individual prior experiences inherently shape individual future experiences), then the other (i.e., that games
can be a magic “pill” to treat learning) necessarily cannot be. Are players individual learners who experience gameplay on a
personal level that cannot be replicated, even across the same player playing the same game multiple times? Or
are individuality, pre-existing experience, and environmental context irrelevant with respect to solving
macroscopic instructional challenges? The paradoxical nature of these questions is the primary reason we see
current educational game research as fundamentally flawed (and believe it’s a blind spot for practicing teachers,
technology innovators, and others seeking to add educational games to their respective instructional toolkits).
We base our argument on the notion that game players play with slightly different goals in mind each
time they enter a given world or universe. Irrespective of whether or not a player has previously played the same
game with the same individual objective(s), micro-differences in player attitudes, knowledge, and movements
through the game environment will always yield new emergent goals and experiences (even if a new play session
might superficially appear indistinguishable from an earlier one). The phenomenon becomes more pronounced as
we consider the wide variation of game interactions across players and across content/update patches of a
particular game. This is why, returning to our earlier analogy, the highly individual and continually evolving nature
of play renders game-based instruction (in and of itself) a mostly useless treatment for instructional maladies:
individual games prescribed as “pills” would need dynamic content capable of self-modification to produce
positive outcomes for any individual patient, and patients would need personalized dosages (or different
medications altogether) to treat the same underlying illness.
Thus far, the messy, inconclusive un-interpretability of player experiences across games, players, and time
(studied via meta-analytical techniques) has yielded little actionable data (see: Clark et al., 2016; Vogel et al., 2006;
Wouters et al., 2013; Young et al., 2012) and underscored our suggestion that data reduction by way of averaging
across studies, game players, games, and curricular content stands in opposition to the fundamental tenets of
situated cognition and ecological psychology. Instead, analyses should target the dynamic interactions
between players and game affordances (given that distillation of player-game-environment interactions into a one-
size-fits-all result obscures important learning outcomes at the individual difference level). To drive this point
home, we need only examine real-world cases where traditional empirical research, specifically in the realm of
educational technology design, fell apart during or after implementation.
The Inevitability of Epic Fail
Looking back on a number of high-profile, technology-driven projects aimed at creating new approaches
to instruction, there is a decades-long pattern of learning science, instructional design, and cognitive theory
coalescing into some spectacular innovation that eventually disintegrates as quickly as it came into being, a
phenomenon we refer to as Epic Fail.
In general, Epic Fail begins with early data collection revealing one or more important findings for a given
instructional variable (e.g., student-side variables like test achievement, self-regulation, course completion,
motivation, and engagement OR teacher-side variables like advanced content knowledge, wait time, inquiry-based
pedagogy use, and collaboration among teams of teachers). Once the results are published, widespread
excitement and adoption begin to spread (e.g., interdistrict collaboration, statewide or national implementation).
However, follow-up research conducted weeks, months, or years later ultimately reveals that the project’s
purported benefits were not sustained: they arise when project originators are closely involved with teacher
training and classroom implementation but disappear when project originators pull back to let the innovation
thrive on its own. Notable victims of this cycle include Logo (Papert, 1980), The Adventures of Jasper Woodbury
(Cognition and Technology Group at Vanderbilt [CTGV], 1992), Apple’s Hypercard, inquiry-based simulations like
Model-It (Fretz et al., 2002), and, perhaps unsurprisingly, game-based instruction (with positive short-term and
negative long-term outcomes befalling a variety of educational video games [Honey & Hilton, 2011; Young et al.,
2012], games for non-academic instruction [e.g., Aronowsky, Sanzenbacher, Thompson, Villanosa, & Drew, 2012;
Sylvan, Larsen, Asbell-Clark, & Edwards, 2012] and scholastic tabletop roleplaying games [e.g., Slota, Travis, &
Ballestrini, 2012]).
With such limited long-term success and sustained impact on classroom learning, we can’t help but wonder: “Why
do technology-rich research innovations always seem to fail once core designers are no longer directly
involved?” In the following subsections, we’ll explore our question using a few of the high-profile examples cited
above. We’ll also propose three possible (if not probable) reasons for the inevitability of Epic Fail: 1) Fatal
Mutation Due to Assimilation; 2) Loss of Fidelity; and 3) Failure to Thrive.
Learning from the Past
There is still debate over the extent to which even simple technologies like calculators and word
processors have meaningfully affected school classrooms, never mind more advanced tools like geospatial mapping
software, programming languages, 3D avatar-based virtual worlds, scientific modeling environments, and online
video series used to flip classrooms. The rise and subsequent fall of said technologies demonstrates how no
amount of time, effort, and money can guarantee sustained, long-term change. As it turns out, the inherent
complexity of novel, tech-driven instruction can undermine both good ideas and intentions:
Logo
Based on Papert’s exploration of constructionist learning environments, Logo was developed in
1967 as a programming language that made LISP-like artificial intelligence programming accessible to elementary
students through a small, robotic half-sphere that resembled a plastic turtle (later represented graphically on
screen). It received widespread research attention and acclaim from teachers at the time of its implementation, but
rather than acknowledging a new paradigm for student learning, schools tended to assimilate Logo into their tried-
and-true method of education: direct instruction. Once the developers and researchers left the school
environment, classroom teachers defaulted to teaching about Logo in place of teaching with Logo. As a result,
students were led to memorize Logo programming commands (e.g., FD 50 RT 90) from teacher-generated
worksheets. After peaking in the mid-1980s (largely due to the widespread adoption of the Apple II computer), most of
Logo’s 197 compilers/interpreters fell out of educational use despite numerous research studies demonstrating
Logo’s instructional effectiveness when implemented as Papert originally envisioned. Not long after, Papert (1993)
concluded that large-scale school reform was likely impossible and chose to focus on smaller-scale projects
(Papert, 1997).
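To make concrete what those rote-memorized commands actually encode, the following sketch interprets a handful of turtle commands like the FD 50 RT 90 sequence mentioned above. It is written in Python rather than Logo, and the `run_logo` helper and its four-command subset are our own illustration, not part of any historical Logo implementation:

```python
import math

def run_logo(program, x=0.0, y=0.0, heading=0.0):
    """Interpret a tiny subset of Logo turtle commands (FD, BK, RT, LT).

    heading is in degrees, with 0 pointing "up" (north) and values
    increasing clockwise, matching Logo's turtle conventions.
    Returns the turtle's final (x, y, heading).
    """
    tokens = program.split()
    for cmd, arg in zip(tokens[::2], tokens[1::2]):
        n = float(arg)
        if cmd.upper() == "FD":      # forward n steps along current heading
            x += n * math.sin(math.radians(heading))
            y += n * math.cos(math.radians(heading))
        elif cmd.upper() == "BK":    # backward n steps
            x -= n * math.sin(math.radians(heading))
            y -= n * math.cos(math.radians(heading))
        elif cmd.upper() == "RT":    # turn right n degrees
            heading = (heading + n) % 360
        elif cmd.upper() == "LT":    # turn left n degrees
            heading = (heading - n) % 360
    return x, y, heading

# "FD 50 RT 90" repeated four times traces a square and brings the
# turtle back (approximately) to the origin, facing its original heading.
print(run_logo("FD 50 RT 90 " * 4))
```

Recited from a worksheet, FD 50 RT 90 is just a string to memorize; executed four times in sequence, it traces a square and brings the turtle home, which is precisely the kind of emergent, in-context discovery Papert intended students to make through play.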
The Adventures of Jasper Woodbury
Vanderbilt University’s Jasper Woodbury videodisc series was a professionally-filmed video production
designed to support middle school mathematics and problem solving. Drawing on contemporary learning theories
(i.e., situated cognition, anchored instruction), these videos were meant to provide a meaningfully-authentic
learning context prior to and in conjunction with other instructional activities. Unfortunately, fearing that students
were unprepared to handle the series’ outward complexity, many teachers chose to educate their classes using the
stories as post-instruction word problems rather than a context for inquiry-driven learning. Once the program’s
researchers ceased their direct interaction with participating educators, the videos became mainly capstone
activities, relieving them of any potential they once had for grounding abstract learning in the real world. The
migration of videodisc to DVD coupled with rapid antiquation of the series’ content (e.g., the price of gas as part of
the calculations) further sealed the program’s fate. By the late 1990s, Jasper had been shelved.
HyperCard
HyperCard was an Apple, Inc. invention, similar to a programmable PowerPoint slide presentation, that
served as a framework for several other learning technology innovations in the late 1990s and early 2000s. It was
most widely used by teachers and subject matter experts hoping to create focused classroom tutorials, collect
dribble file data for assessments, and interactively control videodisc players. However, rapid technological
advancement during the 1990s rendered HyperCard obsolete and sidelined all related instructional materials. Once
Apple stopped supporting the system in favor of HTML and Java (the last update was made in 1998 even though
services were supported until March 2004), work derived from thousands of HyperCard programming hours was
lost or abandoned. With no simple migration path, HyperCard vanished from the classrooms its creators hoped it
would revolutionize. Like other instructional innovations, the project’s failed implementation highlights how
Information Age technologies come and go so quickly that tech-specific innovative pedagogy may be doomed to
fail (or at least fade) even if project originators remain available for widespread training and implementation.
Inquiry-Based Science Simulations
Acknowledging that the advanced sciences were increasingly reliant upon algorithm-driven theoretical
models of phenomena rather than direct observational data, cognitive scientists in the early 2000s began providing
middle and high school students with tools that would enable inquiry-based exploration of dynamic environmental
systems. Tools like Model-It (Fretz et al., 2002) and the Virtual Solar System (Barab et al., 2000) were designed to
help students identify central variables and catalog complex variable interrelationships by developing theoretical
models that could be tested via animated simulation (Tsurusaki, Amiel, & Hay, 2003).
These and similar projects were meant to transform American education by encouraging science teachers
to adopt inquiry-based pedagogy when teaching about complex systems; to model a particular theory-driven
approach to instruction rather than assume the role of a be-all, end-all instructional silver bullet; to usher in a
wholesale re-envisioning of our educational institutions. As with other high-profile projects, work proceeding from
small-scale laboratory studies to school-based trials suggested that students were fully capable of using instincts
about environmental phenomena to construct workable theoretical models. But the broad pool of educators who
signed on via publishing companies and other distribution networks lacked direct access to the research literature
and frameworks of inquiry necessary for understanding program development and implementation. Not every
student in the class easily understood the modeling software. Students working in groups met with widely variable
success depending on difficult-to-predict group dynamics. Funding from public and private stakeholders
evaporated among reports of lost instructional time and teachers teaching how to navigate the tool rather than
the course content. Eventually, user interest in patches, updates, user-created mods, and online discussion hubs
flagged.
After just a few short years, Model-It and the Virtual Solar System had been relegated to the educational
innovation graveyard.
Understanding Trajectories of Failure
Given the number and scale of investments made to support endeavors like those described above, it
initially struck us as odd that epic failure could be so frequent (both in- and outside the realm of educational
technology; e.g., instructional design, adult education, creativity, learning science, special education, multicultural
education, etc.). Naturally, we would not be so naive as to think all educational technology projects could or
should take only a trajectory to success, but no single project that we know of has ever spurred large-scale
educational reform, regardless of project originator, institution, or technology. Likewise, all major projects that we
know of, regardless of those same elements, have fallen apart before reaching successful large-scale
implementation. That leaves us asking: “What gives?”
After thoughtful analysis and discussion with our contemporaries, we have come to believe that there are
three major counter forces common to technology-rich educational research innovations that move developers
and researchers onto a trajectory of failure. Each counter force, while bearing some similarity to the others, is a
unique challenge that must be controlled before, during, and after implementation, a difficult (if not impossible)
prospect.
Below, we explore how and why they occur in situ as a consequence of project implementation.
1) Fatal Mutation Due to Assimilation
Fatal Mutation Due to Assimilation refers to teacher-generated changes that are fundamentally in
opposition to a project’s theoretical foundations and goals. Our word choice here is intentional: like certain
genetic mutations in biological life, a single or small number of mutations to the theoretical DNA of a large-scale
project can bring about a swift, painful death. In the case of Logo, constructivist theory dictated that students
would discover the programming language in-context, including having younger students possess and share
information that older students had not yet learned. Once schools started rejecting this approach (arguing that
older students should learn more advanced content than younger ones), teachers began having their classes
memorize decontextualized commands before allowing any interaction with the cybernetic turtle. While some
teacher-directed modifications might have been less damaging than others, vulnerabilities emergent through the
project’s wide implementation made it possible—likely, even—for minor problems to erode Logo’s viability and
sustainability. For Papert (1980), this amounted to a kind of Piagetian Assimilation, with new ideas being forced
into existing schemata (i.e., direct, teacher-led instruction) rather than leveraged toward the reformation of
schools as learning ecologies.
2) Loss of Fidelity
For the purposes of this discussion, we characterize Loss of Fidelity as participants attempting to do what the
designers and researchers intended but failing to focus on core content, adding materials that are antithetical to
project objectives, and/or watering down required activities to the point that they are no longer effective. This
can be thought of, in part, as personal teacher preference running up against designer recommendations, but the
problem is actually a bit more complex, occurring whenever there is a schism between individual teacher
intentions, classroom constraints, and school reality. Dusenbury et al. (2003) explored this precise issue in the
context of drug abuse prevention programs, identifying five major measures of fidelity: Dosage, Adherence,
Program Differentiation, Participant Responsiveness, and Quality of Program Delivery. We believe their framework
applies directly to the technology-based classroom interventions in our chosen examples.
Dosage problems include agreeing to participate in a daily intervention program but, rather than implementing
the set treatment each day, following through with that program only once per week or less. In the case of a
schoolteacher, intervention might be interrupted by legitimate competing events (e.g., fire drills, required testing,
schedule changes), or it may manifest as a timing issue wherein dosage is miscalculated or deliberately modified to
fit some preconceived schedule (e.g., teaching about the tool instead of with it to ensure students can complete all
of their learning stations before the bell rings). Returning to one of our real world cases, teachers who sought to
utilize The Adventures of Jasper Woodbury but were concerned about the instructional time commitment would
often present the first episode (i.e., Journey to Cedar Creek) following direct instruction about distance, rate, and
time, thus “covering” the content in fewer than three days. Even if instruction took place during the appropriate
content unit, any teacher who modified the implementation timeline (regardless of reasoning) inherently altered
dosage as well (i.e., showing the videos after instruction instead of using them as a macro-context for the coming
week’s activities).
Adherence and Program Differentiation refer to the addition of instructional practices or pedagogies that
make a unique program more like a particular pre-existing program. While the researcher may wish to implement
a purely constructivist program, for instance, a participating classroom teacher might choose to add ClassDojo™ or
a similar behaviorism-based tool to reinforce certain learning behaviors (i.e., adhering to behaviorist pedagogy).
This eclectic approach can enhance an intervention, but it can also make novel innovations less distinguishable
from existing programs (thus preventing the researcher from assessing any uniquely added benefit or even
measuring the innovation’s general effectiveness).
Quality of Delivery refers to how well instructors understand the theoretical foundations of a given
innovation and dynamically interact with learners in a manner consistent with the underlying design principles,
especially when their guides and prepared curriculum don’t work out exactly as planned. Teaching “in the cracks”
(i.e., in a live classroom where interactions cannot be scripted) requires implementers to “fill” non-program
activities and discussion with information and responses that are consistent with the designer’s theoretical
framework. This bears a direct relationship to Participant Responsiveness, the way instructional interventions are
received by the target audience (i.e., both teachers and students). Because instruction is intended to induce
particular learning experiences and interactions, miscommunication or ineffectual implementation may lead the
audience to miss the intervention’s situated value. When elements like Dosage (e.g., how much Jasper or Logo
instruction is needed before measurable changes in math achievement can be expected) conflict with school
scheduling or administrative initiatives, Quality of Delivery and Participant Responsiveness tend to suffer dramatic
setbacks.
3) Failure to Thrive
The third counter force, Failure to Thrive, represents a pattern wherein lack of researcher oversight or
sustained grant funding causes instructors to gradually shift away from program goals, theories, and procedures
present at the time of initial implementation. In part, this appears to involve situations where participating
educators “do it for the researcher(s)” as a personal favor, or for the status of being part of the research team, or
to obtain resources/benefits for participating in a grant-funded project. Once the project originators leave, the
teachers simply move on or revert to prior instructional practices.
In describing a situated view of naval quartermasters, Hutchins (1995) addressed how success arises from
interaction among people and artifacts in the world. From this perspective, failure can occur when any one of
these potential interactions is interrupted: teacher-tool, researcher/designer-tool, and teacher-researcher. Each
interaction must be functioning and ongoing in some form to provide the feedback necessary for sustaining
technology-rich programs over time. While teachers often crave interactions with talented adults and welcome the
opportunity to share their insights, debate with researchers, reflect on and explain their own pedagogy, and
receive critiques of their teaching from academic peers, the social and cognitive factors arising from broken
interaction can obscure progress toward a common objective. Barron (2003) explained this in terms of smart
groups that generate workable solutions yet ignore them as a result of structural social dynamics.
Following this line of logic, any interruption of teacher-researcher interaction, intended or not, may allow
misunderstanding, lack of personal buy-in or time investment, social conflict, or other social dynamics to
overshadow the project’s original goals. These issues eventually consume program implementation and push
participating teachers back into their respective comfort zones.
Avoiding the Precipice of Epic Failure
With the advent of major government efforts to improve schools (e.g., No Child Left Behind, Race to the
Top, Common Core State Standards, Every Student Succeeds), sweeping instructional change is even more
complicated to achieve than it was during the late 20th century. Devotion to improved performance (as tracked via
traditional quantitative measures) has come largely at the expense of innovation, and direct instruction in the form
of test preparation has served almost exclusively as the means to contend with the ever-growing clamor for
accountability and data-driven decision making. This has forced designers to meet parent, teacher, district, and
researcher needs while simultaneously avoiding pitfalls that transformed Logo, Jasper, HyperCard, and inquiry-
based modeling into mutant forms of their former selves.
Of course, because no two school environments are identical, some customization should be expected at
each implementation site. We believe such customization must be proactive, carefully planned and organized such
that program fidelity is maintained and project originators are able to recognize the Epic Fail trajectory before
falling victim to it. This requires a clear articulation of program elements that can be altered for convenience,
changed within a set margin, or not changed at all (including content and implementation methods). Additionally,
it means anticipating which program elements might challenge traditional school instruction and, accordingly,
planning ahead to minimize disruption of the innovation’s theoretical integrity. At times, we have referred to this
as a “fixin’s bar” approach where designers propose an array of potential modifications (e.g., adding or removing
particular activities, procedures, etc.) that 1) will not dramatically deviate from the program’s core
mission/foundation, and 2) can be used to locally customize the program without ruining its “flavor.”
Fatal Mutation Due to Assimilation can be avoided with planned customization and clear designation of
critical components. Project originators know that new sites will seek to customize the intervention to meet
unique characteristics of their context. For game-based learning designers and researchers in particular, this
necessitates various options for play and a clear list of innovation-specific recommendations that can help
instructors fit games into pre-existing core curricula. A 1:1 learning and game objective relationship can ensure
overlap between state and national standards (e.g., Young et al., 2012) and decrease the likelihood that
participating teachers will simply assimilate games into existing practices like direct instruction or timed “stations.”
Similarly, researchers and designers can pre-empt Loss of Fidelity by making the parameters, theoretical
frameworks, and logic models that drive their designs as transparent as possible. Assumptions made during tool or
program development must be aligned with how the tool or program is intended to operate in situ. Teacher
support through regular follow-up (including audio/video, on-going training, face-to-face focus groups, surveying,
student feedback, and other qualitative tools) should target teacher understanding of learning theory in addition
to technical operating procedures and troubleshooting techniques. Above all, users must be invited to join as many
development discussions as possible, ensuring the innovation’s on-the-ground implementation can and will
actually support student learning outcomes.
Failure to Thrive can be combatted through the creation of self-sustaining, dynamic communities of
practice (e.g., Lave & Wenger, 1991) that exist alongside the original innovation. Any such (metagame) community
must be able to evolve over time under standard innovation parameters and within the innovation’s underlying
theoretical framework. While this might include the creation of a webpage, forum, YouTube channel, wiki, and/or
series of regular face-to-face meetings, continued success will only come from ongoing facilitation by leading
experts (i.e., project originators and trained practitioner-specialists). All teachers hoping to become community
practitioner-specialists should be capable of describing the program’s underlying theory and show evidence of
their ability to adapt the theory to fit within the scope of a living classroom environment. When possible, project
originators and other expert researchers should return to the community for two-way dialogue about progress
in the field, modifications to the theory, and related research projects. The
application of cost-sharing user fees may increase school buy-in, providing an impetus to remain involved with the
innovation and helping to share the burden of community development. Though teacher-tool and
researcher/designer-tool interactions will likely continue regardless of community formation, teacher-researcher communication is the
only element that will sustain program fidelity beyond the original scope of the project.
To us, it seems clear that innovation implementation should be planned with early consideration of
learning effects, customizability within design parameters, and purity of theory-based interaction goals. For that to
happen, player goals and solution trajectories must align with socially constructed knowledge (i.e., core curricula)
at a 1:1 ratio, made easier when developers create implementation boundary constraints in anticipation of Epic
Failure. Of course, averaging across user behavior may seem like a simple way to separate outcome wheat from
chaff (it requires fewer resources, reduces development time, and makes statistical analysis rather
straightforward), but doing so also carries risks, not least the risk of discarding valuable data as chaff when it is not.
This is why we’ve argued a situated cognition worldview is especially helpful for educational innovators:
unpredictable, emergent factors invariably affect project outcomes, so it is necessary to assume that some users
(students, teachers, administrators, institutions) will identify affordances that the designers cannot or will not. In response,
designers must build fail-safes capable of 1) inducing the adoption of specific, designer-aligned goals; and 2)
curbing behavior to conform to a particular model of thinking/action (i.e., giving users enough tether to approach
the precipice of Epic Fail without falling over the ledge). That is the only way to adequately balance user desire for
agency with developer need for consistency and ensure long-term implementation can be successful.
Castle Upon a Hill
Any game-based learning research aimed at supporting macroscopic educational reform requires
researchers capable of on-going school-level involvement, random fidelity checks, and two-way monitoring of the
innovation through the creation of shared researcher, designer, and teacher communities. The challenges outlined in this
chapter are numerous and complex, but we believe project originators who are mindful about the maintenance of
an active community role and leveraging educational changes at federal, state, and local levels can create
instructional innovations that work and can be scaled for mass implementation. That is why we chose to focus on
the particular cases featured throughout this chapter: to provide a foundation for understanding how and why
repeated failure happens and to help guide the development of more engaging and long-lasting alternatives to
current K-12 and higher education practices.
That said, we feel it appropriate to close with a (slightly modified) version of the offer we made at the
start of this book:
“It’s dangerous to go alone! Take this [situated cognition].”
Ecopsychology is possibly the most powerful means of exploding the existing GBL castle and replacing it
with a new, improved version. We hope our analyses and advice, in addition to those put forth by our
co-authors, will point the field toward more sophisticated, thoughtful consideration of how and why games behave
as complex learning ecologies. With your cooperation as our Player 2, we’re confident that our collective
educational endeavors will be stronger, more effective, andabove allless susceptible to Epic Fail.
References
Aronowsky, A., Sanzenbacher, B., Thompson, J., Villanosa, K., & Drew, J. (2012). When simple is not best: Issues
that arose using WhyReef in the Conservation Connection digital learning program. GLS 8.0 Conference
Proceedings, 24-29.
Barab, S. A., Hay, K. E., Squire, K., Barnett, M., Schmidt, R., Karrigan, K., Yamagata-Lynch, L., & Johnson, J. (2000).
Virtual Solar System Project: Learning through a technology-rich, inquiry-based, participatory learning
environment. Journal of Science Education and Technology, 9(1), 7-25.
Barron, B. (2003). When smart groups fail. Journal of the Learning Sciences, 12(3), 307-359.
Bergmann, J. & Sams, A. (2012). Flipping the classroom. Tech & Learning, 32(10), 42.
Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016). Digital games, design, and learning: A systematic
review and meta-analysis. Review of Educational Research, 86(1), 79-122. doi:
10.3102/0034654315582065
Cognition and Technology Group at Vanderbilt (1992). Technology and the design of generative learning
environments. In T.M. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction: A
conversation. Hillsdale NJ: Lawrence Erlbaum Associates.
Dusenbury, D., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of
implementation: Implications for drug abuse prevention in school settings. Health Education Research,
18(2), 237-256.
Fretz, E. B., Wu, H., Zhang, B., Davis, E. A., Krajcik, J. S., & Soloway, E. (2002). An investigation of software scaffolds
supporting modeling practices. Research in Science Education, 32(4), 567-589.
Honey, M. A., & Hilton, M. (Eds.). (2011). Learning science through computer games and simulations. Committee
on Science Learning: Computer Games, Simulations, and Education, National Research Council.
Washington, DC: The National Academies Press.
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
Lave, J. & Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge
University Press.
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York, NY: Basic Books.
Papert, S. (1993). The Children’s Machine. New York, NY: Basic Books.
Papert, S. (1997). Why school reform is impossible. Review of Tyack and Cuban (1995). Accessed May 31, 2013
from http://www.papert.org/articles/school_reform.html
Slota, S. T., Travis, R., & Ballestrini, K. (2012). Operation BIOME: The design of a situated, social constructivist
ARG/RPG for biology education. GLS 8.0 Conference Proceedings, 261-267.
Sylvan, E., Larsen, J., Asbell-Clark, J., & Edwards T. (2012). The canary’s not dead, it’s just resting: The productive
failure of a science-based augmented-reality game. GLS 8.0 Conference Proceedings, 30-37.
Tsurusaki, B., Amiel, T., & Hay, K. (2003). Using modeling-based inquiry in the Virtual Solar System. Presentation
at EdMedia: World Conference on Educational Media and Technology, Honolulu, Hawaii, USA. ISBN
978-1-880094-48-8.
Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and
interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34,
229-243. doi:10.2190/FLHV-K4WA-WPVQ-H0YM
Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive
and motivational effects of serious games. Journal of Educational Psychology, 105, 249-265.
doi:10.1037/a0031311
Young, M., Slota, S., Cutter, A., Jalette, G., Lai, B., Mullin, G., Simeoni, Z., Tran, M., & Yukhymenko, M. (2012). Our
princess is in another castle: A review of trends in video gaming for education. Review of Educational
Research, 82(1), 61-89. doi: 10.3102/0034654312436980