CHAPTER 12
THE INEVITABILITY OF EPIC FAIL:
EXPLODING THE CASTLE WITH SITUATED LEARNING
Stephen T. Slota, Ph.D., University of Connecticut
Michael F. Young, Ph.D., University of Connecticut
Deconstruction of previous large-scale efforts to leverage innovative technology toward school
improvement reveals a developmental trend of initial excitement followed by careful research, promising results,
and—tragically—some mutation that leads to an eventual loss of impact. This applies to all major educational
technology endeavors over the last half-century and is observable in any situation where designers and innovators
have close and direct involvement with teacher training but the innovation’s core theoretical foundations and key
concepts are gradually (or abruptly) shed during and after implementation. For those of us who study and design
playful learning environments, this emphasizes a critical need to ask: “Why do technology-rich research
innovations always seem to fail once they grow beyond the control of the design team?”
In this closing chapter, we describe the three pillars of a concept we refer to as Epic Fail. Doing so will help
us identify applicable, overarching concerns with contemporary instructional design and provide suggestions that
can help address difficulties with dissemination, local customization, and modding beyond the scope of an original
designer’s control. Additionally, it will help us tie suggestions to non-game innovations in order to highlight how
such issues exist across educational subfields and how we—educators, designers, and researchers—might be able
to make inevitable epic failures a bit less inevitable.
A Basis in Situated Cognition
As noted throughout this book, ecological psychology research suggests that students play games and
participate in online communities based on goals and intentions that arise in-the-moment through social
interactions with other players, in-game objects, and non-player characters (NPCs)—in other words, interactions
that emerge on-the-fly and in the context of a narrative of play (see Chapter 1, Castle Upon a Hill). Yet, nearly all
standard empirical studies of instructional game development and implementation to date (i.e., horse-race, t-test
comparisons of playful learning vs. traditional classroom learning) have presumed that game design and playful
learning behave like pharmaceutical medications that are chemically identical and organized in pre-measured
dosages for all learners. Educational researchers have hyperfocused on game effects (measured via high-stakes or
other exams) by controlling for course content, learner characteristics, and value-added features (e.g., core
mechanics, visuals, narrative structure, etc.) under the assumption that fiddling with a particular independent
variable will yield a particular, predictable result.
But consider the logic underpinning this approach. Generally speaking, if one assumption is true (i.e., that individual prior experiences inherently shape individual future experiences), then the other (i.e., that games can be a magic "pill" to treat learning) necessarily cannot be. Are players individual learners who experience gameplay on a
personal level that cannot be replicated, even across the same player playing the same game multiple times? Or
are individuality, pre-existing experience, and environmental context irrelevant with respect to solving
macroscopic instructional challenges? The paradoxical nature of these questions is the primary reason we see
current educational game research as fundamentally flawed (and believe it's a blind spot for practicing teachers,
technology innovators, and others seeking to add educational games to their respective instructional toolkits).
We base our argument on the notion that game players play with slightly different goals in mind each
time they enter a given world or universe. Irrespective of whether or not a player has previously played the same
game with the same individual objective(s), micro-differences in player attitudes, knowledge, and movements
through the game environment will always yield new emergent goals and experiences (even if a new play session
might superficially appear indistinguishable from an earlier one). The phenomenon becomes more pronounced as
we consider the wide variation of game interactions across players and across content/update patches of a
particular game. This is why, returning to our earlier analogy, the highly individual and continually evolving nature
of play renders game-based instruction (in and of itself) a mostly useless treatment for instructional maladies:
individual games prescribed as “pills” would need dynamic content capable of self-modification to produce
positive outcomes for any individual patient, and patients would need personalized dosages (or different
medications altogether) to treat the same underlying illness.
Thus far, the messy, inconclusive un-interpretability of player experiences across games, players, and time
(studied via meta-analytical techniques) has yielded little actionable data (see: Clark et al., 2016; Vogel et al., 2006;
Wouters et al., 2013; Young et al., 2012) and underscored our suggestion that data reduction by way of averaging
across studies, game players, games, and curricular content stands in opposition to the fundamental tenets of
situated cognition and ecological psychology. Instead, analyses should target the dynamic interactions between players and game affordances (given that distillation of player-game-environment interactions into a one-size-fits-all result obscures important learning outcomes at the level of individual differences). To drive this point home, we need only examine real-world cases where traditional empirical research—specifically in the realm of
educational technology design—fell apart during or after implementation.
The Inevitability of Epic Fail
Looking back on a number of high-profile, technology-driven projects aimed at creating new approaches to instruction, we see a decades-long pattern of learning science, instructional design, and cognitive theory
coalescing into some spectacular innovation that eventually disintegrates as quickly as it came into being—a
phenomenon we refer to as Epic Fail.
In general, Epic Fail begins with early data collection revealing one or more important findings for a given
instructional variable (e.g., student-side variables like test achievement, self-regulation, course completion,
motivation, and engagement OR teacher-side variables like advanced content knowledge, wait time, inquiry-based
pedagogy use, and collaboration among teams of teachers). Once the results are published, excitement spreads and adoption widens (e.g., interdistrict collaboration, statewide or national implementation).
However, follow-up research conducted weeks, months, or years later ultimately reveals that the project’s
purported benefits were not sustained: they arise when project originators are closely involved with teacher
training and classroom implementation but disappear when project originators pull back to let the innovation
thrive on its own. Notable victims of this cycle include Logo (Papert, 1980), The Adventures of Jasper Woodbury
(Cognition and Technology Group at Vanderbilt [CTGV], 1992), Apple's HyperCard, inquiry-based simulations like Model-It (Fretz et al., 2002), and—perhaps unsurprisingly—game-based instruction (with positive short-term and
negative long-term outcomes befalling a variety of educational video games [Honey & Hilton, 2011; Young et al.,
2012], games for non-academic instruction [e.g., Aronowsky, Sanzenbacher, Thompson, Villanosa, & Drew, 2012;
Sylvan, Larsen, Asbell-Clark, & Edwards, 2012] and scholastic tabletop roleplaying games [e.g., Slota, Travis, &
Ballestrini, 2012]).
With such limited long-term success and sustained impact on classroom learning, we can’t help but wonder: “Why
do technology-rich research innovations always seem to fail once core designers are no longer directly
involved?” In the following subsections, we’ll explore our question using a few of the high-profile examples cited
above. We’ll also propose three possible (if not probable) reasons for the inevitability of Epic Fail: 1) Fatal
Mutation Due to Assimilation; 2) Loss of Fidelity; and 3) Failure to Thrive.
Learning from the Past
There is still debate over the extent to which even simple technologies like calculators and word
processors have meaningfully affected school classrooms, never mind more advanced tools like geospatial mapping
software, programming languages, 3D avatar-based virtual worlds, scientific modeling environments, and online
video series used to flip classrooms. The rise and subsequent fall of these technologies demonstrate that no amount of time, effort, or money can guarantee sustained, long-term change. As it turns out, the inherent
complexity of novel, tech-driven instruction can undermine both good ideas and intentions:
Logo
Based on Papert's exploration of constructionism-based learning environments, Logo was developed in 1967 as a programming language that made LISP-like artificial intelligence programming accessible to elementary students through the graphical representation of a small, dome-shaped robot resembling a plastic turtle. It received widespread research attention and acclaim from teachers at the time of its implementation, but
rather than acknowledging a new paradigm for student learning, schools tended to assimilate Logo into their tried-
and-true method of education: direct instruction. Once the developers and researchers left the school
environment, classroom teachers defaulted to teaching about Logo in place of teaching with Logo. As a result,
students were led to memorize Logo programming commands (e.g., FD 50 RT 90) from teacher-generated
worksheets. After peaking in the mid-1980s (largely due to the development of the Apple II computer), most of
Logo’s 197 compilers/interpreters fell out of educational use despite numerous research studies demonstrating
Logo’s instructional effectiveness when implemented as Papert originally envisioned. Not long after, Papert (1993)
concluded that large-scale school reform was likely impossible and chose to focus on smaller-scale projects
(Papert, 1997).
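To make the about/with distinction concrete, the sketch below uses Python's turtle module as a modern stand-in for Logo (the chapter itself contains no code). It contrasts rote command entry with the kind of learner-built procedure Papert envisioned; the polygon procedure and its parameters are illustrative and not drawn from any particular Logo curriculum.

    import turtle

    # "Teaching about Logo": isolated commands memorized from a worksheet.
    t = turtle.Turtle()
    t.forward(50)   # FD 50
    t.right(90)     # RT 90

    # "Teaching with Logo": the learner composes commands into a reusable
    # procedure, then experiments with its parameters to discover geometry.
    def polygon(t, sides, length):
        for _ in range(sides):
            t.forward(length)
            t.right(360 / sides)

    polygon(t, 4, 50)   # a square
    polygon(t, 6, 40)   # a hexagon falls out of the same idea

    turtle.done()

In the first case the commands are the content to be memorized; in the second they are raw material for a construction the student owns.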
The Adventures of Jasper Woodbury
Vanderbilt University's Jasper Woodbury videodisc series was a professionally filmed video production designed to support middle school mathematics and problem solving. Drawing on contemporary learning theories (i.e., situated cognition, anchored instruction), these videos were meant to provide a meaningfully authentic learning context prior to and in conjunction with other instructional activities. Unfortunately, fearing that students
were unprepared to handle the series’ outward complexity, many teachers chose to educate their classes using the
stories as post-instruction word problems rather than a context for inquiry-driven learning. Once the program’s
researchers ceased their direct interaction with participating educators, the videos became mainly capstone
activities, stripping them of any potential they once had for grounding abstract learning in the real world. The migration from videodisc to DVD, coupled with the rapid antiquation of the series' content (e.g., the price of gas as part of
the calculations) further sealed the program’s fate. By the late 1990s, Jasper had been shelved.
HyperCard
HyperCard was an Apple, Inc. invention, similar to a programmable PowerPoint slide presentation, that served as a framework for several other learning technology innovations in the late 1990s and early 2000s. It was
most widely used by teachers and subject matter experts hoping to create focused classroom tutorials, collect
dribble file data for assessments, and interactively control videodisc players. However, rapid technological
advancement during the 1990s rendered HyperCard obsolete and sidelined all related instructional materials. Once
Apple stopped supporting the system in favor of HTML and Java (the last update was made in 1998 even though
services were supported until March 2004), work derived from thousands of HyperCard programming hours was
lost or abandoned. With no simple migration path, HyperCard vanished from the classrooms its creators hoped it
would revolutionize. As with other instructional innovations, the project's failed implementation highlights how
Information Age technologies come and go so quickly that tech-specific innovative pedagogy may be doomed to
fail (or at least fade) even if project originators remain available for widespread training and implementation.
Inquiry-Based Science Simulations
Acknowledging that the advanced sciences were increasingly reliant upon algorithm-driven theoretical
models of phenomena rather than direct observational data, cognitive scientists in the early 2000s began providing
middle and high school students with tools that would enable inquiry-based exploration of dynamic environmental
systems. Tools like Model-It (Fretz et al., 2002) and the Virtual Solar System (Barab et al., 2000) were designed to
help students identify central variables and catalog complex variable interrelationships by developing theoretical
models that could be tested via animated simulation (Tsurusaki, Amiel, & Hay, 2003).
These and similar projects were meant to transform American education by encouraging science teachers to adopt inquiry-based pedagogy when teaching about complex systems; to model a particular theory-driven approach to instruction rather than assume the role of a be-all, end-all instructional silver bullet; and to usher in a wholesale re-envisioning of our educational institutions. As with other high-profile projects, work proceeding from
small-scale laboratory studies to school-based trials suggested that students were fully capable of using instincts
about environmental phenomena to construct workable theoretical models. But the broad pool of educators who
signed on via publishing companies and other distribution networks lacked direct access to the research literature
and frameworks of inquiry necessary for understanding program development and implementation. Not every
student in the class easily understood the modeling software. Students working in groups met with widely variable
success depending on difficult-to-predict group dynamics. Funding from public and private stakeholders
evaporated amid reports of lost instructional time and teachers teaching how to navigate the tool rather than
the course content. Eventually, user interest in patches, updates, user-created mods, and online discussion hubs
flagged.
After just a few short years, Model-It and the Virtual Solar System had been relegated to the educational
innovation graveyard.
Understanding Trajectories of Failure
Given the number and scale of investments made to support endeavors like those described above, it
initially struck us as odd that epic failure could be so frequent (both inside and outside the realm of educational
technology; e.g., instructional design, adult education, creativity, learning science, special education, multicultural
education, etc.). Naturally, we would not be so naive as to think all educational technology projects could or
should take only a trajectory to success, but no single project—that we know of—has ever spurred large-scale
educational reform, regardless of project originator, institution, or technology. Likewise, all major projects that we
know of—regardless of those same elements—have fallen apart before reaching successful large-scale
implementation. That leaves us asking: “What gives?”
After thoughtful analysis and discussion with our contemporaries, we have come to believe that there are
three major counter forces common to technology-rich educational research innovations that move developers
and researchers onto a trajectory of failure. Each counter force, while bearing some similarity to the others, is a
unique challenge that must be controlled before, during, and after implementation—a difficult (if not impossible)
prospect.
Below, we explore how and why they occur in situ as a consequence of project implementation.
1) Fatal Mutation Due to Assimilation
Fatal Mutation Due to Assimilation refers to teacher-generated changes that are fundamentally in
opposition to a project’s theoretical foundations and goals. Our word choice here is intentional: like certain
genetic mutations in biological life, a single or small number of mutations to the theoretical DNA of a large-scale
project can bring about a swift, painful death. In the case of Logo, constructivist theory dictated that students
would discover the programming language in-context, including having younger students possess and share
information that older students had not yet learned. Once schools started rejecting this approach (arguing that
older students should learn more advanced content than younger ones), teachers began having their classes
memorize decontextualized commands before allowing any interaction with the cybernetic turtle. While some
teacher-directed modifications might have been less damaging than others, vulnerabilities emergent through the
project’s wide implementation made it possible—likely, even—for minor problems to erode Logo’s viability and
sustainability. For Papert (1980), this amounted to a kind of Piagetian Assimilation, with new ideas being forced
into existing schemata (i.e., direct, teacher-led instruction) rather than leveraged toward the reformation of
schools as learning ecologies.
2) Loss of Fidelity
For the purposes of this discussion, we characterize Loss of Fidelity as participants doing what the
designers and researchers intended but failing to focus on core content, adding materials that are antithetical to
project objectives, and/or watering down required activities to the point that they are no longer effective. This
can be thought of, in part, as personal teacher preference running up against designer recommendations, but the
problem is actually a bit more complex, occurring whenever there is a schism between individual teacher
intentions, classroom constraints, and school reality. Dusenbury et al. (2003) explored this precise issue in the
context of drug abuse prevention programs, identifying five major measures of fidelity: Dosage, Adherence,
Program Differentiation, Participant Responsiveness, and Quality of Program Delivery. We believe their framework
applies directly to the technology-based classroom interventions in our chosen examples.
Dosage issues include agreeing to participate in a daily intervention program but—rather than implementing the set treatment each day—following through with that program only once per week or less. In the case of a
schoolteacher, intervention might be interrupted by legitimate competing events (e.g., fire drills, required testing,
schedule changes), or it may manifest as a timing issue wherein dosage is miscalculated or deliberately modified to
fit some preconceived schedule (e.g., teaching about the tool instead of with it to ensure students can complete all
of their learning stations before the bell rings). Returning to one of our real-world cases, teachers who sought to
utilize The Adventures of Jasper Woodbury but were concerned about the instructional time commitment would
often present the first episode (i.e., Journey to Cedar Creek) following direct instruction about distance, rate, and
time, thus “covering” the content in fewer than three days. Even if instruction took place during the appropriate
content unit, any teacher who modified the implementation timeline (regardless of reasoning) inherently altered
dosage as well (i.e., showing the videos after instruction instead of using them as a macro-context for the coming
week’s activities).
Adherence and Program Differentiation refer to the addition of instructional practices or pedagogies that
make a unique program more like a particular pre-existing program. While the researcher may wish to implement
a purely constructivist program, for instance, a participating classroom teacher might choose to add ClassDojo™ or a similar behaviorism-based tool to reinforce certain learning behaviors (i.e., adhering to behaviorist pedagogy).
This eclectic approach can enhance an intervention, but it can also make novel innovations less distinguishable
from existing programs (thus preventing the researcher from assessing any uniquely added benefit or even
measuring the innovation’s general effectiveness).
Quality of Delivery refers to how well instructors understand the theoretical foundations of a given
innovation and dynamically interact with learners in a manner consistent with the underlying design principles,
especially when their guides and prepared curriculum don’t work out exactly as planned. Teaching “in the cracks”
(i.e., in a live classroom where interactions cannot be scripted) requires implementers to “fill” non-program
activities and discussion with information and responses that are consistent with the designer’s theoretical
framework. This bears a direct relationship to Participant Responsiveness, the way instructional interventions are
received by the target audience (i.e., both teachers and students). Because instruction is intended to induce
particular learning experiences and interactions, miscommunication or ineffectual implementation may lead the
audience to miss the intervention’s situated value. When elements like Dosage (e.g., how much Jasper or Logo
instruction is needed before measurable changes in math achievement can be expected) conflict with school
scheduling or administrative initiatives, Quality of Delivery and Participant Responsiveness tend to suffer dramatic
setbacks.
3) Failure to Thrive
The third counter force, Failure to Thrive, represents a pattern wherein lack of researcher oversight or
sustained grant funding causes instructors to gradually shift away from program goals, theories, and procedures
present at the time of initial implementation. In part, this appears to involve situations where participating
educators “do it for the researcher(s)” as a personal favor, or for the status of being part of the research team, or
to obtain resources/benefits for participating in a grant-funded project. Once the project originators leave, the
teachers simply move on or revert to prior instructional practices.
In describing a situated view of naval quartermasters, Hutchins (1995) addressed how success arises from
interaction among people and artifacts in the world. From this perspective, failure can occur when any one of
these potential interactions is interrupted: teacher-tool, researcher/designer-tool, and teacher-researcher. Each
interaction must be functioning and ongoing in some form to provide the feedback necessary for sustaining
technology-rich programs over time. While teachers often crave interactions with talented adults and welcome the
opportunity to share their insights, debate with researchers, reflect on and explain their own pedagogy, and
receive critiques of their teaching from academic peers, the social and cognitive factors arising from broken
interaction can obscure progress toward a common objective. Barron (2003) explained how smart groups capable of generating workable solutions may nonetheless ignore those solutions as a result of structural social dynamics.
Following this line of logic, any interruption of teacher-researcher interaction, intended or not, may allow
misunderstanding, lack of personal buy-in or time investment, social conflict, or other social dynamics to
overshadow the project’s original goals. These issues eventually consume program implementation and push
participating teachers back into their respective comfort zones.
Avoiding the Precipice of Epic Failure
With the advent of major government efforts to improve schools (e.g., No Child Left Behind, Race to the
Top, Common Core State Standards, Every Student Succeeds), sweeping instructional change is even more
complicated to achieve than it was during the late 20th century. Devotion to improved performance (as tracked via
traditional quantitative measures) has come largely at the expense of innovation, and direct instruction in the form
of test preparation has served almost exclusively as the means to contend with the ever-growing clamor for
accountability and data-driven decision making. This has forced designers to meet parent, teacher, district, and
researcher needs while simultaneously avoiding pitfalls that transformed Logo, Jasper, HyperCard, and inquiry-
based modeling into mutant forms of their former selves.
Of course, because no two school environments are identical, some customization should be expected at
each implementation site. We believe such customization must be proactive, carefully planned, and organized such
that program fidelity is maintained and project originators are able to recognize the Epic Fail trajectory before
falling victim to it. This requires a clear articulation of program elements that can be altered for convenience,
changed within a set margin, or not changed at all (including content and implementation methods). Additionally,
it means anticipating which program elements might challenge traditional school instruction and, accordingly,
planning ahead to minimize disruption of the innovation’s theoretical integrity. At times, we have referred to this
as a “fixin’s bar” approach where designers propose an array of potential modifications (e.g., adding or removing
particular activities, procedures, etc.) that 1) will not dramatically deviate from the program’s core
mission/foundation, and 2) can be used to locally customize the program without ruining its “flavor.”
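As a purely hypothetical sketch of what such a designation might look like, the Python snippet below encodes a few program elements as core (unchangeable), bounded (adjustable within a set margin), or open (free for local customization). The element names, tiers, and rationales are illustrative assumptions, not specifications drawn from any of the projects discussed in this chapter.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProgramElement:
        name: str                      # what the implementer can see and touch
        tier: str                      # "core", "bounded", or "open"
        margin: Optional[str] = None   # allowable adjustment, if any
        rationale: str = ""            # ties the element back to the theory base

    fixins_bar = [
        ProgramElement("anchor video shown before direct instruction", "core",
                       rationale="macro-context must precede formal teaching"),
        ProgramElement("weekly dosage of game sessions", "bounded",
                       margin="two to four sessions per week",
                       rationale="dosage is tied to the program's logic model"),
        ProgramElement("classroom grouping strategy", "open",
                       rationale="teachers know their own group dynamics best"),
    ]

    # Project originators and site implementers can audit a local plan
    # against this specification before and during rollout.
    for element in fixins_bar:
        print(f"{element.tier.upper():8s}{element.name} ({element.margin or 'no margin'})")

Even a lightweight specification like this gives project originators something concrete to check a site's customizations against, rather than discovering fatal mutations after the fact.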
Fatal Mutation Due to Assimilation can be avoided with planned customization and clear designation of
critical components. Project originators know that new sites will seek to customize the intervention to meet
unique characteristics of their context. For game-based learning designers and researchers in particular, this
necessitates various options for play and a clear list of innovation-specific recommendations that can help
instructors fit games into pre-existing core curricula. A 1:1 relationship between learning objectives and game objectives can ensure overlap with state and national standards (e.g., Young et al., 2012) and decrease the likelihood that participating teachers will simply assimilate games into existing practices like direct instruction or timed "stations."
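A minimal, hypothetical sketch of that 1:1 mapping might look like the following; the game objectives and placeholder standard codes are invented for illustration and do not reference real alignments.

    # One game objective paired with exactly one curricular standard, so teachers
    # can see where the game slots into existing requirements.
    objective_to_standard = {
        "repair the reactor using ratio reasoning":    "MATH.RP.placeholder-1",
        "plot a route within the fuel budget":         "MATH.EE.placeholder-2",
        "negotiate trade terms with the NPC merchant": "ELA.SL.placeholder-3",
    }

    # A quick check that the mapping really is 1:1 (no standard is claimed twice).
    assert len(set(objective_to_standard.values())) == len(objective_to_standard)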
Similarly, researchers and designers can pre-empt Loss of Fidelity by making the parameters, theoretical
frameworks, and logic models that drive their designs as transparent as possible. Assumptions made during tool or
program development must be aligned with how the tool or program is intended to operate in situ. Teacher
support through regular follow-up (including audio/video, on-going training, face-to-face focus groups, surveying,
student feedback, and other qualitative tools) should target teacher understanding of learning theory in addition
to technical operating procedures and troubleshooting techniques. Above all, users must be invited to join as many
development discussions as possible, ensuring the innovation’s on-the-ground implementation can and will
actually support student learning outcomes.
Failure to Thrive can be combated through the creation of self-sustaining, dynamic communities of practice (e.g., Lave & Wenger, 1991) that exist alongside the original innovation. Any such (metagame) community
must be able to evolve over time under standard innovation parameters and within the innovation’s underlying
theoretical framework. While this might include the creation of a webpage, forum, YouTube channel, wiki, and/or
series of regular face-to-face meetings, continued success will only come from ongoing facilitation by leading
experts (i.e., project originators and trained practitioner-specialists). All teachers hoping to become community practitioner-specialists should be able to describe the program's underlying theory and show evidence of their ability to adapt the theory to fit within the scope of a living classroom environment. When possible, original practitioners and other expert researchers should return to the community for two-way dialogue about progress in the field, modifications to the theory, and related research projects. The
application of cost-sharing user fees may increase school buy-in, providing impetus to remain involved with the
innovation and assist with the burden of community development. Though teacher-tool and researcher/designer-
tool interactions will likely continue regardless of community formation, teacher-researcher communication is the
only element that will sustain program fidelity beyond the original scope of the project.
To us, it seems clear that innovation implementation should be planned with early consideration of
learning effects, customizability within design parameters, and purity of theory-based interaction goals. For that to
happen, player goals and solution trajectories must align with socially constructed knowledge (i.e., core curricula)
at a 1:1 ratio, made easier when developers create implementation boundary constraints in anticipation of Epic
Failure. Of course, averaging across user behavior may seem like a simple way to separate outcome chaff from
wheat (it requires fewer resources, reduces development time, and makes statistical analysis rather
straightforward), but doing so also comes with risks, not least of all discarding valuable data as chaff even if it isn’t.
This is why we’ve argued a situated cognition worldview is especially helpful for educational innovators:
unpredictable, emergent factors invariably affect project outcomes, so it is necessary to assume that some users—students, teachers, administrators, institutions—will identify affordances the designers cannot or will not. In response, designers must build fail-safes capable of 1) inducing the adoption of specific, designer-aligned goals; and 2) curbing behavior to conform to a particular model of thinking/action (i.e., giving users enough tether to approach the precipice of Epic Fail without falling over the ledge). That is the only way to adequately balance user desire for
agency with developer need for consistency and ensure long-term implementation can be successful.
Castle Upon a Hill
Any game-based learning research aimed at supporting macroscopic educational reform requires researchers capable of ongoing school-level involvement, random fidelity checks, and two-way monitoring of the innovation through the creation of shared researcher-designer-teacher communities. The challenges outlined in this
chapter are numerous and complex, but we believe project originators who are mindful about maintaining an active community role and leveraging educational changes at federal, state, and local levels can create instructional innovations that work and can be scaled for mass implementation. That is why we chose to focus on
the particular cases featured throughout this chapter, to provide a foundation for understanding how and why
repeated failure happens and to help guide the development of more engaging and long-lasting alternatives to
current K-12 and higher education practices.
That said, we feel it appropriate to close with a (slightly modified) version of the offer we made at the
start of this book:
“It’s dangerous to go alone! Take this [situated cognition].”
Ecopsychology is possibly the most powerful means of exploding the existing GBL castle and replacing it
with a new, improved version. We hope our analyses and advice—in addition to those put forth by our co-
authors—will point the field toward more sophisticated, thoughtful consideration of how and why games behave
as complex learning ecologies. With your cooperation as our Player 2, we’re confident that our collective
educational endeavors will be stronger, more effective, and—above all—less susceptible to Epic Fail.
References
Aronowsky, A., Sanzenbacher, B., Thompson, J., Villanosa, K., & Drew, J. (2012). When simple is not best: Issues that arose using WhyReef in the Conservation Connection digital learning program. GLS 8.0 Conference Proceedings, 24-29.
Barab, S. A., Hay, K. E., Squire, K., Barnett, M., Schmidt, R., Karrigan, K., Yamagata-Lynch, L., & Johnson, J. (2000). Virtual Solar System Project: Learning through a technology-rich, inquiry-based, participatory learning environment. Journal of Science Education and Technology, 9(1), 7-25.
Barron, B. (2003). When smart groups fail. Journal of the Learning Sciences, 12(3), 307-359.
Bergmann, J. & Sams, A. (2012). Flipping the classroom. Tech & Learning, 32(10), 42.
Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016). Digital games, design, and learning: A systematic
review and meta-analysis. Review of Educational Research, 86(1): 79-122. doi:
10.3102/0034654315582065
Cognition and Technology Group at Vanderbilt (1992). Technology and the design of generative learning
environments. In T.M. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction: A
conversation. Hillsdale NJ: Lawrence Erlbaum Associates.
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237-256.
Fretz, E. B., Wu, H., Zhang, B., Davis, E. A., Krajcik, J. S., & Soloway, E. (2002). An investigation of software scaffolds
supporting modeling practices. Research in Science Education, 32(4), 567-589.
Honey, M. A., & Hilton, M. (Eds.). (2011). Learning science through computer games and simulations. Committee on Science Learning: Computer Games, Simulations, and Education, National Research Council. Washington, DC: The National Academies Press.
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
Lave, J. & Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge
University Press.
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York, NY: Basic Books.
Papert, S. (1993). The Children’s Machine. New York, NY: Basic Books.
Papert, S. (1997). Why school reform is impossible. Review of Tyack and Cuban (1995). Accessed May 31, 2013
from http://www.papert.org/articles/school_reform.html
Slota, S. T., Travis, R., & Ballestrini, K. (2012). Operation BIOME: The design of a situated, social constructivist
ARG/RPG for biology education. GLS 8.0 Conference Proceedings, 261-267.
Sylvan, E., Larsen, J., Asbell-Clark, J., & Edwards T. (2012). The canary’s not dead, it’s just resting: The productive
failure of a science-based augmented-reality game. GLS 8.0 Conference Proceedings, 30-37.
Tsurusaki, B., Amiel, T., & Hay, K. (2003). Using modeling-based inquiry in the Virtual Solar System. Presentation at EdMedia: World Conference on Educational Media and Technology, Honolulu, HI. ISBN 978-1-880094-48-8.
Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34, 229-243. doi:10.2190/FLHV-K4WA-WPVQ-H0YM
Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive
and motivational effects of serious games. Journal of Educational Psychology, 105, 249-265.
doi:10.1037/a0031311
Young, M., Slota, S., Cutter, A., Jalette, G., Lai, B., Mullin, G., Simeoni, Z., Tran, M., & Yukhymenko, M. (2012). Our
princess is in another castle: A review of trends in video gaming for education. Review of Educational
Research, 82(1), 61-89. doi: 10.3102/0034654312436980