Philip H. Anderson and Leigh Lawton, Business Simulations and Cognitive Learning: Developments, Desires, and Future Directions. Simulation & Gaming, 2009, 40, 193; originally published online July 31, 2008. DOI: 10.1177/1046878108321624.
Published by SAGE Publications on behalf of:
Association for Business Simulation & Experiential Learning
International Simulation & Gaming Association
Japan Association of Simulation & Gaming
North American Simulation & Gaming Association
Society for Intercultural Education, Training, & Research
Downloaded from http://sag.sagepub.com at PURDUE UNIV LIBRARY TSS on May 23, 2009.
Business Simulations and Cognitive Learning: Developments, Desires, and Future Directions

Philip H. Anderson
Leigh Lawton
University of St. Thomas, St. Paul, Minnesota, USA
This article focuses on the research associated with the assessment of the cognitive
learning that occurs through participation in a simulation exercise. It summarizes the
objective evidence regarding cognitive learning versus the perceptions of cognitive
learning achieved as reported by participants and instructors. The authors also explain
why little progress has occurred in objectively assessing cognitive learning in the past
25 years and provide potential options for filling this deficiency.
Keywords: assessment; attitudes; benefits; business simulations; cognitive learning;
computerized simulation; educational outcomes; objective evidence;
objective measures; perceptions; progress
In this article, we attempt to outline what instructors hope simulations will accomplish and what they consider to be the educational benefits of simulations. We also
summarize what is known about the educational outcomes of business simulation
exercises in terms of the effect of simulations on (a) cognitive knowledge about the
discipline and (b) attitudes toward the course and/or the discipline. Finally, we dis-
cuss why little progress has been made in our knowledge of learning outcomes and
suggest potential avenues for advancing our knowledge of the efficacy of simula-
tions for learning.
The Scope of the Inquiry
This article is concerned with assessing learning for only a subset of business simulations. We have limited our scope because the term business simulation covers such an extremely broad range of activities that it would be impossible to draw valid conclusions across this entire spectrum. The interesting classification of simulations by Lean, Moizer, Towler, and Abbey (2006) illustrates nicely just how much ground business simulations cover. We restrict ourselves here to computer-based simulations in which students or, more typically, teams of students compete. A computer model attempts to reflect the basic dimensions of a business environment, and the students vie either against each other or against a set of computer competitors to achieve success in the simulated marketplace. Simulations typical of this genre include total enterprise simulations and computerized marketing simulations.

Simulation & Gaming, Volume 40, Number 2, April 2009, pp. 193-216. © 2009 SAGE Publications.

Authors’ Note: We would like to acknowledge the significant contributions that Jim Gentry (University of Nebraska), Paul Schumann (Minnesota State University–Mankato), and Joe Wolfe made to this study. Their insights and recommendations greatly improved our article and are greatly appreciated.
Throughout this article we draw a distinction between assessments based on per-
ceptions of learning versus assessments based on actual, direct evidence of learning.
The literature refers to the latter as objective evidence, but, in fact, these assessments
may range from something as objective as a multiple choice test to something as sub-
jective as an instructor’s personal evaluation of a student’s performance. The real
point of distinction is whether the assessment is or is not based on the perceptions of
the participant or the instructor. To be consistent with the literature, we will refer to
evaluations that are grounded in evidence other than perceptions as “objective.”
What Instructors Believe Simulations Accomplish
Instructors may adopt simulations in an attempt to achieve any number of out-
comes. Over the past four decades, scores of journal articles and countless confer-
ence presentations have offered myriad explanations as to why simulations are and
should be used. Often these explanations are expressed as advantages of simulations
over alternative pedagogies. The major desired outcomes typically can be sorted into
three categories (e.g., Faria, 2001; Gentry & Burns, 1981; Hsu, 1989; Knotts &
Keys, 1997; Wolfe, 1985):
1. Learning
a. Teach students the terminology, concepts, and principles of business in a general
or specific discipline.
b. Help students grasp the interrelationships among the various functions of business
(marketing, finance, production, etc.).
c. Demonstrate the difficulty of executing business concepts that appear relatively
simple. (Requiring students to implement concepts often leads them to discover
that activities such as developing a business plan and successfully implementing
it are significantly more challenging than reading about them or hearing about
them in a lecture might communicate.)
d. Enhance retention of knowledge. (It has been long accepted that participation in
an activity yields greater retention of concepts and relationships than does a
more passive educational pedagogy.)
e. Enable students to transfer learning to the business world. (Because simulations
require participants to act in the role of a manager, simulation users point to the
face validity of simulations as evidence that students will have an easier time
transferring what they learned in the classroom to the world of work.)
2. Attitudinal
a. Improve student attitudes toward the discipline.
b. Provide a common experience for class discussion. (This may be especially
germane for undergraduate students with little business experience.)
c. Engage students in the learning process.
3. Behavioral
a. Teach students to apply the concepts and principles of business to make effective decisions.
b. Enable students to implement course concepts. (The requirement to implement
rather than merely discuss course concepts allows students to test ideas, experi-
ence the consequences of their actions, and respond to unanticipated outcomes.)
c. Improve students’ ability to interact with their peers. (Because most instructors
using simulations have their students work in groups, the belief exists that
students will learn interpersonal skills during the course of play.)
d. Give students practice at making business decisions.
e. Improve students’ business decision skills.
The foregoing list shows the wide range of objectives that instructors may hope
to achieve by using simulations. It should be pointed out that these aims are not
mutually exclusive. In fact, most instructors undoubtedly hope to achieve multiple
outcomes simultaneously. It also is worth noting that there are strong reasons to
suspect that simulations are likely to be considerably more effective in delivering
some of these learning outcomes than others. The literature on business simulations
shows considerable consensus concerning what simulations should do well and
what they may not do well (e.g., Faria & Wellington, 2004; Fripp, 1993; Lean
et al., 2006; Saunders, 1997). To be effective, simulations require a substantial time
commitment from participants. Consequently, the literature suggests that business
simulations are an inefficient pedagogy for teaching terminology, factual knowl-
edge, basic concepts, or principles (1a above). The basics of a course can be cov-
ered more quickly in lectures. It may be an open debate as to whether students will
be able to retain or implement some of these basics if lecture is the sole method of
delivery, but few will dispute that lectures are much faster.
The literature also implies that simulations are somewhat less effective than other
pedagogies at teaching specific applications or concepts. Simulations of the type
considered here are multifaceted and amorphous. If an instructor wants a student to
learn how to conduct break-even analysis or consider the implications of a particu-
lar course principle such as the product life cycle, a case that focuses attention on a
narrowly defined issue may be more efficient.
Given the diversity of purposes for which simulations are used, it is no surprise that
it has been difficult to devise a simple instrument that measures the effectiveness of
simulations and equally difficult to generalize the results of studies that assess the
educational value of simulations.
The Educational Outcomes of Business Simulation Exercises
For over 25 years, many researchers have used Bloom’s taxonomy of learning
objectives (Bloom, Englehart, Furst, Hill, & Krathwohl, 1959) to guide their inves-
tigations of the learning resulting from business simulations (Gentry & Burns, 1981;
Gosen & Washbush, 2004; Wolfe, 1985). In many respects, Bloom’s taxonomy has
been the anchor for assessing whether learning occurs in business simulations.
Because Bloom’s taxonomy provides the framework for organizing the literature on
the learning outcomes of simulations, we briefly summarize the taxonomy here.
Bloom’s taxonomy classifies learning outcomes into three domains (Bloom
et al., 1959; Krathwohl, Bloom, & Masia, 1964): cognitive (knowing), affective
(feeling), and psychomotor (doing). Each of these domains has stages of learning
unique to that domain. This taxonomy has proven useful for researchers in all fields
of education. By identifying the domain and the stages within them, educators and
researchers have been able to distinguish more clearly the particular learning out-
come(s) they are pursuing as well as the degree to which the outcome has been
accomplished. Following a brief overview of what we know about the relationship
between simulations and the affective and psychomotor domains, this article will
focus on the cognitive learning outcomes of business simulations.
Business Simulations and Affective Learning
A substantial share of the early research on simulations focused on the attitudes
(affective domain) of participants exposed to the pedagogy. Much of this initial research
focused on comparing the general perceptions of students regarding cases, lectures, and
simulations (e.g., Anderson & Woodhouse, 1984; Blythe & Gosenpud, 1981; Fritzsche,
1974; Mancuso, 1975; Wolfe, 1975). Subsequent research expanded into attempting to
assess what is learned from participating in a simulation, rather than simply the per-
ception of whether learning occurred. As will be discussed later, although most of these
later studies purported to measure actual gains in cognitive learning, they relied on per-
ceptions and self-reports of learning rather than more objective measures. If we restrict
our focus to only those studies that aim to examine participants’ affective reaction to
simulations, we find that with few exceptions, students like simulation exercises and
view them more positively than either lectures or case discussions (see reviews by
Burns, Gentry, & Wolfe, 1990; Faria, 2001; Gosen & Washbush, 2004).
It is worth noting that these relative comparisons have been made by students
experiencing different pedagogies within a course. There is a dearth of studies
employing experimental designs with control groups or where comparisons are
made between student attitudes in one section of a course that is solely lecture based
versus those in a class that is solely case discussion based or solely simulation based.
This lack of research employing control groups not only characterizes research on
attitudes but also extends to the studies focused on psychomotor and cognitive learn-
ing reported in this article.
Recently, Anderson and Lawton (2004a, 2004b, 2005, 2006, 2007) extended the
simulation literature to include problem-based learning (PBL). PBL is a pedagogical
approach that reverses the typical order of presentation. In contrast to the traditional
subject-based learning model, where instructors tell students what they need to know
and then assign a problem illustrating how to use those concepts, PBL begins by pre-
senting students with a problem. Students “discover” course concepts and knowledge
as they strive to solve the problem. There is evidence that the PBL approach results in
richer and longer lasting learning than does the traditional approach (Brownwell &
Jameson, 2004; Miller, 2004; Spence, 2001). Anderson and Lawton conducted a series
of studies on the use of business simulations as the “problem” for PBL. They argued
that simulations constitute a form of PBL, given their inherent requirements for the dis-
covery versus the prescription of knowledge. The PBL literature shows strong support
for the pedagogy’s positive impact on student attitudes. Albanese and Mitchell’s
(1993) review concluded that “compared with conventional instruction, PBL, as sug-
gested by the findings, is more nurturing and enjoyable” (p. 52). Vernon and Blake also
published a review article in 1993 and reported that “PBL was found to be significantly
superior [to more traditional teaching methods] with respect to students’ program eval-
uations (i.e., students’ attitudes and opinions about their programs)” (p. 550). Finally,
Colliver’s (2000) review found evidence that students may find a PBL approach to be
more enjoyable, challenging, and motivating than alternative pedagogies.
Of particular note in Anderson and Lawton’s research using simulations in a PBL
pedagogy is the relationship between attitudes and performance. As expected based on
PBL literature, they found that students consistently rated simulation exercises as enjoy-
able, engaging, and stimulating over the duration of the course, and they perceived the
simulations to reflect the discipline they were studying. Surprisingly, Anderson and
Lawton found no support for a relationship between performance on the simulation and
enthusiasm for the simulation, the perception of how much was learned, or how well the
simulation reflected the discipline. Put simply, students’ participation in a business sim-
ulation exercise yielded positive attitudes regardless of how the students performed.
This combination of business simulation and PBL literature shows strong support
for using a business simulation to achieve positive student attitudes in a course. With
respect to attitudes, there appears to be little to risk, and much to gain, from inte-
grating business simulations into business school programs.
Business Simulations and Psychomotor Learning
There has also been research on the behavioral (psychomotor domain) outcomes
of using simulations. A number of studies have focused on external validity by
comparing success on a simulation with current business success (Vance & Gray,
1967; Wolfe, 1976). Others looked at future business success subsequent to partici-
pating in a simulation exercise. For example, Norris and Snyder (1982) conducted a
5-year longitudinal study that found no correlation between company return on
investment and general game participation measures with career success. However,
as noted by Wolfe and Roberts (1986), several “methodological problems could have
accounted for the non-correlations found” (p. 47). Wolfe and Roberts (1986, 1993)
also examined the external validity of simulations by looking at students’ perfor-
mance on a simulation exercise and their career success 5 years later. These two lon-
gitudinal studies found some association between career success regarding salaries
and promotions for students and performance on the simulation. When combined
with research design problems noted by the authors, however, these studies offered
only minimal support that behaviors exhibited in a business simulation might facili-
tate later career growth.
Research on behavioral change during simulation play has been limited. The
most extensive study of this type was by Wellington, Faria, Whiteley, and Nulsen
(1995). They attempted to measure the link between cognitive learning and behav-
ioral learning. They used questionnaires and quizzes after each decision round and
then monitored the students’ decisions in later rounds for behaviors that reflected
what they had learned earlier. Unfortunately, the results were mixed. Although cog-
nitive learning occurred, the students were not able to consistently translate that
knowledge into “correct” decision-making behavior.
Schumann, Anderson, Scott, and Lawton (2001) issued a call for greater research
in the psychomotor arena. They noted the paucity of research assessing simulation
participants’ behavior following completion of the exercise. They hypothesized that
a pretest-posttest control-group design that assessed certain student behaviors in sub-
sequent courses could yield insights into the pedagogy’s effectiveness in producing
behavioral change. They also suggested surveying employers to obtain feedback on
students’ behaviors early in their careers. They state, however, that it is unknown
whether developing instruments for assessing behavior would be any easier than
doing so for learning. Although their study included no empirical support, their suggestions point to a promising avenue for future research.
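The pretest-posttest control-group design suggested above can be sketched as a simple gain-score comparison. The sketch below is illustrative only: the scores are hypothetical, and a real study of this kind would use validated assessment instruments and a test statistic suited to its sampling design.

```python
# Hypothetical gain-score analysis for a pretest-posttest control-group design.
# The treatment group participated in the simulation; the control group did not.
from statistics import mean, stdev
from math import sqrt

def gains(pre, post):
    """Per-student learning gains (posttest minus pretest)."""
    return [b - a for a, b in zip(pre, post)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples with unequal variances."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

# Invented assessment scores (0-100) for two course sections.
treatment_pre  = [52, 61, 48, 70, 55, 63, 58, 66]
treatment_post = [68, 75, 60, 82, 70, 77, 71, 80]
control_pre    = [54, 59, 50, 68, 57, 62, 60, 65]
control_post   = [61, 66, 55, 74, 63, 69, 66, 71]

g_t = gains(treatment_pre, treatment_post)
g_c = gains(control_pre, control_post)
print(f"mean gain (simulation group): {mean(g_t):.1f}")
print(f"mean gain (control group):    {mean(g_c):.1f}")
print(f"Welch t on gains:             {welch_t(g_t, g_c):.2f}")
```

A larger t on the gain scores would indicate that the simulation section improved more than the control section, which is precisely the kind of evidence Schumann et al. argue is missing from the literature.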
Bloom’s Cognitive Learning Domain
In 1988, Butler et al. noted that the preponderance of research on what is learned
in simulations focused on Bloom’s cognitive domain, and our review of the litera-
ture shows little change since then. Research on the cognitive domain was aided by
Gentry and Burns (1981), who provided descriptions of learning and the assessment
process for the six levels in the cognitive domain. These descriptions, shown in Table
1, have served as guides for researchers for the past 25 years.
The focus on the cognitive domain is understandable. As indicated above, the
affective domain addresses feelings related to learning. Although attitude change may
be an important concern, it does not demonstrate that learning has occurred, only that
those questioned believe they have learned. Similarly, while it is of interest whether
behavior changes as a result of participation in a simulation, we need to know whether
the change was the result of choosing to change based on knowledge gained or was
due simply to behaviors employed in simulation decision rounds. Until we can deter-
mine whether cognitive learning occurred, distinguishing the cause of a behavior
change remains elusive. That is, if behavior change is due solely to repeating actions
taken in a simulation, transference from a simulation to a business situation is
unlikely to be fruitful unless very similar conditions just happen to exist. It is our cog-
nitive comprehension that allows us to adapt what we have learned in one situation to
other situations. Lack of an appropriate cognitive foundation seriously inhibits the
chances of taking appropriate actions as new situations present themselves.
The Cognitive Learning Outcomes
of Business Simulation Exercises
Over the years, researchers have contemplated the state of knowledge about the
cognitive learning outcomes that derive from participating in a business simulation
(e.g., Brenenstuhl, 1975; Gosen & Washbush, 2004; Keys & Wolfe, 1990; Wolfe, 1985, 1997).

Table 1
Cognitive Domain: Bloom’s Taxonomy of Learning Objectives

Basic knowledge. Learning: student recalls or recognizes information. Assessment: answering direct questions/tests.
Comprehension. Learning: student changes information into a different symbolic form. Assessment: ability to act on or process information by restating it in his or her own terms.
Application. Learning: student discovers relationships, generalizations, and skills. Assessment: application of knowledge to simulated problems.
Analysis. Learning: student solves problems in light of conscious knowledge of relationships between components and the principle that organizes the system. Assessment: identification of critical assumptions, alternatives, and constraints in a problem situation.
Synthesis. Learning: student goes beyond what is known, providing new insights. Assessment: solution of a problem that requires original, creative thinking.
Evaluation. Learning: student develops the ability to create standards of judgment, weigh, and … Assessment: logical consistency and attention to detail.

Source: Bloom, Englehart, Furst, Hill, & Krathwohl (1959).

This research on simulations has provided insights into attitudes of
students and into the relationship of various measures such as GPA, team cohesion,
team building, and group size to simulation performance (Gentry, 1980; Wellington
& Faria, 1992; Wolfe, Bowen, & Roberts, 1989; Wolfe & Box, 1988; Wolfe &
Chacko, 1983). Although this research has contributed to our understanding and use
of the pedagogy, we have continued to be very disappointed with how little we can
objectively demonstrate regarding what students learn from participating in simula-
tion exercises. As the following will show, this issue has been studied from the earli-
est days of simulations, but we still are largely unable to document what simulation
exercises accomplish regarding cognitive learning. Knowing the educational merits of
simulations and other pedagogies would help us select the tools that maximize the
learning of our students. However, the sad fact is that we have very little objective (as opposed to attitudinal) knowledge of the relative educational merits of case studies versus lectures versus simulations. And what little knowledge we do have concerning the
relative merits tends to be based on assessing educational objectives that fall far down
the educational food chain, at the knowledge and comprehension levels of Bloom’s
taxonomy (Burns et al., 1990; Gosen & Washbush, 2004; Wolfe, 1985, 1997).
Throughout the history of business simulations, researchers have studied their
role in the educational process, so an extensive base of publications exists.
Fortunately, a series of review articles have distilled the state of our knowledge at
several points along the road. This article focuses on these reviews, not simply to
present a history of simulation research but also to show more clearly the progres-
sion that has occurred in assessing the efficacy of simulations as pedagogy. In addi-
tion to the review articles, we include a few articles that focus specifically on the
relationship between business simulation participation and learning.
An article by Greenlaw and Wyman (1973) was one of the earliest reviews sum-
marizing the learning effectiveness of business simulations. Their work was
extended by Wolfe in 1985. These two reviews reached essentially the same conclu-
sion: The absence of control groups when assessing the wide variety of classroom
practices for using business simulations, combined with the range of simulations
used, made it very difficult to compare and contrast learning that occurs with simu-
lations. Wolfe best captured the state of knowledge at that time by saying that simu-
lation games “appear to be valid, but much work needs to be done to understand the
causes and deterrents of effectiveness” (Wolfe, 1985, p. 275). Employing Bloom’s
taxonomy of learning objectives (Bloom et al., 1959) as a means for classifying
learning outcomes, Wolfe found a dearth of assessment of learning at the higher
levels on Bloom’s taxonomy in the studies he reviewed.
Building off of Bloom’s taxonomy of learning objectives used by Wolfe (1985)
and others, Hsu (1989) moved the call for critical analysis into sharper focus. He
clarified the need to distinguish between “declarative knowledge” and “procedural
knowledge.” The former deals with “a passive mastery of information” whereas the
latter refers to “gaining information or knowledge and acquiring practical skills in
applying that knowledge through performing tasks” (Hsu, 1989, p. 414). In an
attempt to incorporate Bloom’s three domains of cognitive, affective, and psy-
chomotor learning, Hsu proposed a four-phase learning process that consisted of (a)
retaining information, (b) organizing knowledge, (c) experiencing, and (d) firming.
He used these phases to identify tools and methods for assessing learning and then
applied this model to management education and business simulations. Finally, Hsu
categorized the research reviewed by Greenlaw and Wyman (1973) and Wolfe
(1985), finding little support for the effectiveness of playing management games. To
achieve success at demonstrating support for the pedagogy’s effectiveness, he called
for having “clear and specific hypotheses on the specific learning objectives” that
target managerial, technical, and problem-solving skills (Hsu, 1989, p. 433).
Shortly after Hsu’s article appeared, Keys and Wolfe (1990) published an exten-
sive review analyzing the various dimensions of simulation research. They reviewed
studies conducted to assess the relationship between participants’ performance on a
business simulation exercise and variables such as team size, team composition, the
simulation’s face validity, number and frequency of decision rounds played, instruc-
tor involvement, and student aptitudes and achievement levels. Although this review
provided valuable insight into variables that might influence performance on the
simulation, the objective measures used to assess learning were limited to compre-
hension of simulation rules and course facts, the lower levels on Bloom’s cognitive
domain. As with the vast majority of simulation studies, objective assessments of
learning at the higher stages of Bloom’s taxonomy (Bloom et al., 1959) were absent.
At the same time, Burns et al. (1990) summarized the literature on business sim-
ulations by stating there was a “paucity of solid empirical evidence regarding the rel-
ative effectiveness of experiential techniques” (p. 253). They pointed out that (a) too
many simulation users rely on the enthusiasm of their students and own intuition as
proof of the validity of the pedagogy and (b) the wide variety of methodological con-
structs used as the foundation for research in this area raises questions regarding
internal and external validity. They argued that until some measure of external valid-
ity is established, it is unclear whether what we learn from experimental exercises
can be generalized and transferred to success in the working world (Burns et al.,
1990, pp. 256-258). Put simply, they found an absence of rigorous research support-
ing the learning effectiveness of experiential methods such as business simulations.
Gosenpud (1990) conducted a review of the effectiveness of business simulations
as well as other experiential exercises. He cited the lack of rigor as a major problem
mitigating the conclusions reached in the studies. Gosenpud found studies that
reported cognitive learning, but they either were based on perceptions of learning or
they assessed the lower levels of the learning domain. The studies Gosenpud cited
for assessing behavioral change/skill acquisition (the psychomotor domain) either
suffered from ill-defined criterion measures or, again, were based on perceptions of
behavioral change.
Wolfe (1990) also conducted an extensive review of over 300 articles covering three
decades to assess “what is known about the efficacy of business games” (p. 280). He
organized the review based on categories used by Burns et al. (1990), including such
variables as game complexity, team size, student attributes, and educator considera-
tions. Wolfe found numerous studies purporting to show the effect of these variables
on game performance. Unfortunately, he concludes that “game performance has been
employed as a proxy for course-related knowledge gain, although the accuracy of this
proxy relationship has never been investigated” (Wolfe, 1990, p. 299).
Prompted by Wolfe’s assessment, Anderson and Lawton (1992) conducted a
study to test directly the relationship between game performance and a series of
learning measures. They assessed the performance of students on seven course
assignments (e.g., two annual business plans, a scenario exam, and assignments on
specific simulation decisions such as conducting a cost/benefit analysis on plant
expansion). The assignments were designed to measure learning at the various stages
in Bloom’s cognitive domain. They did not find a significant relationship between
financial performance and the various measures of learning, except for the two
annual business plans. As Anderson and Lawton noted, this raises questions about
using financial performance as a proxy for learning on a simulation exercise.
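A test of the kind Anderson and Lawton ran, relating game performance to learning measures, can be illustrated with a simple Pearson correlation. The data below are invented for illustration; the original study used seven graded course assignments, not a single score.

```python
# Hypothetical check of whether simulation performance predicts scores on a
# learning measure, in the spirit of performance-vs-learning correlation studies.
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Invented data: team finish position on the simulation versus score on a
# graded learning assignment (e.g., a business plan scored 0-100).
finish_position = [1, 2, 3, 4, 5, 6, 7, 8]           # 1 = best financial result
learning_score  = [78, 85, 72, 90, 74, 88, 70, 81]   # little relation to rank

r = pearson_r(finish_position, learning_score)
print(f"r = {r:.2f}")  # a value near zero would echo the 'no relationship' finding
```

An r near zero, as in this toy data, is what a researcher would report as evidence against using financial performance as a proxy for learning.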
In 2001, Faria provided a review of “the changing nature of business simulation/gaming research.” Although he explicitly stated his intent was not to update the
review by Keys and Wolfe (1990), the article did report on the status of research into
the effectiveness of business simulation at the time of its publication. As part of his
review, Faria categorized the research into cognitive, affective, and behavioral learn-
ing, following Bloom’s taxonomy of learning objectives (Bloom et al., 1959). While
Faria found support in the literature for cognitive learning, the learning involved
“basic facts or concepts” (2001, p. 104), which are the lower levels of learning on
Bloom’s taxonomy. Objective support for the higher levels of the cognitive domain
was absent. Faria found support for affective learning, reporting the participants’
positive attitude toward the simulation and its role in their perceived learning not just
at the end of the experience but also years later. However, Faria also points out that
affective learning is, by its inherent nature, based on the perceptions of the simula-
tions’ participants, not objective assessments. As for behavioral learning resulting
from simulation exercises, Faria states that “research results attempting to measure
behavioral change have been mixed” (2001, p. 105).
In the same year that the Faria review was published, Washbush and Gosen (2001)
reported the results of a series of studies they conducted with total enterprise simula-
tions. These studies encompassed nine hypotheses related to simulation-based learn-
ing. They included testing the relationship between learning and variables such as
participation in a simulation exercise and performance on that exercise. Two of their
conclusions are pertinent to this review: (a) There is evidence that learning occurs as
a consequence of participating in a simulation exercise, and (b) there is a lack of sup-
port for a relationship between learning and performance on the simulation. Although
202 Simulation & Gaming
at PURDUE UNIV LIBRARY TSS on May 23, 2009 http://sag.sagepub.comDownloaded from
Washbush and Gosen did find evidence that participation in a simulation exercise
resulted in learning, they measured learning using multiple-choice and short essay
questions. While not stated as such by Washbush and Gosen, their measures of learn-
ing appear to be targeted at the lower levels of learning on Bloom’s cognitive domain.
Concerning the second conclusion (that there is no support for a relationship between
performance and learning), Washbush and Gosen point out this is consistent with the
findings of Anderson and Lawton (1992).
More recently, Gosen and Washbush (2004) conducted a review of research to assess
the effectiveness of business simulations on learning. They concluded that performance
on a simulation exercise should not be used as a proxy for learning. Perhaps more
important, Gosen and Washbush found that “none of the 115 studies considered for this
article met all the criteria for sound research” (2004, p. 283). They went on to state that
“learning is an internal mental process, and what is learned and how it is learned is
unique to each individual” (Gosen & Washbush, 2004, p. 284). This statement would
appear to capture the essence of the learning assessment debate. Gosen and Washbush
conclude that the lack of rigor in the research, which is a consequence of our limited
ability to assess higher level learning, precludes a clear answer to the question of
whether simulations produce learning.
Finally, Wideman et al. (2007) addressed the contention that participating in sim-
ulations results in learning. They argued that although research has supported the
educational value of this pedagogy, the research is deeply flawed. They contend that
studies have largely relied on the self-reports of teachers and students. They state
that the absence of valid objective measures for assessing learning hampers the abil-
ity to draw any meaningful conclusions regarding a simulation’s effect on learning;
support for the educational impact of simulations is subjective, at best.
The Current State of Assessment of Cognitive
Learning and Business Simulations
To summarize the current state of the research on efficacy of business simulation
for achieving cognitive learning outcomes, it is safe to say that little has changed
since Wolfe’s admonition in 1985 that simulation games “appear to be valid, but
much work needs to be done to understand the causes and deterrents of effective-
ness” (p. 275). Twenty years later, Gosen and Washbush (2004) come to essentially
the same conclusion, stating that “there have not been enough high-quality studies
to allow us to conclude players learn by participating in simulations” (p. 286).
Objective measures of learning are still limited to the basic knowledge, comprehen-
sion, and application stages of cognitive learning. Attempts to measure analysis, synthesis,
and evaluation stages have continued to be limited to self-reports—participant
perceptions of their improved abilities. We have a long way to go to provide objec-
tive evidence of the learning efficacy of business simulation exercises for the higher
levels on Bloom’s taxonomy for the cognitive domain.
Anderson, Lawton / Business Simulations 203
While a review of the literature on simulations shows its positive effect on the
attitudes of participants, its efficacy regarding cognitive learning yields a gloomy
picture. When we reflect on our knowledge of what simulations accomplish, the
overwhelming conclusion is that we know very little. The principal reasons why we
know so little are quite clear and have been for some time. First, the studies con-
ducted to examine what students learn from simulations have been based largely on
perceptions of learning rather than some more objective standard—a flaw that has
been repeatedly pointed out for over 20 years (e.g., Anderson & Lawton, 1992, 1997;
Burns et al., 1990; Gosen & Washbush, 2004; Wolfe, 1985, 1997). Second, where
objective assessments have been attempted, they almost invariably focus on the
lower levels of Bloom’s taxonomy of cognitive learning, and this shortcoming too
has been noted repeatedly in the review articles cited above.
There are occasions where perceptions are perfectly fitting, and there are
instances when measuring the lower levels of Bloom’s taxonomy is appropriate.
But we should be clear about where these cases lie. If we are attempting to deter-
mine whether a pedagogy is successful in improving attitudes toward learning in
our course or toward a discipline, it is appropriate to measure student perceptions.
In fact, we have little alternative other than perceptions. However, if we are attempt-
ing to measure dimensions of learning such as knowledge of vocabulary, under-
standing of principles, understanding the relationships among principles or
disciplines, or the ability to apply principles, then measuring perceptions is unlikely
to be very useful.
Measuring at the lower levels of Bloom’s taxonomy is fitting if we really wish to
test whether students have basic knowledge or comprehension. For example, many
instructors give short tests early (these tests often are provided in the instructors’
manuals accompanying simulations) to see if students have sufficient understanding
of the rules and the business environment to compete effectively. Because the goal
is to determine the student’s level of knowledge, we should employ a test that mea-
sures their level of knowledge. However, if we are interested in whether students are
performing well at a higher level of learning, then tests of knowledge and comprehension
alone are inadequate, and we need to use tests or other direct measures of learning
that require analysis, synthesis, and evaluation.
If the collective wisdom of long-time simulation users is correct, simulations are
better at conveying higher levels of learning while, for example, the lecture method
is likely to be more efficient at the lower levels. If this is true, measuring at lower
levels on Bloom’s taxonomy is likely to result in undervaluing simulations.
This lack of knowledge of learning outcomes is not unique to simulations; we
might ask what objective evidence has been offered to demonstrate that lectures,
case discussions, or any other pedagogy are able to achieve Bloom’s higher level
cognitive learning objectives. Although the literature on business simulations may be
replete with statements lamenting overreliance on perceptions and anecdotal evi-
dence, is the situation any different from that supporting any other pedagogy?
The issue of assessing the relative educational power of alternative pedagogies
may be more salient for simulations than, say, lectures because simulations are rel-
ative newcomers to the educational scene. Because the lecture method has tradition
that dates back centuries and because it is so prevalent, educators feel little need to
justify the use of the pedagogy. The more recent the introduction of a pedagogy, the
greater the pressure to justify its value. Furthermore, as noted above, simulations’
time utilization relative to other pedagogies often raises questions regarding their
efficiency for imparting learning. The bottom line is that although greater focus has
been placed on the educational outcomes of simulations than of most alternative
pedagogies, this should not be interpreted as evidence that simulations are weak
tools; more likely, it reflects their relatively recent introduction as a pedagogy.
Are We Capable of Assessing the Cognitive
Learning in Simulations?
As Wolfe and Crookall stated in 1998, “The impediments to conducting educa-
tional research that is useful and practical to those who teach are numerous and are
associated with the field of educational research in general and with the nature of
learning in simulation/gaming contexts more specifically” (p. 9). While these imped-
iments pose obstacles to implementation, we do now have a good idea of what must
be done to compare the effectiveness of pedagogies.
To compare the cognitive learning of alternative pedagogies, we must (1) clarify
the domain in Bloom’s taxonomy and the stages within that domain that we are trying
to assess (e.g., knowledge, application, evaluation), (2) employ appropriate
assessment instruments, and (3) design and execute appropriate studies. Having a
clear understanding of the specific “educational value” that we are trying to assess
should influence the domain we measure as well as our choice of a measurement
instrument.
As noted above, to improve our state of knowledge we should use student per-
ceptions as our dependent variable only when we are concerned with attitude
change. In order to move forward, we must develop and use more objective measures
when we wish to assess learning in Bloom’s cognitive domain.
Assessment Tools
If the educational value that we are attempting to assess is something other than
attitudes, we have a large number of assessment tools from which to choose. As has
been amply pointed out in the literature, our ability to measure lower levels of learn-
ing (knowledge and comprehension) tends to be considerably better than our ability
to measure higher level learning (synthesis and evaluation), but if we are clear on our
learning objectives, we should be able to identify appropriate tools. By nature, some
instruments are better suited to measuring lower levels of Bloom’s taxonomy of
learning (e.g., multiple choice exams) while others are more appropriate for higher
levels of learning (e.g., case-based exams). Anderson and Lawton (1988) provided a
good discussion of alternative assessment tools to use for assessing the different
levels of the cognitive domain, identifying the strength of each (strong, medium,
weak) for measuring each level. Gijbels, Dochy, Van den Bossche, and Segers (2005)
also offered suggestions in this regard; they discussed the potential of a number of
assessment instruments for measuring different levels of learning. These include
modified essay questions, progress tests, free recall, standardized simulations, essay
questions, short-answer questions, multiple-choice questions, oral examinations,
performance-based testing, case-based examinations, and synthesizing research. So
multiple tools are available if we choose to use them.
Regarding assessment tools, a legitimate question arises as to whether any of
these instruments is sensitive enough to measure the gains achieved with any peda-
gogy. It is one thing to measure whether a student has added a vocabulary item or
can understand a new concept, but quite another to assess whether the student has
improved his or her ability to solve problems that require original, creative thinking.
When viewed in the context of a student’s lifetime spent as a problem solver, can we
reasonably expect any instrument to be so sensitive as to be able to detect the impact
of a single experience such as a simulation, or even an entire course?
So while we have assessment tools that can be employed to measure higher levels
of learning, and while these instruments have undoubtedly been underutilized (in
favor of instruments that measure the lower levels of Bloom’s taxonomy), we may
question how many educational experiences are powerful enough to produce mea-
surable improvements for the higher levels of learning.
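The sensitivity concern can be put in rough numbers with a standard sample-size calculation. The sketch below is our illustration, not a computation from the article; it uses the usual two-sample normal approximation for detecting a standardized mean difference (Cohen's d) with two-sided alpha = .05 and power = .80:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_power=0.8416):
    """Approximate students needed per group to detect a standardized
    mean difference `effect_size` (Cohen's d) between two pedagogies,
    via the normal approximation n = 2 * ((z_alpha + z_power) / d)^2,
    with two-sided alpha = .05 (z = 1.96) and power = .80 (z = 0.8416)."""
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "large" gain (d = 0.8) needs about 25 students per group; a "small"
# gain (d = 0.2), arguably more plausible for a single exercise, needs
# hundreds per group -- far beyond a typical class section.
print(n_per_group(0.8), n_per_group(0.5), n_per_group(0.2))
```

The arithmetic reinforces the point in the text: if a single simulation produces only a small standardized gain in higher level learning, class-sized samples are simply too small for any instrument to detect it reliably.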
Research Design
We know how to design studies to compare the relative merits of alternative ped-
agogies. We all are well aware that anecdotal evidence and observational studies pro-
vide weak evidence for the relative efficacy of alternative pedagogies (Butler,
Markulis, & Strang, 1988). Providing valid substantiation for the power of a peda-
gogy requires rigorous experimental design.
To meet these conditions, students should be assigned to alternative treatments
randomly. However, at most institutions students have obligations both within the
institution (such as other classes) and outside the institution (such as work schedules
and family responsibilities) that make random assignment almost impossible. In
addition, it is very difficult to hold conditions constant to isolate the effect of the
treatment. One of the largest potentially confounding factors is the instructor. There
are two obvious paths to control for the impact of the instructor, but both pose prac-
tical difficulties. One path to tackle the problem is to hold the instructor constant by
having the same instructor use different pedagogies in different sections of his or her
course. The problems with this approach include the following: (1) the instructor
must be teaching multiple sections of the same class, (2) the number of students in
the sections must be sufficiently large to make the test feasible, and (3) the instruc-
tor must be willing to devote the time and effort to teach, prepare, and present the
course in two different ways. The bias and relative competence of the instructor also
may pose problems. If the instructor is more enamored, more comfortable, or more
proficient with one of the pedagogies being used, it is likely to put the other peda-
gogy at a distinct disadvantage.
The second path to controlling for instructor effect is to have a large pool of instruc-
tors and to randomly assign them to use different pedagogies. Obtaining the participa-
tion of a pool of instructors and coordinating their activities poses real challenges. The
rewards for participating faculty members are likely to be slim. Even if a publishable
paper comes out of the study, the number of coauthors is likely to diminish the value
of the publication for any of the participants in the eyes of their institutions.
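Either path reduces to the same mechanical step: randomizing students to pedagogy within each instructor, so that the instructor acts as a block rather than a confound. A minimal sketch of that randomized block assignment (instructor names and student rosters are hypothetical; this is an illustration we add, not a procedure from the article):

```python
import random

def randomize_within_blocks(rosters, treatments=("simulation", "lecture"), seed=2024):
    """Randomly assign each instructor's students to one of the treatments,
    balancing treatment sizes within every instructor -- a randomized block
    design in which instructor effects are held constant across pedagogies.
    `rosters` maps an instructor name to a list of student ids (hypothetical)."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    assignment = {}
    for instructor, students in rosters.items():
        shuffled = list(students)
        rng.shuffle(shuffled)
        # Deal the shuffled roster round-robin across the treatments
        assignment[instructor] = {
            t: shuffled[i::len(treatments)] for i, t in enumerate(treatments)
        }
    return assignment

rosters = {"Smith": [f"s{i}" for i in range(30)], "Jones": [f"j{i}" for i in range(24)]}
plan = randomize_within_blocks(rosters)
print({name: {t: len(v) for t, v in groups.items()} for name, groups in plan.items()})
```

As the text notes, the statistics are the easy part; obtaining instructors' cooperation, and students' consent to differing workloads, is where the design breaks down in practice.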
There is an ethical dimension to pedagogical research that adds yet another imped-
iment to implementation. If an instructor firmly believes in the educational superior-
ity of one pedagogy over another, is it ethical to use a perceived inferior pedagogy for
the sake of potentially advancing our knowledge of that pedagogy’s worth? The situ-
ation educators face is much like that in medical research, where some patients are
given placebos rather than active treatment; while there may be long-term benefits, in
the short run some patients (and likewise some students) are likely to suffer.
Problem-Based Learning and Cognitive Learning Outcomes
As discussed earlier, Anderson and Lawton (2005, 2006) argued that business
simulations constitute a form of PBL. If they are correct, then the PBL literature
should shed light on the learning outcomes of business simulations. That is, if the
PBL literature shows objective assessment of that pedagogy’s efficacy for achieving
learning outcomes, that evidence provides tangential support for the efficacy of sim-
ulations as well.
Gijbels et al. (2005) published an excellent review on the status of assessment of
PBL. The central question that they addressed was whether students achieve learning
outcomes more effectively under a PBL pedagogy than under conventional pedago-
gies. Unfortunately for our purposes, the authors did not adopt Bloom’s taxonomy
as their framework for approaching this question. Instead, they used Sugrue’s (1995)
model of the cognitive components of problem solving. This model uses three levels
of knowledge structure to classify the studies they reviewed. These levels are (a)
understanding the concepts, (b) understanding the principles that link concepts, and
(c) linking concepts and principles to conditions and procedures for application
(Gijbels et al., 2005, p. 44). Although Sugrue’s model certainly differs from
Bloom’s taxonomy, his first level (understanding the concepts) relates to students’
ability to identify and classify material to which they have been exposed. This level
clearly coincides with Bloom’s lowest level of basic knowledge. Sugrue’s second
level (understanding the principles that link concepts) is concerned with students’
ability to organize concepts and ideas in a meaningful way. If a student is operating
at this second level, he or she can use principles “to interpret problems, to guide
actions, to troubleshoot systems, to explain why something happened, or to predict
the effect a change in some concept(s) will have on other concepts” (Sugrue, 1993,
p. 9). This second level appears to contain elements of Bloom’s comprehension level
and possibly the application level. Sugrue’s highest level (linking concepts and prin-
ciples to conditions and procedures for application) concerns the ability of the
student to apply the organized set of principles to solve problems. This third level
arguably corresponds with Bloom’s application level and possibly the analysis level.
The literature and insights gained on PBL could yield synergistic insights and direc-
tions for future research for simulations. For example, the Gijbels et al. (2005) review
found that PBL was most effective when assessing understanding of the principles that
link concepts—Sugrue’s second level of knowledge and, by extension, perhaps
Bloom’s comprehension and application levels. Their review also found some indica-
tion that PBL produced better results for Sugrue’s highest level of the knowledge struc-
ture, although the difference was not statistically significant. Gijbels et al. point out
that a shortcoming of their review is that despite an exhaustive effort to find studies
across a broad range of disciplines, all but one of the studies that met their criteria for
inclusion in their review came from medical education. Regardless, the conclusions are
reasonably consistent with the prevailing wisdom on the benefits of business simula-
tions. These results suggest that there may be opportunities for future research in the
PBL field that can provide objective evidence on its efficacy for achieving learning
outcomes at the higher levels of knowledge that can apply to simulations.
Are We Likely to Do It?
Almost no experimental studies exist that compare learning outcomes under alter-
native pedagogies. Interestingly, Wolfe (1990) identified this problem nearly 20 years
ago, yet the gap still exists. The obstacles to designing and conducting experimental
studies in a university setting may initially seem daunting.
We might speculate as to why so little progress has been made in objectively
assessing the cognitive learning that occurs in simulations. It is our opinion, echoed
by Gosen and Washbush (2004), that perceptions are widely used because they are
easily measured. So despite the repeated call for greater rigor in measuring learn-
ing, we have made little headway on this front. Researchers continue to use self-
assessments rather than more suitable tools because they are much easier to employ.
As a consequence, studies on the educational merits of simulations often are mea-
suring the affective domain, not the cognitive domain they purport to measure.
Using perceptions tends to be advantageous in some ways to those who wish to claim
the superiority of simulations over alternative pedagogies because simulations almost
invariably are rated positively by students. The downside of using perceptions is that evi-
dence based on perceptions often is dismissed by scholars because it lacks suitable rigor.
Studies that attempt to go beyond perceptions to more objective measures of learn-
ing more often than not use tools best suited to measuring lower levels of learning on
Bloom’s taxonomy. We believe this bias toward instruments that measure at the lower
levels exists for several reasons: (a) We have a better understanding of how to measure
the lower levels than how to measure the higher levels, and (b) measuring at the higher
levels requires designing and executing tools that require greater time and effort to con-
struct and to score. Given the many publication and service pressures on faculty
members, there is a bias against using more appropriate instruments. As noted by
Gosen and Washbush (2004), there is little payoff for expending the effort to design
and execute the rigorous research designs needed to pursue assessment at these cogni-
tive levels. Pedagogical research studies may naturally be valued in schools of educa-
tion, but they generally are accorded little worth in business programs, especially at
major research institutions where discipline-based research is favored, if not required.
Efforts to improve our knowledge might also be handicapped by faculty evalua-
tion systems. Utilizing different pedagogies between course sections for the same
instructor runs the risk of student complaints of unequal workloads and protestations
of using students as guinea pigs while putting their education at risk. Few instructors
are willing to run this risk when tenure and promotion decisions are influenced by
student evaluations.
Further, Serva and Fuller (2004) argue that the poor state of teaching evaluations in
universities has serious implications for improving teaching effectiveness and student
learning. Their concern is that teaching evaluation instruments reflect neither changes
in the use of technology in education nor new delivery modes such as distance edu-
cation. They argue that the misalignment of the evaluations system and educational
environment results in instructor reluctance to test new learning techniques or tech-
nologies. If this is true, then instructor willingness to experiment with pedagogical
designs to test for their effect on learning outcomes will also likely be suppressed.
In summary, even though we know how to assess the educational impact of alterna-
tive pedagogies, the barriers to implementation may seem so high that most likely little
will be done. The effort required relative to reward achieved is such that we are likely
to see continued reliance on descriptive studies using student perceptions as the depen-
dent variable. The pressures on instructors to publish and to participate in service activ-
ities make rigorous research to measure upper level learning outcomes unattractive.
Is There Any Hope?
While the situation may appear grim, there are some developments occurring in
the profession that may result in improved assessment of the educational impact of simulations.
New Technology for Measuring Learning
Wideman et al. (2007) propose a vehicle for getting beyond the reliance on self-
report measures of the learning achieved on simulation exercises. They report on an
exploratory study of a software program (VULab) designed to collect data from sim-
ulation participants as they engage in the exercise. Online data collection is employed
to minimize intrusion into game activities. In some cases, the data collected are a
combination of screen activity synchronized with audio of player discussion.
Wideman et al. contend that these recordings, combined with questionnaires, can be
coded using qualitative software analysis such as ATLAS.ti, Version 5, as described
by Pandit (1996). While Wideman et al. experienced a number of technology prob-
lems, the methodology appears to hold some promise as a means for obtaining data
other than perceptions. They claim this approach may allow for measuring students’
comprehension of course concepts and assessment of their evaluative skills, provid-
ing insights into the higher levels of cognition on Bloom’s taxonomy.
Distance Learning
Distance learning may offer a venue in which we can measure the educational
value of alternative pedagogies. Because students are not together in a classroom,
instructors have the ability to set up true experimental conditions far more easily
than in traditional classrooms. For example, one set of randomly grouped students
could be assigned a set of cases while, at the same time, in the same course, with the
same instructor, a second set of students could be assigned to participate in a simu-
lation. The two groups of students could be evaluated using the same assessment
instrument to see how their learning compares.
Of course, using distance learning courses as a laboratory presents a set of prob-
lems of its own. It may be necessary to modify (or it may not even be feasible to use)
some pedagogies commonly used in traditional classrooms (like role-playing exer-
cises and lectures) in a distance learning environment. A further issue arises con-
cerning the ability to generalize the results of an online experience to a traditional
classroom setting. It may well be that the methods that are most effective for online
learning differ from those that are most effective for in-class learning.
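If such a randomized split were run, comparing the two groups on a common assessment instrument is straightforward. A sketch of Welch's two-sample t statistic, which does not assume the two groups have equal variances, is shown below; the scores are hypothetical, and this is our illustration rather than an analysis from the article:

```python
import math
import statistics

def welch_t(group_a, group_b):
    """Welch's t statistic and approximate degrees of freedom for comparing
    mean assessment scores of two independently taught groups. Welch's
    version does not assume equal variances in the two groups."""
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    na, nb = len(group_a), len(group_b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2**2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical exam scores for a simulation group and a case-study group
sim_scores = [78, 85, 90, 72, 88, 81, 79, 86]
case_scores = [75, 80, 70, 83, 77, 74, 79, 72]
t, df = welch_t(sim_scores, case_scores)
print(round(t, 2), round(df, 1))
```

The harder problem, as the text emphasizes, is not the test statistic but ensuring that the assessment instrument itself measures the intended level of Bloom's taxonomy and that the online results generalize to traditional classrooms.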
Association to Advance Collegiate Schools
of Business (AACSB) Accreditation Pressures
As business programs seek to pursue or maintain AACSB accreditation, one of
the challenges they face is developing and implementing assurance of learning measures
(AoLs). By their nature and design, these AoL goals typically seek to assess
students’ business skills beyond just knowledge of business disciplines to include
their ability to analyze, synthesize, and evaluate—the upper levels of Bloom’s tax-
onomy. If simulation users and providers wish to utilize simulations in this process,
they will have to develop the objective evidence needed to convince accreditation
committees of the pedagogy’s efficacy for assessing cognitive learning.
Earlier, we identified some barriers to researchers undertaking the rigorous studies
needed to provide this validation. Perhaps the “carrot” of seeking accreditation or the
“stick” of maintaining accreditation will offer a sufficient incentive to take on the burdens
inherent in this research. Given the potential financial payoff for a simulation provider, the
availability of funding for this research may attract researchers to undertake this effort. The
bottom line is that we must continue our journey toward effective assessment of the value
of simulations vis-à-vis other pedagogies. As Wolfe and Crookall state, “meaningful edu-
cational research must be conducted if we are to create identifiable value for (a) those
exposed to our methods and (b) those who underwrite and fund our work” (1998, p. 13).
Financial Pressures
Finally, it is possible that a perfect storm may drive the profession to conduct a
rigorous assessment of the worth of alternative pedagogies. There seem to be at least
three converging forces that may motivate educational institutions and funding agen-
cies to allocate the resources required to conduct a rigorous analysis of the educa-
tional merit of pedagogies:
1. the emphasis on assessment throughout all levels of education
2. the rapidly escalating costs of higher education
3. the competition from alternative educational delivery systems
Educators in traditional brick and mortar institutions may feel pressure to justify
their high cost while online alternatives may feel compelled to strive for legitimacy.
The route for those in both camps to validate their case is to provide results of well-
designed research studies demonstrating the educational value of the education they
provide. If sufficient resources are made available to educators, the inertia that has
characterized this field may be overcome.
Future Directions
Simulation scholars have attempted to determine whether simulations accomplish
what instructors hope to achieve through the use of this pedagogy. However, several
of these topics outlined at the beginning of this article have received almost no atten-
tion. The following topics appear to be fertile areas for research because there is vir-
tually no existing literature: Does participating in a simulation result in better (than
if the student is exposed to the material through other pedagogies) long-term reten-
tion of knowledge? Do students improve their grasp of the interrelationships among
the various functions of business (marketing, finance, production, etc.) as a result of
participating in a simulation? Are the interpersonal skills of students improved
through participating in a simulation? Do students who participate in simulations
really develop a greater appreciation for the difficulty of implementing what may, on
the surface, appear to be rather straightforward business concepts? Are business sim-
ulations really effective devices for integrating students into business programs, and
are they effective at improving retention rates?
Integration and Retention
This last issue deserves some additional comment because it is not frequently
cited. Some faculty have discovered that the interactive nature of simulations makes
them ideal icebreakers and team-building exercises. Although no published studies
exist, it is entirely possible that judicious use of simulations may increase retention
rates in business programs. There is a body of literature to support the contention
that students who fail to become academically and socially integrated into academic
programs are at high risk for dropping out of their programs (e.g., Ashar & Skenes,
1993; Astin, 1993; Tinto, 1993). Group exercises such as simulations that force inter-
action and cooperation among students appear to hold considerable potential for
helping students to become integrated.
Distance learners are likely to be a particularly vulnerable group because they
rarely have the opportunity for face-to-face interaction with their peers. The resulting
sense of isolation is likely to be a major contributing factor for the high dropout rate
among distance learners, which is worse than that for students in traditional on-
campus programs (Carr, 2000). Nitsch (2003) states that distance education “students
need some form of socialization in order to feel like they are part of the institution.
Even though they do not live on campus, they need strong ties to the academic
culture and peer learners” (p. 20). Kennedy and Lawton (2007) “have used a business
simulation with success as a device for integrating master’s level students into an
online master’s level program. While simulations are not panaceas for the problems
facing distance education programs, if used well, they can incorporate many elements
[that have proven successful in increasing retention rates]” (p. 7). However, at this
point hard data to support this application of simulations are virtually nonexistent.
A Final Comment
We have learned much regarding the efficacy of simulations for promoting learning
(cognitive, affective, psychomotor) over the past 40 years, but we still have much to
learn and there are many avenues of research worth pursuing. While answering the
questions about the cognitive learning that occurs as a consequence of using simulations
is important, the questions posed regarding future research also merit attention.
Simulation scholars may feel daunted by the challenges we face.
However, we must keep in mind that demonstrating the learning effectiveness of a ped-
agogy is a task we do not face alone. As technology advances and as new pedagogies
emerge, attention inevitably will be focused on assessing the educational worth of the
alternatives. It is inconceivable that any one study will provide the definitive answer
regarding the efficacy of each pedagogy and its place in the educational landscape, so it
is imperative that we continue to take baby steps toward increasing our understanding.
References

Albanese, M., & Mitchell, S. (1993). Problem-based learning: A review of the literature on its outcomes
and implementation issues. Academic Medicine, 68(1), 52-81.
Anderson, P. H., & Lawton, L. (1988). Assessing student performance on a business simulation exercise.
Developments in Business Simulation & Experiential Learning, 15, 241-245. Available from
Anderson, P. H., & Lawton, L. (1992). The relationship between financial performance and other mea-
sures of learning on a simulation exercise. Simulation & Gaming, 23, 326-340.
Anderson, P. H., & Lawton, L. (1997). Demonstrating the learning effectiveness of simulations: Where
we are and where we need to go. Developments in Business Simulation & Experiential Learning, 24,
68-73. Available from
Anderson, P. H., & Lawton, L. (2004a, March). Applying problem based learning pedagogy to a simula-
tion exercise. Proceedings of the Applied Business Research Conference, 10, 1-8.
Anderson, P. H., & Lawton, L. (2004b). Simulation exercises and problem-based learning: Is there a fit?
Developments in Business Simulations and Experiential Exercises, 31, 68-73. Available from
Anderson, P. H., & Lawton, L. (2005). The effectiveness of a simulation exercise for integrating problem-
based learning in management education. Developments in Business Simulations and Experiential
Exercises, 32, 10-18. Available from
Anderson, P. H., & Lawton, L. (2006). The relationship between students’ success on a simulation exer-
cise and their perception of its effectiveness as a PBL problem. Developments in Business Simulations
and Experiential Exercises, 33, 41-47. Available from
Anderson, P. H., & Lawton, L. (2007). Simulation performance and its effectiveness as a PBL problem:
A follow-up study. Developments in Business Simulations and Experiential Exercises, 34, 43-50.
Available from
Anderson, P. H., & Woodhouse, R. H. (1984). The perceived relationship between pedagogies and attain-
ing objectives in the business policy course. Developments in Business Simulations and Experiential
Exercises, 11, 152-156. Available from
Ashar, H., & Skenes, R. (1993). Can Tinto’s student departure model be applied to nontraditional
students? Adult Education Quarterly, 43, 90-100.
Astin, A. W. (1993). What matters in college: Four critical years revisited. San Francisco: Jossey-Bass.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1959). Taxonomy of edu-
cational objectives: The classification of educational goals. Handbook 1: The cognitive domain. New
York: David McKay.
Blythe, S. E., & Gosenpud, J. J. (1981). A relative evaluation of experiential and simulation learning in
terms of perceptions of effected changes in students. Developments in Business Simulations and
Experiential Exercises, 8, 145-148. Available from
Brenenstuhl, D. C. (1975). Cognitive versus affective gains in computer simulations. Simulation &
Gaming, 6, 303-311.
Brownell, J., & Jameson, D. (2004). Problem-based learning in graduate management education: An
integrative model and interdisciplinary application. Journal of Management Education, 28, 558-577.
Burns, A. C., Gentry, J. W., & Wolfe, J. (1990). A cornucopia of considerations in evaluating the effec-
tiveness of experiential pedagogies. In J. W. Gentry (Ed.), Guide to business gaming and experiential
learning (pp. 253-278). East Brunswick, NJ: Nichols/GP Publishing.
Butler, R. J., Markulis, P. M., & Strang, D. R. (1988). Where are we? An analysis of the methods and
focus of the research on simulation gaming. Simulation & Gaming, 19, 3-26.
Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students.
The Chronicle of Higher Education, p. A39.
Colliver, J. A. (2000). Effectiveness of problem-based learning curricula: Research and theory. Academic
Medicine, 75, 259-266.
Faria, A. J. (2001). The changing nature of business simulation/gaming research. Simulation & Gaming,
32, 97-110.
Faria, A. J., & Wellington, W. J. (2004). A survey of simulation game users, former-users, and never-users.
Simulation & Gaming, 35, 178-207.
Fripp, J. (1993). Learning through simulation. London: McGraw-Hill.
Fritzsche, D. J. (1974). The lecture vs. the game. Proceedings of the Annual ABSEL Conference, 43-44.
Available from
Gentry, J. W. (1980). Group size and attitudes toward the simulation experience. Simulation & Gaming,
11, 451-460.
Gentry, J. W., & Burns, A. C. (1981). Operationalizing a test of a model of the use of simulation games
and experiential learning. Developments in Business Simulation and Experiential Learning, 8, 48-52.
Available from
Gijbels, D., Dochy, F., Van den Bossche, P., & Segers, M. (2005). Effects of problem-based learning: A
meta-analysis from the angle of assessment. Review of Educational Research, 75(1), 27-61.
Gosen, J. J., & Washbush, J. (2004). A review of scholarship on assessing experiential learning effective-
ness. Simulation & Gaming, 35, 270-293.
Gosenpud, J. J. (1990). Evaluation of experiential learning. In J. W. Gentry (Ed.), Guide to business gam-
ing and experiential learning (pp. 301-329). East Brunswick, NJ: Nichols/GP Publishing.
Greenlaw, P. S., & Wyman, F. P. (1973). The teaching effectiveness of games in collegiate business
courses. Simulation & Gaming, 4, 259-294.
Hsu, E. (1989). Role-event gaming simulation in management education: A conceptual framework and
review. Simulation & Gaming, 20, 409-438.
Kennedy, E. J., & Lawton, L. (2007, July). Integrating students into distance education programs. Paper
presented at the Fifth International Conference on Management, Athens, Greece.
Keys, B., & Wolfe, J. (1990). The role of management games and simulations in education and research.
Journal of Management, 16, 307-336.
Knotts, U. S., Jr., & Keys, J. B. (1997). Teaching strategic management with a business game. Simulation
& Gaming, 28, 377-394.
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of educational objectives: The classi-
fication of educational goals. Handbook II: The affective domain. New York: David McKay.
Lean, J., Moizer, J., Towler, M., & Abbey, C. (2006). Simulations and games: Use and barriers in higher
education. Active Learning in Higher Education, 7, 227.
Mancuso, L. C. (1975). A comparison of lecture-case study and lecture-computer simulation teaching
methodologies in teaching basic marketing. Proceedings of the Annual ABSEL Conference, 339-346.
Available from
Miller, J. (2004). Problem-based learning in organizational behavior class: Solving students’ real prob-
lems. Journal of Management Education, 28, 578-590.
Nitsch, W. B. (2003). Examination of factors leading to student retention in online graduate education.
Unpublished manuscript. Retrieved April 11, 2008, from
Norris, D. R., & Snyder, C. K. (1982). External validation of simulation games. Simulation & Gaming,
13, 73-85.
Pandit, N. (1996). The creation of theory: A recent application of the grounded theory method. Qualitative
Report, 2(4). Retrieved April 17, 2008, from
Saunders, P. M. (1997). Experiential learning, cases, and simulations in business communication.
Business Communication Quarterly, 50(1), 97-114.
Schumann, P. L., Anderson, P. H., Scott, T. W., & Lawton, L. (2001). A framework for evaluating simu-
lations as educational tools. Developments in Business Simulation & Experiential Learning, 28, 215-
220. Available from
Serva, M. A., & Fuller, M. A. (2004). Aligning what we do and what we measure in business schools:
Incorporating active learning and effective media use in the assessment of instruction. Journal of
Management Education, 28, 19-38.
Spence, L. (2001). Problem based learning: Lead to learn, learn to lead. In Problem based learning hand-
book (pp. 1-12). University Park: Penn State University, College of Information Sciences and
Technology. Available from
Sugrue, B. (1993). Specifications for the design of problem-solving assessments in science. Project 2.1:
Designs for assessing individual and group problem-solving. Los Angeles: National Center for
Research on Evaluation.
Sugrue, B. (1995). A theory-based framework for assessing domain-specific problem solving ability.
Educational Measurement: Issues and Practice, 14(3), 29-36.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago:
University of Chicago Press.
Vance, S. C., & Gray, C. F. (1967). Use of a performance evaluation model for research in business gam-
ing. Academy of Management Journal, 10(1), 27-37.
Vernon, D. T., & Blake, R. L. (1993). Does problem-based learning work? A meta-analysis of evaluative
research. Academic Medicine, 68, 550-563.
Washbush, J., & Gosen, J. J. (2001). An exploration of game-derived learning in total enterprise simula-
tions. Simulation & Gaming, 32, 281-296.
Wellington, W. J., & Faria, A. J. (1992). An examination of the effect of team cohesion, player attitude,
and performance expectations on simulation performance results. Developments in Business Simulation
& Experiential Exercises, 19, 184-189. Available from
Wellington, W. J., Faria, A. J., Whiteley, T. R., & Nulsen, R. O., Jr. (1995). Cognitive and behavioral con-
sistency in a computer-based marketing simulation game environment: An empirical investigation of
the decision-making process. Developments in Business Simulation & Experiential Learning, 22, 12-
18. Available from
Wideman, H. H., Owston, R. D., Brown, C., Kushniruk, A., Ho, F., & Pitts, K. C. (2007). Unpacking the
potential of educational gaming: A new tool for gaming research. Simulation & Gaming, 38, 10-30.
Wolfe, J. (1975). A comparative evaluation of the experiential approach as a business policy learning envi-
ronment. Academy of Management Journal, 18, 442-452.
Wolfe, J. (1976). Correlates and measures of the external validity of computer-based business policy
decision-making environments. Simulation & Gaming, 7, 411-438.
Wolfe, J. (1985). The teaching effectiveness of games in collegiate business courses: A 1973-1983 update.
Simulation & Gaming, 16, 251-288.
Wolfe, J. (1990). The evaluation of computer-based business games: Methodology, findings, and future
needs. In J. W. Gentry (Ed.), Guide to business gaming and experiential learning (pp. 279-300). East
Brunswick, NJ: Nichols/GP Publishing.
Wolfe, J. (1997). The effectiveness of business games in strategic management course work. Simulation
& Gaming, 28, 360-376.
Wolfe, J., Bowen, D. D., & Roberts, C. R. (1989). Team-building effects on company performance: A
business game-based study. Simulation & Gaming, 20, 388-408.
Wolfe, J., & Box, T. M. (1988). Team cohesion effects on business game performance. Simulation &
Gaming, 19, 82-97.
Wolfe, J., & Chacko, T. I. (1983). Team-size effects on business game performance and decision-making
behaviors. Decision Sciences, 14, 121-133.
Wolfe, J., & Crookall, D. (1998). Developing a scientific knowledge of simulation/gaming. Simulation &
Gaming, 29, 7-19.
Wolfe, J., & Roberts, C. R. (1986). The external validity of a business management game: A five-year lon-
gitudinal study. Simulation & Gaming, 17, 45-59.
Wolfe, J., & Roberts, C. R. (1993). A further study of the external validity of business games: Five-year
peer group indicators. Simulation & Gaming, 24, 21-33.
Phil Anderson is chair of the Management Department and a professor at the University of St. Thomas. He
also taught for 4 years, including 1 year on a Fulbright Scholars Fellowship, at University College in Cork,
Ireland. He coauthored Threshold Competitor: A Management Simulation (Prentice Hall, 2002), Threshold
Entrepreneur: A New Business Venture Simulation (Prentice Hall, 2000), Merlin: A Marketing Simulation
(McGraw-Hill, 2003), and Micromatic: A Strategic Management Simulation (Houghton Mifflin, 2005). He
has published numerous articles in the areas of educational pedagogy and learning, small business growth,
quality systems (with special focus on the ISO 9000 International Quality Standard), and determinants of
ethical behavior. Contact: University of St. Thomas, Opus College of Business, MCH 316, St. Paul, MN
55105, USA; +1 651-962-5136 (t); +1 651-962-5093 (f);
Leigh Lawton is chair of the Decision Sciences Department and a professor at the University of St.
Thomas. He has been using simulations for over 30 years and is coauthor of Merlin: A Marketing
Simulation (McGraw-Hill, 2003). He has published numerous articles on business ethics and the learning
effectiveness of business simulations. Contact: University of St. Thomas, Opus College of Business, MCH
316, St. Paul, MN 55105, USA; +1 651-962-5084 (t); +1 651-962-5093 (f);
... Many studies and reviews of educational games have focused on students' perceptions of outcomes such as satisfaction, motivation, perceived ability or perceived learning. Research that examines participants' affective reactions has found that students tend to like educational games, feel they have benefitted from playing and view them more positively than lectures and case discussions (Anderson & Lawton, 2009;Faria, 2001;Koli c-Vehovec et al., 2019). Whether student perceptions are suited for evaluating the success of educational games depends on the outcome studied and the purpose of using games in education (Bacon, 2016). ...
... Conversely, measures of actual knowledge rely on direct evidence of learning (Anderson & Lawton, 2009), occasionally called "objective" evaluations (e.g., Schumann et al., 2014). This term is somewhat imprecise, as any assessment is subjective by being situated in the assessment developer's and interpreter's perspectives. ...
... Competence improvements from games potentially cover both lower cognitive levels, such as knowing theory and terminology, and higher levels, such as applying knowledge across contexts and integrating multiple curriculum objectives into a larger whole. Yet, games are generally inefficient for learning terminology and theory compared with lectures (Anderson & Lawton, 2009). This finding contradicts the earlier mentioned game purpose of combatting disengagement among students who find terminology and theory uninspiring. ...
Full-text available
Background The development and promotion of educational games are still outpacing knowledge of these games' effects, raising calls for evidence of benefits and challenges. Studies suggest that students and teachers like games, but the payoff of the investment in terms of increased motivation and achievement remains unclear. Objectives This study investigates the pure effect of a marketing simulation game on motivation, perceived learning and achievement, above and beyond regular student‐active instruction. Methods We applied a randomized, controlled experiment in a marketing course in upper‐secondary schools (Nclasses = 22; Nstudents = 433) comparing a collaborative–competitive marketing simulation game with regular, case‐based, student‐active instruction on three groups of outcome measures: motivation, perceived ability, and achievement. Additionally, students and teachers provided quantitative and qualitative feedback on game experiences. Results and Conclusions We showcase the importance of a robust study design with valid compound instruments. Moreover, investigations of the game implementation and experiences reveal insights about intervention timing, differential negative consequences by gender and need for reflection opportunities. We find no clear evidence of positive or negative effects of the game, despite students' and teachers' satisfaction. Implications Beyond the effect evaluation, we offer recommendations to researchers and developers of educational games about scaffolding, timing and teacher competence building.
... Over 1200 related papers were published between 1960 and 2019 alone (Hallinger and Wang, 2020). Virtual simulation games have been based on the experiential learning theory (Kolb, 1983), combined with the organization theory and the game theory, to design rules and algorithms (Sterman, 1994;Geurts et al., 2007), which is widely used in business education (Keys and Wolfe, 1990;Wolfe, 1993;Anderson and Lawton, 2009;Faria et al., 2009). The teaching method of virtual simulation game has also been recently applied in entrepreneurship education in many colleges and universities, and has been demonstrated to effectively improve the investment and learning performance of college students in entrepreneurship learning (Kriz and Auchter, 2016;Charrouf and Janan, 2019;Isabelle, 2020;Zulfiqar et al., 2021). ...
... Cognitive investment refers to the use of deep learning methods and strategies, with intrinsic learning motivation, emotional investment refers to the interest and satisfaction of learning and the relationship with teachers and peers, and behavioral investment refers to the participation in in-class and out-of-class learning activities related to behavior (Fredricks et al., 2004). In general, learning results include increases in knowledge, improvements in ability and changes in attitude (Anderson and Lawton, 2009). Since the focus of the virtual simulation game adopted in the present study was on the application of knowledge and improvement of analytical and decision-making ability, entrepreneurial skill development was measured as an indicator of learning results. ...
... As confirmed by a large number of studies, simulated game experience can effectively improve students' participation and learning outcomes (Anderson and Lawton, 2009;Beltrão and Barçante, 2015;Kriz and Auchter, 2016;Liberona and Rojas, 2017;Charrouf and Janan, 2019;Isabelle, 2020;Mirjana et al., 2020;Kauppinen and Choudhary, 2021;Zulfiqar et al., 2021). Thus, the following hypotheses are proposed in the present study: ...
Full-text available
With the emergence of the COVID-19 pandemic, virtual simulation games have provided an effective teaching method for online entrepreneurship education. By exploring the mechanisms that influence student engagement and learning outcomes from different perspectives, such as game design, team and individual perspectives, numerous scholars have demonstrated that such a teaching method can effectively improve students’ engagement and learning performance. However, the existing studies are relatively scattered, and there is a scarcity of studies in which the effects of said factors are considered. Based on the learning process 3P model (presage-process-product) proposed by Biggs (1993) , students’ perceived experience of game design, teamwork and self-efficacy were taken as variables in the early learning stage in the present study, and the influence mechanism of virtual simulation game learning experience on students’ engagement and entrepreneurial skill development was explored, so as to close the gap in existing research. In the present study, 177 college students from Chinese universities were surveyed and the data were surveyed using AMOS 23.0 software. Although the empirical results show that students’ “goal and feedback” and “alternative” experience of game design did not have a significant positive impact on students’ engagement, there was a direct and significant effect the development of entrepreneurial skills. Students’ experience of teamwork and general self-efficacy could not only directly and significantly affect the development of entrepreneurial skills, but also indirectly affect the development of entrepreneurial skills through learning engagement. The research results are practically significant for teachers in the selection and development of virtual simulation games, can be effectively applied in teaching process management, and can improve students’ engagement and learning performance.
... BSGs are also suitable for raising awareness of various types of risks [4]. BSGs are suitable to support business students since it is expected that business students should have the opportunity to become more experienced and astute decision makers in uncertain situations [5]. While traditional lectures are ideal to provide definitions, concepts and theory, decision making is an empirical process [6]. ...
Full-text available
Despite the increasing use of simulation games in business education, only few studies have explored the cognitive processes that learners employ while playing the game, with quite controversial results about the students’ learning outcomes. The current study analyses the impact of a Business Simulation Game (BSG) on the cognitive processes related to the “Structure of the Observed Learning Outcome” (SOLO) taxonomy. Moreover, overall learning performance and perceived teamwork competency have been investigated. A quasi-experimental pre and post-test design was applied. Eighty (80) university students played a marketing simulation game to practise a business marketing plan. The results showed a significant improvement in the unistructural and extended abstract levels of the taxonomy after playing the game. There was no significant difference in the multi-structural level while the effect on the relational level was negative. Also, a strong, positive correlation between perceived teamwork competency and learning performance was found. Implications for instructional designers and educators are discussed.
... The term "simulation" refers to several types of guided learning experiences artificially resembling reality, where teacher participants can exercise their skills interactively and dynamically, and cope with challenges in a learner-centered experience focused on the individual needs of each learner (Anderson & Lawton, 2009;Dieker et al., 2014;Karlen et al., 2020;Kramarski, 2018). Studies have found that simulations can assist in the practical assimilation of theory. ...
Full-text available
Self-regulated learning (SRL) is essential for independent active learners. Despite its importance, supporting students' SRL is often challenging for teachers who lack the necessary knowledge and skills for in-class SRL practices. Hence, there is a need to support teachers' SRL: both as learners—how to self-regulate their own learning, and as teachers—how to use practices to support students' SRL. This study proposes an innovative instructional model empowered by “Authentic Interactive Dynamic Experiences” (“AIDE”) oriented to SRL and called the SRL–AIDE model. To examine the effectiveness of the model, we designed a professional development program based on the SRL–AIDE model, called the SRL–AIDE program. It involved explicit exposure to SRL theory, beliefs in independent learning as enhancing SRL, and immersive experiences including video-based learning and simulations with live actors to stimulate motivation for SRL classroom implementation. The model’s effectiveness was evaluated using authentic methods. Seventy-six teachers participated in either the SRL–AIDE program (experimental group) or a control program focused on effective learning principles. The results indicated a shift in beliefs toward independent learning as a core behavior in enhancing SRL, and a highly significant and systematic increase among the experimental group in the lesson plan, performance, and reflection (on the performed lesson) as phases in the teaching relating to the SRL cycle, including cognitive, metacognitive, and independent learning strategies. The improvements of the SRL practices were apparent in two measurement types: explicitness level and duration. Implications for class instruction, teachers’ professional development oriented toward students’ outcomes, and authentic evaluation are discussed.
... A number of platforms claim to employ game-based approaches, including variations of the persuasive technology techniques reviewed here, for the purposes of comprehension and awareness development. However, reviews [17], [18] show that most of them still fall short of verifying whether they help their users grasp the knowledge that they claim to convey. This is a problem that has been widely discussed in relation to learning technologies and Laurillard [19] has proposed a framework to compare the taught object being presented to the understood object actually being comprehended. ...
Full-text available
There is a conflict between the need for security compliance by users and the fact that commonly they cannot afford to dedicate much of their time and energy to that security. A balanced level of user engagement in security is difficult to achieve due to difference of priorities between the business perspective and the security perspective. We sought to find a way to engage users minimally, yet efficiently, so that they would both improve their security awareness and provide necessary feedback for improvement purposes to security designers. We have developed a persuasive software toolkit to engage users in structured discussions about security vulnerabilities in their company and potential interventions addressing these. In the toolkit we have adapted and integrated an established framework from conventional crime prevention. In the research reported here we examine how non-professionals perceived security problems through a short-term use of the toolkit. We present perceptions from a pilot lab study in which randomly recruited participants had to analyze a crafted insider threat problem using the toolkit. Results demonstrate that study participants were able to successfully identify causes, propose interventions and engage in providing feedback on proposed interventions. Subsequent interviews show that participants have developed greater awareness of information security issues and the framework to address these, which in a real setting would lead ultimately to significant benefits for the organization. These results indicate that when well-structured such short-term engagement is sufficient for users to meaningfully take part in complex security discussions and develop in-depth understanding of theoretical principles of security.
... The term "simulation" refers to several types of guided learning experiences artificially resembling reality, where teacher participants can exercise their skills interactively and dynamically, and cope with challenges in a learner-centered experience focused on the individual needs of each learner (Anderson & Lawton, 2009;Dieker et al., 2014;Karlen et al., 2020;Kramarski, 2018). Studies have found that simulations can assist in the practical assimilation of theory. ...
העידן הפוסט-מודרני מעמיד אתגר בפני המערכת החינוכית בכללותה, ובפני המוסדות להכשרת מורים ולפיתוח מקצועי בפרט, שכן התפתחות הידע המואצת מחייבת את המורים להיות 'לומדים לאורך החיים' ((Organisation for Economic Co-operation and Development, 2013 ולפתח מיומנויות של לומד עצמאי הן כמורה והן אצל התלמיד תוך התנסות בפרקטיקות הוראה מתאימות לדרישות העידן החדש. לדעת חוקרים, הקושי המרכזי של מורים הוא בקישור בין הידע ופרקטיקות ההוראה הנלמדים בתוכניות ההכשרה והפיתוח המקצועי לבין ההוראה בפועל בזמן אמת בכיתה ((Kramarski, 2018. אחת מהמיומנויות המרכזיות במאה ה-21 היא טיפוח לומד עצמאי פעיל בעל הכוונה עצמית בלמידה (Self-Regulated Learning – SRL; Zimmerman, 2013). הכוונה עצמית בלמידה ובהוראה היא היכולת של הלומד/מורה להציב לעצמו מטרות, לבחור אסטרטגיות פעולה, להיות מודע למחשבותיו, להרגשותיו ולהתנהגותו במהלך הלמידה או ההוראה, לפקח עליהן ולנהל אותן כדי להשיג את מטרותיו. הכוונה עצמית היא תהליך מורכב ודינמי, שבו הלומד (מורה או תלמיד) נמצא במרכז הלמידה מבחינה קוגניטיבית, מטה-קוגניטיבית ומוטיבציונית-רגשית (Zimmerman, 2013). מחקרים הראו כי מיומנויות של לומד עצמאי בעל הכוונה עצמית אינן מתפתחות באופן ספונטני, ונדרשת סביבת למידה מתאימה שמאפשרת לטפח מיומנויות אלו (Maghfiroh, Subchan, & Iqbal, 2017). מרבית המחקרים על סביבות למידה שתומכות בהכוונה עצמית נערכו בקרב תלמידים ובקרב פרחי הוראה בלבד, אך מעט מחקרים נערכו בקרב מורים בפועל העוסקים בתהליכי הטמעה של פרקטיקות הוראה המעודדות לומד בעל הכוונה עצמית בכיתה בזמן אמת (Dignath & Büttner, 2018; Mevarech & Kramarski, 2014). על פי הספרות המחקרית, מיומנויות של לומד בעל הכוונה עצמית (SRL) ניתן לטפח בסביבה המאפשרת למידה אקטיבית. עבור מורים, סביבת למידה כזו כוללת חוויות הוראה אותנטיות העשויות לספק להם כלים לגשר על הפער בין התיאוריה למעשה, ולסייע בידם להשתמש בפרקטיקות ההוראה שנלמדות בתוכניות ההכשרה בהוראה בפועל (Kramarski, 2018; Van Driel, Beijaard & Verloop, 2001; Vermunt, 2016). 
במחקר הנוכחי התמקדנו בסביבת למידה ייחודית המשלבת סימולציות והכוונה עצמית, ומאפשרת את טיפוחו של המורה כלומד עצמאי (SRL) ואת הטמעת הפרקטיקות הנחוצות להוראה שבה התלמיד במרכז הלמידה, הן בעת התנסות בפועל בזמן אמת בסימולציה והן בעת העברת שיעור בכיתה.
... It is interesting that students attach greater importance to the acquisition of competencies in the field of managerial experience and the understanding of the relationship between economic variables. In contrast, the development of analytical skills through simulation games is not positively perceived by students from the perspective of the teacher (game moderator), which is also documented in research studies [14,35,41,43]. ...
Full-text available
Information technologies play an important role in designing new ways of teaching, and, at the same time, the globalization of the business world affects the quality of human capital that the corporate sector requires. Apart from theoretical business understanding, multidisciplinary knowledge is also needed. Business simulation games belong among suitable educational tools, which are able to respond to contemporary business requirements and the requirements of students. Business simulation games provide a useful tool for experiential learning by university students studying business programs. For the effective development of students’ competencies in economic and managerial fields of study, it is necessary to apply appropriate steps in the implementation of simulation games and to understand the experience of students in the use of games in the teaching process. For that purpose, best practice for business simulation games has been determined. This introduced best practice includes description of the benefits from realized simulation games, from the lecturer point of view. A realized survey focused on the main benefits considered by students who completed the subject Managerial Simulation Game. The students’ approach to the implemented simulation games was gradually monitored over the course of five academic years. Research samples contain 148 students from the first year of a master’s study program. Our survey showed that the subject is more beneficial for students for their further study than for their future professions. At the same time, the vast majority of students perceive simulation games as a useful and interesting way to verify the dependencies between economic variables. To strengthen analytical skills, it is necessary to introduce tasks that support working with economic data through simulation games. 
The novelty of the paper consists of mapping the benefits from the implemented simulation games for the student’s own person, categorizing the identified benefits into six groups with the same characteristics, and, at the same time, implementing the research for students attending private and public schools.
... Therefore, learning effectiveness can be identified as the comprehensive set of a student's learning outcomes (Li & Liang, 2020). A multi-period, problem-based simulation learning approach gives deeper and longer-lasting interdisciplinary application and learning than traditional deductive methods such as lectures, rote memorization, and tests (Miller, 2004; Brownwell & Jameson, 2004; Anderson & Lawton, 2008). Simulations can also encourage students to actively evaluate and employ cross-domain business knowledge within complicated decision-making environments that commonly occur in the real world (Gatti et al., 2019; Hernández-Lara, Perera-Lluna, & Serradell-López, 2019; Pérez-Pérez et al., 2021). ...
Business simulation game systems (BSGs) have become an important learning tool for higher education in business and management fields in recent years. However, few studies have investigated how BSG systems affect perceived learning effectiveness and entrepreneurial self-efficacy (ESE). Based on the previous information systems literature, this study developed and validated a BSG systems success model. The newly proposed success variable of model-reality fit, which was conceptualized as the fit between the BSG model and the real-world business environment, was also examined. Data collected from 152 college students in Taiwan was tested against the research model using the partial least squares (PLS) approach. The results indicate that system quality and model-reality fit positively influence user satisfaction, which in turn promotes reuse intention, learning effectiveness, and ESE, while service quality and information quality do not. Furthermore, service quality and model-reality fit play a critical role in determining reuse intention, although system quality and information quality do not have a significant effect on reuse intention. Other than the insignificant impact of user satisfaction on ESE, the results also confirm that user satisfaction and reuse intention positively predict learning effectiveness and ESE.
... In this sense, business simulations can be considered effective for improving business skills (Greco and Murgia, 2007; Rachman-Moore and Kennett, 2006). Some authors argue that assessment methodologies lack scientific rigor and that it is difficult to demonstrate that learning takes place through simulation (Gosen and Washbush, 2004; Anderson and Lawton, 2009). ...
The development of digital infrastructure and digital entrepreneurship is a problem of harmonizing initiatives and programs across three levels: telecommunication infrastructure; data management and services; and digital skills and competencies. The focus and resources at each level are determined by the priorities of the digital ecosystem; thus, the digital regulator is a tool for harmonizing and developing that ecosystem. Digital entrepreneurship operates with entities similar to those of traditional entrepreneurship, such as capital, resources, and people. Its driving force is human capital, that is, the knowledge, talents, skills, abilities, competencies, experience, and intelligence of people. The rapid spread of digital technologies makes citizens' digital skills key among other skills. Digitalization and cross-platformization are currently the main trends in the labor market. In other words, the ability to work with the digital technologies delivered by Industry 4.0 is gradually becoming a permanent necessity for most specializations, i.e. end-to-end or cross-platform. The uniqueness of digital competencies lies in the fact that they allow citizens to acquire competencies in many other areas more effectively (for example, learning languages, subjects, professions, etc.). The goal pursued in teaching digital entrepreneurship is revealed through the following questions: • What to teach? (answer: new digital competencies and skills); • Why teach? (answer: to modernize content); • How to teach? (answer: through effective use of digital technology); • Where to teach? (answer: in a new space, a new augmented reality); • Who should teach? (answer: teacher-coaches, mentors, and teacher-practitioners in digital entrepreneurship); • What is the result? (answer: graduates of high value in the labor market, specialists with high-quality competencies and skills in digital entrepreneurship).
Using these methodological recommendations on the content of teaching digital entrepreneurship allows the teacher to master new methods, techniques, and technologies of digital learning in the new virtual reality, and to acquire digital business competencies aligned with Industry 4.0 and with highly specialized business needs. The aim is to train professionals with the level of digital skills and abilities that the business of the 21st century requires, who can effectively and safely use digital technology to solve business problems. For these reasons, it is important to apply the latest educational methods to raise the level of competence in digital entrepreneurship, particularly among teachers of economics and business, and to bring it into compliance with approved European standards, which is the purpose of these guidelines for teaching digital entrepreneurship.
Purpose: This study has two objectives: to explore the factors that influence students' self-efficacy regarding engagement and learning outcomes in a business simulation game course, and to compare hierarchical and general teaching methods. Design/methodology/approach: From September 2021 to May 2022, a questionnaire was administered to 126 students in a business simulation game course at the Zhongshan Institute, University of Electronic Science and Technology of China. Data were analyzed using nonparametric paired-samples tests and linear regression. Findings: Student self-efficacy, engagement, and learning outcomes were significantly higher with the hierarchical teaching method than with the general teaching method. The factors influencing self-efficacy regarding learning outcomes also differed between the two methods. With the general teaching method, student self-efficacy did not affect learning outcomes directly but did so indirectly through the mediating effect of engagement. With the hierarchical teaching method, self-efficacy affected learning outcomes both directly and indirectly through student engagement. Research limitations/implications: Unlike a controlled experiment with a comparison group, the quasi-experimental design eliminates the influence of sample heterogeneity, but the state of the same sample may change over time for reasons other than the hierarchical teaching design. Practical implications: Based on these results, teachers can apply hierarchical teaching according to student ability levels when integrating business simulation games. The results can inspire teachers to protect students' self-confidence and to make teaching objectives and specific requirements clear at the beginning of the course, and they offer students practical suggestions for improving their course performance. Social implications: The findings can be extended to other courses: teachers can improve students' self-efficacy through hierarchical teaching design, thereby improving learning performance. Originality/value: This study built a model based on self-system model of motivational development (SSMMD) theory, comparing the factors that affect student self-efficacy regarding learning outcomes under different teaching methods. The model enriches the literature on SSMMD theory as applied to business simulation game courses and adds to our understanding of hierarchical teaching methods in this field. The results provide a valuable reference for teachers seeking to improve teaching methods and learning outcomes.
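The nonparametric paired-samples analysis described above can be illustrated with a minimal sketch. The exact sign test below is a simple stand-in for whichever paired test the authors actually used, and the scores are invented for illustration; nothing here reproduces the study's data.

```python
from math import comb

def sign_test_p(before, after):
    """One-sided exact sign test: P(at least k improvements | no real effect)."""
    diffs = [a - b for a, b in zip(after, before) if a != b]  # drop ties
    n = len(diffs)
    k = sum(d > 0 for d in diffs)  # number of students who improved
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Invented scores for the same 8 students under the two teaching methods.
general      = [62, 70, 58, 75, 66, 71, 60, 68]
hierarchical = [68, 74, 61, 80, 70, 78, 63, 72]

p = sign_test_p(general, hierarchical)
print(p)  # → 0.00390625 (all 8 students improved)
```

With every paired difference positive, the one-sided p-value is 1/2⁸, small enough to reject "no effect" at conventional levels.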
Analysis was conducted to assess the relationship between financial performance in a business simulation exercise and various other measures of student learning. Financial performance was represented by a composite performance score that rated student companies on the net income, return on investment (ROI), and return on assets (ROA) achieved in a competitive, computer-based management simulation. (Although these measures are highly intercorrelated, a 1988 study by House and Napier found that their combination provided the best overall measure of a company's financial performance.) Little or no relationship was found between the performance score and the other measurements used to assess student learning.
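The abstract does not specify how the composite performance score weighted the three measures. One common approach, shown here purely as a hypothetical sketch, is to average equal-weighted z-scores of net income, ROI, and ROA across the competing companies; the team names and figures below are invented.

```python
from statistics import mean, pstdev

def composite_scores(companies):
    """companies: {name: (net_income, roi, roa)} -> {name: mean z-score}.

    Each metric is standardized across companies so the three measures,
    despite very different scales, contribute equally to the composite.
    """
    columns = list(zip(*companies.values()))            # one tuple per metric
    stats = [(mean(c), pstdev(c)) for c in columns]     # (mean, sd) per metric
    return {
        name: mean((v - mu) / sd if sd else 0.0
                   for v, (mu, sd) in zip(values, stats))
        for name, values in companies.items()
    }

teams = {
    "Alpha": (120_000, 0.15, 0.08),
    "Beta":  ( 90_000, 0.11, 0.06),
    "Gamma": (150_000, 0.18, 0.10),
}
scores = composite_scores(teams)
```

Because Gamma leads on all three metrics and Beta trails on all three, Gamma ranks first and Beta last; the z-score composites also average to zero across teams by construction.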
This paper outlines a particular approach to building theory that was employed in a recent doctoral research project (Pandit, 1995). Three aspects used in conjunction indicate the project's novelty: firstly, the systematic and rigorous application of the grounded theory method; secondly, the use of on-line computerised databases as a primary source of data; and, thirdly, the use of a qualitative data analysis software package to aid the process of grounded theory building.
Simulation modeling offers several distinct gains in learning opportunities beyond traditional quality improvement tools. A simulation model captures complex, multivariate system components and replicates system operation in compressed time. The visual aspect enables workers to "see" the effect of proposed changes and thus eliminates much of the fear of failure typically associated with change. Most important, simulation permits design of a total solution, addressing interactions of all system components. ©1997 Aspen Publishers, Inc.
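As a minimal illustration of the point above (not drawn from the cited article), the sketch below simulates a single-server queue, the kind of system whose operation a simulation model replicates in compressed time so a proposed change can be evaluated before it is made. All parameters are invented.

```python
import random

def mean_wait(n_customers, arrival_rate, service_rate, seed=0):
    """Average waiting time in a single-server FIFO queue (discrete events)."""
    rng = random.Random(seed)
    arrival = free_at = total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(arrival_rate)   # next customer arrives
        start = max(arrival, free_at)              # wait if server is busy
        total_wait += start - arrival
        free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# "Seeing" the effect of a proposed change: a faster server cuts waits,
# without disrupting the real system.
slow = mean_wait(5000, arrival_rate=1.0, service_rate=1.2)
fast = mean_wait(5000, arrival_rate=1.0, service_rate=2.0)
```

Running both scenarios from the same random seed makes the comparison a controlled one: only the service rate differs between the two runs.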
The practice of teaching strategic management using management games is growing throughout the world. Games are used to assist in teaching students to integrate the functional areas of business and to provide a working knowledge of the strategic management process. Games also provide valuable experience in team skill development. Because more, not less, skill is required to teach a game-oriented course than to teach a lecture- or case-oriented strategy course, the tactics chosen by an instructor are critical to success.