Behavior Research Methods, Instruments, & Computers
2001, 33 (2), 270-273
Copyright 2001 Psychonomic Society, Inc.

Difference to Inference: Teaching logical and statistical reasoning through on-line interactivity

THOMAS E. MALLOY
University of Utah, Salt Lake City, Utah

Difference to Inference is an on-line JAVA program that simulates theory testing and falsification through research design and data collection in a game format. The program, based on cognitive and epistemological principles, is designed to support learning of the thinking skills underlying deductive and inductive logic and statistical reasoning. Difference to Inference has database connectivity so that game scores can be counted as part of course grades.
Emphasizing the active nature of information process-
ing, Posner and Osgood (1980) proposed that computers
be used to train inquiry in a way that “the teaching envi-
ronment . . . arise[s] in close relationship to the inquiry
activities of the faculty who shape it” (p. 95). Recent ex-
amples of educators designing computer-assisted learn-
ing contexts in which students actively discover knowl-
edge by thinking like the practitioners of a discipline
include examples as diverse as designing research stud-
ies (Brown, 1999) and solving ethical dilemmas in med-
icine through case studies (Martin & Reese, 2000). Wash-
burn (1999), focusing on one important mental skill of
scientists, uses an interactive program to teach students,
who typically accept published inferences as fact, to distinguish between factual statements and inferences in re-
search reports. The Difference to Inference game engages
students in one fundamental activity of scientific inquiry—
the use of deductive logic, inductive logic, and statistical
reasoning in the falsification of theories.
Game Procedure
Accessing Difference to Inference. The game proce-
dure is laid out in detail in StatCenter at the University of
Utah (http://www.psych.utah.edu/stat/introstats/). It is
free and freely available to instructors and students. A visitor may log in as a guest. Scores earned by one guest overwrite scores earned by other guests. Teachers wanting to keep individual scores for students may contact the author
to make an arrangement for doing so.
On the main menu, under "Work and Learn," select "Interactive Learning." Scroll down to "Difference to In-
ference” and click. There will be a choice of “stories”
that give meaning to game activities. Different instruc-
tors prefer to frame the game with different stories to re-
flect the content of their courses. For this discussion,
“Hurricane Damage” will be used as a frame. Selecting
“Hurricane Damage” will show a menu with historical
notes about hurricane damage and a “Start the Game” op-
tion. The game interface comes with an extensive tutorial and with a link to an on-line lecture integrating game play into statistical theory. Space does not allow for more than
an overview of the game here.
Game story. Students are asked to plan a strategic se-
ries of two-group experiments whose data will discrimi-
nate among five candidate theories. They are shown five
maps that describe the pattern of deforestation in five
provinces of a tropical country prior to a hurricane. An
example of a set of five patterns of deforestation is shown
on the far right of Figure 1. The dark gray cells in each
of the five maps represent an area of deforestation; the
light gray cells represent areas that remain forested. (On
a computer screen, the dark gray cells are red, and the light
gray cells are green.) The records indicating which pat-
tern of deforestation goes with which province have been
lost during the hurricane. The students are to suppose
that they are in one of the provinces and have been given
a grant to determine which of the five maps applies to
that province. It is known that mortality because of mud
slides is higher in areas of land that are deforested than
in areas protected by rain forest. The students are to use
hurricane mortality data to determine which deforesta-
tion map appliesto the provincein which they are located.
The game has five levels of difficulty. Difficulty level
depends on the effect size of deforestation: Does defor-
estation produce increases in mortality that are enormous
and require no statistical analyses (easiest level), or are
they more subtle, perhaps so subtle that even powerful in-
ferential statistics such as t might be insufficient and replication is required (hardest level)?
Figure 1. Interactive graphical user interface for the Difference to Inference game. Students must discover which of the five patterns in a column on the right is hidden behind the white grid in the center. The data (shown at the bottom) have been collected from selected cells (shown in gray).
Interface. Figure 1 shows the game interface. Five
candidate theories are in a column on the right. The family
of candidate theories is randomly generated by a schema
plus exception rule for each student each time the game
is played. The white grid in the center of the interface is
a workspace that allows students to use horizontal or ver-
tical selection tools to choose to see mortality rates for
any two adjacent cells.
Buttons next to each candidate theory allow the out-
line of the theory to be projected onto the white work-
space. Figure 1 shows the outline of the second-from-
the-top theory projected onto the workspace. Allowing a
theory to be outlined on the workspace keeps the task from
primarily requiring visual-spatial skills and lets players
focus on logic. The example in Figure 1 shows a case in
which a player has used a tool to select two cells (high-
lighted in gray) on the far right of the work area. Notice
that, in the column of theories, the top and bottom theo-
ries show both of these cells to be forested; therefore, those two theories predict that both cells would have low mortality rates (no difference). In contrast, the middle
three theories in the column show that, of the two cells
selected by the player, the one on the left is deforested,
and the one on the right is forested. Therefore, the middle
three theories predict that there should be high mortality rates in the left cell and low mortality in the right cell (dif-
ference). These different predictions are differences that
make a difference: Depending on the data, either three of
the theories or two of the theories will be falsified.
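This partition can be made concrete with a short sketch in Java, the language of the applet itself. The code below is not the program's source; the maps, grid size, and class names are invented for illustration. For a selected pair of adjacent cells, each candidate map predicts a mortality difference when exactly one of the two cells is deforested and no difference when the cells match, so any chosen comparison splits the family of theories into two camps.

    import java.util.ArrayList;
    import java.util.List;

    /** Illustrative sketch: how a chosen pair of adjacent cells splits a family
     *  of candidate deforestation maps by the mortality difference they predict. */
    public class PredictionPartition {

        /** A theory predicts a mortality difference only if exactly one of the
         *  two selected cells is deforested (true = deforested, false = forested). */
        static boolean predictsDifference(boolean[][] map, int r1, int c1, int r2, int c2) {
            return map[r1][c1] != map[r2][c2];
        }

        public static void main(String[] args) {
            // Five tiny 2 x 3 candidate maps, standing in for the game's generated family.
            boolean[][][] theories = {
                {{false, false, false}, {false, false, false}},  // 1: fully forested
                {{true,  true,  false}, {false, false, false}},  // 2: deforested upper left
                {{true,  true,  false}, {true,  true,  false}},  // 3: deforested left half
                {{true,  true,  false}, {true,  false, false}},  // 4: irregular left
                {{false, false, false}, {true,  true,  true }}   // 5: deforested lower row
            };

            // The player selects two horizontally adjacent cells, e.g., (0,1) and (0,2).
            int r1 = 0, c1 = 1, r2 = 0, c2 = 2;

            List<Integer> predictDifference = new ArrayList<>();
            List<Integer> predictNoDifference = new ArrayList<>();
            for (int t = 0; t < theories.length; t++) {
                if (predictsDifference(theories[t], r1, c1, r2, c2)) {
                    predictDifference.add(t + 1);
                } else {
                    predictNoDifference.add(t + 1);
                }
            }

            // Whatever the data show, one of these two groups will be falsified.
            System.out.println("Theories predicting a difference:  " + predictDifference);   // [2, 3, 4]
            System.out.println("Theories predicting no difference: " + predictNoDifference); // [1, 5]
        }
    }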
In short, players can query mortality rates from any
two adjacent cells. Data resulting from the query appear
below the white workspace. The data are generated pseudorandomly from Gaussian distributions whose parameters are set when difficulty level (deforestation effect size) is set; thus, each data query results in a unique data set. Buttons allow the students to ask for statistical analyses, such as group means, standard deviations, standard errors (SEMs), and a t test.
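The applet's generator is not reproduced here, but the general idea can be sketched as follows, with the Gaussian parameters, sample sizes, and class names assumed for the example: mortality observations for a queried cell are drawn from a normal distribution whose mean is raised for deforested cells by an amount tied to the difficulty level, and the summary statistics offered by the interface buttons are computed from the two samples.

    import java.util.Random;

    /** Illustrative sketch of a Difference to Inference data query: two samples of
     *  mortality rates drawn from Gaussians whose mean separation (the deforestation
     *  effect size) shrinks as the difficulty level rises. */
    public class MortalityQuery {

        private static final Random RNG = new Random();

        /** Draw n mortality observations for one cell. Deforested cells get a mean
         *  boost of effectSize standard deviations (all values here are assumptions). */
        static double[] sampleCell(boolean deforested, int n, double effectSize) {
            double baseMean = 20.0, sd = 5.0;               // hypothetical units: deaths per 10,000
            double mean = baseMean + (deforested ? effectSize * sd : 0.0);
            double[] data = new double[n];
            for (int i = 0; i < n; i++) {
                data[i] = mean + sd * RNG.nextGaussian();
            }
            return data;
        }

        static double mean(double[] x) {
            double s = 0;
            for (double v : x) s += v;
            return s / x.length;
        }

        static double variance(double[] x) {
            double m = mean(x), ss = 0;
            for (double v : x) ss += (v - m) * (v - m);
            return ss / (x.length - 1);                     // unbiased sample variance
        }

        /** Pooled-variance two-sample t statistic (equal n assumed for simplicity). */
        static double tStatistic(double[] a, double[] b) {
            double pooled = (variance(a) + variance(b)) / 2.0;
            double se = Math.sqrt(pooled * (1.0 / a.length + 1.0 / b.length));
            return (mean(a) - mean(b)) / se;
        }

        public static void main(String[] args) {
            // Higher difficulty level -> smaller effect size -> noisier decision.
            double effectSize = 0.4;                        // e.g., a hard level
            double[] deforestedCell = sampleCell(true, 15, effectSize);
            double[] forestedCell = sampleCell(false, 15, effectSize);

            System.out.printf("Means: %.2f vs %.2f%n", mean(deforestedCell), mean(forestedCell));
            System.out.printf("SEMs:  %.2f vs %.2f%n",
                    Math.sqrt(variance(deforestedCell) / deforestedCell.length),
                    Math.sqrt(variance(forestedCell) / forestedCell.length));
            System.out.printf("t(%d) = %.2f%n",
                    deforestedCell.length + forestedCell.length - 2,
                    tStatistic(deforestedCell, forestedCell));
        }
    }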
Scoring. The students start the game with a small
grant. Collecting and replicating data costs money, as do
statistical analyses performed on collected data. After
conducting a series of two-group studies, the students se-
lect one theory (map) as the best description of the pat-
tern of deforestation on the basis of the hurricane mor-
tality data. The program is structured so that only one of
the five candidate theories is consistent with the statisti-
cal equations that generate the data sets. The other four
theories will be inconsistent with the data—at least, they
will be inconsistent with the data if the right research
questions are asked. The students' choice of theory is
recorded on an Oracle database and affects their course
grades. Submission of a theory that is inconsistent with
the data costs a great deal of grant funds. After submit-
ting a theoretical conclusion, the students can play the
game again with five new (randomly generated) theories.
This process continues until the students earn large
enough grants to get full credit for that particular level of
the game. They then begin on the next level, in which
treatment effects are more subtle and statistical reason-
ing more important.
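The bookkeeping behind this scoring scheme can be illustrated with a minimal sketch; every amount shown is invented for the example rather than taken from StatCenter.

    /** Illustrative grant ledger for one pass through a level; the amounts are
     *  invented for the example, not the values used by StatCenter. */
    public class GrantLedger {
        private double balance;

        GrantLedger(double startingGrant) { this.balance = startingGrant; }

        void spend(String activity, double cost) {
            balance -= cost;
            System.out.printf("%-28s -%6.0f  (balance %7.0f)%n", activity, cost, balance);
        }

        public static void main(String[] args) {
            GrantLedger ledger = new GrantLedger(10000);    // small starting grant
            ledger.spend("Two-group data query", 500);
            ledger.spend("t test on the query", 250);
            ledger.spend("Replication of the query", 500);
            // Submitting a theory that the data contradict is the costly mistake.
            ledger.spend("Incorrect theory submitted", 5000);
            // Play continues until the accumulated earnings give full credit
            // for the level; then the next, harder level begins.
        }
    }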
Game Design: Difference to Inference
Deducing differences that make a theoretical dif-
ference. Students must deduce a specific research ques-
tion on the basis of a critical examination of the five de-
forestation patterns. If they are to eliminate theories on
the basis of mortality data, first they must seek the edges
of the deforestation patterns where data collected in a
two-group study will show differences in mortality. But
not all such differences are theoretically important.
Comparative examination of the edges of the five theories shows that all five make the same prediction for many comparisons. A research study that finds such a difference cannot discriminate among theories. So students
must decide where to set up a two-group comparison
whose data might eliminate some theories, but not oth-
ers. They must not just find differences between their
groups, but they must find those “differences that make
a difference” in eliminating theories (Bateson, 1972,
p. 452). This can be done by critically examining the five
maps on the right and deducing where the theories make
different predictions. The two selected cells (shown in
gray) in Figure 1 target a difference in mortality that will
make a difference in eliminating theories.
Deduction and falsification. The data obtained from
research studies may or may not be inconsistent with each
theory’s prediction. If a theory’s prediction is shown not
to be true, the theory is falsified through deductive logic.
In the Difference to Inference game, the data from a well-
designed study can be used to falsify one or more of the
theories.
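Continuing the illustrative five-theory example from above (the class and variable names are again invented), the deductive step can be sketched as follows: once the outcome of a well-designed query is known, any theory whose prediction contradicts that outcome is eliminated.

    import java.util.ArrayList;
    import java.util.List;

    /** Illustrative sketch of deductive falsification in the game: once a query's
     *  outcome is known, any theory whose prediction contradicts it is eliminated. */
    public class Falsification {

        /** true = the theory predicts a mortality difference for the queried pair. */
        static boolean[] predictions = {false, true, true, true, false};   // theories 1-5

        static List<Integer> surviving(boolean observedDifference) {
            List<Integer> alive = new ArrayList<>();
            for (int t = 0; t < predictions.length; t++) {
                // A theory survives only if its prediction matches the data.
                if (predictions[t] == observedDifference) {
                    alive.add(t + 1);
                }
            }
            return alive;
        }

        public static void main(String[] args) {
            // Suppose the query showed a clear mortality difference.
            System.out.println("Surviving theories: " + surviving(true));   // [2, 3, 4]
            // Had it shown no difference, theories 2-4 would have been falsified instead.
        }
    }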
The fundamental inductive pattern. In contrast,
data that are consistent with a theory’s prediction do not
prove the theory. Yet, the empirical scientific method can
be persuasive when theories accurately predict data, es-
pecially across a series of studies. Polya (1968) develops
the important distinction between deductive falsification
and inductive plausibility. When the prediction of a the-
ory is found to be false, the theory is proven to be false.
But when the prediction of a theory is verified, the theory
is not proven to be true; rather, it becomes more plausible.
Polya describes inductive patterns by which theories,
although not proven, become more and more plausible.
The fundamental inductive pattern is the verification of
a consequence of a theory. When a prediction of a theory
is found to be true, the theory becomes more plausible.
Other inductive patterns that lead to increased plausibil-
ity include successive verifications of a theory’s predic-
tions and the elimination of rival theories (Polya, 1968,
p. 26). The Difference to Inference game gives students
the opportunity to learn about induction and plausibility.
They gain repeated experience with a theory that, al-
though not proven, becomes more plausible, because its
predictions are always verified, whereas rival theories
are eliminated. Such experience provides a basis for dis-
cussions of the distinction between deductive falsifica-
tion and inductive plausibility in class.
Statistical reasoning. Up to this point, the game has
been discussed as if the data were clear cut: Either the
data from the two targeted cells showed a difference in
mortality or not, and this difference (or lack of it) either
was consistent with a particular theory’s prediction or it
was not. The emphasis has been on the use of deductive
and inductive logic. But at higher levels of game difficulty,
the treatment effects are small, and the data obtained for
the two targeted cells can show a great deal of overlap. It
may not be at all obvious whether or not there is a differ-
ence in mortality in the two cells. At these levels of diffi-
culty, the game requires statistical reasoning, including
the use of null-hypothesis-testing logic using procedures like a t test. This allows students, after they have learned
about the basic logic of theory falsification, to learn to
integrate statistical reasoning into the research process.
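The decision step at these levels can be sketched as follows; the critical value shown is the conventional two-tailed .05 approximation for roughly 28 degrees of freedom, not a value taken from the program.

    /** Illustrative decision rule for the statistical levels of the game:
     *  a difference is claimed only if |t| exceeds a two-tailed critical value. */
    public class ChanceAsRivalTheory {

        // Approximate two-tailed critical value of t at alpha = .05 for df near 28;
        // a real analysis would look the value up for the exact df.
        static final double CRITICAL_T = 2.05;

        static String decide(double t) {
            if (Math.abs(t) >= CRITICAL_T) {
                return "Reject the chance explanation: treat the cells as differing.";
            }
            // Failing to reject is not proof of "no difference" -- replicate instead.
            return "Chance remains a plausible rival theory: replicate before deciding.";
        }

        public static void main(String[] args) {
            System.out.println("t = 2.60 -> " + decide(2.60));
            System.out.println("t = 1.10 -> " + decide(1.10));
        }
    }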
As one component of StatCenter, the Difference to In-
ference game is supported by several HTML lectures on
distribution theory, sampling distributions, hypothesis testing, t tests, and other common tests. So students have a rich
context for learning statistical reasoning when they play the
most difficult levels of the game. Consequently, at high-
difficulty levels, the game gives students practice inte-
grating statistical reasoning with deductiveand inductive
reasoning to make theoretical inferences. Both Cook and Campbell (1979) and Polya (1968) refer to this step in testing a scientific hypothesis as eliminating the plausible competing hypothesis of chance. Higher difficulty levels of the
ing hypothesis of chance. Higher difficulty levels of the
game facilitate formal class discussions of how chance
alone may account for the differences in two sets of data
and the importance of eliminating chance as a rival theory.
Final inference. At the end of a series of studies, stu-
dents must choose a theory. The game is structured, somewhat artificially, so that only one of the five candidate
theories will remain consistent with the simulated data
from all the possible comparisons of the mortality rates
in two adjacent cells. The other four theories will be
eliminated by the results of at least one possible research
study. To succeed in the game, the students must criti-
cally design and conduct a series of two-group research
projects whose data eliminate four of the theories. They
are to choose the theory that remains consistent with all
the known data. This inference is very different from in-
ferring that a theory has been proven. Often, students en-
ter a methods course with preconceptions inclining them
to want to prove theories true. The Difference to Infer-
ence game addresses this teaching need by giving stu-
dents repeated experience in eliminating theories that are
inconsistent with data and preferring theories that gain
plausibility through their consistency with known data.
Discussion
The program structure of Difference to Inference is
based on principles from logical and plausible reasoning
(Polya, 1968), statistical reasoning (Cook & Campbell,
1979; Polya, 1968), and epistemological frameworks
about the nature of information and pattern (Attneave,
1954; Bateson, 1972). Using these principles, the game
simulates empirical scientific theory testing. Students design stud-
ies whose data eliminate one or more theories until only
one theory remains that is consistent with the known data.
The use of visual patterns as theories has several ad-
vantages. Attneave (1954, p. 187) noted that there is a sim-
ilarity between the “abstraction of simple homogeneities
from a visual field” and the “induction of a highly gen-
eral scientific law from a mass of experimental data.”
Using this similarity, a computer can generate families
of competing but overlapping visual theories in ways that
make deductions of research designs challenging. This
allows students to gain repeated practice with the processes
of theory elimination across many games without bur-
dening the instructor with the need to construct endless
examples of competing theories. The simple visual na-
ture of the patterns also allows for the game structure to
teach the targeted principles (such as finding differences
that make a difference), using unambiguous materials.
Malloy (2000) has sketched a general framework,
based on procedural or nondeclarative memory (Squire,
Knowlton, & Musen, 1993), that hypothesizes that repet-
itive experience with computer games builds procedural
knowledge. The nature of that knowledge depends on the
logic of the computer game’s program. Even setting
aside the issue of violent content, most commercial games
have trivial logical structures, and the resulting proce-
dural learning will be equally trivial. But educators can
build games whose logic is based on principles valued
by various disciplines, such as those underlying the pro-
cess of scientific inquiry. Difference to Inference was designed to require repetitive experiences that are consistent with the logic of scientific inference and might therefore lead to procedural knowledge important to scientific thinking.
The advantage of visual patterns as theories leads di-
rectly to a primary limitation of the Difference to Infer-
ence game. Whereas most theories in the social sciences
are verbal and complex, sometimes even ambiguous, the
game uses simple, clear-cut visual patterns as candidate
theories. There are, obviously, questions of relevance and
generalizability. This limitation can be addressed in the
context of a course in which the instructor provides the
bridge to appropriate content to aid in the generalization
of thinking skills learned while playing the game. It is
important to integrate procedural and declarative learn-
ing. Although Difference to Inference explicitly empha-
sizes the learning of the research process, it is best used
in a context that provides the substantial declarative knowl-
edge support of a typical methods or statistics course.
One of the limitations of declarative knowledge is that it
is difficult to learn about processes declaratively without experiencing the process. Reading about how to ski typ-
ically provides more insights after a few runs down the
mountain than it does prior to such experience. In the
same way, Difference to Inference and other simulation software that gives practice in scientific thinking can give
students many runs through the structure of the research
experience to provide a foundation for subsequent de-
clarative knowledge in lectures and text. For example,
the author, for one, finds it much easier to lecture about
the difference between proving theories and falsifying
theories after, rather than before, students have had ex-
perience with the falsification process in Difference to
Inference. In short, Difference to Inference is not meant to
stand alone but to be used in a supportive declarative
context. In that way, students' repeated experience with
the logical and statistical reasoning processes involved in
theory falsification provides a concrete basis for declar-
ative knowledge statements about the philosophy of sci-
ence. The experience provided by Difference to Inference
can be generalized to verbal research questions based on
verbal theories by using StatCenter’s Virtual Lab applet
(Malloy & Jensen, 2000).
Difference to Inference is one example of a general
proposal (Malloy, 2000; Posner & Osgood, 1980): The
logical structure of computer programs can be designed
to correspond to the structure of culturally valued activ-
ities (e.g., empirical scientific research) so as to aid stu-
dents in developing internal thought structures for ac-
complishing those valued activities.
This project was supported by a Utah Higher Education Technology Initiative grant. The JAVA programming for the Difference to Inference game was designed and written by Gary Jensen with Matthew Graham. Gary Jensen also designed the Oracle database and its interaction with the JAVA applet. The Malloy (2000) paper cited in the references was based on the same JAVA applet but addressed different theoretical issues. Correspondence regarding this article should be addressed to T. E. Malloy, University of Utah, Department of Psychology, 380 S. 1520 E., Room 502, Salt Lake City, UT 84112-0251 (e-mail: malloy@psych.utah.edu).
REFERENCES
Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61, 183-193.
Bateson, G. (1972). Steps to an ecology of mind. New York: Ballantine.
Brown, M. F. (1999). Wildcat World: Simulation programs for teaching basic concepts in psychological science. Behavior Research Methods, Instruments, & Computers, 31, 14-18.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation. Chicago: Rand McNally.
Malloy, T. E. (2000). Teaching deductive, inductive, and inferential logic through interactive online computer simulation. Journal of Information Technology in Medicine [On-line], 3. Available: http://www.J-ITM.com
Malloy, T. E., & Jensen, G. C. (2000, November). Utah virtual lab: JAVA interactivity for teaching science and statistics online. Paper presented at the meeting of the Society for Computers in Psychology, New Orleans.
Martin, R. M., & Reese, A. C. (2000). Computer assisted instruction as a component of a comprehensive curriculum in medical ethics. Journal of Information Technology in Medicine [On-line], 3. Available: http://www.J-ITM.com
Polya, G. (1968). Patterns of plausible inference. Princeton, NJ: Princeton University Press.
Posner, M. I., & Osgood, G. W. (1980). Computers in the training of inquiry. Behavior Research Methods, Instruments, & Computers, 12, 87-95.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organization of memory. Annual Review of Psychology, 44, 453-495.
Washburn, D. A. (1999). Distinguishing interpretation from fact (DIFF): A computerized drill for methodology courses. Behavior Research Methods, Instruments, & Computers, 31, 3-6.
(Manuscript received November 13, 2000; revision accepted for publication March 11, 2001.)