Difference to Inference: Teaching logical and statistical reasoning through on-line interactivity

THOMAS E. MALLOY
University of Utah, Salt Lake City, Utah

Behavior Research Methods, Instruments, & Computers, 2001, 33 (2), 270-273

Difference to Inference is an on-line JAVA program that simulates theory testing and falsification through research design and data collection in a game format. The program, based on cognitive and epistemological principles, is designed to support learning of the thinking skills underlying deductive and inductive logic and statistical reasoning. Difference to Inference has database connectivity so that game scores can be counted as part of course grades.

Author note: This project was supported by a Utah Higher Education Technology Initiative grant. The JAVA programming for the Difference to Inference game was designed and written by Gary Jensen with Matthew Graham. Gary Jensen also designed the Oracle database and its interaction with the JAVA applet. The Malloy (2000) paper cited in the references was based on the same JAVA applet but addressed different theoretical issues. Correspondence regarding this article should be addressed to T. E. Malloy, University of Utah, Department of Psychology, 380 S. 1520 E., Room 502, Salt Lake City, UT 84112-0251 (e-mail: malloy@psych.utah.edu).
Emphasizing the active nature of information processing, Posner and Osgood (1980) proposed that computers be used to train inquiry in a way that “the teaching environment . . . arise[s] in close relationship to the inquiry activities of the faculty who shape it” (p. 95). Recent examples of educators designing computer-assisted learning contexts in which students actively discover knowledge by thinking like the practitioners of a discipline include projects as diverse as designing research studies (Brown, 1999) and solving ethical dilemmas in medicine through case studies (Martin & Reese, 2000). Washburn (1999), focusing on one important mental skill of scientists, uses an interactive program to teach students, who typically accept published inferences as fact, to distinguish between factual statements and inferences in research reports. The Difference to Inference game engages students in one fundamental activity of scientific inquiry—the use of deductive logic, inductive logic, and statistical reasoning in the falsification of theories.
Game Procedure
Accessing Difference to Inference. The game procedure is laid out in detail in StatCenter at the University of Utah (http://www.psych.utah.edu/stat/introstats/). It is free and freely available to instructors and students. A visitor may log in as a guest. Scores earned by one guest overwrite scores earned by other guests. Teachers wanting to keep individual scores for students may contact the author to make an arrangement for doing so.
On the main menu, under “Work and Learn,” select “Interactive Learning.” Scroll down to “Difference to Inference” and click. There will be a choice of “stories” that give meaning to game activities. Different instructors prefer to frame the game with different stories to reflect the content of their courses. For this discussion, “Hurricane Damage” will be used as a frame. Selecting “Hurricane Damage” will show a menu with historical notes about hurricane damage and a “Start the Game” option. The game interface comes with an extensive tutorial and with a link to an on-line lecture integrating game play into statistical theory. Space does not allow for more than an overview of the game here.
Game story. Students are asked to plan a strategic series of two-group experiments whose data will discriminate among five candidate theories. They are shown five maps that describe the pattern of deforestation in five provinces of a tropical country prior to a hurricane. An example of a set of five patterns of deforestation is shown on the far right of Figure 1. The dark gray cells in each of the five maps represent an area of deforestation; the light gray cells represent areas that remain forested. (On a computer screen, the dark gray cells are red, and the light gray cells are green.) The records indicating which pattern of deforestation goes with which province have been lost during the hurricane. The students are to suppose that they are in one of the provinces and have been given a grant to determine which of the five maps applies to that province. It is known that mortality because of mud slides is higher in areas of land that are deforested than in areas protected by rain forest. The students are to use hurricane mortality data to determine which deforestation map applies to the province in which they are located.
The game has five levels of difficulty. Difficulty level depends on the effect size of deforestation: Does deforestation produce increases in mortality that are enormous and require no statistical analyses (easiest level), or are they more subtle, perhaps so subtle that even powerful inferential statistics such as t might be insufficient and replication is required (hardest level)?
Interface. Figure 1 shows the game interface. Five candidate theories are in a column on the right. The family of candidate theories is randomly generated by a schema plus exception rule for each student each time the game is played. The white grid in the center of the interface is a workspace that allows students to use horizontal or vertical selection tools to choose to see mortality rates for any two adjacent cells.
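The article identifies the generating rule only as “schema plus exception.” The sketch below is one plausible reading of that idea, offered as an assumption rather than a description of the actual applet: a shared base pattern (the schema) is generated at random, and each of the five candidate theories copies it and then flips a few cells (the exceptions), so the candidates overlap heavily but still disagree somewhere. Grid size, the number of exceptions, and all class and method names are illustrative.

    import java.util.Random;

    // Hypothetical sketch: build five candidate "theories" as boolean
    // deforestation grids (true = deforested) that share a schema but
    // each depart from it by a few flipped cells.
    public class TheoryFamily {
        static final int ROWS = 8, COLS = 8, THEORIES = 5, EXCEPTIONS = 4;

        public static boolean[][][] generate(Random rng) {
            // Schema: one random base pattern of deforested cells.
            boolean[][] schema = new boolean[ROWS][COLS];
            for (int r = 0; r < ROWS; r++)
                for (int c = 0; c < COLS; c++)
                    schema[r][c] = rng.nextBoolean();

            // Exceptions: each theory copies the schema and flips a few cells,
            // so the five candidates are competing but overlapping patterns.
            boolean[][][] family = new boolean[THEORIES][ROWS][COLS];
            for (int t = 0; t < THEORIES; t++) {
                for (int r = 0; r < ROWS; r++)
                    family[t][r] = schema[r].clone();
                for (int e = 0; e < EXCEPTIONS; e++) {
                    int r = rng.nextInt(ROWS), c = rng.nextInt(COLS);
                    family[t][r][c] = !family[t][r][c];
                }
            }
            return family;
        }

        public static void main(String[] args) {
            System.out.println(generate(new Random()).length + " candidate maps generated");
        }
    }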
Buttons next to each candidate theory allow the outline of the theory to be projected onto the white workspace. Figure 1 shows the outline of the second-from-the-top theory projected onto the workspace. Allowing a theory to be outlined on the workspace keeps the task from primarily requiring visual-spatial skills and lets players focus on logic. The example in Figure 1 shows a case in which a player has used a tool to select two cells (highlighted in gray) on the far right of the work area. Notice that, in the column of theories, the top and bottom theories show both of these cells to be forested; therefore, those two theories predict that both cells would have low mortality rates (no difference). In contrast, the middle three theories in the column show that, of the two cells selected by the player, the one on the left is deforested, and the one on the right is forested. Therefore, the middle three theories predict that there should be high mortality rates in the left cell and low mortality in the right cell (difference). These different predictions are differences that make a difference: Depending on the data, either three of the theories or two of the theories will be falsified.
In short, players can query mortality rates from any two adjacent cells. Data resulting from the query appear below the white workspace. The data are generated pseudorandomly from Gaussian distributions whose parameters are set when the difficulty level (deforestation effect size) is set; thus, each data query results in a unique data set. Buttons allow the students to ask for statistical analyses, such as group means, standard deviations, standard errors (SEMs), and a t test.
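The article specifies only that each query draws its data pseudo-randomly from Gaussian distributions whose parameters follow from the chosen difficulty level (the deforestation effect size). The sketch below is a minimal illustration of that idea under assumed values; the baseline mortality, standard deviation, sample size, and the way the mean separation shrinks with difficulty are placeholders, not the applet's actual settings.

    import java.util.Random;

    // Hedged sketch of one data query: mortality samples for two adjacent
    // cells, drawn from Gaussians whose mean separation shrinks as the
    // difficulty level rises, plus the descriptive statistics the game offers.
    public class MortalityQuery {
        static double[] sample(double mean, double sd, int n, Random rng) {
            double[] x = new double[n];
            for (int i = 0; i < n; i++) x[i] = mean + sd * rng.nextGaussian();
            return x;
        }
        static double mean(double[] x) {
            double s = 0; for (double v : x) s += v; return s / x.length;
        }
        static double sd(double[] x) {
            double m = mean(x), ss = 0;
            for (double v : x) ss += (v - m) * (v - m);
            return Math.sqrt(ss / (x.length - 1));
        }
        static double sem(double[] x) { return sd(x) / Math.sqrt(x.length); }

        public static void main(String[] args) {
            Random rng = new Random();
            int difficulty = 3;                    // 1 = easiest ... 5 = hardest
            double effect = 50.0 / difficulty;     // assumed: smaller mean gap at harder levels
            double baseline = 100.0, sd = 20.0;    // assumed population values
            int n = 10;                            // assumed observations per cell
            double[] deforested = sample(baseline + effect, sd, n, rng);
            double[] forested = sample(baseline, sd, n, rng);
            System.out.printf("deforested M = %.1f (SEM %.1f); forested M = %.1f (SEM %.1f)%n",
                    mean(deforested), sem(deforested), mean(forested), sem(forested));
        }
    }

Because each call draws fresh pseudo-random samples, two queries of the same pair of cells return different data sets, which is what makes replication informative at the harder levels.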
Scoring. The students start the game with a small grant. Collecting and replicating data costs money, as do statistical analyses performed on collected data. After conducting a series of two-group studies, the students select one theory (map) as the best description of the pattern of deforestation on the basis of the hurricane mortality data. The program is structured so that only one of the five candidate theories is consistent with the statistical equations that generate the data sets. The other four theories will be inconsistent with the data—at least, they will be inconsistent with the data if the right research questions are asked. The students' choice of theory is recorded on an Oracle database and affects their course grades.

Figure 1. Interactive graphical user interface for the Difference to Inference game. Students must discover which of the five patterns in a column on the right is hidden behind the white grid in the center. The data (shown at the bottom) have been collected from selected cells (shown in gray).

Submission of a theory that is inconsistent with the data costs a great deal of grant funds. After submitting a theoretical conclusion, the students can play the game again with five new (randomly generated) theories. This process continues until the students earn large enough grants to get full credit for that particular level of the game. They then begin on the next level, in which treatment effects are more subtle and statistical reasoning more important.
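The article does not publish the game's dollar amounts; the amounts in the sketch below are placeholders meant only to make the budget logic concrete: data collection and statistical analyses draw down the grant, submitting a theory that the data have falsified costs far more, and a level is passed once enough credit has accumulated.

    // Hypothetical grant-budget bookkeeping for one level of the game.
    // All costs and thresholds are illustrative placeholders, not the
    // values used by the actual applet.
    public class GrantBudget {
        private double funds;
        GrantBudget(double initialGrant) { funds = initialGrant; }

        void chargeDataQuery()       { funds -= 50; }   // collect or replicate data
        void chargeAnalysis()        { funds -= 25; }   // means, SDs, SEMs, t test
        void chargeWrongSubmission() { funds -= 500; }  // theory inconsistent with the data
        boolean levelPassed(double creditThreshold) { return funds >= creditThreshold; }

        public static void main(String[] args) {
            GrantBudget grant = new GrantBudget(1000);
            grant.chargeDataQuery();
            grant.chargeAnalysis();
            System.out.println("Level passed: " + grant.levelPassed(900));
        }
    }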
Game Design: Difference to Inference
Deducing differences that make a theoretical difference. Students must deduce a specific research question on the basis of a critical examination of the five deforestation patterns. If they are to eliminate theories on the basis of mortality data, first they must seek the edges of the deforestation patterns where data collected in a two-group study will show differences in mortality. But not all such differences are theoretically important. Comparative examination of the edges of the five theories shows that all five make the same prediction (a difference in mortality) for many comparisons. A research study that finds such a difference cannot discriminate among theories. So students must decide where to set up a two-group comparison whose data might eliminate some theories, but not others. They must not just find differences between their groups, but they must find those “differences that make a difference” in eliminating theories (Bateson, 1972, p. 452). This can be done by critically examining the five maps on the right and deducing where the theories make different predictions. The two selected cells (shown in gray) in Figure 1 target a difference in mortality that will make a difference in eliminating theories.
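To make the deduction explicit: for any pair of adjacent cells, a candidate theory predicts a mortality difference exactly when it marks one cell as deforested and the other as forested, and a comparison is a “difference that makes a difference” only when the theories disagree about that prediction. The sketch below assumes the hypothetical boolean-grid representation used in the earlier sketches; it illustrates the reasoning and is not code from the applet.

    import java.util.ArrayList;
    import java.util.List;

    // Find adjacent cell pairs for which the candidate theories disagree:
    // some predict a mortality difference and others predict none.
    public class DiagnosticPairs {
        // A theory predicts a difference iff exactly one of the two cells is deforested.
        static boolean predictsDifference(boolean[][] theory, int r1, int c1, int r2, int c2) {
            return theory[r1][c1] != theory[r2][c2];
        }

        static List<int[]> find(boolean[][][] family) {
            List<int[]> pairs = new ArrayList<>();
            int rows = family[0].length, cols = family[0][0].length;
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    int[][] neighbours = { { r, c + 1 }, { r + 1, c } };  // right and down
                    for (int[] nb : neighbours) {
                        if (nb[0] >= rows || nb[1] >= cols) continue;
                        boolean someDiff = false, someSame = false;
                        for (boolean[][] theory : family) {
                            if (predictsDifference(theory, r, c, nb[0], nb[1])) someDiff = true;
                            else someSame = true;
                        }
                        // Diagnostic only if the prediction splits the theories.
                        if (someDiff && someSame) pairs.add(new int[] { r, c, nb[0], nb[1] });
                    }
                }
            }
            return pairs;
        }

        public static void main(String[] args) {
            boolean[][] t1 = { { true, false }, { false, false } };
            boolean[][] t2 = { { false, false }, { false, false } };
            System.out.println(find(new boolean[][][] { t1, t2 }).size() + " diagnostic pairs");
        }
    }

A comparison on which every theory predicts the same outcome can never eliminate anything, no matter how the data come out; only pairs of the kind returned here are worth a share of the grant.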
Deduction and falsification. The data obtained from research studies may or may not be inconsistent with each theory's prediction. If a theory's prediction is shown not to be true, the theory is falsified through deductive logic. In the Difference to Inference game, the data from a well-designed study can be used to falsify one or more of the theories.
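Continuing the same hypothetical representation, the deductive step itself is small: once the data for a chosen pair have been judged to show (or not show) a mortality difference, every surviving theory whose prediction contradicts that judgment is falsified and dropped. The method names and the reduction of the data to a yes/no judgment are assumptions for illustration.

    import java.util.ArrayList;
    import java.util.List;

    // Deductive falsification: keep only the theories whose prediction for
    // the queried cell pair matches the observed outcome.
    public class Falsification {
        static List<boolean[][]> eliminate(List<boolean[][]> surviving,
                                           int r1, int c1, int r2, int c2,
                                           boolean observedDifference) {
            List<boolean[][]> kept = new ArrayList<>();
            for (boolean[][] theory : surviving) {
                boolean predictedDifference = theory[r1][c1] != theory[r2][c2];
                if (predictedDifference == observedDifference) kept.add(theory);
            }
            return kept;  // theories still consistent with all data so far
        }
    }

A theory that survives the comparison is not thereby proven; as the next section notes, a verified prediction only makes it more plausible.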
The fundamental inductive pattern. In contrast, data that are consistent with a theory's prediction do not prove the theory. Yet, the empirical scientific method can be persuasive when theories accurately predict data, especially across a series of studies. Polya (1968) develops the important distinction between deductive falsification and inductive plausibility. When the prediction of a theory is found to be false, the theory is proven to be false. But when the prediction of a theory is verified, the theory is not proven to be true; rather, it becomes more plausible. Polya describes inductive patterns by which theories, although not proven, become more and more plausible. The fundamental inductive pattern is the verification of a consequence of a theory. When a prediction of a theory is found to be true, the theory becomes more plausible. Other inductive patterns that lead to increased plausibility include successive verifications of a theory's predictions and the elimination of rival theories (Polya, 1968, p. 26). The Difference to Inference game gives students the opportunity to learn about induction and plausibility. They gain repeated experience with a theory that, although not proven, becomes more plausible, because its predictions are always verified, whereas rival theories are eliminated. Such experience provides a basis for discussions of the distinction between deductive falsification and inductive plausibility in class.
Statistical reasoning. Up to this point, the game has been discussed as if the data were clear cut: Either the data from the two targeted cells showed a difference in mortality or not, and this difference (or lack of it) either was consistent with a particular theory's prediction or it was not. The emphasis has been on the use of deductive and inductive logic. But at higher levels of game difficulty, the treatment effects are small, and the data obtained for the two targeted cells can show a great deal of overlap. It may not be at all obvious whether or not there is a difference in mortality in the two cells. At these levels of difficulty, the game requires statistical reasoning, including null-hypothesis-testing logic and procedures such as the t test. This allows students, after they have learned about the basic logic of theory falsification, to learn to integrate statistical reasoning into the research process.
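The applet's internal test routine is not described in the article; the sketch below is a standard pooled-variance two-sample t statistic, shown as one conventional way to formalize whether two overlapping samples really differ. The sample values and the critical value (two-tailed alpha = .05 with 18 degrees of freedom) are supplied for illustration.

    // Minimal pooled two-sample t statistic for comparing mortality in the
    // two selected cells at the harder difficulty levels.
    public class TwoSampleT {
        static double mean(double[] x) {
            double s = 0; for (double v : x) s += v; return s / x.length;
        }
        static double var(double[] x) {
            double m = mean(x), ss = 0;
            for (double v : x) ss += (v - m) * (v - m);
            return ss / (x.length - 1);
        }
        static double t(double[] a, double[] b) {
            double pooled = ((a.length - 1) * var(a) + (b.length - 1) * var(b))
                            / (a.length + b.length - 2);
            return (mean(a) - mean(b)) / Math.sqrt(pooled * (1.0 / a.length + 1.0 / b.length));
        }

        public static void main(String[] args) {
            double[] left  = { 112, 98, 105, 120, 101, 117, 109, 95, 114, 103 };  // possibly deforested cell
            double[] right = {  96, 88, 101,  92, 104,  99,  90, 97,  93, 100 };  // possibly forested cell
            double tObs = t(left, right);
            double tCrit = 2.101;  // two-tailed critical value, alpha = .05, df = 18
            System.out.printf("t(18) = %.2f; reject \"no difference\": %b%n",
                    tObs, Math.abs(tObs) > tCrit);
        }
    }

If the null hypothesis of no difference cannot be rejected, chance remains a plausible rival account, and the comparison falsifies nothing until it is replicated or run with more power.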
As one component of StatCenter, the Difference to Inference game is supported by several HTML lectures on distribution theory, sampling distributions, hypothesis testing, t tests, and other common tests. So students have a rich context for learning statistical reasoning when they play the most difficult levels of the game. Consequently, at high-difficulty levels, the game gives students practice integrating statistical reasoning with deductive and inductive reasoning to make theoretical inferences. Both Cook and Campbell (1979) and Polya (1968) refer to this step in testing a scientific hypothesis as eliminating the plausible competing hypothesis of chance. Higher difficulty levels of the game facilitate formal class discussions of how chance alone may account for the differences in two sets of data and the importance of eliminating chance as a rival theory.
Final inference. At the end of a series of studies, students must choose a theory. The game is structured, somewhat artificially, so that only one of the five candidate theories will remain consistent with the simulated data from all the possible comparisons of the mortality rates in two adjacent cells. The other four theories will be eliminated by the results of at least one possible research study. To succeed in the game, the students must critically design and conduct a series of two-group research projects whose data eliminate four of the theories. They are to choose the theory that remains consistent with all the known data. This inference is very different from inferring that a theory has been proven. Often, students enter a methods course with preconceptions inclining them to want to prove theories true. The Difference to Inference game addresses this teaching need by giving students repeated experience in eliminating theories that are inconsistent with data and preferring theories that gain plausibility through their consistency with known data.
Discussion
The program structure of Difference to Inference is based on principles from logical and plausible reasoning (Polya, 1968), statistical reasoning (Cook & Campbell, 1979; Polya, 1968), and epistemological frameworks about the nature of information and pattern (Attneave, 1954; Bateson, 1972). Using these principles, the game simulates empirical scientific theory testing. Students design studies whose data eliminate one or more theories until only one theory remains that is consistent with the known data.
The use of visual patterns as theories has several advantages. Attneave (1954, p. 187) noted that there is a similarity between the “abstraction of simple homogeneities from a visual field” and the “induction of a highly general scientific law from a mass of experimental data.” Using this similarity, a computer can generate families of competing but overlapping visual theories in ways that make deductions of research designs challenging. This allows students to gain repeated practice with the processes of theory elimination across many games without burdening the instructor with the need to construct endless examples of competing theories. The simple visual nature of the patterns also allows for the game structure to teach the targeted principles (such as finding differences that make a difference), using unambiguous materials.
Malloy (2000) has sketched a general framework, based on procedural or nondeclarative memory (Squire, Knowlton, & Musen, 1993), that hypothesizes that repetitive experience with computer games builds procedural knowledge. The nature of that knowledge depends on the logic of the computer game's program. Even setting aside the issue of violent content, most commercial games have trivial logical structures, and the resulting procedural learning will be equally trivial. But educators can build games whose logic is based on principles valued by various disciplines, such as those underlying the process of scientific inquiry. Difference to Inference was designed to require repetitive experience consistent with the logic of scientific inference and might therefore lead to procedural knowledge important to scientific thinking.
The advantage of visual patterns as theories leads directly to a primary limitation of the Difference to Inference game. Whereas most theories in the social sciences are verbal and complex, sometimes even ambiguous, the game uses simple, clear-cut visual patterns as candidate theories. There are, obviously, questions of relevance and generalizability. This limitation can be addressed in the context of a course in which the instructor provides the bridge to appropriate content to aid in the generalization of thinking skills learned while playing the game. It is important to integrate procedural and declarative learning. Although Difference to Inference explicitly emphasizes the learning of the research process, it is best used in a context that provides the substantial declarative knowledge support of a typical methods or statistics course.
One of the limitations of declarative knowledge is that it is difficult to learn about processes declaratively without experiencing the process. Reading about how to ski typically provides more insights after a few runs down the mountain than it does prior to such experience. In the same way, Difference to Inference and other simulation software that give practice in scientific thinking can give students many runs through the structure of the research experience to provide a foundation for subsequent declarative knowledge in lectures and text. For example, the author finds it much easier to lecture about the difference between proving theories and falsifying theories after, rather than before, students have had experience with the falsification process in Difference to Inference. In short, Difference to Inference is not meant to stand alone but to be used in a supportive declarative context. In that way, students' repeated experience with the logical and statistical reasoning processes involved in theory falsification provides a concrete basis for declarative knowledge statements about the philosophy of science. The experience provided by Difference to Inference can be generalized to verbal research questions based on verbal theories by using StatCenter's Virtual Lab applet (Malloy & Jensen, 2000).
Difference to Inference is one example of a general proposal (Malloy, 2000; Posner & Osgood, 1980): The logical structure of computer programs can be designed to correspond to the structure of culturally valued activities (e.g., empirical scientific research) so as to aid students in developing internal thought structures for accomplishing those valued activities.
REFERENCES
Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61, 183-193.
Bateson, G. (1972). Steps to an ecology of mind. New York: Ballantine.
Brown, M. F. (1999). Wildcat World: Simulation programs for teaching basic concepts in psychological science. Behavior Research Methods, Instruments, & Computers, 31, 14-18.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation. Chicago: Rand McNally.
Malloy, T. E. (2000). Teaching deductive, inductive, and inferential logic through interactive online computer simulation. Journal of Information Technology in Medicine [On-line], 3. Available: http://www.J-ITM.com
Malloy, T. E., & Jensen, G. C. (2000, November). Utah virtual lab: JAVA interactivity for teaching science and statistics online. Paper presented at the meeting of the Society for Computers in Psychology, New Orleans.
Martin, R. M., & Reese, A. C. (2000). Computer assisted instruction as a component of a comprehensive curriculum in medical ethics. Journal of Information Technology in Medicine [On-line], 3. Available: http://www.J-ITM.com
Polya, G. (1968). Patterns of plausible inference. Princeton, NJ: Princeton University Press.
Posner, M. I., & Osgood, G. W. (1980). Computers in the training of inquiry. Behavior Research Methods, Instruments, & Computers, 12, 87-95.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organization of memory. Annual Review of Psychology, 44, 453-495.
Washburn, D. A. (1999). Distinguishing interpretation from fact (DIFF): A computerized drill for methodology courses. Behavior Research Methods, Instruments, & Computers, 31, 3-6.
(Manuscript received November 13, 2000; revision accepted for publication March 11, 2001.)