Wildcat World: Simulation programs for teaching basic concepts in psychological science

MICHAEL F. BROWN
Villanova University, Villanova, Pennsylvania

Behavior Research Methods, Instruments, & Computers, 1999, 31 (1), 14-18

A series of computer programs is described that allows beginning psychology students to design, conduct, analyze, and interpret virtual (computer-simulated) psychological studies. This technique allows the instructor more control over the outcome of student experiments, increases the scope of experiments that can be done by students, decreases the amount of class time that must be devoted to conducting experiments, and eliminates concerns about student experimenters using human or animal subjects.

Copyright 1999 Psychonomic Society, Inc.
Two psychology professors teach courses in research
methods or experimental psychology. Among the goals of
their courses is to provide students with a deep under-
standing of the purposes, assumptions, advantages, and
disadvantages of correlational and experimental research.
Professor A does this by having students plan and carry
out their own psychological experiments. Although there
are clear pedagogical advantages to this heroic, tried-and-
true technique, Professor A is often distressed by the lack
of control that she has over the outcome of these experi-
ments. Often student learning is limited by the large num-
ber of experiments that “don’t work” and therefore illus-
trate only some of the concepts that they are intended to
illustrate. “Having students plan and conduct experiments
is a good thing,” thinks Professor A, “but it would be nice
to have a bit more control over the results of the experi-
ments.” Meanwhile, Professor B uses a different set of
teaching techniques. He lectures about the principles of
variation, covariation, experimental manipulation, and the
patterns of inferences underlying various types of psycho-
logical research. “My students are exposed to sophisti-
cated knowledge about research principles,” muses Pro-
fessor B, “but they have little or no hands-on experience
doing it.”
This parable illustrates a fundamental dilemma in the
teaching of the basic concepts of behavioral research. Psy-
chological science, like so many things, is probably best
learned by doing. But, as teachers, we want to make sure
that our students obtain data in these initial experiments
that will illustrate the concepts we want them to learn and
encourage them to do additional experiments. This paper
describes one attempt to solve this dilemma in a limited
domain. The underlying strategy is very simple: the use of
computer-based simulated research studies, which allow
the instructor to restrict or control the outcome of the stud-
ies performed by the students. Thus, although students
plan the details of the study (e.g., they decide what the re-
search question is and which variables need to be manip-
ulated in order to answer it), the outcome of the study is re-
stricted within boundaries determined by the instructor.
Studies can be carried out very quickly and do not require
the use of actual human or animal subjects (an advantage
in these days of tight restrictions related to the use of
subjects).
THE BASICS OF WILDCAT WORLD
Early versions of Wildcat World were written using Mi-
crosoft BASIC, but the present version was written using
Microsoft Visual Basic for Windows Professional Edition,
Version 3.0 (Microsoft, 1993). The basic premise of Wild-
cat World, from the students’ perspective, is that it in-
volves the study of human facial features. In each of the
three modules, features of cartoon human faces (shape of
the eyes, shape of the mouth, size of a laugh as represented
by the size of the mouth) are measured as dependent vari-
ables. These measures are taken either by judgments made
by the students or by “instruments” that are part of the
Wildcat World module. In the first module of Wildcat
World, two dependent measures are taken by student
judgments, judgments are compared to illustrate the con-
cepts involved in interrater reliability, and the two mea-
sures are compared to determine whether there is a corre-
lation between them. In the second module, students
design an experiment in which one independent variable is
manipulated in order to determine its effect on an auto-
mated measure of mouth size (laugh size). In the third
module, a “complex” experiment is designed in which two
independent variables are manipulated. The results ob-
tained in this experiment are constrained in such a way that
students obtain an interaction between the effects of the
two independent variables.
“Wildcat” refers to the mascot of Villanova University. I thank all the
students in Psychology 4050 (Research Methods in Psychology) during
the past decade who have inspired and supported the creation and de-
velopment of Wildcat World. A copy of the Wildcat World programs
may be obtained on the Internet at www.vill.edu/~mbrown/abstract.htm
or by sending a blank diskette to M.F.B. along with your request. These
programs are compatible with any personal computer running Windows
Version 3.1 or later. Correspondence should be addressed to M. F.
Brown, Department of Psychology, Villanova University, Villanova, PA
19085 (e-mail: mbrown@email.vill.edu).
WILDCAT WORLD I: MEASUREMENT,
INTERRATER RELIABILITY,
AND CORRELATION
Figure 1 shows a sample screen from Wildcat World I.
The caricature of the human face serves as the common
feature of Wildcat World. The face appears on each “trial”
of the “study,” with the eyes and mouth animated to slowly
open to a terminal size and shape, which varies from trial
to trial. The degree and nature of the variation in these
measured variables is determined by the instructor/pro-
grammer. I have often found it useful to maximize varia-
tion by simply using the BASIC RND function to produce
an evenly distributed value between −n and +n, which is
then added to the intended mean value of the variable.
Positive and negative correlations between the two vari-
ables can be produced by restricting the value of one vari-
able depending on the value of the other. However, when
students are to apply inferential statistics to their results,
values of the two variables are made to vary according to
a normal distribution, using the technique of Box and
Muller (1958).
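The published modules were written in BASIC/Visual Basic, so the following Python sketch is only an illustration of the data-generation scheme described above; the function names and default values are hypothetical:

```python
import math
import random

def uniform_noise(mean, n):
    # The RND-based technique: an evenly distributed value in
    # [mean - n, mean + n], produced by adding noise to the
    # intended mean value of the variable.
    return mean + random.uniform(-n, n)

def box_muller(mean=0.0, sd=1.0):
    # Normally distributed value via the Box and Muller (1958) transform.
    u1 = 1.0 - random.random()   # force u1 into (0, 1] so log() is defined
    u2 = random.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return mean + sd * z

def correlated_pair(rho, mean=50.0, sd=10.0):
    # Two normal deviates whose correlation is rho: the value of the
    # second variable is restricted by (depends on) the first.
    z1 = box_muller()
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * box_muller()
    return mean + sd * z1, mean + sd * z2
```

Setting `rho` positive, negative, or zero reproduces the three correlation conditions the instructor can program into Wildcat World I.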
Students control the pace of the study in that the pro-
gram advances to the next “trial” when the student uses
the computer mouse to click on the button labeled “next
face.” Each trial consists simply of exposure to a face, and
faces vary from trial to trial in terms of two features: the
shape of the eyes and the shape of the mouth. These vari-
ables are implemented in Visual Basic as the aspect ratio
of the circles that form the facial features. On each trial, a
pair (or more) of students independently rates the eye and
mouth shape on a 1–100 scale, using the scale anchors
shown on the left of the screen. Two activities form the basis
of the laboratory exercise I typically use with this mod-
ule. First, students develop and obtain a measure of inter-
observer reliability by comparing the ratings of different
student raters. Second, they plot and calculate the degree
of correlation between their measure of eye shape and
mouth shape. I have usually programmed Wildcat World I
so that there is a moderate positive correlation, a moder-
ate negative correlation, or no correlation between these
features. A code appears prior to the first trial that speci-
fies which of these three (randomly determined) condi-
tions obtains during that run of the program (if students
are instructed to record this code, it allows the instructor
to know the [approximate] correlation between the vari-
ables for that run of the program). The faces presented on
different trials can be described to students as represent-
ing different people (to see whether eye shape and mouth
shape are correlated across people) or as one person on
different occasions (to see if an individual’s mouth and
eyes covary in shape).
The intent of this program is to provide a mechanism for
efficiently demonstrating the concepts of variation, the
need for judgment in behavioral measures, the purpose of
measures of interobserver agreement, and correlation as a
measure of covariation among variables.
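Both laboratory activities reduce to correlating paired ratings: interobserver reliability correlates two raters' judgments of the same faces, and the second exercise correlates the eye-shape and mouth-shape measures. A minimal Python sketch (the rating values below are hypothetical, not taken from the program):

```python
def pearson_r(xs, ys):
    # Pearson correlation coefficient between two lists of ratings.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-100 ratings of the same ten faces by two student raters
rater_a = [12, 35, 50, 48, 77, 60, 21, 90, 66, 40]
rater_b = [15, 30, 55, 45, 80, 58, 25, 85, 70, 38]
print(f"interrater r = {pearson_r(rater_a, rater_b):.2f}")
```

A high value of r here would indicate good interobserver agreement; the same function applied to the eye-shape and mouth-shape measures estimates their covariation.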
WILDCAT WORLD II: EXPERIMENTAL
MANIPULATION
In the second module, Wildcat World is presented to
students as representing a research project carried out in a
comedy club. The face now represents patrons of the com-
edy club whose reactions to comedians on stage are mea-
sured. The dependent variable is the size of the laugh pro-
duced by the comedian in each patron, and this is measured
automatically by an instrument identified as a “mouthometer”
or “laughometer.”

Figure 1. Screen capture of the display during a trial of Wildcat World I. The button in the upper left corner of the display allows advance to the next face via mouse click. The ellipses on the left provide anchors for the rating scale to be applied to mouth/eye shape.

The student/experimenter is given
complete control over the identity of the comedian ap-
pearing on stage, selected from four possibilities, includ-
ing a no-comedian control condition. On each trial, the
level of the independent variable must be chosen (student
experimenters use the mouse to choose a level of come-
dian from the menu presented in the bottom left area of the
screen; see Figure 2). The trial is then activated by click-
ing on the “next trial” button in the top left area of the
screen. The resulting value of the dependent variable
(mouth/laugh size) is measured by the “mouthometer” in
the bottom right area of the screen. I have typically pro-
grammed this value to vary moderately around mean val-
ues that differ among levels of comedian, with some co-
medians producing mean values substantially greater than
that produced by the control condition and other comedi-
ans producing mean values that do not differ from the
mean produced by the control condition.
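The programmed data scheme for Wildcat World II can be sketched as follows; the specific condition names, means, and noise range are hypothetical stand-ins for values the instructor would choose:

```python
import random

# Hypothetical mean laugh sizes per comedian condition; some comedians
# produce means substantially greater than the control, others do not.
CONDITION_MEANS = {
    "no comedian": 10.0,   # control condition
    "comedian A": 40.0,    # clearly funnier than control
    "comedian B": 38.0,
    "comedian C": 11.0,    # does not differ from control
}

def run_trial(condition, noise=4.0):
    # Simulated mouthometer reading: the condition mean plus
    # moderate random variation around it.
    return CONDITION_MEANS[condition] + random.uniform(-noise, noise)
```

Because each trial is generated from its condition mean, the instructor knows in advance which comparisons the students' experiments can detect.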
Thus, the student experimenters decide what experi-
mental question their experiment is to address (e.g., com-
parison of the effect of two different comedians, determi-
nation of whether a particular comedian is at all funny).
This module can be presented as representing manipula-
tion of the independent variable either between subjects
or within subjects. Students have to determine the order in
which conditions are to be presented, considering issues
such as order effects (e.g., time of day, audience fatigue).
Although I have not typically programmed an order effect
into the functions producing the data, this can be easily
done. In either case, the possibility of order effects is in-
cluded in the discussion of the project when it is presented
as involving within-subjects manipulation. Thus, concepts
such as trial block randomization can be included in the
context of designing this Wildcat World experiment.
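Trial-block randomization of the kind discussed above can be sketched in a few lines of Python (the condition labels are illustrative):

```python
import random

def blocked_order(conditions, n_blocks):
    # Trial-block randomization: each block contains every condition
    # exactly once, in a freshly shuffled order, so that order effects
    # (e.g., audience fatigue) are distributed across conditions.
    order = []
    for _ in range(n_blocks):
        block = list(conditions)
        random.shuffle(block)
        order.extend(block)
    return order

conds = ["no comedian", "comedian A", "comedian B", "comedian C"]
schedule = blocked_order(conds, 5)   # 5 blocks = 20 trials
```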
WILDCAT WORLD III: COMPLEX
EXPERIMENTS AND INTERACTIONS
The third module of Wildcat World expands on the con-
cepts presented in the second module. Again, the module
is presented as representing an experiment in which the
experimenter has complete control over the identity of the
comedian appearing on stage in a comedy club. In addi-
tion, the student experimenter has complete control over
the number of companions with whom the subject is sit-
ting. As shown in Figure 3, this second independent vari-
able is represented on the screen as a schematic of a round
table, with a number of squares representing occupied chairs
at the table. The subject is sitting at a table alone, sitting
with one companion (two persons at the table), or sitting
with five companions (six at the table). Thus, on each trial
of the virtual experiment, the student experimenter selects
a level for each of the two independent variables (identity
of the comedian on stage and number of people at the sub-
ject’s table). After invoking the trial (using the “next trial”
button), the size of the mouth/laugh produced in the sub-
ject is given by the mouthometer.
Typically, I have programmed this module so that the
effects of independent variables tend to produce an inter-
action. This is accomplished by programming the value of
the dependent variable to vary around a mean that is a
multiplicative function of the values of the two indepen-
dent variables. Students have typically been encouraged
to design 2 × 2 experiments, at least initially. Thus, they
usually obtain a marked interaction between the effects of
social condition and comedian identity. Discussion of the
meaning and possible interpretation of the interactions obtained provides a framework for understanding the importance of interaction in designing and interpreting psychological experiments.

Figure 2. Screen capture of the display during a trial of Wildcat World II. The button on the left side of the display allows advance to the next trial via mouse click. The menu on the lower left side allows choice of the level of the independent variable (comedian) via mouse click. The box in the lower right corner of the screen displays the resulting value of the dependent variable (mouth/laugh size).

For example, students discover
that the form of the interaction between the effects of the
two variables can be specified only by examining a larger
number of experimental conditions.
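The multiplicative scheme described above can be sketched in Python; the effect sizes and noise range here are hypothetical. Because cell means are products of the two factor values, the comedian effect is larger at the larger table, so the difference of differences (the interaction) is nonzero:

```python
import random

def simulate_cell(comedian_effect, companions, n=50, noise=2.0):
    # Mean of n simulated laughs in one cell of the design; the cell
    # mean is a multiplicative function of the two independent variables.
    mu = comedian_effect * companions
    return sum(mu + random.uniform(-noise, noise) for _ in range(n)) / n

# Hypothetical 2 x 2 design: unfunny (2.0) vs. funny (8.0) comedian,
# subject alone (1 at table) vs. with five companions (6 at table)
means = {(c, k): simulate_cell(c, k) for c in (2.0, 8.0) for k in (1, 6)}

# Interaction = difference of differences across the two table sizes
interaction = ((means[(8.0, 6)] - means[(2.0, 6)])
               - (means[(8.0, 1)] - means[(2.0, 1)]))
```

With these illustrative values the expected cell means are 2, 12, 8, and 48, so the expected interaction term is (48 − 12) − (8 − 2) = 30, which is what makes the crossover pattern so visible to students.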
COMMENTARY ON WILDCAT WORLD
Computer simulations used in the teaching of psychol-
ogy have been described for a number of specialized con-
tent areas, such as operant conditioning (Graf, 1995; Gra-
ham, Alloway, & Krames, 1994; Shimoff & Catania, 1995).
Computer simulations have also been described that use
stochastically determined events to illustrate statistical
principles (see, e.g., Goldstein & Strube, 1995; Strube &
Goldstein, 1995). There is some empirical evidence that
such programs are effective in helping to communicate such
concepts to students (Weir, McManus, & Kiely, 1991).
Wildcat World is based on using these techniques, in a
context that avoids any particular area of psychological
content, to highlight the basic concepts underlying psy-
chological science. It is clear to students that the specific
variables and measures involved in Wildcat World are ir-
relevant and that the points to be learned from the exer-
cises have to do with general methodological principles.
Thus, the often difficult-to-understand concepts of varia-
tion, covariation, experimental control, interaction, and so
on are emphasized rather than any specific content area of
psychology. Experiments can be carried out very quickly,
and the results of the experiments can be controlled or re-
stricted by the instructor. Finally, students can design, con-
duct, and interpret psychological studies without using
human or animal subjects.
Reaction to Wildcat World by Villanova University stu-
dents has been positive. A recent survey of students who had
used Wildcat World during the immediately preceding se-
mester asked them to rate each of the three modules in terms
of “the extent to which it effectively illustrated . . .” “the
use of ratings, measures of interrater reliability, and the
logic of correlational research” (Wildcat World I); “the
basic features of a simple experiment” (Wildcat World II);
or “complex experiments (i.e., experiments with more
than one independent variable)” (Wildcat World III). Re-
sponses were provided anonymously on a 5-point Likert-
type scale (1 = poorly, 3 = moderately, and 5 = very well).
The mean responses (n = 13)¹ were 4.1, 4.2, and 3.6, respectively.
These simulations of basic research methodologies are
a valuable technique for providing hands-on research ex-
perience that avoids the disadvantages and complications
of having students use live subjects during their initial ex-
periences as experimenters. Advantages of this approach
include the ability of the instructor to control the outcome
of student projects, the relatively small amount of time re-
quired for data collection, and the explicit emphasis on the
methodological issues rather than particular content areas
represented by experimental paradigms that might other-
wise be used. These advantages make Wildcat World, or
software based on the same principles as Wildcat World,
a valuable tool for a research methods or experimental
psychology course.
REFERENCES
Box, G. E. P., & Muller, M. E. (1958). A note on the generation of ran-
dom normal deviates. Annals of Mathematical Statistics, 29, 610-611.
Figure 3. Screen capture of the display during a trial of Wildcat World III. The button in the lower left corner of the display allows advance to the next trial via mouse click. The menus in the upper left corner allow choice of the levels of the two independent variables (comedian and number of companions) via mouse clicks; the chosen values are displayed in the upper right corner. The box in the lower right corner of the screen displays the resulting value of the dependent variable (mouth/laugh size).
Goldstein, M. D., & Strube, M. J. (1995). Understanding correla-
tions: Two computer exercises. Teaching of Psychology, 22, 205-206.
Graf, S. A. (1995). Three nice labs, no real rats: A review of three op-
erant laboratory simulations. Behavior Analyst, 18, 301-306.
Graham, J., Alloway, T., & Krames, L. (1994). Sniffy, the virtual rat:
Simulated operant conditioning. Behavior Research Methods, Instru-
ments, & Computers, 26, 134-141.
Microsoft Visual Basic 3.0 [Computer software] (1993). Redmond,
WA: Microsoft.
Shimoff, E., & Catania, A. C. (1995). Using computers to teach be-
havior analysis. Behavior Analyst, 18, 307-316.
Strube, M. J., & Goldstein, M. D. (1995). A computer program that
demonstrates the difference between main effects and interactions.
Teaching of Psychology, 22, 207-208.
Weir, C. G., McManus, I. C., & Kiely, B. (1991). Evaluation of the
teaching of statistical concepts by interactive experience with Monte
Carlo simulations. British Journal of Educational Psychology, 61,
240-247.
NOTE
1. Surveys were provided to students in a class of 25 who had taken
Research Methods in Psychology the previous semester. Students were
asked to complete the survey following that class and return it to a fac-
ulty mailbox. Thus, the return rate was 52%.
(Manuscript received August 15, 1997;
revision accepted for publication May 28, 1998.)