Journal of Information Technology and Applications
Vol. 2, No. 2, pp. 69-79, 2007
* Corresponding author.
The Effect of Computer Simulation Instruction on Student Learning: A
Meta-analysis of Studies in Taiwan
Yuen-kuang Liao*
Department of Education, National Taiwan Normal University
yliao@ntnu.edu.tw
Yu-wen Chen
Teacher, Ching Shin Elementary School, Taipei
e0141@chjhs.tp.edu.tw
Abstract
A meta-analysis was performed to synthesize existing research comparing the effects of computer simulation
instruction (CSI) versus traditional instruction (TI) on students’ achievement in Taiwan. Twenty-nine studies were
located from four sources, and their quantitative data were transformed into effect sizes (ES). The overall grand
mean of the study-weighted ES for all 29 studies was 0.54. The results suggest that CSI is more effective than TI in
Taiwan. However, only 1 of the 17 coded variables (reliability of measure) had a significant main effect on mean ES.
The results of this study suggest that CSI clearly has a more positive effect on students’ learning than TI. The
results also shed light on the debate between Clark and Kozma regarding learning from the media.
Keywords: computer simulation, virtual reality, CAI, CAL, achievement, meta-analysis
1. Introduction
Computer technology has been widely used in
education for more than forty years. More specifically,
computer simulation as an instructional technology
has been commonly used in education [1], [2]. A study
by Heinich, Molenda & Russell [3] reported that
computer simulations were extensively used for job
training in 95% of the Colleges of Management in the
USA. In the colleges using CSI, 1/6 of the faculty and
1/4 of the total instructional time were given to
computer-simulation-related activities. Faria [4] also
found that more than 1700 business schools in the
USA used computer simulation software for
instruction; more than 200 different types of software
were used for this purpose.
Many potential benefits have been claimed for the
use of computer simulation in teaching. For instance,
Huppert, Yaakobi, & Lazarowitz [5] have noted that
“In computer simulations, students have
opportunities to receive supplemental
contact with the variables tested in real
experiences or dangerous ones. Students
can be active during the simulated
experiments by identifying the study
problem, writing in their notebooks their
hypotheses, planning and performing the
simulated experiments, gathering results,
collecting data in their notebooks, plotting
these data back in the computer, and using
the data for drawing tables and graphs.”
(p.232)
Rivers and Vockell [6] also found in their study
that computer simulations enhanced students’ active
involvement in the learning process, and facilitated
their practice and mastery of concepts and principles;
clearly computer simulation helped students to meet
their learning objectives or goals. Michael [7] pointed
out that simulation programs such as Electronic
Workbench, LegoCAD, and Car Builder are helping
students learn about events, processes, and activities
that either replicate or mimic the real world.
According to Michael [7], computer simulation can
afford learners numerous advantages. For example,
computer simulations can (1) provide the students
with the opportunity to engage in activities that may
otherwise be unattainable, (2) enhance academic
performance and the learning achievement levels of
students, and (3) be equally as effective as real-life
hands-on laboratory experiences.
Chou [8] and Serpell [9] also noted the
significantly greater effectiveness of computer
simulation instruction as compared to traditional
instruction. Slack & Stewart [10], Johnson & Stewart
[11], and Collins & Morrison [12] reported that by
using genetics construction kits as part of a strategic
computer simulation, undergraduate and high school
students learned to “solve” genetics problems and to
build accurate and rich mental models of genetic
knowledge. However, Parker [13] and Tannehill [14]
have found no significant differences between
computer simulation instruction and traditional
instruction. Hopkins [15], Hummel & Batty [16], and
Tylinski [17] even reported an opposite finding: the
significantly greater effectiveness of traditional
instruction.
Two review studies regarding the effectiveness of
computer simulation have also been published. A
review conducted by Dekkers & Donatti [18]
concluded that although simulations were more
effective in the development of attitudes than lectures,
it appeared that the claims of improved cognitive
development and learning retention were not readily
supported. Lee’s [19] review used a meta-analytic
approach: it collected 19 studies and yielded 51 effect
sizes. The study found that the overall mean effect size
for academic achievement was .41, meaning that the
average student in the computer simulation classes
outperformed about 66% of the students in the control
groups. On the other hand, the overall effect size for
attitude was -.04, meaning control groups performed
slightly better than computer simulation classes.
Obviously, the results of these two reviews were
inconsistent. It is assumed that these conflicting
results may be due to the different sources used by the
two review studies or their different methodologies.
Computer simulations have been defined in
different ways by different researchers. According to
Alessi & Trollip [20], simulation is just one of many
types of computer-assisted instruction (CAI).
Lee [19] defined simulation in a broad sense as a
computer program which temporarily creates a set of
images (items, objects) and connects them through
cause-and-effect relations. Thomas & Hooper [21]
defined computer-based instructional simulation as a
computer program containing a manipulatable model
of a real theoretical system. The program then enables
the student to change the model from a given state to a
specified goal-state by directing it through a number
of intermediate states. Thus, the program accepts
commands from the user, alters the state of the model,
and when appropriate displays the new state (p.498).
Virtual reality (VR) is a computer technology
which combines computer graphics, computer
simulation, and human-computer interfaces [22]. In
one sense, VR shares some characteristics of computer
simulation, such as the mimicking of real life and
user-driven control. However, it is not the intention of
this study to discuss the different definitions of
computer simulation. For the purposes of the present
meta-analysis, studies employing computer
simulations or VRs as delivery systems for instruction
were considered to be types of computer simulation
given a broader definition of the term, and were thus
included in the group of studies analyzed.
1.1 The Development of Computer
Simulation in Taiwan
The development of CAI in Taiwan has moved
from the development of traditional courseware for
mainframe computers to Windows-based CAI, then to
multimedia CAI, and finally to web-based CAI (see
details in [23]). Within this process, the development of
computer simulation (CS) in Taiwan began with
system/tool design. The two earliest studies regarding
CS (i.e., [24], [25]) were published in the 1980s. However, most
of these studies were published after 1996. The total
number of studies counted for this category is 41, and
the subject areas of these CS systems are diverse,
including Physics, Chemistry, Mathematics, Earth
Science, Geography, Statistics, Computer Science,
Health Education, Physical Education, Architecture,
etc. The earliest empirical study of CS was published
in 1993 [26], while more than 80% of these studies
were published after 2000. The grade levels of the
students who participated in these studies ranged from
elementary to graduate, and the subject areas studied
are also wide-ranging.
1.2 Purposes of this Study
In spite of the many claims for the potential
benefits of using computer simulation in education,
the results of research comparing the effects of
computer simulation instruction and those of
traditional instruction in Taiwan are conflicting. For
example, Chao [27], Chuang [28], Huang [29], Lin
[30], Nein [31], and Su [32] all reported significant
gains for CSI as compared with traditional instruction.
On the other side, Chen [33], Jao [26], Tseng [34], and
Yu [35] found no significant differences in the
effectiveness of using CSI and traditional methods of
instruction. Owing to the contradictory evidence
provided by existing research in the area, and the fact
that very little, if any, thorough quantitative synthesis
of computer simulation instruction in Taiwan has been
done, the present author thought it important to
conduct a meta-analysis in order to clarify the
above-mentioned research findings. Moreover, since 2
meta-analyses of CSI have been published in the USA
[18],[19], and since this is the first meta-analysis of
CSI to be conducted in Taiwan, the synthesis of
previous research undertaken here not only examines
the accumulated research-based effects of computer
simulation on students' learning efficiency in Taiwan,
but also provides a comparative view of meta-analyses
of CSI in Taiwan and the USA.
2. Procedures
The research method used in this study is a
meta-analytic approach similar to that suggested by
Kulik, Kulik, & Bangert-Drowns[36]. Their approach
requires a reviewer to (a) locate studies through
objective and replicable searches; (b) code the studies
for salient features; (c) describe outcomes on a
common scale; and (d) use statistical methods to relate
study features to outcomes [36]. Their method differs
from that of Glass, McGaw, & Smith [37]: in Kulik et
al.’s approach, each individual study, defined as the set
of results from a single publication, was weighted
equally with all the other studies, so that the problem
of aggregating multiple effect sizes from a single study
could be avoided.
The purpose of the present study was then to
synthesize and analyze the existing research on the
effects of these two instructional approaches. It was
necessary to define these approaches precisely in
order to ensure the proper selection of appropriate
studies. “Computer Simulation Instruction (CSI)” was
taken to refer to classes using computer simulation as
a replacement for or supplement to traditional
instruction in order to teach students. “Traditional
Instruction (TI)” was taken to refer to classes using
traditional methods of instruction, that is,
non-computer-based methods, to teach students.
2.1 Data Sources
The studies considered for use in this
meta-analysis came from four sources. One large
group of studies came from computer searches of the
Chinese Periodical Index. A second group of studies
came from the Dissertation and Thesis Abstract
System of Taiwan. A third group of studies was
retrieved from the Government Research Bulletin
(GRB) of Taiwan. The last group of studies was
retrieved via the bibliographies of the documents
located.
Twenty-nine studies were located through these
search procedures: 25 came from the Dissertation and
Thesis Abstract System, 2 from National Science
Council (NSC) research projects, and the other 2 were
retrieved from published journals and proceedings.
Several criteria were established for inclusion of
studies in the present analysis.
1. Studies had to compare the effects of CSI
versus TI on students’ achievement.
2. Studies had to provide quantitative results
from both CSI and TI classes.
3. Studies had to be retrievable from
university or college libraries by
interlibrary loan, from the GRB, or from
Taiwan’s Dissertation and Thesis Abstract
System.
4. Studies had to use Taiwan’s students as
subjects.
There were also several criteria for eliminating
studies or reports cited by other reviews: (a) studies
did not report sufficient quantitative data in order to
estimate Effect Sizes; (b) studies reported only
correlation coefficients - the r value or Chi-square
value; (c) studies could not be obtained through
interlibrary loans or from standard clearinghouses.
2.2 Outcome Measures
The instructional outcome measured most often
in the 29 studies was student learning, as indicated on
standard or researcher-developed achievement tests at
the end of the instructional program. For statistical
analysis, outcomes from a variety of different studies
using a variety of different instruments had to be
expressed on a common scale. The transformation
used for this purpose was the one recommended by
Glass et al [37] and modified by others (e.g., Hunter,
Schmidt, and Jackson [38]). To reduce measurements
to a common scale each outcome was coded as an
Effect Size (ES), which is defined as the difference
between the mean scores of two groups divided by the
standard deviation of the control group. For those
studies that did not report means and standard
deviations, F values or t values were used to estimate
the ES; these formulas are presented in Table 1. Also,
with studies that employed a one-group
pretest-posttest design, in which there was no control
group, an alternative approach suggested by Andrews,
Guitar, and Howie [39] was used. In their approach the
ES is estimated by comparing the post-treatment mean
with the pre-treatment mean, and then dividing by the
pre-treatment standard deviation.
Table 1: Formulas Used in Calculating Effect Size
___________________________________________
Mean and SD    ES = (Mx - Mc) / SDc
t value        ES = t × √(1/N1 + 1/N2)
F value        ES = √F × √(1/N1 + 1/N2)
___________________________________________
Note. ES = effect size. Mx = mean for the experimental
group. Mc = mean for the control group. SDc = standard
deviation of the control group. N1 = number of subjects in
the experimental group. N2 = number of subjects in the
control group.
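To make the application of these formulas concrete, the following is a minimal sketch in Python (not code from the original study; the function names and the example numbers are illustrative only). The F-value formula assumes the usual one-degree-of-freedom case, in which t = √F.

import math

def es_from_means(m_exp, m_ctrl, sd_ctrl):
    # ES as the experimental-control mean difference divided by the control-group SD.
    return (m_exp - m_ctrl) / sd_ctrl

def es_from_t(t, n1, n2):
    # Estimate ES from a reported t value and the two group sizes.
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def es_from_f(f, n1, n2):
    # Estimate ES from a one-degree-of-freedom F value (t = sqrt(F)).
    return math.sqrt(f) * math.sqrt(1.0 / n1 + 1.0 / n2)

def es_pre_post(m_post, m_pre, sd_pre):
    # One-group pretest-posttest design, following Andrews, Guitar, and Howie [39].
    return (m_post - m_pre) / sd_pre

# Hypothetical example: experimental mean 78, control mean 72, control SD 10 -> ES = 0.6
print(es_from_means(78, 72, 10))
# Hypothetical example: t = 2.1 with 30 subjects per group -> ES is roughly 0.54
print(es_from_t(2.1, 30, 30))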
In most cases, the application of the formula
given by Glass and his colleagues was quite
straightforward. But in some cases, more than one
value was available for use in the ES formula. Thus,
when some studies reported differences in both
posttest measures and pre-post gains, and some studies
reported both raw-score differences and
covariance-adjusted differences between groups, the
pre-post gains and covariance-adjusted differences
were selected for estimating ES. In addition, several
studies used subscales or subgroups in measuring a
single outcome: for example, some reported
separate data by gender or grade. In such cases, each
comparison was weighted in inverse proportion to the
number of comparisons within the study (i.e. 1/n,
where n = the number of comparisons in the study) so
that the overweighting of the ES in a given study could
be avoided (see, for example, [40], p. 230).
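For instance (a hypothetical illustration, not a comparison drawn from a particular included study), a study reporting separate comparisons for boys (ES = 0.60) and girls (ES = 0.20) would have each comparison weighted by 1/2 and would therefore contribute a single study-weighted ES of (0.60 + 0.20)/2 = 0.40.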
2.3 Variables Studied
Seventeen variables were coded for each study in
the present synthesis. These variables are listed in
Table 2. Each of these variables was placed in one of
the following sets of characteristics: (a) study
characteristics, (b) methodological characteristics, and
(c) design characteristics. The first 3 variables in the
set of study characteristics were coded so that
potentially different effects for subjects with different
backgrounds could be detected. The other 2 variables
(i.e., type of publication and year of publication) in the
set of study characteristics were coded because it is
important to know how effects are related to sources
of information over time. The 6 variables placed in the
set of methodological characteristics were coded so
that effects related to the characteristics of research
procedures could be detected. The last 6 variables in
the set of design characteristics were coded because it
is critical to know how effects are related to the nature
and design of the primary research. Each variable was
employed as a factor in an analysis of variance
(ANOVA), used to investigate whether there were
significant differences within each variable on the ES.
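As a sketch of this procedure (assuming the 29 study-weighted ESs have been grouped by the levels of one coded variable; the values below are placeholders rather than the actual coded data), a one-way ANOVA can be computed with SciPy as follows:

from scipy import stats

# Placeholder study-weighted ESs grouped by the levels of one coded variable
# (e.g., grade level); these are not the actual values from the 29 studies.
elementary  = [0.27, 0.64, 0.32, 0.43, 0.85]
junior_high = [0.21, 1.17, 0.14]
senior_high = [0.37, 0.24, 0.53]

f_value, p_value = stats.f_oneway(elementary, junior_high, senior_high)
print(f"F = {f_value:.3f}, p = {p_value:.3f}")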
Table 2: The Assignments of Studied Variables in
Each Characteristic
___________________________________________
Characteristics Variables
___________________________________________
Study Characteristics
Grade Level
Location of School
Subject Area
Type of Publication
Year of Publication
Methodological Characteristics
Instructor Bias
Instrumentation
Reliability of Measure
Sample Size
Selection Bias
Type of Research Design
Design Characteristics
Comparison group
Duration of Treatment
Implementation of Innovation
Type of Instruction for Treatment
Type of Innovation
Visual Presentation
___________________________________________
2.4 Coder Reliability
To obtain more reliable outcomes from coding,
the author of this study and 2 research assistants coded
the studies. Each of the 2 research assistants coded
half of the studies on each of the independent variables.
To check for accuracy, the author coded each of the
studies independently. The inter-coder agreement across
the studies was 85%. In addition, any differences in
coding (i.e., inter-coder differences) between the two
coders of a study were discussed, and final agreement
was reached through discussion.
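As a simple sketch of how such an agreement rate can be computed (the codings below are hypothetical, not taken from the actual coding sheets):

def percent_agreement(codes_a, codes_b):
    # Proportion of items on which the two coders assigned the same code.
    matches = sum(1 for a, b in zip(codes_a, codes_b) if a == b)
    return matches / len(codes_a)

# Hypothetical codings of six studies on one variable by two coders.
coder_1 = ["3-D", "2-D", "3-D", "2-D", "3-D", "2-D"]
coder_2 = ["3-D", "2-D", "3-D", "3-D", "3-D", "2-D"]
print(percent_agreement(coder_1, coder_2))   # 0.83..., i.e., about 83% agreement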
3. Results
The number of comparisons and the
study-weighted ESs (defined as the overall ES for a
single study) are reported in Table 3. Of the 29 studies
included in the present synthesis, 27 (93%) of the
study-weighted ESs were positive and favored the CSI
group, while 2 (7%) of them were negative and
favored the TI group. The range of the study-weighted
ESs was from -0.197 to 2.67. The overall grand mean
for all 29 study-weighted ESs was 0.537. When this
mean ES was converted to percentiles, the percentiles
indicating student achievement were 70 for the CSI
group and 50 for the TI group. The overall grand
median for all 29 study-weighted ESs was 0.373,
suggesting that percentiles indicating student
achievement were 64 for the CSI group and 50 for the
TI group. The standard deviation of 0.573 reflects the
mild variability in ESs across studies.
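The percentile figures above (and the 95% confidence interval reported in Table 3) can be reproduced from the standard normal distribution and a t-based interval; a minimal check in Python, using only the summary statistics reported here, is:

from scipy import stats

mean_es, sd_es, n = 0.537, 0.573, 29

# Percentile of the average CSI student relative to the TI distribution.
print(stats.norm.cdf(mean_es) * 100)   # about 70 (grand mean ES)
print(stats.norm.cdf(0.373) * 100)     # about 64-65 (grand median ES)

# 95% confidence interval for the grand mean of the study-weighted ESs.
se = sd_es / n ** 0.5
half_width = stats.t.ppf(0.975, n - 1) * se
print(mean_es - half_width, mean_es + half_width)   # about 0.319 to 0.755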
Table 3: Number of Comparisons and
Study-weighted Effect Sizes
________________________________________
Author(s)            Year   N of Comparisons   ES
Chang, H. P. [41] 2001 2 0.273
Chao, J. H. [27] 2001 1 0.645
Chao, J. T. [42] 1999 10 0.322
Chen, C. H. [33] 2002 3 -0.083
Chen, C. [43] 2003 1 0.262
Chen, H. [44] 2005 2 0.207
Chen, T. [45] 1998 1 0.373
Chen, Y. S. [46] 2002 2 0.238
Chuang, C. F. [28] 2000 1 1.964
Guan, H. D. [47] 1999 1 0.428
Hsu, Y. S. et al [48] 2001 1 0.302
Hsu, Y. S. [49] 2002 1 0.370
Huang, C. K. [50] 2002 3 0.433
Huang, J. C. [29] 2002 1 1.165
Jao, Y. H. [26] 1993 2 0.144
Lai, Y. J. [51] 2002 1 -0.197
Li, M. [52] 2005 2 0.837
Lin, C. [53] 2004 1 0.410
Lin, U. C. [30] 2002 1 2.670
Lin, Y. [54] 2005 1 0.849
Nein, J. S. [31] 2002 4 0.531
Shen, S. [55] 2005 1 0.650
Su, C. Y. [56] 2000 1 0.335
Su, J. S. [32] 2002 15 0.511
Tseng, C. [34] 2002 2 0.084
Wang, C. [57] 2005 1 0.716
Wang, K. K. [58] 1994 3 0.290
Wang, T. [59] 2005 1 0.543
Yu, J. [35] 2002 1 0.144
Overall grand mean 0.537
Overall grand SD 0.573
Overall grand median 0.373
95% Confidence interval 0.319~0.755
Range -0.197~2.670
Positive ES (%) 27(93%)
Negative ES (%) 2(7%)
Note. Total N = 2729. Total N of studies = 29.
Total N of comparisons = 67
Among the 67 ESs included in the present
synthesis, 59 (88%) were positive and favored the CSI
group, while 8 (12%) were negative and favored the TI
group. The range of ESs was from -0.691 to 2.67. The
ESs for the 67 comparisons are displayed in a scatter
diagram in Figure 1. This diagram shows that despite
several large effects, most of the ESs were small to
moderate in magnitude. About 66% of the ESs lie
between -0.5 and 0.5, while less than 31% of the ESs
were greater than 0.5.
Table 4 lists the F values for the 17 variables for
all study-weighted ESs in the study. Descriptive
statistics for the 17 variables are presented in Table 5.
Among the 17 variables, only 1 variable (reliability of
measure) showed a statistically significant effect. The
post hoc test (Scheffe), F(2,26)=3.88, p<.05, showed
that the mean ES of studies in which the
reliability of measure was coded as ‘adequate
indication’ was higher than that of studies coded as
‘actual reliability figure’ or ‘unspecified’. No
significant difference was found between the mean
ESs of studies coded as ‘actual reliability
figure’ and ‘unspecified’. However, since only
1 study was coded as ‘adequate indication’, this
result should be considered tentative.
Figure 1: The Scatter Diagram of Effect Size (effect size on the y-axis plotted against case number on the x-axis for the 67 comparisons)
Table 4: Results of ANOVAs for Coded Variables
Variables df F p
Study Characteristics
Grade level 3,25 0.459 0.713
Location of school 4,24 0.566 0.690
Subject area 4,24 0.148 0.962
Type of publication 1,27 0.523 0.476
Year of publication 2,26 0.464 0.634
Methodological Characteristics
Instructor bias 3,25 0.842 0.484
Instrumentation 2,26 0.281 0.757
Reliability of measure 2,26 3.880 0.034*
Selection bias 2,26 0.648 0.532
Sample size 2,26 0.466 0.633
Type of research design 3,25 0.248 0.862
Design Characteristics
Comparison group 1,27 0.018 0.895
Duration of treatment 3,25 0.255 0.857
Implementation of
innovation 1,27 0.118 0.733
Type of instruction for
treatment 4,24 0.352 0.840
Type of innovation 1,27 0.125 0.726
Visual presentation 2,26 0.298 0.745
*p<.05
Table 5: Means and Standard Deviations of
Study-weighted ESs for Coded Variables
___________________________________________
Variables N % ES SD
Study Characteristics
Grade Level
1st – 6th 13 44.8 0.555 0.689
7th – 9th 5 17.2 0.493 0.385
10th – 12th 8 27.6 0.411 0.286
College 3 10.3 0.873 0.962
Location of School
North of Taiwan 9 31.0 0.337 0.148
Center of Taiwan 5 17.2 0.791 0.670
South of Taiwan 11 37.9 0.619 0.740
East of Taiwan 1 3.4 0.207 N/A
Unspecified 3 10.3 0.419 0.690
Subject Area
Science 13 44.8 0.559 0.685
Physics &
Chemistry 3 10.3 0.505 0.627
Engineering 5 17.2 0.559 0.217
Math 1 3.4 0.084 N/A
Others 7 24.1 0.561 0.629
Type of Publication
Dissertation 25 86.2 0.506 0.540
Others 4 13.8 0.731 0.823
Year of Publication
1993~1999 5 17.2 0.311 0.107
2000~2002 16 55.2 0.597 0.749
2003~2005 8 27.6 0.559 0.248
Methodological Characteristics
Instructor Bias
Same 13 44.8 0.474 0.319
Different 2 6.9 0.209 0.091
No Instructor 4 13.8 0.369 0.054
Unspecified 10 34.5 0.753 0.889
Instrumentation
Local 25 86.2 0.570 0.612
Standardized 1 3.4 0.373 N/A
Unspecified 3 10.3 0.321 0.044
Reliability of Measure
Actual reliability
figure 15 51.7 0.73 0.639
Adequate
Indication 1 3.4 1.964 N/A
Unspecified
or inadequate 13 44.8 0.503 0.337
Sample Size
0~60 10 34.5 0.496 0.219
61~120 12 41.4 0.654 0.859
Over 121 7 24.1 0.397 0.236
Selection Bias
Adequately
minimized 11 37.9 0.469 0.754
Probably threat 14 48.3 0.654 0.486
Unspecified 4 13.8 0.317 0.044
Type of Research Design
One group repeated
measure 5 17.2 0.426 0.134
Posttest only
control group 1 3.4 0.144 N/A
Nonequivalent
control group 20 69.0 0.586 0.677
Static group
comparison 3 10.3 0.529 0.272
Design Characteristics
Comparison Group
Traditional
instruction 22 75.9 0.546 0.640
No comparison
group 7 24.1 0.512 0.312
Duration of Treatment
0~6 Hours 16 55.2 0.520 0.647
6~24 Hours 4 13.8 0.506 0.233
24~96 Hours 6 20.7 0.699 0.640
Unspecified 3 10.3 0.348 0.463
Implementation
of Innovation
Replacement for
usual instruction 24 82.8 0.554 0.627
Supplement to
instruction 5 17.2 0.456 0.158
Type of Innovation
Computer
Simulation 19 65.5 0.510 0.454
Virtual Reality 10 34.5 0.590 0.777
Type of Instruction
for Treatment
Large group 3 10.3 0.505 0.627
Small group (less than
5 persons in a group) 4 13.8 0.268 0.145
Individual 10 34.5 0.682 0.510
Mixed 1 3.4 0.543 N/A
Unspecified 11 37.9 0.512 0.742
Visual Presentation
2-D 8 27.6 0.404 0.386
3-D 13 44.8 0.606 0.680
Unspecified 8 27.6 0.559 0.584
4. Discussion
The results of this meta-analysis indicate that CSI
has moderately positive effects on students’
achievement in comparison with the effects of TI. An
effect is said to be medium when ES = 0.5 and large
when ES = 0.8 [60]. The effectiveness of CSI is also
confirmed by 93% of the positive study-weighted ES
values and 88% of the positive ESs overall. These
results are consistent with Lee’s [19] meta-analysis.
The moderateness of the effect must be kept in mind,
however; the overall study-weighted mean ES of
0.537 indicates a score only 20 percentile points higher
than that of the TI group. The percentile scores for the
overall grand mean and median were 70 and 64,
respectively. The difference of 6 percentile points may
possibly be attributed to the mild overall grand
standard deviation (0.573).
This analysis of the variables suggests some
interesting trends in the accumulated research base
and is discussed in the following sections.
4.1 Study Characteristics
For the grade-level variable, there was no
significant difference in mean ES. However, the
smallness of the ES associated with high school
subjects (10th – 12th graders) is probably due to the fact
that these students have to study very hard for a
nationwide college entrance examination in Taiwan
and using CSI may not be the most sensible approach
for achieving the primary goal, which is passing this
examination. It is also possible that different
instructional approaches were used for these students
as compared to the other students. More studies need
to be conducted to clarify this variable.
CSI studies conducted to measure students’
achievement tend to focus on specific subject areas.
Studies included in the present meta-analysis were
spread over a wide range of subject areas. The fact that
more than 60% of the studies examined the effects of
CSI in the teaching of science (including Physics &
Chemistry) or Engineering suggests that CSI is more
often used for teaching these two subject areas in
Taiwan. In addition, the various subject areas
examined also suggest that CSI may potentially be
implemented in many different subject areas.
However, one subject area (math) showed the smallest
ES (0.084), suggesting that the effects of CSI may
vary over different subject areas.
The source of studies in a meta-analysis is always
an important factor to be examined [37], [61]. The fact
that approximately 85% of the studies were located in
dissertations/theses and only 14% were from journals
and research projects was predictable for Taiwan. It is
probably due to the limitation on length for papers
published in most journals; also, a large number of
educational journal articles in Taiwan do not report
detailed quantitative data, which restrains them from
being included in a meta-analysis. The larger ES
associated with published articles is typical in
meta-analysis ([37], p. 227).
The year-of-publication variable in the
meta-analysis allows an assessment of the effect of
CSI over time. More than half of the studies located
were published between 2000 and 2002, suggesting
that CSI studies have become popular in Taiwan only in
the past seven years; it is expected that more
studies will be published soon.
4.2 Methodological Characteristics
After reviewing several meta-analyses of media
comparison research, Clark [62] suggested that the
positive effects of media seemed to be the
uncontrolled effects of instructional method or content
differences between treatments that were compared;
he concluded that effects more or less disappeared
when the same instructor delivered all treatments. For
the present synthesis, although no significant
differences in ES were found, the results show that
studies using the same instructors for treatments had
visibly higher ES than those using different instructors.
This finding clearly indicates that the positive effects
of CSI as compared with TI should not be confused
with the uncontrolled effects of instructional method
noted by Clark. However, because more than 1/3 of the
studies were coded as unspecified, these results should
be considered tentative.
As for instrumentation, more than 85% of the
studies used local (researcher-developed) instruments,
while less than 5% used standardized instruments.
This is possibly because CSI is a new field in
educational research in Taiwan and there are not many
published instruments available. The sample size for a
study may significantly affect the statistical power of
the study; in general, the larger the sample size, the
better the statistical power. For the present
meta-analysis, 41% of the studies used a medium sample
size (61-120), and these studies also had the largest mean
ES. This seems to suggest that the effects of CSI
on students’ achievement may work better for
medium-sized samples in Taiwan.
As for research design type, 69% of the studies
employed a nonequivalent control group design and
its mean ES was higher than that for other types of
designs. This result suggests that the nonequivalent
control group design is the preferred design for
CSI studies in Taiwan, and that studies using this design
also produced results showing higher levels of academic
achievement.
4.3 Design Characteristics
As for comparison groups, the mean ES for studies
with a TI comparison group was slightly higher than that
for studies with no comparison group (e.g., studies in which the same
group is repeatedly measured). This result suggests the
greater effectiveness of CSI as compared with TI. In
one sense, this is unusual, because students in TI
classes do have certain learning activities, but in “No
Instruction” classes there is no learning activity for
students. Why “No Instruction” had better academic
outcomes than TI, when both were compared with CSI,
poses a serious question for researchers in Taiwan.
Duration of treatment is usually a critical variable
in meta-analysis. Clark [62], after reviewing several
meta-analyses of CAI, suggested that the effects of
new instructional media were due to a novelty effect,
because the ES was reduced when treatments lasted for
a longer period of time. Liao and Bright [63] also
reported this novelty effect in their meta-analysis of
the effects of programming on students’ cognitive
abilities. Although no significant difference was found
for this variable, the results of this synthesis do not
quite support the previous viewpoint regarding the
novelty effect. The largest mean ES was associated
with studies lasting 24 – 96 hours, while the smallest
mean ES was connected with studies lasting 6-24
hours. There may be some unknown effects related to
the duration of treatment that influence students’ CSI
outcomes in Taiwan and the US, and these may lead to
the distinctly different outcomes of their respective
meta-analyses. More cross-national comparative
studies need to be done to clarify this issue.
It is usually difficult for teachers to decide
whether to use an innovation as a replacement for
their usual means of instruction or as a supplement to
their instruction. In most cases, the decision depends
on which approach can provide more effective
outcomes. The difference in mean ES between studies
coded as replacements and those coded as
supplements was quite small, suggesting that in
Taiwan replacement CSI and supplemental CSI are
equally effective means of enhancing student
achievement. As for the type of innovation, the
difference in mean ES between computer simulation
and virtual reality (VR) was trivial, suggesting that
computer simulation and VR are equally effective
means of improving students’ learning outcomes.
Of the 29 studies included, 10 (35%)
employed individual instruction for the CSI classes, 4
(14%) used small group instruction, and only 3
(10%) used large group instruction.
Although no significant difference was found for this
variable, the differences in mean ES among studies
using individual (ES = 0.682), small group (ES =
0.268), and large group (ES = 0.505) instruction were
noticeable. These results suggest that CSI may be
more effective if it is implemented in individual or
large group settings in Taiwan. However, given that
over 1/3 of the studies were coded as unspecified, this
result was considered tentative.
As for visual presentation, the mean ES for
studies coded as 3-D (3-dimensional) was mildly
higher than that for studies coded as 2-D. This
suggests that 3-D visual presentation may be more
effective than 2-D for students’ learning. In general,
3-D approximates the real world more closely than
2-D; thus it may tend to provide learners with more
authentic feelings with regard to the subject area(s).
5. Conclusion
The results of this study suggest that the effects
of CSI are positive as compared with those of TI in
Taiwan. While many educators have put and are
putting tremendous effort into devising new ways of
using computer technology in the classroom, with the
clear expectation that such technology will
dramatically increase students’ academic achievement,
the results of this study can provide classroom
teachers with a cumulative bank of research-based
evidence for the positive effects of CSI on student
learning. The analyses of the studied variables,
however, provided no consistent evidence to support
Clark’s claim that there are no learning benefits to be
gained from employing different media in instruction
[62], [64], [65], [66], [67]. From an instructional
medium designer's point of view, Clark might
overlook the fact that certain media attributes make
certain methods possible, particularly when new
technology, such as computer simulation, is used as
the instructional innovation.
Left unanswered is the question as to which
factors most clearly contribute to positive outcomes.
Future research attempting to answer this question
will require further clarification of the exact
relationship between computer simulation and
learning. This meta-analysis shows only that CSI
tends to improve students’ academic achievement.
That information by itself is useful.
Acknowledgements
This research project was supported by a grant
from the National Science Council, Taiwan, Republic
of China, under contract NSC 92-2520-S-003-017. Our
gratitude also goes to the Academic Paper Editing
Clinic, National Taiwan Normal University.
References
References marked with an asterisk indicate
studies included in the meta-analysis
[1] Sheingold, K. and Hadley, M., 1990,
“Accomplished teachers integrating computers
into classroom practice,” New York: Bank
Street College of Education, Center for
Technology in Education.
[2] Thomas, R. and Hooper, E., 1991, “Simulation:
An opportunity we are missing,” Journal of
Research on Computing in Education, vol. 23,
no. 4, pp. 497-513.
[3] Heinich, R., Molenda, M. and Russell, J. D.,
1985, “Instructional media and the new
technologies of instruction,” Canada: John
Wiley & Sons Press.
[4] Faria, A.J. and Nulsen, R.O., 1995, “The
COMPETE saga: Or 25 years of writing and
administering simulation games,” Simulation &
Gaming, vol. 26, no. 4, pp. 439-448.
[5] Huppert, J., Yaakobi, J. and Lazarowitz, R.,
1998, “Learning microbiology with computer
simulations: Students’ academic achievement
by method and gender,” Research in Science &
Technological Education, vol. 16, no. 2, pp. 231-
246.
[6] Rivers, R. H. and Vockell, E., 1987, “Computer
simulations to stimulate scientific problem
solving,” Journal of Research in Science
Teaching, vol. 24, no. 5, pp. 403 – 416.
[7] Michael, K. Y., 2000, “Comparison of students’
product creativity using a computer simulation
activity versus a hands-on activity in
technology education,” Unpublished doctoral
dissertation, Virginia Polytechnic Institute and
State University, VA.
[8] Chou, C. H., 1998, “The effectiveness of using
multimedia computer simulations coupled with
social constructivist pedagogy in a college
introductory physics classroom,” Unpublished
doctoral dissertation, Columbia University
Teachers College, NY.
[9] Serpell, Z. N., 2002, “Ethnicity and tool type as
they relate to problem-solving, transfer and
proxemic behavior in a communal learning
context,” Unpublished doctoral dissertation,
Howard University, WA.
[10] Slack, S. and Stewart, J., 1989, “Improving
student problem solving in genetics,” Journal
of Biological Education, vol. 23, pp. 308 – 312.
[11] Johnson, S. K. and Stewart, J., 1990, “Using
philosophy of science curriculum development:
An example from high school genetics,”
International Journal of Science Education, vol.
12, pp. 297 – 307.
[12] Collins, A. and Morrison, D., 1992, “Strategic
simulations in undergraduate biology: An
opportunity for instruction,” The 65th Annual
Meeting of the National Association for
Research in Science Teaching (NARST),
Boston, MA, USA.
[13] Parker, Z. J., 1995, “An exploratory study of
experiential hands-on learning vs. computer
simulation in a high school biology class,”
Unpublished doctoral dissertation, The
Pennsylvania State University, PA.
[14] Tannehill, D. E., 1998, “The use of a
low-fidelity computer simulation in teaching
the diagnosis of electronic automotive
systems,” Unpublished doctoral dissertation,
University of Missouri – Columbia, MO.
[15] Hopkins, K. S., 2001, “The effects of computer
simulation versus hands-on dissection and the
placement of computer simulation within the
learning cycle on student achievement and
attitude,” Unpublished doctoral dissertation,
Baylor University, TX.
[16] Hummel, T. J. and Batty, C. M., 1989, “A
comparison of computer simulation and
video-taped roleplays as instructional methods
in the teaching of specific interviewing skills,”
The Annual Meeting of the American
Educational Research Association, San
Francisco, CA, USA.
[17] Tylinski, D. J., 1994, “The effect of a computer
simulation on junior high students'
understanding of the physiological systems of
an earthworm,” Unpublished doctoral
dissertation, Indiana University of
Pennsylvania, PA.
[18] Dekkers, J. and Donatti, S., 1981, “The
integration of research studies on the use of
simulation as an instructional strategy,” Journal
of Educational Research, vol. 74, no. 6, pp.
424-427.
[19] Lee, J., 1999, “Effectiveness of
Computer-based instruction simulation: A
meta-analysis,” International Journal of
Instructional Media, vol. 26, no. 1, pp. 71-85.
[20] Alessi, S. M. and Trollip, S. R., 2001,
“Multimedia for learning : methods and
development,” Boston: Allyn and Bacon.
[21] Thomas, R. and Hooper, E., 1991, “Simulation:
An opportunity we are missing,” Journal of
Research on Computing in Education, vol. 23,
no. 4, pp. 497-513.
[22] McLellan, H., 1996, “Virtual realities,” In
Jonassen, D. H. (Ed.), Handbook of Research
for Educational Communications and
Technology (pp. 457-487). New York: Simon &
Schuster Macmillan.
[23] Liao, Y. C., 2007, “Effects of computer assisted
instruction on students’ achievement in Taiwan:
A meta-analysis,” Computers and Education,
vol. 48, no. 2, pp. 216-233.
[24] Feng, D., 1984, “A computer simulation on
corn-dried system,” Unpublished master’s
dissertation, National Taiwan University, Taipei,
Taiwan.
[25] Tao, J., 1999, “A computer simulation for pre-
laplacian-of gaussian operation,” Unpublished
master’s dissertation, National Taiwan
University, Taipei, Taiwan.
[26] Jao, Y. H., 1993, “An empirical study of the
learning effects of a computer simulation,”
Unpublished master dissertation, Tamkang
University, Taipei, Taiwan.
[27] Chao, J. H., 2001, “A study for learning
effectiveness of ‘Electronic Circuit’ practical
course with computer-simulation teaching
strategy on vocational senior high school
level,” Unpublished master dissertation,
National Changhua University of Education,
Changhua, Taiwan.
[28] Chuang, C. F., 2000, “The efficacy of
computer-simulated courseware in teaching
selected technological concepts,” Proceedings
of the National Science Council (Part D), vol.
10, no. 2, pp. 51-60.
[29] Huang, J. C., 2002, “The effect of applying
multimedia animation resources on the
teaching of experiments of physics and
chemistry in junior high schools,” Unpublished
master dissertation, National Kaohsiung
Normal University, Kaohsiung, Taiwan.
[30] Lin, U. C., 2002, “A study of virtual-lab on
WWW affects students’ learning achievement
of natural science learning at elementary
schools,” Unpublished master’s dissertation,
National Tainan Teachers College, Tainan,
Taiwan.
[31] Nein, J. S., 2002, “A study of learning
effectiveness of digital logic course with
computer-simulation teaching method at
vocational senior high school level,”
Unpublished master dissertation, National
Changhua University of Education, Changhua,
Taiwan.
[32] Su, J. S., 2002, “The study on the learning
effect of algorithms with Internet hypermedia
and interaction algorithm animations,”
Unpublished master dissertation, Ming Chuan
University, Taipei, Taiwan.
[33] Chen, C. H., 2002, “The effects of multimedia
simulation on senior high school students’
physics achievement,” Unpublished master
dissertation, National Kaohsiung Normal
University, Kaohsiung, Taiwan.
[34] Tseng C., 2002, “The effects of advanced
organizers and computer visual simulated
environment on primary school level's concept
of probability,” Unpublished master’s
dissertation, National Tainan Teachers College,
Tainan, Taiwan.
[35] Yu, J., 2002, “Applying artificial neural
network on the study of the learning style in a
web-based virtual science lab,” Unpublished
master’s dissertation, National Tainan Teachers
College, Tainan, Taiwan.
[36] Kulik, J., Kulik, C-L. and Bangert-Drowns, R.
L., 1985, “Effectiveness of computer-based
education in elementary pupils,” Computers in
Human Behavior, vol. 1, pp. 59-74.
[37] Glass, G. V., McGaw, B. and Smith, M. L.,
1981, “Meta-analysis in social research,”
Beverly Hills, CA: Sage Publications.
[38] Hunter, J. E., Schmidt, F. L. and Jackson, G. B.,
1982, “Meta-analysis: Cumulating research
findings across studies,” Beverly Hills, CA:
Sage.
[39] Andrews, G., Guitar, B. and Howie, P., 1980,
“Meta-analysis of the effects of stuttering
treatment,” Journal of Speech and Hearing
Disorders, vol. 45, pp. 287-307.
[40] Waxman, H. C., Wang, M. C., Anderson, K. A.
and Walberg, H. J., 1985, “Adaptive education
and student outcomes: A Quantitative
synthesis,” Journal of Educational Research,
vol. 78, no. 4, pp. 228-236.
[41] Chang, H. P., 2001, “A study for learning
effectiveness of computer simulation program
used in practical course ‘Basic Electricity’ for
vocational high school students in division of
refrigerating and air-conditioning,”
Unpublished master dissertation, National
Changhua University of Education, Changhua,
Taiwan.
[42] Chao, J. T., 1999, “Computerized project-based
learning for elementary science education,”
Unpublished master dissertation, National
Chengchi University, Taipei, Taiwan.
[43] Chen, C., 2003, “Study on merging information
technology into “Nine-year coherent
curriculum on earth science” creative
teaching-- An example to 921 earthquake
virtual reality education,” Unpublished
master’s dissertation, National Tainan Teachers
College, Tainan, Taiwan.
[44] Chen, H., 2005, “Application of Easy Java
Simulation to the Science Course in Junior
High School,” Unpublished master dissertation,
National Changhua University of Education,
Changhua, Taiwan.
[45] Chen, T., 1998, “The study of using virtual
reality to enhance the learning of spatial
ability,” Unpublished master dissertation,
National Chengchi University, Taipei, Taiwan.
[46] Chen, Y. S., 2002, “A study of the effects of
pneumatics with computer simulation
instruction on vocational education,”
Unpublished master dissertation, National
Changhua University of Education, Changhua,
Taiwan.
[47] Guan, H. D., 1999, “The impacts of teaching
for conceptual change strategies enhanced with
computer simulation on students’ learning
about density related concepts,” Unpublished
master’s dissertation, National Hualien
Teachers College, Hualien, Taiwan.
[48] Hsu, Y. S. and Chiu, M. H., 2001, “The use of
multiple representations in a web-based and
situated learning environment,”
(NSC89-2511-S003-133).
[49] Hsu, Y. S., 2002, “The effects of virtual
learning environment with multiple
representations on science learning (III),”
(NSC90-2520-S003-010).
[50] Huang, C. K., 2002, “The efficiency of
applying animation multimedia for assistance
of teaching of physics and chemistry in junior
high school,” Unpublished master dissertation,
National Kaohsiung Normal University,
Kaohsiung, Taiwan.
[51] Lai, Y. J., 2002, “A study of learning styles on
natural science virtual-lab by using fuzzy
inference rules,” Unpublished master’s
dissertation, National Tainan Teachers College,
Tainan, Taiwan.
[52] Li, M., 2005, “Development and effect analysis
and of a VR-based diagnosis training system
for a gasoline-injection engine,” Unpublished
master’s dissertation, National Kaohsiung First
University of Science and Technology,
Kaohsiung, Taiwan.
[53] Lin, C., 2004, “The effect on fifth-grade
students’ concept of earth motion after
implementing information technology
integrated into teaching and assessment,”
Unpublished master’s dissertation, National
PingTung Teachers College, PingTung,
Taiwan.
[54] Lin, Y., 2002, “The effects of integrating
information technology into teaching on
elementary school student Astronomy
achievement – A case of “moon”,”
Unpublished master’s dissertation, National
PingTung Teachers College, PingTung,
Taiwan.
[55] *Shen, S., 2005, “An investigation of using
virtual reality to develop “earth motion”
curriculum in elementary school,” Unpublished
master dissertation, National Chung Cheng
University, Chia Yi, Taiwan.
[56] Su, C. Y., 2000, “How to help students learning
"The Stars" by using computer simulation
software,” Unpublished master dissertation,
Taipei Municipal Teachers College, Taipei,
Taiwan.
[57] *Wang, C., 2005, “The effects of integrating
information technology into teaching on
elementary school student Astronomy
achievement – A case of “star”,” Unpublished
master’s dissertation, National PingTung
Teachers College, PingTung, Taiwan.
[58] Wang, K. K., 1994, “Middle school students'
decision-making on solid waste management in
Taiwan,” The Annual Meeting of the National
Association for Research in Science Teaching,
Anaheim, CA, USA.
[59] Wang, T., 2005, “A Study of problem-based
learning applied in the interactive on-line
simulation teaching -An example for the
“Digital Logic Lab. ” course at vocational high
school,” Unpublished master dissertation,
National Changhua University of Education,
Changhua, Taiwan.
[60] Cohen, J., 1977, “Statistical power analysis for
the behavioral sciences (Revised Edition),” New
York: Academic Press.
[61] Christmann, E., Badgett, J. and Lucking, R.,
1997, “Progressive comparison of the effects
of computer-assisted instruction on the
academic achievement of secondary students,”
Journal of Research on Computing in
Education, vol. 29, no. 4, pp. 325 – 337.
[62] Clark, R. E., 1983, “Reconsidering research on
learning from media,” Review of Educational
Research, vol. 53, no. 4, pp. 445-459.
[63] Liao, Y. C. and Bright, G. W., 1991, “Effects of
computer programming on cognitive outcomes:
A meta-analysis,” Journal of Educational
Computing Research, vol. 7, no. 3, pp.
251-268.
[64] Clark, R. E., 1985, “Computer Research
Confounding,” The annual meeting of the
American Educational Research Association,
Chicago, IL, USA.
[65] Clark, R. E., 1991, “When researchers swim
upstream: Reflections on an unpopular
argument about learning from media,”
Educational Technology, vol. 31, no. 2, pp.
34 – 40.
[66] Clark, R. E., 1994, “Media will never influence
learning,” Educational Technology Research
and Development, vol. 42, no. 2, pp. 21 – 29.
[67] Clark, R. E. and Craig, T. G., 1992, “Research
on multimedia learning effects,” In Giardina, M.
(Ed.), Interactive multimedia learning
environments. New York: Springer Verlag.
Biography Yuen-kuang Liao received his
BA degree from National
Taiwan Normal University in
1980, and M.Ed. and Ed.D. degrees
from the University of Houston,
Houston, Texas, in 1986 and
1990, respectively. Currently, he
is a professor in the Department
of Education, National Taiwan
Normal University (NTNU). He also serves as the
director of the Center for Educational Research in
NTNU. His current research interests include
meta-analysis in teaching and learning, technology
and gender issues, game-based learning, and attitudes
toward the Internet.