Review of Educational Research
March 2011, Vol. 81, No. 1, pp. 4–28
DOI: 10.3102/0034654310393361
© 2011 AERA. http://rer.aera.net
What Forty Years of Research Says About
the Impact of Technology on Learning:
A Second-Order Meta-Analysis and
Validation Study
Rana M. Tamim
Hamdan Bin Mohammed e-University
Robert M. Bernard, Eugene Borokhovski,
Philip C. Abrami, and Richard F. Schmid
Concordia University
This research study employs a second-order meta-analysis procedure to sum-
marize 40 years of research activity addressing the question, does computer
technology use affect student achievement in formal face-to-face classrooms
as compared to classrooms that do not use technology? A study-level meta-
analytic validation was also conducted for purposes of comparison. An
extensive literature search and a systematic review process resulted in the
inclusion of 25 meta-analyses with minimal overlap in primary literature,
encompassing 1,055 primary studies. The random effects mean effect size of
0.35 was significantly different from zero. The distribution was heteroge-
neous under the fixed effects model. To validate the second-order meta-
analysis, 574 individual independent effect sizes were extracted from 13 out
of the 25 meta-analyses. The mean effect size was 0.33 under the random
effects model, and the distribution was heterogeneous. Insights about the
state of the field, implications for technology use, and prospects for future
research are discussed.
Keywords:  computers and learning, instructional technologies, achievement, 
meta-analysis.
In 1913 Thomas Edison predicted in the New York Dramatic Mirror that "books will soon be obsolete in the schools. . . . It is possible to teach every branch of
human knowledge with the motion picture. Our school system will be completely 
changed in ten years” (quoted in Saettler, 1990, p. 98). We know now that this did 
not exactly happen and that, in general, the effect of analog visual media on school-
ing, including video, has been modest. In a not so different way, computers and 
associated technologies have been touted for their potentially transformative prop-
erties. No one doubts their growing impact in most aspects of human endeavor, and 
yet strong evidence of their direct impact on the goals of schooling has been elusive and subject to considerable debate. In 1983, Richard E. Clark famously
argued that media have no more effect on learning than a grocery truck has on the 
nutritional value of the  produce it brings to market. He also  warned against the 
temptation to compare media conditions to nonmedia conditions in an attempt to 
validate or justify their use. Features of instructional design and pedagogy, he 
argued, provide the real active ingredient that determines the value of educational 
experiences. Others, like Robert Kozma (e.g., 1991, 1994) and Chris Dede (e.g., 
1996), have argued that computers may possess properties or affordances that can 
directly change the nature of teaching and learning. Their views, by implication, 
encourage the study of computers and other  educational media use in the class-
room for their potential to foster better achievement and bolster student attitudes 
toward schooling and learning in general.
Although the  debate about technology’s role in education has not  been fully 
resolved, literally thousands of comparisons between computing and noncomput-
ing classrooms, ranging from kindergarten to graduate school, have  been made 
since the late 1960s. And not surprisingly, these studies have been meta-analyzed 
at intervals since then in an attempt to characterize the effects of new computer 
technologies as they emerged. More than 60 meta-analyses have appeared in the 
literature since 1980,  each  focusing on a specific question  addressing  different 
aspects such as subject matter, grade level, and type of technology. Although each 
of the published meta-analyses provides a valuable piece of information, no single 
one is capable of answering  the  overarching  question  of  the  overall  impact  of 
technology use on student achievement. This could be achieved by conducting a 
large-scale comprehensive meta-analysis covering various technologies, subject 
areas, and grade levels. However, such a task would represent a challenging and 
costly undertaking. Given the extensive number of meta-analyses in the field, it is 
more reasonable and more feasible to synthesize their  findings.  Therefore,  the 
purpose of this study is to synthesize findings from meta-analyses addressing the 
effectiveness of computer technology use in educational contexts to answer the big 
question of  technology’s impact on student achievement, when the comparison 
condition contains no technology use.
We employ an approach to meta-analysis known as second-order meta-analysis 
(Hunter &  Schmidt, 2004) as a  way of summarizing the  effects of many  meta-
analyses. Second-order meta-analysis  has  its own merits and has  been  tried by 
reviewers across  several  disciplines  (e.g.,  Butler,  Chapman,  Forman,  & Beck, 
2006; Lipsey & Wilson, 1993; Møller & Jennions, 2002; Wilson & Lipsey, 2001). 
According to those who  have experimented with it, the approach is intended  to 
offer the potential to summarize a growing body of meta-analyses, over a number 
of years, in the same way that a meta-analysis attempts to reach more reliable and 
generalizable inferences than  individual  primary  studies (e.g., Peterson, 2001). 
However, no common or standard set of procedures has emerged, and specifically 
there has been no attempt to address the methodological quality of the included 
meta-analyses or explain the variance in effect sizes.
This second-order meta-analysis attempts to synthesize the findings of the cor-
pus of meta-analyses addressing the impact of computer technology integration on 
student achievement. To validate its results, we conduct a study-level synthesis of 
research reports  contained  in the second-order meta-analysis. Results  will help 
answer the overarching question  of  the  impact  of  technology  use  on  students’
performance as compared to the absence of technology and may lay the foundation 
for new forms of quantitative primary research that investigates the comparative 
advantages or disadvantages of more or less technology use or functions of tech-
nology (e.g., cognitive tools, interaction tools, information retrieval tools).
Syntheses of Meta-Analyses
A second-order meta-analysis (Hunter & Schmidt, 2004) is defined as an
approach for quantitatively synthesizing findings from a number of meta-analyses
addressing a similar research question. In some ways, the methodological issues
are the same as those addressed by “first-order” meta-analysts; as we note, in some
ways they are quite different. As previously indicated, a number of syntheses of
meta-analyses have appeared in the literatures in various disciplines.
Among researchers who have used quantitative approaches to summarize meta-
analytic  results  are  Mark  Lipsey  and  David  Wilson (Lipsey  &  Wilson,  1993; 
Wilson & Lipsey, 2001), both addressing psychological, behavioral, and educa-
tional treatments; Sipe and Curlette (1997), targeting factors related to educational 
achievement; Møller and Jennions (2002), focusing on issues in evolutionary biol-
ogy; Barrick, Mount, and Judge (2001), addressing personality and performance; 
Peterson  (2001),  studying  college  students  and  social  science  research;  and 
Luborsky et al. (2002), addressing psychotherapy research.
All  of  these  previous  syntheses  attempted  to  reach  a  summary  conclusion 
by answering a “big question” that was posed in the literature by previous meta-
analysts. However, none  addressed  the  methodological  quality  of  the  included 
meta-analyses in the same way that the methodological quality of primary research 
is addressed in a typical first-order meta-analysis (e.g., Valentine & Cooper, 2008), 
and none attempted to explain the variance in effect sizes. However, the issue of 
overlap in primary literature included in the synthesized meta-analyses was tack-
led in some. For example, Wilson and Lipsey (2001) excluded one review from 
each pair  that had more  than 25% overlap  in primary research  addressed while 
making judgment calls when the list of included studies was unavailable. Barrick 
et al. (2001) conducted two separate analyses, one with the set of meta-analyses 
that had no  overlap  in the studies integrated and  one  with the full set  of  meta-
analyses, including those with substantial overlap in the studies they include. In 
a combination of  both  approaches,  Sipe  and  Curlette (1997) considered  meta- 
analyses as unique if they had no overlap or fewer than 3 studies in common. The 
meta-analysis with the larger number of studies was included, and if both were not 
more than 10 studies apart, the more recently published one was included. In addi-
tion, analyses were conducted for the complete set and for the set that they consid-
ered to be unique.
Technology Integration Meta-Analyses
As noted previously, numerous meta-analyses addressing technology integra-
tion and its impact on students’ performance have been published since Clark’s
(1983) initial proclamation on the effects of media. Schacter and Fagnano (1999)
conducted a qualitative review of meta-analyses, and Hattie (2009) conducted a
comprehensive synthesis of meta-analyses in the field of education, but no second-
order meta-analysis has been reported or published targeting the specific area of
computer technology and learning.
In examining the entire collection of published meta-analyses, it becomes clear 
that each focuses on a specific question addressing particular issues and aspects of 
technology integration.  For example, Bangert-Drowns (1993) studied  the influ-
ence of word processors on student achievement at various grade levels (reported 
mean ES = 0.27), whereas P. A. Cohen and Dacanay (1992) focused on the impact 
of computer-based instruction (CBI) on students’ achievement at the postsecond-
ary levels (reported mean ES = 0.41). Christmann and Badgett (2000) investigated 
the impact of computer-assisted instruction (CAI) on high school students' achievement (reported mean ES = 0.13), and Bayraktar (2000) focused on the impact of CAI on K–12 students' achievement in science (reported mean ES = 0.27). The meta-analysis conducted by Timmerman and Kruepke (2006) addressed CAI and its influence on students' achievement at the college level (reported mean
ES = 0.24). Although the effect sizes vary in magnitude, and although there is some 
redundancy in the issues addressed by the different meta-analyses and some over-
lap in the empirical research included in them, the existence of this corpus allows 
us an opportunity to derive an estimate of the overall impact of technology integra-
tion as it has developed and been studied in technology-rich versus technology-
impoverished educational environments.
By applying the procedures and standards of systematic reviews to the synthe-
sis of meta-analyses in the field, this study is intended to capture the essence of 
what the existing body of literature says about the impact of computer technology 
use on students’ learning performance and inform researchers, practitioners, and 
policymakers about the state of the field. In addition, this approach may prove to 
be extremely helpful in certain situations when reliable answers to global  ques-
tions are  required  within  limited  time  frames  and  with  limited  resources.  The 
approach may be considered a brief review (Abrami  et  al.,  2010)  that  offers  a 
comprehensive understanding of the empirical research up to a point in time while 
utilizing relatively fewer resources than an extensive brand-new meta-analysis.
Method
The general systematic approach used in conducting a regular meta-analysis (e.g.,
Lipsey & Wilson, 2001) was followed in this second-order meta-analysis with some
modifications to meet its objectives as presented in the following section.
Inclusion and Exclusion Criteria
Similar to all forms of systematic reviews, a set of inclusion or exclusion criteria
was specified to help (a) set the scope of the review and determine the population to
which generalizations will be possible, (b) design and implement the most adequate
search strategy, and (c) minimize bias in the review process for inclusion of meta-
analyses. For this second-order meta-analysis, a meta-analysis was included if it:
•  addressed the impact of any form of computer technology as a supplement for in-class instruction as compared to traditional, nontechnology instruction in regular classrooms within formal educational settings (this criterion excluded distance education and fully online learning comparative studies, previously reviewed by Bernard et al., 2004; U.S. Department of Education, 2009; and others);
•  focused on students' achievement or performance as an outcome measure;
•  reported an average effect size; and
•  was published during or after 1985 and was publicly available.
If any of the above-mentioned criteria were not met, the study was disqualified and 
the reason for exclusion noted.
The year 1985 was chosen as a cutoff because it was around that year that
computer technologies became widely accessible to a large percentage of schools 
and other  educational  institutions (Alessi & Trollip, 2000). Moreover, by 1985 
meta-analysis had been established as an acceptable form of quantitative synthesis 
with clearly specified and systematic procedures. As Lipsey and Wilson (2001) note, by the early 1980s there was a substantial corpus of books and articles addressing meta-analytic procedures by
prominent researchers  in  the  field,  such  as  Glass,  McGaw,  and  Smith  (1981); 
Hunter,  Schmidt,  and  Jackson  (1982);  Light  and  Pillemer  (1984);  Rosenthal 
(1984); and Hedges and Olkin (1985).
Developing and Implementing Search Strategies
To capture the most comprehensive and relevant set of meta-analyses, a search
strategy was designed with the help of an information retrieval specialist. To avoid
publication bias and the file drawer effect, the search targeted published and
unpublished literature. The following retrieval tools were used:
1.  Electronic  searches  using  major  databases,  including  ERIC,  PsycINFO, 
Education  Index,  PubMed  (Medline),  AACE  Digital  Library,  British 
Education Index, Australian Education Index,  ProQuest Dissertations and 
Theses  Full-text,  EdITLib,  Education  Abstracts,  and  EBSCO Academic 
Search Premier
2.  Web searches using the Google and Google Scholar search engines
3.  Manual  searches  of  major  journals,  including  Review of Educational
Research
4.  Reference lists of prominent articles and major literature reviews
5.  The Centre for the Study of Learning and Performance’s in-house eLEARN-
ing  database,  compiled  under  contract  from  the  Canadian  Council  on 
Learning
The search strategy included the term meta-analysis and its synonyms, such as 
quantitative review and systematic review. In addition, an array of search terms 
relating to computer technology use in educational contexts was used. They varied 
according  to  the  specific  descriptors  within  different  databases  but  generally 
included terms such as computer-based instruction, computer-assisted instruction, 
computer-based teaching,  electronic mail, information  communication  technol-
ogy, technology uses  in education, electronic learning,  hybrid courses, blended 
learning, teleconferencing, Web-based  instruction,  technology  integration,  and 
integrated learning systems.
Searches were updated at the end of 2008, and results were compiled into a com-
mon bibliography. The end of 2008 is an appropriate place to draw a line for meta-analyses of technology versus no-technology contrasts, as more and more research now addresses the comparative effectiveness of different technologies.
Reviewing and Selecting Meta-Analyses
The  searches  resulted  in  the  location  of  429  document  abstracts  that  were 
reviewed independently by two researchers. From the identified set of documents, 
158 were retrieved for full-text review. The interrater agreement for this step was 
85.5% (Cohen’s κ = .71).
To establish coding reliability for full-text review, two researchers reviewed 15 
documents independently, resulting in an interrater agreement of 93.3% (Cohen’s 
κ = .87). The primary investigator reviewed the rest of the retrieved full-text docu-
ments, and in cases where the decision was not straightforward, a second reviewer 
was consulted. From the 158 selected documents, 12 were not available, leaving 
146 for full-text review. From these, 37 met all of the inclusion criteria.
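For readers who want the mechanics behind the agreement statistics reported above, here is a minimal sketch, with hypothetical ratings rather than the authors' actual data, of how percentage agreement and Cohen's κ are computed for two raters making include/exclude decisions:

```python
from collections import Counter

def agreement_and_kappa(rater_a, rater_b):
    """Percentage agreement and Cohen's kappa for two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return observed, (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions for 15 full-text documents.
a = ["inc", "inc", "exc", "exc", "inc", "exc", "inc", "inc",
     "exc", "exc", "inc", "exc", "exc", "inc", "exc"]
b = ["inc", "inc", "exc", "exc", "inc", "exc", "inc", "exc",
     "exc", "exc", "inc", "exc", "exc", "inc", "exc"]
obs, kappa = agreement_and_kappa(a, b)
print(f"agreement = {obs:.1%}, kappa = {kappa:.2f}")  # agreement = 93.3%, kappa = 0.86
```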
An important issue in meta-analysis is that of independence of samples. It is not 
uncommon, for instance, to find a single control condition compared to multiple 
treatments. When the same sample is used repeatedly, the chance of making a Type 
I error increases. An analogous problem exists in a second-order meta-analysis, 
when  the  same  studies  are  included  in  more  than  one  meta-analysis  (Sipe  & 
Curlette, 1997). To minimize this problem, the first step taken in this second-order 
meta-analysis was to compile the overall set of primary studies included in the 37 
different meta-analyses and to  specify  the  single  or  multiple meta-analyses  in 
which each study appeared. The overall number of different primary studies that 
appeared in  one or more meta-analyses  was 1,253. For each  meta-analysis, the 
number and frequency of studies that were included in another meta-analysis were 
calculated.
We identified the set of meta-analyses that contained the largest number of primary studies with the least overlap among them. Because a primary study could appear in more than two meta-analyses, removing one meta-analysis from the overall set changed the frequencies of overlap in the other meta-analyses. Therefore, the most heavily overlapping meta-analyses were removed one at a time until overlap of 25% or less (Wilson & Lipsey, 2001) in included primary studies was attained for each of the remaining meta-analyses. After each exclusion, the percentage of overlap for the remaining set of meta-analyses was recalculated, and based on the new frequencies the next most heavily overlapping meta-analysis was excluded. This process was repeated 12 times.
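The pruning loop just described can be summarized in a short sketch; the data structures and function name are ours, for illustration, and this is not the authors' code:

```python
def prune_overlap(meta_studies, threshold=0.25):
    """Greedy pruning sketch: meta_studies maps a meta-analysis id to the set
    of primary-study ids it includes. Repeatedly drop the review whose share
    of studies also found in the remaining reviews is highest, until every
    remaining review overlaps the rest by <= threshold."""
    remaining = dict(meta_studies)
    while len(remaining) > 1:
        overlap = {}
        for m, studies in remaining.items():
            # Studies appearing in any other remaining meta-analysis.
            others = set().union(*(s for k, s in remaining.items() if k != m))
            overlap[m] = len(studies & others) / len(studies)
        worst = max(overlap, key=overlap.get)
        if overlap[worst] <= threshold:
            break  # every remaining review is at or below the threshold
        del remaining[worst]
    return remaining
```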
The process was completed by the principal investigator, with spot checks con-
ducted to ensure that no mistakes were made. The final number of meta-analyses 
that were considered unique or to have acceptable levels of overlap was 25, with none overlapping beyond 25%. The overall number of primary studies included in this set was 1,055, representing 84.2%
of the total number of primary studies included in the overall set of meta-analyses.
The final set of 25 included meta-analyses with a list of technologies, subject 
matter, and grade level addressed is presented in Table 1. The list conveys the variety and richness of the topics addressed in the various meta-analyses and thus why no single representative meta-analysis could answer the overarching question. The list of included meta-analyses along with the number of
TABLE 1
List of technologies, grade levels, and subject matter in each meta-analysis

Meta-analysis | Technology | Grade level | Subject matter
Bangert-Drowns (1993) | Word processor | All | Combination
Bayraktar (2000) | CAI | S and P | Science and health
Blok, Oostdam, Otter, and Overmaat (2002) | CAI | E | Language
Christmann and Badgett (2000) | CAI | S | Combination
Fletcher-Flinn and Gravatt (1995) | CAI | P | Combination
Goldberg, Russell, and Cook (2003) | Word processor | E and S | Language
Hsu (2003) | CAI | P | Mathematics
Koufogiannakis and Wiebe (2006) | CAI | P | Information literacy
Kuchler (1998) | CAI | S | Mathematics
Kulik and Kulik (1991) | CBI | All | Combination
Y. C. Liao (1998) | Hypermedia | All | Combination
Y.-I. Liao and Chen (2005) | CSI | E and S | Combination
Y. K. C. Liao (2007) | CAI | All | Combination
Michko (2007) | Technology | P | Engineering
Onuoha (2007) | Simulations | S and P | Science and health
Pearson, Ferdig, Blomeyer, and Moran (2005) | Digital media | S | Language
Roblyer, Castine, and King (1988) | CBI | All | Combination
Rosen and Salomon (2007) | Technology | E and S | Mathematics
Schenker (2007) | CAI | P | Mathematics
Soe, Koki, and Chang (2000) | CAI | E and S | Language
Timmerman and Kruepke (2006) | CAI | P | Combination
Torgerson and Elbourne (2002) | ICT | E | Language
Waxman, Lin, and Michko (2003) | Technology | E and S | Combination
Yaakub (1998) | CAI | S and P | Combination
Zhao (2003) | ICT | P | Language

Note. E = elementary; S = secondary; P = postsecondary; CAI = computer-assisted instruction; CBI = computer-based instruction; ICT = information and communication technology.
studies, the percentage of overlap, and the list of excluded meta-analyses is available on request.
Extracting Effect Sizes and Standard Errors
Effect sizes. An effect size is the metric introduced by Glass (1977) representing 
the difference between the means of  an experimental group and a control group 
expressed in standardized  units  (i.e.,  divided by a standard deviation). As such, 
an effect  size is easily interpretable  and  can be converted to a  percentile  differ-
ence between the treatment and control groups. Another benefit is the fact that an 
effect size is not greatly affected by sample size, as are test statistics, thus reduc-
ing problems of power associated with large and small samples. Furthermore, an 
aspect that  is significant for  meta-analysis is that effect sizes can be aggregated 
and then subjected to further statistical analyses (Lipsey & Wilson, 2001).  The 
three most common methods for calculating effect sizes are (a) Glass’s Δ, which 
uses  the  standard  deviation  of  the  control  group,  (b)  Cohen’s  d,  which  makes 
use of the pooled standard deviation of the control and experimental groups, and 
(c) Hedges’s g, which applies a correction to overcome the problem of the over-
estimation of the effect size based on small samples.
For the purpose of this second-order meta-analysis, the effect sizes from differ-
ent meta-analyses were extracted while noting the type of metric used. In a perfect 
situation, where authors  provide  adequate  information,  it would be possible to 
transform all of the three types of group comparison effect sizes to one type, pref-
erably Hedges’s g. However, because of reporting limitations, this was not possi-
ble, particularly when Glass’s Δ was used in a given meta-analysis. Knowing that 
all three (Δ, d, g) are variations for calculating the standardized mean difference 
between two groups,  and  assuming  that the sample sizes were large  enough  to 
consider the differences between the three forms to be minimal, we decided to use 
the effect sizes in the forms in which they were reported.
In cases where the included meta-analysis expressed the effect size as a standard correlation coefficient, the reported effect size was converted to Cohen's d by applying Equation 1 (Borenstein, Hedges, Higgins, & Rothstein, 2009):

d = \frac{2r_{XY}}{\sqrt{1 - r_{XY}^{2}}}  (1)

Standard error. Standard error is a common metric used to estimate variability in the sampling distribution and thus in the population. Effect sizes calculated from larger samples are better population estimates than those calculated from studies with smaller samples. Thus, larger samples have smaller standard errors and smaller studies have larger standard errors. The standard error of g is calculated by applying Equation 2. Notice that the standard error is largely based on sample size, with the exception of the inclusion of g² under the radical. As a result, two samples of equal size but with different effect sizes will have slightly different standard errors. Standard error squared is the variance of the sampling distribution, and the inverse of the variance is used to weight studies under the fixed effects model:

s_{g} = \left(1 - \frac{3}{4(n_{e} + n_{c}) - 9}\right)\sqrt{\frac{1}{n_{e}} + \frac{1}{n_{c}} + \frac{g^{2}}{2(n_{e} + n_{c})}}  (2)
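As a check on the algebra, Equations 1 and 2 as reconstructed above can be written directly in code. This is a minimal sketch; the placement of the small-sample correction factor in Equation 2 reflects our reading of the garbled typesetting:

```python
import math

def d_from_r(r):
    """Equation 1: convert a correlation effect size r to Cohen's d."""
    return 2 * r / math.sqrt(1 - r**2)

def se_g(g, n_e, n_c):
    """Equation 2: standard error of Hedges's g for group sizes n_e and n_c."""
    correction = 1 - 3 / (4 * (n_e + n_c) - 9)  # small-sample correction factor
    return correction * math.sqrt(1 / n_e + 1 / n_c + g**2 / (2 * (n_e + n_c)))

print(round(d_from_r(0.24), 2))      # a correlation of .24 corresponds to d = 0.49
print(round(se_g(0.35, 30, 30), 3))  # SE shrinks as group sizes grow
```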
In this second-order meta-analysis four different procedures for extracting standard errors from the included meta-analyses were used, depending on the availability of information in a given meta-analysis:

•  extraction of the standard error as reported by the author;
•  calculation of the standard error from individual effect sizes and corresponding sample sizes for the included primary studies;
•  calculation of the standard error from a reported confidence interval (illustrated below); and
•  imputation of the standard error from the calculated weighted average standard error of the included meta-analyses.
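For the third procedure, the standard error can be recovered from the width of a reported 95% confidence interval; a one-line illustration (the interval shown is hypothetical):

```python
def se_from_ci(lower, upper, z=1.96):
    """Standard error recovered from a reported 95% confidence interval."""
    return (upper - lower) / (2 * z)

print(round(se_from_ci(0.21, 0.45), 3))  # hypothetical CI around a mean ES -> SE = 0.061
```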
Coding Study Features
In a regular meta-analysis, study features are typically extracted from primary
studies as a means of describing the studies and performing moderator analysis. A
similar approach was followed in this second-order meta-analysis targeting com-
mon qualities available in the included meta-analyses. A variety of resources was
reviewed for possible assistance in the design of the codebook for the current proj-
ect, including (a) literature pertaining to meta-analytic procedural aspects (e.g.,
Bernard & Naidu, 1990; Lipsey & Wilson, 2001; Rosenthal, 1984), (b) published
second-order meta-analyses and reviews of meta-analyses (e.g., Møller & Jennions,
2002; Sipe & Curlette, 1997; Steiner, Lane, Dobbins, Schnur, & McConnell,
1991), and (c) available standards and tools for assessing the methodological qual-
ity of meta-analyses such as Quality of Reporting of Meta-Analyses (Moher et al.,
2000) and the Quality Assessment Tool (Health-Evidence, n.d.).
The overall structure of the codebook was influenced by the synthesis of meta-
analyses conducted  by Sipe and  Curlette (1997). The  four main sections of  the 
codebook are (a) study identification (e.g., author, title, and year of publication), 
(b) contextual features (e.g., research question, technology addressed, subject mat-
ter, and grade level), (c) methodological features (e.g., search phase, review phase, 
and study feature extraction), and (d) analysis procedures and effect size informa-
tion (e.g., type of effect size, independence of data, and effect size synthesis pro-
cedures). The full codebook is presented in Appendix A.
The process of study feature coding was conducted by two researchers working 
independently. Interrater agreement was 98.7% (Cohen’s κ = .97). After completing 
the coding independently, the two researchers met to resolve any discrepancies.
Methodological Quality Index
Unlike the report of a single primary study, a meta-analysis carries the weight
of a whole literature of primary studies. Done well, its positive contribution to the
growth of a field can be substantial; done poorly and inexpertly, it can actually do
damage to the course of research. Consequently, while developing the codebook,
specific study features were designed to help in assessing the methodological qual-
ity of the included meta-analyses. The study features addressed aspects pertaining
to conceptual clarity (two items), comprehensiveness (seven items), and rigor of a
meta-analysis (seven items). Items addressing conceptual clarity targeted (a) clar-
ity of the experimental group definition and (b) clarity of the control group defini-
tion. Items that addressed comprehensiveness targeted the (a) literature covered,
(b) search strategy, (c) resources used, (d) number of databases searched, (e) inclu-
sion or exclusion criteria, (f) representativeness of included research, and (g) time
between the last included study and the publication date. Finally, items that
addressed aspects of rigor targeted the thoroughness, accuracy, or availability of
the (a) article review, (b) effect size extraction, (c) codebook description or over-
view, (d) study feature extraction, (e) independence of data, (f) standard error cal-
culation, and (g) weighting procedure. For each of the included items, a meta-analysis could have received a score of either 0 (low quality) or 1 (high quality). The total score out of 16 indicated its methodological quality; the higher the score, the better the methodological quality.
The methodological quality scores for the 25 meta-analyses ranged between 5 
and 13. The studies were grouped into weak, moderate, and strong meta-analyses. 
Meta-analyses scoring 10 or more were considered strong (k = 10), meta-analyses scoring 8 or 9 were considered moderate (k = 7), and meta-analyses scoring 7 or less were considered weak (k = 8).
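As a small illustration of the banding rule just described (the item scores shown are hypothetical):

```python
def quality_band(item_scores):
    """Bin a meta-analysis's total methodological-quality score into the
    three bands used here: >= 10 strong, 8-9 moderate, <= 7 weak."""
    total = sum(item_scores)
    if total >= 10:
        return "strong"
    return "moderate" if total >= 8 else "weak"

# Sixteen hypothetical 0/1 item scores summing to 11.
print(quality_band([1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]))  # -> strong
```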
Data for Validation Process
To allow for the validation of the findings of the second-order meta-analysis,
individual study-level effect sizes and sample sizes from the primary studies
included in the various meta-analyses were extracted. In the cases where the over-
all sample size was provided, it was assumed that the experimental and control
groups were equal in size, and in the case of an odd overall number of participants,
the sample size was reduced by one. However, because these data were to be used
for validation purposes, if sample sizes were not given by the authors for the indi-
vidual effect sizes, no imputations were done. From the 25 studies, 13 offered
information allowing for the extraction of 574 individual effect sizes and their cor-
responding sample sizes, with the overall sample size being 60,853 participants.
The principal investigator conducted the extraction, and random spot checks were
done as a mistake-prevention measure.
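The group-size assumption described above amounts to a two-line rule; a minimal sketch:

```python
def split_total_n(n_total):
    """Assume equal experimental and control groups; when the reported
    total is odd, reduce it by one before splitting (as described above)."""
    n = n_total - (n_total % 2)
    return n // 2, n // 2

print(split_total_n(61))  # -> (30, 30)
```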
Data Analysis
For the purpose of outlier, publication bias, effect size synthesis, and moderator
analyses, the Comprehensive Meta Analysis 2.0 software package (Borenstein,
Hedges, Higgins, & Rothstein, 2005) was used. The effect size, standard error,
methodological quality indexes, and scores for the extracted study features for
each of the 25 different meta-analyses were input into the software. A list contain-
ing information about the included studies, their mean effect size type, effect size
magnitude, standard errors, number of overlapping studies, and percentage overlap
in primary literature with other meta-analyses for each of the included meta-anal-
yses is presented in the table in Appendix B.
Results
In total, 25 effect sizes were extracted from 25 different meta-analyses involving
1,055 primary studies (approximately 109,700 participants). They represented com-
parisons of student achievement between technology-enhanced classrooms and more
traditional types of classrooms without technology. The meta-analyses addressed a
variety of technological approaches that were used in the experimental conditions to
enhance and support face-to-face instruction. The control groups were what many education researchers refer to as "traditional" or "computer-free" settings.
Outlier Analysis and Publication Bias
Outlier analysis through the "one study removed" approach (Borenstein et al., 2009) revealed that all effect sizes fell within the 95% confidence interval of the average effect size, and thus there was no need to exclude any studies. Examination
of the funnel plot revealed an almost symmetrical distribution around the mean
effect size with no need for imputations, indicating the absence of any obvious
publication bias.
Methodological Quality
In approaching the analysis, we wanted to determine if the three levels of meth-
odological quality were different from one another. This step is analogous to a
comparison among different research designs or measures of methodological qual-
ity that is commonly included in a regular meta-analysis. The mixed effects com-
parison of the three levels of methodological quality of the 25 meta-analyses is
shown in Table 2. Although there seems to be a particular tendency for smaller
effect sizes to be associated with higher methodological quality, the results of this
comparison were not significant. On the basis of this analysis, we felt justified in
combining the studies and not differentiating among levels of quality.
Effect Size Synthesis and Validation
The weighted mean effect size of the 25 different effect sizes was significantly
different from zero for both the fixed effects and the random effects models. For
the fixed effects model, the point estimate was 0.32, z(25) = 34.51, p < .01, and was significantly heterogeneous, QT(25) = 142.88, p < .01, I² = 83.20. The relatively high Q value and I² reflect the high variability in effect sizes at the meta-analysis
level. Applying the random effects model, the point estimate was also significant,
0.35, z(25) = 14.03, p < .01. The random effects model was considered most appro-
priate for interpretation because of the wide diversity of technologies, settings,
subject matter, and so on among the meta-analyses (Borenstein et al., 2009).
However, the presence of heterogeneity, detected in the fixed effects model result,
suggested that a probe of moderator variables might reveal additional insight into
the nature of differences among the meta-analyses. For this, we applied the mixed
effects model.
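For readers who want the computations behind these estimates, the following sketch uses the standard inverse-variance formulas (e.g., Borenstein et al., 2009) for the fixed effect mean, Q, I², and a DerSimonian–Laird random effects mean. It is illustrative only; the reported analysis was run in Comprehensive Meta Analysis 2.0:

```python
import math

def synthesize(es, se):
    """Fixed- and random-effects (DerSimonian-Laird) synthesis of effect sizes."""
    w = [1 / s**2 for s in se]                          # fixed-effect inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, es)) / sum(w)
    q = sum(wi * (e - fixed)**2 for wi, e in zip(w, es))
    df = len(es) - 1
    i2 = max(0.0, 100 * (q - df) / q) if q > 0 else 0.0  # heterogeneity (%)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (s**2 + tau2) for s in se]              # random-effects weights
    random_mean = sum(wi * e for wi, e in zip(w_re, es)) / sum(w_re)
    z = random_mean * math.sqrt(sum(w_re))              # z test of the random mean
    return fixed, q, i2, random_mean, z
```

Fed the 25 mean effect sizes and standard errors listed in Appendix B, a routine like this should reproduce point estimates close to the reported 0.32 (fixed) and 0.35 (random).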
For the purpose of validating the findings of the second-order meta-analysis, 
the extracted  raw  data  were  used  in  the calculation  of  the  point  estimate  in  a 
TABLE 2
Mixed effects comparison of levels of methodological quality

Level | k | ES | SE | Q statistic
Low | 8 | 0.42* | 0.07 |
Medium | 7 | 0.35* | 0.04 |
High | 10 | 0.31* | 0.03 |
Total between | | | | 2.50

Note. For the total between-levels Q, p = .29. *p < .05.
process similar to  a  regular  meta-analysis. As  described  earlier, 574 individual 
effect sizes  and their corresponding  sample sizes were extracted,  with the total 
number of participants being 60,853. The weighted mean effect size for the 574 
individual effect sizes was significantly different from  zero with both  the fixed 
effects model and the random effects models. From the fixed effects model, the 
point estimate was 0.30, z(574) = 37.13, p < .01, and heterogeneous, QT(574) = 2,927.87, p < .01, I² = 80.43. With the random effects model, the point estimate
was 0.33.
In comparing the second-order analysis with the validation sample, it is clear 
that the average effect sizes are similar for both the fixed effects and the random 
effects models. The I² for the second-order meta-analysis and the validation sam-
ple indicates similar variability, although the Q totals are very different (i.e., the Q
total tends to increase as the sample size increases).
Moderator Variable Analysis
To explore variability, a mixed effects model was used in moderator variable
analysis with the coded study features. A mixed effects model summarizes effect
sizes within subgroups using a random model but calculates the between-group Q
value using a fixed model (Borenstein et al., 2009). Moderator analyses for subject
matter, type of publication, and type of research designs included did not reveal
any significant findings. For two substantive moderator variables (i.e., subject mat-
ter and type of technology) the number of levels (five and eight, respectively)
mitigated against finding differences. However, the analysis revealed that “primary
purpose of instruction” (i.e., “direct instruction” vs. “support for instruction”) was
significant in favor of the “support instruction” condition (see Table 3). These two
levels of purpose of instruction were formed by considering technology use in the
bulk of studies contained in each meta-analysis, as to whether they involved direct
instruction (e.g., CAI and CBI) or provided support for instruction (e.g., the use of
word processors and simulations).
Likewise, when studies involving K–12 applications of technology were com-
pared to postsecondary applications, a significant difference was found. This result 
favored K–12 applications  (Table 3).  This comparison involved a  subset  of  20 
studies out of the total of 25. The other 5 studies were mixtures of studies involving 
K–12 and postsecondary.
TABLE 3
Results of the analysis of two moderator variables

Level | k | ES | SE | Q statistic
Primary purpose of technology use
  Direct instruction | 15 | 0.31* | 0.01 |
  Support instruction | 10 | 0.42* | 0.02 |
  Total between | | | | 3.86*
Grade level of student
  K–12 | 9 | 0.40* | 0.04 |
  Postsecondary | 11 | 0.29* | 0.03 |
  Total between | | | | 4.83*

*p < .05.
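A minimal sketch of the between-group part of this mixed effects model: subgroup means are first summarized under a random model, as in the synthesis sketch above, and the between-group Q is then computed under a fixed model. The numbers below are hypothetical, not those of Table 3:

```python
def between_group_q(group_means, group_ses):
    """Fixed-model Q across subgroup mean effects (df = number of groups - 1)."""
    w = [1 / s**2 for s in group_ses]
    grand = sum(wi * m for wi, m in zip(w, group_means)) / sum(w)
    return sum(wi * (m - grand)**2 for wi, m in zip(w, group_means))

# Two hypothetical subgroup summaries (mean ES and its standard error).
print(round(between_group_q([0.31, 0.42], [0.04, 0.05]), 2))  # Q with df = 1
```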
Summary of the Findings
The current second-order meta-analysis summarized evidence regarding the
impact of technology on student achievement in formal academic contexts based
on an extensive body of literature. The synthesis of the extracted effect sizes, with
the support of the validation process, revealed a significant positive small to mod-
erate effect size favoring the utilization of technology in the experimental condi-
tion over more traditional instruction (i.e., technology free) in the control group.
The analysis of two substantive moderator variables revealed that computer tech-
nology that supports instruction has a marginally but significantly higher average
effect size compared to technology applications that provide direct instruction.
Also, it was found that the average effect size for K–12 applications of computer
technology was higher than computer applications introduced in postsecondary
classrooms.
Discussion
The main purpose of this second-order meta-analysis is to bring together more
than 40 years of investigations, beginning with Schurdak (1967), that have asked
the general question, “What is the effect of using computer technology in class-
rooms, as compared to no technology, to support teaching and learning?” It is a
relevant question as we enter an age of practice and research in which nearly every
classroom has some form of computer support. Although research studies compar-
ing various forms of technology use in both control and treatment groups are
becoming popular, it does not seem that technology versus no technology com-
parisons will become obsolete. Studies of this sort may still be useful for answer-
ing specific targeted questions, as in the case of software development (e.g.,
software that replaces or enhances traditional teaching methods). A case in point
is a recent meta-analysis (Sosa, Berger, Saw, & Mary, 2010) of statistics instruc-
tion in which CAI classroom conditions were compared to standard lecture-based
instruction conditions. The overall results favored CAI (d = 0.33, p = .00, k = 45),
similar to the findings from the current study. The results of this very targeted
meta-analysis provide evidence of the important contributions that CBI can pro-
vide to teachers of statistics. Relating these findings to the current work, we aver-
aged 4 meta-analyses of mathematics instruction from the 25 previously described
(two were referenced by Sosa et al., 2010) and found that the average effect size
was 0.32 under the fixed effects model (0.39 under the random effects model).
However, and similar to classroom comparative studies of distance education and 
online learning (e.g., Bernard et al., 2004, 2009), we feel that we are at a place where 
a shift from technology versus no technology studies to more nuanced studies com-
paring different conditions, both involving CBI  treatments, would  help the field 
progress. And because such a rich corpus of meta-analyses exists, spanning virtually 
the entire history  of  technology integration in  education, we feel that it may  be 
unnecessary to mount yet another massive systematic review, limited to technology 
versus no-technology studies. Moreover, the second-order meta-analysis approach appears to be an economical means of answering big questions. The validation study, although not a true systematic review, supported the accuracy of the effect size synthesis and indicated that the results of the second-order meta-analysis were not anomalous: we found approximately the same average effect using both approaches.
The average effect size in both the second-order meta-analysis and the valida-
tion study ranged between 0.30 and 0.35 for both the fixed effects and the random 
effects models, which is low to moderate in magnitude according to the qualitative 
standards suggested by J. Cohen (1988). Such an effect size magnitude indicates 
that the mean in the experimental condition will be at the 62nd percentile relative 
to the  control  group. In other words,  the average student in  a  classroom where 
technology is used will perform 12 percentile points higher than the average stu-
dent in the traditional setting that does not use technology to enhance the learning 
process. It is important to note that these average effects must be interpreted cau-
tiously because of the wide variability that surrounds them. We interpret this to 
mean that other factors, not identified in previous meta-analyses or in this sum-
mary, may account for this variability. We support Clark’s (1983, 1994) view that 
technology serves at the pleasure of instructional design, pedagogical approaches, 
and teacher practices and generally agree with the view of Ross, Morrison,  and 
Lowther (2010) that “educational technology is not a homogeneous ‘intervention’ 
but a broad variety of modalities, tools, and strategies for learning. Its effective-
ness, therefore, depends on how well it  helps teachers and students achieve the 
desired instructional goals” (p. 19). Thus, it is arguable that it is aspects of the goals 
of instruction, pedagogy, teacher effectiveness, subject matter, age level, fidelity 
of technology implementation, and possibly other factors that may represent more 
powerful influences on effect sizes than the nature of the technology intervention. 
It is incumbent on future researchers and meta-analysts to help sort out
these nuances, so that computers will be used as effectively as possible to support 
the aims of instruction.
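The percentile translation used above follows directly from the normal model (Cohen's U3); a minimal check:

```python
from statistics import NormalDist

def percentile_of_treated_mean(d):
    """Percentile of the average treated student within the control
    distribution (Cohen's U3), assuming normal scores and equal variances."""
    return 100 * NormalDist().cdf(d)

print(round(percentile_of_treated_mean(0.30)))  # ~62nd percentile
print(round(percentile_of_treated_mean(0.35)))  # ~64th percentile
```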
Results from the moderator analyses indicated that computer technology sup-
porting instruction has a slightly but significantly higher average effect size than 
technology applications used for direct instruction. The average effect size associ-
ated with  direct instruction utilization  of technology (0.31)  is highly consistent 
with the average effect size reported by Hattie (2009) for CAI in his synthesis of 
meta-analyses, which was also 0.31. Moreover, the overall current findings are in 
agreement with the results provided by a Stage 1 meta-analysis of technology 
integration recently published by Schmid et al. (2009), where effect sizes pertain-
ing to computer  technology  used  as  “support  for cognition” were significantly 
greater than  those related to computer  use for “presentation of  content.” Taken 
together with the current  study, there is the suggestion that one of  technology’s 
main strengths may lie in supporting students’ efforts to achieve rather than acting 
as a tool for  delivering  content.  Low  power  prevented  us  from examining this 
comparison between purposes,  split  by other instructional variables  and  demo-
graphic characteristics.
Second-Order Meta-Analysis and Future Perspectives
With the increasing number of published meta-analyses in a variety of areas,
there is a growing need for a systematic and reliable methodology for synthesizing
related findings. We have noted in this review a degree of fragmentation in the
coverage of the literature, with the next meta-analysis overlapping the previous one
but not including much of the earlier literature. We suspect that this is not a unique
case. At some point in the development of a field there comes the need to sum-
marize the literature over the entire history of the issue in question. We see only
two choices: (a) conduct a truly comprehensive meta-analysis of the entire litera-
ture or (b) conduct a second-order meta-analysis that synthesizes the findings and
judges the general trends that can be derived from the entire collection. A large
review can be time-consuming and expensive, but it has a better chance of identify-
ing underlying patterns of variability that may be of use to the field. A second-
order meta-analysis is less costly and less time-consuming while providing
sufficient power with regard to the findings. In the case of technology integration,
we see the second-order approach to be a viable option. First, time is of the essence
in a rapidly expanding and changing field such as this. Second, since we are
unlikely to see the particular technologies summarized here reappearing in the
future, it is probably enough to know that in their time and in their place these
technologies produced some measure of success in achieving the goals they were
designed to enable.
The greatest strength of a second-order meta-analysis is its ability to provide
evidence to answer a general question by taking a substantive body of research into 
consideration. The current synthesis with the validation process indicated that the 
approach is an adequate technique for synthesizing effect sizes and estimating the 
average effect size in relation to a specific phenomenon. Future advancement in 
the reporting of meta-analyses may allow for using moderator analysis in second-
order meta-analysis to answer more specific questions pertaining to various study 
features of interest.
APPENDIX A
Codebook of variables in the second-order meta-analysis
Study identification
  Identification number
  Author
  Title
  Year of publication
  Type of publication
1.  Journal
2.  Dissertation
3.  Conference proceedings
4.  Report or gray literature
Contextual features
  Research question
  Technology addressed
  Control group definition or description
  Clarity of the control group definition
1.  Control group not defined, no reference to specifics of the treatment
2.  Providing general name for  control group with brief description  of the 
treatment condition
3.  Control group defined specifically with some missing aspects from the full definition as described in level 4
4.  Clearly defined  intervention,  with  a  working  or  operational definition 
linked to conceptual or theoretical model with examples
  Experimental group treatment definition or description
  Clarity of the experimental group definition
1.  Experimental group not defined, no reference to specifics of the treatment
2.  Providing general name for experimental group with brief description of 
the treatment condition
3.  Experimental group defined specifically with some missing aspects from the full definition as described in number 4
4.  Clearly defined  intervention,  with  a  working  or  operational definition 
linked to conceptual or theoretical model with examples
  Grade level
  Subject matter
1.  Science or health
2.  Language
3.  Math
4.  Technology
5.  Social Science
6.  Combination
7.  Information literacy
8.  Engineering
9.  Not specified
Methodological features
  Search phase
  Search time frame
  Justification for search time frame
1.  No
2.  Yes
  Literature covered
1.  Published studies only
2.  Published and unpublished studies
  Search strategy
1.  Search strategy not disclosed, no reference to search strategy offered
2.  Minimal description of search strategy with brief reference to resources 
searched
3.  Listing of resources and databases searched
4.  Listing of resources and databases searched with sample search terms
  Resources used
1.  Database searches
2.  Computerized search of Web resources
3.  Hand search of specific journals
4.  Branching
  Databases searched
  Number of databases searched
Review phase
  Inclusion or exclusion criteria
1.  Criteria not disclosed with no description offered
2.  Overview of criteria presented briefly
3.  Criteria specified with enough detail to allow for easy replication
  Included research type
1.  Randomized controlled trial (RCT) only
2.  RCT, quasi
3.  RCT, quasi, pre
4.  Not specified
  Article review
1.  Review process not disclosed
2.  Review process by one researcher
3.  Rating by more than one researcher
4.  Rating by more than one researcher with interrater agreement reported
Effect size (ES) and study feature extraction phase
ES extraction
1.  Extraction process not disclosed, no reference to how it was conducted
2.  Extraction process by one researcher
3.  Extraction process by more than one researcher
4.  Extraction process by more than one researcher with interrater agreement 
reported
  Code book
1.  Code book not described, no reference to features extracted from primary 
literature
2.  Brief description of main categories in code book
3.  Listing of specific categories addressed in code book
4.  Elaborate description of code book allowing for easy replication
  Study feature extraction
1.  Extraction process not disclosed, no reference to how it was conducted
2.  Extraction process by one researcher
3.  Extraction process by more than one researcher
4.  Extraction process by more than one researcher with interrater agreement 
reported
Analysis
  Independence of data
1.  No
2.  Yes
  Weighting by number of comparisons
1.  Yes
2.  No
ES weighted by sample size
1.  No
2.  Yes
  Homogeneity analysis
1.  No
2.  Yes
  Moderator analysis
1.  No
2.  Yes
  Metaregression conducted
1.  No
2.  Yes
Further reporting aspects
  Inclusion of list of studies
1.  No
2.  Yes
  Inclusion of ES table
1.  No
2.  Yes
  Time between last study and publication date
ES information
  ES type
1.  Glass
2.  Cohen
3.  Hedges
4.  Others: specify
  Total ES
  Mean ES
SE
SE extraction
1.  Reported
2.  Calculated from ES and sample size
3.  Calculated from confidence interval
4.  Replaced with weighted average SE
  Time frame included
  Number of studies included
  Number of ES included
  Number of participants
  Number of participants extraction
1.  Calculated
2.  Given
Specific ES
  Specific variable
  Mean ES
SE
SE extraction
1.  Reported
2.  Calculated from ES and sample size
3.  Calculated from confidence interval
4.  Replaced with weighted average SE
  Time frame included
  Number of studies included
  Number of ES included
  Number of participants
  Number of participants extraction
1.  Calculated
2.  Given
APPENDIX B

Included meta-analyses with number of studies, effect size (ES) types, mean ESs, standard errors, number of overlapping studies, and percentage of overlap

Meta-analysis | Number of studies | ES type | Mean ES | SE | Number of overlapping studies | Percentage of overlap
Bangert-Drowns (1993) | 19 | Missing | 0.27 | 0.11 | 1 | 5.3
Bayraktar (2000) | 42 | Cohen's d | 0.27 | 0.05 | 7 | 16.7
Blok, Oostdam, Otter, and Overmaat (2002) | 25 | Hedges's g | 0.25 | 0.06 | 2 | 8.0
Christmann and Badgett (2000) | 16 | Missing | 0.13 | 0.05 | 4 | 25.0
Fletcher-Flinn and Gravatt (1995) | 120 | Glass's Δ | 0.24 | 0.05 | 26 | 21.7
Goldberg, Russell, and Cook (2003) | 15 | Hedges's g | 0.41 | 0.07 | 1 | 6.7
Hsu (2003) | 25 | Hedges's g | 0.43 | 0.03 | 4 | 16.0
Koufogiannakis and Wiebe (2006) | 8 | Hedges's g | −0.09 | 0.19 | 1 | 12.5
Kuchler (1998) | 65 | Hedges's g | 0.44 | 0.05 | 7 | 10.8
Kulik and Kulik (1991) | 239 | Glass's Δ | 0.30 | 0.03 | 8 | 3.3
Y. C. Liao (1998) | 31 | Glass's Δ | 0.48 | 0.05 | 2 | 6.4
Y.-I. Liao and Chen (2005) | 21 | Glass's Δ | 0.52 | 0.05 | 2 | 9.5
Y. K. C. Liao (2007) | 52 | Glass's Δ | 0.55 | 0.05 | 2 | 3.8
Michko (2007) | 45 | Hedges's g | 0.43 | 0.07 | 0 | 0.0
Onuoha (2007) | 35 | Cohen's d | 0.26 | 0.04 | 3 | 8.6
Pearson, Ferdig, Blomeyer, and Moran (2005) | 20 | Hedges's g | 0.49^a | 0.11 | 2 | 10.0
Roblyer, Castine, and King (1988) | 35 | Hedges's g | 0.31 | 0.05 | 4 | 11.4
Rosen and Salomon (2007) | 31 | Hedges's g | 0.46 | 0.05 | 0 | 0.0
Schenker (2007) | 46 | Cohen's d | 0.24 | 0.02 | 9 | 19.6
Soe, Koki, and Chang (2000) | 17 | Hedges's g and Pearson's r^a | 0.26^a | 0.05 | 2 | 11.8
Timmerman and Kruepke (2006) | 114 | Pearson's r^a | 0.24 | 0.03 | 27 | 23.7
Torgerson and Elbourne (2002) | 5 | Cohen's d | 0.37 | 0.16 | 0 | 0.0
Waxman, Lin, and Michko (2003) | 42 | Glass's Δ | 0.45 | 0.14 | 5 | 11.9
Yaakub (1998) | 20 | Glass's Δ and g | 0.35 | 0.05 | 4 | 20.0
Zhao (2003) | 9 | Hedges's g | 1.12 | 0.26 | 1 | 11.1

a. Converted to Cohen's d.

Note

This study was partially supported by grants from the Social Sciences and Humanities Research Council of Canada and the Fonds québécois de la recherche sur la société et la culture to Schmid. Earlier versions of this article were presented at the Eighth Annual Campbell Collaboration Colloquium, Oslo, May 2009, and the World Conference on Educational Multimedia, Hypermedia & Telecommunications, Toronto, June 2010. The authors would like to express appreciation to Ms. C. Anne Wade, whose knowledge and retrieval expertise helped ensure the comprehensiveness of this systematic review. Thanks to Ms. Katherine Hanz and Mr. David Pickup for their help in the information retrieval and data management phases. The authors also thank Ms. Lucie Ranger for her editorial assistance.
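As a supplement to Appendix B, the sketch below shows how entries of this form (a mean ES and its SE per meta-analysis) combine into an overall estimate under generic inverse-variance pooling with a DerSimonian-Laird between-study variance. This is an assumption-laden illustration, not the article's analysis script (the article cites Borenstein et al.'s Comprehensive Meta-Analysis software), and it enters only the first five Appendix B rows, so its output demonstrates the mechanics rather than reproducing the reported random effects mean of 0.35.

    # First five Appendix B rows: mean ES and SE per meta-analysis.
    means = [0.27, 0.27, 0.25, 0.13, 0.24]
    ses   = [0.11, 0.05, 0.06, 0.05, 0.05]

    # Fixed effect model: weight each mean ES by the inverse of its variance.
    w_fixed = [1 / se**2 for se in ses]
    m_fixed = sum(w * m for w, m in zip(w_fixed, means)) / sum(w_fixed)

    # DerSimonian-Laird estimate of the between-study variance tau^2,
    # derived from the homogeneity statistic Q.
    q = sum(w * (m - m_fixed) ** 2 for w, m in zip(w_fixed, means))
    c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - (len(means) - 1)) / c)

    # Random effects model: add tau^2 to each within-study variance.
    w_rand = [1 / (se**2 + tau2) for se in ses]
    m_rand = sum(w * m for w, m in zip(w_rand, means)) / sum(w_rand)

    print(round(m_fixed, 3), round(m_rand, 3))

When tau^2 is near zero, as in this five-row subset, the two models give nearly identical means; with the heterogeneous full distribution described in the article, the random effects weights are more equal across meta-analyses and the two estimates diverge.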
References

*References marked with an asterisk indicate studies included in the second-order meta-analysis.

Abrami, P. C., Borokhovski, E., Bernard, R. M., Wade, C. A., Tamim, R., Persson, T., . . . Surkes, M. A. (2010). Issues in conducting and disseminating brief reviews. Evidence and Policy, 6, 371–389. doi:10.1332/174426410X524866
Alessi, S. M., & Trollip, S. R. (2000). Multimedia for learning: Methods and development (3rd ed.). Boston, MA: Allyn & Bacon.
*Bangert-Drowns, R. L. (1993). The word processor as an instructional tool: A meta-analysis of word processing in writing instruction. Review of Educational Research, 63, 69–93. doi:10.3102/00346543063001069
Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and performance at the beginning of the new millennium: What do we know and where do we go next? International Journal of Selection and Assessment, 9(1/2), 9–30. doi:10.1111/1468-2389.00160
*Bayraktar, S. (2000). A meta-analysis on the effectiveness of computer-assisted instruction in science education (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 9980398)
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, A. C., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., . . . Huang, B. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74, 379–439.
Bernard, R. M., & Naidu, S. (1990). Integrating research into instructional practice: The use and abuse of meta-analysis. Canadian Journal of Educational Communication, 19, 171–198.
*Blok, H., Oostdam, R., Otter, M. E., & Overmaat, M. (2002). Computer-assisted instruction in support of beginning reading instruction: A review. Review of Educational Research, 72, 101–130. doi:10.3102/00346543072001101
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. (2005). Comprehensive Meta-Analysis (Version 2) [Computer software]. Englewood, NJ: Biostat.
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. (2009). Introduction to meta-analysis. Chichester, UK: Wiley.
Butler, A., Chapman, J. E., Forman, E. M., & Beck, A. T. (2006). The empirical status of cognitive-behavioral therapy: A review of meta-analyses. Clinical Psychology Review, 26, 17–31. doi:10.1016/j.cpr.2005.07.003
*Christmann, E. P., & Badgett, J. L. (2000). The comparative effectiveness of CAI on collegiate academic performance. Journal of Computing in Higher Education, 11(2), 91–103. doi:10.1007/BF02940892
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445–459. doi:10.3102/00346543053004445
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29. doi:10.1007/BF02299088
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
Cohen, P. A., & Dacanay, L. S. (1992). Computer-based instruction and health professions education: A meta-analysis of outcomes. Evaluation and the Health Professions, 15, 259–281. doi:10.1177/016327879201500301
Dede, C. (1996). Emerging technologies and distributed learning. American Journal of Distance Education, 10(2), 4–36.
*Fletcher-Flinn, C. M., & Gravatt, B. (1995). The efficacy of computer assisted instruction (CAI): A meta-analysis. Journal of Educational Computing Research, 12, 219–242.
Glass, G. V. (1977). Integrating findings: The meta-analysis of research. Review of Research in Education, 5, 351–379. doi:10.3102/0091732X005001351
Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.
*Goldberg, A., Russell, M., & Cook, A. (2003). The effect of computers on student writing: A meta-analysis of studies from 1992–2002. Journal of Technology, Learning, and Assessment, 2, 3–51.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London, UK: Routledge.
Health-Evidence. (n.d.). Quality assessment tool. Retrieved from http://health-evidence.ca/downloads/QA%20tool_Doc%204.pdf
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
*Hsu, Y. C. (2003). The effectiveness of computer-assisted instruction in statistics education: A meta-analysis (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3089963)
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.
Hunter, J. E., Schmidt, F. L., & Jackson, G. B. (1982). Meta-analysis. Beverly Hills, CA: Sage.
*Koufogiannakis, D., & Wiebe, N. (2006). Effective methods for teaching information literacy skills to undergraduate students: A systematic review and meta-analysis. Evidence Based Library and Information Practice, 1(3), 3–43. Retrieved from http://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/76/153
Kozma, R. (1991). Learning with media. Review of Educational Research, 61, 179–221. doi:10.3102/00346543061002179
Kozma, R. (1994). Will media influence learning: Reframing the debate. Educational Technology Research and Development, 42(2), 7–19. doi:10.1007/BF02299087
*Kuchler, J. M. (1998). The effectiveness of using computers to teach secondary school (grades 6–12) mathematics: A meta-analysis (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 9910293)
*Kulik, C. L. C., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7(1–2), 75–94. doi:10.1016/0747-5632(91)90030-5
*Liao, Y. C. (1998). Effects of hypermedia versus traditional instruction on students' achievement: A meta-analysis. Journal of Research on Computing in Education, 30, 341–360.
*Liao, Y.-I., & Chen, Y.-W. (2005, June). Computer simulation and students' achievement in Taiwan: A meta-analysis. Paper presented at the World Conference on Educational Multimedia, Hypermedia and Telecommunications, Montreal, Canada.
*Liao, Y. K. C. (2007). Effects of computer-assisted instruction on students' achievement in Taiwan: A meta-analysis. Computers and Education, 48, 216–233. doi:10.1016/j.compedu.2004.12.005
Light, R. J., & Pillemer, D. B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
Lipsey, M. W., & Wilson, D. B. (1993). The efficacy of psychological, educational, and behavioral treatment: Confirmation from meta-analysis. American Psychologist, 48, 1181–1209. doi:10.1037/0003-066X.48.12.1181
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis (Vol. 49). London, UK: Sage.
Luborsky, L., Rosenthal, R., Diguer, L., Andrusyna, T. P., Berman, J. S., Levitt, J. T., . . . Krause, D. K. (2002). The dodo bird verdict is alive and well—mostly. Clinical Psychology: Science and Practice, 9, 2–12. doi:10.1093/clipsy.9.1.2
*Michko, G. M. (2007). A meta-analysis of the effects of teaching and learning with technology on student outcomes in undergraduate engineering education (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3089963)
Moher, D., Cook, D. J., Eastwood, S., Olkin, I., Rennie, D., & Stroup, D. F. (2000). Improving the quality of reports of meta-analyses of randomized controlled trials: The QUOROM statement. British Journal of Surgery, 87, 1448–1454. doi:10.1046/j.1365-2168.2000.01610.x
Møller, A. P., & Jennions, M. D. (2002). How much variance can be explained by ecologists and evolutionary biologists? Oecologia, 132, 492–500.
*Onuoha, C. O. (2007). Meta-analysis of the effectiveness of computer-based laboratory versus traditional hands-on laboratory in college and pre-college science instructions (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3251334)
*Pearson, P. D., Ferdig, R. E., Blomeyer, J. R. L., & Moran, J. (2005). The effects of technology on reading performance in the middle-school grades: A meta-analysis with recommendations for policy. Naperville, IL: North Central Regional Educational Laboratory.
Peterson, R. A. (2001). On the use of college students in social science research: Insights from a second-order meta-analysis. Journal of Consumer Research, 28, 450–461.
*Roblyer, M. D., Castine, W. H., & King, F. J. (1988). Assessing the impact of computer-based instruction: A review of recent research. Computers in the Schools, 5(3–4), 41–68. doi:10.1300/J025v05n03_04
*Rosen, Y., & Salomon, G. (2007). The differential learning achievements of constructivist technology-intensive learning environments as compared with traditional ones: A meta-analysis. Journal of Educational Computing Research, 36, 1–14. doi:10.2190/R8M4-7762-282U-554J
Rosenthal, R. (1984). Meta-analytic procedures for social research. Beverly Hills, CA: Sage.
Ross, S. M., Morrison, G. R., & Lowther, D. L. (2010). Educational technology research past and present: Balancing rigor and relevance to impact school learning. Contemporary Educational Technology, 1, 17–35. Retrieved from http://www.cedtech.net/articles/112.pdf
Saettler, P. (1990). The evolution of American educational technology. Greenwich, CT: Information Age Publishing.
Schacter, J., & Fagnano, C. (1999). Does computer technology improve student learning and achievement? How, when, and under what conditions? Journal of Educational Computing Research, 20, 329–343. doi:10.2190/VQ8V-8VYB-RKFB-Y5RU
*Schenker, J. D. (2007). The effectiveness of technology use in statistics instruction in higher education: A meta-analysis using hierarchical linear modeling (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3286857)
Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R., Abrami, P. C., Wade, C. A., . . . Lowerison, G. (2009). Technology's effect on achievement in higher education: A Stage I meta-analysis of classroom applications. Journal of Computing in Higher Education, 21(2), 95–109.
Schurdak, J. (1967). An approach to the use of computers in the instructional process and an evaluation. American Educational Research Journal, 4, 59–73. doi:10.3102/00028312004001059
Sipe, T. A., & Curlette, W. L. (1997). A meta-synthesis of factors related to educational achievement: A methodological approach to summarizing and synthesizing meta-analysis. International Journal of Educational Research, 25, 583–698. doi:10.1016/S0883-0355(96)80001-2
*Soe, K., Koki, S., & Chang, J. M. (2000). Effect of computer-assisted instruction (CAI) on reading achievement: A meta-analysis. Honolulu, HI: Pacific Resources for Education and Learning.
Sosa, G., Berger, D. E., Saw, A. T., & Mary, J. C. (2010). Effectiveness of computer-based instruction in statistics: A meta-analysis. Review of Educational Research. Advance online publication. doi:10.3102/0034654310378174
Steiner, D. D., Lane, I. M., Dobbins, G. H., Schnur, A., & McConnell, S. (1991). A review of meta-analyses in organizational behavior and human resources management: An empirical assessment. Educational and Psychological Measurement, 51, 609–626. doi:10.1177/0013164491513008
*Timmerman, C. E., & Kruepke, K. A. (2006). Computer-assisted instruction, media richness, and college student performance. Communication Education, 55, 73–104. doi:10.1080/03634520500489666
*Torgerson, C. J., & Elbourne, D. (2002). A systematic review and meta-analysis of the effectiveness of information and communication technology (ICT) on the teaching of spelling. Journal of Research in Reading, 25, 129–143. doi:10.1111/1467-9817.00164
U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Retrieved from http://www2.ed.gov/about/offices/list/opepd/ppss/reports.html
Valentine, J. C., & Cooper, H. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13, 130–149. doi:10.1037/1082-989X.13.2.130
*Waxman, H. C., Lin, M.-F., & Michko, G. M. (2003). A meta-analysis of the effectiveness of teaching and learning with technology on student outcomes. Retrieved from http://www.ncrel.org/tech/effects2/waxman.pdf
Wilson, D. B., & Lipsey, M. W. (2001). The role of method in treatment effectiveness research: Evidence from meta-analysis. Psychological Methods, 6, 413–429.
*Yaakub, M. N. (1998). Meta-analysis of the effectiveness of computer-assisted instruction in technical education and training (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3040293)
*Zhao, Y. (2003). Recent developments in technology and language learning: A literature review and meta-analysis. CALICO Journal, 21(1), 7–27. Retrieved from https://www.calico.org/html/article_279.pdf
Authors

RANA M. TAMIM, Ph.D., is an associate professor and graduate program director at the School of e-Education at Hamdan Bin Mohammed e-University, Dubai International Academic City, Block 11, P.O. Box 71400, Dubai, United Arab Emirates; e-mail: r.tamim@hbmeu.ac.ae; rm.tamim@gmail.com. She is a collaborator with the Centre for the Study of Learning and Performance at Concordia University. Her research interests include online and blended learning, learner-centered instructional design, and science education. Her research expertise includes quantitative and qualitative research methods in addition to systematic review and meta-analysis.

ROBERT M. BERNARD, Ph.D., is professor of education and systematic review theme leader for the Centre for the Study of Learning and Performance at Concordia University, LB 583-3, 1455 de Maisonneuve Blvd. W, Montreal, QC H3G 1M8, Canada; e-mail: bernard@education.concordia.ca. His research interests include distance and online learning and instructional technology. His methodological expertise is in the areas of research design and statistics and meta-analysis.

EUGENE BOROKHOVSKI, Ph.D., holds a doctorate in cognitive psychology and is a research assistant professor with the Psychology Department and a systematic reviews manager at the Centre for the Study of Learning and Performance of Concordia University, LB 581, 1455 de Maisonneuve Blvd. W, Montreal, QC H3G 1M8, Canada; e-mail: eborokhovski@education.concordia.ca. His areas of expertise and interest include cognitive and educational psychology, language acquisition, and methodology and practices of systematic review, meta-analysis in particular.

PHILIP C. ABRAMI, Ph.D., is a research chair and the director of the Centre for the Study of Learning and Performance at Concordia University, LB 589-2, 1455 de Maisonneuve Blvd. W, Montreal, QC H3G 1M8, Canada; e-mail: abrami@education.concordia.ca. His current work focuses on research integrations and primary investigations in support of applications of educational technology in distance and higher education, in early literacy, and in the development of higher order thinking skills.

RICHARD F. SCHMID, Ph.D., is professor of education, chair of the Department of Education, and educational technology theme leader for the Centre for the Study of Learning and Performance at Concordia University, LB 545-3, 1455 de Maisonneuve Blvd. W, Montreal, QC H3G 1M8, Canada; e-mail: schmid@education.concordia.ca. His research interests include examining pedagogical strategies supported by technologies and the cognitive and affective factors they influence.