Br J Educ Technol. 2021;52:1935–1964.
Received: 18 December 2020
Accepted: 26 April 2021
DOI: 10.1111/bjet.13116
REVIEW
The effectiveness of technology-supported personalised learning in low- and middle-income countries: A meta-analysis
Louis Major1 | Gill A. Francis2 | Maria Tsapali1
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
© 2021 The Authors. British Journal of Educational Technology published by John Wiley & Sons Ltd on behalf of British Educational Research Association
1Faculty of Education, University of Cambridge, Cambridge, UK
2Department of Education, University of York, York, UK
Correspondence
Louis Major, Faculty of Education, University of Cambridge, 184 Hills Road, Cambridge, CB2 8PQ, UK.
Email: lcm54@cam.ac.uk
Funding information
EdTech Hub
Abstract
Digital technology offers the potential to address educational challenges in resource-poor settings. This meta-analysis examines the impact of students' use of technology that personalises and adapts to learning level in low- and middle-income countries. Following a systematic search for research published between 2007 and 2020, 16 randomised controlled trials were identified in five countries. Studies involved 53,029 learners aged 6–15 years. Coding examined learning domain (mathematics and literacy); personalisation level and delivery; technology use; and intervention duration and intensity. Overall, technology-supported personalised learning was found to have a statistically significant, if moderate, positive effect size of 0.18 on learning (p = 0.001). Meta-regression reveals that more personalised approaches which adapt or adjust to learners' level led to significantly greater impact (an effect size of 0.35) than those only linking to learners' interests or providing personalised feedback, support and/or assessment. Avenues for future research include investigating cost implications, optimum programme length and teachers' role in making personalised learning with technology effective.
KEYWORDS
computer-assisted learning, learning outcomes, low- and middle-income, meta-analysis, personalisation, personalised adaptive learning
Practitioner notes

What is already known about this topic?
• Promoting personalised learning is an established aim of educators.
• Using technology to support personalised learning in low- and middle-income countries (LMICs) could play an important role in ensuring more inclusive and equitable access to education, particularly in the aftermath of COVID-19.
• There is currently no rigorous overview of evidence on the effectiveness of using technology to enable personalised learning in LMICs.

What this paper adds?
• The meta-analysis is the first to evaluate the effectiveness of technology-supported personalised learning in improving learning outcomes for school-aged children in LMICs.
• Technology-supported personalised learning has a statistically significant, positive effect on learning outcomes.
• Interventions are similarly effective for mathematics and literacy, and whether or not teachers also have an active role in the personalisation.
• Personalised approaches that adapt or adjust to the learner led to significantly greater impact, although whether these warrant the additional investment likely necessary for implementation at scale needs to be investigated.
• Personalised technology implementation of moderate duration and intensity had similar positive effects to that of stronger duration and intensity, although further research is needed to confirm this.

Implications for practice and/or policy:
• The inclusion of more adaptive personalisation features in technology-assisted learning environments can lead to greater learning gains.
• Personalised technology approaches featuring moderate personalisation may also yield learning rewards.
• While it is not known whether personalised technology can be scaled in a cost-effective and contextually appropriate way, there are indications that this is possible.
• The appropriateness of teachers integrating personalised approaches in their practice should be explored, given 'supplementary' uses of personalised technology (ie, additional sessions involving technology outside of regular instruction) are common.
INTRODUCTION

Personalising education by adapting learning opportunities and instruction to individual capabilities and dispositions is an established aim of educators (Natriello, 2017). Everyday practice in schools around the world typically involves some personalisation. For example,
when walking around a classroom, teachers usually personalise their teaching by giving
extra support to those who are struggling, while challenging further those who are making
good progress (Holmes et al., 2018). The idea of personalised learning is, therefore, not new.
There are, however, considerable variations in how personalisation happens in practice.
Antecedents of personalised learning can be seen in the progressive education philosophy of John Dewey, William Kilpatrick and others in the early 20th century (Redding, 2016). Research on the role of technology in enabling personalised learning can similarly be traced back many years (Holmes et al., 2018). More recently, the adaptive and personalisable affordances of educational technology ('EdTech') have been suggested as offering the potential to adjust the learning experience based on age, attainment level, prior knowledge and personal relevance (FitzGerald, Jones, et al., 2018). Personalised technology may, for instance, modify the pace of learning in a way that empowers learners to choose how and when they learn (Ogan et al., 2012). It can also facilitate different kinds of content (to reflect learners' preferences and cultural context; Kucirkova, 2018) and automatically capture and respond to students' learning patterns (du Boulay et al., 2018).
In low- and middle-income countries (LMICs), EdTech has been recognised as offering a promising means of addressing educational challenges (Bianchi et al., 2020). In particular, personalised and adaptive learning systems offer the potential to support self-led learning as well as other forms of learning (making this more accessible, impactful and engaging).1 Using technology to support personalised learning has been proposed as a way to increase learner access to education both in and out of school, enable teaching at the 'right' (ie, the learner's current) level and reduce the negative effects of high teacher–learner ratios (Kishore & Shah, 2019; Zualkernan, 2016). Such affordances could play an important role in mounting an effective response to COVID-19, the greatest disruption to education in our time, which saw 1.6 billion learners lose access to their classrooms and caused ongoing disruption (UNESCO, 2020).
Even before the pandemic, personalised learning was enjoying a resurgence in popularity (FitzGerald, Jones, et al., 2018). As the global education community aims to rebuild, personalised learning systems, adaptive curricula and data-driven instruction are candidates to form a key part of the future educational landscape (Selwyn & Jandrić, 2020). At a time when governments and other stakeholders have turned to technology to support the immediate education response to COVID-19 as well as long-term system recovery (EdTech Hub, 2020), robust evaluations of existing evidence are needed to inform decision making about the potential of using technology to support personalised learning. This is particularly the case in LMICs, where such technology may help to prevent marginalised learners from falling further behind (Azevedo et al., 2020), for instance, through enabling remediation that adapts instruction to children's learning levels on a continued basis (Kaffenberger, 2020).
This work builds on a Rapid Evidence Review (RER) that established the potential of using personalised technology to improve educational outcomes for children in LMICs (Major & Francis, 2020). Importantly, the RER revealed how a growing body of randomised controlled trials (RCTs) explored personalised learning in the context of research on computer-assisted learning and computer-aided instruction. Undertaking a meta-analysis of such research allows a rigorous and accurate synthesis of the findings of existing studies, thus providing more information about the current state of the art in this area (Vogel et al., 2006). While previous systematic reviews have explored developments in technology-enhanced personalised learning in mainly high-income contexts (eg, Xie et al., 2019; Zhang et al., 2020), none have investigated the effectiveness of technology-supported personalised learning in LMICs through meta-analysis. This study is therefore the first to ask: What is the effectiveness of technology-supported personalised learning in improving learning outcomes (mathematics and literacy) for school-aged children in LMICs? In addition to contributing to improving the precision of the estimated effects of technology-enabled personalised learning (Haidich, 2010), meta-analysis can answer research questions not posed by individual
studies (as considered in Section 4) and inform the generation of new hypotheses (as discussed in Sections 5 and 6). Findings will inform education decision makers and researchers about the potential effectiveness of technology-supported personalised learning, both in response to COVID-19 and beyond.
BACKGROUND
Personalised learning
As with many concepts in education, there is no universal definition of personalised learning. Cuban (2018) describes personalised learning as 'like a chameleon it appears in different forms', suggesting these forms can be conceptualised as a 'continuum' of approaches: from teacher-led to student-centred classrooms, with 'hybrid' approaches in between. Robinson and Sebba (2010) similarly suggest personalised learning should not be equated with 'individual' or 'individualised' learning (although it may include it): that is to say, students can experience personalised learning while working individually, in small groups or in the whole class.
Although definitions of personalised learning vary, there is broad agreement that it is learner-centred and flexible, and responsive to individual learners' needs (Groff, 2017). Advocates argue that students, including those who are marginalised, can achieve higher levels of learning if they receive personalised instruction tailored to their unique needs and strengths (Jones & Casey, 2015; Zhang et al., 2020). This involves more than an individual engaging with content; it may feature addressing social needs and developing collective understanding through productive interactions with others (Holmes et al., 2018). The promise of personalisation thus lies in its ability to address a 'one-size-fits-all' approach to education that may disadvantage learners (FitzGerald, Jones, et al., 2018).

Research suggests that personalisation can contribute to improving learning outcomes through enhancing motivation and attitudes (Jones et al., 2013) and supporting the development of metacognitive skills and self-reflection (Arroyo et al., 2014; Kim, Olfman, et al., 2014). Higher levels of personalisation have been associated with better academic achievement, improved school culture and greater student engagement (McClure et al., 2010). Compared with their peers, students who started out behind have also been shown to catch up to perform at or above national averages in schools that implement personalised learning (Pane et al., 2015). However, while the premise of personalised learning is to provide more equitable outcomes for all learners, associated research is in its infancy and questions remain about how to scale effectively (Zhang et al., 2020).
Defining technology-supported personalised learning
Digital technology has been argued as offering a potentially impactful way of supporting personalised learning. For instance, technology can facilitate learning driven by student interests, optimise learning based on learner needs (eg, through providing differentiated feedback) and adaptively adjust learning (eg, the pace of instruction) (Office of Educational Technology, 2017). Furthermore, it may enable educators to take a more personalised approach in their teaching and inform data-driven decision making (Maseleno et al., 2018; Pane et al., 2015). This includes promoting socially interactive learning through game-like activities (Hirsh-Pasek et al., 2015; Pardo et al., 2019).

In the context of research in LMICs, terms including computer-assisted learning, computer-aided learning, computer-aided instruction and intelligent/cognitive tutoring systems have been used interchangeably to describe interventions that may personalise learning (Major &
Francis, 2020). Bulger's (2016) distinction between 'responsive' and 'adaptive' personalised learning systems is, therefore, helpful when considering technology-enabled personalised learning in LMICs. Responsive systems are those that may enable learners to personalise the learning interface, choose their own tailored path through instructional material or provide some degree of personalised support or feedback. Examples are computerised game-like drills or exercises that provide learners with limited personalised feedback indicating whether their responses are correct or incorrect. Adaptive systems, on the other hand, actively scaffold learning by adapting content delivery depending on user behaviour or performance. Such interventions may adaptively provide content that matches the level of the learner or modify the pace of instruction. Examples include computer-assisted software that adjusts the delivery of exercises to the level of the learner and intelligent tutoring systems that proactively guide learning through using high-tech data-driven features (eg, facial recognition software)2 (Bulger, 2016).
In this paper, we examine the role of technology-supported personalised learning in improving academic outcomes for school-aged learners in LMICs. Influenced by existing research (FitzGerald, Jones, et al., 2018; FitzGerald, Kucirkova, et al., 2018), we define this broadly as 'the ways in which technology enables or supports learning based upon particular characteristics of relevance or importance to learners'. This definition encompasses both responsive and adaptive approaches to technology-enabled personalisation. Details of inductive analyses to identify the detailed personalisation affordances of interventions included in the meta-analysis are outlined in Section 3.4 and Supporting Information File 1.
Using digital technology to support personalised learning in low- and middle-income countries
Research has consistently found that digital technology is associated with learning gains for students in high-income countries, although there is variation in impact (Education Endowment Foundation, 2019). In LMICs, less is known about the effectiveness of using digital technology educationally. While there is a consensus that technology can contribute to (the facilitation of) learning, many initiatives are designed without taking existing evidence, or the local context, into consideration (Tauson & Stannard, 2018).
A seminal study by Banerjee et al. (2007)3 reported a randomised evaluation of a computer-assisted programme involving over 11,000 children. One feature was that content and tasks were personalised to each child's current level of achievement, thereby enabling them to be individually and appropriately stimulated (Banerjee et al., 2007). In addition to allowing for variation in the academic content presented, this enabled different entry points and differentiated instruction (including preserving the age-cohort-based social grouping of students; Muralidharan et al., 2019). Such adaptation to learners' needs, teaching at the 'right' (ie, the learner's current) level, has been an increasing focus of research in LMICs over the past decade, both with (Rajendran & Muralidharan, 2013) and without technology (Innovations for Poverty Action, 2015; Sawada et al., 2020).
Provided complex issues relating to implementation and sustainability can be overcome (see Section 5), technology-enhanced approaches to personalised learning may offer a solution to challenges that have faced other EdTech initiatives in LMICs (Zualkernan, 2016). Complementary to enabling 'teaching at the right level', it has been argued that this could include helping to address teacher shortages (Ito et al., 2019); closing educational gaps through adaptive remedial instruction (Ogan et al., 2012); and performing routine tasks to free up teachers to spend more time on aspects of education where they have comparative advantages over technology (Perera & Aboal, 2017). Many of these potential benefits resonate with the UN's Sustainable Development Goal 4 to ensure inclusive and equitable quality education for all.4 However, no meta-analysis to date has investigated the effectiveness
of technology-supported personalised learning in improving learning outcomes for school-aged children in LMICs.
Related reviews
While this meta-analysis is the first to consider the effectiveness of technology-supported personalised learning in LMICs, other reviews have explored the role of educational technology more broadly. Rodriguez-Segura (2020) summarised 81 (quasi-)experimental studies undertaken in LMICs. The author found that interventions that improve the quality of instruction, or are centred around student-led learning, are the most effective for raising learning outcomes. Expanding access to technology alone was also identified to be insufficient for improving learning (although it may be a necessary first step).

Escueta et al. (2017) similarly synthesised experimental evidence, reporting that computer-assisted learning (CAL) may be more effective in LMICs given tight capacity constraints. They concluded that evidence on using CAL in LMICs is positive, suggesting that the way this adapts to learner needs may play a central role in addressing the unevenness of levels that challenges many schools. Infrastructure limitations and challenges that can impede implementation are noted.
Other reviews on personalised learning more broadly include work by Xie et al. (2019), who analysed global developments in technology-enhanced personalised learning between 2007 and 2017. Findings included that research on personalised learning typically involves traditional computers, with few studies conducted on wearable devices, smartphones and tablets. Also with a focus on technology-enhanced learning, the synthesis by FitzGerald, Jones, et al. (2018) considered the representation of personalisation in the literature since 2000. Finally, a review of personalised learning by Zhang et al. (2020) found that a majority of 71 studies reported personalised learning, especially that supported by technology, to be associated with positive findings in terms of academic outcomes, engagement, attitude towards learning and meta-cognitive skills.
Research questions
While research into educational technology in developed countries may be more advanced, Kaye and Ehren (2021) argue that such work must be considered separately from that undertaken in LMICs. This is because the deployment of educational technology in LMICs faces a unique and different set of context-related infrastructural and other challenges, rendering transfer of messages from research in high-income countries often inappropriate. Recognising this issue, the present meta-analysis complements and extends the aforementioned research by considering the following research questions:

1. Does technology-supported personalised learning improve learning outcomes for school-aged children more effectively than teachers' standard educational practice (without technology) in low- and middle-income countries?
2. To what extent do features of technology-supported personalised learning contribute to the effectiveness of interventions? Specifically, do learning outcomes vary by:
• learning domain (mathematics and literacy),
• personalisation level,
• personalisation delivery type (technology only or teacher and technology) and
• intervention intensity and duration?
METHODOLOGY
Undertaking a meta-analysis offers a transparent, objective and replicable means for investigating a field and identifying new research opportunities. The ability to synthesise evidence on the effects of interventions means meta-analyses are well suited to inform evidence-based policy and practice (Borenstein et al., 2009). In addition to informing the academic community, meta-analytic techniques have also been influential in enabling rigorous recommendations to be made to other educational stakeholders (particularly with regard to 'what works' in education; Ahn et al., 2012; Slavin, 2008).
Search process
The RER (Major & Francis, 2020) can be viewed as the first stage in the study search. This involved developing and refining search terms (see Appendix A) and undertaking automated searches during May 2020 using Google Scholar and the Searchable Publication Database (SPuD: a database of 3+ million records indexing ProQuest, Web of Science, Scopus, the Directory of Open Access Journals and the Education Resources Information Center up until 2019; Adam & Haßler, 2020). 'Grey literature' was accepted if relevant. Independent double screening of titles and abstracts was undertaken by authors LM and GF, with any disagreements discussed. Importantly, the RER identified the potential for undertaking a meta-analysis as it revealed how 12 experiments with quantified outcomes explored aspects of personalised learning in the context of computer-assisted/-aided learning. It also informed the development of a more specific meta-analysis protocol outlining detailed inclusion criteria, additional study search and selection processes, critical appraisal procedures and data coding/analysis methods.
Having identified potentially relevant studies and established the feasibility of undertaking a meta-analysis (exploring impact on mathematics and literacy outcomes specifically), additional automated searches of Scopus, the Education Resources Information Center and Web of Science were undertaken in July–August 2020 to cover any new literature published between 2019 and August 2020. The search terms in Appendix A were again applied and grey literature was accepted. Studies identified during the RER were also reappraised, ensuring that all data assessed for the meta-analysis followed a common screening process. After title and abstract screening, studies were read in full (by both LM and GF) and inclusion criteria were applied (Appendix B). After full-text screening, forward and backward citation snowballing was carried out (by GF). This involved examining the reference lists of included studies. Authors of included studies were also contacted for their recommendations of research to include. To verify the identification of all relevant studies, the included study lists of systematic reviews reported in Section 2.4 were compared with the search results to determine if any studies were missing.
Eligibility criteria
The full eligibility criteria for inclusion in the meta-analysis are outlined in Appendix B. Briefly, for inclusion, studies must: be published between 2007 and 2020; involve learners aged 5–18 years in LMICs; feature a technology-supported personalised learning intervention (that enables or supports learning based upon particular characteristics of relevance or importance to learners); feature comparison with a control group in an RCT; and consider academic performance (mathematics or literacy) as a learning outcome. Details of studies excluded after full-text screening are available in Supporting Information File 1.
Research critical appraisal
Studies were assessed using a framework aligned with the Building Evidence in Education (2015) guidance on assessing research. This features six categories (see Supporting Information): (a) conceptual framing; (b) contextual detail; (c) research design; (d) validity, reliability and limitations; (e) cultural sensitivity and ethics; and (f) interpretation and conclusions. With a possible aggregate score of 21, a rating of low (1 pt), medium (2 pts) or high (3 pts) is awarded for each category (with the exception of Category 3, research design, which integrates the Mixed Methods Appraisal Tool to assess RCT designs and is double weighted out of 6 pts; Hong, Fàbregues, et al., 2018). The assessment was led by MT. To test the validity of the critical appraisal procedure, a second rater (LM) randomly appraised six included studies according to the same criteria.
Determining personalisation affordances
To demonstrate the valid inclusion of studies following the study search, inductive analyses were undertaken (led by MT) to identify and thematically categorise the detailed personalisation affordances of reported interventions. Performed using NVivo (2020), this involved five steps:

1. Extracting verbatim text describing the personalisation affordances of interventions, before entering this into NVivo.
2. Performing initial inductive coding to examine personalisation affordances, noting potential descriptive themes.
3. Iteratively revisiting extracts, searching for further candidate themes.
4. Refining and merging themes.
5. Re-coding extracted data if appropriate.
Following collaborative review and discussion amongst the research team, three final personalisation themes were identified: (a) engaging learners through matching their interests and/or experience; (b) providing personalised feedback, support and/or assessment; and (c) adapting or adjusting to learners' level (eg, through differentiated pace, learning objectives and content or tools). Returning to Bulger's (2016) typology discussed in Section 2.2, Categories (a) and (b) can be considered to represent 'responsive' personalised learning systems and Category (c) 'adaptive' ones.

Detailed rationales for the inclusion of each study in the meta-analysis (in addition to examples of extracted data and codes established) are available in Supporting Information File 1. As a further validation measure, authors of included studies were contacted to validate this coding of personalisation affordances and to provide any other information about the personalisation features of the technology used during their study (see Section 4.2).
Study coding and analysis
Coding for the meta-analysis initially involved mapping study characteristics including country/region; technology type and origin; learning domain; learner stage/age; experimental design and comparators; population characteristics; and sample size. At a second stage, moderator variables (variables predicting the overall effect size, selected based on existing research and the findings of the RER) were coded as follows:
• Academic outcomes. Mathematics and literacy5 outcomes assessed through written forms (traditional or digital).
• Personalisation level. Following the process outlined in Section 3.4, interventions were coded according to whether they (a) engage learners through matching their interests and/or experience (eg, to facilitate student engagement); (b) provide personalised feedback, support and/or assessment (eg, immediate task feedback and/or continuous or final assessment); and (c) adapt or adjust to learners' level (eg, delivering content and activities adapted to students' difficulty level and/or learning pace). For each of these, a code of 0 (no) or 1 (yes) was assigned. If a study was coded as adapting or adjusting to learners' level ([c]), it was coded as featuring a 'HIGH' level of personalisation, as this factor represents a key distinction between 'responsive' and 'adaptive' personalised learning systems (Bulger, 2016). Otherwise, studies were coded as 'MEDIUM'.
• Personalisation delivery type. This variable has two aspects referring to 'who' delivered the personalisation: (a) technology only or (b) teacher and technology. In the former, the role of the teacher or supervisor was limited to providing technical support when supervising the implementation of a programme. In the latter, the teacher had an active role by choosing the content or activities from possible options provided by the software to meet the learning goals, and/or by providing academic support and feedback.
• Technology use. This variable identifies whether interventions were implemented in a supplementary, integrative or substitute way. Supplementary approaches offer students the opportunity to practise instructional content outside regular classroom instruction (eg, through additional remedial support). Integrative approaches utilise technology during regular instruction, with the teacher having an active role. Substitute approaches use technology as a replacement for regular classroom instruction (instruction delivered only by technology).
• Intervention intensity and duration. To code for intervention intensity, Cheung and Slavin's (2012) intensity criteria were followed using a cut-off of 75 min per week. For intensity × duration, a cut-off of 4.5 months was used as this typically represents half of a school academic year. Interventions were coded as 'STRONG' when delivered for more than 4.5 months with an intensity of greater than 75 min a week. Otherwise they were categorised as 'MODERATE'. A minimal sketch of these threshold-based codings is given after this list.
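To make the two threshold-based rules concrete, the following is an illustrative sketch. The function names are hypothetical: the authors coded studies manually from study reports, so this is for exposition only.

```python
# Hypothetical illustration of the coding rules above; the authors coded
# studies manually, so these functions are for exposition only.

def code_personalisation_level(adapts_to_learner_level: bool) -> str:
    """'HIGH' if the intervention adapts or adjusts to learners' level
    (category c); otherwise 'MEDIUM' (categories a/b only)."""
    return "HIGH" if adapts_to_learner_level else "MEDIUM"

def code_intensity_duration(minutes_per_week: float, months: float) -> str:
    """'STRONG' when delivered for more than 4.5 months at more than
    75 minutes per week (following Cheung & Slavin, 2012); otherwise
    'MODERATE'."""
    if months > 4.5 and minutes_per_week > 75:
        return "STRONG"
    return "MODERATE"

# Example: a six-month programme at 90 min/week that adapts to level
print(code_personalisation_level(True))   # -> HIGH
print(code_intensity_duration(90, 6))     # -> STRONG
```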
Studies were coded by one author (MT), with the other authors independently reviewing the data extracted. Codes were assigned based on what was explicitly stated in the text. Study authors were invited to give feedback on the coding undertaken.
Effect size calculations and statistical analysis
The overall effects of interventions are determined from estimates of the standardised mean difference, or effect size, for each study. Where studies report treatment effects for unadjusted and adjusted ordinary least squares regressions and account for baseline outcome measures as covariates, the effect size estimates extracted were the beta coefficients and standard errors reported in data tables. According to Higgins et al. (2020), these give the most precise and least biased estimates of intervention effects. For other studies, standardised mean differences were calculated from post-intervention value scores (means, standard deviations) using Lipsey and Wilson's (2001) online Practical Meta-analysis Calculator. Higgins et al. (2020) recommend that different standardised effect size estimates can be combined in one meta-analytic calculation. A sketch of the post-test calculation is given below.
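For studies without regression-based estimates, the post-test standardised mean difference reduces to the familiar pooled-SD formula. The sketch below mirrors the pooled-SD calculation implemented by tools such as the Lipsey and Wilson calculator, not the authors' exact workflow, and the numbers are illustrative only.

```python
import math

def smd_post(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference from post-intervention scores
    (pooled-SD Cohen's d) with its approximate standard error."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return d, se

# Illustrative numbers, not taken from any included study
d, se = smd_post(52.0, 10.0, 300, 50.0, 10.5, 310)  # d is roughly 0.20
```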
Following Borenstein (2009), where studies report multiple effect sizes for different groups (including multiple treatment arms, outcomes and independent groups), these were combined to formulate composite effect size estimates to calculate summary effects of the impact of the intervention. In cases where the data were dependent (ie, multiple treatments or outcomes), average effects were computed to yield a single effect estimate. For multiple independent groups, weighted mean effects and standard errors were calculated to obtain a combined effect. Where applicable, individual effects are used in separate meta-analyses. Only the primary outcome of interventions is reported. Reports of spillover effects or follow-up effects were excluded. A sketch of these combination rules follows.
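A minimal sketch of the two combination rules, assuming effect sizes and standard errors have already been extracted (variable names hypothetical). Borenstein's full treatment of dependent outcomes also adjusts the composite variance for the correlation between outcomes, which requires information not always reported; the simple average below reflects what the text describes.

```python
def average_dependent_effects(effects):
    """Composite for dependent effects (multiple treatments or outcomes
    within one sample): the simple average described in the text."""
    return sum(effects) / len(effects)

def combine_independent_groups(effects, ses):
    """Inverse-variance weighted mean effect and standard error for
    multiple independent subgroups within one study."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    return pooled, se_pooled
```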
Data were analysed in Stata using the generic inverse variance method, as it produces a random effects meta-analytic calculation.6 Given that studies were sampled from diverse countries, a random effects model was appropriate as this assumes studies will differ such that there may be different but related effect sizes (Borenstein, 2009). Missing data were not problematic, with the exception of one study for which the authors were contacted but communication could not be established [S16]. Meta-regression determined the impact of moderators on overall study effects. There is no universally accepted minimum number of studies required for a meta-regression, and such a number may be arbitrary in any case (Fu et al., 2011). Nonetheless, recommended lower bounds for the number of studies required in a meta-analysis (10 studies; Deeks et al., 2020), and for meta-regression involving categorical subgroup variables (eg, 4 studies; Fu et al., 2011), have been met. The average effect size and variation across studies are reported based on the identified a priori features of personalisation. Heterogeneity7 was assessed using the Q test (Hedges, 1982), tau-squared (τ²) and I² (Higgins & Thompson, 2002) to give an indication of dispersion in the study effect sizes. Publication bias was assessed using the funnel plot method, which is used as a visual aid for detecting bias stemming mainly from negative results not being published or from systematic heterogeneity (Bartolucci & Hillegass, 2010). Study limitations are considered in Section 5.4. A minimal sketch of the random effects pooling follows.
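As a reference point, the following is a minimal sketch of generic inverse-variance random effects pooling using the moment-based DerSimonian-Laird estimator. It is not the authors' Stata code (Stata's meta suite defaults to REML, so estimates can differ slightly), but it shows where Q, τ² and I² come from.

```python
def random_effects_pool(effects, ses):
    """Generic inverse-variance random-effects pooling via the
    moment-based DerSimonian-Laird estimator of tau-squared."""
    k = len(effects)
    w = [1.0 / se ** 2 for se in ses]              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)             # between-study variance
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_pooled = (1.0 / sum(w_re)) ** 0.5
    return pooled, se_pooled, q, tau2, i2
```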
RESULTS
Search, screening and selection
Search results, screening outcomes and selection decisions are presented in Figure 1. The initial automated searches returned 38,335 results, with 198 potential studies identified after title and abstract screening. The additional automated searches returned 1218 results, with 8 potential studies identified after screening. Following all automated and snowballing searches (with author recommendation leading to the identification of one potential study and citation snowballing identifying four further potentially relevant studies), 54 full-text studies were assessed for eligibility.
In total, this systematic combination of automated, manual and snowballing searches led to 16 studies meeting the inclusion criteria (although 15 studies are included in the meta-analysis). No further studies were identified after comparing search results to the included study lists of related systematic reviews (indeed, the meta-analysis includes additional studies not identified by this previous work). Reasons for the exclusion of studies based on the eligibility criteria are available in Supporting Information File 1.

Most studies reported treatment effects (n = 12) for unadjusted and adjusted ordinary least squares (OLS) regressions and accounted for baseline outcome measures as covariates. For the remaining studies (n = 3), standardised mean differences were calculated. Some studies reported multiple effect sizes for different groups, including multiple treatment arms (n = 3), outcomes (n = 7) and independent groups (n = 1). Of the 15 studies included in the statistical analysis, authors of 12 studies confirmed that they agreed with the coding undertaken with
regard to the personalisation affordances of included interventions. Communication could not be established with the authors of the three remaining studies. Collaborative review amongst the research team, and the process of consultation with study authors, led to consensus on the features of personalisation established for each intervention (Supporting Information File 1). To eliminate the bias of statistical dependency due to a number of studies coming from the Rural Education Action Program (REAP) at Stanford University,8 a sensitivity analysis was undertaken.
FIGURE 1 Flow chart of the study selection process (following adapted PRISMA guidelines; Moher et al., 2009)

Research critical appraisal

Following a discussion between the two raters, there was no disagreement with regard to the critical appraisal process. All studies were considered to be of an appropriate standard for inclusion given the average quality score of 16.4/21. Importantly, all studies had medium or
high scores for RCT design (Category 3) suggesting limited chances of bias arising due to
this. The overall quality scores for each study can be seen in Table 1.
Descriptive findings
In total, 16 independent studies were identified. These were conducted9 in China (n = 9), India (n = 3), Malawi (n = 2), the Russian Federation (n = 1) and El Salvador (n = 1). Populations were typically of low socio-economic status from rural areas (eg, poor ethnic minority areas; [S7]), with the exception of three studies that included urban populations ([S12] [S13] [S14]). Most featured students aged 8–12 years (n = 14), with one study focusing on learners aged 6–8 ([S14]) and one on learners aged 10–15 ([S12]).
Studies focused on mathematics (n = 6), literacy (n = 5) and both mathematics and literacy (n = 5). Outcomes for literacy included: English as an additional language; Russian; Mandarin; Hindi; and reading in Chichewa (the language of instruction in Malawi primary schools). Learning outcomes were assessed in written form, varying from in-app quizzes (eg, [S14]) and standardised tests (eg, [S6]) to researcher-designed tests (eg, [S12]). All interventions delivered supplementary instruction (n = 16), with one study including a second computer-assisted treatment group that integrated technology into the teaching of English ([S2]).
Most studies report CAL interventions (n = 14). Two report a tablet intervention ([S13] [S14]). Specific software included: CAL software developed by the Rural Education Action Program10 (n = 8), an online adaptive version of the same software11 (n = 1), the One Billion Interactive App (n = 2), Mindspark (n = 1), Khan Academy (n = 1), software developed by an established technology organisation ([S4]), bespoke personalised software developed by a research team ([S5]) and a combination of internally and professionally developed software (n = 1). Interventions were mostly delivered during the school day (n = 10), with others delivered after school (n = 2) or either during lunch time at school or after school with supervision (n = 4). Most studies reported a 'STRONG' intensity and duration level (n = 10), with the others 'MODERATE' (n = 5). One incorporated two groups with both levels ([S4]).
Regarding the personalisation features of reported interventions (see Supporting Information File 1), most studies featured technology delivering the personalisation (n = 12), with the teacher and the software together providing personalisation in the others (n = 4). Six studies featured 'HIGH' personalisation and the others 'MEDIUM' (n = 10). Personalisation features were as follows: engaging learners through matching their interests or experience (n = 15); providing personalised feedback, support and/or assessment (n = 14); and adapting or adjusting to learners' level (n = 6). Included study characteristics, main effect sizes and ID codes (eg, [S10] referring to Study Ten, Mo et al., 2014) are presented in Table 1.
Meta-analysis results
While 16 studies met the inclusion criteria, the meta-analysis itself is based on 15 studies. This is because [S16]12 could not be included in the analysis due to missing statistical information. The total number of participants involved was 53,029 (25,850 intervention and 27,179 control group), with a minimum of 232 and a maximum of 11,890 students. The mean sample size was 3535 (1723 intervention and 1811 control group). The effect sizes for the 15 studies ranged from 0.05 to 0.39. When multiple outcomes (multiple subjects) and comparators (multiple treatments) used in subgroup analyses are factored in, there are a total of 30 effect sizes ranging from 0.01 to 0.39.
TABLE 1 Study characteristics13

| Code | Study | Country | Population characteristics | Total sample size | Age | Subject | Type of technology | Comparator | Delivery time | Type of technology use | Intensity × duration | Personalisation delivery type | Personalisation level | Experimental design | Quality assessment | Effect size | SE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| S1 | Banerjee et al. (2007) | India | Urban areas in Vadodara | 11,890 | 9–10 | Mathematics | CAL | No intervention | During and after school | Supplementary | Strong | Technology | High | RCT | 16 | 0.39 | 0.07 |
| S2 | Bai et al. (2016) | China | Rural (poor minority area in Qinghai Province) | 5917 | 10–11 | Language (English as an OL) | CAI & CAL | No intervention | During school | CAI: Integrative; CAL: Supplementary | Strong | Technology | Medium | RCT | 16 | 0.05 | 0.04 |
| S3 | Bai et al. (2018) | China | Rural China (poor minority area in Qinghai Province) | 1342 | 10–11 | Language (English as an OL) | Online CAL (OCAL) | No intervention | During school | Supplementary | Moderate | Teacher + Technology | High | RCT | 18 | 0.25 | 0.14 |
| S4 | Bettinger et al. (2020) | Russian Federation | 2 regions with GDP below the national average | 6253 | 8–9 | Mathematics & Language (Russian) | CAL | Traditional homework | After school | Supplementary | CAL single dose: Moderate; CAL double dose: Strong | Teacher + Technology | Medium | RCT | 13 | 0.07 | 0.04 |
| S5 | Kumar and Mehra (2018) | India | Low SES background from India | 232 | 11–12 | Mathematics | CAL | Traditional homework | During school | Supplementary | Strong | Teacher + Technology | High | RCT | 15 | 0.21 | 0.13 |
| S6 | Lai et al. (2015) | China | Migrant children in Beijing (typically of low SES background) | 1717 | 9–10 | Mathematics & Language (Chinese) | CAL | No intervention | During lunch or after school, supervised | Supplementary | Strong | Technology | Medium | RCT | 18 | 0.08 | 0.04 |
| S7 | Lai et al. (2016) | China | Poor ethnic minority areas in China's Qinghai Province | 3164 | 9–10 | Mathematics & Language (Mandarin) | CAL | No intervention | During lunch or after school, supervised | Supplementary | Strong | Technology | Medium | RCT | 15 | 0.12 | 0.05 |
| S8 | Lai et al. (2012) | China | Poor minority rural areas in Qinghai Province | 1717 | 9–10 | Language (Chinese) | CAL | No intervention | During lunch or after school, supervised | Supplementary | Moderate | Technology | Medium | RCT | 19 | 0.19 | 0.06 |
| S9 | Mo et al. (2020) | China | Poor minority areas of Qinghai Province | 5253 | 10–11 | Language (English as an OL) | CAL | No intervention | During school | Supplementary | Strong | Technology | Medium | RCT | 18 | 0.05 | 0.07 |
| S10 | Mo et al. (2014) | China | Poor rural areas in Shaanxi (boarders and non-boarders) | 4757 | 9–10 & 11–12 | Mathematics | CAL | No intervention | During school | Supplementary | Strong | Technology | Medium | RCT | 21 | 0.16 | 0.06 |
| S11 | Mo et al. (Phase 2 only) (2015) | China | Shaanxi Province | 2426 | 10–11 & 12–13 | Mathematics | CAL | No intervention | During school | Supplementary | Strong | Technology | Medium | RCT | 18 | 0.26 | 0.04 |
| S12 | Muralidharan et al. (2019) | India | Low-income neighbourhoods in Delhi | 619 | 10–15 | Mathematics & Language (Hindi) | CAL | No intervention | After school | Supplementary | Strong | Technology | High | RCT | 18 | 0.29 | 0.29 |
| S13 | Pitchford (2015) | Malawi | Urban area of the capital city of Malawi | 283 | 8–10 | Mathematics | Digital tablet intervention | Non-maths tablet control + no intervention | During school | Supplementary | Moderate | Technology | High | RCT | 15 | 0.22 | 0.09 |
| S14 | Pitchford et al. (2019), Experiment 3 | Malawi | Seven school districts in Malawi | 320 | 6–8 | Reading in Chichewa | Digital tablet intervention | No intervention | During school | Supplementary | Moderate | Technology | High | RCT | 14 | 0.39 | 0.03 |
| S15 | Yang et al. (2013) | China | Migrant communities outside of Beijing | 6487 | 8–11 | Mathematics (Beijing & Shaanxi) and Language (Mandarin) (Qinghai) | CAL | No intervention | During lunch or after school, supervised | Supplementary | Moderate | Technology | Medium | RCT | 16 | 0.14 | 0.02 |
| S16 | Buchel et al. (2020) | El Salvador | Rural district | 3528 | 9–12 | Mathematics | CAL | Additional maths lessons instructed by a teacher | During school | Supplementary | Strong | Teacher + Technology | Medium | RCT | 12 | n/a | n/a |
RQ1. Does technology-supported personalised learning improve learning outcomes for school-aged children more effectively than teachers' standard educational practice (without technology) in low- and middle-income countries?
Overall, technology-supported personalised learning interventions had a significant positive effect of 0.18 on students' learning (95% CI [0.12, 0.24], p < 0.001). The forest plot showing the distribution of individual studies, summary effects and confidence intervals is presented in Figure 2. Each blue square indicates the size of an intervention effect and is proportional to the weight of the study. The 95% confidence interval is indicated by blue lines. The green diamond displays the weighted average overall effect size and its confidence interval, with the midpoint indicating the magnitude of the effect size. The vertical line running from zero is the line of null effect, the point where there is no association between the intervention and control. The overall effect size is statistically significant, as indicated by the diamond not crossing the zero line.

A significant summary effect indicates that students using technology-supported personalised learning approaches have significantly higher learning outcomes than their peers who did not use technology. Heterogeneity between individual studies was observed (Q(14) = 95.95, p = 0.001 and I² = 83.59%), suggesting that variation in effect sizes across the studies might be due to characteristics of the different studies (or to the features of personalisation which have been hypothesised). The results from meta-regression analysis are subsequently used to explore potential reasons for variability across studies. The relationship between these heterogeneity statistics is sketched below.
FIGURE 2 Forest plot: overall effect of technology-supported personalised learning interventions is 0.18 (95% CI [0.12, 0.24], p = 0.001)
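As a hedged aside on how the reported statistics relate: under the moment-based definition of Higgins and Thompson (2002), I² follows directly from Q and its degrees of freedom,

```latex
I^{2} = \frac{Q - df}{Q} \times 100\%
      = \frac{95.95 - 14}{95.95} \times 100\% \approx 85.4\%
```

which is close to the reported 83.59%; the small difference is consistent with software (such as Stata's REML-based meta suite) that instead derives I² from the estimated between-study variance τ².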
Publication bias
The funnel plot in Figure 3 shows that the points (each representing study effects) are fairly evenly scattered around the reference line at the top of the graph. The gap near the middle and bottom left of the graph is indicative of likely missing data due to publication bias, and the single study at the bottom of the graph, of small study effects. A follow-up statistical test, the trim-and-fill method, was conducted to identify and correct for funnel plot asymmetry arising from publication bias by providing an estimate of the number of missing studies and an adjusted intervention effect from including the filled studies (Duval & Tweedie, 2000; Shi & Lin, 2019). However, results from the trim-and-fill analysis recommended no imputations to achieve symmetry, which suggests that the results of the meta-analysis are not systematically affected by unpublished work. An illustrative funnel plot sketch is given below.
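For readers wishing to reproduce a plot like Figure 3, a minimal illustrative sketch follows (matplotlib). The study-level inputs would be the effect sizes and standard errors in Table 1; the plotting details are assumptions rather than the authors' exact specification.

```python
import matplotlib.pyplot as plt

def funnel_plot(effects, ses, pooled_effect):
    """Illustrative funnel plot: study effects against standard errors,
    with the y-axis inverted so precise studies sit at the top."""
    fig, ax = plt.subplots()
    ax.scatter(effects, ses)
    ax.axvline(pooled_effect, linestyle="--")  # reference line at pooled effect
    ax.invert_yaxis()                          # smaller SE (larger studies) at top
    ax.set_xlabel("Effect size")
    ax.set_ylabel("Standard error")
    return fig
```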
Sensitivity analysis
The sensitivity analysis compared overall effects for studies using the same software developed by REAP to studies coming from other research labs (Figure 4). This is because REAP studies accounted for a larger proportion of those in the sample (n = 9). Results indicate that interventions in both groups yielded positive, statistically significant results, although studies across the independent labs had a higher overall effect size of 0.26 (95% CI [0.13, 0.39], p = 0.001) and were more heterogeneous (Q(5) = 44.21, p = 0.001 and I² = 82.64%). This is compared to studies in the REAP group, with an effect size of 0.14 (95% CI [0.09, 0.19], p = 0.01), which showed less heterogeneity (Q(8) = 19.48, p = 0.01 and I² = 62.68%). The test of group differences confirmed that the group-specific overall effect sizes were not statistically different (Qb = 3.08, p = 0.08). This supports the decision to include all studies in the meta-analysis even though several of them came from the same research lab. However, a noticeable difference is the smaller overall effect estimate for REAP studies. One possible explanation is that the software used by these studies has 'MEDIUM' personalisation features relative to the software used in other research. The effects of this level of personalisation as a characteristic feature of studies are investigated as a moderator in the meta-regression analysis.
FIGURE 3 Funnel plot of summary effects

RQ2. To what extent do features of technology-supported personalised learning contribute to the effectiveness of interventions?

Features of technology-supported personalised learning (academic outcomes, personalisation levels, personalisation delivery type, and intervention intensity and duration) are predicted to influence summary intervention effects. These categorical moderators are explored in four separate meta-regression analyses (see Appendix C). Graphical representations of the relationships between categories and summary effects are presented in Figure 5. For each regression model, the regression coefficient estimates indicate how the intervention effect in each subgroup differs on a nominated category and whether this difference is significant. A minimal meta-regression sketch follows.
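The following is a minimal sketch of a meta-regression on one binary moderator, using weighted least squares with random-effects weights. It is illustrative only: the effect sizes are toy values loosely drawn from Table 1, τ² is assumed rather than estimated, and Stata's actual meta-regression additionally re-estimates τ² and adjusts standard errors, so real results differ.

```python
import numpy as np
import statsmodels.api as sm

# Toy data: study effect sizes, standard errors and a binary moderator
# (1 = 'HIGH' personalisation, 0 = 'MEDIUM'); illustrative values only.
effects = np.array([0.39, 0.05, 0.25, 0.07, 0.22, 0.14])
ses = np.array([0.07, 0.04, 0.14, 0.04, 0.09, 0.02])
high = np.array([1, 0, 1, 0, 1, 0])

tau2 = 0.01                          # assumed between-study variance
weights = 1.0 / (ses ** 2 + tau2)    # random-effects weights
X = sm.add_constant(high)            # intercept = 'MEDIUM' subgroup mean
fit = sm.WLS(effects, X, weights=weights).fit()
print(fit.params)                    # slope = estimated 'HIGH' vs 'MEDIUM' gap
print(fit.pvalues)                   # is the subgroup difference significant?
```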
Academic outcome categories refer to studies which assessed learning in mathematics (n = 12) and literacy (n = 10). There was no difference (p = 0.80, I² = 79.85%) in study effects whether interventions addressed mathematics, with an effect size of 0.17 (95% CI [0.11, 0.23]), or literacy, with one of 0.16 (95% CI [0.08, 0.25]). This suggests that technology-supported personalised learning approaches are effective across both subject areas.
FIGURE 4 Sensitivity analyses for subgroup analysis

Interventions differed in the types of software used and the degree of personalisation affordances provided. The six studies with 'HIGH' personalisation features had statistically significantly higher effect sizes (p = 0.01, I² = 56.76%) compared to the nine studies with 'MEDIUM' personalisation features. Effect sizes for studies with 'HIGH' personalisation ranged from 0.22 to 0.39, with an overall effect size of 0.35 (95% CI [0.26, 0.42]), whereas for studies with 'MEDIUM' personalisation features effect sizes ranged from 0.05 to 0.26, with an overall effect of 0.13 (95% CI [0.08, 0.17]).14 This suggests that interventions using more highly personalised approaches that adapt or adjust to learners' level have a greater impact on learning.
Technology-supported personalised learning interventions may employ different personalisation delivery types. For instance, this could involve allowing students to work through remedial activities on software without pedagogical input from the teacher (technology only condition), or settings where a teacher supports students' learning through assignment of content or feedback as they use the software (teacher and technology condition). The condition for delivering the intervention, 'technology only' (n = 12) or 'teacher and technology' (n = 3), does not significantly affect reported effectiveness (p = 0.64, I² = 83.79%). It appears that interventions included in this meta-analysis are similarly impactful whether the personalisation delivery type is via 'technology only', with an effect size of 0.19 (95% CI [0.12, 0.26]), or through 'teacher and technology', with one of 0.12 (95% CI [0.00, 0.24]). Results for 'teacher and technology' need to be treated with caution given the lower bound CI of zero and the very few 'teacher and technology' studies in comparison. However, these findings can possibly be taken as preliminary evidence suggesting that personalised technology may leverage positive benefits whether or not teachers also have an active role in the personalisation.

FIGURE 5 Effect sizes and 95% confidence intervals for selected moderator variables. Significant differences between groups were reported only for Personalisation Level (p < 0.001)
Interventions may vary by the intensity and duration of programmes, such that they are delivered for at least 75 min per week and longer than 4.5 months ('STRONG', n = 10), or less ('MODERATE', n = 6). Studies grouped as strong for the dimension of intensity and duration had an overall effect estimate of 0.15 (95% CI [0.07, 0.22]), whereas studies categorised as moderate had one of 0.21 (95% CI [0.11, 0.31]). The meta-regression reveals that there is no statistical difference between studies categorised based on the intensity and duration of the intervention (p = 0.31, I² = 83.23%). This suggests that technology implementation for more than 4.5 months with an intensity of greater than 75 min a week may be similarly effective to that of a more moderate duration and intensity (between 2 and 4.5 months and of 45–75 min a week), although further research is needed to confirm this (as discussed in the following sections).
A related unexplored hypothesis is whether technology use, ie, whether technology is designed to supplement instruction, substitute for instruction or integrate with instruction, determined the effectiveness of the intervention. This hypothesis could not be tested in the meta-regression due to a lack of variability, as all studies report on 'supplementary' instruction only (n = 15).
DISCUSSION
The effectiveness of technology-supported personalised learning
This meta-analysis indicates that technology-supported personalised learning has a statistically significant positive effect of 0.18 on learning (p = 0.001). So how important is this and other reported effects? The US Department of Education (2020) considers effect sizes of 0.25 standard deviations or larger to be 'substantively important' for education. The Education Endowment Foundation15 in the UK, meanwhile, suggests that effect sizes of 0.18 and 0.19 translate to 2 or 3 months of additional educational progress. While an effect size of 0.18 can be characterised as small according to benchmarks provided by Cohen (1988; 0.2 is 'small', around 0.5 is 'medium' and above 0.8 is 'large') and others (eg, Acock, 2014), there is no universal guideline for assessing the practical importance of standardised effect size estimates for educational interventions (Bakker et al., 2019). Instead, there is consensus that effect sizes should reflect the nature of the intervention being evaluated, its target population and the outcome measure(s) used (Hill et al., 2008; Pigott & Polanin, 2020). Important also is that smaller effect sizes have increasingly been accepted in education over time (Bakker et al., 2019).
In their meta-analysis of 77 RCTs undertaken in primary education, McEwan (2015) found that technology interventions yielded the highest average effect size (0.15) of all educational interventions in developing countries, which further reinforces the educational importance of this meta-analysis, with overall moderator effect sizes ranging from 0.12 to 0.35. Investigation of study heterogeneity points to the level of personalisation features as the influential moderator. Specifically, findings highlight the potential significance of interventions that adapt or adjust to learners' level (effect size of 0.35) in contrast to personalised technologies that do not (effect size of 0.13).
In light of previous research, we consider reported effects to be moderate but potentially educationally significant. We also concur with Mo et al. (2014) that an overall effect size of around 0.18 is sufficiently large to attract the interest of policymakers, particularly as studies that employ adaptive instruction have been shown to be effective in LMICs (Conn, 2014). Furthermore, results indicate how 'moderate' use of personalised technology (eg, of between 2 and 4.5 months) was found to be similarly effective to 'stronger' use (eg, for longer than 4.5 months). This might corroborate research that identified a diminishing marginal rate of substitution for traditional learning from doubling the amount of technology use (Bettinger et al., 2020).

While the limitations of the meta-analysis are outlined fully in Section 5.4, the 'supplementary' nature of interventions should be considered when interpreting reported effects. The use of technology typically led to an increase in learning time compared to students in the control group. As most studies use passive controls or no interventions, this raises the possibility that learning gains may not solely be attributable to the use of personalised technology. In already resource-constrained environments, providing access to digital devices to administer a placebo treatment and/or developing non-technology approaches that are comparable to technology interventions is practically and ethically challenging. Despite this, the meta-analysis indicates that studies which included an active control group still report significantly greater gains in academic performance (eg, an effect size of 0.22 when comparing to a technology placebo group and a standard educational practice control; Pitchford, 2015), potentially in a way that may outperform traditional instruction (eg, where students increased their math scores by 0.21–0.24 standard deviations; Buchel et al., 2020). Additional research is strongly recommended to investigate whether the 'added value' of technology-supported approaches will be maintained when further RCTs with active controls, and alternative approaches to supplementary personalised learning (eg, integrative or substitute approaches), are implemented.
Cost implications
In addition to considering effect sizes, whether a programme should be implemented also depends on its potential to scale at reasonable cost (Angrist et al., 2020; Bakker et al., 2019; Harris, 2009). Educational technology interventions may not always lead to higher learning gains compared to low- or non-technology initiatives once the effect of the technology use is isolated (Evans & Acosta, 2020; Ma et al., 2020). As such, the question should not be whether a technological approach could address a problem in the educational system, but rather whether it is the most effective and cost-effective way to do so (Rodriguez-Segura, 2020). The meta-analysis did not set out to investigate cost-effectiveness given the RER revealed that the required synthesisable data were likely to be limited. Nonetheless, several studies offer relevant information.

Costs associated with technology-supported personalised learning include fixed costs (eg, initial and on-going software development; Muralidharan et al., 2019) and variable costs of implementation (eg, hardware costs of computers; Kumar & Mehra, 2018). Other costs potentially include teacher support and social costs (Bai et al., 2018). Impact on teacher and learner time is an additional factor (Kumar & Mehra, 2018). Despite indications that technology-supported personalised learning approaches need not necessarily be prohibitively expensive (see Appendix D for an overview), significantly more research is required. This is particularly the case as other research suggests CAL interventions are amongst the least cost-effective in LMICs (McEwan, 2015). In settings without sufficient infrastructure, it is likely that implementation costs will be high (at least initially). Non-technology approaches may also offer comparable gains in learning at a lower cost (eg, Banerjee et al., 2007).
Potentially, using existing hardware may help in reducing costs and increasing access (Global Education Evidence Advisory Panel, 2020). Considering the cost challenges experienced by countries with limited resources, a promising observation is that personalised software featuring moderate personalisation affordances (typically developed in close alignment with the curriculum) can still yield learning rewards. Such approaches might provide a more immediate entry point in some contexts given higher-tech alternatives may be unaffordable for some years to come.
Role of teachers and other considerations
While personalised technology appears to show benefits whether or not teachers also have an active role in the personalisation, relatively few studies have examined teachers' role in making personalised technology effective as part of their everyday practice. This is because research often reports on supplementary uses of personalised technology which enable students to practise with instructional content outside of regular classroom instruction. Integrative approaches that utilise technology during regular instruction are uncommon. Potentially, technology may also be used to empower teachers to implement personalised learning approaches that do not feature learners using technology (eg, 'Teaching at the Right Level'). In both contexts, teachers would need to be equipped, through appropriate professional development, with the knowledge to integrate personalised learning, including diagnostic and formative assessment, with other teaching activities. Absence of teachers in the implementation of personalised technology interventions also does not negate potential teacher involvement in the planning stages (eg, aligning supplementary uses of personalised technology to the curriculum and instruction).
Several studies that did not meet the inclusion criteria must also be considered. Chong et al. (2020) evaluated a 6-month personalised internet-based sexual education course in high schools in 21 Colombian cities, reporting significant improvement in students' knowledge, attitudes and likelihood of redeeming vouchers for condoms. Gambari et al. (2015, 2016) examined the effects of computer-assisted instruction on Nigerian secondary school students' achievement and motivation outcomes in physics and chemistry. Results revealed that students taught with personalised technology approaches in cooperative settings achieved better learning outcomes than their counterparts taught using individualised computer instruction (Gambari, 2015). Finally, Ito et al. (2019) examined the effects of an app that incorporates adaptive learning on Cambodian elementary students' cognitive and non-cognitive skills, reporting positive outcomes on learning productivity and students' subjective expectation of attending college in the future. These studies demonstrate the potential of technology-supported personalised learning to be effective in domains other than mathematics and literacy as well as in improving cognitive and affective skills. In addition to improving learning outcomes, there are also indications that the impact of such approaches may increase as learner socio-economic level decreases (Perera & Aboal, 2019), including when used at home (Tang et al., 2020).
Study limitations
The focus on studies in LMICs was motivated by the need to identify evidence in this specific context (particularly due to the immediate and long-term challenges caused by COVID-19; Kaffenberger, 2020). While expanding the search to include high-income countries would have increased the number of included studies, such action would have risked overlooking contextual factors specific to LMICs (Tauson & Stannard, 2018). It would also be contrary to suggestions that the challenges facing the use of educational technology in LMICs warrant independent consideration from research undertaken in high-income countries (Kaye & Ehren, 2021).
While a synthesis of two studies is sufficient for a meta-analysis (provided these can be meaningfully pooled and their results are sufficiently 'similar'; Ryan, 2016), the inclusion of 16 studies from only five countries (including nine from China) must be considered. This is in addition to findings possibly not being generalisable to other LMIC contexts (particularly to low-income countries with extremely limited resources). These potential implications and the relatively small number of studies included in the meta-regression mean care must be taken when interpreting findings. As outlined in Section 6, more research is now needed to investigate the complex factors involved in the use of personalised technology in LMICs (particularly in regard to the implications for policy and practice).
Other limitations include the search being restricted to English language research published from 2007 onwards. The keywords used or omitted, and the selection and/or nature of the digital libraries searched, may also have an impact on reported findings. Studies did not always refer to personalised learning directly, with several examining this in the context of 'computer-assisted learning' more broadly. Further, the features of reported interventions may not always be comprehensively described. There is, therefore, a risk that aspects of personalisation may have been incorrectly inferred, although the rigorous inductive approach to identifying personalisation affordances and the fact that all study authors were invited to give feedback on the coding (with 75% responding) helps to minimise this. All responding authors agreed with the coding undertaken.
Studies typically adopted an RCT design, clustered at the school level, and assessed learning outcomes in diverse ways. The limitations of RCTs must be acknowledged, including a potential lack of external validity and limited scope to account for the ways that interventions are implemented under different circumstances by different people (Deaton, 2020; Koutsouris & Norwich, 2018). While some studies examined non-academic outcomes (eg, self-efficacy, self-confidence, school enjoyment and meta-cognition), heterogeneity and the fact that most interventions were not designed to target these outcomes led to their omission. Potentially, additional lessons conducted by a teacher might arguably have produced similar or even better results than those involving technology (Buchel et al., 2020).

Sensitivity analysis mitigates the potential limitation of studies being conducted with the same software and the potential conflict of interest for researcher-developed software. Other mitigating actions included undertaking pilot searches and taking steps to reduce subjectivity through inter-rater coding. In terms of reported interventions, some older technology is considered along with newer technology. This is not considered to be problematic given coding focused on identifying affordances for personalisation and not technical features. It is also noted that sophisticated intelligent and cognitive tutoring systems did not feature in the analysis, despite several studies exploring such technology being identified during the study search. This was because such research did not meet the eligibility criteria for inclusion (ie, it typically did not involve an experimental approach nor a focus on academic outcomes; see Supporting Information File 1). While the findings of the meta-analysis are inherently limited by the quality of evidence available, the critical appraisal of studies minimises the risk of low-quality research adversely impacting findings.
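One way such a sensitivity analysis can be operationalised is a leave-one-out check: the effect is re-pooled after omitting each study in turn, and a stable estimate suggests no single study (or shared software) drives the result. A sketch under that assumption, using DerSimonian-Laird pooling as in the earlier sketch and placeholder data:

```python
# Leave-one-out sensitivity check: re-pool after omitting each study in
# turn. All effect sizes and variances below are illustrative placeholders.
import numpy as np

def dl_pool(es, var):
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / var
    q = np.sum(w * (es - np.sum(w * es) / np.sum(w)) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c)   # between-study variance
    w_star = 1.0 / (var + tau2)
    return np.sum(w_star * es) / np.sum(w_star)

es = [0.22, 0.31, 0.39, 0.13, 0.08, 0.17]
var = [0.004, 0.006, 0.005, 0.002, 0.003, 0.004]
for i in range(len(es)):
    loo = dl_pool(es[:i] + es[i + 1:], var[:i] + var[i + 1:])
    print(f"Omitting study {i + 1}: pooled effect = {loo:.2f}")
```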
CONCLUSION AND FUTURE RESEARCH
The meta-analysis reveals how technology-supported personalised learning has a statistically significant, if moderate, positive effect on learning outcomes in low- and middle-income contexts. Such interventions are similarly effective for mathematics and literacy learning and whether or not teachers also have an active role in the personalisation. One potentially important implication for both policy and practice is how personalised approaches that adapt or adjust to the learner (eg, their level and/or pace) led to significantly greater learning gains. Whether the inclusion of more adaptive personalisation features in technology-assisted learning environments warrants the likely additional investment necessary for their implementation, however, needs to be further investigated given their development and use is anticipated to be more complex. Another outcome with potential implications for cost and resource decisions is that personalised technology implementation of moderate duration and intensity had similar positive effects to that of stronger duration and intensity, although further research is needed to investigate this. Potentially important for policy and practice too, it should also be noted that personalised technology approaches featuring moderate personalisation affordances can also yield learning rewards.

Findings open up a range of other possibilities for future quantitative and qualitative research. Critically, it is not yet known whether personalised technology can be scaled in a cost-effective and contextually appropriate way. Most existing research reports on 'supplementary' uses of personalised technology outside of regular classroom instruction. Additional research into the viability and comparative effectiveness of teachers in LMICs integrating personalised learning approaches, featuring learners using technology in class and otherwise, would therefore make a strong contribution to informing policy and practice. There is also scope to determine the optimum duration for implementing such interventions and their long-lasting effects on academic achievement and other outcomes (see Bianchi et al., 2020 for a related discussion).
Other valuable future work would include considering the differential role (positive or negative) of personalised technology in terms of different learning domains, location (rural versus urban), gender, disability and baseline achievement level. Assumptions that underpin the use of personalised technologies also warrant consideration. This includes whether there is a risk of perpetuating a narrow idea of what it means to 'succeed' academically (eg, due to an emphasis on 'drill and testing' that may be a feature of some personalised technologies); whether personalised learning risks promoting individualistic learning aspirations (as it often involves students working alone, despite personalised learning not necessarily being restricted to individualised learning); and ethical and privacy considerations (particularly if new approaches integrate AI; UNESCO, 2019).
Following COVID-19, education stands at a time of unprecedented challenge. Of particular concern is that recent progress in closing the attainment gap for the most disadvantaged risks being reversed in our 'new normal'. While the pandemic presents significant issues, it also presents opportunities as the global education community looks to rebuild. In particular, there is a chance to revisit and question basic assumptions about the purpose and nature of education that may have previously been considered impossible or impractical at scale. This meta-analysis provides promising evidence for the effectiveness of technology-supported personalised learning in improving learning outcomes for learners in LMICs.
ACKNOWLEDGEMENTS
The authors thank all colleagues who have in some way supported this work, in particular those based in the EdTech Hub and Professor Carole Torgerson and Dr Christopher Marshall, who acted as critical friends prior to submission. The authors acknowledge the support of the FCDO-funded EdTech Hub (https://edtechhub.org/). Thanks also to Ioannis Kamzolas for assisting with the figure design and to the BJET reviewers for their constructive comments.
CONFLICT OF INTEREST
The authors declare no conflict of interest or ethical concerns.
ETHICS STATEMENT
This research was undertaken in accordance with the BERA Ethical Guidelines for
Educational Research (BERA, 2018).
DATA AVAILABILITY STATEMENT
Additional information (eg, underpinning data) can be obtained by sending a request email
to the corresponding author.
ORCID
Louis Major https://orcid.org/0000-0002-7658-1417
Gill A. Francis https://orcid.org/0000-0002-0795-2544
Maria Tsapali https://orcid.org/0000-0002-3574-3467
ENDNOTES
1 Wilichowski and Cobo (2021). Considering an adaptive learning system? A roadmap for policymakers. World Bank Blogs. https://blogs.worldbank.org/education/considering-adaptive-learning-system-roadmap-policymakers (Accessed 05/02/21).
2 Note, Bulger (2016) observes how more sophisticated technology-enabled personalisation approaches, such as genuinely 'intelligent' tutoring systems, remain mostly aspirational at present.
3 Study [S1] in the meta-analysis.
4 https://sdgs.un.org/goals/goal4.
5 The ability to read, write, speak and listen in a way that enables effective communication and making sense of the world: https://literacytrust.org.uk/information/what-is-literacy/ (accessed 18/12/20).
6 Using the DerSimonian and Laird method.
7 Which can be interpreted using suggested thresholds of 25% for low, 50% for medium and 75% for high heterogeneity (Borenstein, 2009).
8 https://sccei.fsi.stanford.edu/reap/ (accessed 05/02/21). The Rural Education Action Program (REAP) at Stanford University is an international research organisation that aims to help poor students in rural China overcome the barriers many face in gaining a proper education.
9 The five countries represented are all identified by the World Bank as LMICs (https://data.worldbank.org/country/XO): Malawi (low-income); El Salvador and India (lower-middle-income); China and Russia (upper-middle-income, although note participants were typically from disadvantaged communities within this context).
10 http://intro.taolionline.cn/ (accessed 18/12/20): a game-based platform providing free remedial resources accompanied by individualised feedback to increase academic performance and interest in learning.
11 https://reap.fsi.stanford.edu/research/technology/ocal (accessed 18/12/20): a game-based online platform that also features an adaptive learning component (exercise difficulty level automatically adjusts to match individual students' learning progress).
12 [S16] was excluded due to missing data that meant effect sizes could not be estimated. The study examined the effectiveness of a computer-assisted learning intervention in mathematics over traditional teaching in primary schools in El Salvador. Assignment to additional technology-supported lessons significantly increased math scores by 0.21σ when overseen by a supervisor and by 0.24σ when instructed by teachers.
13 Corrections made on 4 June 2021, after first online publication: Table 1 has been updated in this version.
14 Correction added on 4 June 2021, after first online publication: 'overall effect size of 0.34' has been corrected to 'overall effect size of 0.35' in this version.
15 https://educationendowmentfoundation.org.uk/evidence-summaries/about-the-toolkits/attainment/ (accessed 18/12/20).
16 https://data.worldbank.org/income-level/low-and-middle-income (accessed 18/12/20).
REFERENCES
References marked with an asterisk (*) are studies included in the meta-analysis.
Acock, A. C. (2014). A gentle introduction to Stata (4th ed.). Stata Press.
Adam, T., & Haßler, B. (2020). The EdTech hub searchable publications database (SPuD) v2.0. EdTech Hub.
Ahn, S., Ames, A. J., & Myers, N. D. (2012). A review of meta-analyses in education: Methodological strengths and weaknesses. Review of Educational Research, 82(4), 436–476. https://doi.org/10.3102/0034654312458162
Angrist, N., Evans, D. K., Filmer, D., Glennerster, R., Rogers, F. H., & Sabarwal, S. (2020). How to improve education outcomes most efficiently? (Policy Research Working Paper 9450). World Bank Group. http://documents1.worldbank.org/curated/en/801901603314530125/pdf/How-to-Improve-Education-Outcomes-Most-Efficiently-A-Comparison-of-150-Interventions-Using-the-New-Learning-Adjusted-Years-of-Schooling-Metric.pdf
Arroyo, I., Woolf, B. P., Burelson, W., Muldner, K., Rai, D., & Tai, M. (2014). A multimedia adaptive tutoring system for mathematics that addresses cognition, metacognition and affect. International Journal of Artificial Intelligence in Education, 24(4), 387–426. https://doi.org/10.1007/s40593-014-0023-y
Azevedo, J. P., Hasan, A., Goldemberg, D., Iqbal, S. A., & Geven, K. (2020). Simulating the potential impacts of COVID-19 school closures on schooling and learning outcomes: A set of global estimates. The World Bank. https://doi.org/10.1596/1813-9450-9284
*Bai, Y., Mo, D., Zhang, L., Boswell, M., & Rozelle, S. (2016). The impact of integrating ICT with teaching: Evidence from a randomized controlled trial in rural schools in China. Computers & Education, 96(1), 1–14. https://doi.org/10.1016/j.compedu.2016.02.005
*Bai, Y., Tang, B., Wang, B., Auden, E., & Mandell, B. (2018). Impact of online computer assisted learning on education: Evidence from a randomized controlled trial in China (Working Paper 329; Rural Education Action Program (REAP), p. 51). Stanford University.
Bakker, A., Cai, J., English, L., Kaiser, G., Mesa, V., & Van Dooren, W. (2019). Beyond small, medium, or large: Points of consideration when interpreting effect sizes. Educational Studies in Mathematics, 102(1), 1–8. https://doi.org/10.1007/s10649-019-09908-4
*Banerjee, A. V., Cole, S., Duflo, E., & Linden, L. (2007). Remedying education: Evidence from two randomized experiments in India. The Quarterly Journal of Economics, 122(3), 1235–1264. https://doi.org/10.1162/qjec.122.3.1235
Bartolucci, A. A., & Hillegass, W. B. (2010). Overview, strengths, and limitations of systematic reviews and meta-analyses. In F. Chiappelli (Ed.), Evidence-based practice: Toward optimizing clinical outcomes (pp. 17–33). Springer.
*Bettinger, E., Fairlie, R. W., Kapuza, A., Kardanova, E., Loyalka, P., & Zakharov, A. (2020). Does EdTech substitute for traditional learning? Experimental estimates of the educational production function (Working Paper No. 26967; NBER Working Paper Series). National Bureau of Economic Research. https://doi.org/10.3386/w26967
Bianchi, N., Lu, Y., & Song, H. (2020). The effect of computer-assisted learning on students' long-term development (SSRN Scholarly Paper ID 3309169). Social Science Research Network. https://doi.org/10.2139/ssrn.3309169
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470743386
British Educational Research Association [BERA] (2018). Ethical guidelines for educational research (4th ed.). London. https://www.bera.ac.uk/researchers-resources/publications/ethical-guidelines-for-educational-research-2018
*Buchel, K., Jakob, M., Kuhnhanss, C., Steffen, D., & Brunetti, A. (2020). The relative effectiveness of teachers and learning software: Evidence from a field experiment in El Salvador. University of Bern, Department of Social Sciences.
Building Evidence in Education. (2015). Assessing the strength of evidence in the education sector. The Building Evidence in Education (BE2) working group. https://www.usaid.gov/sites/default/files/documents/1865/BE2_Guidance_Note_ASE.pdf
Bulger, M. (2016). Personalized learning: The conversations we're not having (Data & Society Working Paper, pp. 1–29).
Cheung, A. C., & Slavin, R. E. (2012). How features of educational technology applications affect student reading outcomes: A meta-analysis. Educational Research Review, 7(3), 198–215. https://doi.org/10.1016/j.edurev.2012.05.002
Chong, A., Gonzalez-Navarro, M., Karlan, D., & Valdivia, M. (2020). Do information technologies improve teenagers' sexual education? Evidence from a randomized evaluation in Colombia. The World Bank Economic Review, 34(2), 371–392. https://doi.org/10.1093/wber/lhy031
Conn, K. M. (2014). Identifying effective education interventions in Sub-Saharan Africa: A meta-analysis of rigorous impact evaluations (Doctoral dissertation). Columbia University.
Cuban, L. (2018). A continuum of personalized learning (second draft). https://larrycuban.wordpress.com/2018/09/27/second-draft-a-continuum-of-personalized-learning/
Deaton, A. (2020). Randomization in the tropics revisited: A theme and eleven variations (No. w27600). National Bureau of Economic Research. https://doi.org/10.3386/w27600
Deeks, J. J., Higgins, J. P. T., & Altman, D. G. (2020). Analysing data and undertaking meta-analyses. In J. P. T. Higgins, J. Thomas, J. Chandler, M. Cumpston, T. Li, M. J. Page, & V. A. Welch (Eds.), Cochrane handbook for systematic reviews of interventions version 6.1 (updated September 2020). Cochrane.
du Boulay, B., Poulovasillis, A., Holmes, W., & Mavrikis, M. (2018). Artificial intelligence and big data technologies to close the achievement gap. In R. Luckin (Ed.), Enhancing learning and teaching with technology (pp. 256–285). UCL Institute of Education Press.
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. https://doi.org/10.1111/j.0006-341X.2000.00455.x
EdTech Hub. (2020). Save our future: Averting an education catastrophe for the world's children. https://reliefweb.int/report/world/save-our-future-averting-education-catastrophe-world-s-children
Education Endowment Foundation. (2019). Digital technology: Teaching and learning toolkit. https://educationendowmentfoundation.org.uk/pdf/toolkit/?id=134&t=TeachingandLearningToolkit&e=134&s=
Escueta, M., Quan, V., Nickow, A. J., & Oreopoulos, P. (2017). Education technology: An evidence-based review. National Bureau of Economic Research.
Evans, D. K., & Acosta, A. M. (2020). Education in Africa: What are we learning? (CGD Working Paper 542). Center for Global Development. https://www.cgdev.org/publication/education-africa-what-are-we-learning
FitzGerald, E., Jones, A., Kucirkova, N., & Scanlon, E. (2018). A literature synthesis of personalised technology-enhanced learning: What works and why. Research in Learning Technology, 26. https://doi.org/10.25304/rlt.v26.2095
FitzGerald, E., Kucirkova, N., Jones, A., Cross, S., Ferguson, R., Herodotou, C., Hillaire, G., & Scanlon, E. (2018). Dimensions of personalisation in technology-enhanced learning: A framework and implications for design. British Journal of Educational Technology, 49(1), 165–181. https://doi.org/10.1111/bjet.12534
Fu, R., Gartlehner, G., Grant, M., Shamliyan, T., Sedrakyan, A., Wilt, T. J., & Trikalinos, T. A. (2011). Conducting quantitative synthesis when comparing medical interventions: AHRQ and the Effective Health Care Program. Journal of Clinical Epidemiology, 64(11), 1187–1197. https://doi.org/10.1016/j.jclinepi.2010.08.010
Gambari, I. A., Gbodi, B. E., Olakanmi, E. U., & Abalaka, E. N. (2016). Promoting intrinsic and extrinsic motivation among chemistry students using computer-assisted instruction. Contemporary Educational Technology, 7(1), 25–46. https://doi.org/10.30935/cedtech/6161
Gambari, I. A., Yusuf, M. O., & Thomas, D. A. (2015). Effects of computer-assisted STAD, LTM and ICI cooperative learning strategies on Nigerian secondary school students' achievement, gender and motivation in physics. Journal of Education and Practice, 6(19), 16–28. ISSN 2222-1735.
Global Education Evidence Advisory Panel. (2020). Cost-effective approaches to improve global learning. World Bank, FCDO and Building Evidence in Education. http://documents1.worldbank.org/curated/en/719211603835247448/pdf/Cost-Effective-Approaches-to-Improve-Global-Learning-What-Does-Recent-Evidence-Tell-Us-Are-Smart-Buys-for-Improving-Learning-in-Low-and-Middle-Income-Countries.pdf
Groff, J. S. (2017). The state of the field & future direction. www.curriculumredesign.org
Haidich, A. B. (2010). Meta-analysis in medical research. Hippokratia, 14(Suppl. 1), 29–37.
Harris, D. N. (2009). Toward policy-relevant benchmarks for interpreting effect sizes: Combining effects with costs. Educational Evaluation and Policy Analysis, 31(1), 3–29. https://doi.org/10.3102/0162373708327524
Hedges, L. V. (1982). Estimation of effect size from a series of independent experiments. Psychological Bulletin, 92(2), 490–499. https://doi.org/10.1037/0033-2909.92.2.490
Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (2020). Cochrane handbook for systematic reviews of interventions version 6.1 (updated September 2020). Cochrane. www.training.cochrane.org/handbook
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in meta-analysis. Statistics in Medicine, 21, 1539–1558. https://doi.org/10.1002/sim.1186
Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177. https://doi.org/10.1111/j.1750-8606.2008.00061.x
Hirsh-Pasek, K., Zosh, J. M., Golinkoff, R. M., Gray, J. H., Bobb, M. B., & Kaufman, J. (2015). Putting education in "educational" apps: Lessons from the science of learning. Psychological Science in the Public Interest, 16(1), 3–34. https://doi.org/10.1177/1529100615569721
Holmes, W., Anastopoulou, S., Schaumburg, H., & Mavrikis, M. (2018). Technology-enhanced personalised learning: Untangling the evidence. Robert Bosch Stiftung GmbH. http://www.studie-personalisiertes-lernen.de/en/
Hong, Q. N., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., Dagenais, P., Gagnon, M.-P., Griffiths, F., Nicolau, B., O'Cathain, A., Rousseau, M.-C., Vedel, I., & Pluye, P. (2018). The mixed methods appraisal tool (MMAT) version 2018 for information professionals and researchers. Education for Information, 34(4), 285–291. https://doi.org/10.3233/EFI-180221
Innovations for Poverty Action (2015). Targeted lessons to improve basic skills. Innovations for Poverty Action. https://www.poverty-action.org/sites/default/files/publications/TCAI_Final%20Results_040115.pdf
Ito, H., Kasai, K., & Nakamuro, M. (2019). Does computer-aided instruction improve children's cognitive and non-cognitive skills?: Evidence from Cambodia (No. 19040; Discussion Papers). Research Institute of Economy, Trade and Industry (RIETI). https://ideas.repec.org/p/eti/dpaper/19040.html
Jones, A., Scanlon, E., Gaved, M., Blake, C., Collins, T., Clough, G., Kerawalla, L., Littleton, K., Mulholland, P., Petrou, M., & Twiner, A. (2013). Challenges in personalisation: Supporting mobile science inquiry learning across contexts. Research and Practice in Technology Enhanced Learning, 8(1), 21–42. ISSN 1793-2068.
Jones, L. E., & Casey, M. C. (2015). Personalized learning: Policy & practice recommendations for meeting the needs of students with disabilities. National Center for Learning Disabilities. http://www.ncld.org/wp-content/uploads/2016/04/Personalized-Learning.WebReady.Pdf
Kaffenberger, M. (2020). How much learning may be lost in the long-run from COVID-19 and how can mitigation strategies help? Brookings Institution Blog. https://www.brookings.edu/blog/education-plus-development/2020/06/15/how-much-learning-may-be-lost-in-the-long-run-from-covid-19-and-how-can-mitigation-strategies-help/
Kaye, T., & Ehren, M. (2021). Computer-assisted instruction tools: A model to guide use in low- and middle-income countries. International Journal of Education & Development Using Information & Communication Technology, 17(1), 82–99.
Kim, R., Olfman, L., Ryan, T., & Eryilmaz, E. (2014). Leveraging a personalized system to improve self-directed learning in online educational environments. Computers & Education, 70, 150–160. https://doi.org/10.1016/j.compedu.2013.08.006
Kishore, D., & Shah, D. (2019). Using technology to facilitate educational attainment: Reviewing the past and looking to the future (Pathways for Prosperity Commission Background Paper Series No. 23, p. 47).
Koutsouris, G., & Norwich, B. (2018). What exactly do RCT findings tell us in education research? British Educational Research Journal, 44(6), 939–959. https://doi.org/10.1002/berj.3464
Kucirkova, N. (2018). A taxonomy and research framework for personalization in children's literacy apps. Educational Media International, 55(3), 255–272. https://doi.org/10.1080/09523987.2018.1512446
*Kumar, A., & Mehra, A. (2018). Remedying education with personalized homework: Evidence from a randomized field experiment in India (SSRN Scholarly Paper ID 2756059). Social Science Research Network. https://doi.org/10.2139/ssrn.2756059
*Lai, F., Luo, R., Zhang, L., Huang, X., & Rozelle, S. (2015). Does computer-assisted learning improve learning outcomes? Evidence from a randomized experiment in migrant schools in Beijing. Economics of Education Review, 47, 34–48. https://doi.org/10.1016/j.econedurev.2015.03.005
*Lai, F., Zhang, L., Bai, Y., Liu, C., Shi, Y., Chang, F., & Rozelle, S. (2016). More is not always better: Evidence from a randomised experiment of computer-assisted learning in rural minority schools in Qinghai. Journal of Development Effectiveness, 8(4), 449–472. https://doi.org/10.1080/19439342.2016.1220412
*Lai, F., Zhang, L., Qu, Q., Hu, X., Shi, Y., Boswell, M., & Rozelle, S. (2012). Does computer-assisted learning improve learning outcomes? Evidence from a randomized experiment in public schools in rural minority areas in Qinghai, China (Working Paper 237, p. 41). Rural Education Action Project (REAP).
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis (p. ix, 247). Sage Publications Inc.
Ma, Y., Fairlie, R. W., Loyalka, P., & Rozelle, S. (2020). Isolating the "Tech" from EdTech: Experimental evidence on computer assisted learning in China (No. w26953). National Bureau of Economic Research. https://doi.org/10.3386/w26953
Major, L., & Francis, G. (2020). Technology-supported personalised learning: Rapid evidence review [Report]. EdTech Hub. https://doi.org/10.5281/zenodo.3948175
Maseleno, A., Sabani, N., Huda, M., Ahmad, R., Jasmi, K. A., & Basiron, B. (2018). Demystifying learning analytics in personalised learning. International Journal of Engineering & Technology, 7(3), 1124. https://doi.org/10.14419/ijet.v7i3.9789
McClure, L. V., Yonezawa, S., & Jones, M. (2010). Can school structures improve teacher-student relationships? The relationship between advisory programs, personalization and students' academic achievement. Education Policy Analysis Archives, 18, 1–21. https://doi.org/10.14507/epaa.v18n17.2010
McEwan, P. J. (2015). Improving learning in primary schools of developing countries: A meta-analysis of randomized experiments. Review of Educational Research, 85(3), 353–394. https://doi.org/10.3102/0034654314553127
*Mo, D., Bai, Y., Shi, Y., Abbey, C., Zhang, L., Rozelle, S., & Loyalka, P. (2020). Institutions, implementation, and program effectiveness: Evidence from a randomized evaluation of computer-assisted learning in rural China (p. 102487). Stanford University. https://linkinghub.elsevier.com/retrieve/pii/S0304387820300626
*Mo, D., Zhang, L., Luo, R., Qu, Q., Huang, W., Wang, J., Qiao, Y., Boswell, M., & Rozelle, S. (2014). Integrating computer-assisted learning into a regular curriculum: Evidence from a randomised experiment in rural schools in Shaanxi. Journal of Development Effectiveness, 6(3), 300–323. https://doi.org/10.1080/19439342.2014.911770
*Mo, D., Zhang, L., Wang, J., Huang, W., Shi, Y., Boswell, M., & Rozelle, S. (2015). Persistence of learning gains from computer assisted learning: Experimental evidence from China. Journal of Computer Assisted Learning, 31(6), 562–581. https://doi.org/10.1111/jcal.12106
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097
*Muralidharan, K., Singh, A., & Ganimian, A. J. (2019). Disrupting education? Experimental evidence on technology-aided instruction in India. American Economic Review, 109(4), 1426–1460. https://doi.org/10.1257/aer.20171112
Natriello, G. (2017). The adaptive learning landscape. Teachers College Record, 119(3), 46.
Office of Educational Technology (2017). Reimagining the role of technology in education: 2017 National Education Technology Plan update. U.S. Department of Education. http://tech.ed.gov
Ogan, A., Walker, E., Baker, R. S. J. D., Rebolledo Mendez, G., Jimenez Castro, M., Laurentino, T., & de Carvalho, A. (2012). Collaboration in cognitive tutor use in Latin America: Field study and design recommendations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1381–1390). https://doi.org/10.1145/2207676.2208597
Pane, J. F., Steiner, E. D., Baird, M. D., & Hamilton, L. S. (2015). Continued progress: Promising evidence on personalized learning. RAND Corporation. https://www.rand.org/pubs/research_reports/RR1365.html
Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., & Mirriahi, N. (2019). Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology, 50(1), 128–138. https://doi.org/10.1111/bjet.12592
Perera, M., & Aboal, D. (2017). Evaluación del impacto de la plataforma adaptativa de matemática en los resultados de los aprendizajes. Centro de Investigaciones Económicas. https://www.ceibal.edu.uy/storage/app/media/evaluacion-monitoreo/CINVE%20Informe_PAM_03102017.pdf
Perera, M., & Aboal, D. (2019). The impact of a mathematics computer-assisted learning platform on students' mathematics test scores. Maastricht Economic and Social Research Institute on Innovation and Technology (UNU-MERIT). https://www.merit.unu.edu/publications/wppdf/2019/wp2019-007.pdf
Pigott, T. D., & Polanin, J. R. (2020). Methodological guidance paper: High-quality meta-analysis in a systematic review. Review of Educational Research, 90(1), 24–46. https://doi.org/10.3102/0034654319877153
*Pitchford, N. J. (2015). Development of early mathematical skills with a tablet intervention: A randomized control trial in Malawi. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00485
*Pitchford, N. J., Chigeda, A., & Hubber, P. J. (2019). Interactive apps prevent gender discrepancies in early-grade mathematics in a low-income country in sub-Sahara Africa. Developmental Science, 22(5), e12864. https://doi.org/10.1111/desc.12864
QSR International Pty Ltd (2020). NVivo (released in March 2020). https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home
Rajendran, R., & Muralidharan, A. (2013). Impact of Mindspark's adaptive logic on student learning. In 2013 IEEE Fifth International Conference on Technology for Education (T4E 2013) (pp. 119–122). https://doi.org/10.1109/T4E.2013.36
Redding, S. (2016). Competencies and personalized learning. In M. Murphy, S. Redding, & J. Twyman (Eds.), Handbook on personalized learning for states, districts, and schools. IAP.
Robinson, C., & Sebba, J. (2010). Personalising learning through the use of technology. Computers & Education, 54(3), 767–775. https://doi.org/10.1016/j.compedu.2009.09.021
Rodriguez-Segura, D. (2020). Educational technology in developing countries: A systematic review. University of Virginia. https://curry.virginia.edu/sites/default/files/uploads/epw/72_Edtech_in_Developing_Countries.pdf
Ryan, R. (2016). Cochrane Consumers and Communication Review Group: Meta-analysis. http://cccrg.cochrane.org
Sawada, Y., Mahmud, M., Seki, M., Le, A., & Kawarazaki, H. (2020). Fighting the learning crisis in developing countries: A randomized experiment of self-learning at the right level (SSRN Scholarly Paper ID 3471021). Social Science Research Network. https://doi.org/10.2139/ssrn.3471021
Selwyn, N., & Jandrić, P. (2020). Postdigital living in the age of Covid-19: Unsettling what we see as possible. Postdigital Science and Education, 2(3), 989–1005. https://doi.org/10.1007/s42438-020-00166-9
Shi, L., & Lin, L. (2019). The trim-and-fill method for publication bias: Practical guidelines and recommendations based on a large database of meta-analyses. Medicine, 98(23), e15987. https://doi.org/10.1097/MD.0000000000015987
Slavin, R. E. (2008). What works? Issues in synthesizing educational program evaluations. Educational Researcher, 37(1), 5–14. https://www.jstor.org/stable/30133882
Tang, B., Ting, T. T., Wu, C. I., Ma, Y., Mo, D., Hung, W. T., & Rozelle, S. (2020). The impact of online computer assisted learning at home for disadvantaged children in Taiwan: Evidence from a randomized experiment. Sustainability, 12(23), 10092. https://doi.org/10.3390/su122310092
Tauson, M., & Stannard, L. (2018). Edtech for learning in emergencies and displaced settings: A rigorous review and narrative synthesis. Save the Children. https://www.savethechildren.org.uk/content/dam/global/reports/education-and-child-protection/edtech-learning.pdf
U.S. Department of Education. (2020). What Works Clearinghouse procedures and standards handbook (Version 4.1).
UNESCO. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development (Working Papers on Education Policy No. 7). https://unesdoc.unesco.org/ark:/48223/pf0000366994
UNESCO. (2020). Education: From disruption to recovery. https://en.unesco.org/covid19/educationresponse
Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34(3), 229–243. https://doi.org/10.2190/FLHV-K4WA-WPVQ-H0YM
Wilichowski, T., & Cobo, C. (2021). Considering an adaptive learning system? A roadmap for policymakers. World Bank Blogs. Retrieved February 5, 2021, from https://blogs.worldbank.org/education/considering-adaptive-learning-system-roadmap-policymakers
Xie, H., Chu, H.-C., Hwang, G.-J., & Wang, C.-C. (2019). Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. Computers & Education, 140, 103599. https://doi.org/10.1016/j.compedu.2019.103599
*Yang, Y., Zhang, L., Zeng, J., Pang, X., Lai, F., & Rozelle, S. (2013). Computers and the academic performance of elementary school-aged girls in China's poor communities. Computers & Education, 60(1), 335–346. https://doi.org/10.1016/j.compedu.2012.08.011
Zhang, L., Basham, J. D., & Yang, S. (2020). Understanding the implementation of personalized learning: A research synthesis. Educational Research Review, 31, 100339. https://doi.org/10.1016/j.edurev.2020.100339
Zualkernan, I. A. (2016). Personalized learning for the developing world. In B. Gros, Kinshuk, & M. Maina (Eds.), The future of ubiquitous learning (pp. 241–258). Springer.
SUPPORTING INFORMATION
Additional Supporting Information may be found online in the Supporting Information section.
How to cite this article: Major, L., Francis, G. A., & Tsapali, M. (2021). The effectiveness of technology-supported personalised learning in low- and middle-income countries: A meta-analysis. British Journal of Educational Technology, 52, 1935–1964. https://doi.org/10.1111/bjet.13116
APPENDIX A — SEARCH TERMS
GOOGLE SCHOLAR AND SCOPUS, EDUCATION RESOURCES INFORMATION CENTER (ERIC) AND WEB OF SCIENCE
"Personalised Adaptive Learning"; "Personalized Adaptive Learning"; "Personalised technology-enhanced learning"; "Personalized technology-enhanced learning"; "Technology-enhanced personalised learning"; "Technology-enhanced personalized learning"; "Personalised TEL"; "Personalized TEL"; "Personalised learning environment"; "Personalized learning environment"; "Teaching at the right level"; "Combined Activities for Maximized Learning"

The search string

AND "Personalised education" AND ("Edtech" OR "Education technology" OR "digital learning" OR "eLearning" OR school) AND ("africa" OR "LMIC" OR "developing world" OR "developing country*" OR "ICT4D" OR "global south");

also followed searches for:

"Personalized education"; "Personalised learning"; "Personalized learning"; "adaptive learning"; "adapting learning"; "Differentiated learning"; "Computer-assisted instruction"; "Computer-assisted learning"; "Computer-aided learning"; "Intelligent tutoring system";
"Exploratory learning environments"; "Adaptive Educational Hypermedia"; "Adaptive hypermedia"; "Personalised Adaptive Learning"; "Personalized Adaptive Learning".
SEARCHABLE PUBLICATION DATABASE (SPUD)
“Teaching at the Right Level”; “TaRL”; “personalized”; “adaptive learning”; “intelligent tutoring
system”; “computer assisted learning”
APPENDIX B
STUDY INCLUSION CRITERIA
POPULATION
Inclusion: Involving elementary and/or secondary school-aged learners (from 5 to 18 years old); empirical research taking place in countries defined as low- or middle-income by the World Bank.16
Exclusion: Involving learners in higher education or aged 19 years or older; empirical research taking place in countries defined as high-income by the World Bank.

INTERVENTION
Inclusion: Involved technology-supported personalisation (ie, technology enabling or supporting learning based upon particular characteristics of relevance or importance to learners); an intervention duration/intensity of at least once a week for 6 weeks or more; taking place inside or outside school (eg, non-formal education).
Exclusion: Not including at least one element of technology-supported personalisation (ie, focusing on access to technology with little consideration for how this is personalised to the needs of learners, or personalised learning with no use of technology); an intervention duration/intensity of less than 6 weeks.

COMPARATOR
Inclusion: Learners using non-personalised learning software or learning in traditional (or supplementary) settings with no technology.
Exclusion: Comparisons to an unmatched group not part of the intervention, or no control group.

OUTCOMES
Inclusion: Reporting effects on academic performance measured by grades or performance on tests (including those developed by researchers).
Exclusion: Reporting non-academic outcomes such as engagement or motivation without considering academic performance.

STUDY DESIGN
Inclusion: Describing a randomised experimental design with an independent comparison group.
Exclusion: Reviews and meta-analyses, or providing a 'lessons learned' account without presenting any empirical evidence.

LIMITS
Inclusion: Published 2007–2020, corresponding with the introduction of major mobile operating systems in 2007 (iPhone) and 2008 (Android phones), as well as 2009 (Android tablet) and 2010 (iPad); English language only.
Exclusion: Studies published before 2007.
APPENDIX C — META-REGRESSION ANALYSIS RESULTS
Model 1 (Academic Outcomes): coefficient = 0.013, SE = 0.052, df = 20, p = 0.801, 95% CI −0.089 to 0.116, R² = 0.00
  Constant: 0.162, SE = 0.039, df = 20, p = 0.000, 95% CI 0.086 to 0.238
Model 2 (Personalisation Level***): coefficient = 0.209, SE = 0.048, df = 13, p = 0.000, 95% CI 0.115 to 0.303, R² = 72.07
  Constant***: 0.125, SE = 0.023, df = 13, p = 0.000, 95% CI 0.075 to 0.172
Model 3 (Personalisation Delivery): coefficient = −0.042, SE = 0.091, df = 13, p = 0.641, 95% CI −0.220 to 0.135, R² = 0.00
  Constant*: 0.229, SE = 0.110, df = 13, p = 0.037, 95% CI 0.014 to 0.444
Model 4 (Intensity × Duration): coefficient = −0.064, SE = 0.063, df = 14, p = 0.313, 95% CI −0.186 to 0.059, R² = 2.05
  Constant: 0.083, SE = 0.093, df = 14, p = 0.372, 95% CI −0.113 to 0.306

Note: Figures are rounded to three decimal places. Statistical significance: *p < 0.05, **p < 0.01, ***p < 0.001. Predictor variable codes: Learning Outcome: 1 = Maths, 0 = Literacy; Personalisation Level: 1 = Strong, 0 = Medium; Personalisation Delivery: 1 = Technology, 0 = Technology + Teacher; Intensity × Duration: 1 = Strong, 0 = Moderate.
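To make the structure of these models concrete, the sketch below fits a meta-regression with one binary moderator, in the spirit of Model 2 (Personalisation Level: 1 = Strong, 0 = Medium), using inverse-variance weighted least squares via statsmodels. This is a simplified sketch: the paper's exact estimator may differ, and the effect sizes, variances and codes here are illustrative placeholders, not the included studies' data.

```python
# Meta-regression with a single binary moderator via weighted least
# squares. Data are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

es = np.array([0.35, 0.30, 0.22, 0.13, 0.08, 0.17])         # study effect sizes
var = np.array([0.010, 0.020, 0.010, 0.005, 0.004, 0.006])  # their variances
strong = np.array([1, 1, 1, 0, 0, 0])                       # 1 = Strong, 0 = Medium

X = sm.add_constant(strong)      # the constant estimates the 'Medium' group mean
fit = sm.WLS(es, X, weights=1.0 / var).fit()
print(fit.params)                # [constant, moderator coefficient]
print(fit.pvalues)
```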
APPENDIX D
COST-EFFECTIVENESS CONSIDERATIONS REPORTED BY STUDIES INCLUDED IN THE META-ANALYSIS
Muralidharan et al. (2019) report that, in terms of total costs, delivery of the Mindspark programme had an unsubsidised cost of INR 1000 per student (USD 15) per month (even when implemented with high fixed costs, without economies of scale and based on 58% attendance). The authors conclude that costs at policy-relevant scales are likely to be lower since the (high) fixed costs of product development have already been incurred. If implemented at even a modest scale (50 government schools), they estimate that per-student costs reduce to USD 4 per month (including hardware). For greater than 1000 schools, per-student marginal costs (software maintenance and technical support) are estimated at USD 2 annually. Because these can be amortised over a large number of students, the fixed cost of developing personalised learning software per student is considered to be potentially cost-effective at scale (Muralidharan et al., 2019).

Other research draws similar conclusions, suggesting that the per-learner cost may be as low as USD 1 if implemented for several thousand students (Kumar & Mehra, 2018). It is also noted how the marginal costs of shifting from a lower to a higher level of personalised software may be low because learners already have access to the equipment required (Bettinger et al., 2020).

Finally, it is reported that online personalised learning programmes have the potential to be more cost-effective than offline ones (Bettinger et al., 2020). Bai et al. (2018) highlight how the online cost per standard deviation raised is expected to be 129 RMB (USD 20) per student, whereas that of similar offline programmes is 214 RMB (USD 33) per student.
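The cost-effectiveness comparison above rests on a simple ratio: programme cost per student divided by the standard deviations of learning gained. The sketch below illustrates the arithmetic with hypothetical inputs chosen to land near the 129 RMB per standard deviation figure quoted from Bai et al. (2018); the underlying per-student cost and effect size are invented for illustration, not taken from the study.

```python
# Cost per standard deviation of learning gained: the simple ratio used
# in cost-effectiveness comparisons. Inputs below are hypothetical.
def cost_per_sd(cost_per_student, sd_gain):
    """Programme cost per student divided by SDs of learning gained."""
    return cost_per_student / sd_gain

# eg, a hypothetical 29 RMB per-student cost and a 0.225 SD gain imply
# roughly 129 RMB per SD, the order of the online figure quoted above.
print(round(cost_per_sd(29, 0.225)))   # -> 129
```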
... Menurut Qureshi [4], literasi keberlanjutan dapat ditingkatkan melalui media pembelajaran yang tidak hanya informatif tetapi juga interaktif, sehingga membangun pengetahuan, sikap, dan perilaku siswa terkait keberlanjutan. Kajian terdahulu menunjukkan bahwa integrasi teknologi dalam pendidikan, seperti e-book interaktif, mampu meningkatkan hasil belajar siswa dalam berbagai bidang, termasuk sains dan matematika [5], [6], [7]. Teknologi pembelajaran yang mendukung personalisasi juga menunjukkan dampak positif signifikan terhadap hasil belajar siswa, terutama di negara-negara berpenghasilan rendah dan menengah [6]. ...
... Kajian terdahulu menunjukkan bahwa integrasi teknologi dalam pendidikan, seperti e-book interaktif, mampu meningkatkan hasil belajar siswa dalam berbagai bidang, termasuk sains dan matematika [5], [6], [7]. Teknologi pembelajaran yang mendukung personalisasi juga menunjukkan dampak positif signifikan terhadap hasil belajar siswa, terutama di negara-negara berpenghasilan rendah dan menengah [6]. Namun, efektivitas teknologi dalam pendidikan sangat bergantung pada bagaimana teknologi tersebut diintegrasikan ke dalam proses pembelajaran dan bagaimana guru serta siswa didukung untuk menggunakannya [8], [9], [10]. ...
Article
Full-text available
Penelitian ini bertujuan untuk mengembangkan buku ajar elektronik interaktif menggunakan Heyzine pada materi perubahan iklim untuk meningkatkan literasi keberlanjutan bagi siswa Sekolah Menengah Pertama (SMP). Jenis penelitian ini adalah penelitian pengembangan dengan jenis design and development research (DDR) dari Richey dan Klein melalui empat tahapan utama, yaitu analisis, desain, pengembangan, dan eval__uasi. Subjek penelitian terdiri dari 16 siswa kelas VII SMP Kartikama Metro serta para ahli di bidang teknologi pembelajaran, pembelajaran IPA, dan guru mata pelajaran IPA. Instrumen penelitian yang digunakan meliputi dua jenis angket, yaitu angket respon siswa untuk uji pra-pemakaian dan angket uji validasi oleh para ahli. Hasil validasi e-book berbasis Heyzine interaktif tentang materi perubahan iklim menunjukkan bahwa buku ajar ini telah memenuhi kriteria validitas, dengan skor keseluruhan 3,46 yang masuk dalam kategori valid. Selain itu, hasil uji pra-penggunaan oleh siswa menunjukkan skor sebesar 91 persen, yang termasuk dalam kategori sangat praktis. Dengan hasil ini, buku ajar ini dapat dilakukan pengujian lanjutan, yaitu melalui uji coba pemakaian di kelas untuk mengukur keefektifannya dalam meningkatkan pemahaman siswa mengenai literasi keberlanjutan.
... It is an important part of current education systems because it is an effective way to teach people [55]. Utilizing technology as a digital tool or platform in current education can adapt to individual needs and preferences, allowing personalized learning experiences [56,57]. By evaluating the performance and development of the students, technology can provide them with individualized content, resources, and feedback [58,59]. ...
Article
Full-text available
Contriving technology is a trend in current TEFL in higher education. IT-Based Integrated Skills Approach (ITBISA) combines IT-based approaches with the EFL education method, ISA (Integrated Skills Approach). This study explores how ITBISA is implemented in teaching-learning and examines participants’ perspectives on its effectiveness. This study used a mixed-method research design. Data were collected through classroom observations and open-ended questionnaires and then analyzed thematically to identify key patterns. Initially, participants sourced information directly from the internet without verification. Integrating AI and other digital tools reflects a desire to adopt new technological advancements. This underscores the importance of innovative technology in creating and distributing college educational materials. Consequently, we argue that the IT-based integrated skills approach can effectively enhance students’ digital literacy and English proficiency. The findings offer a theoretical concept of how ITBISA is implemented. Figure 4 demonstrates how integrating digital tools, AI, and the Integrated Skills Approach fosters a dynamic, interactive learning environment. The findings also indicate a positive student response to ITBISA, with active participation, effective use of mobile devices, and task completion. This approach enhances instruction, making learning more efficient. ITBISA in EFL is a promising method for implementing the teaching-learning process to improve English proficiency and digital literacy.
... Opportunities exist to leverage technology for equity and inclusion in education (Major et al., 2021). Personalized learning through adaptive platforms can cater to diverse ...
Article
Full-text available
The integration of digital literacy and technology into Arabic language education has emerged as a transformative approach to addressing both educational and cultural objectives in Madrasah Ibtidaiyah. This research investigates the synergistic relationship between modern technological tools and traditional Islamic teachings, focusing on Adab and Tahfidz programs as central components of the curriculum. By leveraging digital resources, educators aim to enhance language proficiency while simultaneously fostering moral character, ethical behavior, and spiritual awareness in students. The study employs a qualitative case study approach to gather in-depth insights into pedagogical strategies and technology integration at selected Madrasah Ibtidaiyah schools. Data were collected through interviews, classroom observations, and document analysis involving teachers and students as primary subjects. The research instruments included structured observation checklists, interview protocols, and reflective journals. Data analysis followed thematic coding to identify patterns and themes related to digital literacy practices and their impact on student engagement and learning outcomes. Findings reveal that integrating Adab and Tahfidz programs with technological tools enhances student motivation, facilitates immersive learning, and promotes ethical digital practices. This comprehensive approach supports the development of linguistic skills, cultural awareness, and spiritual values, offering a model for effectively blending tradition and innovation in Arabic language education.
... The integration of technology into education has become well established over time, incorporating various methodologies and adopting robust theoretical frameworks. The development of technology has motivated stakeholders (e.g., policy makers, instructors, pedagogues) to leverage technology in the field of learning and teaching, aiming to enhance students' learning outcomes, as learners remain the ultimate focus of all educational inputs, processes, and outcomes (Major et al., 2021). The transformation from traditional pedagogical methods to modern digital education has been a key research focus for years. ...
Article
Full-text available
This study presents a scientometric analysis of educational technology research through examining highly cited articles published between 2014 and 2023 in 19 SSCI-indexed Q1 journals. Using a weighted approach to address citation bias, we analyzed 1,770 highly cited articles through document co-citation analysis, keyword analysis, and abstract content analysis. The findings reveal eight distinct research clusters, with Technology Acceptance Model, Computational Thinking, and Classroom Approach emerging as dominant clusters. The analysis identifies five major research themes, with AI-Enhanced Learning Technologies comprising 39% of the research focus, followed by equal distribution (17% each) among Virtual Learning Environments, Digital Learning Practices, and Learning Assessment & Feedback, while Educational Technology Integration accounts for 11%. Keyword analysis further indicates the field’s evolution toward more sophisticated technological applications, with terms such as ‘virtual’, ‘online learning’, and ‘learning analytics’ emerging as prominent. This study demonstrates a significant transformation from basic technology integration to advanced AI-driven solutions. The findings provide valuable insights for researchers and practitioners in educational technology, suggesting future research directions should focus on AI integration, immersive technologies, and data-driven approaches while maintaining emphasis on pedagogical effectiveness and student engagement.
... To improve the technical suitability of intelligent educational tools, a multi-dimensional integration strategy should be adopted to optimise the whole process from technical design to implementation and application. Intelligent algorithms should be deeply integrated with educational theory: multi-dimensional evaluation models grounded in cognitive science and learning analytics should move beyond single indicators of accuracy and speed towards a comprehensive evaluation system covering cognitive processes, learning styles, and thinking styles [9]. In terms of technical architecture, modular design and microservices allow the system to adapt flexibly to different teaching scenarios and hardware environments, reducing dependence on specific equipment. ...
Article
Full-text available
This study explores the path and effectiveness of intelligent educational tools for personalised learning, and identifies three major problems: insufficient technology adaptability, difficulties in instructional design, and barriers to student independent learning. The study proposes three paths: improving technology adaptability, optimising instructional design methods, and building a system for cultivating students' independent learning abilities. Empirical research shows that smart tools help intermediate and advanced students the most, but regional differences are pronounced. The research results provide a theoretical basis and practical guidance for the design of intelligent education platforms, teacher training, and the balanced deployment of resources.
Article
The purpose of this study was to analyse educators’ decisions on the continued use of the virtual learning environment (VLE) Blackboard and its associated e-learning technologies in the classroom within the public school system. This cross-sectional descriptive quantitative research collected 306 responses from educators in 30 public schools in Gauteng Province, South Africa. The results revealed that the empirical mean of performance expectancy (PEY) was lower than the hypothesised population’s ‘agree’ range, implying that educators do not believe the deployed technology improves their work performance. Furthermore, the results showed that learning tradition (LTD) has a complementary partial mediation effect on the relationship between PEY and continued use intention (CUI). Additionally, facilitating conditions (FCCs) also have a complementary partial mediation effect on the relationship between PEY and CUI. Conditional mediation (CoMe) along the path SOI x PEY -> LTD -> CUI was statistically significant. In probing the conditional indirect effect, the results showed that as social influence (SOI) increased, the mediation effect of LTD decreased; conversely, as SOI decreased, the mediation effect of LTD increased. This was also evident in the Johnson-Neyman plot. SOI did not moderate the mediation effect of FCC on the relationship between PEY and CUI. This study concludes that social and operational factors strongly influence the dynamics of continued use of a VLE and its associated e-learning technologies, and cannot be discounted by practitioners and policy-makers in their quest to increase technology use in the school system. This study contributes to the unified theory of acceptance and use of technology (UTAUT), advancing the idea that facilitating conditions and learning traditions can act as mediators, and social influence as a moderator, within certain contexts and research settings.
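To make the conditional (moderated) mediation logic above concrete, the following is a minimal sketch, not the authors' analysis pipeline: the variable names PEY, SOI, LTD, and CUI follow the abstract, but the synthetic data, coefficients, and bootstrap setup are assumptions for illustration only.

```python
# Minimal sketch of a conditional (moderated) mediation analysis of the kind
# described above: PEY -> LTD -> CUI, with SOI moderating the PEY -> LTD path.
# Illustrative only; the synthetic data and coefficients are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 306  # sample size reported in the abstract
PEY = rng.normal(size=n)
SOI = rng.normal(size=n)
LTD = 0.4 * PEY - 0.2 * PEY * SOI + rng.normal(size=n)   # mediator model
CUI = 0.5 * LTD + 0.3 * PEY + rng.normal(size=n)         # outcome model

def indirect_effect(pey, soi, ltd, cui, w):
    """Conditional indirect effect (a1 + a3*w) * b at moderator value w."""
    Xm = sm.add_constant(np.column_stack([pey, soi, pey * soi]))
    a = sm.OLS(ltd, Xm).fit().params        # a[1] = a1 (PEY), a[3] = a3 (PEY*SOI)
    Xy = sm.add_constant(np.column_stack([ltd, pey]))
    b = sm.OLS(cui, Xy).fit().params[1]     # b = effect of LTD on CUI
    return (a[1] + a[3] * w) * b

# Bootstrap confidence intervals for the indirect effect at +/-1 SD of SOI.
for w in (-1.0, 1.0):
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(PEY[idx], SOI[idx], LTD[idx], CUI[idx], w))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"SOI = {w:+.0f} SD: indirect effect 95% CI [{lo:.3f}, {hi:.3f}]")
```

Comparing the bootstrap intervals at low and high SOI mirrors the abstract's finding that the mediation effect of LTD shrinks as SOI increases.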
Article
Personalized Mathematics Learning (PML) holds significant importance in mathematics education in the global learning environment. Accordingly, PML in any institution allows tailored instruction catering to students' individual learning needs and preferences. The study aims to investigate the items as predictors for PML instrument validation for pre-university students in the Maldives through Exploratory Factor Analysis (EFA). A total of 120 pre-university students were randomly chosen for data collection at the Maldives National University using a structured questionnaire. The instrument consists of 52 items on a five-point Likert scale with eleven constructs of PML. EFA was conducted for each construct using IBM SPSS version 25.0. The results revealed a single dimension for each construct; the 11 items with factor loadings <0.60 were eliminated, and the 41 items with factor loadings >0.60 were retained to measure the PML construct. Bartlett's Test of Sphericity yielded a significant p-value (<0.05) for all constructs. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy was >0.5 for all constructs, signifying a sufficient sample size. The results indicate high internal consistency for the individual and overall constructs. The instrument proved valid and reliable for predicting the application of the PML construct in mathematics education in the Maldives.
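The EFA workflow described above (KMO and Bartlett checks, then retaining items by a 0.60 loading cutoff) can be sketched as follows. The abstract reports an SPSS analysis; here the open-source factor_analyzer package stands in, and the input file pml_items.csv is a hypothetical placeholder for the Likert-scale responses.

```python
# Minimal sketch of the EFA workflow described above. Illustrative only:
# the original analysis used IBM SPSS; factor_analyzer substitutes here,
# and the data file is a hypothetical placeholder.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("pml_items.csv")  # hypothetical file: one column per item

# Sampling adequacy and sphericity checks reported in the abstract.
chi_sq, p_value = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett p = {p_value:.4f} (want < 0.05), KMO = {kmo_overall:.2f} (want > 0.5)")

# One-factor EFA per the unidimensional result; drop items loading below 0.60.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
loadings = pd.Series(fa.loadings_[:, 0], index=items.columns)
retained = loadings[loadings.abs() >= 0.60]
print(f"Retained {len(retained)} of {len(loadings)} items:\n{retained.round(2)}")
```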
Chapter
Education is an essential component of a nation's infrastructure development and has evolved through successive standards, from Education 1.0 to the present Education 5.0. This study evaluates the historical development of education and its applicability to contemporary requirements. Through a literature review using the PRISMA technique, covering research from the previous six years, we examine the use of Education 5.0 in both industrialized and developing nations. Saudi Arabia and Malaysia were chosen as examples of industrialized nations that have effectively implemented Education 5.0, while two developing nations, Sri Lanka and Zimbabwe, were selected to assess the Education 5.0 deployment gap. The study emphasizes the urgent need to reassess Education 5.0 implementation in developing countries, notwithstanding its successful implementation in wealthier ones. Future research should examine hidden implementation issues with Education 5.0 by employing a variety of approaches.
Article
Full-text available
This article describes research results based on multiple years of experimentation and real-world experience with an adaptive tutoring system named Wayang Outpost. The system represents a novel adaptive learning technology that has shown successful outcomes with thousands of students, and provided teachers with valuable information about students’ mathematics performance. We define progress in three areas: improved student cognition, engagement, and affect, and we attribute this improvement to specific components and interventions that are inherently affective, cognitive, and metacognitive in nature. For instance, improved student cognitive outcomes have been measured with pre-post tests and state standardized tests, and achieved due to personalization of content and math fluency training. Improved student engagement was achieved by supporting students’ metacognition and motivation via affective learning companions and progress reports, measured via records of student gaming of the system. Student affect within the tutor was measured through sensors and student self-reports, and supported through affective learning companions and progress reports. Collectively, these studies elucidate a suite of effective strategies to support advanced personalized learning via an intelligent adaptive tutor that can be tailored to the individual needs, emotions, cognitive states, and metacognitive skills of learners.
Article
Full-text available
This paper examines the causal effects of computer-aided instruction (CAI) on children's cognitive and noncognitive skills. We ran a clustered randomized controlled trial at five elementary schools with more than 1,600 students near Phnom Penh, Cambodia. After 3 months of intervention, we find that the average treatment effects on cognitive skills are positive and statistically significant, while hours of study were unchanged both at home and in the classroom. This indicates that CAI is successful in improving students’ learning productivity per hour. Furthermore, we find that CAI raises students’ subjective expectation to attend college in the future.
Article
Full-text available
Learning outcomes in low- and lower-middle-income countries (LMICs) require significant improvement. With traditional reform efforts taking many years to realise results, education practitioners in LMICs are searching for innovative ways to rapidly strengthen learning outcomes. One tool showing promise is computer-assisted instruction (CAI). While a growing number of studies document CAI's positive impacts on learning outcomes, others have found nil or negative effects. Research has yet to identify why these differences occur and, most importantly, which factors must be in place to ensure that CAI contributes to improving learning outcomes. The aim of our research was to fill this gap by developing a model highlighting the factors that influence the results of CAI interventions. Adopting a realist-informed methodology, we analysed 21 resources shared by 13 experts from around the world. We used the results of this analysis to develop a model that outlines key trends that facilitate and/or impede the deployment of CAI tools in LMICs. We find that key factors to consider when designing CAI interventions include the operating environment; stakeholder engagement; infrastructure; technological trust; CAI tool design; content curation/creation; student engagement; classroom integration; teacher capacity; student capacity; and data collection and use. The model highlights these individual elements and notes how they interact, providing a foundation that can guide future research in this under-examined area.
Article
Full-text available
In Taiwan, thousands of students from Yuanzhumin (aboriginal) families lag far behind their Han counterparts in academic achievement. When they fall behind, they often have no way to catch up. There is increased interest among both educators and policymakers in helping underperforming students catch up using computer-assisted learning (CAL). The objective of this paper is to examine the impact of an intervention aimed at raising the academic performance of students using an in-home CAL program. According to intention-to-treat estimates, in-home CAL improved the overall math scores of students in the treatment group relative to the control group by 0.08 to 0.20 standard deviations (depending on whether the treatment lasted one or two semesters). Furthermore, an Average Treatment Effect on the Treated analysis was used to address noncompliance in our experiment, showing that in-home CAL raised academic performance by 0.36 standard deviations among compliers. This study thus presents preliminary evidence that an in-home CAL program has the potential to boost the learning outcomes of disadvantaged students.
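The gap between the intention-to-treat estimates (0.08 to 0.20 SD) and the complier effect (0.36 SD) reflects the standard Wald/IV logic for handling noncompliance: the ITT effect is diluted by assigned students who never used the program. A minimal sketch of that rescaling follows; it is not the authors' code, and the synthetic data, compliance rate, and effect size are assumptions chosen to echo the abstract.

```python
# Minimal sketch of rescaling an intention-to-treat (ITT) estimate into a
# complier effect under noncompliance, via the standard Wald/IV estimator:
#   effect on compliers = ITT / (share of compliers).
# Not the authors' code; the synthetic data and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
assigned = rng.integers(0, 2, n)              # random assignment to in-home CAL
complier = rng.random(n) < 0.5                # only ~50% actually use the program
used_cal = assigned & complier                # treatment actually received
score = 0.36 * used_cal + rng.normal(size=n)  # true effect of 0.36 SD on users

itt = score[assigned == 1].mean() - score[assigned == 0].mean()
first_stage = used_cal[assigned == 1].mean() - used_cal[assigned == 0].mean()
late = itt / first_stage                      # Wald estimator (complier effect)
print(f"ITT = {itt:.2f} SD, compliance = {first_stage:.2f}, complier effect = {late:.2f} SD")
```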
Technical Report
Full-text available
This Rapid Evidence Review (RER) provides an overview of existing research on the use of technology to support personalised learning in low- and middle-income countries (LMICs). The RER has been produced in response to the widespread global shutdown of schools resulting from the outbreak of COVID-19. It therefore emphasises transferable insights that may be applicable to educational responses resulting from the limitations caused by COVID-19. In the current context, lessons learnt from the use of technology-supported personalised learning — in which technology enables or supports learning based upon particular characteristics of relevance or importance to learners — are particularly salient given this has the potential to adapt to learners’ needs by ‘teaching at the right level’. This RER provides a summary of the potential benefits of technology-supported personalised learning as well as identifying possible limitations and challenges. It intends to inform educational decision makers, including donors and those in government and NGOs, about the potential to use technology-supported personalised learning as a response to the current pandemic. The findings and recommendations are also anticipated to be of interest to other education stakeholders (e.g. researchers and school leaders).
Chapter
This chapter describes the principles and methods used to carry out a meta-analysis comparing two interventions for the main types of data encountered. A very common and simple version of the meta-analysis procedure is the inverse-variance method. This approach is implemented in its most basic form in RevMan, and is used behind the scenes in many meta-analyses of both dichotomous and continuous data. Results may be expressed as count data when each participant may experience an event, and may experience it more than once. Count data may be analysed using methods for dichotomous data (if the counts are dichotomized for each individual), methods for continuous or time-to-event data, or directly as rate data. Prediction intervals from random-effects meta-analyses are a useful device for presenting the extent of between-study variation. Sensitivity analyses should be used to examine whether overall findings are robust to potentially influential decisions.
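To make the inverse-variance idea concrete, here is a minimal fixed-effect sketch: each study's effect estimate is weighted by the reciprocal of its variance, so more precise studies count for more. This is an illustration of the general method, not RevMan's implementation, and the effect sizes and standard errors are made-up placeholders.

```python
# Minimal sketch of fixed-effect inverse-variance pooling: weight each study
# by 1/SE^2 and combine. Illustrative placeholder numbers, not real studies.
import numpy as np

effects = np.array([0.20, 0.35, 0.08, 0.15])   # per-study effect sizes (e.g., SMDs)
ses = np.array([0.10, 0.12, 0.06, 0.09])       # per-study standard errors

weights = 1.0 / ses**2                         # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A random-effects version would additionally estimate the between-study variance (e.g., via DerSimonian-Laird) and add it to each study's variance before weighting, which is what makes the prediction intervals mentioned above wider than the confidence interval.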