Key Success Indicators (KSIs) for Blended Learning: A
Pilot Test of the Coding Manuals
Malissa Maria Mahmud
Centre for English Language Studies
Sunway University, No. 5, Jalan
Universiti, Bandar Sunway, 47500
Petaling Jaya, Selangor, Malaysia
006 03-7491 8622
malissam@sunway.edu.my
Yazilmiwati Yaacob
General Studies Department, Sunway
University, No. 5, Jalan Universiti,
Bandar Sunway, 47500 Petaling Jaya,
Selangor, Malaysia,
006 03-7491 8622
yazilmiwati@sunway.edu.my
Stephen J. Hall
Centre for English Language Studies
Sunway University, No. 5, Jalan
Universiti, Bandar Sunway, 47500
Petaling Jaya, Selangor, Malaysia
006 03-7491 8622
stephenh@sunway.edu.my
ABSTRACT
Facets of the blended learning milieu, such as Definition, Ratio and Quality, which manifest the critical success factors, are yet to be embraced by practitioners. To establish agreement on standardized and uniform guidelines which can act as Key Success Indicators (KSIs) in the implementation of blended learning, this pilot study examines the feasibility of two developed coding manuals: the Effect Size Coding Manual (ESCM) and the Facets of Blended Learning Coding Manual (FBLCM). Both meta-analysis and qualitative content analysis (QCA) were employed to synthesize and extract the imperative facets of blended learning. The pilot indicates that the developed coding manuals are substantial instruments for blended learning research, being not only statistically and practically significant but also fulfilling the requirements for validity and reliability.
Keywords
Key Success Indicators; Blended Learning; Technological
Intervention; Coding; Pedagogical Approaches.
1. INTRODUCTION
Recent evidence suggests that technology-mediated
engagement and instruction has emerged as a renewed and
powerful platform for teaching and learning. With the
profound development of technology, the teaching and
learning context has progressed so much that the idea of
embedding technology in the classroom is anticipated by
the students [1], [2], [3], [4]. One of the comprehensive
conceptualizations of technological interventions in
teaching and learning environment was identified by [5] as
the blended learning setting, encompassing lessons that are
premeditated and planned with the integration of an online
approach to substitute some amount of the face-to-face
time. A handbook was developed which is devoted to
educational implementation and issues related to blended
learning (The Handbook of Blended Learning: Global
Perspectives, Local Designs) [6]. The concept of technological intervention was later taken up by [7] in a review of blended learning contexts. In 2008, an influential book, Blended Learning in Higher Education: Framework, Principles, and Guidelines [8], which focuses on the effective use of blended learning in higher education, was published. The studies presented thus far provide evidence
that the inception of blended learning is the result of
technology integration and the emergence of the Web 2.0
through which an innovative paradigm for teaching and
learning activities has ensued.
Over the past decades, technology has been regarded as an instrument that can encourage achievement and also enhance the teaching and learning process [9]. Given that the current
trend of the learning process inclines towards technology, various information and communication technology (ICT) and computer-assisted instruction (CAI) interventions for learning have been introduced [10]. Nevertheless, the exact role of technology in creating a productive learning process is not well established, even though extensive studies have been carried out since the 1960s. More than 60 meta-analyses have been published since the 1980s, each addressing diverse features, including subject matter, grade level and category of technology, and each providing a valuable portion of data. There is, however, no strong evidence that addresses the overarching question of the general effects of technology on best practices in the context of blended learning. It has been suggested that this objective could be achieved by carrying out a complete large-scale meta-analysis that covers a variety of technologies, topic areas and grade levels. To date, there is only one meta-analysis devoted to synthesizing the provenances of blended learning [11], and it was based on the
meta-analysis of the US Department of Education
originally published in 2009 and subsequently updated in
2010 [12]. In the same vein, an analysis and discussion on
technology and language was presented by [13] in which
positive outcomes of the measured dependent variables of
59 samples were computed using meta-analysis.
Collectively, these studies outline a critical role for meta-analysis as a crucial instrument in blended learning research.
Probing deeper into the gaps and issues pertaining to blended learning, inconsistencies were found in defining blended learning [14], in allotting a specific ratio of face-to-face versus technology-mediated time [15], and in assessing the quality of the
implemented technology [16], [17], [18]. Drawing a
conclusion from the synthesized literature, this lack of
extensive accord leads to a variation of frameworks or
guidelines used by different parties.
To align these disparities, specific facets of blended learning such as definition, ratio and quality have to be established to reach agreement on standardized and uniform guidelines which can act as Key Success Indicators (KSIs) in the implementation of blended learning. Hence, this study aims to conduct a pilot test of the developed coding manuals, preceding the implementation of the full-scale study, to gauge their feasibility and efficacy.
2. METHODOLOGY
The phrase pilot study is operationalized in social science research in two diverse ways. Pilot studies are also known as feasibility studies: small-scale versions of the study, or trial runs, performed as a form of groundwork before the execution of the larger-scale version [19]. A pilot study serves as a preliminary test which can offer prior insight, warning that the main study may fail or that its methods or instruments are inapt or too arduous to execute. At this juncture, it was deemed essential to conduct a pilot test for the developed coding manuals, which are unprecedented. Such specific pre-testing of instruments is not only a vital element of good study design but also augments the likelihood of success, manifesting the critical success factors in a blended learning environment.
Table 1: Five Articles Used in the Pilot Test

Study 1: Vollands, S. R. (1996). Experimental Evaluation of Computer Assisted Self-Assessment of Reading Comprehension: Effects on Reading Achievement and Attitude.
Study 2: Mersal, F. A., & Mersal, N. A. (2014). Effect of Blended Learning on Newly Nursing Student's Outcomes Regarding New Trends in Nursing Subject at Ain Shams University. American Journal of Educational Research, 2(11), 1036-1043.
Study 3: Scott, L. S. (1999). The Accelerated Reader Program, Reading Achievement, and Attitudes of Students with Learning Disabilities.
Study 4: Tsai, C. W. (2010). Involving students in a blended course via teacher's initiation in Web-enhanced collaborative learning. Cyberpsychology, Behavior, and Social Networking, 13(5), 577-580.
Study 5: Tsai, C. W. (2010). Designing appropriate blended courses: A students' perspective. Cyberpsychology, Behavior, and Social Networking, 13(5), 563-566.
For this study, a search of existing and relevant published and unpublished studies of blended learning was conducted to draw up thorough criteria for inclusion in the meta-analysis. Employing a common set of keywords, scholarly articles from 1988 to 2015 were extracted from electronic databases and major journals in the field of educational technology. The searches and scrutiny were executed using the Google Scholar search engine with a number of keywords related to blended learning. These major
sources were then examined to locate appropriate samples
for this meta-analysis, by which a review of relevant
literature from electronic databases was executed, including
(a) Wiley Online Library, (b) Taylor & Francis Online, (c)
Springer, (d) ERIC, (e) Elsevier, (f) Science Direct, (g)
Research Gate, (h) ProQuest, (i) JSTOR, (j) IEEE, (k) Sage
Journals, (l) APA PsycNET, (m) CALICO Journal, (n)
Penn State University Library, (o) Editlib, (p) IGI Global,
(q) anitacrawley.net, (r) ascilite.org.au / ajet.org.au and (s)
Questia.
The pilot test was conducted from February to March 2017 with the aim of examining the ES of 12 articles, as well as the blended learning patterns emerging from them. The analysis revealed that only five articles satisfied the inclusion criteria and could be included (see Table 1). Moreover, from the individual ES obtained, it was deduced that a combined ES also needed to be determined. The other seven articles had to be discarded because the inclusion criteria were not met. Henceforth, the identification of the common facets found across the five articles indicated patterns emerging from the pilot test, which consequently aided the researcher in ascertaining the vital facets for further investigation. To compute
treatment versus control ES, the indicator used for the
purpose of this study was the standardized mean difference
score, defined as the difference between the posttest mean
of the treatment group and the posttest mean of the control
group divided by the standard deviation pooled across the
treatment and control groups in experimental and quasi-
experimental studies [20], [21]. The strength of the ES is determined using Cohen's [22] thresholds: (a) .2 = Small, (b) .5 = Medium, (c) .8 = Large. To obtain the ES, the formulas below (Equations 1-3) were applied [22]. The pooled standard deviation is computed as:

SD_pooled = sqrt(((n1 - 1)SD1^2 + (n2 - 1)SD2^2) / (n1 + n2 - 2))   (Equation 1)

where SD1 and SD2 are the standard deviations of the treatment and control groups. The effect size is then:

d = (M1 - M2) / SD_pooled   (Equation 2)

where M1 and M2 are the posttest means of the treatment and control groups. Where only a t-test is reported, the effect size is obtained with:

d = 2t / sqrt(df)   (Equation 3)

where t is the value of the t-test and df is the degrees of freedom, computed as df = n1 + n2 - 2, where n1 is the sample size of the 1st group and n2 is the sample size of the 2nd group.
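The effect-size computations described above can be sketched in Python. This is an illustrative implementation (the function names are my own), not the actual coding tool used in the study:

```python
import math

def pooled_sd(s1: float, n1: int, s2: float, n2: int) -> float:
    """Pooled standard deviation across two groups (Cohen, 1988)."""
    return math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                     / (n1 + n2 - 2))

def cohens_d(mean_t: float, mean_c: float,
             s1: float, n1: int, s2: float, n2: int) -> float:
    """Standardized mean difference: (treatment posttest mean minus
    control posttest mean) divided by the pooled standard deviation."""
    return (mean_t - mean_c) / pooled_sd(s1, n1, s2, n2)

def cohens_d_from_t(t: float, n1: int, n2: int) -> float:
    """Estimate d when a study reports only a t statistic: d = 2t / sqrt(df),
    with df = n1 + n2 - 2 degrees of freedom."""
    df = n1 + n2 - 2
    return 2 * t / math.sqrt(df)
```

For example, with equal group sizes of 10, equal standard deviations of 2 and a posttest mean difference of 1, `cohens_d` returns 0.5, a medium effect by Cohen's thresholds.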
3. FINDINGS
Table 2: Effect Size Coding Manual (ESCM)

Study 1. Intervention: use of Accelerated Reader. Effect sizes: d = .6705 (Medium) and d = .7723 (Medium).
Study 2. Intervention: use of websites and CDs. Effect sizes: d = .9274 (Large), d = .8515 (Large) and d = .9393 (Large).
Study 3. Intervention: use of Accelerated Reader. Effect sizes: d = -.5635 (Negative) and, for students' attitude on the Estes Reading Attitude Scale, d = -.8085 (Negative).
Study 4. Intervention: use of the course website for interaction. Outcome: students' involvement in learning in the Web-enhanced collaborative learning. Effect size: d = .3579 (Small).
Study 5. Intervention: self-regulated learning, blended learning with 5 online classes. Outcome: students' performance in a certified exam. Effect size: d = .707*** (Medium).
Table 2 illustrates the effect sizes obtained from the pilot-testing of the Effect Size Coding Manual (ESCM). The manual comprises five columns, placed adjacent to the specific details mined from the samples. For instance, the intervention column records the type of intervention employed in the study, whereas the outcome column records the outcomes on the measured dependent variable(s). The third column is where the formulas are inserted, so that the effect size is yielded from the values keyed in from the selected samples to compute the standardized mean difference score. The last column indicates the magnitude of the effect size.
It is observed that one sample generated a large effect size, reporting satisfaction as the dependent variable, whereas two samples yielded medium effect sizes and one sample reported a small effect size. For study 3, the effect sizes are negative because the mean value of the control group is higher than that of the experimental group. The fact that such results are reported shows that meta-analysis is able to counter the file-drawer problem, in which researchers file away studies with negative results, a practice that may distort outcomes and increase scientific bias whereby only positive outcomes are of interest. Most importantly, the coding of
features and attributes of a sample were done using a
coding manual which was designed and created specifically
to categorize the different facets and attributes
autonomously. The manual went through a rigorous and laborious drafting process before a final version was confirmed. It captured all the detailed and specific information of each sample, with the intervention, outcome, variables and values for both control and experimental groups successfully extracted from the samples. These values were then converted into ES before the magnitudes of the ES were determined.
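The magnitude labels in the last column of Table 2 follow Cohen's thresholds and can be reproduced with a short illustrative routine. The unweighted mean below is only a stand-in for the combined ES, since the paper does not specify its combining method; a full meta-analysis would weight each d by its inverse variance:

```python
def magnitude(d: float) -> str:
    """Label effect-size strength using Cohen's thresholds [22].
    Negative values are flagged separately, as in study 3 of Table 2."""
    if d < 0:
        return "Negative"
    if d >= 0.8:
        return "Large"
    if d >= 0.5:
        return "Medium"
    if d >= 0.2:
        return "Small"
    return "Below small"

# Effect sizes keyed in from Table 2
effect_sizes = [0.6705, 0.7723, 0.9274, 0.8515, 0.9393,
                -0.5635, -0.8085, 0.3579, 0.707]

labels = [magnitude(d) for d in effect_sizes]

# Simple unweighted mean as an illustrative combined ES
combined = sum(effect_sizes) / len(effect_sizes)
```

Running `magnitude` over the Table 2 values reproduces the Medium/Large/Small/Negative labels reported there.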
Table 3: Facets of Blended Learning Coding Manual (FBLCM)

For every sample, the Definition facet was coded along the dimensions Delivery, Technology, Chronology, Locus, Roles, Pedagogy, Focus and Direction; the Ratio facet (Intervention % versus Traditional %) was not stated in any of the five articles; and for the Quality facet, results on the independent and dependent variables are reported, but no feedback is indicated.

Study 1. Technology: Accelerated Reader (AR), a computerized reading management system. Traditional: students read real books.
Study 2. Technology: analysis, synthesis and evaluation levels of knowledge for the same content as in the face-to-face class. Traditional: covered the knowledge level of the same content that was taught to the control group.
Study 3. Technology: Accelerated Reader (AR), a computerized reading management system. Traditional: students read real books.
Study 4. Technology: use of the course Website to share and discuss information and questions. Traditional: face-to-face.
Study 5. Technology: use of Microsoft Access to solve problems, with recordings of lectures uploaded onto the Internet; Self-Regulated Learning (SRL) with online classes. Traditional: face-to-face.
Besides the meta-analysis coding manual used to extract the ES, a coding manual to extract the (a) Definition, (b) Ratio and (c) Quality facets was also developed and subsequently piloted. Table 3 illustrates how the facets from the five articles were extracted into the coding manual. In this manual, each facet was assigned a specific column to juxtapose the exhaustive details for each sample. This in-depth, verbatim examination and laborious scrutiny were imperative to the whole data analysis process. It should be noted that the Definition facet was probed by adapting the dimensions of definition from Sharpe et al. [23], while the Quality facet leveraged the Sloan Consortium Quality Framework [24]. The five articles were probed and vital information extracted, whereby the (a) Definition, (b) Ratio and (c) Quality facets were discovered. The findings generated comprehensive facets and attributes which captured the robust details needed for future studies, confirming the viability of the manual before applying it to the actual number of samples. Thus, a clear predisposition
can be surmised to grasp the important facets involved in
this study. In addition, the findings yielded by the first
research question acted as a requisite, as well as an initial
step in (a) discovering and answering the powerful ESs, (b)
the combined ESs, (c) the definitions used, (d) the types of
technological intervention used, (e) the specific ratios or
percentages used, and (f) the quality indicators found in the
language related blended learning studies and other
subjects related to blended learning studies.
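As a sketch of how one coded FBLCM row might be represented programmatically, the dataclass below mirrors the column labels of Table 3; the field names, defaults and the example entry for study 4 are illustrative assumptions, not part of the published manual:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class FBLCMEntry:
    """One coded sample in the Facets of Blended Learning Coding Manual."""
    study: int
    # Definition dimensions adapted from Sharpe et al. [23],
    # e.g. Delivery, Technology, Chronology, Locus, Roles, Pedagogy
    definition: Dict[str, str] = field(default_factory=dict)
    ratio_intervention: str = "Not stated"  # % of technology-mediated time
    ratio_traditional: str = "Not stated"   # % of face-to-face time
    quality_results: str = "Results are reported"
    quality_feedback: str = "No feedback indicated"

# Hypothetical entry for study 4 of the pilot test
entry = FBLCMEntry(
    study=4,
    definition={
        "Technology": "Course website to share and discuss "
                      "information and questions",
        "Pedagogy": "Web-enhanced collaborative learning",
    },
)
```

Structuring each coded sample this way would let the full-scale study tabulate facets across many articles without re-reading the source texts.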
4. CONCLUSION
There is a predilection for the use of meta-analysis to probe the efficacious aspects of blended learning. This pilot study was undertaken to streamline and validate the developed coding manuals prior to the full-scale implementation. The overall result supports the feasibility of the designed research instruments, which is paramount to satisfying the requirements for validity and reliability within the less rigid parameters of mixed-method research, by offering valuable insight into the process and outcomes of a study, which are indispensable to any researcher. Henceforth, from both the meta-analysis and the extraction of facets done for the pilot test, it can be ascertained that these coding manuals are feasible to be applied in a full-scale study.
5. REFERENCES
[1] Boelens, R., Voet, M., & De Wever, B. (2018). The design of blended learning in response to student diversity in higher education: Instructors' views and use of differentiated instruction in blended learning. Computers & Education, 120, 197-212.
[2] Dziuban, C., Graham, C. R., Moskal, P. D., Norberg, A., &
Sicilia, N. (2018). Blended learning: the new normal and
emerging technologies. International Journal of Educational
Technology in Higher Education, 15(1), 3.
[3] Porter, W. W., Graham, C. R., Bodily, R. G., & Sandberg, D.
S. (2016). A qualitative analysis of institutional drivers and
barriers to blended learning adoption in higher education.
The internet and Higher education, 28, 17-27.
[4] Ross, B. & Gage, K. (2006). Global perspectives on blended
learning: Insight from WebCT and our customers in higher
education. In C. J. Bonk & C. R. Graham (eds.), Handbook
of blended learning: Global perspectives, local designs (p.
167). San Francisco, CA: Pfeiffer.
[5] Picciano, A. (2009). Blending with purpose: The multimodal
model. Journal of the Research Center for Educational
Technology, 5(1), 4-14.
[6] Bonk, C. J., & Graham, C. R. (2012). The handbook of
blended learning: Global perspectives, local designs. San
Francisco, CA: John Wiley & Sons.
[7] Bliuc, A. M., Goodyear, P., & Ellis, R. A. (2007). Research
focus and methodological choices in studies into students'
experiences of blended learning in higher education. The
Internet and Higher Education, 10(4), 231-244.
[8] Garrison, D. R., & Vaughan, N. D. (2008). Blended learning
in higher education: Framework, principles, and guidelines.
John Wiley & Sons.
[9] Johnson, L., Becker, S. A., Cummins, M., Estrada, V.,
Freeman, A., & Hall, C. (2016). NMC horizon report: 2016
higher education edition (pp. 1-50). The New Media
Consortium.
[10] Archer, K., Savage, R., Sanghera-Sidhu, S., Wood, E.,
Gottardo, A., & Chen, V. (2014). Examining the
effectiveness of technology use in classrooms: A tertiary
meta-analysis. Computers & Education, 78, 140-149.
[11] Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3). Retrieved from http://www.tcrecord.org/library/content.asp?contentid=16882
[12] Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K.
(2010). Evaluation of evidence-based practices in online
learning: A meta-analysis and review of online learning
studies. Washington, D.C.: Office of Planning, Evaluation,
and Policy Development, Center for Technology in Learning,
U.S. Department of Education.
[13] Mahmud, M. M. (2018). Technology and Language: What Works and What Does Not: A Meta-analysis of Blended Learning Research. The Journal of AsiaTEFL, 15(2), 365-382.
[14] Oliver, M., & Trigwell, K. (2005). Can 'blended learning' be
redeemed?. E-Learning, 2(1), 17-26.
[15] Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in:
The extent and promise of blended education in the United
States. Sloan Consortium. PO Box 1238, Newburyport, MA
01950.
[16] Richardson, J. C., & Swan, K. (2003). Examining social
presence in online courses in relation to students' perceived
learning and satisfaction. Journal of Asynchronous Learning
Networks, 7(1), 68-84.
[17] Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D.
(2008). What drives a successful e-Learning? An empirical
investigation of the critical factors influencing learner
satisfaction. Computers & education, 50(4), 1183-1202.
[18] Yeh, Y. C., Yeh, Y. L., & Chen, Y. H. (2012). From knowledge sharing to knowledge creation: A blended knowledge-management model for improving university students' creativity. Thinking Skills and Creativity, 7(3), 245-257.
[19] Polit, D.F., Beck, C.T. and Hungler, B.P. (2001). Essentials
of Nursing Research: Methods, Appraisal and Utilization. 5th
Ed., Philadelphia: Lippincott Williams & Wilkins.
[20] Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
[21] Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis (Vol. 49). Thousand Oaks, CA: Sage Publications.
[22] Cohen, J. (1992). A power primer. Psychological bulletin,
112(1), 155.
[23] Sharpe, R., Benfield, G., Roberts, G., & Francis, R. (2006).
The undergraduate experience of blended e-learning: A
review of UK literature and practice. York, UK: The Higher
Education Academy. Retrieved from
http://www.grossmont.edu/don.dean/pkms_ddean/ET795A/
WhitePaper_BlendLearn.pdf.
[24] Moore, J. C. (2005). The Sloan consortium quality
framework and the five pillars. The Sloan Consortium.
Retrieved July 15, 2007, from
http://www.aln.org/publications/books/qualityframework.pdf
6. APPENDIX
SAMPLES UTILISED TO BE SELECTED FOR THE STUDY
1.
Vollands, Stacy R., K. J. Topping & H. M. Evans; And
Others, June 1996,
Experimental Evaluation of Computer Assisted Self-
Assessment of Reading Comprehension: Effects on
Reading Achievement and Attitude
2.
Scott, Louise Shewfelt, 1999,
The Accelerated Reader Program, Reading Achievement
and Attitudes of Students with Learning Disabilities.
3.
A. Hwang & J.B. Arbaugh, 2009,
Seeking feedback in blended learning: competitive versus
cooperative student attitudes and their links to learning
outcome
4.
Oh, E., & Park, S., 2009,
How are universities involved in blended instruction?
5.
Lee, H.-J., & Rha, I., 2009,
Influence of Structure and Interaction on Student
Achievement and Satisfaction in Web-Based Distance
Learning
6.
Lim, D. H., & Morris, M. L., 2009,
Learner and Instructional Factors Influencing Learning
Outcomes within a Blended Learning Environment
7.
Alonso Díaz, L., & Blázquez Entonado, F., 2009,
Are the Functions of Teachers in e-Learning and Face-to-
Face Learning Environments Really Different?
8.
Chia-Wen Tsai, Ph.D., 2010,
Designing Appropriate Blended Courses: A Students'
Perspective
9.
Chia-Wen Tsai, Ph.D., 2010,
Involving Students in a Blended Course via Teacher's
Initiation in Web-Enhanced Collaborative Learning
10.
Li-Ling Hsu, 2011,
Blended learning in ethics education: A survey of nursing
students
11.
Fathia Ahmed Mersal, Nahed Ahmed Mersal, 2014,
Effect of Blended Learning on Newly Nursing Student's
Outcomes Regarding New Trends in Nursing Subject at
Ain Shams University
12.
Jaslin Ikhsan, 2014,
The Use of ICT-Based Media in Web-Based Collaborative
Assistance of Hybrid Learning on Chemical Kinetic
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
This meta-analysis examines the effectiveness of technology employed in language-related blended-learning research by summarizing the outcomes of the measured dependent variables of 59 samples. The effect sizes yielded from the samples were acquired by applying Cohen’s (1988; 1992) d formula. The estimation was done using the standardized mean difference score, divided by the standard deviation pooled across the treatment and control groups. The findings denote that there is an overall effectiveness to blended-learning; however, the disparity of the effect sizes found implies that the effectiveness is contingent and reliant to the context and how technology is applied. There were also instances of negative effect sizes, suggesting hidden factors that adversely altered the outcomes of the technological intervention. The review also discovered that there is a pattern for performance to be used predominantly as the dependent variable in assessing the effectiveness of the technology. Nevertheless, this should not limit the use of performance as the only measure. Other dependent variables, such as motivation and attitudes, warrant consideration as indicators for measuring the efficacy of a blended-learning intervention.
Article
Full-text available
This study addressed several outcomes, implications, and possible future directions for blended learning (BL) in higher education in a world where information communication technologies (ICTs) increasingly communicate with each other. In considering effectiveness, the authors contend that BL coalesces around access, success, and students’ perception of their learning environments. Success and withdrawal rates for face-to-face and online courses are compared to those for BL as they interact with minority status. Investigation of student perception about course excellence revealed the existence of robust if-then decision rules for determining how students evaluate their educational experiences. Those rules were independent of course modality, perceived content relevance, and expected grade. The authors conclude that although blended learning preceded modern instructional technologies, its evolution will be inextricably bound to contemporary information communication technologies that are approximating some aspects of human thought processes. Keywords: Blended learning, Higher education, Student success, Student perception of instruction, New normal
Article
Full-text available
The implementation of blended learning in higher education is increasing, often with the aim to offer flexibility in terms of time and place to a diverse student population. However, specific attention for the diversity of this group, and how to cater individual needs, is still scarce. Therefore, this study explores instructors’ strategies for and beliefs about differentiated instruction in blended learning, together with how the differences between instructors can be explained. A total of 20 instructors working in two adult education centers participated in semi-structured interviews focusing on their (a) use of strategies for differentiated instruction, and (b) beliefs about designing blended learning to address student diversity. The findings reveal that the most commonly used differentiated instruction strategy in a blended learning context was providing students with additional support throughout product development. In addition, three instructor profiles about designing blended learning to address student diversity emerged from the data: (1) disregard: instructors considered no additional support in the blended learning arrangements to match students’ needs, (2) adaptation: instructors believed that increased support in the existing blended learning arrangements was sufficient to match students’ needs, and (3) transformation: instructors thought that blended learning arrangements should be designed in a completely different way, and be tailored to the characteristics of the students. The results show that half of the instructors considered a transformation of their blended learning arrangements in response to student diversity. Furthermore, instructors’ beliefs appear to be strongly connected to the organization and trajectory in which they work. A major implication of these findings is that professional support focusing on instructors’ beliefs is of crucial importance to unlock blended learning's full potential. 
As such, it is important for organizations to develop a clear stance on this issue, which pays explicit attention to responding to learners’ needs in blended learning contexts.
Article
Full-text available
The authors previously proposed a framework for institutional BL adoption (Graham, Woodfield, & Harrison, 2012), identifying three stages: (a) awareness/exploration, (b) adoption/early implementation, and (c) mature implementation/growth. The framework also identified key strategy, structure, and support issues universities may address at each stage. In this paper, the authors applied that framework as well as Rogers’ (2003) diffusion of innovations theory to determine the degree to which institutional strategy, structure, and support measures facilitate or impede BL adoption among higher education faculty. In addition, the authors explored whether higher education faculty’s innovation adoption category affects which measures facilitate or impede BL adoption. To achieve these objectives, the authors surveyed 214 faculty and interviewed 39 faculty at a school in the adoption/early implementation stage of BL adoption. The authors published the survey results in a prior article. The current article explores the results of the interviews.
Background/Context Earlier research on various forms of distance learning concluded that these technologies do not differ significantly from regular classroom instruction in terms of learning outcomes. Now that web-based learning has emerged as a major trend in both K–12 and higher education, the relative efficacy of online and face-to-face instruction needs to be revisited. The increased capabilities of web-based applications and collaboration technologies and the rise of blended learning models combining web-based and face-to-face classroom instruction have raised expectations for the effectiveness of online learning. Purpose/Objective/Research Question/Focus of Study This meta-analysis was designed to produce a statistical synthesis of studies contrasting learning outcomes for either fully online or blended learning conditions with those of face-to-face classroom instruction. Population/Participants/Subjects The types of learners in the meta-analysis studies were about evenly split between students in college or earlier years of education and learners in graduate programs or professional training. The average learner age in a study ranged from 13 to 44. Intervention/Program/Practice The meta-analysis was conducted on 50 effects found in 45 studies contrasting a fully or partially online condition with a fully face-to-face instructional condition. Length of instruction varied across studies and exceeded one month in the majority of them. Research Design The meta-analysis corpus consisted of (1) experimental studies using random assignment and (2) quasi-experiments with statistical control for preexisting group differences. An effect size was calculated or estimated for each contrast, and average effect sizes were computed for fully online learning and for blended learning. A coding scheme was applied to classify each study in terms of a set of conditions, practices, and methodological variables. 
Findings/Results The meta-analysis found that, on average, students in online learning conditions performed modestly better than those receiving face-to-face instruction. The advantage over face-to-face classes was significant in those studies contrasting blended learning with traditional face-to-face instruction but not in those studies contrasting purely online with face-to-face conditions. Conclusions/Recommendations Studies using blended learning also tended to involve additional learning time, instructional resources, and course elements that encourage interactions among learners. This confounding leaves open the possibility that one or all of these other practice variables contributed to the particularly positive outcomes for blended learning. Further research and development on different blended learning models is warranted. Experimental research testing design principles for blending online and face-to-face instruction for different kinds of learners is needed.
This overview introduces the Sloan Consortium (Sloan-C), explains its quality framework for guiding quality and sharing effective practices, and suggests directions for research and development. As an association of colleges, universities and organizations dedicated to making higher education accessible to all, Sloan-C uses a quality framework that focuses on five pillars that support quality learning environments. Sloan-C believes academic knowledge and industry knowledge can complement each other to improve the quality of learning in both sectors. In particular, practitioners can learn how to improve higher order learning online, how to adapt technology to continuously improve interaction, how to use assessment to mainstream best practices, and how to combine ALN and face-to-face learning.