Benchmarking for Technology-Enabled Learning

PERSPECTIVES ON OPEN AND DISTANCE LEARNING
Technology-Enabled Learning: Policy, Pedagogy and Practice
Sanjaya Mishra and Santosh Panda (Editors)
Teaching and learning have undergone considerable transformation from the traditional
classroom model to the current online and blended models. Developments in information and
communications technologies hold the key to such transformation. Seizing the opportunities
and affordances of these technologies, COL’s Technology-Enabled Learning (TEL) initiative
has focused on several activities to support governments and educational institutions in the
Commonwealth since July 2015.
Significant and sustainable interventions include: the Commonwealth Digital Education
Leadership Training in Action programme; ICT in education policy development, including
open educational resources policy and implementation; massive open online courses on TEL
and blended learning practices; systematic TEL implementation in educational institutions;
and advanced ICT skills development.
Technology-Enabled Learning: Policy, Pedagogy and Practice, based mostly on various TEL
projects in the last five years, presents diverse experiences of TEL from a critical research
perspective, offering lessons that can be deployed elsewhere.
The book’s 17 chapters provide success stories about the planned and systematic
integration of technology in teaching and learning, and present models for online training at
scale using massive open online courses and other platforms. Within the framework of the
policy–technology–capacity approach to TEL implementation at the micro, meso and macro
levels, the chapters also provide guidelines for researching and evaluating similar projects
and interventions.
In the post-COVID-19 world of education, the lessons learnt and recommendations in this
book will help policy makers and educational leaders rethink existing models of education
and training.
The Commonwealth of Learning (COL) is an intergovernmental organisation created
by Commonwealth Heads of Government to promote the development and sharing
of open learning and distance education knowledge, resources and technologies.
© 2020 by the Commonwealth of Learning. Technology-Enabled Learning: Policy,
Pedagogy and Practice is made available under a Creative Commons
Attribution-ShareAlike 4.0 Licence (international):
http://creativecommons.org/licences/by-sa/4.0.
For the avoidance of doubt, by applying this licence the Commonwealth of
Learning does not waive any privileges or immunities from claims that it may be
entitled to assert, nor does the Commonwealth of Learning submit itself to the
jurisdiction, courts, legal processes or laws of any jurisdiction.
The designations employed and the presentation of material throughout this
publication do not imply the expression of any opinion whatsoever on the part
of COL concerning the legal status of any country, territory, city or area or of its
authorities, or concerning the delimitation of its frontiers or boundaries.
The ideas and opinions expressed in this publication are those of the authors; they
are not necessarily those of COL and do not commit the organisation. All products
and services mentioned are owned by their respective copyright holders, and mere
presentation in the publication does not mean endorsement by COL.
ISBN 978-1-894975-98-8
Published by:
COMMONWEALTH OF LEARNING
4710 Kingsway, Suite 2500
Burnaby, British Columbia
Canada V5H 4M2
Telephone: +1 604 775 8200
Fax: +1 604 775 8210
Web: www.col.org
Email: info@col.org
Contents
Foreword ............................................................................................................... v
Acknowledgements .............................................................................................vii
Contributors ........................................................................................................ viii
List of Abbreviations and Acronyms ...................................................................xiv
List of Tables ....................................................................................................... xv
List of Figures......................................................................................................xvi
PART I: PROLOGUE
Chapter 1
Prologue: Setting the Stage for Technology-Enabled Learning ......................3
Sanjaya Mishra and Santosh Panda
PART II: ICT in Education Policy and National Development
Chapter 2
Technology Applications in Education: Policy and Prospects ......................19
Sanjaya Mishra
Chapter 3
COVID-19 Education Responses and OER–OEP Policy in the
Commonwealth .............................................................................................33
Shafika Isaacs
PART III: Technology-Enabled Learning Strategy and Implementation:
Case Studies
Chapter 4
Designing Blended Learning Courses to Improve Student Learning ...........49
Indira Koneru
Chapter 5
Faculty Experiences of Delivering Blended Learning Courses .....................71
Jayashree Shinde
Chapter 6
Implementing Technology-Enabled Learning in a Technically Challenged
Environment: The Case of the National University of Samoa .......................85
Ioana Chan Mow, Agnes Wong Soon, Tara Patu, Mose Mose and
Oloa Lipine
Chapter 7
Improving Learner Engagements through Implementing
Technology-Enabled Learning ......................................................................97
Silvance O. Abeka and Joseph Bosire
Chapter 8
Developing Institutional Capacities for OER-Based eLearning ..................109
Sanjaya Mishra and Manas Ranjan Panigrahi
PART IV: Researching and Evaluating Technology-Enabled Learning
Chapter 9
Methodological Challenges in Researching the Impact of Technology-
Enabled Learning: The Case of Universiti Malaysia Sabah ........................119
Kaushal Kumar Bhagat and Fong Soon Fook
Chapter 10
Understanding TELMOOC Design and the Learner Experience ................129
Martha Cleveland-Innes, Nathaniel Ostashewski and Dan Wilton
Chapter 11
Evaluating MOOCs’ Long-Term Impact: A Theory of Change Approach ...143
Leigh-Anne Perryman
Chapter 12
Becoming Digital Education Leaders: Teacher Stories from Sri Lanka ......165
Shironica P. Karunanayaka
Chapter 13
Developing Digital Education Leadership in the Commonwealth ...............181
CherylBrown
Chapter 14
Changing Access to Learning Resources ...................................................191
Michael Paskevicius
Chapter 15
Return on Investment from an Open Online Course on Open
Educational Resources ...............................................................................199
Santosh Panda
Chapter 16
Benchmarking for Technology-Enabled Learning .......................................213
Michael D. Sankey
PART V: Epilogue
Chapter 17
Epilogue: Towards Mainstreaming Technology-Enabled Learning ............. 225
Santosh Panda and Sanjaya Mishra
Foreword
This book comes at an unprecedented moment in history, when the COVID-19
pandemic has disrupted every sphere of activity. The impact on the education
sector left millions of students out of school due to institutional closures.
Governments, institutions, students and teachers had to make an almost
overnight transition to distance and online learning. But crisis generates
creativity, and many Commonwealth countries found appropriate solutions to
ensure that students continued to learn. A range of technologies was used —
printed text, radio, television, interactive radio instruction, community radio,
multimedia, and online learning — based on the requirements of different
constituencies.
However, the emphasis on online learning in contexts where electricity,
computers and connectivity were not readily available led to protests from
both students and teachers. In addition to the lack of resources, it was clear that
eective online learning required adequate planning, quality content and teacher
capacity. Most campus institutions were completely unprepared. On the other
hand, the distance education system was well equipped to keep the doors of
learning open.
Many open universities and open schools continued to teach while their campus
counterparts were forced to close. Several of the latter quickly converted their
courses for online delivery. As an intergovernmental organisation established to
promote distance and technology-enabled learning (TEL), the Commonwealth of
Learning (COL) responded quickly by curating teaching and learning resources,
releasing guidelines on distance education, and launching a platform to create a
network of organisations to collaborate and share expertise and resources. Some of
the lessons learnt in the process are:
• with adequate ICT infrastructure, countries can develop a resilient education system;
• distance education, especially blended learning, can be the way forward for many educational institutions;
• synchronous technologies (such as video conferencing) are increasingly available but are not necessarily the only way to teach online;
• teacher readiness and capacity building are necessary for effective teaching and learning; and
• open educational resources (OER) need to be harnessed to provide quality content quickly.
CHAPTER 16
Benchmarking for Technology-Enabled Learning
Michael D. Sankey

Introduction
The practice of institutions adopting technology-enabled learning (TEL) has
been steadily increasing in momentum for a good two decades now. Although
there are many similarities in the way institutions implement TEL, there are also
many inconsistencies (Anthony, 2012). In many cases, these inconsistencies are
brought to an institution’s attention when students comment on the irregularities
they experience in the varied approaches taken to teaching with TEL. A number
of institutions, professional bodies and associations have recognised this and
have begun to establish a range of quality assurance mechanisms to assist higher
education (HE) institutions in aspiring to a greater level of consistency in their
TEL practice, at both the macro level (across the whole institution) and the micro
level (at the individual course/unit level).
One of the practices that has gained significant momentum is that of institutions
benchmarking their TEL practices against an established suite of performance
indicators. This involves HE institutions formally self-assessing their current
practices across these indicators and then comparing the outcomes of this
assessment with those from one or more other institutions who have undergone a
similar activity against the same indicators.
Very recently, the Commonwealth of Learning (COL) published a new
Benchmarking Toolkit for Technology-Enabled Learning that institutions can apply
to their current practice, with the aim of making improvements across a range
of performance areas (Sankey & Mishra, 2019). Although this particular tool
is new, the concept of benchmarking, and more particularly the methodology
of benchmarking TEL practices used in this tool, is not new, and there is
now significant evidence for the value of undertaking formal and regular
benchmarking activities.
This chapter will first define the benchmarking paradigm to which this tool
relates and discuss how this paradigm may be applied in practice. It will then
report on the benefits that have been realised by some 58 institutions from five
Commonwealth countries that have undertaken similar benchmarking activities
over the last six years. It will also demonstrate that the value of benchmarking
is seen across multiple levels within an institution, from the macro to the micro
levels. The reassurance this can bring to an institution cannot be overstated, and
this chapter will look to provide some key principles that, when applied, will help
an institution realise similar levels of assurance.
Benchmarking and Benchmarks
Benchmarking
Benchmarking in HE has been evolving for some time across many levels of
practice, at both the discipline level and the business or practice level (for
example, the application of TEL). Earlier eorts focused on reputation, but now,
benchmarking has become a required component of HE quality assurance, or
regulatory compliance schemes (Bridgland & Goodacre, 2005). This is seen quite
starkly in Australia, where the quality agency TEQSA (the Tertiary Education
Quality and Standards Agency) has developed the Guidance Note: External
Referencing (including benchmarking), which provides the sector with clear
directions about what is expected of institutions in their “monitoring, review
and improvement processes” (TEQSA, 2019). In this document, TEQSA defines
benchmarking as:
A structured, collaborative learning process for comparing practices,
processes or performance outcomes. Its purpose is to identify
comparative strengths and weaknesses, as a basis for developing
improvements in academic quality or performance. Benchmarking
can also be defined as a quality process used to evaluate performance
by comparing institutional practices with identified good practices
across the sector. (TEQSA, 2019)
Generally speaking, benchmarking can be either a formal or an informal
knowledge-sharing process based on the comparative analysis of practices for
improvement purposes beyond that of evaluation (Ronco, 2012; Tomlinson &
Lundvall, 2001). Early forms of benchmarking in the HE sector were seen first
in North America in the early 1990s, then in Australia, the UK and continental
Europe by about 2000 (Jackson, 2001). This early use was mostly as a continuous
improvement tool in response to the introduction of quality standards (Bridgland
& Goodacre, 2005; Massaro, 1998).
Thinking first in terms of formal benchmarking, this commonly takes the form of
a continuous, structured, data-driven evaluation based on the use of a tool (a set of
benchmarks or standards) that is employed to identify, measure and understand
practices. The application of such a tool leads to self-improvement and/or the
setting of institutional goals towards improvement (Anand & Kodali, 2008;
Ettorchi-Tardy et al., 2012).
In contrast, informal benchmarking is more a set of indicators, rather than a
formal metric based on statistical precision. Meeting these indicators is usually
demonstrated by providing what is deemed meaningful evidence (Bhutta & Huq,
1999; Braadbaart & Yusnandarshah, 2008). Informal benchmarking is more
than simply a comparison of performance, however. This method’s value to an
organisation is based on the extent to which useful organisational learning can
be gained and then translated into improvements or an action plan (Mann, 2012).
Furthermore, in a university situation, benchmarking may be seen as a means of
“connecting up relevant stakeholders both within and outside the institution in
such a way that leads to knowledge exchange about why, what, where and how
improvement might occur” (Garlick & Langworthy, 2008, p. 6).
There are a number of well-rehearsed reasons why HE institutions might
undertake benchmarking as a means of helping them reconcile their practice.
Elmuti and Kathawala (1997) identified these as:
• continuous improvement,
• determining areas for development or growth (gap or opportunity identification),
• developing strategy,
• enhancing organisational learning and improving organisational sense-making,
• increasing productivity or improving the design of a product or service,
• performance assessment, and
• performance improvement through recalibration or setting of goals.
Importantly for HE, eective benchmarking is not simply a matter of capturing
metrics (a numbers-only exercise), as this generally does not lead to an
understanding of how an institution’s practice has reached a particular outcome.
Rather, it is commonly achieved by participating in a structured and documented
process, and by using this as a means of identifying practices designed to improve
one’s processes and recognising what might better meet institutional aims. This
is particularly important when an institution wishes to compare or contrast its
practices with those of like-minded entities (which is where deep learning happens).
Benchmarks
Not surprisingly, benchmarking usually indicates the presence of “benchmarks.”
These are the points of reference for performance, typically in the form of setting
either baseline indicators and guidelines, or standards that support evaluation
activities and the framing of subsequent organisational activities. They can be set
externally by a regulatory body or accreditation entity and/or internally (Hart &
Northmore, 2011).
In HE, benchmarks should be sufficiently specific to be useful indicators to follow
(Hart & Northmore, 2011). The process of setting benchmarks is not dissimilar
to standards formation, and benchmarks are generally the result of a consensus-
forming process. As with standards, benchmarks are created through consultation
with subject experts in the sector and/or other stakeholders who recognise the
need for a benchmark and its subsequent application to the sector (International
Organization for Standardization, 2010).
The OECD defines a benchmark in HE to be: “The observed performance of a
higher education system to which other higher education systems can compare
themselves” (OECD, 2017, p. 58). It is this comparison against a set of defined
indicators in TEL that the good-practice example provided later in this chapter
will focus on.
Technology-Enabled Learning
In the context of this chapter it is important first to position the term technology-
enabled learning within the broader context of the use of technology within HE
to support learning and teaching (L&T). Figure 16.1 proposes that
there are, broadly speaking, three levels of TEL seen within the sector, largely
dependent on the capacity of the following:
1. The educational jurisdiction. This refers to how technology might be used
by institutions on a continuum, from used simply to provide documents
to their students, through to teaching fully immersed in technology-rich
spaces, either virtually or in class, using tools such as virtual reality and
artificial intelligence.
2. The national technology infrastructure and geographical constraints. In some
developing countries, there are severe limitations in relation to accessing
a computer or the Internet. Again, this sits on a continuum, from a
standalone computer that is not networked, through to fully 4G-enabled
networks allowing multiple devices to interact and share information across
national boundaries.
3. The level of sta training. Using technology eectively for teaching students
requires certain skills that can be gained either through formal study or
through years of experience. This level of skill largely determines to what
extent technology is used to support L&T.
Figure 16.1. The nested model of technology use to support L&T. [The figure shows three nested levels: Technology-Enabled Learning, Technology-Enhanced Learning and Technology-Intensive Learning.]
However, focusing on the first level, the definition provided in COL’s Technology-
Enabled Learning Implementation Handbook (Kirkwood & Price, 2016, p. 2) is useful
to frame the context of TEL for this chapter. It describes TEL as:
the use of technology to support students’ learning. . . . Technology-
Enabled Learning is just about making learning possible, whether
that means dierent ways of serving existing learners or, potentially,
providing opportunities for learners who were previously regarded
as being “out of reach” — that is, those learners who typically have
little to no access to educational opportunities because of a variety of
circumstances.
Given this context and framing of TEL, we have an opportunity to put together
a range of indicators that would help us understand what good practice or
performance might look like within an institution, based on the collective
experience of those within the HE sector.
Domains of Practice and Performance Indicators Used to Support TEL
Generally speaking, when developing quality indicators, we are looking to ensure
that a base level of quality practices is present across the key domains of institutional
practice. However, these domains are indicative and built on the premise that each
institution is on a journey towards quality practice, and that individual institutions
may be found to be at dierent stages on this journey. In the COL benchmarks, for
instance, ten key domains of practice have been identified (Table 16.1). These domains
cover what are seen to be the foundations of quality organisational TEL practice — in
other words, those things that need to be in place to assure a level of quality in an
institution’s L&T practice using TEL (Sankey & Mishra, 2019).
Table 16.1. TEL domains of practice.
1. Policy
2. Strategic Plan
3. IT Support
4. Technology Applications
5. Content Development
6. Documentation
7. Organisational Culture
8. Leadership
9. Human Resource Training
10. Technology-Enabled Learning Champions
Simply providing the words “Policy” or “Strategic Plan” as a domain is not
enough. Although they indicate that these things should be in place, in practice
it is not that simple, as there is a range of associated elements (indicators) that
need to be aligned with this to demonstrate that these things are actually in place.
These are called performance indicators (PIs). To illustrate, let us take the first
two domains of the COL benchmarks and see what PIs have been identified to
evidence practice in each domain (Figure 16.2).
Figure 16.2. Example of performance indicators in domains 1 and 2.

DOMAIN 1: POLICY
• There is a well-documented TEL policy at this institution.
• The vision and mission of the TEL policy is aligned with the mission of the organisation.
• The vision and mission of the TEL policy are well understood across the organisation.
• There is a commitment on the part of institutional leaders to use technology to achieve strategic academic goals.

DOMAIN 2: STRATEGIC PLAN
• There is a strategic plan for the implementation of TEL.
• The strategic plan for TEL is actively promoted by the senior management of the organisation.
• The strategic plan for TEL has goals with measurable outcomes.
• The strategic plan for TEL is supported by adequate financial provisions.
We note in the above that having a policy in place is one thing, but this in itself
is insucient if nobody knows or applies the policy, or if the policy is not aligned
to other key elements within the institution. Similarly, there may be a strategic
plan, but unless it is enacted and funded accordingly, then it may as well not be
there. Therefore, each of the benchmarking domains contains a number of PIs
(either four or six) to help provide a greater level of focus to the domain. Inherent
within the PIs is the understanding that an institution may score well in one and
not in another, but this information is then used as a stimulus to improve in those
particular areas.
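The domain-plus-PI structure described above can be sketched as a small data model. This is an illustrative sketch only: the class names and the 1–4 rating scale (1 = not in place, 4 = fully in place) are assumptions of this sketch, not the COL toolkit's actual scoring scheme; the PI wording is taken from Figure 16.2.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerformanceIndicator:
    text: str
    rating: int  # hypothetical scale: 1 (not in place) .. 4 (fully in place)

@dataclass
class Domain:
    name: str
    indicators: List[PerformanceIndicator] = field(default_factory=list)

    def weak_indicators(self, threshold: int = 3):
        """PIs rated below the threshold become stimulus for improvement."""
        return [pi for pi in self.indicators if pi.rating < threshold]

# An institution may score well on one PI and poorly on another in the same domain.
policy = Domain("Policy", [
    PerformanceIndicator("There is a well-documented TEL policy at this institution.", 4),
    PerformanceIndicator("The vision and mission of the TEL policy are well understood "
                         "across the organisation.", 2),
])

for pi in policy.weak_indicators():
    print(f"Improvement target ({policy.name}): {pi.text}")
```

The point of the structure is exactly what the text describes: a strong overall domain can still surface individual PIs as targets for action.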
Evidence for the Effectiveness of Benchmarking in TEL
Although the COL benchmarks for TEL are relatively new and are the first attempt
to look at quality assuring TEL, very similar tools have been developed in relation
to the second level of the hierarchy shown in Figure 16.1 — technology-enhanced
learning (also TEL). Examples of this are the ACODE Benchmarks for TEL, and
a study of their use reveals important evidence about the value and impact of
benchmarking.
Since 2014, the Australasian Council on Open, Distance and e-Learning (ACODE)
has been using its Benchmarks for technology-enhanced learning (Sankey et
al., 2014) to run biennial inter-institutional benchmarking activities within
the Australasian sector (in 2014, 2016 and 2018) and another activity in the
United Kingdom (UK) in 2017. These activities have been the subject of many
papers (some of which will be cited here); this chapter will not re-rehearse all the
evaluations undertaken at these activities but instead will provide a brief meta-
analysis of the findings.
Over the last six years, more than 58 institutions, all from Commonwealth
countries, have formally used the ACODE Benchmarks to help them quality
assure their TEL practice. Of these, 34 were in Australia, 17 in the UK, six in New
Zealand, and one each in Fiji and South Africa. Across all these activities, the
institutions involved first undertook an internal activity to apply the lens of the
benchmarks, and the PIs within them, to their practice.
Participants engaging in these benchmarking activities over this six-year period
were asked whether there was sufficient scope within the current suite of PIs in the
benchmarks to cover the TEL scenarios at their institution; 93.3% either agreed or
strongly agreed with this proposition (Sankey & Pedro, 2019). Further, when asked
about their agreement with the statement “The ACODE Benchmarks made me
think twice about what we as an institution are doing in relation to TEL,” 92.5%
of participants agreed or strongly agreed. This response clearly demonstrates that
the benchmarks are helping institutions to critically self-assess their capacity in
TEL — the benchmarks’ intended function. Finally, when asked whether “[t]his
benchmarking self-assessment activity has provided an opportunity to stimulate
a more in-depth discussion about TEL at their institution,” 90% agreed or strongly
agreed that the tool had provided this opportunity.
Importantly, a benchmarking activity like this should not reference the voice
of just one or two people but should be representative of all those within the
institution. Pleasingly, over the years these activities have been running, many
people within the Australasian institutions have been involved. For example,
the data indicate that on average, ten people have been involved per institution
(Marshall & Sankey, 2017; Sankey & Pedro, 2018).
In key qualitative comments made by those representing their institutions (the
leads) in the surveys conducted, some tangible and interesting benefits have been
identified. Typical statements about the benefits include (Sankey & Pedro, 2018):
• “It has helped us to better align our activities with the university’s goals.”
• “informed the formation of a new unit and teams”
• “helped develop much better cross-unit cooperation”
• “development of a new TEL strategy, new TEL advisory group”
• “It got the conversation started for the first time within the institution.”
• “worked as a catalyst to address TEL at the institutional level.”
As previously mentioned, the new COL Benchmarking Toolkit is built on the
same underlying premise as the ACODE Benchmarks, but with a specific focus
on technology-enabled learning rather than on technology-enhanced learning.
Having said that, the outcomes from rigorously applying either tool would be
expected to be very similar, as it is the activity of gathering key members of staff
together within the institution, around a common set of indicators, and having
the conversation, that builds a new sense of corporate awareness. Therefore, the
lessons from the ACODE example, provided above, may well be applicable to those
applying the new COL Benchmarking Toolkit.
Undertaking a Benchmarking Activity
Benchmarking is perhaps the most elaborate form of external referencing that
institutions can undertake and typically consists of focused improvement
through relationships with a benchmarking partner or partners (internally and
externally), but it can also include comparing elements of practice against publicly
available information and market intelligence (TEQSA, 2019). It is a journey that
starts with a self-assessment based in evidence, not opinion.
Therefore, two critical factors need to be in place for a successful benchmarking
activity. First, because HE institutions are reasonably large organisations, rarely
does an in-depth knowledge of what is happening across the many and varied
departments within an institution reside in just one place. That being the case,
it is important that the resultant view be collectively established by having
representatives from a range of departments undertake the benchmarking
activity; specifically, ask those who might have knowledge, or access to the
appropriate evidence, to be the ones involved.
This leads us to the second critical factor, which is that any rating of one’s
position, as described in the PIs, needs to be based on evidence and not just
opinion, as evidence is what will be required when the quality agency comes to
your institution and asks, “Where is your proof?” For example, PI 4 in Domain
2 of the COL toolkit (as seen in Figure 16.2) states, “The strategic plan for TEL
is supported by adequate financial provisions.” It may be easy to agree to this
in principle, but what evidence can be provided that this actually is the case?
Generally, such evidence might include a statement in the university’s financial plan
or budget that is explicitly earmarked with the same words that appear in the strategic
plan. This may not always be the case, so what other evidence might be used? There
might be statements within departmental plans that reference the strategic plan and
have an internal budget line established for this. If these things are not present, then it
is dicult for an institution to say, hand on heart, “This is fully in place.”
Any good benchmarking tool will generally include in its documentation explicit
procedures for how best to conduct an activity. For example, the COL
Benchmarking Toolkit suggests the following six-step process:
1. A nominated department representative will first undertake an individual
self-assessment of the benchmarks.
2. The departments typically represented would include those from IT, the
central learning and teaching units, assessment and evaluation and/
or support units, representatives from the schools/faculties, a library
representative and possibly someone from the finance or planning
department.
3. Those involved would generally be the main stakeholders for each
benchmark.
4. The nominated individuals come together and share their self-assessments
with each other to then form a collective view or agreed stance.
5. It may well be that dierent departments are contributing to most or all of
the benchmarks, while others may only be involved in one or two.
6. Once a consolidated stance is established, this is then used as the initial
position.
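The consolidation in steps 4–6 is agreed through discussion rather than computed, but a simple summary of the individual ratings can help seed that discussion. The sketch below assumes a hypothetical 1–5 rating scale and illustrative stakeholder names; neither is prescribed by the COL or ACODE documentation.

```python
from statistics import median

# Hypothetical individual self-assessment ratings for one performance
# indicator (1 = not in place, 5 = fully in place). Names are illustrative.
self_assessments = {
    "IT services": 4,
    "Learning and teaching unit": 3,
    "Library": 4,
    "Faculty representative": 2,
    "Finance/planning": 3,
}

def summarise(ratings: dict[str, int]) -> dict:
    """Summarise individual ratings to seed the group discussion (step 4).
    The consolidated stance itself is agreed in discussion, not computed."""
    values = sorted(ratings.values())
    mid = median(values)
    return {
        "median": mid,
        "spread": max(values) - min(values),  # large spread signals disagreement
        "outliers": [who for who, r in ratings.items() if abs(r - mid) >= 2],
    }

summary = summarise(self_assessments)
print(summary)
```

A wide spread or a named outlier is a prompt for the group to talk through why perceptions differ before settling on the initial position.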
More details about how these procedures can be applied may be found in the COL
Benchmarking Toolkit. Needless to say, whether one is looking to use the COL
benchmarks or the ACODE Benchmarks, generally the organisations themselves
are keen for these tools to be used and can be contacted if more information is
required on how to undertake a benchmarking activity.
Conclusion
There is clear evidence that benchmarks and benchmarking activities have value
and importance for continuous improvement and quality assurance in diverse
settings. For most higher education institutions, a focus on TEL is now mission
critical to ensuring quality in the delivery of courses and programmes. The use
of a benchmarking tool, as outlined here, can help improve practice by supporting
a better understanding of the operational systems and processes present within an
institution. Benefits found by institutions undertaking benchmarking include:
• the identification of strengths and weaknesses, for planning and priority
setting;
• an improved understanding of strategic and operational requirements;
• a recognition of areas of achievement; and
• the generation of ideas and a reinvigoration of practice, through the
development of strategies for improvement in areas of need.
It is now in readers’ hands to establish how best to improve their pursuit of
technology-enabled learning, and it is hoped that the application of a
benchmarking tool, such as the COL Benchmarking Toolkit, will help them meet
this end.
References
Anand, G., & Kodali, R. (2008). Benchmarking the benchmarking models.
Benchmarking: An International Journal, 15(3), 257–291.
Anthony, K. V. (2012). Analyzing the influence of course design and gender on
online participation. Online Journal of Distance Learning Administration, 15(3).
https://www.westga.edu/~distance/ojdla/fall153/anthony153.html
Bhutta, K. S., & Huq, F. (1999). Benchmarking best practices: An integrated
approach. Benchmarking: An International Journal, 6(3), 254–268.
Braadbaart, O., & Yusnandarshah, B. (2008). Public sector benchmarking: A
survey of scientific articles, 1990–2005. International Review of Administrative
Sciences, 74(3), 421–433.
Bridgland, A., & Goodacre, C. (2005). Benchmarking in higher education: A
framework for benchmarking quality improvement purposes. Paper presented at
Educause Australasia, Auckland, New Zealand, April 5–8.
Elmuti, D., & Kathawala, Y. (1997). An overview of benchmarking process: A tool
for continuous improvement and competitive advantage. Benchmarking for
Quality Management & Technology, 4(4), 229–243.
Ettorchi-Tardy, A., Levif, M., & Michel, P. (2012). Benchmarking: A method for
continuous quality improvement in health. Healthcare Policy, 7(4), e101–e119.
Garlick, S., & Langworthy, A. (2008). Benchmarking university community
engagement: Developing a national approach in Australia. Higher Education
Management and Policy: Higher Education and Regional Development, 20(2), 153.
http://dx.doi.org/10.1787/hemp-v20-art17-en
Hart, A., & Northmore, S. (2011). Auditing and evaluating university-community
engagement: Lessons from a UK case study. Higher Education Quarterly, 65(1),
34–58.
International Organization for Standardization [ISO]. (2010). Guidance for ISO
national standards bodies: Engaging stakeholders and building consensus. ISO.
http://www.iso.org/iso/guidance_nsb.pdf
Jackson, N. (2001). Benchmarking in UK HE: An overview. Quality Assurance in
Education, 9(4), 218–235.
Kirkwood, A., & Price, L. (2016). Technology-enabled learning implementation
handbook. Commonwealth of Learning. http://oasis.col.org/
handle/11599/2363
Mann, R. (2012). Everything you need to know about benchmarking. http://www.
financepractitioner.com/performance-management-best-practice/
everything-you-need-to-know-about-benchmarking?page=1
222
Technology-Enabled Learning: Policy, Pedagogy and Practice
Marshall, S., & Sankey, M. (2017). The ACODE benchmarks for technology enhanced
learning. Paper presented at the THETA 2017 Conference: Connecting Minds,
Creating the Future, Auckland, New Zealand, May 7–10.
Massaro, V. (1998). Benchmarking in Australian higher education. In A. Schofield
(E d.), Benchmarking in higher education: An international review (pp. 33–43).
Commonwealth Higher Education Management Service.
OECD. (2017). Benchmarking higher education system performance: Conceptual
framework and data. OECD, Paris. https://www.oecd.org/education/skills-
beyond-school/Benchmarking%20Report.pdf
Ronco, S. L. (2012). Internal benchmarking for institutional effectiveness. In G. D.
Levy & N. A. Valcik (Eds.), Benchmarking in institutional research (pp. 15–23).
New Directions for Institutional Research, no. 156.
Sankey, M., Carter, H., Marshall, S., Obexer, R., Russell, C., & Lawson, R. (Eds.).
(2014). Benchmarks for technology enhanced learning. Australasian Council
on Open, Distance and E-Learning. http://www.acode.edu.au/pluginfile.
php/550/mod_resource/content/3/TEL_Benchmarks.pdf
Sankey, M., & Mishra, S. (2019). Benchmarking toolkit for technology-enabled learning.
Commonwealth of Learning. http://oasis.col.org/handle/11599/3217
Sankey, M., & Padro, F. (2018). Longer-term benefits for those benchmarking technology
enhanced learning: Facilitating excellence slowly but surely. Paper presented at
the Third Annual TEQSA Conference and Higher Education Quality Forum:
Innovation, Excellence, Diversity. Melbourne, Australia, November 28–30.
Sankey, M., & Padro, F. (2019). Seeing COL’s technology-enabled learning benchmarks
in the light provided by the ACODE benchmarking process. Paper presented at
the 9th Pan-Commonwealth Forum on Open Learning (PCF9). Edinburgh,
Scotland, September 9–12.
TEQSA. (2019, April 16). Guidance note: External referencing (including benchmarking).
Version 2.5. Tertiary Education Quality and Standards Agency, Australia.
https://www.teqsa.gov.au/latest-news/publications/guidance-note-external-
referencing-including-benchmarking
Tomlinson, M., & Lundvall, B-A. (2001). Policy learning through benchmarking
national systems of competence building and innovation – learning by comparing:
Report for the “Advanced Benchmarking Concepts” (ABC) Project. European
Council.