Reference this paper as: Rüth M and Kaspar K, "The E-Learning Setting Circle: First Steps Toward Theory Development in E-Learning Research" The Electronic Journal of e-Learning Volume 15 Issue 1 2017, (pp94-103) available online at www.ejel.org
The E-Learning Setting Circle: First Steps Toward Theory
Development in E-Learning Research
Marco Rüth and Kai Kaspar
Department of Psychology, University of Cologne, Cologne, Germany
marco.rueth@uni-koeln.de
kkaspar@uni-koeln.de
Abstract: E-learning projects and related research generate an increasing amount of evidence within and across various
disciplines and contexts. The field is very heterogeneous as e-learning approaches are often characterized by rather unique
combinations of situational factors that guide the design and realization of e-learning in a bottom-up fashion.
Comprehensive theories of e-learning that allow deductive reasoning and hence a more top-down strategy are missing so
far, but they are highly desirable. In view of the current situation, inductive reasoning is the prevalent way of scientific
progress in e-learning research and the first step toward theory development: individual projects provide the insights
necessary to gradually build up comprehensive theories and models. In this context, comparability and generalizability of
project results are the keys to success. Here we propose a new model – the E-Learning Setting Circle – that will promote
comparability and generalizability of project results by structuring, standardizing, and guiding e-learning approaches at the
level of a general research methodology. The model comprises three clusters – context setting, structure setting, and
content setting – each of which comprises three individual issues that are not necessarily sequential but frequently
encountered in e-learning projects. Two further elements are incorporated: on the one hand, we delineate the central role
of objective setting and the assessment of the goal attainment level (guiding element); on the other hand, we highlight the
importance of multi-criteria decision-making (universal element). Overall, the proposed circular model is a strategic
framework intended to foster theory development in the area of e-learning projects and research.
Keywords: e-learning research, e-learning projects, research methodology, theory development, major project issues,
decision-making, new model
1. Introduction
Learning that is enhanced by information and communication technology (ICT) is continuously expanding
across scientific disciplines (e.g., language learning, physics, and medicine, cf. Coryell and Chlup, 2007; Martín-
Blas and Serrano-Fernández, 2009; Ruiz, Mintzer, and Leipzig, 2006), geographic regions (e.g., Germany,
Nigeria, and South Korea, cf. Brosser and Vrabie, 2015; Folorunso, Shawn Ogunseye, and Sharma, 2006; Lee,
Yoon, and Lee, 2009), and diverse educational institutions and target groups (e.g., primary school,
secondary school, and university, cf. Biasutti, 2011; Ho, 2004; Woo, et al., 2011). Also, such e-learning, in its
broadest sense, is not tied to specific technological devices but “includes instruction delivered via all electronic
media including the Internet, intranets, extranets, satellite broadcasts, audio/video tape, interactive TV, and
CD-ROM” (Govindasamy, 2001, p.288). As a consequence, the field of e-learning projects is very
heterogeneous, and it is hard to compare different approaches characterized by rather unique combinations of boundary conditions and context factors (hereinafter referred to as situational factors).
Situational factors include, inter alia, technological infrastructure, discipline-specific didactical constraints,
curriculum-dependent degrees of freedom, actors involved in the project, and institutional features. For
example, the latter comprise national policies, institutional strategies, and available financial support. Indeed,
based on a survey of the European University Association, one out of four respondents from 249 higher
education institutions across 38 countries reported being aware of national strategies for e-learning in higher
education and education in general (Gaebel, et al., 2014). In addition, the “vast majority of respondent
institutions (89%) have an institutional or faculty-level strategy, or are currently preparing one” (p.22). Such
national policies and institutional strategies are only two situational factors that may lead to rather unique e-
learning approaches being too specific to be generalizable. This fact might explain why e-learning projects and
corresponding research are often designed and realized in a bottom-up fashion, guided by existing situational
factors. Comprehensive theories of e-learning that allow deductive reasoning and hence a more top-down
strategy are missing so far, but they are highly desirable. Accordingly, Pange and Pange (2011) conclude that
“e-learning research is still far from stating an explicit e-learning theory and designing an integrated solution
with concrete learning outcomes that covers the online learners’ needs” (p.935). In this paper, we will outline
first steps toward overcoming the heterogeneity of e-learning projects in favor of better comparability and generalizability, which are necessary preconditions for theory development.
2. Why we should strive for comparability and generalizability of project results
We assume that it will be a long way from the status quo to comprehensive e-learning theories and a more
top-down strategy for e-learning projects, but the benefits will be worth the effort. In view of the current
situation, inductive reasoning is the prevalent way of scientific progress in e-learning research; that is,
individual projects provide the insights necessary to gradually build up comprehensive theories and models. In
this context, comparability and generalizability of project results are the keys to success. Theories, on the
other hand, allow deductive reasoning, help to order massive amounts of data, describe relations between objects and processes, explicate causal mechanisms, provide predictive power, and can ultimately guide interventions in a systematic way. By no means do we want to claim that past and current e-learning projects are "theory-free"; of
course, many projects already refer to aspects of learning theories and consider established knowledge about
cognitive processes, but what is missing is a theory of what the inclusion of technology adds to learning. In
fact, although e-learning approaches are manifold, they usually try to improve learning by using some kind of
technology (Sarsa and Escudero, 2016). Accordingly, the diverse definitions of e-learning seem to converge to
this point when stating that “e-learning […] uses network technologies to create, foster, deliver, and facilitate
learning, anytime and anywhere.” (Raab, Ellis, and Abdon, 2002, p.221), that e-learning is “the use of new
multimedia technologies and the Internet to improve the quality of learning” (European Commission, 2001,
quoted from Alonso, et al., 2005, p.218), that e-learning is “learning facilitated and supported through the use
of information and communication technologies” (Clarke, et al., 2005, p.34), or that it should be understood as
“instruction delivered on a digital device (such as a desktop computer, laptop computer, tablet, or smart
phone) that is intended to support learning” (Clark and Mayer, 2016, p.8). In this sense, the primary goal of
each e-learning project is the improvement of otherwise “classic” learning approaches. This goal should be
constitutive for e-learning research and theory development.
As a consequence, e-learning research, as an applied science heavily intertwined with practice, needs both
valid conclusions about why a specific e-learning project is effective (i.e., improves learning) and a better
comparability of alternative e-learning approaches. Unfortunately, drawing conclusions about the effectiveness of individual e-learning projects is a challenging task as "the high number of features involved in e-learning
processes complicates and masks the identification and isolation of the intervening factors” (Sarsa and
Escudero, 2016, p.337). Similarly, the comparability of different projects is not only threatened by unique
combinations of situational factors; even when most situational factors are comparable, the effectiveness of
an e-learning approach might considerably depend on the primary actors: teachers and learners. For example,
different teachers might differently implement an electronic learning tool into lessons, the tool might be used
for different purposes by different groups of students (cf. Kaspar, Aßmann, and Konrath, 2017), and the
learners might also differ in media competence (e.g., elementary school versus high school students).
Inductive reasoning is a difficult task under such circumstances due to an impaired generalizability and
comparability of project results.
It appears obvious that a comparison between alternative learning approaches (not limited to e-learning) is
necessary to decide whether to continue, modify, or completely replace specific approaches. Such benchmark
analyses are not only desirable from the perspective of teachers and learners but also from an economic
perspective; most institutions have to manage their limited resources very carefully, calling for evidence-based
decisions about alternative projects and institutional strategies. And, of course, this also implies a political (and
sometimes diplomatic) dimension as the promotion of a specific approach (and its proponents) is often at the
expense of another approach. Therefore, we claim that e-learning projects and corresponding research would
significantly benefit from a more top-down and hence systematic research strategy.
However, we want to emphasize that it would be inappropriate to call for a stronger convergence of diverse e-
learning projects at the expense of approaches that optimally fit the situational factors given (some of which
are imposed and immutable). We rather propose to make situational factors and related design decisions
within individual projects explicit. Figure 1 depicts a schematic decision grid in which each junction represents
a specific decision occasion set by situational factors. As long as each decision and its relation to situational factors are adequately documented, one can identify causes of different outcomes in the case of very similar
(but not identical) project routes (red vs. blue line) or detect alternative routes producing comparable
outcomes (red vs. green line). It is obvious that the precision of effect estimates will increase with an
increasing number of (different) projects realizing different routes, calling for an accumulation of empirical
evidence. This is only a very simplified model as it neglects, inter alia, potential differences in the weighting of
decisions (not all situational factors are equally influential) or interdependent decisions. Still, it illustrates that
an adequate documentation of situational factors and related design decisions is a promising approach. This approach would (a) sensitize stakeholders in e-learning projects to potential consequences of decisions, (b)
make decision sequences traceable and thereby facilitate the identification of (in)effective intervening factors,
(c) allow systematic variation of project characteristics tied to situational factors in order to further improve
the learning approach at hand, and (d) enable cross-referencing and accumulation of evidence from similar
sub-processes of different e-learning projects. Together, the explication of situational factors and related
decisions in e-learning projects will substantiate the basis for inductive reasoning and subsequent theory
development.
Figure 1: Schematic illustration of decision routes provoked by situational factors of e-learning projects.
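To make the idea concrete, the decision grid of Figure 1 can be read as a simple data structure: each project route is a mapping from decision occasions to chosen options, and comparing two documented routes reduces to finding the occasions on which they diverge. The following minimal Python sketch illustrates this reading; all occasion and option names are hypothetical and serve only as placeholders for real project documentation.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectRoute:
    """Documented route of one e-learning project through the decision grid."""
    name: str
    decisions: dict[str, str] = field(default_factory=dict)  # occasion -> chosen option

def diverging_decisions(a: ProjectRoute, b: ProjectRoute) -> dict[str, tuple[str, str]]:
    """Return the decision occasions on which two documented routes differ."""
    shared = a.decisions.keys() & b.decisions.keys()
    return {occ: (a.decisions[occ], b.decisions[occ])
            for occ in shared if a.decisions[occ] != b.decisions[occ]}

# Two hypothetical projects that are similar but not identical (red vs. blue route):
red = ProjectRoute("red", {"device": "tablet", "evaluation": "summative", "training": "separate phase"})
blue = ProjectRoute("blue", {"device": "tablet", "evaluation": "formative", "training": "separate phase"})
print(diverging_decisions(red, blue))  # {'evaluation': ('summative', 'formative')}
```

If both routes are documented in this form, a different outcome of the red and blue projects can be traced back to the single diverging decision, which is exactly the kind of cross-project inference the grid is meant to support.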
3. Critical decisions in the context of e-learning projects
In order to identify common situational factors and critical decision occasions of e-learning projects, we
initially analyzed several process models that are applicable in the context of e-learning:
• The idealized e-learning lifecycle with its seven stages: analysis of the problem, design of the e-learning artefact, prototyping of the e-learning artefact, design of the e-learning environment and conduct of a pilot study, refinement of the e-learning environment and conduct of a full trial, and two final phases of evaluation research on the mature system (cf. Phillips, Kennedy, and McNaught, 2012)
• Design research approaches (e.g., Peffers, et al., 2006; Seufert, 2015)
• The ADDIE (analysis, design, development, implementation, and evaluation) model (cf. Molenda,
2003)
• The international standard ISO/IEC 40180 (ISO/IEC, 2016) providing a reference framework for the
description of quality approaches that comprises an initial needs analysis, followed by a framework
analysis, a conception/design phase, a development/production phase, an implementation phase, a
learning/realization phase, and a final evaluation/optimization phase (see also ISO/IEC 19796-1:
Pawlowski, 2007; Stracke, 2007).
As can be seen, the models show some complementary aspects but also some conceptual overlap; for
example, evaluation is a central aspect in all models. However, due to their generic nature, the models cannot
be simply applied to specific e-learning projects but must be carefully adapted. Moreover, they are too
unspecific with respect to some of the critical issues constituting the route of e-learning projects. In the end, we identified eleven major issues that e-learning project teams are usually confronted with. We arranged all issues
in the E-Learning Setting Circle as illustrated in Figure 2. The circular arrangement indicates that these issues
cannot always be addressed in the same sequential order due to their strong interdependence. The guiding
element of each project should be the setting of objectives and the related assessment of the goal attainment
level. The universal and hence core task is to make the right decisions with respect to each major issue; thus, the weighting of each issue is subject to multi-criteria decision-making (MCDM), placed at the circle's center.
Figure 2: The E-Learning Setting Circle promoting comparability and generalizability of project results by
structuring, standardizing, and guiding e-learning approaches at the level of a general research methodology.
3.1 The guiding element: Objective setting and assessment of goal attainment level
According to the primary goal of all e-learning projects, the application of technology in learning settings
should somehow improve learning (see above), but secondary goals may also exist and hence should be
defined from the outset. For example, previous studies showed that the usage of tablet computers in the
classroom can enhance student performance by creating an interactive learning network that stimulates active
participation and provides direct feedback loops (e.g., Enriquez, 2010). We may consider additional positive
effects of such an interactive classroom environment in terms of improving social cohesion and promoting
inclusion processes, which would be marked as secondary objectives. Similarly, the improvement of ICT literacy often
represents a secondary goal. In each case, conclusions about potential improvements require a reference level
that must be explicitly defined, either in terms of concrete test values (e.g., the test score should increase by
ten points) or by using an adequate control group (e.g., a group that uses an alternative learning approach) to
assess the relative effectiveness of the approach (cf. Nikopoulou-Smyrni and Nikopoulos, 2010). Also, it is a key
task to select adequate operationalizations of the outcome variables in focus; objective measures (e.g., test
performance or processing time) and subjective measures (e.g., self-efficacy, motivation, or satisfaction)
should be based on established and well-validated instruments whenever possible. Importantly, one must be
careful about the duration and timing of measurements. Some effects occur with a considerable time lag after
the intervention, so one has to think a priori about timing in order to capture the effect (Ployhart and
Vandenberg, 2010). All decisions in the context of objective setting and related post-intervention evaluation
should be documented in as detailed and comprehensive a manner as possible to facilitate comparability with other e-learning projects and to get one step closer to a theory of e-learning. In fact, insufficient study designs and poor descriptions in published project protocols drastically lower the validity and replicability of findings. This appears to be a serious problem in current e-learning research (Sarsa and Escudero, 2016).
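As a minimal illustration of the control-group option mentioned above, the relative effectiveness of an e-learning approach can be expressed as a standardized mean difference between intervention and control scores. The following Python sketch uses invented post-test scores (not data from any study cited here) and only the standard library.

```python
import statistics
from math import sqrt

def cohens_d(intervention: list[float], control: list[float]) -> float:
    """Standardized mean difference (Cohen's d with pooled SD) between two groups."""
    n1, n2 = len(intervention), len(control)
    s1, s2 = statistics.stdev(intervention), statistics.stdev(control)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(intervention) - statistics.mean(control)) / pooled_sd

# Hypothetical post-test scores: e-learning course vs. a "classic" control course
e_learning_scores = [78.0, 85.0, 72.0, 90.0, 81.0, 76.0]
control_scores = [70.0, 74.0, 68.0, 79.0, 73.0, 71.0]
print(f"Relative effectiveness (Cohen's d): {cohens_d(e_learning_scores, control_scores):.2f}")
```

The same logic applies to subjective measures; what matters for comparability is that the reference level and the effect size metric are fixed and documented a priori.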
3.2 Cluster 1: Context setting
3.2.1 Definition of project scope and status
Context setting includes the definition of an e-learning project’s scope and status within the educational
institution. For example, individual projects that are separate from the institution's standard operation might nonetheless be important, as they act as pilot projects that, in case of success, will sustainably
influence the institution’s general education strategy. This may imply many degrees of freedom for the project
team and the design and implementation process. Alternatively, a specific e-learning approach might be part
of the institution’s general education strategy and hence is intended to be implemented on a large scale by
means of a top-down strategy, reducing the degrees of freedom for individual project teams. Further scenarios
are conceivable. Either way, what is required is the development of a concrete vision of e-learning within the
institution, including awareness building to promote commitment of diverse stakeholders (cf. Pawlowski,
2007). The project’s scope can directly affect its route and likelihood of success and should therefore be
documented in project reports.
3.2.2 Identification of external and environmental constraints
Constraints of e-learning projects are manifold and comprise available resources such as staff, time, space,
technological infrastructure, and budget, but also educational policy and curriculum standards defining, inter
alia, examination dates or the maximum number of course members. External and environmental constraints
additionally determine the specific target group(s) of e-learning projects (e.g., secondary school students
versus university students), including group characteristics such as age, individual needs, didactical demands,
level of knowledge, and competencies. Sometimes these constraints even limit the competencies available to the project team, for example, when institutions do not employ experts in (educational) technology or teaching methodology (and the budget is too low for temporary support). A detailed documentation of these constraints is
mandatory as they constitute the room for manoeuvre and hence set the global decision frame for the project
(compare the schematic decision grids of Figure 1). Documentation will facilitate comparisons between
projects and the estimation of how well a specific project can be generalized in terms of external validity.
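Such documentation becomes most useful for cross-project comparison when it is both human- and machine-readable. The following Python sketch shows one possible record structure; the concrete fields are entirely hypothetical and would have to be adapted to the project at hand.

```python
from dataclasses import dataclass, asdict

@dataclass
class SituationalFactors:
    """One possible record of the external and environmental constraints of a project."""
    staff: int                          # available project staff (head count)
    budget_eur: float                   # available budget
    target_group: str                   # e.g., "secondary school students"
    max_course_members: int             # curriculum-imposed upper limit
    infrastructure: list[str]           # available technology, e.g., ["LMS", "tablets"]
    curriculum_constraints: list[str]   # e.g., fixed examination dates

factors = SituationalFactors(
    staff=3, budget_eur=25_000.0, target_group="university students",
    max_course_members=120, infrastructure=["LMS", "lecture recording"],
    curriculum_constraints=["exam in week 15"])
print(asdict(factors))  # machine-readable record for cross-project comparison
```

Serializing such records (e.g., to JSON) alongside published project reports would allow reviewers and meta-analysts to filter projects by comparable constraints.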
3.2.3 Identification of stakeholders and competence distribution
E-learning projects differ remarkably with regard to the number and type of stakeholders involved. In
principle, many stakeholders can be part of an e-learning team and hence can influence the route of the
project. Stakeholders include, but are not limited to, the target audience (e.g., students or employees),
teachers, researchers, managers, and numerous specialists (e.g., collaborating teachers from other disciplines,
providers and designers of learning material or technology, system administrators, and ambassadors of the
education ministry). With an increasing number of stakeholders involved in a project, the necessity of a
specification of responsibilities increases. Larger project teams are no obstacle per se; rather, they bundle more
competencies. However, coordination and alignment of competencies in favor of a successful project is a key
task, particularly when conflicting interests are present. Also, if critical design decisions are taken on a
democratic basis (i.e., majority decisions), underrepresented perspectives may lose their impact. We hence
suggest explicitly documenting the structure of the project team, the distribution of competencies, and assigned responsibilities (also in published project reports). Sometimes e-learning projects that are comparable with regard to most situational factors take very different routes depending on the composition of the team. An adequate documentation can help to detect such cases and to explain differences in project outcomes not sufficiently explained by other situational factors.
3.3 Cluster 2: Structure setting
3.3.1 Specification of sequential, parallel, and iterative project components
By project component we mean here individual (but often interrelated) parts of an e-learning project such as,
inter alia, an ICT training component, a main learning component, an evaluation component, a technology
component, or an assessment component.
It is a mandatory step to specify whether all components of an e-learning project are implemented in a
sequential order or whether some components run in parallel. In the latter case, peak intervals may result,
where limited resources (especially at the staff level) have to be optimally coordinated (if at all possible) to
reduce the risk of low-quality project outcomes. Additionally, it might be that components interact in an
unpredictable (and undesired) fashion. For example, imagine a university course in which students create their
own personal learning environment (PLE) to structure and learn basic knowledge about cell division processes
in biology. For this purpose, they can select out of a range of tools provided by the institution’s learning
management system (LMS) (cf. Sclater, 2008). Such an e-learning approach requires that all learners exceed a
specific threshold of ICT literacy. In an optimal case, necessary competencies are acquired in a corresponding
training phase before the focal learning phase begins, but temporal constraints of the project may lead to the
decision that these competencies should be acquired during the phase in which the focal knowledge is
addressed. As a consequence, learners might select only the simplest tool from a wide range of tools offered,
reducing the quality of their own PLE and (perhaps) learning performance in the focal domain; or,
alternatively, students invest too many cognitive resources into learning the use of the technology at the
expense of the main learning component.
Evidently, it is important to document such parallel project components to better understand the final
project results. Similarly, our analysis of diverse process models (see above) revealed an evaluation
component in all models. In most cases, evaluation is scheduled at the end of a project cycle in terms of
summative evaluation. However, it might be advantageous if the central phase in which (e-)learning occurs or
the subsequent phase in which student performance is assessed is accompanied by formative evaluation.
This strategy allows for faster adjustments (or corrections) of the implementation process and reduces the
likelihood of failed projects. Finally, sometimes e-learning projects include a component of technology
development. For example, students might use an LMS that is under continuous development during a large-
scale project over several semesters; the system is continuously updated and adapted to the needs of learners,
based on the results of formative evaluations in the form of iterative usability tests (cf. Kaspar, et al., 2010).
Thus, observable learning outcomes may depend on the dynamic status of the technology component. Such
iterative project components have strong implications for the eventual comparability and generalizability of results.
3.3.2 Specification of component scaling at macro and micro level
Scaling is a universal issue per se and it directly affects the route of projects. For instance, the project’s scope,
external and environmental constraints, and the number and composition of stakeholders are scaling issues in
part. However, here we suggest a narrow understanding of scaling limited to the macro level and the micro
level of project components. Scaling at the macro level means that, for example, a specific e-learning
component – e.g., video-based pre-service teacher education (cf. Blomberg, et al., 2013) – can be used in one
university course or many parallel courses as well as in the context of one or many scientific disciplines (e.g.,
chemistry, geography, or history), constituting the quality and quantity of the sample. The more e-learning
instances are available within a project, the more precisely one can estimate the robustness (i.e., replicability)
of results and/or their context-sensitivity. A higher number of parallel courses also allows creating quasi-
experimental designs incorporating both e-learning intervention groups and adequate control groups. Thus,
scaling at the macro level has a direct impact on the generalizability of results and the validity of inductive
reasoning. In contrast, scaling at the micro level includes, for example, the number of different tools of an LMS
used by learners within the learning phase or the number of different objective and subjective measures used
in the assessment phase. It is obvious that more tools make it possible to assess their relative effectiveness and that more indicators of learning progress make it possible to assess the generalizability of e-learning effects across different
cognitive domains.
3.3.3 Standardization of implementation phase
As outlined above, even in the case of comparable situational factors, the result of an e-learning approach
might be very different depending on how teachers and learners (but also other actors) behave in the
implementation phase. In scientific fields such as physics or biology, many instances of a natural phenomenon
(e.g., acceleration of objects or cell division processes) can be generalized (and formalized) by experimental
observation and measurement. In contrast, e-learning projects are artificial event phenomena strongly
determined by situational factors such as time, place, and actors (Phillips, Kennedy, and McNaught, 2012); that
is, the results of individual e-learning projects can heavily depend on those people involved in the
implementation phase and their unique spatiotemporal needs and interactions, making comparisons and
generalizations difficult. Therefore, after designing and producing all materials, the project team should create
a manual that guarantees process objectivity for each team member and stakeholder in the implementation
phase.
3.4 Cluster 3: Content setting
3.4.1 Referencing to learning and media theories
Whenever and wherever possible, e-learning projects should explicitly refer to evidence-based knowledge of
“classic” learning theories that delineate the acquisition of knowledge and specific competencies in
perceptual, cognitive, and behavioral terms. According to Pange and Pange (2011), most e-learning
approaches can be assigned to one of four main classic learning theories: behaviorism, cognitivism,
constructivism, or activity theory. Similarly, Klement and Dostál (2016) demonstrated that different e-learning
interventions relate to classic learning theories such as programmed learning (behaviorism), cognitive theory
(cognitivism), and constructive learning (constructivism). Therefore, e-learning interventions can and must be
(partially) based on classic learning theories; it is particularly important to explicitly describe how the
application of technology will support learning in terms of cognitive mechanisms (including motivational,
emotional, and sensorimotor processes).
Furthermore, e-learning projects should also refer to media theories to conceptually capture those particular
aspects that constitute e-learning – the technological component. Media theories should not substitute but
complement learning theories. For example, projects that apply virtual reality-based instructions to improve
learning outcomes (for a current meta-analysis, see Merchant, et al., 2014) may consider the context-specific
concept of telepresence (Steuer, 1992) when formulating a priori hypotheses about variables that might
mediate the expected learning gain. However, the focus should not be limited to media effect theories;
sometimes media selection models may add explanatory value: for example, the social influence model (cf.
Schmitz and Fulk, 1991) may provide a profound justification why a specific e-learning tool should be
prioritized over an alternative tool although the latter is more powerful – because the former tool could be
more in line with social norms and shared opinions within the project’s target group. A detailed
documentation of which theoretical account guided decisions during the design of an e-learning project would
not only substantiate the approach per se, but it would also indirectly mark those project components that are
not sufficiently based on a theoretical scaffold. This is valuable information facilitating theory development; in
the extreme case, when all components of the e-learning project are already based on established learning
and media theories (and the observed outcome supports the corresponding predictions), no specific e-learning
theory appears to be necessary.
3.4.2 Description of the relation between technological and didactical concepts
Continuous technological advances provide many avenues for e-learning. On the downside, some e-learning
projects are too much centered on technology aspects and too little focused on pedagogical and didactical
values. Accordingly, Pastor, Sánchez, and Alvarez (1994) concluded with respect to new technologies that
“systems are designed and developed first, and possible uses and users are tried to find afterwards” (p.267).
Explicitly describing how technological and didactical concepts are intertwined is a prerequisite not only for the gradual development of e-learning theories but also for successful individual projects. For
example, it makes a difference whether someone considers PLEs as a didactical concept (e.g., Attwell, 2007) or
whether they are understood as a technological concept of how to integrate diverse tools in a coherent system
(e.g., Chatti, et al., 2010) that must be additionally enriched by, for instance, the didactical concept of
problem-based or research-based learning. This is not only a subtle issue of terminology (see below), but a
significant difference in the understanding of the role of technology in e-learning. Project teams that report on
how learning objectives and technology are aligned help to understand whether technology is obligatory or
additional with regard to the learning component. With respect to the primary goal of e-learning – supporting
and improving learning – we need to know how technology may improve “classic” learning approaches and
what it qualitatively adds to learning; knowing this will substantially ease theory development.
3.4.3 Application of unequivocal terminology
An unequivocal terminology is one essential prerequisite for the comparability of different e-learning projects
as well as for the generalizability of the results of individual projects. Of course, a commonly shared
terminology would be the ideal case, but e-learning is used throughout many (or even all?) scientific disciplines
(each of which has its own parlance); also, technological terms are often ambiguous – for instance, the term
virtual reality may be a synonym for games, simulations, or virtual worlds (Merchant, et al., 2014) but also for
head-mounted displays (e.g., Schneider, et al., 2004). Project teams therefore need to reflect on correct
and precise wording to avoid any ambiguity. If they meet this criterion, other researchers and practitioners will
be able to easily reproduce or compare project characteristics. Similarly, if project teams use identical terms
but interpret them very differently, then related evidence becomes partially incompatible across projects.
Importantly, one should not assume that even prominent terms in the area of learning are precise. For
example, reviews on learner satisfaction and e-learning effectiveness conclude that these terms are neither
sufficiently defined nor methodologically specified (Bahramnezhad, et al., 2016; Noesgaard and Ørngreen,
2015). We propose that learner satisfaction, learning effectiveness, and other central concepts as well as
collective terms such as quality are essential terms in e-learning projects; agreement on these terms is a
highly desirable goal supporting theory development. In contrast to essential terms, auxiliary terms “add
nuances to, alter our understanding of, or enhance our perspectives of those familiar terms” (West, 2004,
p.147), such as hyper-learning, interactive learning, or media-rich learning. Auxiliary terms are subject to the
evolution of technology and socio-economic factors in the field of e-learning (cf. Sangrà, Vlachopoulos, and
Cabrera, 2012). Hence, project teams should be aware of the temporally limited validity of auxiliary terms.
Finally, it should be considered that vague and imprecise terminology might lead to the exclusion of project
reports in narrative reviews and meta-analyses.
3.5 The universal element: Multi-criteria decision-making
It is obvious that many of the critical issues outlined above are heavily interrelated. The E-Learning Setting
Circle presented in Figure 2 takes account of this. Although (nearly) all e-learning projects are confronted with
these major issues, the individual issues (and parts of them) may differ in their importance and hence have a
different impact on the project's overall route. Project teams have to determine the weighting of each (sub-)issue. Also, each issue implies several strategic and design decisions, some of which may be antagonistic. As a consequence, project teams have to handle a large set of decision criteria. Therefore, decision-making becomes a
challenging part of e-learning projects as one has to find optimal solutions for multi-criteria problems. Several
multi-criteria decision-making (MCDM) methods have been proposed to assess and examine the effectiveness
of e-learning approaches, but they are beyond the focus of the present paper (for a current review, see Zare,
et al., 2016). Indeed, the selection of the best method is also a (second) challenging task; this kind of paradox
can be paraphrased by the question of which decision-making method should be applied to choose the best
decision-making method (Triantaphyllou, 2000). However, most of these MCDM methods are demanding in
general and require substantial expertise in methodology and statistics. Thus, they might not be practicable for all projects and teams. At a minimum, we recommend that project teams carefully document which set of
criteria they apply to which (sub-)issues, how they weight each criterion, how they address interdependencies
between criteria, and how they decide at the end.
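For illustration, one of the simplest MCDM methods – simple additive weighting – already makes the recommended documentation tangible: criteria, weights, and scores are explicit, so the final decision is traceable. The following Python sketch uses purely hypothetical criteria and values.

```python
def weighted_sum(alternatives: dict[str, dict[str, float]],
                 weights: dict[str, float]) -> dict[str, float]:
    """Simple additive weighting: score each alternative as the weighted sum
    of its (already normalized, benefit-oriented) criterion values."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return {name: sum(weights[c] * values[c] for c in weights)
            for name, values in alternatives.items()}

# Hypothetical choice between two e-learning tools, scored on a 0-1 scale:
tools = {
    "tool_A": {"didactic_fit": 0.9, "cost_efficiency": 0.4, "social_acceptance": 0.8},
    "tool_B": {"didactic_fit": 0.7, "cost_efficiency": 0.9, "social_acceptance": 0.5},
}
weights = {"didactic_fit": 0.5, "cost_efficiency": 0.2, "social_acceptance": 0.3}
scores = weighted_sum(tools, weights)
print(max(scores, key=scores.get), scores)  # here, tool_A ranks first (0.77 vs. 0.68)
```

Documenting the weight vector alongside the decision makes it possible to see, for instance, that tool_B would win if cost_efficiency were weighted more heavily – precisely the kind of interdependence project teams should record.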
4. Conclusion
Contributions from e-learning project teams to theory development should become common practice to allow
inferring general statements on how technology may improve learning. Project teams should be aware of the
artificial nature of e-learning projects and research. Future progress in theory development relies on each project team explicitly identifying, addressing, and documenting both the situational factors relevant to the design and realization of e-learning projects and all related decisions. The E-Learning Setting Circle
presented here proposes three clusters – context setting, structure setting, and content setting – each of
which comprises three individual issues that are not necessarily sequential but frequently encountered in e-
learning projects. Importantly, we highlighted two additional issues as being global to e-learning projects: on
the one hand, the formulation of primary and secondary objectives as well as related measures of the goal
attainment level are constitutive for each project. On the other hand, the project team must decide on and
document the approach and course of (multi-criteria) decision-making that touches all main issues of the E-
Learning Setting Circle. This circular model is specific with regard to the major issues that should be addressed
within an e-learning project; the model also provides sufficient degrees of freedom to be adaptable to very
different projects. In a nutshell, the proposed model is intended to structure, standardize, and guide diverse e-
learning approaches at the level of a general research methodology.
Acknowledgements
This work was partly funded by a grant assigned to Kai Kaspar by the University of Cologne and the Ministry of
Innovation, Science, and Research of North Rhine-Westphalia (NRW, Germany).
References
Alonso, F., López, G., Manrique, D., and Viñes, J. M., 2005. An instructional model for web‐based e‐learning education with
a blended learning process approach. British Journal of Educational Technology, 36(2), pp.217-235.
Attwell, G., 2007. E-portfolio: the DNA of the Personal Learning Environment?. Journal of E-learning and Knowledge
Society, 3(2), pp.41-64.
Bahramnezhad, F., Asgari, P., Ghiyasvandian, S., Shiri, M. and Bahramnezhad, F., 2016. The Learners' Satisfaction of E-
learning: A Review Article. American Journal of Educational Research, 4(4), pp.347-352.
Biasutti, M., 2011. The student experience of a collaborative e-learning university module. Computers & Education, 57(3),
pp.1865-1875.
Blomberg, G., Renkl, A., Sherin, M.G., Borko, H. and Seidel, T., 2013. Five research-based heuristics for using video in pre-
service teacher education. Journal for Educational Research Online, 5(1), pp.90-114.
Brosser, L. and Vrabie, C., 2015. The Quality Initiative of E-Learning in Germany (QEG)-Management for Quality and
Standards in E-Learning. Procedia-Social and Behavioral Sciences, 186, pp.1146-1151.
Chatti, M.A., Jarke, M., Agustiawan, M.R. and Specht, M., 2010. Toward a Personal Learning Environment Framework.
International Journal of Virtual and Personal Learning Environments, 1(4), pp.66-85.
Clark, R.C. and Mayer, R.E., 2016. E-learning and the science of instruction: Proven guidelines for consumers and designers
of multimedia learning. John Wiley & Sons.
Clarke, A., Lewis, D., Cole, I. and Ringrose, L., 2005. A strategic approach to developing e‐learning capability for healthcare.
Health Information & Libraries Journal, 22(s2), pp.33-41.
Coryell, J.E. and Chlup, D.T., 2007. Implementing e-learning components with adult English language learners: Vital factors
and lessons learned. Computer Assisted Language Learning, 20(3), pp.263-278.
Enriquez, A.G., 2010. Enhancing student performance using tablet computers. College Teaching, 58(3), pp.77-84.
European Commission, 2001. Communication from the Commission to the Council and the European Parliament. The
eLearning Action Plan - Designing tomorrow's education. Brussels.
Folorunso, O., Shawn Ogunseye, O. and Sharma, S.K., 2006. An exploratory study of the critical factors affecting the
acceptability of e-learning in Nigerian universities. Information Management & Computer Security, 14(5), pp.496-505.
Gaebel, M., Kupriyanova, V., Morais, R. and Colucci, E., 2014. E-learning in European Higher Education Institutions: Results
of a mapping survey conducted in October-December 2013. Brussels: European University Association Publications.
Govindasamy, T., 2001. Successful implementation of e-learning: Pedagogical considerations. The Internet and Higher
Education, 4(3), pp.287-299.
Ho, E.S.C., 2004. Self-regulated learning and academic achievement of Hong Kong secondary school students. Education
Journal, 32(2), pp.87-107.
International Organization for Standardization/International Electrotechnical Commission, 2016. ISO/IEC DIS 40180.
Information Technology - Learning, Education, and Training - Quality for learning, education and training -
Fundamentals and reference framework. International Organization for Standardization.
Kaspar, K., Aßmann, S. and Konrath, D., 2017. Studierende als Gestalter*innen einer kollektiven virtuellen Lernumgebung.
In: Jahrbuch Medienpädagogik 13. Wiesbaden: Springer Fachmedien, pp.195-211.
Kaspar, K., Hamborg, K. C., Sackmann, T., and Hesselmann, J., 2010. The effectiveness of formative evaluation in the
development of usable software: a case study. Zeitschrift für Arbeits- und Organisationspsychologie, 54(1), pp.29-38.
Klement, M. and Dostál, J., 2016. Theory of Learning and E-Learning. Proceedings of INTED 2016 Conference, pp.3206-3212.
Lee, B.C., Yoon, J.O. and Lee, I., 2009. Learners’ acceptance of e-learning in South Korea: Theories and results. Computers &
Education, 53(4), pp.1320-1329.
Martín-Blas, T. and Serrano-Fernández, A., 2009. The role of new technologies in the learning process: Moodle as a
teaching tool in Physics. Computers & Education, 52(1), pp.35-44.
Merchant, Z., Goetz, E.T., Cifuentes, L., Keeney-Kennicutt, W. and Davis, T.J., 2014. Effectiveness of virtual reality-based
instruction on students' learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education,
70, pp.29-40.
Molenda, M., 2003. In search of the elusive ADDIE model. Performance Improvement, 42(5), pp.34-37.
Nikopoulou-Smyrni, P. and Nikopoulos, C., 2010. Evaluating the impact of video-based versus traditional lectures on
student learning. Educational Research, 1(8), pp.304-311.
Noesgaard, S.S. and Ørngreen, R., 2015. The Effectiveness of E-Learning: An Explorative and Integrative Review of the
Definitions, Methodologies and Factors that Promote e-Learning Effectiveness. The Electronic Journal of e-Learning,
13(4), pp.278-290.
Pange, A. and Pange, J., 2011. Is E-learning Based On Learning Theories? A Literature Review. International Journal of
Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 5(8), pp.932-936.
Pastor, E., Sanchez, G. and Alvarez, J., 1994. Distributed Multimedia Environment for Distance Learning. In: Collaborative
Dialogue Technologies in Distance Learning. Berlin/Heidelberg: Springer. pp.258-269.
Pawlowski, J. M., 2007. The Quality Adaptation Model: Adaptation and Adoption of the Quality Standard ISO/IEC 19796-1
for Learning, Education, and Training. Journal of Educational Technology & Society, 10(2), pp.3-16.
Peffers, K., Tuunanen, T., Gengler, C.E., Rossi, M., Hui, W., Virtanen, V. and Bragge, J., 2006. The design science research
process: a model for producing and presenting information systems research. In: Proceedings of the first
international conference on design science research in information systems and technology (DESRIST 2006). ME
Sharpe, Inc. pp.83-106.
Phillips, R., Kennedy, G. and McNaught, C., 2012. The role of theory in learning technology evaluation research.
Australasian Journal of Educational Technology, 28(7), pp.1103-1118.
Ployhart, R.E. and Vandenberg, R.J., 2010. Longitudinal research: The theory, design, and analysis of change. Journal of
Management, 36(1), pp.94-120.
Raab, R.T., Ellis, W.W. and Abdon, B.R., 2002. Multisectoral partnerships in e-learning: a potential force for improved
human capital development in the Asia Pacific. The Internet and Higher Education, 4(3), pp.217-229.
Ruiz, J.G., Mintzer, M.J. and Leipzig, R.M., 2006. The impact of e-learning in medical education. Academic Medicine, 81(3),
pp.207-212.
Sangrà, A., Vlachopoulos, D. and Cabrera, N., 2012. Building an Inclusive Definition of E-Learning: An Approach to the
Conceptual Framework. The International Review of Research in Open and Distributed Learning, 13(2), pp.145-159.
Sarsa, J. and Escudero, T., 2016. A Roadmap to Cope with Common Problems in E-Learning Research Designs. Electronic
Journal of E-learning, 14(5), pp.336-349.
Schmitz, J. and Fulk, J., 1991. Organizational colleagues, media richness, and electronic mail: A test of the social influence
model of technology use. Communication Research, 18(4), pp.487-523.
Schneider, S.M., Prince-Paul, M., Allen, M.J., Silverman, P. and Talaba, D., 2004. Virtual reality as a distraction intervention
for women receiving chemotherapy. Oncology Nursing Forum, 31(1), pp.81-88.
Sclater, N., 2008. Web 2.0, personal learning environments, and the future of learning management systems. Research
Bulletin, 13(13), pp.1-13.
Seufert, S., 2015. Design Research für die Implementation von eLearning: ein vielversprechendes Paradigma für die
Zusammenarbeit von Wissenschaft und Praxis?. HMD Praxis der Wirtschaftsinformatik, 52(1), pp.120-131.
Steuer, J., 1992. Defining virtual reality: Dimensions determining telepresence. Journal of Communication, 42(4), pp.73-93.
Stracke, C.M., 2007. Quality Standards for Quality Development in e-Learning: Adoption, Implementation and Adaptation
of ISO/IEC 19796-1. QED-The Quality Initiative E-Learning in Germany. The National Project for Quality in e-Learning.
Triantaphyllou, E., 2000. Multi-criteria decision making methods. In: Multi-criteria Decision Making Methods: A
Comparative Study, Springer US, pp.5-21.
West, D., 2004. Object thinking. Washington: Microsoft Press.
Woo, M., Chu, S.K.W., Ho, A. and Li, X., 2011. Using a wiki to scaffold primary-school students' collaborative writing.
Educational Technology & Society, 14(1), pp.43-54.
Zare, M., Pahl, C., Rahnama, H., Nilashi, M., Mardani, A., Ibrahim, O. and Ahmadi, H., 2016. Multi-criteria decision making
approach in E-learning: A systematic review and classification. Applied Soft Computing, 45, pp.108-128.