Interactive Educational Multimedia, number 3 (October 2001), pp. 12-26
www.ub.es/multimedia/iem
ID for Competency-based Learning:
New Directions for Design, Delivery and Diagnosis
Jeroen J. G. van Merriënboer
Educational Technology Expertise Center
Open University of the Netherlands
Abstract
Currently, there is a clear trend towards competency-based learning, but Instructional Design models as yet provide little guidance for the development of competency-based instructional systems. It is argued that rich, realistic learning tasks are always at the heart of competency-based learning. From this starting point, nine directions for a new paradigm of Instructional Design are presented: three directions pertain to the design of learning tasks; three pertain to the delivery of those tasks and learning resources in multimedia learning environments; and three pertain to the diagnosis of learners' progress.
ID for Competency-based Learning:
New Directions for Design, Delivery and Diagnosis
Societal and technological developments are moving faster and faster. Routine tasks are taken over by machines. Knowledge quickly becomes obsolete. For those reasons, education has to focus more and more on complex cognition, which is best reflected in the ability to recognize new problems and to find creative solutions for them. If people show complex cognition in a particular domain or profession, we often call them competent in that domain. According to Keen (1992), competent performance refers to the ability to:
• deal with non-routine and abstract work processes;
• handle decisions and responsibilities;
• operate in ill-defined and ever-changing environments;
• operate within expanding geographical and time horizons;
• understand dynamic systems; and
• work in groups.
This list is probably not exhaustive. Its elements are important features of so-called competencies, and due to technological and societal developments these features become more and more important in work settings. It should further be clear that these are characteristics of competencies, not competencies themselves. Competencies are always bound to a particular domain or profession. They are, in fact, a mix of complex cognitive skills, interpersonal skills, and attitudes that allow someone to show competent behavior in a particular domain or profession. Simply put, competency-based learning aims at the development of such competencies. In order to design such education, we need a view on how competencies are represented in the human mind.
One perspective is that the ability to exhibit competent behavior in a particular domain depends upon the availability of a highly integrated network of complex cognitive and interpersonal skills, attitudes, and subordinate knowledge structures; or, to use a popular term, a mental model that allows one to understand problems in a domain from different points of view and to act effectively in that domain according to the most promising perspective. A key aspect of competent behavior is the ability to co-ordinate the constituent skills involved, and to continuously use knowledge to recombine skills and attitudes in such a way that they are most helpful in dealing with a new situation. This is in line with Meaning Theory (Bartlett, 1932), in which creativity and problem solving are related to the ability to (mentally) restructure given situations in such a way that solutions can be tried, compared and (sometimes) found by combining the new situation with existing schemata in memory.
This view presents designers of instruction with a serious challenge. Nearly all existing theories for the design of instruction apply some version of Gagné's Conditions of Learning (4th ed., 1985), which states that the optimal conditions for learning depend on the goal of the learning process. For example, repetition is a good condition for learning a simple motor skill, but not for learning problem solving; and modeling is a good condition for learning strategic approaches to problem solving, but not for learning plain facts. These theories assume that one can describe a subject matter domain in terms of learning goals, and can then develop instruction for each of the learning goals, taking the optimal conditions of learning for each goal into account. This may work well for a domain characterized by independent learning goals, but certainly not for developing competencies, which are characterized by highly integrated, complex sets of learning goals.
Thus, we need a new paradigm for the design of instruction! We should acknowledge that lists of independent learning goals can never form the basis for competency-based learning. It might not even be a good idea to provide specific learning goals to students, because they will then focus on attaining each of the distinct learning goals rather than on the co-ordination and integration of the skills, knowledge and attitudes involved. Instead, the starting point for competency-based learning must be, on the one hand, a highly integrated network of learning goals that stresses the relationships between those goals and, on the other hand, learner activities designed in such a way that they stimulate the construction of such a network. How can this be achieved?
The most promising model assumes that learners develop competencies on the basis of interacting with a series of different real events or, in educational settings, simulations of those real events. In competency-based learning, the learning tasks performed by the students can be situated in such a simulated task environment and provide the necessary (vicarious) experience. The design of learning tasks is thus at the heart of competency-based learning and a competency-based curriculum. Second, the learning tasks will increasingly be performed in technology-enhanced learning environments, posing new requirements for delivery in multimedia environments. And finally, when realistic, rich tasks are the kernel of competency-based learning, such tasks must also be used for testing and assessment, calling for new approaches to diagnosing learner progress.
To summarize this Introduction, future students will develop competencies in their domain of study by working on rich learning tasks in multimedia learning environments, where assessments will be based on their complex performances (see also Kirschner, van Vilsteren, Hummel, & Wigman, 1996). The next three sections discuss the main instructional design questions that must be answered in order to develop such environments: (1) how can realistic learning tasks be designed for complex learning to occur; (2) how can such tasks and resources be delivered in multimedia environments; and (3) how can learners' progress be diagnosed on the basis of their complex task performances?
Design of Tasks for Complex Learning
Learning tasks that aim at the development of competencies involve complex learning. They must allow the acquisition of complex cognitive and interpersonal skills and their constituent skills, the construction of subordinate knowledge, and the formation of attitudes and values to take place in a simultaneous, integrated process. It is precisely the integration and co-ordination of all the aspects that characterize a competency which allows for transfer to new problems and new situations and for lifelong retention. Learning tasks may, for instance, refer to the analysis of case studies (as in the Harvard case method); to working on problems in a particular domain (as in the Maastricht model of problem-based learning); or to the design of some product or process (as in the Aalborg model of project-oriented learning), and so forth. There are thus many types of learning tasks that may play a role in a competency-based curriculum. In the following subsections, three directions or challenges with regard to the design of learning tasks are formulated (see top of Table 1). These are viewed as urgent goals that must be attained when it comes to the development of competency-based education:
• Unite the World of Knowledge and the World of Work in Learning
• Build Learner Support that Works
• Promote the Development of Higher-Order Skills
Table 1. Nine directions for a new paradigm of Instructional Design

Designing learning tasks for competency-based learning
1. Unite the World of Knowledge and the World of Work in Learning
2. Build Learner Support that Works
3. Promote the Development of Higher-Order Skills

Delivering learning tasks and resources in multimedia learning environments
4. Develop Web-based Instruction that Makes a Difference
5. Defeat the Transfer Paradox
6. Make Students Work Together

Diagnosing learner progress in competency-based learning systems
7. Provide Meaningful Feedback
8. Use Tests for Complex Performances
9. Assure the Quality of Competency-based Learning
1st Direction: Unite the World of Knowledge and the World of Work in Learning
The traditional approach to the design of learning tasks is pretty straightforward and familiar to most of us. It takes the World of Knowledge as a starting point (van Merriënboer & Kirschner, in press). A particular discipline or subject matter domain is analyzed and ordered. Methods for domain analysis, subject matter analysis and task analysis are used to make this process more efficient. The main output of the process is a highly structured description of the domain or, simply put, a study book. Learning primarily consists of actively reading and understanding these study books. The presentation of subject matter is typically used as the skeleton for further instruction: learning tasks take the form of assignments or practice items that are added to this skeleton for use during reading, or of the exercises usually found at the end of each chapter in the study book to evaluate learning.
This approach has its charms. It is neat, elegant, conveniently arranged and familiar. But it also has its drawbacks. Constructivist approaches to learning correctly stress that knowledge is not something that can simply be described in a study book and then be transmitted to learners. Instead, knowledge must be constructed by the learners, and learning tasks or meaningful problem solving can help them to do so. Indeed, one may wonder whether reading a study book is the best learning task for reaching this goal. Moreover, it may be argued that constructivist learning environments, defined as environments in which learners work on relatively complex, meaningful learning tasks, yield instruction that is less fragmented, offers more opportunities for an interdisciplinary approach, and provides better opportunities for transfer of what is learned to new problem situations. But where do these learning tasks come from? A popular approach is to replace the World of Knowledge with the World of Experience or, in the field of professional education, with the World of Work. It is no longer the discipline or subject matter domain that is analyzed, but the jobs performed by professionals in the domain of study. Job profiles become the basis of a curriculum, and the learning tasks more or less mimic the tasks that students will encounter in their professional, post-academic life. These are then said to be "authentic" learning tasks.
It may be argued that replacing the World of Knowledge with the World of Work will not solve the old problems but merely replace them with new ones. Three of them will be briefly mentioned. The first problem relates to the least effort principle. Students tend to consult a minimum of study materials in order to complete their tasks. Thus, the knowledge that students gain while working on particular learning tasks often lacks a broader structure, making it impossible to develop a historical overview of the discipline or a deeper understanding of the theoretical relationships in the field of study. And it is precisely this type of knowledge that may be necessary for transfer to occur.
The second problem is the supportive knowledge problem. Supportive knowledge is all knowledge that may be helpful for solving particular problems in a domain. It is often not known which knowledge underlies effective performance on complex tasks that involve problem solving, so it is impossible to determine which information students must have available for their work on one particular learning task. Real professionals act opportunistically: they try a particular approach and quickly switch to a new one if the current approach does not work. And they can only do this because they know a lot about the domain; that is, because they have the overview that students lack and cannot develop by working on learning tasks alone.
And third, there is the professional mobility problem. Employees nowadays change jobs quickly, which makes it less useful to take job profiles as the basis for a curriculum. To a lesser degree, this problem also occurs in the World of Knowledge, because knowledge is nowadays also subject to rapid change. One may seriously wonder how to deal with the relationship between labor market demands and the design of competency-based curricula.
A first major challenge is to develop procedures that help us combine and integrate the World of Knowledge and the World of Work in teaching. One approach may be to take competencies as a starting point and analyze them in order to develop task classes (also called "case types"; van Merriënboer, 1997) that give an abstract, general description of a broad category of learning tasks. On the one hand, such task classes allow one to identify the knowledge that may be helpful in solving a particular category of problems in the domain of study. This allows for the teaching of larger, integrated bodies of knowledge, as we used to do in the World of Knowledge. On the other hand, task classes might also be used to help content experts identify professional tasks that are really useful as learning tasks, and so bring in the World of Work. Each learning task should nicely fit the task class it exemplifies, and the complete set of learning tasks for a particular task class should provide a good mapping of all the skills and knowledge required for solving the problems in this class. This approach has been successful in training for complex skills (see Clark & Estes, 1999).
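By way of illustration only, the sketch below shows one way such an analysis might be represented as a simple data structure: a competency decomposes into task classes ordered from simple to complex, and each task class identifies supportive knowledge and collects concrete learning tasks. This is a minimal sketch under my own assumptions, not a procedure from the cited work, and all names (the pleading competency, its task classes and tasks) are hypothetical.

```python
# Minimal, hypothetical representation of competencies, task classes,
# and learning tasks (an illustration, not the cited analysis procedure).
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningTask:
    title: str
    description: str

@dataclass
class TaskClass:
    """Abstract, general description of a broad category of learning tasks."""
    name: str
    supportive_knowledge: List[str]                 # integrated bodies of domain knowledge
    learning_tasks: List[LearningTask] = field(default_factory=list)

@dataclass
class Competency:
    name: str
    task_classes: List[TaskClass] = field(default_factory=list)   # ordered simple to complex

# Hypothetical example: a competency for law students.
pleading = Competency(
    name="preparing and presenting a plea",
    task_classes=[
        TaskClass(
            name="pleas for simple, single-issue cases",
            supportive_knowledge=["civil procedure basics", "argument structure"],
            learning_tasks=[LearningTask("rent dispute", "Study the file and prepare a plea.")],
        ),
        TaskClass(
            name="pleas for multi-issue cases with conflicting evidence",
            supportive_knowledge=["rules of evidence", "case-law research"],
        ),
    ],
)
```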
2nd Direction: Build Learner Support that Works
If ID succeeds in integrating the World of Knowledge and the World of Work, the learning tasks given to students will still be much more complex and time-consuming than the assignments or practice items found in a typical study book. In order to work fruitfully on these tasks, learners need support, and we clearly need to build support that works. We cannot leave the task of supporting students to the teachers, many of whom are already swamped with work; when students work on more complex tasks, this will only get worse. Over the last decade, quite a lot of research has studied the effectiveness of support given by electronic performance support systems (EPSS; e.g., Bastiaens, Nijhof, Streumer, & Abma, 1997), cognitive tools, help systems, learning aids, and other "...things that [are supposed to] make us smart" (Norman, 1993).
The results of research on the use and effectiveness of support systems are often disappointing. Possibly the most salient finding is that the learners who need the most support are the least inclined to use it. They act like typical computer users who encounter software problems: only when everything else fails do they consult the documentation (or the available support). This finding can easily be explained by Cognitive Load Theory (Sweller, 1988; Sweller, van Merriënboer, & Paas, 1998). When students encounter problems while working on a learning task, the last thing they are inclined to do is further increase their cognitive load by adding additional information, from the support system, to their working memory. For this reason, support systems that are "add-ons" to the learning environment may increase the gap between weak and strong learners.
If traditional support fails, what must support look like to succeed? It seems likely that support must be fully embedded in the learning task or learning environment in order to be effective (Martens & Valcke, 1995). In the field of constructivism, providing embedded support is often called "scaffolding", and the term performance constraint is then probably more appropriate than the term performance support. We are all familiar with the training wheels on children's bikes, a performance constraint that prevents the children from falling over (see Carroll & Carrithers, 1984). These training wheels are clearly more effective than add-on performance support, like the parent who runs alongside shouting "Keep your handlebars straight!" What we need to do is define the training wheels that promote learning from complex tasks. Three of them will be discussed.
A first training wheel is to divide a complex learning task into subtasks. For instance, if law students have to prepare a plea to be presented in court, they can simply be instructed to prepare a plea. Or they can be provided with a "systematic approach to problem solving" for preparing a plea by instructing them to (1) study the files and determine their strategy for pleading, (2) translate the strategy for pleading into an outline for the plea, and (3) write the plea. Thus, embedded support is given by decomposing the learning task into phases.
A second training wheel is known as sequencing. Sequencing learning tasks from simple to complex is sometimes associated with old-fashioned instructional design, but authors in the field of constructivism also stress its utmost importance. For instance, Collins, Brown, and Newman (1987) correctly argued in an influential article that "...the ability to produce a coherent and appropriate sequence of case studies and problems [i.e., learning tasks] is a key feature in the design of constructivist learning environments". This is not to say that we should adhere only to traditional simple-to-complex orderings of learning tasks. For example, the work of Gropper (1983) and of Krammer and myself (van Merriënboer & Krammer, 1987) indicated the usefulness of backward chaining approaches to sequencing for complex learning. Learning tasks are then ordered in the reverse of the order in which an expert would encounter them: in learning instructional design, for instance, students would start with the evaluation and revision of existing instructional materials. Such sequencing techniques are very effective because they quickly provide useful models to the learners and because they offer meaningful, relatively complex learning tasks from the start.
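To make the contrast concrete, the following minimal sketch (my own illustration, not a procedure from the cited studies) generates both orderings over the phases of a whole task; the phase names are hypothetical.

```python
# Forward simple-to-complex sequencing versus backward chaining, sketched as
# tiny functions. Each sequence item is (phases performed, phases given).
from typing import List, Tuple

def forward_sequence(phases: List[str]) -> List[Tuple[List[str], List[str]]]:
    """Learners take over phases from the first onward; the rest is given."""
    return [(phases[:i + 1], phases[i + 1:]) for i in range(len(phases))]

def backward_chaining(phases: List[str]) -> List[Tuple[List[str], List[str]]]:
    """Learners start with the final phase; earlier phases are given as
    worked-out input and are taken over step by step."""
    n = len(phases)
    return [(phases[n - 1 - i:], phases[:n - 1 - i]) for i in range(n)]

id_phases = ["analyze", "design", "develop", "evaluate and revise"]
for performed, given in backward_chaining(id_phases):
    print("perform:", performed, "| given:", given)
# First task: only "evaluate and revise", with all earlier phases supplied;
# last task: the complete design process.
```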
A third training wheel can be embedded in the nature of the learning task itself. For instance, studying a case that provides a real-world solution for a given problem offers more support than a conventional problem, in which students are asked to come up with a solution to the same problem themselves. There is ample evidence that students who are novices in a domain learn more from the case or worked-out example than from the conventional problem (e.g., Paas, 1992; Paas & van Merriënboer, 1994). As another example, completing or extending a given design for some product or process in project-oriented learning provides more support than a conventional project for which students have to design the product or process from scratch. And again, there is ample evidence that novice students learn most from the completion problems (e.g., van Merriënboer, 1990; van Merriënboer & de Croock, 1992).
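Since the cited completion studies concern programming instruction, the format of a completion problem is easy to illustrate in code. The following is a hypothetical example of the format, not an item from those studies: most of the program is given as a worked-out model, and learners complete only the marked part.

```python
# A hypothetical completion problem: the mean() function is fully worked out
# as a model; learners complete only the marked part of standard_deviation().

def mean(numbers):
    """Return the arithmetic mean of a non-empty list of numbers."""
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)

def standard_deviation(numbers):
    """Return the population standard deviation of a non-empty list."""
    m = mean(numbers)
    # --- TO COMPLETE by the learner --------------------------------------
    # Sum the squared deviations (n - m) ** 2 over all numbers, divide the
    # sum by len(numbers), and return the square root of the result.
    ...
    # ----------------------------------------------------------------------
```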
3rd Direction: Promote the Development of Higher-Order Skills
So far, an educational system has been sketched in which meaningful work on relatively complex tasks forms the kernel of a curriculum. From an instructional point of view, scaffolding learners is critical in such a system. If we do not cleverly embed enough support in the learning environment, teachers will be overburdened, because learners will typically need more support than in a traditional educational system. But the training wheels will be mounted higher and higher, that is, support diminishes as students become more proficient in performing particular tasks (belonging to a particular "task class"). It is thus important to focus not only on instructional support, but also on the question of what is required from students in such a new, competency-based curriculum. There are many answers to this question: students must learn how to learn, regulate their own learning processes, monitor and assess their own performance as well as the performance of others, and develop better study skills, metacognitive skills, learning strategies and even general problem solving skills, and so forth. Obviously, there is a delicate trade-off between the necessity of learner support and the desire to develop independent learners.
For the purpose of this article, the many useful distinctions made in the literature with regard to independent learning will be set aside and all referred to as higher-order skills. While first-order skills are bound to a particular learning domain, these higher-order skills seem (incorrectly) to be independent of any domain: if you have learned to learn in domain A, you will also be able to learn in domain B; and if you can regulate your own learning in domain X, you will also be able to regulate your own learning in domain Y. These higher-order skills are indeed the key to effective learning. Students who lack them will simply not be able to learn in such a way that acquired cognitive schemata are useful beyond the educational context. But at the same time, it is often surprising how the discussion on higher-order skills takes place in the literature and in the educational field. Three comments relate to (1) the importance of training, (2) the assumed domain-independence, and (3) the claim that domain knowledge is becoming less important.
To start with the first issue: higher-order skills are still skills, and highly complex skills in particular. A first-order complex cognitive skill (e.g., diagnosing cardio-vascular diseases, performing psychological research) typically takes hundreds or even thousands of hours to develop. It thus seems fair to state that the development of a complex higher-order skill will also take at least hundreds of hours of experience, preferably distributed over many years. We should provide opportunities for the development of higher-order skills from primary school on, and not limit this to secondary and tertiary education. It should also be acknowledged that explicit training in such higher-order skills is often necessary. It is naive to withhold support from students during their work on complex learning tasks and then expect them to spontaneously show independent learning behaviors. There are no "hocus-pocus" higher-order skills! Yet this is what can be observed nowadays in some research, reform and development projects. In this way, efforts directed at independent learning will seriously jeopardize the quality of education.
This brings us to the second issue: the design of training for higher-order skills. These skills can be trained, but is it also possible to train them outside a particular learning domain? This parallels the old discussion on the possibility of teaching general, domain-independent problem solving skills. What seems critical in this discussion is that both general problem solving skills and other higher-order skills mainly indicate "strategic knowledge", that is, knowledge about which rules-of-thumb and systematic approaches are effective for approaching a learning task or problem. As argued elsewhere (van Merriënboer, 1997), there is always a bi-directional relationship between strategic knowledge and supportive knowledge. The better a learner's knowledge about a particular domain is organized, the more likely it is that strategic knowledge can help the learner operate in this domain. And the reverse is also true: a rich knowledge base is only useful if learners possess the strategic knowledge enabling them to make effective use of it. The bi-directional relationship between supportive and strategic knowledge simply indicates that both are cognitively represented in an integrated fashion and that one is of little use without the other. If this is true, higher-order skills can only be trained in a particular domain. And if we want the strategic component of higher-order skills to transfer between domains, they should be trained in as many domains (or courses) as possible, and it should be made explicit to students that a higher-order skill that works in one domain may or may not work in another domain.
Third, this analysis leads to questioning a claim that is becoming more and more popular in the field of education. What you hear is that a highly technological society such as ours requires more and more employees who have developed competencies and who exhibit higher-order skills, and that domain knowledge is [thus] becoming less important. The first part of this statement is certainly true, but the second part is a very dangerous misunderstanding. There is no such thing as complex cognition outside a domain, and we will never be able to develop complex cognition, including higher-order skills, outside domains. The human cognitive architecture is bound to domain knowledge (see Sweller, van Merriënboer, & Paas, 1998).
This section discussed the direction to fruitfully combine the World of Knowledge and the World of Work in a competency-based curriculum that is based on learning tasks; the direction to support or scaffold students who work on the tasks in such a way that learning is improved; and, finally, the direction to develop higher-order skills in such a curriculum, taking into account the delicate balance between required learner support and desired independent learning. The next section turns to a second aspect of future competency-based learning, namely that it will increasingly take place in Web-based multimedia learning environments.
Delivery in Multimedia Learning Environments
Multimedia learning environments have evolved from programmed tutorials, drill-and-practice computer-based training, hypertext systems, and intelligent tutoring systems towards simulation-based learning environments and all kinds of combinations of these. Nowadays, Web-based instruction is at the center of interest because it facilitates distributed distance delivery and combines presentation and communication facilities. We must answer the question of whether these technologies can be used to support competency-based learning, and if so, how. Three new directions will be formulated, now with regard to the delivery of learning tasks in multimedia learning environments, including the Web (see middle part of Table 1). Again, these are viewed as urgent goals that must be attained for making multimedia instruction more effective, efficient and appealing:
• Develop Web-based Instruction that Makes a Difference
• Defeat the Transfer Paradox
• Make Students Work Together
4th Direction: Develop Web-based Instruction that Makes a Difference
Web-based instruction is hot! It is easily accessible from anywhere in the world, it offers integrated presentation and communication facilities, it provides better opportunities for updating and re-using learning materials, and so forth. This is all true, which is why some authors argue that it provides a "technology push" for improving the quality of education. But media will never influence learning (Clark, 1994). Only instructional methods may improve the quality of education, and it is an open question whether current Web technology supports the instructional methods that are necessary for complex learning to occur.

If we seriously study what is really going on at the moment, Web technology yields a backward push instead of a forward push. The key concept with regard to Web-based instruction seems to be content, as it was in the World of Knowledge, and so-called "content providers" (publishers, universities) are expected to supply ready-made content that can be delivered over the Internet. Most Web-based instruction found on the Internet takes us back to the early days of programmed tutorials and electronic books, where learner activities mainly consist of reading from the screen and filling in boxes. This is in clear contrast with the constructivist ideas that emerged in the 1980s, stressing the importance of active work on meaningful learning tasks for knowledge construction and skill acquisition to take place.
This development threatens to lower the quality of education instead of improving it. In order to promote the development of competencies or complex cognitive skills, the kernel of Web-based instruction should not consist of content, but of rich learning tasks presented in a meaningful (simulated) task environment. For a limited number of tasks, the Web already provides the necessary functionalities. For instance, the Virtual Company Project (Westera & Sloep, 1998) offers a collaborative, distributed learning environment in which students work on rich learning tasks in a simulated company. But for many other tasks, like pleading in court, controlling aircraft, or conducting psychological experiments, Web-based instruction currently lacks the necessary functionalities (e.g., input-output facilities, simulation models that can run in the background, etc.). Of course, things will be better in the future, with Giganet ports and broadband Internet connections, but for now we should simply acknowledge that the Web often lacks the functionalities necessary for implementing instructional methods that promote complex learning.
And what about the content? In competency-based learning, content or information to be presented to learners is always subordinate to, although harmonized with, the learning tasks. Part of this content is best presented when students actually need it, that is, while they are working on the learning tasks. This type of content can best be characterized as just-in-time (JIT) information. It is mainly the information relevant to the recurrent aspects of effective task performance, that is, those aspects that are the same from problem situation to problem situation (van Merriënboer, 1997). Just-in-time presentation best allows this information to be restrictedly encoded in the cognitive rules or schemata that represent these particular aspects of task performance. Computer-based instruction, including Web-based instruction, offers excellent opportunities for the just-in-time presentation of information: this content can easily be hyperlinked to the parts of the learning task for which it is relevant.
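As a minimal illustration of what such hyperlinking might look like, the sketch below couples each recurrent step of the plea task from the 2nd direction to a just-in-time information unit. The mapping and the information texts are hypothetical; this illustrates the idea, not an existing system.

```python
# Just-in-time information presentation: each step of a learning task is
# linked to the information unit learners need exactly when they reach it.
JIT_INFO = {
    "study the files": "How to summarize a case file: ...",
    "draft the outline": "The standard structure of a plea: ...",
    "write the plea": "Style conventions for written pleas: ...",
}

def present_step(step: str) -> None:
    """Show a task step together with its just-in-time information unit."""
    print("Step:", step)
    if step in JIT_INFO:
        print("  JIT information:", JIT_INFO[step])

for step in ["study the files", "draft the outline", "write the plea"]:
    present_step(step)
```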
Another part of the content cannot easily be connected to particular learning tasks. This is the content that represents the knowledge supporting performance of the non-recurrent, problem-solving-intensive aspects of the learning task. It encompasses the "integrated bodies of domain knowledge" that may help to solve problems in the domain of interest, although you can never be sure for which specific problem they will be helpful. This information is best made available to students before they start to work on a particular category of learning tasks (the "task classes" introduced earlier), and it should remain available during their work on those tasks. This best allows them to elaborate on the information, that is, to integrate it with their existing prior knowledge. Printed materials are still the best medium for delivering this type of information. They can easily be consulted (in bed, on the train, or on the beach), specific information is relatively easy to search for, it is easy to take notes and make annotations, and reading from a book is easier than reading from a monitor.
Nonetheless, the Internet may be preferred over printed materials for the presentation of supportive information, or be used in addition to printed materials, simply because it contains so much useful information. This view is popular in the field of Resource-based Learning (e.g., Rakes, 1996). Here, we should acknowledge that the Internet contains useful information, but also a multitude of information that is not useful. This brings us back to the discussion on learner support versus higher-order skills. On the one hand, we might structure the information in such a way that learners can find what they are looking for. This helps us fight the Butterfly Defect (Salomon, 1998): "... touch, but don't touch, and just move on to make something out of it". On the other hand, we might focus on the development of search literacy skills. The development of such skills is indeed important, but one should be extremely careful that searching for information does not interfere with learning in the primary domain.
5th Direction: Defeat the Transfer Paradox
Instruction that yields higher transfer to new situations, or better transfer from the educational setting to future job performance, usually takes more time than traditional instruction and/or places higher demands on the cognitive involvement of the learners (van Merriënboer, de Croock, & Jelsma, 1997). Thus, whether we like it or not, learners have to pay a price for learning in such a way that what is learned becomes useful in a broader context. This is because transfer depends on the richness and interconnectedness of the cognitive schemata that learners are required to develop while working on the learning tasks, and constructing schemata in this way is a highly effort-demanding and time-consuming process. There is no simple solution to this paradox. But, especially in multimedia learning environments, much can be gained by lowering the extraneous cognitive load imposed on learners and, at the same time, explicitly helping them to focus their attention on those activities that promote deeper cognitive processing. This process is also known as "redirecting attention" (van Merriënboer, Schuurman, de Croock, & Paas, in press): away from learner activities that are not relevant for learning and towards those that are.
Essential in alleviating or at least reducing the paradox is the effective and efficient use of cognitive resources, and thus the elimination or reduction of extraneous cognitive load. In many multimedia learning environments learners are overwhelmed and confused by the number of available options for navigation, by the amount of available information, or nowadays even by the number of advertisements! Students have to find out how the interface works, which information is useful and which is not, which parts of the screen belong together, and many other things that have little to do with learning. Of course, usability engineering is important for all software products, but it is critical for multimedia learning environments. If usability is low, no learning will occur. And given the transfer paradox, a trade-off can be expected between the usability of a multimedia learning environment and the transfer of learning. Simplicity of the interface is a key issue in learning that is too often underestimated.
In addition to simplicity, an optimal use of modalities may also help make multimedia environments more suitable for learning. In general, little is known about the optimal combination of audio or speech, screen texts, and illustrations in pictures or video. But as argued by Mayer (1997), effective working memory capacity can be increased by a good combination of audio, text and pictorial information.
Only when multimedia learning environments are characterized by simplicity and an optimal use of modalities does it make sense to focus the attention of the learners on activities that promote intentional deeper processing of the materials, or, in other words, to increase the so-called "germane" cognitive load devoted to the construction of cognitive schemata. One way to reach this goal is to increase the variability in a set of learning tasks that belong to the same task class (Paas & van Merriënboer, 1994; de Croock, van Merriënboer, & Paas, 1998). Alternatively, learning tasks might be interspersed with questions that turn them into epistemic tasks (Ohlsson, 1996). Collins and Ferguson (1994) and Goodyear (1998) go one step further and claim that multimedia learning environments should mainly engage students in playing "epistemic games" that provoke deep cognitive processing and promote understanding.
To conclude, dealing with the transfer paradox is even more difficult because there are large inter-individual differences between students. A learning task that yields high extraneous cognitive load, and thus leaves little cognitive capacity for genuine learning, for one student may be a good learning task for another. For this reason, adaptive interfaces in multimedia learning environments could give fewer functionalities to students who experience high cognitive load and more functionalities to students who experience low cognitive load. This is a clear application of the training wheels approach to interface design.
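A minimal sketch of what such a training-wheels interface might look like is given below, assuming that cognitive load is approximated by self-reported mental effort (for example, on the 9-point rating scale common in cognitive load research). The functionalities and thresholds are hypothetical illustrations, not a described system.

```python
# Training-wheels approach to adaptive interfaces: students reporting high
# mental effort get a reduced set of interface functionalities.
ALL_FUNCTIONS = ["search", "annotate", "hyperlinks", "simulation controls", "chat"]
CORE_FUNCTIONS = ["simulation controls"]

def available_functions(reported_effort: int) -> list:
    """Map a self-reported mental-effort rating (1 = very low, 9 = very high)
    to the interface functionalities that remain enabled."""
    if reported_effort >= 7:        # high load: mount the training wheels
        return CORE_FUNCTIONS
    if reported_effort >= 4:        # moderate load: hide the least task-relevant extras
        return [f for f in ALL_FUNCTIONS if f != "chat"]
    return ALL_FUNCTIONS            # low load: full interface

print(available_functions(8))       # ['simulation controls']
```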
6th Direction: Make Students Work Together
According to my 4th direction, Web-based instruction should be developed that makes a difference. The presentation facilities of the Web are as yet far from perfect; in its current form, it certainly does not always allow for the instructional methods that may be necessary for complex learning to occur. But Web-based instruction provides another feature whose importance cannot be overestimated: communication facilities. This refers both to asynchronous types of communication, such as e-mail and discussion lists, and to synchronous types of communication, such as chat boxes and video-conferences. While the use of these means of communication is becoming increasingly popular in Western society, experiences in education are mixed. Sometimes students do not use them at all; sometimes they use them to discuss all kinds of things (football, music, etc.) that are not related to learning; and sometimes they use them to sustain learning. We should find out under which conditions the last option holds. Three possible approaches will be discussed.
First, students should not be expected to work together if there is no clear need to do so. Social factors play a role, but this is also related to the least effort principle. When you are working on an individual task, you will only start to communicate about it when things go seriously wrong. People are only inclined to learn and work together if doing so has a clear added value. Fortunately, competency-based learning provides excellent opportunities for proving this added value. Many competencies include interpersonal skills, so there will no doubt be learning tasks that require students to practice such skills in a simulated task environment. In short, some learning tasks will be distributed team tasks, that is, tasks with a co-operative goal structure (Johnson, Maruyama, Johnson, Nelson, & Skon, 1981), which makes working together a strict condition for completing the task.
Second, Web-based communication technology is likely to suffer the same problem as learner support systems. When things get tough, students are least inclined to use support systems, and probably also least inclined to use communication facilities. Like support systems, communication systems must probably be fully embedded in the learning environment before they are optimally used. Add-on communication facilities, like commercial programs for e-mail, discussion lists, and chat, may hamper learning because learners suffer from the so-called "split-attention effect" (Sweller, van Merriënboer, & Paas, 1998).
My third point with regard to computer-mediated communication is that we should take the transfer paradox into account not only for instructional on-screen messages (remember the 5th direction!), but also for student-generated messages. Have you ever been involved in a collaborative problem solving effort in which you were confronted with twenty e-mail messages from peers, all giving different directions for solving a particular subproblem, and in which it was your role to make sense of all the messages and come to a substantiated solution? I have been, and I can assure you that the only thing you want to do in such an "epistemic game" is forget about the e-mails and present your own solution (or switch off the computer). If we want these types of learning to be successful (and they can be!), we need simpler ways of organizing and representing the available information.
This section on delivery in multimedia learning environments discussed the direction to develop Web-based environments for competency-based, complex learning; the direction to defeat the transfer paradox by using instructional methods that promote deeper cognitive processing, balanced by methods that decrease extraneous cognitive load; and, finally, the direction to make students work and learn together on distributed team learning tasks. The next section turns to the third and last aspect of future competency-based learning, namely that it poses new challenges to the diagnosis of learner progress.
Diagnosis of Learner Progress
Learning cannot take place without feedback. For basic learning processes, Knowledge of Results (KR) may be sufficient: you simply see the outcomes of what you do. But for complex learning to occur, the feedback that students receive should generally be more informative. In order to give such feedback, judgments of the quality of complex performances are necessary. And such judgments are necessary not only to improve the quality of learning, but also to certify learners, to make pass/fail decisions, or to make placement decisions. Again, three new directions will be formulated, now with regard to the diagnosis of learner progress (see bottom of Table 1). In fact, the last direction concerns the interface between diagnosing learners ("student evaluation") and diagnosing educational systems ("system evaluation"). All three should be viewed as challenges that must be met in successful competency-based learning:
• Provide Meaningful Feedback
• Use Tests for Complex Performances
• Assure the Quality of Competency-based Learning
7th Direction: Provide Meaningful Feedback
Quite a lot is known about providing effective feedback to learners, at least when it comes to learning declarative knowledge or procedural skills. For instance, it is known that such feedback is most effective when it is provided immediately after performance, and that in the case of incorrect performance, feedback should explain why there was an error and give hints for how to reach the correct goal. Nevertheless, providing feedback to students is a major problem in traditional education, probably because it requires teachers to closely monitor the performance of their individual students. This may be possible in one-to-one tutoring, but not in a group-based educational system. We all know that there are still too many courses for which the only feedback students get is a final grade, which is not very informative when it comes to improving learning.
There is bad news: these problems can become even worse in a competency-based curriculum. One reason is that performance on a rich learning task is never simply right or wrong; it is merely more or less effective, efficient or satisfactory. The best students can do is apply a systematic approach to problem solving and try out the heuristics that may help them reach success. Another reason is that many different aspects can be judged for complex performances. It should be clear that only one or a few judgments on the quality of performance provide learners with little detail about how to improve; feedback should ideally be given on the many different performance aspects that can be distinguished for the learning task.
Such feedback is critical to learning complex cognitive skills, but to date, little is known about the characteristics of optimal feedback for complex performances. It is clear that students must be allowed to discover the advantages and disadvantages of applying particular approaches and heuristics, and to make mistakes. Feedback can only be given retrospectively. It should discuss the similarities and dissimilarities between the approach taken by the students and expert approaches, the application and misapplication of particular rules-of-thumb, the qualities of the solution in comparison to other possible solutions, and so on.
Butler and Winne (1995) presented an interesting model for providing so-called cognitive feedback to students, in such a way that it promotes self-regulated learning from rich learning tasks. The central idea is that feedback should provide students with information that allows them to link particular "cues" to the quality of their performance. Cues may, for instance, concern features of the task, the learning activities, or the cognitive processes the learners were engaged in. The cues should enable students to reflect on the quality of the solutions they found, on the quality of their problem solving processes, and on the quality of learning itself. Thus, cues that promote reflection become a central element of feedback to students, just as reflection is a central element for (reflective) practitioners and lifelong learners.
But even if we succeed in identifying the characteristics of effective feedback for complex learning, providing this type of feedback will remain a heavy burden for teachers. While some progress is being made in the field of Artificial Intelligence (e.g., the use of Latent Semantic Analysis for providing feedback on papers; see Landauer, Foltz, & Laham, 1998), computers are still far from taking over this task from teachers. The most feasible approach, both from a practical viewpoint and from the viewpoint of the development of higher-order skills, is to delegate an important part of the work to students themselves. Debriefing sessions, group discussions, and peer and self-assessments can offer a valuable approach to providing meaningful feedback (Sluijsmans, Dochy, & Moerkerke, 1999).
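For readers curious about the Latent Semantic Analysis idea mentioned above, the sketch below illustrates its general flavor: texts are projected into a dimensionally reduced semantic space and a student essay is compared with reference texts. It uses scikit-learn and a toy corpus as stand-ins; it is an illustration of the principle, not the implementation of Landauer, Foltz, and Laham (1998).

```python
# Rough sketch of the LSA idea: reduced "semantic" space plus similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_texts = [                      # hypothetical expert reference texts
    "Cognitive load theory explains learning in terms of limited working memory.",
    "Worked examples reduce extraneous load for novice learners.",
    "Transfer improves when learners construct rich, interconnected schemata.",
]
student_essay = "Novices learn more from worked examples because working memory is limited."

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reference_texts + [student_essay])

# Real LSA uses a few hundred dimensions over a large corpus; two suffice here.
svd = TruncatedSVD(n_components=2)
Z = svd.fit_transform(X)

# Similarity of the essay to each reference text in the semantic space; low
# similarity to the texts on the assigned topic could trigger feedback.
print(cosine_similarity(Z[-1:], Z[:-1])[0])
```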
8th Direction: Use Tests for Complex Performances
If we find solutions for the seven challenges discussed above, we begin to see an ideal, future environment for complex learning:
1. Students work on rich learning tasks that combine the World of Knowledge with the World of Work.
2. They receive enough embedded support to ensure learning.
3. They are assisted in developing higher-order skills.
4. They often work on their tasks in a simulated (Web-based) task environment, where the information that is prerequisite to task performance is presented just-in-time and the information that supports the work on broader classes of tasks is available in books or in highly accessible electronic repositories.
5. They are prompted by the environment to redirect their attention from irrelevant processes towards processes that are important for genuine learning.
6. They have optimal facilities for performing team learning tasks through communication with peers and tutors.
7. And finally, they receive meaningful feedback from peers and tutors on the quality of their complex performances.
But there is still one thing that may destroy this dream: examinations! Frederiksen (1984) convincingly described the "real test bias", that is, the tendency of teachers and students to focus their teaching and learning on what is tested. We can put a lot of effort into the design and development of powerful environments for complex learning, but if we subsequently test students on their factual knowledge and procedural skills, it will certainly be a waste of time and effort. As argued before, we have to deal with the least effort principle. Like all of us, students act as "calculating citizens" and will only learn what they are required to learn, with a minimum of time and effort devoted to it. And we cannot blame them for that. There is only one solution: we should test how we teach. Tasks for testing must mimic the rich learning tasks used for learning, and students must be judged on their complex performances.
Several authors plead for such an integration of teaching and testing (Frederiksen, 1994). And from the viewpoint of cognitive psychology, the problem of performance-based testing is largely solved together with the problem of providing meaningful feedback on complex performances (the 7th direction). Both problems concern judgments of the quality of performance on rich tasks, and the only difference seems to lie in their purpose: for feedback, the purpose is to improve learning; for testing, the purpose is to make pass/fail decisions or to certify learners.
However, these different purposes have some important implications; two of them will be briefly discussed. First, if the purpose is to improve learning, written or verbal interpretive summaries, giving judgments on the quality of all relevant aspects of complex performance, are most useful. Much more information is conveyed in such summaries than in numerical ratings. While some authors seem to argue that these qualitative judgments are the only way to judge complex performances (e.g., Delandshere & Petrosky, 1998), we nevertheless need numerical ratings for the purpose of certification. At the very least, and most simply, a judgment on a numerical 0-1 scale (0 = fail / do not certify; 1 = pass / certify) is necessary. It is thus important to develop scoring and judging procedures for complex performances and to support teachers in their use of such procedures.
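As an illustration of what such a scoring procedure might minimally involve, the sketch below aggregates aspect-level judgments into the 0-1 certification decision. The performance aspects, weights and cutoff are hypothetical; a real procedure would of course require validation.

```python
# Hypothetical scoring procedure: raters judge each distinguishable aspect of
# a complex performance; an explicit rule yields the 0/1 certification decision.
ASPECT_WEIGHTS = {                  # aspects of a (hypothetical) plea task
    "legal analysis": 0.4,
    "argument structure": 0.3,
    "use of evidence": 0.2,
    "delivery": 0.1,
}

def certify(aspect_scores: dict, cutoff: float = 0.6) -> int:
    """Aggregate aspect judgments (each 0.0-1.0) into 0 (fail) or 1 (pass)."""
    weighted = sum(ASPECT_WEIGHTS[a] * s for a, s in aspect_scores.items())
    return 1 if weighted >= cutoff else 0

scores = {"legal analysis": 0.8, "argument structure": 0.7,
          "use of evidence": 0.5, "delivery": 0.9}
print(certify(scores))   # weighted score 0.72 >= 0.6, so 1: pass / certify
```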
Second, in a traditional curriculum certification usually takes place course by course: students simply get their diploma after they have passed all examinations. This is not possible in a competency-based curriculum that uses performance assessments, because competencies are not linked to particular courses but are expected to develop throughout the whole curriculum. This necessitates some form of progress testing (cf. van der Vleuten, 1996), yielding information on the quality of different aspects of complex performances with regard to the end objectives of the curriculum. Student dossiers can thus no longer be a simple file with pass/fail results for each course, but must keep track of student progress in a much more detailed fashion.
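The sketch below illustrates, under my own assumptions, what such a detailed dossier might record: repeated aspect-level judgments tied to assessment moments, so that progress toward the end objectives can be tracked across the whole curriculum. All names and data are hypothetical.

```python
# Hypothetical student dossier for progress testing: aspect-level judgments
# are recorded per assessment moment rather than one pass/fail per course.
from collections import defaultdict

class Dossier:
    def __init__(self, student: str):
        self.student = student
        self.history = defaultdict(list)   # aspect -> [(moment, score), ...]

    def record(self, moment: str, aspect_scores: dict) -> None:
        """Store the aspect-level judgments from one assessment moment."""
        for aspect, score in aspect_scores.items():
            self.history[aspect].append((moment, score))

    def progress(self, aspect: str) -> float:
        """Change between the first and most recent judgment of an aspect."""
        judgments = self.history[aspect]
        return judgments[-1][1] - judgments[0][1] if len(judgments) > 1 else 0.0

d = Dossier("student-001")                 # hypothetical data
d.record("year 1, task class 1", {"legal analysis": 0.4, "argument structure": 0.3})
d.record("year 2, task class 3", {"legal analysis": 0.7, "argument structure": 0.6})
print(round(d.progress("legal analysis"), 2))   # 0.3
```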
9th Direction: Assure the Quality of Competency-based Learning
We have discussed learner diagnosis for the purpose of giving meaningful feedback and improving learning, which is mainly related to the process of learning ("throughput"), and for the purpose of certification or making pass/fail decisions, which is mainly related to the output of learning. This leaves us with the input: learner diagnosis with the purpose of making placement decisions (or, in some contexts, selection). This type of learner diagnosis is especially important for institutes for Open Learning, which serve a highly heterogeneous group of students.
One obvious requirement for intake procedures is that they should be representative of the system of competency-based learning and performance-based assessment that underlies the whole educational system. Knowledge testing alone is not enough! In addition, institutes for Open Learning too often suffer from a high drop-out rate. Representative, performance-based assessment procedures may better help students to determine their suitability for a (particular) study and so increase the success rate of study programs.
Making placement decisions in order to increase the success rates of academic programs marks the transition between student evaluation and system evaluation (i.e., diagnosing the quality of the educational system). So far, this article has given a rough sketch of a future educational system in which students work on rich (team) learning tasks in multimedia learning environments and are assessed on their complex performances. There is no doubt that the methods, techniques and instruments needed for evaluating and assuring the quality of such a system will differ from those currently available. For this reason, it needs to become clear what the implications of this new approach to learning are for system evaluation and quality assurance.
Conclusion
In this article, nine new directions for Instructional Design for competency-based learning were presented and discussed. Together, they define a new paradigm for Instructional Design. However, presenting the directions is easier than applying them in practice. ID projects are, by definition, bound to a highly particular educational context. Reigeluth (1983) describes the process of ID and makes a distinction between conditions or context variables, methods, and outcomes. For ID projects, the conditions cannot easily be manipulated and include, for instance, the size and grouping of the target learners, the available technological and physical infrastructure (computer and network facilities, rooms), organizational characteristics, lesson schedules, the available expertise among the parties involved, and so forth. Within these limitations, it is up to the professionalism, expertise and creativity of the designer or design team to specify the instructional methods that are appropriate for reaching the desired outcomes, that is, to specify a learning environment that is as effective, efficient and appealing as possible. But in addition, a process of organizational change and deep innovation is needed to create fruitful conditions for the application of the directions presented in this article.
References
Bartlett, F. C. (1932). Remembering. Cambridge, UK: Cambridge University Press.
Bastiaens, Th., Nijhof, W. J., Streumer, J. N., & Abma, H. J. (1997). Working and learning with Electronic
Performance Support Systems: An effectiveness study. Training for Quality, 5(1), 10−18.
Butler, D. L., & Winne, P. H. (1995). Feedback and self−regulated learning: A theoretical synthesis.
Review of Educational Research, 65(3), 245−281.
Carroll, J. M., & Carrithers, C. (1984). Blocking learner error states in a training wheels system. Human
Factors, 26, 377−389.
Clark, R. E. (1994). Media will never influence learning. Educational Technology, Research and
Development, 42(3), 39−47.
Clark, R. E., & Estes, F. (1999). The development of authentic educational technologies. Educational
Technology, 39(2), 5−16.
Collins, A., Brown, J. S., & Newman, S. E. (1987). Cognitive apprenticeship: Teaching the craft of
reading, writing, and mathematics. In L. B. Resnick (Ed.), Cognition and instruction: Issues and agendas.
Hillsdale, NJ: Lawrence Erlbaum.
Collins, A., & Ferguson, W. (1994). Epistemic forms and epistemic games: Structures and strategies to
guide inquiry. Educational Psychologist, 28(1), 25−42.
De Croock, M. B. M., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). High versus low contextual
interference in simulation−based training of troubleshooting skills: Effects on transfer performance and
invested mental effort. Computers in Human Behavior, 14(2), 249−267.
Delandshere, G., & Petrosky, A. R. (1998). Assessment of complex performances: Limitations of key
measurement assumptions. Educational Researcher, 27(2), 14−24.
Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American
Psychologist, 39(3), 193−202.
Frederiksen, N. (1994). The integration of testing with teaching: Applications of cognitive psychology in
instruction. American Journal of Education, 102, 527−564.
Gagné, R. M. (1985). The conditions of learning (4th Ed.). New York: Holt, Rinehart & Winston.
Goodyear, P. (1998, March). New technology in higher education: Understanding the innovation process.
Invited keynote paper presented at the International Conference on Integrating Information and
Communication Technology in Higher Education (BITE), Maastricht, The Netherlands.
Gropper, G. L. (1983). A behavioral approach to instructional prescription. In C. M. Reigeluth (Ed.),
Instructional design theories and models: An overview of their current status (pp. 101−161). Hillsdale, NJ:
Lawrence Erlbaum.
Johnson, D. W., Maruyama, G., Johnson, R., Nelson, D., & Skon, L. (1981). Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis. Psychological Bulletin, 89(1), 47-62.
Keen, K. (1992). Competence: What is it and how can it be developed? In J. Lowyck, P. de Potter, & J.
Elen (Eds.), Instructional Design: Implementation issues (pp. 111−122). Brussels, Belgium: IBM
International Education Center.
Kirschner, P. A., van Vilsteren, P. P. M., Hummel, H. G. K., & Wigman, M. C. S. (1996). The design of a
study environment for acquiring academic and professional competence. Studies in Higher Education,
22(2), 151−172.
Landauer, T. K., Foltz, P. W., & Laham, D. (1998). Introduction to Latent Semantic Analysis. Discourse
Processes, 25, 259−284.
Martens, R. L., & Valcke, M. A. (1995). Validation of a theory about functions and effects of embedded
support devices in distance learning materials. European Journal for the Psychology of Education, 10,
181−196.
Mayer, R. E. (1997). Multimedia learning: Are we asking the right questions? Educational Psychologist,
32(1), 1−19.
Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine.
Reading, MA: Addison Wesley.
Ohlsson, S. (1996). Learning to do and learning to understand. In P. Reimann & H. Spada (Eds.), Learning
in Humans and Machines (pp. 37−62). Oxford: Pergamon.
Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem−solving skill in statistics: A
cognitive load approach. Journal of Educational Psychology, 84, 429−434.
Paas, F. G. W. C., & van Merriënboer, J. J. G. (1994). Variability of worked examples and transfer of
geometrical problem solving skills: A cognitive load approach. Journal of Educational Psychology, 86,
122−133.
Rakes, G. (1996). Using the Internet as a tool in a resource-based learning environment. Educational Technology, 36(5), 52-56.
Reigeluth, C.M. (Ed.). (1983). Instructional−design theories and models: An overview of their current
status. Hillsdale, NJ: Lawrence Erlbaum.
Salomon, G. (1998). Novel constructivist learning environments and novel technologies: Some issues to be
concerned with. Research Dialogue in Learning and Instruction, 1(1), 3−12.
Sluijsmans, D., Dochy, F., & Moerkerke, G. (1999). Creating a learning environment by using self−, peer−
and co−assessment. Learning Environments Research, 1, 293−319.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12,
257−285.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional
design. Educational Psychology Review, 10, 251−296.
Van der Vleuten, C. P. M. (1996). Fifteen years of experience with progress testing in a problem-based learning curriculum. Medical Teacher, 18(2), 103-109.
Van Merriënboer, J. J. G. (1990). Strategies for programming instruction in high school: Program completion vs. program generation. Journal of Educational Computing Research, 6, 265-287.
Van Merriënboer, J. J. G. (1997). Training complex cognitive skills. Englewood Cliffs, NJ: Educational
Technology Publications.
Van Merriënboer, J. J. G., & de Croock, M. B. M. (1992). Strategies for computer−based programming
instruction: Program completion vs. program generation. Journal of Educational Computing Research, 8,
365−394.
Van Merriënboer, J. J. G., de Croock, M. B. M., & Jelsma, O. (1997). The transfer paradox: Effects of
contextual interference on retention and transfer performance of a complex cognitive skill. Perceptual and
Motor Skills, 84, 784−786.
Van Merriënboer, J. J. G., & Kirschner, P. A. (in press). Three worlds of instructional design: State of the
art and future directions. Instructional Science.
Van Merriënboer, J. J. G., & Krammer, H. P. M. (1987). Instructional strategies and tactics for the design of introductory computer programming courses in high school. Instructional Science, 16, 251-285.
Van Merriënboer, J. J. G., Schuurman, J. G., de Croock, M. B. M., & Paas, F. G. W. C. (in press). Redirecting learners' attention during training: Effects on cognitive load, transfer test performance and training efficiency. Learning and Instruction.
Westera, W., & Sloep, P. B. (1998). The Virtual Company: Toward a self-directed, competence-based learning environment in distance education. Educational Technology, 38(1), 32-37.