International Journal of Electrical and Computer Engineering (IJECE)
Vol. 9, No. 4, August 2019, pp. 3262~3271
ISSN: 2088-8708, DOI: 10.11591/ijece.v9i4.pp3262-3271
Journal homepage: http://iaescore.com/journals/index.php/IJECE
Predicting cognitive load in acquisition
of programming abilities
So Asai1, Dinh Thi Dong Phuong2, Fumiko Harada3, Hiromitsu Shimakawa4
1, 4Ritsumeikan University, Japan
2Paracel Technology Solutions Co., Ltd, Vietnam
3Connect Dot Ltd, Japan
Article Info
ABSTRACT
Article history:
Received Mar 2, 2019
Revised Mar 29, 2019
Accepted Apr 8, 2019
In this paper, we propose a method to predict cognitive load and the factors
affecting learning efficiency in programming learning from the learning
behavior of learners. Since the concepts of programming are generally
difficult, some learners suffer inappropriate cognitive load while trying to
understand them. Although teachers must keep the cognitive load of such
learners appropriate, it is difficult for them to find learners with
inappropriate cognitive load among a large number of learners. To find such
learners, we construct models with the random forest algorithm, using learning
behavior collected from learners solving fill-in-the-blank tests. An experiment
shows the models can detect cognitive load for intrinsic load (IL) and germane
load (GL) along with their factors. This result clarifies the learning factors
affecting the cognitive load of learners, which enables teachers to address its
adjustment with a small burden.
Keywords:
Data mining
e-learning
Machine learning
Programming learning
Copyright © 2019 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
So Asai,
Ritsumeikan University,
Nojihigashi 1-1-1, Kusatsu, Shiga, 525-8577, Japan.
Email: asai@de.is.ritsumei.ac.jp
1. INTRODUCTION
Educational institutes that teach information technology provide programming exercise classes for
many novice learners. Novice learners must understand many abstract concepts to acquire programming
abilities. This is difficult for them because they have never experienced how the concepts are realized
on computers. Quite a few learners drop out [1].
The reason why learners cannot understand the concepts specific to programming is that
inappropriate cognitive load is imposed on them [2]. Cognitive load is one of the important elements to
consider in order to help learners understand these concepts. Cognitive load is closely
involved in the acquisition and consolidation of programming skills. Learners are more likely to acquire
programming abilities if their cognitive load is appropriate. It is therefore necessary to ensure that learners
do not bear inappropriate cognitive load. However, each learner incurs cognitive load in their own way.
In a usual exercise class, one or a few teaching staff teach dozens of learners. It is
difficult for teachers to find learners who have inappropriate cognitive load.
To provide learners with a preferable learning environment, many methods have been proposed for
instructional design [3]. Keller [4] emphasizes motivation for such environments. The work in [5] lists motivations
and strategies in learning. Phuong [6] regards them as factors determining the learning behavior of each
learner and proposes a data analysis method to figure out these factors, in order to determine which students should
be supervised. However, almost all instructional design methods try to extract learners' mental factors
such as motivation to establish successful environments [7]. Even highly motivated learners need support
from their teachers to overcome difficulties when they struggle with difficult learning tasks. A systematic
method is necessary for teachers to detect learners with high cognitive load.
This paper proposes a method to predict the factors that impose cognitive load on learners through
analysis of the learning behavior they show while solving programming assignments. There are several types of
cognitive load [8]. The method leverages fill-in-the-blank tests to determine what type of cognitive load
learners have. To determine the type of cognitive load, the method generates a random forest model by
analyzing the learning behavior learners exhibit while finding correct answers to fill the blanks in a program.
The output of the model identifies the type of cognitive load on learners, while the importance of the
predictor variables indicates its factors. This clarifies the state and factors of the cognitive load of each
learner. Teachers can then adjust excessive cognitive load of learners into an appropriate state with minimal effort.
2. COGNITIVE LOAD IN PROGRAMMING LEARNING
2.1. Requirements to obtain programming ability
To acquire programming ability, it is essential to be able to read given programs and
write appropriate programs, utilizing the concepts specific to programming. To achieve this,
learners are required to organize various kinds of knowledge on many concepts along with their usages.
Many learners cannot solve programming assignments because of the difficulty of abstract concepts and of the ways
to utilize them [9]. Learners without enough understanding of the concepts and their usage do not know what
program they should write when they engage in assignments. Even if learners grasp programming concepts
and ways to utilize them, many of them cannot imagine the actual behavior of given programs. Those learners
cannot understand why the programs behave in specified ways. Such learners may fail to learn programming,
which may cause them to escape from programming learning. Teachers must find learners who are likely to
have understanding failures in order to prevent them from escaping. It is indispensable to identify what impedes
learners from understanding programming.
2.2. Cognitive load affecting learning
People use working memory when they think about something. The amount of working memory represents
the capacity available for thinking. People must hold many elements in their working memory when
they learn new things. Since there are individual differences in working memory, the allowable amount of
learning varies. When the same elements are repeatedly processed in working memory, they are organized as
a schema. Once elements become a schema, learners can utilize them without cognitive load, because the
elements have been organized together with their usage. The goal of learning is for learners to become able to
solve problems they have never seen before, without effort, by combining elements they have understood.
In other words, learning means constructing a schema into which elements are organized along with their usage.
Cognitive load affects understanding failure caused by difficulties in programming
concepts [2, 10]. Cognitive load indicates how much working memory is assigned to tasks to understand
unknown items and to utilize acquired knowledge in solving assignments [11]. The cognitive load theory
classifies usage of the working memory into the following three types [8, 12].
Intrinsic Load (IL): IL occurs due to the inherent difficulty of the assignment relative to the abilities of
learners. IL is imposed when learners engage in problem solving involving an unknown item and unfamiliar
ways of reasoning with it. This load gets high when learners feel excessive difficulty with the assignment,
because of the small amount of their working memory.
Extraneous Load (EL): EL is caused by surrounding learning environments and brought by the poor
quality of teaching materials and lectures provided for learners. Teachers are required to design the materials
and the lectures so as to reduce this load.
Germane Load (GL): GL is related to the schematization of contents to be learned. The imposition
of GL implies learners are in the process of organizing given learning contents as schemata. Learners are
encouraged to experience this load [13].
In an ideal learning situation where learners continue to acquire new knowledge, cognitive load should be
high in GL, in which knowledge is being schematized, while it should be low in IL
and EL [8]. This work refers to this as an appropriate state of cognitive load. Teachers must endeavor to
keep the cognitive load of learners in an appropriate state. However, every learner perceives different difficulties
for each given learning content. Learners are also affected differently by the lesson design of
programming lectures and exercises conducted as mass classes. It is difficult to estimate the cognitive load for
each learner. Moreover, it is nearly impossible to judge it from the appearance of learners on the spot during
class.
2.3. Estimation of cognitive load
A number of researchers have engaged in work to measure cognitive load [10, 14, 15]. Several
methods have been proposed, and their usefulness has been confirmed in various fields.
Morrison et al. [16] proposed a method to measure the cognitive load of learners in programming classes
without sensors. They extended the method Leppink et al. [17] established for statistics classes to programming
learning. The method provides learners with several question items, each designed to capture either IL, EL,
or GL. For each question item, learners present their answers on an 11-point scale.
Leppink's method assesses cognitive load for the whole learning process in the class, not the factors causing
it. Morrison's method does not clarify the factors of cognitive load of each learner, either.
Separating visual information from textual information in a programming class, Yousoof et al. [18]
proposed a method to measure accumulated cognitive load in order to reduce it. This method mainly
focuses on EL. It does not fully consider IL or GL, which come from the utilization of working memory by
learners. It is necessary to clarify the essential factors of understanding failures, considering the learning
behavior that appears for each type of cognitive load. Fridman et al. [19] measured cognitive load
during vehicle driving without any wearable sensors. Their work investigates cognitive load in real time,
feeding video data of eye movements to a deep learning model. This study does not distinguish the three types of
cognitive load.
In programming learning, it is essential to discriminate IL, EL, and GL. Since IL and EL decrease
learning efficiency, they should be low. Meanwhile, high GL is preferable, because it implies the learner is
working on the schematization of learning items. Attaching physical sensors to learners to obtain cognitive
load in programming learning should be avoided. There are many learners in a class, so attaching sensors to
all of them brings huge costs. Sensors may also inappropriately influence learning efficiency. In addition to
investigating cognitive load without sensors, it is necessary to clarify what learning behavior
distinguishes the three types of cognitive load of learners. Furthermore, their factors should be identified so that
teachers can make them appropriate. A contribution of our work is to identify factors of cognitive load, as well
as to investigate the three types of cognitive load without specific sensors.
3. PREDICTING FACTORS AFFECTING COGNITIVE LOAD
3.1. Method overview
Our work aims to estimate cognitive load from the learning behavior observed while learners solve programming
assignments, in order to predict each type of cognitive load along with the factors causing it. A method is proposed to
predict cognitive load along with its factors when learners study programming with a procedural language
like C. Figure 1 illustrates the method overview. To train classification models, learning behavior is collected
from learners answering fill-in-the-blank tests. The method lets the learners specify their cognitive load for
each assignment with the questionnaires explained in [16]. It trains random forest models with the learning
behaviors and the cognitive load. The random forest models extract correspondences between the learning
behavior and the cognitive load of the learners. When the learning behaviors of a new learner are provided,
the models predict the learner's cognitive load along with its factors. The method helps teachers to confirm
whether new learners in programming learning are under appropriate cognitive load. The teachers can
also address learners with inappropriate cognitive load. They can take measures to adjust the learners' cognitive load,
taking its factors into account.
Figure 1. Overview of our method to predict factors affecting cognitive load
3.2. Collecting learning behavior
We focus on learning behavior when learners answer assignments of fill-in-the-blank tests.
Fill-in-the-blank tests are frequently used to measure learners' understanding in programming classes [20].
Learners must fill in code fragments suitable for the blanks, considering consistency with the code fragments
disclosed in the other parts of the program. Fill-in-the-blank tests reveal the understanding of learners because
learners are requested to read the disclosed code fragments, understand them, and conceive code to fill the blanks.
In fill-in-the-blank tests, learners are less likely to guess answers than in multiple-choice tests [21].
Fill-in-the-blank tests can examine learning achievements. No type of cognitive load is
imposed on learners who have already acquired programming abilities. Learners who are schematizing learning items
have high GL because they are in the process of acquiring programming ability. High IL is imposed on
learners when they engage in assignments whose solutions are themselves hard to find. Learners seem to have high EL
when assignments impose unnecessary burdens such as statements that are hard to read. Fill-in-the-blank tests,
where the parts to be answered are limited, are useful for measuring cognitive load as well as the
understanding level of learners.
When answers that learners are convinced are correct are judged to be incorrect, they consider the
reasons, consuming their working memory. High IL occurs in this case. Recognizing that their schema is wrong,
learners reconstruct another schema. GL gets high during the reconstruction. Proper cognitive load of learners can be
collected only if learners receive the grading result on the spot while they solve fill-in-the-blank tests.
In general learning with fill-in-the-blank tests, learners submit their answers on a sheet, with the grading
results fed back after a few days. Proper cognitive load cannot be expected to be obtained with this learning
procedure.
The method provides an automatic grading system [22]. It is implemented as a web application, with
which learners can grade their answers interactively. The system grades an answer a learner gives for each
blank on demand. It notifies the learner of the correctness of the answer immediately. Learners using the system are
allowed to submit their answers many times, until their answers become correct, within the time limit.
In our method, one test corresponds to the code of a program, parts of which are blanked out.
More than one test is provided for learners. As learning behavior, the method collects 3 data items:
the consumed time, change histories of answers, and the number of grading demands. More concretely,
the system records the elapsed time from the time point a learner starts a specific test, a chronological list of
answers the learner submits for each blank, and the number of grading demands transmitted to the system,
respectively. We focus on the 3 types of predictors explained below to detect each type of cognitive load along
with its factors. All of the predictors can be derived from the learning behavior the automatic grading system
for fill-in-the-blank tests collects.
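As an illustration of how these raw data items could be represented, the sketch below defines a hypothetical event record from which all predictors are derived; the class and field names are assumptions for illustration, not the actual schema of the grading system.

```python
from dataclasses import dataclass

# Hypothetical event record; field names are assumptions for illustration only.
@dataclass
class GradingEvent:
    learner_id: str      # who submitted the answer
    assignment: str      # e.g., "F3"
    blank_id: int        # which blank was graded
    answer: str          # submitted code fragment
    correct: bool        # grading result returned on demand
    elapsed_sec: float   # time since the learner opened this assignment

# A learner's session is the chronological list of such events plus
# page-transition events; the predictors in this section are derived from them.
```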
Grading requests
Learners can check whether their answers are correct many times for each blank. High IL means
learners have difficulties filling the blanks in the test because the test itself is hard for them. They have few
candidate code fragments to fill the blanks. When none of them works, such learners have nothing left to try.
Therefore, we assume that such learners demand grading rarely. In the meantime, learners with high EL
might fail to understand what the test requests or even how they should use the system, which also leads them to
do nothing. The system allows learners to demand grading as many times as they want. It aims to cause
learners to reach right answers after careful consideration. The method counts the number of grading requests for each
blank. It does not count a grading request when the code fragment has not been modified since the previous demand,
even if the learner demands grading.
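A minimal sketch of this counting rule, assuming the hypothetical event records above; only demands whose answer differs from the previous demand for the same blank are counted.

```python
from collections import defaultdict

def count_grading_requests(events):
    """Count grading demands per (assignment, blank), ignoring demands where
    the answer was not modified since the previous demand for the same blank.
    `events` is a chronological list of GradingEvent-like records (a sketch)."""
    last_answer = {}
    counts = defaultdict(int)
    for e in events:
        key = (e.assignment, e.blank_id)
        if last_answer.get(key) != e.answer:   # count only modified answers
            counts[key] += 1
        last_answer[key] = e.answer
    return counts
```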
Page transitions
In fill-in-the-blank tests, teachers provide multiple assignments in one session in order to confirm
understanding of various programming concepts. Learners can solve assignments in an order of their own
choice. They can also switch between them halfway. Learners change to another assignment when they either
complete right answers for all blanks of an assignment or give up answering because they cannot imagine
any other answer. The more difficulty learners perceive in assignments, the more they transition between assignments.
Trying to reconsider previous assignments, learners move back to them. In other cases, learners move forward
to new assignments. The method counts page transitions, distinguishing transitions to the next assignment, to the
previous one, and jumps of two or more assignments.
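The following sketch illustrates one possible way to categorize these transitions from a learner's sequence of visited assignment indices; the exact categorization used in the method is assumed here, not confirmed by the paper.

```python
def count_page_transitions(page_sequence):
    """Categorize transitions in a learner's sequence of visited assignment
    indices (e.g., [1, 2, 1, 4]): to the next assignment, to the previous one,
    and jumps of two or more in either direction (a sketch)."""
    counts = {"next": 0, "previous": 0, "ahead_more": 0}
    for prev, cur in zip(page_sequence, page_sequence[1:]):
        step = cur - prev
        if step == 1:
            counts["next"] += 1
        elif step == -1:
            counts["previous"] += 1
        else:
            counts["ahead_more"] += 1   # jump of two or more assignments
    return counts
```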
Time transitions for correct answer rate
When a learner engages in a test, the learner repeatedly sends a grading demand after specifying an
answer for each blank. Define the correct answer rate as the ratio of blanks filled with correct answers to
all the blanks. On the whole, learners increase their correct answer rate as the number of grading requests
grows. Learners who have enough understanding of the tests can answer all of them easily; their correct
answer rate quickly reaches the full mark or one close to it. On the other hand, learners who lack
understanding need a long time to find correct answers or give up finding them. It is expected that the time
transition of the correct answer rate plays a vital role in discriminating the cognitive load. In order to
represent this role in an integrated way, the method quantifies the time transition for assignment $a$ by the
following equation:

$$ S_a = \frac{1}{T}\left(\sum_{i=1}^{n} r_{i-1}\,(t_i - t_{i-1}) + r_n\,(T - t_n)\right) $$

where $n$ is the number of grading demands of the learner, $r_i$ is the correct answer rate at the $i$-th grading,
$t_i$ is the elapsed time at the $i$-th grading from the start of answering, and $T$ is the deadline of answering.
The time spent answering assignments other than assignment $a$ is excluded. The index $i = 0$ stands for the state
at the start time of answering, where $r_0 = 0$ and $t_0 = 0$. The quantification enables us to represent the
accumulation of the correct answer rate of a learner over the elapsed time, as Figure 2 shows. In the case that the
correct answer rate reaches a high value early, $S_a$ gets large, as shown in Figure 2(a). On the other hand, when the
correct answer rate of a learner remains low for a long answering time, $S_a$ is small, as shown in Figure 2(b).
Figure 2. Examples of time transitions for correct answer rate
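A minimal sketch of computing this quantity from one learner's grading history, under the area-under-the-curve reading of the equation above; the function name and data layout are assumptions.

```python
def time_transition_score(history, deadline):
    """Accumulate the correct answer rate over elapsed time for one assignment.
    `history` is a chronological list of (elapsed_time, correct_answer_rate)
    pairs recorded at each grading demand; `deadline` is the answering deadline.
    Returns a value in [0, 1]: large when the rate becomes high early,
    small when it stays low for most of the answering time (a sketch)."""
    area = 0.0
    prev_t, prev_r = 0.0, 0.0          # state at the start of answering
    for t, r in history:
        area += prev_r * (t - prev_t)  # rate prev_r held during (prev_t, t]
        prev_t, prev_r = t, r
    area += prev_r * (deadline - prev_t)  # rate after the last grading demand
    return area / deadline
```

For example, `time_transition_score([(60, 0.33), (300, 1.0)], deadline=1800)` yields roughly 0.88, because the full mark is reached early in the answering time.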
3.3. Investigating cognitive load
The method uses the cognitive load measurement questionnaire [16] to investigate the three
types of cognitive load of learners. For the investigation, learners answer a questionnaire consisting of
10 questions. They are divided into 3, 3, and 4 questions corresponding to IL, EL, and GL, respectively.
To clarify the target of each question, qualifiers are added to each statement of the cognitive load
measurement questionnaire in our method. Figure 3 lists the questions. Learners evaluate each of them on an
11-point scale. A larger number corresponds to stronger agreement with the question. When the sum of marks of the
questions corresponding to a specific type of cognitive load is large, that load of the learner is judged
to be high.
Figure 3. Questions of cognitive load measurement questionnaire
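As an illustration, the per-type questionnaire scores might be aggregated and binarized as follows; the item-to-type mapping order and the median-split cutoff are assumptions, since the paper only states that a large sum is judged as high.

```python
import statistics

# Hypothetical mapping of the 10 questionnaire items (11-point scale) to load
# types; the actual item order follows Figure 3 and is assumed here.
ITEMS = {"IL": [0, 1, 2], "EL": [3, 4, 5], "GL": [6, 7, 8, 9]}

def load_scores(responses):
    """Sum the marks of the items belonging to each load type for one learner."""
    return {t: sum(responses[i] for i in idx) for t, idx in ITEMS.items()}

def label_high_low(all_scores, load_type):
    """Label each learner high/low for one load type by a median split over the
    class (an assumed cutoff; the paper only says a large sum means high)."""
    values = [s[load_type] for s in all_scores]
    cutoff = statistics.median(values)
    return ["high" if s[load_type] > cutoff else "low" for s in all_scores]
```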
3.4. Identifying cognitive load with random forest
The method generates models that associate characteristics of learning behavior with the factors of the
cognitive load evaluated by the questionnaire. The models are founded on the random forest algorithm. The predictor
variables and response variable of a model are the learning behaviors and a specific type of cognitive
load, respectively. Models based on the random forest algorithm present how important each predictor
variable is in detecting a target type of cognitive load, which contributes to identifying its factors.
An individual model is generated for each of the three types of cognitive load, because our work aims to clarify
the factors for each type.
The random forest algorithm constructs a multitude of decision trees from randomly chosen
predictor variables and produces a model classifying learners according to the degree of cognitive load by
majority voting over the outputs of those decision trees. Each node of a decision tree composing the model
bisects the states of learners specified by a predictor variable derived from the learning behavior in solving
fill-in-the-blank tests. It is desirable that one of the divided nodes includes more learners of a target type of
cognitive load. Namely, the impurity in each node of the decision tree should be small. The impurity is
represented by entropy , which is calculated with the following equation:
   
   log  
The difference of the entropy after the branch from the one before the branch should be small, which
corresponds to maximizing the information gain. The information gain  at the node is obtained from
the following equation:
  
 
Each node is divided so as to minimize the information gain. To prevent the decision trees from
overfitting, nodes whose information gain is less than a threshold value are not divided anymore, regarded as
leaves. Learners matching the branching condition in their cognitive load are classified in each node.
Eventually, learners with specific characteristics in learning behavior fall into each of leaf nodes.
The characteristics correspond to a response variable. Combination of branching conditions along a path
from the root to a leaf corresponding to high cognitive load reveals factors of learning behavior which affect
the cognitive load.
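For concreteness, a sketch of the entropy and information-gain computation performed at a single split, using binary high/low labels:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels (e.g., 'high'/'low')."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into `left` and `right`."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted
```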
In the analysis using random forest, many decision trees are used to judge whether cognitive load
is imposed on learners. In determining a specific type of cognitive load, the more frequently a
specific predictor variable is used over all decision trees, the more important the predictor variable becomes.
Models based on random forest present the contribution of each predictor variable to the judgement of cognitive
load as the variable importance.
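A minimal sketch of this modeling step using scikit-learn; the feature layout, hyperparameters, and function name are assumptions rather than the authors' exact configuration.

```python
from sklearn.ensemble import RandomForestClassifier

# X: one row per learner with the behavior features (grading requests, page
# transitions, time transitions); y: "high"/"low" labels for one load type.
def train_load_model(X, y, feature_names):
    model = RandomForestClassifier(n_estimators=500, criterion="entropy",
                                   random_state=0)
    model.fit(X, y)
    # Variable importance: contribution of each predictor to the forest,
    # normalized so that the importances sum to 1.
    ranked = sorted(zip(feature_names, model.feature_importances_),
                    key=lambda p: p[1], reverse=True)
    return model, ranked[:10]   # the model and its top-10 predictors
```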
3.5. Predicting cognitive load of learners
A model is constructed from training data collected from many learners solving fill-in-the-blank
tests with the automatic grading system. New learners also solve fill-in-the-blank tests as the learners for the
training did. Their learning behavior is applied to the trained models. Each cognitive load of the new learners
is determined by the majority votes of the models. Teachers are notified of the detected type of cognitive load
and the learning behaviors of the learner. When IL or EL is high, or GL is low, the teacher should follow up with the
learners to lead their cognitive load to an appropriate state.
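As a usage sketch, the three trained models could be applied to a new learner and an inappropriate state flagged as follows; the names and label encoding are assumptions.

```python
def flag_learner(models, x_new):
    """`models` maps load type ("IL", "EL", "GL") to a trained classifier;
    `x_new` is the feature vector of a new learner. Returns True when follow-up
    is advisable: IL or EL predicted high, or GL predicted low (a sketch)."""
    pred = {t: m.predict([x_new])[0] for t, m in models.items()}
    return pred["IL"] == "high" or pred["EL"] == "high" or pred["GL"] == "low"
```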
Under appropriate states of all types of cognitive load, learners can acquire programming skills
more effectively. This contributes to preventing learners from failing in programming learning, while suppressing
the burden on teachers. Teachers can utilize the saved effort to prepare better lectures with higher
educational effects.
4. EXPERIMENT
4.1. Overview
An experiment was conducted to confirm whether the method identifies factors of cognitive load.
The purposes of this experiment are the following:
- Collecting datasets of learning behavior of learners answering fill-in-the-blank tests
- Verifying the random forest models generated from the datasets
The subjects are Vietnamese college students who are learning C programming and information
technology. They are second-year college students. A preliminary survey confirmed they had already
learned C programming for beginners. At the time of the experiment, the subjects varied in terms of
interest in and ability at C programming. The materials in the experiment were provided in English,
because the preliminary survey confirmed most of them can understand English fairly well. In the
experiment, the subjects solved five assignments of fill-in-the-blank tests, in which several code fragments are
blanked out. Table 1 shows these assignments and the number of their blanks. The concepts in the learning units
are generally considered difficult in programming learning [23, 24]. Assignments regarding these concepts are
expected to reveal differences in the understanding of learners. The assignments were chosen so
that unbiased datasets could be obtained. The quality of the assignments is guaranteed because they
are actually used in programming classes at Ritsumeikan University.
The subjects use a website implementing the automatic grading system for fill-in-the-blank tests
described in Section 3.2. They access the experimental website with a browser they usually use, and log
in with a user ID and password given in advance to solve the assignments. They can solve the assignments in any
order. They can switch from one assignment to another within the time limit. The learning behavior of
subjects is stored on the server with asynchronous communication via web beacons [25], immediately every
time learners take predefined actions such as pressing buttons and reloading web pages. After the time limit
has elapsed, they finish solving the assignments. They subsequently answer two kinds of questionnaires:
one for cognitive load measurement, and the other for assessing the degree of difficulty of each
assignment. They evaluate cognitive load on an 11-point scale, and the difficulty of each assignment on a 5-point scale.
Table 1. Assignments of fill-in-the-blank test in the experiment
Assignment No. | Learning units | No. of blanks
F1 | 2-dimensional array | 3
F2 | 2-dimensional array | 7
F3 | Structure, pointer, and linked list | 7
F4 | 2-dimensional array, and function | 9
F5 | Sorting algorithm | 8
4.2. Result
Datasets of learning behavior were obtained from 54 subjects. 3 subjects were excluded due to an
insufficient number of learning behavior records. The proposed method constructed random forest models [26] for
the 3 kinds of cognitive load using the datasets. The model construction reveals important variables to detect
IL, EL, and GL. The important variables [27] are determined based on the Gini index [28]. The predictor and
response variables for training with random forest are the following:
- Predictor variables: 43 features based on the 3 kinds of learning behavior described in Section 3.2
- Response variable: high or low IL, EL, and GL obtained from the questionnaire
The datasets were divided into data for model construction and verification. In order to verify
the accuracy, we adopted 6-fold cross-validation [29]. Table 2 shows the accuracies, which represent the correct
rate on the verification data under cross-validation. The predictor variables are arranged in descending order of
the average of their importance, and we choose the top 10 of them. The top 10 variables and their importance
for each cognitive load are shown in Tables 3, 4 and 5. Variable importance indicates how much each variable
contributes to the random forest model; the importances sum to 1.
Table 6 indicates the difficulty of each assignment obtained by the questionnaire.
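A sketch of how the 6-fold cross-validation accuracy reported in Table 2 could be computed with scikit-learn; the estimator settings are assumptions.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def cross_validated_accuracy(X, y):
    """Mean accuracy over 6 cross-validation folds for one load type (a sketch)."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_val_score(model, X, y, cv=6, scoring="accuracy")
    return scores.mean()
```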
Table 2. Accuracies of the generated models
Cognitive Load | Training data | Test data
Intrinsic | 0.907 | 0.740
Extraneous | 0.870 | 0.537
Germane | 0.944 | 0.870
Table 3. The top 10 important variables for IL
Variable | Importance
Grading requests for blank 4 in F4 | 0.074
Grading requests for blank 1 in F3 | 0.065
Time transitions for F3 | 0.059
Time transitions for F4 | 0.046
Grading requests for blank 3 in F4 | 0.045
Time transitions for F5 | 0.043
Time transitions for F1 | 0.039
Grading requests for blank 5 in F5 | 0.038
Grading requests for blank 5 in F4 | 0.038
Grading requests for blank 5 in F2 | 0.037
Table 4. The top 10 important variables for EL
Variable | Importance
Time transitions for F2 | 0.086
Page transitions to previous | 0.077
Grading requests for blank 3 in F3 | 0.052
Time transitions for F3 | 0.050
Sum of page transitions | 0.046
Grading requests for blank 5 in F5 | 0.042
Time transitions for F1 | 0.041
Grading requests for blank 1 in F3 | 0.037
Page transitions to next | 0.037
Grading requests for blank 3 in F1 | 0.035
Table 5. The top 10 important variables for GL
Variable | Importance
Page transitions to previous | 0.116
Grading requests for blank 1 in F2 | 0.093
Time transitions for F4 | 0.076
Grading requests for blank 3 in F1 | 0.070
Time transitions for F1 | 0.066
Time transitions for F2 | 0.049
Time transitions for F3 | 0.045
Grading requests for blank 3 in F4 | 0.043
Grading requests for blank 3 in F2 | 0.036
Page transitions ahead more | 0.035
Table 6. Evaluation of difficulty level for the assignments
Assignment No. | Mean | Variance
F1 | 2.63 | 0.74
F2 | 3.14 | 0.74
F3 | 4.10 | 0.42
F4 | 3.87 | 0.58
F5 | 3.30 | 0.97
5. IMPORTANT PREDICTORS
This section assesses the usefulness of the random forest models for cognitive load, along with the
influence of the important variables on the 3 types of cognitive load of the subjects. It also discusses
measures teachers should take against inappropriate states of cognitive load. Variables important for
distinguishing subjects with high cognitive load from those with low cognitive load take largely different
values between the two kinds of subjects.
Intrinsic load
Many of the important variables for IL are related to assignments F3 and F4. As shown in
Table 6, the subjects evaluated F3 and F4 as the most difficult among the five assignments. This result implies
the difficulty of assignments strongly influences the detection of IL. Let us check the code fragments blanked out in
these assignments. The important variables are the numbers of grading requests for blanks which should be
filled with multiple statements or with variables in value settings. Subjects must seek a correct answer for these
blanks while considering the influence on other parts of the program. The top 10 variables include the time
transitions for correct answer rate, explained in Figure 2, for all assignments except F2. Since
this learning behavior indicates how quickly subjects reached correct answers, it is directly linked with
the difficulty of the assignment.
These facts suggest that learners with high IL excessively consume working memory because
they engage in the assignments for a long time. On the other hand, learners with low IL can easily solve the
assignments in a short time. Teachers can predict the IL of learners by focusing on their learning behavior on
assignments with high difficulty. Teachers can mitigate IL by providing assignments of low difficulty for
learners with high IL.
Germane load
The most important variable is the number of page transitions to the previous assignment. Transitioning to the
previous assignment means that the subjects retry solving the assignments. In our method, the subjects can
solve any assignment of the fill-in-the-blank test many times within the time limit. Because subjects solved
the assignments repeatedly, it seems they managed to establish a schema related to the blanks and the contents
of the assignment.
The next most important variables are related to assignments F1, F2, and F4. The contents of these
assignments are related to linear algebra. They should be solved using 2-dimensional arrays. All of the
subjects acquired skills in linear algebra calculation before they learned programming. Because they need to solve
the assignments using the programming and linear algebra skills concurrently, the assignments seem to reveal
differences in the schematization of programming knowledge.
On the other hand, the importance of predictor variables related to assignment F5 is smaller. Assignment F5 is
solved with a sorting algorithm. Because the sorting algorithm was unknown to most of the subjects, they showed low
GL in assignment F5. Therefore, in order to detect GL in learners, it is effective to have them solve assignments
utilizing knowledge they have already obtained, or assignments similar to the learning unit. In addition, the
learning behaviors predicting GL are important to confirm how many learners have benefited from the
learning contents and supervision. If few learners have benefited, teachers should review the
learning contents and supervision, so that the learners can establish schemata for programming abilities.
Extraneous load
The random forest model in our experiment did not show sufficient accuracy on the test data.
This means the model cannot predict learners' EL from their learning behavior in solving the fill-in-the-blank
assignments. It is difficult to figure out which variables of learning behavior detect EL, because the 10 important
variables contain various kinds of variables. A likely cause of this result is that the subjects
were unfamiliar with the experimental website implementing the automatic grading system.
The subjects had practiced with a simple fill-in-the-blank assignment to grasp the usage of the website.
However, it might not have been enough. As another cause, it is also conceivable that the language used in the
experiment was not the primary language of the subjects. Most of the subjects understand English, but it is not
their everyday language. Answering in a non-primary language seems to affect EL.
In order to improve the accuracy of the model to detect EL, it is necessary to provide an
environment with which learners are familiar. We should let the subjects become accustomed to the website by
having them try the automatic grading system more times. It is also required to revise the website to be easier
to use for the learners.
Our method detects cognitive load from learning behavior without attaching any sensors to learners.
It clarifies the cognitive load of learners at an early stage, avoiding extra burdens not only on teachers but also on
learners. It is difficult to find learners who have inappropriate IL or GL without our method. Teachers
can make each cognitive load of learners appropriate with a light burden.
6. CONCLUSION
In this paper, we propose models to detect cognitive load along with its factors, founded on
the random forest algorithm. We also discuss the usefulness of the models. The models are constructed with
predictor variables representing three kinds of learning behavior. They are effective for detecting IL and GL.
The learning behavior during solving difficult assignments is useful to detect IL. Analyzing the learning
behavior of learners engaging in assignments on items they have already learned is effective for detecting GL.
On the other hand, EL could be detected if we improve the usability of the automatic grading system.
Teachers can find learners who have inappropriate cognitive load early, because the proposed method clarifies the
cognitive load of learners from their learning behavior. In future work, we will further clarify the accuracy of
predicting learners' cognitive load and its factors, extending the areas of learning contents.
REFERENCES
[1] J. Bennedsen and M. E. Caspersen, “Failure rates in introductory programming,” SIGCSE Bull., vol. 39, no. 2,
pp. 32-36, Jun. 2007.
[2] M. Okamoto and H. Kita, “A study of novices missteps in shakyo-style learning of computer programming,”
in Memoirs of the center for educational research and training, shiga university 22, pp. 49-53, 2014.
[3] R. A. Reiser and J. V. Dempsey, Trends and issues in instructional design and technology, 3rd ed. New York:
Pearson, 2012.
[4] J. M. Keller, Motivational design for learning and performance, the arcs model approach, New York:
Springer, 2010.
[5] P. Pintrich, A manual for the use of the motivated strategies for learning questionnaire (mslq), Ann Arbor:
National Center for Research to Improve Postsecondary Teaching; Learning, 1990.
[6] D. T. D. Phuong and H. Shimakawa, “Grasping motivation and strategy of current students referring to past
programming course,” IEEJ Transactions on Fundamentals and Materials (A), vol. 136, no. 12, pp. 787-796, 2016.
[7] R. Gagné, W. Wager, K. Golas, and J. Keller, Principles of instructional design, 5th ed. Belmont: Wadsworth
Pub., 2005.
[8] J. Sweller, “Element interactivity and intrinsic, extraneous, and germane cognitive load,” Educational Psychology
Review, vol. 22, no. 2, pp. 123-138, 2010.
[9] S. Shuhidan, M. Hamilton, and D. D’Souza, “Understanding novice programmer difficulties via guided learning,”
in Proceedings of the 16th annual joint conference on innovation and technology in computer science education,
pp. 213-217, 2011.
[10] J. Sweller, P. Ayres, and S. Kalyuga, Cognitive load theory, Springer, 2011.
[11] W. Schnotz and C. Kürschner, “A reconsideration of cognitive load theory,” Educational Psychology Review,
vol. 19, no. 4, pp. 469-508, Dec. 2007.
[12] K. E. DeLeeuw and R. E. Mayer, “A comparison of three measures of cognitive load: Evidence for separable
measures of intrinsic, extraneous, and germane load,” Journal of Educational Psychology, vol. 100, pp. 223-234,
Feb. 2008.
[13] J. Sweller, J. van Merrienboer, and F. Paas, “Cognitive architecture and instructional design,” Educational
Psychology Review, vol. 10, no. 3, pp. 251-296, Sep. 1998.
[14] E. Haapalainen, S. Kim, J. F. Forlizzi, and A. K. Dey, “Psycho-physiological measures for assessing cognitive
load,” in Proceedings of the 12th acm international conference on ubiquitous computing, pp. 301-310, 2010.
[15] F. Paas, J. Tuovinen, H. Tabbers, and P. W. van Gerven, “Cognitive load measurement as a means to advance
cognitive load theory,” Educational Psychologist, vol. 38, no. 1, pp. 63-71, Jan. 2003.
[16] B. B. Morrison, B. Dorn, and M. Guzdial, “Measuring cognitive load in introductory cs: Adaptation of an
instrument,” in ICER ’14 proceedings of the tenth annual conference on international computing education
research, pp. 131-138, 2014.
[17] J. Leppink, F. Paas, C. P. M. Van der Vleuten, T. Van Gog, and J. J. G. Van Merriënboer, “Development of an
instrument for measuring different types of cognitive load,” Behavior Research Methods, vol. 45, no. 4,
pp. 1058-1072, Dec. 2013.
[18] M. Yousoof, M. Sapiyan, and K. Kamaluddin, “Measuring cognitive load-a solution to ease learning of
programming,” World Academy of Science, Engineering and Technology International Journal of Computer and
Systems Engineering, vol. 1, no. 2, pp. 32-35, 2007.
[19] L. Fridman, B. Reimer, B. Mehler, and W. T. Freeman, “Cognitive load estimation in the wild,” in Proceedings of
the 2018 chi conference on human factors in computing systems, pp. 652:1-652:9, 2018.
[20] K. Chang, B. Chiao, S. Chen, and R. Hsiao, “A programming learning system for beginners a completion strategy
approach,” IEEE Transactions on Education, vol. 43, no. 2, pp. 211-220, May 2000.
[21] R. Medawela, D. Ratnayake, W. Abeyasinghe, R. Jayasinghe, and K. Marambe, “Effectiveness of ‘fill in the
blanks’ over multiple choice questions in assessing final year dental undergraduates,” Educación Médica, vol. 19,
no. 2, pp. 72-76, 2018.
[22] S. Asai and H. Shimakawa, “Automatic scoring system of fill-in-the-blank tests to measure programming skills,”
in Proc. Of the 6th the international conference on information technology and its applications, pp. 23-29, 2017.
[23] E. Lahtinen, K. Ala-Mutka, and H.-M. Järvinen, “A study of the difficulties of novice programmers,”
in Proceedings of the 10th annual sigcse conference on innovation and technology in computer science education,
pp. 14-18, 2005.
[24] I. Milne and G. Rowe, “Difficulties in learning and teaching programming–Views of students and tutors,”
Education and Information Technologies, vol. 7, no. 1, pp. 55-66, Mar. 2002.
[25] J. C. Sipior, B. T. Ward, and R. A. Mendoza, “Online privacy concerns associated with cookies, flash cookies, and
web beacons,” Journal of Internet Commerce, vol. 10, no. 1, pp. 1-16, 2011.
[26] J. Han, M. Kamber, and J. Pei, Data mining: concepts and techniques, 3rd ed. Waltham: Morgan Kaufmann, 2010.
[27] T. Hastie, R. Tibshirani, and J. Friedman, The elements of statistical learning: Data mining, inference, and
prediction, 2nd ed. Springer, 2009.
[28] K. P. Murphy, Machine learning: a probabilistic perspective, Cambridge: MIT Press, 2010.
[29] T. M. Mitchell, Machine learning, New York: McGraw-Hill, 1997.
... That is hindering factor of students' programming success (Gomes & Mendes, 2007;Kelleher & Pausch, 2005;Stachel et al., 2013;Yukselturk & Altiok, 2017). Asai et al. (2019) states that complexity of programming steps excessively challenge working memory and increase intrinsic cognitive load (CL). Interface features of programming tool such as coding screen, drag/drop options, button colors may perceive as extraneous CL sources (Çakiroğlu et al., 2018;Moons & De Backer, 2013). ...
... This situation is called cognitive overload which negatively affects learning (Mavilidi & Zhong, 2019;Moreno, 2010;Paas et al., 2004). It can reduce efficiency of programming language learning process (Asai et al., 2019;Garner, 2002;Tsai, 2019;Yousoof et al., 2006). At this point, the germane CL, called the optimum mental effort, need run to work effectively to be schematized the new knowledge (Debue & Van De Leemput, 2014;Doolittle et al., 2005;Paas & Van Merriënboer, 1993). ...
... According to Salleh et al. (2018), activities which contain interrelated information enable student to focus on performing the programming task, and reduce cognitive overload on working memory. Asai et al. (2019) stated that difficulty of tasks increases intrinsic CL and suggested that students is given a long time to complete such activities. If intrinsic and extraneous CL are taken under control in learning environments, it is easier to make sense of connection between pieces of the information, mental effort decreases to spent getting new information, and the information retention increases in memory (Debue & Van De Leemput, 2014;Mavilidi & Zhong, 2019;Moreno, 2010). ...
Article
Full-text available
In this study, based on quasi-experimental research, was investigated the effects of teaching Python programming language via Blockly tool, which had hybrid interface, on students’ computer programming anxiety, cognitive load level, and achievement. Participants were 90 high school students, 44 of them in experimental group (hybrid interface) and 46 of them in control group (non-hybrid interface). According to results, there was a meaningful difference between programming achievement scores of students in favor of experimental group while there was no difference in terms of computer programming anxiety between groups. Moreover, after 10-week implementation process, students’ anxiety increased in each group. It was found out cognitive load levels of both groups in the first week were higher than final week. Although both weekly and 10-week intrinsic, extraneous, germane, and total cognitive load levels of experimental group were lower than control group, there was no significantly difference between groups. Consequently, it can be said that programming via hybrid interface, using Blockly, has not an effect on students’ computer programming anxiety positively whereas it helps to keep cognitive load at low level and to increase students’ programming success more. It is recommended that considering these results to make computer programming education is more efficient in high schools and administrators encourage the teachers to use programming tool had hybrid interface such as Blockly.
... Cognitive style, along with learning style, has become the essential concepts in adaptive learning. It encompasses the working memory, control and speed of processing, and visual attention [11], [33]. Researchers have identified that the cognitive processing of the human is often related to age, exercise, and experience [5], [16]. ...
Article
Full-text available
Dynamic learning environment has emerged as a powerful platform in a modern e-learning system. The learning situation that constantly changing has forced the learning platform to adapt and personalize its learning resources for students. Evidence suggested that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the literature of APLS, questions have been raised about the role of individual characteristics that are relevant for adaptation. With several options, a new problem has been raised where the attributes of students in APLS often overlap and are not related between studies. Therefore, this study proposed a list of learner model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from the literature selected based on the best criteria. Then, we described the results of important concepts in student modeling and provided definitions and examples of data values that researchers have used. Besides, we also discussed the implementation of the selected learner model in providing adaptation in dynamic learning.
... Designing education-related materials following the principles of CLT and measuring cognitive load values has seen a growing interest in the field of research in recent years. A few of these studies have attempted to examine the application of the cognitive load theory in computer science educationespecially in teaching programming [1,11,12,17]. But despite many conducted studies, the problem of how to measure the cognitive load occurring during learning is still widely discussed [13]. ...
Chapter
Programming as a cognitive activity requires the utilization of various kinds of mental models that involve different cognitive loads while students learn to program. The article discusses the results of an experiment aimed at answering the following question: are eye tracking based measures related to the intrinsic cognitive load (ICL) connected with program comprehension? Thirty one students of computer science took part in the experiment. They analyzed two program codes written in the C++ language to search for (1) logical errors (LER) and (2) syntax errors (SER). ICL was measured by subjective rating of the difficulty of each task. There were significant differences found for the subjective measures of intrinsic load, the effectiveness and the time of tasks performance, and the values of eye tracking parameters: fixation duration average (FDA) and saccade amplitude average (SAA) in two experiment conditions. Longer fixation and shorter saccades were associated with higher ICL. The results obtained suggest that FDA and SAA are eye tracking measures sensitive of intrinsic cognitive load.
Article
This study aims to comparatively determine the experiences of high school students in programming language education via Python editor or Blockly tool. The comparative case study was conducted in this study. The participants consisted of total 19 high school students with no previous experience on any programming language, 9 of them in Python editor group and 10 of them in Blockly tool group. The qualitative data obtained with a semi-structured interview at the end of 10-week programming education process and analyzed by content analysis. The findings was presented in dimensions of programming process, course outcomes, and future programming courses. In each dimension, even if common codes obtained for both groups in some themes, the effects of these codes on students differed in each group. According to results, in the programming process, students faced some difficulties and conveniences in terms of mental effort. Some situations caused the learning anxiety in students, while others did not. The students achieved positive and negative course outcomes. In addition, students' preferences whether or not to attend the future programming courses changed for various reasons. Considering the scarcity of programming education studies via Python editor and Blockly tool, the results and implications of this study will strengthen future research by providing the rich data.
Article
italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">Contribution: This article features a systematic literature review with the objective of presenting a study that reflects the current scenario of research on the cognitive load theory (CLT) in the domain of teaching and learning computer programming. Background: Computer programming is a highly cognitive skill, requiring mastering multiple competencies, and recognized as being difficult to learn, for this reason, the cognitive load (CL) in the learner’s working memory emerged as an influential concept, making CLT one of the most common theories in computing education research. Research Questions: What are the approaches that relate CLT to teaching and learning computer programming? What CLT-related concepts are covered? What evidence is reported with respect to this context? Methodology: Following a formal protocol, a survey was performed for papers linking CLT to teaching and learning programming published between 2010 and 2020. The selection of papers was based on a set of criteria established to drive the selection process, including alignment with the research questions and paper quality assessment. Findings: The approaches applied in the papers are based on measuring the CL; through instructional design based on the development or use of resources or tools, a range of different pedagogy strategies and the CLT concepts. With respect to the concepts, the subjective measurement technique and worked example effect are the most commonly deployed. As far as the evidence reported, the better part is related to the worked example effect and measuring CLs.
Chapter
Cybersecurity education is critical in addressing the global cyber crisis. However, cybersecurity is inherently complex and teaching cyber can lead to cognitive overload among students. Cognitive load includes: 1) intrinsic load (IL- due to inherent difficulty of the topic), 2) extraneous (EL- due to presentation of material), and 3) germane (GL- due to extra effort put in for learning). The challenge is to minimize IL and EL and maximize GL. We propose a model to develop cybersecurity learning materials that incorporate both the Bloom’s taxonomy cognitive framework and the design principles of content segmentation and interactivity. We conducted a randomized control/treatment group study to test the proposed model by measuring cognitive load using two eye-tracking metrics (fixation duration and pupil size) between two cybersecurity learning modalities – 1) segmented and interactive modules, and 2) traditional-without segmentation and interactivity (control). Nineteen computer science majors in a large comprehensive university participated in the study and completed a learning module focused on integer overflow in a popular programming language. Results indicate that students in the treatment group had significantly less IL (p < 0.05), EL (p < 0.05), and GL (p < 0.05) as compared to the control group. The results are promising, and we plan to further the work by focusing on increasing the GL. This has interesting potential in designing learning materials in cybersecurity and other computing areas.
Conference Paper
Full-text available
Cognitive load has been shown, over hundreds of validated studies, to be an important variable for understanding human performance. However, establishing practical, non-contact approaches for automated estimation of cognitive load under real-world conditions is far from a solved problem. Toward the goal of designing such a system, we propose two novel vision-based methods for cognitive load estimation, and evaluate them on a large-scale dataset collected under real-world driving conditions. Cognitive load is defined by which of 3 levels of a validated reference task the observed subject was performing. On this 3-class problem, our best proposed method of using 3D convolutional neural networks achieves 86.1% accuracy at predicting task-induced cognitive load in a sample of 92 subjects from video alone. This work uses the driving context as a training and evaluation dataset, but the trained network is not constrained to the driving environment as it requires no calibration and makes no assumptions about the subject's visual appearance, activity, head pose, scale, and perspective.
Article
Full-text available
Background: Possibility of guessing in Multiple Choice questions (MCQ) when assessing undergraduates is considered a weakness. There are limited studies on the use of "Fill in the Blanks" (FIB) to overcome this issue. Objective: To assess the effectiveness of FIB in MCQ for assessing final year dental undergraduates. Methods and materials: A total of 134 final year dental undergraduates were randomly assigned to Group A and B. Group A was given a questionnaire with fifteen single best answer MCQ questions, and then the FIB questionnaire (which included the same questions in FIB form). At the same time Group B was given the FIB questionnaire initially, and then the MCQ questionnaire in the given period of time. The mean scores of the two groups were then compared. Results: Group A obtained a mean score of 10.94 (SD. ±. 3.203) for MCQ, and 10.48 (SD. ±. 2.993) for FIB, whereas Group B obtained a mean score of 6.8 (SD. ±. 2.949) for FIB and 10.05 (SD. ±. 2.619) for MCQ. There was a statistically significant difference in the mean scores obtained for the two types of tests between Group A (P = .04) and Group B (P = .0001). The difference in the mean scores obtained for the FIB were statistically significant (P = .0001) between the groups, whereas the results were not statistically significant for MCQ (P = .127). Conclusion: MCQ results revealed that the knowledge of the two groups was similar. The differences in the scores obtained for the two types of assessment tools suggest further research is needed to investigate the factors that led to the above observation.
Article
Full-text available
A student's capacity to learn a concept is directly related to how much cognitive load is used to comprehend the material. The central problem identified by Cognitive Load Theory is that learning is impaired when the total amount of processing requirements exceeds the limited capacity of working memory. Instruction can impose three different types of cognitive load on a student's working memory: intrinsic load, extraneous load, and germane load. Since working memory is a fixed size, instructional material should be designed to minimize the extraneous and intrinsic loads in order to increase the amount of memory available for the germane load. This will improve learning. To effectively design instruction to minimize cognitive load we must be able to measure the specific load components for any pedagogical intervention. This paper reports on a study that adapts a previously developed instrument to measure cognitive load. We report on the adaptation of the instrument to a new discipline, introductory computer science, and the results of measuring the cognitive load factors of specific lectures. We discuss the implications for the ability to measure specific cognitive load components and use of the tool in future studies.
Article
This paper discusses cognitive load measurement techniques with regard to their contribution to cognitive load theory (CLT). CLT is concerned with the design of instructional methods that efficiently use people's limited cognitive processing capacity to apply acquired knowledge and skills to new situations (i.e., transfer). CLT is based on a cognitive architecture that consists of a limited working memory with partly independent processing units for visual and auditory information, which interacts with an unlimited long-term memory. These structures and functions of human cognitive architecture have been used to design a variety of novel efficient instructional methods. The associated research has shown that measures of cognitive load can reveal important information for CLT that is not necessarily reflected by traditional performance-based measures. Particularly, the combination of performance and cognitive load measures has been identified to constitute a reliable estimate of the mental efficiency of instructional methods. The discussion of previously used cognitive load measurement techniques and their role in the advancement of CLT is followed by a discussion of aspects of CLT to which measurement of cognitive load is likely to be of benefit. Within the cognitive load framework, we will also discuss some promising new techniques.
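The "combination of performance and cognitive load measures" mentioned above is commonly operationalized as an instructional efficiency score computed from standardized performance and effort values. The formula E = (z_performance - z_effort) / sqrt(2) in the sketch below is background knowledge rather than something stated in the abstract, so treat it as an assumption; the scores are made up for illustration.

import math
from statistics import mean, stdev

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def instructional_efficiency(performance, effort):
    """Per-learner efficiency: high performance with low reported effort gives E > 0."""
    zp, ze = z_scores(performance), z_scores(effort)
    return [(p - e) / math.sqrt(2) for p, e in zip(zp, ze)]

# Example: five learners' test scores and self-reported mental effort ratings (illustrative values).
print(instructional_efficiency([60, 75, 80, 90, 95], [8, 7, 6, 4, 3]))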
Article
Cognitive Load Theory, by John Sweller, Paul Ayres, and Slava Kalyuga. Effective instructional design depends on the close study of human cognitive architecture: the processes and structures that allow people to acquire and use knowledge. Without this background, we might recognize that a teaching strategy is successful, but have no understanding as to why it works, or how it might be improved. Cognitive Load Theory offers a novel, evolutionary-based perspective on the cognitive architecture that informs instructional design. By conceptualizing biological evolution as an information processing system and relating it to human cognitive processes, cognitive load theory bypasses many core assumptions of traditional learning theories. Its focus on the aspects of human cognitive architecture that are relevant to learning and instruction (particularly the functions of long-term and working memory) puts the emphasis on domain-specific rather than general learning, resulting in a clearer understanding of educational design and a basis for more effective instructional methods. Coverage includes:
• The analogy between evolution by natural selection and human cognition.
• Categories of cognitive load and their interactions in learning.
• Strategies for measuring cognitive load.
• Cognitive load effects and how they lead to educational innovation.
• Instructional design principles resulting from cognitive load theory.
Academics, researchers, instructional designers, cognitive and educational psychologists, and students of cognition and education, especially those concerned with education technology, will look to Cognitive Load Theory as a vital addition to their libraries.
Article
The paper presents a formative assessment method for the internal factors, consisting of motivation and learning strategy, of individual students. It enables educators to choose the students on whom to focus their supervision. The method assumes that the internal factors of students are reflected in their learning behavior. The learning behavior is extracted from learning logs that are automatically collected from e-learning sites used to study programming. The internal factors of individual students are identified by decomposing their learning behavior with non-negative matrix factorization. For 48 students of a C programming course at Ritsumeikan University, the method predicted the active students who can overcome difficulties by themselves. Consequently, educators can focus their efforts on students needing care in a timely fashion.
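A minimal sketch of the decomposition step described above, assuming scikit-learn's NMF implementation: a non-negative learner-by-behavior matrix is factored so that each learner is expressed as a mixture of a few latent internal-factor components. The feature columns and the number of components are illustrative, not taken from the paper.

import numpy as np
from sklearn.decomposition import NMF

# Rows: learners; columns: behavior features extracted from e-learning logs
# (e.g. attempt counts, time on task, hint views); all values must be non-negative.
behavior = np.array([
    [12, 340, 2],
    [ 3,  80, 9],
    [ 8, 200, 4],
    [15, 400, 1],
], dtype=float)

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
learner_factors = model.fit_transform(behavior)   # per-learner weights on each latent factor
factor_profiles = model.components_                # how each factor loads on the behavior features

print(learner_factors.round(2))
print(factor_profiles.round(2))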
Book
It is impossible to control another person's motivation. But much of the instructor's job involves stimulating learner motivation, and learning environments should ideally be designed toward this goal. Motivational Design for Learning and Performance introduces readers to the core concepts of motivation and motivational design and applies this knowledge to the design process in a systematic, step-by-step format. The ARCS model, which is theoretically robust, rooted in best practices, and adaptable to a variety of practical uses, forms the basis of this problem-solving approach. Separate chapters cover each component of the model (attention, relevance, confidence, and satisfaction) and offer strategies for promoting each one in learners. From there, the motivational design process is explained in detail, supplemented by real-world examples and ready-to-use worksheets. The methods are applied to traditional and alternative settings, including gifted classes, elementary grades, self-directed learning, and corporate training. The book is geared toward the non-specialist reader, making it accessible to those without a psychology or teaching background. With this guide, the reader learns how to: identify motivation problems and goals; decide whether the environment or the learners need changing; generate attention, relevance, confidence, and satisfaction in learners; integrate motivational design and instructional design; and select, develop, and evaluate motivational materials. A wealth of tables, worksheets, measures, and other valuable tools aids in the design process. Comprehensive and enlightening, Motivational Design for Learning and Performance furnishes an eminently practical body of knowledge to researchers and professionals in performance technology and instructional design, as well as educational psychologists, teachers, and trainers.
Article
According to cognitive load theory, instructions can impose three types of cognitive load on the learner: intrinsic load, extraneous load, and germane load. Proper measurement of the different types of cognitive load can help us understand why the effectiveness and efficiency of learning environments may differ as a function of instructional formats and learner characteristics. In this article, we present a ten-item instrument for the measurement of the three types of cognitive load. Principal component analysis on data from a lecture in statistics for PhD students (n = 56) in psychology and health sciences revealed a three-component solution, consistent with the types of load that the different items were intended to measure. This solution was confirmed by a confirmatory factor analysis of data from three lectures in statistics for different cohorts of bachelor students in the social and health sciences (ns = 171, 136, and 148), and received further support from a randomized experiment with university freshmen in the health sciences (n = 58).
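The analysis style described above can be sketched as follows: principal component analysis on a respondents-by-items matrix for a ten-item questionnaire, checking whether three components emerge. The response data here are random placeholders and the code assumes scikit-learn; the study itself used its own factor-analytic procedures, so this is only an illustration of the idea.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(0, 11, size=(56, 10))   # 56 respondents x 10 items, 0-10 rating scale

pca = PCA(n_components=3)
scores = pca.fit_transform(StandardScaler().fit_transform(responses))

print(pca.explained_variance_ratio_)   # variance captured by the three retained components
print(pca.components_.round(2))        # item loadings; in the study the items grouped by intrinsic, extraneous, and germane load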