Artificial Intelligence-Based Student Learning
Evaluation: A Concept Map-Based Approach for
Analyzing a Student’s Understanding of a Topic
G. Pankaj Jain, Varadraj P. Gurupur, Jennifer L. Schroeder, and Eileen D. Faulkenberry
Abstract—In this paper, we describe a tool coined the artificial intelligence-based student learning evaluation tool (AISLE). The main purpose of this tool is to improve the use of artificial intelligence techniques in evaluating a student's understanding of a particular topic of study using concept maps. Here, we calculate the probability distribution of the concepts identified in the concept map developed by the student. A student's understanding of the topic is then assessed by analyzing the curve of the graph generated by this tool. The technique makes extensive use of XML parsing to perform the required evaluation. The tool was successfully tested with students from two undergraduate courses, and the results of that testing are described in this paper.
Index Terms—Concept maps, evaluation, probability distributions, XML parsers

G.P. Jain is with the Department of Computer Science and Information Systems, Texas A&M University-Commerce, 1816 Hunt St, Commerce, TX 75428. E-mail: pankaj.08jain@gmail.com.
V.P. Gurupur is with the Department of Health Management and Informatics, University of Central Florida, 4000 Central Florida Blvd., Orlando, FL 32816. E-mail: varadrajprabhu@gmail.com.
J.L. Schroeder is with the Department of Psychology and Special Education, Texas A&M University-Commerce, Commerce, TX. E-mail: jennifer.schroeder@tamuc.edu.
E.D. Faulkenberry is with the Department of Mathematics, Texas A&M University-Commerce, Commerce, TX. E-mail: eileen.faulkenberry@tamuc.edu.

Manuscript received 7 May 2013; revised 14 Mar. 2014; accepted 27 May 2014. Date of publication 12 June 2014; date of current version 12 Sept. 2014.
Digital Object Identifier no. 10.1109/TLT.2014.2330297
1 INTRODUCTION
Concept maps, which are visual representations of a
particular topic and its subcomponents, have been
used in multiple settings to teach information. The power of
the concept map lies in the fact that it requires the elucida-
tion of the relationships between the subcomponents of a
particular topic. The effectiveness of using concept maps for
knowledge retention over other forms of summarizing
information has been demonstrated in multiple studies [1]
and in naturalistic settings [2]. In addition, concept maps
can be used as a form of evaluation of student learning [3],
[4]. When a particular topic is taught, concept maps can be
utilized to determine what the student knows about a sub-
ject, rather than using more traditional forms of assessment
such as multiple-choice exams.
We are in the process of developing a tool to evaluate
student learning using concept maps [5], [6]. Here, a student
would be given a topic to learn and build [7] a concept map
based on their understanding of the topic. This tool, coined
as artificial intelligence based student learning evaluation
tool (AISLE), would then evaluate [8], [9] the concept map
and assess if the student has captured enough concepts
from the given topic. This will help the instructor in evaluat-
ing a student’s understanding of the topic.
The objective of this project is as follows:
To develop a tool that understands student psychol-
ogy in terms of the learning process [5], [6] under-
taken by a student using concept maps.
This project can have the following impact on the aca-
demic community:
It will provide a better understanding of the student
learning process, which will have practical curricu-
lum and classroom applications for educational psy-
chologists [10].
The project will provide the school districts in north-
east Texas with a new educational tool to use in their
classrooms.
The research question targeted in this project is as follows:
“Can we use a concept map-based approach in validating
student performance?” While many concept map-based
approaches have been proposed for assessing a student’s
knowledge of a particular topic, AISLE provides the follow-
ing core contribution: "Development of a comparative analysis using probability distribution to compare concept maps developed by students." In this paper, we first discuss related work. We then present the methodology behind AISLE in detail, including its algorithms, worked examples, and the analysis of its input. To conclude, we provide the results of our experiments, a comparison with related tools, and a discussion of the tool's usefulness.
1.1 Related Work
Several investigators working with concept maps have developed assessment systems based on them. We briefly review the most relevant of these systems below.
1.1.1 Intelligent Knowledge Assessment System
The knowledge assessment system presented in [39] by
Lukasenko and Vilkelis provides a structured approach to
assessing a student's knowledge of a particular topic. The
software application associated with this system presents
questions to the user and generates an analysis using the
answers provided to these questions. This system uses a
well defined structured approach in gathering the required
information and performing the required analysis. More-
over, this system provides feedback to the student as well
as the teacher. Some of the key contributions of this system
are: a) Providing necessary feedback to students in restruc-
turing their acquired knowledge, and b) Providing feedback
from the teacher to the student by using the system.
1.1.2 Personalized Assessment System Supporting
Adaptation and Learning (PASS)
The personalized assessment system supporting adaptation
and learning [40] provides an assessment of a student's pres-
ent knowledge and helps identify the knowledge areas that
the student may not have covered. This helps the student
analyze the progress made in the learning process and iden-
tify the areas where more learning may be required. Some
of the key contributions of this system are: a) identification
of prior knowledge of the student; b) diagnosis of concepts
unknown to the student; and c) identification of the growth in a
student’s overall understanding of the topic.
1.1.3 Knowledge Assessment System
The knowledge assessment system presented in [41] makes
use of the concept maps developed by domain experts in
analyzing a student’s understanding of the concepts. It
makes a comparison between these concept maps and the
concept map developed by the student. This approach is
based on the assumption that the concepts identified by the
experts would represent the complete knowledge domain
while the concept map developed by the student would be
somewhat incomplete.
2 METHODOLOGY FOR USING AISLE
As indicated previously, the method the tool uses to evaluate student understanding of topics discussed in class differs from commonly used assessments such as quizzes, oral presentations, and projects. While most instructors measure a student's understanding of the topics discussed in class by personally evaluating the student's work, this tool automates the task of evaluation. It supports a deeper measurement of a student's understanding [11], [12] of the domain under discussion by inspecting the areas of the domain in which the student appears most interested [5], [6]. Fig. 1 provides a brief description of the methodology involved in using our tool. As reflected in Fig. 1,
this methodology involves the following steps:
Students develop concept maps after studying the
prescribed material.
These concept maps are converted into XML-based
documents [13].
Fig. 1. Overview of the methodology used in AISLE.
The XML analyzer module of our tool extracts the
concepts embedded in the XML document [14].
The analysis module makes an assessment of the
importance of the concepts captured by the students
and provides a summary of the results to the user interface module.
The instructor reviews the result in the user interface and makes a judgment about the study carried out by the student.
3 IMPLEMENTATION
3.1 Analysis of Concept Maps
AISLE is a tool that helps instructors analyze a student's understanding of a particular topic. This analysis is based on the concept maps developed by the students for the topic under study. The instructor takes a concept map and runs it through AISLE to receive statistics based on a probability density function [15], [16], [17]. To generate these statistics, AISLE uses the hierarchy of the concepts involved in the concept map. For example, consider the concept map in Fig. 2 and how the statistics can be obtained for it. The analysis is based on a concept map developed by the students, using the IHMC tools [18], for a particular topic under study. We take this concept map, extract the XML document from it, and develop statistics based on probability distribution functions; to obtain the required statistical results, we use the hierarchy of the concepts involved in the map, which is first converted into an XML-based file to provide machine-actability. The following steps provide a brief description of the use of concept maps to develop the system:
In the first step, the concept map is developed by the
instructor, and then is used as reference for evaluat-
ing the concept maps which are developed by the
students.
In the next step, we convert the concept map into an XML-based document.
A suitable parser is used to extract concepts and rela-
tions from the XML file.
Finally, the extracted information is used to develop
the statistics required to derive the necessary
conclusion.
As mentioned before, we convert the concept map into
an XML file. This XML file contains all the required infor-
mation on concepts and their associated relationships. The
linking phrase between any two concepts represents the relation between them; the technique of capturing these relations among different concepts is called "concept mapping" [19]. In concept mapping, we identify the concepts, organize them hierarchically, and differentiate them into levels, where the lowest level represents the most detailed information [14], [19] needed to understand the topic. Fig. 3 gives a preview of this XML file.
From Fig. 3, we observe that each concept is assigned a unique "concept id," which helps the developer programmatically extract the concept name using these "concept ids." Using these XML files, software developers can write programs to extract the "concept id" and its corresponding "label" [20] for every concept present in the concept map.
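For illustration, the following minimal Java sketch uses the standard DOM parser to collect the concept ids and labels. The element and attribute names ("concept", "id", "label") follow the CXL-style structure suggested by Fig. 3; the sketch is illustrative and is not the actual AISLE source code.

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ConceptExtractor {
    // Reads every <concept id="..." label="..."/> entry and maps id -> label.
    public static Map<String, String> extractConcepts(File cxlFile) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(cxlFile);
        Map<String, String> concepts = new LinkedHashMap<>();
        NodeList nodes = doc.getElementsByTagName("concept");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element concept = (Element) nodes.item(i);
            concepts.put(concept.getAttribute("id"), concept.getAttribute("label"));
        }
        return concepts;
    }
}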
The concept list also provides the total number of concepts present in the concept map. These concepts may or may not relate to each other [21], but the interconnections between linked concepts are not captured in this list. That is, we still have to extract the relationships that exist between these concepts in the concept map. A concept may have no relation to other concepts, or at least one relation may exist between two concepts [18], [22]. For example, the concept "War of American Independence" has the relation "includes," which is further linked to the concept "Boston Campaign"; the concept "Boston Campaign" has no further relations.
Hence, the relationships between the concepts play an important role: they determine the actual layout of the concepts and reveal how the student has understood the topic. Therefore, the concept maps developed by the students must be fully verified by the instructor. To capture the relations
Fig. 2. Concept map 1 for instructor.
Fig. 3. XML file representing concepts.
Fig. 4. XML file representing relations.
between the concepts, the XML document contains a
"linking-phrase id," which is assigned to each relation present in the concept map. Fig. 4 describes how each relation between the above concepts is assigned a unique "linking-phrase id." Using this "linking-phrase list," a developer can easily extract the corresponding ids and labels for the relationships that connect the concepts in the concept map. With this, we have the total number of relations
that are present in the concept map. However, the problem
of interconnecting the concepts based on relationships is
not solved. That is, we do not have any clear form of data
that tells us how the concept is interlinked with the other
concept. In order to overcome this problem, the XML file
contains a tag with “connection-id,” “from-id,” and “to-id.”
By using these ids, we can clearly distinguish how these
concepts are interlinked and the relation that joins them
together, with the help of their unique ids, which are
assigned to each individual concept and to each relation
that are present in the concept map. Here, the “connection-
id” is a unique id that represents the linked form of con-
cepts. For example, we have a concept “War of American
Independence” related with “Boston campaign” using the
relation “includes.” Here, two unique connection-ids are
created: one describes the connection from “War of Ameri-
can Independence” to “includes,” and the other describes
the connection from the relation “includes” to the concept
“Boston campaign.” Using this information, a developer
can easily determine, programmatically, how the concepts are interlinked through their relations. Fig. 5 briefly describes how the data is organized using the above-mentioned ids.
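A companion sketch, again assuming CXL-style "connection" elements with "id", "from-id", and "to-id" attributes (and a recent Java version for the record syntax), reads the connection list into a simple list of records; it is illustrative only and not the AISLE implementation.

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ConnectionExtractor {
    // One directed link, either concept -> linking phrase or linking phrase -> concept.
    public record Connection(String id, String fromId, String toId) { }

    // Reads every <connection id="..." from-id="..." to-id="..."/> entry in document order.
    public static List<Connection> extractConnections(File cxlFile) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(cxlFile);
        List<Connection> connections = new ArrayList<>();
        NodeList nodes = doc.getElementsByTagName("connection");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element c = (Element) nodes.item(i);
            connections.add(new Connection(c.getAttribute("id"),
                    c.getAttribute("from-id"), c.getAttribute("to-id")));
        }
        return connections;
    }
}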
From Fig. 5 we can see that the “connection-id,” “from-id,”
and “to-id” comprise a set of issued ids which are used to
form the interconnection between the concepts. Here,
"connection-id" is a unique id that is issued to each connection from a concept to a relation and vice versa. This connection can be described as follows:
The concept "War of American Independence" is assigned a unique id, "1L0BTD6H2-R0QQHM-C4," and is interlinked with the concept "Boston Campaign," which is assigned the unique id "1L0BTFSLC-PK13WW-F7".
These two concepts are interlinked through the relation "includes," which is also assigned a unique id.
To identify the connections between the concepts, the XML file creation process assigns a connection-id to each connection. The XML file records these connections in the form "concept->relation" and "relation->concept". Based on these connections, we can build the hierarchy of concepts implemented in AISLE, as illustrated by the example depicted in Fig. 6.
From Fig. 6, we can see that there are two new and
unique connections created where one id represents the
connection between “1L0BTD6H2-R0QQHM-C4,” which
depicts the concept “War of American Independence,” and
“1L0BTFSLJ-V07P09-FB,” which delineates a relation
“includes”. And the other connection id represents the con-
nection between “1L0BTFSLJ-V07P09-FB,” which is the rela-
tion “includes,” and “1L0BTFSLC-PK13WW-F7,” which
happens to be the concept “Boston Campaign”. From this,
we can clearly identify that the concept “War of American
Independence” is linked with “Boston Campaign” with the
relation “includes”.
Remember that there is always a unique relation id for each relation in the XML file, even though the same relation name may appear more than once in the concept map. The ids "from-id" and "to-id" are used to represent all the connections from concepts to relations and from relations to concepts that form the links from one concept to another in the concept map. Therefore, using this interconnection information, we build the hierarchy of concepts from the connection list. We propose three levels, and a concept can be at any of these levels.
A concept is said to be in Level 0 if the concept-id is
only present in “from-id.”
A concept is said to be in Level 1 if the concept-id is
present in “from-id” and “to-id.”
A concept is said to be in Level 2 if the concept-id is
only present in “to-id.”
Hence, for the above concept map, Fig. 7 shows the concepts placed in the hierarchy used by the AISLE system.
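A minimal Java sketch of this three-level classification is given below. It reuses the hypothetical Connection record from the earlier sketch and assumes that the set of concept ids has already been extracted; ids belonging to linking phrases are simply left unclassified.

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LevelAssigner {
    // Level 0: concept id appears only as a from-id (the root node).
    // Level 1: concept id appears as both a from-id and a to-id.
    // Level 2: concept id appears only as a to-id (a leaf concept).
    public static Map<String, Integer> assignLevels(Set<String> conceptIds,
            List<ConnectionExtractor.Connection> connections) {
        Set<String> fromIds = new HashSet<>();
        Set<String> toIds = new HashSet<>();
        for (ConnectionExtractor.Connection c : connections) {
            fromIds.add(c.fromId());
            toIds.add(c.toId());
        }
        Map<String, Integer> levels = new HashMap<>();
        for (String id : conceptIds) {
            boolean isSource = fromIds.contains(id);
            boolean isTarget = toIds.contains(id);
            if (isSource && isTarget) {
                levels.put(id, 1);
            } else if (isSource) {
                levels.put(id, 0);
            } else if (isTarget) {
                levels.put(id, 2);
            }
        }
        return levels;
    }
}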
4 THEORY AND TECHNIQUES
4.1 Overview of the Technique
The hierarchy of concepts plays an important role in under-
standing the topic represented using concept maps. This
can be explained as follows.
There is always a single root node [18], [20] present in the concept map. This usually represents the name of the topic covered by the map.
Fig. 5. XML file representing connections.
Fig. 6. Linking concepts and relations.
There are concepts that are connected to the root
node and are related to other concepts. These repre-
sent some aspect of the overall topic covered by the
concept map.
We have concepts that are interlinked with other
concepts. These represent in-depth knowledge [9],
[19] about the topic in the concept map.
Hence, there is only one concept that is always present at Level 0, and it is always the root node of the concept map. The concepts after the root node are considered to be at Level 1, and concepts interlinked to these Level 1 concepts are found at Level 2. This hierarchy still holds if there are more concepts beyond Level 2: such concepts are recursively identified as either Level 1 or Level 2 concepts.
An important characteristic of concept maps is that the
students develop them and they tend to be unique. This
means that different students will very rarely have the same
hierarchy or the same name for the concepts. In our process,
each student develops a concept map which is translated
into an XML file [23], [24], [25]. These files also tend to be
unique to avoid plagiarism. Based on the depth of the hierar-
chy of concepts, we have developed a random scoring system that assigns a score to each concept according to the level at which the student has represented it.
A particular concept may be related to another concept
with a relation; this relation is pivotal in identifying the
level of hierarchy involved in the concept map. The more
concepts a student represents in the concept maps with its
relation [26], [27], the more we believe he or she under-
stands the idea about the topic in study. Thus, the number
of concepts identified in a concept map plays a significant
role in identifying the depth of knowledge for that particu-
lar topic conceived by the student. The scoring analysis is
carried out in two steps:
1) First, a random score [15], [16] is given for each con-
cept represented in the concept map.
2) Second, the value of this score depends on the level of the concept in the hierarchy.
Scoring [8], [9] in AISLE means assigning a numeric score to every concept present in the hierarchy of concepts (Fig. 8). The scoring strategy used here is con-
cerned with the total number of concepts existing at a par-
ticular level in the hierarchy. This scoring system is
described as follows:
1) All the concepts at Level 1 are given equal incre-
ments of a random score to each of the concepts.
2) A concept at Level 0, which happens to be the root
node, is given a random score after the scores are
assigned to Level 1 concepts.
3) All the concepts in the Level 2 are given an equal
increment of random scores after assigning the score
to Level 0 concepts (for this particular example).
Remember that the scoring of concepts at each level is random. We apply the algorithm below to all the concepts present in the hierarchy of the AISLE system; a score of 5 is first given to the first concept at Level 1.
Scoring Algorithm:
Score[count(Concept_level)]
Score[0] = 5
for i: 1 to count(Concept_level1) - 1 do
    score[i] = score[i-1] + 5
score[count(Concept_level1)] = 5 + score[count(Concept_level1) - 1]
for i: count(Concept_level1) + 1 to count(Concept_level1) + 1 + count(Concept_level2) do
    score[i] = score[i-1] + 5
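A compact Java reading of this scheme is sketched below (it is not the AISLE source). Because each phase simply continues from the previous score in increments of 5, with the Level 1 concepts scored first, then the Level 0 root, then the Level 2 concepts, the three loops of the pseudocode collapse into a single pass over the score array.

public class ConceptScorer {
    // Assigns incremental scores: Level 1 concepts first (5, 10, 15, ...),
    // then the Level 0 root concept, then the Level 2 concepts, each concept
    // receiving the previous score plus 5.
    public static double[] assignScores(int level1Count, int level2Count) {
        int total = level1Count + 1 + level2Count;   // +1 for the Level 0 root
        double[] score = new double[total];
        score[0] = 5;
        for (int i = 1; i < total; i++) {
            score[i] = score[i - 1] + 5;
        }
        return score;
    }
}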
As mentioned above, we assign a random score to
each concept present in the hierarchy. We would like to
bring to your notice that in the first iteration, the Level 1
concepts have been assigned scores, followed by the root
concept and Level 2 concepts. The score will be higher for all Level 2 concepts than for the Level 0 and Level 1 concepts, because Level 2 concepts signify the depth of the topic. Fig. 8 shows that a random score is given to each concept present in the hierarchy of the concepts in the concept map. Observe that in
Level 1, the concept “War of American Independence”
has been given a score of 5 randomly and then we incre-
mentally assign the score of 10 to “World War I”. After
assigning scores to Level 1 concepts, a score of 30 is pro-
vided to the root concept. Finally, Level 2 concepts are
assigned scores starting from 35. This scoring system,
implemented in AISLE, is based on the structure of the
concept map [18], [19]. We believe that the depth of the hierarchy involved in the concept map reflects the level of the student's understanding of the topic under study.
Fig. 7. Hierarchy of concepts in a concept map.
Fig. 8. Scoring individual concepts in a concept map.
From these random scores assigned as shown above, we
calculate the mean of concept scores for all the concepts and
the standard deviation for these scores using the basic for-
mulas [15], [16].
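A small Java sketch of these two basic formulas follows; the paper does not state whether the population or sample form of the standard deviation is used, so the population form is assumed here.

public class ScoreStatistics {
    // Arithmetic mean of the concept scores.
    public static double mean(double[] scores) {
        double sum = 0;
        for (double s : scores) {
            sum += s;
        }
        return sum / scores.length;
    }

    // Population standard deviation of the concept scores.
    public static double standardDeviation(double[] scores) {
        double m = mean(scores);
        double sumSquares = 0;
        for (double s : scores) {
            sumSquares += (s - m) * (s - m);
        }
        return Math.sqrt(sumSquares / scores.length);
    }
}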
4.2 Using Z-Scores of Concepts to Perform
Analysis
Based on the aforementioned scores, we calculate the
z scores for every concept in the concept map by using the
formula [28], [29], [30] below:
$$z_{\text{concept}} = \frac{\text{Score}_{\text{concept}} - \text{Mean}_{\text{concept scores}}}{\text{StandardDeviation}_{\text{Score}}}, \qquad (1)$$
where $\text{StandardDeviation}_{\text{Score}} \neq 0$.
$z_{\text{concept}}$ is known as the z-value or z-score of the concept. Once the z-score is calculated for every concept in the concept map [5], [6], the random scores [6], [28] are standardized through these z-scores. The need for standardizing the random scores can be explained as follows:
A Standardized value has no units [15], [16].
Standardizing values into z-scores does not change the shape of the distribution [15], [16].
We can easily compare all the concept maps devel-
oped by the students to that of the reference of the
instructor in a standard manner [16].
The properties of standardizing these scores are as
follows:
1) The mean of z-scores is always zero [15].
2) The standard deviation of z-scores is always one
[15], [16].
3) The distribution curve for standardized scores is the
same as non-standardized scores [15], [16].
These standardized values are tabulated as shown in
Table 1.
As depicted in Table 1, a Concept Number is given to every concept in the hierarchy of AISLE. This Concept Number is used to avoid confusion between the names used in two different concept maps. We also observe that the z-score of a concept can be positive, zero, or negative. The significance of the z-score of a concept is as follows:
1) If the z-score of a concept is negative ($z_{\text{concept}} < 0$) [16], the score of that concept lies below the mean of the scores of all concepts.
2) If the z-score of a concept is positive ($z_{\text{concept}} > 0$) [16], the score of that concept lies above the mean of the scores of all concepts.
3) If the z-score of a concept is zero ($z_{\text{concept}} = 0$) [16], the score of that concept equals the mean of the scores of all concepts.
The standard form of the probability distribution function [17], [31] in AISLE is given by the equation
$$P(\text{concept}) = \frac{1}{\text{StandardDeviation}_{\text{Score}}\,\sqrt{2\pi}}\; e^{-\frac{\left(\text{score}_{\text{concept}} - \text{Mean}_{\text{concept scores}}\right)^{2}}{2\left(\text{StandardDeviation}_{\text{Score}}\right)^{2}}}, \qquad (2)$$
where $\text{StandardDeviation}_{\text{Score}} \neq 0$ and $0 < \text{score}_{\text{concept}} < \infty$.
Equation (2) clearly shows that the mean of the scores [15], [16] and the standard deviation of the scores [16] play an important role in obtaining the normal curve for the concepts covered at a particular level of the hierarchy.
Since the scores are standardized with the z-scores of the concepts, the probability density equation for AISLE has to be expressed in terms of the z-score of a concept. The relation between the z-score of a concept and the probability density function for AISLE can be derived as shown below:
We know from Equation (1):
$$z_{\text{concept}} = \frac{\text{Score}_{\text{concept}} - \text{Mean}_{\text{concept scores}}}{\text{StandardDeviation}_{\text{Score}}},$$
$$\text{Score}_{\text{concept}} = z_{\text{concept}}\left(\text{StandardDeviation}_{\text{Score}}\right) + \text{Mean}_{\text{concept scores}}. \qquad (1a)$$
Differentiating Equation (1a) on both sides, we get:
$$d\,\text{Score}_{\text{concept}} = \left(\text{StandardDeviation}_{\text{Score}}\right)\left(dz_{\text{concept}}\right),$$
$$\text{i.e.,}\quad dz_{\text{concept}} = \frac{d\,\text{Score}_{\text{concept}}}{\text{StandardDeviation}_{\text{Score}}}. \qquad (1b)$$
As mentioned, we have standardized the scores by using the z-score of each concept. Hence, the probability distribution for all concepts in the hierarchy must also be a function of these standardized scores. Therefore, by substitution, we get
$$P\left(z_{\text{concept}}\right) = \frac{1}{\sqrt{2\pi}}\; e^{-\frac{z_{\text{concept}}^{2}}{2}}. \qquad (3)$$
The above equation is used in AISLE to calculate the
probability values for each concept present in the hierarchy.
For a single concept map, we calculate the values for each
concept and the results are tabulated as shown in Table 2.
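As an illustration of Equations (1) and (3), the minimal Java sketch below builds on the hypothetical ScoreStatistics helper shown earlier. It returns the raw standard normal density; any scaling applied to the values reported in Table 2 for charting purposes is not reproduced here.

public class ConceptDistribution {
    // z-score of a single concept score, Equation (1); the standard deviation must be nonzero.
    public static double zScore(double score, double mean, double stdDev) {
        return (score - mean) / stdDev;
    }

    // Standard normal density of a z-score, Equation (3).
    public static double probability(double z) {
        return Math.exp(-0.5 * z * z) / Math.sqrt(2 * Math.PI);
    }

    // Computes P(z_concept) for every concept score in a concept map.
    public static double[] probabilities(double[] scores) {
        double mean = ScoreStatistics.mean(scores);
        double stdDev = ScoreStatistics.standardDeviation(scores);
        double[] p = new double[scores.length];
        for (int i = 0; i < scores.length; i++) {
            p[i] = probability(zScore(scores[i], mean, stdDev));
        }
        return p;
    }
}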
Table 2 lists the resulting standard probability distribution values for every concept present in the hierarchy. The concept number is mainly used when evaluating the
TABLE 1
Standardized Values of Z-Scores for Every Concept
in the Concept Map
Concept Number Name of Concept zconcept
1 War of American Independence 1.53
2 World War I 1.26
3 The Civil war 0.98
4 World War II 0.70
5 The French and Indian War 0.42
6 American History 0.12
7 Battle of Gettysburg 0.40
8 Boston Campaign 0.68
9 Bombing of Hamburg 0.95
10 Forbes Expedition 1.23
11 Selective service act 1.51
concept maps drawn by students, ensuring that no complications arise from the concept names when a chart is plotted for all these values. As seen from the table, the root node of the concept map has the highest probability distribution of all concepts in the hierarchy; this is because its score is centered at the mean. Also, the concepts at Level 2 indicate the depth of the topic and, at the same time, the level of understanding of the
topic. When a chart is drawn by taking these probability dis-
tributions on vertical axis and the concept number on hori-
zontal axis, the resultant line chart for all these distributions
is shown in Fig. 9. The above chart shows the standard
probability distribution values for all the concepts present
in the hierarchy. The root concept is exactly at the center of
the mean of all values. The mean of this curve is zero and
the standard deviation of this curve is equal to one. The
number of concepts present for this concept map is 11, and
they are indicated by their concept number. The concept
number on the left side of the curve represents Level 1 Con-
cepts, which are necessary, and the concept numbers on the
right side of curve represent Level 2 concepts, which indi-
cate the depth of the concepts in the concept map.
4.3 Evaluation of Concept Maps
So far, we have shown how the concept maps are used in
AISLE. We would now like to describe the evaluation of
these concept maps developed by students. The standard
probability distribution of the curve is used as a reference
curve to evaluate the concept maps drawn by the students.
The concept map drawn by the students must be verified
and validated by the instructor. To elaborate on the above
discussion, we take two sample concept maps developed by
two students for a homework assignment in American
History. While Fig. 10 illustrates the concept map developed
by Student 1, Fig. 11 depicts the concept map developed by
Student 2. Fig. 11 shows that the student has mentioned
an equal number of concepts in both the levels. We calculate
z-scores for all the concepts present in the hierarchy and
standard probability distributions. Table 3 shows a comparison of the two concept maps developed by the students.
The table clearly shows that the number of concepts in both concept maps is the same. Student 1 has more concepts at Level 1, which indicates that Student 1 has done well in understanding the basic supporting concepts of the topic. However, the Level 2 concepts described by Student 1 are fewer in number compared to Student 2, which indicates that Student 2 has gone deeper into each topic with the necessary supporting concepts. AISLE generates a combined chart of all the values calculated for the concept maps of the instructor and the students; the observed chart is shown below. As may be observed from this chart, the curve for Student 2 deviates towards the instructor's curve, indicating that Student 2 has more Level 2 concepts than Student 1. Hence, we can say that Student 2 has gone deeper into the topic under study. Also observe that Student 1 has done well in having more concepts at Level 1; this is indicated in the chart by Student 1's curve deviating to the left.
TABLE 2
Probability Values of Concepts Listed in Fig. 2
Concept Number Name of the Concept PðzconceptÞ
1 War of American Independence 6.18
2 World war I 10.34
3 The civil war 16.24
4 World war II 23.98
5 The French and Indian war 33.39
6 American History 44.97
7 Battle of Gettysburg 34.31
8 Boston campaign 24.78
9 Bombing of Hamburg 16.87
10 Forbes Expedition 10.81
11 Selective service act 6.49
Fig. 9. Line chart distribution for concept map 1.
Fig. 10. Concept map developed by student 1.
Fig. 11. Concept map developed by student 2.
4.4 Why Not Bar Charts?
Since we now have the probability distribution values for
concept maps of the instructor and students, these values
can also be plotted on a bar chart as described in Fig. 12. As
observed from Fig. 12, we can directly indicate the number
of concepts that are present in the hierarchy. However, to
ease evaluation, we do not prefer bar charts based on the
reasons listed:
1) When the number of concept maps used for evaluation is large, it is difficult to show which student has done well in covering the depth of the topic.
2) A clear analysis of a large number of concept maps is difficult to achieve using bar graphs.
Hence, we prefer line chart distributions for evaluating concept maps.
4.5 Parameters Used to Evaluate Concept Map
in AISLE
To evaluate the concept maps in AISLE, the parameters that
play an important role are as follows:
1) Height of the curve, which represents the standard
probability distribution value where the mean of the
curve is equal to zero.
2) Concept number, which represents the numeric val-
ues assigned to each of the concepts in the hierarchy.
3) Leaning of the curve with the standard curve, which
represents the depth of the topic or supporting
concepts about the topic. Table 4 shows the effect
of parameters explained above in evaluating the
concept maps.
5 ANALYSIS AND RESULTS
5.1 Analysis of Concept Maps for CSCI 428
Using AISLE
Students registered for CSCI 428 (Object Oriented Program-
ming) were requested to develop concept maps based on
the use of design patterns and object oriented design princi-
ples used for one of their homework assignments. While the
class strength was twenty, eight students volunteered to
participate in testing this tool. These developed maps pro-
vided some indication of the depth of their knowledge in
object oriented programming and design in providing a
solution for the assigned problem. Students represented
their different ideas demonstrating their knowledge. We
evaluate these concept maps to observe the depth of a
student’s level of understanding of the topic. We took the
concept maps, ran them on AISLE, and found interesting results. As seen from Fig. 13, Student 1, Student 5, Student 6, and Student 8 have curves of equal height. Student 8 has the highest peak of all the concept maps; therefore, he has done well in identifying the key concepts and has represented his knowledge in depth. Student 5 has not identified the key concepts well for his implementation of the project, as can be seen in Fig. 13. Student 2 and Student 4 have represented fewer concepts and have not represented their information in depth. Student 1 and Student 3 have identified a few
key concepts and have gone deeper in representing the
information for these concepts.
Since the tool provides a better representation of the con-
cepts that are present in the hierarchy of the concept map,
we use this as a measurement technique in evaluating the
concept maps. Table 5 shows the number of concepts present in the hierarchy of each student's concept map.
As seen from Fig. 13, Student 1 and Student 5 have the same height, but they are differentiated by the number of concepts present in their hierarchies. As seen from the curves, Student 5 leans to the left of Student 1. This indicates that Student 1 has identified more key concepts in the hierarchy, as can be seen from Table 5. Similarly, the curves for Student 6, Student 7, and Student 8 lean more towards the right of Student 1. Student 8 has represented deeper knowledge and has done well in representing his idea of the implementation of the project. Student 6 and Student 7 have identified the key concepts at Level 1 but have not gone as deep in representing the information.
5.2 Analysis of Concept Maps for CSCI 359 Using
AISLE
Students who registered for CSCI 359 (Systems Analysis and
Design) were requested to develop concept maps for devel-
oping a project using Scrum. While the class strength was
thirty, nine students volunteered to participate in testing this
tool. Both the courses detailed in the study were taught by
the same instructor. The concept maps developed contained
Product Backlogs and Sprint Backlogs as perceived by the students. This assignment focused on the depth and
detail of a student’s comprehension of the project.
TABLE 3
Concept Map Comparison for Students
Parameter Student 1 Student 2
Number of concepts in the hierarchy of concept map 7 7
Number of Level 1 concepts in the hierarchy 4 3
Number of Level 2 concepts in the hierarchy 2 3
Fig. 12. Bar chart depicting standard probability distribution.
For Student 2's concept map, the height of the curve is greater than for the other concept maps. This indicates that concept map 2 is well structured and that the student has elaborated his ideas on the implementation of the project; he has a different perspective on identifying the key concepts regarding its implementation. For Student 3, the height of the curve is lower. This indicates that he has simpler ideas and represents only the interrelations among the key concepts he has identified for the project. Overall, he shows less understanding of the project ideas and of how to represent them. As the tool gives a better representation of the concepts present in the hierarchy of the concept map, we use this as a parameter for evaluating the concept maps.
As seen from Table 6, we can identify that Student 7 has
the highest number of concepts at Level 2. This indicates
that he has done well in representing his total ideas in
depth. Student 2, Student 5 and Student 8 have an equal
number of concepts at Level 2, but Student 8 has represented more elaborate knowledge through Level 1 concepts. Student 5 has done better in representing his domain knowledge, while Student 2 has gone deeper into his ideas for implementing his own project.
Student 1 and Student 9 have an equal number of con-
cepts in the hierarchy, but Student 1 has a deeper represen-
tation of knowledge when compared to Student 9 which
can be observed from Fig. 14.
The concept map submitted by Student 4 identifies fewer concepts in the hierarchy. This indicates that the student has represented a simple concept map with some basic ideas for implementing his project and without much in-depth knowledge.
5.3 Observation on the Use of AISLE
for Classrooms
Based on the experimentation explained in Sections 5.1 and 5.2, we have made the following observations:
Use of AISLE considerably reduces the time the instructor spends in assessing a student's understanding of a topic of study.
It allows for the comparison of multiple students' understanding of a given topic, such that the
Parameters Used to Evaluate Concept Maps in Aisle
Parameters Explanation
Height of curve (Highest standard probability
distribution value)
The greater the height of the curve when compared
with the standard curve, the more will be the
understanding about the topic the concept map represents.
Concept Number in the hierachy The concept map is verified by the instructor.
The more the concepts in the hierarchy,
the more depth about the topic concept map represents.
Leaning of curve with reference to standard curve The more leaning of curve to left side of the
standard curve, the more information the
concept map represents.
Fig. 13. Probability distribution curves for concept maps developed for CSCI 428.
instructor can assess how much variability there is
between students.
Provides an overall picture on the depth of a
student’s understanding of the topic.
The technique used for assessment works well when
the concept map developed by the student involves
a good use of hierarchy. However, more sophisti-
cated methods may be required to assess concept
maps that do not involve hierarchy in representing
information using concept maps.
The experiment indicated that all students whose curves leaned to the right made excellent scores on the related course item, while a few students whose curves leaned to the left made scores below the class average.
The limitations of our current approach include:
The method used to assess concept maps does not work very well when the concept maps submitted by the students are not hierarchical in nature.
The validation of the concepts contained in the con-
cept maps has to be done manually by the instructor.
5.4 Comparison With Other Related Methods
The concept map-based intelligent knowledge assessment
system presented by Lukasenko and Vilkelis [39] and the Personalized Assessment System developed by Gouli et al. [40] attempt to individually analyze the depth of a student's knowledge by allowing students to add concepts and
relations to the concept maps developed by the instructor.
These systems emphasize the assessment of an individual student. However, it may sometimes be required
that a comparative assessment of students’ knowledge be
carried out to ascertain the efficiency of the process
involved in instruction. For example, Student 1 may score
78 and Student 2 may score 98 on a scale of 100 on a test
given by the instructor. However, many other analysis
methods may indicate that Student 1 has a better under-
standing of the topic than Student 2. This situation indicates the need for an alternative method to compare
the students in a course. The other issue associated with a
system that assesses and compares student knowledge
using concept maps would be the choice of method used to
carry out that assessment. Annohina-Naumeca and Milasevicha [41] present a scoring technique for concept maps termed the "Knowledge Assessment System." In this system, they provide a broad theoretical framework for assessing concept maps based on concept-link-concept triples. This approach exploits the semantic relationships between concepts using XML. Zapata-Rivera [42] was among the first to exploit semantic relationships using XML for developing
TABLE 6
Number of Concepts in the AISLE Hierarchy (CSCI 359)
S. No Concepts in Level 1 Concepts in Level 2
Student 1 15 13
Student 2 19 16
Student 3 3 3
Student 4 6 4
Student 5 28 16
Student 6 39 13
Student 7 33 17
Student 8 47 16
Student 9 16 12
TABLE 5
Number of Concepts in the AISLE Hierarchy (CSCI 428)
S. No Concepts in Level 1 Concepts in Level 2
Student 1 14 11
Student 2 8 6
Student 3 15 8
Student 4 16 3
Student 5 2 2
Student 6 33 11
Student 7 21 14
Student 8 30 21
Fig. 14. Probability distribution curves for concept maps developed for CSCI 359.
educational systems. Tables 7, 8, and 9 provide a comparison between AISLE and the aforementioned systems.
5.5 Teacher’s Reaction to AISLE
As indicated before, AISLE was tested with two undergrad-
uate classes in computer science. The instructor for these
courses requested that the students who volunteered for the project develop concept maps based on a homework question. The following observations were made by the
instructor:
The scores the students made on the regular home-
work question somewhat reflected the observation
made on the graph generated from AISLE. For exam-
ple, a student who scored well on the homework
question also had a curve leaning towards the right.
The instructor also identified that AISLE could be
used to validate the testing and grading procedures
carried out in the course.
6CONCLUSION AND PLANNED FUTURE WORK
The implementation of AISLE can be extended by using
Markov Chains. We are currently in the process of devel-
oping Markov Chain models by extracting all the
required information from the concept maps developed
by students. This information includes concepts, relations
between concepts, and the scores [9], [13] that are given to each concept in the concept map.
Markov Chain models have been used extensively for
decision making [32], [37], and we use these models to
predict the understanding of the concept maps devel-
oped by the students. These models use the transition
probabilities [33] where all concepts undergo transitions
from one state [32], [34] to another between a finite or
countable number of all possible states [34]. The pro-
posed model is applied and evaluated by developing a Transition State Matrix (TSM) [35] and using the Markov Chain Monte Carlo (MCMC) [36] simulation technique, where the next transition depends on the current state but not on the preceding steps.
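As a generic illustration of the planned extension, and not of the AISLE model itself, the sketch below performs one step of a Markov chain by multiplying a probability vector by a transition state matrix whose rows each sum to one.

public class MarkovStep {
    // One transition of a Markov chain: entry tsm[i][j] is the probability of
    // moving from state i to state j, so the next distribution is p' = p * TSM.
    public static double[] step(double[] stateProbabilities, double[][] tsm) {
        double[] next = new double[stateProbabilities.length];
        for (int j = 0; j < next.length; j++) {
            for (int i = 0; i < stateProbabilities.length; i++) {
                next[j] += stateProbabilities[i] * tsm[i][j];
            }
        }
        return next;
    }
}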
In this paper, we have described our tool for the pur-
pose of identifying the level of a student’s understanding
of a particular topic using concept maps. AISLE has been
developed using Java and XML parsers associated with it
to extract the necessary information from the concept
maps [38]. As mentioned in the previous section, we are
now in the process of advancing the assessment tech-
nique incorporated in AISLE by experimenting with
other methods such as Markov Chains. Overall, we
believe that our tool and the method associated with it
will be useful to instructors in assessing their ability to induce a good understanding of topics and in improving their teaching methods.
REFERENCES
[1] J. C. Nesbit and O. O. Adesope, "Learning with concept and knowledge maps: A meta-analysis," Rev. Edu. Res., vol. 76, no. 3, pp. 413–448, 2006.
[2] B. J. Daley, A. J. Canas, and T. Stark-Schweitzer, "CmapTools: Integrating teaching, learning, and evaluation in online courses," in New Directions for Adult and Continuing Education, no. 113. New York, NY, USA: Springer, 2007.
TABLE 8
Comparison Between AISLE and the Personalized Assessment System Supporting Adaptation and Learning [40]
1) AISLE: Validation of concept maps is left to the instructor. PASS: Supports investigation of unknown concepts and misbeliefs [40].
2) AISLE: Focus is on an individual's relative understanding of the topic with respect to other students. PASS: Provides focus on an individual's understanding of a particular topic.
TABLE 7
Comparison Between AISLE and the Intelligent Knowledge Assessment System
1) AISLE: Compares concept maps to assess students' understanding of a topic. Intelligent Knowledge Assessment System: Assesses an individual concept map to analyze the depth of a student's understanding of the topic [39].
2) AISLE: Can be used to validate the process used for testing and grading the students. Intelligent Knowledge Assessment System: Focused more on identifying an individual's understanding of the topic.
TABLE 9
Comparison Between AISLE and the Knowledge Assessment System
1) AISLE: Concept maps are developed by the students from scratch. Knowledge Assessment System: Allows students to add to an existing concept map [41].
2) AISLE: Uses a definitive scoring system, based on a probability distribution, to assess the strength of a concept map. Knowledge Assessment System: The scoring system is more or less theoretical in nature.
[3] J. W. Berry and S. L. Chew, “Improving learning through inter-
ventions of student generated questions and concept maps,”
Teaching Psychol., vol. 35, pp. 305–312, 2008.
[4] H. DC West, J. R. Pomeroy, J. K. Park, E. A. Gerstenberger, and J.
Sandoval,“Critical thinking in graduate medical education—A
role for concept mapping assessment?” JAMA-J. Amer. Med.
Assoc., vol. 284, pp. 1105–1110, 2000.
[5] S. C. Lin, “A new structural knowledge based on weighted con-
cept maps,” Comput. Edu., vol. 1, pp. 679–680, Dec. 2002.
[6] R. Castles, “Knowledge maps and their applications to students
and faculty assessment,” in Proc. Frontiers Edu. Conf., Oct. 2008,
pp. S4A–9–S4A-14.
[7] A. R. Najdawi and N. Ghatasha,” Using concept mapping tools to
enhance collaborative enhancement solving and innovation in
corporate e-learning,” in Proc. Interactive Mobile Comput. Aided
Learn., 2012, pp. 197–199.
[8] C. T. Calafate, J. C. Cano, and P. Manzoni, “Improving the evalua-
tion of concept maps: A step by step analysis”, in Proc. EAEEIE
Annu. Conf., 2009, pp. 1–6.
[9] R. Siddharth, “A tool supporting concept map evaluation and
scoring,” Published master’s thesis, Dept. Comput. Sci., Umea
Univ., Umea, Sweden, 2010.
[10] W. Wang and D. Yisheng, “Teaching thinking and experiment of
database system course,” ZheShe edition, 2003, vol. 2, pp. 23–28.
[11] S. M. Zvacek, M. T. Restivo, and M. F. Chouzal, “Visualizing
understanding with concept maps,” in Proc. 15th Int. Conf. Interac-
tive Collaborative Learn., 2012, pp. 1–5.
[12] D. L. Darmofal, D. H. Soderholm, and D.R. Brodeur, “Using con-
cept maps and concept questions to enhance conceptual under-
standing,” in Proc. IEEE 32nd Annu. Frontiers Edu., 2002, vol. 6,
pp. T3A–1–T3A-6.
[13] K. Jihong, and L.Wen, “The comparison research of concept map
tools,” in Proc. Int. Conf. E-Bus. E-Government, 2011, pp. 1–4.
[14] S. Sea Chong and S. Ng kang, “A file-based implementation of
XML encryption,” in Proc. 5th Malaysian Conf. Softw. Eng., 2011,
pp. 418–422.
[15] A. L. Douglas, G. M. William, and A. L. Samuel, Statistical Techni-
ques in Business and Economics, 15 ed. New York, NY, USA:
McGraw Hill, 2012.
[16] A. Papoulis, Probability Random Variables. New York, NY, USA:
McGraw Hill, 1965.
[17] M. Simon M, “On the probability density function of squared
envelope of sum of random phase vectors,” IEEE Trans. Commun.,
vol. TC-33, no. 9, pp. 993–996, Sep. 1985.
[18] Joseph D. Novak, and Alberto j. Canas. “The theory underlying
concept maps and how to construct and use them,” Inst. Human
Mach. Cognition, Pensacola Fl, USA, Tech. Rep. IHMC Cmap
Tools 2006-01 Rev 01-2008, 2006.
[19] H. Funaoi, E. Yamaguchi, and S. Inagaki, “ Collaborative concept
mapping software to reconstruct learning processes,” Comput.
Edu., vol. 1, pp. 306–310, 2002.
[20] N. Derbensteva, F. Safayeni, and A. J. Canas, “Experiments on the
effect of map structure and concept quantification during concept
map construction,” presented at the 1st Int. Conf. Concept Map-
ping, Pamplona, Spain, 2004.
[21] V. P. Gurupur and R. S. Sadasivam, “Representing process s con-
cepts: Towards reducing semantic gap,” in Proc. 12th SDPS Trans-
disciplinary Conf. Workshop Integrated Syst., Des. Process Sci., 2009,
pp. 173–180.
[22] D. J. Novak and D. B. Gowin, Learning How to Learn. New York,
NY, USA: Cambridge Univ. Press, 1984.
[23] Y. Zhou, “A run time adaptive and code size efficient XML
parser,” in Proc. 30th Annu. Conf. Comput. Softw. Appl., Sep. 2006,
pp. 18–21.
[24] C. K. A. Rangan, “A generic parser to parse reconfigure XML
files,” in Proc. Recent Adv. Intell. Comput. Syst., Sep. 2011,
pp. 823–827.
[25] H. Zhang, “Schemas extraction for XML documents by XML ele-
ment sequence patterns,” in Proc. 1st Int. Conf. Inf. Sci. Eng., Dec.
2009, pp. 5096–5099.
[26] T. Goldsmith, P. Johnson, and W. Action, “Assessing structural
knowledge,” J. Educ. Psychol., vol. 83, pp. 88–96, 1991.
[27] C. C. Liu, P. H. Don, and C. M. Tsai, “Assessment bases on linkage
patterns in concept maps,” J. Inf. Sci. Eng.. vol. 21, pp. 873–890,
2005.
[28] Y. Zhongyum, “Optimization model of credit asset portfolio based
on z-score,” in Proc. Int. Conf. Manage. Sci. Ind. Eng., Jan. 2011,
pp. 112–115.
[29] C. Graff,” Z-score transformation of T-wave morphology values to
a standardized scale,” in Proc. Comput. Cardiol., Sep. 2011, pp. 737–
740.
[30] J. McQueen,” Some methods for classification and analysis of mul-
tivariate observations,” in Proc. 5th Symp. Math. Statist. Prob., Sep.
1967, pp. 281–297.
[31] D. A. Hill, “Probability density function of power received in a
reverberation chamber,” IEEE Trans. Electromagn. Compat., vol. 50,
no. 4, p. 1019, Nov. 2008.
[32] J. Raviv, “Decision making in Markov chains applied to the prob-
lem of pattern recognition,” Inf. Theory, vol. 13, no. 4, pp. 536–551,
1967.
[33] M. Sato, K. Abe, and H. Takeda, “Learning control of finite Mar-
kov chains with unknown transition probabilities,” Automat.
Control, vol. 27, no. 2, pp. 502–505, 1982.
[34] F. O. Hocaoglu, O.N. Gerek, and M. Kurban, “The effect of Mar-
kov chain state size for synthetic speed wind generation,” in Proc.
10th Int. Conf. Probabilistic Methods Appl. Power Syst., 2008, pp. 1–4.
[35] W. Yong, H. Xueshan, and D. Ying, “Power system operational
reliability equivalent modeling and analysis based on the Markov
chain,” in Proc. IEEE Int. Conf. Power Syst. Technol., 2012, pp. 1–5.
[36] M. Perninge and L. Soder, “Analysis of transfer capability by Mar-
kov chain Monte Carlo simulation,” in Proc. IEEE Int. Conf. Power
Energy, 2010, pp. 232–237.
[37] J. G. Amit, R. Rod, W. John, B. Gautam, and S. Daniel. (2013, Feb.
7). Using hidden Markov models to characterize student behav-
iors in learning-by-teaching environments [Online]. Available:
http://esdi.us/research/models/TA-markov-models-2008.pdf
[38] G. P. Jain, V. P. Gurupur, and E. D. Faulkenberry, “Artificial intel-
ligence based student learn. Evaluation tool,”in Proc. IEEE Global
Eng. Conf., 2013, pp. 751–756.
[39] R. Lukasenko and M. Vilkelis, “Feedback in the concept map-
based intelligent knowledge assessment system,” Sci. J. Riga Tech.
Univ., vol. 41, pp. 17–26, 2010.
[40] E. Gouli, A. Gagoulou, and M. Grigoriadou, “A coherent and inte-
grated framework using concept maps for various educational
assessment functions,” J. Inf. Technol. Edu., vol. 2, pp. 215–240,
2003.
[41] A. Annohina-Naumeca and S. Milasevicha, “Studying possibili-
ties to use several experts’ maps in the concept map based knowl-
edge assessment system,” in Proc. Int. Conf. Comput. Syst. Technol.,
2011, pp. 534–539.
[42] J. D. Zapata-Rivera, "Supporting negotiated assessment using open student models," in Proc. 8th Int. Conf. UM, Jul. 2001, pp. 282–294.
G. Pankaj Jain received the bachelor’s degree
in electrical and electronics engineering from
Jawaharlal Nehru Technological University in
2010. He was a graduate research assistant
in the Department of Computer Science and Information Systems at Texas A&M University-Commerce.
Varadraj P. Gurupur received the PhD degree in computer engineering from the University of Alabama at Birmingham in 2010. He is currently
an assistant professor in the Department of
Health Management and Informatics, University
of Central Florida, Orlando, FL. He was previ-
ously with the Department of Computer Science
and Information Systems at Texas A&M Univer-
sity-Commerce.
Jennifer L. Schroeder received the PhD degree
specializing in Educational Psychology from
University of Wisconsin—Madison in 2002. She
is currently an associate professor and an interim
chair in the Department of Psychology and
Special Education, Texas A&M University–
Commerce.
Eileen D. Faulkenberry received the PhD
degree from Oklahoma State University in the
area of mathematics education. She is currently
an associate professor in the Department of
Mathematics, Texas A&M University-Commerce.
She is a member of the National Council of Teachers of Mathematics.