A conversational intelligent tutoring system to automatically predict learning styles
Annabel Latham (a), Keeley Crockett (a), David McLean (a), Bruce Edmonds (b)
(a) The Intelligent Systems Group, School of Computing, Mathematics and Digital Technology
(b) The Centre for Policy Modelling
The Manchester Metropolitan University, Chester Street, Manchester M1 5GD, UK
Email: a.latham@mmu.ac.uk, k.crockett@mmu.ac.uk, d.mclean@mmu.ac.uk, b.edmonds@mmu.ac.uk
Abstract
This paper proposes a generic methodology and architecture for developing a novel conversational intelligent
tutoring system (CITS) called Oscar that leads a tutoring conversation and dynamically predicts and adapts to a
student’s learning style. Oscar aims to mimic a human tutor by implicitly modelling the learning style during
tutoring, and personalising the tutorial to boost confidence and improve the effectiveness of the learning
experience. Learners can intuitively explore and discuss topics in natural language, helping to establish a deeper
understanding of the topic. The Oscar CITS methodology and architecture are independent of the learning styles
model and tutoring subject domain. Oscar CITS was implemented using the Index of Learning Styles (ILS)
model (Felder & Silverman 1988) to deliver an SQL tutorial. Empirical studies involving real students have
validated the prediction of learning styles in a real-world teaching/learning environment. The results showed
that all learning styles in the ILS model were successfully predicted from a natural language tutoring
conversation, with an accuracy of 61-100%. Participants also found Oscar’s tutoring helpful and achieved an
average learning gain of 13%.
Keywords:
Architectures for educational technology system
Human-computer interface
Intelligent tutoring systems
Interactive learning environments
Teaching/learning strategies
1. Introduction
The widespread use of computers and access to the Internet have created many opportunities for online education,
such as improving distance-learning and classroom support. Intelligent Tutoring Systems (ITS) extend
traditional content-delivery computerised learning systems by adding intelligence to improve the effectiveness
of a learner’s experience (Brusilovsky & Peylo 2003). This normally involves personalising tutoring using
factors such as learner knowledge, emotion or learning style to alter the sequence and style of learning material.
Most ITS are hyperlink menu based (Cha, Kim, Park, Yoon, Jung & Lee 2006; Klasnja-Milicevic, Vesin,
Ivanovic & Budimac 2011; Popescu 2010; Wang, Wang & Huang 2008) and adapt the tutoring by reordering
menu items (Garcia, Amandi, Schiaffino & Campo 2007), allowing learners to manage their own study at a time
and place to suit them. This experience has more in common with computerised textbooks than classroom
tutorials, where human tutors direct the learning. An extension of ITS is Conversational Intelligent Tutoring
Systems (CITS), which integrate natural language interfaces rather than menus, allowing learners to explore
topics through discussion and to construct knowledge as they would in the classroom. However, it is a complex
and time-consuming task to develop a CITS which can converse naturally. Consequently, only a few CITS exist
at present (D’Mello, Lehman, Sullins, Daigle, Combs, Vogt et al 2010; Arnott, Hastings & Allbritton 2008;
Sarrafzadeh, Alexander, Dadgostar, Fan & Bigdeli 2008).
A CITS that can imitate a human tutor by leading an adaptive tutorial conversation uses a familiar format which
can help improve learner confidence and motivation, leading to a better learning experience. Human tutors adapt
their tutoring style and content based on cues they pick up from learners, such as their level of understanding
and learning style. Learning styles model the way groups of people prefer to learn (Felder & Silverman 1988;
Hsieh, Jang & Hwang 2011), for example by active experimentation or by observation. Some ITS adapt tutoring
to an individual’s learning style, either determined using a formal questionnaire (Papanikolaou, Grigoriadou,
Kornilakis & Magoulas 2003) or by analysing learner behaviour (Kelly & Tangney 2006). However, there are
no tutor-led CITS that can predict and adapt to learning style during the tutoring session like a human tutor.
This paper describes the architecture and methodology for creating a novel CITS called Oscar that dynamically
predicts and adapts to an individual’s preferred learning style during a tutorial conversation. The aim of the
research was to imitate a human tutor by using knowledge of learning styles and learner behaviour to predict
learning style, rather than relying on an interface specifically designed to capture learning styles, as in Cha et al (2006).
Whilst this considerably increases the complexity of predicting learning styles, conversational interfaces are
intuitive to use and the discussion of problems can prompt a deeper understanding of topics. This paper also
describes a series of experiments that aim to determine if it is possible to predict learning style from a learner’s
behaviour during a tutorial conversation, and thus validate the proposed methodology and architecture.
In this paper, section 2 introduces some background and related work of intelligent tutoring systems,
conversational agents and the Index of Learning Styles (Felder & Silverman 1988). Section 3 introduces the
Oscar CITS, and Sections 4 and 5 describe a generic methodology and architecture for creating an Oscar CITS.
Section 6 describes the implementation of Oscar CITS and the real-world experiments conducted to investigate
the prediction of learning styles from a natural language tutoring dialogue. Section 7 presents the experimental
results and discussion and Section 8 outlines the conclusions and future work.
2. Related work
2.1. Intelligent tutoring systems
Computerised learning systems were traditionally information-delivery systems developed by converting tutor
or distance-learning material into a computerised format (Brooks, Greer, Melis & Ullrich 2006). The popularity
of the Internet has enhanced the opportunities for e-learning; however, most online systems are still teacher-
centred and take little account of individual learner needs (Spallek 2003). Within the field of computerised
learning systems, adaptive educational systems attempt to meet the needs of different students by offering
individualised learning (Brusilovsky & Peylo 2003). Intelligent Tutoring Systems (ITS) are adaptive systems
which use intelligent technologies to personalise learning according to individual student characteristics, such as
knowledge of the subject, mood and emotion (D’Mello et al. 2010) and learning style (Yannibelli, Godoy &
Amandi 2006).
There are three main approaches to intelligent tutoring (Brusilovsky & Peylo 2003):
Curriculum sequencing introduces adaptation by presenting students with learning material in a sequence and
style best suited to their needs (Klasnja-Milicevic et al 2011).
Intelligent solution analysis adds intelligence to ITS by giving students detailed feedback on incomplete or
erroneous solutions, helping them learn from their mistakes (Mitrovic 2003).
Problem solving support techniques offer learners intelligent assistance to reach a solution (Melis, Andres,
Budenbender, Frishauf, Goguadse, Libbrecht et al 2001).
Curriculum sequencing is the most widely used technique (Brusilovsky & Peylo 2003). Traditionally, ITS
adapt to existing student knowledge but more recently learner affect factors have been incorporated, such as
emotion (Ammar, Neji, Alimi & Gouarderes 2010), personality (Leontidis & Halatsis 2009) and learning style
(Popescu 2010). Few ITS incorporate all three techniques as they are complex and time-consuming to develop,
but the Oscar CITS presented in this paper will incorporate all three intelligent technologies by personalising
learning material and discussing problems and solutions with students.
ITS are normally menu or hyperlink based, with choices re-ordered or ranked to recommend a particular
sequence to learners (Klasnja-Milicevic et al 2011; Garcia et al 2007). Whilst this design simplifies the capture
of learner behaviour and choices, learners are really being assisted in self-learning rather than tutored, which is
little different from recommending chapters of a book. CITS address this issue by employing natural language
interfaces whose intuitive, dialogue-based tutoring is more comparable to classroom tutorials (Chi, Siler, Jeong,
Yamauchi & Hausmann 2001; D’Mello et al 2010; Sarrafzadeh et al 2008). However, despite their more
instinctive, teacher-led learning experience (which supports the construction of knowledge adopted by human
tutors), it is difficult to automate natural conversations and so CITS are uncommon (D’Mello et al 2010;
Woo, Evens, Freedman, Glass, Seop Shim, Zhang et al 2006; Sarrafzadeh et al 2008).
ITS that adapt to learning styles capture them using a formal questionnaire (Papanikolaou et al 2003) or by
analysing learner behaviour (Cha et al 2006; Garcia et al 2007). Whilst questionnaires are the simplest measure
of learning styles, learners find them onerous and may not give enough time or attention to complete them
accurately (Yannibelli, Godoy & Amandi 2006). Implicitly modelling learning styles by analysing learner
behaviour history (Garcia et al 2007) removes the requirement for a questionnaire, but delays adaptation until
several modules have been completed. Also, this method does not incorporate changes in learning style which
may occur over time or for different topics. EDUCE (Kelly & Tangney 2006) and WELSA (Popescu 2010) both
estimate learning style dynamically for curriculum sequencing, but do not include a conversational interface or
other intelligent tutoring technologies. The Oscar CITS will dynamically predict learning style throughout the
tutoring conversation and adapt its intelligent tutoring style to suit the learning style predicted.
2.2. Conversational agents
Conversational agents (CAs) are computer programs which allow people to communicate with computer
systems using natural language (O’Shea, Bandar & Crockett 2011). CA interfaces are intuitive to use, and have
been used effectively in many applications, such as web-based guidance (Latham, Crockett & Bandar 2010),
database interfaces (Pudner, Crockett & Bandar 2007) and tutoring (D’Mello et al 2010). CAs can add natural
dialogue to ITS, but are used infrequently as they are complex and time-consuming to develop, requiring
expertise in the scripting of dialogues (O’Shea, Bandar & Crockett 2011). ITS which aim to mimic a human
tutor (such as Oscar CITS) need CA interfaces to support the construction of knowledge through discussion (Chi
et al 2001).
Textual CAs usually adopt a pattern-matching (Michie 2001) or semantic-based (Li, Bandar, McLean & O’Shea
2004; Khoury, Karray & Kamel 2008) approach. Semantic-based CAs seek to understand the meaning of the
natural language whereas pattern-matching CAs use an algorithm to match key words and phrases from the
input to a set of pattern-based rules (Pudner, Crockett & Bandar 2007). As pattern matching CAs match key
words within an utterance, they do not require grammatically correct or complete input. However, there are
usually numerous patterns in a given context (Sammut 2001), leading to many hundreds of rules in the CA’s
knowledge base, which demonstrates the complexity and time required to script rules for a pattern-matching
CA. A pattern matching CA was adopted for Oscar CITS as it must cope with grammatically incomplete or
incorrect utterances that are commonly found in text-based chat by students.
2.3. Index of learning styles
The Index of Learning Styles (ILS) model (Felder & Silverman 1988) describes the teaching and learning styles
in engineering education. The ILS model represents an individual’s learning style as points along four
dimensions that indicate both the strength and the nature of their learning style preference. Each learning style
dimension relates to a step in the process of receiving and processing information, as illustrated in Fig. 1. The
ILS is assessed using a 44-question forced-choice questionnaire (11 questions per learning style dimension) that
assigns a style and score for each dimension.
Fig. 1. ILS dimensions.
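To make the ILS scoring concrete, the sketch below assumes the standard ILS convention of tallying the 11 forced-choice 'a'/'b' answers per dimension, with the majority choice giving the style and the difference giving its strength; the function and encoding are illustrative, not part of Oscar CITS or the official instrument.

```python
# Illustrative ILS-style scoring sketch (our own encoding, for clarity only).
def score_dimension(answers, style_a, style_b):
    """Score one ILS dimension from its 11 forced-choice 'a'/'b' answers."""
    a_count = answers.count('a')
    b_count = answers.count('b')
    if a_count > b_count:
        return style_a, a_count - b_count   # strength is an odd value in 1..11
    return style_b, b_count - a_count

# Example: 8 'a' answers out of 11 on the processing dimension.
style, strength = score_dimension(['a'] * 8 + ['b'] * 3, 'Active', 'Reflective')
print(style, strength)  # Active 5
```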
In addition to the formal assessment questionnaire, the ILS model describes typical learner behaviours that can
be used to informally group types of learners. The ILS model was adopted when implementing the Oscar CITS
as engineering students make up the initial experimental groups. However, the Oscar CITS is generic and its
flexible modular structure does not restrict the choice of learning styles model to the ILS.
3. Oscar CITS
The Oscar CITS is a novel conversational intelligent tutoring system which dynamically predicts a student’s
learning style during a tutoring conversation, and adapts its tutoring style appropriately. Oscar’s pedagogical
aim is to provide the learner with the most appropriate learning material for their learning style to promote a
more effective learning experience and a deeper understanding of the topic. Rather than being designed with the
purpose of picking up learning styles (such as Cha et al 2006) the Oscar CITS aims to imitate a human tutor by
leading a two-way discussion and using cues from the student’s dialogue and behaviour to predict and adapt to
their learning style. Oscar CITS incorporates intelligent technologies to sequence the curriculum according to
learner knowledge and learning style, intelligently analyse solutions and give hints to assist learners in
constructing knowledge. Oscar’s natural language interface and tutoring style are modelled on
classroom tutorials (Crown copyright 2004), enabling learners to draw on their experience of face-to-face
tutoring to feel more comfortable and confident in using the CITS. Oscar CITS is an online personal tutor which
can answer questions, provide hints and assistance using natural dialogue, and which favours learning material
to suit each individual’s learning style. The Oscar CITS offers 24-hour personalised learning support at a fixed
cost.
General descriptions of Oscar CITS, including its implementation, example learner dialogue and the results of
initial studies in predicting learning styles, have been reported in Latham, Crockett, McLean, Edmonds &
O’Shea (2010) and Latham, Crockett, McLean & Edmonds (2010). The Oscar CITS adaptation strategies were
described in Latham, Crockett, McLean & Edmonds (2011), which reported empirical results showing that
students whose learning material matched their learning styles performed 12% better than those with unmatched
material.
The rest of this paper will describe an original methodology and architecture for creating an Oscar CITS and the
experiments conducted to investigate its success in predicting learning styles in a real teaching/learning
environment.
4. Predicting learning styles through natural language dialogue
CITS are complex and time-consuming to develop, requiring expertise in knowledge engineering (the capture
and formatting of expert knowledge (O’Shea, Bandar & Crockett 2011), such as tutoring, learning styles and
domain knowledge) and CA scripting. Formalising the development of a CITS which can be applied to different
learning styles models and tutoring domains will help to speed up the development. This section presents a
methodology for creating an Oscar CITS which can predict learning styles from a natural language dialogue.
4.1. Methodology for creating Oscar CITS
The methodology for creating an Oscar CITS consists of three phases as shown in Table 1. The first phase of the
methodology relates to the creation of the learning styles module and the second phase to the tutorial subject
domain. The third phase incorporates the learning styles predictor and tutorial conversation into a CITS
architecture. Each phase will now be described.
Table 1.
3-Phase methodology for creating Oscar CITS.
Phase 1: Create the Learning Styles Predictor Module
1.1. Select a Learning Styles Model
a. Reduce the learning styles model if necessary
b. Extract the behaviour characteristics
1.2. Map learning style behaviour to the conversational tutoring style
1.3. Analyse the learning styles model for language traits
1.4. Adapt the generic logic rules to predict learning styles
Phase 2: Design a Tutorial Conversation
2.1. Capture the tutorial scenario and questions (including movies, voice, images, examples, etc.) from
human tutors in a specific domain
2.2. Determine the conversational structure/style
2.3. Map tutorial questions onto the generic question styles and templates
2.4. Script CA natural language dialogue for each tutorial question using the 3-level model
2.5. Link tutorial dialogue to logic rules through CA variables
Phase 3: Construct the CITS Architecture
4.2. Methodology phase 1: create the learning styles predictor module
4.2.1. Step 1.1: select a learning styles model
The first step in creating the learning styles predictor module requires a learning styles model (Felder &
Silverman 1988, Honey & Mumford 1992) to be selected. To illustrate and validate Phase 1 of the methodology,
the ILS model (Felder & Silverman 1988) was selected as the initial experimental group will be university
engineering students. The ILS questionnaire contains 44 questions, which is too many to incorporate into a
single tutoring session without being onerous for students. To reduce the ILS model, a study was undertaken to
investigate which were the best predictor questions (Latham, Crockett, McLean & Edmonds 2009). The study of
103 completed ILS questionnaires found that 17 questions predicted the overall learning style result in at least
75% of cases, with the top three questions predicting the result in 84% of cases. The resulting subset of the best
ILS predictor questions formed the basis of further analysis in developing the Oscar CITS strategy for the
prediction of learning styles.
The ILS model describes typical behaviour characteristics for each learning style. For clarity and ease of
analysis, the behaviour characteristics were extracted from the ILS model and summarised in a table of common
learner behaviour (Table 2).
Table 2.
Typical learner behaviour characteristics extracted from the ILS model.
Sensor
- Prefer facts, data, experimentation
- Prefer solving problems using standard methods
- Dislike surprises
- Patient with detail
- Do not like complications
- Good at memorising facts
- Careful but slow
- Comfortable with symbols (e.g. words)

Intuitor
- Prefer principles and theories
- Prefer innovation
- Dislike repetition
- Bored by detail
- Welcome complications
- Good at grasping new concepts
- Quick but careless
- Uncomfortable with symbols

Visual
- Remember what they see
- Like pictures, diagrams, flow charts, time lines, films
- Prefer visual demonstration

Verbal
- Remember what they hear, or what they hear then say
- Like discussion
- Prefer verbal explanation
- Learn by explaining to others

Active
- Do something with information – discuss/explain/test
- Active experimentation
- Do not learn much in passive situations (lectures)
- Work well in groups
- Experimentalists: process information by setting up an experiment to test an idea, or try it out on a colleague

Reflective
- Examine and manipulate information introspectively
- Reflective observation
- Do not learn much if no chance to think (lectures)
- Work better alone
- Theoreticians: process information by postulating explanations/interpretations, drawing analogies, formulating models

Sequential
- Follow linear reasoning processes
- Can work with material they have only partially or superficially understood
- Strong in convergent thinking and analysis
- Learn best when information is presented in a steady progression of complexity and difficulty

Global
- Make intuitive leaps
- Difficulty working with material not understood
- Divergent thinking and synthesis
- Sometimes better to jump directly to more complex and difficult material
4.2.2. Step 1.2: map learning style behaviour to the conversational tutoring style
To map learning style behaviour to the conversational tutoring style, each behaviour characteristic extracted in
step 1.1b (in Table 2) is assessed using the following criteria:
1. Is it possible to map the behaviour trait onto a two-way online conversational tutorial?
2. How could the behaviour trait be used to implicitly predict learning styles?
All behaviour traits that can be mapped onto a tutorial conversation and used to predict learning styles should be
included in a summary table along with a description of how they could be used to predict learning styles (Table
3).
Table 3.
Aspects of learner behaviour for predicting learning styles from a natural language tutorial dialogue.
Behaviour by learning style → implication for learning style prediction

Sensor
- Prefer facts, data, experimentation → perform better in questions with facts, examples and results
- Dislike surprises → prefer introductions, overviews and working in a sequential, predictable order
- Careful but slow → consider timing of interactions and number of errors
- Comfortable with symbols (e.g. words) → consider amount of discussion with the tutor

Intuitor
- Prefer principles and theories → perform better in theory questions
- Dislike repetition → present information usually only once
- Bored by detail → perform better where information is summarised
- Quick but careless → consider timing of interactions and number of errors
- Uncomfortable with symbols → consider amount of discussion with the tutor

Visual
- Remember what they see → perform better in questions with diagrams, pictures, movies
- Like pictures, diagrams, flow charts, time lines, films → perform better in questions using these resources
- Prefer visual demonstration → perform better in questions with visual walkthroughs rather than textual explanation

Verbal
- Remember what they hear, or what they hear then say → perform better in questions with movies and sound clips
- Like discussion → consider amount of discussion with the tutor
- Prefer verbal explanation → perform better in questions with movies, sound clips and tutor explanations
- Learn by explaining to others → consider amount of discussion with the tutor

Active
- Do something with information – discuss/explain/test → consider amount of discussion with the tutor; perform better in questions with practical exercises
- Experimentalists: process information by setting up an experiment to test an idea, or try it out on a colleague → perform better in practical questions

Reflective
- Examine and manipulate information introspectively → consider amount of discussion with the tutor
- Theoreticians → perform better in theoretical questions

Sequential
- Follow linear reasoning processes; learn best when information is presented in a steady progression of complexity and difficulty → perform better when information is presented in a steady progression of complexity and difficulty

Global
- Sometimes better to jump directly to more complex and difficult material → perform better where information is summarised and when they can attempt problems in one go
Next, it is necessary to decide which aspects of behaviour need to be captured during a tutoring conversation.
Each behaviour trait in Table 3 was studied in turn and the list was reorganised according to behaviour, with
similar behaviours grouped together. For example, as both Verbal and Active learners like discussion, they were
grouped together under the ‘like discussion’ behaviour category. Next, this list of behaviours was reduced
further by considering the behaviour that would need to be captured from a natural language conversation. For
example, the ‘like discussion’ category became the ‘discussion’ category and also included the Sensor (like
discussion), Intuitor (do not like discussion) and Reflective (do not like discussion) learning styles. The result of
this analysis is a list of behaviour cues to be captured during the conversational tutorial that can be used to
predict learning style. Table 4 lists the behaviour to be captured during a tutorial conversation in order to predict
ILS learning styles, and relates each behaviour variable to the learning styles it may be used to predict.
Table 4.
Learner behaviour cues to be captured during tutoring.
Behaviour variable to be captured → learning style(s) predicted
- Number of discourse interactions → Sensor, Intuitor, Verbal, Active, Reflective
- Number of questions asked → Sensor, Intuitor, Verbal, Active, Reflective
- Tutorial duration → Sensor, Intuitor
- Reading time → Sensor, Intuitor, Visual, Verbal
- Number of errors due to not reading the question → Sensor, Intuitor
- Right answer after seeing an image → Visual
- Right answer after seeing a movie/walkthrough → Visual, Verbal, Active
- Right answer after an explanation of theory → Intuitor
- Right answer after seeing an example → Sensor
- Choose to be guided through the steps of solving a problem → Sensor, Sequential
- Choose to solve a problem straight away → Intuitor, Global
- Score for practical questions → Active, Sensor
- Score for theoretical questions → Reflective, Intuitor
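The paper does not prescribe a data structure for these cues, but as a rough illustration (with field names of our own choosing) the behaviour in Table 4 could be accumulated per student in a record such as the following, to be read later by the prediction logic:

```python
from dataclasses import dataclass

# Illustrative per-student record of the behaviour cues in Table 4;
# comments give the ILS styles each cue may help predict.
@dataclass
class TutorialBehaviour:
    discourse_interactions: int = 0   # Sensor, Intuitor, Verbal, Active, Reflective
    questions_asked: int = 0          # Sensor, Intuitor, Verbal, Active, Reflective
    tutorial_duration_s: float = 0.0  # Sensor, Intuitor
    reading_time_s: float = 0.0       # Sensor, Intuitor, Visual, Verbal
    not_read_errors: int = 0          # Sensor, Intuitor
    right_after_image: int = 0        # Visual
    right_after_walkthrough: int = 0  # Visual, Verbal, Active
    right_after_theory: int = 0       # Intuitor
    right_after_example: int = 0      # Sensor
    chose_guided_steps: int = 0       # Sensor, Sequential
    chose_direct_attempt: int = 0     # Intuitor, Global
    practical_score: int = 0          # Active, Sensor
    theoretical_score: int = 0        # Reflective, Intuitor
```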
4.2.3. Step 1.3: analyse the learning styles model for language traits
Mairesse, Walker, Mehl & Moore (2007) showed that it was possible to automatically recognise an individual’s
personality type using language cues (such as the type of vocabulary used) from conversation and text (essays).
As learning style is linked to personality (Coffield, Moseley, Hall & Ecclestone 2004), it may be possible that
the type of vocabulary used can indicate an individual’s learning style. Özpolat and Akar (2009) mapped a short
list of key words to ILS learning styles, and analysed student Internet search terms to successfully predict
learning styles for three of the four ILS dimensions. Step 1.3 of the methodology involves analysing the learning
styles model to extract any language traits that could be indicative of learning style. The key words list in
Özpolat & Akar (2009) was extended by analysing the descriptions of behaviour traits in the ILS model.
Indicative words and phrases used to describe behaviour traits were extracted and mapped to learning styles.
This key words list was then expanded using a thesaurus to produce an initial set of key words and phrases that
were indicative of learning style. For example, the key word show (e.g. “Can you show me an example”)
indicates a Visual learning style, whereas the keyword tell (e.g. “Can you tell me what to do”) indicates a Verbal
learning style. The process of discovering associations between key words and particular learning styles requires
experimentation and analysis of tutoring dialogues, so the content of the list should be tested and expanded by
analysing actual tutoring discourse once the Oscar CITS has been developed for a particular domain.
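As a minimal sketch of this kind of key word analysis (the word list below is an illustrative fragment of our own, not the list developed for Oscar CITS):

```python
# Sketch of step 1.3: increment learning style scores when indicative key
# words appear in a learner utterance. Word-to-style pairs are illustrative.
KEYWORD_STYLES = {
    "show": "Visual",      # e.g. "Can you show me an example"
    "picture": "Visual",
    "tell": "Verbal",      # e.g. "Can you tell me what to do"
    "explain": "Verbal",
}

def styles_from_utterance(utterance, style_scores):
    """Increment style scores for key words found in an utterance."""
    for word in utterance.lower().split():
        style = KEYWORD_STYLES.get(word.strip(".,?!"))
        if style:
            style_scores[style] = style_scores.get(style, 0) + 1
    return style_scores
```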
4.2.4. Step 1.4: adapt the generic logic rules to predict learning styles
The final step in phase 1 is to convert the knowledge of the learning styles model (the captured behaviour
factors and key words gathered from steps 1.2 and 1.3) into a set of logic rules. The aim of such rules is to
continually increment student learning style values as the tutoring conversation takes place. A generic set of 33
logic rules was created using the learner behaviour captured from the ILS (Table 4). As the generic logic rules
relate to learner behaviour, the set should be adapted and expanded for different learning styles models that may
define other behaviours. Table 5 shows two examples of logic rules developed using the behaviour cues in Table
4 and mapped to the ILS. The first example, rule 1, is generated from the behaviour cue ‘Right answer after
seeing an image’ and is linked to the Visual learning style. If a student does not know the answer, is shown an
image and then gets the answer right, this visual presentation has helped their understanding so the Visual
learning style value is incremented. Rule 2 is generated from the ‘Number of errors due to not reading the
question’ behaviour, linked to the Intuitor and Visual learning styles. If the answer to a question is given in the
explanation text and a student gets the answer wrong, this behaviour indicates they are careless and not
comfortable with reading text, so the Intuitor and Visual learning style values are incremented.
Table 5.
Example logic rules to adjust student learning style values based on tutoring conversation.
1. Example rule to test whether presenting information visually helps the student’s information perception:
IF student shown image/diagram
AND student gives correct answer
THEN increase VISUAL;
2. Example rule to test how comfortable the student is with words and with detail:
IF answer is given in the explanation text
AND student does not know the answer
THEN increase INTUITOR
AND increase VISUAL;
The set of logic rules resulting from this step are to be applied during a tutoring conversation to dynamically
predict learning styles.
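A hedged sketch of how the two example rules in Table 5 might be encoded, using an event dictionary and score table of our own devising:

```python
# Sketch of the two Table 5 rules, applied after each tutorial exchange.
def apply_logic_rules(event, styles):
    # Rule 1: a visual presentation helped the student's perception.
    if event.get("shown_image") and event.get("answer_correct"):
        styles["Visual"] += 1
    # Rule 2: the answer was given in the explanation text but missed.
    if event.get("answer_in_text") and not event.get("answer_known"):
        styles["Intuitor"] += 1
        styles["Visual"] += 1
    return styles

styles = {"Sensor": 0, "Intuitor": 0, "Visual": 0, "Verbal": 0}
styles = apply_logic_rules({"shown_image": True, "answer_correct": True}, styles)
```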
This section has described the steps in phase 1 of the generic Oscar CITS methodology to create a Learning
Styles Predictor module using the ILS model as an example.
4.3. Methodology phase 2: design a tutorial conversation
Phase 2 of the methodology involves capturing the tutorial from human tutors and iteratively developing a
tutorial conversation with input from the human tutors. This part of the methodology will be illustrated using an
example tutoring domain of the database Structured Query Language (SQL).
4.3.1. Step 2.1: capture the tutorial scenario and questions (including movies, voice, images, examples, etc.)
from human tutors in a specific domain
The first step in designing a tutorial conversation is to capture a tutorial scenario from human tutors. The
domain of SQL was selected as the target audience for the pilot study is undergraduate computing students, for
whom a Databases course including SQL is compulsory. First, interviews were conducted with undergraduate
level database course tutors to identify important SQL concepts for the tutorial syllabus. Ten tutorial questions
and a multiple choice question (MCQ) test were devised to cover the learning outcomes of the tutorial. To
capture the tutorial scenario, a document was produced in consultation with lecturers that contained a
conversation script for each question, including possible learner answers and tutor’s responses to these. For each
learner response, a further tutor response was written, and so on, until each question in the tutorial had a number
of different paths depending on individual learner knowledge and responses. The design of the tutorial
conversation was a time-consuming and iterative process. However, by planning and detailing the dialogue at
this point, the development of the conversational agent was more efficient. Resources such as examples, movies,
images etc. were embedded into the tutorial conversation as appropriate.
4.3.2. Step 2.2: determine the conversational structure/style
A CITS that attempts to mimic a human tutor must be able to manage a tutoring conversation on a number of
levels, each with a different goal. Step 2.2 of the methodology determines the structure of the CA tutorial
conversation. Drawing on experience of classroom tutorials (Crown copyright 2004), three parts of a tutorial
conversation with separate goals were distinguished and a three-level model of a tutorial conversation was
designed (Fig. 2). At the highest level (the ‘social level’), Oscar CITS needs the ability to maintain a natural
language tutorial conversation, and like a human tutor must pick up cues if the learner is not engaging in the
tutorial (e.g. use of bad language) and choose to end the session. At the main ‘tutoring level’, Oscar CITS
directs the tutorial, explains topics and asks questions, guiding the learner towards an understanding of the topic.
This may involve Oscar CITS giving feedback on erroneous or incomplete solutions (intelligent solution
analysis), explaining the topic using different methods if required, such as practical examples (curriculum
sequencing) and giving hints to help the learner construct a solution (problem solving support). During a
tutorial, learners may discuss a related topic to help their understanding, requiring a deeper ‘discussion level’
with the ability to discuss and explain a predefined set of Frequently Asked Questions related to the domain.
Fig. 2. 3-Level model of a tutorial conversation.
As part of this step, a list of FAQs and answers should be captured from the human tutors, scripted in natural
language and added to the tutorial conversation document.
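The following sketch illustrates one plausible way to route an utterance through the three levels of Fig. 2; the trigger words and phrases are placeholders of our own, not Oscar's scripted patterns:

```python
# Illustrative router for the 3-level conversation model.
SOCIAL_TRIGGERS = {"idiot", "rubbish"}              # disengagement cues
FAQ_TRIGGERS = ("what is", "explain", "remind me")  # discussion-level cues

def route_utterance(utterance):
    text = utterance.lower()
    if any(word in text.split() for word in SOCIAL_TRIGGERS):
        return "social"      # e.g. bad language -> may end the session
    if any(phrase in text for phrase in FAQ_TRIGGERS):
        return "discussion"  # deeper FAQ discussion about the domain
    return "tutoring"        # default tutor-led explain/question level
```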
4.3.3. Step 2.3: map tutorial questions onto the generic question styles and templates
The third step in phase 2 of the methodology links the captured tutorial questions to the behaviour
characteristics identified in phase 1 step 1.2. This is done by mapping tutorial questions to the set of generic
question styles and templates. During the development of the Learning Styles Predictor module (Phase 1 steps
1.1 and 1.2), questions and behaviour from the ILS model were mapped to a conversational tutoring style.
Applying this knowledge, a set of four generic question styles (e.g. practical and theoretical style questions) and
two generic question templates were developed. The set of question styles and templates should be expanded
when different learning styles models and domains are implemented.
Fig. 3 shows an example generic question template that could be applied to both practical and theoretical
question styles. The template is for a question where different kinds of hints are given to learners and
information is captured about the type of help that is most effective. In Fig. 3, the question is asked in box 1 and
if the learner responds with the correct answer at any point, they are given feedback and taken to the next
question (response 2). If the learner does not know the answer or their answer is wrong, Oscar explains the
concept and repeats the question (response 3). If the learner still does not know the answer or their answer is
wrong, Oscar shows different resources and repeats the question (responses 4, 5 and 6). Finally, if the learner
still does not know the correct answer, Oscar tells them the answer, suggesting that they revise the topic
(showing additional resource links), then asks if they wish to continue with the tutorial (response 7). If the
learner wishes to continue, they are taken to the next question; if not the tutorial is ended.
Fig. 3. Example generic question template with hints.
In this step, tutorial questions are mapped onto the generic styles and templates, with extra resources included as
required, and the dialogue updated in the tutorial conversation document.
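The control flow of the hint-based template in Fig. 3 can be summarised in a short sketch; the callables stand in for the scripted dialogue and are purely illustrative:

```python
# Condensed sketch of the Fig. 3 template, with escalating hints in order.
HINT_SEQUENCE = ["explain_concept", "show_example", "show_image", "show_movie"]

def run_question(ask, is_correct, show_hint, reveal_answer):
    if is_correct(ask()):            # box 1 / response 2: correct first time
        return True
    for hint in HINT_SEQUENCE:       # responses 3-6: escalate the help given;
        show_hint(hint)              # which hint finally works is itself a
        if is_correct(ask()):        # behaviour cue for the logic rules
            return True
    reveal_answer()                  # response 7: reveal, suggest revision
    return False
```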
4.3.4 Step 2.4: script CA natural language dialogue for each tutorial question using the 3-level model
Step 2.4 of the methodology involves creating CA scripts to conduct the tutoring dialogue defined in steps 2.1,
2.2 and 2.3 (and recorded in the tutorial conversation document). This involves first adopting a CA that can
capture and receive information using variables, then scripting the conversation using an appropriate scripting
language. The InfoChat CA (Convagent Ltd 2005) was selected as it is a pattern-matching CA that allows
information to be captured using variables. CA scripts, organised into contexts, were developed for the tutorial
based on the tutorial conversation document and applying the 3-level tutorial conversation model. Overall, there
were 38 contexts containing around 400 rules written using the InfoChat PatternScript language (Michie &
Sammut 2001). An example FAQ rule from one of the tutorial scripts is shown in Table 6. In the rule, a is the
activation level used for conflict resolution (Michie 2001); p is the pattern strength followed by the pattern that
is matched against the user utterance. r is the CA response. Also seen in the example is the wildcard (*) and
macros (<explain-0>) containing a number of standard patterns that are each matched separately. When the rule
fires, the variable FAQ is set to ‘true’ by the *<set> command.
Table 6.
Example CA script: FAQ rule.
<Rule-01>
a:0.01
p:50 *<explain-0> *select*
p:50 *select* <explain-0>*
p:50 *<remind-0> *select*
p:50 *select* <remind-0>*
p:50 *<confused-0> *select*
p:50 *select* <confused-0>*
r: The SQL SELECT command is used to retrieve data from
one or more database tables. *<set FAQ true>
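To show the mechanics of pattern matching and variable capture, here is a toy re-creation of this FAQ rule; the real InfoChat PatternScript engine, with its activation levels and pattern strengths, is considerably more sophisticated:

```python
import re

# Toy re-creation of the Table 6 FAQ rule (illustrative only).
FAQ_RULE = {
    "patterns": [r".*\b(explain|remind|confused)\b.*\bselect\b.*",
                 r".*\bselect\b.*\b(explain|remind|confused)\b.*"],
    "response": "The SQL SELECT command is used to retrieve data from "
                "one or more database tables.",
}

def fire_rule(utterance, variables):
    for pattern in FAQ_RULE["patterns"]:
        if re.match(pattern, utterance.lower()):
            variables["FAQ"] = True          # mirrors *<set FAQ true>
            return FAQ_RULE["response"]
    return None                              # no match: try other rules

variables = {}
reply = fire_rule("I'm confused about SELECT, can you explain?", variables)
```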
4.3.5. Step 2.5: link tutorial dialogue to logic rules through CA variables
The final step in phase 2 of the methodology links the behaviour captured by the tutorial conversation to the set
of logic rules (produced in phase 1) that predict learning styles. Moving through the tutorial conversation
document, each learner behaviour found is annotated with the learning style defined in the
associated logic rule. The logic rules from Phase 1 (step 1.4) specify which learning styles are to be incremented
when particular events occur (such as incrementing the Sensory learning style value after an example is shown).
Next, the CA scripts must be updated to capture the behaviour by setting variable values when particular rules
fire. Now that the tutorial conversation has been fully scripted for a CA it must be tested and verified by expert
human tutors.
This section has described the steps of the generic methodology to design a tutoring conversation illustrated by
the development of a tutorial for SQL using the InfoChat CA.
4.4. Methodology phase 3: construct the CITS architecture
Once the learning styles predictor module and the tutorial conversation have been designed, it is necessary to
incorporate them into a CITS architecture. The CITS will require a CA that allows information to be passed in
and out, a Graphical User Interface (GUI) and a Student Model. The next section will propose a standard Oscar
CITS architecture that is generic and incorporates the required components.
5. Oscar CITS architecture
The proposed Oscar CITS architecture is shown in Fig. 4. The Oscar CITS is independent of the learning styles
model adopted and the subject domain being taught. As such, the proposed Oscar CITS architecture is modular,
allowing individual components to be reused or replaced as necessary. The proposed generic architecture allows
alternative tutorial knowledge bases and CA scripts developed following phase 2 of the methodology to be
simply ‘plugged in’ to adapt the tutoring to new subjects. Similarly, different learning styles models may be
applied by replacing the Learning Styles Predictor component (created following the methodology phase 1).
Fig. 4. Oscar CITS system architecture.
Each component in the proposed architecture will now be briefly described.
The Controller is the central manager of the system, responsible for instantiating objects and system
variables, communicating with all components and managing the learner interaction.
The Learning Styles Predictor component receives information from the CA, GUI and student model to
predict a student’s learning style, using information about learning styles held in a knowledge base.
This module is developed following phase 1 of the Oscar CITS methodology.
The Student Model component receives and sends information from and to the controller about the
student, such as their level of knowledge, topics visited, test scores and learning style.
The Graphical User Interface (GUI) component is responsible for display, managing events (such as
clicking of buttons) and sending communication to and from the user. The display consists of a
webpage that provides instructions, displays questionnaires, tests, images, documents, interactive
movies and the chat area used to communicate with the user.
The Tutorial Knowledge Base is responsible for managing course information, such as topics and their
breakdowns, related tests and teaching material. The tutorial knowledge base receives information and
instructions from the GUI, learning styles predictor and CA components via the controller, and sends
information to the GUI and CA via the controller.
The Conversational Agent component is responsible for accepting natural language text and
information about topic and learning style from the GUI, tutorial knowledge base and learning styles
components via the controller, and generating a natural language response. The CA accesses a database
of tutorial conversation scripts (related to but not linked to the tutorial knowledge base) in order to
match the input to rules that generate a response. The CA records the dialogue in log files that can be
accessed by the controller.
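A schematic of how the Controller might mediate between the pluggable components is sketched below; the method names are a simplification of ours, not the actual component interfaces:

```python
# Schematic of the modular architecture in Fig. 4 (our own simplification).
class Controller:
    """Central manager mediating between the pluggable components."""

    def __init__(self, gui, agent, knowledge_base, predictor, student_model):
        self.gui, self.agent = gui, agent
        self.kb, self.predictor = knowledge_base, predictor
        self.student = student_model

    def handle_utterance(self, text):
        reply = self.agent.respond(text)             # CA generates a response
        self.predictor.update(self.agent.variables)  # behaviour -> logic rules
        self.student.learning_style = self.predictor.predict()
        self.gui.display(reply)

# Swapping the subject domain means plugging in a new knowledge base and CA
# scripts (methodology phase 2); swapping the learning styles model means
# plugging in a new predictor (methodology phase 1).
```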
A modular, generic architecture and an original, generic methodology have been proposed for creating an Oscar
CITS. The Oscar CITS architecture has been designed with component reuse in mind, and can be adapted for
different learning styles models by following phase 1 of the Oscar CITS Methodology to develop another
learning styles predictor module. Similarly, different subject domains can be applied by following phase 2 of the
Oscar CITS Methodology to develop the tutorial conversation. The next section will describe the experiments
carried out to validate the proposed Oscar CITS methodology and architecture.
6. Experiments
The Oscar CITS was implemented and tested by real university students in a real teaching/learning environment
in order to:
validate the Oscar CITS prediction of learning styles from a natural language tutoring dialogue;
analyse the effectiveness of Oscar CITS as a learning tool;
study the impact of the Oscar CITS natural language tutoring on students.
Oscar CITS was implemented to deliver an SQL revision tutorial by applying the methodology and architecture
proposed in sections 4 and 5. First, the ILS model was adopted and analysed following Phase 1 of the
Methodology described in section 4 to develop the Learning Styles Predictor module. In the next phase of the
Methodology (phase 2) a ten-question SQL revision tutorial was captured from university lecturers and the
generic tutorial question templates and styles were applied. A 12-question MCQ test was devised to assess the
tutorial learning outcomes. The InfoChat pattern-matching CA (Convagent Ltd 2005) was adopted, and the
tutorial conversation was scripted using its PatternScript language (Michie & Sammut 2001). The logic rules
developed for the Learning Styles Predictor module were then mapped to the CA scripts to ensure that relevant
behaviour was captured using variables. In Phase 3 of the methodology, the proposed Oscar CITS architecture
was implemented using the .net framework and mySQL, and the Oscar CITS was installed onto a web server.
The Oscar CITS is at present available via the Internet to Manchester Metropolitan University (MMU) students.
Oscar CITS conducts its tutoring conversations in real time and is currently being used to support a number of
undergraduate and postgraduate computing modules within MMU. The Oscar CITS GUI is shown in Fig. 5.
Fig. 5. Oscar CITS
The experiments described in this paper have been selected from a larger study to demonstrate how different
types of behaviour may be used to predict learning styles.
6.1. Experimental design
As the aim of the experiments is threefold, the Oscar CITS will be evaluated on three levels:
1. Can Oscar CITS predict learning styles dynamically from a two-way tutoring discourse? How
successful is the prediction of learning styles? The Oscar CITS prediction of learning styles will be
measured against the results of the ILS questionnaire. The main hypothesis ‘it is possible to predict
learning style from a two-way tutoring conversation’ was broken down into five hypotheses (H) as
follows:
H1: the success of a learner after experiencing a particular style of tutoring is indicative of learning
style.
H2: a lack of attention to detail in answering questions is indicative of learning style.
H3: choosing to be guided through a process (or not) is indicative of learning style.
H4: the success of a learner in a particular style of tutoring question (theoretical or practical) is
indicative of learning style.
H5: a learner’s reading time is indicative of learning style.
2. Does Oscar CITS successfully tutor learners, i.e. do they learn anything? Learning gain will be
evaluated by comparing the MCQ pre-test score (completed before the tutoring conversation begins) to
the MCQ post-test score (completed after the tutoring conversation ends) to see whether test scores
have improved, as follows:
Learning_gain = post-test_score – pre-test_score
3. How comfortable and confident do learners feel in using the tutoring system, and would they use Oscar
CITS in practice? Satisfaction from the learners’ perspective will be determined via a questionnaire
using a set of subjective metrics. The design of the evaluation questionnaire was based on a user
satisfaction questionnaire for rating dialogues with text-based CAs (O’Shea, Crockett & Bandar 2011).
The questionnaire requires participants to rate aspects of the Oscar CITS tutorial using a six-point
Likert scale (which forces participants to express a positive or negative opinion). Additionally, open
questions were included to capture positive and negative comments.
Participants
This paper presents results collated from two studies and evaluated on all three levels. The studies had different
participants who had no previous experience using Oscar CITS.
Study 1 – An initial pilot study was undertaken to explore whether the implementation of Oscar CITS was
successful in tutoring and whether sufficient information was captured to predict learning styles. Ten
participants were chosen whose first language was English and who had previous experience of an
undergraduate ORACLE SQL course (but with various levels of expertise).
Study 2 – There were 104 participants who had previous experience of an undergraduate SQL course and
various levels of SQL expertise. Participants were second and final year undergraduate students on a computer
science degree at MMU. The Oscar CITS SQL revision tutorial was integrated into the first teaching week and
during the timetabled classes, participants were asked to complete the revision tutorial. In order to promote full
completion of the tutorial, participants who completed the Oscar CITS revision tutorial were awarded marks in
recognition of their engagement.
6.2. Experimental methodology
Study 1 was a controlled study that took place in an office setting where participants could be unobtrusively
observed during their Oscar CITS tutorial. Participants completed the tutorial individually in a single session.
Study 2 was undertaken in several computer laboratories. Participants started the Oscar CITS revision tutorial in
the laboratories, and those who did not complete the tutorial in a single session were able to continue the
revision via the Internet at another time.
Each participant registered with the Oscar CITS anonymously, which involved being assigned a user ID and
creating a password, which were recorded in the student model. Next, participants completed the formal ILS
questionnaire, also recorded in the student model. Before starting the conversational tutorial, participants
completed a pre-tutorial 12-question MCQ test, known as the pre-test, to assess their existing SQL knowledge.
The pre-test results were stored in the student model. Next, Oscar CITS directed a two-way conversational SQL
revision tutorial that took on average approximately 43 minutes, with each participant following an individual
learning path depending on their existing knowledge and the dialogue. During the tutorial, the participant
dialogue was recorded in log files along with captured aspects of participant behaviour. There were ten main
SQL tutorial questions. At the end of the tutorial, participants completed the same MCQ test (known as the post-
test) to assess their learning gain, with the results being stored in the student model. Next, Oscar CITS presented
participants with a comparison of their test results (indicating their learning gain) and some feedback on their
tutorial performance. Finally, participants were asked to complete a user evaluation questionnaire. For the
purpose of the experiments, the participant behaviour data recorded during tutorial interactions was analysed to
generate a learning styles prediction after all tutorials were complete (rather than during the tutorial conversation
as in the full working system). The next section will describe the analysis of participant behaviour for the five
reported experiments.
6.2.1. Analysis of participant behaviour
Experiment 1: logic rules
This experiment relates to a participant’s individual learning path during the tutorial. During the tutorial, logic
rules increment associated learning style scores when particular behaviour occurs. For each ILS dimension the
two related learning style scores were compared to give a prediction of learning style for that dimension. For
example, for the processing dimension if the score for Active is higher than the score for Reflective, the
participant is predicted to be Active. Where scores were equal, the learning style dimension remained
unclassified and was excluded from the analysis. To calculate the prediction accuracy, the predicted learning
style for each dimension was compared to the ILS questionnaire results. The number of correct predictions for
each learning style was counted to produce an accuracy value: the percentage of correct predictions for
that learning style. This experiment tests the hypotheses H1, H2 and H3 and generated prediction accuracies for
all learning style dimensions.
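A sketch of this analysis, under our own encoding of scores and ILS questionnaire results:

```python
# Sketch of the experiment 1 analysis; ties leave a dimension unclassified.
DIMENSIONS = [("Sensor", "Intuitor"), ("Visual", "Verbal"),
              ("Active", "Reflective"), ("Sequential", "Global")]

def predict_from_scores(scores):
    """Predict one style per ILS dimension from the incremented values."""
    prediction = {}
    for a, b in DIMENSIONS:
        if scores[a] != scores[b]:          # equal scores: excluded
            prediction[(a, b)] = a if scores[a] > scores[b] else b
    return prediction

def style_accuracy(participants, style, dimension):
    """Percentage of participants with ILS result `style` predicted correctly."""
    relevant = [p for p in participants
                if p["ils"][dimension] == style and dimension in p["predicted"]]
    if not relevant:
        return None
    hits = sum(p["predicted"][dimension] == style for p in relevant)
    return 100.0 * hits / len(relevant)
```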
Experiment 2: tutorial question style
This experiment considered the style of tutorial questions where participants gave the correct answer by
counting the number of correct theoretical and the number of correct practical questions. The number of correct
answers of each style was compared, taking into consideration the possible number of correct answers for
theoretical and practical questions, using the formula below:
(Correct practical questions / Total practical questions) compared to (Correct theoretical questions / Total theoretical questions)
Participants who performed equally well in both styles of question were unclassified and excluded from the
analysis. Where participants performed better in practical questions, the Oscar CITS predicted their learning
style to be Active and Sensory. Participants who performed better in theoretical questions were predicted to be
Reflective and Intuitive. The Oscar CITS prediction was compared to the ILS questionnaire results and the
number of correct predictions counted for each learning style, to produce a prediction accuracy percentage. This
experiment tests the hypothesis H4 and generated prediction accuracies for the perception (Sensory/Intuitive)
and processing (Active/Reflective) ILS dimensions.
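Expressed as code, the measure might look like the following (our own formulation of the normalised comparison described above):

```python
# Sketch of the experiment 2 measure: success rates are normalised by the
# number of questions of each style before being compared.
def question_style_prediction(correct_prac, total_prac, correct_theo, total_theo):
    prac_rate = correct_prac / total_prac
    theo_rate = correct_theo / total_theo
    if prac_rate > theo_rate:
        return {"Active", "Sensor"}        # better at practical questions
    if theo_rate > prac_rate:
        return {"Reflective", "Intuitor"}  # better at theoretical questions
    return None                            # equal performance: unclassified
```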
Experiment 3: approach to queries
In Experiment 3, the learner’s approach to writing queries was considered. Two questions in the tutorial applied
a generic question template (methodology step 2.3) with a choice of approach to writing SQL queries to solve a
problem. For each question, participants who attempted the query straight away were predicted to be Global
learners whilst participants who asked for guidance were predicted to be Sequential learners. Each participant
had two predictions, one for each question. The predicted learning style was compared to the ILS questionnaire
results, and the number of correct predictions counted for each learning style to produce a prediction accuracy
percentage. This experiment tests the hypothesis H3 and generated prediction accuracies for the perception
(Sensory/Intuitive) and understanding (Sequential/Global) ILS dimensions.
Experiment 4: attention to detail
One tutorial question applied a generic ‘trick question’ style (methodology step 2.3), that includes the answer in
the explanatory text to test the participant’s attention to detail and reading skills. Participants who did not
answer the question correctly were predicted to be Visual and Intuitive learners, whereas those who answered
correctly were predicted to be Verbal and Sensory learners. The predicted learning style was compared to the
ILS questionnaire results, and the correct predictions counted for each learning style to produce a prediction
accuracy percentage. This experiment tests the hypothesis H2 and generated prediction accuracies for the
perception (Sensory/Intuitive) and the input (Visual/Verbal) ILS dimensions.
Experiment 5: reading time
Experiment 5 considers a participant’s aptitude with words by investigating their reading speed. As each learner
follows an individual learning path, calculating reading time from the total number of words read over the
duration of the tutorial would not produce a fair comparison. The only text common to all participant
interactions is the introductory text for the first tutorial question, so reading time was defined as the time taken
to read this text. Each participant’s reading time was then compared to the average (both mean and median)
reading time across the sample. Where a participant had an above average reading time, Oscar CITS predicted
they were Sensory and Visual learners, and where they had a below average reading time, they were predicted to
be Intuitive and Verbal learners. The predicted learning style was compared to the ILS questionnaire results, and
the correct predictions counted for each learning style to produce a prediction accuracy percentage. This
experiment tests the hypothesis H5 and generated prediction accuracies for the perception (Sensory/Intuitive)
and the input (Visual/Verbal) ILS dimensions.
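A sketch of this measure, parameterised on whether the mean or the median is used as the average:

```python
from statistics import mean, median

# Sketch of the experiment 5 measure. The equal-to-average case is not
# specified in the paper, so grouping it with "below" is our assumption.
def reading_time_prediction(sample_times, participant_time, average=mean):
    if participant_time > average(sample_times):
        return {"Sensor", "Visual"}    # above average reading time
    return {"Intuitor", "Verbal"}      # below (or equal to) average

styles = reading_time_prediction([30.0, 42.0, 55.0, 61.0], 58.0, average=median)
```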
7. Results and discussion
There were 114 participants over both studies, with 75 participants completing the full revision tutorial. The
distribution of learning styles across the 75 participants was approximately equal for all but the Visual/Verbal
dimension, which contained many more Visual than Verbal learners. This finding is consistent with the ILS
model, which states that “most people of college age and older are visual” (Felder & Silverman 1988). This has
implications for the analysis of results for predicting the Visual/Verbal learning styles, as the dataset is so biased
towards the Visual learning style. The distribution of the 75 participants is shown in the first row of Table 7
(prior probability). The experimental results will now be discussed.
7.1. Experimental results
Table 7 shows the prediction accuracy results, representing the ability of Oscar CITS to predict a participant’s
learning style for that experimental measure. Experiments 3, 4 and 5 did not require the completion of the entire
tutorial and so the number of participants analysed is higher. The prior probability is the accuracy of predicting
a learning style based on the distribution of learning styles across the sample. This is included as a fairer
comparison than simply using 50% because the spread of learning styles across the sample is not exactly equal.
This is particularly true for the Visual/Verbal dimension where 87% of participants are Visual. Each
experiment’s results will now be discussed.
Table 7.
Experimental results: accuracy of prediction of learning styles.
Measure                                    n    Sensory  Intuitive  Visual  Verbal  Active  Reflective  Sequential  Global
Prior probability                          75   60%      40%        87%     13%     57%     43%         60%         40%
Experiment 1 – logic rules                 75   4%       80%        68%     10%     100%    0%          82%         33%
Experiment 2 – tutorial question style     75   36%      50%        -       -       53%     73%         -           -
Experiment 3a – approach to queries (Q5)   89   65%      38%        -       -       -       -           74%         48%
Experiment 3b – approach to queries (Q9)   76   70%      56%        -       -       -       -           70%         61%
Experiment 4 – attention to detail         94   59%      28%        94%     17%     -       -           -           -
Experiment 5 – reading time                95   51%      78%        47%     71%     -       -           -           -
Experiment 1: logic rules
Using this measure, Oscar CITS was able to predict three learning styles with higher accuracy than the prior
probability – Intuitive (80%), Active (100%) and Sequential (82%). For the Visual learning style, even though
Oscar CITS accurately predicts Visual participants in 68% of cases, the unequal spread of participants for this
dimension means that this is not significant when compared to the prior probability of 87%. This measure was
not able to predict the Reflective learning style, probably because Reflective learners spend time after the
learning experience reflecting on what they know and putting it together as knowledge. The results support
hypotheses H1, H2 and H3 and show that logic rules are the most successful factor in predicting the Intuitive,
Active and Sequential learning styles.
Experiment 2: tutorial question style
70 participants showed a preference for practical or theoretical tutorial questions; those participants whose
success was the same for both question styles remained unclassified. Oscar CITS was able to predict the
Intuitive (50%) and Reflective (73%) learning styles better than the prior probability. The results support
hypothesis H4 and show that tutorial question style was the most successful factor in predicting the Reflective
learning style, with an accuracy of 73% far exceeding the prior probability of 43%.
Experiment 3: approach to queries
This experiment predicted learning styles depending on a participant’s approach to writing queries. Table 7
reports results for two relevant tutorial questions as Experiments 3a and 3b. 89 participants completed question
5 (Experiment 3a) and 76 participants completed question 9 (Experiment 3b). Apart from the Sequential
learning style, results for the second question were higher – probably because having experienced the style of
question before, participants had a better idea of their preferred approach. All learning styles (except the
Intuitive in experiment 3a) were predicted with higher accuracy than the prior probability. Experiment 3b was
the most successful factor in predicting the Sensory (70%) and Global (61%) learning styles, and the results
support hypothesis H3.
Experiment 4: attention to detail
94 participants had completed the ‘trick question’. For the Sensory/Intuitive learning style dimension, the
prediction accuracies of 59% and 28% are worse than the prior probability for the sample of 62% and 38%
respectively. However, predictions for the Visual/Verbal learning style dimension were better than the prior
probability at 94% and 17% respectively, with this measure producing the most accurate prediction overall for
the Visual learning style. The results therefore support hypothesis H6: a lack of attention to detail in answering
questions is indicative of learning style.
Experiment 5: reading time
Reading time was calculated for 95 participants who had completed Question 1. The results were mixed, with
poor predictions of Sensory and Visual participants (those with an above average reading time) but good
predictions of Intuitive and Verbal participants (those with a below average reading time). The prediction
accuracies for the Intuitive (78%) and Verbal (71%) learning styles are much higher than the prior probabilities
of 40% and 13% respectively. The results show that this measure is the best predictor of Verbal learning style,
thus supporting the hypothesis H5. However, it must be borne in mind that the uneven spread of participants for
the Visual/Verbal dimension prevents firm conclusions from being drawn.
7.2. Learning gain
Table 8 shows the participant learning gain results, with a total average test score improvement of 13%. Average
learning gain was higher for study 1 (20%), which probably reflects the higher motivation of participants
completing the tutorial in a controlled setting. Study 2 involved real students completing the tutorial in
a real educational environment, and so a lower learning gain was expected due to factors such as distractions.
The results suggest that Oscar CITS did help learning, as participants increased their knowledge of SQL and
improved their test results.
Table 8.
Learning gain results.

| Study | n | Mean learning gain (/12) | Standard deviation | Mean % |
|---|---|---|---|---|
| Study 1 | 10 | 2.4 | 2.01 | 20% |
| Study 2 | 63 | 1.44 | 2.07 | 12% |
| Total | 73 | 1.58 | 2.07 | 13% |
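The learning-gain figures in Table 8 can be reproduced from paired test scores; below is a sketch assuming pre- and post-test marks out of 12 (the paper's marking scale), with the gain taken as the raw score improvement:

```python
from statistics import mean, stdev

def learning_gain_summary(pre_scores, post_scores, max_mark=12):
    """Sketch of the Table 8 summary, assuming paired pre- and post-test
    scores marked out of 12 and gain = post-test minus pre-test score."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return {
        "n": len(gains),
        "mean_gain": mean(gains),
        "standard_deviation": stdev(gains),
        "mean_gain_pct": 100.0 * mean(gains) / max_mark,
    }
```

On the totals row, for example, a mean gain of 1.58 marks out of 12 gives the reported 13%.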
7.3. Participant evaluation
In general, the participant feedback showed that Oscar CITS was well received, understandable and helpful. 46
participants completed the evaluation questionnaire. 87% of participants rated the tutoring highly, with 51%
awarding the tutoring the highest rating. 94% of participants found the tutoring helpful, with 72% giving the
highest rating. Notably, 35% of participants stated that they would use the Oscar CITS tutorial instead of
attending a face-to-face tutorial. Slightly more than half of the sample (52%) would use Oscar CITS instead of
reading a book, and 85% of participants would use Oscar CITS to support classroom tutoring. Overall, 89% of
participants would use a resource like Oscar CITS if it were available. When openly asked for comments about
Oscar CITS, half of the participants remarked that Oscar was easy to use and 43% noted that Oscar CITS was
helpful. One participant commented “is like having your own friendly tutor”, and another “it gives instant
feedback unlike a traditional test”. From these results it can be concluded that most participants found the Oscar
CITS tutoring easy to use and helpful, and would use it to support their studies.
7.4. Results summary
The experiments were conducted using real university students in a real teaching/learning environment. The
results support the hypotheses and show that by adopting the Oscar CITS methodology and architecture, it is
possible to successfully predict learning styles from a two-way natural language tutoring conversation. Oscar
CITS helped participants to increase their knowledge and participants valued the Oscar CITS learning
experience and would use Oscar CITS to support learning. Table 9 summarises the best prediction accuracies
resulting from the five experiments described. In the full Oscar CITS, learning style values are adjusted
dynamically throughout the tutorial conversation based on learner behaviour, apart from the Reflective learning
style, for which the preferred question style is tested periodically at the end of each tutorial.
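As a rough, hypothetical illustration of this dynamic adjustment (the exact update rule and weights are not reproduced here, so the cue names and weights below are invented for the example):

```python
def update_style_values(style_values, cue, cue_weights):
    """Hypothetical sketch: each behaviour cue observed during the tutoring
    conversation nudges the running value of one or more ILS styles."""
    for style, weight in cue_weights.get(cue, {}).items():
        style_values[style] = style_values.get(style, 0.0) + weight
    return style_values

# Illustrative weights only: choosing a practical exercise nudges the
# Active and Sensory values upwards.
cue_weights = {"chose_practical_exercise": {"Active": 1.0, "Sensory": 0.5}}
values = update_style_values({}, "chose_practical_exercise", cue_weights)
```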
Table 9.
Oscar CITS best prediction accuracy.

| | n | Sensory | Intuitive | Visual | Verbal | Active | Reflective | Sequential | Global |
|---|---|---|---|---|---|---|---|---|---|
| Oscar CITS | 75-95 | 70% | 80% | 94% | 71% | 100% | 73% | 82% | 61% |
The methodology and architecture for Oscar CITS are independent of the learning styles model and subject
domain chosen. Although the results show the successful prediction of ILS learning styles, before conclusions
may be drawn about non-computing subject domains it is necessary to implement Oscar CITS and empirically
test its prediction of learning styles with different models.
A comparison of results with other CITS is not possible as no other CITS can predict learning styles. On a
superficial level, the results compare favourably with menu-based ITS that predict ILS learning styles (Özpolat
& Akar 2009; Cha et al. 2006; Garcia et al. 2007). However, it is inappropriate to compare prediction accuracies
with these ITS because, despite adopting the ILS, they classify learning styles differently, introducing a third
‘Neutral’ class for each dimension which describes learners with low strength learning styles (i.e. those at the
centre of the dimension). Also, the method of calculating prediction accuracy for these ITS uses different
scoring, by awarding a 0.5 score if the learning style prediction is mismatched with a Neutral classification,
rather than the zero score that Oscar CITS applies to all mismatches.
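The scoring difference can be stated precisely; the sketch below contrasts the two schemes as described above (it is not code from any of the cited systems):

```python
def prediction_score(predicted, actual, neutral_half_credit=False):
    """Score one learning-style prediction against the questionnaire result.
    Oscar CITS scores every mismatch as 0; the menu-based ITS cited above
    instead award 0.5 when a prediction is mismatched with a 'Neutral'
    classification (a learner at the centre of the dimension)."""
    if predicted == actual:
        return 1.0
    if neutral_half_credit and actual == "Neutral":
        return 0.5
    return 0.0
```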
8. Conclusions
This paper has presented the Oscar Conversational Intelligent Tutoring System, a novel CITS which implicitly
predicts and adapts to learning styles whilst directing a tutorial conversation. Oscar CITS imitates a human tutor
by incorporating the intelligent tutoring techniques of curriculum sequencing, intelligent solution analysis and
problem solving support. A tutorial is directed by Oscar CITS, which detects behaviour cues from learners to
present learning material suited to their knowledge and learning style. Learners can participate in a personalised
tutorial via the Internet, learning at their own pace at a time and place to suit them. Oscar’s conversational style
is intuitive to use, helping to improve motivation and build confidence, with one user remarking “it encouraged
me to think rather than simply giving me the answer”.
An original methodology and architecture for creating the Oscar CITS were described, which are independent of
the learning styles model and subject domain being taught. The 3-phase methodology describes the development
of the Learning Styles Predictor, Tutorial Knowledge Base and CA components and includes a number of
generic tools to aid development (behaviour variables, key words, logic rules, 3-level conversation model,
question styles and templates). The generic architecture is modular, allowing different learning style models and
subject domains to be applied whilst supporting the reuse of components.
Oscar CITS was implemented to deliver an SQL revision tutorial and evaluated empirically by real students in a
real educational setting. The experimental results show that it is possible to predict learning styles from a two-
way natural language tutoring conversation. Oscar CITS successfully predicted all learning styles in the Index of
Learning Styles model, with accuracies ranging from 61-100%. Oscar CITS was well received by participants,
who found it helpful, easy to use and successful in improving their knowledge.
Further work has been done in analysing different sorts of behaviour for predicting learning styles from natural
language, including a preference for practical or theoretical questions, the number of words used, the amount of
discussion, duration and vocabulary. An algorithm is now being developed to improve the accuracy of predicting
learning styles using a fuzzy set representation that combines different aspects of learner behaviour captured
from a natural language tutorial. An Oscar CITS adaptation algorithm has been designed that selects the best
fitting adaptation for each tutorial question by combining student learning styles with available teaching styles
(Latham, Crockett, McLean & Edmonds, 2011). In future, a speech module could be incorporated into the Oscar
CITS architecture to facilitate spoken tutorial conversations.
Acknowledgements
The research presented in this paper was funded by EPSRC. The authors thank ConvAgent Limited for the use
of their InfoChat CA and PatternScript scripting language.
References
Ammar, M. B., Neji, M., Alimi, A. M. & Gouarderes, G. (2010). The Affective Tutoring System. Expert Systems with Applications 37, 3013-
3023.
Arnott, E., Hastings, P. & Allbritton, D. (2008). Research Methods Tutor: Evaluation of a dialogue-based tutoring system in the classroom.
Behaviour Research Methods 40 (3), 694-698.
Brooks, C., Greer, J., Melis, E. and Ullrich, C. (2006). Combining ITS and eLearning Technologies: Opportunities and Challenges. ITS
2006, LNCS 4053, 278-287.
Brusilovsky, P. & Peylo, C. (2003). Adaptive and Intelligent Web-based Educational Systems. Int. J. Artificial Intelligence in Education 13,
156-169.
Cha, H. J., Kim, Y. S., Park, S. H., Yoon, T. B., Jung, Y. M. & Lee, J. H. (2006). Learning styles diagnosis based on user interface behaviours
for the customization of learning interfaces in an intelligent tutoring system. ITS 2006, LNCS 4053, 513-524.
Chi, M.T.H., Siler, S., Jeong, H., Yamauchi, T. & Hausmann, R.G. (2001). Learning from human tutoring. Cognitive Science 25, 471-533.
Coffield F., Moseley D., Hall E. & Ecclestone K. (2004). Learning Styles and Pedagogy in Post-16 Learning: A Systematic and Critical
Review. London: Learning and Skills Research Center.
Convagent Ltd (2005). Convagent. Available: http://www.convagent.com/
Crown copyright (2004). Pedagogy and Practice: Teaching and Learning in Secondary Schools Unit 1: Structuring learning, DfES
Publications.
D’Mello, S., Lehman, B., Sullins, J., Daigle, R., Combs, R., Vogt, K., Perkins, L. & Graesser, A. (2010). A Time for Emoting: When Affect-
Sensitivity Is and Isn’t Effective at Promoting Deep Learning. ITS 2010, LNCS 6094, 245-254.
Felder, R. & Silverman, L.K. (1988). Learning and Teaching Styles in Engineering Education. J. Engineering Education 78 (7), 674-681.
Garcia, P., Amandi, A., Schiaffino, S. & Campo, M. (2007). Evaluating Bayesian networks’ precision for detecting students’ learning styles.
Computers & Education 49, 794-808.
Honey, P. & Mumford, A. (1992). The manual of learning styles, Peter Honey, Maidenhead.
Hsieh, S. W., Jang, Y. R., Hwang, G. J., & Chen, N. S. (2011). Effects of teaching and learning styles on students’ reflection levels for
ubiquitous learning. Computers & Education 57, 1194-1201.
Kelly, D. & Tangney, B. (2006). Adapting to intelligence profile in an adaptive educational system. Interacting with Computers 18, 385-
409.
Khoury, R., Karray, F. & Kamel, M.S. (2008). Keyword extraction rules based on a part-of-speech hierarchy. Int. J. Advanced Media and
Communication 2 (2), 138-153.
Klasnja-Milicevic, A., Vesin, B., Ivanovic, M. & Budimac, Z. (2011). E-Learning personalization based on hybrid recommendation strategy
and learning style identification. Computers & Education 56, 885-899.
Latham, A., Crockett, K. & Bandar, Z. (2010). A Conversational Expert System Supporting Bullying and Harassment Policies. In: Proc.
ICAART 2010, 163-168.
Latham, A., Crockett, K., McLean, D. & Edmonds, B. (2009). Using Learning Styles to Enhance Computerised Learning Systems, in Proc.
2009 Annual Research Student Conference, Manchester Metropolitan University, Manchester, UK.
Latham, A.M., Crockett, K.A., McLean, D.A. & Edmonds, B. (2010). Predicting Learning Styles in a Conversational Intelligent Tutoring
System. ICWL 2010, LNCS 6483, 131-140.
Latham, A.M., Crockett, K.A., McLean, D.A., Edmonds, B. & O’Shea, K. (2010). Oscar: An Intelligent Conversational Agent Tutor to
Estimate Learning Styles. In: Proc. IEEE WCCI 2010, 2533-2540.
Latham, A.M., Crockett, K.A., McLean, D.A. & Edmonds, B. (2011). Oscar: An Intelligent Adaptive Conversational Agent Tutoring
System. LNAI 6682, 563-572.
Leontidis, M. & Halatsis, C. (2009). Integrating Learning Styles and Personality Traits into an Affective Model to Support Learner’s
Learning. ICWL 2009, LNCS 5686, 225-234.
Li, Y., Bandar, Z., McLean, D. & O’Shea, J. (2004). A Method for Measuring Sentence Similarity and its Application to Conversational
Agents. In Proc. FLAIRS 2004, 820-825.
Mairesse, F., Walker, M., Mehl, M. & Moore, M. (2007). Using Linguistic Cues for the Automatic Recognition of Personality in
Conversation and Text. J. Artificial Intelligence Research 30, 457-501.
Melis, E., Andrès, E., Büdenbender, J., Frishauf, A., Goguadse, G., Libbrecht, P., Pollet, M. & Ullrich, C. (2001). ActiveMath: A web-based
learning environment. Int. J. Artificial Intelligence in Education 12 (4), 385-407.
Michie, D. (2001). Return of the Imitation Game. Electronic Transactions on Artificial Intelligence 6, 203-221.
Michie, D. & Sammut, C. (2001). Infochat Scripter’s Manual, ConvAgent Ltd, Manchester, UK.
Mitrovic, A. (2003). An Intelligent SQL Tutor on the Web. Int. J. Artificial Intelligence in Education 13, 171-195.
O’Shea, J., Bandar, Z. & Crockett, K. (2011). Systems Engineering and Conversational Agents. In Tolk, A. & Jain, L.C. (Eds), Intelligence-
Based Systems Engineering, Intelligent Systems Reference Library 10, Springer-Verlag Berlin Heidelberg.
O’Shea, K., Crockett, K. & Bandar, Z. (2011). An Approach to Conversational Agent Design using Semantic Sentence Similarity. IEEE
Transactions on Systems, Man and Cybernetics, under review.
Özpolat, E. & Akar, G.B. (2009). Automatic detection of learning styles for an e-learning system. Computers & Education 53, 355-367.
Papanikolaou, K.A., Grigoriadou, M., Kornilakis, H. & Magoulas, G.D. (2003). Personalizing the Interaction in a Web-based Educational
Hypermedia System: the case of INSPIRE. User Modeling and User-Adapted Interaction 13, 213-267.
Popescu, E. (2010). Adaptation provisioning with respect to learning styles in a Web-based educational system: an experimental study.
Journal of Computer Assisted Learning 26, 243-257.
Pudner, K., Crockett K.A. & Bandar, Z. (2007). An Intelligent Conversational Agent Approach to Extracting Queries from Natural
Language. In Proc. WCE Int. Conf. Data Mining and Knowledge Engineering 2007, 305-310.
Sammut, C. (2001). Managing Context in a Conversational Agent. Linkoping Electronic Articles in Computer & Information Science 3 (7).
Linkoping University Electronic Press, Sweden.
Sarrafzadeh, A., Alexander, S., Dadgostar, F., Fan, C. & Bigdeli, A. (2008). “How do you know that I don’t understand?” A look at the future
of intelligent tutoring systems. Computers in Human Behaviour 24, 1342-1363.
Spallek, H. (2003). Adaptive Hypermedia: A New Paradigm for Educational Software. Advances in Dental Research 17 (1), 38-42.
Wang, T., Wang, K. & Huang, Y. (2008). Using a Style-based Ant Colony System for Adaptive Learning. Expert Systems with Applications
34(4), 2449-2464.
Woo, C.W., Evens, M.W., Freedman, R., Glass, M., Shim, L.S., Zhang, Y., Zhou, Y. & Michael, J. (2006). An intelligent tutoring
system that generates a natural language dialogue using dynamic multi-level planning. Artificial Intelligence in Medicine 38, 25-46.
Yannibelli, V., Godoy, D. & Amandi, A. (2006). A genetic algorithm approach to recognise students’ learning styles. Interactive Learning
Environments 14 (1), 55-78.