Assessing English as a Second Language:
From Classroom Data to a Competence-Based
Open Learner Model
Susan BULL, Barbara WASSON, Matthew D. KICKMEIER-RUST, Eli MOE, Cecilie HANSEN, Klaus HAMMERMÜLLER
Electronic, Electrical and Computer Engineering, University of Birmingham, UK
Department of Information Science & Media Studies, University of Bergen, Norway
Knowledge Management Institute, Technical University of Graz, Austria
InterMedia, Uni Health, Uni Research AS, Norway
Talkademy, Austria
Abstract: With the increase of ICT in classrooms comes much data that can be used for evidence-based assessment. We focus on harnessing and interpreting this data to empower teachers in formative assessment. We describe e-assessment of English as a Second Language and illustrate how we move from data collected in classroom activities, through an automated assessment method, to visualising competence levels in an open learner model.
Keywords: second language, evidence-based formative assessment, open learner model
Today’s classrooms may comprise a range of tools [1] producing much data that can be
tapped to support formative assessment. There is a need for methods to capture and present the data so teachers can interpret and transform it into a meaningful form for students and
themselves. We are developing such tools and methods for English as a Second Language.
We describe moving from classroom data, through an automated assessment method, to an
open learner model (OLM) for use by teachers to support their formative assessment work.
The Common European Framework of Reference for languages (CEFR) offers competence-based common reference levels in language learning [2]. These are based on language use and abilities (what students can do). CEFR is not detailed enough to design diagnostic testing items or define task difficulty, but is a useful starting point [3]. A similar focus is at the forefront of many current language courses and applications. In Norway, for example, a specified set of learning goals and competences must be integrated into English teaching in schools [4], and teachers plan activities to address the competences. Our OLM provides students and teachers with an overview of current competence levels, enabling better planning of teaching and student recognition of their learning. The approach also offers a way to facilitate teachers’ classroom orchestration [5].
In this paper we introduce the OLM as a teacher and learner feedback tool, describe the data available to teachers and how they can transform interaction data for inclusion in a learner model, and outline how such data may be displayed to help raise awareness of competences.
1. Open Learner Models and Classroom Data
A learner model is a representation of a user’s skills and abilities, as inferred during their
interactions, and enables a system to adapt to the needs of the individual. Increasingly,
learner models are being opened to users as a means to help prompt learner reflection, help
teacher planning and decision-making, etc. [6]. There are now also strong arguments for
placing OLMs in the centre of contexts where there are multiple sources of data available
for the learner model [7],[8],[9] since a variety of tools are in use in classrooms [1]. While
an OLM can be likened to technology-based student progress and performance reports, it models and externalises competences and skills rather than reporting progress. The
problem in technology-rich classrooms is that data is not always available in a form that
matches competence descriptors, and is often not able to pass data to a learner modelling
service. We therefore offer teachers a means to transform activity data for an OLM.
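To make the idea of transformation concrete, a record passed to a learner modelling service might look like the following sketch. The field names and format here are our own assumptions for illustration, not the actual service interface:

```python
from dataclasses import dataclass

# Hypothetical evidence record (our own illustration, not the actual
# learner modelling service format) for competence-aligned activity data.
@dataclass
class CompetenceEvidence:
    student: str
    competency: str   # e.g. a competence descriptor id such as "CEFR#094"
    level: float      # probability or normalised score in [0, 1]
    source: str       # the classroom activity that produced the evidence

evidence = CompetenceEvidence(
    student="Svein", competency="CEFR#094", level=0.5,
    source="Second Life chat log")
```

Whatever the concrete format, the key point is that each record ties a student and an activity to a competence descriptor rather than to a raw activity score.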
Usually activity results are stored with scores or qualitative descriptors in an overview.
An illustration of a teacher’s spreadsheet recording results is given in Figure 1. This allows the teacher to see at a glance how an individual is progressing in goal-related competences.
As time advances and further items are added, we expect to see a shift towards good and excellent, as is indeed happening in this example. We aim to support teachers with an approach that is similar to their self-generated methods (e.g. Figure 1), or methods with which they are already familiar, but providing a focus on overviews of current competences. These
can be presented through an OLM, so students may more readily recognise the importance
of competences (rather than specific activities), and teachers can gain an overview they can
act on in the classroom or in later planning.
Figure 1: Example of a teacher's record of competencies that combines colour with text
This is in line with education policy in Europe moving from a focus on knowledge to a
focus on competence. For example, in Norway, the learning goals and competences cover
three areas: communication; language learning; culture, society and literature – each of
which comprises sets of competences [4]. For example, two of the “communication” competences are that after four years of English students should be able to “read and understand the main content of texts on familiar topics” and “understand and use common English words and phrases related to daily life, leisure time and interests, both orally and in written form”. Teachers plan how to incorporate appropriate activities into their classrooms to enable students to develop the competencies.
We illustrate with a set of activities aimed at 11-12 year-olds, including an electronic
reading and listening test; interactions in a virtual world (Second Life); and an electronic
self-assessment (from the European Language ePortfolio). Assessment methods, automatic
and manual, are applied to data from these activities to determine achievement level for
relevant competencies. The first activity, the online listening and reading test, has a mix of item types: multiple choice, click item, click text, click name, click word, move paragraph. Each item is weighted according to difficulty by professional test developers, and these weights, along with student answers and other test item information, are used by ProNIFA (an automatic assessment method, see below) to generate competence levels for students taking the test before data is passed to the OLM. The second data set derives from activity within Second Life, and includes chat logs and video recordings of activity in 3D space. For example, from Second Life we get (i) a simple chat log file (time stamp, chatting person/entity, chat text); (ii) a set of competencies (CEFR skills [2], shown below), specified in a text file (number, id, initial probability that students have that skill, short description); and (iii) educator-defined (scripted) rules, which vary from very simple, such as checking whether a certain entity writes a certain text, to more complicated, such as computing distances travelled in Second Life. ProNIFA parses the log files, checks whether the rules apply and updates the probabilities of the competencies (and the probability distribution over the competence states).
(i) [07:21 UTC] <b><i>Teacher</i></b>Well done, Svein.<br>
(ii) 001 CEFR#094 0,5 Listening A1
(iii) [Rule1] Who=Teacher What=Well done, <NAME>. ASkills=1;2 AUpdate=0,2 LSkills=3 LUpdate=0,1
NB: If the teacher says "Well done" and a name, the probabilities of skills 1 and 2 for learner <NAME> are
increased by 0.2; and for skill 3, decreased by 0.1.
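As an illustration of how such a rule might be applied to the log format above, here is a minimal sketch. The rule structure, function names and clamping behaviour are our own assumptions, not ProNIFA’s actual implementation:

```python
import re

# Hypothetical re-implementation (our own sketch, not ProNIFA's code) of
# the example rule: when the teacher writes "Well done, <NAME>.", raise
# learner <NAME>'s probability for skills 1 and 2 by 0.2 (ASkills/AUpdate)
# and lower it for skill 3 by 0.1 (LSkills/LUpdate).
RULE = {
    "who": "Teacher",
    "pattern": re.compile(r"Well done, (\w+)\."),
    "available": {1: 0.2, 2: 0.2},
    "lacking": {3: 0.1},
}

LOG_LINE = re.compile(r"\[(.+?)\]\s*<b><i>(.+?)</i></b>(.*?)(?:<br>)?$")

def apply_rule(log_line, skills, initial=0.5):
    """Parse one chat-log line and update per-learner skill probabilities."""
    m = LOG_LINE.match(log_line)
    if not m:
        return
    _timestamp, speaker, text = m.groups()
    if speaker != RULE["who"]:
        return
    hit = RULE["pattern"].search(text)
    if not hit:
        return
    probs = skills.setdefault(hit.group(1), {})
    for skill, delta in RULE["available"].items():
        probs[skill] = min(1.0, probs.get(skill, initial) + delta)
    for skill, delta in RULE["lacking"].items():
        probs[skill] = max(0.0, probs.get(skill, initial) - delta)

skills = {}
apply_rule("[07:21 UTC] <b><i>Teacher</i></b>Well done, Svein.<br>", skills)
```

Starting from the initial probability 0.5 given in (ii), this single log line lifts Svein’s skills 1 and 2 to 0.7 and lowers skill 3 to 0.4.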
The third data set is produced by student self-assessments. The European Language
ePortfolio self-assessment grid was used to elicit self-assessment of speaking, listening and
reading skills. Questions relate to various “can do” statements, e.g. “I can understand simple, short greetings and expressions, such as hello, thank you or you are welcome”, and students assess
themselves between “I can do this a bit / quite well / very well”. The teacher interprets these data sets and the results are manually entered directly into the OLM; i.e., not all data needs to be transformed using ProNIFA.
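A teacher’s manual interpretation could, for instance, map the three self-assessment options onto a numeric OLM value. The mapping below is purely hypothetical; the numeric levels and star scale are our assumptions, not part of the European Language ePortfolio:

```python
# Hypothetical mapping (our assumption, not part of the European Language
# ePortfolio) from self-assessment options to a numeric OLM star value.
SELF_ASSESSMENT_LEVELS = {
    "I can do this a bit": 1,
    "I can do this quite well": 2,
    "I can do this very well": 3,
}

def olm_value(answers, max_stars=5):
    """Average the self-assessed levels and rescale to a star rating."""
    mean = sum(SELF_ASSESSMENT_LEVELS[a] for a in answers) / len(answers)
    return round(mean / 3 * max_stars, 1)

# e.g. two "can do" statements answered "quite well" and "very well"
rating = olm_value(["I can do this quite well", "I can do this very well"])
```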
As explained above, not all data is immediately available in competence form, and
needs to be assessed either automatically or manually. ProNIFA (probabilistic non-invasive
formative assessment) is a tool to support teachers in the assessment process. It provides a user interface to data aggregation and analysis services and functions. Conceptually, the
functions are based on Competence-based Knowledge Space Theory (CbKST), originally
established by Doignon and Falmagne [10], a well-elaborated set-theoretic framework for
addressing the relations amongst problems (e.g. test items). It provides a basis for structuring a domain of knowledge and for representing the knowledge based on prerequisite
relations. While the original idea considered performance (behaviour, e.g. solving a test
item), extensions introduced a separation of observable performance and latent, unobservable competencies, which determine the performance [11]. CbKST assumes a finite set of more or less atomic competencies (in the sense of well-defined, small-scale descriptions of an aptitude, ability, knowledge, or skill) and a prerequisite relation
between those competencies. A prerequisite relation states that competency a is a prerequisite to acquire another competency b. If a person has competency b, we can assume they
also have competency a. Because more than one set of competences can be a prerequisite for
another (e.g., competency a or b are a prerequisite for acquiring competency c), prerequisite
functions have been introduced, relying on and/or type relations. A person’s competence
state is described by a subset of competencies. Due to the prerequisite relations between
competencies, not all subsets are admissible competence states. Using interpretation and
representation functions, the latent competencies are mapped to a set of tasks (or test items)
covering a domain: mastering a task correctly is linked to a set of necessary competencies;
not mastering a task is linked to a set of lacking competencies. This assignment induces a
performance structure: the collection of all possible performance states. Recent versions of
the conceptual framework are based on probabilistic mappings of competencies and performance indicators, accounting for lucky guesses or careless errors. This means that mastering a task correctly provides evidence for certain competencies and competence states, with a certain probability.
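The prerequisite idea can be made concrete with a toy sketch (our own illustration, not CbKST tooling): with the chain a → b → c, only four of the eight subsets of competencies are admissible competence states:

```python
from itertools import chain, combinations

# Toy illustration (our own, not CbKST tooling) of prerequisite relations:
# "a" is a prerequisite of "b", and "b" of "c".
PREREQUISITES = {"b": {"a"}, "c": {"b"}}
COMPETENCIES = {"a", "b", "c"}

def is_admissible(state):
    """A state is admissible if every member's prerequisites are also in it."""
    return all(PREREQUISITES.get(comp, set()) <= set(state) for comp in state)

def competence_structure():
    """Enumerate all admissible competence states (subsets of competencies)."""
    subsets = chain.from_iterable(
        combinations(sorted(COMPETENCIES), r)
        for r in range(len(COMPETENCIES) + 1))
    return [frozenset(s) for s in subsets if is_admissible(s)]

# For the chain a -> b -> c only {}, {a}, {a,b} and {a,b,c} are admissible;
# e.g. {b} is ruled out because its prerequisite "a" is missing.
states = competence_structure()
```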
ProNIFA retrieves performance data and updates the probabilities of competencies and
competence states in a domain. When a task is mastered, the probabilities of all associated competencies are increased; failing a task decreases the probabilities of the associated competencies. A distinctive feature for formative assessment is the multi-source approach.
ProNIFA allows connecting the analysis features to a range of evidence sources (such as the
listening and reading test or activity in a virtual world). The interpretation of the sources of evidence depends on a priori specified conditions, heuristics and rules, which associate sets of available and lacking competencies with achievements exhibited in the evidence. The idea is to define certain conditions or states in a given environment, for example: the direction and speed a learner is moving, following instructions in English in an adventure game, or a combination of correctly and incorrectly ticked multiple choice tasks in a
regular online test. The specification of such states can occur in multiple forms, ranging
from simply listing test items and the correctness of the items, to complex heuristics such as
the degree to which an activity reduced the ‘distance’ to the solution in a problem solving
process (technically this can be achieved by pseudo code scripting). The next step of this
kind of planning/authoring is to assign a set of competencies that can be assumed available, and a set assumed lacking, when a certain state occurs. This assumption can be weighted via the strength of the probability updates. In essence, this approach corresponds to the conceptual framework of micro adaptivity (e.g. [12]). Figure 2 shows ProNIFA-analysed data from a Second Life activity (see Section 1). The resulting model, built around atomic competencies and the related probability distribution, is passed to an OLM platform as a next step to support teacher appraisal efforts (Figure 3).
Figure 2: Screenshot of ProNIFA
Figure 3: OLM skill meters
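The probability update over competence states described above might be sketched as follows. This is a simplification under our own assumptions (uniform guess/slip parameters); actual ProNIFA rules weight the update strength per rule:

```python
# Simplified sketch (our own assumptions) of updating a probability
# distribution over competence states when a task is mastered or failed.
def update_state_probs(state_probs, required, mastered,
                       p_lucky=0.1, p_careless=0.1):
    """Multiplicative update of the distribution over competence states.

    state_probs: dict mapping frozenset (a competence state) -> probability
    required:    set of competencies the task needs
    mastered:    True if the learner solved the task correctly
    """
    updated = {}
    for state, p in state_probs.items():
        has_all = required <= state
        if mastered:
            likelihood = (1 - p_careless) if has_all else p_lucky
        else:
            likelihood = p_careless if has_all else (1 - p_lucky)
        updated[state] = p * likelihood
    total = sum(updated.values())
    return {s: p / total for s, p in updated.items()}

# Uniform prior over the admissible states of the chain a -> b,
# then one observation: a task requiring competency "a" is mastered.
states = [frozenset(), frozenset("a"), frozenset("ab")]
probs = {s: 1 / 3 for s in states}
probs = update_state_probs(probs, required={"a"}, mastered=True)
```

After the observation, states containing the required competency gain probability mass relative to the state without it, which captures the evidence-with-uncertainty behaviour described above.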
2. Competence Visualisation using an Open Learner Model
Using the easy-to-interpret ProNIFA display, teachers can add competency information to
the OLM, as shown in Figure 4. They provide a numerical value for the model (by clicking
on the stars) and may also include additional (non-modelled) feedback. The example shows
competences in English according to the required learning goals and competences [4]. So,
for example, if ProNIFA analysis of recent Second Life logs indicates increased competence in some aspect of a student’s learning, the teacher can easily update the OLM accordingly. This can happen alongside other, possibly automated input to the learner model,
self-assessments, etc., if other activities are also ongoing. Thus, both teachers and students
can flexibly use the OLM for formative assessment support.
Figure 4: Teacher updates to the OLM
As stated previously, information at this broad level of granularity is intended primarily to help gain a quick overview of students' competences which can, for example, be highly useful in classrooms where teachers are trying to manage classroom activities, give formative feedback, or update their teaching plan. In addition to the simple skill meters
(Figure 3), student rankings by competence, and a table overview are available. Work is
underway on word clouds, providing another way for teachers to quickly identify where to focus their attention [13], and on treemaps, which will allow drill-down to more detail, supporting more reflective formative assessment. These (and possibly other) learner model
views will help teachers easily interpret the kind of information they already collect (e.g.,
Figure 1), but in a more immediately usable format (or, in the case of the planned treemaps,
in a way that facilitates access to detail). Student use of the OLM, as well as promoting
awareness of their learning [6], will help focus students on thinking in terms of competences
(for English [4]), rather than activity-specific results (as in the example in Figure 1).
3. Summary
This paper has introduced a way to help teachers take the range of data now available about
students, and transform it into a form that can be used in an OLM. This can help students
note the importance of language competences, and help teachers’ classroom orchestration.
Acknowledgements

The project is supported by the European Community (EC) under the Information Society Technology priority of the 7th Framework Programme for R&D, contract no 258114 NEXT-TELL. This document does not represent the opinion of the EC and the EC is not responsible for any use that might be made of its content.
References

[1] Richardson, W. (2009). Blogs, Wikis, Podcasts, and Other Powerful Web Tools for Classrooms, California: Corwin Press.
[2] Council of Europe (n.d.). The Common European Framework of Reference for Languages. Accessed 1 June 2012.
[3] Huhta, A. & Figueras, N. (2004). Using the CEF to Promote Language Learning through Diagnostic
Testing, In K. Morrow (ed), Insights from the Common European Framework, Oxford: OUP, 65-76.
[4] Utdanningsdirektoratet. Læreplan i Engelsk. Accessed 1 June 2012.
[5] Dillenbourg, P. & Jermann, P. (2010) Technology for Classroom Orchestration, In M. S. Khine & I. M.
Saleh (eds). New Science of Learning: Cognition, Computers and Collaboration in Education, Berlin:
Springer Verlag, 525-552.
[6] Bull, S. & Kay, J. (2007). Student Models that Invite the Learner In: The SMILI Open Learner Modelling
Framework, International Journal of Artificial Intelligence in Education 17(2), 89-120.
[7] Morales, R., Van Labeke, N., Brna, P. & Chan, M.E. (2009). Open Learner Modelling as the Keystone of the Next Generation of Adaptive Learning Environments. In C. Mourlas & P. Germanakos (eds), Intelligent User Interfaces, Information Science Reference, 288-312, London: IGI Global.
[8] Mazzola, L. & Mazza, R. (2010). GVIS: A Facility for Adaptively Mashing Up and Representing Open
Learner Models, In M. Wolpers et al (eds), EC-TEL 2010, Berlin: Springer Verlag, 554-559.
[9] Reimann, P., Bull, S., Halb, W. & Johnson, M. (2011). Design of a Computer-Assisted Assessment
System for Classroom Formative Assessment, CAF11, IEEE.
[10] Doignon, J., & Falmagne, J. (1999). Knowledge Spaces. Berlin: Springer Verlag.
[11] Korossy, K. (1999). Modelling knowledge as competence and performance. In D. Albert & J. Lukas (eds), Knowledge Spaces: Theories, Empirical Research, and Applications, Mahwah, NJ: LEA, 103-132.
[12] Kickmeier-Rust, M. D., & Albert, D. (2011). Micro adaptivity: Protecting immersion in didactically
adaptive digital educational games. Journal of Computer Assisted Learning, 26, 95-105.
[13] Reimann, P., Bull, S. & Ganesan, P. (in press). Supporting the Development of 21st Century Skills: Student Facilitation of Meetings and Data for Teachers, TAPTA Workshop Proceedings, EC-TEL 2012.
... An important trend away from this has seen OLMs built as interfaces onto independent reusable learner model services for use by multiple systems (Kay et al. 2002;Brusilovsky et al. 2005;Kay 2008;Conejo et al. 2011;Kay and Kummerfeld 2012). Similarly, OLMs may aggregate data from several external systems, and present the combined evidence from these systems, to the user (Kay and Lum 2005;Bull et al. 2012). In addition to being able to view the learner model data (inspectable learner models), OLMs may permit some forms of interactive maintenance of the learner model between the system and the user. ...
... Taking a far broader view of learning, the many sensors used by the Quantified Self (Rivera-Pelayo et al. 2012) movement can be used to create models of a person's progress on their most important goals, such as learning to regulate their behaviour to improve their health (Kennedy et al. 2012). This is also in line with our previous observation that OLMs may now need to be able to combine data from multiple sources for visualisation (Bull et al. 2012). ...
... The growth of the web opened the possibility of learner model servers (Brusilovsky et al. 2005;Kay et al. 2002;ZapataRivera and Greer 2004b). This is reflected in the emergence of independent open learner models (Bull et al. , 2012Conejo et al. 2011;Kay 2008). These may support reuse of parts of the learner model by different applications, or allow the creation of OLMs that enable a learner to see their progress, potentially based on data from many sources, including various learning applications, over the long term (Bull and Gardner 2009;Gluga et al. 2010Gluga et al. , 2013). ...
The SMILI☺ (Student Models that Invite the Learner In) Open Learner Model Framework was created to provide a coherent picture of the many and diverse forms of Open Learner Models (OLMs). The aim was for SMILI☺ to provide researchers with a systematic way to describe, compare and critique OLMs. We expected it to highlight those areas where there had been considerable OLM work, as well as those that had been neglected. However, we observed that SMILI☺ was not used in these ways. We now reflect on the reasons for this, and conclude that it has actually served a broader role in defining the notion of OLM and informing OLM design. Since the initial SMILI☺ paper, much has changed in technology-enhanced learning. Notably, learning technology has become far more pervasive, both in formal and lifelong learning. This provides huge, and still growing amounts of learning data. The fields of Learning Analytics (LA), Learning at Scale (L@S), Educational Data Mining (EDM) and Quantified Self (QS) have emerged. This paper argues that there has also been an important shift in the nature and role of learner models even within Artificial Intelligence in Education and Intelligent Tutoring Systems research. In light of these trends, and reflecting on the use of SMILI☺, this paper presents a revised and simpler version of SMILI☺ alongside the original version. In both cases there are additional categories to encompass new trends, which can be applied, omitted or substituted as required. We now offer this as a guide for designers of interfaces for OLMs, learning analytics and related fields, and we highlight the areas where there is need for more research.
... (Reimann, 2011) There are several lines of research on e-assessment in NEXT-TELL, ranging from the development of assessment methods (both with teachers and from a formal model-based perspective, e.g. Kickmeier-Rust & Albert, 2013), to visualising assessment data and integration of this data in open learner models (Bull et al., 2012;Bull et al., 2013), which can be used by teachers and students for feedup, feedback, and feedforward (Hattie & Timerperly, 2007). This evidence-based approach (Mislevy, Almond, & Lukas, 2003) is particularly relevant in today's technology-rich classrooms, in which student use of technology creates a multitude of data (e.g. ...
... This evidence-based approach (Mislevy, Almond, & Lukas, 2003) is particularly relevant in today's technology-rich classrooms, in which student use of technology creates a multitude of data (e.g. artefacts and log data) that can be mined, assessed, and presented in ways that students and teachers can interpret to support learning (Bull et al., 2012). ...
... We found wonderful examples of assessment supported by Web 2.0 technologies. Unlike teachers in the other participating countries, the Norwegian teachers at DSS were able and willing to put the first version of some of the NEXT-TELL methods and tools into use (Cierniak et al., 2012;Bull et al., 2012) and have provided invaluable feedback to the project on issues of usability and problems with integrating ICT into practice. ...
Full-text available
This paper presents the Norwegian results of a baseline study of teacher practices with ICT. Through semi-structured interviews, six Norwegian teachers explain how digital technology not only changed aspects of their planning and classroom teaching, but also assessment and feedback. The results, together with similar results from England, Denmark, Germany, and Austria, contribute to the development of ICT support for formative e-assessment in the 21st Century classroom. Furthermore, through an analysis of the baseline interviews, tweets, blogs, forum posts, and discussions with teachers at conferences, we identified nine ICT-supported assessment methods being used in Norwegian classrooms. Our conclusion is that the interviewees are active users of ICT in all aspects of professional teacher practice, using both the tools provided and finding new tools to integrate technology into their professional practice.
... In addition, we have developed a number of formative assessment tools, such as RGFA and PRONIFA (Kickmeier- Rust et al., 2014;Kickmeier-Rust & Albert, 2016), which enable teachers to use real-time learning and visual analytics (Vatrapu, Teplovs, Fujita & Bull, 2011) for visualising student learning information that can inform formative feedback and pedagogical intervention. The tools can also aggregate and store the learning results in an Open Learner Model tool (Bull et al., 2012;Bull et al., 2016) that visualises student competence development. This article focuses on the iterative development of the TISL Heart. ...
... This is the essence of our approach to TISL: the focus is on using student data to inform teaching practice. Furthermore, as today's classrooms become populated with digital tools, more and varied student data are accessible; this student data can be harnessed and interpreted by teachers and by automated processes (Bull et al., 2012;Bull et al., in press;, and used as evidence to inform teaching practice. Figure 1). ...
Full-text available
Researchers have recently been calling for new models of teacher education and professional development for the 21st century. Teacher inquiry, where the teacher's own practice is under investigation, can be seen both as a way to improve day-to-day teaching in the classroom and as professional development for teachers. As such, it should also have a role in teacher education. In this article, we present the iterative development of the TISL Heart, a theory-practice model and method of teacher inquiry into student learning, which has a particular emphasis on the use of student results generated in the information and technology-rich classroom. This article proposes that this practice-near model is particularly relevant for teacher education, as it draws upon existing practices in using student data at a progressive school that focuses on the use of technology to enhance student learning. The article concludes by discussing the implications for its role in teacher education, particularly related to data literacy and its use in teaching.
... I Next-Tell ble et verktøy for kompetansemodeller (OLM) utviklet for å gi digitale representasjoner og visualiseringer av den laerendes kompetanser basert på digitalt registrerte vurderingssituasjoner 5 . Slik modellering av kompetanser, basert på digitalt innsamlet data, har vist seg å hjelpe laereren og den laerende i å planlegge og ta beslutninger for gruppe-og individuelle laeringsbehov (Bull, Brna & Pain 1995, Bull et al 2012, Bull & Wasson 2016. iComPAss tok i bruk erfaring og kunnskap fra Next-Tell og teorien om kompetansemodeller i klasserommet, for å se hvordan disse kunne brukes på en annen laeringsarena enn klasserommet -som var arbeidsplassen som laeringsarena. ...
Full-text available
Prosjektet inquire Competence for better Practice and Assessment ( iComPAss) oppstod fra behovet til to organisasjoner som begge fokuserer på opplæring. Det første var masterstudiet for organisering og ledelse ved Høgskolen i Sogn og Fjordane (HiSF), mens det andre var opplæring av brannkonstabler organisert av Sotra Brannvern IKS (SFR). Begge organisasjonene hadde behov for å heve kvaliteten på sin opplæring og finne metoder for å identifisere kompetansebehov og utvikle metoder og teknologi som kunne gi bedre oversikt over eksisterende opplæring og identifiserte kompetansebehov. Rapporten beskriver utviklingen av PraksisUtforskende Metode, og utviklingen av digitale verktøy for bedre kompetanseoversikt, trening- og opplæringsstøtte-
... The timeframe is about 5-10 years. The development of data collection methods and processing tools (e.g., Romero and Ventura 2007;Kickmeier-Rust and Albert 2012;Bull et al. 2012) and support for data usage through, for example, LA (Ferguson 2012) and teaching analytics (Vatrapu et al. 2012) has begun. Learning design is establishing itself as a promising field of research and practice (Mor et al. 2013). ...
Full-text available
Technology-rich learning environments generate rich streams of data pertaining to students’ and teachers’ actions and their outcomes. This data can be harnessed by teachers to monitor and improve their practice, but new methods and tools are needed that (1) help teachers to harness and interpret this data, and subsequently, (2) incorporate it into a framework of continuous professional development. Approaches and methods from teacher inquiry into student learning (TISL), learning design (LD) and learning analytics (LA) can be combined to support a teacher-led design inquiry of learning and innovation cycle. A transdisciplinary approach, which draws on insights from epistemic practice, pedagogical practice, design inquiry of learning, teacher inquiry, e-assessment, and learning and teaching analytics and visualisation, will produce methods and tools to enable teachers to reflect on their own teaching and student learning in order to improve teaching practice.
... The timeframe is about 5-10 years. The development of data collection methods and processing tools (e.g., Romero & Ventura, 2007;Kickmeier-Rust & Albert, 2012;Bull, Wasson, Kickmeier-Rust, Johnson & Moe, 2012) and support for data usage through, for example, learning analytics and teaching analytics (Vatrapu, Bull & Johnson, 2012) has begun. Learning design is establishing itself as a promising field of research and practice (Mor, Craft & Hernández-Leo, 2013). ...
The 12 Grand Challenges notified in this book provide a wealth of information and ideas to guide current policy makers to shape long-term policies and actions. These Grand Challenges are being formulated at a time when more and more of us are recognising the increasing importance of ICT in and for education and the necessity to find solutions to overcome the enormous gap between education and all other sectors of life and work. Three imminent trends identified along the 12 Grand Challenges note a fundamental paradigm shift in the role of new technologies supporting educational change. But this is not enough. The focus should be no longer on ICT tools and infrastructures but on open and flexible learning and teaching with the learner (and the educator) at the centre. This shows that the step from an early adoption of ICT use in education towards mainstreaming has started. It is all about the core business of education: Learning.
... Opening the learner model made it a first class learning resource in its own right [2,13]. Although OLMs began long before big data was so prevalent, they are now also looking towards big data, and approaches suitable for today's online learning contexts, for example: domain-independent reusable services for other systems [6,13], or aggregated from multiple external systems, with visualisation of the combined evidence to the user [5,13], are being developed. Thus, OLMs could be viewed as a specific kind of learning analytics, in that the visualisation is of the learner model rather than activities undertaken, performance, behaviour, etc. ...
Conference Paper
This paper compares approaches to visualising data for users in educational settings, contrasting visual learning analytics and open learner models. We consider the roots of each, and identify how each field can learn from experiences and approaches of the other, thereby benefiting both.
In recent years, the adoption of Learning Management Systems (LMSs) to support e-learning has grown steadily. Within these systems, identifying particular characteristics of students has become a meaningful basis for personalized support, since system elements can then be adapted to individual traits. One characteristic that has received little attention in personalized e-learning is students' learning disabilities (LD). Dyslexia is a common LD among Spanish-speaking university students, manifesting as a range of difficulties in reading. It requires special attention from higher-education institutions to detect, assess, and assist affected students during their learning process. This raises an open challenge: how can Spanish-speaking university students with dyslexia and/or reading difficulties be included in an e-learning process? This dissertation proposes and develops an approach to accommodate the characteristics of these students in the context of an LMS. As a first step, students who show reading difficulties, with or without a previous diagnosis of dyslexia, were identified; the compensatory strategies they could use to learn were detected; and the cognitive processes that may be impaired were assessed. To this end, methods and tools for the detection and assessment of these students were analyzed, designed, and developed. Moreover, a learner model comprising demographics, a reading profile, learning styles, and cognitive traits was defined.
The second step was to support and assist these students in overcoming their difficulties. This required making them aware of their reading difficulties, learning styles, and cognitive deficits. Such awareness promotes reflection by encouraging students to view and self-regulate their learning. Specialized recommendations were also needed to support this self-regulation, so methods and tools to assist these students were analyzed and developed, and adaptation processes to deliver learning analytics and specialized recommendations were defined. As a third step, mechanisms were constructed to integrate these tools into an LMS, providing a familiar environment that supports the detection, assessment, and assistance of affected students during an e-learning process. Finally, several case studies were conducted to evaluate the validity of the proposed methods and tools: pilot groups of students tested functionality and usability, while larger groups tested the usefulness and validity of the tools. Descriptive analyses as well as reliability and correlation analyses were performed.
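The four-part learner model described in this abstract can be sketched as a simple record. The field names and example comments below are our own illustrative assumptions, not the dissertation's actual schema:

```python
from dataclasses import dataclass, field

# A minimal sketch of the four-part learner model the abstract describes
# (demographics, reading profile, learning styles, cognitive traits).
# All field names are illustrative assumptions, not the real schema.

@dataclass
class LearnerModel:
    demographics: dict = field(default_factory=dict)     # e.g. age, native language
    reading_profile: dict = field(default_factory=dict)  # e.g. reading speed, error types
    learning_styles: dict = field(default_factory=dict)  # e.g. visual vs. verbal preference
    cognitive_traits: dict = field(default_factory=dict) # e.g. working-memory score

# Populating one component leaves the others as empty, fillable slots.
model = LearnerModel(demographics={'native_language': 'es'})
```

Keeping the four components as separate slots mirrors the abstract's structure: detection and assessment tools can each fill in their own part of the model independently.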
Today’s technology-enabled learning environments are becoming quite different from those of a few years ago, with increased processing power and a wider range of educational tools. This situation produces more data, which can be fed back into the learning process. Open learner models have already been investigated as tools to promote metacognitive activities, in addition to their potential for maintaining the accuracy of learner models by allowing users to interact directly with them, providing further data for the learner model. This paper suggests the use of negotiated open learner models as a means to both maintain the accuracy of learner models comprising multiple sources of data and prompt learner reflection during this model discussion process.
With the range of educational tools available it is now realistic for learner models to take account of broader information, and there are strong arguments for placing open learner models in the centre of environments with diverse sources of data [1],[2],[3]. This Interactive Event will demonstrate the Next-TELL approach to facilitating teachers’ use of data from a variety of sources, and will allow participants to interact at all stages of this process. The Interactive Event will comprise three parts: Going to the Chatterdale village: an OpenSim mystery for language learners; Interaction with ProNIFA (probabilistic non-invasive formative assessment) to help teachers transform Chatterdale log data for an open learner model; Interaction with the Next-TELL Open Learner Model to explore learner model visualisations from automated and manual sources.
It is believed that, with the help of suitable technology, learners and systems can cooperate in building a sufficiently accurate learner model, which can be used to promote learner reflection through discussion of the learner's knowledge, preferences, and motivational dispositions (among other characteristics). Open learner modelling is a technology that can help set up this discussion by giving learners a representation of those aspects of themselves as "believed" by the system. In this role, open learner modelling can be critical to a new breed of intelligent learning environments driven by the aim of supporting the development of self-management, signification, participation, and creativity in learners. In this chapter we provide an analysis of the migration of open learner modelling technology to common e-learning settings, the implications for modern e-learning systems in terms of the adaptations needed to support the open learner modelling process, and the expected functionality of a new generation of intelligent learning environments.
We describe a number of high-level design decisions that we found essential for a Computer-assisted Assessment System that is to be deployed in school classrooms for supporting formative assessment by teachers and self-assessment by students. In addition, the system needs to provide information to parents. Our design decisions comprise the use of the Open Learner Model approach to make diagnostic information available to the various stakeholders, the use of a modelling methodology to describe assessment methods declaratively (glass-box), and the decision to embed assessment in a flexible manner into current and emerging learning environments. Implications for system architecture are also described.
In this article we present an infrastructure for creating mash-ups and visual representations of the user profile that combine data from different sources. We explored this approach in the context of lifelong learning, where different platforms or services are often used to support the learning process. The system is highly configurable: data sources, data aggregations, and visualizations can be configured on the fly without changing any part of the software, and they adapt to the user's or the system's characteristics. The visual profiles produced can take different graphical formats, can be bound to different data, and adapt automatically to personal preferences, knowledge, and contexts. A first evaluation, conducted through a questionnaire, is promising in terms of the tool's perceived usefulness and the interest it generated.
This paper proposes providing teachers with real-time, accurate, and pedagogically relevant information to assist students in the development of 21st-century skills, across subject areas, using a variety of technologies and data sources. We suggest that recording activities both directly and indirectly (through student and peer reporting) while students practice skills such as meeting facilitation is likely to be a useful step in supporting their acquisition of such skills, and that visualisations of students' competencies can help teachers guide this development.
This chapter develops an extension of Doignon and Falmagne's knowledge structures theory by integrating it into a competence-performance conception. The aim is to show one possible way in which the purely behavioral and descriptive knowledge structures approach could be structurally enriched in order to account for the need for explanatory features of the empirically observed solution behavior. Performance is conceived as the observable solution behavior of a person on a set of domain-specific problems. Competence (ability, skills) is understood as a theoretical construct accounting for the performance. The basic concept is a mathematical structure termed a diagnostic, which creates a correspondence between the competence and the performance level. The concept of a union-stable diagnostic is defined as an elaboration of Doignon and Falmagne's concept of a knowledge space. Conditions for the construction and several properties of union-stable diagnostics are presented. Finally, an empirical application of the competence-performance conception in a small knowledge domain is reported, illustrating some advantages of the introduced modeling approach.
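The correspondence between competence and performance can be illustrated with a small sketch in the usual competence-based knowledge space style, where each problem is tagged with one or more alternative skill sets, any of which suffices to solve it. The problems and skills below are invented for illustration; this is not the chapter's formal diagnostic construction:

```python
# Hypothetical skill function: each problem maps to the alternative
# skill sets sufficient to solve it (all names are invented examples).
skill_fn = {
    'q1': [{'s1'}],          # q1 needs skill s1
    'q2': [{'s1', 's2'}],    # q2 needs both s1 and s2
    'q3': [{'s2'}, {'s3'}],  # q3 is solvable via s2 or via s3
}

def performance_state(competence):
    """Problems a learner with the given competence state should solve:
    those for which some sufficient skill set is a subset of it."""
    return {q for q, alternatives in skill_fn.items()
            if any(required <= competence for required in alternatives)}
```

For instance, a learner holding only skill `s1` is predicted to solve `q1` but not `q2` or `q3`; adding `s2` brings the remaining two problems into the performance state.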
In recent years, the learner models of some adaptive learning environments have been opened to the learners they represent. However, as yet there is no standard way of describing and analysing these 'open learner models'. This is, in part, due to the variety of issues that can be important or relevant in any particular learner model. The lack of a framework to discuss open learner models poses several difficulties: there is no systematic way to analyse and describe the open learner models of any one system; there is no systematic way to compare the features of open learner models in different systems; and the designers of each new adaptive learning system must repeatedly tread the same path of studying the many diverse uses and approaches of open learner modelling so that they might determine how to make use of open learner modelling in their system. We believe this is a serious barrier to the effective use of open learner models. This paper presents such a framework, and gives examples of its use to describe and compare adaptive educational systems.
The idea of utilizing the rich potential of today's computer games for educational purposes excites educators, scientists and technicians. Despite the significant hype over digital game-based learning, the genre is currently at an early stage. One of the most significant challenges for research and development in this area is establishing intelligent mechanisms to support and guide the learner, and to realize a subtle balance between learning and gaming, and between challenge and ability on an individual basis. In contrast to traditional approaches of adaptive and intelligent tutoring, the key advantage of games is their immersive and motivational potential. Because of this, the psycho-pedagogical and didactic measures must not compromise gaming experience, immersion and flow. In the present paper, we introduce the concept of micro-adaptivity, an approach that enables an educational game to intelligently monitor and interpret the learner's behaviour in the game's virtual world in a non-invasive manner. On this basis, micro-adaptivity enables interventions, support, guidance or feedback in a meaningful, personalized way that is embedded in the game's flow. The presented approach was developed in the context of the European Enhanced Learning Experience and Knowledge TRAnsfer project. This project also realized a prototype game, demonstrating the capabilities, strengths and weaknesses of micro-adaptivity.
We have learned from Theorem 2.2.4 that any learning space is a knowledge space, that is, a knowledge structure closed under union. The ∪-closure property is critical for the following reason. Certain knowledge spaces, and in particular the finite ones, can be faithfully summarized by a subfamily of their states. To wit, any state of the knowledge space can be generated by forming the union of some states in the subfamily. When such a subfamily exists and is minimal for inclusion, it is unique and is called the ‘base’ of the knowledge space. In some cases, the base can be considerably smaller than the knowledge space, which results in a substantial economy of storage in a computer memory. The extreme case is the power set of a set of n elements, where the 2^n knowledge states can be subsumed by the family of the n singleton sets. This property inspires most of this chapter, beginning with the basic concepts of ‘base’ and ‘atoms’ in Sections 3.4 to 3.6. Other features of knowledge spaces are also important, however, and are dealt with in this chapter.
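The base described above is easy to compute for a small finite space: a state belongs to the base exactly when it is not the union of the states it properly contains (so the empty state is never in the base). A minimal sketch, our own illustration rather than the book's algorithm:

```python
from itertools import combinations

def base_of(space):
    """Return the base of a finite knowledge space, given as a
    collection of sets closed under union. A state is in the base
    iff it is not the union of the states it properly contains."""
    states = [frozenset(s) for s in space]
    base = []
    for k in states:
        # Union of all states strictly contained in k (empty union = set()).
        covered = frozenset().union(*[s for s in states if s < k])
        if covered != k:
            base.append(k)
    return base

# The power set of {a, b, c}: its 2^3 = 8 states are all generated
# by unions of the three singleton states.
power_set = [frozenset(c) for r in range(4) for c in combinations('abc', r)]
```

For the power set example, `base_of(power_set)` yields just the three singleton states, mirroring the 2^n-versus-n economy noted above.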