NLP for Student and Teacher: Concept for an AI based
Information Literacy Tutoring System
P. Libbrecht (a), T. Declerck (b), T. Schlippe (a), T. Mandl (c) and D. Schiffner (d)
(a) IUBH Fernstudium, Bad Reichenhall, Germany
(b) DFKI GmbH, Saarbrücken, Germany
(c) University of Hildesheim, Hildesheim, Germany
(d) DIPF | Leibniz Institute for Educational Research and Information, Frankfurt, Germany
Abstract
We present the concept of an intelligent tutoring system which combines web search for learning purposes with state-of-the-art natural language processing techniques. Our concept is described for the case of teaching information literacy, but has the potential to be applied to other courses or to the independent acquisition of knowledge through web search. The concept supports both students and teachers. Furthermore, the approach addresses issues such as AI explainability, privacy of student information, assessment of the quality of retrieved information, and automatic grading of student performance.
1. Motivation
Information literacy is a core skill for the digital age. In modern education and work environments it is of growing importance, as knowledge work is increasingly based on large and rapidly changing knowledge sources. Searching and organizing knowledge is a constant requirement. Higher education sometimes teaches information literacy in dedicated courses, but often only within another course. Studies show that the level is low: for example, students have difficulties using operators in search terms and organizing literature, and tend not to know appropriate sources for finding scientific literature.
The potential of Articial Intelligence (AI) in higher
education still needs to be explored and innovative
applications need to be developed. Can computers
support teaching sta in coaching information com-
petency? – The research area “AI in Education” ad-
dresses the application and evaluation of AI meth-
ods in the context of education and training. One
of the main focuses of this research is to analyze
and improve teaching and learning processes. On the
one hand, deep learning – learning in multi-layered
(“deep”) articial neural networks – has become a cen-
tral component of AI research and numerous libraries
or frameworks1have been created that simplify the
work and support the creation of one's own experiments.
On the other hand, many educational institutions already conduct their courses, exercises, and examinations online. This means that student assessments are already available in digital, machine-readable form, offering a wide range of analysis options. Focusing on information literacy, a course typically consists of teaching material in text form, and the course participants themselves practice information skills and generate text in online research and essays. However, the evaluation of free texts such as essays, references and research methodology still requires intensive manual work.
Consequently, we address the question of which methods of Natural Language Processing (NLP) can support the coaching of information competency and how they can be applied for the teacher and for the student. The focus is on the combination of various deep learning approaches to help students accelerate the learning process through automatic feedback, but also to support teachers by pre-evaluating free text and suggesting corresponding scores or grades.
2. Related Work
Lazonder [1] showed that searching and talking about the search has led to positive learning effects. In a similar fashion, providing feedback and suggestions around a search activity can support the reflection on search tools' usage, on one's information needs and on the goals of the task at hand. In his keynote at ECIR 2020 (https://ecir2020.org/keynote-speakers/), C. Shah envisioned the next decade of research in search and recommendation, where modeling the tasks is central to raising the quality of the results. Defined tasks around the learning of information literacy are a good example of a context where recommendations can be made more relevant to the process. Intelligent Tutoring Systems (ITS) follow a long tradition of environments where AI supports learning. The most widespread didactic situation in which ITS have been employed is in exercises where direct feedback (e.g. in the form of a score or recommendation) is offered following interactions with a dedicated system. Multiple example intelligent tutoring systems exist and many follow the model of [2]. Our research aims to observe the work of the students instead of requiring exercise-specific actions.
State-of-the-art research and the basis for the development of an NLP system to coach information competency include sentiment analysis [3], topic identification [4], named entity recognition [5], text summarization [6], word sense disambiguation [7] and information retrieval [8]. A major scientific challenge is the explainability of system outputs [9]. The NLP methods can be combined with knowledge graphs to include ontology-based knowledge coding in the processes [10]. This can be enhanced by including visualizations that represent both the inputs of the student and the results of the NLP analysis. Lachner [11] shows that graph representations can support the understanding of a topic. To represent ontologies or complex topics, knowledge graphs such as [12] help to identify context.
For processing in deep learning architectures, sequences of words are encoded into vector spaces in order to perform computations in neural networks. Tools for text vectorization are Word2Vec, GloVe [13] or fastText [14]. Skip-thought vectors [15], the universal sentence encoder (USE) and bidirectional encoder representations from transformers (BERT) [16] are methods that also support sentence embeddings in the semantic vector space. Sawatzki et al. [17] investigate and compare state-of-the-art deep learning techniques for automatic short answer grading. Their experiments demonstrate that systems based on BERT [16] performed best for English and German. On their German data set they report a Mean Absolute Error of 1.2 points, i.e. 31% of the student answers are correctly graded and in 40% the system deviates by 1 out of 10 points.
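As a brief illustration of such sentence embeddings, the following sketch encodes a model answer and a student answer into a shared semantic vector space and compares them by cosine similarity. It assumes the sentence-transformers library and an illustrative multilingual checkpoint; the models actually used for grading may differ.

```python
# Minimal sketch: sentence embeddings and cosine similarity, assuming the
# sentence-transformers package; the checkpoint name is an illustrative choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

model_answer = "A primary source is an original document created at the time of the event."
student_answer = "Primary sources are original materials produced when the event happened."

embeddings = model.encode([model_answer, student_answer], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Semantic similarity between the answers: {similarity:.2f}")
```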
3. Information Literacy Courses
The term Information Literacy is often used synonymously with Media Literacy. According to UNESCO, it "constitutes a composite set of knowledge, skills, attitudes, competencies and practices that allow effectively access, analyze, critically evaluate, interpret, use, create and disseminate information and media products with the use of existing means and tools on a creative, legal and ethical basis. It is an integral part of so-called '21st century skills' or 'transversal competencies'" [18]. In higher education, these information skills are highly relevant for students. Nevertheless, the information literacy of students is often measured as low [19]. Courses on basic scientific work cover several domains of information literacy. Often, there is a strong focus on searching skills, correct citing and assembling short abstracts based on scientific texts. The practice of teaching information literacy skills has not developed much towards digital formats. Some open online courses exist [20], but there is no use of AI tools yet.
An example is the ILO-MOOC (informationliteracy.eu), which allows studying in a self-paced manner. The feedback for students is to show right or wrong after answering multiple-choice questions. Another example is at IUBH University, where bachelor students with diverse backgrounds are trained in the basics of scientific work. While the focus of the assignments is on the production of written texts, the course involves all aspects of information literacy. The course is made for both remote and on-site attendance and involves various communication channels, many of them happening on the web. The resulting competencies are expected to enable independent and self-confident scientific work, which may be strongly supported by an automatic evaluation.
4. Proposed System Architecture
Our concept proposes an integration within the web activities of a learner working on an assignment task which includes searching, reading, evaluating, and writing: Using JavaScript or web extensions, the text and timestamps of the search results, of the viewed publications, and of the input text can be used as features for the NLP models. Based on the assignment's objective, the feature vectors generated from the student's behavior and text are processed by our NLP models. The models are trained with annotated text data from previous course members (model solutions, already graded works, other annotated works) to generate textual feedback.
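To make the captured features more tangible, the sketch below shows one possible record structure for the logged activity; the event kinds and field names are hypothetical illustrations, not a fixed schema of our system.

```python
# Hypothetical sketch of the activity data a browser extension could log;
# field names and event kinds are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ActivityEvent:
    timestamp: datetime   # when the interaction happened
    kind: str             # e.g. "search", "view_document", "edit_text"
    text: str             # search terms, viewed text, or the current draft

@dataclass
class AssignmentSession:
    student_id: str
    assignment_id: str
    events: List[ActivityEvent] = field(default_factory=list)

    def texts_of(self, kind: str) -> List[str]:
        """Collect the raw text of one event kind (e.g. all search queries)
        so that it can be vectorized and passed to the NLP models."""
        return [event.text for event in self.events if event.kind == kind]
```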
The concept comprises several tasks for which sup-
port for students can be provided. For the sake of
brevity we illustrate two of them:
A core task in scientic work and, thus, in teaching
information literacy is web search. Students are of-
ten required to search for documents fullling certain
requirements e.g. within a closed collection of docu-
ments. An AI system observes the search terms input
by the student and compares the strategies to identi-
ed objectives. The system tracks the actions of the
students regarding search terms, observed documents,
headers and time spent. It then suggests the most
appropriate steps toward reaching better results. In
the example of searching in a closed collection with a
pre-dened goal, suggestions for further search terms
leading to relevant documents can be made.Text vec-
torization and a deep learning model-based classi-
cation can be used for keyword extraction [21]. As
such, the student learns in the direct interaction with
a search system and improves skills based on previous
activities by providing automatic feedback.
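The sketch below illustrates this idea in a strongly simplified form: candidate n-grams from documents known to be relevant are ranked by embedding similarity to the task objective and offered as additional search terms. Note that the cited approach [21] trains a dedicated deep model for keyword extraction; the embedding-based ranking, library choices, and example texts here are only illustrative assumptions.

```python
# Simplified sketch of suggesting further search terms: rank candidate
# n-grams from relevant documents by embedding similarity to the task goal.
from sklearn.feature_extraction.text import CountVectorizer
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

task_goal = "Find literature on assessing the credibility of online sources."
relevant_documents = [
    "Evaluating source credibility is central to information literacy instruction.",
    "Fact-checking and lateral reading help students judge online information.",
]

# Candidate keywords: uni- and bigrams occurring in the relevant documents.
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
vectorizer.fit(relevant_documents)
candidates = vectorizer.get_feature_names_out().tolist()

# Rank candidates by semantic similarity to the task goal.
goal_embedding = model.encode(task_goal, convert_to_tensor=True)
candidate_embeddings = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(goal_embedding, candidate_embeddings)[0]
top = sorted(zip(candidates, scores.tolist()), key=lambda x: x[1], reverse=True)[:5]
print("Suggested additional search terms:", [term for term, _ in top])
```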
Another exemplary task in teaching information literacy is related to academic writing. Students are asked to assemble a short summary and synthesis of several papers. The system supports them in analyzing the writing, recognizing the parts in a certain paper, and checking whether the short summary is adequate and free of plagiarism. Siamese neural networks can be used to detect similarities here [22]. The system also uses NLP to analyze the coherence of the text. Here, an AI-based system also gives context-dependent suggestions on how to improve the text. The suggestions provided can refer to documents, showing a title, the time when the student saw it, and a link to the document as last accessed. This is effective for the learner as the reading is kept in memory. Such support within the writing process can help students more than a theoretical unit on academic writing.
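To sketch how such a similarity detector could look, the following PyTorch snippet defines a small Siamese encoder whose two branches share weights and whose output is the cosine similarity of two text encodings, trained with a contrastive-style loss. It illustrates the general idea behind [22] rather than reproducing that system; the layer sizes and loss are assumptions.

```python
# Illustrative Siamese text encoder in PyTorch: both inputs pass through the
# same (shared) embedding and LSTM, and their encodings are compared by
# cosine similarity. Hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseTextEncoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, sequence_length) of vocabulary indices
        _, (hidden, _) = self.lstm(self.embedding(token_ids))
        return hidden[-1]  # final hidden state as the text representation

    def forward(self, left_ids: torch.Tensor, right_ids: torch.Tensor) -> torch.Tensor:
        return F.cosine_similarity(self.encode(left_ids), self.encode(right_ids))

def contrastive_loss(similarity: torch.Tensor, label: torch.Tensor, margin: float = 0.5):
    # label = 1 for pairs that should be similar (e.g. paraphrased passages),
    # label = 0 for unrelated pairs; unrelated pairs are penalized only above the margin.
    positive = label * (1.0 - similarity)
    negative = (1.0 - label) * torch.clamp(similarity - margin, min=0.0)
    return (positive + negative).mean()
```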
In our suggested AI-based information literacy tutoring system, word sequences in search terms, retrieved documents, and reference documents are encoded into vector spaces in order to perform computations in deep learning architectures. A fine-tuning architecture such as BERT, which has proven itself in many NLP tasks, provides the basis of our system: it is based on a pre-trained deep learning model which, supplemented by a linear regression layer, is adapted to our specific tasks, e.g. grading short summaries or documents retrieved from the web search, and the parameters of the embeddings are tuned accordingly. A data set with labeled and graded documents and summaries from previous information literacy courses serves to optimize the model for predicting scores.
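A minimal sketch of such a fine-tuning setup using the Hugging Face transformers library is shown below: a pre-trained BERT checkpoint is extended with a single-output regression head that predicts a score for a pair of model solution and student text. The checkpoint name and the pairing of inputs are assumptions, not the exact configuration of our system.

```python
# Sketch of BERT fine-tuning with a regression head for score prediction,
# using the Hugging Face transformers library; the checkpoint name is an
# illustrative assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT,
    num_labels=1,               # a single output neuron acts as the regression layer
    problem_type="regression",  # mean squared error is used when labels are provided
)

def predict_score(model_solution: str, student_text: str) -> float:
    """Predict a score for a student text relative to a model solution."""
    inputs = tokenizer(model_solution, student_text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

# During fine-tuning, graded texts from earlier courses provide the labels:
# outputs = model(**inputs, labels=torch.tensor([[7.5]]))
# outputs.loss.backward()  # adapts the regression head and the BERT parameters
```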
To achieve a steeper learning curve and to guarantee explainability, we suggest two methods: (1) highlighting keywords and (2) displaying the confidence score of the system's output. The keywords can be retrieved by adapting the BERT fine-tuning architecture to extract named entities as proposed in [23] and [24]. The confidence score can be obtained by mapping the predicted scores to classes and outputting a vector that contains a probability for each class.
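The following sketch illustrates the confidence score: predicted grades are discretized into classes (here, hypothetical grade bands), a softmax over the class logits yields a probability per class, and the probability of the predicted class is reported as the system's confidence. The number of bands and the example values are assumptions.

```python
# Sketch of deriving a confidence score: class logits (one per grade band)
# are turned into probabilities, and the probability of the predicted band
# is reported as the confidence. Band count and values are illustrative.
import torch
import torch.nn.functional as F

def confidence_from_logits(class_logits: torch.Tensor):
    probabilities = F.softmax(class_logits, dim=-1)   # one probability per grade band
    predicted_band = int(torch.argmax(probabilities))
    return predicted_band, float(probabilities[predicted_band])

# Example: five hypothetical grade bands covering 0-10 points.
logits = torch.tensor([0.2, 1.5, 3.1, 0.7, -0.4])
band, confidence = confidence_from_logits(logits)
print(f"Predicted grade band {band} with confidence {confidence:.2f}")
```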
Further feedback to both teachers and learners can be given by visualizations of individual states and configurations of the system. Given the exemplary tasks, the trial-and-error processes can be shown as different paths along a timeline that can be shared by the student with a teacher to allow for better feedback. Also, the categorization and identification of correct steps are to be shown to the learner using clear visualizations to help them understand the decision making.
The analysis of the evolving students' work gathers data that should not necessarily be shown to fellow students or teachers: this data consists of personal trial-and-error processes and is to be considered private. Other content, e.g. from chat rooms and forums, can be considered public. It is adequate that a bot can provide answers using information that all chat members have seen (e.g. lecture scripts, assignments, posts). Similarly, submitted assignments' text data can be automatically graded based on existing information such as earlier assignments or expert texts.
5. Conclusion and Future Work
We have described the architecture of an intelligent tutoring system which combines web search and natural language processing techniques in the context of information competency. After implementing the system and training the machine learning models, the system needs to be evaluated and optimized for students and teachers regarding usability and efficiency, for which several courses exist. With the help of metrics, we will reduce the error rate of the trained models. Finally, we intend to speed up the system. Throughout the implementation, usability tests are repeatedly performed to ensure the quality of the proposed system.
We plan to apply the architecture within several courses, adjust the tasks to be automatically measurable, and annotate corpora of articles so that an automatic evaluation yields productive feedback. Using this system, we expect to answer the following questions: how to adequately capture the students' activity, how to select which information to store and evaluate, and how to offer support which is timely and relevant for the learning process.
References
[1] A. Lazonder, Do two heads search better than one? Effects of student collaboration on web search behaviour and search outcomes, BJET 36 (2005) 465–475.
[2] K. VanLehn, The Behavior of Tutoring Systems, I. J. Artificial Intelligence in Education 16 (2006) 227–265.
[3] H. Liu, Sentiment Analysis of Citations Using
Word2vec, arXiv:1704.00177 (2017).
[4] M. Lamba, M. Margam, Metadata Tagging
of Library and Information Sciences Theses:
Shodhganga (2013-2017), 2018. doi:10.5281/
zenodo.1475795.
[5] X. Li, J. Feng, Y. Meng, Q. Han, F. Wu, J. Li, A Unified MRC Framework for Named Entity Recognition, arXiv:1910.11476 (2019).
[6] A. Padmakumar, A. Saran, Unsupervised Text
Summarization Using Sentence Embeddings, in:
Technical Report, University of Texas at Austin,
2016.
[7] Y. Wang, M. Wang, H. Fujita, Word Sense Dis-
ambiguation: A comprehensive knowledge ex-
ploitation framework, Knowledge-Based Sys-
tems (2020).
[8] H. Zhang, G. Cormack, M. Grossman,
M. Smucker, Evaluating Sentence-Level
Relevance Feedback for High-Recall Information
Retrieval, Inf. Retr. J. (2020).
[9] H. Liu, Q. Yin, W. Wang, Towards Explainable
NLP: A Generative Explanation Framework for
Text Classication, in: A. Korhonen, D. Traum,
L. Màrquez (Eds.), ACL, 2019.
[10] D. Gromann, L. Anke, T. Declerck, Special Issue
on Semantic Deep Learning, Semantic Web 10
(2019) 815–822.
[11] A. Lachner, C. Neuburg, Learning by writing
explanations: Computer-based Feedback about
the Explanatory Cohesion Enhances Students’
Transfer, Instructional Science 47 (2019) 19–37.
[12] M. Jaradeh, A. Oelen, M. Prinz, M. Stocker, S. Auer, Open research knowledge graph: A system walkthrough, in: TPDL, 2019.
[13] J. Pennington, R. Socher, C. Manning, GloVe: Global Vectors for Word Representation, in: EMNLP, 2014.
[14] P. Bojanowski, E. Grave, A. Joulin, T. Mikolov,
Enriching Word Vectors with Subword Informa-
tion, Transactions of the Association for Compu-
tational Linguistics 5 (2017) 135–146.
[15] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel,
A. Torralba, R. Urtasun, S. Fidler, Skip-Thought
Vectors, in: NIPS, MIT Press, Cambridge, MA,
USA, 2015.
[16] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, in: NAACL-HLT, 2019.
[17] J. Sawatzki, T. Schlippe, M. Benner-Wickner,
Deep Learning Techniques for Automatic Short
Answer Grading, in: submitted to: COLING,
ACL, 2020.
[18] The Moscow Declaration on Media and Information Literacy, https://iite.unesco.org/mil, accessed 2020-05-02.
[19] A. Hebert, Information literacy skills of first-year Library and Information Science graduate students: An exploratory study, Evidence Based Library and Information Practice 13 (2018) 32–52.
[20] S. Nowrin, L. Robinson, D. Bawden, Multi-
lingual and Multi-cultural Information Literacy:
Perspectives, Models and Good Practice, Global
Knowledge, Memory and Communication (2019).
[21] Y. Zhang, M. Tuo, Q. Yin, L. Qi, X. Wang, T. Liu,
Keywords Extraction with Deep Neural Network
Model, Neurocomputing 383 (2020) 113 – 121.
[22] E. Hambi, F. Benabbou, A Multi-Level
Plagiarism-Detection-System Based on Deep
Learning Alg., in: IJCSNS, 2019.
[23] K. Hakala, S. Pyysalo, Biomedical Named En-
tity Recognition with Multilingual BERT, in:
BioNLP-OST, 2019.
[24] E. Taher, S. Hoseini, M. S., Beheshti-NER: Per-
sian named entity recognition Using BERT, in:
ICNLSP, 2019.