International Journal of Open Information Technologies ISSN: 2307-8162 vol. 9, no.1, 2021
Abstract — Nowadays, the use of chatbots in industry and education has increased substantially. Building a chatbot system using traditional methods is less effective than applying machine learning (ML) methods. Earlier chatbots were based on finite-state machines, rules, knowledge bases, etc., but these methods still have limitations. Recently, thanks to advances in natural language processing (NLP) and neural networks (NN), conversational AI systems have made significant progress in many tasks such as intent classification, entity extraction, and sentiment analysis. In this paper, we implement a Vietnamese chatbot that is capable of understanding natural language. It can generate responses, take actions for the user and remember the context of the conversation. We used the Rasa platform for building the chatbot and propose an approach using a custom pipeline for the NLU model. In our work, we applied the pre-trained models FastText and multilingual BERT and two custom components in the pipelines. We evaluated and compared our proposed model with existing ones that use pre-defined NLU pipelines. An experimental comparison of three models showed that the proposed model performed better in intent classification and entity extraction.
Keywords — Vietnamese chatbot, Rasa NLU, pipeline, Rasa custom components
I. INTRODUCTION
Virtual assistants, or chatbots, appear more and more in our social life, for example Apple Siri, Google Assistant, Amazon Alexa, Microsoft Cortana and Yandex Alice. A virtual assistant can help the user perform a variety of requests, from making a call or searching for information on the internet to booking a ticket or a hotel room. Communication with the virtual assistant happens via voice or text interaction. Traditional chatbots are rule-based, so when an input question falls outside the script, the chatbot does not understand it and waits for customer care staff to respond to the user request. Whether a chatbot is intelligent or not depends on its ability to understand user context and work independently. To achieve this goal, chatbots must be built on machine learning and artificial intelligence. In the last few years, many studies have used machine learning methods based on neural networks [1]-[4] for building conversational AI systems.
Manuscript received Oct 15, 2020.
Nguyen Thi Mai Trang is with Volgograd State Technical University, Volgograd, Russia (corresponding author; phone: +79667853229).
M. Shcherbakov is with Volgograd State Technical University, Volgograd, Russia.

With the advantages of machine learning techniques, chatbot performance has increased. The use of chatbots has shown amazing efficiency in many areas of social life. Chatbots help businesses save costs and manpower and increase the efficiency of customer care. On the user side, quite a lot of people prefer to interact with chatbots [5]. With the
launch of several chatbot platforms such as Facebook Messenger, WhatsApp, Telegram, Skype and Slack, the number of chatbots increases day by day. Recently, the use of chatbots has increased not only in commerce but also in medicine and education.
Currently, there are many development platforms that support building chatbots based on machine learning, such as RASA, Amazon Lex, Microsoft Bot Framework, Google DialogFlow, IBM Watson Assistant, and so on. In this work, we use the RASA platform for building our chatbot for several reasons: RASA is an open-source natural language processing tool, it can run locally, and it has the advantages of self-hosted open source software such as adaptability, data control, etc. [6].
The rest of the paper is structured as follows: section II discusses the existing works, section III describes the method to build a Vietnamese chatbot with significant improvements in intent classification and entity extraction, section IV shows and discusses the obtained results, and we close with concluding remarks and a discussion of our future work.
II. RELATED WORK
There are many tools/platforms for building a chatbot based on natural language understanding (NLU). We implemented our previous research using the most common platform available on the market [7]. Rasa is not only a commercial chatbot building platform but also greatly aids research. In previous work, we created our chatbot in Russian, but we did not use a custom pipeline to improve the NLU model: the NLU model was trained only with supervised embeddings, without applying modern pre-trained models such as GloVe, FastText or BERT.
In paper [8], the authors reviewed and analyzed the Rasa platform in detail. They built a chatbot integrated with an API and a database. However, the study built only a simple chatbot without using the advanced capabilities of the platform.
In [9], Jiao described the principles of Rasa NLU and designed a functional framework implemented with Rasa NLU. The author integrated Rasa NLU and NN methods for entity extraction after intent recognition. The study showed that Rasa NLU outperforms the NN in accuracy for a single experiment.
Enhancing Rasa NLU model for Vietnamese
Nguyen Thi Mai Trang, Maxim Shcherbakov

Recently, Rasa NLU is often used to build conversational AI. It comprises loosely coupled modules
combining several NLP and ML libraries in a consistent API
[10]. In article [11], the author presented examples of using custom components in Rasa NLU for Vietnamese and Japanese. He created custom tokenizers for Vietnamese and Japanese, as well as a custom sentiment analyzer. The results showed that the NLU model became more suitable for Vietnamese and Japanese. The work [12] presented the building of a Vietnamese chatbot using the Rasa platform. The author created a custom tokenizer for Vietnamese and trained the NLU model by a supervised method.
To the best of our knowledge, there are no studies on building Vietnamese chatbots that apply pre-trained models such as FastText, MITIE or BERT in a custom Rasa NLU pipeline.
III. METHOD
In this section, we present the construction of a Rasa chatbot consisting of two main components: Rasa NLU and Rasa Core. We also propose a method to improve the Vietnamese chatbot's natural language understanding by using custom components in pipelines. In addition, the performance of a chatbot also depends on tuning the parameters of the policies that are provided in Rasa Core.
A. RASA platform
RASA is an open-source implementation of natural language understanding (NLU) together with the Dual Intent and Entity Transformer (DIET) model. RASA is a combination of two modules: Rasa NLU and Rasa Core [13]. Rasa NLU analyses the user's input, then classifies the user's intent and extracts the entities. Rasa NLU combines different annotators, such as the spaCy parser, to interpret the input data:
- Intent classification: interpreting meaning based on predefined intents (example: "How many people are infected with COVID-19 in USA?" is a "request_cases" intent with 97% confidence).
- Entity extraction: recognizing structured data (example: USA is a "location").
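Concretely, the Interpreter's structured output for the example above could look like the following sketch. This is a hypothetical Python dict mirroring the general shape of a Rasa parse result; the exact field names may differ between Rasa versions.

```python
# Hypothetical structured NLU output for the example message.
# The shape loosely follows Rasa's parse result; values are illustrative.
parsed = {
    "text": "How many people are infected with COVID-19 in USA?",
    "intent": {"name": "request_cases", "confidence": 0.97},
    "entities": [{"entity": "location", "value": "USA"}],
}
print(parsed["intent"]["name"], parsed["entities"][0]["value"])
```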
Rasa Core takes structured input in the form of intents and
entities and chooses which action the chatbot should take
using a probabilistic model. Fig. 1 shows the high-level
architecture of RASA.
Fig. 1. Diagram of how a Rasa Core app works.
The process of how a Rasa Core app responds to a message is as follows:
1. The input message is passed to the Interpreter (Rasa NLU). The Interpreter converts the user's message into a structured output including the original text, intents and entities.
2. The Tracker follows the conversation state and receives the appearance of the new message.
3. The output of the Tracker is passed to the Policy, which receives the current state of the tracker.
4. The next action is chosen by the policy.
5. The tracker logs the chosen action.
6. The response is sent to the user, using the pre-defined utterance templates in the domain file.
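The six steps above can be sketched in a deliberately simplified form. The class and function names below are ours for illustration only, not Rasa's actual API:

```python
# A toy sketch of the Interpreter -> Tracker -> Policy -> action loop.
def interpret(text):
    # Step 1: the interpreter turns raw text into structured output.
    return {"text": text, "intent": "greet", "entities": []}

class Tracker:
    # Steps 2 and 5: the tracker follows conversation state and logs events.
    def __init__(self):
        self.events = []
    def update(self, event):
        self.events.append(event)

def policy(tracker):
    # Steps 3 and 4: the policy reads the tracker state and picks an action.
    last = tracker.events[-1]
    return "utter_greet" if last.get("intent") == "greet" else "action_listen"

tracker = Tracker()
parsed = interpret("xin chào")          # "hello" in Vietnamese
tracker.update(parsed)
action = policy(tracker)
tracker.update({"action": action})      # Step 5: log the chosen action
print(action)                           # Step 6: respond to the user
```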
B. Processing Rasa pipeline
Processing of the input message involves different components, which are executed sequentially in a processing pipeline. Each component processes the input and produces an output which can be used by any following component in the pipeline. A processing pipeline defines which processing stages the input messages pass through. A processing stage can be a tokenizer, featurizer, named entity recognizer or intent classifier.
RASA provides pre-configured pipelines which can be used by setting the configuration values, such as spacy_sklearn, mitie, mitie_sklearn, keyword and tensorflow_embedding. Besides, we can create a custom pipeline by passing the components to the NLU pipeline configuration variable. Fig. 2 shows the schema of intent classification.
Fig. 2. Schema of intent classification and entity extraction
using Rasa NLU.
To build a good NLU model for chatbots, we enhance the model with our own custom components such as a sentiment analyzer, tokenizer, spell checker, etc.
C. Custom component
Components make up the NLU pipeline. They work sequentially to turn the user's input text into structured output. In this work, we create two custom components that Rasa NLU does not currently offer: a Vietnamese tokenizer and a FastText featurizer.
The Vietnamese tokenizer is created using the Vietnamese toolkit Underthesea [15]. The FastText featurizer is a dense featurizer which loads the FastText embeddings. The implementation of the featurizer requires a tokenizer to be present in the pipeline (see Fig. 4).
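As a minimal sketch of what such a dense featurizer does, the snippet below averages per-token word vectors into a single sentence vector. The tiny 3-dimensional vectors are made up for illustration; the real FastText model provides 300-dimensional vectors for Vietnamese words.

```python
# Toy word-vector table; real vectors come from the FastText model.
word_vectors = {
    "xin_chào": [0.1, 0.2, 0.3],
    "bạn": [0.3, 0.0, 0.3],
}

def featurize(tokens):
    # Look up each token (unknown tokens map to a zero vector)
    # and average element-wise into one sentence vector.
    vecs = [word_vectors.get(t, [0.0] * 3) for t in tokens]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

print([round(v, 3) for v in featurize(["xin_chào", "bạn"])])
```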
D. Choosing Rasa NLU pipelines
Rasa has pre-configured pipelines which we review for building the chatbot: the TensorFlow-based pipeline, the ConveRT pipeline and the BERT-based pipeline.
The TensorFlow-based pipeline can be used to build the chatbot from scratch. It does not use pre-trained word vectors and supports any language that can be tokenized. When we use a custom tokenizer for a specific language, we can replace the "tokenizer_whitespace" component with our more accurate tokenizer. Fig. 3 shows the alternative of using a Vietnamese tokenizer in the TensorFlow-based pipeline.
Fig. 3. Example of using Vietnamese tokenizer in
Tensorflow pipeline.
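Since the figure is not reproduced here, a configuration of this kind might look roughly as follows. This is a sketch only: the component names follow Rasa 1.x-style configuration, and the module path of the custom tokenizer is a hypothetical example.

```yaml
language: "vi"
pipeline:
  # Custom Vietnamese tokenizer replacing "tokenizer_whitespace";
  # the module path is a hypothetical example.
  - name: "custom_components.vi_tokenizer.VietnameseTokenizer"
  - name: "RegexFeaturizer"
  - name: "CRFEntityExtractor"
  - name: "EntitySynonymMapper"
  - name: "CountVectorsFeaturizer"
  - name: "EmbeddingIntentClassifier"
```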
The custom pipeline is a list of the names of the components which we want to use. We propose a custom pipeline (cpFastText) using the Vietnamese tokenizer and the FastText featurizer for loading pre-trained Vietnamese word embeddings from FastText. Rasa doesn't natively support FastText, so we need to create a custom featurizer for FastText and add it into the pipeline. The pre-trained model for Vietnamese can be downloaded from FastText [16]; the binary model is about 4.19 GB. Fig. 4 shows the custom pipeline using the FastText model for Vietnamese.
Fig. 4. The custom pipeline using Vietnamese Tokenizer,
FastText featurizer and FastText model.
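In the absence of the figure, the cpFastText pipeline can be sketched roughly as follows. The component module paths and the embedding-file parameter name are hypothetical illustrations, not the exact configuration used:

```yaml
language: "vi"
pipeline:
  - name: "custom_components.vi_tokenizer.VietnameseTokenizer"        # hypothetical path
  - name: "custom_components.fasttext_featurizer.FastTextFeaturizer"  # hypothetical path
    # Path to the pre-trained Vietnamese FastText binary model [16].
    model_file: "data/cc.vi.300.bin"
  - name: "RegexFeaturizer"
  - name: "CRFEntityExtractor"
  - name: "CountVectorsFeaturizer"
  - name: "EmbeddingIntentClassifier"
```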
The ConveRT pipeline is a template pipeline that uses a ConveRT model to extract pre-trained sentence embeddings. The ConveRT pipeline has shown effectiveness on big training data [17]. In this work, we do not use this pipeline to build the Vietnamese chatbot because it only supports English.
We also consider a pipeline using the state-of-the-art language model BERT. Rasa provides a pipeline configuration for BERT using a Hugging Face model; it can be configured with a BERT model inside the pipeline. An example of the BERT pipeline is shown in Fig. 5.
Fig. 5. An example of a pipeline using multilingual BERT
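A pipeline of this kind, as supported in Rasa releases of that period, could be sketched as follows. The HFTransformersNLP component and the model weights shown are typical values, not necessarily the exact configuration used in the paper:

```yaml
language: "vi"
pipeline:
  - name: "HFTransformersNLP"
    model_name: "bert"
    model_weights: "bert-base-multilingual-cased"
  - name: "LanguageModelTokenizer"
  - name: "LanguageModelFeaturizer"
  - name: "CountVectorsFeaturizer"
  - name: "DIETClassifier"
```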
We implement the experiments and compare the above pipelines in section IV.
E. Policies
Rasa Core provides the class rasa.core.policies.Policy, which decides the action of the chatbot. The rule-based and machine-learning policies and their main parameters are detailed in Table 1.

Table 1. Rasa Core policies and parameters.

Machine-learning policies:
- TED Policy: concatenates the user input, system actions and slots, applying machine learning algorithms to build up vector representations of the conversation.
- Memoization Policy: remembers the stories from the training data. It checks for a story matching the current conversation and predicts the next action from the matching story with a confidence in [0, 1]. The number of conversation turns taken into account is indicated in max_history.
- Augmented Memoization Policy: remembers examples from matching stories for up to max_history turns. It is similar to the Memoization Policy but, in addition, has a forgetting mechanism.

Rule-based policies:
- Rule Policy: handles conversation parts that follow a fixed behavior and makes predictions using rules that are present in the training data.

Parameters:
- Max History: controls the amount of dialogue history the model looks at to predict the next action.
- Data Augmentation: determines how many augmented stories are subsampled during training.
The policies are configured in config.yml. Two parameters, Max History and Data Augmentation, affect the performance of the model [8]. The choice of policy also affects the performance of the model, so we need to review and tune the parameters of the policies; the policies can also be used in tandem.
IV. EXPERIMENTS AND RESULTS
A. Experimental setup
In this work, we experimented on a dataset that includes 40 intents, 8 entities and 1000 examples. The intents and utterance examples are stored in the NLU training data file, and the stories of conversations are presented in the stories file. The pipeline and policies are in config.yml. The domain.yml file defines the domain in which the chatbot operates: it specifies the intents, entities, slots, utterance responses, actions and a configuration for conversation sessions. We specify the expiration time of a conversation session as 60 seconds.
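For illustration, the session part of a domain file with such an expiration time might look like the fragment below. In Rasa, session_expiration_time is given in minutes, so 60 seconds corresponds to the value 1; the exact keys may differ between versions:

```yaml
session_config:
  # 60 seconds; the parameter is specified in minutes.
  session_expiration_time: 1
  carry_over_slots_to_new_session: true
```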
We created the two custom components (Vietnamese tokenizer and FastText featurizer) described in section III. We used three pipelines for evaluation: TensorFlow embedding, the cpFastText pipeline and the BERT pipeline. In cpFastText, the Vietnamese tokenizer and the FastText featurizer are added into the pipeline. The BERT pipeline (mBERT) is configured with the BERT multilingual base model (cased), pre-trained on the Wikipedia data of the top 104 languages. We thus compare the proposed cpFastText model with the two baseline models (TensorFlow and mBERT) that are configured using existing components from Rasa NLU.
B. Experimental results
We evaluate the Rasa NLU models based on three metrics: precision (1), F1-score (3) and accuracy (4). The correctly predicted observations (true positives) are the observations that were predicted correctly for a class: they belong to the class and the model classified them correctly. We denote the true positives as TP, the true negatives as TN, the false positives as FP and the false negatives as FN.

Precision = TP / (TP + FP) (1)
Recall = TP / (TP + FN) (2)
F1-score = 2 · Precision · Recall / (Precision + Recall) (3)
Accuracy = (TP + TN) / (TP + TN + FP + FN) (4)
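As a quick sanity check of formulas (1)-(4), the snippet below computes them for a toy set of confusion counts (the counts themselves are made-up illustration values):

```python
# Toy confusion counts for a single class (hypothetical values).
tp, tn, fp, fn = 80, 900, 10, 10

precision = tp / (tp + fp)                          # Eq. (1)
recall = tp / (tp + fn)                             # Eq. (2)
f1 = 2 * precision * recall / (precision + recall)  # Eq. (3)
accuracy = (tp + tn) / (tp + tn + fp + fn)          # Eq. (4)

print(round(precision, 3), round(recall, 3), round(f1, 3), round(accuracy, 3))
```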
We evaluated intent classification by F1-score, accuracy and precision using cross-validation. The results for the three models in Table 2 show the micro-average scores on the test set for intent classification and entity extraction over 5-fold cross-validation. According to the results, the proposed cpFastText model is the best model for intent recognition (F1-score = 0.863, accuracy = 0.865, precision = 0.867) and entity extraction (F1-score = 0.815, accuracy = 0.997, precision = 0.864). The mBERT model gives the worst results. This can be explained by the dataset not being large enough (1000 examples), which makes the mBERT model inappropriate; the TensorFlow-based model gives better results in this case.
Table 2. Micro-average scores on the test set over 5-fold cross-validation.

Model        Intent classification            Entity extraction
             F1-score        Precision        F1-score        Precision
Tensorflow   0.826 ± 0.033   0.849 ± 0.029    0.668 ± 0.185   0.681 ± 0.174
mBERT        0.607 ± 0.039   0.647 ± 0.044    0.729 ± 0.127   0.762 ± 0.110
cpFastText   0.863 ± 0.010   0.876 ± 0.011    0.815 ± 0.042   0.864 ± 0.072
To compare the multiple pipelines we used Rasa with the following command:

rasa test nlu --config Tensorflow.yml cpFastText.yml --nlu data/ --runs 3 --percentages 0 25 50 75 85
Then the models are evaluated on the test set and the F1-score for each exclusion percentage is recorded. The comparison of the three Rasa pipelines is presented in Fig. 6, which shows that cpFastText is the pipeline with the best F1-score.

Fig. 6. Comparison of Rasa NLU pipelines.
We created plots of the intent prediction confidence distribution for the three models (see Fig. 7). Each plot has two columns per bin, showing the confidence distribution for hits and misses. The plot of the cpFastText model shows the most correct prediction confidences. Thus, the results show that the cpFastText model built with the proposed method gives the best result for intent classification, so we have chosen the proposed cpFastText model to build the chatbot.
Then we evaluated the dialogue model using Rasa Core. For the cpFastText model, we used the TED Policy, Memoization Policy and Fallback Policy, and set max history to 5, the NLU threshold to 0.7 and the core threshold to 0.5.
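The policy settings above correspond to a config.yml fragment along these lines (a sketch; the FallbackPolicy keys follow Rasa 1.x naming and may differ in later versions):

```yaml
policies:
  - name: "TEDPolicy"
    max_history: 5
  - name: "MemoizationPolicy"
    max_history: 5
  - name: "FallbackPolicy"
    nlu_threshold: 0.7
    core_threshold: 0.5
```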
The confusion matrix of the actions is presented in Fig. 8. We evaluated the dialogue model at the conversation level and the action level using three metrics: F1-score, accuracy and precision. The results are presented in Table 3.
Fig. 8. Confusion matrix of the actions.

Table 3. Evaluation at the conversation level and the action level.
V. CONCLUSION
In this paper, we presented a method to improve the performance of a chatbot using a custom Rasa NLU pipeline. Using custom components is appropriate for NLU models in non-English languages. As the results show, the proposed model achieved the best results in intent classification and entity extraction: the proposed cpFastText model performed better compared to the TensorFlow-based model (the F1-score is 3.7% higher) and mBERT (the F1-score is 25.6% higher). The Rasa Core dialogue model achieved an accuracy of 95.2% at the conversation level and 98.4% at the action level.
In the future scope of this study, we will build a model for a Vietnamese spell checker based on neural machine translation and use it as a new custom component in the NLU pipeline. Besides, we will combine our chatbot with a question answering system based on the BERT model for answering non-predefined questions.
REFERENCES
[1] T. Nguyen and M. Shcherbakov, "A Neural Network based Vietnamese Chatbot," in 2018 International Conference on System Modeling & Advancement in Research Trends (SMART), 2018.
[2] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau,
“Building end-to-end dialogue systems using generative hierarchical
neural network models,” arXiv [cs.CL], 2015.
[3] D. Al-Ghadhban and N. Al-Twairesh, “Nabiha: An Arabic dialect
chatbot,” Int. J. Adv. Comput. Sci. Appl., vol. 11, no. 3, 2020.
[4] T.-H. Wen et al., “A network-based end-to-end trainable task-
oriented dialogue system,” in Proceedings of the 15th Conference of
the European Chapter of the Association for Computational
Linguistics: Volume 1, Long Papers, 2017.
[5] “Need-to-know chatbot statistics in 2020,” [Online].
[Accessed: 06-Oct-2020].
[6] D. Braun, A. Hernandez-Mendez, F. Matthes, and M. Langen,
“Evaluating natural language understanding services for
conversational question answering systems,” in Proceedings of the
18th Annual SIGdial Meeting on Discourse and Dialogue, 2017.
[7] T.M.T. Nguyen, M.V. Shcherbakov, “Целевой чат-бот на основе
машиного обучения [A goal-oriented chatbot based on machine
learning].” Modeling, optimization and information technology, May
2020. [Online] Available:
Fig. 7. Intent prediction confidence distribution of three models: a) Tensorflow model, b) mBERT model, c) cpFastText model.
[8] R. K. Sharma and National Informatic Center, “An Analytical Study
and Review of open source Chatbot framework, Rasa,” Int. J. Eng.
Res. Technol. (Ahmedabad), vol. V9, no. 06, 2020.
[9] A. Jiao, “An intelligent chatbot system based on entity extraction
using RASA NLU and neural network,” J. Phys. Conf. Ser., vol.
1487, p. 012014, 2020.
[10] T. Bocklisch, J. Faulkner, N. Pawlowski, and A. Nichol, “Rasa:
Open Source Language Understanding and Dialogue
Management,” arXiv [cs.CL], 2017.
[11] P. H. Quang, “Rasa chatbot: Tăng khả năng chatbot với custom
component và custom tokenization(tiếng Việt tiếng Nhật),” Viblo,
16-Mar-2020. [Online]. Available:
tokenizationtieng-viet-tieng-nhat-Qbq5QN4mKD8. [Accessed: 14-
[12] M. V. Do, “Xây dựng chatbot bán hàng dựa trên mô hình sinh,” M.S.
thesis, Graduate Univ. of Sc. and Tech., Hanoi, 2020. Accessed on:
10 Sep, 2020. [Online]. Available:
[13] “The Rasa Core dialogue engine,” [Online]. Available: [Accessed: 1-Oct-2020].
[14] H. Agarwala, R. Becker, M. Fatima, L. Riediger, “Development of an
artificial conversation entity for continuous learning and adaption to
user’s preferences and behavior” [Online]. Available: https://www.di-
WS18.pdf. [Accessed: 25-Sep-2020].
[15] “underthesea,” PyPI. [Online]. Available: [Accessed: 25-Sep-2020].
[16] “Word vectors for 157 languages fastText,” [Online].
Available: [Accessed:
[17] A. Singh, “Evaluating the new ConveRT pipeline introduced by
RASA,” Medium, 03-Dec-2019. [Online]. Available:
pipeline-introduced-by-rasa-3db377b8961d. [Accessed: 30-Aug-
Nguyen Thi Mai Trang,
PhD student in CAD Department, Volgograd State Technical University,
Volgograd, Russia
Maxim Shcherbakov,
Dr. Tech. Sc., Head of CAD Department, Volgograd State Technical
University, Volgograd, Russia
... In fact, Rasa can achieve similar performance with commercial systems like Google Dialogflow or Microsoft LUIS as studied in [17][18][19][20], while offering the advantages of an open-source end-to-end system (e.g., self-hosted, secure, scalable, fully disclosed) [21]. Applications of Rasa can be found on various domains, as a Spanish question answering agent of the football domain [22], a university campus information system [23] and a Vietnamesespeaking agent [24] among others. ...
... Our system builds on the Rasa framework and as such it follows the supported pipeline. Similar to previous works [23,24] we explore how this pipeline can be appropriately modified with some key differences being the language, the domain of application and the explored featurizers. Additionally, in this work we focus primarily on the ML components comparing both different classifiers and also embeddings in more detail. ...
... The DIET classifier also performed well when using either only sparse features or both sparse and dense features. Using components designed for the specific language, instead of multilingual ones, has shown that can improve the results at the above tasks [24]. In this work, we also focused on modifying the required components for our language of interest and observed promising results. ...
Full-text available
Virtual assistants are becoming popular in a variety of domains, responsible for automating repetitive tasks or allowing users to seamlessly access useful information. With the advances in Machine Learning and Natural Language Processing, there has been an increasing interest in applying such assistants in new areas and with new capabilities. In particular, their application in e-healthcare is becoming attractive and is driven by the need to access medically-related knowledge, as well as providing first-level assistance in an efficient manner. In such types of virtual assistants, localization is of utmost importance, since the general population (especially the aging population) is not familiar with the needed “healthcare vocabulary” to communicate facts properly; and state-of-practice proves relatively poor in performance when it comes to specialized virtual assistants for less frequently spoken languages. In this context, we present a Greek ML-based virtual assistant specifically designed to address some commonly occurring tasks in the healthcare domain, such as doctor’s appointments or distress (panic situations) management. We build on top of an existing open-source framework, discuss the necessary modifications needed to address the language-specific characteristics and evaluate various combinations of word embeddings and machine learning models to enhance the assistant’s behaviour. Results show that we are able to build an efficient Greek-speaking virtual assistant to support e-healthcare, while the NLP pipeline proposed can be applied in other (less frequently spoken) languages, without loss of generality.
... Due to its huge benefit, the conversational AI bots have been applied to various industries such as insurance [4], education [3], entertainment [8], health care [9], e-commerce [10], COVID-19 [11], or business intelligence [12]. In practice, there are several well-known conversational AI agents such as Amazon Alexa, 2 Apple Siri, 3 Microsoft Cortana, 4 IBM Watson bot, 5 and Google assistant. 1 2 3 4 5 ...
... The former manages intent classification, entity extraction, and response retrieval, while the later controls the next action in a conversation based on the context. We extend the idea of using Rasa for building chatbots [5] because Rasa provides flexible environment for creating customized pipelines. ...
... The system of Nguyen and Shcherbakov [5] is likely the most relevant work to ours. The authors designed a Vietnamese chatbot which uses two pre-trained models (e.g., FastText and BERT) in the Rasa NLU pipeline. ...
Conference Paper
Full-text available
The admission process of universities in Vietnam is a labor-expensive task due to the involvement of humans. This paper introduces an intelligent system (a chatbot) that can support the admission process by automatically answering questions. Different from prior work that usually builds the bot from scratch, we develop the bot by using the Rasa platform. To do that, we investigate different combinations of components of natural language understanding to find the best pipeline. We also create and release a dataset in the admission domain to train the bot. Experimental results show that the pipeline using DIET with features from pre-trained language models is competitive. The introduction video of the system is also available. 1
... From the survey, two primary use-cases of COVID-19 chatbots emerge -(1) information dissemination: answering pandemic-related questions asked by the users (Li et al., 2020;Desai, 2021;Prasannan et al., 2020;Mehfooz et al., 2020;Trang and Shcherbakov, 2021), and (2) symptom-screening: assessing risk factors associated with the symptoms provided by the user for quick diagnosis (Ferreira et al., 2020;Martin et al., 2020a;Judson et al., 2020b;Quy Tran et al., 2021). Existing commercial frameworks such as DialogFlow, Watson Assistant and MS Bot have been used primarily for building a majority of these chatbots (Li et al., 2020;Sophia and Jacob, 2021). ...
Full-text available
The COVID-19 pandemic has brought out both the best and worst of language technology (LT). On one hand, conversational agents for information dissemination and basic diagnosis have seen widespread use, and arguably, had an important role in combating the pandemic. On the other hand, it has also become clear that such technologies are readily available for a handful of languages, and the vast majority of the global south is completely bereft of these benefits. What is the state of LT, especially conversational agents, for healthcare across the world's languages? And, what would it take to ensure global readiness of LT before the next pandemic? In this paper, we try to answer these questions through survey of existing literature and resources, as well as through a rapid chatbot building exercise for 15 Asian and African languages with varying amount of resource-availability. The study confirms the pitiful state of LT even for languages with large speaker bases, such as Sinhala and Hausa, and identifies the gaps that could help us prioritize research and investment strategies in LT for healthcare.
... (Abu Ali and Habash, 2016;Al-Humoud et al., 2018) Building the agents with task-oriented or social-bots domain with conversational capabilities of understanding the context. (Nguyen and Shcherbakov, 2021;Grosuleac et al., 2020) ...
Full-text available
Conversational AI intends for machine-human interactions to appear and feel more natural and inclined to communicate in a near-human context. Chatbots, also known as conversational agents, are typically divided into two types of use-cases: task-oriented bots and social friend-bots. Task-oriented bots are often used to do activities such as answering questions or solving basic queries. Furthermore, social-friend-bots are designed to communicate like humans, where the user can speak freely and the bot answers organically while maintaining the conversation’s ambience. This paper analyses recent works in the conversational AI domain examining the exclusive methodologies, existing frameworks or tools, evaluation metrics, and available datasets for building robust conversational agents. Finally, a mind-map encompassing all the stated elements and qualities of chatbots is created.
... In the paper by Nguyen et. al. [8], the authors built a custom Vietnamese language tokenizer and a custom language featurizer which leveraged pre-trained fastText [9] Vietnamese word embedding and achieved a better results with their custom made components compared to the default pipeline components provided by Rasa. fastText [9] provides word embeddings for 157 languages and thus depending on the language in which the chatbot needs to be built, the corresponding featurizer, to leverage the pre-trained word vectors, must be designed and attached to the pipeline. ...
Full-text available
Chatbots are intelligent software built to be used as a replacement for human interaction. However, existing studies typically do not provide enough support for low-resource languages like Bangla. Moreover, due to the increasing popularity of social media, we can also see the rise of interactions in Bangla transliteration (mostly in English) among the native Bangla speakers. In this paper, we propose a novel approach to build a Bangla chatbot aimed to be used as a business assistant which can communicate in Bangla and Bangla Transliteration in English with high confidence consistently. Since annotated data was not available for this purpose, we had to work on the whole machine learning life cycle (data preparation, machine learning modeling, and model deployment) using Rasa Open Source Framework, fastText embeddings, Polyglot embeddings, Flask, and other systems as building blocks. While working with the skewed annotated dataset, we try out different setups and pipelines to evaluate which works best and provide possible reasoning behind the observed results. Finally, we present a pipeline for intent classification and entity extraction which achieves reasonable performance (accuracy: 83.02%, precision: 80.82%, recall: 83.02%, F1-score: 80%).
... While the path through the chatbot follows the scenario's flow, the text from the different case nodes had to be analysed, with "Intents" hand-crafted to establish the purpose behind the different statements and "Entities" defined to classify the users' purpose behind every statement (Figure 1). These were then implemented in RASA [13], and Natural Language Understanding (NLU) training followed, allowing users to partially circumvent the discussion while the structure would deterministically continue with the desired training material. Persuasive techniques [14]. ...
A crucial factor for successful cybersecurity education is how information is communicated to learners. Case-based learning of common cybersecurity issues has been shown to improve human behaviour for prevention. However, some delivery methods prevent realistic critical appraisal and reflection of awareness. Conversational agents can scaffold healthcare workers’ understanding and promote deterrence strategies. The challenges of repurposing material to create a case-based agent were explored, and the ASPIRE process was modified. Heuristic evaluation from 10 experts in innovative educational technology resulted in the desired outcomes of usability, however Natural Language Understanding improvements were needed. Discussion of best practice when repurposing into conversational agents suggested modification of the ASPIRE process is feasible for future use.
Over the past few years, there has been a boost in the use of commercial virtual assistants. These proprietary tools are well-performing, but the functionality they offer is limited, users are "vendor-locked", and possible user privacy issues arise. In this paper we argue that low-cost, open-hardware solutions may also perform well, given the proper setup. Specifically, we perform an initial assessment of a low-cost virtual agent employing the Rasa framework integrated into a Raspberry Pi 4. We set up three different architectures, discuss their capabilities and limitations, and evaluate the dialogue system along three axes: assistant comprehension, task success, and assistant usability. Our experiments show that our low-cost virtual assistant performs satisfactorily, even when a small training dataset is used.
Today, chatbots or conversational agents are increasingly being used in various fields. In recent years, governments, organizations, and businesses have invested in dialogue systems to improve the engagement of their users. A smart chatbot is a chatbot that can understand the user's intents, so the developer always needs to improve the ability to classify them. However, the user's raw text usually contains spelling errors, abbreviations, or slang that the chatbot cannot understand; processing the input text is therefore also an urgent task of natural language understanding (NLU). In the Vietnamese dialogue system, there are always cases where users enter questions without diacritics, and such questions greatly affect the understanding of the user's intents. In this paper, an improvement to intent classification for a Vietnamese chatbot is presented. We propose an Encoder-Decoder Bidirectional Long Short-Term Memory (BiLSTM) model for diacritic restoration. On our evaluation dataset, this approach achieves an accuracy of 99.12% using a pre-trained language model for word embedding. The proposed model is then applied as a custom component in the NLU pipeline. Constructing an intent-classification model with the proposed approach is considered, and comparison with baseline models shows that the proposed model gives the best results.
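The input/output shape of the diacritic-restoration task can be illustrated with a toy word-level lookup. This is only a sketch of what the task does, not of the paper's method (which uses an encoder-decoder BiLSTM rather than a dictionary); the word list below is a made-up sample.

```python
# Toy mapping from unaccented Vietnamese words to their accented forms.
restorations = {
    "xin": "xin",
    "chao": "chào",
    "ban": "bạn",
}

def restore_diacritics(sentence):
    """Restore diacritics word by word; unknown words pass through unchanged."""
    return " ".join(restorations.get(w, w) for w in sentence.lower().split())

print(restore_diacritics("xin chao ban"))  # xin chào bạn
```

A dictionary cannot resolve ambiguous words whose accented form depends on context, which is precisely why a sequence model such as a BiLSTM is used instead.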
The advancement of the Internet of Things, big data, and mobile computing leads to the need for smart services that enable the context awareness and the adaptability to their changing contexts. Today, designing a smart service system is a complex task due to the lack of an adequate model support in awareness and pervasive environment. In this paper, we present the concept of a context-aware smart service system and propose a knowledge model for context-aware smart service systems. The proposed model organizes the domain and context-aware knowledge into knowledge components based on the three levels of services: Services, Service system, and Network of service systems. The knowledge model for context-aware smart service systems integrates all the information and knowledge related to smart services, knowledge components, and context awareness that can play a key role for any framework, infrastructure, or applications deploying smart services. In order to demonstrate the approach, two case studies about chatbot as context-aware smart services for customer support are presented.
Intelligent chatbot systems are a popular topic in robot systems and natural language processing. With the development of natural language processing and neural network algorithms, artificial intelligence is increasingly applied in chatbot systems, which are typically used in dialogue systems for various practical purposes, including customer service and information acquisition. This paper designs the functional framework and introduces the principle of RASA NLU for the chatbot system; it then integrates RASA NLU and neural network (NN) methods and implements a system based on entity extraction after intent recognition. Experimental comparison and validation show that the developed system can automatically learn and answer the collected questions about finance. The analysis of the two methods also shows that RASA NLU outperforms the NN in accuracy for a single experiment, but the NN classifies entities from segmented words with better integrity.
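The "entity extraction after intent recognition" flow described above can be sketched as a two-stage pipeline: a first stage assigns an intent, and a second stage extracts entities using rules chosen for that intent. The keyword matcher, intents, and patterns below are invented for illustration and are far simpler than the trained models the paper compares.

```python
import re

# Stage 1: toy keyword-based intent recognition (invented intents).
INTENT_KEYWORDS = {"price": "ask_price", "rate": "ask_rate"}

# Stage 2: per-intent entity patterns (invented pattern).
ENTITY_PATTERNS = {"ask_price": re.compile(r"of ([A-Z]{2,5}) stock")}

def classify_intent(text):
    for keyword, intent in INTENT_KEYWORDS.items():
        if keyword in text.lower():
            return intent
    return "fallback"

def extract_entities(text, intent):
    pattern = ENTITY_PATTERNS.get(intent)
    return pattern.findall(text) if pattern else []

msg = "What is the price of IBM stock today?"
intent = classify_intent(msg)
print(intent, extract_entities(msg, intent))  # ask_price ['IBM']
```

Conditioning the extraction stage on the recognized intent is what lets the second stage use a narrower, intent-specific entity vocabulary.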
Conversational interfaces have recently gained a lot of attention. One reason for the current hype is that chatbots (one particularly popular form of conversational interface) can nowadays be created without any programming knowledge, thanks to various toolkits and so-called Natural Language Understanding (NLU) services. While these NLU services are already widely used in both industry and science, they have so far not been analysed systematically. In this paper, we present a method to evaluate the classification performance of NLU services. Moreover, we present two new corpora, one consisting of annotated questions and one of annotated questions with the corresponding answers. Based on these corpora, we conduct an evaluation of some of the most popular NLU services, enabling both researchers and companies to make more educated decisions about which service to use.
Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring labelled datasets and solving a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable dialogue system along with a new way of collecting task-oriented dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.
I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau, "Building end-to-end dialogue systems using generative hierarchical neural network models," arXiv [cs.CL], 2015.
"Need-to-know chatbot statistics in 2020," [Online]. Available: [Accessed: 06-Oct-2020].
T. M. T. Nguyen and M. V. Shcherbakov, "A goal-oriented chatbot based on machine learning" (in Russian), Modeling, Optimization and Information Technology, May 2020. [Online]. Available:
T. Bocklisch, J. Faulkner, N. Pawlowski, and A. Nichol, "Rasa: Open Source Language Understanding and Dialogue Management," arXiv [cs.CL], 2017.