Chapter

Abstract

Knowledge graphs and graph databases are nowadays used extensively across various domains. However, manually creating knowledge graphs from existing ontology concepts presents significant challenges. At the same time, chatbots have become one of the most prominent technologies of recent years. In this paper, we explore the idea of using chatbots to facilitate the manual population of knowledge graphs. We generate these chatbots from other, special knowledge graphs that serve as models of chatbots. These chatbot models are created using our modelling ontology, designed specifically for this purpose, together with ontologies from a specific domain. The proposed approach makes the manual population of knowledge graphs more convenient through the use of conversational agents generated automatically from our chatbot models.

References
Chapter
Full-text available
Chatbots are software services accessed via conversation in natural language. They are increasingly used to assist in all kinds of procedures, such as booking flights, querying visa information, or assigning tasks to developers. They can be embedded in web pages and social networks, and used from mobile devices without installing dedicated apps. While many frameworks and platforms have emerged for their development, identifying the most appropriate one for building a particular chatbot requires a high investment of time. Moreover, some of them are closed (resulting in customer lock-in) or require deep technical knowledge. To tackle these issues, we propose a model-driven engineering approach to chatbot development. It comprises a neutral meta-model and a domain-specific language (DSL) for chatbot description; code generators and parsers for several chatbot platforms; and a platform recommender. Our approach supports forward and reverse engineering, as well as model-based analysis. We demonstrate its feasibility by presenting a prototype tool and an evaluation based on migrating third-party Dialogflow bots to Rasa.
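To make the idea of a platform-neutral chatbot meta-model concrete, here is a minimal sketch in Python. All class and field names are hypothetical illustrations, not the meta-model from the paper; the matching logic is deliberately naive.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, minimal platform-neutral chatbot model:
# intents with training phrases, and flows mapping intents to responses.
@dataclass
class Intent:
    name: str
    training_phrases: List[str]

@dataclass
class Flow:
    intent: str
    response: str

@dataclass
class ChatbotModel:
    name: str
    intents: List[Intent] = field(default_factory=list)
    flows: List[Flow] = field(default_factory=list)

    def respond(self, utterance: str) -> str:
        # Naive matching: pick the intent sharing the most words
        # with the utterance (a real platform would train a classifier).
        words = set(utterance.lower().split())
        best = max(self.intents,
                   key=lambda i: max(len(words & set(p.lower().split()))
                                     for p in i.training_phrases))
        for flow in self.flows:
            if flow.intent == best.name:
                return flow.response
        return "Sorry, I did not understand."

bot = ChatbotModel(
    name="visa-bot",
    intents=[Intent("greet", ["hello", "hi there"]),
             Intent("visa_info", ["do I need a visa", "visa requirements"])],
    flows=[Flow("greet", "Hello! How can I help?"),
           Flow("visa_info", "Which country are you travelling to?")])

print(bot.respond("hi"))
print(bot.respond("what are the visa requirements"))
```

A neutral model like this could then be emitted as Dialogflow or Rasa artifacts by platform-specific code generators, which is the essence of the forward-engineering direction described above.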
Article
Full-text available
With the rapid progress of the semantic web, a huge amount of structured data has become available on the web in the form of knowledge bases (KBs). Making these data accessible and useful for end-users is one of the main objectives of chatbots over linked data. Building a chatbot over linked data raises different challenges, including understanding user queries, supporting multiple knowledge bases, and multilingualism. To address these challenges, we first design and develop an architecture that provides an interactive user interface. Secondly, we propose a machine learning approach based on intent classification and natural language understanding to understand user intents and generate SPARQL queries. In particular, we process a new social network dataset (myPersonality) and add it to the existing knowledge bases to extend the chatbot's capabilities to analytical queries. The system is flexible, supports multiple knowledge bases and multiple languages, can be extended with new domains on demand, and allows intuitive creation and execution of different tasks for an extensive range of topics. Furthermore, we provide an evaluation and application cases showing how the chatbot facilitates interactive access to semantic data in different real application scenarios, showcasing the proposed approach for a knowledge-graph- and data-driven chatbot.
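The intent-classification-to-SPARQL pipeline can be sketched in a few lines. This is an illustrative mock, not the cited system: the intent rules and SPARQL templates are assumptions, and no knowledge base is actually queried.

```python
# Illustrative sketch: classify a user question into an intent and
# fill a matching SPARQL template (templates and rules are hypothetical).
SPARQL_TEMPLATES = {
    "abstract": """SELECT ?abstract WHERE {{
  <http://dbpedia.org/resource/{entity}> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}}""",
    "birthplace": """SELECT ?place WHERE {{
  <http://dbpedia.org/resource/{entity}> dbo:birthPlace ?place .
}}""",
}

def classify_intent(question: str) -> str:
    # A real system would use a trained classifier; keyword rules
    # stand in for it here.
    q = question.lower()
    if "born" in q or "birthplace" in q:
        return "birthplace"
    return "abstract"

def to_sparql(question: str, entity: str) -> str:
    return SPARQL_TEMPLATES[classify_intent(question)].format(entity=entity)

print(to_sparql("Where was Ada Lovelace born?", "Ada_Lovelace"))
```

The generated query string would then be sent to a SPARQL endpoint; swapping the keyword rules for a learned intent classifier leaves the template-filling step unchanged.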
Article
Full-text available
Amazon’s Alexa, Apple’s Siri, Google Assistant and Microsoft’s Cortana clearly illustrate the impressive research work and potential to be explored in the field of conversational agents. A conversational agent, chatterbot or chatbot is a program expected to converse with near-human intelligence. Chatbots are designed to be either task-oriented or simply open-ended dialogue generators. Many approaches have been proposed in this field, ranging from early hard-coded response generators to advanced development techniques in Artificial Intelligence. In a broad sense, these can be categorized as rule-based and neural-network-based. While rule-based approaches rely on predefined templates and responses, neural-network-based approaches rely on deep learning models. Rule-based approaches are preferable for simpler task-oriented conversations. Open-domain conversational modeling is a more challenging area and mostly uses neural-network-based approaches. This paper begins with an introduction to chatbots, followed by an in-depth discussion of various classical (rule-based) and neural-network-based approaches. The evaluation metrics employed for chatbots are mentioned. The paper concludes with a table of recent research in the field. It covers the latest and most significant publications, the evaluation metrics employed, the corpora used, as well as possible areas of enhancement in the proposed techniques.
Article
Full-text available
Keeping the dialogue state in dialogue systems is a notoriously difficult task. We introduce an ontology-based dialogue manager (OntoDM), a dialogue manager that keeps the state of the conversation, provides a basis for anaphora resolution and drives the conversation via domain ontologies. The banking and finance area promises great potential for disambiguating the context via a rich set of products and the specificity of proper nouns, named entities and verbs. We used ontologies both as a knowledge base and as a basis for the dialogue manager; the knowledge base and dialogue manager components coalesce in a sense. Domain knowledge is used to track Entities of Interest, i.e. nodes (classes) of the ontology that happen to be products and services. In this way we also introduced a form of conversation memory and attention. We finely blended linguistic methods, domain-driven keyword ranking and domain ontologies to create ways of domain-driven conversation. The proposed framework is used in our in-house German-language banking and finance chatbots. General challenges of German language processing, as well as language models and lexicons for finance and banking chatbots, are also introduced. This work is still in progress, hence no success metrics have been introduced yet.
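The idea of tracking "Entities of Interest" for anaphora resolution can be illustrated with a toy state tracker. The ontology nodes and matching below are hypothetical placeholders, not OntoDM's actual data or algorithm.

```python
# Conceptual sketch: keep the last-mentioned ontology node as the
# "Entity of Interest", so that pronouns like "it" can be resolved.
ONTOLOGY_NODES = {
    "savings account": "account",   # node -> parent class (illustrative)
    "credit card": "card",
}

class DialogueState:
    def __init__(self):
        self.entity_of_interest = None

    def update(self, utterance: str):
        # If the user names a known product, it becomes the entity of
        # interest; otherwise the previous one is kept (memory).
        for node in ONTOLOGY_NODES:
            if node in utterance.lower():
                self.entity_of_interest = node
        return self.entity_of_interest

state = DialogueState()
state.update("What is the fee for a credit card?")
# "it" in the follow-up resolves to the tracked node:
print(state.update("And how do I cancel it?"))  # credit card
```

A real dialogue manager would combine this memory with keyword ranking and ontology structure, as the abstract describes, but the carry-over of the last entity is the core of the mechanism.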
Article
Full-text available
Objective: Ontologies are widely used in the biomedical domain. While many tools exist for editing, aligning or evaluating ontologies, few solutions have been proposed for ontology programming interfaces, i.e. for accessing and modifying an ontology within a programming language. Existing query languages (such as SPARQL) and APIs (such as OWLAPI) are not as easy to use as object-oriented programming languages. Moreover, they provide few solutions to the difficulties encountered with biomedical ontologies. Our objective was to design a tool for easily accessing the entities of an OWL ontology, with high-level constructs helping with biomedical ontologies. Methods: From our experience with medical ontologies, we identified two difficulties: (1) many entities are represented by classes (rather than individuals), but existing tools do not permit manipulating classes as easily as individuals; (2) ontologies rely on the open-world assumption, whereas medical reasoning must consider only evidence-based medical knowledge as true. We designed a Python module for ontology-oriented programming. It allows access to the entities of an OWL ontology as if they were objects in the programming language. We propose a simple high-level syntax for managing classes and the associated "role-filler" constraints. We also propose an algorithm for performing local closed-world reasoning in simple situations. Results: We developed Owlready, a Python module for high-level access to OWL ontologies. The paper describes the architecture and syntax of version 2 of the module, and details how we integrated the OWL ontology model with the Python object model. The paper provides examples based on the Gene Ontology (GO). We also demonstrate the interest of Owlready in a use case focused on the automatic comparison of the contraindications of several drugs. This use case illustrates the specific syntax proposed for manipulating classes and for performing local closed-world reasoning.
Conclusion: Owlready has been successfully used in a medical research project. It has been published as open-source software and has since been used by many other researchers. Future developments will focus on support for vagueness, additional non-monotonic reasoning features, and automatic dialog box generation.
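The open-world vs. local closed-world distinction mentioned above can be shown without any ontology library. This is a plain-Python conceptual sketch, not Owlready's API; the drug data is invented for illustration only.

```python
# Conceptual sketch of local closed-world reasoning: within a chosen
# scope, anything not explicitly asserted is treated as false, whereas
# under the open-world assumption it is merely unknown.
KNOWN_CONTRAINDICATIONS = {          # illustrative data, not medical advice
    "aspirin": {"peptic ulcer", "haemophilia"},
    "paracetamol": {"severe liver disease"},
}

def is_contraindicated(drug: str, condition: str, closed_world: bool = True):
    if condition in KNOWN_CONTRAINDICATIONS.get(drug, set()):
        return True
    # Closed world: no assertion means "no". Open world: "unknown" (None).
    return False if closed_world else None

print(is_contraindicated("aspirin", "haemophilia"))                        # True
print(is_contraindicated("paracetamol", "haemophilia"))                    # False (closed world)
print(is_contraindicated("paracetamol", "haemophilia", closed_world=False))  # None (unknown)
```

Owlready's contribution is making this kind of locally closed reasoning available directly over OWL classes, so that evidence-based medical knowledge can be compared mechanically, as in the drug contraindication use case.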
Conference Paper
Full-text available
Distributed Market Spaces (DMS) refer to an exchange environment in the emerging Internet of Everything that supports users in making transactions involving complex products: a novel type of product made up of different products and/or services that can be customized to better fit the individual context of the user. In order to express their demand for a particular complex product in a way that is interpretable by the DMS, users need flexible User Interfaces (UIs) that allow context-focused data collection matching the complexity of the user's demand. This paper proposes a concept for generic UIs that enables users to compose their own UIs for requesting complex products by combining existing UI descriptions for the different parts of the particular complex product, and to share and improve UI descriptions with other users within the markets.
Chapter
The process of manually inserting data in ontology instances is usually a cumbersome activity. Editing complex domain ontologies using Protégé and similar tools requires expert knowledge. Despite exhaustive research in this area, the existing ontology population tools are still not user-friendly enough to simplify this activity for end-users. To facilitate this process, we propose an approach to design an ontology to serve as a meta-model for the generation of user interface models. The user interface models are used to create web applications with dialog-based HTML forms, which are eventually used to populate instances of OWL ontologies. Our meta-model includes several patterns used to generate programming control structures used to populate ontology instances. On the one hand, the meta-model describes user interfaces, and on the other hand, it describes the structure of the output ontology instance. We also show a prototype of a tool that loads a simple meta-model file and creates a single-page application that populates an ontology instance.
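A minimal sketch of the meta-model-to-form idea follows. The meta-model shape, property names, and widget types are hypothetical stand-ins for the ontology-backed meta-model described above.

```python
# Hypothetical sketch: generate an HTML form from a tiny meta-model that
# pairs ontology datatype properties with input widgets.
meta_model = {
    "class": "Person",                               # target OWL class
    "fields": [
        {"property": "hasName", "label": "Name", "widget": "text"},
        {"property": "hasAge", "label": "Age", "widget": "number"},
    ],
}

def render_form(model: dict) -> str:
    # Each field becomes a labelled <input>; the form remembers which
    # ontology class its submitted values should instantiate.
    rows = [f'<form data-class="{model["class"]}">']
    for f in model["fields"]:
        rows.append(f'  <label>{f["label"]}'
                    f' <input name="{f["property"]}" type="{f["widget"]}"></label>')
    rows.append("</form>")
    return "\n".join(rows)

print(render_form(meta_model))
```

On submission, the `name` attributes map each value back to an ontology property, so the filled form can be turned directly into a new individual of the target class.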
Chapter
The Internet has evolved into a global marketplace where information about literally every existing product or service can be found. Smart devices and cyber-physical systems, which are interconnected with IT systems and services, form the basis for the arising Internet of Everything, opening up new economic opportunities for its participants and users beyond its technological aspects and challenges. While today’s e-business scenarios are mostly dominated by a few centralized online platforms, future business models feasible for the Internet of Everything need to address special requirements. At present, the amount of data, together with the sum of available products and services and their possible customizations, already overstrains users searching for their optimal choice. Such business models, e.g., leveraging the possibilities of smart cities, need to cope with arbitrary combinations of products and services orchestrated into complex products in a highly distributed and dynamic environment. Furthermore, these combinations are influenced by real-time context information derived from sensor networks or IT systems, as well as the users’ requirements and preferences. The complexity of finding the optimal product/service combination overstrains users and leads to decisions according to the principle of adverse selection (i.e., choosing good enough instead of optimal). Such e-business models require an appropriate underlying value-generation architecture that supports users in this process. In this chapter, we develop a business model that addresses these problems. In addition, we present the Distributed Market Spaces (DMS) software-system architecture as a possible implementation, which enables the aforementioned decentralized and context-centric e-business scenario and leverages the commercial possibilities of smart cities.
Article
We introduce a pair of tools, Rasa NLU and Rasa Core, which are open-source Python libraries for building conversational software. Their purpose is to make machine-learning-based dialogue management and language understanding accessible to non-specialist software developers. In terms of design philosophy, we aim for ease of use and bootstrapping from minimal (or no) initial training data. Both packages are extensively documented and ship with a comprehensive suite of tests. The code is available at https://github.com/RasaHQ/
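For orientation, this is roughly what Rasa NLU training data looks like in the YAML format used by recent Rasa releases (the intent names and examples here are illustrative, not from the paper, which predates this format):

```yaml
version: "3.1"
nlu:
- intent: greet
  examples: |
    - hi
    - hello there
- intent: goodbye
  examples: |
    - bye
    - see you later
```

A classifier is trained from such examples, and Rasa Core (now the dialogue engine of the unified Rasa framework) decides which action to take for each predicted intent.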
Article
A new ontology-based approach is proposed to model and operate chatbots (OntBot). OntBot uses a mapping technique to transform ontologies and knowledge into a relational database, and then uses that knowledge to drive its chats. The proposed approach overcomes a number of drawbacks of traditional chatbots, including the need to learn and use a chatbot-specific language such as AIML, high botmaster interference, and the use of immature technology. OntBot has the additional strengths of easy user interaction in natural language and seamless support for different application domains. This gives the proposed approach a number of unique scalability and interoperability properties that will be evaluated in future phases of this research project.
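The ontology-to-relational mapping can be sketched with a triple table in SQLite. The schema and sample facts are invented for illustration; the paper does not specify OntBot's actual schema.

```python
import sqlite3

# Simplified sketch of the idea: store ontology knowledge as
# subject/predicate/object rows and answer chat questions by querying it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("Aspirin", "isA", "Drug"),          # illustrative facts
    ("Aspirin", "treats", "Headache"),
])

def answer(entity: str, relation: str) -> str:
    # A real system would first extract entity and relation from the
    # user's natural-language utterance.
    row = conn.execute(
        "SELECT object FROM triples WHERE subject = ? AND predicate = ?",
        (entity, relation)).fetchone()
    return row[0] if row else "I don't know."

print(answer("Aspirin", "treats"))  # Headache
```

Because the chat logic only queries the table, swapping in a different domain ontology changes the data but not the code, which is the interoperability property the abstract claims.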
Chapter
This paper is a technical presentation of the Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) and the Artificial Intelligence Markup Language (AIML), set in context by historical and philosophical ruminations on human consciousness. A.L.I.C.E., the first AIML-based personality program, won the Loebner Prize as “the most human computer” at the annual Turing Test contests in 2000, 2001, and 2004. The program, and the organization that develops it, is a product of the world of free software. More than 500 volunteers from around the world have contributed to her development. This paper describes the history of A.L.I.C.E. and AIML free software since 1995, noting that the theme and strategy of deception and pretense upon which AIML is based can be traced through the history of Artificial Intelligence research. The paper goes on to show how to use AIML to create robot personalities like A.L.I.C.E. that pretend to be intelligent and self-aware. It winds up with a survey of some of the philosophical literature on the question of consciousness. We consider Searle’s Chinese Room, and the view that natural language understanding by a computer is impossible. We note that the proposition “consciousness is an illusion” may be undermined by the paradoxes it apparently implies. We conclude that A.L.I.C.E. does pass the Turing Test, at least, to paraphrase Abraham Lincoln, for some of the people some of the time. Keywords: Artificial Intelligence, natural language, chat robot, bot, Artificial Intelligence Markup Language (AIML), markup languages, XML, HTML, philosophy of mind, consciousness, dualism, behaviorism, recursion, stimulus-response, Turing Test, Loebner Prize, free software, open source, A.L.I.C.E., Artificial Linguistic Internet Computer Entity, deception, targeting
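For readers unfamiliar with AIML, a category pairs an input pattern with a response template. A minimal example (the patterns and replies here are invented):

```xml
<aiml version="1.0.1">
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! I am a chat robot.</template>
  </category>
  <category>
    <pattern>WHAT IS YOUR NAME</pattern>
    <template>My name is A.L.I.C.E.</template>
  </category>
</aiml>
```

A.L.I.C.E.'s personality consists of tens of thousands of such hand-written categories, which is the "stimulus-response" strategy the paper discusses.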
Conference Paper
A promising application domain for Semantic Web technology is the annotation of products and services offerings on the Web so that consumers and enterprises can search for suitable suppliers using products and services ontologies. While there has been substantial progress in developing ontologies for types of products and services, namely eClassOWL, this alone does not provide the representational means required for e-commerce on the Semantic Web. Particularly missing is an ontology that allows describing the relationships between (1) Web resources, (2) offerings made by means of those Web resources, (3) legal entities, (4) prices, (5) terms and conditions, and the aforementioned ontologies for products and services (6). For example, we must be able to say that a particular Web site describes an offer to sell cell phones of a certain make and model at a certain price, that a piano house offers maintenance for pianos that weigh less than 150 kg, or that a car rental company leases out cars of a certain make and model from a set of branches across the country. In this paper, we analyze the complexity of product description on the Semantic Web and define the GoodRelations ontology that covers the representational needs of typical business scenarios for commodity products and services.
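A small Turtle fragment can illustrate the kind of statement GoodRelations enables, e.g. that a business offers to sell a product at a certain price. The `ex:` names are invented for illustration; the `gr:` terms are from the GoodRelations vocabulary.

```turtle
@prefix gr:  <http://purl.org/goodrelations/v1#> .
@prefix ex:  <http://example.org/shop#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# A legal entity making an offering with a price specification.
ex:AcmePhones a gr:BusinessEntity ;
    gr:offers ex:PhoneOffer .

ex:PhoneOffer a gr:Offering ;
    gr:hasBusinessFunction gr:Sell ;
    gr:hasPriceSpecification [
        a gr:UnitPriceSpecification ;
        gr:hasCurrency "EUR" ;
        gr:hasCurrencyValue "299.00"^^xsd:float
    ] .
```

The product type itself would typically be linked to a class from a products-and-services ontology such as eClassOWL, connecting the offer to the web resource describing it.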