Source publication
The idea of semantically linking data, and the subsequent use of this linked data by modern computer applications, has been one of the most important aspects of Web 3.0. However, realizing this vision has been challenging due to the difficulties associated with building knowledge bases and using formal languages to query them. In...
Context in source publication
Context 1
... intents ranged from "greet" and "goodbye", which are used to activate and deactivate the bot, to "ask_hostel_gender_details", which is used to inquire whether a hostel is for girls or boys. Examples of intents along with their query type are shown in Table 1. An example of an intent in Rasa, along with its sentence set, is shown in Figs. 6 and 7. ...
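As a hedged illustration of what an intent and its sentence set might look like (the actual contents of Figs. 6 and 7 are not reproduced here), the Python sketch below assembles Rasa-style NLU training data and serializes it to the YAML layout Rasa reads from data/nlu.yml; the example sentences and the hostel name are invented placeholders.

```python
# Minimal sketch of Rasa NLU training data for the intents named in the
# excerpt. Example sentences and the hostel name "Aquamarine" are
# hypothetical; they are not taken from the paper's Figs. 6 and 7.
import yaml  # pip install pyyaml

nlu_data = {
    "version": "3.1",
    "nlu": [
        {"intent": "greet",
         "examples": "- hi\n- hello there\n- good morning\n"},
        {"intent": "ask_hostel_gender_details",
         "examples": ("- is [Aquamarine](hostel) a girls hostel?\n"
                      "- which hostels are for boys?\n")},
        {"intent": "goodbye",
         "examples": "- bye\n- see you later\n"},
    ],
}

# Serialize to the YAML layout Rasa expects in data/nlu.yml.
print(yaml.dump(nlu_data, sort_keys=False))
```

The `[Aquamarine](hostel)` markup is Rasa's inline entity annotation, which lets the same sentence train both the intent classifier and the entity extractor.

Citations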
... Visual Question Answering (VQA) represents the main point of contact between the natural language processing and computer vision communities. Technologies such as conversational agents and chatbots [13] are suitable for this purpose. These technologies can interface with neural networks and ontologies [14], exploiting functionalities such as graph reasoning [15] to extend context. ...
In this work we present a preliminary version of a comprehensive interface for supporting users in interacting with scholarly documents, enabling multi-layered exploration and offering deeper insights by integrating diverse features and contextual information. By bridging diverse information sources, our work pursues the identification, characterization, and linking of visual elements to semantic and contextual data, leveraging large language models for interoperability. Recent advances in retrieval-augmented generation are also exploited to address some limitations of language models, allowing them to access latent information from document representations such as graph and vector embeddings. The system under development analyzes input documents and extracts visual and semantic features, making them accessible in a comprehensive framework. The association of structural information with visual data allows formal analysis of documents and is exploited in our model to enhance visual extraction, performing a novel ontology-based constraint violation detection. The information extracted through this framework is semantically explorable, providing access to the document structure, which can be exploited in many applications such as question answering and document understanding.
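As a generic, hedged sketch of the retrieval-augmented step the abstract alludes to (the authors' actual pipeline is not reproduced here), the fragment below embeds document chunks as vectors, retrieves the chunks most similar to a question, and assembles a grounded prompt for a language model; embed() is a stand-in for a real sentence encoder.

```python
# Generic RAG retrieval sketch: rank document chunks by vector
# similarity to the question, then build a grounded prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding. A real system would call a sentence
    encoder here; this hash-seeded stub only makes the sketch runnable."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    q = embed(question)
    scores = [float(q @ embed(c)) for c in chunks]  # cosine on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

chunks = [
    "Figure 3 shows the overall system architecture.",
    "Table 1 reports extraction accuracy per document class.",
    "Section 2 reviews ontology-based document models.",
]
question = "Which figure shows the architecture?"
context = "\n".join(retrieve(question, chunks))
print(f"Answer using only this context:\n{context}\n\nQ: {question}")
```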
... the global semantic web, formed through the standardization of information representation in a form suitable for machine processing of UGC, using various semantic models 20 . Several languages exist for writing semantic models, the main ones being RDF/RDFS (Resource Description Framework) 21 and OWL (Web Ontology Language) 22 . ...
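To make the named model languages concrete, here is a minimal hedged sketch that builds one RDF statement with the rdflib Python library and prints it as Turtle; the example namespace, resource, and predicate are invented for illustration.

```python
# One machine-processable RDF statement about a piece of user-generated
# content (UGC), built with rdflib; namespace and names are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

post = EX["post42"]
g.add((post, RDF.type, EX.UserGeneratedContent))  # subject, predicate, object
g.add((post, EX.author, Literal("alice")))

print(g.serialize(format="turtle"))
```

An OWL ontology would go a step further, e.g. declaring ex:UserGeneratedContent as an owl:Class with constraints on ex:author, which is what enables machine reasoning over such data.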
The paper considers the main technologies (including socio-humanitarian ones) that pose challenges to ensuring the security of communication in the Internet environment and require the development of new regulatory models. The correlations and interrelationships among the concepts of information, information-psychological, reputational, and media security; information and cognitive sovereignty; information, cognitive, and hybrid warfare; and the phenomena of «soft power» and «sociological propaganda» are considered, which is important for unifying the terminological apparatus in this area. For the first time from the standpoint of jurisprudence, the concept of cognitive sovereignty is comprehensively considered and its components are characterized, including media security, cultural sovereignty, technological sovereignty, managerial sovereignty, and legal security. Also new is the research section devoted to a comprehensive consideration of the phenomenon of social engineering, not only as a set of methods of psychological influence aimed at obtaining unauthorized access to data, but also as other complexes of socio-humanitarian technologies for managing meanings, and methods and techniques of informational and psychological influence on human behavior. The place of legal social engineering in the system of social engineering is considered, and the role of the lawyer-strategist (lawyer-lawmaker) is justified as that of a social engineer who develops models for regulating not only current but also emerging, predictable social relations. The analysis of the development of cyberspace from the standpoint of the concept «Web 1.0 — Web 2.0 — Web 3.0 — Web 3» allowed, first, the development of an author's feature model of the various «types» (stages) of development of the Internet environment and, second, the identification of challenges to the law caused by the need to ensure media security and cognitive sovereignty, as well as the adaptation of new economic models.
... This document presents an approach to creating a bot using the RASA tool (Mishra et al., 2022). The first part describes the tool's functionality, and the second presents the concept of building a voice assistant using only open-source components. ...
... The RASA framework has become a prominent tool in Arabic language processing. In addition, incorporating both morphological and syntactic disambiguation in a unified framework has shown highly favorable outcomes for languages such as Arabic [47]. An integrated strategy is essential for efficiently handling Arabic text and comprehending user input within the context of a chatbot. ...
The rise of conversational agents (CAs) like chatbots in education has increased the demand for advisory services. However, student–college admission interactions remain manual and burdensome for staff. Leveraging CAs could streamline the admission process, providing efficient advisory support. Moreover, limited research has explored the role of Arabic chatbots in education. This study introduces Tayseer, an Arabic AI-powered web chatbot that enables instant access to college information and communication between students and colleges. The study aims to improve chatbot capabilities by integrating several features into one model, including audiovisual responses, multiple interaction modes (menu, text, or both), and the collection of survey responses. Tayseer uses deep learning models within the RASA framework, incorporating a customized Arabic natural language processing pipeline for intent classification, entity extraction, and response retrieval. Tayseer was deployed at the Technical College for Girls in Najran (TCGN). Over 200 students used Tayseer during the first semester, demonstrating its efficiency in streamlining the advisory process. It identified over 50 question types from user inputs with 90% precision in intent and entity predictions. A comprehensive evaluation illuminated Tayseer's proficiency as well as areas requiring improvement. This study developed an advanced CA to enhance student experience and satisfaction while establishing best practices for educational chatbot interfaces, outlining the steps to build an AI-powered chatbot from scratch using techniques adaptable to any language.
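As a hedged sketch of what such a customized Arabic NLU pipeline could look like in Rasa's config.yml (the component choices and hyperparameters below are illustrative assumptions, not Tayseer's published configuration):

```python
# Illustrative Rasa pipeline covering the three capabilities the
# abstract names: intent classification and entity extraction (DIET)
# and response retrieval (ResponseSelector). All values are assumptions.
import yaml

config = {
    "language": "ar",  # Arabic
    "pipeline": [
        {"name": "WhitespaceTokenizer"},
        {"name": "CountVectorsFeaturizer",
         "analyzer": "char_wb", "min_ngram": 1, "max_ngram": 4},
        {"name": "DIETClassifier", "epochs": 100},
        {"name": "ResponseSelector", "epochs": 100},
    ],
}
print(yaml.dump(config, sort_keys=False, allow_unicode=True))
```

Character n-gram features (char_wb) are one common way to cope with Arabic's rich morphology without a language-specific tokenizer.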
... Additionally, [59] utilized a diverse set of classifiers, including Random Forest, Naive Bayes, SVM, Softmax Regression, and a classifier ensemble. In contrast, [60] focused on few-shot learning techniques, specifically 1-shot and 5-shot learning, while [61] leveraged the Rasa framework for query intent recognition. In [62], the authors identify the user's query intent using a method based on session sequences of user browsing time for a product on e-commerce sites, utilizing the corresponding attribute graphs. ...
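As a small hedged sketch of the classifier-ensemble idea attributed to [59], the fragment below combines the listed model families with scikit-learn's VotingClassifier on synthetic data; softmax regression appears here as multinomial logistic regression.

```python
# Soft-voting ensemble over the classifier families listed in the text;
# the dataset is synthetic and only demonstrates the mechanism.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("svm", SVC(probability=True, random_state=0)),
        ("softmax", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted class probabilities
)
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```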
Question Answering (QA) systems are increasingly essential in educational institutions, enhancing both learning and administrative processes by providing quick and accurate answers to user queries. However, existing systems often struggle with accurately classifying and responding to diverse and context-dependent questions, especially when dealing with large knowledge graphs. Predicting the domain of a question can significantly narrow down the search space within a vast knowledge graph, improving the system's efficiency and accuracy. This study addresses this gap by developing and evaluating domain prediction models. We compare the performance of various deep learning architectures, including Bi-GRU, Bi-LSTM, GRU, and LSTM. Our results demonstrate that the 1-layer Bi-GRU model outperforms the others, achieving the highest test accuracy of 82.13%. Additionally, by employing an ensemble technique that combines the best-performing models from each architecture, we further enhance overall performance, achieving an accuracy of 87.14%, which demonstrates improved predictive capability. This work is significant as it provides a robust solution for improving the accuracy and relevance of QA systems in educational settings, thereby enhancing user satisfaction and operational efficiency.
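As a minimal hedged sketch of the best-performing architecture the abstract reports, here is a 1-layer Bi-GRU text classifier in Keras; the vocabulary size, embedding dimension, hidden size, sequence length, and number of domains are placeholders, not the paper's hyperparameters.

```python
# 1-layer bidirectional GRU for domain prediction; all sizes are assumed.
import tensorflow as tf

VOCAB, EMB_DIM, NUM_DOMAINS = 20000, 128, 10  # placeholder values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, EMB_DIM),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),  # the single Bi-GRU layer
    tf.keras.layers.Dense(NUM_DOMAINS, activation="softmax"),
])
model.build(input_shape=(None, 50))  # batches of length-50 token sequences
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

An ensemble like the one the abstract describes would then average or vote over the softmax outputs of the best model from each architecture.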
Despite the growing adoption of property graph databases like Neo4j, interacting with them remains difficult for non-technical users due to the reliance on formal query languages. Natural Language Interfaces (NLIs) address this by translating natural language (NL) into Cypher. However, existing solutions are typically limited to high-resource languages; are difficult to adapt to evolving domains with limited annotated data; and often depend on Machine Learning (ML) approaches, including Large Language Models (LLMs), that demand substantial computational resources and advanced expertise to train and maintain. We address these limitations by introducing a novel dependency-based, training-free, schema-agnostic NLI that converts NL queries into Cypher for querying property graphs. Our system employs a modular pipeline integrating entity and relationship extraction, Named Entity Recognition (NER), semantic mapping, triple creation via syntactic dependencies, and validation against an automatically extracted Schema Graph. The distinctive feature of this approach is the reduction of candidate entity pairs through syntactic analysis and schema validation, eliminating the need for candidate query generation and ranking. The schema-agnostic design enables adaptation across domains and languages. Our system supports single- and multi-hop queries, conjunctions, comparisons, aggregations, and complex questions through an explainable process. Evaluations on real-world queries demonstrate reliable translation results.
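As a hedged, heavily simplified sketch of the pipeline's central idea, the fragment below derives one (subject, relation, object) triple from a spaCy dependency parse and emits a Cypher pattern; the node properties and relationship naming are invented, and the real system adds NER, semantic mapping, and validation against the extracted Schema Graph.

```python
# From dependency parse to a Cypher pattern: a deliberately naive
# sketch with no error handling. Requires: spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def nl_to_cypher(sentence: str) -> str:
    doc = nlp(sentence)
    subj = next(t for t in doc if t.dep_ in ("nsubj", "nsubjpass"))
    obj = next(t for t in doc if t.dep_ in ("dobj", "attr", "pobj"))
    rel = subj.head.lemma_.upper()  # verb governing the subject
    return (f"MATCH (a {{name: '{subj.text}'}})"
            f"-[:{rel}]->(b {{name: '{obj.text}'}}) RETURN a, b")

print(nl_to_cypher("Alice manages the sales team."))
# MATCH (a {name: 'Alice'})-[:MANAGE]->(b {name: 'team'}) RETURN a, b
```

Restricting candidate entity pairs to those licensed by both the parse and the schema is what lets the approach skip generating and ranking many candidate queries.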
To explore information on the Semantic Web, SPARQL queries or DL queries are suitable tools. However, users interested in exploring the content of such knowledge bases often find it challenging to employ formal query languages, as this requires familiarity with the target domain's representation model. To address these challenges, a Question-Answering System that automatically translates natural language questions into SPARQL queries over the Smithsonian American Art Museum's CIDOC-CRM representation is presented. The proposed approach uses an ontology, named the Query Ontology, defined to represent the natural language concepts and relations specific to the question's domain. The system's architecture follows a traditional symbolic natural language processing approach, with a pipeline of modules for syntactic, semantic, and pragmatic analysis. An evaluation of the proposed system is presented and shows very promising results.
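As a hedged illustration of the system's target mapping, here is a natural language question paired with a SPARQL query of the kind such a translation could produce over a CIDOC-CRM graph; the artist name is arbitrary and the property path is an assumption following common CIDOC-CRM modeling (a production event linking artwork and artist), not the paper's published output.

```python
# Hypothetical NL-to-SPARQL pair over a CIDOC-CRM representation.
QUESTION = "Which artworks were made by Mary Cassatt?"

SPARQL = """
PREFIX crm:  <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?work ?label WHERE {
  ?work crm:P108i_was_produced_by ?production .
  ?production crm:P14_carried_out_by ?artist .
  ?artist rdfs:label "Mary Cassatt" .
  ?work rdfs:label ?label .
}
"""
print(QUESTION)
print(SPARQL)
```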
Information is indispensable when it comes to national security. The information revolution holds massive potential to strengthen national security against current and emerging threats and cyber-attacks. However, advances in information accessibility create innumerable complications for maintaining stable national security. One of the preeminent information sources is social media, which raises the risk of information manipulation and can destabilize national security. To develop better national security plans, information technology can help countries identify potential threats, share information securely, and protect the mechanisms involved. Artificial Intelligence (AI) is one of the smart areas that robustly facilitates secure information handling to avoid threats and cyber-attacks. It intelligently scrutinizes information available to the public through social media and helps avert negative effects on national security. This research article focuses on four main analytical milestones: 1) information available to the public; 2) information affecting national security; 3) risks of cyber-attacks; and 4) AI as paramount to national security in fulfilling a competent information role. Our principal objective is to demystify information-accessibility perspectives so that readers understand the fundamentals of information accessibility and inaccessibility as they relate to national security. To support and demonstrate our milestones and objectives, a Systematic Literature Review (SLR) is methodologically adopted to draw suitable conclusions and develop a farsighted model and frame of reference. This paper concludes with an AI-tool-based categorization, algorithmic-function and domain-specific analysis, and area-based limitations to highlight current needs. Above all, this article is a thought-provoking starting point for many naive social media users who tend to overlook information-bearing elements and fall victim to cyber-attacks followed by compromises of national security.