Technical Report

SPARQL Update - A Language for Updating RDF Graphs

Authors:
  • Niramai Health Analytix

... In [20], we present an extended version of an RDF Document, denoted RDF ⊕ Document, that is given as input to the RGS System used to synchronize shared RDF Documents between agents. A common way to store and access content in an RDF Graph is to store the triples in an SQL database, create appropriate RDF Views for the content, and then use SPARQL (SPARQL Protocol and RDF Query Language) [21] together with these RDF Views to query RDF Graphs. This is the approach we take. ...
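A minimal SPARQL query of the kind such RDF Views would answer might look as follows (the `ex:` vocabulary is illustrative, not taken from [20] or [21]):

```sparql
PREFIX ex: <http://example.org/ns#>

# Find every agent together with the RDF Documents it shares.
SELECT ?agent ?doc
WHERE {
  ?agent a ex:Agent ;
         ex:shares ?doc .
  ?doc   a ex:RDFDocument .
}
```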
... Like FDS, our approach does use a unified query mechanism, SPARQL (SPARQL Protocol and RDF Query Language) [21] for this collection of shared RDF Documents/Graphs, but also locally for internal agent queries of all its RDF Documents, shared and unshared. SPARQL was developed for querying information stored in RDF Graphs. ...
... Deltas are encoded using a subset of SPARQL Update query [21]. Only two queries are allowed, i.e. ...
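A delta restricted to a two-operation subset of SPARQL Update, as the excerpt describes, could plausibly consist of one DELETE DATA and one INSERT DATA operation in a single request (the graph vocabulary here is hypothetical):

```sparql
PREFIX ex: <http://example.org/ns#>

# Remove the triples retracted since the last synchronization ...
DELETE DATA { ex:uav1 ex:status ex:Idle . } ;
# ... and add the triples asserted since then.
INSERT DATA { ex:uav1 ex:status ex:Flying . }
```

SPARQL 1.1 allows several update operations in one request, separated by semicolons, which is what makes such a compact delta encoding possible.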
Article
Full-text available
In the context of collaborative robotics, distributed situation awareness is essential for supporting collective intelligence in teams of robots and human agents where it can be used for both individual and collective decision support. This is particularly important in applications pertaining to emergency rescue and crisis management. During operational missions, data and knowledge are gathered incrementally and in different ways by heterogeneous robots and humans. We describe this as the creation of Hastily Formed Knowledge Networks (HFKNs). The focus of this paper is the specification and prototyping of a general distributed system architecture that supports the creation of HFKNs by teams of robots and humans. The information collected ranges from low-level sensor data to high-level semantic knowledge, the latter represented in part as RDF Graphs. The framework includes a synchronization protocol and associated algorithms that allow for the automatic distribution and sharing of data and knowledge between agents. This is done through the distributed synchronization of RDF Graphs shared between agents. High-level semantic queries specified in SPARQL can be used by robots and humans alike to acquire both knowledge and data content from team members. The system is empirically validated and complexity results of the proposed algorithms are provided. Additionally, a field robotics case study is described, where a 3D mapping mission has been executed using several UAVs in a collaborative emergency rescue scenario while using the full HFKN Framework.
... This work is thus orthogonal to ours because we work under a constraint of storage space. Additionally, SPARQL does not support UPDATE transactions (there are currently proposals to add this functionality to SPARQL; see [81]). Updates are performed by bulk loading of data. ...
... An estimation of which size of ρ is optimal for the workload is out of the scope of this work. Currently, SPARQL does not provide an UPDATE transaction, although there are already proposals to add UPDATE to SPARQL (see [81]). ...
Thesis
In this thesis we propose the use of materialized queries as an index structure for RDF data. We aim to reduce processing time by minimizing the number of comparisons between the query and the RDF data set. Furthermore, we emphasize the role of cost models and indexes in selecting an efficient execution plan depending on the workload. We give an overview of the materialized view selection problem in relational databases and discuss its application to query optimization. We introduce RDFMatView, a framework for SPARQL queries. RDFMatView uses materialized queries as indexes and provides algorithms to find suitable indexes for a given query and to integrate them into execution plans. Selecting an efficient execution plan is the second topic of this thesis. We introduce three different cost models for processing SPARQL queries. A detailed comparison of these cost models shows that a model based on index and predicate statistics provides the most accurate information for selecting an efficient execution plan. The evaluation shows that our method reduces query processing time by several orders of magnitude compared to unoptimized SPARQL queries. Finally, we propose a simple but effective strategy for the materialized view selection problem over RDF data. Given a particular workload, the indexes expected to minimize the processing time of the entire workload are selected algorithmically. We then create a set of index candidates based on query patterns and search for connected components in this set. Our evaluation shows that, compared to others, our index selection method yields the largest savings in query processing time.
... The choice of ontology informed the RDF data model [24], and SPARQL query language [25] being selected for representing and querying graphs, respectively. The RDF triple is a 3-tuple of <subject, predicate, object> that states a subject has a relationship predicate (directed edge) to an entity object. ...
... The RDF triple is a 3-tuple of <subject, predicate, object> that states a subject has a relationship predicate (directed edge) to an entity object. SPARQL [25] defines a set of patterns that constrains the set of RDF terms returned from the graph. Figure 5a shows a part of the triple examples of our ontology data store. ...
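The 3-tuple and pattern constraints described above can be sketched with a small example (the vocabulary is invented for illustration): given the triple <ex:room1, ex:hasSensor, ex:sensor42>, a SPARQL triple pattern constrains which RDF terms are returned from the graph:

```sparql
PREFIX ex: <http://example.org/ns#>

# Return every object reachable from ex:room1
# via the directed predicate ex:hasSensor.
SELECT ?sensor
WHERE {
  ex:room1 ex:hasSensor ?sensor .
}
```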
Article
Full-text available
Smart building, one of the emerging IoT-based applications, is where energy efficiency, human comfort, automation, and security can be managed even better. However, at the current stage, a unified and practical framework for knowledge inference inside the smart building is still lacking. In this paper, we present a practical proposal of knowledge extraction on event conjunction for automatic control in smart buildings. The proposal consists of a unified API design, an ontology model, and an inference engine for knowledge extraction. Two types of models, finite state machines (FSM) and Bayesian networks (BN), have been used for capturing state transitions and sensor data fusion. In particular, to solve the problem that the number of time-interval observations between two correlated events was too small to be approximated for estimation, we utilized the Markov Chain Monte Carlo (MCMC) sampling method to optimize the sampling of time intervals. The proposal has been put into use in a real smart building environment. A 78-day collection of light states and elevator states was conducted for evaluation. Several events have been inferred in the evaluation, such as room occupancy and elevator movement, as well as the conjunction of both. The inference on users' waiting time when using the elevator revealed the potential and effectiveness of automatic control of the elevator.
... They provide SPARQL endpoints to query the data, but they neither address data updates nor the explicit application in an enterprise environment. Our contribution in this paper is the ontology-based write access to relational data via SPARQL/Update [19], the upcoming data manipulation language (DML) of the Semantic Web. We present the update-aware RDB to RDF mapping language R3M and algorithms for translating SPARQL/Update to SQL DML. ...
... It is currently limited to readonly access to RDF data as it does not provide any means to insert, delete, or modify data. The Semantic Web community made efforts to close this gap, which lead to the SPARQL/Update [19] proposal for an RDF data manipulation language. SPARQL/Update does also serve as the basis for the update functionality in the relaunched W3C SPARQL working group (WG). ...
Conference Paper
Relational Databases are used in most current enterprise environments to store and manage data. The semantics of the data is not explicitly encoded in the relational model, but implicitly on the application level. Ontologies and Semantic Web technologies provide explicit semantics that allows data to be shared and reused across application, enterprise, and community boundaries. Converting all relational data to RDF is often not feasible, therefore we adopt an ontology-based access to relational databases. While existing approaches focus on read-only access, we present our approach OntoAccess that adds ontology-based write access to relational data. OntoAccess consists of the update-aware RDB to RDF mapping language R3M and algorithms for translating SPARQL/Update operations to SQL. This paper presents the mapping language, the translation algorithms, and a prototype implementation of OntoAccess.
... The middleware application loads OWL files with rules in the Ontologies+Rules module. The Ontology mapping module connects information from devices to datatype properties of ontology individuals, using SPARQL/Update statements [24] as the main mechanism for modifying the ontology. Thus, the presented knowledge model will reflect the current state of the system. ...
... Since the knowledge model of the system consists of ontologies, the technology for updating must be able to change them. SPARQL/Update [24] is a language for updating RDF (Resource Description Framework) graphs. It can insert and delete triples from an RDF graph, perform a group of update operations, and create and delete RDF graphs. ...
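The capabilities listed in the excerpt above (inserting and deleting triples, grouping operations, and creating graphs) can all be sketched in one hypothetical grouped request (graph name and vocabulary are invented for the example):

```sparql
PREFIX ex: <http://example.org/ns#>

# Create a named graph for the device state ...
CREATE GRAPH <http://example.org/graphs/state> ;
# ... then, in the same grouped request, retire the stale
# reading and record the new one.
DELETE DATA { GRAPH <http://example.org/graphs/state>
              { ex:lamp1 ex:state ex:Off . } } ;
INSERT DATA { GRAPH <http://example.org/graphs/state>
              { ex:lamp1 ex:state ex:On . } }
```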
Conference Paper
Full-text available
Modern buildings are equipped with multiple systems dedicated to improving the quality of living for inhabitants and facilitating the daily duties of maintenance personnel. Yet the variety of these systems causes a bulky information flow, which may result in information overload for the user, lost time, and an increase in errors. To make life comfortable in a smart house, the inhabitants of the building should have access to simple and intuitive control and monitoring of the apartment, and the maintenance personnel should have handy informational support for prioritizing and efficiently performing maintenance tasks. Thus, there is a need for intelligent management of the information flow, providing data to users with regard to context, for instance the ongoing situation and goals, the user's intentions, and the user's state and role in the system. This paper proposes a framework of context-aware middleware as a solution for information management in the system. Context awareness is achieved with ontological knowledge models of the system and two-level reasoning upon the ontologies. The enabling technologies are discussed in relation to the use case, which is a combination of smart home and elderly care services. The paper ends with future steps towards realization of the framework.
... The W3C has defined widely-accepted standards that make such an interoperability possible: the OWL 2 Web Ontology language defines the syntax that can be used to write ontologies; many reasoners are available today that are capable of using ontologies written in OWL 2 to make inferences on facts stored as RDF graphs [25]. A query language, SPARQL, is available for retrieving facts from RDF graphs in much the same way as data is retrieved from a database [51]. Data formatted using the RDF language and linked to ontologies are called linked open data, because their adoption of a standard format makes them usable to everybody and connected to all other data which refer to the same shared ontologies. ...
... In 2004 the SPARQL query language for RDF appeared, and since 2008 it has been a W3C Recommendation. Although originally only a language for read-querying data, SPARQL later received the amendment SPARQL Update, which allows write access to RDF data (Seaborne et al. 2008). ...
Article
Full-text available
The Secure SQL Server (SecSS) is a technology primarily developed to enable self-service governance of states, as described in (Paulin 2012). Self-service governance is a novel model of governance that rejects service-based public administration and instead proposes that governed subjects manage their legal relations in a self-service manner, based on ad-hoc determination of eligibilities. In this article we describe the SecSS prototype and its evaluation in a complex governmental scenario.
... The domain ontology is represented in OWL [6] and news querying is done by means of a SPARQL [30] variant, tSPARQL [17], which extends SPARQL by offering a wider range of time-related functionalities. The domain ontology graph is maintained using SPARQL Update [35]. Finally, classification of news items is done using GATE [15] and the WordNet [16] semantic lexicon. ...
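Maintaining an ontology graph of this kind typically relies on the DELETE/INSERT form of SPARQL Update, which rewrites triples matched by a WHERE pattern; a minimal sketch, with an invented news vocabulary rather than the Hermes one:

```sparql
PREFIX ex: <http://example.org/news#>

# Replace the stored relevance score of a concept with a new value.
DELETE { ?concept ex:score ?old }
INSERT { ?concept ex:score 0.9 }
WHERE  { ?concept ex:name "Health care reform" ;
                  ex:score ?old . }
```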
Conference Paper
Full-text available
When recommending news items, most of the traditional algorithms are based on TF-IDF, i.e., a term-based weighting method which is mostly used in information retrieval and text mining. However, many new technologies have been made available since the introduction of TF-IDF. This paper proposes a new method for recommending news items based on TF-IDF and a domain ontology. It is demonstrated that adapting TF-IDF with the semantics of a domain ontology, resulting in Concept Frequency - Inverse Document Frequency (CF-IDF), yields better results than using the original TF-IDF method. CF-IDF is built and tested in Athena, a recommender extension to the Hermes news personalization framework. Athena employs a user profile to store concepts or terms found in news items browsed by the user. The framework recommends new articles to the user using a traditional TF-IDF recommender and the CF-IDF recommender. A statistical evaluation of both methods shows that the use of an ontology significantly improves the performance of a traditional recommender.
... More details on the information extraction approach and micropost annotation are given in Section IV. Once the information is transformed to RDF, it is sent to a Semantic Publisher using SPARQL Update [13] via HTTP. Although it is desirable, for performance reasons, to have the Semantic Publisher on the same server, architecturally it can be located anywhere on the Web and accessed via an abstraction layer achieved via HTTP and the SPARQL Protocol for RDF. ...
Conference Paper
Full-text available
In this paper we discuss the collection, semantic annotation and analysis of real-time social signals from micro blogging data. We focus on users interested in analyzing social signals collectively for sense making. Our proposal enables flexibility in selecting subsets for analysis, alleviating information overload. We define an architecture that is based on state-of-the-art Semantic Web technologies and a distributed publish-subscribe protocol for real time communication. In addition, we discuss our method and application in a scenario related to the health care reform in the United States.
... In addition, we rely on SPARUL (SPARQL/Update [36]) and its related HTTP bindings to provide an additional abstraction layer for data storage. ...
Chapter
Full-text available
During the past few years, various organisations embraced the Enterprise 2.0 paradigms, providing their employees with new means to enhance collaboration and knowledge sharing in the workplace. However, while tools such as blogs, wikis, and principles like free-tagging or content syndication allow user-generated content to be more easily created and shared in the enterprise, in spite of some social issues, these new practices lead to various problems in terms of knowledge management. In this chapter, we provide an approach based on Semantic Web and Linked Data technologies for (1) integrating heterogeneous data from distinct Enterprise 2.0 applications, and (2) bridging the gap between raw text and machine-readable Linked Data. We discuss the theoretical background of our proposal as well as a practical case-study in enterprise, focusing on the various add-ons that have been provided to the original information system, as well as presenting how public Linked Open Data from the Web can be used to enhance existing Enterprise 2.0 ecosystems.
... Read-Write Linked Data [2] extends the Linked Data principles with the requirement to allow applications to read, write and update data on the Semantic Web. SPARQL/Update is an extension of SPARQL to support updates over RDF graphs [8]. D2RQ, however, merely provides read-only access to the relational data. ...
Article
Full-text available
D2RQ is a popular RDB-to-RDF mapping platform that supports mapping relational databases to RDF and posing SPARQL queries to these relational databases. However, D2RQ merely provides a read-only RDF view on relational databases. Thus, we introduce D2RQ/Update---an extension of D2RQ to enable executing SPARQL/Update statements on the mapped data, and to facilitate the creation of a read-write Semantic Web.
... Once the event has been manually validated, the Hermes framework updates the ontology by means of action rules that make use of SPARQL/Update [4]. The action rules are ordered, e.g., removing old CEOs before adding new CEOs to prevent incorrect updates. ...
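The ordering described in the excerpt above (removing old CEOs before adding new ones) maps naturally onto an ordered sequence of SPARQL/Update operations; a sketch with an invented company vocabulary, not the actual Hermes action rules:

```sparql
PREFIX ex: <http://example.org/company#>

# Rule 1: retract the previous CEO relation, whoever holds it ...
DELETE WHERE { ex:AcmeCorp ex:hasCEO ?oldCeo . } ;
# Rule 2: ... only then assert the newly extracted one,
# so the graph never holds two CEOs at once.
INSERT DATA { ex:AcmeCorp ex:hasCEO ex:JaneDoe . }
```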
... Data could be gathered from different sources of information, for example, wireless sensors with WS capabilities and databases. OntologyManager subscribes to WS events from required sources of data and invokes SPARQL/Update [18] services of OntologyService for updating information in the ontology. OntologyService contains the ontology of the system, performs reasoning and provides services for external applications for enabling SPARQL and SPARQL/Update queries. ...
Conference Paper
Full-text available
Emerging network technologies and a growing variety of available devices open new perspectives for smart home systems in data capture and analysis. Systems tend to become wider in scale and are able to capture different aspects of living conditions inside and outside buildings. The variety of available information makes those systems attractive to various user groups with different roles and responsibilities in the smart home domain. Yet current monitoring solutions do not consider personal user needs, which leads to overwhelming the user with an excessive information flow. This paper proposes an information management system using Semantic Web technologies (ontologies and queries), which takes into account the user's personal needs and the current situation in the environment, and reduces the information load on the user by providing personalized data. It is expected that the proposed information management approach will make general monitoring systems user-friendly and personally oriented, and thus safer during operation. A detailed description of the designed ontology is provided. Two basic scenarios for smart homes with social services are considered, and the developed prototype is described. The current implementation of the proposed architecture shows the feasibility of the approach and prompts further fields of research.
... SPARQL will not be discussed further because of the following significant disadvantages. SPARQL is an RDF query language based on SQL (Structured Query Language), but, since it is not based on OWL and queries have to be written for each access [17,20], it is significantly slower than the alternatives. OWL API and Owlready use the OWL 2 standard based on the W3C specification [21]. ...
Article
Full-text available
Imagine the possibility to save a simulation at any time, modify or analyze it, and restart again with exactly the same state. The conceptualization and its concrete manifestation in the implementation OntologySim are demonstrated in this paper. The presented approach of a fully ontology-based simulation can solve current challenges in modeling and simulation in production science. Due to the individualization and customization of products and the resulting increase in the complexity of production, a need for flexibly adaptable simulations arises. This need is exemplified in the trend towards Digital Twins and Digital Shadows. Their application to production systems, against the background of an ever increasing speed of change in such systems, is arduous. Moreover, the lack of understandability and human interpretability of current approaches hinders successful, goal-oriented applications. OntologySim can help solve this challenge by providing the ability to generate truly cyber-physical systems, both interlocked with reality and providing a simulation framework. In a nutshell, this paper presents a discrete-event-based open-source simulation using multi-agency and ontology.
... It is a data access language that can be used locally or remotely. SPARQL is an RDF query language governed by the W3C [18]. It defines a standard format for creating queries targeting RDF data and a set of criteria for processing and returning the results. ...
Article
Full-text available
This review presents various perspectives on converting user keywords into a formal query. Without understanding the dataset’s underlying structure, how can a user input a text-based query and then convert this text into semantic protocol and resource description framework query language (SPARQL) that deals with the resource description framework (RDF) knowledge base? The user may not know the structure and syntax of SPARQL, a formal query language and a sophisticated tool for the semantic web (SEW) and its vast and growing collection of interconnected open data repositories. As a result, this study examines various strategies for turning natural language into formal queries, their workings, and their results. In an Internet search engine from a single query, such as on Google, numerous matching documents are returned, with several related to the inquiry while others are not. Since a considerable percentage of the information retrieved is likely unrelated, sophisticated information retrieval systems based on SEW technologies, such as RDF and web ontology language (OWL), can help end users organize vast amounts of data to address this issue. This study reviews this research field and discusses two different approaches to show how users with no knowledge of the syntax of semantic web technologies deal with queries.
... Requirement #4 can be achieved by defining an artificial language for reading or writing data, or by using an existing standard. Examples of contemporary artificial languages that enable read- and write-access to data would be SQL or SPARQL/Update (Seaborne et al. 2008). In our previous work (Paulin 2011c) we describe an ss-gov prototype using SQL for both manipulating data and defining access restrictions. ...
Article
Full-text available
In this paper we present a novel model for governing societies based on modern information technology, which neither relies on manual bureaucratic labor, nor depends on process-based e-government services for governance. We expose the flaws of the latter and argue that it is not feasible for sustainable governance due to permanently changing laws, and instead propose a model in which people can govern themselves in a self-service manner by relying on constellations of data stored in a network of governmental databases to which citizens and officials have read- and write-access under rules defined by temporarily valid law.
... As such, in practice, it is often helpful to manually decompose the single query into multiple query steps. An exemplary four-step approach using the SPARQL 1.1 UPDATE construct [62,63] can be found in the supplementary ...
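The excerpt does not reproduce the supplementary four-step query itself; as a hedged sketch of the general technique it names, a single complex update can be decomposed into steps by materializing intermediate results in a scratch named graph (all names below are invented):

```sparql
PREFIX ex: <http://example.org/ns#>

# Step 1: materialize the candidate resources in a scratch graph.
INSERT { GRAPH <http://example.org/tmp> { ?s ex:flagged true } }
WHERE  { ?s a ex:Patient . } ;
# Step 2: act on the materialized intermediate result.
DELETE { ?s ex:status ex:Pending }
INSERT { ?s ex:status ex:Reviewed }
WHERE  { GRAPH <http://example.org/tmp> { ?s ex:flagged true } } ;
# Step 3: discard the scratch graph.
DROP GRAPH <http://example.org/tmp>
```

Decomposing this way trades one hard-to-author query for several simple ones, at the cost of managing the intermediate graph.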
Article
Full-text available
Background: Sharing sensitive data across organizational boundaries is often significantly limited by legal and ethical restrictions. Regulations such as the EU General Data Protection Regulation (GDPR) impose strict requirements concerning the protection of personal and privacy-sensitive data. Therefore new approaches, such as the Personal Health Train initiative, are emerging to utilize data right in their original repositories, circumventing the need to transfer data. Results: Circumventing limitations of previous systems, this paper proposes a configurable and automated schema extraction and publishing approach, which enables ad-hoc SPARQL query formulation against RDF triple stores without requiring direct access to the private data. The approach is compatible with existing Semantic Web-based technologies and allows for the subsequent execution of such queries in a safe setting under the data provider's control. Evaluation with four distinct datasets shows that a configurable amount of concise and task-relevant schema, closely describing the structure of the underlying data, was derived, enabling the schema introspection-assisted authoring of SPARQL queries. Conclusions: Automatically extracting and publishing data schema can enable the introspection-assisted creation of data selection and integration queries. In conjunction with the presented system architecture, this approach can enable reuse of data from private repositories and in settings where agreeing upon a shared schema and encoding a priori is infeasible. As such, it could provide an important step towards reuse of data from previously inaccessible sources and thus towards the proliferation of data-driven methods in the biomedical domain.
... It became a W3C Recommendation in 2008 and has since been the main query language for the Semantic Web [92]. Despite its focus on querying, some W3C members have made efforts towards manipulating RDF triples with SPARQL [94]. The SPARQL specification also comprises a protocol that defines a transport for SPARQL queries between clients and processors [27]. For this, Web Services are used, which can be implemented in REST or SOAP/WSDL. ...
Thesis
Full-text available
Research on Semantic Web Services is aimed mostly at the SOAP architecture. This architecture is rarely used in Web 2.0, and therefore in Online Social Networks. This dissertation presents an approach for the practical implementation of semantic descriptions in RESTful Web Services, a simplified architecture that has gained much attention in Web 2.0 and is increasingly replacing the SOAP architecture. The development of the tool presented here fills a gap in the process of deploying the Semantic Web. Existing solutions expose a theoretical view and have no practical implementations. The solution proposed in this work relates existing standards and technologies to develop an integrated and free tool, with which services of a popular Online Social Network are described. Finally, the automatic discovery, composition, and invocation of such services are demonstrated.
... The RDF and OWL standards are used to represent data and bind each resource with its meaning. The SPARQL UPDATE [16] and QUERY [17] languages are used to update and retrieve data from the shared information store, which constitutes a knowledge base (KB) for the environment. Through these technologies, Smart-M3 becomes a candidate middleware platform for hosting a wide range of context-aware applications based on ontology-driven and multi-agent approaches [18], [19], [20], [21], [22]. ...
Conference Paper
A smart space enhances a networked computing environment by enabling information sharing for a multitude of local digital devices and global resources from the Internet. We consider the M3 architecture (multi-device, multi-vendor, multi-domain) for creating smart spaces, which integrates technologies from two innovative concepts: the Semantic Web and the Internet of Things. Our research focus is on analyses of the capabilities of the Smart-M3 platform, which provides software implementations for such a central element of an M3 smart space as the Semantic Information Broker (SIB). The paper presents the state of the art and contributes our systematized vision of SIB design and implementation. The analyzed open source SIB implementations include the original Smart-M3 piglet-based SIB, its optimized descendant RedSIB, OSGi SIB for Java devices, pySIB for Python devices, and CuteSIB for Qt devices. We also analyze the design of proprietary or incomplete SIB implementations: RIBS for embedded devices and ADK SIB built upon the OSGi framework with integration in the Eclipse Integrated Development Environment. The theoretical study is augmented with experimental evaluation of available SIB implementations.
... Currently, many of the protocols composing the Semantic Web stack are also used in contexts not related to the web, for example in order to grant interoperability among smart devices in IoT or pervasive computing scenarios. For example, Unicode is used to univocally identify resources, RDF (Resource Description Framework) [8] to represent data as graphs composed of triples (subject, predicate, object), OWL (Web Ontology Language) [9] to provide meaning to the represented information, and the SPARQL UPDATE [10] and QUERY [11] languages to respectively modify and retrieve information from the RDF knowledge base. Among the existing interoperability platforms, a suitable choice as a target platform for the development of the article scenario is Smart-M3 [12]. ...
... There are several languages designed for querying such large KBs, including SPARQL [6], Xcerpt [7], and RQL [8]. However, learning these languages adds a limitation, as one needs to be familiar with the query language, its syntax, its semantics, and the ontology of the knowledge base. ...
Conference Paper
A Knowledge Base represents facts about the world explicitly, often in some form of subsumption ontology, rather than implicitly, embedded in procedural code the way a conventional computer program does. While there is rapid growth in knowledge bases, it poses the challenge of retrieving information from them. Knowledge Base Question Answering is one of the promising approaches for extracting substantial knowledge from Knowledge Bases. Unlike web search, Question Answering over a knowledge base gives accurate and concise results, provided that natural language questions can be understood and mapped precisely to an answer in the knowledge base. However, some existing embedding-based methods for knowledge base question answering ignore the subtle correlation between the question and the Knowledge Base (e.g., entity types, relation paths, and context) and suffer from the Out-Of-Vocabulary problem. In this paper, we focus on using a pre-trained language model for the Knowledge Base Question Answering task. First, we used BERT base uncased for the initial experiments. We further fine-tuned these embeddings with a two-way attention mechanism, from the knowledge base to the asked question and from the asked question to the knowledge base answer aspects. Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question dynamically in multiple aspects. Our experimental results show the effectiveness and superiority of the BERT pre-trained language model embeddings for question answering systems on knowledge bases over other well-known embedding methods.
... There are several languages designed for querying such large KBs, including SPARQL [31], Xcerpt [8], and RQL [9]. However, learning these languages adds a limitation, as one needs to be familiar with the query language, its syntax, its semantics, and the ontology of the knowledge base. ...
Preprint
Full-text available
A Knowledge Base represents facts about the world explicitly, often in some form of subsumption ontology, rather than implicitly, embedded in procedural code the way a conventional computer program does. While there is rapid growth in knowledge bases, it poses the challenge of retrieving information from them. Knowledge Base Question Answering is one of the promising approaches for extracting substantial knowledge from Knowledge Bases. Unlike web search, Question Answering over a knowledge base gives accurate and concise results, provided that natural language questions can be understood and mapped precisely to an answer in the knowledge base. However, some existing embedding-based methods for knowledge base question answering ignore the subtle correlation between the question and the Knowledge Base (e.g., entity types, relation paths, and context) and suffer from the Out-Of-Vocabulary problem. In this paper, we focus on using a pre-trained language model for the Knowledge Base Question Answering task. First, we used BERT base uncased for the initial experiments. We further fine-tuned these embeddings with a two-way attention mechanism, from the knowledge base to the asked question and from the asked question to the knowledge base answer aspects. Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question dynamically in multiple aspects. Our experimental results show the effectiveness and superiority of the BERT pre-trained language model embeddings for question answering systems on knowledge bases over other well-known embedding methods.
... Given the disadvantages of traditional approaches [6][7][8][9], this paper proposes an extensible approach to convert RDB to RDF with SPARQL/UPDATE [11], a new update language for the Semantic Web. SPARQL/UPDATE makes up for the read-only limitation of SPARQL and provides methods to insert, delete, or modify RDF data. ...
Article
Full-text available
Converting the huge amount of Web data stored in databases to semantic data is a critical requirement for the development of the current Web. This paper studies the literal-to-URI and implicit-information issues present in some traditional approaches to mapping a Relational Database (RDB) to RDF. To solve these problems, SPARQL/UPDATE is used to extend the traditional approaches. This extensible approach not only solves the literal-to-URI problem but also provides methods to process implicit information with rules in the original RDF document. Experimental results show that the new approach enables semantic query engines to find more exact results.
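The literal-to-URI rewriting described above can be sketched as a SPARQL/Update operation; the prefixes, property names, and IRI scheme here are assumptions for illustration, not the paper's actual mapping:

```sparql
# Replace a plain-literal department value with a URI resource,
# preserving the original string as a label on the new resource.
PREFIX ex:   <http://example.org/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

DELETE { ?emp ex:department ?deptName }
INSERT {
  ?emp ex:department ?deptUri .
  ?deptUri rdfs:label ?deptName .
}
WHERE {
  ?emp ex:department ?deptName .
  FILTER isLiteral(?deptName)
  BIND (IRI(CONCAT("http://example.org/dept/",
                   ENCODE_FOR_URI(?deptName))) AS ?deptUri)
}
```

Each matching literal becomes a dereferenceable resource, so subsequent queries can attach further triples (e.g., the implicit information derived by rules) to the department itself rather than to a string.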
Conference Paper
This proposal explores the promotion of existing relational databases to Semantic Web Endpoints. It presents the benefits of ontology-based read and write access to existing relational data as well as the need for specialized, scalable reasoning over that data. We introduce our approach for translating SPARQL/Update operations to SQL, describe how scalable reasoning can be realized by using the power of the database system, and outline two case studies for evaluating our approach.
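To give a rough sense of such a translation, assuming the simplest possible storage layout (a single triple table `triples(s, p, o)`); the actual mapping used in the proposal may well differ:

```sparql
# A SPARQL/Update operation over the endpoint ...
INSERT DATA {
  <http://example.org/alice> <http://xmlns.com/foaf/0.1/name> "Alice"
}
# ... could, under a naive triple-table layout, translate to roughly:
#   INSERT INTO triples (s, p, o)
#   VALUES ('http://example.org/alice',
#           'http://xmlns.com/foaf/0.1/name',
#           '"Alice"');
```

In practice, an ontology-based RDB-to-RDF mapping targets the existing relational schema rather than a triple table, so the generated SQL must update the original tables and columns from which the triples are viewed.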
Chapter
The Semantic Web extends the existing Web, adding a multitude of language standards and software components to give humans and machines direct access to data. The chapter starts with deriving the architecture of the Semantic Web as a whole from first principles, followed by a presentation of Web standards underpinning the Semantic Web that are used for data publishing, querying, and reasoning. Further, the chapter identifies functional software components required to implement capabilities and behavior in applications that publish and consume Semantic Web content.
Thesis
Full-text available
The tourism domain is a very information-intensive industry. In the field of eTourism, a powerful data schema is therefore required for suitable storage and querying of the data. Until a few years ago, relational databases and document-centric systems sufficed for this purpose. For today's tourist, however, satisfying their information needs quickly and easily plays an ever greater role. For this reason, semantic technologies are increasingly being adopted in eTourism, as happened in the transformation of the tourism portal vakantieland.nl into a semantic web application. Such a transition, however, also brings new problems with it, for example the question of how tourist information can be suitably modeled using the Resource Description Framework (RDF). This thesis addresses this question with respect to modeling the properties of tourist destinations. To this end, an existing eTourism ontology is analyzed and, based on it, a suitable schema is defined. Subsequently, the ontology undergoes an evolution to adapt it to the new schema. To further increase the usefulness of the tourism portal, the existing filter functions are also extended.
Chapter
Full-text available
Exposing not only human-centered information, but machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of applications and businesses where the data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in different Web formats: some providers choose XML, others RDF, again others JSON or OWL, for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented reusing traditional database technology, if desirable. Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient data access in a “Web of Data”.
Chapter
Full-text available
The core vision put forward by the Internet of Things, of networked, intelligent objects capable of taking autonomous decisions based on decentral information processing, resonates strongly with research in the field of autonomous cooperating logistics processes. The characteristics of the IT landscape underlying autonomous cooperating logistics processes pose a number of challenges for data integration. The heterogeneity of the data sources and their highly distributed nature, along with their availability, make the application of traditional approaches problematic. The field of semantic data integration offers potential solutions to these issues. This contribution examines how an adequate approach to data integration may be facilitated on that basis. It subsequently proposes a service-oriented, ontology-based mediation approach to data integration for an Internet of Things supporting autonomous cooperating logistics processes.
Chapter
Full-text available
Autonomous control in logistic systems is characterized by the ability of logistic objects to process information and to render and execute decisions on their own. This paper investigates whether the concept of the semantic mediator is applicable to the data integration problems arising from an application scenario of autonomous control in the transport logistics sector. Initially, characteristics of autonomous logistics processes are presented, highlighting the need for decentral data storage in such a paradigm. Subsequently, approaches to data integration are examined. An application scenario exemplifying autonomous control in the field of transport logistics is presented and analysed, on the basis of which a concept, technical architecture, and prototypical implementation of a semantic mediator is developed and described. A critical appraisal of the semantic mediator in the context of autonomous logistics processes concludes the paper, along with an outlook on ongoing and future work.
Article
The increasing performance and more widespread use of automated semantic annotation and entity linking platforms have opened up the possibility of using semantic information in information retrieval. While keyword-based information retrieval techniques have shown impressive performance, the addition of semantic information can increase retrieval performance by allowing for more accurate sense disambiguation, intent determination, and instance identification, to name a few. Researchers have already explored integrating semantic information into practical search engines using a combination of techniques such as graph databases, hybrid indices, and adapted inverted indices, among others. One of the challenges in the efficient design of a search engine capable of considering semantic information is that it needs to index information beyond what is traditionally stored in inverted indices, including entity mentions and type relationships. The objective of our work in this paper is to investigate various ways in which different data structure types can be adopted to integrate three types of information: keywords, entities, and types. We systematically compare the performance of the different data structures for scenarios where (i) the same data structure type is adopted for all three types of information, and (ii) different data structure types are integrated for storing and retrieving the three different information types. We report our findings in terms of the performance of various query processing tasks, such as Boolean and ranked intersection, for the different indices, and discuss which index type is appropriate under different conditions for semantic search.
Conference Paper
Full-text available
Semantic Web technologies are increasingly used in the Internet of Things due to their intrinsic propensity to foster interoperability among heterogeneous devices and services. However, some IoT application domains have strict requirements in terms of timeliness of the exchanged messages, latency, and support for constrained devices. An example of these domains is the emerging area of the Internet of Musical Things. In this paper we propose C Minor, a CoAP-based semantic publish/subscribe broker specifically designed to meet the requirements of Internet of Musical Things applications, but relevant for any IoT scenario. We assess its validity through a practical use case.