Mirko Spasić

Faculty of Mathematics, University of Belgrade · Department of Computer Science and Informatics

About

20 Publications · 4,362 Reads
66 Citations (since 2017: 15 research items · 56 citations)
[Chart: citations per year, 2017–2023]

Publications (20)
Article
The query containment problem is a fundamental computer science problem which was originally defined for relational queries. With the growing popularity of the SPARQL query language, it became relevant and important in this new context: reliable and efficient SPARQL query containment solvers may have various applications within static analysis of q...
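For readers unfamiliar with the problem, a small hypothetical illustration (invented data and queries, not the procedure used by a containment solver): Q1 below is contained in Q2, because every answer to Q1 is also an answer to Q2 on any graph. The rdflib sketch merely checks the subset relation on one sample graph, which is a necessary condition for containment, not a proof of it.

```python
from rdflib import Graph

# Tiny sample graph (hypothetical data, for illustration only).
g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob .
    ex:alice ex:worksAt ex:acme .
    ex:bob   ex:knows ex:carol .
""", format="turtle")

# Q1 adds an extra triple pattern, so it is contained in Q2:
# on every graph, each answer to Q1 is also an answer to Q2.
q1 = """SELECT ?x WHERE { ?x <http://example.org/knows> ?y .
                          ?x <http://example.org/worksAt> ?z . }"""
q2 = """SELECT ?x WHERE { ?x <http://example.org/knows> ?y . }"""

answers1 = {row[0] for row in g.query(q1)}
answers2 = {row[0] for row in g.query(q2)}

# Subset check on a single graph: necessary but not sufficient evidence;
# a containment solver must reason over all possible graphs.
print(answers1.issubset(answers2))  # True
```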
Preprint
Full-text available
The tool SPECS implements an efficient automated approach for reasoning about the SPARQL query containment problem. In this paper, we prove the correctness of this approach. We give a precise semantics of the core subset of the SPARQL language. We briefly discuss the procedure used for reducing the query containment problem into a formal logical framework. W...
Article
Full-text available
Improving code quality without changing its functionality, e.g., by refactoring or optimization, is an everyday programming activity. Good programming practice requires that each such change be followed by a check that the change really preserves the code's behavior. If such a check is performed by testing, it can be time-consuming and still can...
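As a hypothetical illustration of the testing-based check mentioned above (the functions are invented for this sketch and are not from the paper): an original routine and its optimized rewrite are compared on a range of sampled inputs, which takes time and still, in general, cannot prove that the two versions behave identically.

```python
# Hypothetical refactoring example: a loop and its closed-form rewrite.
def sum_of_squares_original(n):
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

def sum_of_squares_optimized(n):
    # The optimization must preserve the original behavior.
    return n * (n + 1) * (2 * n + 1) // 6

# Testing-based check: only as strong as the inputs it covers.
for n in range(1000):
    assert sum_of_squares_original(n) == sum_of_squares_optimized(n), n
print("Sampled inputs agree (evidence, not a proof of equivalence).")
```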
Article
Full-text available
GeoSPARQL is an important standard for the geospatial linked data community, given that it defines a vocabulary for representing geospatial data in RDF, defines an extension to SPARQL for processing geospatial data, and provides support for both qualitative and quantitative spatial reasoning. However, what the community is missing is a comprehensiv...
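A minimal hypothetical illustration of what the standard provides (the data, resource names and query are invented, not taken from the benchmark): the vocabulary terms geo:hasGeometry and geo:asWKT describe a feature's geometry, while the geof: filter function belongs to the SPARQL extension, so the query below needs a GeoSPARQL-compliant triplestore to be evaluated; rdflib is used here only to parse the data.

```python
from rdflib import Graph

# Hypothetical geospatial data described with the GeoSPARQL vocabulary.
data = """
@prefix geo: <http://www.opengis.net/ont/geosparql#> .
@prefix ex:  <http://example.org/> .

ex:belgrade a ex:City ;
    geo:hasGeometry ex:belgradeGeom .
ex:belgradeGeom a geo:Geometry ;
    geo:asWKT "POINT(20.4573 44.7872)"^^geo:wktLiteral .
"""

# A query that a GeoSPARQL-compliant store should answer: features whose
# geometry lies within a given polygon (geof:sfWithin comes from the
# GeoSPARQL function extension, not from plain SPARQL).
query = """
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
SELECT ?feature WHERE {
  ?feature geo:hasGeometry/geo:asWKT ?wkt .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((19 43, 22 43, 22 46, 19 46, 19 43))"^^geo:wktLiteral))
}
"""

g = Graph()
g.parse(data=data, format="turtle")
print(len(g), "triples parsed")  # evaluating `query` requires a compliant store
```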
Article
Full-text available
Checking the compliance of geospatial triplestores with the GeoSPARQL standard represents a crucial step for many users when selecting the appropriate storage solution. This publication presents the software which comprises the GeoSPARQL compliance benchmark – a benchmark which checks RDF triplestores for compliance with the requirements of the Geo...
Preprint
Full-text available
We propose a series of tests that check for the compliance of RDF triplestores with the GeoSPARQL standard. The purpose of the benchmark is to test how many of the requirements outlined in the standard a tested system supports and to push triplestores forward in achieving full GeoSPARQL compliance. This topic is of concern because the support of...
Preprint
Full-text available
The Linked Data Benchmark Council's Social Network Benchmark (LDBC SNB) is an effort intended to test various functionalities of systems used for graph-like data management. For this, LDBC SNB uses the recognizable scenario of operating a social network, characterized by its graph-shaped data. LDBC SNB consists of two workloads that focus on differ...
Conference Paper
Full-text available
Geospatial RDF datasets tend to use latitude and longitude properties to denote the geographic location of the entities described within them. Geographic information systems, on the other hand, prefer to work with WKT and GML geometries. In this paper, we present a process of RDF data transformation which p...
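A minimal sketch of this kind of transformation, assuming the input uses the W3C WGS84 lat/long vocabulary (the resource names are hypothetical and this is not the exact procedure from the paper): each latitude/longitude pair is rewritten into a GeoSPARQL WKT point geometry, in the longitude-latitude order that WKT expects.

```python
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF

WGS = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
GEO = Namespace("http://www.opengis.net/ont/geosparql#")

g = Graph()
g.parse(data="""
    @prefix wgs: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
    @prefix ex:  <http://example.org/> .
    ex:belgrade wgs:lat "44.7872" ; wgs:long "20.4573" .
""", format="turtle")

# For every resource carrying a lat/long pair, attach a WKT point geometry.
for s in set(g.subjects(WGS.lat)):
    lat, lon = g.value(s, WGS.lat), g.value(s, WGS.long)
    if lat is None or lon is None:
        continue
    geom = BNode()
    g.add((s, GEO.hasGeometry, geom))
    g.add((geom, RDF.type, GEO.Geometry))
    g.add((geom, GEO.asWKT,
           Literal(f"POINT({lon} {lat})", datatype=GEO.wktLiteral)))

print(g.serialize(format="turtle"))
```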
Chapter
The aim of the Mighty Storage Challenge (MOCHA) at ESWC 2018 was to test the performance of solutions for SPARQL processing in aspects that are relevant for modern applications. These include ingesting data, answering queries on large datasets and serving as backend for applications driven by Linked Data. The challenge tested the systems against da...
Chapter
Following the success of Virtuoso at last year’s Mighty Storage Challenge - MOCHA 2017, we decided to participate once again and test the latest Virtuoso version against the new tasks which comprise the MOCHA 2018 challenge. The aim of the challenge is to test the performance of solutions for SPARQL processing in aspects relevant for modern applica...
Conference Paper
Full-text available
Following the success of Virtuoso at last year's Mighty Storage Challenge - MOCHA 2017, we decided to participate once again and test the latest Virtuoso version against the new tasks which comprise the MOCHA 2018 challenge. The aim of the challenge is to test the performance of solutions for SPARQL processing in aspects relevant for modern applica...
Chapter
The Mighty Storage Challenge (MOCHA) aims to test the performance of solutions for SPARQL processing, in several aspects relevant for modern Linked Data applications. Virtuoso, by OpenLink Software, is a modern enterprise-grade solution for data access, integration, and relational database management, which provides a scalable RDF Quad Store. In th...
Chapter
The aim of the Mighty Storage Challenge (MOCHA) at ESWC 2017 was to test the performance of solutions for SPARQL processing in aspects that are relevant for modern applications. These include ingesting data, answering queries on large datasets and serving as backend for applications driven by Linked Data. The challenge tested the systems against da...
Conference Paper
Full-text available
The Mighty Storage Challenge (MOCHA) aims to test the performance of solutions for SPARQL processing, in several aspects relevant for modern Linked Data applications. Virtuoso, by OpenLink Software, is a modern enterprise-grade solution for data access, integration, and relational database management, which provides a scalable RDF Quad Store. In th...
Conference Paper
Full-text available
Synthetic datasets used in benchmarking need to mimic all characteristics of real-world datasets, in order to provide realistic benchmarking results. Synthetic RDF datasets usually show a significant discrepancy in the level of structuredness compared to real-world RDF datasets. This structural difference is important as it directly affects storage...
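One simplified way to picture structuredness (a hypothetical sketch, not the metric used in the paper): for a given type, measure how much of the instance-by-property matrix is actually filled in. Real-world RDF data typically leaves many of these cells empty, while naively generated synthetic data tends to fill almost all of them.

```python
# Hypothetical instances of one type and the properties each of them sets.
instances = {
    "product1": {"name", "price", "weight"},
    "product2": {"name", "price"},
    "product3": {"name"},
}

properties = set().union(*instances.values())         # properties seen for this type
filled = sum(len(props) for props in instances.values())
score = filled / (len(properties) * len(instances))   # 6 / (3 * 3) ≈ 0.67

print(f"structuredness of this type: {score:.2f}")    # 1.0 = fully "relational" data
```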
Conference Paper
Full-text available
Linked Open Data (LOD) is a growing movement for organizations to make their existing data available in a machine-readable format. There are two equally important aspects of LOD: publishing and consuming. This article analyzes the requirements for both sub-processes and presents an example of publishing statistical data in RDF format and integra...
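On the publishing side, one common choice for statistical observations is the W3C RDF Data Cube vocabulary; the snippet below is a minimal hypothetical example (the dataset, dimension and measure names are illustrative and not necessarily those used in this work).

```python
from rdflib import Graph

# Hypothetical statistical observation expressed with the RDF Data Cube vocabulary.
turtle = """
@prefix qb: <http://purl.org/linked-data/cube#> .
@prefix ex: <http://example.org/stats/> .

ex:populationDataset a qb:DataSet .

ex:obs1 a qb:Observation ;
    qb:dataSet    ex:populationDataset ;
    ex:refArea    ex:Belgrade ;
    ex:refPeriod  "2011" ;
    ex:population 1234567 .   # illustrative value, not a real figure
"""

g = Graph()
g.parse(data=turtle, format="turtle")
print(len(g), "triples ready to publish")
```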
Conference Paper
Full-text available
To improve transparency and public service delivery, national, regional and local governmental bodies need to consider new strategies for opening up their data. We approach the problem of creating a more scalable and interoperable Open Government Data ecosystem by considering the latest advances in Linked Open Data. More precisely, we showcase how...
Conference Paper
Full-text available
To make the Web of Data a reality, and push large scale integration of, and reasoning on, data on the Web, huge amounts of data must be made available in a standard format, reachable and manageable by Semantic Web tools. National statistical offices across the world already possess an abundance of structured data, both in their databases and files...
Conference Paper
Full-text available
As a .NET C# application, NooJ was originally restricted to a single family of platforms: Windows. As many potential NooJ users use other operating systems (e.g. Linux, BSD, Solaris, Mac OS X, etc.), a need emerged to support NooJ on these platforms as well. Java is very well supported on many operating systems commonly used on desktop and laptop co...
