Jędrzej Potoniec
Poznan University of Technology · Institute of Computing Science
About
29 Publications · 2,127 Reads
174 Citations
Publications (29)
We present a novel approach for learning embeddings of ALC knowledge base concepts. The embeddings reflect the semantics of the concepts in such a way that it is possible to compute an embedding of a complex concept from the embeddings of its parts by using appropriate neural constructors. Embeddings for different knowledge bases are vectors in a s...
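A minimal sketch of the general idea, assuming a PyTorch setup with learnable embeddings for atomic concepts and small feed-forward "constructor" networks for conjunction and negation; the module names, dimensionality, and architecture are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: embeddings for atomic ALC concepts plus neural
# constructors that compute an embedding of a complex concept from its parts.
import torch
import torch.nn as nn

DIM = 64  # assumed embedding dimensionality

class ConceptEmbedder(nn.Module):
    def __init__(self, n_atomic: int, dim: int = DIM):
        super().__init__()
        self.atomic = nn.Embedding(n_atomic, dim)        # atomic concept embeddings
        self.and_net = nn.Sequential(                    # constructor for C ⊓ D
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.not_net = nn.Sequential(                    # constructor for ¬C
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def embed_atomic(self, idx):
        return self.atomic(idx)

    def embed_and(self, c, d):
        return self.and_net(torch.cat([c, d], dim=-1))

    def embed_not(self, c):
        return self.not_net(c)

model = ConceptEmbedder(n_atomic=100)
person = model.embed_atomic(torch.tensor([0]))
student = model.embed_atomic(torch.tensor([1]))
person_and_student = model.embed_and(person, student)   # embedding of Person ⊓ Student
```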
Sarcastic RoBERTa is an approach to recognizing sarcastic tweets written in English. It is based on a pre-trained RoBERTa model supported by a 3-layer feed-forward fully-connected neural network. It establishes a new state-of-the-art result on the iSarcasm dataset, attaining the \(F_1\) score of 0.526, and being not far from the human performance o...
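As a rough sketch of the described setup, the snippet below stacks a 3-layer feed-forward classifier on a Hugging Face roberta-base backbone; the hidden size, pooling choice, and training details are guesses rather than the published configuration.

```python
# Sketch, not the published model: RoBERTa backbone + 3-layer feed-forward head.
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class SarcasmClassifier(nn.Module):
    def __init__(self, hidden: int = 256):                # hidden size is an assumption
        super().__init__()
        self.backbone = RobertaModel.from_pretrained("roberta-base")
        d = self.backbone.config.hidden_size
        self.head = nn.Sequential(                         # 3-layer feed-forward network
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))                          # sarcastic / not sarcastic

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                  # representation of the <s> token
        return self.head(cls)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
batch = tokenizer(["Oh great, another Monday."], return_tensors="pt")
logits = SarcasmClassifier()(batch["input_ids"], batch["attention_mask"])
```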
We present a method for constructing synthetic datasets of Competency Questions translated into SPARQL-OWL queries. This method is used to generate BigCQ, the largest set of CQ patterns and SPARQL-OWL templates that can provide translation examples to automate assessing the completeness and correctness of ontologies.
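To illustrate the kind of pairing such a dataset contains, here is a toy CQ pattern and query template instantiated with a class IRI; the placeholder syntax and template text are invented for illustration and are not taken from BigCQ.

```python
# Hypothetical CQ pattern / query template pair; BigCQ's actual patterns differ.
CQ_PATTERN = "What are the kinds of {CLASS}?"
QUERY_TEMPLATE = "SELECT ?sub WHERE {{ ?sub rdfs:subClassOf <{CLASS_IRI}> }}"

def instantiate(class_label: str, class_iri: str):
    cq = CQ_PATTERN.format(CLASS=class_label)
    query = QUERY_TEMPLATE.format(CLASS_IRI=class_iri)
    return cq, query

cq, query = instantiate("pizza", "http://example.org/onto#Pizza")
print(cq)      # What are the kinds of pizza?
print(query)   # SELECT ?sub WHERE { ?sub rdfs:subClassOf <http://example.org/onto#Pizza> }
```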
Capturing business process specifics using a model is essential to effectively manage, control, and instruct the process participants with their roles and tasks. A normative process model is an invaluable source of information, not only for human inspection but also for software supporting and controlling the process. The actual process execution l...
Glossary of Terms extraction from textual requirements is an important step in ontology engineering methodologies. Although it was initially intended to be performed manually, recent years have shown that some degree of automation is possible. Based on these promising approaches, we introduce a novel, human-interpretable, rule-based method named...
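As a generic, heavily simplified illustration of rule-based term extraction from requirement sentences (not the method introduced in the paper), one could apply noun-chunk rules with spaCy:

```python
# Generic noun-chunk rule sketch; the paper's rules and pipeline are different.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def extract_terms(requirement: str):
    doc = nlp(requirement)
    terms = []
    for chunk in doc.noun_chunks:
        # rule: drop determiners and keep the remaining noun phrase as a candidate term
        words = [t.text for t in chunk if t.pos_ != "DET"]
        if words:
            terms.append(" ".join(words).lower())
    return terms

print(extract_terms("The system shall send a confirmation email to the registered user."))
# e.g. ['system', 'confirmation email', 'registered user']
```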
Competency Questions (CQs) are used in many ontology engineering methodologies to collect requirements and track the completeness and correctness of an ontology being constructed. Although they are frequently suggested by ontology engineering methodologies, the publicly available datasets of CQs and their formalizations in ontology query languages...
This data article reports on a new set of 234 competency questions for ontology development and their formalisation into a set of 131 SPARQL-OWL queries. This is the largest set of competency questions with their linked queries to date, covering several ontologies of different types in different subject domains developed by different groups of quest...
Competency Questions (CQs) are natural language questions outlining and constraining the scope of knowledge represented in an ontology. Although CQs are a part of several ontology engineering methodologies, the actual publication of CQs for the available ontologies is very limited and even scarcer is the publication of their respective formaliz...
Competency Questions (CQs) are natural language questions outlining and constraining the scope of knowledge represented by an ontology. Although CQs are a part of several ontology engineering methodologies, we have observed that the actual publication of CQs for the available ontologies is very limited and even scarcer is the publication of the...
We consider how to select a subgraph of an RDF graph in an ontology learning problem in order to avoid learning redundant axioms. We propose to address this by selecting RDF triples that cannot be inferred using a reasoner, and we present an algorithm to find them.
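A naive sketch of this idea with rdflib and the owlrl OWL 2 RL reasoner is shown below: a triple is kept only if it does not reappear when the rest of the graph is materialized. The paper's actual algorithm and choice of reasoner may differ, and this brute-force loop is far less efficient.

```python
# Naive sketch: keep only triples that the rest of the graph cannot re-derive.
from rdflib import Graph
from owlrl import DeductiveClosure, OWLRL_Semantics

def non_inferable_triples(g: Graph) -> Graph:
    selected = Graph()
    for triple in g:
        rest = Graph()
        for t in g:                      # copy the graph without the candidate triple
            if t != triple:
                rest.add(t)
        DeductiveClosure(OWLRL_Semantics).expand(rest)   # materialize OWL RL inferences
        if triple not in rest:           # not derivable from the remaining triples
            selected.add(triple)
    return selected

g = Graph()
g.parse("ontology.ttl")                  # hypothetical input file
print(len(non_inferable_triples(g)))
```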
In this study, we present Swift Linked Data Miner, an interruptible algorithm that can directly mine an online Linked Data source (e.g., a SPARQL endpoint) for OWL 2 EL class expressions to extend an ontology with new SubClassOf: axioms. The algorithm works by downloading only a small part of the Linked Data source at a time, building a smart inde...
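The snippet below is not SLDM itself, only a crude sketch of the underlying intuition: sample instances of a class from a SPARQL endpoint and propose SubClassOf: axioms for properties that occur with high support. The endpoint, class IRI, and threshold are placeholders.

```python
# Crude sketch of mining candidate SubClassOf: axioms from a SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"           # example endpoint
CLASS_IRI = "http://dbpedia.org/ontology/City"    # class to extend
MIN_SUPPORT = 0.8                                  # arbitrary threshold

sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)
sparql.setQuery(f"""
SELECT ?x ?p WHERE {{
  ?x a <{CLASS_IRI}> ;
     ?p ?o .
}} LIMIT 5000
""")
rows = sparql.query().convert()["results"]["bindings"]

instances, by_prop = set(), {}
for r in rows:
    x, p = r["x"]["value"], r["p"]["value"]
    instances.add(x)
    by_prop.setdefault(p, set()).add(x)

for prop, xs in sorted(by_prop.items(), key=lambda kv: -len(kv[1])):
    support = len(xs) / len(instances)
    if support >= MIN_SUPPORT:
        # candidate axiom in Manchester-like syntax
        print(f"<{CLASS_IRI}> SubClassOf: <{prop}> some owl:Thing   (support {support:.2f})")
```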
For a given set of URIs, finding their common graph patterns may provide useful knowledge. We present an algorithm searching for the best patterns while trying to extend the set of relevant URIs. It involves interaction with the user in order to supervise extension of the set.
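A tiny, non-interactive sketch of the basic step with rdflib: collect the predicate-object pairs shared by every URI in the input set (the interactive extension of the URI set described here is omitted).

```python
# Find predicate-object pairs common to all URIs in the set.
from rdflib import Graph, URIRef

g = Graph()
g.parse("data.ttl")                                 # hypothetical RDF dataset

uris = {URIRef("http://example.org/a"), URIRef("http://example.org/b")}

def common_patterns(graph: Graph, uri_set):
    per_uri = [{(p, o) for _, p, o in graph.triples((u, None, None))} for u in uri_set]
    return set.intersection(*per_uri) if per_uri else set()

for p, o in common_patterns(g, uris):
    print(f"?x <{p}> {o.n3()}")                     # a shared pattern with ?x as the focus
```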
Swift Linked Data Miner (SLDM) is a data mining algorithm capable of inferring new knowledge and thus extending an ontology by mining a Linked Data dataset. We present an extension to WebProtégé providing SLDM capabilities in a web browser. The extension is open source and readily available to use.
The authors propose a new method for mining sets of patterns for classification, where patterns are represented as SPARQL queries over RDFS. The method contributes to so-called semantic data mining, a data mining approach where domain ontologies are used as background knowledge, and where the new challenge is to mine knowledge encoded in domain ont...
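A generic sketch of the pattern-as-feature idea (with placeholder patterns, not mined ones): each SPARQL ASK pattern becomes a binary feature of an instance, and an off-the-shelf classifier is trained on the resulting table.

```python
# Turn SPARQL ASK patterns into binary features and train a classifier.
from rdflib import Graph, URIRef, Variable
from sklearn.tree import DecisionTreeClassifier

g = Graph()
g.parse("data.ttl")                                  # hypothetical annotated RDF dataset

PATTERNS = [                                         # ASK bodies with ?x as the focus variable
    "?x a <http://example.org/Student> .",
    "?x <http://example.org/enrolledIn> ?course .",
]

def features(uri: URIRef):
    row = []
    for body in PATTERNS:
        res = g.query(f"ASK WHERE {{ {body} }}", initBindings={Variable("x"): uri})
        row.append(int(res.askAnswer))
    return row

instances = [URIRef("http://example.org/a"), URIRef("http://example.org/b")]
labels = [1, 0]                                      # hypothetical class labels
clf = DecisionTreeClassifier().fit([features(u) for u in instances], labels)
```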
We present an idea of using mathematical modelling to guide the process of mining a set of patterns in an RDF graph and further exploiting these patterns to build expressive OWL class hierarchies.
We propose a new method for knowledge acquisition and ontology refinement for the Semantic Web. The method is based on a combination of the attribute exploration algorithm from formal concept analysis and the active learning approach to machine learning classification tasks. It enables utilization of Linked Data during the process of an ontology ref...
The number of publicly available resources that re-use terms from various OWL ontologies has increased massively over recent years, with the presence of Linked Open Data datasets and the growing number of websites that now embed structured data into HTML pages using markup languages such as RDFa, microdata and microformats. In this paper, we describe...
We propose a new method for knowledge acquisition and ontology refinement for the Semantic Web utilizing Linked Data available through remote SPARQL endpoints. This method is based on a combination of the attribute exploration algorithm from formal concept analysis and the active learning approach from machine learning.
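A strongly simplified sketch of one loop iteration under these assumptions: a candidate subsumption is checked against a SPARQL endpoint for counterexamples, and only if none are found is the expert asked to confirm it; the endpoint, classes, and confirmation step are placeholders.

```python
# One simplified exploration step: data first, then the expert as oracle.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"              # example endpoint

def counterexamples(sub_iri: str, sup_iri: str, limit: int = 5):
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
    SELECT ?x WHERE {{
      ?x a <{sub_iri}> .
      FILTER NOT EXISTS {{ ?x a <{sup_iri}> }}
    }} LIMIT {limit}
    """)
    rows = sparql.query().convert()["results"]["bindings"]
    return [r["x"]["value"] for r in rows]

sub = "http://dbpedia.org/ontology/Capital"
sup = "http://dbpedia.org/ontology/City"
ces = counterexamples(sub, sup)
if ces:
    print("rejected by the data, counterexamples:", ces)
elif input(f"Is every <{sub}> also a <{sup}>? [y/n] ").strip().lower() == "y":
    print(f"accepted axiom: <{sub}> rdfs:subClassOf <{sup}>")
```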
We consider a classification process in which the representation precision of new examples is interactively increased. We use an attribute value ontology (AVO) to represent examples at different levels of abstraction (levels of precision). This precision can be improved by conducting diagnostic tests. The selection of these diagnostic tests is general...
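As a generic illustration of the selection step only (ignoring the attribute value ontology), one common criterion is to pick the not-yet-performed test with the highest information gain estimated from labelled data; the toy attributes and labels below are invented.

```python
# Toy information-gain-based choice of the next diagnostic test.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values()) if n else 0.0

def information_gain(rows, labels, attr):
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys) for ys in groups.values())
    return entropy(labels) - remainder

rows = [                                  # hypothetical training examples
    {"fever": "high", "cough": "yes"},
    {"fever": "none", "cough": "yes"},
    {"fever": "high", "cough": "no"},
]
labels = ["flu", "cold", "flu"]
unknown = ["fever", "cough"]              # tests not yet performed on the new example
print("next diagnostic test:", max(unknown, key=lambda a: information_gain(rows, labels, a)))
```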
We present a prototype system, named ASPARAGUS, that performs aggregation of SPARQL query results on a semantic basis, that is, by exploiting the background ontology expressing the semantics of the returned results. The system implements recent research results on semantic grouping and semantic clustering. In the former case, results...
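A toy sketch of the semantic-grouping part under these assumptions: instead of grouping result rows by a literal binding, each resource is grouped by an ancestor class taken from the background ontology; the data, query, and grouping level are placeholders, not ASPARAGUS code.

```python
# Group query results by an ancestor class from the background ontology.
from collections import defaultdict
from rdflib import Graph, RDF, RDFS

g = Graph()
g.parse("data_with_ontology.ttl")         # hypothetical data plus background ontology

def ancestor(cls, level=1):
    # walk up rdfs:subClassOf a fixed number of steps to pick the grouping class
    for _ in range(level):
        sup = g.value(cls, RDFS.subClassOf)
        if sup is None:
            break
        cls = sup
    return cls

groups = defaultdict(list)
for row in g.query("SELECT DISTINCT ?x WHERE { ?x a ?c } LIMIT 100"):
    groups[ancestor(g.value(row.x, RDF.type))].append(row.x)

for cls, members in groups.items():
    print(cls, "->", len(members), "results")
```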
The paper introduces the task of frequent concept mining: mining frequent patterns in the form of (complex) concepts expressed in description logic. We devise an algorithm for mining frequent patterns expressed in the standard \(\mathcal{EL}^{++}\) description logic language. We also report on the implementation of our method. As description logic provid...
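A very small illustration of the support counting behind such mining: candidate concepts are atomic classes A and existential restrictions ∃r.A, and support is the number of ABox individuals belonging to them. A real miner would also refine candidates level-wise, which is omitted here; the file name and threshold are assumptions.

```python
# Count support of atomic classes and simple existential restrictions over an ABox.
from collections import defaultdict
from rdflib import Graph, RDF

g = Graph()
g.parse("abox.ttl")                        # hypothetical knowledge base
MIN_SUPPORT = 5                            # assumed minimum support

atomic = defaultdict(set)                  # class A -> individuals with type A
for x, _, c in g.triples((None, RDF.type, None)):
    atomic[c].add(x)

exists = defaultdict(set)                  # (r, A) -> individuals x with some (x, r, y), y : A
for x, r, y in g:
    if r == RDF.type:
        continue
    for a, members in atomic.items():
        if y in members:
            exists[(r, a)].add(x)

for concept, members in list(atomic.items()) + list(exists.items()):
    if len(members) >= MIN_SUPPORT:
        print(concept, "support =", len(members))
```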
We present RMonto, an ontological extension to RapidMiner that provides the possibility of machine learning with formal ontologies. RMonto is an easily extendable framework, currently providing support for unsupervised clustering with kernel methods and (frequent) pattern mining in knowledge bases. One important feature of RMonto is that it enables wo...