Nico Potyka, Cardiff University | CU · School of Computer Science and Informatics
Nico Potyka
Dr. rer. nat.
About
95 Publications
7,210 Reads
515 Citations
Introduction
My general research interest is in scalable and interpretable approaches to Artificial Intelligence. My current work focuses on foundations, algorithms and applications of abstract argumentation and knowledge graphs.
Additional affiliations
Education
May 2011 - December 2015
October 2008 - November 2010
September 2005 - August 2008
Ostfalia Hochschule für angewandte Wissenschaften
Field of study: Computer Science
Publications (95)
Weighted bipolar argumentation frameworks determine the strength of arguments based on an initial weight and the strength of their attackers and supporters. These frameworks can be applied to model and solve problems that arise in areas like social media analysis and decision support. Approaches for computing strength values often assume an acyclic...
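To make the kind of computation described above concrete, here is a minimal Python sketch of a fixed-point iteration over a weighted bipolar graph; the DF-QuAD-style aggregation, the toy graph and the update rule are illustrative assumptions rather than the specific semantics studied in this work.

```python
# Minimal sketch: iterative strength computation in a weighted bipolar
# argumentation graph (illustrative DF-QuAD-style aggregation, not the
# exact semantics from the paper).

def aggregate(scores):
    """Combine attacker/supporter strengths into a single value in [0, 1]."""
    result = 0.0
    for s in scores:
        result = result + s - result * s   # probabilistic sum
    return result

def update(base, att, sup):
    """Move the base score towards 0 (attack) or 1 (support)."""
    a, s = aggregate(att), aggregate(sup)
    if a >= s:
        return base - base * (a - s)
    return base + (1 - base) * (s - a)

def strengths(base_scores, attackers, supporters, iterations=100):
    """Fixed-point iteration; may fail to converge on cyclic graphs."""
    s = dict(base_scores)
    for _ in range(iterations):
        s = {arg: update(base_scores[arg],
                         [s[b] for b in attackers.get(arg, [])],
                         [s[b] for b in supporters.get(arg, [])])
             for arg in base_scores}
    return s

# Toy example: b attacks a, c supports a.
print(strengths({"a": 0.5, "b": 0.8, "c": 0.6},
                attackers={"a": ["b"]}, supporters={"a": ["c"]}))
```

On acyclic graphs such an iteration reaches a fixed point after one pass in topological order; the convergence questions mentioned in the abstract only arise once cycles are allowed.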
Epistemic graphs are a recent generalization of epistemic probabilistic argumentation. Relations between arguments can be supporting, attacking, as well as neither supporting nor attacking. These interdependencies are represented by epistemic constraints, and the semantics of epistemic graphs are given in terms of probability distributions satisfyi...
Weighted bipolar argumentation frameworks allow modeling decision problems and online discussions by defining arguments and their relationships. The strength of arguments can be computed based on an initial weight and the strength of attacking and supporting arguments. Application domains include social media analysis and decision support. While p...
We consider the problem of reasoning under uncertainty in the presence of inconsistencies. Our knowledge bases consist of linear probabilistic constraints that, in particular, generalize many probabilistic-logical knowledge representation formalisms. We first generalize classical probabilistic models to inconsistent knowledge bases by considering a...
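The linear-constraint setting lends itself to a linear-programming illustration. The following Python sketch computes probability bounds for a query under consistent linear probabilistic constraints; the atoms, the toy constraints and the use of scipy are assumptions, and the paper's treatment of inconsistency is not shown.

```python
# Minimal sketch: entailment over linear probabilistic constraints as a
# linear program (illustrative toy example, not the paper's formalism).
import itertools
import numpy as np
from scipy.optimize import linprog

atoms = ["rain", "wet"]
worlds = list(itertools.product([0, 1], repeat=len(atoms)))  # possible worlds

def indicator(formula):
    """Vector with entry 1 for every world that satisfies the formula."""
    return np.array([1.0 if formula(dict(zip(atoms, w))) else 0.0
                     for w in worlds])

# Constraints: P(rain) = 0.3 and P(wet | rain) = 0.9, written linearly as
# P(wet and rain) - 0.9 * P(rain) = 0.
A_eq = np.vstack([
    indicator(lambda v: v["rain"]),
    indicator(lambda v: v["wet"] and v["rain"]) - 0.9 * indicator(lambda v: v["rain"]),
    np.ones(len(worlds)),            # probabilities sum to 1
])
b_eq = np.array([0.3, 0.0, 1.0])

query = indicator(lambda v: v["wet"])  # bounds on P(wet)
lower = linprog(query, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
upper = linprog(-query, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(lower.fun, -upper.fun)           # lower and upper probability of 'wet'
```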
Quantitative bipolar argumentation frameworks (QBAFs) have various applications in areas like product recommendation, review aggregation and explaining machine learning models. QBAF semantics assign a strength to every argument that is based on an a priori belief and the strength of its attackers and supporters. Intuitively, a QBAF semantics is ope...
There is a growing interest in understanding arguments' strength in Quantitative Bipolar Argumentation Frameworks (QBAFs). Most existing studies focus on attribution-based methods that explain an argument's strength by assigning importance scores to other arguments but fail to explain how to change the current strength to a desired one. To solve th...
Explaining the strength of arguments under gradual semantics is receiving increasing attention. For example, various studies in the literature offer explanations by computing the attribution scores of arguments or edges in Quantitative Bipolar Argumentation Frameworks (QBAFs). These explanations, known as Argument Attribution Explanations (AAEs) an...
Knowledge graph embeddings (KGE) apply machine learning methods on knowledge graphs (KGs) to provide non-classical reasoning capabilities based on similarities and analogies. The learned KG embeddings are typically used to answer queries by ranking all potential answers, but rankings often lack a meaningful probabilistic interpretation - lower-rank...
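For readers unfamiliar with ranking-based query answering, the following Python sketch shows the basic mechanism with a TransE-style scoring function, where random vectors stand in for trained embeddings; the entities, relation and dimensionality are made up, and the calibration of rankings to probabilities discussed in the abstract is not shown.

```python
# Minimal sketch: ranking candidate answers with a TransE-style score
# (illustrative; random embeddings stand in for trained ones).
import numpy as np

rng = np.random.default_rng(0)
entities = ["alice", "bob", "carol"]
relations = ["knows"]
dim = 8
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def score(h, r, t):
    """TransE: smaller distance ||h + r - t|| means a more plausible triple."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

# Query (alice, knows, ?): rank all entities as candidate tails.
ranking = sorted(entities, key=lambda t: score("alice", "knows", t), reverse=True)
print(ranking)
```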
Knowledge graph embedding (KGE) models are often used to predict missing links for knowledge graphs (KGs). However, multiple KG embeddings can perform almost equally well for link prediction yet suggest conflicting predictions for certain queries, termed predictive multiplicity in the literature. This behavior poses substantial risks for KGE-b...
Quantitatively explaining the strength of arguments under gradual semantics has recently received increasing attention. Specifically, several works in the literature provide quantitative explanations by computing the attribution scores of arguments. These works disregard the importance of attacks and supports, even though they play an essential rol...
Statistical information is ubiquitous but drawing valid conclusions from it is prohibitively hard. We explain how knowledge graph embeddings can be used to approximate probabilistic inference efficiently using the example of Statistical EL (SEL), a statistical extension of the lightweight Description Logic EL. We provide proofs for runtime and soun...
There is a growing interest in understanding arguments' strength in Quantitative Bipolar Argumentation Frameworks (QBAFs). Most existing studies focus on attribution-based methods that explain an argument's strength by assigning importance scores to other arguments but fail to explain how to change the current strength to a desired one. To solve th...
Counterfactual explanations shed light on the decisions of black-box models by explaining how an input can be altered to obtain a favourable decision from the model (e.g., when a loan application has been rejected). However, as noted recently, counterfactual explainers may lack robustness in the sense that a minor change in the input can cause a ma...
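Below is a minimal Python sketch of the underlying idea of counterfactual search, here for a toy linear "loan" model with a gradient-style update; the model, weights and hyperparameters are assumptions, and the robustness concerns raised in the abstract are not addressed.

```python
# Minimal sketch: a gradient-style search for a counterfactual of a toy
# linear classifier (illustrative; model and step sizes are made up).
import numpy as np

w = np.array([1.5, -2.0])        # toy "loan" model: accept if w.x + b > 0
b = -0.5

def predict(x):
    return w @ x + b > 0

def counterfactual(x, lr=0.05, lam=0.1, steps=500):
    """Push the score across the decision boundary while staying close to x."""
    cf = x.copy()
    for _ in range(steps):
        score = w @ cf + b
        # hinge-style gradient towards a positive score, plus a proximity term
        grad = (-w if score <= 0 else np.zeros_like(w)) + lam * (cf - x)
        cf = cf - lr * grad
        if predict(cf):
            break
    return cf

x = np.array([0.2, 0.9])         # rejected applicant
cf = counterfactual(x)
print(predict(x), cf, predict(cf))
```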
Assumption-based Argumentation (ABA) is a well-known structured argumentation formalism, whereby arguments and attacks between them are drawn from rules, defeasible assumptions and their contraries. A common restriction imposed on ABA frameworks (ABAFs) is that they are flat, i.e. each of the defeasible assumptions can only be assumed, but not deri...
Neural networks (NNs) have various applications in AI, but explaining their decisions remains challenging. Existing approaches often focus on explaining how changing individual inputs affects NNs’ outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to the actual mechanics thereof....
Argumentative explainable AI has been advocated by several in recent years, with an increasing interest in explaining the reasoning outcomes of Argumentation Frameworks (AFs). While there is a considerable body of research on qualitatively explaining the reasoning outcomes of AFs with debates/disputes/dialogues in the spirit of extension-based sema...
ProbLog is a popular probabilistic logic programming language/tool, widely used for applications that need to deal with inherent uncertainties in structured domains. In this paper we study connections between ProbLog and a variant of another well-known formalism combining symbolic reasoning and reasoning under uncertainty, i.e. probabilistic argume...
Argumentative explainable AI has been advocated by several in recent years, with an increasing interest in explaining the reasoning outcomes of Argumentation Frameworks (AFs). While there is a considerable body of research on qualitatively explaining the reasoning outcomes of AFs with debates/disputes/dialogues in the spirit of extension-base...
Random forests are decision tree ensembles that can be used to solve a variety of machine learning problems. However, as the number of trees and their individual size can be large, their decision making process is often incomprehensible. We show that their decision process can be naturally represented as an argumentation problem, which allows creat...
Assumption-based Argumentation (ABA) is a well-known structured argumentation formalism, whereby arguments and attacks between them are drawn from rules, defeasible assumptions and their contraries. A common restriction imposed on ABA frameworks (ABAFs) is that they are flat, i.e., each of the defeasible assumptions can only be assumed, but not der...
Neural networks (NNs) have various applications in AI, but explaining their decision process remains challenging. Existing approaches often focus on explaining how changing individual inputs affects NNs' outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to the actual mechanics t...
Random forests are decision tree ensembles that can be used to solve a variety of machine learning problems. However, as the number of trees and their individual size can be large, their decision making process is often incomprehensible. In order to reason about the decision process, we propose representing it as an argumentation problem. We genera...
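As a rough illustration of reading a forest's decision process argumentatively, the following Python sketch treats each tree's decision path for a given input as an argument (a conjunction of threshold tests) for the class predicted at the reached leaf; it uses scikit-learn and the iris data for concreteness and is not the paper's exact encoding.

```python
# Minimal sketch: each tree's decision path for an input is read as an
# "argument" for the tree's predicted class (illustrative encoding).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)
x = X[0:1]

arguments = []
for tree in forest.estimators_:
    t = tree.tree_
    node = 0
    premises = []
    while t.children_left[node] != -1:            # -1 marks a leaf
        f, thr = t.feature[node], t.threshold[node]
        if x[0, f] <= thr:
            premises.append(f"x[{f}] <= {thr:.2f}")
            node = t.children_left[node]
        else:
            premises.append(f"x[{f}] > {thr:.2f}")
            node = t.children_right[node]
    claim = int(t.value[node].argmax())           # class supported at the leaf
    arguments.append((premises, claim))

for premises, claim in arguments:
    print(" and ".join(premises), "=> class", claim)
```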
Recently, increasing efforts are put into learning continual representations for symbolic knowledge bases (KBs). However, these approaches either only embed the data-level knowledge (ABox) or suffer from inherent limitations when dealing with concept-level knowledge (TBox), i.e., they cannot faithfully model the logical structure present in the KBs...
Recently, increasing efforts are put into learning continual representations for symbolic knowledge bases (KBs). However, these approaches either only embed the data-level knowledge (ABox) or suffer from inherent limitations when dealing with concept-level knowledge (TBox), i.e., they cannot faithfully model the logical structure present in the KBs...
Recently, increasing efforts are put into learning continual representations for symbolic knowledge bases (KBs). However, these approaches either only embed the data-level knowledge (ABox) or suffer from inherent limitations when dealing with concept-level knowledge (TBox), i.e., they cannot faithfully model the logical structure present in the KBs...
There is broad agreement in the literature that explanation methods should be faithful to the model that they explain, but faithfulness remains a rather vague term. We revisit faithfulness in the context of continuous data and propose two formal definitions of faithfulness for feature attribution methods. Qualitative faithfulness demands that score...
Recently, various methods for representation learning on Knowledge Bases (KBs) have been developed. However, these approaches either only focus on learning the embeddings of the data-level knowledge (ABox) or exhibit inherent limitations when dealing with the concept-level knowledge (TBox), e.g., not properly modelling the structure of the logical...
One natural approach to probabilistic logic programming is Nilsson's probabilistic logic. We discuss the basic framework in the propositional setting, some reasoning ideas and extensions that allow handling inconsistent information. We then discuss its relationship to probabilistic epistemic argumentation. Roughly speaking, probabilistic epistemic...
Computational models of argumentation are an interesting tool to represent decision processes. Bipolar abstract argumentation studies the question of which arguments a rational agent can accept given attack and support relationships between them. We present a generalization of the fundamental complete semantics from attack-only graphs to bipolar gr...
Argumentation is inherently pervaded by uncertainty, which can arise as a result of the context in which argumentation is used, the kinds of agents that are involved in a given situation, the types of arguments that are used, and more. One of the prominent approaches for handling uncertainty in argumentation is probabilistic argumentation, which of...
Gradual argumentation frameworks represent arguments and their relationships in a weighted graph. Their graphical structure and intuitive semantics make them a potentially interesting tool for interpretable machine learning. It has been noted recently that their mechanics are closely related to neural networks, which allows learning their weights...
Graph Convolutional Networks (GCNs) are typically studied through the lens of Euclidean geometry. Non-Euclidean Riemannian manifolds provide specific inductive biases for embedding hierarchical or spherical data, but cannot align well with data of mixed topologies. We consider a larger class of semi-Riemannian manifolds with indefinite metric that...
We show that an interesting class of feed-forward neural networks can be understood as quantitative argumentation frameworks. This connection creates a bridge between research in Formal Argumentation and Machine Learning. We generalize the semantics of feed-forward neural networks to acyclic graphs and study the resulting computational and semantic...
We show that an interesting class of feed-forward neural networks can be understood as quantitative argumentation frameworks. This connection creates a bridge between research in Formal Argumentation and Machine Learning. We generalize the semantics of feed-forward neural networks to acyclic graphs and study the resulting computational and semantic...
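The connection can be illustrated with a small Python sketch in which a feed-forward network over an acyclic graph is read as a quantitative argumentation framework: each node's strength is a sigmoid of its bias (base score) plus the weighted strengths of its parents, with positive edges acting as supports and negative edges as attacks. The graph and weights below are made up.

```python
# Minimal sketch: a feed-forward network read as a quantitative
# argumentation graph (illustrative toy graph and weights).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Acyclic graph: edges (parent, child, weight); biases act as base scores.
edges = [("a", "h", 2.0), ("b", "h", -1.5), ("h", "out", 1.8)]
bias = {"a": 0.4, "b": -0.2, "h": 0.0, "out": -0.5}
order = ["a", "b", "h", "out"]           # topological order

strength = {}
for node in order:
    z = bias[node] + sum(w * strength[p] for p, c, w in edges if c == node)
    strength[node] = sigmoid(z)

print(strength)   # strength["out"] is the network's output / conclusion
```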
Argumentation Frameworks represent arguments and their relationships like attack and support in a graph. Their simple structure makes them easily interpretable and therefore a potentially interesting tool for explainable machine learning. We discuss some ideas for modeling and solving classification problems as abstract argumentation problems. As o...
Applying automated reasoning tools for decision support and analysis in law has the potential to make court decisions more transparent and objective. Since there is often uncertainty about the accuracy and relevance of evidence, non-classical reasoning approaches are required. Here, we investigate probabilistic epistemic argumentation as a tool for...
Applying automated reasoning tools for decision support and analysis in law has the potential to make court decisions more transparent and objective. Since there is often uncertainty about the accuracy and relevance of evidence, non-classical reasoning approaches are required. Here, we investigate probabilistic epistemic argumentation as a tool for...
Bipolar abstract argumentation frameworks allow modeling decision problems by defining pro and contra arguments and their relationships. In some popular bipolar frameworks, there is an inherent tendency to favor either attack or support relationships. However, for some applications, it seems sensible to treat attack and support equally. Roughly spe...
We explain how abstract argumentation problems can be encoded as Markov networks. From a computational perspective, this allows reducing argumentation tasks like finding labellings or deciding credulous and sceptical acceptance to probabilistic inference tasks in Markov networks. From a semantical perspective, the resulting probabilistic argumentat...
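A small Python sketch of one possible hard-constraint encoding in this spirit: binary variables for arguments, and factors that zero out conflicting labellings as well as labellings in which a rejected argument is unattacked, so that the worlds with non-zero probability are exactly the stable extensions of a toy framework. The encoding details are illustrative, not necessarily the paper's.

```python
# Minimal sketch: an argumentation framework as a Markov network whose
# non-zero-probability worlds are exactly its stable extensions.
import itertools

args = ["a", "b", "c"]
attacks = [("a", "b"), ("b", "a"), ("b", "c")]    # a <-> b, b -> c

def factor_weight(world):
    """Product of hard factors: conflict-freeness and 'rejected => attacked'."""
    weight = 1.0
    for x, y in attacks:                          # no accepted argument attacks another
        if world[x] and world[y]:
            weight *= 0.0
    for y in args:                                # every rejected argument has an accepted attacker
        if not world[y] and not any(world[x] for x, z in attacks if z == y):
            weight *= 0.0
    return weight

worlds = [dict(zip(args, bits)) for bits in itertools.product([0, 1], repeat=len(args))]
weights = [factor_weight(w) for w in worlds]
Z = sum(weights)                                  # partition function
for w, wt in zip(worlds, weights):
    if wt > 0:
        print({k for k, v in w.items() if v}, "probability", wt / Z)
```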
Gradual argumentation frameworks allow modeling arguments and their relationships and have been applied to problems like decision support and social media analysis. Semantics assign strength values to arguments based on an initial belief and their relationships. The final assignment should usually satisfy some common-sense properties. One property...
Probabilistic argumentation allows reasoning about argumentation problems in a way that is well-founded by probability theory. However, in practice, this approach can be severely limited by the fact that probabilities are defined by adding an exponential number of terms. We show that this exponential blowup can be avoided in an interesting fragment...
Probabilistic epistemic argumentation allows for reasoning about argumentation problems in a way that is well founded by probability theory. Epistemic states are represented by probability functions over possible worlds and can be adjusted to new beliefs using update operators. While the use of probability functions puts this approach on a solid fo...
Probabilistic epistemic argumentation allows for reasoning about argumentation problems in a way that is well founded by probability theory. Epistemic states are represented by probability functions over possible worlds and can be adjusted to new beliefs using update operators. While the use of probability functions puts this approach on a solid fo...
This extended abstract summarizes the key results from [10].
In an epistemic graph, belief in arguments is represented by probability distributions. Furthermore, the influence that belief in arguments can have on the belief in other arguments is represented by constraints on the probability distributions. Different agents may choose different constraints to describe their reasoning, thus making epistemic gra...
Probabilistic epistemic argumentation allows for reasoning about argumentation problems in a way that is well founded by probability theory. Epistemic states are represented by probability functions over possible worlds and can be adjusted to new beliefs using update operators. While the use of probability functions puts this approach on a solid fo...
Weighted bipolar argumentation frameworks offer a tool for decision support and social media analysis. Arguments are evaluated by an iterative procedure that takes initial weights and attack and support relations into account. Until recently, convergence of these iterative procedures was not very well understood in cyclic graphs. Mossakowski and Ne...
Probabilistic argumentation allows reasoning about argumentation problems in a way that is well-founded by probability theory. However, in practice, this approach can be severely limited by the fact that probabilities are defined by adding an exponential number of terms. We show that this exponential blowup can be avoided in an interesting fragment...
When combining beliefs from different sources, often not only new knowledge but also conflicts arise. In this paper, we investigate how we can measure the disagreement among sources. We start our investigation with disagreement measures that can be induced from inconsistency measures in an automated way. After discussing some problems with this...
Weighted bipolar argumentation frameworks determine the strength of arguments based on an initial weight and the strength of their attackers and supporters. They find applications in decision support and social media analysis. Mossakowski and Neuhaus recently introduced a unification of different models and gave sufficient conditions for convergenc...
Attractor is a Java library that can be used to solve weighted bipolar argumentation problems with continuous dynamical systems. Weighted bipolar argumentation frameworks are an AI formalism that allows modeling decision problems and online discussions by defining arguments and their relationships. The strength of arguments can be computed based...
This archive contains 3,000 randomly generated bipolar argumentation graphs (BAGs) ranging from size 100 to 3,000. Warning: the archive is 158 MB and the uncompressed size is 987 MB. The random generator is described in: Potyka, N. Continuous Dynamical Systems for Weighted Bipolar Argumentation. In 16th International Conference on Princip...
We present a probabilistic extension of the description logic ALC for reasoning about statistical knowledge. We consider conditional statements over proportions of the domain and are interested in the probabilistic-logical consequences of these proportions. After introducing some general reasoning problems and analyzing their properties, we present...
In persuasion dialogues, the ability of the persuader to model the persuadee allows the persuader to make better choices of move. The epistemic approach to probabilistic argumentation is a promising way of modelling the persuadee's belief in arguments, and proposals have been made for update methods that specify how these beliefs can be updated a...
We present a probabilistic extension of the description logic ALC for reasoning about statistical knowledge. We consider conditional statements over proportions of the domain and are interested in the probabilistic-logical consequences of these proportions. After introducing some general reasoning problems and analyzing their properties, we present...
In persuasion dialogues, the ability of the persuader to model the persuadee allows the persuader to make better choices of move. The epistemic approach to probabilistic argumentation is a promising way of modelling the persuadee's belief in arguments, and proposals have been made for update methods that specify how these beliefs can be updated at...
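One simple update operator in this spirit can be sketched in a few lines of Python: beliefs are a probability function over possible worlds (sets of accepted arguments), and a Jeffrey-style rescaling shifts the belief in a single argument to a target value. The uniform prior and the target are assumptions, not the paper's specific operators.

```python
# Minimal sketch: a Jeffrey-style update of the persuadee's belief in an
# argument, over probability functions on possible worlds (illustrative).
import itertools

args = ["a", "b"]
worlds = [frozenset(w) for n in range(len(args) + 1)
          for w in itertools.combinations(args, n)]
# Uniform prior over which arguments the persuadee accepts.
P = {w: 1.0 / len(worlds) for w in worlds}

def belief(P, arg):
    """Probability mass of worlds in which `arg` is accepted."""
    return sum(p for w, p in P.items() if arg in w)

def update(P, arg, target):
    """Rescale the 'arg accepted' and 'arg rejected' worlds to hit `target`."""
    old = belief(P, arg)
    return {w: p * (target / old if arg in w else (1 - target) / (1 - old))
            for w, p in P.items()}

P2 = update(P, "a", 0.9)        # after a persuasive move, belief in a rises
print(belief(P, "a"), belief(P2, "a"), belief(P2, "b"))
```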
A central question for knowledge representation is how to encode and handle uncertain knowledge adequately. We introduce the probabilistic description logic ALCP that is designed for representing context-dependent knowledge, where the actual context taking place is uncertain. ALCP allows the expression of logical depen...
We propose a novel framework for computational concept invention. As opposed to recent implementations of Fauconnier’s and Turner’s Conceptual Blending Theory, our framework simplifies computational concept invention by focusing on concepts’ functions rather than on structural similarity of concept descriptions. Even though creating an optimal comb...
We propose a probabilistic-logical framework for group decision-making. Its main characteristic is that we derive group preferences from agents' beliefs and utilities rather than from their individual preferences as done in social choice approaches. This can be more appropriate when the individual preferences hide too much of the individuals' opini...
A central question for knowledge representation is how to encode and handle uncertain knowledge adequately. We introduce the probabilistic description logic ALCP that is designed for representing context-dependent knowledge, where the actual context taking place is uncertain. ALCP allows the expression of logical dependencies on the domain and prob...
The expert system shell MECore provides a series of knowledge management operations to define probabilistic knowledge bases and to reason under uncertainty. To provide a reference work for MECore algorithmics, we bring together results from different sources that have been applied in MECore and explain their intuitive ideas. Additionally, we report...
Classical logic can be regarded as the study of drawing deductive conclusions from consistent assumptions. However, the classical truth values true and false are often insufficient for applications in uncertain domains. Probabilistic logics overcome this problem by interpreting formulas by probabilities, where the probability 1 corresponds to true...
We investigate the relationships between some relational probabilistic conditional logics by comparing their semantics. In order to do so, we will order the different semantics with respect to their strength. Subsequently, we will provide several results that allow drawing conclusions from reasoning results under particular semantics about the resu...
LabSAT is a software system that, for a given abstract argumentation system AF, can determine some or all extensions, and can decide whether an argument is credulously or sceptically accepted. These tasks are solved for complete, stable, preferred, and grounded semantics. LabSAT's implementation employs recent results on the connection between argum...
We consider the problem of reasoning over probabilistic knowledge bases with different priority levels. While we assume that the knowledge is consistent on each level, there can be inconsistencies between different levels. Examples arise naturally in hierarchical domains when general knowledge is overwritten with more specific information. We exten...
The classical probabilistic entailment problem is to determine upper and lower bounds on the probability of formulas, given a consistent set of probabilistic assertions. We generalize this problem by omitting the consistency assumption and, thus, provide a general framework for probabilistic reasoning under inconsistency. To do so, we utilize incon...
A knowledge base in the logic FO-PCL is a set of relational probabilistic conditionals. The models of such a knowledge base are probability distributions over possible worlds, and the principle of Maximum Entropy (ME) selects the unique model having maximum entropy. While previous work on FO-PCL focused on ME model computation, in this paper we pro...
Combining logic with probability theory provides a solid ground for the representation of and the reasoning with uncertain knowledge. Given a set of probabilistic conditionals like “If A then B with probability x”, a crucial question is how to extend this explicit knowledge, thereby avoiding any unnecessary bias. The connection between such probabi...
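Below is a minimal Python sketch of maximum-entropy completion for a single conditional, using generic numerical optimisation rather than the dedicated machinery discussed in this line of work; the conditional and the tiny world set are illustrative.

```python
# Minimal sketch: completing the conditional "if A then B with probability
# 0.8" to the full maximum-entropy distribution over possible worlds.
import itertools
import numpy as np
from scipy.optimize import minimize

atoms = ["A", "B"]
worlds = list(itertools.product([0, 1], repeat=len(atoms)))

def ind(formula):
    return np.array([1.0 if formula(dict(zip(atoms, w))) else 0.0 for w in worlds])

# Linear form of P(B | A) = 0.8:  P(A and B) - 0.8 * P(A) = 0.
conditional = ind(lambda v: v["A"] and v["B"]) - 0.8 * ind(lambda v: v["A"])

def neg_entropy(p):
    return float(np.sum(p * np.log(p)))

constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
               {"type": "eq", "fun": lambda p: float(conditional @ p)}]
result = minimize(neg_entropy, x0=np.full(len(worlds), 0.25),
                  bounds=[(1e-9, 1.0)] * len(worlds),
                  constraints=constraints, method="SLSQP")
for w, p in zip(worlds, result.x):
    print(dict(zip(atoms, w)), round(float(p), 3))
```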
This archive contains Java code for the examples from: Nico Potyka, Matthias Thimm. Probabilistic Reasoning with Inconsistent Beliefs using Inconsistency Measures. IJCAI 2015. We make use of the libraries ANTLR (http://www.antlr.org/, BSD license, see below) and Oj!Algorithms (http://ojalgo.org/, MIT license, see below). The BSD license: Redistri...
Coping with uncertain knowledge and changing beliefs is essential for reasoning in dynamic environments. We generalize an approach to adjust probabilistic belief states by use of the relative entropy in a propositional setting to relational languages, leading to a concept for the evolution of relational probabilistic belief states. As a second cont...
Consolidation describes the operation of restoring consistency in an inconsistent knowledge base. Here we consider this problem in the context of probabilistic conditional logic, a language that focuses on probabilistic conditionals (if-then rules). If a knowledge base, i.e., a set of probabilistic conditionals, is inconsistent, traditional model-b...
Inconsistency measures help analyzing contradictory knowledge bases and resolving inconsistencies. In recent years several measures with desirable properties have been proposed, but often these measures correspond to combinatorial or non-convex optimization problems that are hard to solve in practice. In this paper, I study a new family of inconsis...
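A minimal-violation style measure from this general family can be phrased as a linear program. The Python sketch below does this for a deliberately inconsistent toy knowledge base; the particular distance (total slack) is just one simple choice, not necessarily the family introduced in the paper.

```python
# Minimal sketch: a minimal-violation style inconsistency measure as a
# linear program (illustrative toy knowledge base).
import itertools
import numpy as np
from scipy.optimize import linprog

atoms = ["A"]
worlds = list(itertools.product([0, 1], repeat=len(atoms)))

def ind(formula):
    return np.array([1.0 if formula(dict(zip(atoms, w))) else 0.0 for w in worlds])

# Jointly unsatisfiable constraints: P(A) = 0.7 and P(A) = 0.2.
rows = [ind(lambda v: v["A"]), ind(lambda v: v["A"])]
targets = [0.7, 0.2]

n_w, n_c = len(worlds), len(rows)
# Variables: world probabilities p, then one slack e_i >= |row_i @ p - target_i|.
c = np.concatenate([np.zeros(n_w), np.ones(n_c)])            # minimise total slack
A_ub, b_ub = [], []
for i, (row, t) in enumerate(zip(rows, targets)):
    e = np.zeros(n_c)
    e[i] = 1.0
    A_ub.append(np.concatenate([row, -e]))                    #  row@p - e_i <= t
    b_ub.append(t)
    A_ub.append(np.concatenate([-row, -e]))                   # -row@p - e_i <= -t
    b_ub.append(-t)
A_eq = [np.concatenate([np.ones(n_w), np.zeros(n_c)])]        # probabilities sum to 1
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=[1.0],
              bounds=[(0, 1)] * n_w + [(0, None)] * n_c)
print("inconsistency value:", res.fun)                        # 0 iff consistent
```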
Coping with uncertain knowledge and changing beliefs is essential for reasoning in dynamic environments. We generalize an approach to adjust probabilistic belief states by use of the relative entropy in a propositional setting to relational languages. As a second contribution of this paper, we present a method to compute such belief changes by cons...
The expert system shell MECore provides a series of knowledge management operations to define probabilistic knowledge bases and to reason under uncertainty. We report on our ongoing work regarding further development of MECore's algorithms to compute optimum entropy distributions. We provide some intuition for these methods and point out their bene...
We present a case-study of applying probabilistic logic to the analysis of clinical patient data in neurosurgery. Probabilistic conditionals are used to build a knowledge base for modelling and representing clinical brain tumor data and expert knowledge of physicians working in this area. The semantics of a knowledge base consisting of probabilisti...
By using the principle of maximum entropy, incomplete probabilistic knowledge can be completed to a full joint distribution. This inductive knowledge representation method can be reversed to extract probabilistic rules from an empirical probability distribution. Based on this idea, a propositional learning approach has been developed. Recently, an exte...
Dealing with uncertainty that is inherently present in any medical domain is one of the major challenges when designing a medical decision support system. We demonstrate how probabilistic logic can be used to design medical knowledge bases, using the analysis of clinical brain tumor data as an example. We use MECore, a system implementing probabilistic con...
Probabilistic conditional logics offer a rich and well-founded framework for designing expert systems. The factorization of their maximum entropy models has several interesting applications. In this paper a general factorization is derived providing a more rigorous proof than in previous work. It yields an approach to extend Iterative Scaling varia...
The principle of maximum entropy inductively completes the knowledge given by a knowledge base R, and it has been suggested to view learning as an operation being inverse to inductive knowledge completion. While a corresponding learning approach has been developed when R is based on propositional logic, in this paper we describe an extension to a r...