Ute Schmid
Prof. Dr.
Otto-Friedrich-Universität Bamberg · Department of Applied Computer Sciences
About
356 Publications
47,969 Reads
2,245 Citations
Introduction
Research interests: interpretable machine learning; human-like machine learning; relational learning; inductive programming; explaining black-box classifiers
Methods: inductive logic programming; logic models; psychological experiments
Current projects: explaining classifiers for facial expressions of pain (PainFaceReader); a cognitive companion to delete irrelevant digital objects (Dare2Del); explaining classifiers for medical image data (TraMeExCo)
Additional affiliations
March 2001 - August 2004
September 2004 - present
Education
April 1990 - June 1994
October 1989 - June 1994
October 1986 - March 1989
Publications (356)
We introduce FruitNeRF, a unified novel fruit counting framework that leverages state-of-the-art view synthesis methods to count any fruit type directly in 3D. Our framework takes an unordered set of posed images captured by a monocular camera and segments fruit in each image. To make our system independent of the fruit type, we employ a foundation...
Being able to recognise defects in industrial objects is a key element of quality assurance in production lines. Our research focuses on visual anomaly detection in RGB images. Although Convolutional Neural Networks (CNNs) achieve high accuracies in this task, end users in industrial environments receive the model's decisions without additional exp...
The growing number of applications of machine learning and data mining in many domains—from agriculture to business, education, industrial manufacturing, and medicine—gave rise to new requirements for how to inspect and control the learned models. The research domain of explainable artificial intelligence (XAI) has been newly established with a str...
Recent advancements in generative AI have introduced novel prospects and practical implementations. Especially diffusion models show their strength in generating diverse and, at the same time, realistic features, positioning them well for generating counterfactual explanations for computer vision models. Answering "what if" questions of what needs...
The anthropogenic climate crisis results in the gradual loss of tree species in locations where they were previously able to grow. This leads to increasing workloads and requirements for foresters and arborists as they are forced to restructure their forests and city parks. The advancements in computer vision (CV)—especially in supervised deep lear...
Ensuring the quality of black-box Deep Neural Networks (DNNs) has become ever more significant, especially in safety-critical domains such as automated driving. While global concept encodings generally enable a user to test a model for a specific concept, linking global concept encodings to the local processing of single network inputs reveals thei...
Explaining concepts by contrasting examples is an efficient and convenient way of giving insights into the reasons behind a classification decision. This is of particular interest in decision-critical domains, such as medical diagnostics. One particular challenging use case is to distinguish facial expressions of pain and other states, such as disg...
Explanatory Interactive Machine Learning queries user feedback regarding the prediction and the explanation of novel instances. CAIPI, a state-of-the-art algorithm, captures the user feedback and iteratively biases a data set toward a correct decision-making mechanism using counterexamples. The counterexample generation procedure relies on hand-cra...
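Since this abstract only outlines the interactive correction cycle, here is a minimal, hedged Python sketch of an explanatory interactive loop of that kind: a classifier latches onto a spurious feature, simulated user feedback flags that feature as irrelevant, and counterexamples with the flagged feature randomized are added to break the shortcut. This toy example is an illustration of the general idea, not the published CAIPI implementation; the data, feature roles, and feedback are all invented for the sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: feature 0 is truly predictive, feature 1 is a spurious confound.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
X[:, 1] = y + rng.normal(scale=0.1, size=200)   # confound tracks the label

model = LogisticRegression().fit(X, y)
print("weights before feedback:", model.coef_)

# Simulated user feedback: "feature 1 is irrelevant for the decision".
# Correction in the spirit of CAIPI: add counterexamples in which the
# flagged feature is randomized while the label stays fixed.
idx = rng.choice(len(X), size=100)
counter_X = X[idx].copy()
counter_X[:, 1] = rng.normal(size=100)
counter_y = y[idx]

X_aug = np.vstack([X, counter_X])
y_aug = np.concatenate([y, counter_y])
model = LogisticRegression().fit(X_aug, y_aug)
print("weights after feedback:", model.coef_)

After augmentation, the weight on the spurious feature shrinks, which is the intended effect of the counterexample-based feedback.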
With the digital transformation, artificial intelligence (AI) applications are also finding their way into more and more areas of work and life. In particular, models learned from data are being used, which are mostly opaque black boxes. The fact that people can understand why an AI system behaves the way it does is necessary for various reasons: T...
Programming by demonstration (PBD) is introduced as a family of approaches to teach a computer system new behavior by demonstrating it in the context of a concrete example. References to classical and current PBD systems are given.
Hybrid AI combines knowledge-representation-based approaches with data-driven methods (machine learning). Credo: what humanity already knows does not need to be learned from data. Promise: (energy-)efficient, robust, explainable, and trustworthy AI systems that are less prone to bias and require less data for lear...
The rise of machine-learning applications in domains with critical end-user impact has led to a growing concern about the fairness of learned models, with the goal of avoiding biases that negatively impact specific demographic groups. Most existing bias-mitigation strategies adapt the importance of data instances during pre-processing. Since fairne...
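As an illustration of the pre-processing bias-mitigation strategies this abstract mentions, the following is a small sketch of classic reweighing in the style of Kamiran and Calders, where each (group, label) cell receives the weight P(group)·P(label) / P(group, label). This is a well-known stand-in used here for illustration, not the method proposed in the paper; the data are toy values.

import numpy as np

def reweighing_weights(groups, labels):
    """Instance weights w(g, y) = P(g) * P(y) / P(g, y), so that protected
    group and label become statistically independent in the weighted set."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    weights = np.ones(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            if cell.any():
                weights[cell] = (groups == g).mean() * (labels == y).mean() / cell.mean()
    return weights

# Toy usage: positive labels are rarer in group 1 than in group 0
groups = [0, 0, 0, 0, 1, 1, 1, 1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
print(reweighing_weights(groups, labels).round(2))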
Climate crisis and correlating prolonged, more intense periods of drought threaten tree health in cities and forests. In consequence, arborists and foresters suffer from increasing workloads and, in the best case, a consistent but often declining workforce. To optimise workflows and increase productivity, we propose a novel open-source end-to-end a...
Explaining concepts by contrasting examples is an efficient and convenient way of giving insights into the reasons behind a classification decision. This is of particular interest in decision-critical domains, such as medical diagnostics. One particular challenging use case is to distinguish facial expressions of pain and other states, such as disg...
Research in the field of explainable artificial intelligence has produced a vast amount of visual explanation methods for deep learning-based image classification in various domains of application. However, there is still a lack of domain-specific evaluation methods to assess an explanation’s quality and a classifier’s performance with respect to d...
Knowledge Graphs (KGs) are able to structure and represent knowledge in complex systems, whereby their completeness impacts the quality of any further application. Real world KGs are notoriously incomplete, which is why KG Completion (KGC) methods emerged. Most KGC methods rely on Knowledge Graph Embedding (KGE) based link prediction models, which...
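To make the KGE-based link prediction mentioned in this abstract concrete, here is a minimal sketch of TransE-style triple scoring; TransE is used only as a familiar stand-in, and the embeddings below are random toy vectors rather than learned ones.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embeddings for entities and relations (in practice these are learned).
entities = {name: rng.normal(size=DIM) for name in ["bamberg", "germany", "france"]}
relations = {name: rng.normal(size=DIM) for name in ["located_in"]}

def transe_score(head, relation, tail):
    """TransE scores a triple (h, r, t) by the negative distance ||h + r - t||;
    higher (less negative) scores indicate a more plausible triple."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -np.linalg.norm(h + r - t)

# Link prediction: rank candidate tails for the query (bamberg, located_in, ?)
candidates = ["germany", "france"]
ranked = sorted(candidates, key=lambda t: transe_score("bamberg", "located_in", t), reverse=True)
print(ranked)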
DIN SPEC 92001-3 Artificial Intelligence – Life Cycle Processes and Quality Requirements.
Artificial Intelligence has become a game-changer, but its impact must be approached responsibly. This is the third document in a series, and it aims to ensure that AI systems are developed, deployed, and used efficiently, responsibly, and in a trustworthy wa...
Climate crisis and correlating prolonged, more intense periods of drought threaten tree health in cities and forests. In consequence, arborists and foresters suffer from increasing workloads and, in the best case, a consistent but often declining workforce. To optimise workflows and increase productivity, we propose a novel open-source end-to-end a...
Artificial intelligence (AI) today is very successful at standard pattern-recognition tasks due to the availability of large amounts of data and advances in statistical data-driven machine learning. However, there is still a large gap between AI pattern recognition and human-level concept learning. Humans can learn amazingly well even under uncerta...
The topic of comprehensibility of machine-learned theories has recently drawn increasing attention. Inductive logic programming uses logic programming to derive logic theories from small data based on abduction and induction techniques. Learned theories are represented in the form of rules as declarative descriptions of obtained knowledge. In earli...
We present ManuKnowVis, the result of a design study, in which we contextualize data from multiple knowledge repositories of a manufacturing process for battery modules used in electric vehicles. In data-driven analyses of manufacturing data, we observed a discrepancy between two stakeholder groups involved in serial manufacturing processes: Knowle...
With the perspective on applications of AI-technology, especially data intensive deep learning approaches, the need for methods to control and understand such models has been recognized and gave rise to a new research domain labeled explainable artificial intelligence (XAI). In this overview paper we give an interim appraisal of what has been achie...
Trace-based programming is introduced as a specific approach to inductive programming where a typically recursive program is inferred from a small set of example computational traces.
Smart sensor systems are a key factor to ensure sustainable compute by enabling machine learning algorithms to be executed at the data source. This is particularly helpful when working with moving parts or in remote areas, where no tethered deployment is possible. However, including computations directly at the measurement device places an increase...
In the digital age, saving and accumulating large amounts of digital data is a common phenomenon. However, saving does not only consume energy, but may also cause information overload and prevent people from staying focused and working effectively. We present and systematically examine an explanatory AI system (Dare2Del), which supports individuals...
Neural networks are widely adopted, yet the integration of domain knowledge is still underutilized. We propose to integrate domain knowledge about co-occurring facial movements as a constraint in the loss function to enhance the training of neural networks for affect recognition. As the co-occurrence patterns tend to be similar across datasets, appl...
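Since the abstract describes encoding co-occurrence knowledge as a loss constraint without giving the formulation, here is a hedged sketch of one plausible way to do it: a standard multi-label loss plus a soft penalty whenever an action unit is predicted active while a partner unit it is assumed to co-occur with is not. The pair list and weighting are assumptions for illustration, not the paper's actual constraint.

import torch
import torch.nn.functional as F

# Hypothetical co-occurrence pairs (i, j): activation of AU i is expected
# to be accompanied by activation of AU j.
CO_OCCURRENCE_PAIRS = [(0, 1), (2, 3)]

def loss_with_cooccurrence(logits, targets, weight=0.1):
    """Binary cross-entropy plus a soft penalty for violated co-occurrences."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    penalty = 0.0
    for i, j in CO_OCCURRENCE_PAIRS:
        penalty = penalty + torch.relu(probs[:, i] - probs[:, j]).mean()
    return bce + weight * penalty

# Toy usage: batch of 4 samples, 6 action units
logits = torch.randn(4, 6)
targets = torch.randint(0, 2, (4, 6)).float()
print(loss_with_cooccurrence(logits, targets))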
There have been remarkable breakthroughs in Machine Learning (ML) and Artificial Intelligence (AI), notably in the areas of Natural Language Processing (NLP) and Deep Learning. Additionally, hate speech detection in dialogues has been gaining popularity among Natural Language Processing researchers with the increased use of social media. However, a...
Deep Learning-based tissue classification may support pathologists in analyzing digitized whole slide images. However, in such critical tasks, only approaches that can be validated by medical experts in advance to deployment, are suitable. We present an approach that contributes to making automated tissue classification more transparent. We step be...
Artificial Intelligence and Digital Twins play an integral role in driving innovation in the domain of intelligent driving. Long short-term memory (LSTM) is a leading driver in the field of lane change prediction for manoeuvre anticipation. However, the decision-making process of such models is complex and non-transparent, hence reducing the trustw...
Machine learning based image classification algorithms, such as deep neural network approaches, will be increasingly employed in critical settings such as quality control in industry, where transparency and comprehensibility of decisions are crucial. Therefore, we aim to extend the defect detection task towards an interactive human-in-the-loop appr...
Nowadays, Artificial Intelligence (AI) algorithms show a strong performance for many use cases, making them desirable for real-world scenarios where the algorithms provide high-impact decisions. However, one major drawback of AI algorithms is their susceptibility to bias and resulting unfairness. This has a huge influence for their application, as...
Would you trust physicians if they cannot explain their decisions to you? Medical diagnostics using machine learning gained enormously in importance within the last decade. However, without further enhancements many state-of-the-art machine learning methods are not suitable for medical application. The most important reasons are insufficient data s...
This paper presents results from a video-based study on the impact of prior information on the user experience dimensions perceived intelligence, subjective performance, and trust in autonomous driving. A simulated autonomous driving situation is presented to test participants, while they are given different prior information in terms of descriptio...
The topic of comprehensibility of machine-learned theories has recently drawn increasing attention. Inductive Logic Programming (ILP) uses logic programming to derive logic theories from small data based on abduction and induction techniques. Learned theories are represented in the form of rules as declarative descriptions of obtained knowledge. In...
In recent research, human-understandable explanations of machine learning models have received a lot of attention. Often explanations are given in form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by the alignment of a given instance fo...
Would you trust physicians if they cannot explain their decisions to you? Medical diagnostics using machine learning gained enormously in importance within the last decade. However, without further enhancements many state-of-the-art machine learning methods are not suitable for medical application. The most important reasons are insufficient data s...
Artificial Intelligence and Digital Twins play an integral role in driving innovation in the domain of intelligent driving. Long short-term memory (LSTM) is a leading driver in the field of lane change prediction for manoeuvre anticipation. However, the decision-making process of such models is complex and non-transparent, hence reducing the trustw...
Machine learning based image classification algorithms, such as deep neural network approaches, will be increasingly employed in critical settings such as quality control in industry, where transparency and comprehensibility of decisions are crucial. Therefore, we aim to extend the defect detection task towards an interactive human-in-the-loop appr...
Introduction: The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametrical indicators of facial muscular activity. Particular combinations of AUs have appeared to be pain-indicati...
One major drawback of deep neural networks (DNNs) for use in sensitive application domains is their black-box nature. This makes it hard to verify or monitor complex, symbolic requirements. In this work, we present a simple, yet effective, approach to verify whether a trained convolutional neural network (CNN) respects specified symbolic background...
Summary: Artificial intelligence methods, in particular data-intensive machine learning techniques, are finding their way into more and more industrial applications. As a rule, AI applications are treated as ready-made black-box components that are not able to interact with users. Using the example of parameteriza...
We propose a method for explaining the results of black box image classifiers to domain experts and end users, combining two example-based explanatory approaches: Firstly, prototypes as representative data points for classes, and secondly, contrastive example comparisons in the form of near misses and near hits. A prototype globally explains th...
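To make the example-based notions above concrete, here is a small, hedged numpy sketch of how a prototype, a near hit, and a near miss could be selected for a query instance. The Euclidean distance, the prototype-as-nearest-to-class-mean choice, and the toy data are assumptions for illustration, not the paper's implementation.

import numpy as np

def explain_by_examples(x, X, y, predicted_class):
    """Return a prototype (instance nearest to the class mean), a near hit
    (closest instance of the predicted class) and a near miss (closest
    instance of a different class) for the query x."""
    dists = np.linalg.norm(X - x, axis=1)
    same, other = (y == predicted_class), (y != predicted_class)

    class_mean = X[same].mean(axis=0)
    prototype = X[same][np.argmin(np.linalg.norm(X[same] - class_mean, axis=1))]
    near_hit = X[same][np.argmin(dists[same])]
    near_miss = X[other][np.argmin(dists[other])]
    return prototype, near_hit, near_miss

# Toy usage with two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
proto, hit, miss = explain_by_examples(np.array([0.5, 0.5]), X, y, predicted_class=0)
print(proto, hit, miss)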
Human gender bias is reflected in language and text production. Because state-of-the-art machine translation (MT) systems are trained on large corpora of text, mostly generated by humans, gender bias can also be found in MT. For instance when occupations are translated from a language like English, which mostly uses gender neutral words, to a langu...
In the last years, XAI research has mainly been concerned with developing new technical approaches to explain deep learning models. Just recent research has started to acknowledge the need to tailor explanations to different contexts and requirements of stakeholders. Explanations must not only suit developers of models, but also domain experts as w...
In the last years, XAI research has mainly been concerned with developing new technical approaches to explain deep learning models. Just recent research has started to acknowledge the need to tailor explanations to different contexts and requirements of stakeholders. Explanations must not only suit developers of models, but also domain experts as w...
Human gender bias is reflected in language and text production. Because state-of-the-art machine translation (MT) systems are trained on large corpora of text, mostly generated by humans, gender bias can also be found in MT. For instance when occupations are translated from a language like English, which mostly uses gender neutral words, to a langu...
We propose Case-based reasoning (CBR) as an approach to assist human operators who control special purpose production machines. Our support system automatically extracts knowledge from machine data and creates recommendations, which help the operators solve problems with a production machine. This support has to be comprehensive and maintainable by...
In this design study, we present IRVINE, a Visual Analytics (VA) system, which facilitates the analysis of acoustic data to detect and understand previously unknown errors in the manufacturing of electrical engines. In serial manufacturing processes, signatures from acoustic data provide valuable information on how the relationship between multiple...
With the growing number of applications of machine learning in complex real-world domains machine learning research has to meet new requirements to deal with the imperfections of real world data and the legal as well as ethical obligations to make classifier decisions transparent and comprehensible. In this contribution, arguments for interpretable...
In recent research, human-understandable explanations of machine learning models have received a lot of attention. Often explanations are given in form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by the alignment of a given instance fo...
Explainable AI has emerged to be a key component for black-box machine learning approaches in domains with a high demand for reliability or transparency. Examples are medical assistant systems, and applications concerned with the General Data Protection Regulation of the European Union, which features transparency as a cornerstone. Such demands req...
Explainability has been recognized as an important requirement of artificial intelligence (AI) systems. Transparent decision policies and explanations regarding why an AI system comes about a certain decision is a pre-requisite if AI is supposed to support human decision-making or if human-AI collaborative decision-making is envisioned. Human-AI in...
Given the recent successes of Deep Learning in AI there has been increased interest in the role and need for explanations in machine learned theories. A distinct notion in this context is that of Michie’s definition of ultra-strong machine learning (USML). USML is demonstrated by a measurable increase in human performance of a task following provis...
With the increasing prevalence of Machine Learning in everyday life, a growing number of people will be provided with Machine-Learned assessments on a regular basis. We believe that human users interacting with systems based on Machine-Learned classifiers will demand and profit from the systems’ decisions being explained in an approachable and comp...
Most machine learning based decision support systems are black box models that are not interpretable for humans. However, the demand for explainable models to create comprehensible and trustworthy systems is growing, particularly in complex domains involving risky decisions. In many domains, decision making is based on visual information. We argue...