
Marina M.-C. Höhne
Technische Universität Berlin (TUB) · Department of Software Engineering and Theoretical Computer Science
Dr. rer. nat.
About
21 Publications · 4,139 Reads · 217 Citations
Additional affiliations
October 2018 - present
October 2014 - August 2017
Education
August 2017 - October 2018 · Parental Leave
Field of study
Publications (21)
Many efforts have been made to reveal the decision-making process of black-box learning machines such as deep neural networks, resulting in useful local and global explanation methods. For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding...
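Since several of the entries here build on it, SmoothGrad itself is compact enough to sketch. A minimal PyTorch version, assuming a differentiable `model` returning class logits and an input tensor `x` (both placeholders, not taken from the paper), averages the input gradient over noisy copies:

```python
import torch

def smoothgrad(model, x, target, n_samples=25, noise_std=0.15):
    """Average the input gradient over noisy copies of x (SmoothGrad)."""
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        # Fresh Gaussian perturbation per sample; track gradients w.r.t. it.
        noisy = (x + noise_std * torch.randn_like(x)).requires_grad_(True)
        model(noisy)[0, target].backward()   # scalar logit of the target class
        grads += noisy.grad
    return grads / n_samples                 # smoothed attribution map
```

The noise scale and sample count trade sharpness against compute; the point of the method is that this averaging visibly denoises the raw gradient map.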
Deep Neural Networks (DNNs) draw their power from the representations they learn. In recent years, however, researchers have found that DNNs, while being incredibly effective in learning complex abstractions, also tend to be infected with artifacts, such as biases, Clever Hanses (CH), or Backdoors, due to spurious correlations inherent in the training...
The evaluation of explanation methods is a research topic that has not yet been explored deeply. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool exists that exhaustively and spe...
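One family of tests such an evaluation tool can automate is faithfulness checking. A minimal sketch (with `predict` as an assumed black-box scoring function, not the tool's actual API): occlude features in order of decreasing attributed relevance and record how quickly the model's score decays:

```python
import numpy as np

def pixel_flipping_curve(predict, x, attribution, baseline=0.0, steps=20):
    """Occlude features in order of decreasing attributed relevance and
    track the model score; a faithful explanation gives fast decay."""
    order = np.argsort(attribution.ravel())[::-1]      # most relevant first
    x_pert = x.ravel().copy()
    scores = [predict(x_pert.reshape(x.shape))]        # score before flipping
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_pert[order[i:i + chunk]] = baseline          # remove next chunk
        scores.append(predict(x_pert.reshape(x.shape)))
    return np.array(scores)
```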
Explainable artificial intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks. In this work, we investigate how XAI methods can be used for exploring and visualizing the diversity of feature representations learned by Bayesian...
The recent trend of integrating multi-source Chest X-Ray datasets to improve automated diagnostics raises concerns that models learn to exploit source-specific correlations to improve performance by recognizing the source domain of an image rather than the medical pathology. We hypothesize that this effect is enforced by and leverages label-imbalance...
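A simple way to probe such a hypothesis (a generic diagnostic, not necessarily the authors' protocol) is to check whether the source dataset is decodable from image features at all; a linear probe well above chance accuracy indicates the shortcut is available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def source_probe_accuracy(features, source_labels):
    """Cross-validated accuracy of a linear probe that predicts which
    dataset an image came from; far above chance => shortcut risk."""
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, features, source_labels, cv=5).mean()
```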
Self-supervised learning methods can be used to learn meaningful representations from unlabeled data that can be transferred to supervised downstream tasks to reduce the need for labeled data. In this paper, we propose a 3D self-supervised method that is based on the contrastive (SimCLR) method. Additionally, we show that employing Bayesian neural networks...
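The contrastive (SimCLR) objective referenced here is the NT-Xent loss, which is easy to state concretely. A minimal PyTorch sketch for a batch where `z1[i]` and `z2[i]` are projected embeddings of two augmented views of the same sample:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR's NT-Xent loss: each view must identify its partner view
    among all 2N-1 other embeddings in the batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # drop self-similarity
    # Positive of index i is i+n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```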
Current machine learning models have shown high efficiency in solving a wide variety of real-world problems. However, their black box character poses a major challenge for the understanding and traceability of the underlying decision-making strategies. As a remedy, many post-hoc explanation and self-explanatory methods have been developed to interpret...
To make advanced learning machines such as Deep Neural Networks (DNNs) more transparent in decision making, explainable AI (XAI) aims to provide interpretations of DNNs' predictions. These interpretations are usually given in the form of heatmaps, each one illustrating relevant patterns regarding the prediction for a given instance. Bayesian approaches...
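The Bayesian idea can be illustrated with Monte-Carlo dropout as a stand-in for sampling networks from a posterior (an assumption made for this sketch, not necessarily the paper's setup): each weight sample yields one heatmap, and the resulting distribution separates stable relevances from uncertain ones:

```python
import torch

def explanation_distribution(model, x, target, n_samples=30):
    """Sample gradient heatmaps under MC dropout and summarize them.
    Caveat: train() also switches e.g. batch norm to training mode."""
    model.train()                      # keep dropout layers stochastic
    heatmaps = []
    for _ in range(n_samples):
        xi = x.clone().requires_grad_(True)
        model(xi)[0, target].backward()
        heatmaps.append(xi.grad.detach())
    stack = torch.stack(heatmaps)
    return stack.mean(dim=0), stack.std(dim=0)   # mean and uncertainty maps
```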
Abstract
Explainable Artificial Intelligence (AI) has become a key topic, since it appears that it can close the gap between AI in research and AI in real-world applications. In this way, the great potential that AI holds for many fields (cancer research, climate research) is to be made usable...
Deep learning has revolutionized data science in many fields by greatly improving prediction performances in comparison to conventional approaches. Recently, explainable artificial intelligence has emerged as an area of research that goes beyond pure prediction improvement by extracting knowledge from deep learning methodologies through the interpr...
Attribution methods remain a practical instrument that is used in real-world applications to explain the decision-making process of complex learning machines. It has been shown that a simple method called SmoothGrad can effectively reduce the visual diffusion of gradient-based attribution methods and has established itself among both researchers and...
Deep learning algorithms have revolutionized data science in many fields by greatly improving prediction performances in comparison to conventional approaches. Recently, explainable artificial intelligence (XAI) has emerged as a novel area of research that goes beyond pure prediction improvement. Knowledge embodied in deep learning methodologies is...
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent to the user and, furthermore, trustworthy for applications in, e.g., safety-critical areas. So far, however, no methods for quantifying uncertainties of explanations have been...
In many research areas scientists are interested in clustering objects within small datasets while making use of prior knowledge from large reference datasets. We propose a method to apply the machine learning concept of transfer learning to unsupervised clustering problems and show its effectiveness in the field of single-cell RNA sequencing (scRNA-seq)...
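As a generic illustration of the transfer idea (not the paper's specific algorithm), one can learn a low-dimensional representation on the large reference dataset and reuse it when clustering the small target dataset:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def transfer_cluster(reference, target, n_components=50, n_clusters=8):
    """Fit a representation on the large reference set (the prior
    knowledge), then cluster the small target set inside it."""
    basis = PCA(n_components=n_components).fit(reference)
    target_embedded = basis.transform(target)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(target_embedded)
```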
High prediction accuracies are not the only objective to consider when solving problems using machine learning. Instead, particular scientific applications require some explanation of the learned prediction function. For computational biology, positional oligomer importance matrices (POIMs) have been successfully applied to explain the decision of...
Derivations. Further details for extracting motifs by mimicking POIMs and the extension of Theorems 1 and 2 to multiple motifs. (PDF)
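The quantity behind POIMs can be approximated without opening up the SVM: roughly, the importance of a k-mer at a position is the expected decision score of sequences carrying that k-mer there, relative to the unconditional expectation. A Monte-Carlo sketch of that construal over random background sequences, with `score` as an assumed black-box decision function:

```python
import numpy as np

def kmer_importance(score, kmer, position, seq_len, alphabet='ACGT', n_samples=500):
    """Monte-Carlo POIM-style importance: expected score of random
    sequences with `kmer` planted at `position`, minus the
    unconditional expected score."""
    rng = np.random.default_rng(0)
    planted, background = [], []
    for _ in range(n_samples):
        s = list(rng.choice(list(alphabet), size=seq_len))
        background.append(score(''.join(s)))
        s[position:position + len(kmer)] = list(kmer)   # plant the motif
        planted.append(score(''.join(s)))
    return np.mean(planted) - np.mean(background)
```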
Complex problems may require sophisticated, non-linear learning methods such as kernel machines or deep neural networks to achieve state-of-the-art prediction accuracies. However, high prediction accuracies are not the only objective to consider when solving problems using machine learning. Instead, particular scientific applications require some explanation...
Identifying discriminative motifs underlying the functionality and evolution of organisms is a major challenge in computational biology. Machine learning approaches such as support vector machines (SVMs) achieve state-of-the-art performance in genomic discrimination tasks, but, due to their black-box character, the motifs underlying their decision function...
This work is in the context of kernel-based learning algorithms for sequence data. We present a probabilistic approach to automatically extract, from the output of such string-kernel-based learning algorithms, the subsequences, or motifs, truly underlying the machine's predictions. The proposed framework views motifs as free parameters in a probabilistic...
Fundamental changes over time in surface EMG signal characteristics are a challenge for myocontrol algorithms controlling prosthetic devices. These changes are generally caused by electrode shifts after donning and doffing, sweating, additional weight or varying arm positions, which result in a change of the signal distribution – a scenario often...
Ensuring robustness of myocontrol algorithms for prosthetic devices is an important challenge. Robustness needs to be maintained under nonstationarities, e.g. due to electrode shifts after donning and doffing, sweating, additional weight or varying arm positions. Such nonstationary behavior changes the signal distributions – a scenario often referred...
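A standard generic remedy for covariate shift of this kind (sketched here under that assumption, not as these papers' actual method) is importance weighting: a probabilistic classifier separating old from new EMG windows yields density-ratio weights for refitting the myocontrol model on reweighted old data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(x_train, x_new):
    """Estimate p_new(x)/p_train(x), up to a constant factor, via a
    probabilistic domain classifier; the weights let the old training
    windows be reweighted to match the shifted signal distribution."""
    X = np.vstack([x_train, x_new])
    y = np.r_[np.zeros(len(x_train)), np.ones(len(x_new))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_new = clf.predict_proba(x_train)[:, 1]
    return p_new / np.clip(1.0 - p_new, 1e-6, None)   # density-ratio estimate
```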