Nico Hambauer

Friedrich-Alexander-University of Erlangen-Nürnberg | FAU · School of Business and Economics

Bachelor of Science
Generalised Additive Models, Intrinsically Interpretable AI, Patient Triage Problems, AI for Health, Computer Vision

About

3 Publications · 3,851 Reads · 0 Citations

Publications (3)
Conference Paper
Full-text available
The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which have to be considered with...
Preprint
Full-text available
The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which have to be considered with...
Preprint
Full-text available
In recent years, large pre-trained deep neural networks (DNNs) have revolutionized the field of computer vision (CV). Although these DNNs have been shown to be very well suited for general image recognition tasks, application in industry is often precluded for three reasons: 1) large pre-trained DNNs are built on hundreds of millions of parameters,...
Project (1)
Over the past decade, the vast majority of machine learning (ML) research has proposed prediction models for improved decision support whose functioning is not verifiable by humans. This rise of black-box models has caused problems in healthcare, criminal justice, and other areas because it is not directly observable which information in the input data drives the models' decisions. Although recent advances in explainable artificial intelligence and interpretable ML have led to promising results towards more transparent outputs, crucial barriers still hamper a widespread dissemination of such models in critical environments.

On the one hand, current efforts predominantly focus on post-hoc analytical explanations that rely on rough approximations and might therefore not be reliable. On the other hand, there is a lack of algorithms that allow domain expertise to be integrated directly during model development so that misleading conclusions can be avoided. Consequently, the field requires a much stronger linkage between algorithmic and behavioral research to integrate the user's expert knowledge directly into the model structures, to reflect their perception of the model's output, and to promote the model's interpretation already at the development stage.

To this end, we conduct a comprehensive research project at the intersection of mathematical and socio-technical research. The mathematical strand deals with the development of ML models based on additive model constraints, in which input variables are mapped independently of each other in a non-linear way and the mappings are summed up afterwards. Such models are commonly known as generalized additive models (GAMs), and the univariate mappings between features and the response are called shape functions. Since shape functions can be arbitrarily complex, GAMs generally achieve much better prediction accuracy than simple linear models while retaining full interpretability. To incorporate expert knowledge into the model structure, we will specifically develop model constraints that affect the shape functions in an expert-driven manner, such as predefined shapes, smoothness, regularization, and other structural components.

The socio-technical strand will then investigate to what extent the resulting ML models supported by expert knowledge lead to higher interpretability and acceptance. This will be thoroughly examined in a series of (user-centric) experiments as well as in field studies with associated project partners from industry. For this purpose, the research project is structured into three modules: 1) algorithmic development, 2) technical assessment, and 3) socio-technical evaluation. With our results, we expect to introduce a true game changer for the development of ML models in mission-critical decision scenarios.
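
To make the additive structure concrete: a GAM predicts through g(E[y | x]) = beta_0 + f_1(x_1) + ... + f_p(x_p), so each shape function f_j can be inspected on its own. The short sketch below illustrates this on synthetic data using the pygam library; the library choice, the data, and all parameter values are illustrative assumptions and not the tooling developed in this project.

```python
# Minimal sketch (not the project's code): a logistic GAM in which each feature
# is mapped through its own spline-based shape function and the mappings are
# summed. The pygam library, the synthetic data, and all settings are assumptions.
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                               # three independent features
logit = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]   # additive ground truth
y = (logit + rng.normal(scale=0.3, size=1000) > 0).astype(int)

# One spline term s(j) per feature: g(E[y]) = beta_0 + f_1(x_1) + f_2(x_2) + f_3(x_3)
gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X, y)

# Inspect each learned shape function f_j on a grid of feature values.
for j in range(3):
    XX = gam.generate_X_grid(term=j)
    fj = gam.partial_dependence(term=j, X=XX)
    print(f"feature {j}: shape function ranges from {fj.min():.2f} to {fj.max():.2f}")
```

Expert-driven constraints of the kind described above (predefined shapes, smoothness, regularization) would act directly on these shape functions f_j, which is what keeps the model fully interpretable while still allowing non-linear effects.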