Lab

Assistant Professorship for Intelligent Information Systems

About the lab

Hi there!

Our research focuses on business analytics, machine learning, and artificial intelligence. In particular, our lab is concerned with the design, analysis, and use of intelligent information systems based on methods and technologies of advanced data processing (e.g., deep learning, computer vision, natural language processing, process mining).

One of our main areas for conducting and applying this research is industrial manufacturing. In addition, we analyze and design data science qualification programs, and we investigate approaches for increasing the acceptance of AI systems from a socio-technical perspective.

Visit our website for further information: http://intelligentsystems.wiso.rw.fau.de/

Featured projects (1)

Project
Over the past decade, the vast majority of machine learning (ML) research has proposed prediction models for improved decision support whose functioning is not verifiable by humans. This rise of black-box models has caused problems in healthcare, criminal justice, and other areas, because it is not directly observable which information in the input data drives the models' decisions. Although recent advances in explainable artificial intelligence and interpretable ML have produced promising results towards more transparent outputs, crucial barriers still hamper the widespread dissemination of such models in critical environments. On the one hand, current efforts predominantly focus on post-hoc analytical explanations, which rely on rough approximations that may not be reliable. On the other hand, the field currently lacks algorithms that allow a direct integration of domain expertise during model development, through which misleading conclusions could be avoided. Consequently, the field requires a much stronger linkage between algorithmic and behavioral research: to integrate users' expert knowledge directly into the model structures, to reflect on their perception of the model's output, and to promote the model's interpretation already at the development stage.

To this end, we conduct a comprehensive research project at the intersection of mathematical and socio-technical research. The mathematical strand deals with the development of ML models based on additive model constraints, in which input variables are mapped independently of each other in a non-linear way and the mappings are summed up afterwards. Such models are commonly known as generalized additive models (GAMs), and the univariate mappings between features and the response are called shape functions. Since shape functions can be arbitrarily complex, GAMs generally achieve much better prediction accuracy than simple linear models while retaining full interpretability. To incorporate expert knowledge into the model structure, we will develop model constraints that affect the shape functions in an expert-driven manner, such as predefined shapes, smoothness, regularization, and other structural components.

The socio-technical strand will then investigate to what extent the resulting expert-supported ML models lead to higher interpretability and acceptance, which we will examine in a series of (user-centric) experiments as well as in field studies with associated project partners from industry. For this purpose, the project is structured into three modules: 1) algorithmic development, 2) technical assessment, and 3) socio-technical evaluation. With our results, we expect to introduce a true game changer for the development of ML models in mission-critical decision scenarios.
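To make the additive idea concrete, here is a minimal, self-contained sketch of a GAM fitted by backfitting, using piecewise-constant (binned) shape functions. This is an illustration only, not the project's method: real GAM implementations use splines with smoothness penalties, and the constraint mechanisms described above (predefined shapes, regularization) are not shown. All function and variable names are our own.

```python
# Toy GAM: y ≈ intercept + f1(x1) + f2(x2) + ..., where each shape
# function f_j is a piecewise-constant mapping fitted by backfitting.
# Because each f_j depends on one feature only, it can be plotted and
# inspected directly -- the source of a GAM's interpretability.

def fit_gam(X, y, n_bins=8, n_iters=20):
    """X: list of feature columns (lists of floats); y: list of targets.
    Returns (predict, shapes), where shapes[j] = (bin_edges, bin_values)."""
    n = len(y)
    intercept = sum(y) / n
    shapes = []
    for col in X:
        lo, hi = min(col), max(col)
        edges = [lo + (hi - lo) * i / n_bins for i in range(n_bins + 1)]
        shapes.append((edges, [0.0] * n_bins))

    def bin_index(edges, v):
        for i in range(len(edges) - 2):
            if v < edges[i + 1]:
                return i
        return len(edges) - 2  # rightmost bin catches the maximum

    for _ in range(n_iters):
        for j, col in enumerate(X):
            edges, vals = shapes[j]
            # Partial residual: remove the other features' contributions.
            resid = []
            for i in range(n):
                pred_others = intercept + sum(
                    shapes[k][1][bin_index(shapes[k][0], X[k][i])]
                    for k in range(len(X)) if k != j)
                resid.append(y[i] - pred_others)
            # Shape function = mean partial residual per bin.
            sums, counts = [0.0] * n_bins, [0] * n_bins
            for i in range(n):
                b = bin_index(edges, col[i])
                sums[b] += resid[i]
                counts[b] += 1
            for b in range(n_bins):
                if counts[b]:
                    vals[b] = sums[b] / counts[b]

    def predict(row):
        return intercept + sum(
            shapes[j][1][bin_index(shapes[j][0], row[j])]
            for j in range(len(X)))
    return predict, shapes
```

Note how an expert-driven constraint could hook into the inner loop: forcing `vals` to be monotone or smooth before the next backfitting pass would constrain the shape function without touching the additive structure.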

Featured research (34)

The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which have to be considered with caution as they only use approximations of the underlying ML model. Therefore, our paper investigates a series of intrinsically interpretable ML models and discusses their suitability for the IS community. More specifically, our focus is on advanced extensions of generalized additive models (GAMs) in which predictors are modeled independently in a non-linear way to generate shape functions that can capture arbitrary patterns but remain fully interpretable. In our study, we evaluate the prediction qualities of five GAMs as compared to six traditional ML models and assess their visual outputs for model interpretability. On this basis, we investigate their merits and limitations and derive design implications for further improvements.
The COVID-19 pandemic is accompanied by a massive “infodemic” that makes it hard to identify concise and credible information for COVID-19-related questions, like incubation time, infection rates, or the effectiveness of vaccines. As a novel solution, our paper is concerned with designing a question-answering system based on modern technologies from natural language processing to overcome information overload and misinformation in pandemic situations. To carry out our research, we followed a design science research approach and applied Ingwersen’s cognitive model of information retrieval interaction to inform our design process from a socio-technical lens. On this basis, we derived prescriptive design knowledge in terms of design requirements and design principles, which we translated into the construction of a prototypical instantiation. Our implementation is based on the comprehensive CORD-19 dataset, and we demonstrate our artifact’s usefulness by evaluating its answer quality based on a sample of COVID-19 questions labeled by biomedical experts.
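The retrieval step at the core of such a question-answering pipeline can be illustrated with a toy sketch. This is purely hypothetical and not the paper's system: the actual artifact uses modern NLP models over the CORD-19 corpus, whereas this sketch only ranks passages by bag-of-words cosine similarity to a question.

```python
# Hypothetical illustration of passage retrieval for question answering:
# rank candidate passages by cosine similarity of word-count vectors.
from collections import Counter
import math

def rank_passages(question, passages):
    """Return passages sorted by similarity to the question (best first)."""
    def vec(text):
        return Counter(text.lower().split())

    def cos(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(question)
    return sorted(passages, key=lambda p: cos(q, vec(p)), reverse=True)
```

In a full pipeline, a reader model would then extract or generate the answer from the top-ranked passages; credibility filtering, as motivated by the infodemic setting, would act on the passage source.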
The rapid development of 3D sensors and object detection methods based on 3D point clouds has led to increasing demand for labeling tools that provide suitable training data. However, existing labeling tools mostly focus on a single use case and generate bounding boxes only indirectly from a selection of points, which often impairs the label quality. Therefore, this work describes labelCloud, a generic point cloud labeling tool that can process all common file formats and provides 3D bounding boxes in multiple label formats. labelCloud offers two labeling methods that let users draw rotated bounding boxes directly inside the point cloud. Compared to a labeling tool based on indirect labeling, labelCloud could significantly increase the label precision while slightly reducing the labeling time. Due to its modular architecture, researchers and practitioners can adapt the software to their individual needs. With labelCloud, we contribute to enabling convenient 3D vision research in novel application domains.
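As a minimal illustration of what a 3D label record contains, the sketch below derives an axis-aligned bounding box (centroid plus extents) from a point cloud. This is not labelCloud's implementation: labelCloud lets users draw rotated boxes directly in the point cloud, while this shows only the simpler axis-aligned case that indirect, selection-based tools typically compute.

```python
# Illustrative only: axis-aligned 3D bounding box from a point cloud,
# expressed as (center, dimensions) -- a common shape for label formats.

def bounding_box(points):
    """points: iterable of (x, y, z) tuples -> (center, dimensions)."""
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    center = tuple((lo + hi) / 2 for lo, hi in zip(mins, maxs))
    dims = tuple(hi - lo for lo, hi in zip(mins, maxs))
    return center, dims
```

A rotated box, as drawn in labelCloud, additionally carries an orientation (e.g., a yaw angle), which a tight axis-aligned fit around a rotated object cannot capture; that gap is one reason direct drawing improves label precision.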

Lab head

Patrick Zschech
Department
  • School of Business, Economics and Society

Members (3)

Nico Hambauer
  • Friedrich-Alexander-University of Erlangen-Nürnberg
Juliane Ort
  • Duale Hochschule Baden-Württemberg Mannheim
Julian Rosenberger
  • Friedrich-Alexander-University of Erlangen-Nürnberg