
Abstract

After a brief clarification of the terms machine learning and artificial intelligence (AI), this document describes a methodology for placing the various AI research and development disciplines in an overall context. Building on this, the relevant AI aspects are briefly discussed, and it is emphasized that several research areas must be addressed in order to ultimately deploy AI successfully, in an effective, economical, and socially acceptable way, across many application domains. Several successful examples of AI use in different application areas are presented.
References

Article
Since computers have become increasingly more powerful, users are less willing to accept slow responses of systems. Hence, performance testing is important for interactive systems. However, it is still challenging to test if a system provides acceptable performance or can satisfy certain response-time limits, especially for different usage scenarios. On the one hand, there are performance-testing techniques that require numerous costly tests of the system. On the other hand, model-based performance analysis methods have a doubtful model quality. Hence, we propose a combined method to mitigate these issues. We learn response-time distributions from test data in order to augment existing behavioral models with timing aspects. Then, we perform statistical model checking with the resulting model for a performance prediction. Finally, we test the accuracy of our prediction with hypotheses testing of the real system. Our method is implemented with a property-based testing tool with integrated statistical model checking algorithms. We demonstrate the feasibility of our techniques in an industrial case study with a web-service application.
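As a rough illustration of the combined idea, learning a timing model from test data and then using statistical model checking, here is a minimal Python sketch. The measurement values, the normal-distribution model, the 0.6 s limit, and the 0.95 target probability are all illustrative assumptions and not taken from the cited work.

```python
import random
import statistics

# Hypothetical response times (in seconds) measured while testing one usage scenario.
measured = [0.42, 0.47, 0.51, 0.39, 0.55, 0.46, 0.60, 0.44, 0.50, 0.48]

# Step 1: learn a simple timing model from the test data
# (a normal distribution here; the cited approach learns richer response-time distributions).
mu = statistics.mean(measured)
sigma = statistics.stdev(measured)

def simulate_response_time() -> float:
    """Draw one response time from the learned timing model."""
    return max(0.0, random.gauss(mu, sigma))

# Step 2: statistical model checking by Monte Carlo simulation:
# estimate the probability that the response time stays within the limit.
LIMIT = 0.6      # assumed response-time requirement (seconds)
RUNS = 100_000
hits = sum(simulate_response_time() <= LIMIT for _ in range(RUNS))
p_est = hits / RUNS
print(f"estimated P(response time <= {LIMIT}s) = {p_est:.3f}")

# Step 3: the prediction would then be checked against the real system,
# e.g. by requiring p >= 0.95 and testing that hypothesis on fresh measurements.
REQUIRED = 0.95
print("requirement predicted to hold" if p_est >= REQUIRED else "requirement at risk")
```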
Conference Paper
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if questions" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Conference Paper
Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.
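The core mechanism of such attacks, optimizing a perturbation that is confined to a wearable region of the image, can be sketched in a deliberately simplified form. The linear "recognizer", the random image, and the mask below are toy assumptions; the cited work attacks real deep face-recognition models and additionally enforces printability and smoothness of the pattern.

```python
import numpy as np

# Toy linear "face recognizer": score = w . x, where a higher score means
# "recognized as the target person". Everything below is a stand-in, not a real model.
rng = np.random.default_rng(0)
DIM = 32 * 32
w = rng.normal(size=DIM)            # stand-in for model weights
x = rng.uniform(0.0, 1.0, DIM)      # stand-in for a flattened face image

# Mask restricting the perturbation to an assumed "eyeglass frame" region of the image.
mask = np.zeros(DIM)
mask[200:400] = 1.0

# Gradient ascent on the target score, applied only inside the masked region.
step, eps = 0.05, 0.2
delta = np.zeros(DIM)
for _ in range(50):
    grad = w                         # gradient of w.x with respect to x
    delta = np.clip(delta + step * np.sign(grad) * mask, -eps, eps)

adversarial = np.clip(x + delta, 0.0, 1.0)
print("target score before attack:", float(w @ x))
print("target score after attack :", float(w @ adversarial))
```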
Conference Paper
Safety and security are major concerns in the development of Cyber-Physical Systems (CPS). Signal temporal logic (STL) was proposed as a language to specify and monitor the correctness of CPS relative to formalized requirements. Incorporating STL into a development process enables designers to automatically monitor and diagnose traces, compute robustness estimates based on requirements, and perform requirement falsification, leading to productivity gains in verification and validation activities; however, in its current form STL is agnostic to the input/output classification of signals, and this negatively impacts the relevance of the analysis results. In this paper we propose to make the interface explicit in the STL language by introducing input/output signal declarations. We then define new measures of input vacuity and output robustness that better reflect the nature of the system and the specification intent. The resulting framework, which we call interface-aware signal temporal logic (IA-STL), aids verification and validation activities. We demonstrate the benefits of IA-STL on several CPS analysis activities: (1) robustness-driven sensitivity analysis, (2) falsification and (3) fault localization. We describe an implementation of our enhancement to STL and associated notions of robustness and vacuity in a prototype extension of Breach, a MATLAB®/Simulink® toolbox for CPS verification and validation. We explore these methodological improvements and evaluate our results on two examples from the automotive domain: a benchmark powertrain control system and a hydrogen fuel cell system.
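As a rough illustration of quantitative robustness semantics of this kind (not the IA-STL definitions themselves), the following sketch computes the robustness of the simple requirement "the output always stays below a bound" on a sampled trace; the signal values and bounds are made up.

```python
# Quantitative, STL-style robustness on a sampled trace: for "globally (y <= bound)"
# the robustness is the worst-case margin min_t (bound - y[t]);
# a positive value means the requirement is satisfied with that margin, a negative one means violation.

def robustness_always_leq(trace, bound):
    return min(bound - y for y in trace)

# Hypothetical sampled output signal of a controller model.
y = [0.2, 0.35, 0.5, 0.48, 0.55, 0.61, 0.58]

print(robustness_always_leq(y, bound=0.7))   # > 0: satisfied with margin ~0.09
print(robustness_always_leq(y, bound=0.6))   # < 0: violated by ~0.01
```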
Article
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust in a model. Trust is fundamental if one plans to take action based on a prediction, or when choosing whether or not to deploy a new model. Such understanding further provides insights into the model, which can be used to turn an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We further propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). The usefulness of explanations is shown via novel experiments, both simulated and with human subjects. Our explanations empower users in various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and detecting why a classifier should not be trusted.
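The basic mechanism behind LIME, sampling around the instance, querying the black box, and fitting a proximity-weighted linear surrogate, can be sketched as follows. The black-box function, kernel width, and sample count are illustrative assumptions and not the implementation of the LIME library.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """Stand-in for an opaque classifier: returns a probability from a nonlinear rule."""
    return 1.0 / (1.0 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1] ** 2)))

x0 = np.array([0.5, 0.8])                    # instance whose prediction is to be explained

# 1) Sample perturbations around the instance and query the black-box model.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2) Weight the samples by proximity to the instance (exponential kernel).
dist = np.linalg.norm(Z - x0, axis=1)
weights = np.exp(-(dist ** 2) / 0.25)

# 3) Fit a proximity-weighted linear surrogate (closed-form weighted least squares).
A = np.hstack([Z, np.ones((len(Z), 1))])     # features plus intercept
W = np.diag(weights)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# The surrogate's weights serve as a local, interpretable explanation of x0.
print("local feature weights:", coef[:2])
```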
AI application examples in marketing and sales

Further typical AI application examples for customer services of companies in marketing, sales, and retail are:
• Recognizing the content of documents. This depends very strongly both on understanding the language and on the document structures (automated document processing).
Determining effective test data

Use of AI to generate effective test cases and test data for testing digital systems [36].
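As a deliberately simple illustration of automated test-data generation (plain random search against a reference oracle, not the AI-based technique of [36]), the following hypothetical sketch searches for date inputs on which a buggy validator disagrees with Python's calendar module.

```python
import calendar
import random

# Stand-in system under test: a date validator with a deliberately buggy leap-year rule.
def accepts_date(y, m, d):
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if m == 2 and y % 4 == 0:            # bug: ignores the 100- and 400-year rules
        return 1 <= d <= 29
    return 1 <= m <= 12 and 1 <= d <= days[m - 1]

# Reference oracle based on the standard library.
def reference(y, m, d):
    return 1 <= m <= 12 and 1 <= d <= calendar.monthrange(y, m)[1]

# Automated test-data generation by plain random search: generate candidate inputs
# and keep the first one on which the system under test disagrees with the oracle.
random.seed(0)
failing = None
for _ in range(200_000):
    candidate = (random.randint(1850, 1950), random.randint(1, 12), random.randint(1, 31))
    if accepts_date(*candidate) != reference(*candidate):
        failing = candidate
        break

print("failing test input:", failing)    # expected: 29 February in a non-leap year such as 1900
```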