Recent publications
We introduce a runtime verification framework for programmable switches that complements static analysis. To evaluate our approach, we design and develop a runtime verification system that automatically detects, localizes, and patches software bugs in P4 programs. Bugs are reported as violations of pre-specified expected behavior captured by the system. The system is based on machine learning-guided fuzzing that tests a P4 switch non-intrusively, i.e., without modifying the P4 program, to detect runtime bugs. This enables automated, real-time localization and patching of bugs. We used a prototype to detect and patch existing bugs in various publicly available P4 application programs deployed on two different switch platforms, namely the behavioral model (bmv2) and Tofino. Our evaluation shows that our system significantly outperforms bug-detection baselines while generating fewer packets, and patches bugs in large P4 programs, e.g., switch.p4, without triggering any regressions.
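As a rough illustration of the non-intrusive, expectation-based fuzzing idea described above, the following minimal sketch mutates packets, sends them to a P4 target, and checks the observed behavior against a user-supplied expectation. All names here (send_packet, expectation, mutate) are hypothetical placeholders, not the paper's actual API, and the machine-learning guidance is omitted.

    # Minimal sketch of expectation-based fuzzing of a P4 switch (hypothetical API).
    import random

    def mutate(packet: bytes) -> bytes:
        """Flip one random byte to derive the next test packet."""
        i = random.randrange(len(packet))
        return packet[:i] + bytes([packet[i] ^ 0xFF]) + packet[i + 1:]

    def fuzz_switch(send_packet, expectation, seed: bytes, budget: int = 1000):
        """send_packet(pkt) -> observed output; expectation(pkt, out) -> bool.
        Returns the first packet whose observed behavior violates the expectation."""
        packet = seed
        for _ in range(budget):
            out = send_packet(packet)          # exercise the deployed P4 program
            if not expectation(packet, out):   # pre-specified expected behavior
                return packet                  # report the violating input
            packet = mutate(packet)            # derive the next test input
        return None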
Industrial Cyber-Physical Systems (ICPS) are a key element of the backbone infrastructure for realizing innovative systems that comply with the vision and requirements of the fourth industrial revolution. Several architectures, such as the Reference Architectural Model Industry 4.0 (RAMI4.0), the Industrial Internet Reference Architecture (IIRA), and the Smart Grid Architecture Model (SGAM), have been proposed to develop and integrate ICPS, their services, and applications for different domains. In such architectures, the digitization of assets and their interconnection with relevant industrial processes and business services is of paramount importance. Different technological solutions have been developed that overwhelmingly focus on the integration of assets with their cyber counterparts. In this context, the adoption of standards is crucial to enable the compatibility and interoperability of these networked systems. Since industrial agents are seen as an enabler for realizing ICPS, this work provides insights into the use and alignment of the recently established IEEE 2660.1 recommended practice to support ICPS developers and engineers in integrating assets in the context of each of the three reference architectures mentioned above. A critical discussion also points out noteworthy aspects that emerge when using IEEE 2660.1 in these architectures and discusses limitations and challenges ahead.
The increasing dissemination of JSON as an exchange and storage format, driven by its popularity in business and analytical applications, requires efficient storage and processing of JSON documents. This has led to the development of specialized JSON document stores and the extension of existing relational stores, while no JSON-specific benchmarks were available to assess these systems. In this work, we assess currently available JSON document store benchmarks and select the recently developed DeepBench benchmark to experimentally study important dimensions such as analytical querying capabilities, object nesting, and array unnesting. To make the computational complexity of array unnesting more tractable, we introduce an improvement that we evaluate within a commercial system as part of the common, performance-oriented development process in practice. We conclude our evaluation of well-known document stores with DeepBench and give new insights into the strengths and potential weaknesses of those systems that were not revealed by existing, non-JSON benchmarking practices. In particular, the algebraic optimization of JSON query processing is still limited despite prior work on hierarchical data models in the XML context.
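To make the notion of array unnesting concrete, here is a small, generic illustration (not taken from DeepBench): a nested JSON document is flattened into one record per array element, repeating the parent fields, which is the operation whose cost grows quickly when arrays are nested several levels deep.

    import json

    doc = json.loads("""
    {"order": 1, "items": [
        {"sku": "A", "qty": 2},
        {"sku": "B", "qty": 1}
    ]}
    """)

    # Unnest the "items" array: one flat record per array element,
    # repeating the parent fields (analogous to SQL UNNEST or MongoDB $unwind).
    rows = [{"order": doc["order"], **item} for item in doc["items"]]
    print(rows)
    # [{'order': 1, 'sku': 'A', 'qty': 2}, {'order': 1, 'sku': 'B', 'qty': 1}]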
Cyberattacks will continue to thrive as long as their benefits exceed their costs. A successful cyberattack requires a vulnerability, but above all it requires the attacker's willingness to exploit it. Can we reduce this willingness, and can we even out the attack/defense asymmetry?
Information systems research has a long-standing interest in how organizations gain value through information technology. In this article, we investigate a business process intelligence (BPI) technology that is receiving increasing interest in research and practice: process mining. Process mining uses digital trace data to visualize and measure the performance of business processes in order to inform managerial actions. While process mining has received tremendous uptake in practice, it is unknown how organizations use it to generate business value. We present the results of a multiple case study with key stakeholders from eight internationally operating companies. We identify key features of process mining (data and connectivity, process visualization, and process analytics) and show how they translate into a set of affordances that enable value creation. Specifically, process mining affords (1) perceiving end-to-end process visualizations and performance indicators, (2) sense-making of process-related information, (3) data-driven decision making, and (4) implementing interventions. Value is realized, in turn, in the form of process efficiency, monetary gains, and non-monetary gains, such as customer satisfaction. Our findings have implications for the discourse on IT value creation, as we show how process mining constitutes a new class of BI&A technology that enables behavioral visibility and allows organizations to make evidence-based decisions about their business processes.
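As a simple, generic illustration of how digital trace data is turned into a process performance indicator (this is not the specific tooling studied in the article), the sketch below computes case-level cycle times from a tiny event log.

    from datetime import datetime
    from collections import defaultdict

    # A tiny event log: (case id, activity, timestamp) tuples, the raw digital trace data.
    event_log = [
        ("c1", "order received", "2023-01-02 09:00"),
        ("c1", "order shipped",  "2023-01-04 15:00"),
        ("c2", "order received", "2023-01-03 10:00"),
        ("c2", "order shipped",  "2023-01-03 18:00"),
    ]

    # Group events by case and compute the cycle time (first to last event) per case.
    cases = defaultdict(list)
    for case_id, activity, ts in event_log:
        cases[case_id].append(datetime.strptime(ts, "%Y-%m-%d %H:%M"))

    for case_id, stamps in cases.items():
        cycle_time = max(stamps) - min(stamps)
        print(case_id, cycle_time)   # e.g. c1 2 days, 6:00:00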
To kick-start the discussion, let's first review some of the recent attacks. In the node-ipc case,1 a developer pushed an update that deliberately but stealthily included code that sabotaged the computers of the users who installed the updated component. The attack was selective: a DarkSide in reverse. If the computer's Internet Protocol (IP) address was geolocated in Russia, the attack would be launched. Several days and a few million downloads later, the "spurious code" was noticed and investigated. Linus's law on the many eyes eventually made the bug shallow,2 and the developer pulled back the changes.
Encrypting data before sending it to the cloud ensures data confidentiality but requires the cloud to compute on encrypted data. Trusted execution environments, such as Intel SGX enclaves, promise to provide a secure environment in which data can be decrypted and then processed. However, vulnerabilities in the executed program give attackers ample opportunity to execute arbitrary code inside the enclave. This code can modify the dataflow of the program and leak secrets via SGX side channels. Fully homomorphic encryption would be an alternative for computing on encrypted data without leaks. However, due to its high computational complexity, its applicability to general-purpose computing remains limited. Researchers have made several proposals for transforming programs to perform encrypted computations using less powerful encryption schemes. Yet current approaches do not support programs that make control-flow decisions based on encrypted data.
We introduce the concept of dataflow authentication (DFAuth) to enable such programs. DFAuth prevents an adversary from arbitrarily deviating from the dataflow of a program. Our technique hence offers protections against the side-channel attacks described previously. We implemented two flavors of DFAuth, a Java bytecode-to-bytecode compiler, and an SGX enclave running a small and program-independent trusted code base. We applied DFAuth to a neural network performing machine learning on sensitive medical data and a smart charging scheduler for electric vehicles. Our transformation yields a neural network with encrypted weights, which can be evaluated on encrypted inputs in 12.55 ms. Our protected scheduler is capable of updating the encrypted charging plan in approximately 1.06 seconds.
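The following toy sketch illustrates the general dataflow-authentication idea in the simplest possible form; it is a hedged illustration, not the paper's actual scheme, and it uses plaintext integers where DFAuth would use encrypted values. Each intermediate value carries a MAC over its variable label and value, produced only by the trusted component, so untrusted code cannot splice an unexpected value into the dataflow without detection.

    # Sketch of the dataflow-authentication idea (illustrative only, not DFAuth itself).
    import hmac, hashlib

    KEY = b"enclave-secret"          # held only by the trusted component

    def tag(label: str, value: int) -> bytes:
        msg = f"{label}:{value}".encode()
        return hmac.new(KEY, msg, hashlib.sha256).digest()

    def trusted_add(x, y, out_label):
        """Trusted component: verify both operands, then emit an authenticated result.
        In DFAuth the values would additionally be encrypted; plaintext is used here
        only to keep the sketch short."""
        for label, value, mac in (x, y):
            if not hmac.compare_digest(mac, tag(label, value)):
                raise ValueError("dataflow violation: tampered operand")
        result = x[1] + y[1]
        return (out_label, result, tag(out_label, result))

    a = ("a", 20, tag("a", 20))
    b = ("b", 22, tag("b", 22))
    c = trusted_add(a, b, "c")       # succeeds: operands flow as declared

    forged = ("a", 999, b[2])        # attacker substitutes a value with a stale MAC
    # trusted_add(forged, b, "c")    # would raise: dataflow violation detected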
Level lifetimes for the candidate chiral doublet bands of ⁸⁰Br were extracted by means of the Doppler-shift attenuation method. The absolute transition probabilities derived from the lifetimes agree well with the M1 and E2 chiral electromagnetic selection rules and are well reproduced by triaxial particle rotor model calculations. Such good agreement among the experimental data, the selection rules of chiral doublet bands, and the theoretical calculations is rare and outstanding in research on nuclear chirality. Besides the odd-odd Cs isotopes, the odd-odd Br isotopes in the A ≈ 80 mass region represent another territory that exhibits the ideal selection rules expected for chiral doublet bands.
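For reference, reduced transition probabilities follow from measured level lifetimes through the standard relations below, quoted here with the commonly used numerical constants and neglecting internal conversion; these are textbook-style formulas, not values taken from the paper. Here \tau is the level lifetime in ps, E_\gamma the transition energy in MeV, and b_\gamma the branching ratio of the transition:

    B(M1) \approx 0.05697 \, \frac{b_\gamma}{E_\gamma^{3} \, \tau} \quad [\mu_N^2],
    \qquad
    B(E2) \approx 0.0816 \, \frac{b_\gamma}{E_\gamma^{5} \, \tau} \quad [e^2\mathrm{b}^2].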
The success of Industry 4.0 has led to technological innovations in Operator 4.0 roles and capabilities. This increasing human presence and involvement amid Artificial Intelligence (AI) solutions and automated and autonomous systems has renewed the ethics challenges of human-centric industrial cyber-physical systems in sustainable factory automation. In this paper, we aim to address these ethics challenges by proposing a new AI ethics framework for Operator 4.0. Founded on the key intersecting ethics dimensions of the IEEE Ethically Aligned Design and the Ethics Guidelines for Trustworthy AI by the European Union's High-Level Expert Group on AI, this framework is formulated for the primary profiles of the Operator 4.0 typology across transparency, equity, safety, accountability, privacy, and trust. The framework is designed for completeness: all ethics dimensions are closely intertwined, and no component is applied in isolation across physical, mental, and cognitive operator workloads and interactions.
This chapter introduces the main conceptual foundations of multi-agent systems and holonic systems and frames industrial agents as an instantiation of these technological paradigms to meet industrial requirements, such as those posed by industrial cyber-physical systems (ICPS). It addresses the alignment of industrial agents with RAMI 4.0. The chapter also addresses the use of industrial agents to realize ICPS and to concretely enhance the functionalities provided by asset administration shells (AAS). The holonic paradigm translates Koestler's observations and Herbert Simon's theories into a set of concepts appropriate for distributed control systems. In line with holonic principles, an industrial agent usually has an associated physical hardware counterpart, which increases deployment complexity. The AAS is designed to be available for both non-intelligent and intelligent digitalized assets and also serves as a digital basis for autonomous components and systems.
Over the many years in which Günter Hotz served as an academic teacher at the Universität des Saarlandes, he supervised a total of 54 "children" to their doctorates and guided some of them on to their habilitation as well. They are listed below with the topics of their doctoral theses and, where applicable, their habilitation theses, before the following chapters present the (academic) curriculum vitae of each doctoral child, in some cases supplemented by a more or less extensive contribution.
We develop a bivariational principle for an antisymmetric product of nonorthogonal geminals. Special cases reduce to the antisymmetric product of strongly orthogonal geminals (APSG), the generalized valence bond-perfect pairing (GVB-PP), and the antisymmetrized geminal power (AGP) wavefunctions. The presented method employs wavefunctions of the same type as Richardson-Gaudin (RG) states, but which are not required to be eigenvectors of a model Hamiltonian, allowing for more freedom in the mean field. The general idea is to work with the same state in a primal picture in terms of pairs and in a dual picture in terms of pair-holes. This leads to an asymmetric energy expression that may be optimized bivariationally and is strictly variational when the two representations agree. The general approach may be useful in other contexts, such as computationally feasible variational coupled-cluster methods.
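As a reminder of the generic bivariational principle the abstract builds on (stated here in textbook form, not necessarily in the paper's notation), independent bra and ket states define an asymmetric energy functional that is made stationary with respect to both:

    E[\tilde{\Psi}, \Psi] = \frac{\langle \tilde{\Psi} | \hat{H} | \Psi \rangle}{\langle \tilde{\Psi} | \Psi \rangle},
    \qquad
    \frac{\partial E}{\partial \tilde{\Psi}} = 0, \quad \frac{\partial E}{\partial \Psi} = 0.

When the two representations coincide, \tilde{\Psi} = \Psi, the functional reduces to the ordinary Rayleigh quotient and the energy becomes strictly variational, consistent with the statement above about the primal (pair) and dual (pair-hole) pictures agreeing.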
A fundamental and challenging problem in deep learning is catastrophic forgetting, the tendency of neural networks to fail to preserve the knowledge acquired from old tasks when learning new tasks. This problem has been widely investigated in the research community, and several Incremental Learning (IL) approaches have been proposed in the past years. While earlier works in computer vision have mostly focused on image classification and object detection, more recently some IL approaches for semantic segmentation have been introduced. These previous works showed that, despite its simplicity, knowledge distillation can be effectively employed to alleviate catastrophic forgetting. In this paper, we follow this research direction and, inspired by recent literature on contrastive learning, we propose a novel distillation framework, Uncertainty-aware Contrastive Distillation. In a nutshell, our framework introduces a novel distillation loss that takes into account all the images in a mini-batch, enforcing similarity between features associated with pixels from the same classes and pulling apart those corresponding to pixels from different classes. Our experimental results demonstrate the advantage of the proposed distillation technique, which can be used in synergy with previous IL approaches and leads to state-of-the-art performance on three commonly adopted benchmarks.
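To make the pixel-wise contrastive objective concrete, here is a minimal, generic sketch of a supervised contrastive loss over pixel embeddings sampled from a mini-batch; it is written from scratch for illustration and is not the authors' released code, nor does it include their uncertainty weighting or distillation terms.

    import torch
    import torch.nn.functional as F

    def pixel_contrastive_loss(features, labels, temperature=0.1):
        """features: (N, D) pixel embeddings sampled from a mini-batch; labels: (N,) class ids.
        Pulls same-class pixels together and pushes different-class pixels apart
        (generic supervised contrastive formulation)."""
        features = F.normalize(features, dim=1)
        sim = features @ features.t() / temperature               # (N, N) similarities
        logits_mask = ~torch.eye(len(labels), dtype=torch.bool)   # exclude self-pairs
        pos_mask = (labels[:, None] == labels[None, :]) & logits_mask

        exp_sim = torch.exp(sim) * logits_mask
        log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
        pos_count = pos_mask.sum(dim=1).clamp(min=1)
        loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
        return loss.mean()

    # Example: 8 random pixel embeddings from 2 classes.
    feats = torch.randn(8, 16)
    labs = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
    print(pixel_contrastive_loss(feats, labs))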
Information
Address
Dietmar-Hopp-Allee 16, D-69190, Walldorf, Baden-Württemberg, Germany
Website
http://www.sap.com