Recent publications
The rapid growth of digital technology has facilitated industrial progress, while industrial CO2 emissions remain a major issue to be confronted. Digital Twins can play a major role, but so far they lack common norms, standards, and models. Moreover, much of the literature uses the term Digital Twin, yet only a few sources describe an actual Digital Twin, i.e., one with bidirectional data transfer between the real system and the software model. Until now, Digital Twins have focused on a single area of interest and have not considered the broader challenge of CO2 emissions. This study gives an example of how to predict CO2 emissions for the operation of a production site by merging three Digital Models (Building, Production, and Energy Model). The approach demonstrates how CO2 emissions can be reduced during operation by selecting an appropriate production scenario and a specific energy source mix in the planning phase. The core task is to enable energy demand reduction by simulating different production scenarios and to identify the best energy source mix, with the resulting CO2 emissions made visible. The case study shows that, by merging the three Digital Models, it is possible to create an overview of the expected CO2 emissions, which can serve as a basis for further Digital Twin developments. However, the case study also showed that only manual data exchange between the models was possible. Further developments enabling a common data exchange and the connection of the interdisciplinary digital models through a shared language are urgently needed to speed up Digital Twin development and shape an interdisciplinary industry approach.
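To make the emission calculation concrete, the following minimal Python sketch shows how a simulated energy demand and an energy source mix could be combined into an expected CO2 figure per production scenario. The function names, emission factors, and scenario values are illustrative assumptions, not data or code from the study.

# Hypothetical sketch (not the authors' implementation): combine a production
# scenario's simulated energy demand with an energy source mix to estimate CO2.

# Emission factors in kg CO2 per kWh (illustrative values, not from the paper).
EMISSION_FACTORS = {"grid": 0.40, "gas": 0.20, "pv": 0.05}

def co2_emissions(energy_demand_kwh, source_mix):
    """Estimate CO2 for one production scenario.

    energy_demand_kwh: total demand simulated by the Building/Production models.
    source_mix: fractions per energy source, e.g. {"grid": 0.5, "pv": 0.5}.
    """
    assert abs(sum(source_mix.values()) - 1.0) < 1e-6, "mix fractions must sum to 1"
    return sum(energy_demand_kwh * share * EMISSION_FACTORS[src]
               for src, share in source_mix.items())

# Compare scenarios and pick the one with the lowest expected emissions.
scenarios = {"two-shift": 1.2e6, "three-shift": 1.6e6}   # kWh per year (illustrative)
mix = {"grid": 0.4, "pv": 0.4, "gas": 0.2}
best = min(scenarios, key=lambda s: co2_emissions(scenarios[s], mix))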
Temporal knowledge graph completion (TKGC) aims to predict the missing links among the entities in a temporal knowledge graph (TKG). Most previous TKGC methods only consider predicting missing links among the entities seen in the training set and are unable to achieve strong performance in link prediction concerning newly-emerged unseen entities. Recently, a new task, TKG few-shot out-of-graph (OOG) link prediction, has been proposed, where TKGC models are required to achieve strong link prediction performance concerning newly-emerged entities that have only a few observed examples. In this work, we propose FITCARL, a TKGC method that combines few-shot learning with reinforcement learning to solve this task. In FITCARL, an agent traverses the whole TKG to search for the prediction answer. A policy network guides the search process based on the traversed path. To better address the data scarcity problem in the few-shot setting, we introduce a module that computes the confidence of each candidate action and integrate it into the policy for action selection. We also exploit entity concept information with a novel concept regularizer to boost model performance. Experimental results show that FITCARL achieves state-of-the-art performance on TKG few-shot OOG link prediction. Code and supplementary appendices are provided (https://github.com/ZifengDing/FITCARL/tree/main).
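As an illustration of the confidence-weighted action selection described in the abstract, the following PyTorch sketch scores candidate actions against the traversed path and re-weights those scores with a learned confidence term. It is a conceptual sketch only; class and variable names are assumptions, and the released FITCARL code should be consulted for the actual architecture.

import torch
import torch.nn.functional as F

class ConfidenceAwarePolicy(torch.nn.Module):
    """Sketch of a policy that integrates per-candidate confidence into action selection."""

    def __init__(self, dim):
        super().__init__()
        self.score = torch.nn.Linear(2 * dim, 1)       # path/action compatibility score
        self.confidence = torch.nn.Linear(dim, 1)      # confidence of each candidate action

    def forward(self, path_state, candidate_actions):
        # path_state: (dim,) encoding of the traversed path
        # candidate_actions: (num_actions, dim) embeddings of outgoing edges/entities
        expanded = path_state.expand(candidate_actions.size(0), -1)
        logits = self.score(torch.cat([expanded, candidate_actions], dim=-1)).squeeze(-1)
        conf = torch.sigmoid(self.confidence(candidate_actions)).squeeze(-1)
        return F.softmax(logits * conf, dim=-1)        # distribution over candidate actions

# Usage example with random embeddings: a distribution over 10 candidate actions.
policy = ConfidenceAwarePolicy(dim=64)
probs = policy(torch.randn(64), torch.randn(10, 64))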
Many industries are facing the challenge of an increasing number of cables in their products. All of these cable paths must be planned in a time-consuming, repetitive, and error-prone process. Instead of manually defining all waypoints for a broad range of cables, automation can provide globally optimized and valid paths for accelerated product development. To establish automated electrical routing, an industry-oriented application is integrated directly into existing 3D CAD workflows. The purpose of this research is to investigate the applicability and performance of Reinforcement Learning in identifying optimal paths in three-dimensional space. To this end, information is extracted directly from 3D CAD, and results are immediately returned to CAD. Handing over the crucial task of routing to a Multi-Agent Deep Q-Network (MADQN) promises a scalable solution for environments of different sizes and levels of complexity. Minimizing the total cable length while considering cable- and environment-specific constraints is formulated as a shortest-path problem in three-dimensional space in order to make it solvable for the developed approach. To reduce the complexity based on the application domain, the agents' accessible space is restricted to a maximum distance from the initial 3D CAD geometry. This paper provides a detailed explanation of the developed approach, which is compared to established methods in electrical routing such as the A* algorithm.
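The shortest-path formulation can be illustrated with a minimal sketch of the routing environment: a 3D voxel grid derived from the CAD geometry, six axis-aligned moves per step, and a reward that penalizes path length and invalid moves. The grid representation, reward values, and function names below are illustrative assumptions, not the paper's implementation.

import numpy as np

# Six axis-aligned moves, one voxel per step.
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(grid, pos, action, goal):
    """One environment step for a routing agent.

    grid: 3D boolean array, True where a voxel is blocked by CAD geometry or lies
          outside the allowed band around the initial geometry.
    pos, goal: integer (x, y, z) voxel coordinates.
    Returns (next_position, reward, done).
    """
    nxt = tuple(np.add(pos, action))
    if any(i < 0 or i >= s for i, s in zip(nxt, grid.shape)) or grid[nxt]:
        return pos, -1.0, False            # invalid move: penalty, stay in place
    if nxt == goal:
        return nxt, 10.0, True             # reached the cable end point
    return nxt, -0.1, False                # per-step cost encourages short cable length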
Systems engineering has become important in almost every complex product manufacturing industry, especially automotive. Emerging trends like vehicle electrification and autonomous driving now pose a System of Systems (SoS) engineering challenge to automotive OEMs. This paper presents a proof-of-concept (PoC) that applies a top-down SoS perspective to Hyundai-Kia Motor Corporation's (HKMC) virtual product development process to develop a performance-critical component of the vehicle, the tire. The PoC demonstrates using the Arcadia MBSE method to develop a consistent, layered vehicle architecture model starting from the SoS operational context down to the lowest level of system decomposition in the physical architecture, thereby capturing top-down knowledge traceability. Using the concept of functional chains, several vehicle performance views are captured that serve as the basis for architecture verification orchestration across engineering domains using a cross-domain orchestration platform, thereby validating key vehicle/tire performance metrics that influence the tire design parameters. Preliminary results of the study show that applying a method-based modeling approach could provide several benefits to HKMC's current product development approach, such as reduced time to model, SoS knowledge capture and reusability, parameter/requirement traceability, early performance verification, and effective systems engineering collaboration between the OEM, tire design supplier, and tire manufacturers.
We present an industrial end-user perspective on the current state of quantum computing hardware for one specific technological approach, the neutral atom platform. Our aim is to assist developers in understanding the impact of the specific properties of these devices on the effectiveness of algorithm execution. Based on discussions with different vendors and recent literature, we discuss the performance data of the neutral atom platform. Specifically, we focus on the physical qubit architecture, which affects state preparation, qubit-to-qubit connectivity, gate fidelities, native gate instruction set, and individual qubit stability. These factors determine both the quantum-part execution time and the end-to-end wall clock time relevant for end-users, but also the ability to perform fault-tolerant quantum computation in the future. We end with an overview of which applications have been shown to be well suited for the peculiar properties of neutral atom-based quantum computers.
Gas turbines are a key part of many countries' power generation portfolios, but components such as blades can suffer from hot corrosion attack, which can decrease component lifetimes. Corrosion is driven by impurity levels in the fuel and air (e.g., species containing sulphur and/or alkali metals) and depends on environmental conditions (e.g., air pollution, seawater droplets) that can lead to the formation of harmful species in the gas. Understanding and determining the deposition flux of such contaminants is crucial for understanding the problem. Thermodynamic simulations were used to determine the types and amounts of potentially corrosive contaminants, followed by calculations of their deposition fluxes. An operating scenario based on an offshore platform was evaluated, and the effectiveness of different filtration systems was assessed. The impurity levels of alkali metals, such as sodium, greatly impact the calculated deposition flux of species linked to corrosion attack. The presence of Na2SO4 and K2SO4 was found at temperatures representative of stage 2 nozzle guide vanes. Lowering the sulphur input (from fuel or air) can be an efficient way to decrease deposition, but attention must also be paid to lowering the amount of alkali metal entering the gas turbine, which can be achieved through correct use of the filtration systems.
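As a simplified illustration of how a deposition flux can be estimated from thermodynamic results, the sketch below uses a first-order mass-transfer expression driven by the difference between a species' partial pressure in the gas and its equilibrium vapour pressure at the cooler vane surface. The mass-transfer coefficient and the numerical values are placeholders and do not reflect the models or data used in the study.

# Simplified illustration (not the paper's model): first-order deposition flux
# of a condensable species such as Na2SO4 onto a cooler surface.

R = 8.314  # universal gas constant, J/(mol*K)

def deposition_flux(k_m, p_gas, p_surface_eq, T_gas):
    """Deposition flux in mol/(m^2*s).

    k_m          : mass-transfer coefficient, m/s (e.g. from flow correlations)
    p_gas        : partial pressure of the species in the bulk gas, Pa
                   (e.g. from thermodynamic equilibrium calculations)
    p_surface_eq : equilibrium vapour pressure at the surface temperature, Pa
    T_gas        : bulk gas temperature, K
    """
    return max(k_m * (p_gas - p_surface_eq) / (R * T_gas), 0.0)

# Example with placeholder values: higher sodium in the intake air raises p_gas
# and therefore the estimated flux.
flux = deposition_flux(k_m=0.05, p_gas=1.0e-2, p_surface_eq=1.0e-3, T_gas=1400.0)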
What are Cy Twombly’s sculptures made of? This article presents an overview of a non-destructive examination conducted on three sculptures by American artist Cy Twombly (1928–2011) as part of an art-technological research project at the Doerner Institut in Munich. The artworks are part of the collection of the Brandhorst Museum and belong to Twombly’s series of so-called ‘Original Sculptures’: assemblages of individual found objects, which the artist covered and modified with layers of plaster and white paint. To develop a long-term preservation strategy, the research focused on understanding the materials and construction methods used in Twombly’s sculptures. In collaboration with the Chair of Non-Destructive Testing at the Technical University of Munich, the artworks were inspected using X-ray radiography and computed tomography. The results showed that Cy Twombly used various everyday objects made from wood, plastics, metal, and paper/cardboard to build the assemblages. Unexpectedly, the examinations revealed that the individual parts are held together solely by the coating of plaster and paint, lacking additional mechanical connections. The overall structure thus proved to be very fragile and highly sensitive to physical stresses, whether due to handling, transport, or strains in the microstructure caused by climatic fluctuations.
Since little was known about Cy Twombly’s choice of materials and manufacturing details, the results offer valuable insights into the overall artistic process and decision-making of one of the most influential artists of the 20th/21st centuries. Conservators can use the art-technological findings to monitor the sculptures’ condition and develop or adapt long-term preservation strategies, including aspects such as ambient climatic conditions and specifications for handling, storage, and transport. In addition, the knowledge generated can be used for further research on the specific materials and transferred to other artworks by Cy Twombly.
Many important real-world data sets come in the form of graphs or networks, including social networks, knowledge graphs, protein-interaction networks, the World Wide Web, and many more. Graph neural networks (GNNs) are connectionist models that capture the dependence structure induced by links via message passing between the nodes of graphs. Like other connectionist models, GNNs lack transparency in their decision-making. Since the unprecedented performance of such AI methods leads to their increasing use in the daily life of humans, there is an emerging need to understand the decision-making processes of such systems. While symbolic methods such as inductive logic learning come with explainability, they perform best when dealing with relatively small and precise data. Sub-symbolic methods such as graph neural networks are able to handle large datasets, have a higher tolerance to noise in real-world data, generally have high computing performance, and are easier to scale up. We aim to develop a hybrid method by combining GNNs, sub-symbolic explainer methods, and inductive logic learning. This enables human-centric and causal explanations by extracting symbolic explanations from identified decision drivers and enriching them with available background knowledge. With this method, high-accuracy sub-symbolic predictions come with symbolic-level explanations, and the preliminary evaluation results reported show an effective solution for the performance vs. explainability trade-off. The evaluation is done on a chemical use case and an industrial cybersecurity use case.
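The hybrid pipeline can be summarized in a short sketch: the GNN produces the prediction, a sub-symbolic explainer scores the edges that drove it, and the top-scoring edges are mapped to facts in the background knowledge as input for inductive logic learning. The helper names below are hypothetical and stand in for the components described in the abstract, not for a specific library API.

def explain_prediction(gnn, explainer, graph, node, background_knowledge, k=5):
    """Hypothetical hybrid explanation step combining a GNN, an explainer, and background knowledge."""
    prediction = gnn.predict(graph, node)                  # sub-symbolic prediction
    edge_scores = explainer.edge_importance(graph, node)   # decision drivers (edge -> score)
    top_edges = sorted(edge_scores, key=edge_scores.get, reverse=True)[:k]
    # Enrich the identified drivers with available background knowledge,
    # e.g. chemical bond types or known attack patterns in cybersecurity data.
    facts = [background_knowledge.lookup(e) for e in top_edges if e in background_knowledge]
    return prediction, facts   # symbolic facts serve as input for inductive logic learning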
Led by the manufacturing industry, virtual replicas of production systems, also known as digital twins (DTs), are gradually moving into all areas of industry. Their advantages include product optimization, simulation, improved monitoring, prediction of downtimes, and optimized maintenance, to name just a few. The engineering, procurement and construction (EPC) of process plants as mechatronic systems is characterized by a high degree of project-specific modification and interdisciplinary engineering effort with low reusability, in contrast to unit-production-driven areas such as automotive. This results in a high cost-benefit ratio for the creation of DTs over the life cycle of process plants, especially when suppliers are integrated into the value chain. The objective of this paper is to analyze the state of plant lifecycle management and data exchange, as well as the possibilities for optimized supplier integration during the planning and EPC of process plants, with regard to DT creation and usage. Three research questions (RQs) were used to narrow down a total of 356 identified publications to 54, which were then examined. The papers covered a variety of topics, including the combination of discipline-specific models, plant management approaches, and combinations of the two.
Information
Address: Germany
Website: https://www.siemens.com/