• Germany
Recent publications
This paper presents a new algorithm for detecting turn-to-turn faults in power transformers. Turn-to-turn faults between two parallel conductors are considered, since they are the most difficult to detect owing to their low short-circuit current level, especially when they are located close to the middle of the coil. Standard differential protection relays generally fail in such cases when only a small number of turns is shorted. Therefore, a new method is proposed here that combines a negative-sequence current integral criterion with unique differential quantities. The sensitivity and security of the presented method are analyzed and discussed. The performance of the proposed algorithm and of three other commonly used methods was tested with signals generated in MATLAB/Simulink. The results confirm that the proposed algorithm is very sensitive to turn-to-turn faults and remains sufficiently stable under external faults with CT saturation.
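The negative-sequence current at the heart of such a criterion can be obtained from the three phase-current phasors via the standard Fortescue decomposition. A minimal sketch (the paper's actual integral criterion and differential quantities are not reproduced here):

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # rotation operator a = e^(j*2*pi/3)

def negative_sequence(ia, ib, ic):
    """Negative-sequence component of three phase-current phasors:
    I2 = (Ia + a^2*Ib + a*Ic) / 3 (Fortescue decomposition).
    A healthy, balanced positive-sequence set gives I2 = 0; a growing
    |I2| is one indicator of winding asymmetry such as shorted turns."""
    return (ia + A * A * ib + A * ic) / 3

# Balanced positive-sequence set (Ib = a^2*Ia, Ic = a*Ia) -> I2 vanishes
i2_healthy = negative_sequence(1, A ** 2, A)   # -> ~0
```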
The paper presents a novel model order reduction method for mechanical problems in linear elasticity with nonlinear contact conditions. Recently, we proposed an efficient reduction scheme for the node-to-node formulation [Manvelyan et al., Comput Mech 68, 1283–1295 (2021)] that leads to Linear Complementarity Problems (LCPs). Here, we enhance the underlying contact problem to a node-to-segment formulation, which leads to quadratic inequalities as constraints. The adjoint system corresponds to a Nonlinear Complementarity Problem (NCP), which describes the Lagrange multiplier. The latter is solved by a Newton-type iteration based on an LCP solver in each time step. Since the maximal set of potential contact nodes is predefined, an additional substructuring by Craig-Bampton can be performed. This contact treatment turns out to be necessary and allows the Lagrange multipliers and the nodal displacements at contact to be excluded from the reduction. The numerical solutions of the reduced contact problem achieve high accuracy, and the dynamic contact behaviour of the full-order model carries over. Moreover, if the contact area is small compared to the overall structure, the reduction scheme performs very efficiently. The performance of the resulting reduction method is assessed on two 2D computational examples from linear elasticity.
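The LCP subproblems mentioned above, find z >= 0 with w = Mz + q >= 0 and z'w = 0, admit many solution methods. A minimal illustrative sketch using projected Gauss-Seidel (not the paper's Newton-type NCP iteration, which is not reproduced here):

```python
def solve_lcp_pgs(M, q, iters=200, tol=1e-10):
    """Projected Gauss-Seidel for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z . w = 0 (complementarity)."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        delta = 0.0
        for i in range(n):
            # residual r_i = (M z + q)_i with current iterate z
            r = q[i] + sum(M[i][j] * z[j] for j in range(n))
            z_new = max(0.0, z[i] - r / M[i][i])  # project onto z_i >= 0
            delta = max(delta, abs(z_new - z[i]))
            z[i] = z_new
        if delta < tol:
            break
    return z

# Example: M = 2*I, q = (-2, -4)  ->  z = (1, 2), so w = M z + q = 0
z = solve_lcp_pgs([[2.0, 0.0], [0.0, 2.0]], [-2.0, -4.0])
```

Projected Gauss-Seidel converges for the symmetric positive definite matrices that arise from elastic contact, though a pivoting or Newton-based solver is usually faster near the solution.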
Sequential recommendation plays a crucial role in many real-world applications. Due to its sequential nature, reinforcement learning has been employed to iteratively produce recommendations from an observed stream of user behavior. In this setting, a recommendation agent interacts with the environment (users) by sequentially recommending items (actions) to maximize the users' overall long-term cumulative reward. However, most reinforcement learning-based recommendation models focus only on extrinsic rewards derived from user feedback, which leads to sub-optimal policies when user-item interactions are sparse and fails to capture dynamic rewards based on the users' preferences. As a remedy, we propose a dynamic intrinsic reward signal integrated with a contrastive discriminator-augmented reinforcement learning framework. Concretely, our framework contains two modules: (1) a contrastive learning module that learns representations of item sequences; and (2) an intrinsic reward learning function that imitates the user's internal dynamics. Furthermore, we combine the static extrinsic reward and the dynamic intrinsic reward to train a sequential recommender system based on double Q-learning. We integrate our framework with five representative sequential recommendation models. Specifically, our framework augments these models with two output layers: a supervised layer that applies a cross-entropy loss for ranking, and a second layer for reinforcement learning. Experimental results on two real-world datasets demonstrate that the proposed framework outperforms several sequential recommendation baselines as well as intrinsic-reward exploration baselines.
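The combined-reward double Q-learning idea can be sketched in tabular form; the names and the scaling factor beta below are illustrative assumptions, not the paper's implementation:

```python
import random

def double_q_update(Q1, Q2, s, a, r_ext, r_int, s_next, beta=0.5,
                    alpha=0.1, gamma=0.9, rng=random):
    """One double Q-learning step on the combined reward
    r = r_ext + beta * r_int (static extrinsic + scaled dynamic intrinsic).
    Q1 and Q2 are nested dicts: state -> action -> value."""
    r = r_ext + beta * r_int
    if rng.random() < 0.5:
        # select greedy action with Q1, evaluate it with Q2 (and vice versa)
        a_star = max(Q1[s_next], key=Q1[s_next].get)
        target = r + gamma * Q2[s_next][a_star]
        Q1[s][a] += alpha * (target - Q1[s][a])
    else:
        a_star = max(Q2[s_next], key=Q2[s_next].get)
        target = r + gamma * Q1[s_next][a_star]
        Q2[s][a] += alpha * (target - Q2[s][a])
```

Decoupling action selection from action evaluation across the two tables is what mitigates the overestimation bias of plain Q-learning; in the deep setting the tables become the two output heads mentioned above.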
Changes in the levels of circulating proteins are associated with Alzheimer's disease (AD), but their pathogenic roles in AD remain unclear. Here, we identified soluble ST2 (sST2), a decoy receptor of interleukin-33–ST2 signaling, as a new disease-causing factor in AD. An increased circulating sST2 level is associated with more severe pathological changes in female individuals with AD. Genome-wide association analysis and CRISPR–Cas9 genome editing identified rs1921622, a genetic variant in an enhancer element of IL1RL1, which downregulates gene and protein levels of sST2. Mendelian randomization analysis using genetic variants, including rs1921622, demonstrated that decreased sST2 levels lower AD risk and related endophenotypes in females carrying the Apolipoprotein E (APOE)-ε4 genotype; the association is stronger in Chinese than in European-descent populations. Human and mouse transcriptome and immunohistochemical studies showed that rs1921622/sST2 regulates amyloid-beta (Aβ) pathology through the modulation of microglial activation and Aβ clearance. These findings demonstrate how the sST2 level is modulated by a genetic variation and plays a disease-causing role in females with AD. This study finds that sST2 is a disease-causing factor for Alzheimer's disease. Higher sST2 levels impair microglial Aβ clearance in APOE4+ female individuals. A genetic variant, rs1921622, is associated with a reduction in sST2 level and protects against AD in APOE4+ female individuals.
Engineering intelligent industrial systems is challenging due to high complexity and uncertainty with respect to domain dynamics and multiple interacting agents. If industrial systems act autonomously, their choices and results must remain within specified bounds. Reinforcement learning (RL) is a promising way to find solutions that outperform known or handcrafted heuristics. However, in industrial scenarios it is also crucial to prevent RL from inducing potentially undesired or even dangerous behavior. This paper considers specification alignment in industrial scenarios with multi-agent reinforcement learning (MARL). We propose to embed functional and non-functional requirements into the reward function, enabling the agents to learn to align with the specification. We evaluate our approach in a smart factory simulation representing an industrial lot-size-one production facility, where we train up to eight agents using DQN, VDN, and QMIX. Our results show that the proposed approach enables agents to satisfy a given set of requirements.
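Embedding requirements into the reward can be as simple as subtracting weighted penalties for violated non-functional requirements from the functional task reward. A minimal sketch with hypothetical requirement names (not the paper's actual reward design):

```python
def shaped_reward(task_reward, violations, weights):
    """Specification-aligned reward: the functional task reward minus
    weighted penalties for each violated non-functional requirement
    (requirement names such as "deadline" are illustrative only)."""
    penalty = sum(weights[req] for req, violated in violations.items() if violated)
    return task_reward - penalty

# Example: a missed deadline (weight 2.0) reduces the reward of 10.0 to 8.0
r = shaped_reward(10.0,
                  {"deadline": True, "energy": False},
                  {"deadline": 2.0, "energy": 1.0})   # -> 8.0
```

Choosing the weights encodes the relative priority of the requirements; a weight large enough to dominate the task reward effectively turns a soft preference into a hard constraint for the learned policy.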
Using data and Artificial Intelligence, it is possible to answer big questions such as how sustainable the planet is or what impact industry has on the climate. The Big Data Value Association (BDVA) believes that Data Sharing Spaces will be a key enabler of this vision. The BDVA community has created a unified perspective on the value of data sharing spaces across the pillars of data, governance, people, organization, and technology, with trust as a central foundation. This chapter details this BDVA perspective, explaining the five pillars needed to create value in data with trust as a central concept, together with the tools and mechanisms for strategic stakeholders to create data sharing spaces jointly. It elaborates the strategic challenges that need to be overcome and sets out our call to action for the community to make this a reality. The chapter also summarizes the initial progress on data platform development, data governance, and Trustworthy AI to make data sharing spaces a reality. Finally, it details an example of a data space in smart manufacturing.
Process analytical technology (PAT), together with sophisticated system integration, plays an important role in the efficient operation of chemical production plants. A practical distillation example provides insights into the technical implementation and the return on investment (ROI).
Background: The anteroposterior (ap) radiograph of the pelvis is decisive in the diagnosis of different pathologies of the hip joint. Technical advances have reduced the radiation dose of pelvic CT to levels comparable to radiographs. The purpose of this study was to validate whether standard radiographic parameters (lateral center edge angle, medial center edge angle, acetabular index, acetabular arc, extrusion index, crossover sign and posterior wall sign) can be accurately determined on radiograph-like projections reconstructed from the CT dataset pre- and postoperatively. Methods: A consecutive series of patients with symptomatic dysplasia of the hip and a full radiologic workup (radiographs and CT scans pre- and postoperatively) who underwent periacetabular osteotomy was included. Standard radiographic parameters were compared between radiographs and radiograph-like projections by two authors pre- and postoperatively. Results: A total of 16 hips (32 radiographs/32 radiograph-like projections) were included in the study. No significant difference was found between the radiographs and radiograph-like images for any parameter for either examiner. The ICC between radiographs and radiograph-like projections showed good to excellent reliability (0.78–0.99) for all investigated parameters pre- and postoperatively. Conclusion: Radiograph-like projections show results comparable to radiographs with regard to the investigated parameters (lateral center edge angle, medial center edge angle, acetabular index, acetabular arc, extrusion index, crossover sign and posterior wall sign). Thus, ultra-low-dose CT scans may reduce the need for conventional radiographs in pre- and postoperative analyses of 3-dimensional hip pathologies in the future, as the advantages increasingly outweigh the disadvantages.
The present work addresses the installation effects expected for the tonal noise of a pair of side-by-side contrarotating subsonic propellers mounted near a wing trailing edge. This generic configuration mimics future urban air-vehicle architectures. The installation effects refer to the additional sources of aerodynamic noise caused by blade-wing interaction and their scattering by the wing, compared to the case of isolated propellers. The paper aims to demonstrate the ability of analytical models to estimate these effects, which is of primary interest for the preliminary design steps of an installed propulsion system. Furthermore, the analytical parametric study might help determine promising configurations for future aircraft. In the analytical formulation, dipole-like noise sources of the propellers are considered, assuming rigid blades. The sound radiation from the propellers is formulated in three dimensions for characteristic spinning modes of the tonal noise, while the half-plane Green's function accounts for the sound scattering by the wing. A finite-chord correction is applied to the half-plane formulation and validated by numerical simulations. The results show that the installation effect is crucial for analyzing tonal propeller noise at low frequencies. In particular, sound radiation is significantly increased when the blade tips operate in the close vicinity of the trailing edge.
Industrial Control Systems (ICSs) rely on insecure protocols and devices to monitor and operate critical infrastructure. Prior work has demonstrated that powerful attackers with detailed system knowledge can manipulate exchanged sensor data to degrade process performance, even leading to full plant shutdowns. Identifying such attacks requires iterating over all possible sensor values and running detailed system simulations or analyses to find optimal attacks. That setup allows adversaries to identify attacks that are most impactful when first applied to the system, before the system operators become aware of the manipulations. In this work, we investigate whether constrained attackers without detailed system knowledge and simulators can identify comparable attacks. In particular, the attacker requires only abstract knowledge of the general information flow in the plant, rather than precise algorithms, operating parameters, process models, or simulators. We propose an approach that enables single-shot attacks, i.e., near-optimal attacks that reliably shut down a system on the first try. The approach is applied and validated on two use cases and demonstrated to achieve results comparable to prior work that relied on detailed system information and simulations.
This paper introduces the concept of spatial and media-based modulated (SMBM) orthogonal frequency division multiplexing (OFDM) as a potential candidate for highly mobile next-generation beyond-5G (B5G) wireless communications. The proposed SMBM-OFDM technique utilizes not only the transmit antenna and channel state indices but also the OFDM subcarriers to improve system performance under high mobility. In addition, this study addresses the challenging problem of fast time-varying channel estimation in MBM-based systems using the linear minimum mean square error (LMMSE) approach, chosen for its optimality, together with basis expansion modeling to investigate the achievable system performance. The minimum lower bound on the channel estimation error (the Bayesian Cramer–Rao bound) is derived theoretically and shown to be attainable by the considered LMMSE estimator. Moreover, symbol detection performance is provided for different modulation types and higher mobile velocities. Simulation results demonstrate that the SMBM-OFDM system under high mobility provides around 12-dB performance gains in both channel estimation and symbol detection error compared to conventional spatial modulation (SM)-OFDM systems without MBM. The presented framework is important because it addresses the high-mobility support of SMBM-OFDM systems for B5G wireless communications in terms of achievable channel estimation and data detection performance.
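For a single pilot observation y = p·h + n, the LMMSE channel estimate has a simple closed form. A scalar sketch for illustration (the paper's estimator operates on basis-expansion coefficients of the time-varying channel, which is beyond this snippet):

```python
def lmmse_scalar(y, p, var_h, var_n):
    """LMMSE estimate of a scalar channel h from y = p*h + n,
    with h ~ CN(0, var_h) and noise n ~ CN(0, var_n):
        h_hat = var_h * conj(p) * y / (var_h * |p|^2 + var_n).
    As var_n grows, the estimate shrinks toward the prior mean 0;
    as var_n -> 0, it approaches the zero-forcing solution y / p."""
    return var_h * p.conjugate() * y / (var_h * abs(p) ** 2 + var_n)

# Noise-free sanity check with a unit pilot: recovers h = y / p
h_hat = lmmse_scalar(y=2 + 2j, p=1 + 0j, var_h=1.0, var_n=0.0)  # -> (2+2j)
```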
Railway Traffic Management Systems (TMSs) handle data from multiple railway subsystems, including rail business services (such as interlocking, RBC, and maintenance services) and external services (such as passenger information systems and weather forecasts). In turn, the data from these subsystems are described in several models or ontologies contributed by various organizations or projects, which are in a process of convergence or federation. The challenge of the Shift2Rail OPTIMA project, which is implementing a communication platform for virtual testing of new railway TMS applications, is to allow the exchange of data between different services or users and to support new traffic management applications, enabling access to a large number of disparate data sources. This paper describes the core activities of the OPTIMA project related to the formulation and standardization of a common data model. A new Common Data Model is developed based on standardized data structures to enable the seamless exchange of large amounts of data between heterogeneous sources and consumers of data. This contributes to building the next generation of more effective and efficient railway TMSs, capable of offering precise, real-time traffic information to railway operators and other end users.
Over the many years in which Günter Hotz served as an academic teacher at Saarland University, he guided a total of 54 "children" to their doctorates, and some of them on to habilitation. They are listed below with the topics of their dissertations and, where applicable, habilitations, before the following chapters present an (academic) curriculum vitae for each doctoral child, in some cases supplemented by a more or less extensive further contribution.
Constructing surrogate models for uncertainty quantification (UQ) on complex partial differential equations (PDEs) with inherently high-dimensional, O(10^n) for n ≥ 2, stochastic inputs (e.g., forcing terms, boundary conditions, initial conditions) poses tremendous challenges. The “curse of dimensionality” can be addressed with suitable unsupervised learning techniques used as a pre-processing tool to encode the inputs onto lower-dimensional subspaces while retaining their structural information and meaningful properties. In this work, we review and investigate thirteen dimension reduction methods, including linear and nonlinear, spectral, blind source separation, and convex and non-convex methods, and utilize the resulting embeddings to construct a mapping to quantities of interest via polynomial chaos expansions (PCE). We refer to the general proposed approach as manifold PCE (m-PCE), where “manifold” corresponds to the latent space resulting from any of the studied dimension reduction methods. To investigate the capabilities and limitations of these methods, we conduct numerical tests for three physics-based systems (treated as black boxes) with high-dimensional stochastic inputs of varying complexity, modeled as both Gaussian and non-Gaussian random fields, to investigate the effect of the intrinsic dimensionality of the input data. We demonstrate both the advantages and the limitations of the unsupervised learning methods and conclude that a suitable m-PCE model provides a cost-effective approach compared to alternative algorithms proposed in the literature, including recently proposed, expensive deep neural network-based surrogates, and can be readily applied for high-dimensional UQ in stochastic PDEs.
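The second, supervised stage of such a pipeline fits PCE coefficients on the latent variables. A minimal sketch for a one-dimensional latent z and a degree-2 probabilists' Hermite basis, with the least-squares system solved by hand; the encoder producing z (any of the thirteen reduction methods) is assumed given:

```python
def hermite_design(z_samples, degree=2):
    """Probabilists' Hermite basis up to degree 2: He0=1, He1=z, He2=z^2-1."""
    return [[1.0, z, z * z - 1.0][: degree + 1] for z in z_samples]

def solve_normal_equations(A, y):
    """Least-squares fit via the normal equations A^T A c = A^T y,
    solved by Gaussian elimination (fine for a handful of PCE terms)."""
    n = len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
           for i in range(n)]
    Aty = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    for col in range(n):                      # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, n):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, n):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    coeffs = [0.0] * n                        # back substitution
    for i in reversed(range(n)):
        coeffs[i] = (Aty[i] - sum(AtA[i][j] * coeffs[j]
                                  for j in range(i + 1, n))) / AtA[i][i]
    return coeffs

# Recover the PCE coefficients of f(z) = 1 + 2*He1(z) + 3*He2(z)
zs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [1.0 + 2.0 * z + 3.0 * (z * z - 1.0) for z in zs]
coeffs = solve_normal_equations(hermite_design(zs), ys)   # -> ~[1.0, 2.0, 3.0]
```

In practice the latent space is multi-dimensional and the basis is a tensorized (total-degree) Hermite family, but the regression step has the same structure.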
This paper will detail the process of developing a 36/60 (72.5) kV high-voltage wet-mate and dry-mate connector system, designed to meet the requirements of floating offshore wind turbines and utilizing new and proven subsea technology developed for critical-reliability subsea applications. The paper will first document the state of the art for subsea high-voltage wet- and dry-mate connectors, including existing solutions within the oil and gas industry, before identifying the requirements of the floating offshore wind industry. A technology readiness assessment will then be performed to define a technology qualification program for a new solution meeting those requirements, and the program will be executed to bring the solution to technology readiness level (TRL) 4, according to API 17Q. The paper will document the engineering and qualification in accordance with the requirements of the floating offshore wind industry as well as established and emerging international and regional standards, such as IEC and IEEE standards. As the floating offshore wind industry matures, the requirements will very likely continue to develop, and this paper will discuss likely long-term changes in the requirements for high-voltage connector systems for floating offshore wind. Within the floating offshore wind industry, many different solutions have been proposed for the floating substructure.
The paper will explore the potential advantages and disadvantages that the use of subsea wet- and dry-mate connectors might bring to floating offshore wind, considering the full lifecycle of the floating wind farm with a particular focus on risk and cost during installation, reliability, and maintenance.
In the next 5-10 years, normally unmanned production platforms (and eventually autonomous platforms) with a single yearly maintenance intervention are expected to become the norm. This paper describes the basis for achieving this objective by integrating digital twins for the Process and Asset Performance/Maintenance domains. The Process Twin described has been deployed in real-world field applications and provides the level of reliability and accuracy needed for closed-loop production optimization in real time (i.e., optimized setpoints are pushed directly to the DCS or SCADA without manual verification or validation). The process model covers the entire production value chain, including reservoir, wells, risers, process facilities, sale of product, etc. Multiple constraints can be entered into the optimization engine, giving operators the ability to define a bespoke landscape for their optimization based on several key performance indicators (KPIs) related to production, process, economic, and environmental requirements. The Asset Performance Twin complements the Process Twin by continuously generating the Remaining Useful Life (RUL) of equipment. If the RUL does not match the timing of the next planned maintenance campaign, an alternative operational scenario can be calculated to extend the RUL. The Process Twin then optimizes production around this new constraint, with the ultimate objective of minimizing unplanned downtime and the associated manual interventions.
In recent years, Asset Performance Management (APM) solutions have garnered increased attention from oil and gas process facility operators employing digitalization strategies. However, despite the growing interest, there is still confusion among industry professionals about what APM is and how it can be leveraged to reduce operating costs (OPEX), improve asset reliability, and contribute to de-manning objectives. This paper aims to answer these questions by defining the framework for an effective APM program and outlining considerations for implementation. Multiple offshore production facility use cases are presented which demonstrate how an APM approach can be applied to enable reliability-centered maintenance (RCM), while ensuring that equipment and systems perform their expected function within a specific operating and business context.
Open (i.e., simple) cycle gas turbines (GTs) have been the preferred means of power generation on floating, production, storage, and offloading (FPSO) vessels over the past two decades. GT-based packages offer several advantages over other widely used power solutions such as gas engines and diesel gensets, including high power density, increased availability, and reduced greenhouse gas (GHG) emissions. In recent years, however, with many offshore operators establishing targets for environmental footprint reductions, new pathways for decarbonization are being evaluated. Combined cycle (i.e., the addition of a steam bottoming cycle to an open cycle GT) is a concept that has been widely employed in onshore industrial applications and is now garnering more interest in the offshore segment. This paper discusses the benefits combined cycle power plants can provide compared to open cycle GTs and outlines installation and operability considerations for both greenfield and brownfield facilities. In certain applications, the combined cycle power plant design may allow compression duties to be met with electric motors instead of gas turbines. Aside from operational advantages, such as increased availability and efficiency and better turndown capabilities, this provides the added benefit of centralizing and optimizing emissions on the facility. It also enables more effective emissions monitoring and control. The paper also discusses how digital monitoring and control systems can be applied to ensure that gas turbines run at optimal setpoints relative to ambient conditions and current power demand. In this way, all GTs in the power plant can be operated collectively in the most fuel-efficient manner, contributing to further emissions reductions.
4,247 members
Dirk Ertel
  • Advanced Therapies
Rainer Kuth
  • H WS SP IBD1
Michael Gepp
  • Digital Industries