• Stuttgart, Baden-Württemberg, Germany
Recent publications
In the big data era, vast amounts of data are generated and collected that hold the potential to yield new insights, e.g., for enhancing business models. To leverage this potential through, e.g., data science and analytics projects, the data must be made available. In this context, data marketplaces are used as platforms to facilitate the exchange, and thus the provisioning, of data and data-related services. Data marketplaces are mainly studied for the exchange of data between organizations, i.e., as external data marketplaces. Yet, the data collected within a company also has the potential to provide valuable insights for this same company, for instance to optimize business processes. Studies indicate, however, that a significant amount of data within companies remains unused. In this sense, it is proposed to employ an Enterprise Data Marketplace, a platform to democratize data within a company among its employees. The specifics of the Enterprise Data Marketplace, how it can be implemented, and how it makes data available across a variety of systems such as data lakes have not been investigated in the literature so far. Therefore, we present the characteristics and requirements of this kind of marketplace. We also distinguish it from other tools such as data catalogs, provide a platform architecture, and highlight how it integrates with the company’s system landscape. The presented concepts are demonstrated through an Enterprise Data Marketplace prototype, and an experiment reveals that this marketplace significantly improves data consumer workflows in terms of efficiency and complexity. This paper is based on several interdisciplinary works combining comprehensive research with practical experience from an industrial perspective. We therefore present the Enterprise Data Marketplace as a distinct marketplace type and provide the basis for establishing it within a company.
Micromagnets have wide applications in MEMS and micromotors, but current microfabrication processes for permanent magnets still face miniaturization limitations. In this work, a novel damage-free ultrashort pulsed laser machining process to manufacture complex shapes of Sm2Co17 micromagnets is proposed. The laser process permits further miniaturization, yielding very small micromagnets. This is achieved by submerging the micromagnets in a refrigerant fluid during machining, which is especially beneficial for magnets, as these materials are very sensitive to high temperatures. Thanks to the fluid, the heat effect of laser cutting on the hard magnetic material is drastically reduced. A detailed description of the manufacturing process is presented, together with the results of several machining processes, such as milling and cutting, and the magnetic characterization of the resulting micromagnets. Complex segment shapes of 65-μm thickness, made in high-quality Sm2Co17 material, are achieved with good accuracy. It is demonstrated that no permanent degradation of the magnetic properties appears after laser machining.
Nutrition and hydration are fundamental aspects of healthcare, especially in the care of older people, particularly those in hospitals or long-term care facilities. Worldwide, nurses are ‘best-placed’ coordinators of interdisciplinary nutritional management and care processes. Even so, it is essential that nurses collaborate with other healthcare specialists as an interdisciplinary team to provide high-quality care that reflects patients’ needs for assessment, intervention, and health promotion. When an interdisciplinary team works collaboratively, care is more successful, improves patient outcomes, and reduces the risk of in-hospital and long-term mortality. The care process begins with screening and monitoring of the nutritional status and fluid intake of all older people within 24 h of admission. In the case of positive screening, a comprehensive assessment, involving other team members, should be undertaken to understand the underlying problem. Appropriate food and appealing meals, snacks, and drinks should be available and offered with recommended amounts of energy, protein, vitamins, minerals (particularly calcium), and water. This should be complemented with supplementary drinks if intake is not adequate. The prescription of vitamin D and calcium should be discussed. Patient-centred and evidence-based information should be provided, and interventions in the case of end-of-life care should be appropriately discussed. Educating, informing, and involving patients and families increases their level of health literacy. Malnutrition and/or dehydration management should be included in the discharge plan. The aim of this chapter is to increase awareness of nurses’ responsibility, within a multidisciplinary team, for the assessment of and intervention in nutrition and hydration, examine the issues pertaining to nutrition and fluid balance in older people, and outline the nature, assessment, and interventions relating to malnutrition and dehydration.
In embedded computing domains, including the automotive industry, complex functionalities are split across multiple tasks that form task chains. These tasks are functionally dependent and communicate partial computations through shared memory slots based on the Logical Execution Time (LET) paradigm. This paper introduces a model that captures the behavior of a producer-consumer pair of tasks in a chain, characterizing the timing of reading and writing events. Using ring algebra, the combined behavior of the pair can be modeled as a single periodic task. The paper also presents a lightweight mechanism to eliminate jitter in an entire chain of any size, resulting in a single periodic LET task with zero jitter. All presented methods are available in a public repository.
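As a minimal illustration of the LET timing semantics described above (a sketch of the general paradigm, not the paper’s ring-algebra model), the following Python snippet enumerates the read and write events of a hypothetical producer-consumer pair over one hyperperiod; all task names and periods are assumed for illustration:

```python
from math import lcm

def let_events(period, horizon):
    """Event times for a LET task: inputs are read at each release k*T and
    outputs are published at the end of the interval, (k+1)*T."""
    return [(k * period, (k + 1) * period) for k in range(horizon // period)]

# Hypothetical producer (period 4 ms) and consumer (period 6 ms) sharing a slot.
T_p, T_c = 4, 6
H = lcm(T_p, T_c)            # hyperperiod: the pair's joint pattern repeats every H ms
producer = let_events(T_p, H)
consumer = let_events(T_c, H)

# Under LET, a consumer instance released at time r deterministically reads the
# latest value the producer published at or before r (communication is jitter-free).
for r, _ in consumer:
    published = [w for _, w in producer if w <= r]
    src = published[-1] if published else "initial value"
    print(f"consumer released at t={r} ms reads producer output written at t={src}")
```

Because reads and writes happen at fixed interval boundaries rather than at actual execution times, the combined pair behaves like a single periodic task with period `H`.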
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data. However, their re-identification accuracy still falls short compared to their supervised counterparts. We hypothesize that this drawback results from formulating self-supervised objectives that are limited to single frames or frame pairs. Such formulations do not capture sufficient visual appearance variations to facilitate learning consistent re-identification features for autonomous driving when the frame rate is low, or object dynamics are high. In this work, we propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames by enforcing consistent association scores across short and long timescales. We perform extensive evaluations that demonstrate that re-identification features trained from longer sequences significantly reduce ID switches on standard autonomous driving datasets compared to existing self-supervised learning methods, which are limited to training on frame pairs. Using our proposed SubCo loss function, we set the new state of the art among self-supervised methods and even perform on par with fully supervised learning methods.
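The idea of enforcing consistent association scores across timescales can be sketched as follows; this is a simplified NumPy illustration under assumed names (`assoc`, `multi_frame_consistency`), not the actual SubCo loss from the paper:

```python
import numpy as np

def assoc(feat_a, feat_b, tau=0.1):
    """Soft association matrix between two frames: each row is a softmax over
    similarities from one object in frame a to all objects in frame b."""
    sim = feat_a @ feat_b.T                    # pairwise feature similarities
    e = np.exp(sim / tau)
    return e / e.sum(axis=1, keepdims=True)

def multi_frame_consistency(f0, f1, f2):
    """Penalize disagreement between the direct long-range association
    (frame 0 -> frame 2) and the chained short-range one (0 -> 1 -> 2)."""
    direct = assoc(f0, f2)
    chained = assoc(f0, f1) @ assoc(f1, f2)
    return float(np.mean((direct - chained) ** 2))
```

When re-identification features are temporally consistent, the chained short-range associations compose into the direct long-range one and the penalty vanishes, which is the property the multi-frame objective rewards.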
Vehicle-to-Vehicle (V2V) communication is an essential component of the Intelligent Transportation System (ITS); it enables real-time traffic data sharing and collective awareness among vehicles and promotes a safer, more effective, and environmentally friendly road traffic environment. One of the main prerequisites for a robust and reliable V2V application that minimizes crashes, eases traffic, and reduces congestion is an error-free prediction of the underlying communication performance. In this paper, we delve into predictive Quality of Service (pQoS) for V2V communication, specifically for IEEE 802.11p networks. We establish the correlations between the main Key Performance Indicators (KPIs) affecting V2V communication quality and provide recommendations on how these correlations can be used to develop a robust Quality of Service (QoS) prediction algorithm for forecasting and optimizing V2V performance.
Citation: Leite, T.M.; Freitas, C.; Magalhães, R.; Ferreira da Silva, A.; Alves, J.R.; Viana, J.C.; Delgado, I. Decoupling of Temperature and Strain Effects on Optical Fiber-Based Measurements of Thermomechanical Loaded Printed Circuit Board Assemblies. Sensors 2023, 23, 8565. Abstract: This study investigated the use of distributed optical fiber sensing to measure temperature and strain during thermomechanical processes in printed circuit board (PCB) manufacturing. An optical fiber (OF) was bonded to a PCB for simultaneous measurement of temperature and strain. Optical frequency-domain reflectometry was used to interrogate the fiber optic sensor. As the optical fiber is sensitive to both temperature and strain, a demodulation technique is required to separate the two effects. Several demodulation techniques were compared to find the best one, highlighting their main limitations. The importance of good estimations of the temperature sensitivity coefficient of the OF and the coefficient of thermal expansion of the PCB was highlighted for accurate results. Furthermore, the temperature sensitivity of the bonded OF should not be neglected for accurate estimations of strains. The two-sensor combination model provided the best results, with a 2.3% error in temperature values and the expected strain values. Based on this decoupling model, a methodology for measuring strain and temperature variations in PCB thermomechanical processes using a single and simple OF was developed and tested, and then applied to a trial in an industrial environment using a dynamic oven with characteristics similar to those of a reflow oven. This approach allows the measurement of the temperature profile on the PCB during oven travel and its strain state (warpage).
In production, quality monitoring is essential to detect defective elements. State-of-the-art approaches are single-sensor systems (SSS) and multi-sensor systems (MSS). Yet, these approaches might not be suitable: nowadays, one component may comprise several hundred meters of weld seam, necessitating high-speed welding to produce enough components. To detect as many defects as possible in time, fast yet precise monitoring is required. However, the information captured by an SSS might not be sufficient, and MSS suffer from long inference times. Therefore, we present a confidence-based cascaded system (CS). The key idea of the CS is that not all data are analyzed to assess the weld quality, but only selected data. As evidenced by our results, all CS variants outperform SSS in terms of accuracy and inference time. Further, compared to MSS, the CS has hardware advantages.
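The confidence-based cascade idea can be sketched in a few lines of Python; the stage functions, labels, and threshold below are purely illustrative stand-ins, not the paper’s trained models:

```python
def cascaded_predict(sample, stages, conf_thresh=0.9):
    """Confidence-based cascade: a cheap single-sensor model decides first; the
    sample is escalated to the next (more expensive) stage only when the
    prediction confidence falls below the threshold."""
    for stage in stages[:-1]:
        label, confidence = stage(sample)
        if confidence >= conf_thresh:
            return label            # confident -> stop early, save inference time
    return stages[-1](sample)[0]    # final (multi-sensor) stage always decides

# Dummy stages returning (label, confidence) pairs in place of trained models.
fast_stage = lambda s: ("ok", 0.95) if s < 10 else ("defect", 0.6)
slow_stage = lambda s: ("defect", 0.99)
print(cascaded_predict(3, [fast_stage, slow_stage]))    # fast stage is confident
print(cascaded_predict(42, [fast_stage, slow_stage]))   # escalated to slow stage
```

Most samples exit at the cheap first stage, which is why such a cascade can approach MSS accuracy while keeping inference time close to an SSS.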
In this work, the development of a new particle counting sensor based on continuous-wave laser-induced incandescence is described. We demonstrate the sensor, which makes use of a focussed laser beam emitted by a compact laser diode, as an alternative to established laboratory techniques for counting soot particles. Simulations help in understanding the cause–effect relationships based on known theory, with the implementation of specific laser beam characteristics that influence the absorption process. The main goal of this work was to demonstrate the sensor’s ability to determine the soot particle number concentration of a test aerosol. This has been confirmed through a comparison with established laboratory instruments, such as a scanning mobility particle sizer and a condensation particle counter. The observed size dependency of the laser-induced incandescence sensor’s detection efficiency was successfully corrected with prior knowledge of the investigated particle size distribution. Furthermore, the sensor’s detection limit with respect to the particle diameter was determined to be well below 100 nm, with a dependency on the applied laser power.
Introduction The clinical assessment of mobility, and walking specifically, is still mainly based on functional tests that lack ecological validity. Thanks to inertial measurement units (IMUs), gait analysis is shifting to unsupervised monitoring in naturalistic and unconstrained settings. However, the extraction of clinically relevant gait parameters from IMU data often depends on heuristics-based algorithms that rely on empirically determined thresholds. These were mainly validated on small cohorts in supervised settings. Methods Here, a deep learning (DL) algorithm was developed and validated for gait event detection in a heterogeneous population of different mobility-limiting disease cohorts and a cohort of healthy adults. Participants wore pressure insoles and IMUs on both feet for 2.5 h in their habitual environment. The raw accelerometer and gyroscope data from both feet were used as input to a deep convolutional neural network, while reference timings for gait events were based on the combined IMU and pressure insole data. Results and discussion The results showed high detection performance for initial contacts (ICs) (recall: 98%, precision: 96%) and final contacts (FCs) (recall: 99%, precision: 94%) and a maximum median time error of −0.02 s for ICs and 0.03 s for FCs. Subsequently derived temporal gait parameters were in good agreement with the pressure insole-based reference, with a maximum mean difference of 0.07, −0.07, and <0.01 s for stance, swing, and stride time, respectively. Thus, the DL algorithm is considered successful in detecting gait events in ecologically valid environments across different mobility-limiting diseases.
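To make the event-detection step concrete, here is a minimal sketch of how per-sample event probabilities (e.g. a network’s IC output trace) could be converted into timestamps; the function name, sampling rate, and thresholds are assumptions for illustration, not the paper’s actual post-processing:

```python
import numpy as np

def detect_events(prob, fs=100.0, thresh=0.5, min_gap=0.3):
    """Turn a per-sample event-probability trace into event timestamps:
    keep local maxima above `thresh` that are at least `min_gap` seconds apart
    (two true gait events cannot occur arbitrarily close together)."""
    gap = int(min_gap * fs)
    peaks = [i for i in range(1, len(prob) - 1)
             if prob[i] >= thresh and prob[i] >= prob[i - 1] and prob[i] > prob[i + 1]]
    events, last = [], -gap
    for i in peaks:
        if i - last >= gap:
            events.append(i / fs)   # convert sample index to seconds
            last = i
    return events

trace = np.zeros(200)
trace[50], trace[120] = 0.9, 0.8    # two synthetic initial-contact peaks
print(detect_events(trace))         # -> [0.5, 1.2]  (seconds)
```

The resulting IC/FC timestamps are then paired to derive stance, swing, and stride times as described above.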
Advancements in AI have led to the emergence of in-memory-computing architectures as a promising solution to the associated computing and memory challenges. This study introduces a novel in-memory-computing (IMC) crossbar macro utilizing a multi-level ferroelectric field-effect transistor (FeFET) cell for multi-bit multiply-and-accumulate (MAC) operations. The proposed 1FeFET-1R cell design stores multi-bit information while minimizing the effect of device variability on accuracy. Experimental validation was performed using 28 nm HKMG technology-based FeFET devices. Unlike traditional resistive memory-based analog computing, our approach leverages the electrical characteristics of the stored data within the memory cell to derive MAC results encoded in activation time and accumulated current. Remarkably, our design achieves 96.6% accuracy for handwriting recognition and 91.5% accuracy for image classification without extra training. Furthermore, it demonstrates exceptional performance, achieving 885.4 TOPS/W, nearly double that of existing designs. This study represents the first successful implementation of an in-memory macro using a multi-state FeFET cell for complete MAC operations, preserving crossbar density without additional structural overhead.
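For intuition, the snippet below sketches the generic effect of multi-level (here four-state, i.e. 2-bit) weight storage on a crossbar MAC in NumPy; it is a hedged illustration of multi-bit IMC in general, not of the paper’s FeFET time-encoding scheme:

```python
import numpy as np

def quantize(w, levels=4):
    """Snap each weight to one of `levels` evenly spaced states, mimicking a
    multi-level memory cell (4 levels = 2 bits stored per cell)."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return np.round((w - lo) / step) * step + lo

rng = np.random.default_rng(1)
W = rng.uniform(-1.0, 1.0, (8, 4))   # 8 input rows x 4 crossbar columns
x = rng.integers(0, 2, 8)            # binary input activations on the rows
y_ideal = x @ W                      # exact per-column MAC result
y_imc = x @ quantize(W)              # MAC with 2-bit weight storage
```

The gap between `y_ideal` and `y_imc` is the quantization cost of multi-level storage, which the cell design aims to keep small enough that inference accuracy is preserved without retraining.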
Objective This study aims to evaluate the prognostic value of stress cardiac magnetic resonance (CMR) without inducible ischemia in a real-world cohort of patients with known severe coronary artery stenosis. Background The prognosis of patients with severe coronary artery stenosis and without inducible ischemia using stress CMR remains uncertain, even though its identification of functionally significant coronary artery disease (CAD) is excellent. Materials and methods Patients without inducible ischemia and known CAD who underwent stress CMR between February 2015 and December 2016 were included in this retrospective study. These patients were divided into two groups: group 1 with stenosis of 50%–75% and group 2 with stenosis of >75%. The primary endpoint was defined as the occurrence of a major adverse cardiovascular event (MACE) [cardiac death, non-fatal myocardial infarction (MI), percutaneous coronary intervention (PCI), or coronary artery bypass grafting (CABG)]. Results Real-world data collected from 169 patients with a median age of 69 (60–75) years were included. The median follow-up was 5.5 (IQR 4.1–6.6) years. Events occurred after a mean time of 3.0 ± 2.2 years in group 1 and 3.7 ± 2.0 years in group 2 (p = 0.35). Sixteen (18.8%) patients in group 1 and 23 (27.4%) patients in group 2 suffered from MACE without a significant difference between the two groups (p = 0.33). In group 2, one cardiac death (1.2%), seven non-fatal MI (8.3%), 15 PCI (17.9%), and one CABG (1.2%) occurred. Conclusion The findings of this pilot study suggest that long-term outcomes in a real-world patient cohort with known severe and moderate coronary artery stenosis but without inducible ischemia were similar. Stress CMR may provide valuable risk stratification in patients with angiographically significant but hemodynamically non-obstructive coronary lesions.
As highlighted in several studies, the thermal expansion of the polymer matrix plays an important role in the thermo-electrical behavior of conductive composites. Although it is known that the crosslink density of polymers affects their thermal expansion, no studies are known on whether different crosslink densities can affect the thermo-electrical behavior of electrically conductive composites. In this study, silicone/carbon black composites with different crosslink densities were investigated with respect to their self-heating properties under applied voltages and their thermo-electrical behaviors, such as positive and negative temperature coefficients. Specific resistances ρ of about 55 ± 4.5 Ω·cm were achieved, and uniquely high negative as well as low temperature coefficients were demonstrated, which make these materials interesting for sensor applications. Regarding their self-heating behavior, average surface temperatures of up to 147 °C were measured, which further makes these materials very interesting as flexible heating elements. Finally, it could be shown that the crosslink density of addition-curing silicones is adjustable over a wide range and that it can be further significantly changed by the addition of carbon black and platinum content.
Droplets that spontaneously penetrate a gap between two hydrophobic surfaces without any external stimulus seem counterintuitive. However, in this work we show that it can be energetically favorable for a droplet to penetrate a gap formed by two hydrophobic, or in some cases even superhydrophobic, surfaces. For this purpose, we derived an analytical equation for the change in Helmholtz free energy of a droplet penetrating a hydrophobic gap. The derived equation depends solely on the gap width, the droplet volume, and the contact angle on the gap walls, and predicts whether a droplet penetrates a hydrophobic gap or not. Additionally, numerical simulations were conducted to provide insights into the gradual change in Helmholtz free energy during penetration and to validate the analytical approach. A series of experiments with a hydrophobic gap having an advancing contact angle of 115°, a droplet volume of about 10 μL, and different gap widths confirmed the theoretical predictions. Limits and possible deviations between the analytical solution, the simulation, and the experiments are presented and discussed.
S-N curve parameter determination is a time- and cost-intensive procedure. A standardized method for simultaneously determining all S-N curve parameters with minimum testing effort is still missing. The Bayesian optimal experimental design (BOED) approach can reduce testing effort and accelerate uncertainty reduction during fatigue testing for S-N curve parameters. The concept is applicable to all S-N curve models and is illustrated here using a bilinear S-N curve model as an example. We demonstrate the fatigue testing workflow for the bilinear S-N curve in detail while discussing the steps and challenges involved in generalizing to other S-N curve models. Applying BOED to the bilinear S-N curve model, small errors and uncertainties for all S-N curve parameters are obtained after only 10 experiments for data scatter values below 1.1. In this case, the relative error in fatigue limit estimation was less than 1% after five tests. For S-N data scatter above 1.2, 17 tests were required for a robust analysis. The BOED methodology should be applied to other S-N curve models in the future. The high computational effort and the approximation of the posterior distribution with a normal distribution are the limitations of the presented BOED approach.
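The core BOED step, choosing the next stress level by expected information gain (EIG), can be sketched with a nested Monte Carlo estimator. The sketch below uses a toy single-segment (linear) S-N model log N = a − b·log S rather than the paper’s bilinear model, and all priors, candidate levels, and function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(y, a, b, sigma, log_s):
    """Gaussian log-likelihood of an observed log-life y under log N = a - b*log S."""
    mu = a - b * log_s
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (y - mu) ** 2 / sigma**2

# Prior samples over the (hypothetical) linear S-N parameters.
M = 2000
a = rng.normal(12.0, 1.0, M)        # intercept prior
b = rng.normal(3.0, 0.5, M)         # slope prior
sigma = np.full(M, 0.1)             # data scatter (fixed here for simplicity)

def expected_information_gain(log_s):
    """Nested Monte Carlo EIG estimate of testing at stress level S:
    mean of log p(y|theta) - log p(y) over simulated outcomes y."""
    y = a - b * log_s + sigma * rng.standard_normal(M)     # simulated test outcomes
    ll = log_lik(y[:, None], a[None, :], b[None, :], sigma[None, :], log_s)
    marginal = np.log(np.mean(np.exp(ll), axis=1))         # log p(y), averaged over prior
    conditional = np.diag(ll)                              # log p(y_m | theta_m)
    return float(np.mean(conditional - marginal))

candidates = np.log([200.0, 300.0, 400.0])   # candidate stress amplitudes (MPa)
best = candidates[np.argmax([expected_information_gain(s) for s in candidates])]
```

In an actual campaign, this selection step would alternate with a posterior update after each test, which is where the high computational effort noted above arises.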
3,067 members
Amit Kale
  • Department of Corporate Research
Jens Baringhaus
  • Department of Corporate Research
Philippe Leick
  • Division of Gasoline Systems
Michael Pfeiffer
  • Bosch Center for Artificial Intelligence - Research
Lavdim Halilaj
  • Department of Corporate Research
Robert-Bosch GmbH, 70839, Stuttgart, Baden-Württemberg, Germany