Bosch
  • Stuttgart, Baden-Württemberg, Germany
Recent publications
Autonomous driving is a key vehicle capability for future mobility solutions and relies on the reliability of its high-performance vehicle computer. As for any electronics for automotive applications, thermal management is a crucial point. A widely employed approach for improving heat removal is the use of heat spreaders in the form of a package lid, especially for flip-chip applications. Using a lid in flip-chip ball grid array (BGA) packages improves heat removal through the lid's heat-spreading capability and reduces package warpage, since the lid stiffens the package. However, adding a lid also increases stresses at solder interconnects. When selecting a material for the lid, the most intuitive criterion is the material's thermal conductivity. Sometimes overlooked, the coefficient of thermal expansion (CTE) of the lid also plays an important role for package warpage and for the reliability of thermal interface materials (TIM) as well as solder interconnects, such as solder bumps. This work focuses on the influence of different lid materials at package level for flip-chip applications. The influence of three alternative materials is investigated and compared to the performance of a standard copper lid. To quantify the effects, a validated finite element model of a flip-chip package is presented, and the thermo-mechanical performance of the studied lid materials is evaluated based on package warpage, peel strains and plastic strain energy density at the bumps, as well as peel and shear strains at the TIM.
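A minimal sketch of why the lid CTE matters: a first-order estimate of the TIM shear strain driven by the differential in-plane expansion between lid and die. All material values, the temperature swing, and the geometry below are illustrative assumptions, not the paper's data.

```python
# First-order CTE-mismatch estimate for a TIM layer between die and lid:
# gamma ~ (alpha_lid - alpha_die) * dT * L / t_bondline.
def tim_shear_strain(cte_lid_ppm, cte_die_ppm, delta_t_k, half_length_mm, bondline_mm):
    d_alpha = (cte_lid_ppm - cte_die_ppm) * 1e-6   # CTE mismatch [1/K]
    return d_alpha * delta_t_k * half_length_mm / bondline_mm

# Candidate lids vs. a silicon die (CTE ~2.6 ppm/K), 100 K swing,
# 10 mm die half-length, 50 um bondline -- all assumed values.
for name, cte in [("Cu", 17.0), ("AlSiC", 8.0), ("Cu-Mo-Cu", 6.5)]:
    g = tim_shear_strain(cte, 2.6, 100.0, 10.0, 0.05)
    print(f"{name}: approx. TIM shear strain = {g:.3f}")
```

Such a hand estimate only ranks materials; the strain fields at the bumps and the TIM require the finite element model described in the abstract.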
Modern combustion engines require an efficient cycle-by-cycle fuel injection control scheme to optimise individual combustion events during transient operation. The online optimisation of the respective control inputs typically needs models of the combustion quantities that are accurate yet sufficiently simple. Building on a recently presented cycle-by-cycle optimisation scheme with a hybrid model, this paper focuses on two aspects to enhance accuracy as well as computational efficiency for online computation. First, the proper calibration of Gaussian processes nested in a combined physics-/data-based model structure is addressed; respective test bench measurements and a tailored two-step training procedure are presented. Second, the computational efficiency of the online cycle-by-cycle optimisation is increased by mapping computationally intensive calculations into the data-based models through offline preprocessing. In addition, a data-driven approximation of the complete optimisation scheme is proposed to further reduce the computational demand. Simulation studies are used to evaluate the performance of these approaches.
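A minimal sketch of the hybrid structure named in the abstract: a Gaussian process trained on the residual between measurements and a physics-based prediction. The placeholder physics model, kernel choice, and synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=(50, 2))            # control inputs (e.g. injection timing/mass)
physics = lambda u: 0.8 * u[:, 0] + 0.2 * u[:, 1]  # placeholder physics-based model
y_meas = physics(u) + 0.1 * np.sin(6 * u[:, 0]) + 0.01 * rng.standard_normal(50)

# Step 1: fit the GP on the residual the physics model cannot explain.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(u, y_meas - physics(u))

# Step 2: hybrid prediction = physics + GP residual, with uncertainty.
u_new = np.array([[0.5, 0.3]])
mean, std = gp.predict(u_new, return_std=True)
print(physics(u_new) + mean, std)
```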
In this work, we propose to use deep material networks (DMNs) as a surrogate model for full-field computational homogenization to inversely identify material parameters of constitutive inelastic models for short fiber-reinforced thermoplastics (SFRTs). Micromechanics offers an elegant way to obtain constitutive models of materials with complex microstructure, as these emerge naturally once an accurate geometrical representation of the microstructure and expressive material models of the constituents are known. Unfortunately, obtaining the latter is non-trivial, for essentially two reasons. For a start, experiments on pure samples may not accurately represent the conditions present in the composite. Moreover, the manufacturing process may alter the material behavior, so a subsequent modification is necessary. To avoid modeling the physics of the manufacturing process, one may identify the material models of the individual phases based on experiments on the composite. Unfortunately, this procedure requires conducting time-consuming simulations. In the work at hand, we use DMNs to replace full-field simulations and to carry out an inverse parameter optimization of the matrix model in an SFRT. We are specifically concerned with the long-term creep response of SFRTs, which is particularly challenging for both experimental and simulation-based approaches due to the strong anisotropy and the long time scales involved. We propose a dedicated fully coupled plasticity and creep model for the polymer matrix and provide details on the experimental procedures.
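A minimal sketch of the inverse identification loop: a fast surrogate (standing in for the trained DMN) replaces full-field simulations inside a least-squares fit of matrix parameters to composite creep measurements. The Norton-type surrogate form, the applied stress, and the synthetic "experiment" are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(1.0, 1e4, 200)                       # time grid [s]

def surrogate_creep(params, t):
    """Placeholder surrogate: creep strain eps = A * sigma^n * t^m at fixed
    stress; an evaluation of the trained DMN would replace this."""
    A, n, m = params
    sigma = 30.0                                     # applied stress [MPa], assumed
    return A * sigma**n * t**m

eps_measured = surrogate_creep([1e-6, 1.5, 0.3], t)  # synthetic "experiment"

res = least_squares(
    lambda p: surrogate_creep(p, t) - eps_measured,  # misfit to minimize
    x0=[5e-7, 1.0, 0.2],
    bounds=([1e-9, 0.5, 0.05], [1e-4, 3.0, 0.8]),
)
print(res.x)   # recovered matrix creep parameters
```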
Abstract Data science projects are typically interdisciplinary, address a wide variety of problems from different domains, and are often characterized by heterogeneous project characteristics. Efforts toward a uniform characterization of data science projects are particularly relevant when decisions about their execution have to be made, for example based on criteria such as resource requirements, data availability, or potential risks. To the best of the authors' knowledge, however, relevant approaches are so far lacking in both research and practice. This article takes a first step toward an approach for the uniform characterization of data science projects by proposing a morphological box, derived in a three-step analysis based on a catalog of questions. It comprises seven dimensions with 32 dimension values and is illustrated by a case study from the field of predictive maintenance. The morphological box offers theoretical and practical application potential for the structured comparison of data science projects and the definition of project portfolios, but makes no claim to completeness. It should therefore be seen as a proposal and an impetus for further discourse.
Under fatigue loading, the stiffness decrease in short-fiber reinforced polymers reflects the gradual degradation of the material. Thus, both measuring and modeling this stiffness is critical to investigate and understand the entire fatigue process. Besides evolving damage, viscoelastic effects within the polymer influence the measured dynamic stiffness. In this paper, we study the influence of a linear viscoelastic material model for the matrix on the obtained dynamic stiffness and extend an elastic multiscale fatigue-damage model to viscoelasticity. Our contribution is twofold. First, we revisit the complex-valued elastic models known in the literature to predict the asymptotic periodic orbit of a viscoelastic material. For small phase shifts in an isotropic linear viscoelastic material, we show through numerical experiments that a real-valued computation of an "elastic" material is sufficient to approximate the dynamic stiffness of a microstructure with a generalized Maxwell material and equal Poisson's ratios in every element as matrix, reinforced by elastic inclusions. This makes standard solvers applicable to fiber-reinforced thermoplastics. Second, we propose a viscoelastic fatigue-damage model for the thermoplastic matrix based on a decoupling of the time scales on which viscoelastic and fatigue-damage effects manifest. We demonstrate the capability of the multiscale model to predict the dynamic stiffness evolution under fatigue loading of short-fiber reinforced polybutylene terephthalate (PBT) by a validation with experimental results.
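A minimal worked example of the generalized Maxwell model the abstract refers to: the complex dynamic modulus E*(omega) = E_inf + sum_i E_i (i omega tau_i)/(1 + i omega tau_i) and the resulting phase shift at a given loading frequency. Branch moduli and relaxation times are assumed values for illustration.

```python
import numpy as np

E_inf = 1.8e3                            # long-term modulus [MPa], assumed
E_i = np.array([400.0, 300.0, 200.0])    # Maxwell branch moduli [MPa], assumed
tau_i = np.array([1e-2, 1.0, 1e2])       # relaxation times [s], assumed

def dynamic_modulus(omega):
    """Complex modulus of the generalized Maxwell model at frequency omega."""
    iwt = 1j * omega * tau_i
    return E_inf + np.sum(E_i * iwt / (1.0 + iwt))

omega = 2 * np.pi * 10.0                 # 10 Hz cyclic loading
E_star = dynamic_modulus(omega)
# |E*| is the dynamic stiffness; a small phase angle is the regime in which
# the paper's real-valued "elastic" approximation is argued to suffice.
print(abs(E_star), np.degrees(np.angle(E_star)))
```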
In recent years, computer vision algorithms have become more powerful, enabling technologies such as autonomous driving to evolve rapidly. However, current algorithms mostly share one limitation: they rely on directly visible objects. This is a significant drawback compared to human behavior, where visual cues caused by objects (e.g., shadows) are intuitively used to retrieve information or anticipate appearing objects. While driving at night, this performance deficit becomes even more obvious: humans already process the light artifacts caused by the headlamps of oncoming vehicles to estimate where they will appear, whereas current object detection systems require the oncoming vehicle to be directly visible before it can be detected. Based on previous work on this subject, we present a complete system that detects light artifacts caused by the headlights of oncoming vehicles, so that an approaching vehicle is detected providently, i.e., before it becomes directly visible (denoted as provident vehicle detection). For that, an entire algorithm architecture is investigated, including detection in the image space, three-dimensional localization, and tracking of light artifacts. To demonstrate its usefulness, the proposed algorithm is deployed in a test vehicle, where the detected light artifacts are used to control the glare-free high beam system proactively (reacting before the oncoming vehicle is directly visible). Using this experimental setup, the time benefit of the provident vehicle detection system compared to an in-production computer vision system is quantified. Additionally, the glare-free high beam use case provides a real-time, real-world visualization interface of the detection results by using the adaptive headlamps as projectors. With this investigation of provident vehicle detection, we want to raise awareness of the unconventional sensing task of detecting objects providently (detection based on observable visual cues the objects cause before they are visible) and further close the performance gap between human behavior and computer vision algorithms, bringing autonomous and automated driving a step forward.
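A hedged sketch of the first stage of such a chain: detecting bright light-artifact blobs in a night image by simple intensity thresholding and connected-component grouping. This is an illustrative stand-in, not the paper's detector; the threshold and minimum blob size are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_light_artifacts(gray_image, threshold=200, min_pixels=20):
    """Return centroids (row, col) of bright blobs -- the simplest proxy for
    headlight glow appearing ahead of a not-yet-visible vehicle."""
    mask = gray_image >= threshold
    labels, n = ndimage.label(mask)               # connected components
    centroids = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_pixels:                 # suppress pixel noise
            centroids.append((ys.mean(), xs.mean()))
    return centroids

# Synthetic frame with one bright spot as a smoke test.
img = np.zeros((120, 160), dtype=np.uint8)
img[40:48, 100:110] = 255
print(detect_light_artifacts(img))
```

The described system then localizes each detection in 3D and tracks it over time before the headlamp controller acts on it.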
Grinding with metal-bonded cBN grinding tools enables a long tool lifetime without any need for redressing. However, the lifetime varies strongly, and a precise estimation of the remaining number of parts to be machined is an essential contribution to efficient industrial manufacturing. Previous studies focus mainly on separating two or three tool wear categories with traditional machine learning methods, achieving good prediction accuracy. In this work, a tool condition monitoring system is proposed that obtains more than 1500 statistical features from several sensors, representing the characteristics of the performed grinding process. In contrast to most previous studies, a regression task is formulated to estimate the remaining useful tool lifetime, which is determined by the occurrence of strong grinding burn. Two machine learning concepts are compared: an ensemble tree-based method and a custom deep state space model based on recurrent neural networks. Using the signal-based features alone, a good estimation quality is achieved for both models. In addition, the features are extended by physical knowledge derived from a surrogate model of a kinematic geometrical process model and from well-known physical relationships. With this hybrid machine learning framework, the same models are evaluated again; their prediction quality improves significantly, resulting in a mean R² score of more than 0.95.
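A minimal sketch of the regression setup: signal-based features augmented by a physics-derived feature, fed to a tree ensemble that predicts remaining useful tool life and is scored by R². The data is synthetic and the feature semantics are assumptions; the paper's actual sensors and models are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X_signal = rng.standard_normal((400, 20))    # e.g. statistics of process sensor signals
# Placeholder "hybrid" feature standing in for the kinematic surrogate model.
X_physics = (X_signal[:, :2] ** 2).sum(axis=1, keepdims=True)
X = np.hstack([X_signal, X_physics])
y = 1000.0 - 50.0 * X_physics[:, 0] + rng.standard_normal(400)  # remaining parts, synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(r2_score(y_te, model.predict(X_te)))   # evaluation metric used in the study
```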
The necessity of obtaining, parametrizing, and maintaining models of the underlying dynamics impedes predictive control of energy systems in many real-world applications. To alleviate the need for explicit model knowledge, this paper proposes a framework for the operation of distributed multi-energy systems via data-driven predictive control with stochastic uncertainties. Instead of modeling the dynamics of the individual distributed energy resources, our approach relies only on measured input–output data of the distributed resources. Moreover, we combine data-driven predictive control with Gaussian process forecasts of exogenous signals (renewable generation and demands). A simulation study based on realistic data illustrates the efficacy of the proposed scheme in handling mild non-linearities and measurement noise.
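A minimal sketch of the data-driven core: building Hankel matrices of measured input/output trajectories, which replace an explicit model in behavioral (DeePC-style) predictive control. The window depth and the toy system are illustrative assumptions.

```python
import numpy as np

def hankel(signal, depth):
    """Stack depth-long windows of a (T, m) signal column-wise."""
    T = signal.shape[0]
    return np.hstack([signal[i:i + depth].reshape(-1, 1)
                      for i in range(T - depth + 1)])

rng = np.random.default_rng(2)
u = rng.standard_normal((100, 1))    # measured inputs of a distributed resource
y = np.cumsum(u, axis=0) * 0.1       # toy output response

L = 10                               # window covering past + prediction horizon
H_u, H_y = hankel(u, L), hankel(y, L)
# Any trajectory (u*, y*) consistent with the recorded behavior satisfies
# [H_u; H_y] g = [u*; y*] for some g -- the constraint the controller optimizes over.
print(H_u.shape, H_y.shape)
```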
Approximate computing and mixed-signal in-memory accelerators are promising paradigms for significantly reducing the computational requirements of DNN inference without accuracy loss. In this work, we present a novel in-memory design for layer-wise approximate computation at different approximation levels. A sensitivity-based high-dimensional search is performed to explore the optimal approximation level for each DNN layer. Our methodology offers high flexibility and an optimal trade-off between accuracy and throughput, which we demonstrate through an extensive evaluation on various DNN benchmarks for medium- and large-scale image classification on CIFAR10, CIFAR100, and ImageNet. With our approach, we reach an average speedup of 5x, and up to 8x, without accuracy loss.
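A greedy sketch of what a sensitivity-based per-layer search can look like: for each layer, pick the most aggressive approximation level whose isolated accuracy drop stays within a budget. The `evaluate` callback, the budget, and the toy sensitivity profile are hypothetical; the paper's actual search is high-dimensional rather than per-layer greedy.

```python
def assign_levels(layers, levels, evaluate, baseline_acc, budget=0.005):
    """levels: ordered from exact (index 0) to most approximate.
    evaluate(layer, level): validation accuracy with only that layer approximated."""
    config = {}
    for layer in layers:
        chosen = levels[0]
        for level in levels[1:]:                          # increasingly aggressive
            if baseline_acc - evaluate(layer, level) <= budget:
                chosen = level                            # still within the budget
            else:
                break
        config[layer] = chosen
    return config

# Toy usage with a synthetic per-level sensitivity profile.
acc = {("conv1", l): 0.91 - 0.004 * l for l in range(4)}
print(assign_levels(["conv1"], [0, 1, 2, 3], lambda la, le: acc[(la, le)], 0.91))
```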
The presented method describes a standardized test procedure for evaluating the takeover performance of drivers during automated driving. It was primarily developed for evaluating Level 3 systems (conditional automated driving) and should be applied in a driving simulator environment during the development phase of a system. The method consists of a test course on a three-lane highway with 12 test scenarios in which the driver repeatedly has to take over control from the automated system. Each scenario requires the driver to build up adequate situation awareness in order to decide on the correct action. The method was explicitly designed to map the four relevant steps in the takeover process of automated driving (Perception – Awareness – Decision – Action) and is therefore called the PADA-AD Test for automated driving. The method description contains guidelines regarding the specification of the test course and the included test scenarios, the design and duration of the automated drives, the non-driving related task to be performed during the automated drives, the instructions to be given to the subjects, and finally the measures for evaluating the takeover performance of the drivers.
  • A test procedure for evaluating the takeover performance of drivers during automated driving was developed for use in a driving simulator during the development phase of a system/HMI
  • The test course enables the assessment of the driver's takeover performance in various test scenarios, including higher cognitive processes
  • The method is highly standardized and thus replicable through use of a predetermined test course with clearly defined scenarios, reduced environmental conditions, and "popping up" of situational elements
Robots operating alongside humans in highly dynamic environments need to not only react to moving persons and objects but also anticipate and adhere to patterns of motion of dynamic agents in their environment. Currently, robotic systems use information about dynamics only locally, through tracking and predicting motion within their direct perceptual range. This limits robots to reactive responses to observed motion and to short-term predictions in their immediate vicinity. In this paper, we explore how maps of dynamics (MoDs), which provide information about motion patterns outside the direct perceptual range of the robot, can be used in motion planning to improve the behaviour of a robot in a dynamic environment. We formulate cost functions for four MoD representations to be used in any optimizing motion planning framework. Further, to evaluate the performance gain from using MoDs in motion planning, we design objective metrics and introduce a simulation framework for rapid benchmarking. We find that planners that utilize MoDs waste less time waiting for pedestrians than planners that use geometric information alone. In particular, planners utilizing both intensity (the proportion of observations at a grid cell where a dynamic entity was detected) and direction information show better task execution efficiency.
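A minimal sketch of a MoD-informed edge cost for an optimizing planner: geometric length penalized when the robot moves against the locally dominant flow, weighted by the cell's intensity. The weighting scheme is an illustrative assumption, not the paper's exact formulation of its four cost functions.

```python
import numpy as np

def mod_edge_cost(p_from, p_to, intensity, flow_direction, w=2.0):
    """intensity in [0, 1]; flow_direction is a unit 2D vector at the cell."""
    step = np.asarray(p_to, float) - np.asarray(p_from, float)
    length = np.linalg.norm(step)
    if length == 0.0:
        return 0.0
    # 0 when moving with the flow, 2 when moving directly against it.
    misalignment = 1.0 - np.dot(step / length, flow_direction)
    return length * (1.0 + w * intensity * misalignment)

# Moving against a strongly observed pedestrian flow costs more:
print(mod_edge_cost((0, 0), (1, 0), 0.8, np.array([1.0, 0.0])))   # with flow
print(mod_edge_cost((0, 0), (-1, 0), 0.8, np.array([1.0, 0.0])))  # against flow
```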
Usage profiles of shared autonomous fleets will differ considerably from those of present-day privately owned vehicles. Thus, requirements on the powertrain and other vehicle components are expected to change significantly. As long as no real-world data are available, automotive requirements engineering strongly depends on synthetic driving profiles, for example those provided by traffic simulation. These simulations, however, are quite challenging, as they need to combine multi-modal, large-scale fleet simulations with microscopic traffic modeling to produce both realistic usage profiles and detailed driving cycles. We aim to combine the two open-source tools MATSim and SUMO to achieve this goal. As an important step in this endeavor, we analyze the consistency of MATSim and SUMO with regard to traffic dynamics by means of three experiments with increasing levels of complexity: (i) analytically on a homogeneous road segment in the steady state; (ii) numerically on a homogeneous road segment in the non-stationary state for a synthetic test case; and (iii) numerically for a highly non-linear, medium-sized real-world test case in Berlin. We analyze the simulation results with respect to macroscopic flow–density–speed relations. In addition, we study network impedances for the Berlin test case. We show that the traffic dynamics of MATSim and SUMO behave differently across the various test cases and discuss the implications for our tool-coupling efforts.
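A minimal sketch of the kind of consistency check described: computing a macroscopic flow–density–speed relation using the hydrodynamic identity q = rho * v. The Greenshields speed-density law and its parameters below are illustrative assumptions; a coupling study would instead overlay (rho, q) points measured from MATSim and SUMO runs of the same scenario.

```python
import numpy as np

rho = np.linspace(5, 120, 24)        # density [veh/km]
v_free, rho_jam = 90.0, 140.0        # assumed Greenshields parameters
v = v_free * (1.0 - rho / rho_jam)   # speed [km/h]
q = rho * v                          # flow [veh/h], fundamental relation q = rho*v

print(f"capacity approx. {q.max():.0f} veh/h at rho = {rho[q.argmax()]:.0f} veh/km")
```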
The combination of direct laser interference patterning (DLIP) with laser-induced periodic surface structures (LIPSS) enables the fabrication of functional surfaces, as reported for a wide spectrum of materials. The process throughput is usually increased by applying higher average laser powers. However, this causes heat accumulation that affects the roughness and shape of the produced surface patterns. Consequently, the effect of substrate temperature on the topography of the fabricated features requires detailed investigation. In this study, steel surfaces were structured with line-like patterns by ps-DLIP at 532 nm. To investigate the influence of substrate temperature on the resulting topography, a heating plate was used to adjust the temperature. Heating to 250 °C led to a significant reduction of the produced structure depths, from 2.33 to 1.06 µm. The reduction is associated with the appearance of a different LIPSS type, depending on the grain orientation of the substrate and on laser-induced superficial oxidation. This study revealed a strong effect of substrate temperature, which is also to be expected when heat accumulation arises from processing surfaces at high average laser power.
The introduction of automated driving functions significantly raises the requirements on the availability of the electric powertrain. In this case, the driver as mechanical fallback must be replaced by a fault-tolerant system. New concepts such as predictive diagnostics or customized operating strategies ensure this fault tolerance. An essential component for realizing these requirements is the electric drive. In the present work, a method for predicting the fault condition of a permanent magnet synchronous motor (PMSM) is developed based on artificial neural networks (ANN). Not only is the occurrence of a failure detected; its severity is also predicted and classified. For this purpose, a suitable failure indicator is needed that reflects the severity of the failure and thus allows both the prediction and the degradation (protection) of the system. The prerequisite for the use of machine learning methods, such as artificial neural networks, is the existence of a database. Data are obtained with the help of a simulation model of the PMSM, into which failures can be injected. Features from the phase currents and the battery current in the time and frequency domains are presented, as well as classical methods such as wavelet analysis and the decomposition into symmetrical components. The selection of the features has a great influence on the diagnostic result and on the performance of the algorithm. The failures manifest themselves in the frequency-domain features. Based on these aspects, several neural networks are trained. For failure prediction, an accuracy of about 95% is achieved, and for classification, about 98.5%.
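A minimal sketch of such a diagnosis chain: frequency-domain features extracted from a phase current feed a small neural network that classifies failure severity. The signals, the choice of harmonics, and the severity labels are synthetic assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

fs, n = 10_000, 2048
t = np.arange(n) / fs
rng = np.random.default_rng(3)

def phase_current(severity):
    """Fundamental plus a fault harmonic whose amplitude grows with severity."""
    return (np.sin(2 * np.pi * 50 * t)
            + 0.02 * severity * np.sin(2 * np.pi * 150 * t)
            + 0.01 * rng.standard_normal(n))

def features(i_a):
    """Spectral amplitudes at selected harmonics of the 50 Hz fundamental."""
    spec = np.abs(np.fft.rfft(i_a)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return [spec[np.argmin(np.abs(freqs - f))] for f in (50, 150, 250)]

X = [features(phase_current(s)) for s in (0, 1, 2, 3) for _ in range(30)]
y = [s for s in (0, 1, 2, 3) for _ in range(30)]   # severity classes
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```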
The complexity of crash scenarios in the context of vehicle safety is steadily increasing, especially on the way to mixed traffic with non-automated and automated vehicles. The number of simulations required to design a robust restraint system is thus also increasing. The vast range of possible scenarios results in a huge parameter space. At the same time, biofidelic simulation models incur very high computational costs, so the number of simulations must be limited to a feasible operational range. In this study, a machine-learning-based design of experiments algorithm is developed that specifically addresses the issues of designing a safety system with a limited number of simulation samples while taking the diversity of occupants and accident scenarios into account. In contrast to an optimization task, where the aim is to meet a target function, our aim has been to find the critical load case combinations to make sure that these are addressed and not missed. A combination of a space-filling approach and a metamodel has been established to find the critical scenarios and improve the system for those cases; it focuses specifically on the areas that are difficult for the metamodel to predict. The developed method was applied to iteratively generate a simulation matrix of 208 simulations in total, using a generic interior model and a detailed FE human body model. Kinematic and strain-based injury metrics were used as simulation output; these were used to train the metamodels after each iteration and to derive the simulation matrix for the next iteration. We present a method that allows the training of a robust metamodel for the prediction of injury criteria, considering both varying load cases and varying restraint system parameters for individual anthropometries and seating postures. Based on that, restraint systems or metamodels can be optimized to achieve the best overall performance across a huge variety of possible scenarios, with a specific focus on critical ones.
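A minimal sketch of the adaptive sampling idea: from a space-filling candidate pool, pick the next batch of simulations where an ensemble metamodel disagrees most, a common proxy for "difficult to predict". The simulator stub, batch size, and uncertainty proxy are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
simulate = lambda X: np.sin(5 * X[:, 0]) * X[:, 1]   # placeholder injury metric

X_run = rng.uniform(size=(20, 2))                    # initial space-filling batch
y_run = simulate(X_run)

for _ in range(4):                                   # iterations of the DOE loop
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_run, y_run)
    candidates = rng.uniform(size=(500, 2))
    # Disagreement across the trees serves as the uncertainty surrogate.
    per_tree = np.stack([t.predict(candidates) for t in model.estimators_])
    pick = candidates[np.argsort(per_tree.std(axis=0))[-8:]]
    X_run = np.vstack([X_run, pick])
    y_run = np.concatenate([y_run, simulate(pick)])

print(X_run.shape)   # 20 + 4*8 = 52 evaluated load cases
```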
Background/Aims: Although T cell-intrinsic expression of G9a has been associated with murine intestinal inflammation, mechanistic insight into the role of this methyltransferase in human T cell differentiation is ill-defined, and manipulation of G9a function for therapeutic use against inflammatory disorders is unexplored. Methods: Human naïve T cells were isolated from peripheral blood and differentiated in vitro in the presence of a G9a inhibitor (UNC0642) before being characterized via the transcriptome (RNA-seq), chromatin accessibility (ATAC-seq), protein expression (CyTOF, flow cytometry), metabolism (mitochondrial stress test, UPLC-MS/MS), and function (T cell suppression assay). The in vivo role of G9a was assessed using three murine models. Results: We discovered that pharmacologic inhibition of G9a enzymatic function in human CD4 T cells led to the spontaneous generation of FOXP3+ T cells (G9ai-Tregs) in vitro that faithfully reproduce human Tregs, functionally and phenotypically. Mechanistically, G9a inhibition altered the transcriptional regulation of genes involved in lipid biosynthesis in T cells, resulting in increased intracellular cholesterol. Metabolomic profiling of G9ai-Tregs confirmed elevated lipid pathways that support Treg development through oxidative phosphorylation and enhanced lipid membrane composition. Pharmacologic G9a inhibition promoted Treg expansion in vivo upon antigen (gliadin) stimulation and ameliorated acute TNBS-induced colitis secondary to tissue-specific Treg development. Lastly, Tregs lacking G9a expression (G9aKO Tregs) remain functional chronically and can rescue T cell transfer-induced colitis. Conclusion: G9a inhibition promotes cholesterol metabolism in T cells, favoring a metabolic profile that facilitates Treg development in vitro and in vivo. Our data support the potential use of G9a inhibitors in the treatment of immune-mediated conditions, including inflammatory bowel disease.
The reliability of piezoelectric thin films is crucial for piezoelectric micro-electromechanical system applications, and understanding resistance degradation in such films requires knowledge about point defects. Here, we show the resistance degradation mechanism in thin films of the lead-free alternative potassium sodium niobate (KNN) and its relationship to point defects in both field-up and field-down polarities. The conduction mechanism of KNN thin films is found to be Schottky-limited. Furthermore, a reduction in Schottky barrier height accompanies the resistance degradation, resulting from interfacial accumulation of additional charged defects. We use thermally stimulated depolarization current measurements and charge-based deep level transient spectroscopy to characterize the defects in KNN thin films. Our results show that oxygen vacancies accumulate at the interface in field-up polarity, while multiple defects accumulate in field-down polarity, potentially oxygen vacancies and holes trapped by potassium vacancies. We use wafer deposition variation to create samples with different film properties; measurement results from these samples correlate resistance degradation with defect concentration. We find the natural logarithm of the leakage current to be linearly proportional to the defect concentration to the power of 0.25. The demonstrated analysis provides a precise and meaningful process assessment for optimizing KNN thin films.
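A minimal worked example of the reported scaling check: fitting ln(I_leak) as a linear function of N_defect^0.25 across samples. The values below are synthetic; only the functional form follows the abstract.

```python
import numpy as np

N = np.array([1e18, 3e18, 1e19, 3e19, 1e20])   # defect concentration [cm^-3], assumed
lnI = -30.0 + 2.0e-5 * N**0.25                 # synthetic ln(leakage current)

# Linear regression of ln(I) against N^0.25 recovers slope and intercept.
slope, intercept = np.polyfit(N**0.25, lnI, 1)
print(slope, intercept)
```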
Many angle or position sensors rank among the inductive encoders, although their sensing principles differ. The sensor design investigated in this paper is based on coupled coils, wherein the information about the position angle is modulated onto the induced voltage measured at the receiving coils. Unfortunately, no closed-form solution exists for most of the physical quantities, since this principle is based on eddy currents, which are rather complex to calculate for the given geometry. Consequently, the common approach is to calculate the sensor quantities by a 3D finite-element (FE) simulation. However, this usually entails high computational effort and long run times. To overcome these limitations, a novel method is presented to reduce the simulation effort and to calculate regression models that can even replace simulations. In the following investigations, D-optimal designs, a technique from the field of statistical design of experiments, are combined with a numerical implementation of Faraday's law in order to calculate the induced voltages from simulated magnetic field data. With this method, the sensor signals can be calculated for multiple angle positions from one simulated position by shifting the integration boundaries; hence, the simulation time for a full period is significantly reduced. The regression models obtained by this method can predict the Tx-coil inductance, the induced Rx-voltage amplitude, and the angular error as functions of geometric design parameters.
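A minimal sketch of the post-processing idea: computing the induced Rx voltage from simulated field data via Faraday's law, u_ind = -dPhi/dt, with the flux obtained by numerical integration over the coil area. The grid, field snapshots, and time step are illustrative assumptions; shifting the integration region over the same field data is what yields signals for multiple angle positions from one simulation.

```python
import numpy as np

def flux(Bz, dx, dy):
    """Total flux through the coil: integrate Bz over the enclosed grid cells."""
    return np.sum(Bz) * dx * dy

# Two field snapshots one time step apart (e.g. exported from the FE solver).
nx, ny, dx, dy, dt = 50, 50, 1e-4, 1e-4, 1e-6
Bz_t0 = 0.10 * np.ones((nx, ny))
Bz_t1 = 0.11 * np.ones((nx, ny))

# Faraday's law: induced voltage is the negative rate of change of flux.
u_ind = -(flux(Bz_t1, dx, dy) - flux(Bz_t0, dx, dy)) / dt
print(u_ind)   # induced voltage [V]
```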
2,802 members
Amit Kale
  • Department of Corporate Research
Jens Baringhaus
  • Department of Corporate Research
Michael Pfeiffer
  • Bosch Center for Artificial Intelligence - Research
Lavdim Halilaj
  • Department of Corporate Research
Marco Wagner
  • Department of Corporate Research
Information
Address
Robert Bosch GmbH, 70839 Stuttgart, Baden-Württemberg, Germany
Website
http://www.bosch.com