Recent publications
To shorten testing time in a power cycling test, high acceleration factors induced by large temperature swings are normally applied. The induced fatigue can be modelled with a classical Coffin-Manson lifetime approach. This work uses test conditions at the transition between the elastic and the plastic deformation zone. Testing in the high-cycle fatigue zone requires sophisticated equipment, so active power cycling with switching losses is implemented. It was found that for high junction temperatures ($T_{\rm{vj,max}} = 150\,°\rm{C}$) a transition between the plastic and the elastic zone could not be detected down to $\Delta T = 18\,\rm{K}$. However, for reduced junction temperatures ($T_{\rm{vj,max}} = 115\,°\rm{C}$) the elastic zone was found to begin at around $\Delta T < 29\,\rm{K}$. The main failure mechanism was found to be chip solder fatigue in the center of the solder layer. The experimental data are transferred into a 3D simulation environment to further investigate the failure mode. Based on these findings, a lifetime model is described and applied which predicts, depending on the conditions, lifetime benefits of up to 268% compared to a standard lifetime approach.
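The lifetime scaling underlying such power-cycling tests is commonly written as a Coffin-Manson power law combined with an Arrhenius temperature term. A minimal sketch follows; the constants are illustrative placeholders, not the fitted values from this work:

```python
import math

def cycles_to_failure(delta_T, T_j_max_K, A=3.0e14, alpha=-5.0, Ea_eV=0.066):
    """Coffin-Manson-Arrhenius lifetime model (illustrative constants):
    N_f = A * dT^alpha * exp(Ea / (kB * Tj,max)).
    delta_T in K, T_j_max_K the maximal junction temperature in K."""
    k_B = 8.617e-5  # Boltzmann constant in eV/K
    return A * delta_T**alpha * math.exp(Ea_eV / (k_B * T_j_max_K))
```

The strong sensitivity to the temperature swing (exponent around -5) is what makes large swings such effective accelerators, and it is also why extrapolating to small swings near the elastic zone is delicate.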
In event-based vision, visual information is encoded by sequential events in space and time, similar to the human visual system, where the retina emits spikes. Spiking neural networks are therefore the natural choice for processing event-based input streams. As with classical deep learning networks, spiking neural networks must be robust against different corruptions or perturbations of the input data. However, corruption in event-based data has received little attention so far. According to previous studies, biologically motivated neural networks that use lateral inhibition to implement a competition mechanism between neurons show increased robustness against loss of information in the input data. Here we analyze the influence of inhibitory feedback on the robustness against four different types of corruption of an event-based data set. We demonstrate how a 1:1 ratio between feed-forward excitation and feedback inhibition increases the robustness against the loss of events, as well as against additional noisy events. Interestingly, our results show that strong feedback inhibition is a disadvantage if events in the input stream are shifted in space or in time.
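The effect of balancing excitation and inhibition can be illustrated with a toy rate model (a hypothetical sketch, not the paper's spiking network): global subtractive feedback inhibition with gain 1.0 stands in for the 1:1 ratio and suppresses uniformly added noise events while leaving the signal channel decodable:

```python
import numpy as np

def population_response(events, g_inh):
    """Rectified rate units with global subtractive feedback inhibition.
    events: (time, channels) array; g_inh = 1.0 mimics a 1:1 balance."""
    r = np.zeros(events.shape[1])
    responses = []
    for x in events:
        inhibition = g_inh * r.mean()        # feedback from previous step
        r = np.maximum(0.0, x - inhibition)  # rectified net drive
        responses.append(r.copy())
    return np.array(responses)
```

With added uniform "noise events", the inhibited network drives the non-signal channels toward zero, whereas without inhibition the noise passes through unchanged.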
Meeting customer time requirements poses a major challenge for high‐variety make‐to‐order companies, which need to reduce lead times and process urgent jobs in time while realising high delivery reliability. The key decision stages within Workload Control (WLC) are order release and shop floor dispatching. To the best of our knowledge, recent research has mainly focused on the order release stage and largely ignored the shop floor dispatching stage. Moreover, the urgency of a job is not only related to its due date but is also affected by the dynamics of the shop floor. Specifically, the urgency of a job may decrease at downstream operations in its routing, since priority dispatching for urgent jobs accelerates production at upstream operations, while the resources occupied in doing so increase the waiting time of non‐urgent jobs at the workstations. A job's urgency therefore changes over time, and misjudging it may result in truly urgent jobs not being processed in time. In response, the authors focus on the shop floor dispatching stage and consider the transient status of urgent operations in the context of WLC: at the input buffer of each workstation, jobs are re‐judged and classified into urgent and non‐urgent operations. Simulation results show that considering the transient status of urgent operations helps speed up production of truly urgent jobs and improves delivery performance in both the General Flow Shop and the Pure Job Shop. In addition, the percentage tardy performance is greatly affected by norm levels, especially at the severe urgency level. These findings have important implications for how urgent operations should be identified and how norm levels should be set at the shop floor dispatching stage.
Within the Priority Programme Calm, Smooth and Smart, a new approach for deliberately introducing damping through variation of stiffness was presented. The proposed approach is a novel way of combining the concepts of damping and absorption inherently in a structure. By dynamically adapting the stiffness of a slender, beam-like structure through shape adaptation of the cross-section, energy is transferred from critical low-frequency modes into a specifically designed, higher-frequency absorber mode, which can then be damped in an optimal way. Experimental studies were first conducted to examine the suitability of shape adaptation for the proposed approach. Various investigations were carried out regarding the application of the presented concept with different time laws for free and forced oscillations. In addition, thorough analytical and numerical studies were conducted to understand the internal energy transfers and to provide the basis for decoupling the active and semi-active effects related to the reduction of vibrations. Another focus was on the synthesis of a structural layout that enables a defined stiffness change to be induced and an absorber mode (defined in shape and eigenfrequency) to be integrated into the structure.
Increasing demands on component quality combined with ever shorter cycle times place growing demands on production processes. This also affects separating processes such as shear cutting. Due to the increasing use of high-strength steels, conventional shear cutting processes are reaching their limits. One way to meet these challenges is to increase the cutting speed. At speeds above 3 \(\frac{m}{s}\), this is called high-speed impact cutting. A new machine concept with linear motors was developed to investigate the influences on different materials. With the help of this test bench, it is possible to flexibly adjust the cutting speed and to monitor the process. This enables the correlation between process parameters and the shear cutting result to be determined.
Purpose: The study examined the longitudinal interplay of anthropometric, metabolic, and neuromuscular development related to performance in adolescent national-level swimmers over 12 months. Methods: Seven male and 12 female swimmers (14.8 [1.3] y, FINA [International Swimming Federation] points 716 [51]) were tested before (T0) and after the preparation period (T1), at the season's peak (T2), and before the next season (T3). Anthropometric (eg, fat percentage) and neuromuscular parameters (squat and bench-press load-velocity profile) were assessed on dry land. Metabolic (cost of swimming [C], maximal oxygen uptake [V̇O2peak], and peak blood lactate [bLapeak]) and performance (sprinting speed [vsprint] and lactate thresholds [LT1 and LT2]) factors were determined using a 500-m submaximal, 200-m all-out, 20-second sprint, and incremental test (+0.03 m·s⁻¹, 3 min), respectively, in front-crawl swimming. Results: vsprint (+0.6%) and LT1 and LT2 (+1.9–2.4%) increased trivially and slightly, respectively, from T0 to T2, following small to moderate increases in strength (≥+10.2%) from T0 to T1 and in V̇O2peak (+6.0%) from T1 to T2. Bench-press maximal strength and peak power correlated with vsprint from T0 to T2 (r ≥ .54, P < .05) and with LT2 at T1 (r ≥ .47, P < .05). Changes in fat percentage and V̇O2peak (T2–T1 and T3–T2, r ≤ −.67, P < .01) and in C and LT2 (T2–T0, r = −.52, P = .047) were also correlated. Conclusions: Increases in strength and V̇O2peak from the preparation to the competition period resulted in improved sprint and endurance performance. Across the season, upper-body strength was associated with vsprint and LT2, although their changes were unrelated.
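The load-velocity profiling named in the Methods amounts to fitting a straight line through load/velocity pairs. A generic sketch with made-up numbers (not the study's protocol or data) follows; the minimal-velocity threshold used as a strength proxy is likewise an assumption:

```python
def load_velocity_profile(loads, velocities, v_min=0.3):
    """Least-squares line v = v0 + b * load through load/velocity pairs.
    Returns (v0, slope b, load at which v drops to v_min) -- the last
    value is a common, illustrative proxy for maximal strength."""
    n = len(loads)
    mx = sum(loads) / n
    my = sum(velocities) / n
    b = sum((x - mx) * (y - my) for x, y in zip(loads, velocities)) \
        / sum((x - mx) ** 2 for x in loads)
    v0 = my - b * mx
    return v0, b, (v_min - v0) / b
```

For example, loads of 20–80 kg with velocities falling linearly from 1.0 to 0.4 m/s give an intercept of 1.2 m/s and a slope of −0.01 (m/s)/kg.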
The review summarizes our recent reports on brightly‐emitting materials with varied dimensionality (3D, 2D, 0D) synthesized using “green” chemistry and exhibiting highly efficient photoluminescence (PL) originating from self‐trapped exciton (STE) states. The discussion starts with 0D emitters, in particular ternary indium‐based colloidal quantum dots, continues with 2D materials, focusing on single‐layer polyheptazine carbon nitride, and further evolves to 3D luminophores, the latter exemplified by lead‐free double halide perovskites. The review shows the broadband STE PL to be an inherent feature of many materials produced under mild conditions by “green” chemistry, outlining PL features general for these STE emitters and differences in their photophysical properties. The review concludes with an outlook on the challenges in the field of STE PL emission and the most promising avenues for future research.
To cope with the increasing demands in care due to the aging society and the simultaneous lack of professional caregivers, a technical assistance system can help to monitor elderly people in their own homes and to support professional caregivers and caring relatives. The technical assistance system consists of a smart sensor with an omnidirectional camera on the ceiling of each room, complemented by other smart home sensors in the apartment of the elderly person. Based on the smart sensor data, the positions, poses, and activities of the patient are detected with the help of machine learning (ML) techniques. In this work, a temporal behaviour model of the patient is developed to recognize activities of daily living (ADL) such as "eating", "sleeping", or "emergency". For this, the actions (e.g., walking, sitting) and the location of the patient in the current room, as well as the data of other sensors in the apartment, are used. This input data is fed into a trained decision tree of depth 14, which ultimately determines the patient's activity. The accuracy for detecting activities of daily living with the decision tree is 96.47%, and the activities can be detected in real time.
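The final classification step can be pictured as a hand-written decision rule. The following is a hypothetical stand-in for the trained depth-14 tree; the labels, rooms, and time thresholds are invented for illustration only:

```python
def classify_adl(action, room, hour):
    """Toy stand-in for the paper's learned decision tree: maps an
    (action, room, hour-of-day) observation to an ADL label.
    All branching thresholds here are invented for illustration."""
    if action == "lying" and room == "bedroom":
        return "sleeping"
    if action == "sitting" and room == "kitchen" and 6 <= hour <= 21:
        return "eating"
    if action == "lying":
        return "emergency"  # lying outside the bed area treated as a fall
    return "other"
```

In the actual system such branches are not hand-written but learned from labelled sensor data, which is what makes a depth of 14 (thousands of such branches) tractable.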
The aim of this chapter is to showcase the effectiveness of partial least squares structural equation modeling (PLS-SEM) in estimating choices based on data derived from discrete choice experiments. To achieve this aim, we employ a PLS-SEM-based discrete choice modeling approach to analyze data from a large study in the German healthcare sector. Our primary focus is to reveal distinct customer segments by exploring variations in their preferences. Our results demonstrate similarities to other segmentation techniques, such as latent class analysis in the context of multinomial logit analysis. Consequently, employing PLS-SEM to examine data from discrete choice experiments holds great promise for deepening our understanding of consumer choices.
This comment on “Mindfulness for global public health: Critical analysis and agenda” by Doug Oman focuses on the difficulties associated with the current use and understanding of the term mindfulness. In particular, I argue that the current lack of agreement on what mindfulness practice is, or, perhaps more realistically, what mindfulness practices are, and how their effects can be explained might jeopardize such an integration process in the long run. In the literature, one can find widely differing conceptions of what constitutes a mindfulness practice. Moreover, there is clear evidence that different mindfulness practices can yield quite different effects. This holds for the comparison of “mindfulness packages” but also for comparisons of single components of these packages, and for incremental combinations of components. There is also strong evidence that mindfulness practices do not work equally well for different purposes and different people. These differential effects need to be elaborated and explained. Unfortunately, theoretical models for mindfulness practices are also still quite heterogeneous. As a first step, researchers and practitioners could be very specific about what they mean by mindfulness practice or even use alternative terms for different practices. Moreover, they could stay open to alternative forms of meditation and put as much theory as possible into their research to eventually find out when, how, and why specific mindfulness practices (and packages thereof) work and for whom.
Interpolation‐based data‐driven methods, such as the Loewner framework or the Adaptive Antoulas‐Anderson (AAA) algorithm, are established and effective approaches to find a realization of a dynamical system from frequency response data (measurements of the system's transfer function). If a system‐theoretic representation of the original model is not available or infeasible to evaluate efficiently, such reduced realizations enable effective analysis and simulation. This is especially relevant for models of interconnected dynamical systems, which typically have a high number of inputs and outputs to model their coupling conditions correctly.
Tangential interpolation is an established strategy to construct accurate reduced‐order models while ensuring a reasonably small size even if many inputs and/or outputs have to be considered. In this contribution, we evaluate the applicability and effectiveness of data‐driven interpolation methods to compute reduced‐order models of dynamical systems with many inputs and outputs. Additionally, we extend AAA to a tangential interpolation setting and thus enable the use of AAA‐like methods in the context of transfer function interpolation for systems with many inputs and outputs.
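The data-driven setting can be illustrated with the basic Loewner matrix construction for a scalar transfer function — a minimal sketch; the tangential, matrix-valued case treated in the paper generalizes this with left and right tangential directions:

```python
import numpy as np

def loewner_matrix(mu, H_mu, lam, H_lam):
    """Loewner matrix L[i, j] = (H(mu_i) - H(lam_j)) / (mu_i - lam_j),
    built from transfer-function samples at two disjoint point sets."""
    return (H_mu[:, None] - H_lam[None, :]) / (mu[:, None] - lam[None, :])
```

For data sampled from a rational transfer function of order n, the Loewner matrix has rank n; the Loewner framework exploits exactly this to recover a minimal realization from measurements alone.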
The auditory system has an amazing ability to rapidly encode auditory regularities. Evidence comes from the popular oddball paradigm, in which frequent (standard) sounds are occasionally exchanged for rare deviant sounds, which then elicit signs of prediction error based on their unexpectedness (e.g., MMN, P3a). Here, we examine the widely neglected fact that deviants can be bearers of predictive information themselves: Naïve participants listened to sound sequences constructed according to a new, modified version of the oddball paradigm including two types of deviants that followed diametrically opposed rules: one deviant sound occurred mostly in pairs (repetition rule), the other deviant sound occurred mostly in isolation (non-repetition rule). Due to this manipulation, the sound following a first deviant (either the same deviant or a standard) was either predictable or unpredictable based on the conditional probability associated with the preceding deviant sound. Our behavioural results from an active deviant-detection task replicate previous findings that deviant repetition rules (based on conditional probability) can be extracted when behaviourally relevant. Our electrophysiological findings obtained in a passive-listening setting indicate that conditional probability also translates into differential processing at the P3a level. However, the MMN was confined to global deviants and was not sensitive to conditional probability. This suggests that higher-level processing concerned with stimulus selection and/or evaluation (reflected in P3a), but not lower-level sensory processing (reflected in MMN), considers rarely encountered rules.
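The conditional probability that separates the two deviant rules can be estimated directly from a symbol sequence. A generic sketch with invented symbols (not the study's stimulus code):

```python
def conditional_repeat_prob(seq, symbol):
    """Estimate P(next == symbol | current == symbol) from a sequence --
    high for a repetition-rule deviant (it occurs mostly in pairs),
    low for a non-repetition-rule deviant (mostly in isolation)."""
    followers = [b for a, b in zip(seq, seq[1:]) if a == symbol]
    if not followers:
        return 0.0
    return sum(1 for b in followers if b == symbol) / len(followers)
```

A listener who has extracted the rule effectively tracks this quantity: after a repetition-rule deviant, a second identical deviant is expected rather than surprising.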
Production companies increasingly face the challenges of a globalized market with its intensified competition and high customer demands. The demand for individualized products requires production to be flexible and dynamic, while supply shortages, short-notice orders, and machine downtime simultaneously make it harder to fulfil production plans and hence to stay competitive. In this context, creating highly effective and efficient processes is essential. For complex systems such as manufacturing processes, with their many internal and external influences and the connections between production steps, the cause-effect relations of actions must therefore be made transparent. We propose a methodology to formulate key performance indicators (KPIs) relevant to achieving the company's goals, to reveal the dependencies between the elements needed to calculate these KPIs, and to identify critical indicators and their effects on the process chain. The methodology is based on Systems Thinking and enables companies to gain a deep understanding of how KPI dependencies can be used to effectively control and improve their processes.
Production planning and control systems that regulate the Work-In-Process (WIP) in the production system are argued to increase throughput and reduce cycle times. This study assesses the performance of Kanban, Constant WIP (ConWIP), and a hybrid Kanban/ConWIP system as typically realized in real-life production lines with limited buffer space. A physical lab-scale system model of a production line was built, and a new digital twin framework for realizing production planning and control was implemented. Results indicate that production planning and control systems that regulate the WIP reduce the time it takes a job to pass through the production system. However, they reduce throughput and consequently increase the time a worker (capacity) spends with the job (processing and waiting). The term "cycle time" may refer to either quantity in the literature. The results highlight a trade-off with important implications for practice, since management must decide which cycle time matters most in their shop. This study further shows how production planning and control systems can be implemented using new technology, and it highlights the potential of lab-scale system models as alternatives to computer simulations.
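The effect of a WIP cap on the time a job spends in the line can be sketched with a deterministic flow-shop recurrence (a hypothetical toy model, not the study's lab-scale system or digital twin; intermediate buffers are assumed unlimited):

```python
def simulate_conwip(proc_times, wip_cap, n_jobs):
    """Deterministic tandem line under a ConWIP release rule: job j may
    enter only after job j - wip_cap has left the line. Returns
    (throughput, average flow time). Infinite intermediate buffers assumed."""
    m = len(proc_times)
    done = [[0.0] * m for _ in range(n_jobs)]  # completion time per station
    finish = [0.0] * n_jobs                    # time each job leaves the line
    for j in range(n_jobs):
        release = finish[j - wip_cap] if j >= wip_cap else 0.0
        t = release
        for k in range(m):
            prev_done = done[j - 1][k] if j > 0 else 0.0
            t = max(t, prev_done) + proc_times[k]  # wait for station, then process
            done[j][k] = t
        finish[j] = t
    flow = sum(finish[j] - (finish[j - wip_cap] if j >= wip_cap else 0.0)
               for j in range(n_jobs)) / n_jobs
    return n_jobs / finish[-1], flow
```

On a balanced line, a tight WIP cap cuts the flow time sharply while leaving throughput untouched — the trade-off only bites once the cap starves the bottleneck, which is the distinction between the two "cycle times" the abstract discusses.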
Information
Address: Straße der Nationen 62, 09111 Chemnitz, Saxony, Germany
Website: http://www.tu-chemnitz.de/