Recent publications
Product engineering is a highly complex process that faces many challenges. To meet today's societal challenges, such as climate change, energy production and demand, and demographic shifts, and to achieve economic success for companies, the technical solutions in products and systems must evolve and advance. In this context, university research provides a potent source of cutting-edge technologies and knowledge that companies can use as input to advance their solutions. However, many challenges and barriers hinder the transfer of new technologies, concepts, and knowledge from research to corporate engineering. In this article, we present 22 recommendations to improve the usability of research results for the activities and processes of corporate product engineering. These recommendations address three relevant target groups: (I) corporate engineers and companies, (II) researchers and research facilities, and (III) funding agencies and (research) policymakers. First, we offer recommendations for corporate engineers and companies to integrate research results more efficiently. Second, we present recommendations for researchers and research facilities to support the promotion and transferability of their research results. Third, we provide recommendations for funding agencies and (research) policymakers to positively influence the usability of research results in corporate engineering. We expect that implementing one or more of our recommendations will enhance the efficiency of knowledge and technology transfer into corporate product engineering. We anticipate that this will lead to faster technological advancement for companies and to societal benefits by addressing today's major challenges more effectively.
Predicting the physicochemical properties of ionizable solutes, including solubility and lipophilicity, is of broad significance. Such predictions rely on the accurate determination of solvation free energies for ions. However, the limited availability of high-quality reference data poses a challenge in developing accurate, inexpensive computational prediction methods. In this study, we address both issues of data quality and availability. We present three databases and models related to ionic phenomena: (1) 8,241 pKa datapoints across 8 solvents, (2) 5,536 gas-phase acidities from DLPNO-CCSD(T) QM calculations, and (3) 6,090 solvation free energies of anions across 8 solvents obtained from a thermodynamic cycle. We also report 6,088 solvation free energies of neutral conjugate solutes computed using the COSMO-RS method. The pKa data were obtained from the iBonD database, cleaned, and combined with a separate compilation of trustworthy reference pKa data. Gas-phase acidities were computed for most of the acids present in the pKa corpus. Leveraging these data, we compiled values for solvation free energies of anions. We then trained several graph neural network models, which can be used as an alternative to QM approaches to quickly estimate these properties. The pKa and gas-phase acidity models accept reaction SMILES strings of the acid dissociation as inputs, whereas the solvation energy model accepts the SMILES string of the anion. Our microscopic pKa model achieves good accuracy, with an overall test mean absolute error of 0.58 units on unseen solutes and 0.59 on the SAMPL7 challenge (the lowest error so far among multi-solvent models). Our gas-phase acidity model had mean absolute errors slightly above 3 kcal/mol when evaluated against experimental data. The anionic solvation free energy model had mean absolute errors of less than 3 kcal/mol in several test evaluations, comparable to (though less reliable than) several widely-used QM-based solvation models. The models and data are free and publicly available.
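For orientation, the thermodynamic cycle behind item (3) is the standard one linking solution-phase acidity to gas-phase acidity and solvation free energies; in one common form (our notation, with standard-state corrections omitted for brevity) it reads

\[
RT\ln(10)\,\mathrm{p}K_a \;=\; \Delta G_{\mathrm{gas}} \;+\; \Delta G_{\mathrm{solv}}(\mathrm{A}^-) \;+\; \Delta G_{\mathrm{solv}}(\mathrm{H}^+) \;-\; \Delta G_{\mathrm{solv}}(\mathrm{HA}),
\]

so that, given an experimental pKa, a computed gas-phase acidity \( \Delta G_{\mathrm{gas}} \), a reference value for \( \Delta G_{\mathrm{solv}}(\mathrm{H}^+) \), and a COSMO-RS solvation free energy for the neutral acid HA, the anionic term \( \Delta G_{\mathrm{solv}}(\mathrm{A}^-) \) can be isolated.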
Research in sub-Saharan Africa has shown that most municipal waste composting initiatives fail because the lack of biowaste source segregation leads to poor-quality compost. Until now, very few biowaste source-segregation initiatives have been carried out in this part of the world. This study aimed to assess the biowaste sorting efficiency and the attitude of households towards a pilot biowaste source-segregation system linked to a decentralized composting plant in Tiassalé. For this purpose, the impurity rate of source-segregated biowaste was monitored through the first year of implementation. A cross-sectional survey was then conducted to evaluate households' attitudes. The results showed that the average impurity rate in source-segregated biowaste was very low (1%). This finding was confirmed by laboratory analyses, which revealed very low heavy-metal contamination in the compost produced (0.2, 12.4, 7.1 and 15.5 mg kg⁻¹ DS for Cd, Cr, Ni and Pb, respectively). Regarding the acceptability of the source-segregation system, the results showed that the majority (75%) of the participants accepted the biowaste source-segregation system and almost half (47%) of them were ready to pay for such a collection service. In conclusion, this study revealed that providing households with the necessary municipal solid waste management infrastructure and raising awareness about biowaste source segregation are key to establishing such a collection system in Tiassalé and similar urban areas. The findings also demonstrated the effectiveness of a biowaste source-segregation system in producing high-quality compost.
We study strong linearizations and the uniqueness of preduals of locally convex Hausdorff spaces of scalar-valued functions. Strong linearizations are special preduals. A locally convex Hausdorff space F(Ω) of scalar-valued functions on a nonempty set Ω is said to admit a strong linearization if there are a locally convex Hausdorff space Y, a map δ : Ω → Y, and a topological isomorphism T : F(Ω) → Y′_b onto the strong dual of Y such that T(f) ∘ δ = f for all f ∈ F(Ω). We give sufficient conditions that allow us to lift strong linearizations from the scalar-valued to the vector-valued case, covering many previous results on linearizations, and use them to characterize the bornological spaces F(Ω) with (strongly) unique predual in certain classes of locally convex Hausdorff spaces.
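In display form, the defining property of a strong linearization reads

\[
T(f)\circ\delta \;=\; f, \qquad\text{equivalently}\qquad f(\omega) \;=\; \langle T(f), \delta(\omega)\rangle \quad\text{for all } f\in F(\Omega),\ \omega\in\Omega,
\]

so every point evaluation on F(Ω) factors through the single map δ.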
The phase problem is a well-known ill-posed reconstruction problem in coherent lensless microscopic imaging, where only the intensities of a complex wave field are measured by the detector and the phase information is lost. For the reconstruction of sharp images from holograms in a near-field experimental setting, it is crucial to solve the autofocus problem, i.e., to precisely estimate the Fresnel number of the forward model. Otherwise, the result is blurred, out-of-focus images that can also contain artifacts. In general, a simple distance measurement at the experiment is not sufficiently accurate, so the Fresnel number has to be fine-tuned prior to the actual reconstructions. This can be done manually or automatically by an estimation algorithm. To automate the process, as needed, e.g., for in-situ/operando experiments, various focus criteria have been studied widely in the literature, but they are subject to certain restrictions: they often rely on image analysis of the reconstructed image, making them sensitive to image noise, and they neglect algorithmic properties of the applied phase retrieval. In this paper, we propose a novel criterion, based on a model-matching approach, that improves autofocusing by also taking into account the underlying reconstruction algorithm, the forward model, and the measured hologram. We derive a common autofocusing framework based on a recent phase-retrieval approach and a downhill-simplex method for the automatic optimization of the Fresnel number. We further demonstrate the robustness of the framework on different data sets obtained at the nano-imaging endstation of P05 at PETRA III (DESY, Hamburg), operated by Helmholtz-Zentrum Hereon.
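The following minimal sketch illustrates the idea of refining the Fresnel number with a downhill-simplex search over a model-matching residual. It is not the paper's implementation: `propagate` is a toy Fresnel propagator in normalized coordinates and `reconstruct` a naive single-step back-propagation, both stand-ins for the calibrated forward model and the phase-retrieval approach used in the paper.

```python
# Minimal autofocus sketch: refine the Fresnel number by minimizing the
# mismatch between the measured hologram and the hologram re-simulated
# from the reconstruction (model matching), via downhill simplex.
import numpy as np
from scipy.optimize import minimize

def propagate(field, fresnel):
    """Toy Fresnel forward propagator (normalized frequencies)."""
    xi = np.fft.fftfreq(field.shape[0])
    XI, ETA = np.meshgrid(xi, xi, indexing="ij")
    kernel = np.exp(-1j * np.pi * (XI**2 + ETA**2) / fresnel)
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def reconstruct(hologram, fresnel):
    """Naive reconstruction: back-propagate the measured amplitude.
    Stands in for the paper's phase-retrieval approach."""
    return propagate(np.sqrt(hologram), -fresnel)

def criterion(x, hologram):
    """Model-matching residual at a trial Fresnel number."""
    fresnel = np.exp(x[0])                     # log scale keeps fresnel > 0
    obj = reconstruct(hologram, fresnel)
    model = np.abs(propagate(obj, fresnel))**2 # re-simulated intensities
    return np.linalg.norm(model - hologram)

# Given a measured hologram and a coarse estimate fr0 from the distance
# measurement, refine the Fresnel number with Nelder-Mead (downhill simplex):
hologram = np.ones((128, 128))                 # placeholder for measured data
fr0 = 1e-3
res = minimize(criterion, x0=[np.log(fr0)], args=(hologram,),
               method="Nelder-Mead")
print("refined Fresnel number:", np.exp(res.x[0]))
```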
Explaining and understanding AI-generated policies for control problems is crucial for the acceptance of such policies. Based on an optimisation challenge that took place at GECCO 2024, we describe different solutions for optimising generated policies for the well-known game 2048. These generated policies aim to be simpler to understand, and thus to have explainable decisions, in contrast to current solutions for 2048 such as neural-network-based models. Our approach uses only Boolean expressions in the policy, and the optimisation shows that such a policy has advantages over more complex policy variants: the optimisation produces better results in a shorter time than for non-Boolean expressions. Additionally, the Boolean policies are smaller in size and can be reduced even further by applying existing techniques for term rewriting and simplification. These simplifications, in turn, may aid in understanding the policy's decisions.
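As a toy illustration of what a purely Boolean policy can look like, the sketch below maps two invented board predicates to a move; the predicates and the rule are our own, for exposition only, and are not the evolved policies from the challenge.

```python
# Toy Boolean-expression policy for 2048 (illustrative, not from the paper).

def can_merge_left(board):
    """True if any row has two equal, non-empty, horizontally adjacent tiles."""
    return any(a == b != 0 for row in board for a, b in zip(row, row[1:]))

def max_in_corner(board):
    """True if the largest tile sits in the top-left corner."""
    return board[0][0] == max(v for row in board for v in row)

def policy(board):
    """A policy built only from Boolean tests, read top to bottom."""
    if can_merge_left(board):
        return "left"
    return "down" if max_in_corner(board) else "up"

board = [[2, 2, 0, 0],
         [4, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(policy(board))   # -> "left": the first row contains a mergeable pair
```

Because such a policy is just a small Boolean formula over board predicates, standard term rewriting and simplification can shrink it further, which is the reduction step mentioned above.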
The better we align AI models with our values, the easier we may make it to realign them with opposing values.
The finite element method is one of the most widely used computational methods in engineering and science. It provides approximate solutions to boundary value problems. The quality of these solutions critically depends on the underlying discretization, the so-called mesh. To optimize the mesh, adaptive refinement methods have been proposed in recent years that improve mesh quality over a series of iteration steps. Herein, we propose a novel deep learning architecture that can cut short this process of mesh optimization. The architecture exploits fundamental invariance and equivariance properties to keep the amount of training data modest. It can generate high-quality meshes for a given boundary value problem and a desired target approximation error in a direct, non-iterative way. We demonstrate the performance of our method by applying it to standard two-dimensional linear elasticity problems, where it generates meshes that reduce the solution error by 22.6% (median) compared to uniform meshes with the same computational demand.
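For contrast, here is a minimal runnable caricature of the classical estimate-mark-refine loop that such a direct method short-circuits; the "solve" step is replaced by piecewise-linear interpolation of u(x) = √x on [0, 1], so this illustrates adaptive refinement in general, not finite elements or the proposed network.

```python
# Toy 1D adaptive refinement: estimate per-element error, mark the worst
# elements, bisect them, and repeat. This is the iterative process that a
# direct, learned mesh generator would bypass.
import numpy as np

def adapt(u, nodes, n_steps=20, frac=0.3):
    for _ in range(n_steps):
        # ESTIMATE: midpoint interpolation error per element
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        eta = np.abs(u(mids) - 0.5 * (u(nodes[:-1]) + u(nodes[1:])))
        # MARK: the worst fraction of elements
        k = max(1, int(frac * len(eta)))
        marked = np.argsort(eta)[-k:]
        # REFINE: bisect the marked elements
        nodes = np.sort(np.concatenate([nodes, mids[marked]]))
    return nodes

nodes = adapt(np.sqrt, np.linspace(0.0, 1.0, 5))
print(len(nodes))   # grows with refinement
print(nodes[:5])    # nodes cluster near x = 0, where sqrt is steepest
```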
This study investigates the potential of additive-free extraction techniques to produce a proteolytically active yeast extract for use in the food industry. Brewer's spent yeast (BSY), a by-product of the brewing industry, is utilized as a feedstock, and thus a new route for its valorization is proposed. Four methods of releasing the yeast's intracellular components while maintaining their intrinsic bioactivity are investigated: thermal autolysis, ultrasonication, cell milling, and high-pressure homogenization. Thermal yeast autolysis resulted in the highest release of protease activity, with 2.45 ± 0.05 U/gdm after 3 h of incubation at 45 °C. However, autolysis is difficult to automate, in particular with respect to defining a stop criterion, due to the lack of in-line enzyme activity assays. While glass bead treatment gave the highest reproducibility, ultrasonication and high-pressure homogenization resulted in comparably high protease activities in the BSY extracts produced. Both methods, in the form of a cell mill and a high-pressure homogenizer, are cell disruption methods that are already employed on an industrial scale. It has now been demonstrated that these methods can be used to produce proteolytically active yeast extracts from a stream previously considered waste.
The Sustainability Nexus Analytics, Informatics, and Data (AID) Programme of the United Nations University (UNU) aims to provide information, data, and computational and analytical tools to support the sustainable management and long-term security of natural resources using a nexus approach. This paper introduces the Soil Health Module of the Sustainability Nexus AID Programme. Healthy soil is crucial for life on Earth: it is essential for ecosystem services and functioning, access to clean water, socioeconomic structure, biodiversity, and food security for the world's growing population. Healthy soils contribute to mitigating the effects of climate change and reduce the consequences of extreme events such as flooding and drought. They also influence the hydrologic cycle by regulating transpiration, water infiltration, and soil water evaporation, thereby affecting land–atmosphere interactions. The Soil Health Module aims to evolve into a central focal point, supporting a diverse array of stakeholders with state-of-the-art data and tools essential for soil health monitoring and projection. This paper discusses the importance of adopting a nexus approach to ensuring soil health, explores the AID tools currently at our disposal for quantifying and predicting soil health, and concludes with recommendations for future efforts and directions within the Sustainability Nexus AID Programme concerning soil health.
Flying ad-hoc networks (FANETs) offer a way to provide communication among unmanned aerial systems (UAS) without ground infrastructure and, hence, may provide redundancy if the main terrestrial communication systems fail. With a strongly growing UAS market, this redundancy is of particular importance, especially if the vehicles are operated over densely populated regions such as urban areas. To further develop FANET technology, it is therefore essential to understand and predict the performance requirements for such networks and the necessary systems as they arise from different applications. However, given the diversity of cities, custom-made simulation models often lack generality. For this reason, we introduce an approach to generate randomized cities and to model UAS traffic for selected applications in order to assess different performance metrics of the FANET. The approach comprises the creation of randomized city geometries, the derivation of UAS traffic demand on these geometries, and the simulation of the flight of each individual vehicle. To showcase the capabilities of the approach, we generate an exemplary city and obtain characteristic datasets of the UAS traffic, which are then used to assess FANET performance for a selected application using a simplified network model. We show that FANET performance is strongly influenced by the extent of the demand, the distribution of demand over time, vehicle speed, and the delivery network geometry.
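A toy sketch of the first two stages of such a pipeline (randomized city geometry, then a demand sample on it) might look as follows; the distributions and parameters are invented for exposition and are not those of the paper.

```python
# Toy sketch: random city geometry, then origin-destination demand on it.
import random

random.seed(42)

# Stage 1: a randomized "city" as a grid of blocks with building heights.
CITY_SIZE = 10
city = [[random.uniform(5.0, 60.0)            # building height in metres
         for _ in range(CITY_SIZE)] for _ in range(CITY_SIZE)]

# Stage 2: sample delivery demand as origin-destination block pairs,
# biased toward the city centre as a crude stand-in for a demand model.
def sample_block():
    b = int(random.gauss(CITY_SIZE / 2, 2))
    return min(CITY_SIZE - 1, max(0, b))

demand = [((sample_block(), sample_block()),   # origin block
           (sample_block(), sample_block()))   # destination block
          for _ in range(100)]
print(demand[:3])
```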
Chemical recycling of polymer waste is a promising strategy to reduce the chemical industry's dependency on fossil resources and to curb the increasing quantities of plastic waste. A common challenge in chemical recycling processes is the costly downstream separation of reaction products. For polybutylene succinate (PBS), no effective recycling concept has been implemented so far. In this work, we demonstrate a promising recycling concept for PBS that avoids costly purification steps. We developed a sequential process coupling enzymatic hydrolysis of PBS with an electrochemical reaction step. The enzymatic step efficiently hydrolyses PBS into its monomers, succinic acid and 1,4-butanediol. The electrochemical step converts succinic acid into ethene as the final product. Ethene is easily separated from the reaction solution as a gaseous product, together with hydrogen as a secondary product, while 1,4-butanediol remains in the aqueous solution. Both reaction steps operate in aqueous solvent under benign reaction conditions. Furthermore, the influence of electrolyte components on the electrochemical step was unraveled by molecular dynamics simulations. The final coupled process achieves a total ethene productivity of 91 µmol/cm² over a duration of 8 hours, with 1,110 µmol/cm² of hydrogen and 77% of the 1,4-butanediol regained as valuable secondary products.
We present an ordinary state-based formulation of the equation of motion of an elastic peridynamic solid, where the response function state is derived from a free energy functional that depends nonlinearly on measures of both stretch and volume changes and contains two peridynamic coefficients. To determine these coefficients, we assume that the quadratic form of this functional near the undistorted natural state of the peridynamic material corresponds to the quadratic strain energy of the classical linear elasticity at an interior point of the peridynamic solid. Next, we consider a particular class of homogeneous peridynamic cylindrical solids and expand the displacement field asymptotically in terms of the thickness coordinate, following classical arguments of thin plate theory. We then reduce the equation of motion to a system of equations for the determination of the expansion coefficients.
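Schematically, the thickness-coordinate expansion referred to here has the classical thin-plate form (our notation, not necessarily the authors'):

\[
\mathbf{u}(x_1, x_2, z, t) \;=\; \sum_{k \ge 0} z^{k}\,\mathbf{u}^{(k)}(x_1, x_2, t),
\]

where z is the coordinate across the thickness; substituting this ansatz into the peridynamic equation of motion and collecting powers of z yields the system of equations for the coefficients \( \mathbf{u}^{(k)} \).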