Fraunhofer Institute for Industrial Mathematics
Recent publications
A design optimization task in the setting of fluid–structure interaction (FSI) with a periodic filter medium is considered. On the microscale, the thin filter has a small in-plane period and thickness ε and consists of flexural yarns in contact. Its topology, as well as its linear material properties, depend on a discrete design variable. A desired flow-induced steady-state displacement profile of the filter is to be obtained by optimal choice of this variable. The governing state system is a one-way coupled, homogenized and dimension-reduced FSI model, attained in the scale limit ε → 0. The design variable enters the arising macroscopic model parameters, namely the filter's homogenized stiffness tensors and its permeability tensor. The latter are obtained by solving cell problems on the smallest periodic unit of the filter. The existence of optimal solutions is verified by proving the continuous dependence of these macroscopic model parameters, as well as of the design-to-state operator, on changes of the design. A numerical optimization example is provided.
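For context, in periodic homogenization the permeability tensor is classically obtained from Stokes cell problems of the following generic form (a textbook formulation; the paper's precise filter geometry, scaling, and boundary conditions may differ):

```latex
-\Delta_y w^k + \nabla_y \pi^k = e_k, \qquad \nabla_y \cdot w^k = 0 \quad \text{in } Y_f,
\qquad w^k = 0 \ \text{on } \partial Y_s,
\qquad K_{jk} = \int_{Y_f} w^k_j \,\mathrm{d}y,
```

where \(Y_f\) denotes the fluid part of the periodic unit cell, \(Y_s\) the solid (yarn) part, and \(e_k\) the k-th unit vector; the cell velocity fields \(w^k\) are averaged to form the permeability tensor \(K\).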
Full characterization of electric-field waveforms in amplitude and phase is achieved across the terahertz to visible spectral range through interaction with an optical pulse shorter than a half-cycle period via the Pockels (linear electro-optic) effect. This technique of electro-optic sampling has become an indispensable tool in various areas, including ultrafast pump-probe, time-domain and frequency-comb spectroscopies, quantum optics, high-harmonic generation, and attosecond science, and holds great promise for further advances. Not only does it enable spectroscopic measurements with record dynamic range and temporal resolution, along with massively parallel real-time spectral data acquisition, but its remarkable sensitivity also allows the detection of vacuum fluctuations, i.e., “zero-point motion” of electric fields, profoundly impacting our understanding of the fundamental laws of nature.
Current quantum computers can only solve optimization problems of a very limited size. For larger problems, decomposition methods are required in which the original problem is broken down into several smaller sub-problems. These are then solved on the quantum computer and their solutions are recombined into a final solution for the original problem. Often, these decomposition methods do not take the specific problem structure into account. In this paper, we present a tailored method using a divide-and-conquer strategy to solve the 2-way number partitioning problem (NPP) with a large number of variables. The idea is to perform a specialized decomposition into smaller NPPs, which are solved on a quantum computer, and then recombine the results into another small auxiliary NPP. Solving this auxiliary problem yields an approximate solution of the original larger problem. We experimentally verify that our method allows us to solve NPPs with over a thousand variables using the D-Wave Advantage quantum annealer (Advantage_system6.4).
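The divide-and-conquer idea can be sketched classically. In the following minimal illustration, a brute-force solver stands in for the quantum annealer, and the chunking and recombination details are simplified assumptions rather than the paper's exact procedure:

```python
from itertools import product

def solve_npp(nums):
    """Brute-force 2-way NPP: split nums into two subsets minimizing the
    absolute difference of their sums (stand-in for the quantum annealer).
    Returns (signs, difference), where signs[i] = +1/-1 assigns nums[i]
    to one of the two subsets and the signed sum is normalized >= 0."""
    best_signs, best_diff = [], float("inf")
    for signs in product((1, -1), repeat=len(nums)):
        d = sum(s * n for s, n in zip(signs, nums))
        if abs(d) < best_diff:
            best_signs, best_diff = list(signs), abs(d)
            if d < 0:  # normalize so the signed sum is nonnegative
                best_signs = [-s for s in best_signs]
    return best_signs, best_diff

def divide_and_conquer_npp(nums, chunk_size=8):
    """Hypothetical sketch of the strategy: solve small NPPs per chunk,
    then recombine their residual differences via one auxiliary NPP that
    decides which chunk partitions to flip."""
    chunks = [nums[i:i + chunk_size] for i in range(0, len(nums), chunk_size)]
    sub_solutions = [solve_npp(c) for c in chunks]
    # Auxiliary NPP over the per-chunk residual differences.
    residuals = [diff for _, diff in sub_solutions]
    flips, total_diff = solve_npp(residuals)
    # Assemble a global assignment: flipping a chunk swaps its two subsets.
    assignment = []
    for (signs, _), flip in zip(sub_solutions, flips):
        assignment.extend(s * flip for s in signs)
    return assignment, total_diff
```

Because each chunk's residual enters the auxiliary NPP with a sign, flipping a chunk's partition lets its residual cancel against those of other chunks, which is what makes the recombination step more than a naive concatenation.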
Exploring the potential of quantum hardware for enhancing classical and real-world applications is an ongoing challenge. This study evaluates the performance of quantum and quantum-inspired methods compared to classical models for crack segmentation. Using annotated grayscale image patches of concrete samples, we benchmark a classical mean Gaussian mixture technique, a quantum-inspired fermion-based method, Q-Seg (a quantum-annealing-based method), and a U-Net deep learning architecture. Our results indicate that quantum-inspired and quantum methods offer a promising alternative for image segmentation, particularly for complex crack patterns, and could be used in near-term applications.
We introduce an approach to computational homogenization which unites the accuracy of interface-conforming finite elements (FEs) and the computational efficiency of methods based on the fast Fourier transform (FFT) for two-dimensional thermal conductivity problems. FFT-based computational homogenization methods have been shown to solve multiscale problems in solid mechanics effectively. However, the obtained local solution fields lack accuracy in the vicinity of material interfaces, and simple fixes typically interfere with the numerical efficiency of the solver. In the work at hand, we identify the extended finite element method (X-FEM) with modified absolute enrichment as a suitable candidate for an accurate discretization and design an associated fast Lippmann-Schwinger solver. We implement the concept for two-dimensional thermal conductivity and demonstrate the advantages of the approach with dedicated computational experiments.
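For orientation, the classical fixed-point (Moulinec–Suquet-type) scheme underlying FFT-based homogenization can be sketched for the 2D thermal conductivity cell problem as follows. This is a plain-voxel reference scheme, not the X-FEM-enriched Lippmann-Schwinger solver developed in the paper:

```python
import numpy as np

def fft_homogenize_conductivity(kappa, E, kappa0=None, tol=1e-8, max_iter=500):
    """Basic fixed-point scheme for the periodic thermal conductivity cell
    problem in 2D: scalar per-pixel conductivity kappa (N x N), prescribed
    mean temperature gradient E. Returns the local gradient field g of
    shape (2, N, N) satisfying g = E - Gamma0 * ((kappa - kappa0) g)."""
    N = kappa.shape[0]
    if kappa0 is None:
        kappa0 = 0.5 * (kappa.min() + kappa.max())  # reference medium
    xi = np.fft.fftfreq(N) * N                      # integer frequencies
    XI = np.stack(np.meshgrid(xi, xi, indexing="ij"))   # (2, N, N)
    xi2 = (XI ** 2).sum(axis=0)
    xi2[0, 0] = 1.0   # avoid division by zero at the zero frequency
    g = np.broadcast_to(np.array(E)[:, None, None], (2, N, N)).copy()
    for _ in range(max_iter):
        tau = (kappa - kappa0) * g                  # polarization field
        tau_hat = np.fft.fft2(tau)
        # Green operator: (Gamma0 tau)^hat = xi (xi . tau_hat) / (kappa0 |xi|^2)
        proj = (XI * tau_hat).sum(axis=0) / (kappa0 * xi2)
        g_new_hat = -XI * proj
        g_new_hat[:, 0, 0] = np.array(E) * N * N    # enforce the mean gradient
        g_new = np.real(np.fft.ifft2(g_new_hat))
        if np.max(np.abs(g_new - g)) < tol:
            g = g_new
            break
        g = g_new
    return g
```

The accuracy issue the paper addresses shows up in exactly this kind of scheme: `kappa` jumps between voxels, so the reconstructed `g` oscillates near material interfaces.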
Simulation and simulation-assisted methods have a large impact on product development in many industrial fields. Multibody dynamics (MBD) simulation, for instance, is an established method used for durability analysis and energy efficiency calculations in vehicle engineering, based on its ability to provide accurate prediction of interaction forces. In the domain of off-road vehicles and heavy machinery, such considerations need to be expanded by a method to model the soil and its interaction with the vehicle, i.e., the soil–tool interaction, as the resulting forces may drastically influence the durability and energy efficiency of the machine. Therefore, the model must be chosen carefully to maintain the high accuracy in the force prediction for the soil–tool interaction, as required for reliable and robust product development. In this contribution, we present a workflow to tackle this topic, based on a cosimulation scheme between an MBD-based vehicle model and a particle simulation realized in the Discrete Element Method (DEM). The simulated soil is parametrized and validated by matching simulation results from a virtual experiment with measurement data from real-world soil laboratory experiments such as the triaxial compression test. Using this process, the applicability and performance of the numerical methods can be determined.
The decline of insect biomass, including pollinators, represents a significant ecological challenge, impacting both biodiversity and ecosystems. Effective monitoring of pollinator habitats, especially floral resources, is essential for addressing this issue. This study connects drone and deep learning technologies to their practical application in ecological research and focuses on simplifying the application of these technologies. Updating an object detection toolbox to TensorFlow (TF) 2 enhanced performance and ensured compatibility with newer software packages, facilitating access to multiple object recognition models: Faster Region-based Convolutional Neural Network (Faster R-CNN), Single-Shot Detector (SSD), and EfficientDet. The three object detection models were tested on two datasets of UAV images of flower-rich grasslands to evaluate their application potential in practice. A practical guide for biologists to apply flower recognition to Unmanned Aerial Vehicle (UAV) imagery is also provided. The results showed that Faster R-CNN had the best overall performance with a precision of 89.9% and a recall of 89%, followed by EfficientDet, which excelled in recall but at a lower precision. Notably, EfficientDet demonstrated the lowest model complexity, making it a suitable choice for applications requiring a balance between efficiency and detection performance. Challenges remain, such as detecting flowers in dense vegetation and accounting for environmental variability.
This paper presents a novel approach to quantum architecture search by integrating the techniques of ZX-calculus with Genetic Programming (GP) to optimize the structure of parameterized quantum circuits employed in quantum machine learning (QML). Recognizing the challenges in designing efficient quantum circuits for QML, we propose a GP framework that utilizes mutations defined via ZX-calculus, a graphical language that can simplify visualizing and working with quantum circuits. Our methodology focuses on evolving quantum circuits with the aim of enhancing their capability to approximate functions relevant in various machine learning tasks. We introduce several mutation operators inspired by the transformation rules of ZX-calculus and investigate their impact on the learning efficiency and accuracy of quantum circuits. The empirical analysis involves a comparative study where these mutations are applied to a diverse set of quantum regression problems, measuring performance metrics such as the percentage of valid circuits after the mutation, improvement of the objective, and circuit depth and width. Our results indicate that certain ZX-calculus-based mutations perform significantly better than others for quantum architecture search (QAS) in all metrics considered. They suggest that ZX-diagram-based QAS results in shallower circuits and more uniformly allocated gates than crude genetic optimization based on the circuit model. The code used for the numerical experiments is open source and can be found at https://gitlab.cc-asp.fraunhofer.de/itwm-fm-qc-public/cvqa.
Alleviating high workloads for teachers is crucial for continuous high-quality education. To evaluate whether Large Language Models (LLMs) can alleviate this problem in the quantum computing domain, we conducted two complementary studies exploring the use of GPT-4 to automatically generate tips for students. (1) A between-subjects survey in which students (N = 46) solved four multiple-choice quantum computing questions with either the help of expert-created or LLM-generated tips. To correct for possible biases, we additionally introduced two deception conditions. (2) Experienced educators and students (N = 23) directly compared the LLM-generated and expert-created tips. Our results show that the LLM-generated tips were significantly more helpful and pointed better towards relevant concepts, while also giving away more of the answers. Furthermore, we found that participants in the first study performed significantly better in answering the quantum computing questions when given tips labeled as LLM-generated, even if they were expert-created. This points towards a placebo effect induced by the participants' biases for LLM-generated content. Ultimately, we conclude that LLM-generated tips can be used instead of expert tips to support the teaching of quantum computing basics.
Data-driven or machine learning (ML) approaches have already achieved significant success in many engineering areas, including the fatigue assessment of industrial parts and structures. Machine learning approaches work well under the common assumption that the training data covers the relevant feature space of the application. Rebuilding models or establishing new databases for similar feature spaces requires high effort. In such cases, knowledge transfer or transfer learning can be used. In this study, a transfer learning approach is used to determine the fatigue life of welded steel joints (target task) with an ML algorithm that is trained on non-welded steel specimens. Twenty-two fatigue test data series were used. The results of the transfer learning approach were compared with a conventional machine learning approach that was also trained on data of welded joints. Furthermore, the results were compared to an advanced analytical approach (IBESS) for the fracture-mechanics-based fatigue life assessment of welded joints and to fatigue strength values from recommendations.
Measuring terahertz waveforms in terahertz spectroscopy often relies on electro-optic sampling employing a ZnTe crystal. Although the nonlinearities in such zincblende semiconductors induced by intense terahertz pulses have been studied at optical frequencies, a quantitative study of nonlinearities in the terahertz regime has not been reported. In this work, we investigate the nonlinear response of ZnTe in the terahertz frequency region utilizing time-resolved terahertz-pump terahertz-probe spectroscopy. We find that the interaction of two co-propagating terahertz pulses in ZnTe leads to a nonlinear polarization change which modifies the electro-optic response of the medium at terahertz frequencies. We present a model for this polarization that showcases the second-order nonlinear behavior. We also determine the magnitude of the third-order susceptibility of ZnTe at terahertz frequencies, χ⁽³⁾(ω_THz). These results clarify the interactions in ZnTe at terahertz frequencies, with implications for measurements of intense terahertz fields using electro-optic sampling.
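The described behavior fits the textbook expansion of the nonlinear polarization, in which mixing terms between two co-propagating fields modify the probe response (a generic form, not the paper's specific model):

```latex
P(t) = \varepsilon_0\left(\chi^{(1)}E(t) + \chi^{(2)}E(t)^2 + \chi^{(3)}E(t)^3 + \dots\right),
\qquad E(t) = E_\text{pump}(t) + E_\text{probe}(t),
```

so that, for example, the \(\chi^{(2)}\) term contains a cross term \(2\varepsilon_0\chi^{(2)}E_\text{pump}E_\text{probe}\) in which the pump field effectively modulates the polarization response seen by the probe.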
Methods of artificial intelligence (AI) and especially machine learning (ML) have been growing ever more complex, and at the same time have more and more impact on people’s lives. This leads to explainable AI (XAI) manifesting itself as an important research field that helps humans to better comprehend ML systems. In parallel, quantum machine learning (QML) is emerging with the ongoing improvement of quantum computing hardware combined with its increasing availability via cloud services. QML enables quantum-enhanced ML in which quantum mechanics is exploited to facilitate ML tasks, typically in the form of quantum-classical hybrid algorithms that combine quantum and classical resources. Quantum gates constitute the building blocks of gate-based quantum hardware and form circuits that can be used for quantum computations. For QML applications, quantum circuits are typically parameterized and their parameters are optimized classically such that a suitably defined objective function is minimized. Inspired by XAI, we raise the question of the explainability of such circuits by quantifying the importance of (groups of) gates for specific goals. To this end, we apply the well-established concept of Shapley values. The resulting attributions can be interpreted as explanations for why a specific circuit works well for a given task, improving the understanding of how to construct parameterized (or variational) quantum circuits, and fostering their human interpretability in general. An experimental evaluation on simulators and two superconducting quantum hardware devices demonstrates the benefits of the proposed framework for classification, generative modeling, transpilation, and optimization. Furthermore, our results shed some light on the role of specific gates in popular QML approaches.
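The gate-attribution idea can be illustrated with the classical Shapley formula. In the sketch below, a made-up `toy_accuracy` function stands in for the circuit's actual objective, and the gate labels are hypothetical; the paper's evaluation on real hardware necessarily uses approximations rather than this exact enumeration:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's attribution is the weighted
    average of its marginal contribution value(S + {p}) - value(S) over
    all coalitions S of the remaining players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Hypothetical stand-in for a circuit-quality metric: a subset of gates
# is scored by a made-up accuracy, NOT an actual QML objective.
gates = ["RX(q0)", "CNOT(q0,q1)", "RY(q1)"]

def toy_accuracy(gate_subset):
    score = 0.5
    if "RX(q0)" in gate_subset:
        score += 0.1
    if "CNOT(q0,q1)" in gate_subset and "RY(q1)" in gate_subset:
        score += 0.2  # the entangling pair only helps jointly
    return score

attributions = shapley_values(gates, toy_accuracy)
```

By the efficiency property, the attributions sum to the gap between the full circuit's score and the empty circuit's score, and jointly acting gates (here the CNOT/RY pair) share their contribution evenly.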
Computing the approximation quality is a crucial step in every iteration of sandwiching algorithms (also called Benson-type algorithms) used for the approximation of convex Pareto fronts, sets, or functions. Two quality indicators often used in these algorithms are the polyhedral gauge and the epsilon indicator. In this article, we develop an algorithm to compute the polyhedral gauge and epsilon indicator approximation quality more efficiently. We derive criteria that assess whether the distance between a vertex of the outer approximation and the inner approximation needs to be recalculated. We interpret these criteria geometrically and compare them to a criterion developed by Dörfler et al. for a different quality indicator using convex optimization theory. For the bi-criteria case, we show that only two linear programs need to be solved in each iteration. We show that for more than two objectives, no constant bound on the number of linear programs to be checked can be derived. Numerical examples illustrate that incorporating the developed criteria into the sandwiching algorithm leads to a reduction in the approximation time of up to 94% and that the approximation time increases more slowly with the number of iterations and the number of objective space dimensions.
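For orientation, the distance from an outer-approximation vertex to the inner approximation in a fixed direction d (minimization convention) can be cast as one small linear program. This generic formulation, written with `scipy.optimize.linprog`, omits the gauge-specific details and the recalculation criteria developed in the article:

```python
import numpy as np
from scipy.optimize import linprog

def distance_to_inner(v, d, inner_pts):
    """Smallest t >= 0 with v + t*d in conv(inner_pts) + R^m_+, i.e. the
    shift along d needed until the outer vertex v is dominated by the
    inner approximation. Variables: [t, lambda_1..lambda_k, s_1..s_m]
    with v + t*d = sum_i lambda_i p_i + s, lambda in the simplex, s >= 0."""
    P = np.asarray(inner_pts, dtype=float)          # shape (k, m)
    k, m = P.shape
    c = np.zeros(1 + k + m)
    c[0] = 1.0                                      # minimize t
    A_eq = np.zeros((m + 1, 1 + k + m))
    A_eq[:m, 0] = d                                 # t*d ...
    A_eq[:m, 1:1 + k] = -P.T                        # ... - P^T lambda ...
    A_eq[:m, 1 + k:] = -np.eye(m)                   # ... - s = -v
    A_eq[m, 1:1 + k] = 1.0                          # sum(lambda) = 1
    b_eq = np.concatenate([-np.asarray(v, dtype=float), [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (1 + k + m))
    return res.x[0]
```

A small usage example: with inner vertices (0, 3), (3, 0), (1, 1) and outer vertex (0.5, 0.5), the shift along d = (1, 1) is 0.5, reached at the inner vertex (1, 1). The article's contribution is precisely to avoid re-solving such LPs for every outer vertex in every iteration.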
We present a photonic ultrawideband frequency extension for a commercial vector network analyzer (VNA) to perform free-space measurements in a frequency range from 70 to 520 GHz with hertz-level resolution. The concept is based on the synchronization of continuous-wave (CW) lasers with highly frequency-stable electronic emitter sources as a reference. The use of CW photomixers with bandwidths up to several terahertz (THz) allows for the straightforward expansion of the covered frequency range to 1 THz or even beyond. This can be achieved, for instance, by cascading additional lasers within the synchronization scheme. Consequently, the necessity for additional frequency extender modules, as seen in current state-of-the-art VNAs, is eliminated, thereby reducing the complexity and cost of the system significantly. To showcase the capabilities of the photonic vector network analyzer (PVNA) extender concept, we conducted S21 transmission measurements of various cross-shaped bandpass filters and waveguide-coupled high-frequency (HF) components. In addition, the analyzed magnitude and phase data were either compared to electromagnetic (EM) simulations or referenced against data obtained from commercial electronic frequency extender modules.
With the increasing impact of global warming and climate change, governments must take action to reduce CO₂ output and to encourage more environmentally friendly industrial processes. As the use of climate scenarios is the common method for illustrating possible consequences of climate change, we assume that it will also be the basis for political decisions with respect to future sustainability requirements on companies and thus (directly or indirectly) for sustainability constraints for life insurance companies and pension funds. Hence, we assume that bonus/malus measures by the government will already be laid out conditioned on the finally realized climate scenario. We will therefore present a corresponding framework for optimal investment under agreed government measures for the realization of future climate scenarios. This framework is particularly suited for strategies of large institutional investors such as life insurances. It will also be illustrated by explicit examples.
192 members
Karl-Heinz Küfer
  • Department of Optimization
Philipp Süss
  • Department of Optimization
Alexander Scherrer
  • Department of Optimization
Walter Arne
  • Department of Transport Processes
Peter Klein
  • Department of Optimization
Information
Address
Kaiserslautern, Germany
Head of institution
Prof. Dr. Dieter Prätzel-Wolters