Book · PDF available

Engineering Design Optimization

Authors: Joaquim R. R. A. Martins (University of Michigan) and Andrew Ning (Brigham Young University)

Abstract

Based on course-tested material, this rigorous yet accessible graduate textbook covers both fundamental and advanced optimization theory and algorithms. It covers a wide range of numerical methods and topics, including both gradient-based and gradient-free algorithms, multidisciplinary design optimization, and optimization under uncertainty, with instruction on how to determine which algorithm should be used for a given application. It also provides an overview of models and how to prepare them for use with numerical optimization, including derivative computation. Over 400 high-quality visualizations and numerous examples facilitate understanding of the theory, and practical tips address common issues encountered in engineering design optimization and how to resolve them. Numerous end-of-chapter homework problems, progressing in difficulty, help put knowledge into practice. Accompanied online by a solutions manual for instructors and source code for problems, this textbook is ideal for a one- or two-semester graduate course on optimization in aerospace, civil, mechanical, electrical, and chemical engineering departments.
ENGINEERING DESIGN OPTIMIZATION
Joaquim R. R. A. Martins, University of Michigan, Ann Arbor
Andrew Ning, Brigham Young University, Utah
For more information, and to order, visit:
www.cambridge.org/9781108833417
CONTENTS
1. Introduction
2. A short history of optimization
3. Numerical models and solvers
4. Unconstrained gradient-based optimization
5. Constrained gradient-based optimization
6. Computing derivatives
7. Gradient-free optimization
8. Discrete optimization
9. Multiobjective optimization
10. Surrogate-based optimization
11. Convex optimization
12. Optimization under uncertainty
13. Multidisciplinary design optimization
A. Mathematics background
B. Linear solvers
C. Quasi-Newton methods
D. Test problems
October 2021 | 246 × 189 mm | 651 pp
Hardback | ISBN 9781108833417
Price $115 / £89.99
A free PDF of the complete book is available at:
https://mdobook.github.io
... The process of executing the design and test workflows is shown in Figure 8. This manual process is characterized by decisions at each stage based on intuition or experience [40]. First, in Figure 8, the system is specified in the MBSE system model. ...
... A Pareto front contains all variants that do not dominate each other with respect to their objectives. If two variants on the Pareto front are compared, one variant will be better in one objective and worse in the other [40]. ...
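The non-domination test in this excerpt is easy to express directly in code. The following is a minimal illustrative sketch (not taken from the cited paper) that filters a set of two-objective variants down to its Pareto front; it assumes both objectives are minimized and uses made-up variant data:

```python
import numpy as np

def pareto_front(objectives):
    """Return the indices of non-dominated points.

    objectives: (n_points, n_objectives) array; all objectives are minimized.
    A point is dominated if another point is no worse in every objective
    and strictly better in at least one.
    """
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(objectives[j] <= objectives[i]) \
                    and np.any(objectives[j] < objectives[i]):
                keep[i] = False
                break
    return np.where(keep)[0]

# Example: four variants with two conflicting objectives
variants = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
print(pareto_front(variants))  # [0 1 2]; variant 3 is dominated by variant 1
```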
Article
Full-text available
Developing modern products involves numerous domains (controlling, production, engineering, etc.) and disciplines (mechanics, electronics, software, etc.). The products have become increasingly complex while their time to market has decreased. These challenges can be overcome by Model-Based Systems Engineering (MBSE), where all development data (requirements, architecture, etc.) is stored and linked in a system model. In an MBSE system model, product requirements at the system level can lead to numerous technical variants with conflicting objectives at the parameter level. To determine the best technical variants or tradeoffs, Multidisciplinary Analysis and Optimization (MDAO) is already being used today. Linking MBSE and MDAO promises mutually beneficial synergies that have not yet been fully exploited. In this paper, a new approach to link MBSE and MDAO is proposed. The novelty compared to existing approaches is the reuse of existing MBSE system model data. Models developed during upstream design and test activities and already linked to the MBSE system model were integrated into an MDAO problem. The benefits are reduced initial and reconfiguration effort and the resolution of the MDAO black-box behavior. For the first time, the MDAO problem was modeled as a workflow using activity diagrams in the MBSE system model. For a given system architecture, this workflow finds the design variable values that allow for the best tradeoff of objectives. The structure and behavior of the workflow were formally described in the MBSE system model with SysML. The presented approach for linking MBSE and MDAO is demonstrated using an example of an electric coolant pump.
... Then, a line-search algorithm is used to obtain an 'optimal value' along this search direction. Depending on the line-search algorithm, this 'optimal value' either provides a new J that is smaller than the baseline J (e.g., the sufficient decrease condition), is based on how much the magnitude of its gradient, |∇J|, is reduced (e.g., the sufficient curvature condition), or a combination of both, e.g., the strong Wolfe conditions (more information about these algorithms can be found in Martins & Ning, 2021). In addition, the FORTRAN-based code solving the Navier-Stokes and adjoint equations communicates with the SciPy minimize package, an open-source Python library (Jones et al., 2001; Oliphant, 2007; Millman & Aivazis, 2011) providing many optimization solvers. ...
... then α = α^(n). The inequality (2.54) is called the sufficient decrease condition or Armijo condition (e.g., see Martins & Ning, 2021), and a user-defined parameter μ_ls = 10^-4 is used. Then, the new f is updated according to (2.53) and written to disk, and a new optimization iteration begins (next optimization iteration n + 1) until the optimization procedure converges. ...
... Martins & Ning, 2021, for more details). Depending on a user-defined tolerance, either the adjoint-based optimization is terminated, or a new optimization iteration restarts to obtain a new search direction. In this work, the following gradient-based optimization algorithms have been implemented as an in-house optimization library: (i) steepest descent, (ii) conjugate gradient, (iii) Broyden-Fletcher-Goldfarb-Shanno (BFGS), with the following line-search algorithms: (i) backtracking, (ii) bracketing with the pinpoint function. ...
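The backtracking line search with the Armijo (sufficient decrease) condition referenced in these excerpts is compact enough to sketch. The code below is an illustrative Python version (not the thesis's in-house FORTRAN library) using the same μ_ls = 10^-4 parameter quoted above; the quadratic objective, step-shrink factor, and starting point are assumptions for the example:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, mu_ls=1e-4, rho=0.5):
    """Backtracking line search enforcing the Armijo sufficient decrease condition:
    f(x + alpha*p) <= f(x) + mu_ls * alpha * grad_f(x)^T p.
    p must be a descent direction (grad_f(x)^T p < 0)."""
    alpha = alpha0
    f0 = f(x)
    slope = grad_f(x) @ p
    while f(x + alpha * p) > f0 + mu_ls * alpha * slope:
        alpha *= rho  # shrink the step until sufficient decrease holds
    return alpha

# Example: one steepest-descent step on a simple quadratic
f = lambda x: x @ x
grad_f = lambda x: 2.0 * x
x = np.array([3.0, -2.0])
p = -grad_f(x)                          # steepest-descent direction
alpha = backtracking_line_search(f, grad_f, x, p)
print(alpha, f(x + alpha * p))
```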
Thesis
Turbulent reacting flows drive many energy conversion devices and play crucial roles in the power generation and transportation sectors. Due to their chaotic and multi-scale nature, predicting and optimizing such systems is challenging. Over the past several decades, direct numerical simulation (DNS) and large-eddy simulation (LES) have gained in popularity within the scientific and engineering community for simulating this class of flows. However, owing to their high computational cost, they have primarily been used to investigate micro-scale physics or develop sub-grid scale models. Meanwhile, optimizing new engineering systems or improving existing devices requires iterating upon many design/input parameters. A brute-force trial-and-error approach involves performing many simulations and is thus not practical even with modern computational resources. Current approaches typically reduce the complexity of the model, which compromises its fidelity and decreases dimensionality of the physical system. Discrete adjoint-based methods provide exact sensitivity of a quantity of interest (QoI) to many input parameters with a tractable computational cost. The sensitivity gradient obtained from an adjoint solution provides a direction to adjust parameters for minimizing (i.e., improving) the QoI. However, computing discrete adjoint sensitivity from high-fidelity numerical simulations like DNS or LES is challenging. Modern numerical methods are typically developed for solving the original governing equations and are not necessarily consistent with the discrete adjoint formulation. The objective of this dissertation is to develop a high-fidelity numerical framework that provides exact sensitivity of a QoI for turbulent reacting flows. This builds off state-of-the-art numerical discretization methods and extends them to be compatible with a discrete adjoint solver. The adjoint sensitivity is combined with gradient-based optimization techniques to find optimal parameters. The numerical framework solves the multi-component compressible Navier--Stokes equations using high-order narrow-stencil finite difference operators that satisfy the summation-by-parts (SBP) property. Simultaneous-approximation-term boundary treatment is used to enforce the boundary conditions. A SBP adaptive artificial dissipation scheme with a compatible adjoint solver is introduced to minimize boundedness errors in the scalars and retain high-order accuracy of the solution. In addition, a flamelet/progress variable approach is employed for combustion modeling, and its adjoint is formulated. This approach avoids transporting many chemical species and makes the adjoint solver flexible with respect to the choice of chemical reactions. The adjoint solver makes use of an efficient check-pointing scheme, and it computes analytic Jacobians of the Navier--Stokes equations instead of automatically differentiating them. The cost of the combined forward-adjoint simulation is about 3--3.5 times the cost of the forward run. The framework is applied to several challenging cases to assess its performance and demonstrate its efficacy in optimizing various QoIs. The methodology is used to enhance and suppress mixing and growth of high-resolution multi-mode Rayleigh--Taylor instabilities by strategically manipulating the interfacial perturbations. This example demonstrates the utility of the adjoint framework on chaotic variable-density flows before introducing complexities associated with chemical reactions and unboundedness of the mass fraction. 
Next, a momentum actuator is optimized to control the temporal evolution of scalar mixing in a shear layer, where more than one hundred million parameters are manipulated simultaneously by the adjoint solver. Using a coarse grid necessitates the adaptive dissipation scheme to preserve scalar boundedness. Finally, the adjoint solver is used to identify optimal forcing to control flame position in a non-premixed turbulent round jet.
... Therefore, we could say that ALM combines the merits of both methods. Convergence in ALM may occur with finite μ, and the optimization problem does not even have to possess a locally convex structure [7,49,9,8,44]. These aspects of the ALM make it a suitable choice for neural networks, as their objective functions are typically non-convex with respect to the parameters of the network. ...
... Assuming that the errors are normally distributed with mean zero and a standard deviation of σ, we can minimize the negative log-likelihood of the predictions u_θ(x, t) conditioned on the observed data ũ(x, t) to obtain J_M(θ) as follows [44]: J_M(θ) = 1 ...
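As a generic illustration of the augmented Lagrangian method (ALM) discussed in these excerpts (not the specific physics- and equality-constrained formulation of the cited paper), the sketch below solves a small equality-constrained problem by alternating unconstrained minimizations of the augmented Lagrangian with multiplier and penalty updates; the toy problem and update schedule are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = x0^2 + 2*x1^2 subject to h(x) = x0 + x1 - 1 = 0
f = lambda x: x[0]**2 + 2.0 * x[1]**2
h = lambda x: x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam, mu):
    # L_A(x; lam, mu) = f(x) + lam*h(x) + (mu/2)*h(x)^2
    return f(x) + lam * h(x) + 0.5 * mu * h(x)**2

lam, mu = 0.0, 1.0
x = np.zeros(2)
for k in range(10):
    res = minimize(augmented_lagrangian, x, args=(lam, mu), method="BFGS")
    x = res.x
    lam += mu * h(x)          # first-order multiplier update
    if abs(h(x)) > 1e-8:
        mu *= 2.0             # increase the penalty only while infeasible
print(x, h(x))  # approaches the constrained optimum [2/3, 1/3]
```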
Article
Full-text available
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDE). In PINNs, the residual form of the PDE of interest and its boundary conditions are lumped into a composite objective function as soft penalties. Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach when applied to different kinds of PDEs. To address these limitations, we propose a versatile framework based on a constrained optimization problem formulation, where we use the augmented Lagrangian method (ALM) to constrain the solution of a PDE with its boundary conditions and any high-fidelity data that may be available. Our approach is adept at forward and inverse problems with multi-fidelity data fusion. We demonstrate the efficacy and versatility of our physics- and equality-constrained deep-learning framework by applying it to several forward and inverse problems involving multi-dimensional PDEs. Our framework achieves orders of magnitude improvements in accuracy levels in comparison with state-of-the-art physics-informed neural networks.
Article
This paper addresses the problem of finding a robust optimal design when uncertain parameters in the form of crisp or interval sets are present in the optimization. Furthermore, in order to make the approach as general as possible, direct search methods aided by sensitivity analysis techniques are employed to optimize the design. Consequently, the presented approach is suitable for black-box models for which no, or very little, information about the equations governing the model is available. The design of an electric drivetrain is used to illustrate the benefits of the proposed method.
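To illustrate the kind of robust, derivative-free formulation described above (this is a generic sketch, not the paper's drivetrain model), the example below minimizes the worst-case value of a black-box objective over an interval-bounded uncertain parameter using the Nelder-Mead direct search in SciPy; the objective and the interval are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Black-box objective with an uncertain parameter p known only to lie in [0.8, 1.2]
def performance(x, p):
    return (x[0] - p)**2 + 0.1 * (x[1] + p)**2

p_samples = np.linspace(0.8, 1.2, 9)   # sample the interval uncertainty set

def worst_case(x):
    # Robust objective: the worst (largest) performance over the interval samples
    return max(performance(x, p) for p in p_samples)

# Direct search (Nelder-Mead) needs no gradients, so the model can stay a black box
res = minimize(worst_case, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x, res.fun)
```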
Article
This paper presents five industrial cases in which design automation (DA) systems supported by design optimization have been developed, and it aims to summarize the lessons learned and identify needs for the future development of such projects. By mapping the challenges during development and deployment of the systems, common issues were found in technical areas, such as model integration, and in organizational areas, such as knowledge transfer. The latter can be seen as a two-layered design paradox: one for the product that the DA system is developed for, and one for the development of the DA system itself.
Article
Full-text available
pyOptSparse is an optimization framework designed for constrained nonlinear optimization of large sparse problems and provides a unified interface for various gradient-free and gradient-based optimizers. By using an object-oriented approach, the software maintains independence between the optimization problem formulation and the implementation of the specific optimizers. The code is MPI-wrapped to enable execution of expensive parallel analyses and gradient evaluations, such as when using computational fluid dynamics (CFD) simulations, which can require hundreds of processors. The optimization history can be stored in a database file, which can then be used both for post-processing and restarting another optimization. A graphical user interface application is provided to visualize the optimization history interactively.
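As an indication of how the unified problem-formulation interface described above is used in practice, the fragment below sets up and solves a small constrained problem following the pattern in the pyOptSparse documentation; the toy problem and the choice of SLSQP are assumptions, and exact argument names may vary between versions:

```python
from pyoptsparse import Optimization, OPT

def objfunc(xdict):
    x = xdict["xvars"]
    funcs = {}
    funcs["obj"] = x[0]**2 + x[1]**2          # objective
    funcs["con"] = [x[0] + x[1]]              # constraint values
    fail = False
    return funcs, fail

# The problem formulation is kept independent of the optimizer implementation
optProb = Optimization("toy problem", objfunc)
optProb.addVarGroup("xvars", 2, lower=-10.0, upper=10.0, value=1.0)
optProb.addConGroup("con", 1, lower=1.0, upper=None)   # x0 + x1 >= 1
optProb.addObj("obj")

opt = OPT("SLSQP")                 # pick one of the wrapped optimizers
sol = opt(optProb, sens="FD")      # finite-difference gradients
print(sol)
```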
Article
Full-text available
Introduced in 1993, the DIRECT global optimization algorithm provided a fresh approach to minimizing a black-box function subject to lower and upper bounds on the variables. In contrast to the plethora of nature-inspired heuristics, DIRECT was deterministic and had only one hyperparameter (the desired accuracy). Moreover, the algorithm was simple, easy to implement, and usually performed well on low-dimensional problems (up to six variables). Most importantly, DIRECT balanced local and global search (exploitation versus exploration) in a unique way: in each iteration, several points were sampled, some for global and some for local search. This approach eliminated the need for "tuning parameters" that set the balance between local and global search. However, the very same features that made DIRECT simple and conceptually attractive also created weaknesses. For example, it was commonly observed that, while DIRECT is often fast to find the basin of the global optimum, it can be slow to fine-tune the solution to high accuracy. In this paper, we identify several such weaknesses and survey the work of various researchers to extend DIRECT so that it performs better. All of the extensions show substantial improvement over DIRECT on various test functions. An outstanding challenge is to improve performance robustly across problems of different degrees of difficulty, ranging from simple (unimodal, few variables) to very hard (multimodal, sharply peaked, many variables). Opportunities for further improvement may lie in combining the best features of the different extensions.
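For readers who want to experiment with the algorithm, recent SciPy releases include an implementation; the call below is a minimal sketch assuming scipy.optimize.direct is available in the installed version, with an arbitrary multimodal test function and bounds:

```python
from scipy.optimize import direct, Bounds

# Himmelblau's function: four global minima with f = 0, e.g., at (3, 2)
def himmelblau(x):
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

bounds = Bounds([-5.0, -5.0], [5.0, 5.0])
result = direct(himmelblau, bounds, maxfun=2000)
print(result.x, result.fun)
```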
Conference Paper
Full-text available
CFD-based aircraft design optimization has matured significantly in the last few years thanks to improvements in CFD solvers, mesh deformation, sensitivity computation, and optimization tools. We review our recent developments for each of these components, and present open-source tools made available for aerodynamic shape optimization. A variety of applications is presented, including the optimization of a supercritical airfoil starting from a circle, a web application that optimizes airfoils within a few seconds, aircraft aerodynamic and aerostructural optimization, and aeropropulsive optimization. We also review our experience with solving the Aerodynamic Design Optimization Discussion Group (ADODG) benchmarks and other problems in aerodynamic shape optimization. Among the ADODG benchmarks, we focus on the RANS-based problems and discuss some of the issues encountered, including the comparison between Euler and RANS results, and design space multimodality. The availability of these benchmarks and the open-source tools is expected to enable further studies and benchmarks in CFD-based aerodynamic design optimization and MDO.
Article
Full-text available
The adjoint method is used for high-fidelity aerodynamic shape optimization and is an efficient approach for computing the derivatives of a function of interest with respect to a large number of design variables. Over the past few decades, various approaches have been used to implement the adjoint method in computational fluid dynamics solvers. However, further advances in the field are hindered by the lack of performance assessments that compare the various adjoint implementations. Therefore, we propose open benchmarks and report a comprehensive evaluation of the various approaches to adjoint implementation. We also make recommendations on effective approaches, that is, approaches that are efficient, accurate, and have a low implementation cost. We focus on the discrete adjoint method and describe adjoint implementations for two computational fluid dynamics solvers by using various methods for computing the partial derivatives in the adjoint equations and for solving those equations. Both source code transformation and operator-overloading algorithmic differentiation tools are used to compute the partial derivatives, along with finite differencing. We also examine the use of explicit Jacobian and Jacobian-free solution methods. We quantitatively evaluate the speed, scalability, memory usage, and accuracy of the various implementations by running cases that cover a wide range of Mach numbers, Reynolds numbers, mesh topologies, mesh sizes, and CPU cores. We conclude that the Jacobian-free method using source code transformation algorithmic differentiation to compute the partial derivatives is the best option because it computes exact derivatives with the lowest CPU time and the lowest memory requirements, and it also scales well up to 10 million cells and over one thousand CPU cores. The superior performance of this approach is primarily due to its Jacobian-free adjoint strategy. The cases presented herein are publicly available and represent platform-independent benchmarks for comparing other current and future adjoint implementations. Our results and discussion provide a guide for discrete adjoint implementations, not only for computational fluid dynamics but also for a wide range of other partial differential equation solvers.
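The core of the discrete adjoint method evaluated in this article can be shown on a tiny system. The sketch below is purely illustrative (unrelated to the CFD solvers benchmarked): it computes df/dx for a function f(u, x) whose state u satisfies a linear residual R(u, x) = A u - b(x) = 0, solving one adjoint system instead of differentiating the state solve with respect to each design variable:

```python
import numpy as np

# State equation R(u, x) = A u - b(x) = 0, with b depending on design variables x
A = np.array([[4.0, 1.0], [1.0, 3.0]])
def b(x):
    return np.array([x[0] + 2.0 * x[1], x[0] - x[1]])

def f(u, x):
    return u @ u + x[0]**2               # function of interest

x = np.array([1.0, 2.0])
u = np.linalg.solve(A, b(x))             # forward (primal) solve

# Partial derivatives of the residual and of the function of interest
dR_du = A                                 # dR/du
dR_dx = -np.array([[1.0, 2.0],            # dR/dx = -db/dx
                   [1.0, -1.0]])
df_du = 2.0 * u                           # df/du
df_dx = np.array([2.0 * x[0], 0.0])       # explicit df/dx

# Adjoint solve: (dR/du)^T psi = (df/du)^T, then the total derivative
psi = np.linalg.solve(dR_du.T, df_du)
total_df_dx = df_dx - psi @ dR_dx
print(total_df_dx)
```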
Article
Full-text available
In this paper, we develop computationally efficient techniques to calculate statistics used in wind farm optimization with the goal of enabling the use of higher-fidelity models and larger wind farm optimization problems. We apply these techniques to maximize the annual energy production (AEP) of a wind farm by optimizing the position of the individual wind turbines. The AEP (a statistic) is the expected power produced by the wind farm over a period of 1 year subject to uncertainties in the wind conditions (wind direction and wind speed) that are described with empirically determined probability distributions. To compute the AEP of the wind farm, we use a wake model to simulate the power at different input conditions composed of wind direction and wind speed pairs. We use polynomial chaos (PC), an uncertainty quantification method, to construct a polynomial approximation of the power over the entire stochastic space and to efficiently (using as few simulations as possible) compute the expected power (AEP). We explore both regression and quadrature approaches to compute the PC coefficients. PC based on regression is significantly more efficient than the rectangle rule (the method most commonly used to compute the expected power). With PC based on regression, we have reduced on average by a factor of 5 the number of simulations required to accurately compute the AEP when compared to the rectangle rule for the different wind farm layouts considered. In the wind farm layout optimization problem, each optimization step requires an AEP computation. Thus, the ability to compute the AEP accurately with fewer simulations is beneficial as it reduces the cost to perform an optimization, which enables the use of more computationally expensive higher-fidelity models or the consideration of larger or multiple wind farm optimization problems. We perform a large suite of gradient-based optimizations to compare the optimal layouts obtained when computing the AEP with polynomial chaos based on regression and the rectangle rule. We consider three different starting layouts (Grid, Amalia, Random) and find that the optimization has many local optima and is sensitive to the starting layout of the turbines. We observe that starting from a good layout (Grid, Amalia) will, in general, find better optima than starting from a bad layout (Random) independent of the method used to compute the AEP. For both PC based on regression and the rectangle rule, we consider both a coarse (∼225) and a fine (∼625) number of simulations to compute the AEP. We find that for roughly one-third of the computational cost, the optimizations with the coarse PC based on regression result in optimized layouts that produce comparable AEP to the optimized layouts found with the fine rectangle rule. Furthermore, for the same computational cost, for the different cases considered, polynomial chaos finds optimal layouts with 0.4 % higher AEP on average than those found with the rectangle rule.
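The statistic at the heart of this comparison is an expectation of power over the wind-condition distribution. The sketch below is a simplified illustration (not the paper's wake model or polynomial chaos implementation): it contrasts a rectangle-rule estimate of expected power over wind direction with an orthogonal-polynomial regression built from far fewer power evaluations; the power curve and wind-direction density are assumptions:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Toy directional power curve (stands in for an expensive wake-model evaluation)
def power(theta_deg):
    theta = np.radians(theta_deg)
    return 3.0 + np.cos(theta) + 0.5 * np.sin(2.0 * theta)

# Assumed wind-direction probability density on [0, 360) degrees
directions = np.linspace(0.0, 360.0, 720, endpoint=False)
d_theta = 360.0 / directions.size
pdf = 1.0 + 0.5 * np.cos(np.radians(directions - 225.0))
pdf /= np.sum(pdf) * d_theta            # normalize so the density integrates to 1

# Reference: rectangle rule using many power evaluations
expected_rect = np.sum(power(directions) * pdf) * d_theta

# Regression approach: fit a low-order polynomial to a handful of evaluations,
# then integrate the cheap surrogate against the wind-direction density
samples = np.linspace(0.0, 360.0, 25, endpoint=False)
surrogate = Chebyshev.fit(samples, power(samples), deg=8)
expected_reg = np.sum(surrogate(directions) * pdf) * d_theta

print(expected_rect, expected_reg)      # the two estimates should agree closely
```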
Article
Full-text available
The surrogate modeling toolbox (SMT) is an open-source Python package that contains a collection of surrogate modeling methods, sampling techniques, and benchmarking functions. This package provides a library of surrogate models that is simple to use and facilitates the implementation of additional methods. SMT is different from existing surrogate modeling libraries because of its emphasis on derivatives, including training derivatives used for gradient-enhanced modeling, prediction derivatives, and derivatives with respect to training data. It also includes unique surrogate models: kriging by partial least-squares reduction, which scales well with the number of inputs; and energy-minimizing spline interpolation, which scales well with the number of training points. The efficiency and effectiveness of SMT are demonstrated through a series of examples. SMT is documented using custom tools for embedding automatically tested code and dynamically generated plots to produce high-quality user guides with minimal effort from contributors. SMT is maintained in a public version control repository.
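As a hint of the interface, the fragment below follows the documented SMT pattern for training a kriging surrogate and querying predicted values and prediction derivatives; the training data is an assumption, and class and method names may vary between SMT versions:

```python
import numpy as np
from smt.surrogate_models import KRG

# Training data: samples of a 1-D function
xt = np.linspace(0.0, 4.0, 8).reshape(-1, 1)
yt = np.sin(xt)

sm = KRG(theta0=[1e-2])          # kriging surrogate
sm.set_training_values(xt, yt)
sm.train()

xnew = np.linspace(0.0, 4.0, 50).reshape(-1, 1)
y = sm.predict_values(xnew)               # surrogate predictions
dy_dx = sm.predict_derivatives(xnew, 0)   # prediction derivatives w.r.t. input 0
print(y.shape, dy_dx.shape)
```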
Article
Blade element momentum methods are widely used for initial aerodynamic analysis of propellers and wind turbines. A wide variety of correction methods exist, but common to all variations, a pair of residuals is converged to ensure compatibility between the two theories. This paper shows how to rearrange the sequence of calculations to reduce the problem to a single residual. This yields the significant advantage that convergence can be guaranteed, and to machine precision. Both of these considerations are particularly important for gradient-based optimization, where a wide variety of atypical inputs may be explored and where tight convergence is necessary for accurate derivative computation. On a moderate-sized example optimization problem, we show over an order of magnitude increase in optimization speed, with no changes to the physics. This is done by using the single-residual form, providing numerically exact gradients using algorithmic differentiation with an adjoint, and leveraging sparsity in the Jacobian using graph coloring techniques. Finally, we demonstrate a revised formulation for cases when no inflow exists in one of the directions (e.g., a hovering rotor or a parked rotor). These new residuals allow for robust convergence in optimization applications, avoiding the occasional numerical difficulties that exist with the standard formulation.
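The practical payoff of a single-residual formulation is that a bracketing root finder can be used, so convergence is guaranteed once a sign change is bracketed. The sketch below is a generic illustration (the residual is made up, not the paper's blade element momentum residual): a scalar residual with a sign change on a known interval is driven to machine precision with Brent's method:

```python
import numpy as np
from scipy.optimize import brentq

# Stand-in scalar residual R(phi); in the cited method phi would be the inflow angle
def residual(phi):
    return np.sin(phi) / (1.0 + phi) - 0.2 * np.cos(phi)

# The residual changes sign on (1e-6, pi/2), so a bracketing method is guaranteed
# to converge, and it does so to machine precision
phi_star = brentq(residual, 1e-6, np.pi / 2, xtol=1e-15)
print(phi_star, residual(phi_star))
```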
Book
Interested in learning about computational fluid dynamics? Check out our free new book, "Computational Fluid Dynamics: An Open Source Approach". It covers the essentials for undergraduate- or graduate-level studies and uses all open-source tools. This means you can download and run simulations directly on your own computer, or in the cloud, completely for free! The book itself is even open source on GitLab if you want to help contribute. Link: https://users.encs.concordia.ca/~bvermeir/books.html
Article
This paper investigates reducing the power variance caused by different wind directions by using wind farm layout optimization. The problem was formulated as a multi-objective optimization. The [Formula: see text] constraint method was used to solve the bi-objective problem in a two-step optimization framework in which two sequential optimizations were performed. The first maximized the mean wind farm power alone, and the second minimized the power variance with a constraint on the mean power. The results show that the variance in power estimates can be greatly reduced, by as much as [Formula: see text], without sacrificing mean plant power for the different farm sizes and wind conditions studied. This reduction is attributed to the multi-modality of the design space, which allows for unique solutions of high mean plant power with different power variances due to varying wind direction. Thus, wind farms can be designed to maximize power capture with greater confidence.
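The two-step formulation described above is easy to mimic on a toy problem. The sketch below is purely illustrative: the "mean power" and "power variance" functions are assumptions rather than a wake model. It first maximizes the mean objective alone and then minimizes the variance objective subject to a constraint that retains most of that mean:

```python
from scipy.optimize import minimize

# Toy surrogates for mean wind-farm power and its variance over wind direction
def mean_power(x):
    return -(x[0] - 1.0)**2 - (x[1] - 2.0)**2 + 10.0

def power_variance(x):
    return (x[0] - 2.0)**2 + 0.5 * (x[1] - 2.0)**2 + 1.0

# Step 1: maximize mean power alone (minimize its negative)
step1 = minimize(lambda x: -mean_power(x), x0=[0.0, 0.0], method="SLSQP")
best_mean = mean_power(step1.x)

# Step 2: minimize power variance while keeping, say, 99% of the best mean power
con = {"type": "ineq", "fun": lambda x: mean_power(x) - 0.99 * best_mean}
step2 = minimize(power_variance, x0=step1.x, method="SLSQP", constraints=[con])

print(step1.x, best_mean)
print(step2.x, mean_power(step2.x), power_variance(step2.x))
```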
Article
The availability of computational models to assess the attributes of an engineering concept not only accelerates the design process but can also facilitate its optimization: numerical algorithms can use model output to create a sequence of design candidates that converges to an optimal solution with respect to certain criteria (objectives and constraints). If the mathematical properties of the formulated problem and the computational models can guarantee the existence and computability of gradients, then gradient-based algorithms should be used; otherwise, derivative-free optimization (DFO) should be employed.
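The decision rule above maps directly onto standard optimizer interfaces. The short sketch below is a generic illustration on an assumed smooth test function: it runs the same problem with a gradient-based method when an analytic gradient is available and falls back to a derivative-free method when the model must be treated as a black box:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1.0)**2 + 100.0 * (x[1] - x[0]**2)**2   # Rosenbrock-type function

def grad_f(x):
    return np.array([
        2.0 * (x[0] - 1.0) - 400.0 * x[0] * (x[1] - x[0]**2),
        200.0 * (x[1] - x[0]**2),
    ])

x0 = np.array([-1.2, 1.0])

# Gradients exist and are computable: use a gradient-based method
res_grad = minimize(f, x0, jac=grad_f, method="BFGS")

# Model treated as a black box (no gradients): use a derivative-free method
res_dfo = minimize(f, x0, method="Nelder-Mead")

print(res_grad.x, res_grad.nfev)
print(res_dfo.x, res_dfo.nfev)
```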