Conference Paper

Comparison of Multi-Fidelity Approaches for Military Vehicle Design


Abstract

This paper overviews the efforts of a technical team within the NATO Applied Vehicle Technology Panel to apply multi-fidelity methods to vehicle design. The objectives of the team are to understand the potential benefits of multi-fidelity methods in vehicle design and to assess the relative strengths and weaknesses of different multi-fidelity methods using a common benchmark suite. Through this effort, the team hopes to initiate a community dialogue that will help transition the use of multi-fidelity techniques to the efficient study of configurations more representative of fielded systems. Context is given for other papers contributed by team members to this Special Session on Multi-Fidelity Methods for Vehicle Applications sponsored by the Multidisciplinary Design Optimization Technical Committee.


... In order to reduce the computational cost associated with SDDO, several methodologies have been introduced, integrated, and successfully applied to complex engineering problems, see e.g., [3]. Methods to reduce the computational cost include linear [4] and nonlinear [5] approaches to design-space dimensionality reduction, adaptive surrogate modeling [6], efficient optimization algorithms [7], and multi-fidelity optimization approaches [8]. Namely, multi-fidelity methods leverage a fidelity spectrum of computational models (from low to high fidelity), with the objective of maximizing the model accuracy while minimizing the associated computational cost [9,10]. ...
... Surrogate-based methods are very promising and have shown their efficiency and effectiveness. Nevertheless, in some cases their training can be costly and their accuracy may decrease (or even drop) as objectives and constraints become noisy or discontinuous, or the number of design variables becomes large [8]. ...
... The method is assessed using an SDDO benchmark pertaining to the hull-form optimization of a destroyer-type vessel in calm water. The benchmark is taken from the NATO Science and Technology Organization, Applied Vehicle Technology, Research Task Group 331 on "Goal-Driven, Multi-Fidelity Approaches for Military Vehicle System-Level Design" [8]. The optimization aims at reducing the ship total resistance at fixed speed and even keel condition. ...
Article
Full-text available
The paper presents a multi-fidelity extension of a local line-search-based derivative-free algorithm for nonsmooth constrained optimization (MF-CS-DFN). The method is intended for use in the simulation-driven design optimization (SDDO) context, where multi-fidelity computations are used to evaluate the objective function. The proposed algorithm starts using low-fidelity evaluations and automatically switches to higher-fidelity evaluations based on the line-search step length. The multi-fidelity algorithm is driven by a suitably defined threshold and initialization values for the step length, which are associated with each fidelity level. These are selected to increase the accuracy of the objective evaluations while progressing toward the optimal solution. The method is demonstrated for a multi-fidelity SDDO benchmark pertaining to the hull-form optimization of a destroyer-type vessel, aiming at resistance minimization in calm water at fixed speed. Numerical simulations are based on a linear potential flow solver, where seven fidelity levels are defined by systematically refined computational grids for the hull and the free surface. The method's performance is assessed by varying the step-length threshold and the initialization approach. Specifically, four MF-CS-DFN setups are tested, and the optimization results are compared to those of its single-fidelity (high-fidelity-based) counterpart (CS-DFN). The MF-CS-DFN results are promising, achieving a resistance reduction of about 12% and showing faster convergence than CS-DFN. Specifically, the MF extension is between one and two orders of magnitude faster than the original single-fidelity algorithm. For low computational budgets, MF-CS-DFN optimized designs exhibit a resistance that is about 6% lower than that achieved by CS-DFN.
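The step-length-driven fidelity switch at the heart of MF-CS-DFN can be pictured with a short sketch. The coordinate-poll skeleton, the `evaluate(x, level)` callable, and the threshold values below are illustrative assumptions, not the authors' implementation:
```python
# Hedged sketch: a coordinate search whose fidelity level is promoted when
# the step length shrinks below a per-level threshold, in the spirit of the
# MF-CS-DFN idea. `evaluate` and `thresholds` are hypothetical stand-ins.
import numpy as np

def mf_coordinate_search(evaluate, x0, thresholds, alpha0=0.5, tol=1e-6):
    x, alpha, level = np.asarray(x0, float), alpha0, 0
    f = evaluate(x, level)
    while alpha > tol:
        improved = False
        for i in range(x.size):                 # poll along each coordinate
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * alpha
                ft = evaluate(trial, level)
                if ft < f:
                    x, f, improved = trial, ft, True
        if not improved:
            alpha *= 0.5                        # contract the step length
        if level < len(thresholds) and alpha < thresholds[level]:
            level += 1                          # promote to the next fidelity
            f = evaluate(x, level)              # re-evaluate the incumbent
    return x, f

# Example with two cheap analytical "fidelities" of a quadratic objective:
evaluate = lambda x, lvl: float((x ** 2).sum()) + (0.05 if lvl == 0 else 0.0)
print(mf_coordinate_search(evaluate, [1.0, -0.8], thresholds=[1e-2]))
```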
... published by the University of Michigan MDO Lab [5]. The development of the parametric model supports an international effort chartered by the NATO Applied Vehicle Technology (AVT) Panel, which is investigating the application of multi-fidelity methods to military vehicle design [18]. This model, as well as the analysis methods applied by the AVT technical team, is available for broad distribution. ...
... To generate the wing's planform shape, six high-level parameters (area, Yehudi break location, aspect ratio, leading edge sweep angle, inboard trailing edge angle, and taper ratio) are used to compute lower-level parameters such as chord lengths at the centerline, Yehudi break, and tip sections. Readers are referred to [18] for baseline values of the high-level wing parameters and the equations used to compute lower-level parameters. ...
... While an ESP model with many more airfoil sections has been developed and tested (containing ten additional cross-sections with around 200 design variables), it is not demonstrated in the present work. Furthermore, while the high-level planform parameters described in [18] may also be varied, they were held fixed in the present work. In each airfoil section, the design variables comprise a chordwise row of control point z-coordinates above the airfoil, distances to a corresponding row of points below the airfoil, and one variable that influences the trailing edge angle (all of which are normalized by chord length). ...
Article
Full-text available
The simultaneous optimization of aircraft shape and internal structural size for transonic flight is excessively costly. The analysis of the governing physics is expensive, in particular for highly flexible aircraft, and the search for optima using analysis samples can scale poorly with design space size. This paper has a twofold purpose targeting the scalable reduction of analysis sampling. First, a new algorithm is explored for computing design derivatives by analytically linking objective definition, geometry differentiation, mesh construction, and analysis. The analytic computation of design derivatives enables the accurate use of more efficient gradient-based optimization methods. Second, the scalability of a multi-fidelity algorithm is assessed for optimization in high dimensions. This method leverages a multi-fidelity model during the optimization line search for further reduction of sampling costs. The multi-fidelity optimization is demonstrated for cases of aerodynamic and aeroelastic design considering both shape and structural sizing separately and in combination, with design spaces ranging from 17 to 321 variables, which would be infeasible using typical surrogate-based methods. The multi-fidelity optimization consistently led to a reduction in high-fidelity evaluations compared to single-fidelity optimization for the aerodynamic shape problems, but frequently resulted in a cost penalty for cases involving structural sizing. While the multi-fidelity optimizer was successfully applied to problems with hundreds of variables, the results underscore the importance of accurately computing gradients and motivate the extension of the approach to constrained optimization methods.
... potential flow or RANS solvers), the numerical accuracy (e.g. grid discretization and convergence tolerances), and data coverage [17]. Recently, [18] proposed an MF metamodel with an arbitrary number of fidelity levels (N-fidelity), based on SRBF. ...
... with U as per Eq. (17) or Eq. (18); // Perform adaptive sampling ...
Conference Paper
Full-text available
An adaptive-fidelity approach to metamodeling from noisy data is presented for design-space exploration and design optimization. Computational fluid dynamics (CFD) simulations with different numerical accuracy (spatial discretization) provide metamodel training sets affected by unavoidable numerical noise. The N-fidelity approximation is built by an additive correction of a low-fidelity metamodel with metamodels of the differences (errors) between higher-fidelity levels, whose hierarchy needs to be provided. The approach encompasses two core metamodeling techniques, namely: i) stochastic radial-basis functions (SRBF) and ii) Gaussian process (GP). The adaptivity stems from the sequential training procedure and the auto-tuning capabilities of the metamodels. The method is demonstrated for an analytical test problem and a CFD-based optimization of a NACA airfoil, where the fidelity levels are defined by an adaptive grid refinement technique of a Reynolds-averaged Navier-Stokes (RANS) solver. The paper discusses: i) the effect of using more than two fidelity levels; ii) the use of least squares regression as opposed to exact interpolation; iii) the comparison between SRBF and GP; and iv) the use of two sampling approaches for GP. Results show that, in the presence of noise, the use of more than two fidelity levels improves the model accuracy with a significant reduction of the number of high-fidelity evaluations. Both least squares SRBF and GP provide promising results in dealing with noisy data.
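The additive correction described above has a compact generic form: a metamodel of the lowest-fidelity data plus metamodels of the errors between successive levels. The sketch below uses scikit-learn Gaussian processes with a WhiteKernel as a stand-in for the paper's SRBF/GP machinery and regression-based noise handling; the data layout and toy functions are assumptions:
```python
# Hedged sketch of an additive N-fidelity metamodel: low-fidelity model
# plus per-level error models. scikit-learn GPs stand in for SRBF/GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def mf_predict(models, X):
    return sum(m.predict(X) for m in models)   # low fidelity + corrections

def fit_additive_mf(X_levels, y_levels):
    """X_levels[i], y_levels[i]: training data of fidelity i (0 = lowest)."""
    kernel = RBF() + WhiteKernel()             # WhiteKernel regresses the noise
    models = [GaussianProcessRegressor(kernel=kernel).fit(X_levels[0], y_levels[0])]
    for i in range(1, len(X_levels)):
        resid = y_levels[i] - mf_predict(models, X_levels[i])
        models.append(GaussianProcessRegressor(kernel=kernel).fit(X_levels[i], resid))
    return models

# Toy two-level example with noisy low-fidelity evaluations:
rng = np.random.default_rng(0)
X0, X1 = rng.uniform(0, 1, (30, 1)), rng.uniform(0, 1, (8, 1))
f = lambda x, lo: np.sin(6 * x) + (0.3 * x if lo else 0.0)
models = fit_additive_mf([X0, X1], [f(X0[:, 0], True) + 0.05 * rng.standard_normal(30),
                                    f(X1[:, 0], False)])
print(mf_predict(models, np.array([[0.5]])))
```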
... This can make the SBDO procedure computationally very expensive when high-fidelity solvers are used to compute the desired outputs. For this reason, to reduce the computational burden of SBDO, multi-fidelity (MF) methods [1] can be used to combine the accuracy of high-fidelity solvers with the computational cost of low-fidelity solvers. ...
... Beran et al. [1] have proposed a classification of benchmark problems for variable-fidelity methods. The classification is based on the complexity of the benchmark, from the simplest (L1) to the most complex (L3). ...
Conference Paper
Full-text available
The paper presents a multi-fidelity coordinate-search derivative-free algorithm for non-smooth constrained optimization (MF-CS-DFN), in the context of simulation-based design optimization (SBDO). The objective of the work is the development of an optimization algorithm able to improve the convergence speed of the SBDO process. The proposed algorithm is of a line-search type and can handle objective function evaluations performed with variable accuracy. The algorithm automatically selects the accuracy of the objective function evaluation based on an internal step-length parameter. The MF-CS-DFN algorithm starts the optimization with low-accuracy, low-cost evaluations of the objective function; the accuracy (and evaluation cost) is then increased. The method is coupled with a potential flow solver whose accuracy is determined by the computational grid size. No surrogate models are used in the current study. The algorithm is applied to the hull-form optimization of a destroyer-type vessel in calm water using 14 hull-shape parameters as design variables. The optimization aims at total resistance reduction. Seven refinements of the computational grid are used by the multi-fidelity optimizations. Four setups of the MF-CS-DFN algorithm are tested and compared with an optimization performed only on the finest grid. The results show that three of the tested setups achieve better performance than the high-fidelity optimization, converging to a lower resistance value with a reduced computational cost.
... The accelerating nature of this change is accompanied by the growth in product performance, complexity, and cost. To meet emerging requirements, faster design processes are thus required to: thoroughly and accurately explore design spaces of increased size, leverage potentially complex physical interactions for performance benefit, and avoid deleterious interactions that may greatly increase product cost through late defect discovery [3]. ...
... This motivates the interest in benchmark problems that could support the comparative and rigorous assessment of these methods. Beran et al. [3] propose to classify use cases and test problems into three classes: L1 problems, computationally cheap analytical functions with exact solutions; L2 problems, simplified engineering application problems that can be executed at reduced computational expense; and L3 problems, more complex engineering use cases, usually including multiphysics couplings. ...
Preprint
Full-text available
The paper presents a collection of analytical benchmark problems specifically selected to provide a set of stress tests for the assessment of multifidelity optimization methods. In addition, the paper discusses a comprehensive ensemble of metrics and criteria recommended for the rigorous and meaningful assessment of the performance of multifidelity strategies and algorithms.
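As a concrete example of the kind of analytical stress test such a collection contains, the widely used two-fidelity pair of Forrester et al. (2007) couples an expensive "truth" with a cheap, biased approximation; whether this exact pair appears in the paper's suite is not confirmed here:
```python
# The Forrester two-fidelity benchmark pair, shown as a representative
# example of an analytical multi-fidelity stress test (an assumption:
# the paper's own selection may differ).
import numpy as np

def f_high(x):                       # expensive "truth" on x in [0, 1]
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def f_low(x):                        # cheap approximation: scaled and shifted
    return 0.5 * f_high(x) + 10.0 * (x - 0.5) - 5.0

x = np.linspace(0.0, 1.0, 1001)
print(x[np.argmin(f_high(x))], f_high(x).min())   # global minimum near x = 0.757
```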
... The benchmark is composed of five analytical problems taken from the literature [9,19-21], with one and two variables. These functions are identified as representative of real-world problems within the NATO AVT-331 task group on "Goal-Driven, multi-fidelity and multidisciplinary analysis for military vehicle system level design" [22]. Each benchmark provides one, two, and three fidelity levels. ...
... Furthermore, a larger set of benchmark problems with noisy evaluations of the objective function, considering analytical functions with a larger number of variables and fidelity levels, will be proposed. Finally, the MF-GPR will be tested on a hull-form optimization used as a test case within the NATO AVT-331 task group on "Goal-Driven, multi-fidelity and multidisciplinary analysis for military vehicle system level design" [22]. ...
Preprint
Full-text available
Despite the increased computational resources, the simulation-based design optimization (SBDO) procedure can be very expensive from a computational viewpoint, especially if high-fidelity solvers are required. Multi-fidelity metamodels have been successfully applied to reduce the computational cost of the SBDO process. In this context, the paper presents the performance assessment of an adaptive multi-fidelity metamodel based on Gaussian process regression (MF-GPR) for noisy data. The MF-GPR is developed to: (i) manage an arbitrary number of fidelity levels, (ii) deal with objective function evaluations affected by noise, and (iii) improve its fitting accuracy by adaptive sampling. Multi-fidelity is achieved by bridging a low-fidelity metamodel with metamodels of the error between successive fidelity levels. The MF-GPR handles the numerical noise through regression. The adaptive sampling method is based on the maximum prediction uncertainty and includes rules to automatically select the fidelity to sample. The MF-GPR performance is assessed on a set of five analytical benchmark problems affected by noisy objective function evaluations. Since the noise introduces randomness in the evaluation of the objective function, a statistical analysis approach is adopted to assess the performance and robustness of the MF-GPR. The paper discusses the efficiency and effectiveness of the MF-GPR in globally approximating the objective function and identifying the global minimum. One, two, and three fidelity levels are used. The results of the statistical analysis show that the use of three fidelity levels achieves a more accurate global representation of the noise-free objective function compared to the use of one or two fidelities.
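A fully runnable single-fidelity simplification of this loop is sketched below: a Gaussian process with a WhiteKernel regresses noisy samples, and each new point is placed where the prediction uncertainty is largest. The paper's fidelity-selection rules sit on top of this basic mechanism and are not reproduced here:
```python
# Uncertainty-driven adaptive sampling of a noisy function (single-fidelity
# simplification of the MF-GPR loop described above).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
noisy = lambda x: np.sin(3.0 * x) + 0.05 * rng.standard_normal(np.shape(x))

X = rng.uniform(0.0, 2.0, (4, 1))               # small initial design
y = noisy(X[:, 0])
cand = np.linspace(0.0, 2.0, 200)[:, None]      # candidate pool

for _ in range(10):                             # adaptive sampling iterations
    gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X, y)
    _, sigma = gp.predict(cand, return_std=True)
    x_new = cand[[np.argmax(sigma)]]            # maximum prediction uncertainty
    X, y = np.vstack([X, x_new]), np.append(y, noisy(x_new[:, 0]))
```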
... In addition to dynamic/adaptive surrogate models, and with the aim of further reducing the computational cost associated with SDD, multi-fidelity (MF) approximation methods have been developed, aiming at combining the accuracy of high-fidelity solvers with the computational cost of low-fidelity solvers [3]. Thus, MF surrogate models rely mainly on low-fidelity simulations, with only a few high-fidelity simulations used to preserve the model accuracy. ...
... The surrogate model predictions $\hat{f}(\mathbf{x})$ are computed as the expected value (EV) over a stochastic ensemble of Radial Basis Function (RBF) surrogate models, defined by a stochastic tuning parameter $\tau \sim \mathrm{unif}[1,3]$: ...
Preprint
Full-text available
A multi-fidelity (MF) active learning method is presented for design optimization problems characterized by noisy evaluations of the performance metrics. Namely, a generalized MF surrogate model is used for design-space exploration, exploiting an arbitrary number of hierarchical fidelity levels, i.e., performance evaluations coming from different models, solvers, or discretizations, characterized by different accuracy. The method is intended to accurately predict the design performance while reducing the computational effort required by simulation-driven design (SDD) to achieve the global optimum. The overall MF prediction is evaluated as a low-fidelity trained surrogate corrected with the surrogates of the errors between consecutive fidelity levels. Surrogates are based on stochastic radial basis functions (SRBF) with least squares regression and in-the-loop optimization of hyperparameters to deal with noisy training data. The method adaptively queries new training data, selecting both the design points and the required fidelity level via an active learning approach. This is based on the lower confidence bounding method, which combines performance prediction and associated uncertainty to select the most promising design regions. The fidelity levels are selected considering the benefit-cost ratio associated with their use in the training. The method's performance is assessed and discussed using four analytical tests and three SDD problems based on computational fluid dynamics simulations, namely the shape optimization of a NACA hydrofoil, the DTMB 5415 destroyer, and a roll-on/roll-off passenger ferry. Fidelity levels are provided by both adaptive grid refinement and multi-grid resolution approaches. Under the assumption of a limited budget of function evaluations, the proposed MF method shows better performance in comparison with the model trained by high-fidelity evaluations only.
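The lower-confidence-bound query and the benefit-cost fidelity choice can be condensed into a toy two-level loop. Everything below (the two analytical levels, the costs, and the uncertainty-over-cost rule) is an illustrative reading of the approach, not the authors' code:
```python
# Two-level active learning: LCB picks where to sample; an uncertainty/cost
# ratio picks which fidelity to query. All ingredients are toy assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

f = [lambda x: np.sin(3 * x) + 0.3 * (x - 1.0),   # level 0: cheap, biased
     lambda x: np.sin(3 * x)]                     # level 1: expensive "truth"
costs = [1.0, 10.0]

rng = np.random.default_rng(1)
X = [rng.uniform(0, 2, (6, 1)), rng.uniform(0, 2, (3, 1))]
y = [f[l](X[l][:, 0]) for l in (0, 1)]
cand = np.linspace(0, 2, 200)[:, None]

for _ in range(15):
    lo = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X[0], y[0])
    err = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
        X[1], y[1] - lo.predict(X[1]))            # inter-level error model
    m0, s0 = lo.predict(cand, return_std=True)
    m1, s1 = err.predict(cand, return_std=True)
    x_new = cand[[np.argmin((m0 + m1) - 2.0 * np.hypot(s0, s1))]]  # LCB
    s_lvl = [lo.predict(x_new, return_std=True)[1][0],
             err.predict(x_new, return_std=True)[1][0]]
    lvl = int(np.argmax(np.array(s_lvl) / np.array(costs)))        # benefit/cost
    X[lvl] = np.vstack([X[lvl], x_new])
    y[lvl] = np.append(y[lvl], f[lvl](x_new[:, 0]))
```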
... In general, there is by now broad consensus in the UQ and computational-science communities that large-scale UQ analyses can only be performed by leveraging multi-fidelity methodologies, i.e., methodologies that explore the bulk of the variability of the quantities of interest (QoI) of the simulation over coarse grids (or, more generally, computationally inexpensive models with, e.g., simplified physics), and resort to querying high-fidelity models (e.g., refined grids or full-physics models) only sparingly, to correct the initial guess produced with the low-fidelity models, see e.g., [9]. Within this general framework, several approaches can be conceived, depending on the kind of fidelity models considered and on the strategy used to sample the parameter space (i.e., for what values of the uncertain parameters the different fidelity models should be queried/evaluated). ...
... To explain this behavior, we take a closer look at the MISC quadrature formula (9). In particular, let us recall that the computation of the first four centered moments implicitly uses surrogate models for the $r$-th powers of the quantity of interest, $R_T^r$, $r = 1, \dots, 4$ (see Sect. 2.1). ...
Article
Full-text available
This paper presents a comparison of two multi-fidelity methods for the forward uncertainty quantification of a naval engineering problem. Specifically, we consider the problem of quantifying the uncertainty of the hydrodynamic resistance of a roll-on/roll-off passenger ferry advancing in calm water and subject to two operational uncertainties (ship speed and payload). The first four statistical moments (mean, variance, skewness, and kurtosis) and the probability density function for such quantity of interest (QoI) are computed with two multi-fidelity methods, i.e., the Multi-Index Stochastic Collocation (MISC) and an adaptive multi-fidelity Stochastic Radial Basis Functions (SRBF) method. The QoI is evaluated via computational fluid dynamics simulations, which are performed with the in-house unsteady Reynolds-averaged Navier-Stokes (RANS) multi-grid solver $\chi$navis. The different fidelities employed by both methods are obtained by stopping the RANS solver at different grid levels of the multi-grid cycle. The performance of both methods is presented and discussed: in a nutshell, the findings suggest that, at least for the current implementation of both methods, MISC could be preferred whenever a limited computational budget is available, whereas for a larger computational budget SRBF seems to be preferable, thanks to its robustness to the numerical noise in the evaluations of the QoI.
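Whatever surrogate the multi-fidelity method delivers, the downstream UQ step is straightforward: Monte Carlo sampling of the cheap surrogate yields the four moments and a PDF estimate. The resistance response below is a made-up stand-in for the trained model:
```python
# Forward UQ over a trained surrogate: moments and PDF by cheap sampling.
# `surrogate` is a toy stand-in for the MISC/SRBF model of the resistance.
import numpy as np
from scipy import stats

surrogate = lambda speed, payload: 1.0 + 0.8 * speed ** 2 + 0.3 * payload

rng = np.random.default_rng(2)
speed = rng.uniform(0.9, 1.1, 100_000)          # operational uncertainty 1
payload = rng.uniform(0.8, 1.2, 100_000)        # operational uncertainty 2
R = surrogate(speed, payload)                   # QoI samples, essentially free

print(R.mean(), R.var(), stats.skew(R), stats.kurtosis(R))  # four moments
pdf = stats.gaussian_kde(R)                     # PDF estimate of the QoI
```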
... In particular, we propose an original multifidelity formulation for domain-aware active learning to accelerate the search and assessment of design alternatives. Multifidelity methods are computational approaches to modeling and optimization that make it possible to include high-fidelity responses while containing the computational expense, by combining information from multiple models that represent a physical system (or process) with different levels of accuracy and cost (Fernández-Godino et al. 2016; Peherstorfer et al. 2018; Beran et al. 2020). Frequently, multifidelity strategies synthesize many responses of cheap-to-evaluate models with few interrogations of expensive representations into a single surrogate model (Kennedy and O'Hagan 2000; Forrester et al. 2007; Park et al. 2017). ...
Article
Full-text available
The multidisciplinary design optimization (MDO) of re-entry vehicles presents many challenges associated with the plurality of the domains that characterize the design problem and the multi-physics interactions. Aerodynamic and thermodynamic phenomena are strongly coupled and relate to the heat loads that affect the vehicle along the re-entry trajectory, which drive the design of the thermal protection system (TPS). The preliminary design and optimization of re-entry vehicles would benefit from accurate high-fidelity aerothermodynamic analyses, which usually require expensive computational fluid dynamics simulations. We propose an original formulation for multifidelity active learning that considers both the information extracted from data and domain-specific knowledge. Our scheme is developed for the design of re-entry vehicles and is demonstrated for the case of an Orion-like capsule entering the Earth's atmosphere. The design process aims to minimize the mass of propellant burned during the entry maneuver, the mass of the TPS, and the temperature experienced by the TPS along the re-entry. The results demonstrate that our multifidelity strategy achieves a significant improvement of the design solution with respect to the baseline. In particular, the outcomes of our method are superior to the design obtained through a single-fidelity framework, as a result of the principled selection of a limited number of high-fidelity evaluations.
... The North Atlantic Treaty Organization (NATO) Science and Technology Organization (STO) working group AVT-331 "Goal-driven, multi-fidelity approaches for military vehicle system-level design" is currently investigating computing frameworks that could significantly impact the design of next-generation military vehicles [1]. In order to save computational time for uncertainty quantification, optimization, or design space exploration, the use of surrogate models can be a very good strategy. ...
Article
Full-text available
In this article, multi-fidelity kriging and sparse polynomial chaos expansion (SPCE) surrogate models are constructed. In addition, a novel combination of the two surrogate approaches into a multi-fidelity SPCE-Kriging model is presented. Accurate surrogate models, once obtained, can be employed for evaluating a large number of designs for uncertainty quantification, optimization, or design space exploration. Analytical benchmark problems are used to show that accurate multi-fidelity surrogate models can be obtained at lower computational cost than high-fidelity models. The benchmarks include non-polynomial and polynomial functions of various input dimensions, lower-dimensional heterogeneous non-polynomial functions, as well as a coupled spring-mass system. Overall, multi-fidelity models are more accurate than high-fidelity ones for the same cost, especially when only a few high-fidelity training points are employed. Full-order PCEs tend to be a factor of two or so worse than SPCEs in terms of overall accuracy. The combination of the two approaches into the SPCE-Kriging model leads to a more accurate and flexible method overall.
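The regression-based construction behind a (non-sparse) PCE is easy to show in one dimension: project noisy samples of a model onto a Legendre basis, the orthogonal family matching uniform inputs. Sparse basis selection and the kriging combination in the paper add machinery beyond this sketch:
```python
# Minimal polynomial chaos expansion by least squares: Legendre basis for
# uniform inputs, fitted with numpy only (no sparsity, unlike the paper's SPCE).
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 200)                 # uniform input -> Legendre basis
y = np.exp(0.7 * x) + 0.01 * rng.standard_normal(x.size)

degree = 6
Psi = np.polynomial.legendre.legvander(x, degree)    # basis design matrix
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)       # regression-based PCE

x_test = np.linspace(-1.0, 1.0, 5)
print(np.polynomial.legendre.legval(x_test, coef))   # surrogate evaluations
```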
... simplified physics), and resort to querying high-fidelity models (e.g., refined meshes or full-physics models) only sparingly, to correct the initial guess produced with the low-fidelity models, see e.g. [5]. Within this general framework, several approaches can be conceived, depending on the kind of fidelity models considered and on the strategy used to sample the parameter space (i.e., for what values of the uncertain parameters the different fidelity models should be queried/evaluated). ...
Preprint
Full-text available
This paper presents a comparison of two multi-fidelity methods for the forward uncertainty quantification of a naval engineering problem. Specifically, we consider the problem of quantifying the uncertainty of the hydrodynamic resistance of a roll-on/roll-off passenger ferry advancing in calm water and subject to two operational uncertainties (ship speed and payload). The first four statistical moments (mean, variance, skewness, kurtosis), and the probability density function for such quantity of interest (QoI) are computed with two multi-fidelity methods, i.e., the Multi-Index Stochastic Collocation (MISC) method and an adaptive multi-fidelity Stochastic Radial Basis Functions (SRBF) algorithm. The QoI is evaluated via computational fluid dynamics simulations, which are performed with the in-house unsteady Reynolds-Averaged Navier-Stokes (RANS) multi-grid solver $\chi$navis. The different fidelities employed by both methods are obtained by stopping the RANS solver at different grid levels of the multi-grid cycle. The performance of both methods is presented and discussed: in a nutshell, the findings suggest that, at least for the current implementations of both algorithms, MISC could be preferred whenever a limited computational budget is available, whereas for a larger computational budget SRBFs seem to be preferable, thanks to their robustness to the numerical noise in the evaluations of the QoI.
... where $R$ is the resistance in calm water at a Froude number equal to 0.28, $L_{pp}$ is the length between perpendiculars, $\nabla$ the displacement, $B$ the overall beam, $T$ the draught, and $V$ the volume reserved for the sonar in the bow dome; finally, $u_l$ and $u_u$ are the lower and upper bounds of $u$, respectively. The design space has been defined within the activities of the NATO Science and Technology Organization, Applied Vehicle Technology (AVT), Research Task Group (RTG) 331 on "Goal-Driven, Multi-Fidelity Approaches for Military Vehicle System-Level Design" [4]. The original design space is formed by $M = 22$ design variables, defined by the FFD method [21]. ...
Preprint
Methodologies for reducing the design-space dimensionality in shape optimization have recently been developed based on unsupervised machine learning methods. These methods provide reduced-dimensionality representations of the design space, capable of maintaining a certain degree of the original design variability. Nevertheless, they usually do not allow the use of the original parameterization, which limits their widespread adoption in industry, where the design parameters often pertain to well-established parametric models, e.g. CAD (computer-aided design) models. This work presents how to embed the original parameters of a parametric model in a reduced-dimensionality representation. The method, which takes advantage of the definition of a newly introduced generalized feature space, is demonstrated for the reparameterization of a free-form deformation design space and the consequent solution of a simulation-driven design optimization problem of a naval destroyer in calm water.
... Surrogate Based Optimization (SBO) can significantly improve the efficiency of the optimization procedure: the available information is fully exploited and synthesized into a surrogate model to lower the number of required expensive function evaluations, thus saving time, resources, and the associated costs [1,2,3,4,5,6]. Efficiency can be further improved in a multifidelity setting, where we have cheaper, but potentially biased, approximations to the function that can be used to assist the search for optimal points [7,8,9,10,11,12]. Within this context, we propose a scheme for resource-aware multifidelity active learning to reduce the computational time and cost associated with the optimization of black-box functions. ...
Conference Paper
Traditional methods for black box optimization require a considerable number of evaluations of the objective function. This can be time-consuming, impractical, and infeasible for many applications in aerospace science and engineering, which rely on accurate representations and expensive models to evaluate. Bayesian Optimization (BO) methods search for the global optimum by progressively (actively) learning a surrogate model of the objective function along the search path. Bayesian optimization can be accelerated through multifidelity approaches, which leverage multiple black-box approximations of the objective function that are computationally cheaper to evaluate but still provide relevant information to the search task. Further computational benefits are offered by the availability of parallel and distributed computing architectures, whose optimal usage is an open opportunity within the context of active learning. This paper introduces the Resource Aware Active Learning (RAAL) algorithm, a multifidelity Bayesian scheme to accelerate the optimization of black box functions. At each optimization step, the RAAL procedure computes the set of best sample locations and the associated fidelity sources that maximize the information gain acquired during the parallel/distributed evaluation of the objective function, while accounting for the limited computational budget. The scheme is demonstrated for a variety of benchmark problems, and results are discussed for both single fidelity and multifidelity settings. In particular, we observe that the RAAL strategy optimally seeds multiple points at each iteration, which allows for a major speed-up of the optimization task.
... Surrogate Based Optimization (SBO) can significantly improve the efficiency of the optimization procedure: the available information is fully exploited and synthesized into a surrogate model to lower the number of required expensive function evaluations, thus saving time, resources, and the associated costs [1,2,3,4,5,6]. Efficiency can be further improved in a multifidelity setting, where we have cheaper, but potentially biased, approximations to the function that can be used to assist the search for optimal points [7,8,9,10,11,12]. Within this context, we propose a scheme for resource-aware multifidelity active learning to reduce the computational time and cost associated with the optimization of black-box functions. ...
Preprint
Full-text available
Traditional methods for black box optimization require a considerable number of evaluations, which can be time-consuming, impractical, and often infeasible for many engineering applications that rely on accurate representations and expensive models to evaluate. Bayesian Optimization (BO) methods search for the global optimum by progressively (actively) learning a surrogate model of the objective function along the search path. Bayesian optimization can be accelerated through multifidelity approaches, which leverage multiple black-box approximations of the objective function that can be computationally cheaper to evaluate but still provide relevant information to the search task. Further computational benefits are offered by the availability of parallel and distributed computing architectures, whose optimal usage is an open opportunity within the context of active learning. This paper introduces the Resource Aware Active Learning (RAAL) strategy, a multifidelity Bayesian scheme to accelerate the optimization of black box functions. At each optimization step, the RAAL procedure computes the set of best sample locations and the associated fidelity sources that maximize the information gain acquired during the parallel/distributed evaluation of the objective function, while accounting for the limited computational budget. The scheme is demonstrated for a variety of benchmark problems, and results are discussed for both single fidelity and multifidelity settings. In particular, we observe that the RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
... Finally, the application of multi-fidelity methodologies to the analysis and design of vehicles is addressed by the AVT-331 group on "Goal-Driven, Multi-Fidelity Approaches for Military Vehicle System-Level Design." An overview on the AVT-331 activities on multi-fidelity approaches may be found in [7]. ...
Preprint
Full-text available
This paper presents a comparison of two methods for the forward uncertainty quantification (UQ) of complex industrial problems. Specifically, the performance of Multi-Index Stochastic Collocation (MISC) and adaptive multi-fidelity Stochastic Radial Basis Functions (SRBF) surrogates is assessed for the UQ of a roll-on/roll-off passenger ferry advancing in calm water and subject to two operational uncertainties, namely the ship speed and draught. The estimation of the expected value, standard deviation, and probability density function of the (model-scale) resistance is presented and discussed, as obtained by multi-grid Reynolds-averaged Navier-Stokes (RANS) computations. Both MISC and SRBF use as multi-fidelity levels the evaluations on different grid levels, intrinsically employed by the RANS solver for multi-grid acceleration; four grid levels are used here, obtained as isotropic coarsening of the initial finest mesh. The results suggest that MISC could be preferred when only limited data sets are available. For larger data sets, both MISC and SRBF represent a valid option, with a slight preference for SRBF, due to its robustness to noise.
... Among these methods, multi-fidelity approaches are gaining attention, due to their capability to combine the accuracy of high-fidelity solvers with the computational cost of low-fidelity solvers. See for instance the NATO Science and Technology Organization, Applied Vehicle Technology task group AVT-331, which is collaboratively assessing "Goal-Driven, Multi-Fidelity Approaches for Military Vehicle System-Level Design", including applications from the air, space, and sea domains (Beran et al., 2020). Multi-fidelity methods leverage a fidelity spectrum of computational models (from low to high fidelity), with the objective of maximizing the model accuracy while minimizing the associated computational cost (Giselle Fernández-Godino et al., 2019). ...
Conference Paper
Full-text available
Despite recent advances in machine learning, simulation-driven design optimization using high-fidelity simulations may still be prohibitively expensive for practical applications. This paper investigates improvements in multi-fidelity surrogate-based hydrodynamic optimization, which are intended to make the process faster and more efficient. Specific innovations are: a) the use of a reduced initial dataset with only one data point for all fidelity levels except the lowest, to reduce the computational cost of surrogate-model initialization; b) accounting for noise variance in the selection of the fidelity level to sample, to avoid oversampling well-resolved but noisy fidelity levels; c) improving the automatic mesh adaptation protocol for the CFD simulations; and d) restarting high-fidelity simulations from converged low-fidelity results, to improve the overall efficiency of the design optimization process. These methodological advancements are demonstrated for an analytical test problem, as well as the shape optimization of a NACA 4-digit airfoil and the DTMB 5415 for calm-water resistance. The results show that the reduced dataset drastically reduces the computational cost of initialization and favors efficient low-fidelity exploration of the design space. The noise-corrected fidelity selection encourages the selection of higher fidelities, to effectively determine the true optimum. Finally, the CFD solver advancements make high-fidelity simulations up to eight times faster. Compared with previous work, the solution of the DTMB 5415 problem exhibits a more robust training process, providing a slightly improved optimum in less than half the computational time.
Conference Paper
Predicting aeroelastic flutter during the early stages of the aircraft design process is important to help avoid costly problems that may appear later in the design process. Using multifidelity modeling, a relatively cheap flutter analysis can be performed with the doublet-lattice method and the time-accurate Euler equations. This strategy has already been shown to alleviate the cost of transonic flutter prediction. This paper describes the work that has been done so far to make an implementation of this prediction framework that is open-source and accessible to a broader community of researchers. The first half of the framework has been implemented and is described in this paper. An overview of the entire framework is given first. This is followed by the methods section, which introduces the theory of the framework and how it has been implemented. The open-source implementation is then verified with the AGARD 445.6 wing. The results for the AGARD 445.6 wing are compared against those of a closed-source implementation as well as data from the literature. The open-source implementation is also used to obtain results on a version of the undeflected Common Research Model (uCRM) wing. The paper concludes by outlining the work that is left and the anticipated challenges that must be overcome.
Article
Full-text available
The present paper proposes a new mixed-fidelity method to optimize the shape of ships using genetic algorithms (GA) and potential flow codes to evaluate the hydrodynamics of variant hull forms, enhanced by a surrogate model based on an Artificial Neural Network (ANN) to account for viscous effects. The performance of the variant hull forms generated by the GA is evaluated for calm-water resistance using potential flow methods, which are quite fast on modern computers. However, these methods do not account for the viscous effects that are dominant in the stern region of the ship. Reynolds-Averaged Navier-Stokes (RANS) solvers capture these effects but are too time-consuming to evaluate the hundreds of variants produced within the GA search. In this study, a RANS solver is used prior to the execution of the GA to train an ANN in modeling the effect of the stern design geometrical parameters only. Potential flow results, accounting for the geometrical design parameters of the rest of the hull, are combined with the aforementioned trained metamodel for the final hull-form evaluation. This work concentrates on providing a more reliable framework for the evaluation of hull-form performance in calm water without a significant increase in computing time.
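One plausible arrangement of this mixed-fidelity evaluation is sketched below: an MLP learns the RANS-minus-potential-flow residual as a function of the stern parameters, and each GA variant is then scored as the potential-flow estimate plus the learned correction. Both "solvers" here are toy functions, not real codes:
```python
# Hedged sketch of a potential-flow estimate corrected by an ANN trained
# on the viscous residual; the two solver functions are toy stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
potential_flow = lambda p: 1.0 + 0.5 * p[:, 0] ** 2           # fast estimate
rans = lambda p: potential_flow(p) + 0.2 * np.tanh(p[:, 1])   # adds viscous effect

P_train = rng.uniform(-1, 1, (40, 2))            # stern design parameters
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000).fit(
    P_train, rans(P_train) - potential_flow(P_train))         # learn the residual

P_ga = rng.uniform(-1, 1, (5, 2))                # variants proposed by the GA
print(potential_flow(P_ga) + ann.predict(P_ga))  # mixed-fidelity evaluations
```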
Conference Paper
Full-text available
A generalized multi-fidelity (MF) metamodel of CFD (computational fluid dynamics) computations is presented for design- and operational-space exploration, based on machine learning from an arbitrary number of fidelity levels. The method is based on stochastic radial basis functions (RBF) with least squares regression and in-the-loop optimization of RBF parameters to deal with noisy data. The method is intended to accurately predict ship performance while reducing the computational effort required by simulation-based design optimization (SBDO) and/or uncertainty quantification problems. The present formulation exploits the potential of simulation methods that naturally produce results spanning a range of fidelity levels through adaptive grid refinement and/or multi-grid resolution (i.e. varying the grid resolution). The performance of the method is assessed for one analytical test and three SBDO problems based on CFD simulations, namely a NACA hydrofoil, the DTMB 5415 model, and a roll-on/roll-off passenger ferry in calm water. Under the assumption of a limited budget of function evaluations, the proposed MF method shows better performance in comparison with its single-fidelity counterpart. The method also shows very promising results in dealing with and learning from noisy CFD data.
Conference Paper
Full-text available
The focus of the present paper is the assessment of deterministic and stochastic methods for the prediction of large amplitude ship motions in heavy weather, including comparison with experimental fluid dynamics (EFD) data. The research was conducted under the auspices of NATO AVT-280 on "Evaluation of Prediction Methods for Ship Performance in Heavy Weather." EFD data are obtained from free-running model tests of a naval destroyer hull form. Namely, an appended 5415M model is assessed for course keeping in irregular stern-quartering waves at a target Froude number equal to 0.33. Irregular waves are based on the JONSWAP spectrum. The static stability, forward speed, wave direction, and wave spectrum were set such that large amplitude roll motions were recorded, including a number of capsizes. Dynamic stability phenomena witnessed during the experiments include resonant roll, loss of static stability, and significant deck edge immersion. Deterministic validation is performed for a weakly nonlinear as well as a body-exact time domain panel code. Results show fairly good predictions for roll, heave, and pitch motions, and forward speed variations. Sway velocity, yaw motions, and deck edge immersion heights are seen to be more difficult to predict accurately. Next, stochastic validation and deterministic reconstruction of severe (large roll angles) and rare (capsizing) events are assessed by free-running CFD (URANS) simulations. Validation against EFD data includes roll decay at zero speed and self-propulsion revolutions-per-minute studies in calm water. The stochastic validation of free-running CFD is achieved by statistical assessment of EFD data and CFD results by large-sample uncertainty quantification methods for input wave and ship response via spectrum, autocovariance, and bootstrap analysis. CFD results are in very good agreement with EFD, showing satisfactory outcomes of the stochastic validation procedure. EFD and CFD data are used to identify wave sequences causing large roll angles. These are based on the expected value and standard deviation of wave amplitude and encounter period. As proof of concept, this sequence is used for the deterministic reconstruction of severe (large roll angles) and rare (capsizing) events by CFD simulations.
Article
Full-text available
Thanks to their versatility, ease of deployment, and high performance, surrogate models have become staple tools in the arsenal of uncertainty quantification (UQ). From local interpolants to global spectral decompositions, surrogates are characterised by their ability to efficiently emulate complex computational models based on a small set of model runs used for training. An inherent limitation of many surrogate models is their susceptibility to the curse of dimensionality, which traditionally limits their applicability to a maximum of $\mathcal{O}(10^2)$ input dimensions. We present a novel approach to high-dimensional surrogate modelling that is model-, dimensionality-reduction-, and surrogate-model-agnostic (black box), and can enable the solution of high-dimensional (i.e. up to $10^4$) problems. After introducing the general algorithm, we demonstrate its performance by combining Kriging and polynomial chaos expansion surrogates with kernel principal component analysis. In particular, we compare the generalisation performance that the resulting surrogates achieve to the classical sequential application of dimensionality reduction followed by surrogate modelling on several benchmark applications, comprising an analytical function and two engineering applications of increasing dimensionality and complexity.
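The baseline "sequential" strategy the paper compares against is simple to write down: compress the inputs with kernel PCA, then fit a surrogate in the reduced space. The sketch below uses scikit-learn and made-up data with low effective dimensionality; the paper's coupled approach goes beyond this plain pipeline:
```python
# Sequential dimensionality reduction + surrogate modelling (the baseline
# the paper improves upon): kernel PCA feeding a Gaussian process.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 500))             # 500 nominal input dimensions
y = np.sin(X[:, :5].sum(axis=1))                # low effective dimensionality

model = make_pipeline(KernelPCA(n_components=5, kernel="rbf"),
                      GaussianProcessRegressor())
model.fit(X, y)
print(model.predict(X[:3]))
```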
Article
Full-text available
The surrogate modeling toolbox (SMT) is an open-source Python package that contains a collection of surrogate modeling methods, sampling techniques, and benchmarking functions. This package provides a library of surrogate models that is simple to use and facilitates the implementation of additional methods. SMT is different from existing surrogate modeling libraries because of its emphasis on derivatives, including training derivatives used for gradient-enhanced modeling, prediction derivatives, and derivatives with respect to training data. It also includes unique surrogate models: kriging by partial least-squares reduction, which scales well with the number of inputs; and energy-minimizing spline interpolation, which scales well with the number of training points. The efficiency and effectiveness of SMT are demonstrated through a series of examples. SMT is documented using custom tools for embedding automatically tested code and dynamically generated plots to produce high-quality user guides with minimal effort from contributors. SMT is maintained in a public version control repository.
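Basic SMT usage follows a set-train-predict pattern; the snippet below reflects the package's documented interface for Latin hypercube sampling and kriging, including the derivative predictions the toolbox emphasizes (the specific argument values are illustrative):
```python
# Typical SMT workflow: LHS design, kriging training, value and derivative
# prediction. Based on SMT's documented API; numbers are illustrative.
import numpy as np
from smt.sampling_methods import LHS
from smt.surrogate_models import KRG

xlimits = np.array([[0.0, 4.0]])
xt = LHS(xlimits=xlimits)(20)                   # 20-point design of experiments
yt = np.sin(xt[:, 0])

sm = KRG(theta0=[1e-2])
sm.set_training_values(xt, yt)
sm.train()

xv = np.linspace(0.0, 4.0, 5)[:, None]
print(sm.predict_values(xv))                    # surrogate predictions
print(sm.predict_derivatives(xv, 0))            # derivative w.r.t. input 0
```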
Conference Paper
Full-text available
The paper presents how to efficiently and effectively solve stochastic shape optimization problems by combining Reynolds-averaged Navier-Stokes (RANS) equation solvers with design-space augmented dimensionality reduction (ADR). This study has been conducted within the NATO Science and Technology Organization, Applied Vehicle Technology, Task Group AVT-252 "Stochastic Design Optimization for Naval and Aero Military Vehicles." The application pertains to the robust and the reliability-based robust design optimization of a destroyer hull form for resistance in calm water and waves and seakeeping performance, under stochastic environmental and operating conditions (speed, sea state, heading). The current work extends previous research by the authors, presented at earlier AIAA conferences [1-3], where only potential flow solvers were used. In the present work, the expected value of the total resistance is reduced by 4.4% and 3% in calm water and waves, respectively. An 8% improvement of the seakeeping performance is also achieved. Design-space assessment by ADR is demonstrated to be a viable option for tackling the curse of dimensionality in shape optimization, especially when high-fidelity, CPU-expensive solvers are used.
Conference Paper
Full-text available
This paper presents a multifidelity method for optimization under uncertainty for aerospace problems. In this work, the effectiveness of the method is demonstrated for the robust optimization of a tailless aircraft that is based on the Boeing Insitu ScanEagle. Aircraft design is often affected by uncertainties in manufacturing and operating conditions. Accounting for uncertainties during optimization ensures a robust design that is more likely to meet performance requirements. Designing robust systems can be computationally prohibitive due to the numerous evaluations of expensive-to-evaluate high-fidelity numerical models required to estimate system-level statistics at each optimization iteration. This work uses a multifidelity Monte Carlo approach to estimate the mean and the variance of the system outputs for robust optimization. The method uses control variates to exploit multiple fidelities and optimally allocates resources to different fidelities to minimize the variance in the estimates for a given budget. The results for the ScanEagle application show that the proposed multifidelity method achieves substantial speed-ups as compared to a regular Monte-Carlo-based robust optimization.
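The control-variate mechanism that underpins multifidelity Monte Carlo estimation can be shown for a single output mean: a few high-fidelity samples are corrected by many cheap low-fidelity ones. The model pair and sample counts below are toy assumptions; the paper additionally optimizes the allocation across fidelities:
```python
# Two-fidelity control-variate mean estimate (toy models). The coefficient
# alpha is the variance-minimizing choice estimated from paired samples.
import numpy as np

rng = np.random.default_rng(5)
f_hi = lambda z: np.sin(z) + 0.1 * z ** 2       # expensive model (stand-in)
f_lo = lambda z: np.sin(z)                      # cheap, correlated model

z_hi = rng.standard_normal(100)                 # few high-fidelity samples
z_lo = rng.standard_normal(100_000)             # many low-fidelity samples

y_hi, y_lo_paired = f_hi(z_hi), f_lo(z_hi)      # paired evaluations
cov = np.cov(y_hi, y_lo_paired)
alpha = cov[0, 1] / cov[1, 1]                   # variance-minimizing coefficient

mu = y_hi.mean() + alpha * (f_lo(z_lo).mean() - y_lo_paired.mean())
print(mu)                                       # reduced-variance mean estimate
```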
Conference Paper
Full-text available
Optimization requires the quantities of interest that define objective functions and constraints to be evaluated a large number of times. In aerospace engineering, these quantities of interest can be expensive to compute (e.g., numerically solving a set of partial differential equations), leading to a challenging optimization problem. Bayesian optimization (BO) is a class of algorithms for the global optimization of expensive-to-evaluate functions. BO leverages all past evaluations available to construct a surrogate model. This surrogate model is then used to select the next design to evaluate. This paper reviews two recent advances in BO that tackle the challenges of optimizing expensive functions and thus can enrich the optimization toolbox of the aerospace engineer. The first method addresses optimization problems subject to inequality constraints where a finite budget of evaluations is available, a common situation when dealing with expensive models (e.g., a limited time to conduct the optimization study or limited access to a supercomputer). This challenge is addressed via a lookahead BO algorithm that plans the sequence of designs to evaluate in order to maximize the improvement achieved, not only at the next iteration, but once the total budget is consumed. The second method demonstrates how sensitivity information, such as gradients computed with adjoint methods, can be incorporated into a BO algorithm. This algorithm exploits sensitivity information in two ways: first, to enhance the surrogate model, and second, to improve the selection of the next design to evaluate by accounting for future gradient evaluations. The benefits of the two methods are demonstrated on aerospace examples.
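The surrogate-then-acquire cycle that these BO variants build on fits in a few lines; the loop below uses plain expected improvement on a toy objective, whereas the lookahead and gradient-exploiting acquisitions reviewed in the paper replace the acquisition step:
```python
# Bare-bones Bayesian optimization with expected improvement (EI) on a toy
# 1-D objective; a baseline for the lookahead/gradient variants discussed.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

f = lambda x: np.sin(8.0 * x) + x               # toy expensive objective

rng = np.random.default_rng(6)
X = rng.uniform(0.0, 1.0, (4, 1))
y = f(X[:, 0])
cand = np.linspace(0.0, 1.0, 500)[:, None]

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, s = gp.predict(cand, return_std=True)
    s = np.maximum(s, 1e-12)                    # guard against zero std dev
    gap = y.min() - mu
    ei = gap * norm.cdf(gap / s) + s * norm.pdf(gap / s)
    x_new = cand[[np.argmax(ei)]]               # next design to evaluate
    X, y = np.vstack([X, x_new]), np.append(y, f(x_new[:, 0]))

print(X[np.argmin(y), 0], y.min())              # best design found
```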
Article
Full-text available
This paper focuses on the analysis of a challenging free-surface flow problem involving a surface vessel moving at high (planing) speeds. The investigation is performed using a general-purpose, high-Reynolds-number free-surface solver developed at CNR-INSEAN. The methodology is based on a second-order finite volume discretization of the unsteady Reynolds-averaged Navier–Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253–261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19–29, 2009); air/water interface dynamics is accurately modeled by a nonstandard level set approach (Di Mascio et al. in Comput Fluids 36(5):868–886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free-surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the free-surface evolution even in the presence of violent wave-breaking phenomena, keeping the interface sharp without any need to smear the fluid properties across the two phases. The paper is aimed at predicting the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets and several energetic breaking and plunging waves. The computational results include convergence of the trim angle, sinkage, and resistance under grid refinement; high-quality experimental data are used for validation, allowing comparison of the hydrodynamic forces and the attitudes assumed at different velocities. Very good agreement between numerical and experimental results demonstrates the reliability of the single-phase level set approach for the prediction of high-Froude-number flows.
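The core interface-tracking idea can be sketched in one dimension: advect a signed-distance function and read the interface off its zero crossing. The toy upwind scheme below is entirely our construction (the actual solver is a 3D finite-volume RANS code) and is meant only to show the mechanics.

```python
import numpy as np

# 1D sketch: advect a signed-distance (level set) function phi with a
# first-order upwind scheme; phi = 0 marks the interface, phi < 0 the
# water phase where the flow equations would actually be solved.
nx, L, u, cfl = 200, 1.0, 0.3, 0.5
dx = L / nx
dt = cfl * dx / abs(u)
x = np.linspace(0.0, L, nx)
phi = x - 0.25                     # interface initially at x = 0.25

for _ in range(int(0.5 / dt)):     # advance to t = 0.5
    if u > 0:
        dphi = np.diff(phi, prepend=phi[0]) / dx   # backward difference
    else:
        dphi = np.diff(phi, append=phi[-1]) / dx   # forward difference
    phi -= dt * u * dphi

interface = x[np.argmin(np.abs(phi))]
print(f"interface position ~ {interface:.3f} (exact: {0.25 + u*0.5:.3f})")
```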
Article
Full-text available
Since its introduction, the NASA Common Research Model has proved a useful benchmark for computational-fluid-dynamics-based drag prediction and aerodynamic design optimization. The model was originally conceived as a purely aerodynamic benchmark, and as such the wing geometry corresponds to the deflected shape at its nominal 1g flight condition. However, interest in extending this model to aeroelastic studies has been growing. Unfortunately, because of its predefined deflection, the model is not suitable for aeroelastic analysis and design. To address this issue, an undeflected Common Research Model is defined through an inverse design procedure that includes the outer mold line geometry of the undeflected wing and the corresponding internal wingbox structure. Additionally, because modern transport aircraft are trending toward higher-aspect-ratio wing designs, a higher-aspect-ratio variant of the Common Research Model wing is developed to assess next-generation wing designs. This variant has an aspect ratio of 13.5 and is designed using buffet-constrained multipoint aerostructural optimization. The purpose of these models is to provide publicly available benchmarks for aeroelastic wing analysis and design optimization.
Conference Paper
Full-text available
This paper presents methodological investigations performed within AGILE, an ongoing EU-funded research project on MDO for overall aircraft design. AGILE is developing the next generation of aircraft multidisciplinary design and optimization processes, targeting significant reductions in aircraft development costs and time to market and leading to cheaper and greener aircraft solutions. The paper introduces the AGILE project structure and describes the achievements of the first year (Design Campaign 1), which led to a reference distributed MDO system. A focus is then placed on the different novel optimization techniques studied during the second year, all aimed at easing the optimization of complex workflows characterized by a high degree of interdisciplinary coupling and a large number of design variables, in the context of multi-level, multi-partner collaborative engineering projects. The implementation of these methods in the enhanced MDO framework is then discussed.
Article
Full-text available
Simulations are often computationally expensive, and the need to perform multiple realizations, as in uncertainty quantification (UQ) or optimization, makes surrogate models an attractive option. However, for expensive high-fidelity models (HFM), even performing the number of simulations needed to fit a surrogate may be too expensive. Inexpensive but less accurate low-fidelity models (LFM) are often also available. Multi-fidelity models (MFM) combine HFM and LFM in order to achieve accuracy at a reasonable cost. With the increasing popularity of MFM in mind, the aim of this paper is to summarize the state of the art of MFM modeling trends. For this purpose, the publications reviewed in this work are classified by application, surrogate selection (if any), the difference between fidelities, the method used to combine these fidelities, the field of application, and the year published. Computer time savings are usually the reason for using MFM, so it is important to properly report the achieved savings. Unfortunately, we find that many papers do not present sufficient information to determine these savings. Therefore, the paper also includes guidelines for authors to present their MFM savings in a way that is useful to future MFM users. Based on papers that provided enough information, we find that time savings are highly problem dependent and that the MFM approaches we surveyed provided time savings of up to 90%.
Article
Full-text available
Within an aerodynamic shape optimization framework, an efficient shape parameterization and deformation scheme is critical to allow flexible deformation of the surface with the maximum possible design space coverage. Numerous approaches have been developed for the geometric representation of airfoils. A fundamental approach is considered here from the geometric perspective, and a method is presented to allow the derivation of efficient, generic, and orthogonal airfoil geometric design variables. This is achieved by the mathematical decomposition of a training library. The resulting geometric modes are independent of a parameterization scheme, surface and volume mesh, and flow solver; thus, they are generally applicable. However, these modes are dependent on the training library, and so a benchmark performance measure, called the airfoil technology factor, has also been incorporated into the scheme to allow intelligent metric-based filtering, or design space reduction, of the training library to ensure that efficient airfoil deformation modes are extracted. Results are presented for several geometric shape recovery problems, using two optimization approaches, and it is shown that these mathematically extracted degrees of freedom perform particularly well in all cases, showing excellent design space coverage. These design variables are also shown to outperform those based on other widely used approaches; the Hicks-Henne "bump" functions and a linear (deformative) approximation to Sobieczky's parametric section parameterization are considered.
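The mode-extraction step can be sketched as a singular value decomposition of a library of surface ordinates. The random placeholder library and the 99% variance cutoff below are illustrative assumptions; the paper's modes come from a curated, metric-filtered airfoil library.

```python
import numpy as np

# Hypothetical training library: m airfoils, each discretized as n
# surface ordinates at shared chordwise stations (rows = airfoils).
rng = np.random.default_rng(2)
m, n = 40, 101
library = rng.normal(size=(m, n))          # placeholder for real shapes

mean_shape = library.mean(axis=0)
deviations = library - mean_shape          # deform relative to the mean

# SVD yields orthogonal deformation modes sorted by variance captured
_, s, modes = np.linalg.svd(deviations, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99) + 1)  # retain 99% of variability

def airfoil(w):
    """New shapes are spanned by k orthogonal design variables w."""
    return mean_shape + w @ modes[:k]

print(f"{k} modes retain 99% of the library's geometric variance")
```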
Article
This work adopts the manifold mapping multifidelity modeling technique for frequency-domain flutter prediction. While this type of multifidelity model is usually employed in a trust-region optimization framework, the current approach uses it, in combination with the p-k method, to find a critical point in the high-fidelity Mach-reduced frequency space (namely, the matched flutter point for a given density and sound speed). This is done by first computing a grid of low-fidelity (doublet lattice-based) aerodynamic influence coefficients in modal coordinates. After starting at the low-fidelity match point, the process iteratively approaches the high-fidelity (Euler-based) match point using a single Mach and reduced frequency evaluation per iteration. The locally accurate multifidelity model operates by applying a correction matrix to the low-fidelity response using the changes in high- and low-fidelity response vectors between iterations; the correction matrix is computed using the Moore-Penrose pseudoinverse operator, making it reminiscent of various doublet lattice correction methods. When applied to a common test case, the process is shown to converge in 4 to 8 high-fidelity evaluations, depending on the fluid density and sound speed. For comparison, standard time- and frequency-domain methods are estimated to be 3 to 10 times more costly in terms of time steps or frequency evaluations.
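A loose numerical sketch of the pseudoinverse-based correction described above is given below; the notation, toy data, and secant-style construction are ours, not the paper's exact iteration.

```python
import numpy as np

def correction_matrix(dF_hi, dF_lo):
    """Map low-fidelity response changes onto high-fidelity ones:
    S satisfies S @ dF_lo ~ dF_hi in the least-squares sense."""
    return dF_hi @ np.linalg.pinv(dF_lo)   # Moore-Penrose pseudoinverse

# Toy response vectors at two successive iterates (columns)
F_hi = np.array([[1.00, 1.10], [0.50, 0.65]])   # high-fidelity
F_lo = np.array([[0.90, 0.97], [0.40, 0.52]])   # low-fidelity

# Differences between iterations define the local correction
S = correction_matrix(np.diff(F_hi), np.diff(F_lo))

# Corrected (mapped) low-fidelity response at the latest iterate
F_mapped = S @ F_lo[:, [-1]]
print(F_mapped)
```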
Article
In this paper, multifidelity modeling using the new nondeterministic localized Galerkin approach is introduced to address the practical challenges associated with 1) multiple low-fidelity models, 2) localized correlations of low-fidelity models to a high-fidelity one, and 3) low-fidelity data or models under uncertainty. The proposed method employs two technical processes: the consolidation of multiple low-fidelity models and the refined adaptation of the consolidated model. Along with the resulting prediction model, the proposed method also provides model dominance information that can be used to understand the characteristic response of the high-fidelity model regarding essential behavior described by the low-fidelity models. Nondeterministic kriging is employed for variable-fidelity modeling under uncertainty. The performance and characteristics of the proposed method are demonstrated and discussed with multiple fundamental mathematical examples and a thermally coupled aircraft structural design problem. It is found that the proposed localized Galerkin multifidelity method can effectively deal with the practical challenges and provide an accurate prediction model with potential uncertainty bounds along with model dominance information.
Article
A method based on the Karhunen-Loève expansion (KLE) is formulated for the assessment of arbitrary design spaces in shape optimization, quantifying the shape-modification variability and providing the definition of a reduced-dimensionality global model of the shape modification vector. The method is based on the concept of geometric variance and does not require design-performance analyses. Specifically, the KLE is applied to the continuous shape modification vector, requiring the solution of a Fredholm integral equation of the second kind. Once the equation is discretized, the problem reduces to the principal component analysis (PCA) of discrete geometrical data. The objective of the present work is to demonstrate how this method can be used to (a) assess different design spaces and shape parameterization methods before optimization is performed, without the need to run simulations for performance prediction, and (b) reduce the dimensionality of the design space, providing a shape reparameterization using KLE/PCA eigenvalues and eigenmodes. A demonstration for the hull-form optimization of the DTMB 5415 model in calm water is shown, where three design spaces are investigated, namely those provided by free-form deformation, radial basis functions, and global modification functions.
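Schematically (our notation), the KLE step solves the Fredholm eigenproblem for the autocovariance of the shape-modification vector $\boldsymbol{\delta}$ over the geometry domain $G$:

$$
\int_{G} \left\langle \boldsymbol{\delta}(\boldsymbol{\xi}, \mathbf{u})\, \boldsymbol{\delta}(\boldsymbol{\xi}', \mathbf{u}) \right\rangle \boldsymbol{\psi}(\boldsymbol{\xi}')\, d\boldsymbol{\xi}' = \lambda\, \boldsymbol{\psi}(\boldsymbol{\xi}),
$$

whose discretization is exactly the PCA eigenproblem on sampled geometries; retaining the leading eigenmodes up to a chosen fraction of the total geometric variance yields the reduced design space.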
Article
Multifidelity sparse polynomial chaos expansion (MFSPCE) models of critical flutter dynamic pressures as a function of Mach number, angle of attack, and thickness-to-chord ratio are constructed in lieu of solely using computationally expensive high-fidelity engineering analyses. Compressed sensing is used to determine a sparse representation, and an all-at-once approach is employed to create multifidelity polynomial chaos expansions with hybrid additive/multiplicative bridge functions. To demonstrate that accurate MFSPCE models can be obtained at lower computational cost than high-fidelity full-order polynomial chaos expansions, two analytic test functions and a more complex application example, which is the well-known Advisory Group for Aerospace Research and Development 445.6 aeroelastic model, are employed. The high- and low-fidelity levels considered are Euler and panel solutions, respectively, which are all combined with a modal structural solver.
Article
The paper presents a study on four adaptive sampling methods for a multi-fidelity (MF) metamodel, based on stochastic radial basis functions (RBF), for global design optimisation driven by expensive CFD simulations and adaptive grid refinement. The MF metamodel is built as the sum of a low-fidelity-trained metamodel and an error metamodel based on the difference between high- and low-fidelity simulations. The MF metamodel is adaptively refined using dynamic sampling criteria based on the prediction uncertainty, in combination with the objective optimum and the computational cost of high- and low-fidelity evaluations. The adaptive sampling methods are demonstrated on four analytical benchmarks and two design optimisation problems, pertaining to the resistance reduction of a NACA hydrofoil and of a destroyer-type vessel. The performance of the adaptive sampling methods is assessed via objective function convergence.
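A minimal additive version of this construction can be sketched as below, using deterministic RBF interpolants and an assumed fidelity pair in place of the paper's stochastic RBF and CFD solvers.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical fidelity pair (assumed for illustration)
def f_hi(x): return np.sin(8 * x) + x
def f_lo(x): return np.sin(8 * x + 0.3) + 0.9 * x   # cheap, biased

x_lo = np.linspace(0, 1, 25).reshape(-1, 1)   # many LF samples
x_hi = np.linspace(0, 1, 6).reshape(-1, 1)    # few HF samples

lf_model = RBFInterpolator(x_lo, f_lo(x_lo).ravel())
# Error metamodel trained on HF-LF discrepancies at the HF locations
err_model = RBFInterpolator(x_hi, (f_hi(x_hi) - f_lo(x_hi)).ravel())

def mf_model(x):
    """MF prediction = LF metamodel + error metamodel."""
    return lf_model(x) + err_model(x)

x_test = np.linspace(0, 1, 200).reshape(-1, 1)
print("max |MF - HF| error:",
      np.max(np.abs(mf_model(x_test) - f_hi(x_test).ravel())))
```

The paper's adaptive criteria would then add new LF or HF samples where the metamodel's prediction uncertainty, weighted by evaluation cost and proximity to the current optimum, is largest.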
Article
The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result in either costly changes or the acceptance of a configuration that does not meet expectations. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. A new multifidelity algorithm has been proposed, combining elements from typical trust region model management and classical quasi-Newton methods. In this paper, the algorithm is compared with a single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations compared with a single-fidelity approach. A contracting trust region is found to slow the progress of the multifidelity optimizer. However, by leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produces better designs in the cases considered. Investigating the return on investment confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to pure high-fidelity optimization is critical.
Article
In this paper, the nondeterministic kriging (NDK) method is proposed, aimed at engineering design-exploration applications, especially when only a limited number of random samples is available from either nondeterministic simulations or physical experiments under uncertainty. To handle nondeterministic data, the proposed NDK method uses separate aleatory and epistemic uncertainty processes. In the common situation in which resources for generating random samples are limited, the aleatory variance is assessed via a local regression kernel process. A prediction model built with conventional kriging often suffers from overfitting, which worsens with noisy, random data. The proposed NDK method provides physically meaningful insight into both the main trend and the prediction uncertainty of system behavior by capturing uncertainty in the sample data and suppressing numerical instability. The predicted uncertainty can be represented in terms of distinguishable aleatory and epistemic uncertainties, which is useful in decision-making for adaptive model building and design exploration. The potential benefits of the proposed NDK method are demonstrated with multiple numerical examples, including mathematical and aircraft concept design problems.
Article
In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the current application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified physics approximation, a reduced model, a data-fit surrogate) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.
Article
Multifidelity approaches are frequently used in design when high-fidelity models are too expensive to use directly and lower-fidelity models of reasonable accuracy exist. In optimization, corrected low-fidelity data are typically used in a sequence of independent, approximate optimizations bounded by trust regions. A new, unified, multifidelity quasi-Newton approach is presented that preserves an approximate inverse Hessian between iterations, determines search directions from high-fidelity data, and uses low-fidelity models for line searches. The proposed algorithm produces better search directions, maintains larger step sizes, and requires significantly fewer low-fidelity function evaluations than Trust Region Model Management. The multifidelity quasi-Newton method also provides an expected optimal point that is forward looking and is useful in building superior low-fidelity corrections. The new approach is compared with Trust Region Model Management and the BFGS quasi-Newton method on several analytic test problems using polynomial and kriging corrections. For comparison, a technique is demonstrated to initialize high-fidelity optimization when transition away from approximate models is deemed fruitful. In summary, the unified multifidelity quasi-Newton approach required fewer or equal high-fidelity function evaluations than Trust Region Model Management in about two-thirds of the test cases, and similarly reduced cost in more than half of cases compared with BFGS.
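The mechanics can be illustrated with a loose sketch (our construction, on a toy problem): high-fidelity gradients drive a BFGS inverse-Hessian search direction, while a cheaper low-fidelity model handles the backtracking line search.

```python
import numpy as np

def f_hi(x):  return (x[0] - 1)**2 + 10 * (x[1] - x[0]**2)**2
def g_hi(x):  # analytic HF gradient (adjoint-like information)
    return np.array([2*(x[0]-1) - 40*x[0]*(x[1]-x[0]**2),
                     20*(x[1]-x[0]**2)])
def f_lo(x):  # biased, cheaper surrogate of f_hi
    return (x[0] - 1)**2 + 8 * (x[1] - x[0]**2)**2

x, H = np.array([-1.0, 1.0]), np.eye(2)       # iterate, inverse Hessian
g = g_hi(x)
for _ in range(30):
    p = -H @ g                                 # HF-informed direction
    alpha = 1.0                                # LF backtracking search
    while f_lo(x + alpha * p) > f_lo(x) and alpha > 1e-8:
        alpha *= 0.5
    s = alpha * p
    x_new = x + s
    g_new = g_hi(x_new)
    y = g_new - g
    if y @ s > 1e-12:                          # curvature-guarded BFGS
        rho = 1.0 / (y @ s)
        I = np.eye(2)
        H = (I - rho*np.outer(s, y)) @ H @ (I - rho*np.outer(y, s)) \
            + rho*np.outer(s, s)
    x, g = x_new, g_new
print("x* ~", x, "f_hi(x*) =", f_hi(x))
```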
Article
A new approach to multifidelity, gradient-enhanced surrogate modeling using polynomial chaos expansions is presented. This approach seeks complementary additive and multiplicative corrections to low-fidelity data, whereas current hybrid methods in the literature attempt to balance individually calculated calibrations. An advantage of the new approach is that least-squares-optimal coefficients for both corrections and the model of interest are determined simultaneously, using the high-fidelity data directly in the final surrogate. The proposed technique is compared to the weighted approach for three analytic functions and for the numerical simulation of a vehicle's lift coefficient using Cartesian Euler CFD and panel aerodynamics. Investigation of the individual correction terms indicates that the advantage of the proposed approach is that the complementary calibrations separately adjust the low-fidelity data in local regions based on agreement or disagreement between the two fidelities. In cases where polynomials are suitable approximations to the true function, the new all-at-once approach is found to reduce error in the surrogate faster than the method of weighted combinations. When the low-fidelity model is a good approximation of the true function, the proposed technique outperforms monofidelity approximations as well. Sparse grid constructions alleviate the growth of the training set as root-mean-square error is calculated for increasingly higher polynomial orders. Utilizing gradient information provides an advantage at lower training grid levels for low-dimensional spaces, but worsens numerical conditioning of the system in higher dimensions.
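In schematic form (our notation), the surrogate combines multiplicative and additive polynomial-chaos corrections to the low-fidelity model,

$$
\hat{f}(\mathbf{x}) \;=\; \Big( \sum_i a_i \Psi_i(\mathbf{x}) \Big) f_{\mathrm{LF}}(\mathbf{x}) \;+\; \sum_j b_j \Psi_j(\mathbf{x}),
$$

with all coefficients $a_i, b_j$ determined simultaneously from the high-fidelity samples in a single least-squares problem, rather than calibrating each correction separately and weighting them afterwards.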
Article
This paper studies Bayesian ranking and selection (R&S) problems with correlated prior beliefs and continuous domains, i.e., Bayesian optimization (BO). Knowledge gradient methods [Frazier et al., 2008, 2009] have been widely studied for discrete R&S problems, where they sample the one-step Bayes-optimal point. When used over continuous domains, previous work on the knowledge gradient [Scott et al., 2011; Wu and Frazier, 2016; Wu et al., 2017] often relies on a discretized finite approximation. However, the discretization introduces error and scales poorly as the dimension of the domain grows. In this paper, we develop a fast discretization-free knowledge gradient method for Bayesian optimization. Our method is not restricted to the fully sequential setting, but is useful in all settings where the knowledge gradient can be used over continuous domains. We show how our method can be generalized to handle (i) batch suggestion of points (parallel knowledge gradient) and (ii) settings where derivative information is available in the optimization process (derivative-enabled knowledge gradient). In numerical experiments, we demonstrate that the discretization-free knowledge gradient method finds global optima significantly faster than previous Bayesian optimization algorithms on both synthetic test functions and real-world applications, especially when function evaluations are noisy; the derivative-enabled knowledge gradient can further improve performance, even outperforming gradient-based optimizers such as BFGS when derivative information is available.
Article
In recent years, Bayesian optimization has proven successful for the global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to decrease the number of objective function evaluations required for good performance. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge gradient (dKG), for which we show one-step Bayes-optimality, asymptotic consistency, and greater one-step value of information than is possible in the derivative-free setting. Our procedure accommodates noisy and incomplete derivative information, and comes in both sequential and batch forms. We show that dKG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, kernel learning, and k-nearest neighbors.
Book
This new edition of the near-legendary textbook by Schlichting and revised by Gersten presents a comprehensive overview of boundary-layer theory and its application to all areas of fluid mechanics, with particular emphasis on the flow past bodies (e.g. aircraft aerodynamics). The new edition features an updated reference list and over 100 additional changes throughout the book, reflecting the latest advances on the subject.
Article
Simulation-based design optimization methods integrate computer simulations, design modification tools, and optimization algorithms. In hydrodynamic applications, objective functions are often computationally expensive and noisy, their derivatives are not directly provided, and the existence of local minima cannot be excluded a priori, which motivates the use of deterministic derivative-free global optimization algorithms. The enhancement of two algorithms of this type, DIRECT (DIviding RECTangles) and DPSO (Deterministic Particle Swarm Optimization), is presented based on global/local hybridization with derivative-free line search methods. The hull-form optimization of the DTMB 5415 model is solved for the reduction of the calm-water resistance at Fr = 0.25, using potential flow and RANS solvers. Six and eleven design variables are used, respectively, modifying both the hull and the sonar dome. Hybrid algorithms show faster convergence toward the global minimum than the original global methods and are a viable option for ship hydrodynamic optimization. A significant resistance reduction is achieved by both potential-flow- and RANS-based optimizations, showing the effectiveness of the optimization procedure.
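For illustration, a minimal deterministic particle swarm sketch is given below; it is our construction (standard PSO with the random coefficients replaced by fixed values and a deterministic grid initialization), and the test function and parameters are placeholders, not the paper's setup.

```python
import numpy as np

def objective(x):                       # stand-in for a noisy CFD cost
    return np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10, axis=1)

chi, c1, c2 = 0.72, 1.49, 1.49          # constriction and accelerations
# Deterministic initialization on a regular grid in [-5, 5]^2
g = np.linspace(-5, 5, 4)
X = np.array(np.meshgrid(g, g)).reshape(2, -1).T   # 16 particles
V = np.zeros_like(X)
P, Pval = X.copy(), objective(X)        # personal bests
gbest = P[np.argmin(Pval)]

for _ in range(100):
    V = chi * (V + c1*(P - X) + c2*(gbest - X))    # no random factors
    X = X + V
    val = objective(X)
    improved = val < Pval
    P[improved], Pval[improved] = X[improved], val[improved]
    gbest = P[np.argmin(Pval)]

print("best point:", gbest, "value:", Pval.min())
```

The hybrid algorithms in the paper would additionally trigger a derivative-free line search from promising iterates to refine the global search locally.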
Article
To alleviate computational challenges in uncertainty quantification and multidisciplinary design optimization, Kriging has gained popularity due to its accuracy and flexibility in interpolating nonlinear system responses from collected data. One benefit of using Kriging is the availability of the expected mean square error along with the response prediction at any location of interest. However, the stationary covariance structure of the typical Kriging methodology, when used with data collected adaptively from an optimal data-acquisition strategy, results in lower-quality predictions across the entire sample space. In this paper, a Locally Optimized Covariance Kriging (LOC-Kriging) method is proposed to address the difficulties of building a Kriging model with unevenly distributed adaptive samples. In the proposed method, the global nonstationary covariance is approximated by constructing and aggregating multiple local stationary covariance structures. An optimization problem is formulated to find the minimum number of LOC windows, and a membership weighting function is used to combine the LOC-Krigings across the entire domain. The paper demonstrates that LOC-Kriging improves efficiency and provides more reliable predictions and estimated error bounds than stationary-covariance Kriging, especially with adaptively collected data.