Structural and Multidisciplinary Optimization

Published by Springer Nature
Online ISSN: 1615-1488
Print ISSN: 1615-147X
Recent publications
  • Arnoud Delissen
  • Fred van Keulen
  • Matthijs Langelaar
The design of high-performance mechatronic systems is very challenging, as it requires delicate balancing of system dynamics, the controller, and their closed-loop interaction. Topology optimization provides an automated way to obtain systems with superior performance, although its extension to simultaneous optimization of both topology and controller has been limited. To allow for topology optimization of mechatronic systems for closed-loop performance, stability, and disturbance rejection (i.e., modulus margin), we introduce local approximations of the Nyquist curve using circles. These circular approximations enable simple geometrical constraints on the shape of the Nyquist curve, which is used to characterize the closed-loop performance. Additionally, a computationally efficient robust formulation is proposed for topology optimization of dynamic systems. Based on approximation of eigenmodes for perturbed designs, their dynamics can be described with sufficient accuracy for optimization, while avoiding the usual threefold increase in computational effort. The designs optimized using the integrated approach have significantly better performance (up to 350% in terms of bandwidth) than sequentially optimized systems, where eigenfrequencies are first maximized and then the controller is tuned. The proposed approach enables new directions of integrated (topology) optimization, with effective control over the Nyquist curve and efficient implementation of the robust formulation.
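The modulus margin referenced above is a standard robustness measure: the minimum distance from the open-loop Nyquist curve to the critical point −1. A minimal numerical sketch (not the authors' implementation; the open-loop transfer function below is a hypothetical example):

```python
import numpy as np

def modulus_margin(L, omega):
    """Minimum distance from the open-loop Nyquist curve L(jw)
    to the critical point -1 (larger means more robust)."""
    nyquist = np.array([L(1j * w) for w in omega])
    return float(np.min(np.abs(nyquist + 1.0)))

# Hypothetical open loop L(s) = 1 / (s (s + 1)), sampled on a log grid.
omega = np.logspace(-2, 2, 5000)
mm = modulus_margin(lambda s: 1.0 / (s * (s + 1.0)), omega)
```

Constraining `mm` from below during optimization is one way to bound the Nyquist curve away from −1, which is the role the circular approximations play in the paper.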
  • Zhaohui Xia
  • Haobo Zhang
  • Ziao Zhuang
  • [...]
  • Liang Gao
Isogeometric analysis (IGA) has been widely applied in topology optimization in recent years, and various methods have been derived. However, most methods are accompanied by significant computational costs, which make it difficult to deal with complex models and large-scale design problems. In this paper, an isogeometric topology optimization method based on deep neural networks is proposed. The computational time of optimization can be effectively reduced while ensuring high accuracy. With the IGA-FEA two-resolution SIMP method, the machine-learning dataset can be obtained during early iterations. Unlike existing data-driven methods, online dataset generation both significantly reduces data collection time and enhances relevance to the design problem. As the iterations proceed, the machine learning model can be updated online by continuously collecting new data to ensure that the optimized topology structures approach the standard results. Through a series of 2D and 3D design examples, the generality and reliability of the proposed model have been verified, and its time-saving advantage becomes more pronounced as the design scale increases. Furthermore, the impacts of neural network parameters on the results are studied through several controlled experiments.
  • Wang Zhao
  • Lei Wang
In this paper, an interval reliability-based topology optimization (IRBTO) method for piezoelectric structures is proposed based on a single-loop strategy. Firstly, the effect of negative velocity feedback control is treated as equivalent damping. Then the topology optimization formulation of piezoelectric structures constrained by transient dynamic responses is briefly described. The interval model is employed to describe uncertainty. Considering the complex mapping relationship between the displacement response function and the uncertain parameter input, the adaptive subinterval dimension-wise method is used to calculate the feasible bounds of the displacement response function. Considering the effects of interval uncertainty, the IRBTO formulation for piezoelectric structures is established. The single-loop strategy is employed to decouple the IRBTO into multi-level deterministic topology optimization and uncertainty analysis. The shift value of the displacement response function in the deterministic topology optimization of the piezoelectric structure in each cycle is calculated using the modified performance measure approach according to the reliability analysis results. The adjoint vector method is used to obtain the sensitivity of the displacement response function with respect to the design variables. Three numerical examples are given to illustrate the effectiveness and applicability of the proposed method, and the results show that different uncertainties lead to different optimized layouts.
  • Akatsuki Nishioka
  • Yoshihiro Kanno
We consider a worst-case robust topology optimization problem under load uncertainty, which can be formulated as a minimization problem of the maximum eigenvalue of a symmetric matrix. The objective function is nondifferentiable wherever the maximum eigenvalue has multiplicity greater than one. Nondifferentiability often causes numerical instabilities in an optimization algorithm, such as oscillation of the generated sequence and convergence to a non-optimal point. We use a smoothing method to tackle these issues. The proposed method is guaranteed to converge to a point satisfying the first-order optimality condition. In addition, it is a simple first-order optimization method and thus has low computational cost per iteration, even for large-scale problems. In numerical experiments, we show that the proposed method suppresses oscillation and converges faster than other existing methods.
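A common way to smooth the nondifferentiable maximum-eigenvalue function is the log-sum-exp approximation, which is differentiable even where eigenvalues coincide. The sketch below is one plausible instance of such smoothing, not necessarily the authors' exact choice:

```python
import numpy as np

def smoothed_max_eigenvalue(A, mu=1e-2):
    """Smooth approximation of lambda_max(A) for a symmetric matrix A.

    Returns mu * log(sum_i exp(lambda_i / mu)), which upper-bounds
    lambda_max and converges to it as mu -> 0; it stays differentiable
    even when the maximum eigenvalue has multiplicity > 1.
    """
    lam = np.linalg.eigvalsh(A)
    lmax = lam.max()
    # shift by the max so the exponentials cannot overflow
    return float(lmax + mu * np.log(np.sum(np.exp((lam - lmax) / mu))))

A = np.diag([1.0, 2.0, 3.0])
approx = smoothed_max_eigenvalue(A, mu=1e-3)
```

For distinct eigenvalues the approximation is essentially exact for small `mu`; for a repeated maximum eigenvalue it exceeds the true value by `mu * log(multiplicity)`.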
To learn the effect of the distribution parameter of interest, which is also the design variable of the random input vector, on the time-dependent failure probability, and to decouple time-dependent reliability-based design optimization (T-RBDO), it is necessary to estimate the time-dependent failure probability function (T-FPF), i.e., the relation describing how the time-dependent failure probability varies with the distribution parameter over the design region of interest. However, estimating the T-FPF is time-consuming and remains a challenge. Thus, this paper proposes a novel single-loop meta-model importance sampling method with an adaptive Kriging model (SL-Meta-IS-AK) to estimate the T-FPF efficiently. In SL-Meta-IS-AK, to estimate the T-FPF by a single-loop simulation, an optimal importance sampling probability density function (IS-PDF), which envelops the distribution parameter region of interest and is free of the distribution parameter, is constructed by an integral operation. After the Kriging model of the time-dependent performance function is adaptively constructed to approximate the optimal IS-PDF for the T-FPF by a quasi-optimal one, a simple sampling strategy is designed to extract samples from the quasi-optimal IS-PDF, and a time-dependent misclassification probability function is derived to update the Kriging model adaptively until it accurately recognizes the states of all extracted samples, from which the T-FPF over the whole distribution parameter region of interest can be estimated as a byproduct. Because the single-loop simulation is aided by an IS-PDF that covers the distribution parameter region of interest but is free of the distribution parameter, the efficiency of estimating the T-FPF is improved by the proposed SL-Meta-IS-AK, which is verified by numerical and aviation engineering examples including a wing structure and a turbine shaft structure.
In structural reliability analysis, the HL-RF method may not converge in some nonlinear cases. The chaos control based first-order second-moment method (CC) achieves convergence by controlling the step length with chaotic control factors, but it commonly requires very time-consuming computation. In this paper, an Armijo-based hybrid step length release method based on chaos control is proposed to surmount this issue. A rotation control angle is introduced for the proposed method to select an adaptive step length adjustment strategy. A step length release method is proposed to speed up convergence when the iterative rotation angle is less than the rotation control angle. When the iterative rotation angle exceeds the rotation control angle, an adaptive step length adjustment method is defined based on the Armijo rule to provide an optimal choice of adaptive step length for the iterative process and guarantee convergence. The robustness and efficiency of the proposed method are then demonstrated through several examples, which show that the method is capable of generating a suitable adaptive step length and therefore reaches a more stable and accurate solution with greater efficiency in both high- and low-nonlinearity cases. It combines the advantages of the HL-RF and CC methods, further improving efficiency without sacrificing robustness. Finally, a discussion is presented to investigate the selection of optimal parameters and how the two step length selection strategies cooperate with one another. It shows that the efficiency improvement of the proposed method is mainly attributed to the step length release method, while the Armijo-based adaptive step length adjustment guarantees convergence.
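The Armijo rule mentioned above is a standard sufficient-decrease test for choosing step lengths. A generic backtracking sketch (function and parameter names are illustrative, not the paper's notation):

```python
import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, c=1e-4, tau=0.5, max_iter=50):
    """Backtrack until the Armijo sufficient-decrease condition
    f(x + a*d) <= f(x) + c * a * grad_f(x) @ d holds."""
    fx = f(x)
    slope = float(np.dot(grad_f(x), d))  # negative for a descent direction
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            break
        alpha *= tau  # shrink the step and try again
    return alpha

# Toy quadratic f(x) = ||x||^2 with the steepest-descent direction.
x = np.array([1.0, 1.0])
d = -2.0 * x  # -grad f(x)
alpha = armijo_step(lambda v: float(v @ v), lambda v: 2.0 * v, x, d)
```

In the hybrid method described above, a rule of this kind would only be invoked when the iterative rotation angle exceeds the control angle; otherwise the cheaper step length release applies.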
In this paper, the acoustic inverse problem modeled in the time domain featuring wave velocity reconstruction in the presence of sharp interfaces is addressed using an integer design variable approach. The medium being reconstructed is assumed piecewise constant, with single-material obstacles embedded in a homogeneous background. The wave equation is modeled using the Finite Element Method (FEM). The inversion procedure aims at finding the parameter field that minimizes a least-squares misfit function with respect to data generated from a synthetic model. The proposed optimization methodology is based on a sequential Integer Linear Programming (ILP) formulation used in the field of Topology Optimization (TO). Since this is a gradient-based technique, the sensitivity with respect to the integer design variable is evaluated by the adjoint method. Sensitivities are modified using both damping filters and Helmholtz-type Partial Differential Equation (PDE) filters to deal with the ill-posedness that is inherent to this class of inverse problems. The integer design variable is binary, associating each point of the domain either with the homogeneous background or with the embedded obstacles. This description naturally incorporates the sharp interface hypothesis, whereas a continuous design variable may generate transition regions with intermediate values and no clearly defined boundary. The damping filter is successful in controlling instabilities by incorporating the whole optimization history into the design update. Furthermore, the generality and effectiveness of the proposed framework are evaluated by addressing 2D problems from the literature and a proposed 3D case, all featuring sharp interfaces.
Reliability updating can be interpreted as the process of reevaluating structural reliability with data stemming from structural health monitoring sensors or platforms. By virtue of the power of Bayesian statistics, reliability updating incorporates up-to-date information within the framework of uncertainty quantification, which facilitates more reasonable and strategic decision-making. However, the associated computational cost for quantifying uncertainty can also be increasingly challenging due to the iterative simulation of sophisticated models (e.g., finite element models). To expedite reliability updating with complex models, reliability updating with surrogate models has been proposed to overcome these limitations. However, past work merely integrates reliability updating with Kriging-based crude Monte Carlo simulation and thus still suffers from many computational limitations. For example, parameters such as the coefficient of variation of the posterior failure probability, the batch size of samples, and the active learning stopping criterion are not well defined or devised, which can lead to computational pitfalls. Therefore, this paper proposes RUAK-IS (Reliability Updating with Adaptive Kriging using Importance Sampling) to address these limitations. Specifically, importance sampling is incorporated with Kriging to enable updating of small failure probabilities with robust estimates and error quantification. Two numerical examples and one practical finite element example are investigated to explore the computational efficiency and accuracy of the proposed method. Results demonstrate the computational superiority of RUAK-IS in terms of robustness and accuracy.
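The core of importance sampling for small failure probabilities, as exploited in RUAK-IS, can be sketched with a generic textbook estimator (not the authors' code; the Gaussian toy problem and all names are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_logpdf(x, mu):
    # log-density of N(mu, 1)
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2.0 * np.pi)

def failure_prob_is(g, logp, logq, sampler, n=200_000):
    """Importance-sampling estimate of P[g(X) <= 0]:
    draw from the IS density q and reweight failures by p/q."""
    x = sampler(n)
    w = np.exp(logp(x) - logq(x))
    return float(np.mean((g(x) <= 0.0) * w))

# Toy case: X ~ N(0, 1), failure when X >= 3 (true P ~ 1.35e-3);
# the IS density N(3, 1) is centered on the failure region, so far
# fewer samples are wasted than with crude Monte Carlo.
pf = failure_prob_is(
    g=lambda x: 3.0 - x,
    logp=lambda x: norm_logpdf(x, 0.0),
    logq=lambda x: norm_logpdf(x, 3.0),
    sampler=lambda n: rng.normal(3.0, 1.0, n),
)
```

In the adaptive-Kriging setting, `g` would be the surrogate of the limit-state function, refined until the sign of `g` is trusted at the importance samples.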
In this article, a density-driven unified multi-material topology optimization framework is suggested for functionally graded (FG) structures under static and dynamic responses. For this, two-dimensional solid structures and plate-like structures with and without variable thickness are investigated as design domains using multiple in-plane bi-directional FG materials (IBFGMs). In the present approach, a generally refined interpolation scheme relying upon Solid Isotropic Material with Penalization is proposed to deal with the equivalent properties of IBFGMs. This methodology's topological design variables are totally independent of all material phases; therefore, the present method can yield separate material phases at their contiguous boundaries without intermediate-density materials. The mixed interpolation of tensorial components assumption for the 4-node shell element is employed to analyze plate elements, aiming to tackle the shear-locking phenomenon encountered as the optimal plate thickness becomes thinner. The mesh-independence filter is utilized to suppress checkerboard formation in the material distribution. The Method of Moving Asymptotes is used as the optimizer to update design variables in the optimization process. Several numerical examples are presented to evaluate the efficiency and reliability of the current approach.
This article presents an educational code written in FreeFEM, based on the concept of the topological derivative together with a level-set domain representation method and adaptive mesh refinement, to perform compliance minimization in structural optimization. The code is implemented in the framework of linearized elasticity, for both plane strain and plane stress assumptions. As a first-order topology optimization algorithm, the topological derivative is used within the numerical procedure as a steepest descent direction, similar to methods based on the gradient of cost functionals. In addition, adaptive mesh refinement is used as part of the optimization scheme to enhance the resolution of the final topology. Since the paper is intended for educational purposes, we start by explaining how to compute topological derivatives, followed by a step-by-step description of the code, which ties the theoretical aspects of the algorithm to its implementation. Numerical results associated with three classic examples in topology optimization are presented and discussed, showing the effectiveness and robustness of the proposed approach.
The aim of this work is to present a continuous mathematical model that characterizes and enforces connectivity in a topology optimization problem. That goal is accomplished by constraining the second eigenvalue of an auxiliary eigenproblem, solved together with the governing state law in each step of the iterative process. Our density-based approach is illustrated with 2D and 3D numerical examples in the context of structural design.
Multidisciplinary design optimization has great potential to support the turbomachinery development process by improving designs at reduced time and cost. As part of the industrial compressor design process, we seek a rotor blade geometry that minimizes stresses without impairing the aerodynamic performance. However, the presence of structural mechanics, aerodynamics, and their interdisciplinary coupling poses challenges concerning computational effort and organizational integration. In order to reduce both computation times and the required exchange between disciplinary design teams, we propose an inter- instead of multidisciplinary design optimization approach tailored to the studied optimization problem. This involves a distinction between a main and a side discipline. The main discipline, structural mechanics, is computed by accurate high-fidelity finite element models. The side discipline, aerodynamics, is represented by efficient low-fidelity models, using Kriging and proper orthogonal decomposition to approximate constraints and the gas load field as the coupling variable. The proposed approach is shown to yield a valid blade design with reasonable computational effort for training the aerodynamic low-fidelity models and significantly reduced optimization times compared to a high-fidelity multidisciplinary design optimization. Especially for expensive side disciplines like aerodynamics, the multi-fidelity interdisciplinary design optimization has the potential to consider the effects of all involved disciplines at little additional cost and organizational complexity, while keeping the focus on the main discipline.
Time-dependent reliability analysis aims at estimating the probability of failure, occurring within a specified time period, of a structure subjected to stochastic and dynamic loads or stochastic degradation of performance. Developing efficient numerical algorithms with accuracy assurance for this problem remains a bottleneck, although it has been investigated with, e.g., Gaussian Process Regression (GPR)-based active learning procedures. Inspired by the concept of the up-crossing rate used in first-passage methods, a new acquisition function (also called a learning function) is developed that takes into account the temporal correlation information across each sample trajectory. It measures the (subjective) probability of misjudging the occurrence of the up-crossing event within each time sub-interval. With this new acquisition function, the classical active learning procedure is improved. Considering the necessity of estimating small failure probabilities, the proposed active learning method is then combined with subset simulation for multi-stage learning. With this method, a series of intermediate surrogate failure surfaces is actively updated with the target of approaching the true failure surface within a pre-specified error tolerance. The effectiveness of the proposed methods is demonstrated with numerical and engineering examples.
Model calibration is a process aimed at adjusting unknown parameters to minimize the error between the simulation model output and experimental observations. In computer-aided engineering, uncertainties in physical properties and modeling discrepancies can generate errors. Among various model calibration approaches, Kennedy and O’Hagan (KOH)’s Bayesian model calibration is noted for its ability to consider a variety of sources of uncertainty. However, one of the difficulties in KOH’s Bayesian model calibration is the complexity of determining the prior distributions of hyperparameters, which is often challenging in real-world problems due to insufficient information. Most previous studies have relied on users’ intuition to mitigate this issue. Thus, this study proposes a statistical prior modeling method for the correlation hyperparameter of a model discrepancy, which affects the calibration performance. In this work, a radius-uniform distribution is introduced as a prior distribution of the correlation hyperparameter based on the properties of the Gaussian process. Three case studies are provided, one numerical and two engineering cases, to confirm that the proposed method results in lower error than any other previously proposed distribution without additional computational cost. Further, the proposed method does not require user-dependent knowledge, which is a significant advantage over previous methods.
Mesh antennas in orbit are periodically affected by solar radiation, earth reflection, and the low-temperature space environment, and their temperature fluctuates over a wide range. Mesh antennas produce large thermal deformations, or even obvious thermal disturbances, under extreme temperature conditions, which seriously deteriorates the surface accuracy and the tension distribution. To improve the shape stability of the reflector surface and the rationality of the tension distribution, a thermal design optimization method for mesh antennas is proposed that considers the interaction between the cable net and the flexible truss. The equilibrium equation of the mesh antenna system under space thermal loads is established based on finite element theory and the force density equation. Because directly analyzing the influence of thermal loads on the entire mesh antenna is complex, a strategy of applying thermal loads step by step, from the flexible truss to the cable net, is adopted, and the force density increment equation of the cable net under space thermal loads is derived. Then, the force density vector of the cable net is selected as the design variable, the sum of squares of the thermal deformations of the reflector nodes is taken as the objective function, and the stability optimization model of the reflector over the whole temperature interval is established. Finally, a typical AstroMesh antenna under uniform temperature working conditions is used to illustrate the effectiveness and feasibility of the proposed method. Compared with the traditional optimization method, which can only ensure good performance at a certain temperature point, the proposed method achieves better surface accuracy and thermal stability over the whole temperature interval.
Buckling is a critical phenomenon in structural members under compression, which could cause catastrophic failure of a structure. To increase the buckling resistance in structural design, a novel topology optimization approach based on the bi-directional evolutionary structural optimization (BESO) method is proposed in this study with the consideration of buckling constraints. The BESO method benefits from using only two discrete statuses (solid and void) for design variables, thereby alleviating numerical issues associated with pseudo buckling modes. The Kreisselmeier-Steinhauser aggregation function is introduced to aggregate multiple buckling constraints into a differentiable one. An augmented Lagrangian multiplier is developed to integrate buckling constraints into the objective function to ensure computational stability. Besides, a modified design variable update scheme is proposed to control the evolutionary rate after the target volume fraction is reached. Four topology optimization design examples are investigated to demonstrate the effectiveness of the buckling-constrained BESO method. The numerical results show that the developed optimization algorithm with buckling constraints can significantly improve structural stability with a slight increase in compliance.
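The Kreisselmeier-Steinhauser function used above aggregates many constraints g_i ≤ 0 into a single smooth one. A minimal sketch of the standard shifted, overflow-safe form (the aggregation parameter value and sample constraint values are illustrative assumptions):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregation of constraint values g_i.

    Returns (1/rho) * log(sum_i exp(rho * g_i)), computed in shifted
    form to avoid overflow. The result is a smooth, differentiable
    upper bound on max(g) that tightens as rho increases.
    """
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return float(gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho)

# Three hypothetical buckling constraint values (positive = violated).
g = [0.1, -0.3, 0.05]
ks = ks_aggregate(g, rho=100.0)
```

Enforcing `ks <= 0` then conservatively enforces all individual constraints with a single sensitivity evaluation, which is why aggregation is attractive when many buckling modes are constrained.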
Developing appropriate analytic-function-based constitutive models for new materials with nonlinear mechanical behavior is demanding. For such materials, it is even more challenging to realize integrated design directly from collected material experiment data under the classical topology optimization framework based on constitutive models. The present work proposes a mechanistic-based data-driven topology optimization (DDTO) framework for three-dimensional continuum structures under finite deformation. In the DDTO framework, with the help of neural networks and an explicit topology optimization method, the optimal design of three-dimensional continuum structures under finite deformation is implemented using only uniaxial and equi-biaxial experimental data. Numerical examples illustrate the effectiveness of the data-driven topology optimization approach, which paves the way for the optimal design of continuum structures composed of novel materials without available constitutive relations.
Form-finding design is a significant process for cable mesh reflectors to realize the required surface accuracy and electromagnetic performance. Classified by objective function, two kinds of optimization methods are available for form-finding design: simple structural design optimization, which employs surface accuracy as the objective function, and integrated structural electromagnetic optimization, which directly utilizes the electromagnetic performance as the objective function. Although the electromagnetic performance is reflected in integrated structural electromagnetic optimization, this necessitates complex computations and iterations. To solve these problems and inherit the advantages of multidisciplinary optimization, a weighting form-finding design optimization method is presented that uses electromagnetic properties as weighting coefficients to evaluate the surface accuracy and takes the weighted surface accuracy as the objective function. The proposed method not only considers the electromagnetic properties but also avoids complex computations and iterations. Compared with integrated structural electromagnetic optimization, the method improves iteration efficiency with satisfactory surface accuracy and electromagnetic performance. An offset cable mesh reflector and an umbrella cable mesh reflector are adopted to show the effectiveness and benefits of the proposed method.
The dimensioning of overhang slabs in bridge decks is usually based on simplified, thus conservative methods. The resulting over-dimensioned overhang bridge slabs can also affect the design of the girders. In this paper, an optimization procedure for the design of this structural element is presented. The aim is to minimize investment cost and global warming potential in the material production stage simultaneously while fulfilling all safety requirements. The design variables used in this study are the thicknesses of the overhang slab and the height of the edge beam. However, a complete detailed design of reinforcement is performed as well. Both a single-objective and a multi-objective formulation of the nonlinear problem are presented and handled with two well-known optimization algorithms: pattern search and genetic algorithm. The procedure is applied to a case study, which is a bridge in Sweden designed in 2013. One single solution minimizing both objective functions is found and leads to savings in investment cost and CO2-equivalent emissions of 4.2% and 9.3%, respectively. The optimization procedure is then applied to slab free lengths between 1 and 3 m. The outcome is a graph showing the optimal slab thicknesses for each slab length to be used by designers in the early design stage.
The vehicular structural system design is critical to protect passengers from fatal injuries in inevitable accidents. Traditional optimization methods take only metal sheet thickness as design variables (i.e., are thickness-based) because of the CAD re-modeling and re-meshing difficulties of changing the geometric shapes of assembled and interacting (welded, bolted, or riveted) parts inside the vehicle during an automatic optimization process. This, however, may limit the size of the design space and restrict the safety performance of the optimal design. In this study, a radial-basis-function mesh morphing method is developed to change geometric shapes by moving node locations. Bayesian optimization is implemented to form a framework for handling the resulting high-dimensional and nonlinear problem. A baseline model is validated and used as the initial design. Under the full-frontal crash scenario, four components, selected based on prior knowledge generated by a data mining method, are parameterized by 32 variables, including node locations and metal sheet thickness. The node locations are constrained to avoid component interference. Weighting the vehicle peak acceleration and the maximum intrusion of the passenger compartment forms a single objective, and varying the weights generates the Pareto front. The results show that, compared with the original design, the peak acceleration and maximum intrusion are reduced by up to 46.7% and 56.2%, respectively. Structural bending modes and energy-absorbing behaviors vary with different weights. Additional studies show that the node-based morphing method with a Bayesian optimization algorithm can achieve a better global optimum than the traditional thickness-based method, owing to a larger design space.
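Radial-basis-function mesh morphing of the kind described above interpolates prescribed control-point displacements to all mesh nodes. A minimal 2D sketch (the cubic kernel and all names are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def rbf_weights(centers, values, phi=lambda r: r ** 3):
    """Solve for RBF weights so the interpolant reproduces the
    prescribed displacements `values` at the control points."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(phi(r), values)

def rbf_morph(nodes, centers, weights, phi=lambda r: r ** 3):
    """Displace mesh nodes by evaluating the RBF interpolant at each node."""
    r = np.linalg.norm(nodes[:, None, :] - centers[None, :, :], axis=-1)
    return nodes + phi(r) @ weights

# Toy 2D example: three control points with prescribed displacements.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
w = rbf_weights(centers, disp)
moved = rbf_morph(centers.copy(), centers, w)  # exactly reproduces disp
```

In an optimization loop, the control-point displacements become design variables, and the same weights smoothly deform every node of the crash mesh without re-meshing.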
[Figure: Optimized result of a crawler. The grayscale of the design domain represents the value of γp, and the red (blue) area represents the positive (negative) stress induced by the actuation.]
[Figure: Optimized result of a walker. The grayscale of the design domain represents the value of γp, and the red area represents the positive stress induced by the actuation.]
Topology optimization methods have been widely used in various industries, owing to their potential for providing promising design candidates for mechanical devices. However, their applications are usually limited to objects that do not move significantly, because conventionally used simulation techniques cannot efficiently handle contact and interactions among multiple structures or with boundaries. In the present study, we propose a topology optimization method for moving objects that incorporates the material point method, which is often used to simulate the motion of objects in the field of computer graphics. Several numerical experiments demonstrate the effectiveness and utility of the proposed method.
This brief note aims to introduce the recent paradigm of distributional robustness in the field of shape and topology optimization. Acknowledging that the probability law of uncertain physical data is rarely known beyond a rough approximation constructed from observed samples, we optimize the worst-case value of the expected cost of a design when the probability law of the uncertainty is “close” to the estimated one, up to a prescribed threshold. The “proximity” between probability laws is quantified by the Wasserstein distance, a notion from optimal transport theory. Combining the classical entropic regularization technique from that field with recent results from convex duality theory makes it possible to reformulate the distributionally robust optimization problem in a way that is tractable for computations. Two numerical examples are presented, in the different settings of density-based topology optimization and geometric shape optimization. They exemplify the relevance and applicability of the proposed formulation regardless of the selected optimal design framework.
To reduce computational cost, multi-fidelity (MF) metamodel methods have been widely used in engineering optimization. Most of these methods are based on standard Gaussian random process theory; thus, the time required for hyperparameter estimation increases significantly with the dimension and nonlinearity of the problem, especially for high-dimensional problems. To address these issues, by exploiting the great potential of deep neural networks in high-dimensional information extraction and approximation, a meta-learning-based multi-fidelity Bayesian neural network (ML-MFBNN) method is developed in this study. On this basis, to further reduce computational cost, an adaptive multi-fidelity sampling strategy is proposed in combination with Bayesian deep learning to sequentially select highly cost-effective samples. The effectiveness and advantages of the proposed ML-MFBNN method and adaptive multi-fidelity sampling strategy are verified through eight mathematical examples and by application to the model validation of computational fluid dynamics and robust shape optimization of the ONERA M6 wing.
The vehicle structure is a highly complex system as it is subject to different requirements of many engineering disciplines. Multidisciplinary optimization (MDO) is a simulation-based approach for capturing this complexity and achieving the best possible compromise by integrating all relevant CAE-based disciplines. However, to enable operative application of MDO even when crash is considered, various adjustments must be carried out to reduce the high numerical resource requirements and to integrate all disciplines in a targeted way. They can be grouped as follows: the use of efficient optimization strategies, the identification of relevant load cases and sensitive variables, and the reduction of the CAE calculation time of costly crash load cases by so-called finite element (FE) submodels. By assembling these components in a clever way, a novel, adaptively controllable MDO process based on metamodels is developed. Three special features are presented within the scope of this paper: first, a module named the global sensitivity matrix, which helps with the targeted planning and implementation of an MDO by structuring the multitude of variables and disciplines; second, a local, heuristic prediction uncertainty measure, computable on all metamodel types, that is further used in the definition of the optimization problem; and third, a module called adaptive complexity control, which progressively reduces the complexity and dimensionality of the optimization problem. The reduction in resource requirements and the increase in the quality of results are significant compared to the standard MDO procedure. This statement is confirmed by providing results for an FE full-vehicle example with six load cases (five crash load cases and one frequency analysis).
Accurate assessment of the remaining life of infrastructure assets is important for safe operation and efficient maintenance. In the case of inland navigation infrastructure, the United States Army Corps of Engineers has identified the embedded steel anchorages on miter gates as a critical component of the infrastructure network. Many of these anchorages are of an age such that they are at or beyond their useful life. The embedded nature of the anchorage precludes visual inspection, and the complicated interaction between the steel anchorage components and the embedding concrete is challenging to analyze. The traditional analysis method for the anchorages neglects the concrete embedment when determining member stresses. As a result, a conservative estimate of remaining fatigue life is obtained. A more accurate assessment of member stresses has shown that the surrounding concrete significantly reduces the steel stresses, resulting in substantially longer estimates of remaining life. To generalize results obtained from tests of a specific miter gate specimen to be more broadly applicable to other embedded anchorages, this work uses Bayesian model updating to calibrate a set of springs representative of the concrete embedment. These spring constants can be used in the analysis of other embedded anchorage configurations to obtain more accurate assessments of remaining fatigue life.
Channel cooling structures are widely used in heat-generating products and tools. A popular combination is design by the topology optimization method and manufacturing by an advanced method such as 3D printing. Considering heat sink design with its thermo-mechanical effects, a feature-based cooling channel topology optimization design method is presented. The presented method accurately describes the topological parameters of the cooling channel structure. To address the phenomenon of mixing between different phases and to avoid the parameter continuation tuning process when using the feature-based method, a phase-mixing constraint is proposed. To improve the computational efficiency, an equivalent flow field model valid for both low and high Reynolds numbers is proposed. The shape feature parameters are discussed in more detail. Furthermore, a hot stamping tool is taken as an example, for which the topology optimization design of the cooling channel structure is carried out and discussed.
This paper presents a new topology optimization framework in which design decisions are made by humans and machines in collaboration. The new Human-Informed Topology Optimization approach improves the accessibility of topology optimization tools and enables improved design identification for the so-called ‘everyday’ and ‘in-the-field’ design situations. The new framework is based on standard density-based compliance minimization; however, the design engineer is enabled to actively use their experience and expertise to locally alter the minimum feature size requirements. This is done by computing a short initial solution and prompting the design engineer to evaluate its quality. The user can identify potential areas of concern based on the initial material distribution, and in these areas the minimum feature size requirement can be altered as deemed necessary. The algorithm then rigorously resolves the compliance problem using the updated filtering map, resulting in solutions that eliminate, merge, or thicken topological members of concern. The new framework is demonstrated on 2D benchmark examples, and the extension to 3D is shown. Its ability to achieve performance improvements at low computational cost is demonstrated on buckling and stress concentration examples.
This paper presents a CAD-aware plug-and-play framework for topology optimization that results in CAD-compatible optimized geometries. The framework uses two separate kernels: one for defining and updating the geometry, and the other for an unfitted finite element analysis (FEA). The level-set method is used for handling the geometry, while a moment-vector-based simulation is used for the FEA. Moments can be used to generate quadrature rules for arbitrary geometries, which in turn can be used to accurately compute finite element entities such as stiffness or mass matrices. We introduce the notion of moment-averaged stress, which can accurately capture the maximum stress without post-processing or stress reconstruction. We also present the adjoint sensitivity analysis that enables the moment-based simulation to be coupled with the level-set method. Using numerical examples in 2D and 3D, we show the efficiency of our method in producing lightweight designs optimized for minimum compliance and minimum stress. More importantly, we show that the framework allows the optimized geometry to be seamlessly exported in CAD-compatible formats without the need for any cumbersome post-processing.
Statistical analysis is frequently used to determine how manufacturing tolerances or operating condition uncertainties affect system performance. Surrogate modeling is one way to accelerate uncertainty analysis in engineering tolerance quantification with an acceptable computational burden, compared with costly traditional methods such as Monte Carlo simulation. Compared with more complicated surrogates such as the Gaussian process or the radial basis function (RBF), polynomial regression (PR) provides simpler formulations yet acceptable outcomes. However, PR with the common least-squares method is often insufficiently accurate and flexible for approximating nonlinear and nonconvex models. In this study, a new approach is proposed to enhance the accuracy and approximation power of PR in uncertainty quantification for engineering tolerances. For this purpose, first, by computing the differences between training sample points and a reference point (e.g., the nominal design), we employ certain linear and exponential basis functions to transform the original design variables into new transformed variables. A second adjustment is made to account for the bias between the true simulation model and the surrogate's approximated response. To demonstrate the effectiveness of the proposed PR approach, we provide comparison results between conventional and proposed surrogates on four practical problems with geometric fabrication tolerances: three-bar truss design, welded beam design, and trajectory planning of two-link and three-link (two- and three-degree-of-freedom) robot manipulators. The obtained results demonstrate the advantage of the proposed approach over conventional PR by improving the approximation accuracy of the model with significantly lower prediction errors.
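The basis-transformation idea can be illustrated with a small least-squares fit on features built from differences to a reference point. The specific basis functions and the toy "simulation" below are assumptions for demonstration only, not those of the paper:

```python
import math, random

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def fit(X, y, basis):
    # Least squares via the normal equations (fine for tiny, well-posed bases).
    Phi = [[f(x) for f in basis] for x in X]
    n = len(basis)
    A = [[sum(row[a] * row[b] for row in Phi) for b in range(n)] for a in range(n)]
    rhs = [sum(Phi[i][a] * y[i] for i in range(len(X))) for a in range(n)]
    return solve(A, rhs)

x_ref = 0.0  # hypothetical reference (nominal) design
# Constant, linear, and exponential basis functions of the difference x - x_ref.
basis = [lambda x: 1.0, lambda x: x - x_ref, lambda x: math.exp(x - x_ref) - 1.0]
random.seed(1)
X = [random.uniform(-1, 1) for _ in range(40)]
y = [2.0 + 3.0 * (math.exp(x) - 1.0) for x in X]  # nonlinear toy "simulation"
coef = fit(X, y, basis)
err = max(abs(sum(c * f(x) for c, f in zip(coef, basis)) - yi)
          for x, yi in zip(X, y))
```

Because the exponential basis matches the nonlinearity of the toy model, the transformed regression reproduces it almost exactly, whereas a plain linear polynomial could not.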
The equivalent static loads method (ESLM) is a structural optimization method that can consider nonlinear and dynamic responses. ESLM can cover linear dynamic, nonlinear static, and nonlinear dynamic problems. Many studies have been carried out to use ESLM in various structural optimization disciplines such as size, shape, and topology optimization. A limitation of the existing ESLM is that only one finite element model can be considered. The multi-model optimization (MMO) technique is an optimization method that handles multiple finite element models simultaneously. A study is conducted to expand the current ESLM to multi-model optimization. Each model can be involved in several types of analysis that generate linear/nonlinear and static/dynamic responses. However, the optimization process uses only the linear static response models generated by ESLM, and multiple linear static response models are utilized simultaneously for multi-model optimization. The proposed method is applied to size and topology optimization examples, and the performance of the method is discussed.
Shape memory alloys (SMA) are an ideal class of metallic materials for reusable energy dissipation structures because of their pseudo-elasticity. This paper presents a density-based topology optimization framework for the design of SMA structures utilizing pseudo-elastic behaviors to dissipate large amounts of energy under prescribed design constraints. A phenomenological constitutive model is adopted to accurately simulate the mechanical behaviors of SMA, and the corresponding material interpolation scheme is developed via the SIMP method. Numerical instability caused by excessive distortion of low-density elements is alleviated by the super-element method. The degree of phase transformation, which is related to the energy dissipation, is characterized by strain energy and end compliance. Sensitivities are derived via the adjoint method. A number of optimized simply supported beam structures and 2D lattice structures with different energy dissipation performance and stiffness capacity are tailored. In addition, the load dependency and initial design dependency of the optimization of SMA energy dissipation structures are discussed.
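The SIMP interpolation mentioned above has a standard textbook form for a single material property; the sketch below uses generic illustration values (E0, Emin, and the penalization power p are not the paper's SMA-specific settings):

```python
def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    # Standard SIMP interpolation: E(rho) = Emin + rho**p * (E0 - Emin).
    # Emin keeps the stiffness matrix non-singular for void elements;
    # the power p penalizes intermediate densities.
    return Emin + rho ** p * (E0 - Emin)

full = simp_modulus(1.0)   # full material -> E0
half = simp_modulus(0.5)   # ~0.125 * E0: half density gives far less
                           # than half stiffness, pushing 0/1 designs
```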
Cellular structures have gained popularity in modern industrial fields for their light weight and high specific strength. Optimization design methods for cellular structures have developed alongside advances in additive manufacturing. Scale-separated multiscale methods based on the homogenization approach have been widely used. Most scale-separated optimization designs focus on the configuration of the cellular structure, especially the non-stochastic cellular structure; however, such designs suffer from structural anisotropy and poor connectivity between adjacent microstructures. In this work, a scale-separated variable cutting (VCUT) level set method for designing graded stochastic Voronoi cellular structures is proposed. The method contains three parts: the analysis of the stochastic Voronoi microstructure, the optimization of the macrostructure, and the reconstruction of the full-scale graded Voronoi cellular structure. The proposed method can guarantee good connectivity between adjacent microstructures and can be applied to the optimization of design domains with arbitrary shapes and boundaries. Numerical examples are given to demonstrate the effectiveness and advantages of the developed method for designing stochastic multiscale structures.
In engineering design optimization, there are often multiple conflicting optimization objectives. Bayesian optimization (BO) has been successfully applied to multi-objective optimization problems to reduce computational expense. However, the high expense associated with high-fidelity simulations has not been fully addressed. Combining BO methods with a bi-fidelity surrogate model can further reduce expense by using the information of samples with different fidelities. In this paper, a bi-fidelity BO method for multi-objective optimization based on the lower confidence bound (LCB) function and the hierarchical Kriging model is proposed. In the proposed method, a novel bi-fidelity acquisition function is developed to guide the optimization process, in which a cost coefficient is adopted to balance the sampling cost against the information provided by the new sample. The proposed method quantifies the effect of samples with different fidelities on improving the quality of the Pareto set and fills a gap in the literature by extending LCB-based BO with a bi-fidelity surrogate model for multi-objective optimization. Compared with four state-of-the-art BO methods, the results show that the proposed method is able to substantially reduce the expense while obtaining high-quality Pareto solutions.
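The plain single-fidelity LCB acquisition that underlies such methods scores candidates by predicted mean minus a multiple of the predictive standard deviation; the paper's bi-fidelity acquisition additionally weighs a cost coefficient, which this sketch omits. The candidate values below are hypothetical surrogate predictions:

```python
def lcb(mu, sigma, kappa=2.0):
    # Lower confidence bound for minimization: smaller is more promising.
    # mu rewards exploitation; sigma rewards exploration; kappa trades off.
    return mu - kappa * sigma

# Candidates as (predicted mean, predictive std) pairs from a surrogate.
candidates = [(1.0, 0.1), (1.2, 0.8), (0.9, 0.05)]
best = min(range(len(candidates)), key=lambda i: lcb(*candidates[i]))
# The highly uncertain candidate (index 1) wins despite its worse mean.
```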
Despite the long history of the truss layout optimization approach, its practical applications have been limited, partly due to the high manufacturing costs associated with complex optimized structures consisting of members with different cross-sectional areas and member lengths. To address this issue, this study considers optimizing truss structures comprising limited types of members. Two distinct problems are considered: in the first, members of the same type share the same cross-sectional area (the section-type problem); in the second, members of the same type share the same cross-sectional area and length (the member-type problem). A novel post-processing approach is proposed to tackle the target problems. In this approach, the optimized structures from traditional layout and geometry optimization approaches are used as starting points, whose members are then separated into groups by the k-means clustering approach. Subsequently, the clustered structures are geometrically optimized to reduce the area and length deviations (i.e., the differences between member area/length values and the corresponding cluster means). Several 2D and 3D examples are presented to demonstrate the capability of the proposed approaches. For the section-type problem, the area deviations can be reduced to near 0 for any given cluster number. The member-type problem is relatively more complex, but by providing more clusters, the maximum length deviation can be reduced below the target thresholds. Through the proposed clustering approach, the number of different members in the optimized trusses can be substantially decreased, thereby significantly reducing manufacturing costs.
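The k-means grouping step can be illustrated with a plain 1D Lloyd's algorithm applied to member cross-sectional areas; the areas and initial centers below are hypothetical, and real implementations would cluster area/length pairs as the problem requires:

```python
def kmeans_1d(values, centers, iters=50):
    # Plain Lloyd's algorithm in 1D: assign each value to its nearest
    # center, then move each center to the mean of its assigned values.
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

# Hypothetical optimized member areas clustered into two section types.
areas = [1.0, 1.1, 0.9, 4.0, 4.2, 3.8]
centers, groups = kmeans_1d(areas, centers=[0.0, 5.0])
# Centers converge to the group means (~1.0 and ~4.0); each member is
# then assigned the cluster-mean area, leaving only two section types.
```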
This paper describes a novel method developed for the optimization of composite components against distortion caused by cure-induced residual stresses. A novel ply stack alteration algorithm is described, which is coupled to a parametrized CAD/FE model used for optimization. Elastic strain energy in 1D spring elements, used to constrain the structure during analysis, serves as an objective function incorporating aspects of global/local part stiffness in predicted distortion. Design variables such as the number and stacking sequence of plies, and geometric parameters of the part are used. The optimization problem is solved using commercial software combined with Python scripts. The method is exemplified with a case study of a stiffened panel subjected to buckling loads. Results are presented, and the effectiveness of the method to reduce the effects of cure-induced distortion is discussed.
Figure: A digital twin perspective on the multi-generational design of systems. The bottom half of the figure shows how the digital twin perspective enables the integration of design and operation decisions; the top half shows how considering the entire life cycle enables the use of data over multiple generations of a system's design.
Figure: Visualization of the digital twin-inspired approach for the design of a vehicle tire. During the design phase, fleet data (i.e., data from all previous systems) can be used to make nominal design and operation decisions; during a system's operational life, data can be used to make operation decisions for that specific system as it is exposed to unique operating conditions (e.g., different operator decisions and environmental conditions).
The design and operation of systems are conventionally viewed as a sequential decision-making process that is informed by data from physical experiments and simulations. However, the integration of these high-dimensional and heterogeneous data sources requires the consideration of the impact of a decision on a system’s remaining life cycle. Consequently, this introduces a degree of complexity that in most cases can only be solved through a simplified decision-making approach. In this perspective paper, we use the digital twin concept to formulate an integrated perspective for the design of systems. Specifically, we show how the digital twin concept enables the integration of system design decisions and operational decisions during each stage of a system’s life cycle. This perspective has two advantages: (i) improved system performance as more effective decisions can be made, and (ii) improved data efficiency as it provides a framework to utilize data from multiple sources and design instances. The novelty in the presented perspective is that it necessitates an approach that enables fleet-level (i.e., decisions that influence a plurality of systems) and system-level decisions. From a formal definition, we identify a set of eight capabilities that are vital constructs to bring about the potential, as defined in this paper, that the digital twin concept holds for the design of systems. Subsequently, by comparing these capabilities with the available literature on digital twins, we identify research questions and forecast their broader impact. By conceptualizing the potential that the digital twin concept holds for the design of systems, we hope to contribute to the convergence of definitions, problem formulations, research gaps, and value propositions in this burgeoning field. 
Addressing the research questions associated with the digital twin-inspired formulation for the design of systems will bring about more advanced systems that can meet some of society's grand challenges.
There has been a significant surge in the demand for electric all-terrain vehicles (eATV) recently due to a rising number of government subsidies for electric vehicles, better availability of charging infrastructure, and the increasing need to minimize the level of greenhouse gas emissions. To address this growing demand and enhance the overall performance of new ATV architectures, automakers are looking to develop new lightweight chassis designs with the application of multi-material parts, assemblies, and systems. To achieve these goals, conventional material selection and design strategies may be employed, such as standard material performance indices, full-combinatorial substitution studies, or recently developed multi-material topology optimization (MMTO). In this paper, a prototype design for a lightweight eATV chassis using titanium, aluminium, and carbon fibre sheet moulding compound is developed using a novel design process. The design space for the MMTO design is developed from the baseline steel chassis design. The proposed design process significantly reduces the time required to design an eATV chassis, as it reduces the number of design update iterations by efficiently modelling the material interface. The process utilizes MMTO with total joint cost (TJC) constraints to refine the chassis design and reduce the material interface. This method allows for simultaneously minimizing the compliance and restricting the TJC of the structure, which is calculated based on user-defined relative costs for each material interface. This reduces the number of iterations between design update and validation, as the performance of the prototype design is more in line with a design that considers actual joint material properties than with an MMTO design obtained without the TJC constraint. A Pareto front is created to determine the trade-off between the two competing functions.
It was observed that an 80% reduction in TJC can be obtained in the eATV chassis design with only a 4.2% loss in stiffness, as compared to an MMTO design without joint considerations. Both this final design and the single-material topology optimization (SMTO) design considering titanium were reinterpreted into a practical concept and analysed. The final selected MMTO design is 32% lighter than the baseline steel chassis design, with a stiffness increase of 64%.
The structural battery composite (SBC) is a novel class of multifunctional materials with the ability to work as a lithium-ion battery that can withstand mechanical loads. The motivation of this study is to address one of the major challenges in the development of SBCs, which is a strong conflict between the structural and electrical demands on its electrolyte (i.e., high stiffness and high ionic conductivity). Furthermore, there is a design requirement that the electrochemical cycling should not result in overheating of the SBC. The novelty of this study is the development of an efficient multi-objective multiphysics density-based topology optimization framework that considers electrochemical/thermal/structural physics to identify the optimized design of a structural battery electrolyte (SBE). The optimization methodology is defined as solving a multi-objective problem of maximizing the effective ionic conductivity and minimizing the compliance of the SBE. The problem is subjected to constraints on the volume fraction and the maximum allowable temperature. The normalized-normal-constraint approach is utilized to generate a Pareto-front curve for this multi-objective problem. The proposed method is computationally efficient owing to the use of a low-fidelity resistance network approach for the electrochemical module, and it parallelizes the workload using the Portable, Extensible Toolkit for Scientific Computation (PETSc) and the message-passing interface (MPI). Several numerical examples are solved to demonstrate the applicability of the proposed methodology under different loading scenarios. The results reveal that the proposed methodology provides a better understanding of the required microstructural design of the SBE for the performance improvement of structural battery composites.
The mechanical failure of battery-pack systems (BPSs) under crush and vibration conditions is a crucial research topic in automotive engineering. Most studies evaluate the mechanical properties of BPSs under a single operating condition. In this study, a dual-objective optimization method based on non-dominated sorting genetic algorithm II (NSGA-II) is proposed to evaluate the crushing stress of BPS modules and the vibration fatigue life of the BPS. This method can obtain better combinations of the thicknesses of the BPS components, which helps engineers achieve robust and efficient designs. First, a nonlinear finite element (FE) model of a BPS is developed and experimentally verified. The crush and vibration simulations are performed, and the FE analysis data are obtained. Second, two third-order response surface models are created to characterize the relationship between the input (thicknesses of the BPS components) and the output (crushing stress of the BPS modules and vibration fatigue life of the BPS). Finally, a linear weighting model and an NSGA-II model are used to conduct dual-objective optimization. The solution of the linear weighting method and the non-dominated Pareto solution set of the thicknesses of the BPS components are obtained and compared. Furthermore, a reasonable interval in the Pareto frontier is defined and considered the best solution to the dual-objective optimization problem. Therefore, the reliability of the BPS is improved to ensure the safety of electric vehicles in crushing and vibration environments. This method offers an effective solution to the problem of evaluating the mechanical responses of BPSs under various operating conditions. It can be used to generate a robust design for safe and durable BPSs.
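The core building block of NSGA-II is non-dominated sorting. A minimal Pareto filter for two minimization objectives (e.g., crushing stress and negated fatigue life, both to be minimized) can be sketched as follows; the sample points are hypothetical:

```python
def pareto_front(points):
    # Keep the points not dominated by any other point, minimizing both
    # objectives: q dominates p if q is <= p in both coordinates and
    # differs from p (so it is strictly better in at least one).
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (objective1, objective2) evaluations of candidate designs.
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(points)  # the trade-off set among the candidates
```

NSGA-II repeats this ranking over successive fronts and adds crowding-distance selection to maintain diversity along the front.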
In this paper, a novel reliability analysis method is proposed by combining the relevance vector machine and subset simulation (RVM-SS). It not only improves the computational efficiency of reliability analyses that require expensive finite element simulations, but also ensures the accuracy of the evaluated failure probability. In this method, a relevance vector machine (RVM) is first utilized to construct a rough initial approximation of the limit state. Subsequently, subset simulation (SS) is performed based on the constructed RVM. Simultaneously, to improve the prediction accuracy of the RVM, samples in the first and last levels of SS are used for its sequential refinement. In addition, a learning function considering the current design-of-experiment positions and a stopping condition based on the estimated reliability prediction error are applied to avoid redundant iterations in the RVM update process. The updated RVM proves to have a high prediction accuracy for the sign of the performance function at sample points, so the obtained failure probability is accurate. Furthermore, the samples are predicted by the carefully constructed RVM instead of being assessed with the time-consuming performance function, resulting in a significant reduction in computational effort. The efficiency and accuracy of the proposed method are verified by five examples involving small failure probabilities, nonlinearity, high dimensionality, and implicit problems.
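Subset simulation estimates a small failure probability as a product of larger conditional probabilities over nested intermediate events. In practice each factor is estimated by conditional (MCMC) sampling; the sketch below instead uses exact standard normal tail probabilities purely to illustrate the telescoping identity, with hypothetical threshold levels:

```python
import math

def tail(z):
    # P(X > z) for a standard normal, via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Write the rare event F = {X > 3} as a chain of nested, more frequent
# events and multiply conditional probabilities:
#   P(F) = P(X>1) * P(X>2 | X>1) * P(X>3 | X>2)
levels = [1.0, 2.0, 3.0]
p = tail(levels[0])
for lo, hi in zip(levels, levels[1:]):
    p *= tail(hi) / tail(lo)
# The product telescopes to the direct tail probability tail(3.0).
```

Each conditional factor is of moderate size (easy to estimate by sampling), whereas estimating tail(3.0) directly by crude Monte Carlo would need very many samples.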
Considering both supersonic and subsonic aerodynamic performances in aircraft design is challenging. This challenge can be alleviated through morphing design or plasma flow control. Therefore, if they are both considered in the aerodynamic optimization, the results can be undoubtedly improved. In this study, first, a new sliding shear variable sweep design scheme which can change both the plane shape (such as the span and sweep angle) and the wing profile (such as the chord length and the relative thickness) is proposed and some information about the elastomeric skin scheme is given. Second, an efficient global optimization framework based on a surrogate-based optimization algorithm is established for the aerodynamic shape optimization of this morphing wing. Third, two optimizations are conducted, wherein one considers the effect of plasma actuation while the other does not. Due to the complexity and large computational cost involved, the effect of plasma actuation is not directly considered in the computational fluid dynamics simulation but is indirectly considered by relaxing the subsonic lift constraint, which assumes that plasma actuation can offset the lift loss. Therefore, it is called “plasma constraint relaxation”. In the two optimizations, three different configurations of the morphing wing, namely the 20°, 30° and 70° sweep-angle states, and three different flow conditions, namely subsonic (0.25 Ma), transonic (0.85 Ma) and supersonic (3 Ma), are considered. The results show that the comprehensive performance (objective function) improves by 12.6% with the effect of plasma actuation while it improves by 7.6% without the effect of plasma actuation after a two-round optimization. This suggests that the subsonic lift constraint, as an active constraint, significantly impacts the final optimization results.
Finally, to verify whether plasma actuation can offset the lift loss, an experiment on nanosecond-pulse dielectric barrier discharge plasma control of flow separation is conducted to increase the subsonic lift of the optimized shape. The results show that the maximum lift increases by 18.1% when the actuation voltage is 8 kV and the actuation frequency is 160 Hz, while the lift loss caused by the constraint relaxation is 14.5%.
Surrogate-assisted evolutionary algorithms have recently shown exceptional abilities for handling computationally expensive constrained optimization problems (ECOPs), where the constraints can be structural performance constraints such as volume, stiffness, and stress, or computational fluid simulations in real-world complex engineering problems. However, most of them are limited to solving ECOPs with inequality constraints. Therefore, a constraint-boundary-pursuing surrogate-assisted differential evolution (PSADE) is designed to solve ECOPs with mixed inequality and equality constraints. Specifically, potential areas near the feasible region are explored by a trial vector generation mechanism (TVGM) guided by interaction between elite solutions and the current population, and an expected improvement-based local search (EILS) is employed to improve the accuracy of the Kriging models in promising neighborhoods of the constraint boundary. Then a specific solution identification-based local search (SILS) is put forward for guiding two kinds of elite solutions, in which an expected feasibility-based local search method is designed for moving elite infeasible solutions that violate the equality constraints toward the feasible region. Therefore, PSADE is able to maintain a good balance between convergence and diversity when considering both the constraints and the objective. Experimental studies on classical test problems show that PSADE is highly competitive in solving ECOPs with mixed constraints at an acceptable computational cost.
In this paper, a novel framework is proposed to optimize variable stiffness (VS) composite circular cylinders designed with the direct fiber path parameterization technique using cubic and quadratic Bézier curves as curvilinear fiber paths. The Bézier curves allow generating fiber paths with nonlinear angle variation, and they are defined by simple design variables such as segment/station angles and multipliers/curvatures. A finite element model of VS shells under pure bending with stiffness variation in circumferential direction due to axially shifted courses is implemented and optimized for maximum buckling load considering curvature and strength constraints. The proposed design optimization framework, called pre-trained multi-step/cycle surrogate-based optimization, is conducted in two steps using a non-dominated sorting genetic algorithm (NSGA-II). The framework leverages prior knowledge of the design space by using laminated VS shells with single ply definitions in the first step before performing the optimization of all VS plies in the second step. Four different stacking sequences are considered, consisting of all VS plies and partial VS plies in combination with unidirectional fibers. The VS composite shell modeled using cubic Bézier curves of constant curvature as the fiber path for all plies shows a 41% increase in buckling load compared to the reference quasi-isotropic composite cylindrical shell.
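A cubic Bézier curve of the kind used above for curvilinear fiber paths is defined by four control points and the standard Bernstein-polynomial form; the control points below are hypothetical and serve only to show the evaluation:

```python
def bezier_cubic(p0, p1, p2, p3, t):
    # Closed-form cubic Bezier evaluation at parameter t in [0, 1]:
    # B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Hypothetical control points defining one curvilinear fiber path segment.
path = [bezier_cubic((0, 0), (1, 2), (3, 2), (4, 0), t / 10) for t in range(11)]
# The curve interpolates the end control points and bends toward the
# interior ones, giving a nonlinear fiber angle variation along the path.
```

Because the curve is a polynomial in t, its tangent (and hence the local fiber angle and curvature used in the constraints) is available in closed form as well.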
Ventilation plays a crucial role in controlling temperature, humidity, and air contamination in poultry houses. This work presents a simulation-based optimization approach that combines stochastic particle swarm optimization (PSO) with numerical aerodynamic and structural analysis in a staggered mode to design a new propeller that reduces the number of blades and improves the performance curve and efficiency of an actual fixed-speed axial exhaust fan. The fitness function is to maximize the aerodynamic performance of the exhaust fan, subject to a series of aerodynamic and structural constraints, which include power, noise, stress, blade tip displacement, and vibrations, and which allow the direct retrofit of the optimized blades into the actual rotor assembly. The blades are represented with four parameters: airfoil family, width fraction, chord, and twist distribution. A polynomial parametrization of the chord and twist distribution of the blades is proposed, and appropriate bounds for these parameters are determined with an analytical design theory for axial fans. An 18% increase in efficiency was obtained with respect to a prescribed propeller while all design constraints were fulfilled.
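A minimal global-best PSO sketch is shown below on an unconstrained toy function; the inertia and acceleration coefficients are generic textbook values, not the paper's settings, and the real application would evaluate the aerodynamic fitness with penalty or repair handling of the structural constraints:

```python
import random

def pso(f, dim=2, n=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimal global-best PSO minimizing f over the box [-5, 5]^dim.
    rnd = random.Random(seed)
    xs = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]          # personal best positions
    pbv = [f(x) for x in xs]         # personal best values
    gi = min(range(n), key=lambda i: pbv[i])
    g, gv = pb[gi][:], pbv[gi]       # global best position / value
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rnd.random() * (pb[i][d] - xs[i][d])
                            + c2 * rnd.random() * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pbv[i]:
                pb[i], pbv[i] = xs[i][:], v
                if v < gv:
                    g, gv = xs[i][:], v
    return g, gv

best, best_val = pso(lambda x: sum(t * t for t in x))  # sphere test function
```

In the staggered mode described above, each fitness evaluation would be replaced by an aerodynamic simulation followed by a structural feasibility check.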
Cross-section optimization is an effective way to improve the mechanical performance of a vehicle body and reduce its structural mass. However, previous studies suffer from deficiencies including inaccurate cross-sectional models, insufficient consideration of manufacturability constraints, and inefficient single-objective optimization. In this work, eight typical cross-sections of a body are optimized. A chain node-based parametric modeling method is proposed to realize accurate cross-sectional discretization, and geometric and manufacturability constraints as well as three optimization objectives are considered in the cross-sectional optimization models. To realize multi-objective optimization, a multi-objective intelligence adaptive optimization algorithm (MIAOA) is proposed. By classifying the non-dominated solutions and applying a reward-penalty strategy, the MIAOA realizes intelligent iteration. The experimental results on the ZDT and DTLZ suites obtained by the MIAOA are better than those of five typical algorithms in terms of convergence, stability, uniformity, and extensiveness. In addition, the MIAOA is applied to increase the moments of inertia of the cross-sections and reduce their material areas. These optimized cross-sections are applied to the body, and the optimized body shows better mechanical performance in terms of torsional stiffness, bending stiffness, and first- and second-order modes, while reducing the total mass by 9.96 kg. In conclusion, the proposed methods can effectively realize lightweight automobile bodies.
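The non-dominated classification step at the heart of Pareto-based algorithms such as the MIAOA can be sketched as below (assuming minimization); this is the generic Pareto test, and the paper's reward-penalty strategy is not reproduced here.

```python
# Sketch: classify the non-dominated solutions in a population of
# objective vectors (minimization convention).

def dominates(a, b):
    """True if a Pareto-dominates b: no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, among the objective vectors (1, 2), (2, 1), (2, 2), and (3, 3), only the first two survive: (2, 2) is dominated by both of them, and (3, 3) by all three.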
This paper investigates the impact of kernel functions on the accuracy of bi-fidelity Gaussian process regression (GPR) for engineering applications. The potential of composite kernel learning (CKL) and model selection is also studied, aiming to ease the process of manual kernel selection. Using the autoregressive Gaussian process as the base model, this paper studies four kernel functions and their combinations: Gaussian, Matern-3/2, Matern-5/2, and cubic. Experiments on four engineering test problems show that the best kernel is problem dependent and sometimes counter-intuitive, even when a large amount of low-fidelity data already aids the model. In this regard, using CKL or automatic kernel selection via cross-validation and maximum likelihood can reduce the tendency to select a poorly performing kernel. In addition, the CKL technique can create a slightly more accurate model than the best-performing individual kernel; its main drawback is a significantly higher computational cost. The results also show that, given a sufficient number of samples, tuning the regression term is important for improving the accuracy and robustness of bi-fidelity GPR, while reducing the importance of proper kernel selection.
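The composite-kernel idea can be sketched as a convex weighted sum of the candidate kernels named above; the weights would then be tuned jointly with the other hyperparameters, e.g. by maximum likelihood. Function names are ours, and the cubic kernel is omitted for brevity.

```python
import numpy as np

# Sketch of composite kernel learning (CKL): a convex combination of
# stationary candidate kernels, each evaluated on a distance d >= 0
# with length scale l. Standard textbook kernel forms are used.

def k_gaussian(d, l=1.0):
    return np.exp(-0.5 * (d / l) ** 2)

def k_matern32(d, l=1.0):
    a = np.sqrt(3.0) * d / l
    return (1.0 + a) * np.exp(-a)

def k_matern52(d, l=1.0):
    a = np.sqrt(5.0) * d / l
    return (1.0 + a + a**2 / 3.0) * np.exp(-a)

def k_composite(d, weights, l=1.0):
    """Convex combination of the candidate kernels."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize to a convex combination
    kernels = (k_gaussian, k_matern32, k_matern52)
    return sum(wi * k(d, l) for wi, k in zip(w, kernels))
```

A convex combination of valid covariance kernels is itself a valid covariance kernel, which is what makes this weighted-sum construction legitimate for GPR.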
Traditional methods for structural uncertainty problems with nonconventional distributions involve a significant computational burden, attributable to the strongly nonlinear transformation of the variables to conventional distributions. To solve this problem, in this study a derivative lambda probability density function (λ-PDF) is proposed for quantifying the uncertainties in a unified framework. Furthermore, an efficient uncertainty propagation approach for complex structures based on the improved derivative λ-PDF and the dimension reduction method (DRM) is developed. First, the uncertainties of random variables with large skewness and kurtosis are quantified by the improved derivative λ-PDF. Second, the n-dimensional structural model is decomposed into a sum of several subsystems. Next, the first four moments of the structural responses are derived using the DRM and Gaussian-weighted integration. Finally, the probability density function and cumulative distribution function of the structural responses are reconstructed to quantify their uncertainties by the improved derivative λ-PDF. Six demonstrative examples illustrate the effectiveness and accuracy of the proposed method. Reference methods, such as Monte Carlo simulation, the maximum entropy method, the Edgeworth series expansion, and the shifted generalized lognormal distribution, serve as benchmarks to demonstrate the superiority of the proposed method.
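The DRM-plus-quadrature step can be sketched as follows for the simplest case. This is our simplification, assuming independent standard-normal inputs and computing only the first two moments; the paper's improved derivative λ-PDF treatment of skewed, heavy-tailed inputs is not reproduced.

```python
import numpy as np

# Sketch of the univariate dimension-reduction step: the n-dimensional
# response g(x) is approximated by a sum of one-dimensional cuts through
# the mean point, and moments are computed by Gauss-Hermite quadrature.

def drm_mean_var(g, n_dim, n_nodes=5):
    """Approximate mean and variance of g(X), X ~ N(0, I_n), via DRM."""
    # Probabilists' Gauss-Hermite rule; normalizing the weights turns the
    # quadrature sum into an expectation under N(0, 1).
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    w = w / w.sum()
    mu = np.zeros(n_dim)
    g0 = g(mu)                         # response at the mean point
    mean = -(n_dim - 1) * g0           # constant from the additive split
    var = 0.0
    for i in range(n_dim):
        vals = np.empty(n_nodes)
        for j, xj in enumerate(x):
            z = mu.copy()
            z[i] = xj                  # vary one input, fix the rest
            vals[j] = g(z)
        m1 = float(w @ vals)           # first moment of the 1-D cut
        m2 = float(w @ vals**2)
        mean += m1
        var += m2 - m1**2              # variances add under independence
    return mean, var
```

For an additive response such as g(x) = x1 + x2 + 1 this recovers the exact mean 1 and variance 2, since the quadrature rule integrates low-order polynomials exactly.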
The moment method can effectively estimate structural reliability and its local reliability sensitivity (LRS). However, the existing moment method has two limitations. The first is that error may exist in the computed LRS, because the LRS is derived from a numerical approximation of the failure probability (FP). The second is that the computational cost increases exponentially with the dimension of the random input. To overcome these limitations, a simple and efficient method for LRS is proposed in this paper. First, the proposed method uses an integral transformation to equivalently express the LRS as a weighted sum of the FP and several extended FPs; these FPs share the same performance function but have different probability density functions (PDFs), and no assumption is introduced in the case of normal input. Second, by exploiting the fact that the derived FPs share the same performance function while their PDFs have an explicit and specific relationship, a strategy of sharing integral nodes is designed within the multiplicative dimensional reduction procedure to simultaneously estimate the moments of the performance function under the different PDFs, which are required to estimate the FP and the extended FPs with the moment-based method. After the derived FPs are estimated from their corresponding moments, the LRS is obtained as a byproduct. Compared with the existing moment method for LRS, the proposed method avoids the first limitation by equivalently expressing the LRS as a series of FPs without introducing error in the case of normal input. Moreover, because of the node-sharing strategy, its computational cost increases only linearly with the dimension of the random input, which avoids the second limitation. The superiority of the proposed method over the existing method is verified by numerical and engineering examples.
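The node-sharing idea can be illustrated with a simple importance-style reweighting, which is our simplification rather than the paper's exact construction: when the different PDFs are known in closed form, quadrature nodes built for a base density p can be reused for another density q by weighting each node with the density ratio q/p.

```python
import numpy as np

# Sketch: reuse one set of quadrature nodes (and the performance-function
# evaluations at those nodes) to estimate expectations under several PDFs.

def shared_node_expectation(f, nodes, weights, p, q):
    """Estimate E_q[f(X)] using quadrature nodes/weights built for density p."""
    nodes = np.asarray(nodes, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * (q(nodes) / p(nodes)) * f(nodes)))
```

With q = p this reduces to ordinary quadrature; the point is that a single set of evaluations of the performance function at the shared nodes can serve the FP and every extended FP, so the cost does not multiply with the number of PDFs.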
Geometry, loading, and boundary conditions of the 36-bar 3D truss
Geometry, loading, and boundary conditions of a glass/vinylester composite plate. Due to the symmetry conditions in the x1 and x2 directions of the center lines, we consider one quarter of the full plate domain, which represents the original square plate with domain D = (25 mm × 25 mm) including a circular hole of radius r = 1.25 mm in the center
FEA results for the glass/vinylester composite plate: the ultimate tensile load in the load–displacement curve (a) is recorded as the output of interest. Part (b) shows the von Mises stress contours obtained in regimes A, B, and C of the load–displacement curve, indicating that the stress concentration at the top of the circular hole advances from regime A to B before fracture occurs in regime C
The finite element mesh of the glass/vinylester composite plate: The fine mesh in a comprising 3887 elements is used to generate high-fidelity output data, while the coarse mesh in b comprising 441 elements is used to generate lower-fidelity output data
Cumulative distribution function of the ultimate tensile load in the glass/vinylester composite plate: the high-fidelity (HF) and low-fidelity (LF) outputs and three bi-fidelity approximations are compared for univariate, third-order DD-GPCE approximations. The DD-GPCE approximations are resampled 10,000 times to estimate the CDFs
Digital twin models allow us to continuously assess the possible risk of damage and failure of a complex system. Yet high-fidelity digital twin models can be computationally expensive, making quick-turnaround assessment challenging. To address this challenge, this article proposes a novel bi-fidelity method for estimating the conditional value-at-risk (CVaR) for nonlinear systems subject to dependent and high-dimensional inputs. For models that can be evaluated quickly, a method that integrates the dimensionally decomposed generalized polynomial chaos expansion (DD-GPCE) approximation with a standard sampling-based CVaR estimation is proposed. For expensive-to-evaluate models, a new bi-fidelity method is proposed that couples the DD-GPCE with a Fourier-polynomial expansion of the mapping between the stochastic low-fidelity and high-fidelity output data to ensure computational efficiency. The method employs measure-consistent orthonormal polynomials in the random variable of the low-fidelity output to approximate the high-fidelity output. Numerical results for a structural mechanics truss with 36-dimensional (dependent random variable) inputs indicate that the DD-GPCE method provides very accurate CVaR estimates at much lower computational cost than standard GPCE approximations. A second example considers the realistic problem of estimating the risk of damage to a fiber-reinforced composite laminate. The high-fidelity model is a finite element simulation that is prohibitively expensive for risk analysis, such as CVaR computation. Here, the novel bi-fidelity method can accurately estimate CVaR, as it includes low-fidelity models in the estimation procedure and uses only a few high-fidelity model evaluations to significantly increase accuracy.
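The standard sampling-based CVaR estimator referred to above can be sketched in a few lines: given samples of the output of interest and a risk level alpha, CVaR is the mean of the tail at or beyond the alpha-quantile (the value-at-risk). This is the textbook estimator, not the paper's implementation.

```python
import numpy as np

# Sketch: sampling-based conditional value-at-risk (CVaR) at level alpha.

def cvar(samples, alpha=0.95):
    """Mean of the samples at or beyond the alpha-quantile (value-at-risk)."""
    y = np.asarray(samples, dtype=float)
    var = np.quantile(y, alpha)        # value-at-risk (alpha-quantile)
    return float(y[y >= var].mean())
```

In the bi-fidelity workflow described above, the samples fed into such an estimator would be drawn cheaply by resampling the DD-GPCE surrogate rather than by running the expensive high-fidelity finite element model.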
Top-cited authors
Ole Sigmund
  • Technical University of Denmark
Andrew T Gaynor
  • Army Research Laboratory
Liang Gao
  • Huazhong University of Science and Technology
Lei Li
  • Pacific Northwest National Laboratory
Zhan Kang
  • Dalian University of Technology