Uncertainty Quantification of Composite Structures with Defects using Multilevel Monte Carlo Simulations

Richard Butler, Timothy J. Dodwell, Tatiana Kim, Stef Kynaston and Rob Scheichl
University of Bath, Bath, BA2 7AY, UK.

Raphael T. Haftka and Nam H. Kim
University of Florida, Gainesville, FL 32611, USA.
This paper demonstrates the huge computational gains achieved when using a novel multilevel Monte Carlo methodology for a typical aerospace model problem. To demonstrate the gains, we quantify the structural performance of a composite wing skin panel with three types of random manufacturing defects (ply angle perturbations and two classes of localised fibre waviness). We introduce the multilevel Monte Carlo method in an abstract way so that it could easily be applied also to other similar problems. We theoretically compare its complexity to standard Monte Carlo simulation and provide a simple-to-implement practical algorithm. Numerical experiments for all types of defects in our model problem confirm the theoretically predicted gains with as much as 152-fold computational speed-ups, bringing problems that would otherwise be unthinkable into the feasible range.
I. Introduction
Whilst the basic advantages of composite laminates are well proven, they are often compromised by
high cost, long development time and poor quality due to multiple defects, particularly in complex parts
such as those found in aerospace applications. Within the aerospace manufacturing sector, where safety is
paramount, risk is quantified and reduced by heuristic safety factors and expensive programmes of empirical
testing over a variety of length scales before a new laminate design can enter production, with more tests
at coupon than at component scale, the so-called test pyramid (Fig. 1, left). The high cost of certification and the inefficiency of general safety factors have led to a new initiative [26] whereby numerical simulation and
stochastic methods can be used to demonstrate structural integrity, offering scope to challenge conservative
failure limits and reduce design-to-manufacture time.
The sensitivity of the buckling load of thin shells and plates to manufacturing imperfections is well known and is commonly accounted for by assuming imperfections in the form of the buckling modes [7, 20]. However, it is now recognized that the statistical nature of imperfections needs to be characterized and simulated [19]. Furthermore, in complex composite manufacturing processes, uncertainty arises from a number of different sources, e.g. material variability, machine tolerance [24] and process-induced defects such as fibre waviness and ply wrinkling [9]. However, statistical simulations typically require a large number of analyses and can thus become computationally extremely expensive. For that reason, a host of techniques for mitigating their cost has been developed in the engineering and statistical communities. When the probability of
failure due to a single failure mode is all that is needed, methods, such as the first order reliability method
(FORM) [21, Section 4.4], can substantially reduce the number of required simulations. Such methods have
been applied to the buckling of shells with random imperfections [11]. However, such methods do not capture the
statistical interaction between multiple failure modes. Monte Carlo simulations are often the only choice for
Professor, Department of Mechanical Engineering, R.Buter@bath.ac.uk
Prize Fellow, Department of Mechanical Engineering, T.J.Dodwell@bath.ac.uk
Research Officer, Departments of Mechanical Engineering and Mathematical Sciences, T.Kim@bath.ac.uk
PhD Student, Department of Mathematical Sciences, S.J.Kynaston@bath.ac.uk
Professor, Department of Mathematical Sciences, R.Scheichl@bath.ac.uk
Distinguished Professor, Department of Mechanical and Aerospace Engineering, haftka@ufl.edu
Associate Professor, Department of Mechanical and Aerospace Engineering, nkim@ufl.edu
1 of 18
American Institute of Aeronautics and Astronautics
Figure 1: (Left) Test-pyramid for aircraft structural components. (Right) Model problem set-up.
capturing such interactions. There is therefore a host of methods targeted at reducing the cost of Monte Carlo simulations. Importance sampling methods, e.g. [21, Section 3.4], reduce the number of required simulations by preferential sampling near the boundary separating the safe and the failure domains. However, identifying this boundary can be as difficult as the original Monte Carlo simulation. Similarly, separable Monte Carlo methods [25] take advantage of the independence of uncertainty sources, and the two approaches can be combined for additional savings [5]. Surrogates are often used to allow large Monte Carlo sample sizes [2]. However, surrogates suffer from the curse of dimensionality; one approach for alleviating this problem is to combine a large number of low-fidelity, inexpensive simulations with a small number of higher-fidelity simulations. For example, Alexandrov et al. (2001) describe the use of multiple meshes for constructing surrogates for aerodynamic optimization [1].
In our application, where a large number of defects needs to be simulated, it would be impractical to construct accurate surrogates. However, we can still take advantage of combining fidelities with different mesh sizes. This paper therefore sets out to optimize the use of a hierarchy of coarse and fine finite element (FE) models for Monte Carlo simulations of panel failure due to defects. "Failure" refers either to buckling or to the exceedance of known material strain limits. We show that only a handful of costly fine-scale computations are needed to accurately estimate the probability of panel failure at a given load, as opposed to the thousands of fine-scale samples typically needed in classical Monte Carlo analyses. The missing exploration of the variability is taken care of by a large number of coarse simulations.
The multilevel Monte Carlo (MLMC) method was first suggested in the context of option pricing in financial mathematics [12]. Its huge potential in uncertainty quantification for engineering applications was identified by Cliffe et al. [6], where it was motivated via a subsurface hydrology application. Since then it has been applied to a range of other applications [4, 22, 23], it has been improved [8, 10], and it has been extended to also take experimental data into account in a Bayesian setting [14, 17].
The focus of this paper is to introduce the aerospace community to this new powerful methodology and
to show the huge gains that are possible on a typical model problem, namely the structural performance of
a composite wing skin panel with random manufacturing defects. The model problem and its discretisation are described in Section II. We introduce three types of random defects: ply angle perturbations, as well as waviness defects localised in the $x$-direction and localised in the $x$- and $y$-directions. In doing so, we introduce a new, low-dimensional description of localised fibre waviness, which may prove a useful tool in defect characterisation. In Section III we describe the multilevel Monte Carlo method in a fairly abstract way so
that it could also be applied to other similar problems. We compare its theoretical complexity with that of
a standard Monte Carlo simulation and provide a simple-to-implement, practical algorithm. The numerical
experiments in Section IV confirm the theoretically predicted gains for the model problem described in
Section II for all types of defects with up to 152 times faster code, bringing problems that would otherwise
be unthinkable into the feasible range. Whilst the model problem is chosen to represent the typical gains achieved by the MLMC methodology, we in addition learn something about the engineering implications of
the different types of defects. Perhaps unsurprisingly, numerical results show that random variations in ply
angles increase the risk of buckling failure, whilst localised waviness defects lead to a predominantly in-plane
local strain failure. The paper concludes with a brief discussion of future avenues of research.
II. Model Problem - Structural performance of a wing skin panel with
random defects
In this section we describe the model problem used to test the multilevel Monte Carlo methodology.
Here, as an illustrative example for our new methodology, the structural performance of a wing skin panel
subject to a typical in-service load is considered. Failure of the panel occurs at the lowest load at which
either in-plane failure occurs (according to a maximum strain criterion) or the panel buckles. Firstly the
pristine panel problem is formulated. Then we describe how random defects are introduced. For consistency,
in the mathematical descriptions which follow, $a$, $\boldsymbol{a}$ (or $a_i$), $\underline{a}$ (or $a_{ij}$) and $\mathcal{A}$ (or $a_{ijkl}$) denote scalars, vectors, second-order and fourth-order tensors, respectively. Subscripts refer to the coordinate system, either global $(x, y, z)$ or local $(1, 2, 3)$, but never denote differentiation.
A. Model Setup and Mathematical Description
Consider a rectangular composite plate of thickness $t$, length $L_x$ and width $L_y$, with the un-deformed mid-plane of the plate occupying the domain $\Omega = [0, L_x] \times [0, L_y]$ with boundary $\Gamma$. The laminate is made up of $K$ identical, orthotropic, composite plies characterised by the elastic tensor $Q$ and arranged in a stacking sequence with angles $[\psi_1, \ldots, \psi_K]$. The deformation of the plate is described by in-plane displacements $\boldsymbol{u}(x, y) = [u, v]^T$, whilst out-of-plane deformations are described by the vertical displacement $w(x, y)$ and rotations of the midplane $\boldsymbol{\theta}(x, y) = [\theta, \phi]^T$. The plate is subjected to a uniform axial compressive strain $\lambda = \Delta/L_x$, prescribed via an end shortening $\Delta$. Out-of-plane, the plate is simply-supported around all boundaries.
The problem is reduced to a 2D problem in $\Omega$ by applying classical laminate theory (CLT) [13], which gives the laminate stiffness tensors

$$A = \sum_{k=1}^{K} \bar{Q}^{(k)} (z_k - z_{k-1}), \quad B = \frac{1}{2} \sum_{k=1}^{K} \bar{Q}^{(k)} \left(z_k^2 - z_{k-1}^2\right) \quad \text{and} \quad D = \frac{1}{3} \sum_{k=1}^{K} \bar{Q}^{(k)} \left(z_k^3 - z_{k-1}^3\right), \qquad (1)$$
where $z_k$ is the distance from the top edge of the $k$th ply to the neutral axis of the plate and where $\bar{Q}^{(k)}$ is the elastic tensor of the $k$th ply in global coordinates. These homogenised tensors connect in-plane strains of the laminate $\varepsilon(u) = \frac{1}{2}\left(\nabla u + \nabla u^T\right)$ with out-of-plane curvatures $\kappa(\theta) = \frac{1}{2}\left(\nabla \theta + \nabla \theta^T\right)$. With the additional assumption that the in-plane and out-of-plane behaviour are decoupled, it follows that the in-plane stress and the moment are given by

$$\sigma = t^{-1} A \varepsilon \quad \text{and} \quad \mu = D^* \kappa, \qquad (2)$$

respectively. Here, $D^* = D - B^T A^{-1} B$, which conservatively knocks down the bending resistance of the panel to account for coupling effects. This naturally divides the analysis into two parts.
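The book-keeping behind (1) and the knocked-down bending stiffness $D^*$ is easy to mechanise. The following sketch (NumPy, Voigt notation; the helper names and the ply-interface convention are our illustrative assumptions, not the paper's code) computes $A$, $B$, $D$ and $D^*$ for a given stacking sequence:

```python
import numpy as np

def q_bar(Q, psi_deg):
    """Ply stiffness Q (3x3, Voigt notation) rotated by the ply angle psi,
    using the standard CLT transformation Q_bar = T^-1 Q R T R^-1."""
    c, s = np.cos(np.radians(psi_deg)), np.sin(np.radians(psi_deg))
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])  # Reuter matrix: tensor <-> engineering shear
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

def laminate_stiffness(Q, angles_deg, t_ply):
    """A, B, D tensors of classical laminate theory, eq. (1), plus the
    knocked-down bending stiffness D* = D - B^T A^-1 B."""
    K = len(angles_deg)
    z = t_ply * (np.arange(K + 1) - K / 2.0)  # ply interface heights about the midplane
    A = np.zeros((3, 3)); B = np.zeros((3, 3)); D = np.zeros((3, 3))
    for k, psi in enumerate(angles_deg):
        Qk = q_bar(Q, psi)
        A += Qk * (z[k + 1] - z[k])
        B += Qk * (z[k + 1] ** 2 - z[k] ** 2) / 2.0
        D += Qk * (z[k + 1] ** 3 - z[k] ** 3) / 3.0
    Dstar = D - B.T @ np.linalg.inv(A) @ B  # conservative coupling knock-down
    return A, B, D, Dstar
```

For a symmetric layup the coupling tensor $B$ vanishes and $D^* = D$, which is a convenient sanity check on the implementation.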
1. In-Plane Calculation
Firstly we consider the in-plane calculation, which provides both the in-plane stress field $\sigma$ for the buckling analysis to follow and an in-plane failure load $P_I$. Equilibrium of the in-plane stresses is given by

$$\nabla \cdot \sigma = 0 \quad \text{such that} \quad u = 0 \text{ on } x = 0, \quad u = \Delta \text{ on } x = L_x \quad \text{and} \quad \sigma \cdot n = 0 \text{ on } y = 0 \text{ and } y = L_y, \qquad (3)$$

where $n$ denotes the normal to the boundary $\Gamma$. To solve (3) via the finite element method (FEM), the differential equation is first recast in variational (or weak) form:

$$\int_\Omega t^{-1} A\, \varepsilon(u) : \varepsilon(\tilde{u}) \, dx + \int_\Gamma (\sigma(u) \cdot n) \cdot \tilde{u} \, ds = 0. \qquad (4)$$

This equation has to hold for any variation $\tilde{u}$ in a suitable function space $V$. The solution $u$ is also sought in $V$, here the Sobolev space $H^1$ of all square integrable functions with square integrable first derivatives satisfying the boundary conditions.

To approximate (4), the domain is discretised into a set of uniform quadrilateral elements $\mathcal{T}_h = \{\Omega_e^{(i)}\}_{i=1}^{n_{el}}$,
where $n_{el}$ denotes the number of elements, $n_{nod}$ the number of nodes and $h$ the largest dimension of the elements. The solution is approximated by restricting (4) to the finite dimensional subspace $V_h \subset V$ of continuous, piecewise polynomial finite element functions on $\mathcal{T}_h$. In the simplest case, this finite element space is characterised by a basis of piecewise linear shape functions $\{\phi_i(x, y)\}_{i=1}^{n_{nod}}$, such that the two components of the approximate solution $u_h$ can be written as $u_h = \sum_{i=1}^{n_{nod}} d_i^I \phi_i$ and $v_h(x, y) = \sum_{i=1}^{n_{nod}} d_{i+n_{nod}}^I \phi_i(x, y)$. Since (3) is a 2D vector equation, the total number of degrees of freedom is $M = 2 n_{nod}$, where $n_{nod}$ is proportional to $h^{-2}$. As for any standard finite element analysis, by substituting the expansions for $u_h$ and $v_h$ above, (4) can be rewritten as a linear system of the form

$$K^I d^I = f, \qquad (5)$$

where $K^I \in \mathbb{R}^{M \times M}$ is the global (in-plane) stiffness matrix and $f \in \mathbb{R}^M$ is the load vector due to the prescribed boundary conditions. The vector $d^I \in \mathbb{R}^M$ contains the coefficients of the in-plane degrees of freedom in the expansions of $u_h$ and $v_h$ above. In the examples which follow, $K^I$ is assembled as a sparse matrix in Matlab. The resulting matrix equation is solved using a sparse direct solver (Matlab's backslash operator).
The finite element solution $u_h$ is calculated for a prescribed compressive end shortening $\Delta < 0$. The resulting piecewise constant stress field $\sigma(u_h)$ is used for the buckling analysis, whilst the associated strain field $\varepsilon(u_h)$ determines the in-plane failure load $P_I$. The in-plane failure is calculated as follows. Within each element the laminate strain is constant, $\varepsilon_i = \varepsilon(u_h)|_{\Omega_e^{(i)}}$. Individual ply strains $\varepsilon_{i,k}$ are calculated by rotating the laminate strains by the ply angle orientations. For each ply, within any element, the following ratios are calculated:

$$\alpha_1^{i,k} = \begin{cases} \varepsilon_{11}^{i,k} / \varepsilon_c^f & \text{for } \varepsilon_{11}^{i,k} < 0 \\ \varepsilon_{11}^{i,k} / \varepsilon_t^f & \text{for } \varepsilon_{11}^{i,k} \geq 0 \end{cases}, \quad \alpha_2^{i,k} = \begin{cases} \varepsilon_{22}^{i,k} / \varepsilon_c^f & \text{for } \varepsilon_{22}^{i,k} < 0 \\ \varepsilon_{22}^{i,k} / \varepsilon_t^f & \text{for } \varepsilon_{22}^{i,k} \geq 0 \end{cases} \quad \text{and} \quad \alpha_3^{i,k} = \frac{\varepsilon_{12}^{i,k}}{\varepsilon_s^f}, \qquad (6)$$

where $\varepsilon_c^f$, $\varepsilon_t^f$ and $\varepsilon_s^f$ are the in-plane compressive, tensile and shear strains at failure, respectively. Since the in-plane problem is linear, the ratios calculated give a scalar multiple of the end displacement (i.e. $(\alpha_j^{i,k})^{-1} \Delta$) which, if applied, would initiate in-plane failure in that particular ply, in that element and for that (local) mode. By recording the maximum ratio over all plies, elements and modes, i.e. $\alpha = \max_{[j,i,k]} \alpha_j^{i,k}$, the first in-plane failure occurs at an axial strain of $\lambda_1 = \alpha^{-1} \Delta / L_x$ with a corresponding in-plane failure load of $P_I = \alpha^{-1} \int_{\Gamma|_{x=L_x}} \sigma(u_h) \cdot n \, dx$. Since $n = [1, 0]^T$ and $\sigma_{xy} = \sigma_{yx} = 0$ on $\Gamma|_{x=L_x}$, $P_I = [P_I, 0]^T$.
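In code, the search for the governing ratio in (6) is a max-reduction over elements, plies and modes. A minimal sketch (our illustrative helper, assuming the strains are already rotated into ply coordinates and treating the failure strains as positive magnitudes):

```python
import numpy as np

def max_failure_ratio(ply_strains, eps_c_f, eps_t_f, eps_s_f):
    """Governing ratio alpha = max over the three ratios of eq. (6).

    ply_strains has shape (n_elements, n_plies, 3): the constant in-plane
    strains (eps_11, eps_22, eps_12) of each ply in local ply coordinates.
    Failure strains are taken here as positive magnitudes.
    """
    e11, e22, e12 = (ply_strains[..., m] for m in range(3))
    a1 = np.where(e11 < 0, -e11 / eps_c_f, e11 / eps_t_f)  # mode 1: fibre-direction strain
    a2 = np.where(e22 < 0, -e22 / eps_c_f, e22 / eps_t_f)  # mode 2: transverse strain
    a3 = np.abs(e12) / eps_s_f                             # mode 3: shear strain
    return float(np.max(np.stack([a1, a2, a3])))

# One element, one ply: compression dominates, so alpha = 0.004 / 0.008 = 0.5
strains = np.array([[[-0.004, 0.001, 0.002]]])
alpha = max_failure_ratio(strains, eps_c_f=0.008, eps_t_f=0.010, eps_s_f=0.010)
```

Because the in-plane problem is linear, scaling the prescribed end shortening by $1/\alpha$ brings the worst ply exactly to its failure strain.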
2. Buckling Analysis of a Composite Plate using Reissner-Mindlin Plate Theory
The critical buckling load of the plate is calculated using Reissner-Mindlin (RM) plate theory, since the advantages over Kirchhoff plate theory are well documented [15]. In the absence of body forces, moment equilibrium for the RM plate gives the linear eigenvalue problem

$$\nabla \cdot (D^* \kappa(\theta)) - kG(\nabla w - \theta) = \lambda \nabla \cdot (\sigma \nabla w) \quad \text{such that} \quad w = 0 \text{ and } \mu \cdot n = 0 \text{ on } \Gamma, \qquad (7)$$

where $G$ is the through thickness shear stiffness and $k = 5/6$ is the shear correction factor (both constants), whilst $\sigma$ is the in-plane stress field calculated in Section 1 above. Again, (7) is solved using FEM, and therefore the weak form of the eigenvalue problem is used, such that the problem becomes: find the smallest (positive real) eigenvalue $\lambda$ and associated (buckling) eigenmode $0 \neq (\theta, w) \in V^2 \times V$ such that

$$\int_\Omega D^* \kappa(\theta) : \kappa(\hat{\theta}) \, d\Omega + kG \int_\Omega (\nabla w - \theta) \cdot (\nabla \hat{w} - \hat{\theta}) \, d\Omega = \lambda \int_\Omega \sigma \nabla w \cdot \nabla \hat{w} \, d\Omega \quad \forall (\hat{\theta}, \hat{w}) \in V^2 \times V. \qquad (8)$$

Approximating solutions of (8) using the same mesh $\mathcal{T}_h$ as for the in-plane calculations, and such that $w$ and $\theta$ are interpolated with the same shape functions $\{\phi_i(x, y)\}_{i=1}^{n_{nod}}$, the matrix form of (8) is

$$K^B d^B = \lambda G d^B, \qquad (9)$$

where $K^B \in \mathbb{R}^{M \times M}$ is the global stiffness matrix (LHS of (8)) whilst $G \in \mathbb{R}^{M \times M}$ is the geometric stiffness matrix (RHS of (8)). In the buckling analysis, the coefficient vector $d^B$ contains $M = 3 n_{nod}$ total degrees of
freedom. The sparse matrices are again assembled in Matlab, and the eigenvalue problem is solved using a sparse iterative eigenvalue solver (eigs). The required quantity of interest, the buckling load, is $P_B = [P_B, 0]^T = \lambda \int_{\Gamma|_{x=L_x}} \sigma(u_h) \cdot n \, dx$.

An important consideration in the choice of plate-bending elements and numerical integration schemes is the 'constraint' ratio $r$ of total degrees of freedom ($M$) to the number of shear constraints arising in the thin plate limit ($t \to 0$). For this problem the optimal ratio is $3/2$; a lower value indicates a propensity of an element to 'lock' and will lead to significant overestimates of the critical buckling loads. We note that full integration (two-point Gaussian rule) of a four-node element gives a constraint ratio of $3/8$, whereas reduced (one-point) Gaussian integration gives the optimal value. Whilst reduced integration helps prevent locking, it leads to rank-deficiency, i.e. the element stiffness matrices have extra zero-energy modes in excess of the three rigid-body modes. In fact, the one-point Gaussian integration rule leads to four extra zero-energy modes, which can result in numerical errors in specific cases. As a compromise, selective integration is implemented, in which only the shear terms are calculated with reduced integration (i.e. the second term on the LHS of (8)). This element is still rank deficient, but with only two zero-energy modes, and yet maintains an optimal constraint ratio. The numerical experiments show that this element performs well and only in very few cases causes numerical errors (i.e. eigs fails to converge). The implementation of higher-order hybrid elements, which achieve full rank and are free of shear locking, is outside the scope of this contribution, but will be considered in future work.
B. Composite Plates with Random Defects
To illustrate our new methodology, three distinct types of random defect are incorporated into the model
problem. These defected laminates are considered as perturbations away from a pristine or base laminate
design. Specifically, we consider random ply angle perturbation (homogeneous), prismatic in-plane fibre
waviness (x-dependent), and non-prismatic fibre waviness (x- and y-dependent).
For scenarios with spatially dependent defects, the stiffness tensors of the composite plate, $A$, $B$ and $D$, vary spatially, i.e. $A = A(x, y)$, $B = B(x, y)$ and $D = D(x, y)$. These are evaluated separately on each element $\Omega_e$ and, in particular, lead to a spatially-dependent (adapted) bending stiffness $D^* = D^*(x, y)$. The finite element methodology from above remains unchanged; however, the implementation is more costly due to the individual evaluation of element stiffness matrices, which now have $(x, y)$-dependence.
1. Ply Angle Perturbation. Given an intended, pristine stacking sequence, $\psi$, for the laminate, a small random perturbation, $\phi_i$, is applied to the angle of the $i$th ply, for each $i = 1, \ldots, K$. In this way, a new "defective" stacking sequence, $\psi^d = [\psi_1^d, \ldots, \psi_8^d]$, is obtained. These random correctional terms are normally distributed, such that $\phi_i \sim \mathcal{N}(0, s^2)$, for some specified standard deviation $s$.
2. Prismatic Fibre Waviness. $x$-dependent defects are incorporated into the model by introducing in-plane fibre "waviness" in some plies, which is assumed to be consistent across the entire width ($y$-dimension) of the affected ply. This could be caused by a perturbation in the tow path. Waves for individual plies are modelled using the "wave functions" given by

$$f_{Wav,i}(x) = \delta_i \, \mathrm{sech}^2\!\left(\frac{3}{\xi_i}(x - x_i^*)\right), \quad i = 1, \ldots, K, \qquad (10)$$

where the random parameters $x_i^*$, $\delta_i$ and $\xi_i$ in (10) define the location, amplitude and width of defect $i$, respectively. Defining the initial pristine tow path of ply $i$ as a function of $x$, $f_{Pris,i}(x)$, we obtain the defective tow path via $f_{Def,i}(x) = f_{Pris,i}(x) + f_{Wav,i}(x)$. The angle of the $i$th ply at position $x$ is then given by

$$\psi_{Def,i}(x) = \psi_{Pris,i} + \arctan\left(f'_{Wav,i}(x)\right).$$
3. Non-Prismatic Fibre Waviness. Finally, $(x, y)$-dependent defects are considered. As above, fibre waves for individual plies are determined using "wave functions"; however, in this scenario, $y$-dependence is incorporated by defining the wave amplitude as a function of $y$. Thus we now have

$$f_{Wav,i}(x, y) = \delta_i(y) \, \mathrm{sech}^2\!\left(\frac{3}{\xi_i}(x - x_i^*)\right), \quad i = 1, \ldots, K. \qquad (11)$$

In particular, the functions $\delta_i(y)$ are chosen so as to "smooth out" the defect outside of some constructed random $y$-region.
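For intuition, the first two defect types can be sampled with a few lines of NumPy. The sketch below (our illustrative function names; the pristine tow path is taken as straight, and the tow-path slope is approximated by finite differences) mirrors (10) and the perturbed ply angle formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ply_perturbation(psi_pristine_deg, s=1.0):
    """Defect type 1: add N(0, s^2) noise (in degrees) to every ply angle."""
    return psi_pristine_deg + rng.normal(0.0, s, size=len(psi_pristine_deg))

def wave(x, delta, xi, x_star):
    """Wave function of eq. (10): a sech^2 bump of amplitude delta and
    width xi, centred at the random location x_star."""
    return delta / np.cosh(3.0 * (x - x_star) / xi) ** 2

def defective_angle(x, psi_pristine_deg, delta, xi, x_star):
    """Local fibre angle of a wavy ply: pristine angle plus arctan of the
    tow-path slope f'_Wav (approximated here by central differences)."""
    h = 1e-6
    slope = (wave(x + h, delta, xi, x_star) - wave(x - h, delta, xi, x_star)) / (2.0 * h)
    return psi_pristine_deg + np.degrees(np.arctan(slope))
```

The non-prismatic case (11) simply replaces the scalar `delta` by a sampled function of $y$; the sech$^2$ profile ensures the defect decays rapidly away from its centre.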
5 of 18
American Institute of Aeronautics and Astronautics
III. Multi-Level Monte Carlo Methodology and Implementation
To describe our novel multilevel uncertainty quantification method, let us for a moment step away from the
concrete example and formulate the problem in an abstract way. More details about this new methodology
can be found in Cliffe et al. [6].
Let us assume we have a FE model of a mechanical structure that is subject to some uncertainty in its material properties (e.g. the example in the previous section). The accuracy and the computational cost of this FE model are directly linked to the number $M$ of degrees of freedom and thus to the mesh resolution. Typically we are only interested in some scalar quantity of interest, e.g. the minimum of the buckling and in-plane failure loads, $Q = \min(P_I, P_B)$, but to find this we need to compute the entire state vector $X_M \in \mathbb{R}^M$, i.e. $[d^I; d^B]$ in the model above with $M = 5 n_{nod}$. As discussed, we use stochastic modelling of the uncertainty and assume that the randomness in the material parameters enters the model via a vector $Z \in \mathbb{R}^s$ of $s$ random parameters, e.g. $\phi_i$, $x_i^*$, $\delta_i$, $\xi_i$ above. Note that therefore both $X_M$ and $Q_M$ are random variables.
A. Standard Monte-Carlo Simulation
In a typical Monte Carlo (MC) analysis, we create a large number $N$ of independent realisations (or samples) $Z^{(j)}$ of our parameters (i.e. samples of defective panel models in the above application) and then compute $X_M^{(j)}$, the corresponding sample of the output vector of our FE model with $M$ spatial degrees of freedom, as well as the corresponding quantity of interest $Q_M^{(j)}$. The average

$$\widehat{Q}_{M,N}^{MC} = \frac{1}{N} \sum_{j=1}^{N} Q_M^{(j)} \qquad (12)$$

of these independent samples of $Q_M$ is then the standard Monte Carlo estimator for the expected value $\mathbb{E}[Q_M]$ of $Q_M$. Higher-order statistical moments or failure probabilities can be estimated in an analogous way. For example, to estimate the $p$th moment of $\min(P_I, P_B)$, we simply have to choose $Q_M = \min(P_I, P_B)^p$ in (12). Similarly, to estimate the probability that $\min(P_I, P_B) \leq P$, for some critical load $P$, we simply have to set $Q_M = 1$ if $\min(P_I, P_B) \leq P$ and $Q_M = 0$ otherwise.
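As a concrete illustration, the estimator (12) and the 0/1 indicator trick for failure probabilities can be sketched as follows; the Gaussian "failure load" is a cheap synthetic stand-in of our own invention, not the paper's FE model:

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo(sample_q, n):
    """Standard MC estimator, eq. (12): average n i.i.d. samples of Q_M.
    Also returns the standard error ~ sqrt(V[Q_M]/N), the sampling term in (15)."""
    samples = np.array([sample_q(rng) for _ in range(n)])
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n)

def toy_failure_load(r):
    """Stand-in for one FE solve: a synthetic failure load in kN."""
    return 278.0 + 5.0 * r.standard_normal()

# Expected failure load
mean, sem = monte_carlo(toy_failure_load, 10_000)

# Failure probability P[min(P_I, P_B) <= P] via the indicator trick
p_fail, _ = monte_carlo(lambda r: float(toy_failure_load(r) <= 270.0), 10_000)
```

The same driver estimates means, higher moments or probabilities just by changing what `sample_q` returns, exactly as described above.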
There are two sources of error in this estimator. Firstly, we are actually interested in the expected value $\mathbb{E}[Q]$ of $Q$, the (inaccessible) random variable corresponding to the exact solution of the buckling or in-plane problem without any FE error. However, since the FE method converges for each sample, as $M \to \infty$, we also have

$$|\mathbb{E}[Q_M - Q]| \leq C_1 M^{-\alpha}, \qquad (13)$$

where $\alpha > 0$ is the order of convergence and $C_1$ is a constant independent of $M$. This error is called the bias error. We can reduce this error below any prescribed bias tolerance $\tau_b$ by making $M$ sufficiently large. In particular, choosing $M \geq (\tau_b / C_1)^{-1/\alpha}$ will ensure this.

Secondly, there is the sampling error due to the finite number $N$ of samples that we have to take. Typically, the total error is quantified via the root mean square error (RMSE), given by

$$e\big(\widehat{Q}_{M,N}^{MC}\big) = \mathbb{E}\Big[\big(\widehat{Q}_{M,N}^{MC} - \mathbb{E}[Q]\big)^2\Big]^{1/2}. \qquad (14)$$

The mean square error can be easily seen [6] to expand as

$$e\big(\widehat{Q}_{M,N}^{MC}\big)^2 = (\mathbb{E}[Q_M - Q])^2 + N^{-1} \mathbb{V}[Q_M], \qquad (15)$$
where $\mathbb{V}[Q_M]$ denotes the variance of the random variable $Q_M$. For $M$ sufficiently large, $\mathbb{V}[Q_M] \approx \mathbb{V}[Q] = V$, a problem specific constant. The first term in (15) is the square of the bias error which we have already discussed. To ensure that the second term in (15) is smaller than a sample tolerance $\tau_s^2$, it suffices to choose $N \geq V \tau_s^{-2}$. The total MSE is then less than $\tau_b^2 + \tau_s^2$. To ensure that this is less than $\tau^2$ we can choose

$$\tau_s^2 = \theta \tau^2 \quad \text{and} \quad \tau_b^2 = (1 - \theta) \tau^2, \quad \text{for some } 0 < \theta < 1. \qquad (16)$$

We observe that in order to reduce the total error in (14) it is necessary to increase both the number of degrees of freedom $M$ and the number of samples $N$. This very quickly leads to an intractable problem
Figure 2: Hierarchy of FE meshes for the multilevel algorithm.
when the cost to compute each sample to a sufficiently high accuracy is high, which is typically the case, especially for localised defects. The cost of one sample $Q_M^{(j)}$ of $Q_M$, in terms of floating point operations (FLOPs) or CPU time, depends on the complexity of the FE solver and of the eigensolver for the buckling problem. Typically it will grow like $C_2 M^\gamma$, for some $\gamma \geq 1$ and some constant $C_2$, independent of $j$ and of $M$. Thus, the total cost to achieve a RMSE $e\big(\widehat{Q}_{M,N}^{MC}\big) \leq \tau$ with standard MC is

$$\text{Cost}\big(\widehat{Q}_{M,N}^{MC}\big) \approx C_2 N M^\gamma \leq C_3 \, \tau^{-2-\gamma/\alpha}. \qquad (17)$$

Typically $\alpha \leq 1$ and it can be as small as $\alpha = 1/3$ in 3D. In the numerical experiments for the wing skin panel below, we have $\alpha = 1$ and the cost to compute a sample grows like $C_2 M^{1.17}$, i.e. $\gamma \approx 1.17$. Therefore, to halve the error, the cost has to grow by a factor of $2^{3.17} \approx 9$, which quickly leads to an unacceptable computational cost.
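The tolerance split (16) and the cost bound (17) can be turned into a small planning calculation. A sketch (the constants $C_1$ and $V$, and the helper name, are illustrative assumptions):

```python
import math

def mc_resources(tol, theta, c1, var, alpha, gamma):
    """Plan M and N for RMSE <= tol with standard MC, following (13)-(17)."""
    tol_b = math.sqrt(1.0 - theta) * tol           # bias tolerance, eq. (16)
    tol_s = math.sqrt(theta) * tol                 # sampling tolerance
    m = math.ceil((tol_b / c1) ** (-1.0 / alpha))  # dof needed, from (13)
    n = math.ceil(var / tol_s ** 2)                # samples needed, from (15)
    return m, n, n * m ** gamma                    # cost ~ tol^(-2 - gamma/alpha)

m1, n1, cost1 = mc_resources(0.010, 0.5, 1.0, 1.0, 1.0, 1.17)
m2, n2, cost2 = mc_resources(0.005, 0.5, 1.0, 1.0, 1.0, 1.17)
# halving the tolerance multiplies the cost by about 2^(2 + 1.17) ~ 9
```

With $\alpha = 1$ and $\gamma \approx 1.17$ as in the experiments below, halving the tolerance roughly doubles $M$, quadruples $N$ and multiplies the total cost by about 9.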
B. Multilevel Monte-Carlo Simulation
Multilevel Monte Carlo simulation (MLMC) [4, 6, 12] seeks to reduce the variance of the estimator, and thus to reduce computational time, by recursively using a hierarchy of FE models as control variates. The standard MC estimator in the previous section was too costly because all samples were computed to the required discretisation (or bias) error. Let us now introduce a hierarchy of FE models, typically obtained by uniform or adaptive refinement of a coarse mesh, as shown in Fig. 2. Each mesh corresponds to a level $0 \leq \ell \leq L$ in our multilevel method with $M_0 < M_1 < \cdots < M_L = M$ degrees of freedom, respectively, where $M_0$ is typically very small, say $8^2$ or $16^2$. In the case of uniform mesh refinement in two dimensions, we have $M_\ell = 4^\ell M_0$. Naturally, both the accuracy and the cost increase as we move up the levels.
By exploiting the linearity of the expectation operator, the MLMC method avoids estimating $\mathbb{E}[Q]$ directly on the finest level $L$. It estimates instead the mean on the coarsest level and corrects this mean successively by adding estimates of the expected values of $Y_\ell = Q_{M_\ell} - Q_{M_{\ell-1}}$, for $\ell \geq 1$. That is, using the simple identity

$$\mathbb{E}[Q_M] = \mathbb{E}[Q_{M_0}] + \sum_{\ell=1}^{L} \mathbb{E}[Y_\ell], \qquad (18)$$

we define the MLMC estimator as

$$\widehat{Q}_M^{ML} = \widehat{Q}_{M_0, N_0}^{MC} + \sum_{\ell=1}^{L} \widehat{Y}_{\ell, N_\ell}^{MC}, \qquad (19)$$

where the numbers of samples $N_\ell$ are judiciously chosen to minimise the total cost of this estimator for a given prescribed sampling error (see below). Note that samples $Y_\ell^{(j)}$ of $Y_\ell$ require the FE approximations $Q_{M_\ell}^{(j)}$ and $Q_{M_{\ell-1}}^{(j)}$ on two consecutive mesh levels, i.e. two solves, but crucially both with the same sample $Z^{(j)}$ of the parameters. The cost of this estimator is

$$\text{Cost}\big(\widehat{Q}_M^{ML}\big) = \sum_{\ell=0}^{L} N_\ell \, C_\ell, \qquad (20)$$

where $C_\ell$ is the cost to compute one sample of $Y_\ell$ (resp. $Q_{M_0}$) on level $\ell$ (resp. 0).
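The key implementation point of (18)-(19) is that the same draw of $Z$ feeds both levels of each correction $Y_\ell$. This can be sketched as follows, with a toy level model of our own (bias decaying like $4^{-\ell}$, standing in for the FE hierarchy):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlmc_estimate(sample_q, n_per_level):
    """MLMC estimator, eq. (19): coarse-level mean plus correction means.

    sample_q(level, z) returns Q_{M_level} for one draw z of the random
    parameters; each correction Y_l = Q_{M_l} - Q_{M_{l-1}} re-uses the
    SAME z on both levels, which is what makes V[Y_l] small.
    """
    estimate = 0.0
    for l, n in enumerate(n_per_level):
        ys = []
        for _ in range(n):
            z = rng.standard_normal()  # shared random input Z^(j)
            y = sample_q(l, z) - (sample_q(l - 1, z) if l > 0 else 0.0)
            ys.append(y)
        estimate += np.mean(ys)        # Y-hat^MC on level l
    return estimate

# Toy level model: Q_l(z) = z^2 + 4^(-l), i.e. E[Q] = 1 with bias ~ 4^(-l).
def toy_q(level, z):
    return z * z + 4.0 ** (-level)

est = mlmc_estimate(toy_q, [4000, 200, 50])  # many coarse, few fine samples
```

Because the corrections are cheap to estimate accurately, almost all samples live on the coarse level, exactly as in the complexity analysis that follows.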
Although this is not necessary, we will assume that the $L + 1$ standard MC estimators in (19) are independent, i.e. that we have used independent samples across all the levels. Due to this independence, the mean square error of $\widehat{Q}_M^{ML}$ simply expands to

$$e\big(\widehat{Q}_M^{ML}\big)^2 = \mathbb{E}[Q_M - Q]^2 + \sum_{\ell=0}^{L} N_\ell^{-1} V_\ell, \qquad (21)$$

where $V_0 = \mathbb{V}[Q_{M_0}]$ and $V_\ell = \mathbb{V}[Y_\ell]$, for $\ell \geq 1$. This leads to a hugely reduced variance of the estimator, since both FE solutions $Q_{M_\ell}$ and $Q_{M_{\ell-1}}$ converge to $Q$ and thus

$$V_\ell = \mathbb{V}[Q_{M_\ell} - Q_{M_{\ell-1}}] \to 0 \quad \text{as} \quad M_{\ell-1} \to \infty.$$

Let us assume that $V_\ell \leq C_5 M_\ell^{-\beta}$. Typically $\beta \approx 2\alpha$.

As for the standard MC estimator, we can ensure that the bias error is less than $\tau_b$ by choosing $M = M_L \geq (\tau_b / C_1)^{-1/\alpha}$. To choose the numbers of samples $N_\ell$ on each of the levels, and thus to ensure that the sampling error is less than $\tau_s$, we still have some freedom, and we will use this to minimise the cost. It is a simple discrete, constrained optimisation problem [12] to minimise $\text{Cost}\big(\widehat{Q}_M^{ML}\big)$ in (20) with respect to $N_0, \ldots, N_L$, subject to the constraint $\sum_{\ell=0}^{L} N_\ell^{-1} V_\ell = \tau_s^2$. This leads to [12]

$$N_\ell = \tau_s^{-2} \left( \sum_{\ell'=0}^{L} \sqrt{V_{\ell'} C_{\ell'}} \right) \sqrt{\frac{V_\ell}{C_\ell}} \qquad (22)$$

and

$$\text{Cost}\big(\widehat{Q}_M^{ML}\big) = \tau^{-2} \left( \sum_{\ell=0}^{L} \sqrt{V_\ell C_\ell} \right)^2 \leq C_4 \, \tau^{-2 - \max\left(0, \frac{\gamma - \beta}{\alpha}\right)}, \qquad (23)$$

where $\alpha$, $\beta$ and $\gamma$ are as defined above and $\tau$ is again the tolerance for the total RMSE.

There are three regimes: if the variance $V_\ell$ decays faster than the cost $C_\ell$ grows (with respect to $\ell$), then the majority of the work is on level 0 and the total cost is proportional to $\tau^{-2}$; if $V_\ell$ decays slower than $C_\ell$ grows, then the majority of the work is on level $L$ and the total cost is proportional to $\tau^{-2 - \frac{\gamma - \beta}{\alpha}}$; if $V_\ell C_\ell$ is constant, then the work is spread evenly over all levels and the constant $C_4$ grows with the number of levels, which is proportional to $\log(\tau)^2$. In any case there is a huge gain over standard MC. Taking again as an example the test problem in Section IV below, where $\alpha = 1$, $\beta = 2$ and $\gamma \approx 1.17$, the cost to halve the error in the estimate with MLMC grows only by a factor $2^2 = 4$ (instead of 9 in the case of standard MC).
C. Implementation
We finish this section with a short discussion of how the MLMC algorithm is implemented in practice, and how the (optimal) values of $L$, $M_\ell$ and $\{N_\ell\}_{\ell=0}^{L}$ can be computed "on the fly" from the sample averages and the sample variances of $Y_\ell$. For ease of presentation, let us define $Y_0 = Q_{M_0}$. We will also restrict ourselves to the case of uniform mesh refinement where the mesh size is simply halved each time, i.e. $h_\ell = 2^{-\ell} h_0$. In this case we have $M_\ell \approx 4^\ell M_0$ in two dimensions.

We describe now a simple adaptive algorithm [6, 12] that uses the computed samples to estimate the bias and the sampling error, and thus to choose the optimal values for $L$ (and thus $M = M_L$) and $N_\ell$ (see Algorithm 1 below). More sophisticated algorithms also exist [8].

Algorithm 1. Multilevel Monte Carlo simulation

1. Set $\tau$, $\theta$, $N_{init}$ and $L = 1$.
2. For all levels $\ell = 0, \ldots, L$ do
   a. If $N_\ell$ is undefined, set $N_\ell = N_{init}$.
   b. Compute $N_\ell$ samples of $Y_\ell$.
   c. Compute $\widehat{Y}_{\ell, N_\ell}^{MC}$ and $s_\ell^2$, and estimate $C_\ell$.
3. Update the estimates for $N_\ell$ using (25) and, if $\big|\widehat{Y}_{L, N_L}^{MC}\big| > (4^\alpha - 1)\tau_b$, increase $L \to L + 1$.
4. If there is no change, go to 5; else return to 2.
5. Set $M = M_L$ and $\widehat{Q}_M^{ML} = \sum_{\ell=0}^{L} \widehat{Y}_{\ell, N_\ell}^{MC}$.
To estimate the bias error, let us assume that $M_\ell$ is sufficiently large, so that we are in the asymptotic regime, i.e. $|\mathbb{E}[Q_{M_\ell} - Q]| \leq C_1 M_\ell^{-\alpha}$ in (13). Since $|\mathbb{E}[Y_\ell]| = |\mathbb{E}[Q_{M_\ell} - Q] - \mathbb{E}[Q_{M_{\ell-1}} - Q]|$ and $M_\ell \approx 4^\ell M_0$, it is an easy exercise [10] to show that the bias error on level $\ell$ can be overestimated by

$$|\mathbb{E}[Q_{M_\ell} - Q]| \lesssim \frac{1}{4^\alpha - 1} \left| \widehat{Y}_{\ell, N_\ell}^{MC} \right|. \qquad (24)$$

To estimate the sampling error and to compute the optimal values for $N_\ell$, let us define the sample variance estimator

$$s_\ell^2 = \frac{1}{N_\ell} \sum_{j=1}^{N_\ell} \left( Y_\ell^{(j)} - \widehat{Y}_{\ell, N_\ell}^{MC} \right)^2 \approx V_\ell.$$

Then we can estimate and update the optimal values using

$$N_\ell \approx \tau_s^{-2} \left( \sum_{\ell'=0}^{L} \sqrt{s_{\ell'}^2 C_{\ell'}} \right) \sqrt{\frac{s_\ell^2}{C_\ell}}, \qquad (25)$$

where the computational times $C_\ell$ are estimated from the runs up to date.
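Algorithm 1 condenses into a short driver. The sketch below operates under simplifying assumptions (our toy level model rather than the paper's Matlab FE solver; costs supplied by a callback; variances estimated on the fly), growing $L$ and the $N_\ell$ until both error targets in (16) are met:

```python
import numpy as np

def adaptive_mlmc(sample_y, cost_of_level, tol, theta=0.5, n_init=100, alpha=1.0):
    """Sketch of Algorithm 1. sample_y(l, rng) returns one sample of Y_l
    (with Y_0 = Q_{M_0}); cost_of_level(l) estimates C_l."""
    rng = np.random.default_rng(0)
    tau_b = np.sqrt(1.0 - theta) * tol  # bias tolerance, eq. (16)
    tau_s2 = theta * tol ** 2           # (squared) sampling tolerance
    samples = [[sample_y(l, rng) for _ in range(n_init)] for l in (0, 1)]
    while True:
        varis = np.array([np.var(s) for s in samples])
        costs = np.array([cost_of_level(l) for l in range(len(samples))])
        # Optimal sample sizes from the on-the-fly estimates, eq. (25)
        lam = np.sum(np.sqrt(varis * costs))
        n_opt = np.ceil(lam * np.sqrt(varis / costs) / tau_s2).astype(int)
        changed = False
        for l, n in enumerate(n_opt):
            while len(samples[l]) < n:
                samples[l].append(sample_y(l, rng))
                changed = True
        # Bias check, eq. (24): add a level if the last correction is too big
        if abs(np.mean(samples[-1])) > (4.0 ** alpha - 1.0) * tau_b:
            samples.append([sample_y(len(samples), rng) for _ in range(n_init)])
            changed = True
        if not changed:
            return sum(np.mean(s) for s in samples), len(samples) - 1

# Toy coupled level model: Q_l(z) = z^2 + 4^(-l), so E[Q] = 1 and each
# correction Y_l = Q_l - Q_{l-1} uses the SAME draw z on both levels.
def toy_y(l, rng):
    z = rng.standard_normal()
    return z * z + 1.0 if l == 0 else 4.0 ** (-l) - 4.0 ** (-(l - 1))

estimate, L = adaptive_mlmc(toy_y, lambda l: 4.0 ** l, tol=0.1)
```

On this toy problem the bias check stops the hierarchy after a couple of levels, with almost all samples on level 0, mirroring the behaviour reported for the wing skin panel in Section IV.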
IV. Results
In this section, we first describe the pristine laminate case and analyse the error convergence and computational cost, before demonstrating the performance of the MLMC method in computing the expected value of the failure load, $Q = \min(P_B, P_I)$, compared to standard MC simulations.
A. Pristine Laminate: Model Setup, Error Convergence and Computational Cost
Before introducing random defects to the laminate, we consider a pristine plate of length $L_x = 636$mm and width $L_y = 212$mm as a benchmark for all results which follow. We consider a laminate made up of 8 composite plies in a fully uncoupled (Winckler) stacking sequence $[45, -45, -45, 45, -45, 45, 45, -45]$,
where each ply is 0.8mm thick, with elastic ply properties taken from the IM7-8552 data, so that: E11 =
130GPa, E22 = 9.25GPa, G12 = 5.13GPa and ν12 = 0.36, whilst through thickness shear is G= 5.13GPa.
The layup and dimensions are chosen so that: (i) the baseline panel withstands a typical wing skin load of
at least 1kN/mm and (ii) in-plane and buckling failure occur at approximately the same load, as for any
near optimal panel design.
We now consider the convergence rates for the critical buckling load ($P_B$), as well as the associated computational costs, under uniform mesh refinement. The convergence of the in-plane calculation is not considered here, since for the defect-free case the plate is homogeneous and therefore, for these boundary conditions, the in-plane equilibrium states can be captured exactly with a single finite element. However, once localised defects are introduced, convergence of the in-plane solution is vital and will depend on the size of the heterogeneity introduced; we return to this below. For the pristine case the in-plane failure load is $P_I = 277$kN.
Figure 3 shows the convergence of the relative error of $P_B$ under uniform mesh refinement. We see that for the pristine case, the buckling load converges to a value of 278.59kN, at a rate $\alpha \approx 1$ with respect to the number of degrees of freedom $M$, i.e.

$$|1 - P^{(h)}_B / P_B| \;\le\; C M^{-1},$$

for some constant $C$, independent of $M$. This agrees with the theoretically predicted convergence rate for buckling modes for this element. Finally, we approximate the value $\gamma$, the rate at which the cost (of the buckling problem, in CPU-time) scales with $M$, as shown in Fig. 3 (right). The gradient of the line shows that $\mathcal{C}(Q_M) \approx C M^{1.17}$, i.e. a value of $\gamma \approx 1.17$. The CPU-time is made up of the matrix assembly for (5) and (9), a single solve of (5) using backslash, and the calculation of the smallest eigenvalue of (9) using eigs. For the size of problems considered here ($\ell \le 8$), the CPU-time is dominated by the matrix assembly, which scales linearly with $M$. For larger problem sizes ($M \gtrsim 10^6$), the two solves will dominate the CPU-time and $\gamma$ will increase, as the limit of sparse direct solvers for 2D problems is reached.
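The rates $\alpha$, $\beta$ and $\gamma$ quoted throughout are read off as slopes of log-log plots; a minimal sketch of such a fit (ordinary least squares on logged data, not necessarily the authors' exact procedure) is:

```python
import math

def fit_rate(M, y):
    """Least-squares slope of log(y) against log(M), i.e. y ~ C * M^slope.

    Used to read off rates such as alpha (error decay, slope ~ -1) or
    gamma (cost growth, slope ~ +1.17) from mesh-refinement data.
    """
    xs = [math.log(m) for m in M]
    ys = [math.log(v) for v in y]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (v - ybar) for x, v in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
```

Applied to the measured CPU-times against $M$, a slope of about 1.17 recovers the value of $\gamma$ reported above.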
Figure 3: (Left) Plot of the critical buckling mode of the pristine panel, corresponding to the critical buckling load of 278.59kN. (Middle) Log-log plot of the relative FE error in the buckling load, $|1 - P^{(h)}/P|$, against $M$, showing that the error converges with order $\alpha \approx 1$. (Right) Log-log plot of cost (CPU-time) against $M$, showing $\mathcal{C}(Q_M) \propto M^{1.17}$ (i.e. $\gamma \approx 1.17$).

Figure 4: (Left) CDF of the failure load, $Q = \min(P_B, P_I)$, for laminates with random ply angle perturbations, generated from 10000 samples. (Right) Joint PDF of the critical buckling load $P_B$ and the in-plane failure load $P_I$.
B. MLMC Methodology: Numerical Experiments
The cost of the algorithms is quantified by the CPU-time required for the estimators to satisfy a specified error tolerance $\tau$. The MLMC simulation is implemented over a hierarchy of meshes, created via repeated uniform refinement of the initial coarse mesh. The coarsest mesh ($\ell = 0$) is obtained from four uniform refinements of a single (rectangular) element covering the whole domain, and hence has $h_x = 39.75$mm, $h_y = 13.25$mm, $M_I = 578$ degrees of freedom for the in-plane problem, and $M_B = 867$ for the buckling problem.
1. Test Case 1: Random Ply Angle Perturbations
The first example we consider is that of random ply angle perturbations, as described in Section II.B. In particular, the perturbations $\phi_i$, $i = 1, \ldots, 8$, are normally distributed such that $\phi_i \sim N(0, 3^2)$ (in degrees). This standard deviation has been chosen to conform with typical machine accuracy in the industry: machines typically have an allowable error tolerance of $\pm 5^\circ$. Hence, in order to obtain sample perturbations $\phi_i$ satisfying this tolerance with 95% confidence, we require a standard deviation of $s = 5/1.65 \approx 3.0^\circ$ (where $1.65 = z_{.05}$ is the critical $z$-value for the one-sided 95% confidence interval of the normal distribution).
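As an illustration of this defect model, one realisation of the perturbed layup could be drawn as follows. The $\pm 45^\circ$ sign pattern in `BASE_LAYUP` is an assumed Winckler-type ordering, and the helper name is ours:

```python
import random

# assumed Winckler-type +/-45 ordering; the exact sign pattern is illustrative
BASE_LAYUP = (45, -45, -45, 45, 45, -45, -45, 45)

def sample_ply_angles(base=BASE_LAYUP, tol=5.0, z=1.65):
    """One laminate realisation with ply-angle perturbations phi_i ~ N(0, s^2).

    s = tol / z (~3.0 degrees), so that |phi_i| <= tol holds with ~95%
    confidence, matching the +/-5 degree machine tolerance quoted above.
    """
    s = tol / z
    return [angle + random.gauss(0.0, s) for angle in base]
```

Each call yields one of the 10000 laminate samples whose failure loads are summarised in Figure 4.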
The left-hand plot in Figure 4 shows the cumulative distribution function (CDF) for the quantity of interest $Q = \min(P_B, P_I)$, i.e. the probability that $Q$ is below $Q^*$ as a function of $Q^*$, generated from 10000 samples. The right-hand plot shows the corresponding bivariate probability density function (PDF) for the critical buckling load $P_B$ and the in-plane failure load $P_I$; that is, the value taken at position $(P^*_B, P^*_I)$ is the relative likelihood that a laminate in this regime will have critical buckling load $P^*_B$ and critical in-plane failure load $P^*_I$. From this we see that the failure of the laminate under ply angle perturbation is largely dominated by buckling failure, with laminate samples typically failing in buckling at approximately 277kN. This is expected, as the plate remains homogeneous and hence no strain concentration is induced in the in-plane solution.

Figure 5: (Top-left) Mean of $Q_\ell$ and $Y_\ell$ for ply angle perturbation. (Top-right) Variances of $Q_\ell$ and $Y_\ell$. (Bottom-left) Number of samples required on each level for various error tolerances $\tau$. (Bottom-right) Comparison of cost (CPU-time) for MLMC and standard MC, for various relative error tolerances $\tau_{rel}$.
We carried out a MLMC simulation for an error tolerance of $\tau = 0.1$kN, which corresponds to a relative error of approximately $\tau_{rel} = 4 \times 10^{-4}$. We split the error equally between bias and sampling error, i.e. we chose $\theta = 1/2$ in (16) above. In order to estimate the values of the parameters $\alpha$ and $\beta$, as defined in Section III.B, we can consult Figure 5, which shows the log-log plots of the mean and variance of $Q_\ell$ and $Y_\ell = Q_\ell - Q_{\ell-1}$, as the number of degrees of freedom, $M_\ell$, is increased. Looking first at the behaviour of the expectation of $Q_\ell$ and $Y_\ell$ (top left), we see that

$$\mathbb{E}[Y_\ell] \approx C M_\ell^{-1.0}$$

approximately, and hence $\alpha \approx 1.0$. Next, considering the variance plot (top right), we see that approximately

$$\mathbb{V}[Y_\ell] \approx C M_\ell^{-2.18};$$

that is, $\beta \approx 2.18$. We note that this agrees with the results obtained in the case of the pristine laminate, since failure occurs almost entirely due to buckling and the material (with angle defects) remains homogeneous throughout the plate.
The bottom two plots in Figure 5 are concerned with the implementation of the MLMC simulation
and its cost in comparison to standard MC simulation. The bottom left plot shows the optimal numbers of
samples N`required on each refinement level, when implementing MLMC for several different error tolerances
(computed as per the formula in Eqn. (25)). The bottom right plot compares the computational cost of the
MLMC simulation versus standard MC, again for various relative error tolerances. From this it is apparent
that the use of MLMC results in large savings in computational cost. We also observe that the cost of the MLMC simulation grows in proportion to $\tau^{-2}$, as predicted by our theory in Section III.B, and that it increases significantly more slowly than the cost of standard MC simulation as the error tolerance decreases.
In Tables 1 and 2, we give a more detailed analysis of the MLMC simulation. First, in Table 1, we list the numbers $N_\ell$ of samples for error tolerance $\tau = 0.1$kN, as well as the cost per sample $C_\ell$ on each level. From
Level | Number of mesh refinements | Number of samples $N_\ell$ | Cost per sample $C_\ell$ (in seconds)
0 | 4 | 7378 | 0.49
1 | 5 | 1008 | 2.39
2 | 6 | 118 | 9.55
3 | 7 | 15 | 38.62
4 | 8 | 2 | 158.18

Table 1: Number of samples required by MLMC on each refinement level for error tolerance $\tau = 0.1$kN, and corresponding cost per sample.
Rel. error | Level | MLMC cost | MC cost | Number of samples for MC | Speed-up factor
$4 \times 10^{-4}$ | 4 | 2.28 hrs | 46.6 hrs | 1318 | 20.4
$1.5 \times 10^{-3}$ | 3 | 8.1 min | 63 min | 112 | 7.8
$5.8 \times 10^{-3}$ | 2 | 33 sec | 61 sec | 8 | 1.9

Table 2: Cost (CPU-time) for MLMC and MC simulation for laminates with random ply angle perturbations, for several error tolerances. The required discretisation level and the number of samples needed for standard MC simulation are also given.
this we can deduce that the total cost for the MLMC simulation is approximately 2.28 hours. To estimate the cost of the standard MC simulation, note first that $\mathbb{V}[Q_4] = 6.59\,$kN². Hence the number of samples that would be required to achieve a sampling error of $\tau = 0.1$kN (with $\theta = 1/2$) is

$$N_{MC} = \frac{6.59}{0.5\,\tau^2} = 1318 \text{ samples}.$$

The cost (CPU-time) of a single sample of $Q_{M_\ell}$ (as opposed to $Y_\ell$, which is more expensive) at level $\ell = 4$ is 127.2 seconds, and so the total cost for standard MC is approximately 46.6 hours. MLMC simulation is thus about 20 times cheaper for this example and tolerance. Table 2 shows the comparative costs for MLMC and standard MC simulation for further error tolerances (computed in the same manner), along with the obtained speed-up factors.
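The standard-MC cost estimate above can be reproduced directly (the factor 0.5 is $\theta = 1/2$ from the error split; function name ours):

```python
import math

def mc_samples_needed(variance, tau, theta=0.5):
    """Standard-MC sample count: sampling variance V/N must meet theta*tau^2."""
    return math.ceil(variance / (theta * tau ** 2))

n = mc_samples_needed(6.59, 0.1)   # V[Q_4] = 6.59, tau = 0.1 kN
mc_hours = n * 127.2 / 3600.0      # 127.2 s per finest-level sample of Q
speed_up = mc_hours / 2.28         # MLMC took ~2.28 hrs
```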
However, as mentioned above, the method is not limited to calculating means of quantities of interest. We can also calculate failure probabilities. For example, we can compute the probability that the failure load $Q = \min(P_B, P_I)$ is less than $Q^* = 271.75$kN with MLMC, using a finest level $L$ with 7 uniform refinements, with $N_0 = 20000$, $N_1 = 5000$, $N_2 = 2500$ and $N_3 = 750$. We obtain an estimate of $P(Q < Q^*) = 0.0899$ with a RMSE of 0.0041, establishing with 95% confidence that the probability of a defective plate failing below a load of $Q^*$ is less than $0.0966 < 0.1$ (B-basis). The cost of this simulation is about 20 hrs. To compute the same estimate with standard MC would require 5700 samples at the finest level, at a cost of about 50 hrs.
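Such a failure-probability estimate uses the same telescoping sum, but with the indicator $\mathbb{1}\{Q < Q^*\}$ as the level-$\ell$ quantity. A minimal sketch (interface assumed: each sample provides coupled coarse and fine evaluations of the same random laminate) is:

```python
def mlmc_failure_prob(samples_per_level, Q_star):
    """MLMC estimate of P(Q < Q*) via telescoped indicator functionals.

    samples_per_level[l] is a list of pairs (Q_fine, Q_coarse): the failure
    load of the *same* random laminate evaluated on level l and on level l-1
    (Q_coarse is None on level 0). The level-l quantity is the difference
    of the indicators 1{Q < Q*} on the two grids.
    """
    est = 0.0
    for pairs in samples_per_level:
        diffs = [float(qf < Q_star)
                 - (float(qc < Q_star) if qc is not None else 0.0)
                 for qf, qc in pairs]
        est += sum(diffs) / len(diffs)
    return est
```

Because the indicator is discontinuous, the level differences decay more slowly than for smooth functionals, which is why the speed-up here (50 hrs versus 20 hrs) is more modest than for the mean.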
2. Test Case 2: Prismatic Fibre Waviness
Again the laminate under consideration is as described in the benchmark problem in Section II.A, with an
assumed pristine stacking sequence (i.e. no angle perturbation is included). We assume that the waves,
e.g. due to a defective tow path, have normally distributed x-position, and whether waviness occurs in a
given ply is determined by a Bernoulli (“on/off”) random variable pi. Fibre waves for individual plies are
constructed using the wave functions defined in (10), reproduced here for convenience:
fWav,i(x) = δisech23
ξi
(xx
i), i = 1,...,8.
The random parameters, xi,δi, and ξidefine the location, amplitude and width of defect i, respectively.
The chosen distributions in our numerical experiments are:
Figure 6: (Left) Example of prismatic waves in plies 3 and 7 of the laminate, and resulting critical buckling
mode. (Centre and right) In-plane strain concentrations in plies 3 and 7, respectively.
1. $p_i \sim B(1, 0.3)$,
2. $x^*_i \sim N_{Trunc}(0.3 L_x,\, 0.01 L_x^2)$,
3. $\delta_i \sim U(0.1 L_y,\, 0.3 L_y)$,
4. $\xi_i \sim U(0.1 L_x,\, 0.5 L_x)$,

where $B(1, p)$ is the Bernoulli distribution with success rate $p$ and $U(a, b)$ is the uniform distribution on the interval $[a, b]$. To avoid unphysical values we use $N_{Trunc}(\mu, s^2)$, the truncated normal distribution¹⁶ with mean $\mu$ and standard deviation $s$.
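A sampler for this defect model might look as follows. The truncation of the normal distribution to the panel $[0, L_x]$ is our assumption, since the truncation bounds are not stated above:

```python
import math
import random

Lx, Ly = 636.0, 212.0  # panel dimensions in mm (Section IV.A)

def sech(z):
    return 1.0 / math.cosh(z)

def sample_prismatic_waves():
    """Draw wave parameters (x*, delta, xi) for each of the 8 plies,
    following the distributions listed above; None means no wave in a ply."""
    defects = []
    for _ in range(8):
        if random.random() >= 0.3:          # p_i ~ B(1, 0.3): wave "off"
            defects.append(None)
            continue
        while True:                         # N_Trunc(0.3 Lx, 0.01 Lx^2) by
            x_star = random.gauss(0.3 * Lx, 0.1 * Lx)   # rejection; the
            if 0.0 <= x_star <= Lx:         # bounds [0, Lx] are assumed
                break
        delta = random.uniform(0.1 * Ly, 0.3 * Ly)      # amplitude
        xi = random.uniform(0.1 * Lx, 0.5 * Lx)         # width
        defects.append((x_star, delta, xi))
    return defects

def wave(x, defect):
    """Evaluate f_Wav,i(x) = delta * sech^2(3 (x - x*) / xi) for one ply."""
    if defect is None:
        return 0.0
    x_star, delta, xi = defect
    return delta * sech(3.0 * (x - x_star) / xi) ** 2
```

Each realisation of `sample_prismatic_waves` defines the fibre paths fed into the in-plane and buckling solves for one MLMC sample.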
Figure 6 shows typical waviness defects (here affecting two plies of the laminate) and the resulting critical buckling mode, computed on a mesh created with six uniform refinements of the initial geometry. Also included are visualisations of the in-plane strain components across the defective plies ($\varepsilon_{11}$, $\varepsilon_{22}$, and $\varepsilon_{12}$, respectively). In this instance, the defect parameters are $x^*_3 = 199.53$, $\delta_3 = 92.27$, $\xi_3 = 245.02$, and $x^*_7 = 118.08$, $\delta_7 = 30.86$, $\xi_7 = 132.45$. The resulting failure loads are $P_B = 277.94$kN and $P_I = 149.78$kN, with in-plane failure occurring first in ply 3.
Figure 7 shows again the CDF of the quantity of interest, $Q = \min(P_B, P_I)$, generated from 10000 samples, as well as the corresponding bivariate PDF for the critical buckling load $P_B$ and the in-plane failure load $P_I$. We observe that the inclusion of fibre waviness leads to the failure of the laminate being largely dominated by in-plane failure, typically occurring at around 120kN. This is probably not typical of real panels, and is due to the waviness in our experiment being slightly exaggerated.
As a consequence, the variance of the quantity of interest is also larger in this experiment, and we carry out our MLMC simulation for larger error tolerances; in particular, we choose the smallest error tolerance to be $\tau = 1$kN, which corresponds to a relative error of $\tau_{rel} = 8 \times 10^{-3}$. We now seek to estimate again the values of the parameters $\alpha$ and $\beta$ from Section III.B. Figure 8 shows the behaviour of the mean and variance of $Q_\ell$ and $Y_\ell = Q_\ell - Q_{\ell-1}$ as $M$ increases. From the gradient of the mean plot for $Y_\ell$, we estimate that

$$\mathbb{E}[Y_\ell] \approx C M_\ell^{-0.54},$$

for some constant $C$, and hence $\alpha \approx 0.54$. Next, looking at the behaviour of the variance of $Y_\ell$, we estimate

$$\mathbb{V}[Y_\ell] \approx C M_\ell^{-1.37},$$

and hence $\beta \approx 1.37$. The lower rates of convergence here are due to the failure occurring predominantly in-plane; the theoretical convergence rate for the maximum in-plane load is half that of the buckling mode.
Again, the bottom-left plot of Figure 8 shows the number of samples, $N_\ell$, required on each refinement level when implementing MLMC for various relative error tolerances. The bottom-right plot shows a comparison of the computational cost of the two methods, again for various error tolerances. The cost of MLMC again grows
Figure 7: (Left) CDF of the failure load, $Q = \min(P_B, P_I)$, for laminates featuring prismatic wave defects, generated from 10000 samples. (Right) PDF of the critical buckling load, $P_B$, and the in-plane failure load, $P_I$.
Rel. error | Level | MLMC cost | MC cost | Number of samples for MC | Speed-up factor
0.008 | 5 | 3.16 hrs | 20.0 days | 3124 | 151.8
0.02 | 4 | 21.4 min | 14.95 hrs | 411 | 41.9
0.06 | 3 | 2.64 min | 24.5 min | 46 | 9.3
0.16 | 2 | 21.6 sec | 62.6 sec | 8 | 2.9

Table 3: Cost (CPU-time) for MLMC and MC simulation for laminates with prismatic wave defects, for several error tolerances. The required discretisation level and the number of samples needed for standard MC simulation are also given.
in proportion to $\tau^{-2}$, but here the savings in computational cost over standard MC simulation are even larger, especially for lower accuracies (which are typically sufficient in engineering applications). Table 3 again gives the comparative costs of MLMC and standard MC simulation for a range of tolerances, along with the observed speed-up factors.
3. Test Case 3: Non-Prismatic Fibre Waviness
We finish by considering the case of (x, y)-dependent fibre waviness, again assuming an underlying pristine
stacking sequence. The assumptions are as in the case of prismatic wave defects. Recall the modified wave
functions presented in (11):
fWav,i(x, y ) = gi(y)δisech23
ξi
(xx
i), i = 1,...,8,
where the random parameters, pi,xi,δi, and ξiare chosen identical to the prsimatic case above. The
functions gi(y) are intended to smooth out the wave defects outside of some y-range, and are constructed
via
gi(y) = ˜
δisech23
˜
ξi
(yy
i), i = 1,...,8,
where the random parameters yi,δy
iand ξy
iare distributed according to:
1. y
iU(0, Ly)
2. δy
iU(0,1),
3. ξy
iU(0.1Ly,0.3Ly).
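Extending the prismatic wave function, the $(x, y)$-dependent amplitude of (11) can be evaluated pointwise as below (function name ours):

```python
import math

def sech(z):
    return 1.0 / math.cosh(z)

def wave_2d(x, y, x_star, delta, xi, y_star, delta_y, xi_y):
    """f_Wav,i(x, y) = g_i(y) * delta * sech^2(3 (x - x*) / xi), with the
    smoothing envelope g_i(y) = delta^y * sech^2(3 (y - y*) / xi^y)."""
    g = delta_y * sech(3.0 * (y - y_star) / xi_y) ** 2
    return g * delta * sech(3.0 * (x - x_star) / xi) ** 2
```

At $(x^*, y^*)$ the amplitude is $\delta^y \delta$, and it decays smoothly to zero away from the defect centre in both directions.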
Figure 8: (Top-left) Mean of $Q_\ell$ and $Y_\ell$ for prismatic wave defects. (Top-right) Variances of $Q_\ell$ and $Y_\ell$. (Bottom-left) Number of samples required on each level, for various error tolerances. (Bottom-right) Comparison of cost (CPU-time) for MLMC and standard MC, for various relative error tolerances $\tau_{rel}$.
Rel. error | Level | MLMC cost | MC cost | Number of samples for MC | Speed-up factor
0.0065 | 5 | 5.24 hrs | 28.9 days | 4380 | 132.4
0.018 | 4 | 40.1 min | 22.3 hrs | 580 | 33.4
0.05 | 3 | 5.11 min | 60.7 min | 110 | 11.9
0.14 | 2 | 40.3 sec | 115.9 sec | 14 | 2.9

Table 4: Cost (CPU-time) for MLMC and MC simulation for laminates with non-prismatic wave defects, for several error tolerances. The required discretisation level and the number of samples needed for standard MC simulation are also given.
Figure 9 shows typical tow paths afflicted with non-prismatic fibre waviness (here affecting two plies of the laminate), and the resulting critical buckling mode. Also plotted are visualisations of the in-plane strain components across the defective plies ($\varepsilon_{11}$, $\varepsilon_{22}$, and $\varepsilon_{12}$, respectively). For this case, the defect parameters are $x^*_4 = 132.31$, $\delta_4 = 28.93$, $\xi_4 = 140.35$, $y^*_4 = 33.46$, $\delta^y_4 = 0.62$, $\xi^y_4 = 49.27$ and $x^*_6 = 463.68$, $\delta_6 = 36.14$, $\xi_6 = 105.06$, $y^*_6 = 32.45$, $\delta^y_6 = 0.76$, $\xi^y_6 = 23.80$. The resulting failure loads are $P_B = 279.43$kN and $P_I = 156.67$kN, with in-plane failure occurring first in ply 6.
Figure 10 shows the cumulative distribution function for $Q$ and the corresponding bivariate probability density function for $P_B$ and $P_I$, generated from 10000 samples. From the PDF, we again observe that laminates typically fail in-plane at around 120kN.
As with prismatic wave defects, the large variance of the quantity of interest leads us to choose a minimum error tolerance of $\tau = 1$kN, this time corresponding to a relative error of approximately $6.5 \times 10^{-3}$. Once again, we estimate the values of the parameters $\alpha$ and $\beta$ from the gradients of the upper two plots in Figure 11. In particular, we have (from the top-left plot) that $\alpha \approx 0.91$, and (from the top-right plot) that $\beta \approx 1.90$. Finally, Table 4 shows the comparative costs for MLMC and standard MC simulation for a range of relative error tolerances, along with the obtained speed-up factors; again we observe large savings.
Figure 9: (Left) Example of non-prismatic waves in plies 4 and 6 of the laminate, and the resulting critical buckling mode. (Centre and right) In-plane strain concentrations in plies 4 and 6, respectively.

Figure 10: (Left) CDF of the failure load, $Q = \min(P_B, P_I)$, for laminates featuring non-prismatic wave defects, generated from 10000 samples. (Right) PDF of the critical buckling load, $P_B$, and the in-plane failure load, $P_I$.
V. Conclusion and Future Work
In this paper we have successfully demonstrated the applicability of MLMC simulation to a typical aerospace model problem, specifically the performance of a composite wing skin panel afflicted with random manufacturing defects under an applied axial loading. From the numerical results in Section IV, the advantages of MLMC simulation over standard MC simulation are apparent, with huge savings in computational cost observed for all defect models considered. We see also that MLMC simulation is not limited to easy problems; in fact, the gains are more pronounced in cases where the discretisation error is large (as with prismatic and non-prismatic fibre waviness). We have further demonstrated the versatility of MLMC simulation, showing that the method is not restricted to problems in which the quantity of interest is simply the mean of a smooth functional of the state vector $X_M$, but can also be applied to calculate failure probabilities, as in the case study of the B-basis failure criterion.
From an engineering perspective, we have presented a novel approach to modelling in-plane fibre wave defects and investigated it alongside ply angle perturbation. The chosen quantity of interest, $Q = \min(P_B, P_I)$, in Test Cases 1-3 is particularly interesting in this setting, as the joint distribution of $P_B$ and $P_I$ is complex. In the case of random angle perturbations we always observe a knock-down in the critical buckling load, since the pristine stacking sequence is optimal for this problem; consequently, the distribution of $P_B$ here is one-sided. This is not the case for the critical in-plane failure load, which we observe to be two-sided. In
Figure 11: (Top-left) Mean of $Q_\ell$ and $Y_\ell$ for non-prismatic wave defects. (Top-right) Variance of $Q_\ell$ and $Y_\ell$. (Bottom-left) Number of samples required on each level, for various error tolerances. (Bottom-right) Comparison of cost (CPU-time) for MLMC and standard MC simulation, for various error tolerances.
contrast, the introduction of fibre wave defects allows for an increase in the critical buckling load beyond that of the pristine problem. Thus, for these cases, both $P_B$ and $P_I$ are two-sided. The numerical results imply that the nature of the defect determines whether buckling failure or in-plane failure is more critical. This is particularly apparent when looking at the joint probability density functions of $P_B$ and $P_I$ in Figures 4, 7 and 10. It is also shown that the knock-down in critical failure load incurred by ply angle perturbation is largely insignificant when compared to that caused by in-plane fibre wave defects.
In carrying out the calculations for this specific model, we have identified a number of key extensions to the methodology which would further improve its computational efficiency. Firstly, in aerospace applications the quantity of interest is often taken to be the probability that a design will fail below a safe load. Thankfully, this probability is very small, but obtaining robust estimates of such rare events is much more difficult since, by definition, a large number of samples is required to observe even a single failure. Future work will look to combine MLMC simulation with importance sampling and subset simulation.³,²¹ The idea here is that samples are not drawn at random, but are biased towards those defects which cause premature structural failure. Further gains can be achieved by using a finite element error estimator to decide whether finer-mesh calculations must be carried out.¹⁰ In other words, the actual failure load is bounded by an error estimator, and if the approximate failure load is sufficiently far from the safe load, no further fine-scale calculations are required, since error convergence guarantees that such samples will fail above the safe load irrespective of the mesh resolution.
Secondly, we considered only uniform refinement of the mesh at each level. In particular, for the localised defects considered here, substantial gains can be achieved if adaptive mesh refinement is used. In this case the mesh would be different for each sample, since it would depend on the specific defects.¹⁰ It is thus proposed that, instead of defining levels by mesh size, levels are defined by some error metric; this would allow the methodology to be applied in a similar way to that presented here.
The final important extension of the methodology is to incorporate real test data into the MLMC simulations. Such techniques have been developed in other fields¹⁴,¹⁷ using a Bayesian setting, and will be essential here to reach the goal of challenging conservative failure limits and reducing the required number of physical tests across the Test Pyramid, shown in Figure 1.
However, the real impact of the new methodology is expected in the context of out-of-plane defects, such as ply wrinkles,⁹ since it is not possible to use classical laminate theory and to homogenise through thickness in that case. Therefore, individual samples will have to be modelled in 3D, leading to even larger computational costs and an even bigger need for the novel multilevel methodology. See also the companion paper¹⁸ in which we have started work in this direction.
Acknowledgements
This work falls within the EPSRC ‘Multiscale Modelling of Aerospace Composites’ project (EP/K031368/1).
Butler has a Royal Academy of Engineering/GKN Aerospace Research Chair and Kynaston’s PhD is funded
by the Smith Institute for Industrial Mathematics and System Engineering, and GKN Aerospace.
References
1. N.M. Alexandrov, R.M. Lewis, C.R. Gumbert, L.L. Green, and P.A. Newman. Model management in aerodynamic optimization with variable-fidelity models. J. Aircraft, 38(6):1093–1101, 2001.
2. D. Allaire and K. Willcox. Surrogate modeling for uncertainty assessment with application to aviation environmental system models. AIAA Journal, 48(8):1791–1803, 2010.
3. S.-K. Au and J.L. Beck. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics, 16:263–277, 2001.
4. A. Barth, Ch. Schwab, and N. Zollinger. Multi-level Monte Carlo finite element method for elliptic PDEs with stochastic coefficients. Numer. Math., 119:123–161, 2011.
5. A. Chaudhuri and R.T. Haftka. Separable Monte Carlo combined with importance sampling for variance reduction. International Journal of Reliability and Safety, 7(3):201–215, 2013.
6. K.A. Cliffe, M.B. Giles, R. Scheichl, and A.L. Teckentrup. Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients. Computing and Visualization in Science, 14(1):3–15, 2011.
7. G.A. Cohen and R.T. Haftka. Sensitivity of buckling loads of anisotropic shells of revolution to geometric imperfection and design changes. Computers and Structures, 31(6):985–995, 1989.
8. N. Collier, A.L. Haji-Ali, F. Nobile, E. von Schwerin, and R. Tempone. A continuation multilevel Monte Carlo algorithm. Preprint arXiv:1402.2463, 2014.
9. T.J. Dodwell, R. Butler, and G.W. Hunt. Out-of-plane ply wrinkling defects during consolidation over an external radius. Composites Science and Technology, 105:151–159, 2014.
10. D. Elfverson, F. Hellman, and A. Målqvist. A multilevel Monte Carlo method for computing failure probabilities. Preprint arXiv:1408.6856, 2014.
11. I. Elishakoff, S. van Manen, P.G. Vermeulen, and J. Arbocz. First-order second-moment analysis of the buckling of shells with random imperfections. AIAA Journal, 25(8):1113–1117, 1987.
12. M.B. Giles. Multilevel Monte Carlo path simulation. Operations Research, 56(3):607–617, 2008.
13. Z. Gurdal, R.T. Haftka, and P. Hajela. Design and Optimisation of Laminated Composite Materials. Wiley, 1999.
14. V.H. Hoang, Ch. Schwab, and A.M. Stuart. Complexity analysis of accelerated MCMC methods for Bayesian inversion. Inverse Problems, 29:085010, 2013.
15. T. Hughes. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis. Dover, Mineola, New York, 2000.
16. N.L. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions, Volume 1. Wiley, 1994.
17. C. Ketelsen, R. Scheichl, and A.L. Teckentrup. A hierarchical multilevel Markov chain Monte Carlo algorithm with applications to uncertainty quantification in subsurface flow. Preprint arXiv:1303.7343, 2013.
18. T. Kim, T. Fletcher, T. Dodwell, R. Butler, R. Scheichl, J. Ankersen, and R. Newley. The effect of free edges and manufacturing process on inter-laminar performance of curved laminates. AIAA Science and Technology Forum and Exposition, Kissimmee, FL, 5th–9th January 2015.
19. Y.-W. Li, I. Elishakoff, J.H. Starnes, and D. Bushnell. Effect of the thickness variation and initial imperfection on buckling of composite cylindrical shells: Asymptotic analysis and numerical results by BOSOR4 and PANDA2. Int. J. Solids Structures, 34(28):3755–3767, 1997.
20. W. Liu, R. Butler, A.R. Mileham, and A.J. Green. Bilevel optimization and postbuckling of highly strained composite stiffened panels. AIAA Journal, 44(11):2562–2570, 2006.
21. R.E. Melchers. Structural Reliability Analysis and Prediction. Wiley, 2nd edition, 1999.
22. S. Mishra, Ch. Schwab, and J. Sukys. Multi-level Monte Carlo finite volume methods for shallow water equations with uncertain topography in multi-dimensions. SIAM Journal on Scientific Computing, 34:761–784, 2012.
23. F. Müller, P. Jenny, and D.W. Meyer. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media. Journal of Computational Physics, 250:685–702, 2013.
24. A.T. Rhead, T.J. Dodwell, and R. Butler. The effect of tow gaps on compression after impact strength of robotically laminated structures. Computers, Materials and Continua, 35(1):1–16, 2013.
25. B.P. Smarslok, R.T. Haftka, L. Carraro, and D. Ginsbourger. Improving accuracy of failure probability estimates with separable Monte Carlo. International Journal of Reliability and Safety, 4:393–414, 2010.
26. US Department of Transportation. Composite aircraft structure. Advisory Circular 20-107B, 2010.
18 of 18
American Institute of Aeronautics and Astronautics
... It has been shown that MLMC is more cost effective than MC for many stochastic differential equations [8,9,6,21,22]. Due to the success of this method, we consider continuing along the path of variance -more precisely MSE -reduction by applying control variates to each E[Y ] estimate with the aim of reducing the number of samples N required by MLMC or the required work to achieve a desired MSE ε 2 . ...
... Following each case, we present several results when comparing MLCV and MLMC for both tests. In particular, we consider the convergence of ρ 2 in (22), and subsequently the MSE at each level, which controls the sampling error of MLCV. For values of ρ 2 close to one, a significant reduction in the MSE ofŴ can be observed, and thus a reduction in the numberÑ of required samples per level. ...
Preprint
Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each two consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily based on coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over the standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
... While manufacturing large, complex composite components, small process-induced defects can form [1], for example porosity [2], in-plane fibre waviness [3], out-of-plane wrinkles [4,5]. In practice, we observe a distribution of locations, sizes and shapes of these defects, and therefore the direct effect they have on part performance is uncertain. ...
... With more available data, which includes a broader class of wrinkle defects [4,5,7,17], this definition could be generalized. Yet, here the choice is sufficient to demonstrate the methodology, and draw some interesting preliminary engineering results. ...
Preprint
This paper presents a novel stochastic framework to quantify the knock down in strength from out-of-plane wrinkles at the coupon level. The key innovation is a Markov Chain Monte Carlo algorithm which rigorously derives the stochastic distribution of wrinkle defects directly informed from image data of defects. The approach significantly reduces uncertainty in the parameterization of stochastic numerical studies on the effects of defects. To demonstrate our methodology, we present an original stochastic study to determine the distribution of strength of corner bend samples with random out-plane wrinkle defects. The defects are parameterized by stochastic random fields defined using Karhunen-Lo\'{e}ve (KL) modes. The distribution of KL coefficients are inferred from misalignment data extracted from B-Scan data using a modified version of Multiple Field Image Analysis. The strength distribution is estimated, by embedding wrinkles into high fidelity FE simulations using the high performance toolbox 'dune-composites' from which we observe severe knockdowns of 74%74\% with a probability of 1/200. Supported by the literature our results highlight the strong correlation between maximum misalignment and knockdown in coupon strength. This observations allows us to define a surrogate model providing fast assessment of predicted strength informed from stochastic simulations utilizing both observed wrinkle data and high fidelity finite element models.
... The second ingredient is a stochastic algorithm that models uncertainty. Widely used algorithms include (a) stochastic spectral approaches [20,21,41], (b) perturbation-based techniques [22][23][24]49] and (c) Monte-Carlo simulation (MCS) as well as its variations [25][26][27][28][29][30][31][32][33]. ...
... During the online stage, M forward evaluations of the reduced-order system are conducted to obtain the quantities of interest and project them back to the initial full system. It is noteworthy that the developed scheme can also be combined with existing MCS acceleration techniques, such as parallel MCS [28], sensitivity-derivative MCS [29], and multilevel MCS [32]. ...
Article
This work models spatially uncorrelated (independent) load uncertainty and develops a reduced-order Monte Carlo stochastic isogeometric method to quantify the effect of the load uncertainty on the structural response of thin shells and solid structures. The approach is tested on two demonstrative applications of spatially uncorrelated load uncertainty: (1) the Scordelis–Lo roof shell structure, and (2) a 3D wind turbine blade. This work has three novelties. Firstly, the research models spatially uncorrelated (independent) load uncertainties (including both their magnitude and/or direction) using stochastic analysis. Secondly, the paper advances a reduced-order Monte Carlo stochastic isogeometric method to quantify the spatially uncorrelated load uncertainty. It inherits the merits of isogeometric analysis, which enables the precise representation of geometry and alleviates shell shear locking, thereby reducing the model's uncertainties. Moreover, the method retains the generality and accuracy of classical Monte Carlo simulation (MCS), with significant efficiency gains: the demonstrative results suggest a computational cost of only 3% of that of standard MCS. Furthermore, a significant observation is made from the conducted numerical tests: the standard deviation of the output (i.e., displacement) is strongly influenced when the load uncertainty is spatially uncorrelated. Namely, the standard deviation (SD) of the output is roughly 10 times smaller than the SD for correlated load uncertainties. Nonetheless, the expected values remain consistent between the two cases.
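The roughly tenfold reduction in output scatter for uncorrelated loads has a simple averaging interpretation, which the toy sketch below reproduces. The linear model (response equal to the mean of 100 nodal loads) and all parameter values are assumptions for illustration only, not the isogeometric model of the cited work.

```python
import random
import statistics

def response_sd(correlated, n_nodes=100, n_samples=2000, seed=1):
    """Toy linear model: displacement = mean of the nodal loads.
    Returns the sample SD of the response for correlated vs
    independent (spatially uncorrelated) load uncertainty."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        if correlated:
            p = rng.gauss(1.0, 0.1)          # one draw shared by all nodes
            loads = [p] * n_nodes
        else:
            loads = [rng.gauss(1.0, 0.1) for _ in range(n_nodes)]
        out.append(sum(loads) / n_nodes)
    return statistics.stdev(out)
```

For this toy model the uncorrelated case averages out node-to-node fluctuations, giving an output SD about `1/sqrt(n_nodes)` of the correlated case, i.e. about 10 times smaller for 100 nodes, consistent with the observation in the abstract.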
... For the former, the magnitude of the deflections and the developed strains constitute the quantities of interest, while for the latter the natural frequencies, mode shapes, structural damping coefficients and the generalized mass of the most important vibration modes are sought. The GVT method is widely established in the literature and has been used by S. Kilimtzidis et al. [14]. ...
Article
This article presents a methodology for the detailed sizing of a composite-materials aircraft wing subject to wind tunnel testing design requirements. Aiming at a comparison between the numerical and experimental models in terms of structural response, a ground testing campaign has also been conducted. The present wing, designed and manufactured within the scope of the GRETEL project, consists of several internal, external and interface components, since provision is being made for a wind tunnel test campaign. The Finite Element Method (FEM) modeling technique for all the relevant parts of the wing is initially provided, taking into account the boundary conditions as well as the externally applied aerodynamic loads. The sizing methodology and subsequent compliance of the relevant parts and their connectivity elements with respect to design requirements is also explored in detail. Certain manufacturing requirements and aspects are also presented and discussed. Following an introduction to the ground testing facilities and measuring equipment, the results of the static tests and the Ground Vibration Testing (GVT) are compared with the corresponding numerical values. Overall, the numerical and experimental results, in terms of displacements, natural frequencies and eigenmodes, are in close agreement.
... Using this technique, Siva et al. [2] studied uncertainty present in helicopter performance. Butler et al. [3] used multilevel Monte Carlo for uncertainty quantification in composite structures. In the Monte Carlo method, uncertain parameters are randomly sampled, and the solution is developed using each random sample [4]. ...
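The plain Monte Carlo procedure described here (randomly sample the uncertain parameters, solve the model for each sample, average) can be sketched in a few lines. The quadratic response model and the Gaussian defect-size distribution below are hypothetical stand-ins for the expensive structural solve.

```python
import random
import statistics

def monte_carlo_mean(model, sampler, n, seed=42):
    """Plain Monte Carlo: draw n parameter samples, evaluate the model on
    each, and return the sample mean plus its standard error (which decays
    like 1/sqrt(n), independently of the parameter dimension)."""
    rng = random.Random(seed)
    vals = [model(sampler(rng)) for _ in range(n)]
    mean = statistics.fmean(vals)
    std_err = statistics.stdev(vals) / n ** 0.5
    return mean, std_err

# Illustrative use: a hypothetical buckling-load-like response degraded by
# a random ply-angle perturbation (both model and distribution assumed).
mean, std_err = monte_carlo_mean(
    model=lambda theta: 1.0 - 0.5 * theta ** 2,
    sampler=lambda rng: rng.gauss(0.0, 0.05),
    n=10_000,
)
```

The slow 1/sqrt(n) decay of `std_err`, combined with an expensive `model`, is exactly the cost problem the multilevel variants cited in this section are designed to mitigate.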
... Thus, PINN can be efficiently used for UQ of a well-posed problem. Examples of UQ with MC are found in the work of Butler et al. [38]. MC-based UQ methods are among the simplest and most reliable. ...
Article
A model based on Physics-Informed Neural Networks (PINN) for solving elastic deformation of heterogeneous solids and the associated Uncertainty Quantification (UQ) is presented. For the present study, the PINN framework Modulus, developed by Nvidia, is utilized, wherein we implement a module for the mechanics of heterogeneous solids. We use PINN to approximate the momentum balance, assuming isotropic linear elastic constitutive behavior, via a loss function. Along with the governing equations, the associated initial/boundary conditions also participate softly in the loss function. Solids where the heterogeneity manifests as voids and fibers in a matrix are analyzed, and the results are validated against solutions obtained from a commercial Finite Element (FE) analysis package. The present study also reveals that PINN can capture the stress jumps precisely at the material interfaces. Additionally, the present study explores the advantages associated with the surrogate features of PINN via variation in geometry and material properties. The presented UQ studies suggest that the mean and standard deviation of the PINN solution are in good agreement with Monte Carlo FE results. The effective Young's modulus predicted by PINN for single representative void and single fiber composites compares very well against the one predicted by FE.
... The proposed method produced a more reasonable probabilistic estimation than Monte Carlo simulation, owing to the convergence enhancement provided by the preconditioned conjugate gradient technique. Dodwell et al. [40] directly adopted Monte Carlo simulation in a multilevel framework to characterize failure statistics of laminated composites. The proposed approach proved to be simpler, self-adaptive, and vastly more computationally efficient than classical Monte Carlo simulation. ...
Article
Deterministic and stochastic bending and buckling characteristics of antisymmetric cross-ply and angle-ply laminated composite plates are thoroughly examined. Partial differential equations for cross-ply and angle-ply laminates are derived using the three-variable refined shear deformation theory based on the Hamilton principle. Deterministic Navier solutions are obtained for specific boundary conditions and numerical results are validated against the first-order and third-order shear deformation theories. Two stochastic sampling methods, namely Monte Carlo simulation and Latin hypercube sampling, are presented and analyzed to determine the optimal one based on convergence studies and criteria of sampling errors. Comprehensive probability characteristics of stochastic bending deflections and stochastic critical buckling loads of antisymmetric cross-ply and angle-ply laminated composite plates are investigated using the optimal sampling technique. Probability distribution functions of the various stochastic cases provide good assessments of the effect of each inevitable source of uncertainty on the bending and buckling behaviors of the laminated composites. This study presents a good alternative to classical, expensive Monte Carlo simulations and provides a fundamental understanding of bending and buckling statistics of laminated composites.
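Latin hypercube sampling, one of the two sampling methods compared above, can be sketched compactly: each dimension's range is split into equally probable strata, each stratum is sampled exactly once, and the stratum order is shuffled independently per dimension. This is a generic stdlib sketch on the unit hypercube, not the cited implementation.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on [0,1]^d: for each dimension, split [0,1]
    into n_samples equal strata, place one point uniformly inside each
    stratum, and shuffle the stratum order independently per dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        # (stratum index + uniform offset) / n maps each point into its stratum
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [list(point) for point in zip(*columns)]

points = latin_hypercube(10, 2)
```

Compared with plain Monte Carlo, each one-dimensional marginal is stratified, which typically reduces the variance of the resulting statistics for the same sample count.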
... Unfortunately, and to the authors' concern, no experimental evidence of the structural response variability stemming from the defects herein considered has been found. Nevertheless, recent numerical works (see Dodwell et al. [54] and van der Broek et al. [55]) have addressed the influence of misalignments in the buckling and post-buckling regime, providing coefficients of variation (COV) that are in agreement with the ones obtained in this manuscript. ...
Article
It is well known that fabrication processes inevitably lead to defects in the manufactured components. However, thanks to the new capabilities of the manufacturing procedures that have emerged during the last decades, the number of imperfections has diminished while numerical models can describe the ground-truth designs. Even so, a variety of defects has not been studied yet, let alone the coupling among them. This paper aims to characterise the buckling response of Variable Stiffness Composite (VSC) plates subjected to spatially varying fibre volume content as well as fibre misalignments, yielding a multiscale sensitivity analysis. On the one hand, VSCs have been modelled by means of the Carrera Unified Formulation (CUF) and a layer-wise (LW) approach, with which independent stochastic fields can be assigned to each composite layer. On the other hand, microscale analysis has been performed by employing CUF-based Mechanics of Structure Genome (MSG), which was used to build surrogate models that relate the fibre volume fraction and the material elastic properties. Then, stochastic buckling analyses were carried out following a multiscale Monte Carlo analysis to characterise the buckling load distributions statistically. Ultimately, it was demonstrated that this multiscale sensitivity approach can be accelerated by an adequate usage of sampling techniques and surrogate models such as Polynomial Chaos Expansion (PCE). Finally, it has been shown that sensitivity is greatly affected by the nominal fibre orientation and the multiscale uncertainty features.
... The effect of uncertainties at the microscale on the macroscale mechanical property is evaluated through extensive Monte Carlo (MC) simulation [39,40], where the stochastic variables are void and fiber volume fractions. Details of modeling of these stochastic variables are described in Section 3.1, where the joint probability distribution function provides a probabilistic characterization of the microstructure of the specimen for each process condition. ...
Article
This study investigates the effects of process conditions on the inherent variabilities in fused filament fabrication (FFF) of short carbon-fiber-reinforced Nylon-6 composites, where the sources of uncertainty and their adverse effects on microstructures and Young’s modulus are quantified. Microstructural characteristics such as fiber volume fraction, void volume fraction, and their spatial distributions are first extracted via image-based data analytics, and then their uncertainties are quantified by the analysis of variance. A Monte Carlo sampling method is introduced to enrich the datasets for analyzing uncertainty propagation from micro-level (microstructures) to macro-level (mechanical property). A modified Halpin-Tsai model with the consideration of fiber and void distributions is developed to quantify the propagated uncertainties on Young’s modulus, which are further validated through quasi-static tensile tests. This study examined the process-structure-property relationship of FFF samples and quantified the underlying variations in both micro- and macro-levels.
Article
In this paper we address the problem of the prohibitively large computational cost of existing Markov chain Monte Carlo methods for large-scale applications with high dimensional parameter spaces, e.g. in uncertainty quantification in porous media flow. We propose a new multilevel Metropolis-Hastings algorithm, and give an abstract, problem dependent theorem on the cost of the new multilevel estimator based on a set of simple, verifiable assumptions. For a typical model problem in subsurface flow, we then provide a detailed analysis of these assumptions and show significant gains over the standard Metropolis-Hastings estimator. Numerical experiments confirm the analysis and demonstrate the effectiveness of the method with consistent reductions of more than an order of magnitude in the cost of the multilevel estimator over the standard Metropolis-Hastings algorithm for tolerances ε < 10⁻².
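The multilevel estimator underlying this and the other MLMC works cited here is the telescoping sum E[Q_L] = E[Q_0] + Σ E[Q_l − Q_{l−1}], with the fine/coarse pair on each level computed from the same random input. A minimal sketch follows; the toy model hierarchy at the end (Q_l = X + 2^−(l+1), X ~ N(1, 0.2)) is an assumption purely for illustration.

```python
import random
import statistics

def mlmc_estimate(sample_level, n_per_level, seed=0):
    """Multilevel Monte Carlo telescoping estimator.
    sample_level(level, rng) must return the coupled pair
    (Q_level, Q_{level-1}) computed from the SAME random input,
    with Q_{-1} defined as 0 on the coarsest level."""
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        rng = random.Random(seed + level)
        diffs = []
        for _ in range(n):
            fine, coarse = sample_level(level, rng)
            diffs.append(fine - coarse)
        estimate += statistics.fmean(diffs)
    return estimate

# Toy hierarchy (hypothetical): discretization bias 2^-(l+1) shrinks
# geometrically, so level corrections are small and need few samples.
def toy_level(level, rng):
    x = rng.gauss(1.0, 0.2)
    q = lambda l: x + 2.0 ** -(l + 1)
    return q(level), (q(level - 1) if level > 0 else 0.0)

est = mlmc_estimate(toy_level, [2000, 10, 10])  # many coarse, few fine
```

The cost gain comes from the decreasing sample counts: most samples are taken on the cheap coarse level, while the expensive fine levels only correct the small remaining bias.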
Article
We propose and analyze a method for computing failure probabilities of systems modeled as numerical deterministic models (e.g., PDEs) with uncertain input data. A failure occurs when a functional of the solution to the model is below (or above) some critical value. By combining recent results on quantile estimation and the multilevel Monte Carlo method we develop a method which reduces computational cost without loss of accuracy. We show how the computational cost of the method relates to error tolerance of the failure probability. For a wide and common class of problems, the computational cost is asymptotically proportional to solving a single accurate realization of the numerical model, i.e., independent of the number of samples. Significant reductions in computational cost are also observed in numerical experiments.
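The single-level building block of such failure-probability estimators is simply the fraction of samples whose quantity of interest falls below the critical value; the cited method accelerates this by replacing the single model with a hierarchy of approximations. The Gaussian strength model and threshold below are hypothetical.

```python
import random

def failure_probability(quantity, critical, n, seed=0):
    """Plain Monte Carlo failure-probability estimator:
    P_f ~= (number of samples with Q below the critical value) / n.
    This is the single-level building block only; the multilevel
    version couples estimates across a model hierarchy."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if quantity(rng) < critical)
    return failures / n

# Hypothetical strength model: failure when strength drops below 0.7
p_f = failure_probability(lambda rng: rng.gauss(1.0, 0.15), 0.7, 100_000)
```

Because the estimator variance is P_f(1 − P_f)/n, small failure probabilities need very many samples for an accurate estimate, which is precisely why the multilevel and importance-sampling variants discussed in this listing matter.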
Article
Monte Carlo (MC) methods are often used to carry out reliability-based design of structures. Methods that improve the accuracy of MC simulation include Separable Monte Carlo (Separable MC), Markov chain Monte Carlo and importance sampling. We explore the utility of combining Separable MC and importance sampling for improving accuracy. The accuracy of the estimates is compared for standard MC, Separable MC, importance sampling and the combined method for a composite plate example and a tuned mass damper example. For these examples, Separable MC and importance sampling individually reduced the error by factors of 2-5, and the combination reduced it further by about a factor of 2. The results were also compared with the First Order Reliability Method (FORM). FORM was grossly inaccurate for the tuned mass damper example, which has a failure region bounded by safe regions on either side.
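Importance sampling, one of the variance-reduction techniques compared above, draws from a proposal density centred near the failure region and reweights each sample by the likelihood ratio. A generic sketch for a standard-normal tail probability follows; the shift value and threshold are illustrative, not taken from the cited examples.

```python
import math
import random

def importance_sampling_pf(critical, n, shift, seed=0):
    """Importance sampling for the tail probability P(X < critical),
    X ~ N(0,1): sample from the shifted proposal N(shift, 1) and
    reweight by the likelihood ratio phi(x) / phi(x - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x < critical:
            # ratio of target density to proposal density at x
            weight = math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
            total += weight
    return total / n

# Centre the proposal at the failure boundary so roughly half the
# samples land in the failure region instead of ~0.1% of them.
p_f = importance_sampling_pf(critical=-3.0, n=20_000, shift=-3.0)
```

For a tail probability around 10⁻³, plain MC with 20,000 samples would see only ~27 failures; the shifted proposal makes nearly half the samples informative, cutting the estimator variance by orders of magnitude.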
Article
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending with the desired one. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding weak and strong errors. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical examples substantiate the above results and illustrate the corresponding computational savings.
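At the core of CMLMC (and of standard MLMC work allocation) is the choice of per-level sample sizes n_l ∝ sqrt(V_l / C_l), scaled so the total statistical variance meets half the squared tolerance budget (the other half being reserved for bias). A minimal sketch, assuming the level variances and costs have already been estimated:

```python
import math

def mlmc_sample_allocation(variances, costs, tol):
    """Optimal MLMC sample sizes: n_l ~ sqrt(V_l / C_l), scaled so that
    the statistical variance sum(V_l / n_l) equals tol^2 / 2, i.e. half
    the squared-tolerance budget (the other half is left for bias)."""
    target_var = tol ** 2 / 2.0
    scale = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [
        max(1, math.ceil(math.sqrt(v / c) * scale / target_var))
        for v, c in zip(variances, costs)
    ]

# Two-level example: the correction level has 4x the cost but 1/4 the
# variance, so it needs far fewer samples than the coarse level.
n_levels = mlmc_sample_allocation([1.0, 0.25], [1.0, 4.0], tol=0.1)
```

The continuation aspect of the cited algorithm repeats this allocation for a sequence of decreasing tolerances, recalibrating the variance and cost models (via Bayesian estimation) at each step.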
Article
Numerical simulation models to support decision-making and policy-making processes are often complex, involving many disciplines, many inputs, and long computation times. Inputs to such models are inherently uncertain, leading to uncertainty in model outputs. Characterizing, propagating, and analyzing this uncertainty is critical both to model development and to the effective application of model results in a decision-making setting; however, the many thousands of model evaluations required to sample the uncertainty space (e.g., via Monte Carlo sampling) present an intractable computational burden. This paper presents a novel surrogate modeling methodology designed specifically for propagating uncertainty from model inputs to model outputs and for performing a global sensitivity analysis, which characterizes the contributions of uncertainties in model inputs to output variance, while maintaining the quantitative rigor of the analysis by providing confidence intervals on surrogate predictions. The approach is developed for a general class of models and is demonstrated on an aircraft emissions prediction model that is being developed and applied to support aviation environmental policy-making. The results demonstrate how the confidence intervals on surrogate predictions can be used to balance the tradeoff between computation time and uncertainty in the estimation of the statistical outputs of interest.
Article
The initial data and bottom topography, used as inputs in shallow water models, are prone to uncertainty due to measurement errors. We model this uncertainty statistically in terms of random shallow water equations. We extend the multilevel Monte Carlo (MLMC) algorithm to numerically approximate the random shallow water equations efficiently. The MLMC algorithm is suitably modified to deal with uncertain (and possibly uncorrelated) data on each node of the underlying topography grid by the use of a hierarchical topography representation. Numerical experiments in one and two space dimensions are presented to demonstrate the efficiency of the MLMC algorithm.
Article
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley-Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.