Proceedings of the IASS Symposium 2018
Creativity in Structural Design
July 16-20, 2018, MIT, Boston, USA
Caitlin Mueller, Sigrid Adriaenssens (eds.)
Copyright © 2018 by Zack XUEREB CONTI, Sawako KAIJIMA
Published by the International Association for Shell and Spatial Structures (IASS) with permission.
A Flexible Simulation Metamodel for Exploring Multiple Design
Spaces
Zack XUEREB CONTI*, Sawako KAIJIMAa
*Singapore University of Technology and Design
8, Somapah Rd, 487372
xuereb_zack@mymail.sutd.edu.sg
a Harvard University Graduate School of Design
Abstract
We present an approach to build a flexible simulation metamodel that can input data from different
project sources, implying that the same model can be carried forward from one design space to the next.
In fields such as aerospace and automotive engineering, metamodels are used to substitute complex and
computationally demanding simulation with mathematical models that are simpler and much faster to
compute. Metamodels are also receiving attention from the architectural design community, because
they can be used to facilitate faster evaluation and/or optimisation time-cycles. However, we argue that
typical metamodels lack flexibility when exploring multiple design spaces, because a fresh metamodel
has to be recomputed each time new design variables are introduced. Thus, information learned about a
design space is not carried forward from one metamodel to the next.
In this paper we present an approach to build a generalised metamodel whose inputs are (i) independent
of the design variables, and (ii) critical to the calculation of the simulation response. Our approach
can be used to generalise any type of metamodel; however, in this research we build upon the Bayesian
Network metamodel (BNM), which was introduced in our previous work (Xuereb Conti and Kaijima [1]).
The BNM is a knowledge-oriented metamodel that allows bi-directional exploration of relationships
between design and engineering response variables. We demonstrate our approach by building a BNM
for the finite element analysis of a 2D truss using beam elements, and carry it forward through two
subsequent truss designs. Our research shows that the generalised BNM predicts confidently when
introduced to new design problems.
Keywords: metamodel, statistics, probabilistic, finite element analysis, Bayesian networks
1. Introduction
Statistical techniques have been used for decades, in fields such as mechanical engineering, to substitute
complex and computationally demanding simulation with a mathematical model, referred to as a
metamodel, which is simpler to interpret and far more computationally efficient to execute. Metamodels
are gaining increasing attention from the architectural design community because they can drastically
reduce evaluation and/or optimisation time-cycles during the early stages of design. In previous work
we proposed a knowledge-oriented metamodel in the form of a Bayesian Network that focuses on
understanding the influence between design variables and simulation response. More specifically, a
Bayesian Network metamodel (BNM) is a type of statistical model that does not distinguish between
independent (X) and dependent (Y) variables and thus enables bi-directional exploration between inputs
and outputs.
In this paper we challenge the flexibility of metamodels for architectural design. We highlight that
metamodels do not generalise for new design variables and thus, can hinder the exploratory nature of
the conceptual stages. In other words, each time new design variables are introduced to the design-
analysis system, a completely new metamodel is built from fresh simulation data. Thus, information
learned about a design space is not carried forward from one metamodel to the next.
In response, we propose an approach to build a flexible metamodel whose inputs can generalise for new
design problems. We argue that since the domain of any mathematical model is bound by the inputs and
output/s that characterise it, the ability for a metamodel to generalise for new problems is directly related
to how generalised the selected inputs are. If we were to look underneath the hood of any engineering
simulation tool, we would observe that inputs to the numerical analysis are not design problem
dependent, but are generalised such that any simulation analysis model is described by one set of
fundamental variables that are derived from domain-related theory and that are critical to calculating the
response. Furthermore, these variables are computationally inexpensive to extract from any simulation
analysis model. For example, in the finite element analysis (FEA) of different parametric truss models,
the moment of inertia, mass, centre of gravity, and axial and bending member stiffness are crucial for
calculating the stiffness matrix, while remaining independent of the parametric description of each truss. While it
is not our scope to delve into the mathematics underlying the numerical model, we hypothesise that if we can
identify the set of variables that are critical to the calculation itself, with the help of domain experts, and introduce
them as input variables into the metamodel, we can build a flexible metamodel that describes physical
behaviour independently of the variables that describe the design problem, and thus can be carried forward
from one design space to the next. Our approach can be adopted for any metamodel; however, in this
paper we will retain our focus on knowledge-based metamodels, introduced in previous work (Xuereb
Conti and Kaijima [1]).
The document will proceed as follows: in section 2 we briefly reintroduce the BNM, in section 3 we
explain and discuss our approach in more detail, using truss design and finite element analysis (FEA) as
an example, and in section 4 we utilise the generalised BNM to explore new variables to demonstrate
the flexibility of our metamodeling approach.
2. Metamodels
2.1. Typical metamodels
A metamodel can be described as a model of a model and is typically expressed as y = f(x) ≈ g(x),
where f is the numerical model underlying the simulation, and g is the compressed approximation of f.
Subsequently, g is used to predict y more efficiently. The most common metamodeling techniques for
approximating simulation models include polynomial regression (Kleijnen [2]), response surfaces
(Kleijnen and Sargent [3]), Kriging (Ankenman, et al. [4]), and Neural Networks (Fonseca, et al. [5]),
among others. The benefits of metamodels for faster design-analysis-optimisation cycles have attracted
the attention of the architectural design community. For example, Capozzoli, et al. [6] formulate a
metamodel using regression analysis to substitute complex energy calculations and computationally
demanding energy simulation; Klemm, et al. [7] present a metamodel derived by polynomial regression
from CFD simulation results to provide objective functions for faster optimisation of building
aerodynamics; Tresidder, et al. [8] use Kriging metamodels to optimise CO2 emissions and
construction costs of buildings; and more recently, Wortmann, et al. [9] demonstrate the advantageous
application of metamodel-based optimisation using radial-basis functions for architectural daylight
optimisation problems.
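As a brief illustration of such a forward metamodel (our own sketch, not taken from any of the cited studies), the snippet below fits a polynomial-regression surrogate g(x) to sampled runs of a stand-in 'simulation' function and then predicts the response for an unseen input.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Stand-in for an expensive simulation f(x); in practice each call would be a full analysis run.
def simulate(X):
    return np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# Sample the design space once and evaluate the "simulation" on each sample.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = simulate(X)

# Fit the metamodel g(x) ~ f(x) and use it in place of the simulation.
g = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(X, y)
print(g.predict(np.array([[0.3, 0.7]])))  # fast approximate response for an unseen input
```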
Typical metamodels can be extremely efficient for searching a design space with a specific objective.
However, the objective is not always clear to the designer, which is why in our previous work we moved
beyond the performance-optimisation-driven design agenda towards a knowledge-based approach
that allows us to make deeper inferences about what is driving the physical behaviour. We argue that a
knowledge-based approach allows for human intelligence to intervene and thus enables us to drive and
control design with our creative intuition. More specifically, we presented a Bayesian Network
Metamodel, which, unlike typical metamodels, does not compress relationships into a deterministic
function g(x), but instead takes into account all sampled input values and simulation response values
as a joint probability distribution (JPD) that can be accessed and explored using statistical inference
techniques.
2.2. Knowledge-based metamodels: Bayesian network metamodel (BNM)
A BNM is a type of statistical model that is represented in the form of a directed mathematical graph,
where inputs and outputs are represented as nodes, while relationships between variables are represented
as edges, whose direction indicates a causal influence as shown in Figure 1, B (Pearl [10]). Each node
is encoded with a marginal probability distribution, while each edge is encoded with a conditional
probability distribution matrix. Together, this information is jointly represented as a joint probability
distribution (JPD) that can be accessed and explored using a statistical method called Bayesian
inference. More specifically, a JPD does not distinguish between independent (X) and dependent (Y)
variables and thus allows bi-directional inference, such that designers and engineers can predict the simulation
response for a set of inputs (Figure 1, B1), and/or vice versa, predict the input probability distributions
(PD) for a target response of interest (Figure 1, B2).
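To make the bi-directional inference concrete, here is a toy sketch (our own example with made-up numbers, not the paper's implementation) of a two-node JPD over a discretised input X and response Y, queried in both directions with Bayes' rule.

```python
import numpy as np

# Joint probability distribution P(X, Y) over 3 input bins and 2 response bins
# (illustrative numbers only; rows are X bins, columns are Y bins).
jpd = np.array([[0.20, 0.05],
                [0.15, 0.15],
                [0.05, 0.40]])

# Forward inference (B1): specify a hard input value, predict the response PD.
x_bin = 0
p_y_given_x = jpd[x_bin] / jpd[x_bin].sum()
print('P(Y | X = bin 0) =', p_y_given_x)

# Reverse inference (B2): specify a hard response value, infer the input PD.
y_bin = 1
p_x_given_y = jpd[:, y_bin] / jpd[:, y_bin].sum()
print('P(X | Y = bin 1) =', p_x_given_y)
```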
Figure 1: Shift from typical forward metamodel (A), to a bi-directional metamodel (B).
3. A Flexible Bayesian network metamodel (BNM)
The use of metamodels for engineering simulation such as FEA first appeared in fields such as
automotive and aerospace engineering and has been established there for decades (Simpson, et al. [11]). It is well
known that design problems in engineering fields are much better defined than those encountered
in architectural design. For example, the overall appearance of an aircraft has not changed significantly
over the past decades because the overall design is heavily governed by physical laws and principles.
Nowadays, the fundamental variables describing these laws and principles are well understood by
practitioners in the field and thus, result in more robust implementation of techniques such as
metamodels. The same cannot be said for the field of architectural design however, because the design
space of a building is vague and wide open, and can involve the exploration of many variables and
different problem descriptions. In this context, we argue that when importing techniques such as
metamodels from engineering fields to architectural design, we need to accommodate flexibility in order to
address the iterative nature of creativity in design.
We hypothesise that through collaboration between designers and engineers, we can achieve a
synthesised metamodel that addresses the engineering aspects controlling the physical behaviour
directly, without diving too deep into the laws of physics. In turn, the metamodel becomes more
fundamental to the problem domain and thus generalises for different design problems within that domain.
In this context, we propose to shift from adopting design variables as the metamodel inputs, towards
selecting variables that are known to drive the simulation response and are thus design problem-
independent. As a result, the design-independent metamodel becomes more general and is thus able to
accumulate data and information from one problem to the next (Figure 2). Furthermore, the
generalizability of the inputs can also be taken advantage of to build a metamodel from multiple data
sources (Figure 2), for example, from past and present design projects.
In theory, the proposed approach applies to any type of metamodel; however, in this paper we retain
focus on building a flexible knowledge-oriented metamodel using a Bayesian Network. Furthermore,
we will focus on the design of 2D truss structures using FEA as the structural analysis of choice. Figure
2 illustrates the overall workflow in building a flexible metamodel.
In the following section we will discuss an example to illustrate building a flexible metamodel using our
approach. We will focus on explaining steps 1, 3A and 4A from Figure 2.
[Figure 1 graphic: panel A shows a typical forward metamodel y = f(X) mapping PDs of sampled values to PDs of simulation responses; panel B shows inference with the BNM, where B1 specifies hard input values to predict the response PD and B2 specifies a hard response value to infer the input PDs.]
Figure 2: Workflow to build a flexible BNM that can generalise for new design inputs.
3.1. Case study problem: 2D truss design and analysis using FEA
To demonstrate our approach, we assume three different parametric 2D cantilever truss designs, each
defined by a respective set of design variables (as indicated in Figure 3). A parametric analysis model
of each structure was modelled in Grasshopper and then analysed using ‘Millipede’ (Michalatos and
Kaijima [12]), which is a Grasshopper plugin for FEA-based structural analysis. For all problems we
assume solid round steel cross-sections with a density of 7800 kg/m³, a Young's Modulus of 200 GPa,
fixed conditions in the x, y and rotational directions on the left side, and a vertical point load of 100 kN
acting in the -y direction at the far right node/s.
Figure 3: (a) symmetric, (b) asymmetric and varying cross-sections, (c) Michell truss.
3.2. Identifying and extracting the metamodel input variables (Steps 1, 3A)
The scope is to select variables that are (i) known to be critical in the calculation of the simulation
response, or known to drive the physical behaviour, and are (ii) general, i.e. independent of the design
problem.
[Figure 2 graphic: workflow for building a flexible BNM. Important variables XE1, ..., XEn are selected (step 1); design variables are sampled and run through the parametric FEM; the simulation response values y and the values of the important variables are extracted (step 3A) into an input sample matrix; the metamodel is built from these inputs and output; and any new design variables are looped back through the same workflow.]
[Figure 3 graphic: design problem variable ranges, each problem loaded with 100 kN. Problem A: span [2 m, 6.5 m], depth [0.5 m, 2 m], number of segments [2, 6]. Problem B: span [2 m, 6.5 m], min_radius [40 mm, 100 mm], f(theta) [0.5, 2]. Problem C: span [4 m, 10 m], number of segments [5, 15], tensile_rad, compressive_rad and boundary_rad each [50 mm, 200 mm].]
In this step, we encourage collaboration with domain experts to help identify suitable metamodel
input variables. The following is merely an example for this paper, to illustrate the type of variables
that could be used for generalising a metamodel; we emphasise that our
suggestions are subject to improvement based on further expert consultation. For our example, we take
a peek at what constitutes the Finite Element (FE) method, which is the numerical calculation underlying
FEA-based structural analysis.
3.2.1. Finite element method background
The description of the laws of physics driving engineering phenomena is usually expressed in terms of
partial differential equations (PDEs). However, when complicated geometries, loadings and material
properties are involved, it becomes impossible to solve these PDEs analytically. Instead, methods such
as the FE method are adopted to construct approximations of the solutions of the PDEs, by discretising
the problem domain and solution into a set of smaller parts, called ‘finite elements’, that can be solved
numerically as a set of algebraic equations. Together, the finite elements make up a finite element model.
In general, the algebraic equation of a linear finite element model is expressed as K u = F, where
u is the unknown vector of nodal deformations, K the stiffness matrix and F the vector of external
forces, which depends on the loading conditions. These vectors and matrices are assembled from the
respective contributions of each element e, given by the element nodal deformation vector {u_e}, the element
stiffness matrix K_e and the element load vector {F_e}. The stiffness matrix of each finite element contains
the material and geometric information that indicates the resistance of the element to deformation when
subjected to loading. In the case of 2D beam elements, the deformation may consist of axial and
bending effects. The local axial stiffness of an element depends on the Young's Modulus of the material
E, the cross-sectional area A, and the length of the element L, and can be expressed in terms of EA/L, while
the local bending stiffness depends on E, the moment of inertia of the element I, and L, and can be expressed
as EI/L. Furthermore, the local deformation of each element is transformed to the global axis in which
the load is acting, such that the global axial stiffness becomes (EA/L) cos θ in the x-direction, and
(EA/L) sin θ in the y-direction, while the global bending stiffness becomes (EI/L) cos θ in the x-direction, and
(EI/L) sin θ in the y-direction, where θ is the angle between the global x-axis and the neutral axis of the
element. Subsequently, the transformed stiffness matrices from each element are assembled together
into the global stiffness matrix, which takes into account the connectivity between the elements, and the
node deformations are calculated by solving the global system u = K⁻¹ F.
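As a minimal illustration of this assembly-and-solve step (our own sketch, using 1D axial bar elements rather than the 2D beam elements of the case study, with assumed values for E, A, L and the load), the snippet below assembles a global stiffness matrix for two colinear elements and solves K u = F with NumPy.

```python
import numpy as np

# Assumed example properties for illustration only.
E = 200e9   # Young's modulus [Pa]
A = 0.005   # cross-sectional area [m^2]
L = 2.0     # element length [m]

# Local axial stiffness of one bar element: (EA/L) * [[1, -1], [-1, 1]].
k_e = (E * A / L) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])

# Assemble the global stiffness matrix for two colinear elements (3 nodes).
K = np.zeros((3, 3))
for i, j in [(0, 1), (1, 2)]:                      # element connectivity
    K[np.ix_([i, j], [i, j])] += k_e

# External force vector: 100 kN applied at the free end (node 2).
F = np.array([0.0, 0.0, -100e3])

# Fix node 0 and solve K u = F on the free degrees of freedom only.
free = [1, 2]
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print(u)  # nodal deformations [m]
```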
3.2.3. Identify important variables (step 1)
For this example, we can therefore identify (EA/L) cos θ, (EA/L) sin θ, (EI/L) cos θ,
and (EI/L) sin θ as suitable variables for our flexible metamodel because they are (i) critical to the
calculation of the simulation response, and (ii) independent of the design variables. Our scope is not
to assemble the global stiffness matrix, but to select metamodel inputs. Therefore, we treat the values of
each important variable as an accumulation of all the elements. Furthermore, we decide to also include
additional variables: the total mass of the assembled structure and the centre of gravity in the x and y
directions (cogx and cogy, respectively). Even though these are already implicitly considered in the
bending and axial stiffness calculations, we argue that additional independent metamodel inputs might
help the model capture information that is compressed when accumulating stiffness values.
3.2.4. Extract the important variables from FEM (step 3A)
In our example, we obtain A, L, θ and I for each element directly from the parametric model, where θ
is calculated from the dot product between the neutral axis of the element and the global x-axis, and I is
obtained using the parallel axis theorem to find the moment of inertia of each element with respect to
the centre of gravity of the global structure. Finally, the Young's Modulus E is prescribed and kept
constant in our example.
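A minimal sketch of this extraction step is given below (our own illustration, not the authors' Grasshopper implementation): it accumulates (EA/L) cos θ, (EA/L) sin θ, (EI/L) cos θ, (EI/L) sin θ, total mass, cogx and cogy from a list of element records with assumed keys; for brevity it uses the local second moment of area of each solid round section rather than the parallel-axis transfer about the global centre of gravity described above.

```python
import math

def generalised_inputs(elements, E=200e9, density=7800.0):
    """Accumulate design-independent metamodel inputs from 2D elements.

    `elements` is a list of dicts with assumed keys:
      'x1', 'y1', 'x2', 'y2' : end-node coordinates [m]
      'radius'               : solid round cross-section radius [m]
    """
    ea_cos = ea_sin = ei_cos = ei_sin = 0.0
    mass = mx = my = 0.0
    for e in elements:
        dx, dy = e['x2'] - e['x1'], e['y2'] - e['y1']
        length = math.hypot(dx, dy)
        cos_t, sin_t = dx / length, dy / length     # direction relative to the global x-axis
        A = math.pi * e['radius'] ** 2              # solid round section area
        I = math.pi * e['radius'] ** 4 / 4.0        # local second moment of area
        ea_cos += (E * A / length) * cos_t
        ea_sin += (E * A / length) * sin_t
        ei_cos += (E * I / length) * cos_t
        ei_sin += (E * I / length) * sin_t
        m = density * A * length                    # element mass
        mass += m
        mx += m * (e['x1'] + e['x2']) / 2.0         # first moments for the cog
        my += m * (e['y1'] + e['y2']) / 2.0
    return {'EA/L cos': ea_cos, 'EA/L sin': ea_sin,
            'EI/L cos': ei_cos, 'EI/L sin': ei_sin,
            'mass': mass, 'cogx': mx / mass, 'cogy': my / mass}
```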
3.3. Building the generalised base BNM
In this section, we first build the base metamodel from data generated with problem A. Subsequently, in
section 4, we carry forward the same metamodel to problems B and C (Figure 4).
The simulation response data for the base metamodel is generated by sampling and evaluating the design
space of problem A, using a quasi-random sampling algorithm as per typical metamodeling practice.
The difference with our approach is that on each simulation run, besides the maximum deflection, we
also store values for (EA/L) cos θ, (EA/L) sin θ, (EI/L) cos θ, (EI/L) sin θ, total mass, cogx and
cogy. Each time new design variables are explored, the newly generated data is concatenated with the
previous dataset and the metamodel is then rebuilt, at no significant computational expense.
Subsequently, all past and new continuous data is discretised (as required by Bayesian Networks), and
introduced as probability distributions (nodes) in a Bayesian Network. The edge direction of causal
structure is interpreted from input to output nodes (Figure 4). Subsequently, the probabilistic
relationships between the nodes (marginal and conditional probabilities) are then learned automatically
from the discretised data by means of a supervised learning EM algorithm. For this research we made
use of ‘libpgm’ (CyberPoint International [13]), which is a Python library for modelling Bayesian
networks and performing inference. Once the base metamodel is built, design variables are then mapped
onto the metamodel inputs in the form of a secondary Bayesian Network such that the metamodel can
then be used to predict the response, and/or infer the design input distributions for target response values
of interest. The subsequent design problems are mapped onto the base model in the same way.
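The discretisation and parameter-learning step can be sketched roughly as follows (our simplification with arbitrary column names and bin counts; the paper itself uses libpgm's supervised EM learning, whose API is not reproduced here): continuous samples are binned and a conditional probability table is estimated by frequency counting.

```python
import numpy as np
import pandas as pd

# Toy dataset with assumed column names; the real dataset stores the
# accumulated stiffness terms, mass, cogx, cogy and the maximum deflection.
df = pd.DataFrame(np.random.rand(3000, 3),
                  columns=['EA_L_cos', 'mass', 'max_deflection'])

# Discretise every continuous variable into equal-frequency bins,
# as required by a discrete Bayesian network.
bins = 5
disc = pd.DataFrame({c: pd.qcut(df[c], q=bins, labels=False, duplicates='drop')
                     for c in df.columns})

# Estimate P(max_deflection | parents) by counting co-occurrences, a
# maximum-likelihood stand-in for the supervised EM learning step.
parents = ['EA_L_cos', 'mass']
cpt = (disc.groupby(parents)['max_deflection']
           .value_counts(normalize=True)
           .rename('probability'))
print(cpt.head())
```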
Figure 4: Workflow to map new design inputs onto the generalised BNM.
3.3.1. Cross-validate the base BNM
In order to test the robustness of the metamodel for predicting simulation response, we perform a ten-
fold cross-validation, which splits the dataset into training and testing sets in ten different ways. Since the output
of the BNM is non-scalar, we cannot make use of scalar prediction error measures such as mean square
error. Instead we plot the distribution of differences between the mean of the predicted distribution bin
and the actual simulated value. Figure 5 illustrates three of the ten folds and demonstrates that for a
metamodel based on 2000 samples, the BNM predicted responses within ~10% of their actual value.
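A rough sketch of this validation loop is shown below (our illustration; `rebuild_metamodel` and `predict_bin_mean` are placeholder names for the BNM rebuilding and inference steps, not real library calls): the dataset is split into ten folds and the error is recorded as the percentage difference between the mean of the predicted bin and the simulated value.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, y, rebuild_metamodel):
    """Ten-fold cross-validation with a non-scalar (binned) prediction.

    `rebuild_metamodel(X_train, y_train)` is assumed to return an object with a
    `predict_bin_mean(x)` method giving the mean of the predicted response bin.
    """
    errors = []
    folds = KFold(n_splits=10, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X):
        model = rebuild_metamodel(X[train_idx], y[train_idx])
        for x, y_true in zip(X[test_idx], y[test_idx]):
            y_pred = model.predict_bin_mean(x)
            errors.append(100.0 * (y_pred - y_true) / y_true)  # % prediction error
    return np.array(errors)  # plotted as a histogram per fold in the paper
```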
Figure 5: Prediction robustness of base BNM for 2000 samples from design problem A.
[Figure 4 graphic: the design variables of problems A, B and C in the design domain (span, num segs, theta, min_rad, depth, t_rad, c_rad, b_rad) are each mapped onto the same set of fundamental variables in the engineering domain (EA/L sinθ, EA/L cosθ, EI/L sinθ, EI/L cosθ, mass, cogx, cogy), which feed the maximum-deflection response node; forward inference predicts the response, while reverse inference predicts the design inputs.]
[Figure 5 graphic: histograms of % prediction error (frequency) for folds 0, 6 and 8.]
4. Using the generalized base BNM to explore new design spaces
In order to demonstrate the generalizability of the base BNM built in the previous section, we will carry
forward and build on the same model to predict response and infer inputs, for two new truss design
problems: B and C (Figure 3). For each problem we illustrate plots to demonstrate (a) robustness for
predicting maximum deflection and (b) an example of reverse inference to predict the input distributions
for minimised max deflection. See Xuereb Conti and Kaijima [1] for guidance on interpreting the inferred PDs.
4.1. Input design problem B (3 design variables - 1000 new simulation runs)
For this problem, we generate 1000 simulation runs from problem B, and concatenate them to the
previous dataset, to produce a dataset of 3000 data points. Subsequently, we rebuild the BNM from the
concatenated dataset of 3000 data points. As a validation technique, we use the rebuilt BNM to predict
the maximum deflection for each of 2000 combinations of span, theta and min_radius values, which the
model has not seen before. Figure 6 (left) demonstrates that the generalised BNM predicts the maximum
deflection values within roughly 10% of the correct simulated values. Figure 6 (right) illustrates
the use of the BNM to explore the input distributions that would likely yield a max_deflection < ~1 mm.
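To illustrate this kind of query without the full BNM machinery, the sketch below (our own toy example with synthetic data and a made-up response formula) concatenates two batches of runs and approximates the reverse inference by directly conditioning the discretised sample on a small max_deflection, comparing the marginal and inferred PDs of span.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-ins for the accumulated problem A runs and the 1000 new
# problem B runs (toy response formula; real values come from the FEA).
def fake_runs(n):
    span = rng.uniform(2.0, 6.5, n)
    mass = rng.uniform(200.0, 2000.0, n)
    deflection = 0.5 * span ** 3 / mass + rng.normal(0.0, 0.01, n)
    return pd.DataFrame({'span': span, 'mass': mass, 'max_deflection': deflection})

dataset = pd.concat([fake_runs(2000), fake_runs(1000)], ignore_index=True)

# Reverse-inference style query by direct conditioning on the discretised data:
# compare the marginal PD of span with its PD given a small max_deflection.
span_bins = pd.qcut(dataset['span'], q=5, labels=False)
small = (dataset['max_deflection'] < dataset['max_deflection'].quantile(0.1)).to_numpy()

marginal_pd = pd.Series(span_bins).value_counts(normalize=True).sort_index()
inferred_pd = pd.Series(span_bins[small]).value_counts(normalize=True).sort_index()
print(pd.DataFrame({'marginal PD': marginal_pd, 'inferred PD': inferred_pd}))
```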
Figure 6: Prediction validation (left), example of inferring input distributions (right).
4.2. Input design problem C (5 design variables - 500 new simulation runs)
In this example, we carry forward the metamodel from problem B to a problem with five design
variables. We concatenate 500 new simulation runs from problem C, to the previous 3000. We show
that despite the increased number of design variables, the histogram (Figure 7, left) indicates that the
model has the potential to predict within a decent error range, considering the addition of only 500 new
data points from problem C. Figure 7 (right) illustrates the use of the BNM to explore input configurations
that yield a max_deflection < ~4mm.
It is important to note that the number of new data points required is highly dependent on the quality
of the selected metamodel inputs.
Figure 7: Prediction validation (left), example of inferring input distributions (right).
[Figure 6 graphic: histogram of % prediction error for max_deflection, and marginal vs. inferred PDs of span, theta and min_rad given P(deflection) < 0.927 mm.]
[Figure 7 graphic: histogram of % prediction error for max_deflection, and marginal vs. inferred PDs of the problem C design variables given P(deflection) < 3.93 mm.]
5. Conclusion and future work
In this paper we presented an approach to build a metamodel that can generalise for new problems such
that it can either be used to carry data forward from one design space exploration to the next, or be used
to pipe multiple sources of existing data into one metamodel. We achieve this flexibility through a
careful selection of the metamodel inputs. Our approach requires collaboration with domain experts in
search of important variables that are (i) critical to the calculation of the simulation response, and (ii)
independent of the design variables. In our case study we demonstrated that we can
accumulate data from one problem to the next with the same metamodel, which in turn has the potential
to reduce the amount of simulation data required for subsequent design problems. The latter depends on the
quality of the selected metamodel inputs; however, this sub-hypothesis requires further investigation.
In future work, we would like to push the generalizability of the metamodel further such that it can
generalise for different boundary, loading, and material scenarios. Furthermore, we would like to focus
on smarter sampling strategies, such that we can predict more specifically where in the design space the
samples should be mostly concentrated, to avoid redundant sampling that is already captured from data
of previous problems.
Acknowledgements
We would like to thank Dr. Oliver Weeger and Dr. Narasimha Boddeti from the Digital Manufacturing
and Design centre at SUTD, for their expert insight in aspects of structural mechanics.
References
[1] Z. X. Conti and S. Kaijima, "Enabling Inference in Performance-Driven Design Exploration,"
in Humanizing Digital Reality: Design Modelling Symposium Paris 2017, K. De Rycke, C.
Gengnagel, O. Baverel, J. Burry, C. Mueller, M. M. Nguyen, et al., Eds., ed Singapore: Springer
Singapore, 2018, pp. 177-188.
[2] J. P. Kleijnen, Design and analysis of simulation experiments vol. 20: Springer, 2008.
[3] J. P. Kleijnen and R. G. Sargent, "A methodology for fitting and validating metamodels in
simulation," European Journal of Operational Research, vol. 120, pp. 14-29, 2000.
[4] B. Ankenman, B. L. Nelson, and J. Staum, "Stochastic kriging for simulation metamodeling,"
Operations research, vol. 58, pp. 371-382, 2010.
[5] D. Fonseca, D. Navaresse, and G. Moynihan, "Simulation metamodeling through artificial
neural networks," Engineering Applications of Artificial Intelligence, vol. 16, pp. 177-183,
2003.
[6] A. Capozzoli, H. E. Mechri, and V. Corrado, "Impacts of architectural design choices on
building energy performance applications of uncertainty and sensitivity techniques,"
International Building Performance Simulation Association, 2009.
[7] K. Klemm, W. Marks, and A. J. Klemm, "Multicriteria optimisation of the building arrangement
with application of numerical simulation," Building and Environment, vol. 35, pp. 537-544,
2000.
[8] E. Tresidder, Y. Zhang, and A. I. Forrester, "Acceleration of building design optimisation
through the use of kriging surrogate models," Proceedings of building simulation and
optimization, pp. 1-8, 2012.
[9] T. Wortmann, A. Costa, G. Nannicini, and T. Schroepfer, "Advantages of surrogate models for
architectural design optimization," Artificial Intelligence for Engineering Design, Analysis and
Manufacturing, vol. 29, pp. 471-481, 2015.
[10] J. Pearl, Probabilistic reasoning in intelligent systems: networks of plausible inference: Morgan
Kaufmann, 1988.
[11] T. W. Simpson, J. D. Peplinski, P. N. Koch, and J. K. Allen, "Metamodels for Computer-based
Engineering Design: Survey and recommendations," Engineering with Computers, vol. 17, pp.
129-150, 2001.
[12] P. Michalatos and S. Kaijima. (2014). Millipede (Grasshopper plugin for structural analysis).
[13] CyberPoint International, LLC. (2012). libpgm - Python library for Probabilistic Graphical
Models.