OPOSSUM
Introducing and Evaluating a Model-based Optimization Tool for
Grasshopper
THOMAS WORTMANN
Singapore University of Technology and Design, Singapore
thomas_wortmann@mymail.sutd.edu.sg
Abstract. This paper presents Opossum, a new optimization plug-in
for Grasshopper, a visual data-flow modelling software popular among
architects. Opossum is the first publicly available, model-based opti-
mization tool aimed at architectural design optimization and especially
applicable to problems that involve time-intensive simulations of, for example, daylighting and building energy. The paper details Opossum's
design and implementation and compares its performance to four single-
objective and one multi-objective solver. The test problem is time-
intensive and simulation-based: optimizing a screened façade for day-
light and glare. Opossum outperforms the other single-objective solvers
and finds the most accurate approximation of the Pareto front.
Keywords. Design Tool; Architectural Design Optimization;
Model-based Optimization; Sustainable Design.
1. Architectural Design Optimization
This paper presents Opossum, a new optimization plug-in for Grasshopper, a visual
data-flow modelling software popular among architects. Opossum is the first pub-
licly available (from www.food4rhino.com), model-based optimization tool aimed
at architectural design optimization (ADO) and especially applicable to problems
that involve time-intensive performance simulations of, for example, daylighting and building energy. Such simulations play an increasingly large role in architectural design processes and were employed, for example, by the designers of the Louvre Abu Dhabi (Imbert et al. 2013). Model-based (or surrogate-based) optimization
methods find good results with small numbers of simulations (Holmström et al.
2008; Costa & Nannicini 2014; Wortmann & Nannicini 2016). This high speed of
convergence is important for sustainable design problems such as daylighting and
building energy, where a single simulation takes several minutes or hours to com-
plete. In such cases, it is impractical to perform the thousands of simulations re-
quired by population-based metaheuristics such as genetic algorithms (GAs). But
model-based methods are rarely used in ADO (Evins 2013). Grasshopper and
other architectural parametric design software such as Dynamo (Asl et al. 2015)
and DesignBuilder (Singh & Kensek 2014) come equipped with only metaheuris-
tics. Opossum fills this gap by making model-based optimization available to a
wider audience and accessible to non-experts.
1.1. GLOBAL BLACK-BOX OPTIMIZATION
Simulation-based optimization problems define the relationship between variables
and performance objectives not with an explicit, mathematical function but by
evaluating a parametric model with numerical simulations. This relationship of-
ten exhibits local optima and complex, non-linear dependencies between variables.
Global black-box (or derivative-free) optimization methods do not require a mathe-
matical formulation and therefore are particularly appropriate for simulation-based
ADO problems. Such methods balance establishing an overview of the design
space with focusing on a promising region of the design space to find the best so-
lution. There are three categories of black-box methods: (1) Direct search, (2)
model-based methods and (3) metaheuristics. Global direct search and model-
based methods alternate between global and local search, while metaheuristics
limit an initially global search to an increasingly local one as the optimization pro-
gresses.
1.1.1. Direct Search
Direct search methods evaluate design candidates in a deterministic sequence. The
Hooke-Jeeves and Nelder-Mead Simplex algorithms (Nocedal & Wright 2006) are
classic examples of local direct search algorithms. DIRECT (Jones et al. 1993) is
a global direct search algorithm that recursively subdivides the design space into
hyper-boxes. This paper tests the implementation of DIRECT in the free, open-
source NLopt library (Johnson 2010).
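As an aside, the NLopt implementation of DIRECT can be called from Python roughly as sketched below. The objective here is a placeholder standing in for a simulation, and the setup (bounds, budget) is illustrative rather than the benchmark configuration used in this paper.

```python
import nlopt
import numpy as np

def objective(x, grad):
    # Placeholder black-box objective; in practice this would run a simulation.
    # DIRECT is derivative-free, so the grad argument is ignored.
    return float(np.sum((x - 0.3) ** 2))

n = 5                                   # number of design variables
opt = nlopt.opt(nlopt.GN_DIRECT, n)     # global DIRECT algorithm
opt.set_lower_bounds([0.0] * n)
opt.set_upper_bounds([1.0] * n)
opt.set_min_objective(objective)
opt.set_maxeval(200)                    # evaluation budget

x_best = opt.optimize([0.5] * n)        # starting point (largely ignored by DIRECT)
print(x_best, opt.last_optimum_value())
```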
1.1.2. Model-based Methods
Model-based methods employ surrogate models (explicit estimates of the implicit
mathematical formulations of black-box problems) to guide the search for good
solutions. Trust region methods (Nocedal & Wright 2006) employ local mod-
els, while more recent, global model-based methods model design spaces com-
pletely. Global methods construct models with a variety of statistical (e.g. Poly-
nomial Regression and Kriging) and machine learning-related (e.g. Neural Net-
works and Support Vector Machines) techniques (Koziel et al. 2011). Opossum,
the optimization tool presented here, approximates the design space using radial
basis functions (Gutmann 2001). Surrogate models accelerate optimization pro-
cesses, since they are much faster to calculate than the underlying simulations. Ap-
proaches that completely replace time-intensive simulations with surrogate models
(e.g. Yang et al. 2016) and then apply optimization methods are limited by the
models' initial precision. Increasing the models' precision requires a larger sam-
ple size, which can negate the initial speed advantage. Contrastingly, model-based
methods iteratively build and refine models during the optimization process. One optimization step consists of (1) searching the model for a promising solution to
evaluate, (2) simulating the found solution and (3) updating the model based on
the simulation results. In this way, model-based methods continuously increase the
model’s accuracy. The ten runs of the model-based algorithm in the benchmark be-
low each constructed and updated a surrogate model from 200 simulated solutions.
In testing the accuracy of these models against all 10,200 solutions simulated dur-
ing the benchmark, the maximum deviation was 53% of the true objective value,
but the mean deviation was only 11% of the true value. Global model-based meth-
ods are particularly effective for optimizing problems with costly evaluations (for example, time-intensive simulations) and complex relationships between variables
and objective (Holmström et al. 2008). This effectiveness, combined with oppor-
tunities for visualization and interaction afforded by the surrogate model, makes
model-based optimization attractive for ADO (Wortmann et al. 2015). Opossum,
the optimization tool presented here, provides an easy-to-use interface to RBFOpt
(Costa & Nannicini 2014), a state-of-the-art model-based optimization library. In
the 2015 GECCO Black-Box Competition, which considered 1,000 mathematical
benchmark problems with two to sixty-four variables, RBFOpt ranked first among
the open-source solvers (Loshchilov & Glasmacher 2017).
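The three-step loop described above can be sketched as follows. This is a minimal illustration of the general idea, using SciPy's radial basis function interpolator and random sampling over the model; it is not RBFOpt's actual implementation, which balances this kind of exploitation with criteria that also improve the model's accuracy.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def simulate(x):
    # Placeholder for a time-intensive simulation (e.g. a daylighting run).
    return float(np.sum((x - 0.3) ** 2))

n_var, n_init, budget = 5, 10, 50
rng = np.random.default_rng(0)

# (0) Initial sample of the design space (RBFOpt uses a Latin Hypercube Design).
X = rng.random((n_init, n_var))
y = np.array([simulate(x) for x in X])

for _ in range(budget - n_init):
    # (1) Fit the surrogate model and search it for a promising design.
    #     (Real methods also reward exploration; this sketch only exploits the model.)
    model = RBFInterpolator(X, y, kernel='cubic')
    candidates = rng.random((10000, n_var))          # evaluating the model is cheap
    x_next = candidates[np.argmin(model(candidates))]
    # (2) Evaluate the expensive, true objective at the selected design.
    y_next = simulate(x_next)
    # (3) Update the data; the next iteration refits (refines) the model.
    X, y = np.vstack([X, x_next]), np.append(y, y_next)

print("best design:", X[np.argmin(y)], "objective:", y.min())
```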
1.1.3. Metaheuristics
Population-based metaheuristics (Talbi 2009) start with randomly generated pop-
ulations of design candidates that they improve heuristically. Unlike direct search
and model-based methods, metaheuristics do not rely on mathematical proofs of
convergence but draw their inspiration from natural processes, such as genetic evo-
lution or “swarm intelligence”. Due to this lack of rigor and poor performance on
benchmarks (Rios & Sahinidis 2013; Costa & Nannicini 2014), the mathematical
optimization community regards metaheuristics as “methods of last resort” (Conn
et al. 2009). Nevertheless, metaheuristics are the most popular category for ADO,
and GAs the most popular algorithm (Evins 2013). This popularity is due to a
relative ease of implementation, a wide availability, and a perception that meta-
heuristics are especially appropriate for complex problems with multiple optima
(e.g. Attia et al. 2013; Evins 2013). This paper tests implementations of single-
and multi-objective GAs and of particle swarm optimization (PSO) and simulated
annealing (SA).
1.2. PARETO-BASED OPTIMIZATION
Multi-objective optimization aims to find good values for more than one perfor-
mance objective. The problem in this paper considers daylight and glare, but com-
bines the two into a single objective by subtracting glare from daylight. In other
words, it defines an a priori weightage between daylight and glare. Pareto-based,
multi-objective algorithms, which often are GAs, do not define such weightages,
but instead try to satisfy all objectives as much as possible. Often, there is a trade-
off between objectives: e.g. allowing more daylight into a room can lead to more
glare. Pareto-based optimization illuminates such tradeoffs by searching for non-
dominated solutions, i.e., solutions where improving one of the objectives is only
possible by worsening others, and graphing them on the Pareto front. Compared to
single-objective optimization, Pareto-based optimization is a less established field,
with a smaller number of algorithms, proofs of convergence and experimental re-
sults. Nevertheless, it is popular in ADO (Evins 2013). Radford and Gero
(1980) suggest an affinity between the numerous tradeoffs addressed by architec-
tural design and the smaller number of objectives addressed by multi-objective op-
timization. But Pareto-based optimization is less efficient than its single-objective
counterpart, since the former focuses not only on finding good solutions, but also
on achieving a good spread of solutions on the Pareto front. In the benchmark of
a building energy problem with two objectives by Hamdy et al. (2016), it took
1400-1800 function evaluations, i.e. simulations, for the Pareto fronts to stabilize.
To explore this efficiency difference, this paper compares five single-objective
algorithms with the multi-objective HypE (Bader and Zitzler 2008). It evaluates
HypE’s performance as a single-objective algorithm and compares the Pareto front
found by HypE with the fronts found implicitly by the single-objective algorithms.
Extensions of model-based methods to Pareto-based optimization exist: Knowles
(2006) recalculates a single surrogate model with different weightages at every
iteration. Akhtar and Shoemaker (2016) employ one surrogate model for every
performance objective.
2. Opossum: a New Model-based Optimization Tool
RBFOpt is programmed in Python 2.7 and relies on libraries for numerical com-
putations (NumPy and SciPy) and auxiliary optimization (Pyomo). But Grasshop-
per supports only IronPython, a Python variant integrated with Microsoft’s .NET
framework that does not support these libraries. Opossum is written in C#, which
Grasshopper supports. The C# program starts an external Python 2.7 process that
runs RBFOpt. Opossum and RBFOpt exchange data via a (hidden) command line
window.
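On the Python side, this exchange can be pictured roughly as follows. The class names are those of the publicly documented rbfopt package and may differ from the exact RBFOpt version bundled with Opossum; the stdin/stdout protocol shown here is a simplified stand-in for Opossum's actual message format, not its verbatim implementation.

```python
import sys
import numpy as np
import rbfopt

def objective(x):
    # Send the candidate's variable values to the host (Grasshopper/Opossum) ...
    sys.stdout.write(' '.join(str(v) for v in x) + '\n')
    sys.stdout.flush()
    # ... and wait for the simulated objective value on standard input.
    return float(sys.stdin.readline())

n_var = 40
bb = rbfopt.RbfoptUserBlackBox(
    n_var,
    np.zeros(n_var), np.ones(n_var),   # lower and upper variable bounds
    np.array(['R'] * n_var),            # all variables are real-valued
    objective)
settings = rbfopt.RbfoptSettings(max_evaluations=200)
val, x, itercount, evalcount, fast_evalcount = rbfopt.RbfoptAlgorithm(settings, bb).optimize()
```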
2.1. ALGORITHM PARAMETERS
RBFOpt has more than 40 parameters, some of which are interrelated and some
of which dramatically change its behavior. One can choose between two model-
based algorithms: Gutmann (2001) and MSRSM (Regis & Shoemaker 2007). For
global search, Gutmann evaluates the surrogate model’s point of largest curva-
ture, based on the assumption that this point yields a large improvement of the
model’s accuracy. MSRSM searches the model for points that balance improving
the model’s accuracy with the promise of better solutions, using either a genetic
algorithm, random sampling or mathematical solvers. There are five choices of in-
terpolating radial basis functions (linear, multiquadric, cubic, thin plate spline
and automatic selection), which result in models with varying accuracy depending
on the optimization problem.
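In the current rbfopt package, these choices correspond roughly to settings such as the ones below; the exact parameter names may vary between versions, so this is an illustrative sketch rather than a definitive mapping of Opossum's pre-sets.

```python
import rbfopt

# Approximate counterparts of the choices described above (names as documented
# in the rbfopt package; Opossum's pre-sets bundle similar combinations).
settings = rbfopt.RbfoptSettings(
    algorithm='MSRSM',               # or 'Gutmann'
    rbf='cubic',                     # 'linear', 'multiquadric', 'thin_plate_spline', 'auto'
    global_search_method='genetic',  # or 'sampling', 'solver'
    max_evaluations=200)
```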
2.2. OPOSSUM GUI
To make this complexity accessible to non-experts, Opossum’s GUI consists of
three tabs that afford increasing levels of control (figure 1): (1) The first tab lets
users choose between minimization and maximization, select one of three pre-sets
of parameters, and start and stop the optimization. The pre-sets (Fast, Extensive
and Alternative) are based on intensive testing with mathematical test functions.
“Fast” runs MSRSM with a genetic algorithm. “Extensive” is identical, but spends
more time on searching the model. “Alternative” runs Gutmann, which works
well in certain cases. The first tab also displays an animated convergence graph
to inform users about the progress of the optimization. (2) The second tab lets
users define stopping conditions based on the number of iterations or the elapsed
time, and to conduct and log multiple optimization runs. (3) The third tab ac-
cepts command line parameters for RBFOpt. When desired, this “expert” window
gives the user full control, with the parameters entered here overriding parameters
set by the first two tabs. In Grasshopper, Opossum follows the look and behav-
ior of existing optimization tools, including conventions regarding the colors of
optimization components and their connections to variables and objective values
(figure 2). Double-clicking on an optimization component opens a window with
a GUI unique to each tool, which, for Opossum, contains the three tabs discussed
above. Opossum thus presents a complex and innovative optimization library in a
manner that is easy-to-use and familiar to users of Grasshopper.
Figure 1. The three tabs of Opossum’s GUI, from left to right.
Figure 2. Opossum in Grasshopper. The curves on the left link to the variables and the one on
the right to the objective.
3. Evaluation
To evaluate RBFOpt and Opossum, we compare their performance with five other
solvers available in Grasshopper: the GA and SA included in Galapagos (Rut-
ten 2010), the PSO of Silvereye (Cichocka et al. 2015), the DIRECT algorithm
included in Goat, and HypE included in Octopus (Vierlinger 2013). We test the
solvers on a time-intensive, simulation-based problem: optimizing a screened
façade for daylight and glare. We simulate daylight and glare with DIVA-for-
Rhino 4.0 (Jakubiec & Reinhart 2011).
3.1. EXAMPLE PROBLEM: OPTIMIZING DAYLIGHT AND GLARE
We consider a single room in Singapore (figure 3). The rectangular room has a
South-facing, 10.8 meter-long and 3.6 meter-high façade, and is 7.2 meters deep.
The room’s floor is raised 20 meters above the ground level. For the façade, we
propose a porous screen with a triangular grid of 1,692 circular openings. To avoid
controlling every opening with an individual variable and to create a graduated,
cloudy appearance, a grid of forty “attractor points” controls the openings, with
weights in the range [0.0, 1.0]. To create a soft falloff, we calculate the radius of
every opening as the average of the values of all attractor points, weighted by the
inverse cubes of their distances to the opening and multiplied by the maximum
radius of 65 millimeters. Openings with a radius below 10 millimeters are closed
completely. This formulation results in a problem with forty continuous variables.
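A minimal sketch of this parameterization is given below; the function and variable names are illustrative (the actual Grasshopper definition computes the same quantities geometrically), but the formula follows the description above.

```python
import numpy as np

MAX_RADIUS = 0.065   # meters (65 mm)
MIN_RADIUS = 0.010   # openings with a radius below 10 mm are closed completely

def opening_radii(openings, attractors, weights):
    """Inverse-cube-distance weighted average of the attractor weights.

    openings:   (n, 2) array of opening centers on the facade plane
    attractors: (40, 2) array of attractor point positions
    weights:    (40,) array of attractor values in [0.0, 1.0] (the design variables)
    """
    radii = np.empty(len(openings))
    for i, p in enumerate(openings):
        d = np.linalg.norm(attractors - p, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** 3           # inverse cubes of the distances
        radii[i] = np.average(weights, weights=w) * MAX_RADIUS
    radii[radii < MIN_RADIUS] = 0.0                  # close very small openings
    return radii
```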
Figure 3. Diagram of the room being optimized in terms of daylight and glare. The numbers
from one to forty indicate the positions of the attractor points, the crosses the sensor grid for
simulating UDI and the cone the camera position and view for simulating DGP. The
visualization on the right represents the best solution found, with 86% UDI and 24% DGP.
The objectives of the optimization are to (1) maximize Useful Daylight Illumi-
nance (UDI) while (2) minimizing Daylight Glare Probability (DGP). UDI mea-
sures the annual percentage of time that a sensor point receives an amount of day-
light that is sufficient for office work while avoiding glare and excessive heat gains
(300-3000 lux). This problem calculates UDI as the average from a seven by five
grid of sensor points. DGP measures glare as a percentage for a specific camera
view and for a specific point-in-time. This value indicates whether the glare is
imperceptible (DGP < 35%), perceptible (35% ≤ DGP < 40%), disturbing (40% ≤ DGP < 45%) or intolerable (DGP ≥ 45%). We calculate this value for a single
camera. The south-facing camera points directly at the screen and is in the center
of the room at a height of 1.6 meters from the floor. An annual glare simulation
calculates direct sunlight only for the daylight hours of five days (21st of June, Au-
gust, September, October and December). In Singapore, these days add up to 59
daylight hours. For the remaining hours in the year, the simulation interpolates the
direct sunlight contribution. Nevertheless, annual glare simulations can take hours
even at low quality settings. This time-intensiveness makes such simulations im-
practical as an optimization objective, especially for the repeated runs necessary
for benchmarking. Instead, we approximate annual glare as the average of the 59
DGP values corresponding to the 59 daylight hours on which the more extensive
annual glare simulations rely. Although less accurate than a full annual simula-
tion, this approach yields a good qualitative assessment of the amount of glare one
would experience in the room. If quality of daylight and avoidance of glare are
equally important, subtracting the (approximated) average annual DGP g from the average annual UDI u yields a single maximization objective (both UDI and DGP are in the range [0, 1]). We turn this result into a minimization objective by subtracting it from 1.0:

min f(x) = 1.0 − u(x) + g(x)    (1)
On an Intel Xeon E5-1620 CPU with eight threads and 3.6 GHz, one evaluation
of this objective, i.e. generating the parametric geometry and performing the day-
lighting and glare simulations, takes about 90 seconds.
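As a sketch, formula (1) amounts to the following combination of the two simulation outputs; the two simulation functions are placeholders for the DIVA-based UDI and DGP workflows, returning here the values of the best solution found (figure 3) purely for illustration.

```python
def simulate_average_udi(x):
    # Placeholder for the DIVA-based average annual UDI simulation, in [0, 1].
    return 0.86

def simulate_average_dgp(x):
    # Placeholder for the approximated average annual DGP, in [0, 1].
    return 0.24

def combined_objective(x):
    u = simulate_average_udi(x)
    g = simulate_average_dgp(x)
    return 1.0 - u + g           # formula (1): minimize 1 - UDI + DGP
```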
3.2. BENCHMARKING METHODOLOGY
We compare the results of the six solvers (DIRECT, RBFOpt, GA, SA, PSO and
HypE) from ten runs with 200 function evaluations each. To make the results independent of computing speed and implementation details, we compare the solvers in
terms of the number of function evaluations, rather than in terms of running time.
In practice, compared to the time required for function evaluations, running time
differences between solvers often are negligible.
We run all solvers with default parameters, except for the GA and HypE, where
we reduce the population sizes to 25 to achieve a larger number of generations.
Choice of parameters can have a significant impact on the performance of opti-
mization algorithms, especially for metaheuristics (Talbi 2009), and is problem-
dependent. Nevertheless, we assume that the authors of the tested solvers have
chosen sensible default parameters. Furthermore, in practice there usually is a
limited evaluation budget, which is better spent on using solvers that are efficient
out of the box than on tuning algorithmic parameters.
We compare the solvers in terms of two criteria: (1) speed of convergence
and (2) stability. Speed of convergence refers to how fast algorithms approach
the optimum, measured as the improvement per function evaluation. Stability
refers to the reliability of optimization algorithms and is a concern especially for
stochastic algorithms such as metaheuristics. One measures stability by applying
statistical measures such as standard deviation to the results of repeatedly running
an optimization algorithm on the same problem. We compare the Pareto-based
solver with the single-objective ones by calculating single-objective values for the solutions found by HypE with the weighted objective function (formula 1). We compare the single-objective solvers with HypE by recording individual UDI and
DGP values. Plotting the non-dominated solutions allows a visual comparison of
the Pareto fronts found by the six solvers in terms of their quality and spread.
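For reference, such a non-dominated (Pareto) filter over recorded (UDI, DGP) pairs can be sketched as follows, with UDI to be maximized and DGP to be minimized; this is a generic filter, not the specific plotting code used for figure 5.

```python
import numpy as np

def pareto_front(udi, dgp):
    """Return indices of non-dominated solutions (maximize UDI, minimize DGP)."""
    udi, dgp = np.asarray(udi), np.asarray(dgp)
    keep = []
    for i in range(len(udi)):
        # Solution i is dominated if another solution is at least as good in both
        # objectives and strictly better in at least one of them.
        dominated = np.any((udi >= udi[i]) & (dgp <= dgp[i]) &
                           ((udi > udi[i]) | (dgp < dgp[i])))
        if not dominated:
            keep.append(i)
    return keep

# Example: three recorded solutions; the second is dominated by the first.
print(pareto_front([0.86, 0.80, 0.70], [0.24, 0.26, 0.20]))  # -> [0, 2]
```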
3.3. RESULTS
The convergence graph on the left in figure 4 depicts the average of the current best values found by the solvers relative to the number of evaluations. DIRECT is the
worst-performing solver and shows little improvement because its recursive subdi-
vision proceeds too slowly in the forty dimensions corresponding to the variables.
(DIRECT tends to show better performance on lower dimensional problems (Wort-
mann & Nannicini 2016).) SA performs relatively poorly, while the remaining
metaheuristics, including the Pareto-based HypE, perform similarly and improve
the objective by around 40%. Opossum’s RBFOpt is the best-performing solver
with an improvement of 50%. Note RBFOpt’s rapid progress after 40 evaluations:
Here the algorithm starts profiting from the surrogate model, while the earlier eval-
uations sample the objective function with a quasi-random Latin Hypercube De-
sign. The box plot on the right in figure 4 indicates the range of objective values
found by the solvers in ten runs. DIRECT is fully deterministic. Of the remain-
ing algorithms, RBFOpt is the most stable, with the single-objective metaheuristics
displaying wider, less stable ranges.
Figure 4. The convergence graph on the left displays the number of function evaluations on the x-axis and the average objective value on the y-axis. The box plot on the right indicates the range of objective values found by the six solvers in ten runs.
In figure 5, RBFOpt and HypE have found the closest approximations of the
Pareto front. The diagonal fronts indicate a tradeoff between maximizing day-
light and minimizing glare, although large improvements of daylight quality can
be achieved by accepting small increases in glare. (Note that low average DGP
values can contain isolated instances of disturbing or intolerable glare.) This im-
provement is especially noticeable for RBFOpt: It finds high-daylight solutions
that also suffer less glare than the next-best daylight solutions found by other solvers.
HypE suggests a steep tradeoff between daylight and glare, while RBFOpt indi-
cates that the tradeoff is less dramatic.
Figure 5. Pareto fronts found during each solver's most representative run. "Best" is the combined front from all solvers and runs (the markers' color indicates the solver). UDI is indicated on the x-axis and average DGP on the y-axis.
4. Conclusion
We have presented a clear example of a time-intensive, simulation-based ADO
problem that benefits from model-based optimization. Although algorithmic per-
formance is problem-dependent, Opossum’s RBFOpt is the best choice when the
evaluation budget is small. The comparison with a Pareto-based algorithm indi-
cates that, as a single-objective algorithm, HypE performs similarly to other meta-
heuristics. But the single-objective RBFOpt finds the closest approximation of
the Pareto front, especially for high-daylight solutions. Designers should thus em-
ploy multi-objective optimization judiciously and only when a large evaluation
budget is available to avoid an inaccurate approximation of the Pareto front. The
future development of Opossum will aim to make it more visual and interactive,
following the ideas outlined in (Wortmann 2017). Another direction is to support
Pareto-based optimization.
References
Akhtar, T. and Shoemaker, C.A.: 2016, Multi objective optimization of computationally ex-
pensive multi-modal functions with RBF surrogates and multi-rule selection, J Glob Optim,
64(1), 17-32.
Asl, M.R., Stoupine, A., Zarrinmehr, S. and Yan, W.: 2015, Optimo: A BIM-based Multi-
Objective Optimization Tool, Proceedings of eCAADe 2015, Vienna, AUT, 673-682.
Attia, S., Hamdy, M., O’Brien, W. and Carlucci, S.: 2013, Assessing gaps and needs for inte-
grating building performance optimization tools in net zero energy buildings design, Energy
Build., 60, 110-124.
Bader, J. and Zitzler, E.: 2008, HypE: An algorithm for fast hypervolume-based many-objective
optimization, TIK-Report 286, ETH Zurich.
Cichocka, J., Browne, W. and Rodriguez, E.: 2015, Evolutionary Optimization Processes as
Design Tools, Proceedings of the 31st International PLEA Conference, Bologna, IT.
Conn, A., Scheinberg, K. and Vicente, L.: 2009, Introduction to Derivative-Free Optimization,
Society for Industrial and Applied Mathematics, Philadelphia, PA.
Costa, A. and Nannicini, G.: 2014, RBFOpt: an open-source library for black-box optimization
with costly function evaluations, Optimization Online 4538.
Evins, R.: 2013, A review of computational optimisation methods applied to sustainable build-
ing design, Renew. Sustainable Energy Rev., 22, 230-245.
Gutmann, H.M.: 2001, A Radial Basis Function Method for Global Optimization, J Glob Optim,
19(3), 201-227.
Hamdy, M., Nguyen, A.T. and Hensen, J.L.M.: 2016, A performance comparison of multi-
objective optimization algorithms for solving nearly-zero-energy-building design problems,
Energy Build., 121, 57-71.
Holmström, K., Quttineh, N.H. and Edvall, M.M.: 2008, An adaptive radial basis algorithm
(ARBF) for expensive black-box mixed-integer constrained global optimization, Optim Eng,
9(4), 311-339.
Imbert, F., Frost, K.S., Fisher, A., Witt, A., Tourre, V. and Koren, B.: 2013, Concurrent Geo-
metric, Structural and Environmental Design: Louvre Abu Dhabi, AAG 2012, 77-90.
Jakubiec, J.A. and Reinhart, C.F.: 2011, DIVA 2.0: Integrating daylight and thermal simulations
using Rhinoceros 3D, Daysim and EnergyPlus, IBPSA 2011, Sydney, AUS, 2202-2209.
Johnson, S.G.: 2010, “The NLopt nonlinear-optimization package”. Available from <http://ab-initio.mit.edu/nlopt>.
Jones, D.R., Perttunen, C.D. and Stuckman, B.E.: 1993, Lipschitzian optimization without the
Lipschitz constant, J Optimiz Theory, 79(1), 157-181.
Knowles, J.: 2006, ParEGO: a hybrid algorithm with on-line landscape approximation for ex-
pensive multiobjective optimization problems, IEEE Trans. Evolut. Comput., 10(1), 50-66.
Koziel, S., Ciaurri, D.E. and Leifsson, L.: 2011, Surrogate-Based Methods, in S. Koziel and X.S.
Yang (eds.), Computational Optimization, Methods and Algorithms, Springer, Heidelberg.
Loshchilov, I. and Glasmacher, T.: 2015, “Black-Box Optimization Competition”. Available
from <bbcomp.ini.rub.de> (accessed 14 February 2017).
Nocedal, J. and Wright, S.J.: 2006, Numerical optimization, Springer, New York.
Radford, A.D. and Gero, J.S.: 1980, On Optimization in Computer Aided Architectural Design,
Build Environ, 15, 73-80.
Regis, R.G. and Shoemaker, C.A.: 2007, A Stochastic Radial Basis Function Method for the
Global Optimization of Expensive Functions, INFORMS J Comput, 19(4), 497-509.
Rios, L.M. and Sahinidis, N.V.: 2013, Derivative-free optimization: a review of algorithms and
comparison of software implementations, J Glob Optim, 56(3), 1247-1293.
Rutten, D.: 2010, “Evolutionary Principles applied to Problem Solving”. Available from <www.grasshopper3d.com/profiles/blogs/evolutionary-principles>.
Singh, S. and Kensek, K.: 2014, Early design analysis using optimization techniques in de-
sign/practice, ASHRAE Conference Proceedings, Atlanta, GA.
Talbi, E.: 2009, Metaheuristics: from design to implementation, John Wiley & Sons, Hoboken,
N.J.
Vierlinger, R.: 2013, Multi Objective Design Interface, Master’s Thesis, TU Wien.
Wortmann, T.: 2017, Surveying design spaces with performance maps, IJAC, 15(1).
Wortmann, T., Costa, A., Nannicini, G. and Schroepfer, T.: 2015, Advantages of Surrogate
Models for Architectural Design Optimization, AIEDAM, 29(4), 471-481.
Wortmann, T. and Nannicini, G.: 2016, Black-box optimization for architectural design: An
overview and quantitative comparison of metaheuristic, direct search, and model-based op-
timization methods, Proceedings of CAADRIA 2016, Melbourne, AU, 177-186.
Yang, D., Sun, Y., Stefano, D.d., Turrin, M. and Sariyildiz, S.: 2016, Impacts of problem scale
and sampling strategy on surrogate model accuracy, 2016 IEEE CEC, 4199-4207.