Generation, Exploration and Optimisation - Volume 2 - Computation and Performance - eCAADe 31
Evo-Devo in the Sky
Patrick Janssen
National University of Singapore, Singapore
Abstract. Designers interested in applying evo-devo-design methods for performance based multi-objective design exploration have typically faced two main hurdles: it's too hard and too slow. An evo-devo-design method is proposed that effectively overcomes the hurdles of skill and speed by leveraging two key technologies: computational workflows and cloud computing. In order to tackle the skills hurdle, Workflow Systems are used that allow users to define computational workflows using visual programming techniques. In order to tackle the speed hurdle, cloud computing infrastructures are used in order to allow the evolutionary process to be parallelized. We refer to the proposed method as Evo-Devo In The Sky (EDITS). This paper gives an overview of both the EDITS method and the implementation of a software environment supporting the EDITS method. Finally, a case-study is presented of the application of the EDITS method.
Keywords. Evolutionary algorithms; multi-objective optimisation; workflow system; cloud computing; parametric modelling.
Evolutionary design is loosely based on the neo-Dar-
winian model of evolution through natural selection
(Frazer, 1995). A population of individuals is main-
tained and an iterative process applies a number of
evolutionary procedures that create, transform, and
delete individuals in the population.
Evo-devo-design differs from other types of evolutionary approaches with regards to the complexity of both the developmental procedure and the evaluation procedures. The developmental procedure generates design variants using the genes in the genotype (Kumar and Bentley, 1999). The evaluation procedures evaluate design variants with respect to certain performance metrics. These procedures will typically rely on existing stand-alone programs, including Visual Dataflow Modelling (VDM) systems and simulation programs (Janssen and Chen, 2011; Janssen et al., 2011). In many cases, these systems may be computationally expensive and slow to execute.
Designers interested in applying evo-devo-
design methods for performance based multi-ob-
jective design exploration have typically faced two
main hurdles: skill and speed (i.e. “it’s too hard and
too slow!”). From a skills perspective, the require-
ment for advanced interoperability engineering and
software programming skills is often too demand-
ing for designers. From the speed perspective, the
requirement for processing large numbers of design
variants can lead to excessively long execution times
(often taking weeks to complete).
Previous research has demonstrated how these hurdles can be overcome using a VDM procedural modelling software called SideFX Houdini (Janssen and Chen, 2011). Firstly, a number of simulation programs were embedded within this VDM system, thereby allowing designers to define development and evaluation procedures without requiring any programming. Secondly, the evolutionary algorithm was executed using a distributed environment, thereby allowing the computational execution to be parallelized.
Although the research demonstrated how the challenges of skill and speed could be overcome, the solution was specific to the software tools being used, in particular SideFX Houdini. Furthermore, for most designers, the proposed approach remained problematic due to the fact that they do not have access to computing grids. This paper proposes a generalized method for evo-devo-design that overcomes these limitations. The method uses two key technologies: computational workflows and cloud computing. In order to tackle the skill hurdle, computational workflow management systems are used, called Scientific Workflow Systems (Altıntaş, 2011; Deelman et al., 2008). In order to tackle the speed hurdle, readily available cloud computing infrastructure is used. We refer to the proposed method as Evo-Devo In The Sky (EDITS).
The next section will focus on the proposed EDITS method, followed by a section describing the implementation of a prototype EDITS environment. The final section will briefly present a demonstration of how the method and environment can be used.
An EDITS design method is proposed that overcomes the hurdles of skill and speed in a generalized way that is not linked to specific proprietary software applications.
The EDITS method enables users to run a pop-
ulation-based evo-devo design exploration process.
This requires four computational tasks to be gener-
ated that will automatically be executed when the
evolutionary process is run: initialisation, growth,
feedback, and termination. The initialisation and ter-
mination tasks are executed at the start and end of
the evolutionary process respectively, and perform
various ‘housekeeping’ procedures. In addition, the
initialisation task also creates the initial population
of design variants.
The growth and feedback tasks are used to pro-
cess design variants in the population. The growth
task will take in just a single individual with a genotype and will generate a phenotype and a set of performance scores for that individual. (In the proposed method, the processes of development and evaluation are thus defined as a single growth workflow.)
The feedback task will take in a pool of fully-eval-
uated individuals and based on a ranking of those
individuals will kill some and will select some for
generating new children. With just these two tasks, a huge variety of evolutionary algorithms can easily be specified. For example, if the pool size for the feedback is equal to the population size, then a generational evolutionary algorithm will result, while if the pool size is much smaller than the population size, a steady-state evolutionary algorithm will result.
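The feedback task described above can be sketched as follows. This is a minimal illustration rather than the EDITS implementation: individuals are represented as plain lists of real-valued genes, and `rank_fn` stands in for whichever ranking algorithm the user selects.

```python
import random

def feedback(pool, rank_fn, mutation_p=0.01):
    """One feedback step: rank a pool of fully evaluated individuals,
    kill the two worst, and breed the two best into two children."""
    ranked = rank_fn(pool)                 # best first
    parents, killed = ranked[:2], ranked[-2:]
    a, b = parents
    cut = random.randrange(1, len(a))      # one-point crossover
    children = [a[:cut] + b[cut:], b[:cut] + a[cut:]]
    for child in children:                 # per-gene mutation
        for i, _ in enumerate(child):
            if random.random() < mutation_p:
                child[i] = random.random()
    return killed, children
```

If the pool passed in is the whole population, the step behaves generationally; with a small pool, repeated application yields a steady-state algorithm.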
The rst hurdle that EDITS must address is the
skills hurdle. The initialisation, feedback, and termi-
nation tasks are highly standardized and can there-
fore be generated automatically based on a set of
user-dened parameters. The growth task on the
other hand is highly problem-specic and there-
fore needs to be dened by the user. In order to
overcome the skill hurdle, the EDITS method uses
a Workow System for dening these tasks. Work-
ow Systems allow users to create computational
procedures using a visual dataow programming.
Users are presented with a canvas for diagramming
workows as nodes and wires, where tools are rep-
resented as a nodes, and data links as wires.
Furthermore, this approach can also be used to flexibly link together existing design tools such as CAD and simulation programs. Interoperability issues can be overcome by using data mappers, whereby the output data from one tool may be linked to the input data of another tool via a set of data transform, aggregation, and compensation procedures. This approach therefore allows parametric modelling tools to be linked to simulation tools through an external coupling, which affords the user greater flexibility in tool choice and linking options.
The second hurdle to be overcome is the speed
hurdle. The evolutionary process consists of a con-
tinuous process of extracting individuals from the
population, processing them with the growth and
feedback tasks, and inserting the updated and new
individuals back into the population. Since the tasks
are independent from one another, they can easily
be parallelized. Cloud computing infrastructures al-
low users to have access to computing grids on an
on-demand basis at a low cost and can therefore be
used to enable such parallelization. In the proposed
EDITS method, cloud computing is used for distrib-
uting the execution of both the growth and feed-
back tasks.
In order to demonstrate the EDITS method, a prototype EDITS environment has been implemented. Three key types of software are used: a distributed execution environment called Dexen, a workflow system called VisTrails, and a set of design tools, such as CAD and simulation programs.
Dexen is a highly generic Distributed Execution
Environment for running complex computa-
tional jobs on grid computing infrastructures,
previously developed by the author (Janssen et
al., 2011). Dexen uses a data-driven execution
model, where tasks are automatically execut-
ed whenever the right type of data becomes
available. Dexen consists of three main com-
ponents: the Dexen Client provides a graphi-
cal user interface for managing jobs and tasks;
the Dexen Server manages the population and
orchestrates the execution of jobs; and Dexen
Workers execute the tasks.
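Dexen's data-driven execution model can be illustrated with a small in-memory sketch, where hypothetical dictionaries stand in for the database records of individuals.

```python
def ready_tasks(population, pool_size):
    """Data-driven scheduling sketch: a growth task fires for every
    individual holding a genotype but no scores; a feedback task fires
    once enough fully evaluated individuals have accumulated."""
    tasks = [("growth", ind) for ind in population
             if "genotype" in ind and "scores" not in ind]
    evaluated = [ind for ind in population if "scores" in ind]
    if len(evaluated) >= pool_size:
        tasks.append(("feedback", evaluated[:pool_size]))
    return tasks
```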
VisTrails is an open-source workflow system that allows users to visually define computational workflows (Callahan et al., 2006). VisTrails uses a dataflow execution model that is well-suited to the types of procedures that need to be defined. It also provides good support for integrating existing programs. VisTrails can be used in one of two modes: in interactive mode, VisTrails provides a graphical user interface for building workflows; in batch mode, VisTrails can be used to execute previously defined workflows without requiring any user interaction.
A set of design tools, including CAD tools (such
as Houdini or Blender) and simulation tools
(such as Radiance, EnergyPlus, and Calculix).
(Other popular commercial CAD tools could also be integrated with this environment. However, due to inflexible licensing policies, it is currently difficult to deploy such tools in the cloud.) The CAD tools can typically run either in interactive mode or in batch mode, while the simulation programs run only in batch mode, with all interaction being restricted to text-based input and output files.
The EDITS environment is delivered as a cloud-based service. Cloud computing can deliver services to the user at a number of different levels, ranging from delivering computing infrastructure to delivering fully functional software (Rimal et al., 2009).
These levels are typically divided into three catego-
ries: Infrastructure as a Service (IaaS), Platform as
a Service (PaaS) and Software as a Service (SaaS).
These levels can also build on one another.
The EDITS environment is divided into three lay-
ers, corresponding to IaaS, PaaS and SaaS, as shown
in Figure 1. For the base IaaS layer, the EDITS envi-
ronment uses Amazon EC2, which is a commercial
web service that allows users to rent virtual ma-
chines on which they run their own software. Ama-
zon provides a web-application where users can
manage their virtual machines, including starting
and stopping machines. The SaaS and PaaS layers
will be described in more detail below.
The SaaS layer
The SaaS layer consists of a number of graphical
tools for running EDITS jobs. Overall, there are four
main steps for the user: 1) starting the server, 2) cre-
ating the growth task, 3) executing the evolutionary
job, and 4) reviewing progress of the job.
Step 1 involves using the Amazon EC2 web application to start an EDITS server. This simply consists of logging onto the Amazon EC2 website with a standard browser, and then starting an Amazon instance. The operating system and software installed on a virtual machine is packaged as an Amazon Machine Image (AMI), and for EDITS a customized AMI has been created. This AMI is saved on the Amazon server, so it can simply be selected by the user from a list of options. The same server can be used for running multiple jobs.
In step 2, the user defines the growth task by creating a workflow with the VisTrails workflow system using a set of specially developed EDITS nodes. Figure 2 shows an example of such a workflow, consisting of a development procedure followed by three parallel evaluation procedures. The development procedure uses SideFX Houdini to generate the phenotype. The evaluation procedures use the Radiance, Calculix, and EnergyPlus simulation programs to generate performance scores. These procedures will be explained in more detail in the section describing the demonstration.
Step 3 involves executing the EDITS job. For the user, it would be convenient if this execution could be orchestrated from within the same VisTrails environment. However, since the EDITS job may take several hours to execute, it is preferable to interact with it in an asynchronous manner. The user should be able to start the EDITS job in the cloud and then reconnect with the running EDITS job intermittently in order to download the latest results. A plugin has therefore been implemented for VisTrails that adds an EDITS menu to the menu bar for starting EDITS jobs. When a new job is started, the user can select the growth workflow, and can specify a number of parameters, including population size, mutation and crossover probabilities, selection pool size, and the ranking algorithm. Once these parameters are set, a number of Python scripts required to run the job are automatically generated and uploaded to the server together with the growth workflow. The job will then start running automatically.
In step 4, the user connects to the EDITS job to review progress and analyse the data that is generated. Dexen has its own client application with a graphical user interface that allows users to get an overview of all the jobs that are running and to interrogate the execution of individual tasks in detail, providing information on execution time, crashes, error messages, and so forth. Data related to individual design variants can also be downloaded. However, downloading and viewing design variants one at a time can be tedious and error prone. In order to streamline this process, a set of VisTrails EDITS nodes have been created for downloading data and design variants directly from the server running in the cloud. These nodes can for example be used to create a workflow that first downloads the performance scores of all design variants and then selects a subset of these design variants for display to the user. VisTrails provides a visual spreadsheet that can be used to simultaneously display 3D models of multiple design variants (Figure 5).
Figure 1: The three layers of the EDITS environment.
The PaaS Layer
The PaaS layer builds on top of the Amazon EC2 IaaS layer, by defining an AMI for the EDITS Platform. A customised AMI was created for EDITS with all necessary software preinstalled and all settings preconfigured. The EDITS AMI includes the base operating system, together with Dexen, VisTrails, and a set of commonly used CAD and simulation programs.
The software used for orchestrating distributed execution of the EDITS job is Dexen. When the EDITS server is started on EC2, Dexen will be automatically started and all the other required software will be configured and available. The two main tasks that need to be executed are the growth and feedback tasks. Dexen maintains the population of individuals in a centralized database and will automatically schedule the execution of growth and feedback tasks. For the growth task, individuals are processed one at a time. For the feedback task, individuals are processed in pools of individuals, selected randomly from all fully evaluated individuals in the population. Each time either a growth or feedback task needs to be executed, Dexen will extract the individuals from the database, and send them to an available Dexen worker for processing. Once the worker has completed the task, the updated and/or new individuals will be retrieved and reinserted back into the population database.
Figure 2: The EDITS growth workflow in the VisTrails environment.
The Python scripts for the initialisation, growth, feedback, and termination tasks are automatically generated by EDITS. The growth task is the most complex due to the various layers that are involved. The task has a nested 'Russian Doll' structure, consisting of a cascade of invocations three layers deep, as shown in Figure 3. The outer layer consists of the Python script. When this script is executed, it will invoke VisTrails Batch Mode in order to execute the workflow. Since this workflow may contain numerous nodes that link to other design tools such as CAD and simulation programs, VisTrails will then invoke these design tools. For the end-user, the complexity of the growth task is hidden, since they are only required to create the VisTrails workflow.
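The outer layer of this cascade might be sketched as follows. The command-line flags and file names shown here are illustrative assumptions, not the actual VisTrails or EDITS invocation; consult the VisTrails documentation for the exact batch-mode syntax.

```python
import subprocess
import sys

def batch_command(workflow_file, genotype_file, scores_file):
    """Assemble a batch-mode invocation for the middle layer.
    (Flags and the alias syntax are illustrative, not definitive.)"""
    return [sys.executable, "vistrails.py", "-b", workflow_file,
            "-a", "genotype=%s;scores=%s" % (genotype_file, scores_file)]

def run_growth(workflow_file, genotype_file, scores_file):
    """Outer 'Russian Doll' layer: run VisTrails in batch mode, which
    in turn invokes the CAD and simulation tools referenced by the
    workflow's nodes, and report where the scores were written."""
    result = subprocess.run(batch_command(workflow_file, genotype_file,
                                          scores_file))
    if result.returncode != 0:
        raise RuntimeError("growth workflow failed")
    return scores_file
```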
As a demonstration of the EDITS approach, the design for a complex residential apartment building is evolved. The case study experiment is based on the design of the Interlace by OMA. The design consists of thirty-one apartment blocks, each six stories tall. The blocks are stacked in an interlocking brick pattern, with voids between the blocks. Each stack of blocks is rotated around a set of vertical axes, thereby creating a complex interlocking configuration.
Each block is approximately 70 meters long by 16.5 meters wide, with two vertical axes of rotation spaced 45 meters apart. The axes of rotation coincide with the location of the vertical cores of the building, thereby allowing for a single vertical core to connect blocks at different levels. The blocks are almost totally glazed, with large windows on all four facades. In addition, blocks also have a series of balconies, both projecting out from the facade and inset into the facade. The initial configuration, shown in Figure 4, is based on the original design by OMA. The blocks are arranged into 22 stacks of varying height, and the stacks are then rotated into a hexagonal pattern constrained within the site boundaries. At the highest point, the blocks are stacked four high.
Figure 3: The software layers involved in executing the growth task. The workflow, highlighted in grey, is the only layer that needs input from the end-user.
For the case study, new configurations of these 31 blocks were sought that optimise certain performance measures. For the new configurations, the size and number of blocks will remain the same, but the way that they are stacked and rotated can differ. A VisTrails growth workflow was defined that performed both development and three evaluations. The workflow shown in Figure 2 was developed for this demonstration.
Growth workow: design development
For the procedural modelling of phenotypes, SideFX
Houdini was used. For the genotype to phenotype
mapping, an encoding technique was developed
called decision chain encoding (Janssen and Kaushik,
2013). At each decision point in the modelling pro-
cess, a set of rules is used to generate, lter, and se-
lect valid options for the next stage of the modelling
process. The generate step uses the rules to create a
set of options. The lter step discards invalid options
that contravene constraints. The select step chooses
one of the valid options. In order to minimise the
complexity of the modelling process, options are
generated in skeletal form with a minimum amount
of detail. The full detailed model is then generated
only at the end, once the decision chain has nished
In the decision chain encoding process, the placement of each of the 31 blocks is defined as a decision point. The process places one block at a time, starting with the first block on the empty site. At each decision point, a set of rules is used to generate, filter, and select possible positions for the next block. Each genotype has 32 genes, all real values in the range [0, 1]. In the generation step, possible positions for the next block will be created using a few simple rules. First, locations are identified, and second, orientations for each location are identified. The locations are always defined relative to the existing blocks already placed, and could be either on top of or underneath those blocks. The orientations are then generated in 15° increments in a 180° sweep perpendicular to either end of the existing block. In the filtering step, constraints relating to proximity between blocks and proximity to the site boundary are applied, thereby ensuring that only the valid positions remain. In the selection step, the decision gene in the genotype chooses one of the valid block positions.
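The generate-filter-select loop can be sketched as follows, with hypothetical callables standing in for the two rule sets; the essential point is how a real-valued gene indexes into whatever options survive filtering.

```python
def select_option(gene, options):
    """Selection step: a real-valued gene in [0, 1] picks one of the
    valid options remaining after generation and filtering."""
    index = min(int(gene * len(options)), len(options) - 1)
    return options[index]

def decision_chain(genotype, generate, keep_valid):
    """Decision-chain sketch: place one block per gene. `generate`
    proposes candidate positions relative to the blocks already
    placed; `keep_valid` applies the proximity and boundary
    constraints. Both are stand-ins for the rule sets in the text."""
    placed = []
    for gene in genotype:
        options = keep_valid(generate(placed), placed)
        placed.append(select_option(gene, options))
    return placed
```

Because each gene selects among only the currently valid options, every genotype decodes to a constraint-satisfying configuration, which is the property that makes the encoding attractive for heavily constrained problems.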
Figure 4: The initial configuration based on the original design, consisting of 31 blocks in 22 stacks of varying heights.
The resulting phenotypes consist of simple polygonal models. Three separate files are generated, one for each of the simulations. These models represent different sub-sets of information relating to the same design variant. These sub-sets of information are selected in order to match the data requirements of the simulation programs. In order to facilitate the data mapping, custom attributes are defined for geometric elements in the model. For example, polygons may have attributes that define key characteristics, such as block (e.g. block1, block2), type (e.g. wall, floor, ceiling), and parent (e.g. the parent of the shade is the window; the parent of the window is the wall). These attributes are used by the mapping nodes in order to generate appropriate input files for the simulations. The geometry together with the attributes are saved as JSON files (i.e. simple text files).
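A fragment of such an attributed-geometry file might look as follows; the exact field layout is an assumption made for illustration, not the format used by EDITS.

```python
import json

# One attributed polygon, as it might appear in the JSON handed from
# the development procedure to the input mappers (layout illustrative).
polygon = {
    "points": [[0, 0, 0], [4, 0, 0], [4, 0, 3], [0, 0, 3]],
    "attributes": {
        "block": "block1",   # which apartment block the face belongs to
        "type": "window",    # e.g. wall, floor, ceiling, window, shade
        "parent": "wall3",   # the parent of a window is its wall
    },
}
serialized = json.dumps(polygon)  # saved as a simple text file
```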
Growth workow: design evaluations
For the multi-objective evaluation, three perfor-
mance criteria were dened: maximisation of day-
light, minimisation of structural strain, and minimi-
sation of cooling load. These performance criteria
have been selected in order to explore possible con-
icts. For example, if the blocks are clustered close
together the cooling load will decrease due to inter-
block shading but the daylight levels will also re-
duce. If the blocks are stacked higher, then they are
likely to get better daylight but they may become
less structurally stable. The three performance crite-
ria are calculated as follows:
Maximisation of daylight: An evaluation is defined that executes Radiance in order to calculate daylight levels on all windows under a cloudy overcast sky. The amount of light entering each window is then adjusted according to the visual transmittance of the glazing system for that window. The performance criterion is defined as the maximization of the total number of windows where the light entering the window is above a certain threshold level for reasonable visual comfort.
Minimisation of structural strain: An evaluation is defined that executes Calculix in order to calculate the global structural behaviour using Finite Element Analysis (FEA) under various loading conditions. In order to reduce the computational complexity, the building configuration is modelled in a simplified way, by grouping individual structural elements into larger wholes called super-elements (Guyan, 1965). The performance criterion is defined as the minimisation of the maximum strain within the structure.
Minimisation of cooling load: An evaluation is defined that executes EnergyPlus in order to calculate the cooling load required in order to maintain interior temperatures below a certain threshold for a typical schedule. In order to reduce the computational complexity, an ideal-load air system together with a simplified zoning model is used, and the simulation is run for periods of one week at the solstices and equinoxes. The performance criterion is defined as the minimisation of the average daily cooling load.
In Figure 2, the three workflow branches defining the evaluation procedures are shown. Each evaluation procedure includes two mapper nodes: an input mapper for generating the required input files, and an output mapper for generating the final performance score. These mapper nodes are currently implemented as Python scripts, but part of this research is the development of a graphical application for defining mapper nodes. See Janssen et al. (2013) for more details.
The input mappers transform the JSON files from the developmental procedure to the appropriate input files for the simulations. As well as the geometry information from these JSON files, the mappers also require other material information. The output mappers transform the raw simulation data into performance scores: for the Radiance data, the mapper calculates the number of windows below the daylight threshold; for Calculix, the mapper calculates the maximum strain in the structure; and, for EnergyPlus, the mapper calculates the average daily cooling load. These three evaluation scores are then provided as the final output of the growth task.
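For example, the Radiance output mapper could be sketched like this; the argument layout and default threshold are illustrative assumptions, since the real mapper parses Radiance's text output files.

```python
def daylight_score(window_lux, transmittances, threshold=300.0):
    """Output-mapper sketch for the Radiance branch: adjust the light
    entering each window by its glazing transmittance, then count the
    windows falling below the visual-comfort threshold (the score to
    be minimised)."""
    return sum(1 for lux, t in zip(window_lux, transmittances)
               if lux * t < threshold)
```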
When running the job, the population size was set to 200 and a simple asynchronous steady-state evolutionary algorithm was used. Each generation, 50 individuals were randomly selected from the population and ranked using multi-objective Pareto ranking. The two design variants with the lowest rank were killed, and the two design variants with the highest rank (rank 1) were used as parents for reproduction. Standard crossover and mutation operators for real-valued genotypes were used, with the mutation probability set to 0.01. Reproduction between pairs of parents results in two new children, thereby ensuring that the population size remains constant.
The evolutionary algorithm was run for a total of 10,000 births, taking approximately 8 hours to execute. The final non-dominated Pareto set for the whole population contained a range of design variants with differing performance trade-offs.
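The multi-objective Pareto ranking used here can be sketched as follows, a minimal illustration in which all objectives are treated as minimised (the maximised daylight objective would be negated beforehand):

```python
def dominates(a, b):
    """True if score vector `a` Pareto-dominates `b`: no worse on
    every objective and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_rank(scores):
    """Assign rank 1 to the non-dominated set, remove it, and repeat
    with the remainder until every individual is ranked."""
    ranks = [0] * len(scores)
    remaining = set(range(len(scores)))
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks
```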
A workow was created in order to retrieve and
display designs from the Pareto front. A selection of
design variants are shown in Figure 5.
For designers, the EDITS approach allows two key hurdles of skills and speed to be overcome. First, it overcomes the skills hurdle by allowing designers to define growth tasks as workflows using visual programming techniques. Second, it overcomes the speed hurdle by using cloud computing infrastructures to parallelize the evolutionary process. The demonstration case-study shows how the EDITS approach can be applied to a complex design scenario.
Future research will focus on the development of VisTrails data analytics nodes. This would allow users to create workflows to perform various types of analysis on the data generated by the evolutionary process, including hypervolume and clustering analysis.
Figure 5: A set of design variants shown in the visual spreadsheet tool within VisTrails.
References
Altıntaş, İ 2011, Collaborative Provenance for Workflow-driven Science and Engineering, PhD Thesis, University of
Callahan, S, Freire, J, Santos, E, Scheidegger, C, Silva, C and Vo, H 2006, 'VisTrails: Visualization Meets Data Management', Proceedings of the SIGMOD, Chicago.
Deelman, E, Gannon, D, Shields, M and Taylor, I 2008, 'Workflows and e-Science: An Overview of Workflow System Features and Capabilities', Future Generation Computer Systems, pp. 528–540.
Frazer, JH 1995, An Evolutionary Architecture, AA Publications, London, UK.
Guyan, RJ 1965, 'Reduction of Stiffness and Mass Matrices', AIAA Journal, 3(2), pp. 380–380.
Janssen, PHT, Basol, C and Chen, KW 2011, 'Evolutionary Developmental Design for Non-Programmers', Proceedings of the eCAADe Conference, Ljubljana, Slovenia.
Janssen, PHT, Chen, KW and Basol, C 2011, 'Iterative Virtual Prototyping: Performance Based Design Exploration', Proceedings of the eCAADe Conference, Ljubljana, Slovenia, pp. 253–260.
Janssen, PHT and Chen, KW 2011, 'Visual Dataflow Modelling: A Comparison of Three Systems', Proceedings of the CAAD Futures Conference, Liege, Belgium, pp. 801–816.
Janssen, PHT and Kaushik, V 2012, 'Iterative Refinement Through Simulation: Exploring Trade-offs Between Speed and Accuracy', Proceedings of the 30th eCAADe Conference, pp. 555–563.
Janssen, PHT and Kaushik, V 2013, 'Decision Chain Encoding: Evolutionary Design Optimization with Complex Constraints', Proceedings of the 2nd EvoMUSART Conference, pp. 157–167.
Janssen, PHT, Stouffs, R, Chaszar, A, Boeykens, S and Toth, B, 'Data Transformations in Custom Digital Workflows: Property Graphs as a Data Model for User-Defined Mappings', Intelligent Computing in Engineering Conference - ICE2012, pp. 1–10.
Kumar, S and Bentley, PJ 1999, 'The ABCs of Evolutionary Design: Investigating the Evolvability of Embryogenies for Morphogenesis', Proceedings of the GECCO.
Rimal, BP, Eunmi, C and Lumb, I 2009, 'A Taxonomy and Survey of Cloud Computing Systems', Proceedings of the INC, IMS and IDC Joint Conference, pp. 25–27.
During the early stages of design exploration, competing design strategies are typically considered. This chapter presents a design method, supported by a novel type of evolutionary algorithm, that maintains a heterogeneous population of design variants based on competing design strategies. Each strategy defines its own search space of design variants, all sharing a common generative concept or idea. A population of design variants is evolved through a process of selection and variation. As evolution progresses, some design strategies will become extinct while others will gradually dominate the population. A demonstration is presented showing how a designer can explore competing strategies by running a series of iterative evolutionary searches. The evolutionary algorithm has been implemented on a cloud platform, thereby allowing populations design variants to be processed in parallel. This results in a significant reduction in computation time, allowing thousands of designs to be evolved in just a few minutes.
Simulation-Based Multi-Objective Optimization (SBMOO) methods are being increasingly used in conceptual architectural design. They mostly focus on the solving, rather than the re-formulation, of a Multi-Objective Optimization (MOO) problem. However, Optimization Problem Re-Formulation (Re-OPF) is necessary for treating ill-defined conceptual architectural design as an iterative exploration process. The paper proposes an innovative SBMOO method which builds in a dynamic and interactive Re-OPF phase. This Re-OPF phase, as the main novelty of the proposed method, aims at achieving a realistic MOO model (i.e., a parametric geometry-simulation model which includes important objectives, constraints, and design variables). The proposed method is applied to the conceptual design of a top-daylighting system, focusing on divergent concept generation. The integration of software tools Grasshopper and modeFRONTIER is adopted to support this application. The main finding from this application is that the proposed method can help to achieve quantitatively better and qualitatively more diverse Pareto solutions.
Conference Paper
Full-text available
Visual programming languages enable users to create computer programs by manipulating graphical elements rather than by entering text. The difference between textual languages and visual languages is that most textual languages use a procedural programming model, while most visual languages use a dataflow programming model. When visual programming is applied to design, it results in a new modelling approach that we refer to as 'visual dataflow modelling' (VDM). Recently, VDM has become increasingly popular within the design community, as it can accelerate the iterative design process, thereby allowing larger numbers of design possibilities to be explored. Furthermore, it is now also becoming an important tool in performance-based design approaches, since it may potentially enable the closing of the loop between design development and design evaluation. A number of CAD systems now provide VDM interfaces, allowing designers to define form-generating procedures without having to resort to scripting or programming. However, these environments have certain weaknesses that limit their usability. This paper will analyse these weaknesses by comparing and contrasting three VDM environments: McNeel Grasshopper, Bentley Generative Components, and Sidefx Houdini. The paper will focus on five key areas:
* Conditional logic allows rules to be applied to geometric entities that control how they behave. Such rules will typically be defined as if-then-else conditions, where an action will be executed if a particular condition is true. A more advanced version of this is the while loop, where the action within the loop will be repeatedly executed while a certain condition remains true.
* Local coordinate systems allow geometric entities to be manipulated relative to some convenient local point of reference. These systems may be either two-dimensional or three-dimensional, using either Cartesian, cylindrical, or spherical systems. Techniques for mapping geometric entities from one coordinate system to another also need to be considered.
* Duplication includes three types: simple, endogenous, and exogenous duplication. Simple duplication consists of copying some geometric entity a certain number of times, producing identical copies of the original. Endogenous duplication consists of copying some geometric entity by applying a set of transformations that are defined as part of the duplication process. Lastly, exogenous duplication consists of copying some geometric entity by applying a set of transformations that are defined by some other external geometry.
* Part-whole relationships allow geometric entities to be grouped in various ways, based on the fundamental set-theoretic concept that entities can be members of sets, and sets can be members of other sets. Ways of aggregating data into both hierarchical and non-hierarchical structures, and ways of filtering data based on these structures, need to be considered.
* Spatial queries include relationships between geometric entities such as touching, crossing, overlapping, or containing. More advanced spatial queries include various distance-based queries, sorting queries (e.g. sorting all entities based on position), and filtering queries (e.g. finding all entities within a certain distance of a point).
For each of these five areas, a simple benchmarking test case has been developed. For example, for conditional logic, the test case consists of a simple room with a single window, with the condition that the window should always be in the longest north-facing wall. If the room is rotated or its dimensions changed, then the window must re-evaluate itself and possibly change position to a different wall. For each benchmarking test case, visual programs are implemented in each of the three VDM environments. The visual programs are then compared and contrasted, focusing on two areas. First, the types of constructs used in each of these environments are compared and contrasted. Second, the cognitive complexity of the visual programming task in each of these environments is compared and contrasted.
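The conditional-logic benchmark (keep the window in the longest north-facing wall) can also be expressed outside any VDM environment. The following is a hypothetical Python sketch, assuming the room footprint is a counter-clockwise polygon in the XY plane and that north is the +Y direction; the example footprint is invented.

```python
import math

# Walls are the edges of a counter-clockwise room footprint.
def longest_north_wall(footprint):
    best, best_len = None, 0.0
    n = len(footprint)
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        # For a CCW polygon, the outward normal of edge (dx, dy) is (dy, -dx);
        # the wall faces north when that normal has a positive Y component.
        if -dx > 1e-9:
            length = math.hypot(dx, dy)
            if length > best_len:
                best, best_len = ((x1, y1), (x2, y2)), length
    return best

# Rotating the room or changing its dimensions simply re-runs the rule.
room = [(0, 0), (6, 0), (6, 4), (2, 4), (2, 6), (0, 6)]
print(longest_north_wall(room))
```

In a dataflow environment the same rule would be wired up as a filter-and-sort network downstream of the footprint geometry, re-evaluating automatically whenever the room changes.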
This paper proposes a digitally enhanced type of performance driven design method. In order to demonstrate this method, a design environment is presented that links the SideFx Houdini modelling and animation program to the Radiance and EnergyPlus simulation programs. This environment allows designers to explore large numbers of design variants using a partially automated iterative process of design development, design evaluation, and design feedback.
Performance-based design approaches typically use iterative simulation as a way of exploring design variants. For such approaches, the speed of execution of the simulations is critical to enabling a fluid and interactive design process. This research proposes an iterative simulation design method where simulations are configured to run in two modes: in fast mode, simulations produce less accurate results but due to their speed can be applied successfully within an iterative refinement process; in slow mode, the simulations produce more accurate results that can be used to verify the performance improvements achieved using the iterative refinement process. A case study is presented where the proposed method is used to explore performance improvements related to levels of incident illuminance and incident irradiance on windows.
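The two-mode idea can be sketched as follows. The evaluators below are invented placeholders standing in for fast- and slow-mode simulation runs (e.g. coarse versus fine Radiance/EnergyPlus settings), and the refinement loop is a simple stochastic hill climb rather than the paper's actual design process.

```python
import random

def fast_eval(x):
    # Quick but approximate: true objective plus simulated model error.
    return -(x - 3.0) ** 2 + random.uniform(-0.05, 0.05)

def slow_eval(x):
    # Accurate but assumed expensive; used only to verify the result.
    return -(x - 3.0) ** 2

def refine(x0, steps=200, step_size=0.1):
    x, best = x0, fast_eval(x0)
    for _ in range(steps):
        cand = x + random.uniform(-step_size, step_size)
        score = fast_eval(cand)      # fast mode inside the loop
        if score > best:
            x, best = cand, score
    return x

random.seed(1)
x = refine(0.0)
print("refined design:", round(x, 2))
print("verified score:", round(slow_eval(x), 3))  # slow mode once at the end
```

The pattern matters because the cheap evaluator is called hundreds of times while the expensive one runs only once, which is what makes the refinement loop interactive.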
Evolutionary developmental design (Evo-Devo-Design) is a design method that combines complex developmental techniques with evolutionary optimisation techniques. In order to use such methods, the problem-specific developmental and evaluation procedures typically need to be defined using some kind of textual programming language. This paper reports on an alternative approach, in which designers can use Visual Dataflow Modelling (VDM) instead of textual programming. This research describes how Evo-Devo-Design problems can be defined using the VDM approach, and how they can subsequently be run using a Distributed Execution Environment (called Dexen) on multiple computers in parallel. A case study is presented, where the Evo-Devo-Design method is used to evolve designs for a house, optimised for daylight, energy consumption, and privacy.
A novel encoding technique is presented that allows constraints to be easily handled in an intuitive way. The proposed encoding technique structures the genotype-phenotype mapping process as a sequential chain of decision points, where each decision point consists of a choice between alternative options. In order to demonstrate the feasibility of the decision chain encoding technique, a case-study is presented for the evolutionary optimization of the architectural design for a large residential building.
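A minimal sketch of the decision-chain idea follows: the genotype is a list of floats in [0, 1), and each gene selects one option from whatever options remain feasible at that point in the chain. Because infeasible options are never offered, every genotype decodes to a valid design. The toy constraint (floor heights that must not exceed 30 m in total) is invented for illustration; the paper's case study decodes to a large residential building.

```python
def decode(genotype, initial_state, options_fn, apply_fn):
    # Walk the chain: at each decision point, the gene picks one of the
    # currently feasible options, and the state is updated accordingly.
    state = initial_state
    for gene in genotype:
        options = options_fn(state)              # only feasible choices
        choice = options[int(gene * len(options))]
        state = apply_fn(state, choice)
    return state

# Toy constraint: choose floor heights so the tower never exceeds 30 m.
def options_fn(state):
    remaining = 30 - sum(state)
    return [h for h in (3.0, 3.5, 4.0) if h <= remaining] or [0.0]

def apply_fn(state, choice):
    return state + [choice]

tower = decode([0.1, 0.9, 0.5, 0.99, 0.0], [], options_fn, apply_fn)
print(tower, "total:", sum(tower))
```

Standard mutation and crossover can then operate directly on the float vector, with no repair step, since constraint handling lives entirely inside the decoding chain.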
This paper describes the use of property graphs for mapping data between AEC software tools, which are not linked by common data formats and/or other interoperability measures. The intention of introducing this in practice, education and research is to facilitate the use of diverse, non-integrated design and analysis applications by a variety of users who need to create customised digital workflows, including those who are not expert programmers. Data model types are examined by way of supporting the choice of directed, attributed, multi-relational graphs for such data transformation tasks. A brief exemplar design scenario is also presented to illustrate the concepts and methods proposed, and conclusions are drawn regarding the feasibility of this approach and directions for further research.
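The graph type chosen above (directed, attributed, multi-relational) is straightforward to sketch. The node names, attributes, and edge types below are invented for illustration and are not taken from the paper's exemplar scenario.

```python
# Minimal directed, attributed, multi-relational graph using plain dicts.
# Nodes carry attribute dicts; edges are (source, relation, target) triples,
# so multiple differently-typed edges may link the same pair of nodes.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}      # node id -> attribute dict
        self.edges = []      # list of (src, relation, dst)

    def add_node(self, nid, **attrs):
        self.nodes[nid] = attrs

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def out(self, nid, relation):
        # Follow edges of one relation type from a node.
        return [d for s, r, d in self.edges if s == nid and r == relation]

# Hypothetical AEC fragment: a wall modelled in one tool, queried for another.
g = PropertyGraph()
g.add_node("wall1", height=3.0, material="concrete")
g.add_node("window1", width=1.2)
g.add_node("space1", use="office")
g.add_edge("wall1", "hosts", "window1")
g.add_edge("wall1", "bounds", "space1")

print(g.out("wall1", "hosts"))
```

A data-transformation task between two AEC tools would then be expressed as traversals and rewrites over such triples, rather than as bespoke file-format conversion code.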
Scientific workflow systems have become a necessary tool for many applications, enabling the composition and execution of complex analyses on distributed resources. Today there are many workflow systems, often with overlapping functionality. A key issue for potential users of workflow systems is the need to be able to compare the capabilities of the various available tools. There can be confusion about system functionality, and the tools are often selected without a proper functional analysis. In this paper we extract a taxonomy of features from the way scientists make use of existing workflow systems, and we illustrate this feature set by providing some examples taken from existing workflow systems. The taxonomy provides end users with a mechanism by which they can assess the suitability of workflow systems in general, and how they might use these features to make an informed choice about which workflow system would be a good choice for their particular application.
The computational world is becoming very large and complex. Cloud computing has emerged as a popular computing model to support processing large volumes of data using clusters of commodity computers. According to J. Dean and S. Ghemawat [1], Google currently processes over 20 terabytes of raw web data; such large-scale data processing reflects years of distributed-computing fine-tuning applied to today's large problems. Cloud computing has evolved to handle such massive data as an on-demand service, and the computational world is increasingly opting for pay-per-use models. Hype and discussion aside, however, there remains no concrete definition of cloud computing. In this paper, we first develop a comprehensive taxonomy for describing cloud computing architecture. We then use this taxonomy to survey several existing cloud computing services developed by projects world-wide, such as Google and Amazon. We use the taxonomy and survey results not only to identify similarities and differences in the architectural approaches to cloud computing, but also to identify areas requiring further research.
In "An Evolutionary Architecture", John Frazer presents an overview of his work over the past 30 years, attempting to develop a theoretical basis for architecture using analogies with nature's processes of evolution and morphogenesis. Frazer's vision of the future of architecture is to construct organic buildings: thermodynamically open systems which are more environmentally aware and sustainable physically, sociologically and economically. The range of topics which Frazer discusses is a good illustration of the breadth and depth of the evolutionary design problem.

Environmental Modelling. One of the first topics dealt with is the importance of environmental modelling within the design process. Frazer shows how environmental modelling is often misused or misinterpreted by architects, with particular reference to solar modelling. From the discussion given, it would seem that simplification of the environmental models is the prime culprit resulting in misinterpretation and misuse. The simplifications are understandable given the amount of information needed for accurate modelling. By simplifying the model of the environmental conditions, the architect is able to make informed judgements within reasonable amounts of time and effort. Unfortunately, the simplifications result in errors which compound and cause the resulting structures to fall short of their anticipated performance. Frazer obviously believes that the computer can be a great aid in the harnessing of environmental modelling data, provided that the same simplifying assumptions are not made and that better models and interfaces are possible.

Physical Modelling. Physical modelling has played an important role in Frazer's research, leading to the construction of several novel machine-readable interactive models, ranging from lego-like building blocks to beermat cellular automata and wall partitioning systems. Ultimately this line of research has led to the Universal Constructor and the Universal Interactor.
The Universal Constructor. The Universal Constructor features on the cover of the book. It consists of a base plug-board, called the "landscape", on top of which "smart" blocks, or cells, can be stacked vertically. The cells are individually identified and can communicate with neighbours above and below. Cells communicate with users through a bank of LEDs displaying the current state of the cell. The whole structure is machine readable and so can be interpreted by a computer. The computer can interpret the states of the cells as either colour or geometrical transformations, allowing a wide range of possible interpretations. The user interacts with the computer display through direct manipulation of the cells. The computer can communicate with, and even direct the actions of, the user through feedback with the cells to display various states. The direct manipulation of the cells encourages experimentation by the user and demonstrates basic concepts of the system.

The Universal Interactor. The Universal Interactor is a whole series of experimental projects investigating novel input and output devices. All of the devices speak a common binary language and so can communicate through a mediating central hub. The result is that input from, say, a body-suit can be used to drive the output of a sound system, or vice versa. The Universal Interactor opens up many possibilities for expression when using a CAD system that may at first seem very strange. However, some of these feedback systems may prove superior in the hands of skilled technicians to more standard devices. Imagine how a musician might be able to devise structures by playing melodies which express their character. Of course, the interpretation of input in this form poses a difficult problem which will take a great deal of research to solve. The Universal Interactor has been used to provide environmental feedback to affect the development of evolving genetic codes.
The feedback given by the Universal Interactor has been used to guide the selection of individuals from a population.

Adaptive Computing. Frazer completes his introduction to the range of tools used in his research by giving a brief tour of adaptive computing techniques, covering topics including cellular automata, genetic algorithms, classifier systems and artificial evolution.

Cellular Automata. As previously mentioned, Frazer has done some work using cellular automata in both physical and simulated environments. Frazer discusses how surprisingly complex behaviour can result from the simple local rules executed by cellular automata. Cellular automata are also capable of computation, in fact able to perform any computation possible by a finite state machine. Note that this does not mean that cellular automata are capable of any general computation, as this would require the construction of a Turing machine, which is beyond the capabilities of a finite state machine.

Genetic Algorithms. Genetic algorithms were first presented by Holland and have since become an important tool for many researchers in various areas. They were originally developed for problem-solving and optimisation problems with clearly stated criteria and goals. Frazer fails to mention one of the most important differences between genetic algorithms and other adaptive problem-solving techniques, e.g. neural networks: genetic algorithms have the advantage that criteria can be clearly stated and controlled within the fitness function. The learning by example which neural networks rely upon does not afford this level of control over what is to be learned.

Classifier Systems. Holland went on to develop genetic algorithms into classifier systems. Classifier systems are more focussed upon the problem of learning appropriate responses to stimuli than upon searching for solutions to problems. Classifier systems receive information from the environment and respond according to rules, or classifiers.
Successful classifiers are rewarded, creating a reinforcement learning environment. Obviously, the mapping between classifier systems and the cybernetic view of organisms sensing, processing and responding to environmental stimuli is strong. It would seem that a central process similar to a classifier system would be appropriate at the core of an organic building, learning appropriate responses to environmental conditions over time.

Artificial Evolution. Artificial evolution traces its roots back to the Biomorph program described by Dawkins in his book "The Blind Watchmaker". Essentially, artificial evolution requires that a user supplements the standard fitness function in genetic algorithms to guide evolution. The user may provide selection pressures which are unquantifiable in a stated problem, and thus provide a means for dealing with ill-defined criteria. Frazer notes that solving problems with ill-defined criteria using artificial evolution seriously limits the scope of problems that can be tackled: the reliance upon user interaction in artificial evolution reduces the practical size of populations and the duration of evolutionary runs.

Coding Schemes. Frazer goes on to discuss the encoding of architectural designs and their subsequent evolution, introducing two major systems: the Reptile system and the Universal State Space Modeller.

Blueprint vs. Recipe. Frazer points out the inadequacies of using standard "blueprint" design techniques in developing organic structures. Using a "recipe" to describe the process of constructing a building is presented as an alternative. Recipes for construction are discussed with reference to the analogous process description given by DNA to construct an organism.

The Reptile System. The Reptile System is an ingenious construction set capable of producing a wide range of structures using just two simple components.
Frazer saw the advantages of this system for rule-based and evolutionary systems in the compactness of its structure descriptions. Compactness was essential for the early computational work, when computer memory and storage space were scarce. However, compact representations such as those described form very rugged fitness landscapes which are not well suited to evolutionary search techniques. Structures are created from an initial "seed" or minimal construction, for example a compact spherical structure. The seed is then manipulated using a series of processes or transformations, for example stretching, shearing or bending. The structure grows according to the transformations applied to it. Obviously, the transformations could be a predetermined sequence of actions which would always yield the same final structure given the same initial seed. Alternatively, the series of transformations applied could be environmentally sensitive, resulting in forms which are also sensitive to their location. The idea of taking a geometrical form as a seed and transforming it using a series of processes to create complex structures is similar in many ways to the early work of Latham creating large morphological charts. Latham went on to develop his ideas into the "Mutator" system, which he used to create organic artworks.

Generalising the Reptile System. Frazer has proposed a generalised version of the Reptile System to tackle more realistic building problems, generating the seed or minimal configuration automatically from design requirements. From this starting point (or set of starting points), solutions could be evolved using artificial evolution. Quantifiable and specific aspects of the design brief define the formal criteria which are used as a standard fitness function. Non-quantifiable criteria, including aesthetic judgements, are evaluated by the user. The proposed system would be able to learn successful strategies for satisfying both formal and user criteria.
In doing so, the system would become a personalised tool of the designer: a personal assistant able to anticipate aesthetic judgements and other criteria by employing previously successful strategies. Ultimately, this is a similar concept to Negroponte's "Architecture Machine", which he proposed would be a computer system so personalised as to be almost unusable by other people.

The Universal State Space Modeller. The Universal State Space Modeller is the basis of Frazer's current work. It is a system which can be used to model any structure, hence the universal claim in its title. The data structure underlying the modeller is a state space of scaleless logical points, called motes. Motes are arranged in a close-packing sphere arrangement, which makes each one equidistant from its twelve neighbours. Any point can be broken down into a self-similar tetrahedral structure of logical points, giving the state space a fractal nature which allows modelling at many different levels at once. Each mote can be thought of as analogous to a cell in a biological organism. Every mote carries a copy of the architectural genetic code, in the same way that each cell within an organism carries a copy of its DNA. The genetic code of a mote is stored as a sequence of binary "morons", which are grouped together into spatial configurations which are interpreted as the state of the mote. The developmental process begins with a seed. The seed develops through cellular duplication according to the rules of the genetic code. In the beginning the seed develops mainly in response to the internal genetic code, but as development progresses the environment plays a greater role. Cells communicate by passing messages to their immediate twelve neighbours; however, a cell can also send messages directed at remote cells without knowledge of their spatial relationship. During development, cells take on specialised functions, including environmental sensors or producers of raw materials.
The resulting system is process driven, without presupposing the existence of a construction set to use. The data structure can be interpreted in many ways to derive various phenotypes. The resulting structure is a by-product of the cellular activity during development and in response to the environment. As such, the resulting structures have much in common with living organisms, which are also the emergent result, or by-product, of local cellular activity.

Primordial Architectural Soups. To conclude, Frazer presents some of his most recent work: evolving fundamental structures using limited raw materials, an initial seed and massive feedback. Frazer proposes to go further, doing away with the need for an initial seed and starting with a primordial soup of basic architectural concepts. The research is attempting to evolve the starting conditions and evolutionary processes without any preconditions. Is there enough time to evolve a complex system from the basic building blocks which Frazer proposes? The computational complexity of the task being embarked upon is not discussed. There is an implicit assumption that the "superb tactics" of natural selection are enough to cut through the complexity of the task. However, Kauffman has shown how self-organisation plays a major role in the early development of replicating systems which we may call alive. Natural selection requires a solid basis upon which it can act. Is the primordial soup which Frazer proposes of the correct constitution to support self-organisation? Kauffman suggests that one of the most important attributes of a primordial soup capable of self-organisation is a complex network of catalysts, together with the controlling mechanisms to stop the reactions from going supracritical. Can such a network be provided by primitive architectural concepts? What does it mean to have a catalyst in this domain?
Conclusion. Frazer shows some interesting work in both the areas of evolutionary design and self-organising systems. It is obvious from his work that he sympathises with the opinions put forward by Kauffman, that the order found in living organisms comes from both external evolutionary pressure and internal self-organisation. His final remarks underline this by paraphrasing Kauffman: that life is always to be found on the edge of chaos. By the "edge of chaos", Kauffman is referring to the area within the ordered regime of a system close to the "phase transition" to chaotic behaviour. Unfortunately, Frazer does not demonstrate that the systems he has presented have the necessary qualities to derive useful order at the edge of chaos. He does not demonstrate, as Kauffman does repeatedly, that there exists a "phase transition" between the ordered and chaotic regimes of his systems. Nor does he make any study of the relationship between the useful forms generated by his work and the phase-transition regions of his systems, should they exist. If we are to find an organic architecture, in more than name alone, it is surely to reside close to the phase transition of the construction system of which it is built. Only there, if we are to believe Kauffman, are we to find useful order together with environmentally sensitive and thermodynamically open systems which can approach the utility of living organisms.