Iterative Refinement Through Simulation
Exploring trade-offs between speed and accuracy

Patrick Janssen1, Vignesh Kaushik2
National University of Singapore
1patrick@janssen.name, 2vigneshkaushik@gmail.com

Simulation, Prediction, and Evaluation - Volume 1 - eCAADe 30

Abstract. Performance-based design approaches typically use iterative simulation as a way of exploring design variants. For such approaches, the speed of execution of the simulations is critical to enabling a fluid and interactive design process. This research proposes an iterative simulation design method where simulations are configured to run in two modes: in fast mode, simulations produce less accurate results but due to their speed can be applied successfully within an iterative refinement process; in slow mode, the simulations produce more accurate results that can be used to verify the performance improvements achieved using the iterative refinement process. A case study is presented where the proposed method is used to explore performance improvements related to levels of incident illuminance and incident irradiance on windows.
Keywords. Iterative; design; refinement; simulation; Radiance.

INTRODUCTION
Visual Dataflow Modelling (VDM) (Janssen and Chen 2011) has become increasingly popular within the design community, as it can accelerate the iterative design process, thereby allowing larger numbers of design possibilities to be explored. Modelling in a VDM system consists of creating dataflow networks using nodes and links, where nodes can be thought of as functions that perform actions, and links connect the output of one function to the input of another function. VDM is now also becoming an important tool in performance-based design approaches, since it may potentially enable designers to explore and refine design possibilities through an iterative process of parametric variation coupled with performance simulation (Shea et al. 2005, Coenders 2007, Lagios et al. 2010, Toth et al. 2011, Janssen et al. 2011).

In order for the process of iterative refinement to be effective, it is critical to set appropriate trade-offs between simulation speed and simulation accuracy. In order for simulations to be used fluidly and interactively, execution time must be kept to a minimum. However, the accuracy of a simulation is often inversely related to its speed of execution: fast simulations produce low-accuracy results, while slow simulations produce high-accuracy results.
This paper proposes an iterative simulation design method that overcomes this problem by calibrating simulations to run both in a fast and less accurate mode and in a slow and more accurate mode. The fast mode simulations enable designers to apply iterative refinement in a fluid and interactive manner, while the slow mode simulations are used to verify the performance improvements achieved using the iterative refinement process.

In order to demonstrate the proposed method, a case-study experiment has been conducted, where the method is used to explore design variants for a large residential project consisting of over a thousand units. Design variants are evaluated based on the visible daylight and radiant heat (including sunlight) incident on the surfaces of the windows of the residential units.
SIMULATION NODES
Visible daylight is measured as illuminance, the visible light incident on a surface, measured in lumens/m2 (lux). Radiant heat is measured as irradiance, the electromagnetic radiation incident on a surface, measured in watts/m2. Both illuminance and irradiance are calculated using the simulation program Radiance [1].
Radiance simulations
Radiance is a collection of programs that perform a variety of related tasks. The main input file for Radiance is the RAD file that describes the scene to be simulated. Given a RAD file, the first step is to convert it into a different file format called an octree, using a program called oconv. Using this octree file as input, the user can specify sensor points in the model and then use the rtrace program to measure illuminance or irradiance at these points.
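The two-step workflow above (oconv compiles the octree, rtrace samples the sensor points) can be sketched as command-line construction. The file names and flag choices below are illustrative assumptions, not taken from the paper; sensor points are fed to rtrace on stdin as "x y z dx dy dz" lines.

```python
# Minimal sketch of the oconv/rtrace workflow described above.

def oconv_command(sky_file, scene_file, octree_file):
    """Shell command compiling sky + scene descriptions into an octree."""
    return "oconv {} {} > {}".format(sky_file, scene_file, octree_file)

def rtrace_command(octree_file, irradiance=True):
    """rtrace call measuring values at sensor points read from stdin.
    -I switches rtrace to irradiance mode; -h suppresses the header."""
    flags = ["-I"] if irradiance else []
    return ["rtrace"] + flags + ["-h", octree_file]

print(oconv_command("sky.rad", "scene.rad", "scene.oct"))
# oconv sky.rad scene.rad > scene.oct
print(" ".join(rtrace_command("scene.oct")))
# rtrace -I -h scene.oct
```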
When the octree is generated using the oconv program, the Radiance description of the sky dome can be included. Different skies need to be generated for the illuminance and irradiance simulations.

For the illuminance simulation, a standard CIE overcast sky is required, as this is the worst-case scenario for calculating daylighting. This sky is not affected by the position of the sun, and as a result it is time independent. The calculated illuminance then represents the actual lux value for the worst-case scenario.

For the irradiance simulation, rather than focusing on the worst case, a cumulative approach is needed whereby the irradiance incident on a particular point throughout the year is added up. In addition, irradiance is of course time dependent, as it is affected by the position of the sun. One option for calculating cumulative irradiance would be to generate skies for multiple points in time and then to run multiple simulations. However, a more efficient approach is to create a cumulative annual sky from a climate file (Robinson and Stone 2004). The irradiance results from a single simulation run using such a cumulative sky then represent the cumulative irradiance for the whole year.
For generating the standard CIE overcast sky for the illuminance simulations, Radiance includes a program called gensky [2]. For generating the cumulative annual sky, a program called GenCumulativeSky [3] is used. This program produces cumulative annual skies in Radiance format from EnergyPlus weather files [4]. The sky is discretized into 145 patches using a method devised by Tregenza (1987), and the Perez luminance/radiance distribution model (Perez et al. 1993) is used to determine the radiance of each patch, according to the information from the climate file. Figure 1 shows both the standard CIE overcast sky and the cumulative annual sky used in this research.
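The 145-patch subdivision mentioned above can be sanity-checked with a short sketch; the per-band patch counts below follow the standard Tregenza scheme (seven altitude bands plus a single zenith patch).

```python
# Patch counts per altitude band in the standard Tregenza sky
# subdivision, from the horizon band upward, plus one zenith patch.
TREGENZA_BANDS = [30, 30, 24, 24, 18, 12, 6]
ZENITH_PATCHES = 1

def tregenza_patch_count():
    """Total number of sky patches in the Tregenza subdivision."""
    return sum(TREGENZA_BANDS) + ZENITH_PATCHES

print(tregenza_patch_count())  # 145
```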
Figure 1
Falsecolor image of the standard CIE overcast sky generated by gensky (left) and the cumulative annual sky generated by GenCumulativeSky (right) using the EnergyPlus weather file for Singapore.
Radiance VDM nodes
In order to support a user-friendly integration of Radiance into the design workflow, VDM nodes were created for an advanced procedural modelling system called SideFX Houdini [5]. These nodes link Houdini with the various Radiance programs and the GenCumulativeSky program. For more information on the development of these nodes, see Janssen et al. (2011). For this research, the nodes were further developed to allow users to create a sky via three methods: by using the gensky program, by using the GenCumulativeSky program, or by loading a sky description file.

The main simulation node, which executes oconv and rtrace, has two inputs: one for the model geometry and one for sensor grids. The first input is for inputting the geometry. The node will translate each Houdini polygon to a Radiance primitive of type polygon. The polygons are expected to have custom attributes that define the material, and the node will extract the values of these attributes when generating the Radiance input file.

The second input of the Radiance node is for inputting sensor grids. When the rtrace simulation is run, the simulation results are copied to the sensor points within Houdini as attributes. This means that the results from an rtrace simulation can be graphically displayed inside Houdini, using coloured surfaces.
Simulation parameters
In terms of speed, rtrace will tend to take significantly more time to execute than the other programs such as oconv, gensky, and GenCumulativeSky. The settings for the rtrace parameters are therefore critical when it comes to the trade-off between speed and accuracy.

The Houdini node provides parameters for specifying certain key rtrace parameters relating to ambient lighting, namely [6]:
• Ambient bounces (ab): the maximum number of diffuse bounces computed by the indirect calculation. A value of zero implies no indirect calculation.
• Ambient accuracy (aa): the approximate error from indirect illuminance interpolation. A value of zero implies no interpolation.
• Ambient resolution (ar): the maximum density of ambient values used in interpolation. Errors will start to increase on surfaces spaced closer than the scene size divided by the ambient resolution.
• Ambient divisions (ad): the error in the Monte Carlo calculation of indirect illuminance will be inversely proportional to the square root of this number. A value of zero implies no indirect calculation.
• Ambient super-samples (as): super-samples are applied only to the ambient divisions which show a significant change.
A detailed explanation of these parameters is beyond the scope of this paper (see the Radiance documentation for more information). However, it should be noted that perhaps the most important parameter is the ambient bounces. The number of bounces may vary from 0 to 8, with a higher number of bounces producing more accurate results but also much higher computation times. Since the sky is modelled as 'glow', it will only take part in Radiance's indirect lighting calculation, which is only performed when ambient bounces is set to 1 or more. Since the only light is coming from the sky, a value of 1 is equivalent to calculating only direct and diffuse light, while ignoring any reflected indirect light.

For the ambient accuracy parameter, a lower value produces more accurate results with slower execution times. However, note that if no reflected indirect lighting is calculated (as is the case when the only light is coming from the sky dome and the ambient bounces parameter is set to 1), then this parameter can be set to 0, as it will not have any effect. For the final three parameters, higher values will generally produce more accurate results but with slower execution times.
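Putting the five ambient parameters together, a preset can be expressed as an rtrace flag list. The two dictionaries below mirror the settings reported later in the paper (the slow, accurate mode and the fast preset chosen from Table 1); the helper function itself is an illustrative sketch.

```python
# Ambient parameter presets as reported in the paper: the slow/accurate
# mode and the fast preset selected from Table 1.
SLOW_MODE = {"ab": 4, "aa": 0.15, "ar": 2048, "ad": 516, "as": 516}
FAST_MODE = {"ab": 1, "aa": 0,    "ar": 1024, "ad": 256, "as": 256}

def ambient_flags(settings):
    """Convert a settings dict into the corresponding rtrace flag list."""
    flags = []
    for key in ("ab", "aa", "ar", "ad", "as"):
        flags += ["-" + key, str(settings[key])]
    return flags

print(" ".join(ambient_flags(FAST_MODE)))
# -ab 1 -aa 0 -ar 1024 -ad 256 -as 256
```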
CASE STUDY EXPERIMENT
The case study experiment focuses on the Interlace, a large residential project designed by OMA [7] and currently under construction in Singapore. The design consists of thirty-one apartment blocks, each six stories tall. The blocks are stacked in an interlocking brick pattern, with voids between the bricks. Each stack of blocks is rotated around a set of vertical axes, thereby creating a complex interlocking configuration. An example is shown in Figure 2, where six blocks are stacked and rotated to form a hexagonal configuration.

Each block is approximately 70 meters long by 16.5 meters wide, with two vertical axes of rotation spaced 45 meters apart. The axes of rotation coincide with the location of the vertical cores of the building, thereby allowing for a single vertical core to connect blocks at different levels. The blocks are almost totally glazed, with large windows on all four facades. In addition, blocks also have a series of balconies, both projecting out from the facade and inset into the facade. A typical floor plan is shown in Figure 3.

The OMA design has stacked the 31 blocks into 22 stacks of varying height, and has then rotated the stacks into a hexagonal pattern constrained within the site boundaries. At the highest point, the blocks are stacked four high.

Figure 2
The process of rotating the brick pattern. The diagram on the left shows six blocks arranged in a straight line, while the diagram on the right shows the same six blocks folded into a hexagonal pattern.

Figure 3
A typical floor plan [8].

Design exploration task
For this research, an exploration task has been defined, in which the designer is required to minimize the number of windows receiving either low illuminance or high irradiance. The designer carries out this exploration task via a process of iterative refinement, whereby a parametric model is built and the parameters in the model are gradually adjusted in order to try to improve performance. Each iteration of parametric adjustment by the designer is followed by a simulation of design performance, and if performance improves then the parametric changes are kept. Using this approach, the designer may gradually be able to improve the performance of the design.

For this task, the parametric changes that can be made by the designer have been constrained to the rotation of the blocks and the addition of sun shades. Block rotation is a change that affects the global configuration, while sun shading is seen as a change that affects only the local configuration. In order to constrain the task, other possible changes, such as the stacking of the blocks and the position and size of balconies, were not considered. However, it is noted that the iterative approach used in this research could also be expanded to include such parameters.

In order to test the iterative approach, a Houdini model of the design was built that included all significant exterior features, including walls, windows, inset balconies and protruding balconies. On the interior, most of the detail was omitted and only unit walls were included. The resulting model had a total of approximately 47.5 thousand polygons, of
which about 7800 were windows. These windows were grouped into four types: living room windows, bedroom windows, kitchen windows, and utility windows. For the performance exploration, it was decided to focus on the living room and bedroom windows only, which totalled 5250 windows. The illuminance and irradiance incident on each window was measured at just one point in the centre of the window. Therefore, for each iteration, illuminance and irradiance was to be measured at 5250 points in the model.
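Placing one sensor per window centre, as described above, amounts to taking the centroid of each window polygon's vertices. This is a minimal, flat-polygon sketch, not the paper's actual Houdini implementation.

```python
# Sensor point at the centre of a window, computed as the centroid of
# the window polygon's vertices (each vertex is an (x, y, z) tuple).

def window_sensor(vertices):
    """Centroid of the polygon's vertices, used as the sensor location."""
    n = len(vertices)
    return tuple(sum(v[axis] for v in vertices) / n for axis in range(3))

# A hypothetical 2m x 1m window lying in the XZ plane:
quad = [(0, 0, 0), (2, 0, 0), (2, 0, 1), (0, 0, 1)]
print(window_sensor(quad))  # (1.0, 0.0, 0.5)
```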
For the exploration task, target thresholds were set for both illuminance and irradiance. Windows falling either below the illuminance threshold or above the irradiance threshold were considered to be undesirable, and therefore in need of improvement. The aim of the exploration task was to reduce the total number of undesirable windows. These thresholds were mainly used as a simple way of summarizing relative performance, so that the designer was able to quickly assess whether improvements had been made.
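The undesirable-window count described above reduces to a simple threshold test over the per-window results; the threshold values in the example below are placeholders, since the paper does not state the actual limits used.

```python
# Count windows failing either threshold: low daylight (illuminance
# below min_lux) or excessive radiant heat (irradiance above
# max_irradiance).

def count_undesirable(results, min_lux, max_irradiance):
    """results: iterable of (illuminance, irradiance) pairs per window."""
    return sum(1 for lux, irr in results
               if lux < min_lux or irr > max_irradiance)

samples = [(12000, 300), (4000, 250), (15000, 900)]
print(count_undesirable(samples, min_lux=8000, max_irradiance=700))  # 2
```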
Parameterisation of the model
In order to allow the designer to fluidly and interactively make changes to the rotation angles of the stacks of blocks, the blocks need to be parametrically linked. Looking at the arrangement of the blocks in plan in Figure 4, it is evident that the configuration is actually a branching hierarchical structure, with a central root and three branches.

This type of branching structure can be modelled within animation tools such as Houdini using objects that have parent-child relationships. In the plan in Figure 4, the root node is indicated by the larger dot and is the parent of three block stacks: s1, s5 and s10. Each of these three stacked blocks is the start of one branch. The parent-child linking relationship means that any transformation applied to an object will automatically also be applied to all of its descendants. The designer can therefore freely explore different rotation combinations without having to worry about the stacked blocks becoming disconnected.
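The parent-child behaviour described above can be sketched in plan: rotating a stack about its vertical axis applies the same rotation to every descendant stack. The tiny two-stack tree below is a hypothetical illustration, not the project's actual hierarchy.

```python
# Sketch of parent-child rotation propagation: rotating a stack about
# its axis also rotates all descendant stacks about that same axis.
# Stacks are reduced to 2D plan positions here.
import math

def rotate_about(point, pivot, angle_deg):
    """Rotate a plan point about a pivot by angle_deg, counter-clockwise."""
    a = math.radians(angle_deg)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(a) - dy * math.sin(a),
            pivot[1] + dx * math.sin(a) + dy * math.cos(a))

def rotate_subtree(positions, children, root, angle_deg):
    """Apply a rotation at `root` to the root and every descendant."""
    pivot = positions[root]
    pending = [root]
    while pending:
        node = pending.pop()
        positions[node] = rotate_about(positions[node], pivot, angle_deg)
        pending.extend(children.get(node, []))
    return positions

pos = {"s1": (0.0, 0.0), "s2": (10.0, 0.0)}
rotate_subtree(pos, {"s1": ["s2"]}, "s1", 90)
print(tuple(round(c, 6) for c in pos["s2"]))  # (0.0, 10.0)
```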
Iterative simulation design method
The key step in the iterative simulation design method was the execution of the simulations. Calculating the illuminance and irradiance at a high level of accuracy can be very time consuming, and therefore very disruptive for the designer.

For obtaining accurate results, the following Radiance ambient settings were used: ab=4, aa=0.15, ar=2048, ad=516, and as=516. Using these settings, the illuminance simulation took 8 hours and 30 minutes and the irradiance simulation took 13 hours and 50 minutes, making a total of 22 hours and 20 minutes. The computer used for running the simulations was a typical office computer: a 2.4GHz dual-core processor with 8GB RAM running 64-bit Windows.
Figure 4
The original design. The plan on the left shows the root node and the branching structure. The model on the right shows windows with low illuminance in dark grey, and windows with high irradiance in light grey.
The simulation results showed that within the existing design, a significant portion of windows had either low illuminance or high irradiance. The illuminance and irradiance patterns on the facade were also seen to be very varied and hard to predict, due to the complex massing of the building and due to the effects of protruding balconies shading the windows below. The iterative simulation design method was therefore deemed to be appropriate for exploring options with fewer undesirable windows. However, due to the excessive simulation time, an iterative simulation design method was developed where each simulation was configured to run in fast mode and in slow mode. The aim was to reduce the total simulation time of the fast version to below two minutes, but to ensure that the results from the fast and the slow mode simulations still correlated reasonably well. This would then allow the fast simulation to be used as a driver for the exploration process.

The iterative simulation design method was divided into three main phases: calibration, iteration, and verification. In the calibration phase, the fast mode simulations are set up and configured in order to ensure that appropriate trade-offs are achieved between speed and accuracy. In the iteration phase, the fast mode simulations are used within the iterative refinement process in order to explore design variants with improved performance. Finally, in the verification phase, both the initial design and the final design from the iterative process are evaluated using the slow mode simulations in order to verify the performance improvements. The three phases of the iterative simulation design method are shown in Figure 5.
The calibration phase
For the calibration phase, a series of Radiance simulations were executed with parameter settings that favoured speed over accuracy. In all cases, the ambient bounces parameter was set to 1 and the ambient accuracy parameter was set to 0. This meant that no indirect reflections were calculated, which significantly reduced the execution time. For each of these simulations, Microsoft Excel was then used to plot the trend-line between the fast and slow mode simulation results, and to calculate the R2 correlation coefficient (the coefficient of determination). Table 1 shows the results for these experiments.
Figure 5
The three phases of the iterative simulation design method.
Based on the execution time and R2 correlation results, it was decided that for both the fast illuminance simulation and the fast irradiance simulation, the second set of settings from Table 1 would be used. These settings allow the simulations to be executed in under one minute each, while maintaining an R2 correlation of close to 0.9.

The final step in setting up the fast simulations was to map the results from the fast simulation using the linear trend-line equation. Microsoft Excel was used to obtain the linear trend-line equation, which was then transferred back to Houdini, where it was used to map the results from the fast simulation. This option for mapping the simulation results was provided as part of the Houdini node. In effect, this mapping of the fast simulation results adjusts the trend line so that it passes through the graph origin at 45 degrees.
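The calibration mapping described above can be sketched as a least-squares fit between paired fast and slow results, followed by rescaling fast values onto the slow scale. The sample values below are synthetic, chosen so the fit is exact.

```python
# Sketch of the trend-line calibration: fit slow results against fast
# results, then map fast-mode values onto the slow-mode scale.

def fit_line(xs, ys):
    """Least-squares slope and intercept of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def map_fast(value, slope, intercept):
    """Map a fast-mode result onto the slow-mode scale."""
    return slope * value + intercept

fast = [1.0, 2.0, 3.0, 4.0]
slow = [2.5, 4.5, 6.5, 8.5]            # slow = 2*fast + 0.5 exactly
slope, intercept = fit_line(fast, slow)
print(slope, intercept)                 # 2.0 0.5
print(map_fast(3.0, slope, intercept))  # 6.5
```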
The iteration phase
Within the Houdini environment, the total numbers of undesirable windows for both illuminance and irradiance were continuously displayed to the designer, both as numeric totals and as coloured polygons within the three-dimensional model. Once the designer had made a set of changes to the model, they were then able to trigger the simulations to re-execute. After two minutes, once the simulations had completed executing, both the numeric totals and the coloured polygons would be automatically updated, thereby giving fast feedback to the designer as to whether their changes resulted in better performance.
Table 1
Execution time (T) and R2 correlation for a range of different ambient light settings for the Radiance rtrace simulation.

Radiance rtrace ambient settings     | Illuminance T | Illuminance R2 | Irradiance T | Irradiance R2
ab=1, aa=0, ar=2048, ad=512, as=512  | 92s           | 0.8892         | 88s          | 0.8841
ab=1, aa=0, ar=1024, ad=256, as=256  | 49s           | 0.8892         | 53s          | 0.8839
ab=1, aa=0, ar=512, ad=128, as=128   | 32s           | 0.8875         | 36s          | 0.8796

Figure 6
The design modified in order to reduce the number of windows with low illuminance. The plan on the left shows how the branching structure has been modified to try to increase the openness between the branches. The model on the right shows windows with low illuminance in dark grey, and windows with high irradiance in light grey.

The exploration process was set up as a two-stage process. In the first stage, the rotation parameters were iteratively explored. For each iteration, the designer would identify a particular cluster of windows with low illuminance, and would then make a
small number of changes in order to try to reduce the obstructions for those windows. In some cases, such changes would indeed improve the situation, but in other cases, the changes would cause a deterioration in performance in some other part of the design. At this stage, the focus was on reducing the number of windows with low illuminance, as this was deemed to be the more challenging task. However, the designer also kept a check on the number of windows with high irradiance, since changes that improved illuminance often also resulted in higher levels of irradiance. The final design for stage 1 is shown in Figure 6.

In the second stage, the best solution from the first stage was selected and the addition of solar shading devices was then explored, with the aim of reducing the number of windows with high irradiance. For the rotation parameters, the changes were applied manually, since there were only 22 sets of stacked blocks. However, for the windows, the manual approach could not be used, since there were thousands of windows. An automated approach was therefore created within Houdini whereby shading devices were parametrically generated for the windows with high levels of irradiance. The depth of the shading devices was varied in relation to the level of irradiance on the window. In this case, the iterative process was used to explore the relationships between the depth of the shading devices and the level of irradiance on the window. As with stage 1, the designer also kept a check on the number of windows with low illuminance, since the addition of solar shading devices reduced illuminance levels in some cases.

It was found that in the first stage, the reduction of the number of windows with low illuminance was difficult to achieve. The number of low illuminance windows was reduced through an iterative refinement process consisting of 18 iterative steps. In the second stage, the windows with high irradiance were more easily solved using additional sun shading devices. During this stage, the number of high irradiance windows was reduced in 6 iterative steps.
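The depth rule described above can be sketched as a simple mapping from irradiance excess to shade depth. The coefficient and the depth cap used here are illustrative assumptions, not the values used in the project.

```python
# Parametric shading sketch: shade depth grows with the irradiance
# excess over the threshold, capped at a maximum depth. All numeric
# values are illustrative placeholders.

def shade_depth(irradiance, threshold, depth_per_unit=0.001, max_depth=1.2):
    """Depth (m) of the shading device; 0 if irradiance is below threshold."""
    excess = max(0.0, irradiance - threshold)
    return min(max_depth, excess * depth_per_unit)

print(shade_depth(900, threshold=700))   # 0.2
print(shade_depth(500, threshold=700))   # 0.0
```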
The verification phase
In order to verify that performance had indeed been improved, the initial design and the final design were evaluated using the slow mode simulations and the results were compared. Note that the goal of this verification was not to compare the results from the fast mode simulations with those from the slow mode simulations, but rather to measure the actual performance improvements that were achieved through the iterative refinement process. Despite good R2 correlations of close to 0.9, the results from the fast mode simulations could not be used as an objective measure of performance. Instead, the fast mode simulations were used only as a way of measuring relative performance, and within the iteration phase were used as a driver for the exploration process.

The slow mode simulation results show that the total numbers of windows with low illuminance and high irradiance were reduced by 8% and 32% respectively. This confirms that the performance was successfully improved using the iterative simulation design method.
CONCLUSIONS
This research aimed to explore the trade-off between speed and accuracy when applying iterative simulation approaches to complex designs, where the size of the digital models typically becomes large and, as a result, execution times for simulations may become prohibitively slow.

This research has explored an approach in which simulations are run in two modes: fast mode and slow mode. An iterative simulation design method has been developed consisting of three phases: in the first phase, fast mode simulations are calibrated by setting appropriate trade-offs between speed and accuracy; in the second phase, the fast mode simulations are used to iteratively refine the design in response to performance feedback; lastly, in the third phase, the performance improvements achieved through the iterative refinement process are verified. The application of the proposed method to a complex case study of a large residential design demonstrates the feasibility of the approach.

Future research will focus on further exploring how the proposed approach can be applied to a wider range of simulation tools, including structural simulations and energy simulations.
REFERENCES
Coenders, JL 2007, Interfacing between parametric associative and structural software, in Proceedings of the 4th International Conference on Structural and Construction Engineering, Melbourne, Australia, 26-28 September.
Janssen, PHT, Chen, KW and Basol, C 2011, Iterative Virtual Prototyping: Performance Based Design Exploration, in Proceedings of the International Conference on Education and Research in Computer Aided Architectural Design in Europe (eCAADe '11), pp. 253-260.
Janssen, PHT and Chen, KW 2011, Visual Dataflow Modelling: A Comparison of Three Systems, in Proceedings of CAAD Futures '11, pp. 801-816.
Kolarevic, B and Malkawi, A 2005, Operative Performativity (panel discussion), in Performative Architecture: Beyond Instrumentality, New York: Spon Press, pp. 239-246.
Lagios, K, Niemasz, J and Reinhart, CF 2010, Animated Building Performance Simulation (ABPS) - Linking Rhinoceros/Grasshopper with Radiance/Daysim, in Proceedings of SimBuild 2010, New York City, August 2010.
Perez, R, Seals, R and Michalsky, J 1993, All-Weather Model for Sky Luminance Distribution - Preliminary Configuration and Validation, Solar Energy, 50(3), pp. 235-245.
Robinson, D and Stone, A 2004, Irradiation modelling made simple: the cumulative sky approach and its applications, in PLEA 2004 - The 21st Conference on Passive and Low Energy Architecture, Eindhoven, The Netherlands.
Shea, K, Aish, R and Gourtovaia, M 2005, Towards integrated performance-driven generative design tools, Automation in Construction, 14(2), pp. 253-264.
Toth, B, Salim, F, Frazer, J, Drogemuller, R, Burry, J and Burry, M 2011, Energy-oriented design tools for collaboration in the cloud, International Journal of Architectural Computing, 9(4), pp. 339-359.
Tregenza, P 1987, Subdivision of the Sky Hemisphere for Luminance Measurements, Lighting Research and Technology, 19, pp. 13-14.
[1] http://radsite.lbl.gov/radiance/
[2] http://radsite.lbl.gov/radiance/man_html/gensky.1.html
[3] http://diva4rhino.com/
[4] http://apps1.eere.energy.gov/buildings/energyplus/cfm/weather_data.cfm
[5] http://www.sidefx.com/
[6] http://radsite.lbl.gov/radiance/man_html/rtrace.1.html
[7] http://oma.eu/projects/2009/the-interlace
[8] http://www.theinterlace.com.sg/
... For these two experiments the DAYSIM simulation software is coupled with Houdini (Sidefx, 2014) and Dexen (Janssen et al., 2012) to generate a dataset of daylight autonomy calculations for 10,000 design variants. The surrogate models are then trained separately with SUMO Toolbox (Gorissen et al., 2010) with input from the sample dataset created. ...
... A custom node within Houdini which links to the DAYSIM is created. Dexen allow simulations to run on multiple cores and computers in parallel (Janssen et al., 2012). This dramatically speeds up the execution time required to generation 10,000 design variants. ...
Conference Paper
Full-text available
Existing performance-based design exploration methods typically suffer from a lack of real-time feedback and a lack of actionable feedback. This paper proposes a hybrid design exploration method that overcomes these issues by combining parametric modelling, surrogate modelling, and evolutionary algorithms. The proposed method is structured as a mixed-initiative approach, in which parametric modelling is the key to creating a synergistic relationship between the architect and the computational system. Surrogate-based techniques will address the issue of real-time feedback, the evolutionary exploration techniques will address the issue of actionable feedback. As a first stage in developing the PEX method, this paper reports on two experiments conducted to identify an appropriate surrogate modelling technique that is efficient and robust.
... The proposed method is based on a general design method developed by Janssen and Kaushik [6].This method is adapted to the design of semitransparent BIPV facades using an evolutionary optimisation approach.The method consists of three phases: 1) calibration, 2) optimisation, and 3) validation as shown in Figure 1. In the calibration phase, simulation models are selected and simulation programs are configured and tested. ...
... For electricity generation and cooling load, exactly the same models are used as in the first demonstration. For daylight savings, an enhanced model is developed that assumes that occupants will close the blinds to counteract visual discomfort due to glare.This will reduce the daylight savings that can be achieved since closing blinds will reduce daylight autonomy.The modified equation for calculating the total electricity generated is shown in Equation 6. ...
Article
Full-text available
The optimisation of semi-transparent building integrated photovoltaic facades can be challenging when attempting to find an overall balance performance between conflicting performance criteria. This paper presents a three-phase design optimisation method that maximises overall electricity savings generated by these types of facades by simulating the combined impact of electricity generation, cooling load, and daylight autonomy.Two demonstrations are performed, with the difference being that the second demonstration uses an enhanced model for calculating daylight savings that takes into account the use of blinds to counteract glare. For both demonstrations, the three-phase optimisation method significantly reduces optimisation run times. Comparing the design variants evolved by the two demonstrations, the use of the enhanced daylight savings model results in a total electricity savings that is more accurate but in terms of visual differentiation, the difference between the optimized design variants is relatively small.
... The window daylight procedure calculates the number of windows receiving daylight below a certain minimum threshold on an overcast day. The façade cost procedure calculates the cost of the façade, including glazing systems and shades required to bring the heat gain through the facade to below a certain minimum threshold (Janssen and Kaushik, 2012). The core length procedure calculates the total vertical core length, representing lifts and other services. ...
... The proposed method is based on a general design method developed by Janssen and Kaushik (2012). This generalised method is adapted to the design of semitransparent BIPV facades using an evolutionary optimisation approach. ...
Article
A platform for experimenting with population-based design exploration algorithms is presented, called Dexen. The platform has been developed in order to address the needs of two distinct groups of users, loosely labeled as researchers and designers. Whereas the researchers group focuses on creating and testing customized toolkits, the designers group focuses on applying these toolkits in the design process. A platform is required that is scalable and extensible: scalable to allow computationally demanding population-based exploration algorithms to be executed on distributed hardware within reasonable time frames, and extensible to allow researchers to easily implement their own customized toolkits consisting of specialized algorithms and user interfaces. In order to address these requirements, a three-tier client–server system architecture has been used that separates data storage, domain logic, and presentation. This separation allows customized toolkits to be created for Dexen without requiring any changes to the data or logic tiers. In the logic tier, Dexen uses a programming model in which tasks only communicate through data objects stored in a key-value database. The paper ends with a case study experiment that uses a multicriteria evolutionary algorithm toolkit to explore alternative configurations for the massing and façade design of a large residential development. The parametric models for developing and evaluating design variants are described in detail. A population of design variants are evolved, a number of which are selected for further analysis. The case study demonstrates how evolutionary exploration methods can be applied to a complex design scenario without requiring any scripting.
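The programming model described in this abstract, where tasks communicate only through data objects in a key-value database, can be sketched in a few lines. This is a hypothetical illustration only: the plain dict standing in for the database and the names `develop_task` and `evaluate_task` are invented here, not taken from Dexen's actual API.

```python
# Hypothetical sketch of a programming model in which tasks communicate only
# through data objects in a key-value store (a plain dict stands in for the
# database). Tasks never call each other directly; they only read and write
# keyed data objects, which is what makes such a model easy to distribute.

store = {}

def develop_task(ident, params):
    # Write a design variant into the store under a unique key.
    store[f"design:{ident}"] = {"params": params}

def evaluate_task(ident):
    # Read the design variant back via the store (never via the other task)
    # and attach a stand-in performance score.
    design = store[f"design:{ident}"]
    design["score"] = sum(design["params"])

develop_task(1, [2, 3])
evaluate_task(1)
print(store["design:1"]["score"])  # 5
```

Because the two tasks share nothing but the store, either one could run on a different machine as long as both can reach the same database.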
Conference Paper
Evolutionary design is an approach that evolves populations of design variants through the iterative application of a set of computational procedures. This paper proposes a template and set of techniques for creating the development and evaluation procedures. The template defines a clear structure for the procedures, while the techniques provide specific strategies for generating models and handling constraints. A demonstration is presented where the template is used to create development and evaluation procedures for a large complex residential housing project.
Conference Paper
Designers interested in applying evo-devo-design methods for performance based multi-objective design exploration have typically faced two main hurdles: it's too hard and too slow. An evo-devo-design method is proposed that effectively overcomes the hurdles of skill and speed by leveraging two key technologies: computational workflows and cloud computing. In order to tackle the skills hurdle, Workflow Systems are used that allow users to define computational workflows using visual programming techniques. In order to tackle the speed hurdle, cloud computing infrastructures are used in order to allow the evolutionary process to be parallelized. We refer to the proposed method as Evo-Devo In The Sky (EDITS). This paper gives an overview of both the EDITS method and the implementation of a software environment supporting the EDITS method. Finally, a case-study is presented of the application of the EDITS method.
Conference Paper
Visual programming languages enable users to create computer programs by manipulating graphical elements rather than by entering text. The difference between textual languages and visual languages is that most textual languages use a procedural programming model, while most visual languages use a dataflow programming model. When visual programming is applied to design, it results in a new modelling approach that we refer to as 'visual dataflow modelling' (VDM). Recently, VDM has become increasingly popular within the design community, as it can accelerate the iterative design process, thereby allowing larger numbers of design possibilities to be explored. Furthermore, it is now also becoming an important tool in performance-based design approaches, since it may potentially enable the closing of the loop between design development and design evaluation. A number of CAD systems now provide VDM interfaces, allowing designers to define form-generating procedures without having to resort to scripting or programming. However, these environments have certain weaknesses that limit their usability. This paper will analyse these weaknesses by comparing and contrasting three VDM environments: McNeel Grasshopper, Bentley Generative Components, and Sidefx Houdini. The paper will focus on five key areas:
* Conditional logic allows rules to be applied to geometric entities that control how they behave. Such rules will typically be defined as if-then-else conditions, where an action will be executed if a particular condition is true. A more advanced version of this is the while loop, where the action within the loop will be repeatedly executed while a certain condition remains true.
* Local coordinate systems allow geometric entities to be manipulated relative to some convenient local point of reference. These systems may be either two-dimensional or three-dimensional, using either Cartesian, cylindrical, or spherical systems. Techniques for mapping geometric entities from one coordinate system to another also need to be considered.
* Duplication includes three types: simple duplication, endogenous duplication, and exogenous duplication. Simple duplication consists of copying some geometric entity a certain number of times, producing identical copies of the original. Endogenous duplication consists of copying some geometric entity by applying a set of transformations that are defined as part of the duplication process. Lastly, exogenous duplication consists of copying some geometric entity by applying a set of transformations that are defined by some other external geometry.
* Part-whole relationships allow geometric entities to be grouped in various ways, based on the fundamental set-theoretic concept that entities can be members of sets, and sets can be members of other sets. Ways of aggregating data into both hierarchical and non-hierarchical structures, and ways of filtering data based on these structures, need to be considered.
* Spatial queries include relationships between geometric entities such as touching, crossing, overlapping, or containing. More advanced spatial queries include various distance-based queries, various sorting queries (e.g. sorting all entities based on position), and filtering queries (e.g. finding all entities within a certain distance from a point).
For each of these five areas, a simple benchmarking test case has been developed. For example, for conditional logic, the test case consists of a simple room with a single window, with a condition: the window should always be in the longest north-facing wall. If the room is rotated or its dimensions changed, then the window must re-evaluate itself and possibly change position to a different wall. For each benchmarking test case, visual programs are implemented in each of the three VDM environments. The visual programs are then compared and contrasted, focusing on two areas. First, the types of constructs used in each of these environments are compared and contrasted. Second, the cognitive complexity of the visual programming task in each of these environments is compared and contrasted.
Conference Paper
A novel encoding technique is presented that allows constraints to be easily handled in an intuitive way. The proposed encoding technique structures the genotype-phenotype mapping process as a sequential chain of decision points, where each decision point consists of a choice between alternative options. In order to demonstrate the feasibility of the decision chain encoding technique, a case-study is presented for the evolutionary optimization of the architectural design for a large residential building.
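The decision-chain encoding described in this abstract can be illustrated with a minimal sketch, assuming a genotype of real numbers in [0, 1). The option lists and names below are invented for illustration and do not come from the paper's case study.

```python
# Minimal sketch of a decision-chain genotype-phenotype mapping. The mapping
# is a sequential chain of decision points; at each point, only the options
# that remain valid given the earlier choices are offered, so every genotype
# decodes to a constraint-satisfying design.

def decode(genotype, decision_points):
    """Map genes in [0, 1) to one chosen option per decision point."""
    choices = []
    for gene, options_for in zip(genotype, decision_points):
        options = options_for(choices)  # valid options depend on prior choices
        index = min(int(gene * len(options)), len(options) - 1)
        choices.append(options[index])
    return choices

# Toy chain: choose a storey count, then a facade type that suits it.
points = [
    lambda prior: [10, 20, 30],                                    # storeys
    lambda prior: ["opaque"] if prior[0] == 30 else ["opaque", "glazed"],
]
print(decode([0.9, 0.9], points))  # [30, 'opaque']
```

Because infeasible options are simply never offered, no repair step or penalty function is needed; any genotype an evolutionary algorithm produces maps to a valid phenotype.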
Article
This paper describes the linking of the popular three-dimensional CAD modeler Rhinoceros with advanced daylight simulations using Radiance and Daysim. A new, highly effective design workflow within Rhinoceros is presented that directly exports scene geometries, material properties and sensor grids into Radiance/Daysim format and calculates a series of performance indicators including monthly or seasonal solar radiation maps as well as daylight factor and daylight autonomy distributions. The simulation results are automatically loaded back into the Rhinoceros scene using falsecolor mappings. Using the Grasshopper plug-in for Rhinoceros, key design parameters such as window size and material descriptions can be changed incrementally and the simulation results can be combined into an animated building performance simulation, i.e. a dynamic visualization of the effect of these design parameters on the daylight availability within the scene. The design workflow has been specifically developed with the architectural design process in mind, aiming to provide designers with immediate, high quality feedback all the way from schematic design to design development.
Article
Emerging from the challenge to reduce energy consumption in buildings is the need for energy simulation to be used more effectively to support integrated decision making in early design. As a critical response to a Green Star case study, we present DEEPA, a parametric modeling framework that enables architects and engineers to work at the same semantic level to generate shared models for energy simulation. A cloud-based toolkit provides web and data services for parametric design software that automate the process of simulating and tracking design alternatives, by linking building geometry more directly to analysis inputs. Data, semantics, models and simulation results can be shared on the fly. This allows the complex relationships between architecture, building services and energy consumption to be explored in an integrated manner, and decisions to be made collaboratively.
Conference Paper
This paper proposes a digitally enhanced type of performance driven design method. In order to demonstrate this method, a design environment is presented that links the SideFx Houdini modelling and animation program to the Radiance and EnergyPlus simulation programs. This environment allows designers to explore large numbers of design variants using a partially automated iterative process of design development, design evaluation, and design feedback.
Article
A scanning pattern for sky photometry is described, in which the hemisphere is divided into 151 zones in bands parallel with the horizon.
Article
Performance-driven generative design methods are capable of producing concepts and stimulating solutions based on robust and rigorous models of design conditions and performance criteria. Using generative methods, the computer becomes a design generator in addition to its more conventional role as draftsperson, visualizer, data checker and performance analyst. To enable designers to readily develop meaningful input models, this paper describes a preliminary integration of a generative structural design system, eifForm, and an associative modeling system, Generative Components, through the use of XML models. An example is given involving generation of 20 lightweight, cantilever roof trusses for a saddle shaped stadium roof modeled in Generative Components. Synergies between the two systems and future extensions are discussed.
Interfacing between parametric associative and structural software
  • J L Coenders
Coenders, JL 2007, Interfacing between parametric associative and structural software. In Proceedings of the 4th International Conference on Structural and Construction Engineering, Melbourne, Australia, 26-28 September.
Operative Performativity (panel discussion)
  • Kolarevic
  • A Malkawi
Kolarevic, B and Malkawi, A 2005, Operative Performativity (panel discussion), in Performative Architecture: Beyond Instrumentality, New York: Spon Press, pp. 239-246.
Irradiation modelling made simple: the cumulative sky approach and its applications, in Plea2004 - The 21st Conference on Passive and Low Energy Architecture
  • Robinson
  • Stone
Robinson, D and Stone, A 2004, Irradiation modelling made simple: the cumulative sky approach and its applications, in Plea2004 - The 21st Conference on Passive and Low Energy Architecture, Eindhoven, The Netherlands.