Inductive Aerodynamics
Samuel Wilkinson1, Sean Hanna2, Lars Hesselgren3, Volker Mueller4
1,2University College London, UK; 3PLP/Architecture, UK; 4Bentley Systems, US; 3lhesselgren@plparchitecture
Abstract. A novel approach is presented to predict wind pressure on tall buildings for
early-stage generative design exploration and optimisation. The method provides
instantaneous surface pressure data, reducing performance feedback time whilst
maintaining accuracy. This is achieved through the use of a machine learning algorithm
trained on procedurally generated towers and steady-state CFD simulation to evaluate the
training set of models. Local shape features are then calculated for every vertex in each
model, and a regression function is generated as a mapping between this shape
description and wind pressure. We present a background literature review, general
approach, and results for a number of cases of increasing complexity.
Keywords. Machine learning; CFD; tall buildings; wind loads; procedural modelling.
It is generally recognised that architects currently require performance information to
guide their decisions almost from the inception of a project. In fact, there is a
prevailing mentality of simply trying to collect as much data as possible with the
intention of synthesising it into a situated design response. This presents a problem,
especially for computational fluid dynamic (CFD) wind simulation, whereby the time
required to assess the performance is obstructive to the fast and iterative nature of
current parametric design software. This is possibly due to the tendency for
architectural software tools to originate in engineering fields, without due
consideration of speed-accuracy tradeoffs to adjust for the application requirements
(Chittka et al., 2009; Lu et al., 1991). In other words, they are typically too accurate
and slow for the fast pace of modern conceptual design, massing or form decisions.
Developing a method that can give real-time performance feedback about a form
allows for intuitive play of the kind we are used to with physical models.
Wind engineering has traditionally been within the remit of engineers or
specialists, with numerical simulation (CFD) considered a supportive tool to physical
boundary layer wind tunnel (BLWT) testing. For instance, in the computational wind
engineering (CWE) literature there is substantial caution around numerical analysis,
namely for Reynolds-averaged Navier-Stokes (RANS) and to a lesser extent large-
eddy simulations (LES) (Stathopoulos, 1997; Bitsuamlak, 2006; Dagnew et al., 2009;
Menicovich et al., 2002). However, architects are increasingly becoming involved in
analysis, where accuracy is less critical since demand is typically
for relative scenario comparison or general flow behaviour (Lomax et al., 2001;
Malkawi et al., 2005; Chronis et al., 2012).
The tall building typology has been identified as a focal area here for a number of
reasons. Firstly, as height increases so too do the wind forces (along with seismic and
gravitational) which has consequences on facade panelisation and structural
efficiency, amongst others. We can construct a simple motivational argument to say
that increased external wind force requires more opposing force, i.e. more structure,
more materials, larger cores, less lettable floor space, less revenue, etc. Therefore there
is a need to consider the aerodynamic form of these buildings as they increase in
height. Secondly, the trend for tall buildings is to build them as high as (contextually,
economically and structurally) possible, necessitating cutting-edge design and
construction technologies (CTBUH, 2012). Thirdly, tall building form lends itself well
to parametric design as there is often a high degree of vertical logic that can be
expressed neatly with mathematical expressions (this generalisation is at least more
true than for shorter buildings). Given this, it is possible to easily generate a
procedural, or generic, tall building model that, with a relatively small number of
parameters, can represent a large number of potential designs. This becomes useful
when the objective is to sample the typological space of potential buildings, which
will be discussed in the methodology.
We present a novel approach to predict wind pressure on tall building models for
early-stage generative design exploration and optimisation (exploration as the non-
discrete parametric equivalent of tinkering, and optimisation as the single- or multi-
objective directed design space search requiring iterative testing and evaluation). The
method provides fast surface pressure data with the conventional visualisation,
reducing performance feedback time whilst maintaining verisimilitude.
This is achieved through the use of a machine learning algorithm, trained on a pre-
computed set of CFD simulation data. ANSYS CFX 13.0, a commonly used solver in
engineering practice, was used for steady-state RANS with a k-ε turbulence model.
The learning technique is grouped with artificial neural networks (ANN), support
vector machines (SVM), and random forest (RF) decision trees, in that there is a
training set of cases from which generalised rules are generated (Duffy, 1997). The
term machine learning stems from the fields of computer science (Mitchell, 1997) and
artificial intelligence (Samuel, 1959), but in statistics is referred to as regression and
in engineering as function approximation or surrogate modelling. Once trained, this
enables us to provide a new test case and make a prediction of the outcome. Inductive
reasoning, epistemologically, means constructing generalisations from specific
information, as opposed to deductive reasoning where small details are construed
from generalisations. The fundamental outcome of this learning approach is therefore
a continuous output response allowing interpolation and extrapolation between cases
that have not been explicitly simulated. In doing so, we are essentially moving the
simulation time from the front-end to the back-end of the process where more time is
available for pre-computation.
The following section provides a review of relevant literature in the generative,
performative design of tall buildings, wind modelling methods, speed-accuracy
tradeoffs, incorporation of learning in design, concluding with a problem-solution
hypothetical argument positioned in this state of current literature. The subsequent
structure of this paper will describe the methodological approach in general terms,
and results are presented from a series of experimental case studies of increasing
complexity from trivial to practical. The conclusions, further work and the paper as a
whole are positioned within the scope of ongoing research.
Literature Review
Tall Buildings
Tamura et al. (2009, 2010) and Tanaka et al. (2012) acknowledge the increase in tall
building complexity beyond the traditional extruded rectilinear form. We are now
seeing more unconventional free-style forms derived from the architect's use of more
advanced modelling software. These complicated sectional shapes, which may vary
with height, can actually provide better aerodynamic performance by disrupting, or
'confusing', vortex shedding and thus reducing crosswind response. Benefits can also
be found in more subtle manipulations such as corner chamfering or cutting, and by
creating voids, or porous regions, near the edges.
Despite rapid advances over the past century, this emerging generation of
skyscrapers poses new challenges for wind engineering. Irwin (2009) discusses a
number of these, such as the impact that aerodynamics have on construction cost.
Since the structure itself is a large proportion of the cost, and as for tall buildings the
wind is the governing lateral load, there are significant benefits to be had from
reducing wind loads. This also has the effect of reducing lateral motions that can
potentially cause occupant discomfort. He also suggests that shape aerodynamics
must be proactively considered, and iteratively optimised, early on in the design. With
the new generation of super-tall towers over 600m it is simply not possible to ignore
the wind performance. He quotes a designer of the Burj Khalifa, saying “we
practically designed the tower in the wind tunnel”, and were therefore able to produce
an extremely efficient aerodynamic shape that enabled the height with reasonable
structural systems and costs, and without any damping system.
The use of parametric CAD software has risen over the last decade, notably with
the release of Bentley GenerativeComponents and Rhino Grasshopper, and more
generally with the increased adoption of scripting. These
allow the user to create parametrically associative relationships related to geometry.
The extension of this idea is to use rules to define the parameters, or where these rules
can be related to the performance of a model component the geometry is directed by
some evaluative metric. Certain metrics can be calculated quickly without problem,
but if the calculation takes time it becomes obstructive to the modelling process. We
adopt the premise that it is better to have a broader range of lower resolution data
rather than a limited amount of exact data.
Speed-Accuracy Tradeoffs
Speed-accuracy tradeoffs (SATs) show that response accuracy generally increases
with response time, i.e. taking more time to make a decision results in a better
decision. Biological examples have been noted by Chittka et al. (2009), who explain
that “when it takes a long time to solve a difficult task, and the potential costs of
errors are low, the best solution from the perspective of an animal might be to guess
the solution quickly, a strategy that is likely to result in low decision accuracy.” The
two extremes can be called impulsive and reflective. This provides a neat analogy for
performance analysis in design where it is necessary to consider what the application
of the simulation tool is, and the consequent risks, before deciding a suitable accuracy.
Crucially though, and in conjunction with this reasoning, Burns (2005)
demonstrates that making more decisions with more mistakes (fast and inaccurate)
results in better overall performance (with bees, more nectar collected) than the more
fastidious (slow and accurate). Defining accuracy as the proportion of choices that are
correct, this highlights that accuracy should not be confined to the immediate task, i.e.
simulation accuracy, but to the larger one of improving building performance (see
Figure 1).
Response time is critical for performance-driven design and SATs must be
considered when developing early stage tools for when large-scale decisions are
made. Performance information is often scarce at this stage and iterative decisions
must be made quickly, necessitating fast response times in sync with the project
cycles. The development of CFD models has focused over the past decades on
improving accuracy, and computational time is optimised by specific software
vendors after-the-fact, with little thought given to the accuracy required by the user. In
contrast, recent developments in computer graphics have started with the desired
accuracy (believable) and speed (real-time) in mind, with successful results.
Figure 1
(Left) SAT for various task difficulties and skills; (Right) Notional positions of different modelling methods on SAT.
In the design context, CFD can typically be used for a number of purposes:
analysis of internal air movement, pollution dispersion, noise propagation, pedestrian
comfort in urban environments or tall building aerodynamics. As mentioned
previously, it is the last that is the focal application here, especially for early design
stages. There is a paradox here, in that the most complex flow types (bluff bodies) and
therefore most computationally intensive, need to be modelled in a scenario where
fast results are required. The numerical method must be as accurate and fast as
possible. In fact, the conclusion is reached that the fastest method has poor accuracy
and the slowest the best accuracy (as would be expected, considering the speed-
accuracy tradeoffs mentioned earlier). There is general agreement between (Lomax et
al., 2001) and (Chronis et al., 2012) that the “level of accuracy of a CFD simulation
needs to be compromised with the turnaround time requirements of its application.”
Lu et al. (1991) describe the same issue in mechanical engineering where slow but
accurate simulation makes interactive decision making impossible, when only quick
estimates are desired at early stages. It is only towards the final stages of design,
“when the engineer has converged to a small region of decision space, more accurate
simulations are needed to make fine distinctions.” The problem has therefore been
present since the early 90s, but as a solution they propose integration of simulation,
optimisation and machine learning.
Inductive Learning in Design
Our approach is supported by Samarasinghe (2007), who identifies learning from
observational data as the best solution for predicting system behaviour. This is necessary when
there is little or no understanding of the “underlying mechanisms because of complex
and non-linear interactions among various aspects of the problem.” Extracting these
complex relationships is often difficult since the systems are typically natural, and
therefore can have randomness, heterogeneity, multiple causes and effects, and noise.
Even when they are successfully extracted, they may be beyond our understanding
and are held as intractable computational functions or data structures. Hanna (2011)
tests the hypothesis that it is unnecessary to have any understanding of this underlying
system behaviour, but rather it is possible to make predictions about the system
simply by making observations. This is demonstrated by learning the structural
behaviour of system components and applying them to larger-scale scenarios.
Graening et al. (2008) propose a method that allows the extraction of
comprehensible knowledge from aerodynamic design data (jet-blades) represented by
discrete unstructured surface meshes. They use a displacement measure in order to
investigate local differences between designs and the resulting performance variation.
Knowledge, or rule, extraction from CFD data is primarily used to guide human-
centred design by improving understanding of the system's behaviour, whether it is for
jet turbine blade optimisation or architectural design. Whilst the connection between
local geometric features and surface pressure has been extended and changed here,
and used for a different application, this work is a close precedent.
Problem Hypothesis
It is argued here that approximations of CFD simulations can be made with machine
learning regression, using geometric shape descriptors as the learning features. The
entire evaluation process can be broadly split into five key work areas: i) procedural
geometry generation; ii) batch simulation; iii) shape feature generation; iv) machine
learning training; v) prediction and visualisation. Feature generation is essentially the
core of the process since the solution depends heavily on geometric description so as
to define surface pressure as a function of it. We hypothesise that surface pressure
distribution arising from wind flow around tall buildings can be learnt and predicted
with an accuracy appropriate to early stage design (feedback from practice indicates
<20% error) using shape feature description. It can be shown that it is possible to
combine, with an acceptable error, methods that have the separate contradictory
objectives of predictive accuracy and speed.
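The five work areas above can be sketched end-to-end as follows. This is a minimal, runnable illustration, not the authors' implementation: every function here is a trivial stand-in (a one-parameter "tower", a fake pressure response, normalised height as the only shape feature, and least squares in place of the learning algorithm).

```python
import numpy as np

# Minimal sketch of the five-stage process. All functions are hypothetical
# stand-ins for the real geometry, CFD, and learning components.

def generate_tower(height):                      # i) procedural generation
    return np.linspace(0.0, height, 11)          # vertex heights only

def run_cfd(verts):                              # ii) stand-in "simulation"
    return -0.5 * verts / verts.max()            # fake pressure vs. height

def shape_features(verts):                       # iii) per-vertex features
    return verts.reshape(-1, 1) / verts.max()    # here: normalised height only

def train_regressor(X, y):                       # iv) least-squares "learning"
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda X_new: X_new @ w               # v) instant prediction

towers = [generate_tower(h) for h in (50.0, 80.0, 100.0)]
X = np.vstack([shape_features(v) for v in towers])
y = np.concatenate([run_cfd(v) for v in towers])
predict = train_regressor(X, y)
p = predict(shape_features(generate_tower(70.0)))  # unseen design, no new CFD
```

The expensive steps (i–iv) happen once at the back-end; step (v) is then effectively free for any new design.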
Data Set Generation: Procedural Modelling
The parametric model was created in Bentley GenerativeComponents. The goal was to
create a generalised tower model, with the two properties of minimising the number
of parameters used whilst maximising the design representation potential, i.e. the
number of possible buildings it could create. This is important when considering
optimisation or exploratory design space searches to avoid the curse of
dimensionality. This means that as the number of parameters increases, the design
space grows exponentially as n^D, where n is the number of samples taken per
parameter and D is the number of parameters, or dimensionality. There is therefore
clearly a compromise to be made between model efficiency and representability.
Figure 2
(Left) Examples of evaluated procedural models in the training set on Case 4; (Right) Mesh feature extraction.
The geometry for the training set was generated using a procedural tall building
model with a select number of key parameters. There are in fact three separate
topologies in the procedural model with their own parameters, since it is difficult to
incorporate the entire design space with one parametric logic (Park et al., 2004;
Samareh, 1999). Using the unstructured triangulated surface mesh from these means
we are not limited by a single parametric topology in the learning phase of the method
(Graening et al., 2008). Local surface-mesh shape characteristics are used as input
features to the learning algorithm instead of the design parameters, avoiding reliance
on any one parametric model definition.
Simulation Method
An established solver, ANSYS CFX 13.0, was used throughout to run the RANS
steady-state simulations, with a k-ε turbulence model as it is regarded as the most
robust. Each simulation, depending on the complexity, requires up to 60 minutes to
converge (on a 2.66GHz i7). Solver convergence is reached when residuals fall below
a minimum of 10⁻⁶, typically at around 100 to 200 iterations. The number of cells in the
tetrahedral meshes varies between 0.8×10⁶ and 1.5×10⁶ depending on the geometry,
with prismatic expansion on surfaces 3 cells deep and a minimum cell size of 0.1m.
The wind was applied at an upstream inlet, with a reference speed (Ur) of 1 ms⁻¹ at a
reference height (Zr) of 10m. The most commonly used distribution of mean wind
speed with height is the 'power-law' expression:
U_x = U_r ( Z_x / Z_r )^α (1)
The exponent α is an empirically derived coefficient that is dependent on the
stability of the atmosphere. For neutral stability conditions it is approximately 0.143,
and is appropriate for open-surroundings such as open water or landscape. Future
work will include a wind profile that takes surrounding surface roughness, or context,
into account, as well as potential wind direction change with height.
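Equation 1 translates directly into code; the defaults below follow the setup described above (U_r = 1 ms⁻¹ at Z_r = 10m, α = 0.143 for neutral stability over open surroundings):

```python
def power_law_speed(z, u_ref=1.0, z_ref=10.0, alpha=0.143):
    """Mean wind speed at height z from the power-law profile (Equation 1)."""
    return u_ref * (z / z_ref) ** alpha

u10 = power_law_speed(10.0)    # equals u_ref at the reference height
u300 = power_law_speed(300.0)  # faster flow near the top of a tall building
```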
Shape Features and Learning
This method creates a definition for the pressure at a point on the model as the
function of a local geometric description. To describe a simple example of the
process: there are N models of a cuboid with various orientations; each is evaluated,
and the pressure P is extracted at M points over each model; for every M, a shape
descriptor X is calculated, such as the vertex height, normal components, curvature,
etc; this gives a set of geometric characteristics, and a corresponding pressure value;
these sets of P(X) are used as the training data. Pressure distribution is predicted from
these geometric descriptors alone meaning the selection is critical. A sensitivity
analysis has been conducted with a variety of descriptors to determine suitable
representation, details of which are not included here. When a new case is presented,
the shape descriptors are calculated and used to make a prediction of P. The feature
definition for point pressure in R^22 vector space used throughout the following is:
P ( Z, N(x,y,z), Nσ1-5(x,y,z), U(x,y,z) ) (2)
For a specific model vertex, P is the surface pressure, Z is the height, N(x,y,z) are the
normal components, Nσ1-5(x,y,z) are the standard deviations σ of the normal
components of cumulative mesh neighbourhood rings 1 through 5, and U(x,y,z) are the normalised
model position components. The extent of the neighbourhood curvature can be
extended beyond 5 rings, within computational resource limits. The definition in
Equation 2 gives 22 inputs and 1 output feature to train the learning algorithm for all
cases described below.
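One way the 22-element vector of Equation 2 could be assembled for a single vertex is sketched below. This is an assumption about the assembly step only: the mesh traversal that collects `ring_normals[r]`, the (k, 3) array of vertex normals in cumulative neighbourhood ring r, is not shown, and the input values are synthetic.

```python
import numpy as np

# Hypothetical assembly of the feature vector P(Z, N, Nσ1-5, U) of Equation 2.
def feature_vector(z, normal, ring_normals, u_pos):
    parts = [np.array([z]), np.asarray(normal)]        # Z and N(x,y,z): 4 values
    for r in range(1, 6):                              # rings 1 through 5
        parts.append(np.std(ring_normals[r], axis=0))  # σ per component: 3 each
    parts.append(np.asarray(u_pos))                    # normalised position: 3
    return np.concatenate(parts)                       # 1 + 3 + 15 + 3 = 22 inputs

rng = np.random.default_rng(0)
rings = {r: rng.normal(size=(6 * r, 3)) for r in range(1, 6)}  # synthetic normals
x = feature_vector(25.0, (0.0, 1.0, 0.0), rings, (0.1, 0.5, 0.25))
```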
For the Orientation, Height and Topology cases, an Artificial Neural Network
(ANN) was used, with a 70:30% split of the provided data to training:validation. For
the first two cases, separate sets constituting entire models were also held back for
testing, i.e. training was at 15° and 20m intervals respectively. For the third case, there
was no extra test set but the whole was split 70:15:15% to training:validation:test.
Validation data is to check for convergence during training. For the fourth case,
training data was from the procedural tall building model and test data from another
set of real buildings. In this case, a Random Forest (RF) algorithm was used instead as
it provided better results for the more complex problem. Further work is needed with
both methods to understand their applicability to certain tasks, however it is known
that the RF is better with noisy data sets than the ANN. Training set sizes and
summary results are given in Table 1, and computation times are given in Table 2.
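The training step for the fourth case might look as follows using scikit-learn; the library choice and synthetic data are assumptions (the paper does not name its implementation), but the 70:30 training:validation split and the Random Forest regressor match the setup described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 22 shape-feature inputs and a fabricated "pressure"
# response, replacing the real mesh features and CFD output.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 22))
y = -0.5 * X[:, 0] + 0.1 * X[:, 1]

# 70:30 split of the provided data into training:validation, as in the paper.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, train_size=0.7, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
score = model.score(X_val, y_val)  # R^2 on the held-out validation data
```

Swapping `RandomForestRegressor` for `sklearn.neural_network.MLPRegressor` would give the ANN variant used in the first three cases.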
Cuboid Orientation
The first and most simple test is the rotation of a cuboid, of width and depth 10m, and
height 50m. Simulations were run at 5° intervals from 0 to 85°; the ANN was trained
at 15° intervals and tested on the intermediate orientations. The sensitivity analysis here varies the number of
training samples and measures the standard deviation, σ, of the difference between
simulation and prediction. Figure 3 (left) shows the error σ against orientation for
various set sizes (bold vertical lines are training intervals of 15°), (centre) the training
regression of the entire set, and (right) the prediction error for an orientation of 25°.
With less training data, it can be seen that error is highest around 45° when flow
bifurcations (regime change) occur, although this is negated with sufficient data.
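The error metric used throughout — the standard deviation σ of the difference between simulation and prediction, expressed as a percentage of the test case's pressure range (see Table 1) — can be computed as below; the pressure values are illustrative stand-ins.

```python
import numpy as np

def sigma_error_pct(p_sim, p_pred):
    """σ of (simulation - prediction), as % of the simulated pressure range."""
    return 100.0 * np.std(p_sim - p_pred) / (p_sim.max() - p_sim.min())

p_sim = np.array([-5.0, -2.0, 0.5, 2.0])    # hypothetical simulated pressures, Pa
p_pred = np.array([-4.8, -2.1, 0.4, 2.1])   # hypothetical predictions, Pa
err = sigma_error_pct(p_sim, p_pred)        # small σ error for a close prediction
```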
Figure 3
(Left) Orientation vs. Error σ %; (Centre) Training set regression, R=0.99564; (Right) Prediction error (25°).
Cuboid Height
Secondly, a parametric cuboid was created with width and depth 10m, and height
varying from 10 to 100m in 5m increments. Figure 4 (left) shows the variability when
trained on 10, 20, 30 and 45m intervals, and (right) the prediction error for a height of
25m when trained at 20m intervals.
Figure 4
(Left) dHeight vs. Error σ %; (Centre) Training set regression, R=0.9992; (Right) Prediction error (25m).
Cuboid Topology
Here the number of edges was varied from 3 to 10, plus 0 (a circle), with diameter 10m and
height 50m. Instead of keeping a complete model separate for testing as in the last two
cases, here all cases were used but only a fraction of the total data set was used. This
is varied in Figure 5 (left), with a training set ranging from 10000 to 50000.
Figure 5
(Left) No. Edges vs. Error σ %; (Centre) Training set regression, R=0.98355; (Right) Prediction error (n=0).
Tall Buildings
In the final case, training data was collected from simulations of 600 procedural tall
building models, with a total of over 4×10⁶ shape features extracted. This was down-
sampled to 10⁵ by removing features in close proximity to reduce training time. The
test set contains 10 real tall buildings from around the world, selected for their range
of unique architectural characteristics. Figure 6 below shows predicted surface
pressure distribution in the top row, and the error distribution for the set in the bottom
row. The pressure range (-5.5 to 2.0 Pa) was taken over the entire test set, as was the
absolute error range (0 to 65.2%). The error distribution is shown in Figure 7 (right),
which fits a Gaussian normal distribution. Error percentiles: 99th = 35.7%, 95th =
20.0%, 90th = 13.0%, 75th = 6.1%. That is, 75% of the test features have an error
below 6.1%.
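Percentile summaries of this kind are straightforward to recompute from a vector of absolute errors; the values below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

# Synthetic absolute prediction errors (%) standing in for the real test set.
rng = np.random.default_rng(2)
abs_err = np.abs(rng.normal(loc=0.0, scale=8.0, size=100_000))

# e.g. "75% of the test features have an error below p75"
p75, p90, p95, p99 = np.percentile(abs_err, [75, 90, 95, 99])
```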
Figure 6
(Upper) Predicted pressure, Pa; (Lower) Error, %. Pressure range is the min. and max. of the entire set for
comparison, the error range is absolute max. error of the set (65.2%).
(Left to right) (1) Metlife Building, NYC; (2) The Shard, London; (3) Willis Tower (Sears), Chicago; (4) Euston
Tower, London; (5) Taipei 101, Taiwan; (6) Shanghai World Financial Centre; (7) Bank of China; (8) Exchange
Place, NYC; (9) Frankfurter Buro Centre, Frankfurt; (10) Washington Street, NYC.
Figure 7
(Left) Error σ % for each case; (Centre) Random Forest learning convergence; (Right) Error probability density.
Results Summary
Case Min σ Error (%) Max σ Error (%) Training Set Size
Orientation 1.2 (55°) 1.6 (10°) 110000 (15° training intervals)
Height 0.7 (10m) 2.0 (50m) 44720 (20m training intervals)
Topology 1.8 (5 Edges) 3.5 (0 Edges) 50000
Real 4.8 (Bank of China) 18.3 (Euston) 100000 (Procedural training)
Table 1
Summary of minimum and maximum error standard deviations (% over test case pressure range).
Case Train Sim. Train Feat. Gen. Train Predict Feat. Gen. * Predict *
Orientation 21600 9060 2600 1540 < 0.1
Height 18000 2370 720 620 < 0.1
Topology 32400 4670 1060 1750 < 0.1
Real 2160000 12000 620 720 < 0.1
Table 2
Summary of time (seconds) required for each case, split into Training (one-off back-end time) and Prediction
(front-end time). Mean feature generation time is 0.085s/vertex. *Mean over the test set, after down-sampling.
The results show that it is possible to achieve a relatively small prediction error
(Figure 7 and Table 1) in far less time (Table 2), within the methodology and
constraints described. These prediction errors are the compromise accepted in order
to avoid computationally intensive CFD simulation. A single conventional CFD
simulation takes a minimum of 1 hour, compared to our methodology, which has a
total front-end prediction time of under 12 minutes (for feature generation and
prediction) and a back-end, one-off training set simulation time of 600 hours (for
the real case). Once trained, an unlimited number of predictions can then be made.
Whilst these preliminary results are outside the rigorous accuracy necessary for
final engineering analysis, they are within the boundaries acceptable for early-stage
concept design for tall buildings, where interactive response time is a significant
consideration. The prediction accuracy and response times achieved are promising for
further work given the well-known complexities of fluid behaviour.
The next stages of the work are to consider time-dependent simulations to fully
consider the approximation of turbulence, vortex shedding and gusts, as well as
interference from complex urban contexts on boundary conditions, and further
improvement to the shape feature selection and generation time.
This research was sponsored by the EPSRC, Bentley Systems and PLP Architects.
Bitsuamlak, G., 2006. Application of computational wind engineering: A practical
perspective. In Third National Conference in Wind Engineering. pp. 1–19.
Burns, J.G., 2005. Impulsive bees forage better: the advantage of quick, sometimes
inaccurate foraging decisions. Animal Behaviour, 70(6), pp.1–5.
Chittka, L., Skorupski, P. & Raine, N.E., 2009. Speed-accuracy tradeoffs in animal
decision making. Trends in ecology & evolution, 24(7), pp.400–7.
Chronis, A. et al., 2012. Design Systems, Ecology and Time. In ACADIA.
CTBUH, 2012. Tall Buildings in Numbers: A Tall Building Review, 2012(1).
Dagnew, A.K., Bitsuamlak, G. & Merrick, R., 2009. Computational evaluation of
wind pressures on tall buildings. In 11th Americas Conference on Wind Engineering.
Duffy, A.H.B., 1997. The “what” and “how” of learning in design. IEEE Expert,
12(3), pp.71–76.
Graening, L. et al., 2008. Knowledge Extraction from Aerodynamic Design Data and
its Application to 3D Turbine Blade Geometries. JMMA, 7(4), pp.329–350.
Hanna, S., 2011. Addressing complex design problems through inductive learning.
Irwin, P.A., 2009. Wind engineering challenges of the new generation of super-tall
buildings. JWEIA, 97(7-8), pp.328–334.
Lomax, H., Pulliam, T.H. & Zingg, D.W., 2001. Fundamentals of computational fluid
dynamics, Berlin: Springer.
Lu, S.C.-Y., Tcheng, D.K. & Yerramareddy, S., 1991. Integration of Simulation,
Learning and Optimization to Support Engineering Design. Annals of the CIRP,
40(1), pp.143–146.
Malkawi, A.M. et al., 2005. Decision support and design evolution: integrating
genetic algorithms, CFD and visualization. AIC, 14(1), pp.33–44.
Menicovich, D. et al., 2002. Generation and Integration of an Aerodynamic
Performance Database within the Concept Design Phase of Tall Buildings.
Mitchell, T.M., 1997. Machine Learning, McGraw-Hill.
Park, S.M. et al., 2004. Tall Building Form Generation by Parametric Design
Process. In CTBUH. Seoul Conference, pp. 1–7.
Samarasinghe, S., 2007. Neural Networks for Applied Sciences and Engineering:
From Fundamentals to Complex Pattern Recognition, Auerbach Publications, NY.
Samareh, J.A., 1999. A Survey of Shape Parameterisation Techniques. In
CEAS/AIAA/ICASE/NASA Langley International Forum on Aeroelasticity and
Structural Dynamics. pp. 333–343.
Samuel, A.L., 1959. Some Studies in Machine Learning Using the Game of Checkers.
IBM JRD, 3(3).
Stathopoulos, T., 1997. Computational wind engineering: Past achievements and
future challenges. JWEIA, 67-68, pp.509–532.
Tamura, Y., 2009. Wind and tall buildings. In EACWE 5.
Tamura, Y. et al., 2010. Aerodynamic Characteristics of Tall Building Models with
Various Unconventional Configurations. Structures Congress 2010, pp.278–278.
Tanaka, H. et al., 2012. Experimental investigation of aerodynamic forces and wind
pressures acting on tall buildings with various unconventional configurations.
JWEIA, 107-108, pp.179–191.
... In this parametric paradigm, architects can easily generate immense numbers of alternative scenarios but are then faced with the timeconsuming task of evaluation and selection. One earlier solution focusses on early-stage design of tall buildings, using pre-computed procedural model sets, local morphological shape features, and machine learning via an artificial neural network (Wilkinson et al. 2013). It was shown that significantly faster prediction times can be achieved whilst minimising approximation errors to taskappropriate levels. ...
... Another example of the solution approximation approach, this time applied to building design, is by Wilkinson et al. (2013). Predictions are made through training an ANN on shape features extracted from a set of evaluated procedural tall building models. ...
... The approach taken here is towards performance prediction of wind-induced surface pressure from shape analysis, developing previous work on morphological prediction (Wilkinson et al. 2013). It has previously been shown that it is possible, with a reasonable degree of accuracy and speed, to predict surface pressure for earlystage tall building design. ...
Full-text available
A new approach is demonstrated to approximate computational fluid dynamics (CFD) in urban tall building design contexts with complex wind interference. This is achieved by training an artificial neural network (ANN) on local shape and fluid features to return surface pressure on test model meshes of complex forms. This is as opposed to the use of global model parameters and Interference Factors (IF) commonly found in previous work. The ANN is trained using shape and fluid features extracted from a set of evaluated principal (design) models (PMs). The regression function is then used to predict results based on shape features from the PM and fluid features from a one-off obstruction model (OM), context only, simulation. For the application of early-stage generative design, the errors (against CFD validation) are less than 10% centred standard deviation σ, whilst the front-end prediction times for the test cases are around 20s (up to 500 times faster than the CFD).
... Previous work [1] demonstrated the speed and accuracy of a reduced-order model (ROM) based on the use of a geometric feature vector (P = {z, n, nσr, u}). The objective, as here, is to match the rapid generation of design alternatives with accurate analysis of equal speed. ...
... The primary aim of this work is to test the scalability of the ROM to cases with complex urban interference by extending the shape-based feature vector to include local wind speed. This work is therefore a development on the methodology and results of [1,2]. Predictions of the isolated tall buildings are inadequate when considering the significant effects that dense urban environments can have on the wind-induced surface pressure. ...
... For testing however, this is replaced with v = v_T, the wind speed at the vertex's transformed position in the OM fluid field. This is essentially the same as in previous work [1], except that previously the vertex height z was used instead of the wind speed v. This was acceptable for interference-free predictions since the wind profile was the same in both the training and test models, which is now not the case. ...
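The substitution described in this excerpt, replacing the vertex height z with the wind speed v_T sampled from the obstruction-model flow field, might be sketched as follows. The nearest-sample lookup and the synthetic wind profile are assumptions for illustration; the actual interpolation scheme is not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic samples of the obstruction-model (OM) fluid field:
# positions in the domain and the wind speed at each of them
om_points = rng.random((1000, 3))
om_speed = 1.0 + 9.0 * om_points[:, 2]  # faster with height, as a stand-in profile

def v_at(position):
    """Wind speed v_T at a vertex's transformed position (nearest OM sample)."""
    idx = np.argmin(np.linalg.norm(om_points - position, axis=1))
    return om_speed[idx]

vertex = np.array([0.2, 0.5, 0.8])
normal = np.array([1.0, 0.0, 0.0])
# Training features used the height z; at test time z is replaced by v = v_T:
feature = np.concatenate(([v_at(vertex)], normal))
print(feature.shape)  # (4,)
```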
A novel approach is demonstrated to approximate the effects of complex urban interference on the wind-induced surface pressure of tall buildings. This is achieved by decomposition of the domain into two components: the obstruction model (OM) of the static large-scale urban context, for which a single computational fluid dynamics (CFD) simulation is run; and the principal model (PM) of the isolated tall building under design, for which repeatable reduced-order model (ROM) predictions can be made. The ROM is generated with an artificial neural network (ANN), using a set of feature vectors comprising an input of local shape descriptors and a range of wind speeds from a training geometry, and an output response of pressure. For testing, the OM CFD simulation provides the flow boundary condition wind speeds to the PM ROM prediction. The result is vertex-resolution surface pressure data for the PM mesh, intended for use within generative design exploration and optimisation. It is found that the mean absolute prediction error is around 5.0% (σ: 7.8%) with an on-line process time of 390 s, 27 times faster than conventional CFD simulation; considering full process time, only 3.2 design iterations are required for the ROM time to match CFD. Existing work in the literature focuses solely on creating generalised rules relating global configuration parameters and a global interference factor (IF). The work presented here is therefore a significantly different approach, with the advantages of increased geometric flexibility, output resolution, speed, and accuracy.
... Results from the simulation were compared against the field measurements, where the air velocity obtained from the simulation was 6.5 m/s against 7 m/s from the field measurements, indicating less than 20% error. The study stated that, according to Wilkinson et al. [16], acceptable error for CFD results is up to 20%, thus validating the simulation performed. Results depict an acceleration in wind velocity by 1.3 m/s, with taller buildings enhancing wind movement along the streets and also causing a reduction in the air temperature by 2 °C. ...
... Wilkinson et al. [16], an acceptable error for CFD results is up to 20%. Thus it can be established that the numerical model has been validated. ...
Rapid urban growth and development over the past few years in Dubai has increased the rate at which the mean maximum temperatures are rising. Progressively soaring temperatures amplify the effect of heat islands (HIs), adding to the high cooling demands. This work numerically explicated the effect of HIs in a tropical desert climate by adopting Heriot-Watt University Dubai Campus (HWUDC) as a case study. The study analysed thermal flow behaviour around the campus by using Computational Fluid Dynamics (CFD) as a numerical tool. The three-dimensional Reynolds-Averaged Navier–Stokes (RANS) equations were solved under the FLUENT commercial code to simulate temperature and wind flow parameters at each discretised location. Field measurements were carried out to validate the results produced by CFD for closer approximation in the representation of the actual phenomenon. Results established that the air temperature is inversely proportional to wind velocity. Hotspots were formed in the zone 1 and 3 regions with a temperature rise of 9.1%, which caused a temperature increase of 2.7 °C. Observations illustrated that the building configuration altered the wind flow pattern, where the wind velocity was higher in the zone 2 region. Findings determined an increase in the sensible cooling load by 19.61% due to a 1.22 °C temperature rise. This paper highlighted the application of CFD in modelling an urban micro-climate and also shed light on future research development to quantify the HIs.
... The method has been demonstrated on specific cases of wind-induced surface pressure derived from CFD on cuboid orientation, height, and topological interpolation, and realistic, context-free tall building prediction from procedural models (Wilkinson et al., 2013). In this case the reduced-order model is generated and tested on different geometries with a similar prediction time and accuracy as this study. ...
... As well as potential applicability to a range of different analysis methods, the flexibility of the presented methodology has the scope for generalisation both upstream and downstream of the model reduction process. This is a claim made in conjunction with previous work by Wilkinson et al. (2013, 2014), where a reduced-order model has been demonstrated for approximating CFD with an input of procedural models and a test set of real tall buildings. ...
Conference Paper
In this paper an approach for generating reduced-order performance models from surface mesh topology is presented. The method uses an artificial neural network (ANN) to create a regression function linking local vertex shape characteristics and simulation response. Two cases of model orientation interpolation are demonstrated: firstly for simple insolation, and secondly for wind pressure from steady-state computational fluid dynamics (CFD) simulations. Finally, both are integrated in a single- and multi-objective analysis of the performance space and prediction variability, and an assessment of the approach's speed and accuracy. It is concluded that, since prediction time is independent of the basis simulation, the benefits increase with simulation cost; and that prediction variability, or error, does not substantially alter the structure of non-dominated solutions in the Pareto analysis.
... This is how data-driven approaches have become one of the cornerstones of physics, from Data Assimilation (DA) to Machine Learning (ML). As an example, ML has gained interest from classical physics (fluid mechanics [32,116,145,174]; aerodynamics [267,277]; plasma physics [83,189]; astrophysics and astronomy [115,260]) to quantum physics (particle physics [4]; quantum mechanics [164]). Perhaps physicists nourish a hope of exploring the "chasm of ignorance" using data-based techniques, by pushing the boundaries of classical approaches [105]. ...
The contributions of this thesis belong to the general framework of data-based and physically-based data-driven modelling. An efficient approach for Machine Learning (ML), as well as a speed-up technique for Data Assimilation (DA), have been developed. For this purpose, Dimensionality Reduction (DR) and stochastic spectral modelling were used. In particular, a coupling between Proper Orthogonal Decomposition (POD) and Polynomial Chaos Expansion (PCE) is at the center of this thesis's contributions. POD and PCE have widely proved their worth in their respective frameworks, and the idea was to combine them for optimal field-measurement-based forecasting, and an ensemble-based acceleration technique for variational DA. For this purpose, (i) a physically interpretable POD-PCE ML for non-linear multidimensional fields was developed in the Neural Networks (NN) paradigm and (ii) a hybrid ensemble-variational DA approach for parametric calibration was proposed with adapted calculations of the POD-PCE metamodelling error covariance matrix. The proposed techniques were assessed in the context of an industrial application, for the study of sedimentation in a coastal power plant's water intake. Water intakes ensure plant cooling via a pumping system. They can be subject to sediment accumulation, which represents a clogging risk and requires costly dredging operations. For monitoring and safety reasons, the power plant stakeholders asked for a predictive tool that could be run in operational conditions. Data collected during many years of monitoring in the study area were provided. The objective was then to achieve comprehensive analysis of the flow and sediment dynamics, as well as to develop an optimal model in terms of forecasting accuracy, physical meaning, and required computational time. Uncertainty reduction and computational efficiency were therefore starting points for all proposed contributions.
In addition to the previously proposed methods, Uncertainty Quantification (UQ) studies were undertaken. Specifically, (i) uncertainties related to tidal hydrodynamic modelling, resulting from common modelling choices (domain size, empirical closures) were investigated. POD patterns resulting from measurements and numerical scenarios were compared; (ii) UQ study of the sediment transport modelling in the intake, in a high-dimensional framework, was achieved. Investigations were based on appropriate DR. In fact, POD patterns of Boundary Conditions (BC) and Initial Conditions (IC), resulting from hydrodynamic simulations outputs and from bathymetry measurements respectively, were used. A perspective of this work would be to implement a hybrid POD-PCE model, using both measured and numerically emulated data, to better understand and predict complex physical processes. This approach would offer a complete, fast and efficient tool for operational predictions.
... Deep Learning techniques (DL [29,59]) and more generally Machine Learning (ML [79,95]), and their applications to physical problems (fluid mechanics [9,51,67,74] ; aerodynamics [110,115] ; plasma physics [28,83] ; astrophysics and astronomy [50,106] ; particle physics [2] ; quantum mechanics [70], geosciences [27,46,86,88,92]) have made a promising take-off in the last few years. This has been particularly the case for fields where the measurement potential has dramatically increased, with increasing spatiotemporal resolution (e.g. ...
In an ever-increasing interest for Machine Learning (ML) and a favorable data development context, we here propose an original methodology for data-based prediction of two-dimensional physical fields. Polynomial Chaos Expansion (PCE), widely used in the Uncertainty Quantification (UQ) community, has long been employed as a robust representation for probabilistic input-to-output mapping. It has been recently tested in a pure ML context, and shown to be as powerful as classical ML techniques for point-wise prediction. Some advantages are inherent to the method, such as its explicitness and adaptability to small training sets, in addition to the associated probabilistic framework. Simultaneously, Dimensionality Reduction (DR) techniques are increasingly used for pattern recognition and data compression and have gained interest due to improved data quality. In this study, the interest of Proper Orthogonal Decomposition (POD) for the construction of a statistical predictive model is demonstrated. Both POD and PCE have amply proved their worth in their respective frameworks. The goal of the present paper was to combine them for field-measurement-based forecasting. The described steps are also useful to analyze the data. Some challenging issues encountered when using multidimensional field measurements are addressed, for example when dealing with few data. The POD-PCE coupling methodology is presented, with particular focus on input data characteristics and training-set choice. A simple methodology for evaluating the importance of each physical parameter is proposed for the PCE model and extended to the POD-PCE coupling.
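The POD stage of the POD-PCE coupling described above can be illustrated with a few lines of NumPy. This is a sketch under stated assumptions: the snapshot data is synthetic (a low-rank field plus noise), and the PCE surrogate that would map physical inputs to the retained POD coefficients is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic snapshot matrix: 50 field measurements of a 400-point field,
# with a rank-5 underlying structure plus small measurement noise
base = rng.random((400, 5)) @ rng.random((5, 50))
snapshots = base + 0.01 * rng.standard_normal((400, 50))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

# Retain the leading modes capturing ~99% of the variance
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1
modes = U[:, :k]                        # POD basis vectors
coeffs = modes.T @ (snapshots - mean)   # reduced coordinates a PCE would predict

recon = mean + modes @ coeffs           # low-rank reconstruction of the snapshots
print(k, modes.shape, coeffs.shape)
```

The compression is what makes the subsequent surrogate tractable: instead of predicting 400 field values per case, a PCE (or any regressor) only has to predict the k retained coefficients.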
... Furthermore, it requires the consideration of other data mining aspects like feature extraction, feature reduction and post-processing. Wilkinson et al. [9] adopted the basic idea of Graening et al. and utilized unstructured surface meshes as a unified object representation for the prediction of the local wind pressure distribution on tall buildings. In this paper, we generalize the concept behind the analytics of design data based on a unified shape representation by introducing the shape mining framework. ...
Although the integration of engineering data within the framework of product data management systems has been successful in the recent years, the holistic analysis (from a systems engineering perspective) of multi-disciplinary data or data based on different representations and tools is still not realized in practice. At the same time, the application of advanced data mining techniques to complete designs is very promising and bears a high potential for synergy between different teams in the development process. In this paper, we propose shape mining as a framework to combine and analyze data from engineering design across different tools and disciplines. In the first part of the paper, we introduce unstructured surface meshes as meta-design representations that enable us to apply sensitivity analysis, design concept retrieval and learning as well as methods for interaction analysis to heterogeneous engineering design data. We propose a new measure of relevance to evaluate the utility of a design concept. In the second part of the paper, we apply the formal methods to passenger car design. We combine data from different representations, design tools and methods for a holistic analysis of the resulting shapes. We visualize sensitivities and sensitive cluster centers (after feature reduction) on the car shape. Furthermore, we are able to identify conceptual design rules using tree induction and to create interaction graphs that illustrate the interrelation between spatially decoupled surface areas. Shape data mining in this paper is studied for a multi-criteria aerodynamic problem, i.e. drag force and rear lift, however, the extension to quality criteria from different disciplines is straightforward as long as the meta-design representation is still applicable.
To contribute to sustainable energy production, the current paper proposed and numerically solved a model of a roll bond photovoltaic thermal (RB-PVT) condenser with fins by using ANSYS Software 18. First, the effect of four different fin shapes and numbers was analyzed to determine the fin shape and number best suited for the study. Then, on the basis of an RB-PVT unit condenser with 3 fins of straight rectangular shape, a sensitivity analysis was conducted considering fin length and width in the range of 0.2–0.7 mm, and 195°–255° and 345°–285° for the position of fins on the left and right side of the fixed one, respectively. The highest average values in heat dissipation flux, pressure drop and overall fin efficiency were respectively obtained to be 1792.693 W/m², 86.471 kPa and 79.767% when varying fin(s) width and angle. In addition, an average refrigeration coefficient of performance (COP) of 5.079 was achieved, which is higher than that of previous studies. Meanwhile, a maximum average of 258.07 United States dollars (USD) would be saved to capture and store 8.603 tons of CO2 emissions, after every 5 years. An RB-PVT unit with fins not only can improve the system performance but also can help achieve a clean environment.
In performance-oriented architectural design, the use of advanced computational simulation tools may provide valuable insight during design. However, the use of such tools is often a bottleneck in the design process, given that computational requirements are usually high. This is a fact that mostly affects the early conceptual stage of design, where crucial decisions mainly occur, and available time is limited. In order to deal with this, decision-makers frequently resort to drawing conclusions from experience, and, as such, valuable insight that advanced computational methods have to offer is lost. This paper explores an alternative approach, which builds on machine-learning algorithms that inductively learn from simulation-derived data, yielding models that approximate to a good degree and are orders of magnitude faster. We focus on visual comfort of office spaces. This is a type of space that specifically requires visual comfort more than others. Three machine-learning methods are compared with respect to applicability in approximating daylight autonomy and daylight glare probability. The comparison focuses on accuracy and time cost of training and estimation. Results demonstrate that machine-learning-based approaches achieve a favourable trade-off between accuracy and computational cost, and provide a worthwhile alternative for performance evaluations during architectural conceptual design.
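The comparison this abstract describes, several machine-learning methods trained on simulation-derived samples and weighed on accuracy versus time cost, might be set up as below. This is a hedged sketch: the data is a synthetic stand-in for daylight metrics, and the choice of the three scikit-learn regressors is illustrative rather than the paper's selection.

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.random((1500, 6))                     # stand-in design parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]   # stand-in for a daylight metric
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for model in (Ridge(),
              RandomForestRegressor(random_state=0),
              MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)):
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    name = type(model).__name__
    scores[name] = r2_score(y_te, model.predict(X_te))
    print(f"{name}: R2={scores[name]:.3f}, fit={elapsed:.2f}s")
```

The trade-off the paper highlights shows up even in this toy setting: the linear model is fastest but cannot capture the non-linear response, while the non-linear learners trade a longer (one-off) training time for much better held-out accuracy.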
Conference Paper
Despite the fact that tall buildings are the most wind-affected of architectural typologies, testing for aerodynamic performance is typically conducted during the later design phases, well after the overall geometry has been developed. In this context, aerodynamic performance studies are limited to evaluating an existing design rather than a systematic performance study of design options driving form generation. Beyond constraints of time and cost of wind tunnel testing, which is still more reliable than Computational Fluid Dynamics (CFD) simulations for wind conditions around buildings, aerodynamic performance criteria lack an immediate interface with parametric design tools. This study details a framework for empirical data collection through wind tunnel testing of mechatronic dynamic models (MDM) in a uniform airflow, and the expansion of the collected dataset by determining a mathematical interpolating model using an Artificial Neural Network (ANN) algorithm, developing an Aerodynamic Performance Data-Base (APDB). The philosophical provocation for our research is found in the early 20th century, when Frederick Kiesler proclaimed the interacting of forces CO-REALITY, which he defined as The Science of Relationships. In the same article Kiesler proclaims that Form Follows Function is an outmoded understanding, and that design must demonstrate continuous variability in response to interactions of competing forces. This topographic space is both constant and fleeting where form is developed through the broadcasting of conflict and divergence as a system seeks balance and where one state of matter is passing by another; a decidedly fluid system. However, in spite of the fact that most of our environment consists of fluids or fluid reactions, instantaneous and geologic, natural and engineered, we have restricted ourselves to approaching the design of buildings and their interactions with the environment through solids, their properties and geometry.
The research described herein explores alternative relations between the object and the flows around it as an iterative process, suggesting an additional layer to the traditional approach of Form Follows Function by proposing Form Follows Flow.
Conference Paper
Discussion of architecture in ecological terms usually focuses on the spatial and material dimensions of design practice. Yet there is an equally critical temporal dimension in ecology that is just as relevant to design. At the micro scale is the question of 'real time' feedback from our design systems. At the macro scale is the issue of sustainability, in other words long term -- and potentially disastrous -- feedback from terrestrial ecosystems. In between are numerous different units for quantizing time in design and computation. In this paper, we examine some of these units -- 'real time', 'design time', 'development time' -- to suggest how they interact with the ecology of design technology and practice. We contextualize this discussion by reference to relevant literature from the field of ecology and to our work applying custom design and analysis tools on architectural projects within a large interdisciplinary design practice.
Conference Paper
At present, the wind engineering toolbox consists of wind-tunnel testing of scaled models, limited full-scale testing, field measurements, and mechanical load/pressure testing. The evolution of computational wind engineering (CWE) based on computational fluid dynamics (CFD) principles is making the numerical evaluation of wind loads a potentially attractive proposition. This is particularly true in light of the positive development trends in hardware and software technology, as well as numerical modeling. The present study focuses on numerical evaluation of wind pressures on tall buildings by using the Commonwealth Advisory Aeronautical Research Council (CAARC) building model (Melbourne, 1980). The CAARC model has been used extensively to study wind loading on tall buildings in wind tunnel studies and is usually adopted for calibration of experimental techniques. Numerically obtained pressure coefficients on the surface of the CAARC building under different configurations of adjacent building are compared with wind tunnel data collected at the RWDI USA LLC laboratory for the present study and from literature. The present numerical simulation uses mostly Reynolds Averaged Navier-Stokes equations (RANS) and Large Eddy Simulation (LES) for a select few cases.
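Comparisons of this kind between wind-tunnel and CFD results are normally made in terms of the dimensionless pressure coefficient. The following is a minimal sketch of the standard definition, not code from the study; the example values are illustrative.

```python
def pressure_coefficient(p, p_inf, rho, u_ref):
    """Cp = (p - p_inf) / (0.5 * rho * u_ref**2), the standard definition:
    surface gauge pressure normalised by the free-stream dynamic pressure."""
    return (p - p_inf) / (0.5 * rho * u_ref**2)

# Example: 250 Pa gauge pressure in a 20 m/s reference flow of air (rho = 1.225 kg/m^3)
cp = pressure_coefficient(p=250.0, p_inf=0.0, rho=1.225, u_ref=20.0)
print(round(cp, 3))  # 1.02
```

Because Cp is dimensionless, coefficients measured on a scaled wind-tunnel model can be compared directly with those from a full-scale numerical simulation.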
Optimisation and related techniques are well suited to clearly defined problems involving systems that can be accurately simulated, but not to tasks in which the phenomena in question are highly complex or the problem ill-defined. These latter are typical of architecture and particularly creative design tasks, which therefore currently lack viable computational tools. It is argued that as design teams and construction projects of unprecedented scale are increasingly frequent, this is just where such optimisation and communication tools are most needed. This research develops a method by which to address complex design problems, by using inductive machine learning from example precedents either to approximate the behaviour of a complex system or to define objectives for its optimisation. Two design domains are explored. A structural problem of the optimisation of stiffness and mass of fine scale, modular space frames has relatively clearly defined goals, but a highly complex geometry of many interconnected members. A spatial problem of the layout of desks in the workplace addresses the social relationships supported by the pattern of their arrangement, and presents a design situation in which even the problem objectives are not known. These problems are chosen to represent a range of scales, types and sources of complexity against which the methods can be tested. The research tests two hypotheses in the context of these domains, relating to the simulation of a system and to communication between the designer and the machine. The first hypothesis is that the underlying structure and causes of a system’s behaviour must be understood to effectively predict or simulate its behaviour. This hypothesis is typical of modelling approaches in engineering. 
It is falsified by demonstrating that a function can be learned that models the system in question—either optimising of structural stiffness or determining desirable spatial patterns—without recourse to a bottom up simulation of that system. The second hypothesis is that communication of the behaviour of these systems to the machine requires explicit, a priori definitions and agreed upon conventions of meaning. This is typical of classical, symbolic approaches in artificial intelligence and still implicitly underlies computer aided design tools. It is falsified by a test equivalent to a test of linguistic competence, showing that the computer can form a concept of, and satisfy, a particular requirement that is implied only by ostensive communication by examples. Complex, ill-defined problems are handled in practice by hermeneutic, reflective processes, criticism and discussion. Both hypotheses involve discerning patterns caused by the complex structure from the higher level behaviour only, forming a predictive approximation of this, and using it to produce new designs. It is argued that as these abilities are the input and output requirements for a human designer to engage in the reflective design process, the machine can thus be provided with the appropriate interface to do so, resulting in a novel means of interaction with the computer in a design context. It is demonstrated that the designs output by the computer display both novelty and utility, and are therefore a potentially valuable contribution to collective creativity.
This paper presents various topics related to "wind and tall buildings". Starting from some historical matters, it discusses wind related issues relevant to tall building constructions, wind-induced vibration of tall buildings and its monitoring, equivalent static wind load for structural design, design load problems for cladding design and frame design, habitability to building vibrations, damping in tall buildings, vibration control, elasto-plastic behavior of tall buildings, wind-induced vibrations of base-isolated tall buildings, and interference effects. Finally, it introduces future trends.
Conference Paper
Tall buildings have been traditionally designed to be symmetric rectangular, triangular or circular in plan, in order to avoid excessive seismic-induced torsional vibrations due to eccentricity, especially in seismic prone regions like Japan. However, recent tall building design has been released from the spell of compulsory symmetric shape design, and free-style design is increasing. This is mainly due to architects' and structural designers' challenging demands for novel and unconventional expressions. Development of computer aided analytical techniques and of vibration control techniques using auxiliary devices has also contributed to this trend. Another important aspect is that rather complicated sectional shapes are basically good with regard to aerodynamic properties for crosswind responses, which is a key issue in tall-building wind-resistant design. A series of wind tunnel tests have been carried out to determine wind forces and wind pressures acting on 31 tall building models with various configurations: square plan, rectangular plan, elliptic plan, with corner cut, with corner chamfered, tilted, tapered, inverse tapered, with setbacks, helical, openings and so on. Dynamic wind-induced response analyses of these models have also been conducted. The results of these tests have led to comprehensive discussions on the aerodynamic characteristics of various tall building configurations, and studies on corresponding optimal structural systems.
Tall buildings have been traditionally designed to be symmetric rectangular, triangular or circular in plan, in order to avoid excessive seismic-induced torsional vibrations due to eccentricity, especially in seismic-prone regions like Japan. However, recent tall building design has been released from the spell of compulsory symmetric shape design, and free-style design is increasing. This is mainly due to architects’ and structural designers’ challenging demands for novel and unconventional expressions. Another important aspect is that rather complicated sectional shapes are basically good with regard to aerodynamic properties for crosswind excitations, which are a key issue in tall-building wind-resistant design. A series of wind tunnel experiments have been carried out to determine aerodynamic forces and wind pressures acting on square-plan tall building models with various configurations: corner cut, setbacks, helical and so on. The results of these experiments have led to comprehensive understanding of the aerodynamic characteristics of tall buildings with various configurations.
This paper presents a knowledge processing system, called AIMS (Adaptive and Interactive Modeling System), which integrates simulation, learning, and optimization techniques to perform multi-objective model formation and model utilization tasks. The system is aimed at improving the utility of simulation programs for analysis and synthesis during various stages of design. In this paper, we demonstrate the use of AIMS with an internal combustion engine simulator. A set of models, which trade off accuracy with speed, are induced by AIMS based on examples generated from the simulator. When comparing induced models with the original simulator, we observe orders of magnitude improvement in the model's execution speed with only a minor compromise in the model's predictive accuracy.