Abstract
Scale-resolving turbulent flow simulation for urban environments is computationally expensive and not widely used in industrial practice, despite inherent complexities from geometry and high Reynolds number effects. One way to reduce the computational demand of the simulation is to focus a turbulence-resolving solver on the region of interest, and employ a cheaper less accurate solver elsewhere. We present developments towards a dual Navier-Stokes / lattice-Boltzmann (NS/LB) solver for three-dimensional unsteady urban wind flow. The simulation domain is divided into an NS sub-domain and an LB sub-domain with calculations performed on CPU and GPU respectively. Turbulence is modelled via a large eddy simulation (LES) sub-grid model in the LB sub-domain and an unsteady Reynolds averaged Navier-Stokes model (URANS) in the NS sub-domain. Both sub-domains are bi-directionally coupled on their overlapping boundaries; a synthetic eddy method (SEM) is used to generate instantaneous flow data for the LB sub-domain from the URANS mean flow. Preliminary results of the dual NS/LB model on laminar flow show that the two solvers are able to communicate and simulate continuous flow across the interface. The implementation of the SEM inlet boundary condition successfully demonstrates the convection of turbulence throughout the LB sub-domain. Further work to fully embed the LES region within the URANS domain is ongoing. While motivated by a focus on urban wind flow simulations, this work is expected to have broader relevance where computational domains include flow around a range of geometric scales.
Marta Camps Santasmasas1, Alistair J. Revell1, Ben Parslew1,
Adrian R. G. Harwood1 and William Crowther1
1School of Mechanical, Aerospace and Civil Engineering,
The University of Manchester, Manchester M13 9PL, United Kingdom
1 Introduction
Wind modelling in urban environments has many
applications, from pedestrian wind comfort studies to
prediction of contaminant dispersion in toxic gas re-
lease accidents. The most widely-used computational
models for urban wind flow numerically solve the
Reynolds averaged Navier-Stokes (RANS) equations.
However, scale-resolving methods such as Large Eddy Simulation (LES), which can accurately simulate turbulent flow structures, require large, fine grids and incur high computational costs, limiting the application of more accurate methods in this sector.
So-called ‘hybrid’ turbulence models combine a
turbulence-resolving approach such as LES on the re-
gion of interest with the less computationally expen-
sive RANS methods employed elsewhere (see Frohlich
and von Terzi (2008)). The majority of such meth-
ods solve the Navier-Stokes equations in both regions.
On the other hand, the lattice-Boltzmann (LB) method
is also able to model turbulent flow and may present
advantages over Navier-Stokes (NS) solvers in some
cases. For example, its local and simple algorithm
is well-suited to resolving flow around complex geometries
using graphics processing units (GPUs). However,
LB's conditional stability and uniform domain
discretisation reduce its efficiency as the domain
size increases. Hybrid
RANS-LES methods have gained popularity in recent
years following sustained efforts to develop and vali-
date their use for industrial CFD applications (e.g. see
Haase et al. 2009) and in particular, the use of embed-
ded LES is emerging as a reliable and efficient way to
combine cost saving RANS with high accuracy LES
(see e.g. Holgate et al. 2018).
A pragmatic approach to reducing the computa-
tional cost while still resolving turbulence in the re-
gions of interest is to combine two solvers, each ad-
dressing part of the domain with a different numerical
method and a different turbulence strategy. Tong and
He (2015) and Atanasov et al. (2016) propose different
‘dual’ NS/LB solvers for both two-dimensional and
three-dimensional, steady and unsteady flow. These
dual solvers demonstrate the advantages of dividing
the problem into sub-domains to be solved with NS
and LB, but thus far both solvers have been imple-
mented on the same computational framework; i.e.
neither method makes use of heterogeneous architec-
ture. It is well established that the computational cost
of the LB method can be reduced up to two orders of
magnitude when executed using GPUs (see Mawson, 2014).
In the present work we investigate the coupling of
an LB flow solver, implemented on GPU, with an NS
solver implemented on CPU. The solvers are com-
bined in the framework of an embedded large eddy
simulation approach wherein the LB solver models
only part of the domain, overcoming mesh-related
drawbacks for standard LB methods. The NS solver
employs a RANS turbulence model within a transient
solver, and is coupled to the embedded LB solver via
fluctuations generated from a synthetic eddy genera-
tor at the interface. In this way, the NS sub-domain
models the mean flow in the outermost region and
the LB sub-domain resolves small-scale flow features
around the urban geometry. The proposed dual solver
approach combines the mesh flexibility, stability and
low memory requirements of the NS solver with the
speed of the GPU implementation of the LB solver.
Preliminary results presented herein illustrate two ele-
ments of the aforementioned dual solver; 1) coupling
between LB and NS sub-domains for laminar flow and
2) the implementation of an SEM inlet boundary for
an LB-LES simulation.
2 Methodology
The dual NS/LB model presented in this paper di-
vides the simulated domain into two sub-domains as
shown in Figure 1. The LB-LES sub-domain is run
by an LB solver using an LES Smagorinsky turbulence
model and is executed on GPU. The objective of this
domain is to simulate turbulence in the region of inter-
est. The rest of the domain is covered by the URANS
sub-domain, which is modelled with an unsteady NS
solver with a RANS turbulence model executed on
CPU. The aim of the URANS sub-domain is to pro-
vide mean flow statistics that take into account possi-
ble obstacles outside of the interest region. The exter-
nal boundaries (boundaries 1, 3 and 5 in Fig. 1) incor-
porate a synthetic turbulence model (SEM) to generate
the instantaneous velocity and pressure, needed by the
LB-LES sub-domain, from the mean flow data at the
corresponding position in the URANS sub-domain.
Given the previously established dependence of syn-
thetic turbulence generation on quality of Reynolds
stress anisotropy, it is envisaged that the elliptic blend-
ing Reynolds stress model of Manceau and Hanjalic
(2002) will be employed.
The current section describes the implementation
of the dual NS/LB model. Subsequent sections present
and discuss the preliminary results of two test cases,
each of which addresses one of the required interface couplings:
1. Laminar flow around a wall-mounted cube using
a laminar flow version of the dual NS/LB model
that includes boundaries 5 and 6 in Fig. 1.
2. URANS to LB-LES upstream boundary, number
1 in Fig. 1.
Figure 1: Dual solver concept applied to model wind around a group of buildings; view from the top (top figure) and from the side (bottom figure). The user is particularly interested in the wind around the central building. The numbers mark the boundaries where information is transferred from URANS to LB-LES (odd numbers) and from LB-LES to URANS (even numbers).

Navier-Stokes

The flow in the URANS sub-domain is modelled using the pisoFoam solver from the open source CFD package OpenFOAM (Weller and Tabor, 1998). pisoFoam solves the three-dimensional incompressible Navier-Stokes equations,

$$\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \nu \frac{\partial^2 u_i}{\partial x_j \partial x_j}, \quad (1)$$

$$\frac{\partial u_j}{\partial x_j} = 0, \qquad i, j = 1, 2, 3, \quad (2)$$

where $u_i$ is the flow velocity, $p$ the pressure and $\nu$ the kinematic viscosity. The solver discretises the NS equations employing the finite volume method. The configuration used in this paper includes linear interpolation schemes and an implicit Euler scheme for time integration. The solved flow data is stored at the centre of each cell, while the boundary conditions are stored at the faces of the boundary cells.
Lattice Boltzmann
The LB-LES sub-domain solver is LUMIS, a
GPU-accelerated version of the LB code developed
at The University of Manchester by Harwood et al.
(2018). The LB-LES sub-domain numerically solves
the incompressible lattice-Boltzmann equations for-
mulated by Guo et al. (2000) using a stream-collide scheme:

$$f_\alpha(\mathbf{x} + \mathbf{e}_\alpha \, dt, \, t + dt) = f_\alpha(\mathbf{x}, t) - \frac{1}{\tau}\left( f_\alpha(\mathbf{x}, t) - f^{eq}_\alpha(\mathbf{x}, t) \right), \quad (3)$$

where $f_\alpha(\mathbf{x}, t)$ is the particle distribution at the cell $\mathbf{x}$ at time $t$, $\mathbf{e}_\alpha$ is the discretised particle velocity in the $\alpha$ direction and $\tau$ is the relaxation time, which is related to the kinematic viscosity of the fluid via $\nu = c_s^2 (\tau - 1/2) \, dt$. Here $c_s^2 = 1/3$ is the squared lattice sound speed, and the lattice weights are $w_0 = 1/3$, $w_\alpha = 1/18$ for $\alpha = 1$ to $6$ and $w_\alpha = 1/36$ for $\alpha = 7$ to $18$. $f^{eq}_\alpha(\mathbf{x}, t)$ are the equilibrium functions for the particle distribution functions:

$$f^{eq}_\alpha = \begin{cases} -(1 - w_0) \, p / c_s^2 + s_\alpha(\mathbf{u}), & \alpha = 0, \\ w_\alpha \, p / c_s^2 + s_\alpha(\mathbf{u}), & \alpha = 1 \text{ to } 18, \end{cases} \quad (4)$$

$$s_\alpha(\mathbf{u}) = w_\alpha \, \rho_0 \left[ \frac{\mathbf{u} \cdot \mathbf{e}_\alpha}{c_s^2} + \frac{(\mathbf{u} \cdot \mathbf{e}_\alpha)^2}{2 c_s^4} - \frac{\mathbf{u} \cdot \mathbf{u}}{2 c_s^2} \right], \quad (5)$$

where $\rho_0$ is the constant reference density of the fluid in LB units and $\mathbf{u}$ and $p$ are the macroscopic velocity and pressure of the fluid in LB units. They can be obtained from $f_\alpha$ as

$$\mathbf{u} = \frac{1}{\rho_0} \sum_{\alpha=0}^{18} \mathbf{e}_\alpha f_\alpha, \qquad p = \frac{c_s^2}{1 - w_0} \left[ \sum_{\alpha=1}^{18} f_\alpha + s_0(\mathbf{u}) \right]. \quad (6, 7)$$

The discretised particle velocity $\mathbf{e}_\alpha$ follows a D3Q19 model. The spatial domain is discretised with a uniform 3D grid where cells are cubes of unit length, and the fluid data is stored at the centre of each cell. Similarly, time is discretised in time steps of unit duration. The incompressible Navier-Stokes equations can be recovered via a Chapman-Enskog expansion of equation 3 with $\rho_0$ set to a constant value (Guo et al., 2000). LUMIS also incorporates the LES Smagorinsky turbulence model as described by Koda and Lien (2015).
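To illustrate how a Smagorinsky sub-grid model typically enters an LB scheme, the eddy viscosity is added to the molecular viscosity before the relaxation time is computed. The sketch below is our own minimal illustration in lattice units, not the LUMIS implementation; the function name, the Smagorinsky constant value and the assumption that the strain-rate magnitude is already available are all ours.

```python
def effective_tau(nu0, strain_mag, c_smag=0.1, delta=1.0):
    """Relaxation time with Smagorinsky eddy viscosity, in lattice units.

    nu0        -- molecular kinematic viscosity (lattice units)
    strain_mag -- |S|, magnitude of the resolved strain-rate tensor
    c_smag     -- Smagorinsky constant (illustrative value)
    delta      -- filter width; one cell for a uniform unit grid
    """
    nu_t = (c_smag * delta) ** 2 * strain_mag   # eddy viscosity
    return 3.0 * (nu0 + nu_t) + 0.5             # nu = c_s^2 (tau - 1/2), c_s^2 = 1/3
```

With zero resolved strain the eddy viscosity vanishes and the laminar relaxation time is recovered.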
Synthetic eddy method (SEM)
The instantaneous macroscopic velocity and pres-
sure data needed at the LB-LES sub-domain boundary
are generated using the synthetic eddy method devel-
oped by Skillen et al. (2016) from the mean veloc-
ity, turbulence and pressure received from the URANS sub-domain.
The first step to incorporate SEM in the dual
NS/LB model is to test the suitability of SEM as a
boundary condition for the LB-LES sub-domain. In
order to test it, we use an OpenFOAM implementa-
tion of the Skillen et al. (2016) SEM to generate the
inlet boundary conditions for each time step of the LB-
LES simulation. Then we perform a time step of the
LB-LES simulation, which reads its inlet BC from the
OpenFOAM data. See section 3, Test case 2 for more
information and results of the test.
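For illustration, the general SEM construction sums compact-support shape functions centred on randomly placed eddies; the result is later scaled by a Cholesky factor of the target Reynolds-stress tensor to impose anisotropy. The sketch below is a minimal isotropic version of that idea, not the Skillen et al. (2016) formulation (which adds accuracy and efficiency improvements); the function names and the isotropic single length scale are our assumptions.

```python
import numpy as np

def tent(xi):
    """1-D tent shape function with compact support on [-1, 1]."""
    return np.where(np.abs(xi) < 1.0, np.sqrt(1.5) * (1.0 - np.abs(xi)), 0.0)

def sem_fluctuation(points, eddy_pos, eddy_sign, sigma, box_vol):
    """Velocity fluctuation at `points` from N synthetic eddies.

    points    -- (P, 3) inlet-plane sample positions
    eddy_pos  -- (N, 3) eddy centres, random within a virtual box
    eddy_sign -- (N, 3) random +/-1 intensities per component
    sigma     -- eddy length scale (isotropic here, an assumption)
    box_vol   -- volume of the virtual eddy box
    """
    u = np.zeros_like(points, dtype=float)
    for k in range(len(eddy_pos)):
        xi = (points - eddy_pos[k]) / sigma
        shape = tent(xi).prod(axis=1)          # product of 1-D tents
        u += shape[:, None] * eddy_sign[k]
    return np.sqrt(box_vol / (len(eddy_pos) * sigma ** 3)) * u
```

The total inlet velocity is then the URANS mean plus the fluctuation scaled by the Cholesky factor of the Reynolds-stress tensor; at each time step the eddies are convected with the bulk velocity and recycled when they leave the virtual box.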
Coupling methodology
Spatial coupling.
The simulation is divided into a URANS sub-
domain and an LB-LES sub-domain as shown in Fig.
3, where the overlap region is the volume solved by
both the NS (URANS) and the LB (LB-LES) solvers.
The overlapping region acts as a buffer, so that the val-
ues transferred by one solver are not overwritten by the
values transferred from the other solver.
Figure 2: Data interpolated from the Navier-Stokes sub-
domain to the lattice-Boltzmann sub-domain (left)
and vice versa (right) in a line of cells.
The two sub-domains only exchange information
at the boundaries of the overlapping region (see Fig.
2). The URANS boundary receives data from the
equivalent cells on the LB-LES sub-domain, while the
LB-LES boundary receives data from the equivalent
cells in the URANS sub-domain. The LB-LES sub-
domain first linearly interpolates the instantaneous ve-
locity data from the LB cell centres to the position of
the URANS boundary. This data is then transformed
to the data needed by the URANS sub-domain by the
process shown in Fig. 3. The LB-LES sub-domain
stores its boundary data at the cell centres and both
sub-domains have the same cell size at the overlapping
region, so no interpolation is needed when transferring
the data from URANS to LB-LES; the URANS mean
flow data is transformed to the instantaneous values
needed by LB-LES following the procedure in Fig. 3.
Figure 3: Exchange of information between the RANS and
the LB-LES sub-domains.
The LB and NS methods model different quantities
and utilise different units, and so the NS velocity and
pressure need to be converted before being transferred
to the LB solver and vice-versa.
The URANS boundary in the overlapping region implements a Dirichlet boundary condition for velocity, which yields a uniform Neumann boundary condition for pressure. Thus the NS solver needs to obtain the dimensionless macroscopic NS velocity $u_{ns}$ from the LB macroscopic velocity $u_{lb}$ via

$$u_{lb} = u_{ns} \frac{dt_{lb}}{dx_{lb}}; \qquad p_{lb} = p_{ns} \frac{dt_{lb}^2}{dx_{lb}^2}, \quad (8)$$

where $dt_{lb}$ is the time step and $dx_{lb}$ is the cell size of the LB solver.
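This unit conversion can be sketched as a pair of helper functions; the names are ours, and the scaling (velocity with $dt_{lb}/dx_{lb}$, pressure with its square, per eq. 8) is the assumption made above.

```python
def ns_to_lb(u_ns, p_ns, dx_lb, dt_lb):
    """Convert dimensionless NS velocity/pressure to LB lattice units (eq. 8)."""
    scale = dt_lb / dx_lb
    return u_ns * scale, p_ns * scale ** 2

def lb_to_ns(u_lb, p_lb, dx_lb, dt_lb):
    """Inverse conversion, used when transferring LB data to the NS boundary."""
    scale = dx_lb / dt_lb
    return u_lb * scale, p_lb * scale ** 2
```

The two functions are exact inverses, so a round trip across the interface leaves the macroscopic fields unchanged.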
The LB-LES boundary in the overlapping region
implements a Dirichlet boundary condition for the par-
ticle distribution functions fα. However, the URANS
sub-domain provides dimensionless pressure and ve-
locity. Once the NS pressure and velocity are con-
verted to LB units using eq. 8, the dual NS/LB solver
uses the equilibrium distribution function (eq. 4) to
obtain fα. However, this method assumes that the LB-
LES flow is in equilibrium, so the non-equilibrium
information carried by the macroscopic velocity and
pressure is lost.
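A sketch of this equilibrium reconstruction for the D3Q19 lattice is given below. It is our own illustration of eq. 4 with the reference density $\rho_0$ taken as unity; the function names and lattice ordering are assumptions, not the dual-solver code.

```python
import itertools
import numpy as np

def d3q19():
    """D3Q19 velocity set: rest, 6 face and 12 edge directions, with weights."""
    e = [(0, 0, 0)]
    e += sorted(v for v in itertools.product((-1, 0, 1), repeat=3)
                if sum(map(abs, v)) == 1)
    e += sorted(v for v in itertools.product((-1, 0, 1), repeat=3)
                if sum(map(abs, v)) == 2)
    w = np.array([1 / 3] + [1 / 18] * 6 + [1 / 36] * 12)
    return np.array(e, dtype=float), w

def feq(u, p, e, w, cs2=1 / 3):
    """Equilibrium populations from macroscopic velocity u and pressure p (eq. 4)."""
    eu = e @ u
    s = w * (eu / cs2 + eu ** 2 / (2 * cs2 ** 2) - (u @ u) / (2 * cs2))
    f = w * p / cs2 + s                      # alpha = 1..18
    f[0] = -(1 - w[0]) * p / cs2 + s[0]      # rest population carries the pressure balance
    return f
```

Summing $\mathbf{e}_\alpha f_\alpha$ over these populations returns the imposed velocity, and the pressure is recovered from the non-rest populations; any non-equilibrium part of the true distribution is discarded, as noted above.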
Temporal coupling.
The URANS and LB-LES sub-domains are solved
simultaneously and coupled following the parallel ex-
plicit approach described in algorithm 1, where both
solvers share data at the end of each time step. The
coupling algorithm is implemented using preCICE li-
braries (Bungartz et al., 2016) and the OpenFOAM
preCICE adapter (Chourdakis, 2017), which we have
modified to transfer and receive pressure and velocity
data from both the domain boundary and internal cells.
Algorithm 1 Parallel explicit coupling scheme for the
LB solver. The NS solver is executed simultaneously
following the same algorithm
1: while (time <end time)
2: Receive data from the NS solver;
3: Set received data as LB boundary condition;
4: Perform LB time step;
5: Send data to NS sub-domain;
6: Update time;
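In code, the LB side of Algorithm 1 amounts to a loop of receive/step/send calls. The sketch below shows the control flow only; the `exchange` callback is a hypothetical stand-in for the preCICE read/write calls, and all names are ours.

```python
def run_lb_side(lb_step, exchange, t_end, dt):
    """Parallel explicit coupling loop (LB side of Algorithm 1).

    lb_step(state, bc) -- advances the LB solver one time step (step 4)
    exchange(data)     -- hypothetical stand-in for the preCICE send/receive
                          pair: sends LB data, returns fresh NS boundary
                          data (steps 2, 3 and 5)
    """
    t, state = 0.0, None
    bc = exchange(None)              # initial receive from the NS solver
    while t < t_end:
        state = lb_step(state, bc)   # set received data as BC, perform LB step
        bc = exchange(state)         # send LB data, receive NS data
        t += dt                      # update time (step 6)
    return state
```

Because the exchange is explicit and happens once per step, both solvers can advance simultaneously, at the cost of the boundary data lagging one time step behind.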
3 Results
Test case 1: dual NS/LB model
The coherence of the LB-NS interface is first
assessed by simulating a three-dimensional wall-mounted
cube with a uniform inlet velocity profile and
a Reynolds number of 100 based on the inlet velocity
and the height of the cube. The aim of this simulation
is to test the exchange of information between the sub-
domains of the dual NS/LB, so the configuration of the
case is not aimed at reproducing previous experimen-
tal or numerical studies. Fig. 4 shows a schematic
of the domain. The boundary conditions are as fol-
lows: the inlet and top boundary condition for the LB
sub-domain are Dirichlet boundary conditions for the
particle distribution functions with u= (1,0,0) and
p= 0, while the top boundary of the NS sub-domain
is a Dirichlet boundary for velocity and a zero gradi-
ent boundary for pressure; the bottom boundary con-
dition is a symmetry plane for the 5 first units so that
the constant velocity inlet velocity profile can adapt to
the no-slip boundary condition on the remainder of the
bottom boundary; the surfaces of the cube are also set
to no-slip; the side boundaries of the domain are peri-
odic; the outlet of the domain is set to a zero-gradient
boundary for velocity and a fixed pressure value esti-
mated from an a priori calculation of the domain with-
out the cube (in the present case this is set to 0.15 di-
mensionless units). At present this is necessary since
the LB solver sets gauge pressure to zero at the inlet of
the domain, so that pressure is fixed on both the inlet
and outlet of the complete domain. The need and con-
figuration of this is expected to be further addressed in
subsequent work.
Figure 4: Sketch of Test case 1, not to scale. Distances are
in dimensionless units. The spanwise extent of the
domain is 6 units.
The NS and the LB sub-domains are discretised using a uniform Cartesian grid with cubic cells of dimensionless length 0.1 and a uniform time step of 0.006.
Fig. 5 shows velocity streamlines across the NS
and LB sub-domains and the values of the pressure
for both solvers at the overlapping region. The LB
streamlines match the NS streamlines at the inlet of
the NS sub-domain. Inside the overlapping region, the
two sets of streamlines deviate as the distance from
the inlet of NS sub-domain increases. This effect is
strongest in the recirculation region. The NS pressure
at the overlap region (Fig. 5, right) follows the same
pattern as the LB pressure, however their values differ
except at the LB sub-domain outlet, where the bound-
ary condition sets the LB pressure to the NS pressure.
Test case 2: lattice-Boltzmann SEM
Prior to implementing SEM in the dual NS/LB
model, we tested the implementation of an SEM in-
let boundary within a traditional, weakly compress-
ible LB-LES solver. The test case is a channel flow
with Reτ= 180 (see Fig. 6). The inlet velocity
is generated for each time step using the Skillen et
Figure 5: Preliminary results for flow around the wall-mounted cube across the LB-NS interface. Top: XZ plane at Y=0.5H; bottom: XY plane at Z=0. Left: streamlines on the LB (grey) and NS (black) sub-domains. Right: LB pressure (filled contours) and NS pressure (lines) on the overlapping region.
al. (2016) SEM model with the mean flow statistics from the direct numerical simulation (DNS) of Moser et al. (1999). The SEM limiter on the turbulent length scales $\overline{u_i u_j}^{3/2}$ is set to $\sigma_{max} = 0.125$. The pressure is set to a constant value over all of the inlet boundary.
The LB particle distribution functions are assumed to
be equal to the equilibrium distribution functions (see
eq. 4) obtained from the velocity and pressure data.
The top and bottom walls are no-slip boundaries and
the side boundaries are periodic. The outlet also im-
poses a Dirichlet boundary condition for the particle
distribution functions, which are set to yield constant
pressure and macroscopic velocity across the boundary.
Figure 6: Sketch of Test case 2, not to scale: lattice-
Boltzmann SEM. The spanwise extent is π.
The domain is discretised using a uniform Carte-
sian grid with 283 ×92 ×142 cells in the streamwise,
vertical and spanwise directions respectively, which
yields $y^+ = 2$ for the cells next to the walls and $y^+ = 4$ for the remaining cells. The time step is uniform and equal to $3.6885 \times 10^{-5}$.
Figure 7 displays the streamwise and spanwise components of the instantaneous dimensionless velocity $u_x/u_\tau$ and $u_y/u_\tau$. The turbulent velocity patterns
introduced at the inlet are transmitted along the do-
main. However, small turbulent structures dissipate as
the distance from the inlet increases. There is also a
decrease in the velocity magnitude at the YZ plane of
cells neighbouring the inlet.
Figure 7: Spanwise and streamwise instantaneous velocity
on vertical section of a channel flow where the in-
let boundary condition has been generated using
SEM. Simulation run using the LB solver LUMIS.
4 Discussion
The results of the dual NS/LB solver indicate that
the NS and LB sub-domains exchange information
at their boundaries and are able to produce continu-
ous velocity patterns across both domains. It is noted
that the choice to locate the dual solver interface in the
middle of the recirculation region is intended to be a
particularly challenging test of the methodology; the
two-way nature of the interface is necessary. Comparison to NS results for the full domain demonstrates an over-prediction of the recirculation region by approximately 30%, and thus the solver requires further investigation to identify the possible cause of this imbalance.
At present there are a number of imposed simplifica-
tions which will limit the generality of the solver and
its accuracy, as discussed in the following:
Compressible-Incompressible boundary: the chal-
lenge of coupling a compressible method with an in-
compressible method is highlighted, since at present
the pressure coupling is limited to one-way only; the
LB sub-domain does not transmit pressure information
to the NS sub-domain. Furthermore, the pressure is set
at both the inlet of the LB sub-domain and the outlet
of the NS sub-domain. The value of the pressure at the
NS outlet is likely to affect the results.
Resolution: at present we have chosen to limit the resolution used in the LB solver - which is currently set to 10 points per cube height - on account of computational limitations of the development platform. Tests at higher, more appropriate resolutions are currently underway and will be reported in the session.
Reconstruction of particle distribution function:
another important limitation is identified in the
algorithm used to obtain the LB particle distributions
from the NS velocity and pressure; at present we have
employed a simplified reconstruction which disregards
the non-equilibrium part of the particle distribution
functions, but a more complex approach will be im-
plemented shortly.
Size and position of the overlap region: the bound-
aries of the overlapping region have to be separated by
at least one cell to allow a buffer where both solvers
can assimilate the information received from the other.
Further investigation is required to better understand
the limitations in size and position of the overlapping region.
In addition to the above identified ongoing areas of
development, the coupling model currently employs a
parallel, explicit time coupling scheme, but other time
coupling schemes will also be tested.
Regarding Test case 2, the weakly compressible
LB-LES model convects the SEM generated turbu-
lence across the domain. Downstream of the inlet the
intensity of turbulent fluctuations is reduced beyond
expected levels, and further investigation is ongoing.
The artificial dissipation of turbulence observed along
the channel could be due to a number of causes, in-
cluding resolution and excessive subgrid scale damp-
ing. The decrease in the velocity at the cells adjacent
to the inlet might be influenced by the missing pres-
sure information from the SEM model.
5 Conclusions and outlook
We have presented a fully three-dimensional hy-
brid method for transient turbulent flow simulations.
The method couples a sub-domain solved by an in-
compressible lattice-Boltzmann solver with another
sub-domain solved by an incompressible Navier-
Stokes solver. The NS/LB model is implemented on
a heterogeneous architecture with the NS sub-domain
executed on CPU and the LB sub-domain executed on GPU.
We have assessed the coupling of the two models
for laminar flow using a 3D wall-mounted cube with a
Reynolds number of 100. The preliminary results show that
the two models are able to communicate and create a continuous flow across the interface, showing small discrepancies in the results in the overlapping region; further study is required on the effect of resolution and
boundary conditions, coupling methodology, and size
and position of the overlapping region. We have also
implemented a synthetic eddy method inlet boundary
condition in an LB-LES solver as a first step to intro-
ducing turbulence to the dual NS/LB model. The inlet
turbulence is convected through the domain but it is
affected by artificial dissipation. This may be due to
the inlet and outlet boundary conditions and also the
Smagorinsky constant of the LES model; both poten-
tial causes need further study.
We are currently investigating the effects of in-
creasing the resolution and improving the pressure
coupling on the results of the dual NS/LB solver.
Next steps will include final validation of the NS/LB
solver for laminar flows followed by the incorporation
of the SEM boundaries and the modelling of higher
Reynolds number flows. Finally we will look to op-
timise the coupling algorithm by testing the perfor-
mance increase compared to using a single Navier-
Stokes or lattice-Boltzmann model for the complete domain.
Acknowledgment: financial support awarded by Samsung's Global Outreach Programme is gratefully acknowledged.
References

Atanasov, A., Uekermann, B. and Neumann, P. (2016), Anderson Accelerated Coupling of Lattice Boltzmann and Navier-Stokes Solvers for Parallel Applications, Computation, 4(4), 38-57.

Bungartz, H. J., Lindner, F., Gatzhammer, B., Mehl, M., Scheufele, K., Shukaev, A. and Uekermann, B. (2016), preCICE - A fully parallel library for multi-physics surface coupling, Computers and Fluids, 141, 250-258.

Chourdakis, G. (2017), A general OpenFOAM adapter for the coupling library preCICE (Master's dissertation).

Frohlich, J. and von Terzi, D. A. (2008), Hybrid LES/RANS methods for the simulation of turbulent flows, Progress in Aerospace Sciences, 44(5), 349-377.

Guo, Z., Shi, B. and Wang, N. (2000), Lattice BGK Model for Incompressible Navier-Stokes Equation, Journal of Computational Physics, 165(1), 288-306.

Haase, W., Braza, M. and Revell, A. (Eds.) (2009), DESider - A European Effort on Hybrid RANS-LES Modelling: Results of the European-Union Funded Project, 2004-2007 (Vol. 103), Springer Science & Business Media.

Harwood, A. R. G., O'Connor, J., Sánchez Muñoz, J., Camps Santasmasas, M. and Revell, A. J. (2018), LUMA: A many-core, Fluid-Structure Interaction solver based on the Lattice-Boltzmann Method, SoftwareX, 7, 88-94.

Holgate, J., Skillen, A., Craft, T. and Revell, A. (2018), A Review of Embedded Large Eddy Simulation for Internal Flows, Archives of Computational Methods in Engineering.

Koda, Y. and Lien, F. S. (2015), The lattice-Boltzmann method implemented on the GPU to simulate the turbulent flow over a square cylinder confined in a channel, Flow, Turbulence and Combustion, 94(3), 495-512.

Manceau, R. and Hanjalić, K. (2002), Elliptic blending model: A new near-wall Reynolds-stress turbulence closure, Physics of Fluids, 14(2), 744-754.

Mawson, M. J. (2014), Interactive Fluid-Structure Interaction With Many-Core Accelerators (Doctoral dissertation).

Moser, R., Kim, J. and Mansour, N. (1999), Direct numerical simulation of turbulent channel flow up to Reτ = 590, Physics of Fluids, 11(4), 943-945.

Skillen, A., Revell, A. and Craft, T. (2016), Accuracy and efficiency improvements in synthetic eddy methods, International Journal of Heat and Fluid Flow, 62, 386-394.

Tong, Z. X. and He, Y. L. (2015), A unified coupling scheme between lattice-Boltzmann method and finite volume method for unsteady fluid flow and heat transfer, International Journal of Heat and Mass Transfer, 80, 812-824.

Weller, H. G. and Tabor, G. (1998), A tensorial approach to computational continuum mechanics using object-oriented techniques, Computers in Physics, 12(6), 620-631.
ResearchGate has not been able to resolve any citations for this publication.
Full-text available
When scale-resolving simulation approaches are employed for the simulation of turbulent flow, computational cost can often be prohibitive. This is particularly true for internal wall-bounded flows, including flows of industrial relevances which may involve both high Reynolds number and geometrical complexity. Modelling the turbulence induced stresses (at all scales) has proven to lack requisite accuracy in many situations. In this work we review a promising family of approaches which aim to find a compromise between cost and accuracy; hybrid RANS–LES methods. We place particular emphasis on the emergence of embedded large eddy simulation. These approaches are summarised and key features relevant to internal flows are highlighted. A thorough review of the application of these methods to internal flows is given, where hybrid approaches have been shown to offer significant benefits to industrial CFD (relative to an empirical broadband modelling of turbulence). This paper concludes by providing a cost-analysis and a discussion about the emerging novel use-modalities for hybrid RANS–LES methods in industrial CFD, such as automated embedded simulation and multi-dimensional coupling.
Full-text available
The Lattice-Boltzmann Method at the University of Manchester (LUMA) project was commissioned to build a collaborative research environment in which researchers of all abilities can study fluid–structure interaction (FSI) problems in engineering applications from aerodynamics to medicine. It is built on the principles of accessibility, simplicity and flexibility. The LUMA software at the core of the project is a capable FSI solver with turbulence modelling and many-core scalability as well as a wealth of input/output and pre- and post-processing facilities. The software has been validated and several major releases benchmarked on supercomputing facilities internationally. The software architecture is modular and arranged logically using a minimal amount of object-orientation to maintain a simple and accessible software.
Full-text available
We present an Anderson acceleration-based approach to spatially couple three-dimensional Lattice Boltzmann and Navier–Stokes (LBNS) flow simulations. This allows to locally exploit the computational features of both fluid flow solver approaches to the fullest extent and yields enhanced control to match the LB and NS degrees of freedom within the LBNS overlap layer. Designed for parallel Schwarz coupling, the Anderson acceleration allows for the simultaneous execution of both Lattice Boltzmann and Navier–Stokes solver. We detail our coupling methodology, validate it, and study convergence and accuracy of the Anderson accelerated coupling, considering three steady-state scenarios: plane channel flow, flow around a sphere and channel flow across a porous structure. We find that the Anderson accelerated coupling yields a speed-up (in terms of iteration steps) of up to 40% in the considered scenarios, compared to strictly sequential Schwarz coupling.
Full-text available
Numerical simulations of fully developed turbulent channel flow at three Reynolds numbers up to Retau=590 are reported. It is noted that the higher Reynolds number simulations exhibit fewer low Reynolds number effects than previous simulations at Retau=180. A comprehensive set of statistics gathered from the simulations is available on the web at
Full-text available
Most of the existing lattice Boltzmann BGK models (LBGK) can be viewed as compressible schemes to simulate incompressible fluid flows. The compressible effect might lead to some undesirable errors in numerical simulations. In this paper a LBGK model without compressible effect is designed for simulating incompressible flows. The incompressible Navier–Stokes equations are exactly recovered from this incompressible LBGK model. Numerical simulations of the plane Poiseuille flow, the unsteady 2-D shear decaying flow, the driven cavity flow, and the flow around a circular cylinder are performed. The results agree well with the analytic solutions and the results of previous studies.
A new approach to modeling the effects of a solid wall in one-point second-moment (Reynolds-stress) turbulence closures is presented. The model is based on the relaxation of an inhomogeneous (near-wall) formulation of the pressure–strain tensor towards the chosen conventional homogeneous (far-from-a-wall) form using the blending function α, for which an elliptic equation is solved. The approach preserves the main features of Durbin’s Reynolds-stress model, but instead of six elliptic equations (for each stress component), it involves only one, scalar elliptic equation. The model, called “the elliptic blending model,” offers significant simplification, while still complying with the basic physical rationale for the elliptic relaxation concept. In addition to model validation against direct numerical simulation in a plane channel for Reτ = 590, the model was applied in the computation of the channel flow at a “real-life” Reynolds number of 106, showing a good prediction of the logarithmic profile of the mean velocity.
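Sketched in equations (hedged: the blending exponent differs between published versions of the model), the elliptic blending idea solves a single scalar elliptic equation for \(\alpha\) and uses it to interpolate the pressure–strain tensor between its near-wall and homogeneous forms:

```latex
\alpha - L^2 \nabla^2 \alpha = 1, \qquad \alpha\big|_{\text{wall}} = 0,
\qquad
\phi_{ij} = \left(1 - \alpha^{p}\right)\phi_{ij}^{w} + \alpha^{p}\,\phi_{ij}^{h},
```

where \(L\) is a turbulence length scale, \(\phi_{ij}^{w}\) and \(\phi_{ij}^{h}\) are the near-wall and far-from-wall pressure–strain models, and the exponent \(p\) has taken different values in different versions of the model.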
In the emerging field of multi-physics simulations, we often face the challenge to establish new connections between physical fields, to add additional aspects to existing models, or to exchange a solver for one of the involved physical fields. If in such cases a fast prototyping of a coupled simulation environment is required, a partitioned setup using existing codes for each physical field is the optimal choice. As accurate models require also accurate numerics, multi-physics simulations typically use very high grid resolutions and, accordingly, are run on massively parallel computers. Here, we face the challenge to combine flexibility with parallel scalability and hardware efficiency. In this paper, we present the coupling tool preCICE which offers the complete coupling functionality required for a fast development of a multi-physics environment using existing, possibly black-box solvers. We hereby restrict ourselves to bidirectional surface coupling which is too expensive to be done via file communication, but in contrast to volume coupling still a candidate for distributed memory parallelism between the involved solvers. The paper gives an overview of the numerical functionalities implemented in preCICE as well as the user interfaces, i.e., the application programming interface and configuration options. Our numerical examples and the list of different open-source and commercial codes that have already been used with preCICE in coupled simulations show the high flexibility, the correctness, and the high performance and parallel scalability of coupled simulations with preCICE as the coupling unit.
The lattice Boltzmann method (LBM) is a relatively new method for fluid flow simulations, and is recently gaining popularity due to its simple algorithm and parallel scalability. Although the method has been successfully applied to a wide range of flow physics, its capabilities in simulating turbulent flow are still under-validated. Hence, in this paper, a 3D LBM code was developed to investigate the validity of the LBM for turbulent flow simulations through large eddy simulations (LES). A GPU enabled LBM code was developed, and validated against a benchmark test case involving the flow over a square cylinder in a square channel. The flow results showed good agreement with literature, and speedups of over 150 times were observed when two GPUs were used in parallel. Turbulent flow simulations were then conducted using LES with the Smagorinsky subgrid model. The methodology was first validated by computing the fully developed turbulent channel flow, and comparing the results against direct numerical simulation results. The results were in good agreement despite the relatively coarse grid. The code was then used to simulate the turbulent flow over a square cylinder confined in a channel. In order to emulate a realistic inflow at the channel inlet, an auxiliary simulation consisting of a fully developed turbulent channel flow was run in conjunction, and its velocity profile was used to enforce the inlet boundary condition for the cylinder flow simulation. Comparison of the results with experimental and numerical results revealed that the presence of the turbulent flow structures at the inlet can significantly influence the resulting flow field around the cylinder.
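In an LBM-LES of this kind, the Smagorinsky model enters through a locally modified relaxation time: the total viscosity (molecular plus eddy) maps to an effective BGK relaxation time via \(\nu = c_s^2(\tau - 1/2)\) in lattice units. The sketch below assumes the strain-rate tensor is already available (in practice it is usually extracted locally from the non-equilibrium distributions); the function name and default constant are illustrative.

```python
import numpy as np

CS2 = 1.0 / 3.0  # lattice speed of sound squared (D2Q9 / D3Q19)

def smagorinsky_tau(S, nu0, Cs=0.17, delta=1.0):
    """Effective BGK relaxation time with a Smagorinsky eddy viscosity.

    S     : strain-rate tensor, shape (..., d, d), lattice units
    nu0   : molecular kinematic viscosity, lattice units
    Cs    : Smagorinsky constant (typical values lie around 0.1-0.2)
    delta : filter width (grid spacing; 1 in lattice units)
    """
    # |S| = sqrt(2 S_ij S_ij), the strain-rate magnitude
    S_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', S, S))
    nu_t = (Cs * delta) ** 2 * S_mag          # eddy viscosity, nu_t = (Cs*Delta)^2 |S|
    return (nu0 + nu_t) / CS2 + 0.5           # invert nu = cs^2 (tau - 1/2)

# zero strain recovers the molecular relaxation time
tau0 = smagorinsky_tau(np.zeros((3, 3)), nu0=0.01)
```

Because the correction is purely local, it fits naturally into a per-node GPU collision kernel, which is one reason the Smagorinsky model is the common choice in GPU LBM-LES codes.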
The coupling of large eddy simulation (LES) with statistical turbulence models, i.e. Reynolds-Averaged Navier–Stokes (RANS) models, is arguably the main strategy to drastically reduce computational cost for making LES affordable in a wide range of complex industrial applications. The present paper presents a coherent review of the various approaches proposed in the recent literature on this topic. First, basic concepts and principal strategies highlighting the underlying ideas are introduced. This culminates in a general scheme to classify hybrid LES/RANS approaches. Following the structure of this novel classification, a large number of individual methods are then described and assessed. Key methods are discussed in greater detail and illustrated with examples from the literature or by own results. The aim of the review is to provide information on how to distinguish different methods and their ingredients and to further the understanding of inherent limitations and difficulties. On the other hand, successful simulation results demonstrate the high potential of the hybrid approach.