Applications in Energy and Combustion Science 13 (2023) 100113
Available online 9 February 2023
2666-352X/© 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
https://doi.org/10.1016/j.jaecs.2023.100113
Received 2 August 2022; Received in revised form 12 November 2022; Accepted 6 January 2023
Application of dense neural networks for manifold-based modeling of flame-wall interactions

Julian Bissantz a, Jeremy Karpowski a,1, Matthias Steinhausen a, Yujuan Luo a, Federica Ferraro a,*, Arne Scholtissek a, Christian Hasse a, Luc Vervisch b

a Technical University of Darmstadt, Department of Mechanical Engineering, Simulation of reactive Thermo-Fluid Systems, Otto-Berndt-Str. 2, 64287 Darmstadt, Germany
b CORIA - CNRS, Normandie Université, INSA de Rouen, Technopôle du Madrillet, BP 8, Saint-Étienne-du-Rouvray 76801, France

* Corresponding author. E-mail address: ferraro@stfs.tu-darmstadt.de (F. Ferraro).
1 Joint first author.
ARTICLE INFO
Keywords:
Machine learning
Data-driven modeling
Manifold methods
Head-on quenching
Side-wall quenching
ABSTRACT
Artificial neural networks (ANNs) are universal approximators capable of learning any correlation between arbitrary input data and corresponding outputs, which can also be exploited to represent a low-dimensional chemistry manifold in the field of combustion. In this work, a procedure is developed to simulate a premixed methane-air flame undergoing side-wall quenching utilizing an ANN chemistry manifold. In the investigated case, the flame characteristics are governed by two canonical problems: the adiabatic flame propagation in the core flow and the non-adiabatic flame-wall interaction governed by enthalpy losses to the wall. Similar to the tabulation of a Quenching Flamelet-Generated Manifold (QFM), the neural network is trained on a 1D head-on quenching flame database to learn the intrinsic chemistry manifold. The control parameters (i.e. the inputs) of the ANN are identified from thermo-chemical state variables by a sparse principal component analysis (PCA) without using prior knowledge about the flame physics. These input quantities are then transported in the coupled CFD solver and used for manifold access during simulation runtime. The chemical source terms are corrected at the manifold boundaries to ensure boundedness of the thermo-chemical state at all times. Finally, the ANN model is assessed by comparison to simulation results of the 2D side-wall quenching (SWQ) configuration with detailed chemistry and with a flamelet-based manifold (QFM).
1. Introduction
In the transition towards sustainable combustion technologies, numerical simulations play a crucial role for the rapid design of carbon-neutral or carbon-free combustion systems for power generation and transportation. While highly resolved detailed chemistry (DC) simulations are essential to understand physical phenomena and can serve as the basis for model development and validation, their application to practical combustors is often unfeasible due to prohibitive computational costs. Consequently, there is a demand for accurate and numerically efficient chemistry reduction approaches. One option is reduced-order models utilizing chemistry manifolds [1–4]. These methods are based on the tabulation of pre-calculated thermo-chemical states, which are parameterized and accessed by control variables. The control variables are then usually transported in the CFD simulation and used to retrieve the thermo-chemical state from the manifold during simulation runtime. Provided a suitable manifold is used, these methods combine the high accuracy of a detailed chemistry (DC) simulation with low computational costs. However, for complex configurations involving a significant number of different combustion phenomena (e.g., multi-phase combustion, pollutant formation modeling or flame-wall interactions), more control variables, and thus additional manifold dimensions, are required. The memory demand to store the manifold quickly becomes intractable considering memory-per-core limitations on high-performance computers. Additionally, the tabulation of the manifold becomes increasingly difficult since the control variables are often not linearly independent, which complicates a non-overlapping data arrangement and efficient manifold access.

An emerging alternative is data-driven modeling, or machine learning (ML), using artificial neural networks (ANN). Neural networks
are universal approximators capable of learning any correlation between arbitrary input data (control variables) and corresponding outputs. This holds true even if the control variables are not linearly independent. This property of ANNs can be exploited to represent a chemistry manifold. An ANN is memory efficient, and its storage size changes only slightly with the number of control variables. Furthermore, ANNs profit from an enormous performance increase on specialized accelerator hardware, which is one focus of current computational hardware development.

The first ANN application for chemistry modeling in a Large Eddy Simulation (LES) was performed by Flemming et al. [5], who used the approach to model the Sandia Flame D. The authors achieved a memory reduction by three orders of magnitude with a minimal increase in computational time in comparison with a tabulated manifold approach. Ihme et al. [6] focused on the optimization of ANN architectures for each output variable using a generalized pattern search. These models were subsequently applied in an LES of the Sydney bluff-body swirl-stabilized SMH1 flame. Again, the ANN showed a comparable accuracy to a tabulated manifold with acceptable computational overhead for LES applications. Recently, several investigations have been carried out applying machine learning in reactive flow simulations for manifold representation [7–13], turbulence-chemistry interactions [14–17] and modeling chemical kinetics [18–22]. This was made possible by the development of open-source deep learning frameworks that followed the breakthrough of deep learning in the field of computer vision in the past decade. Although the training of ANNs has been simplified by these frameworks, many works have reported a lack of model accuracy in case input values approach the manifold boundaries. This applies in particular to non-linear quantities, such as chemical source terms. In a direct numerical simulation (DNS) of a turbulent syngas oxy-flame, Wan et al. [23,24] trained additional neural networks for a normalized oxygen mass fraction Y_O2 > 0.9 to improve the model accuracy close to the unburnt conditions, where low, but non-zero, reaction rates occurred. Similarly, Ding et al. [10] used a multiple multilayer perceptron (MMP) approach, where additional models were trained on increasingly smaller intervals around zero, where the relative error of the prediction is large compared to the absolute error. During the simulation, a model cascade is employed, where the decision which output is used is based on the output value of the previous model. This methodology was applied in LES of the Sandia flame series. Another approach to increase the overall prediction accuracy of ML models is to divide the manifold using clustering algorithms such as self-organizing maps [8,12,25,26] or k-means clustering [27] and to use different ANNs for the prediction of the subsets of the manifold. However, in some cases, hundreds of networks had to be trained [8,25], which introduces additional overhead for the model selection during simulation runtime. For thorough overviews of machine learning approaches in the context of combustion, the reader is referred to Zhou et al. [28] and Ihme et al. [29].
While there exist many coupled simulations with models based on tabulated reduced-order manifolds, the literature on ML-based manifold modeling is still scarce. First results have been encouraging, but several challenges, such as the handling of non-linear terms, a robust feature selection (i.e. suitable control variables), or the required number of networks for an accurate manifold representation, are active areas of research.

In this work, an ML model based on dense neural networks (DNN) is coupled to a CFD solver and utilized for the 2D simulation of a laminar premixed methane-air flame undergoing side-wall quenching (SWQ). The 2D SWQ case is well-established [30–32] and includes two essential flame regimes: an unstretched adiabatic flame regime and a non-adiabatic flame quenching at the wall. Even when ignoring unsteady [33] or turbulent effects [34,35], standard flamelet models fail to capture some of the physics, as discussed by Efimov et al. [36]. Significant modeling efforts were required to develop advanced flamelet models that can accurately predict the pollutant formation, specifically CO [31,36].
With this background, the objective of this work is threefold:
• to demonstrate the application of a purely data-driven approach for modeling low-order chemistry manifolds in the simulation of the laminar side-wall quenching configuration;
• to identify suitable input parameters for the ML-based manifold without using prior knowledge about the flame physics (applying a sparse principal component analysis [37]);
• to develop a reliable treatment of source terms at the manifold boundaries to ensure boundedness of the thermo-chemical state during the coupled simulation.

The laminar SWQ configuration provides a benchmark for combustion modeling that is sufficiently challenging and therefore suitable for assessing the predictive capabilities of the ML-based approach developed in this study. Analogous simulation results obtained with detailed chemistry and with a flamelet-based model (QFM) serve as reference datasets.
The paper is structured as follows: Section 2 describes the numerical setup of the investigated configurations, namely the 1D head-on quenching (HOQ) and the 2D SWQ. In Section 3, the machine learning methods are outlined. In Section 4, the results for the HOQ and the SWQ cases are analyzed. First, the wall heat flux is compared for the HOQ case in order to verify the model. Thereafter, an analysis of the local heat release and the thermo-chemical state for the SWQ configuration is carried out. Finally, conclusions are drawn in Section 5.
2. Numerical setups
In this section, an overview of the numerical setups is provided. First, the 1D HOQ configuration is addressed, which is used both for the ANN training and the QFM generation. Thereafter, the more complex 2D SWQ configuration is described.
2.1. 1D head-on quenching (HOQ)
In the 1D HOQ configuration, a premixed laminar flame propagates perpendicularly towards a wall, where it extinguishes due to heat losses. The 2 cm long domain is discretized by an equidistant mesh with 2000 points (resolution of 10 µm), and time integration is realized by a fully implicit backward differentiation formula (BDF). For the initialization, an adiabatic, stoichiometric freely propagating methane-air flame with a fresh gas temperature of 300 K is used. The wall temperature is fixed at 300 K and molecular transport is modeled using the unity Lewis number assumption. Furthermore, the detailed GRI 3.0 mechanism is used [38]. All flamelet simulations are performed with an in-house solver [39], and the setup has been validated previously by Luo et al. [40].
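The HOQ flamelets themselves are computed with the in-house solver cited above. Purely for illustration, a comparable adiabatic, stoichiometric freely propagating methane-air flame of the kind used for the initialization could be set up with the open-source Cantera library as sketched below; this is an assumption-laden sketch, not the authors' solver or setup files.

```python
import cantera as ct

# Sketch only (not the authors' in-house solver): an adiabatic, stoichiometric,
# freely propagating methane-air flame at 300 K and 1 atm with the GRI 3.0
# mechanism and unity Lewis numbers, on a 2 cm domain as in the HOQ setup.
gas = ct.Solution("gri30.yaml")
gas.TP = 300.0, ct.one_atm
gas.set_equivalence_ratio(1.0, "CH4", {"O2": 1.0, "N2": 3.76})

flame = ct.FreeFlame(gas, width=0.02)            # 2 cm domain
flame.transport_model = "unity-Lewis-number"     # requires a recent Cantera version
flame.set_refine_criteria(ratio=2.0, slope=0.05, curve=0.05)
flame.solve(loglevel=0, auto=True)

print(f"Laminar flame speed: {flame.velocity[0]:.3f} m/s")
```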
2.2. 2D side-wall quenching (SWQ)
Following the setup of previous works [30,32], the simulation domain of the generic SWQ configuration consists of a two-dimensional rectilinear mesh (30 mm × 6 mm) with a uniform cell size of Δ = 50 µm, see Fig. 1. The numerical setup has been validated extensively [30,32] against experimental data [41,42]. The inlet flow is divided into a fresh gas section (5.5 mm) and a burned gas section (0.5 mm), where the latter is used for flame stabilization. The fresh gas is initialized with a premixed stoichiometric methane-air mixture at ambient conditions (T = 300 K; p = 1 atm), while the burned gas consists of hot exhaust gases at equilibrium conditions. A parabolic inflow profile is prescribed at the inlet, and the burned gas velocity is set to 3.81 m/s, compensating for the density difference between fresh and burned gases. The velocity profile is shown in Fig. 1. At the wall, a constant temperature of 330 K is assumed in accordance with previous studies [32,42]. This represents an additional challenge for the DNN model, which has to capture the preheating at the wall, and this effect must be considered in the training. For the species mass fractions a zero-gradient and for the velocity a no-slip boundary condition is applied. At the outlets, zero-gradient boundary conditions are applied to temperature, species mass fractions, and velocity. The spatial and temporal discretization schemes, as well as the time step control, were adapted from Steinhausen et al. [32].

A detailed chemistry reference solution and corresponding simulation results with a tabulated manifold approach were obtained with in-house solvers based on OpenFOAM (v2006). Different tabulated manifold approaches have been developed and validated for this setup [32], from which the quenching flamelet-generated manifold (QFM) by Steinhausen et al. [32] is selected to serve as a reference model. The QFM is constructed from the same 1D HOQ dataset that is subsequently used for the training of the neural network, and it is parameterized by the control variables enthalpy and CO2 mass fraction.

Fig. 1. Schematic view of the SWQ burner and the numerical subdomain used for the simulation. All scales are given in mm. The velocity profile at the inlet is also shown.
3. Machine learning methodology
This section gives an overview of the ML methodology employed in this work. First, the generation of the ML training dataset is described. Thereafter, the training parameters, the chosen network architecture, and the method for identifying suitable model inputs are specified. A correction method for the non-linear source terms is proposed and discussed in Section 3.3. Afterwards, the assessment of the ML model accuracy is described and, finally, the coupling of the ML model to the CFD code is briefly outlined.
3.1. Training data generation
The flamelet-based training data is generated by a transient HOQ simulation, as described in Section 2.1. The left plot in Fig. 2 displays the training data points in the original physical space (x) and time (t), colored by the CO mass fraction. The isothermal wall is located at the left boundary (x = 0 mm). Additionally, this plot shows the writing interval of the flame solution. Initially, the solutions are written at a constant time step; when the flame approaches the wall and heat losses start to occur, the writing interval is instead determined by a change of the enthalpy at the wall of 1·10⁴ J/kg. Additionally, 10 freely propagating flames with varying inlet temperatures ranging from 300 K to 340 K are included in the training dataset. This extension of the manifold is carried out in order to account for elevated enthalpy levels in proximity of the isothermal wall in the SWQ configuration (T_wall = 330 K). The same procedure was employed for the QFM tables in [32] and is visualized in the right plot of Fig. 2, where the HOQ manifold is displayed in the transformed PV and T space, with the added freely propagating flames located above the red line. It can be observed that the additional flamelets extend the manifold at the upper boundary.

In total, the dataset consists of 310 1D flamelets of different enthalpy levels, resulting in 0.62 million points. Furthermore, the data is randomly split into a training (90%) and a validation (10%) dataset. The latter is used to evaluate the predictive capabilities of the model and to monitor the training process to prevent overfitting. Additionally, all dataset entries are normalized to an interval of [0, 1] (min-max scaling), ensuring similar feature value ranges, which promotes convergence of the gradient-based optimization algorithm.
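As a minimal sketch of this preprocessing, the following snippet applies min-max scaling and a random 90/10 split. The array names and shapes are placeholders for illustration and are not part of the original work.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Placeholder arrays standing in for the flamelet data (illustrative shapes only):
# `features` would hold the DNN inputs and `targets` the thermo-chemical outputs.
features = rng.random((620_000, 2))
targets = rng.random((620_000, 5))

def min_max_scale(x):
    """Scale each column to [0, 1]; min/max must be stored for the CFD coupling."""
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min), x_min, x_max

features_scaled, f_min, f_max = min_max_scale(features)
targets_scaled, t_min, t_max = min_max_scale(targets)

# Random split into 90% training and 10% validation data
idx = rng.permutation(len(features_scaled))
n_train = int(0.9 * len(idx))
x_train, y_train = features_scaled[idx[:n_train]], targets_scaled[idx[:n_train]]
x_val, y_val = features_scaled[idx[n_train:]], targets_scaled[idx[n_train:]]
```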
3.2. Neural network architecture and training
In order to represent the manifold inherent to the dataset described above, two requirements have to be met by the ML approach: (1) a suitable ANN architecture has to be chosen together with a training algorithm, and (2) proper input quantities (i.e., features, or control variables) have to be identified.

Here, a Dense Neural Network (DNN) is chosen as architecture. Fig. 3 shows the structure of a DNN, which is defined by several sequential layers of neurons. Table 1 summarizes the chosen hyperparameters for the ML training, which is carried out using the PyTorch library [43]. The learning rate is reduced by a factor of 5 if the loss does not decrease for 20 epochs.
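A sketch of a network and training configuration consistent with Table 1 and with the architecture reported later in this section (three hidden layers with 64 neurons each, tanh activations) is given below. The input/output dimensions and variable names are illustrative assumptions, not the authors' code.

```python
import torch
from torch import nn

class ManifoldDNN(nn.Module):
    """Dense network with three hidden layers of 64 neurons and tanh activations."""

    def __init__(self, n_inputs: int = 2, n_outputs: int = 5, n_neurons: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_neurons), nn.Tanh(),
            nn.Linear(n_neurons, n_neurons), nn.Tanh(),
            nn.Linear(n_neurons, n_neurons), nn.Tanh(),
            nn.Linear(n_neurons, n_outputs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = ManifoldDNN()
criterion = nn.MSELoss()                                    # loss function (Table 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # Adam, initial LR 0.001
# Reduce the learning rate by a factor of 5 if the loss stagnates for 20 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.2, patience=20
)
```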
For the selection of suitable inputs (or control variables) for the DNN, a sparse principal component analysis (SPCA) [37] is performed. Contrary to a regular PCA, for which the principal components are a dense linear combination, i.e. a combination of all thermo-chemical state variables [44], the SPCA algorithm attempts to minimize the number of variables contributing to the principal components, i.e. a sparse linear combination, which makes the result more interpretable. As a result, the main principal components identified by the SPCA only rely on a few of the variables that define the thermo-chemical state. The variables are normalized as described in Section 3.1 prior to the SPCA.
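For illustration, an analysis of this kind could be carried out with scikit-learn's SparsePCA, which builds on the dictionary-learning formulation of [37]. The feature matrix below is a random placeholder, and the explained-variance surrogate is an assumption, since SparsePCA does not expose explained variance directly; the authors' actual workflow is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Placeholder feature matrix: one row per HOQ sample, one column per normalized
# thermo-chemical state variable (species mass fractions and temperature).
rng = np.random.default_rng(seed=0)
state_scaled = rng.random((10_000, 54))

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
scores = spca.fit_transform(state_scaled)

# Sparse loadings: most entries are exactly zero, so each component depends on
# only a few state variables (cf. Table 2).
loadings = spca.components_

# SparsePCA does not expose an explained-variance ratio; a simple surrogate is the
# variance of each component's scores relative to their sum over the retained SPCs.
var = scores.var(axis=0)
print(var / var.sum())
```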
The first three sparse principal components (SPCs) extracted from the training dataset are shown in Table 2 together with their constituents and associated explained variance. The latter describes the ratio of variance contained in the individual sparse principal component compared to the sum of all sparse principal component variances in the dataset, i.e. a measure of the information contained in the variable. The variances of all SPCs sum up to unity. Interestingly, the first sparse principal component (SPC1) resembles a progress variable (PV) consisting of the species involved in the global methane oxidation reaction. Similar combinations of species have been used for the definition of a primary progress variable in the modeling of methane-air combustion [45], and it is emphasized that SPC1 is obtained here without using prior knowledge about the flame physics. The second sparse principal component (SPC2) mainly consists of the temperature (T) and negligible contributions of three species. Both SPC1 and SPC2 are in accordance with the two governing flame regimes contained in the 1D HOQ training dataset: (1) the adiabatic flame propagation (characterized by PV), and (2) flame quenching caused by heat losses to the wall (characterized by T). Therefore, PV and T are chosen as model inputs, see Fig. 3. These principal components also agree well with the inputs of the tabulated manifold approaches, where a progress variable and enthalpy have been chosen [32,36]. For convenience, PV is defined as 2·SPC1 ≃ Y_H2O + Y_CO2 − Y_O2 − Y_CH4 in the following.
In the coupled simulation, SPC1 and SPC2 are solved in addition to the velocity and pressure in the employed PIMPLE algorithm, a combination of the SIMPLE and PISO algorithms [46]. Assuming unity Lewis number diffusion for all species, the transport equations for SPC1, i.e. the progress variable (PV), and SPC2, i.e. the temperature (T), read

∂(ρ PV)/∂t + ∇·(ρ u PV) = ∇·(ρ D ∇PV) + ω̇_PV,    (1)

∂(ρ T)/∂t + ∇·(ρ u T) = ∇·((λ/c_p) ∇T) + D_diff,s + ω̇'_T,    (2)

where ρ is the density, u the velocity, D the diffusion coefficient, ω̇_PV the progress variable source term, c_p the heat capacity, λ the heat conductivity, and ω̇'_T the source term of the temperature, i.e. the heat release rate (HRR). The term D_diff,s in Eq. (2) represents the temperature diffusion caused by the species diffusion. During the numerical solution of Eqs. (1) and (2), PV and T are used as inputs for the employed DNN to retrieve the thermo-chemical quantities highlighted in Eqs. (1) and (2) (marked by gray boxes) as outputs from the DNN. For the training process, the number of layers and neurons was varied until a network architecture with three hidden layers containing 64 neurons each proved sufficient to accurately represent the thermo-chemical variables included in the chemistry manifold.
Fig. 2. Scatter plot colored by the CO mass fraction for the head-on quenching flame in the physical space (x) versus time (t) (left). The same dataset is mapped into progress variable (PV) and temperature (T) coordinates on the right. The additional preheated flamelets, which are incorporated into the manifold, are shown on the right above the red line.
Fig. 3. Network architecture of a dense neural network (DNN). The inputs and outputs used in this work are shown for the respective layers.
Table 1
Hyperparameters for the training of the ML model.
Hyperparameter          Value
Epochs                  1000
Batch size              5000
GPU                     1× Tesla K20Xm
Optimizer               Adam algorithm
Initial learning rate   0.001
Loss function           Mean squared error (MSE)
Activation function     Hyperbolic tangent (tanh)
Table 2
Results of the SPCA for the 1D HOQ training dataset. Only the first three SPCs are shown here. SPC3 is truncated after the first four constituents.
SPC    Thermo-chemical variables                                            Explained variance
SPC1   −0.5 Y_O2 − 0.5 Y_CH4 + 0.5 Y_H2O + 0.51 Y_CO2                       0.365
SPC2   −0.004 Y_H2 + 0.009 Y_O2 − 0.001 Y_H2O2 + 1.0 T                      0.121
SPC3   −0.503 Y_C2H6 − 0.492 Y_C3H8 − 0.446 Y_CH2O − 0.335 Y_C2H4 + ...     0.113
Furthermore, a different training strategy, namely the training of one DNN for each output, was evaluated without observable improvement in accuracy or computational performance for the manifold investigated here. It is emphasized that this could be different for other cases.
3.3. Source term modeling
The accurate description of the chemical source terms can be challenging for ML approaches based on neural networks. Specifically, in a transient simulation, small, but non-zero, source terms for fresh gas conditions or equilibrium conditions on the burnt side of a flame (i.e. thermo-chemical states at the boundaries of the manifold) can lead to an error accumulation for the transported control variables (PV, T). Subsequently, the transported variables can drift to unphysical states outside the training interval, where the predictions by the DNN degrade and exacerbate the drift further. Due to their inherent, gradient-based optimization, neural networks asymptotically approach but cannot reach perfect accuracy in regression tasks, which would be required here to accurately capture the flame physics. This aspect can easily be missed in the a-priori validation step, since the error at the boundary is small.

Previous works proposed to train additional networks for subsets of the manifold [8,12,25–27]. This can decrease the error made at the manifold boundaries to an acceptable threshold, but it does not completely solve the issue and further introduces computational overhead, since the appropriate model has to be identified for every computational element in every time step at simulation runtime.
Here, it was found sufficient to set the source terms to zero close to the PV boundaries. The predicted output of all source terms, namely ω̇_PV and ω̇'_T, is then conditioned on the progress variable at simulation runtime:

ω̇ = 0             if PV̄ ≤ 0,
ω̇ = DNN(PV, T)    if 0 ≤ PV̄ ≤ 1,        (3)
ω̇ = 0             if PV̄ ≥ 1,

where PV̄ is the scaled progress variable. PV̄ = 0 and PV̄ = 1 correspond to PV_min = −0.275 and PV_max = 0.271, respectively. These values are given by the manifold and are constants in this case, because only one mixture fraction level is considered. Generally, PV_min and PV_max could also be defined as functions of the mixture fraction if required. The above constraint is realized by a mask vector, which is described in more detail in the next section. It ensures that the transported input values of the DNN stay bounded to the known training interval. It is noted that this measure can easily be applied to arbitrary inputs of the DNN.
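A minimal sketch of how such a mask-based correction of Eq. (3) could be applied to the DNN output vector at runtime is shown below. The function and variable names are illustrative assumptions; in the coupled solver the mask is evaluated per computational cell, as described in Section 3.5.

```python
import torch

def corrected_source_terms(pv_scaled: torch.Tensor,
                           dnn_outputs: torch.Tensor,
                           source_indices: list[int]) -> torch.Tensor:
    """Apply the boundary correction of Eq. (3) to the DNN output vector.

    pv_scaled      : scaled progress variable per cell (PV_bar)
    dnn_outputs    : DNN outputs per cell (all retrieved manifold quantities)
    source_indices : column indices of the source terms (e.g. omega_PV, omega_T)
    """
    # Mask marks cells whose state lies at or outside the manifold boundaries
    mask = (pv_scaled <= 0.0) | (pv_scaled >= 1.0)
    outputs = dnn_outputs.clone()
    for j in source_indices:
        outputs[mask, j] = 0.0    # zero only the source-term columns
    return outputs
```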
3.4. Model validation
In order to measure and quantify its accuracy, the ML model is tested with the validation dataset (not included in the training). The prediction quality is measured with the coefficient of determination, defined as

R² = 1 − [ Σ_{i=1}^{N} (Φ_i − Φ̂_i)² ] / [ Σ_{i=1}^{N} (Φ_i − Φ̄)² ],

where Φ_i, Φ̄, Φ̂_i, and N are the true value, the mean value of Φ, the network's prediction, and the number of samples, respectively. For the ML model employed here, all model outputs reach an R² score of 0.999.
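Written out in code, the coefficient of determination is a direct transcription of the formula above (assuming NumPy arrays; not the authors' evaluation script):

```python
import numpy as np

def r_squared(phi_true: np.ndarray, phi_pred: np.ndarray) -> float:
    """Coefficient of determination as defined above."""
    ss_res = np.sum((phi_true - phi_pred) ** 2)
    ss_tot = np.sum((phi_true - phi_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```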
In particular, CO has been investigated and identified as a relevant quantity for flame-wall interactions in previous studies [30,32,36]. Fig. 4 allows assessing the overall predictive capabilities of the tabulated manifold (QFM) and the ML model for the CO mass fraction in comparison to the two reference datasets (DC) of HOQ and SWQ. Extracting the inputs from the DC datasets, the QFM and the ML model are utilized to predict the CO mass fraction, which is then compared to the DC references (a-priori analysis). Optimal predictions are located on the diagonal line in Fig. 4, where the predicted CO is identical to the reference DC value. It can be observed that both the QFM and the ML model show high accuracy for the HOQ and SWQ datasets. Only slight overestimates are observed for the SWQ for both the ML and QFM model, as indicated by the few gray points above the diagonal. This over-prediction of CO is caused by a substantially (factor of 2) higher heat loss to the wall in the HOQ configuration (i.e. the training data) compared to the SWQ configuration. This difference can be attributed to the different directions of the temperature gradients in the respective cases [32,36,47]. Overall, the ML model yields satisfying results and is applied to coupled simulations of both reference configurations next.
3.5. Model coupling
Before coupling the model to the CFD code, two post-processing steps are performed:

1. In order to apply the network in the CFD code, the scaling and rescaling operations of the inputs and outputs are added to the exported network.
2. All source terms are corrected based on the input of the progress variable according to Eq. (3).

The first step allows a generic, case-independent implementation of the ML interface into the CFD code. As outlined in the previous section, the source term correction is realized by means of a mask vector that determines at runtime which cells contain a thermo-chemical state located at the boundary of the manifold. Subsequently, the output vector of the source term is modified for positive values of the mask vector. Finally, the DNN is coupled to the OpenFOAM-based solver via the PyTorch C++ API for usage in the coupled simulations.
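One possible realization of the first step is to wrap the scaling and rescaling into the exported module and serialize it with TorchScript, which can then be loaded from the OpenFOAM-based solver through the PyTorch C++ API (torch::jit::load). The sketch below uses placeholder tensors and is not the authors' implementation.

```python
import torch
from torch import nn

class ScaledManifoldDNN(nn.Module):
    """Wrap a trained network with input scaling and output rescaling (step 1),
    so the exported model accepts physical inputs and returns physical outputs."""

    def __init__(self, model, x_min, x_max, y_min, y_max):
        super().__init__()
        self.model = model
        self.register_buffer("x_min", x_min)
        self.register_buffer("x_max", x_max)
        self.register_buffer("y_min", y_min)
        self.register_buffer("y_max", y_max)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_scaled = (x - self.x_min) / (self.x_max - self.x_min)
        y_scaled = self.model(x_scaled)
        return y_scaled * (self.y_max - self.y_min) + self.y_min

# Placeholders for the trained network and the stored min-max bounds
trained_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 5))
x_min, x_max = torch.zeros(2), torch.ones(2)
y_min, y_max = torch.zeros(5), torch.ones(5)

wrapped = ScaledManifoldDNN(trained_net, x_min, x_max, y_min, y_max)
scripted = torch.jit.script(wrapped)
scripted.save("manifold_dnn.pt")   # loadable via torch::jit::load in the C++ API
```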
4. Results and discussion
In this section, two cases are simulated using the previously described ML model to represent the manifold. First, the model is verified on the 1D HOQ configuration, which was also used to generate the training data. Second, the model is applied to the laminar SWQ configuration described in Section 2.2.
Fig. 4. A-priori analysis of the tabulated manifold (QFM) and the machine learning model (ML) on the detailed chemistry datasets of head-on quenching (HOQ) and side-wall quenching (SWQ) for the mass fraction of CO.
By this means, the predictive capabilities of the ML model are assessed for a flame-wall interaction case which is different from the training database.
4.1. Results for the 1D HOQ configuration

A snapshot of the coupled HOQ simulation with (ML) and without the source term correction (ML*) is shown together with the DC result in Fig. 5. The ML* model predicts small, non-zero temperature (and progress variable) source terms in the preheat zone of the flame, which accumulate to an unphysical temperature increase over the simulation runtime. At the beginning of the simulation, ML* underpredicts the flame propagation speed by approximately 30%, which later turns into an overprediction of similar magnitude after a considerable temperature increase. This effect can hardly be identified from the a-priori analysis, and it depends on the initialization and duration of the simulation and on the absolute prediction error of the ML model for source terms at the manifold boundaries. In comparison, the ML model, which utilizes the source term correction described in Section 3.3, accurately describes the temperature profile and flame speed (< 1% deviation compared to the DC reference).

Furthermore, Fig. 6 shows the wall heat flux for the ML model, the tabulated manifold (QFM), and the DC reference result. It is found that both the ML and QFM models recover the overall trend shown by the DC reference result, but overpredict the wall heat flux at the quenching point, characterized as the point in time of maximum wall heat flux, by a similar order of magnitude. It is thereby verified that the ML model yields results comparable to a tabulated manifold approach generated from the same dataset used in the training of the neural network.
4.2. Results for the 2D SWQ configuration

The coupled simulation results obtained with the ML model for the 2D SWQ configuration are compared with a detailed chemistry and a tabulated chemistry calculation (QFM, cf. Section 2.2). The numerical results are analyzed with respect to a relative coordinate system that uses the quenching height as the origin of the wall-parallel direction. The quenching height is defined by the maximum wall heat flux [30,32].

Fig. 7 depicts the temperature contour of the SWQ configuration for the ML model as well as the DC and QFM references. The contours of the reduced models show qualitatively good agreement at the quenching point as well as for the shape and position of the adiabatic flame branch. The isolines for three temperature levels (310 K, 320 K, and 330 K) indicate the heating of the fresh mixture (300 K) in close proximity to the wall, caused by the slightly elevated wall temperature (330 K). These preheated states are captured by adding preheated adiabatic flamelets to the training dataset (cf. Section 3.1). Interestingly, the 330 K isoline of the QFM shows a small deviation at the point where it meets the wall boundary. This difference can be attributed to the transport of enthalpy for the QFM model instead of temperature for the ML model. The transport of enthalpy requires special treatment of the constant temperature wall boundary condition; in this case, a secondary table is used to retrieve the wall enthalpy for a given progress variable and constant temperature. For the ML model, in contrast, the temperature is transported and the isothermal boundary condition is directly imposed.
4.2.1. Analysis of local heat release rate
Following previous investigations on SWQ [30,32,36], the local HRR as a global flame property is used to assess the predictive capabilities of the ML model. The 2D contours of the normalized HRR at the quenching point are displayed in the top row of Fig. 8. The HRR decreases considerably within 0.5 mm distance to the wall, showing that the flame quenching process occurs very close to the wall. A detailed assessment of quenching distances for the given SWQ configuration is beyond the scope of this work; the reader is referred to Zirwes et al. [47], who investigated the influence of several factors (e.g., chemical mechanism, diffusion model) on the quenching distance of a SWQ configuration. Overall, a qualitatively good agreement of the HRR contour of the ML model with the DC and QFM references can be observed.

To allow a quantitative comparison, HRR profiles are extracted at three different distances from the quenching point (y_q = 0), indicated by the horizontal white dotted lines in Fig. 8. The extracted profiles are shown in Fig. 9. At the three quenching heights y_q, both the peak position and the peak value of the HRR profiles obtained with the ML model align well with the DC reference solution. Small deviations can be observed at the quenching height y_q = 0 mm, where the peak of the ML model is slightly shifted towards the wall, and at y_q = 0.5 mm, where the peak is slightly overestimated. However, the model yields similar results as the QFM, demonstrating that it is able to accurately capture the local HRR.
4.2.2. Analysis of the thermo-chemical state
As previously mentioned, CO has been identified as a relevant quantity for flame-wall interactions in previous works [30,32,36]. The CO mass fraction and the temperature fields, which are obtained for the SWQ configuration by the DC, QFM and ML modeling approaches, are analysed next.

Contours of the CO mass fraction around the quenching point are displayed in the bottom row of Fig. 8. Here, it can be observed that the CO production has its maximum within the reaction zone of the flame. The high CO concentration in the quenching zone at the wall indicates incomplete combustion, where further oxidation of CO is no longer possible.

The local thermo-chemical state is further analysed by comparing the CO mass fraction over temperature at different wall distances in Fig. 10. The locations are indicated by the vertical white dotted lines drawn in Fig. 8. An adiabatic freely propagating premixed flame is additionally included as a reference.
Fig. 5. Snapshot of coupled HOQ simulations with a DC and two machine learning models, with (ML) and without the source term correction (ML*), according to Eq. (3).
Fig. 6. Comparison of the wall heat flux obtained for coupled HOQ simulations with a DC model, a tabulated manifold approach (QFM), and a machine learning model (ML), relative to the quenching time.
Close to the wall at x = 0.1 mm, significant differences between all the simulations and the adiabatic flame can be observed, indicating an incomplete combustion process. When moving away from the wall (x = 0.5 mm and x = 2 mm), the CO profile shifts from a quenching state due to heat loss to the wall to an almost adiabatic state.
A slight over-prediction of CO can be observed for both the ML and the QFM profile. This is in agreement with the conclusions of the a-priori analysis (cf. Section 3.4). This over-prediction can be attributed to the different rates of heat transfer to the wall in HOQ and SWQ [32,36,47]. To improve the prediction of CO, Efimov et al. [36] introduced an additional reactive control variable to account for this effect of varying rates of heat transfer, which could be added to the ML approach in future works.
Fig. 7. 2D contours of the temperature for the SWQ configuration obtained from simulations with detailed chemistry (DC, left), with the machine learning model (ML, middle) and with the tabulated manifold (QFM, right). Isolines for three temperature levels (310 K, 320 K, and 330 K) are added to highlight the heating of the fresh gas mixture (300 K) by the slightly elevated wall temperature (330 K). Note that the domain is cropped to aid the visual inspection of the quenching point.
Fig. 8. 2D contours of the normalized local HRR (top) and CO mass fraction (bottom) for a detailed chemistry (DC, left), a tabulated manifold (QFM, right), and a machine learning (ML, middle) model in proximity of the quenching point for the SWQ configuration. In the upper row, additional isolines of the temperature T are shown (dashed white lines).
Fig. 9. Comparison of the HRR of the SWQ for the detailed (DC), tabulated (QFM), and machine learning (ML) chemistry. The profiles are extracted along wall-normal lines at different heights y_q, which are displayed as white solid lines in Fig. 8.
The ML model adequately predicts regions close to and far from the quenching point, accurately accounting for the heat losses. Overall, the results show good agreement with the DC reference solution and are qualitatively and quantitatively comparable to the results obtained with the tabulated manifold model.

Fig. 10. Comparison of the thermo-chemical state of the SWQ for the detailed (DC), tabulated (QFM), and machine learning (ML) chemistry at different axial positions. The profiles were extracted from the vertical white solid lines in Fig. 8. Additionally, the thermo-chemical state of an adiabatic premixed flame (FP) is added as a reference state.
5. Conclusions
A data-driven approach has been presented that includes the parameterization and the training of a machine learning model representing a low-order chemistry manifold. The model is coupled to a CFD solver and utilized for the 2D simulation of a premixed methane-air flame undergoing side-wall quenching. With an emphasis on the ML modeling techniques, procedures for the selection of suitable input parameters (based on a sparse PCA) and an efficient method for the correction of non-linear source terms at the manifold boundaries have been demonstrated. It was shown for a 1D head-on quenching flame how the accumulation of errors caused by the incorrect prediction of source terms at the manifold boundaries leads to an unphysical increase of the transported control variables and, eventually, to an incorrect flame speed. In contrast, the results of the ML model, which includes the proposed correction method, showed good agreement with the detailed chemistry reference solution for flame speed and wall heat flux. Subsequently, the model was applied to a generic SWQ configuration, where its predictive capabilities for the local heat release rate and the CO production near the wall were analyzed. The ML model showed results comparable to the tabulated manifold approach when both are compared to the detailed chemistry reference. This underlines the ability of ML approaches to accurately capture complex combustion phenomena such as flame-wall interaction.

In summary, ML chemistry models based on neural networks provide a promising alternative to the conventional approach of manifold tabulation, compensating for some of its shortcomings. The DNN requires only 2% of the QFM's memory, while the computational cost remains similar. However, the performance heavily depends on the DNN architecture [48] and the hardware, which will be the subject of future studies. This work can serve as the basis to investigate more complex flame configurations in future works, involving aspects such as differential diffusion and stretch effects, turbulent combustion, or hydrogen/hydrocarbon fuel blends.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data will be made available on request.
Acknowledgments
The work was supported by the Graduate School Computational Engineering and by the Graduate School of Energy and Science at the Technical University of Darmstadt. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project Number 237267381 – TRR 150, by the project "Center of Excellence in Combustion", which received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 952181, and by the Federal Ministry of Education and Research (BMBF) and the state of Hesse as part of the NHR Program.
References
[1] van Oijen J, Donini A, Bastiaans R, ten Thije Boonkkamp J, de Goey L. State-of-the-art in premixed combustion modeling using flamelet generated manifolds. Prog Energy Combust Sci 2016;57:30–74. https://doi.org/10.1016/j.pecs.2016.07.001.
[2] Maas U, Pope S. Simplifying chemical kinetics: intrinsic low-dimensional manifolds in composition space. Combust Flame 1992;88(3–4):239–64. https://doi.org/10.1016/0010-2180(92)90034-m.
[3] Gicquel O, Darabiha N, Thévenin D. Laminar premixed hydrogen/air counterflow flame simulations using flame prolongation of ILDM with differential diffusion. Proc Combust Inst 2000;28(2):1901–8. https://doi.org/10.1016/s0082-0784(00)80594-9.
[4] Bykov V, Maas U. The extension of the ILDM concept to reaction–diffusion manifolds. Combust Theor Model 2007;11(6):839–62. https://doi.org/10.1080/13647830701242531.
[5] Flemming F, Sadiki A, Janicka J. LES using artificial neural networks for chemistry representation. Prog Comput Fluid Dyn 2005;5(7):375. https://doi.org/10.1504/pcfd.2005.007424.
[6] Ihme M, Schmitt C, Pitsch H. Optimal artificial neural networks and tabulation methods for chemistry representation in LES of a bluff-body swirl-stabilized flame. Proc Combust Inst 2009;32(1):1527–35. https://doi.org/10.1016/j.proci.2008.06.100.
[7] Hansinger M, Ge Y, Pfitzner M. Deep residual networks for flamelet/progress variable tabulation with application to a piloted flame with inhomogeneous inlet. Combust Sci Technol 2020:1–27. https://doi.org/10.1080/00102202.2020.1822826.
[8] Franke LL, Chatzopoulos AK, Rigopoulos S. Tabulation of combustion chemistry via artificial neural networks (ANNs): methodology and application to LES-PDF simulation of Sydney flame L. Combust Flame 2017;185:245–60. https://doi.org/10.1016/j.combustflame.2017.07.014.
[9] Readshaw T, Ding T, Rigopoulos S, Jones WP. Modeling of turbulent flames with the large eddy simulation–probability density function (LES–PDF) approach, stochastic fields, and artificial neural networks. Phys Fluids 2021;33(3):035154. https://doi.org/10.1063/5.0041122.
[10] Ding T, Readshaw T, Rigopoulos S, Jones W. Machine learning tabulation of thermochemistry in turbulent combustion: an approach based on hybrid flamelet/random data and multiple multilayer perceptrons. Combust Flame 2021;231:111493. https://doi.org/10.1016/j.combustflame.2021.111493.
[11] Owoyele O, Kundu P, Ameen MM, Echekki T, Som S. Application of deep artificial neural networks to multi-dimensional flamelet libraries and spray flames. Int J Engine Res 2019;21(1):151–68. https://doi.org/10.1177/1468087419837770.
[12] Ranade R, Li G, Li S, Echekki T. An efficient machine-learning approach for PDF tabulation in turbulent combustion closure. Combust Sci Technol 2019:1–20. https://doi.org/10.1080/00102202.2019.1686702.
[13] Bhalla S, Yao M, Hickey J-P, Crowley M. Compact representation of a multi-dimensional combustion manifold using deep neural networks. In: Brefeld U, Fromont E, Hotho A, Knobbe A, Maathuis M, Robardet C, editors. Machine learning and knowledge discovery in databases. Cham: Springer International Publishing; 2020. p. 602–17.
[14] Lapeyre CJ, Misdariis A, Cazard N, Veynante D, Poinsot T. Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates. Combust Flame 2019;203:255–64. https://doi.org/10.1016/j.combustflame.2019.02.019.
[15] Seltz A, Domingo P, Vervisch L, Nikolaou ZM. Direct mapping from LES resolved scales to filtered-flame generated manifolds using convolutional neural networks. Combust Flame 2019;210:71–82. https://doi.org/10.1016/j.combustflame.2019.08.014.
[16] Nikolaou ZM, Chrysostomou C, Vervisch L, Cant S. Progress variable variance and filtered rate modelling using convolutional neural networks and flamelet methods. Flow Turbul Combust 2019;103(2):485–501. https://doi.org/10.1007/s10494-019-00028-w.
[17] Shin J, Ge Y, Lampmann A, Pfitzner M. A data-driven subgrid scale model in large eddy simulation of turbulent premixed combustion. Combust Flame 2021;231:111486. https://doi.org/10.1016/j.combustflame.2021.111486.
[18] Sinaei P, Tabejamaat S. Large eddy simulation of methane diffusion jet flame with representation of chemical kinetics using artificial neural network. Proc Inst Mech Eng E: J Process Mech Eng 2016;231(2):147–63. https://doi.org/10.1177/0954408915580505.
[19] Chi C, Janiga G, Thévenin D. On-the-fly artificial neural network for chemical kinetics in direct numerical simulations of premixed combustion. Combust Flame 2021;226:467–77. https://doi.org/10.1016/j.combustflame.2020.12.038.
[20] Owoyele O, Pal P. ChemNODE: a neural ordinary differential equations framework for efficient chemical kinetic solvers. Energy AI 2022;7:100118. https://doi.org/10.1016/j.egyai.2021.100118.
[21] Haghshenas M, Mitra P, Santo ND, Schmidt DP. Acceleration of chemical kinetics computation with the learned intelligent tabulation (LIT) method. Energies 2021;14(23):7851. https://doi.org/10.3390/en14237851.
[22] Ji W, Qiu W, Shi Z, Pan S, Deng S. Stiff-PINN: physics-informed neural network for stiff chemical kinetics. J Phys Chem A 2021;125(36):8098–106. https://doi.org/10.1021/acs.jpca.1c05102.
[23] Wan K, Barnaud C, Vervisch L, Domingo P. Chemistry reduction using machine learning trained from non-premixed micro-mixing modeling: application to DNS of a syngas turbulent oxy-flame with side-wall effects. Combust Flame 2020;220:119–29. https://doi.org/10.1016/j.combustflame.2020.06.008.
[24] Wan K, Barnaud C, Vervisch L, Domingo P. Machine learning for detailed chemistry reduction in DNS of a syngas turbulent oxy-flame with side-wall effects. Proc Combust Inst 2021;38(2):2825–33. https://doi.org/10.1016/j.proci.2020.06.047.
[25] Chatzopoulos A, Rigopoulos S. A chemistry tabulation approach via rate-controlled constrained equilibrium (RCCE) and artificial neural networks (ANNs), with application to turbulent non-premixed CH4/H2/N2 flames. Proc Combust Inst 2013;34(1):1465–73. https://doi.org/10.1016/j.proci.2012.06.057.
[26] Ranade R, Echekki T. A framework for data-based turbulent combustion closure: a posteriori validation. Combust Flame 2019;210:279–91. https://doi.org/10.1016/j.combustflame.2019.08.039.
[27] Nguyen H-T, Domingo P, Vervisch L, Nguyen P-D. Machine learning for integrating combustion chemistry in numerical simulations. Energy AI 2021;5:100082. https://doi.org/10.1016/j.egyai.2021.100082.
[28] Zhou L, Song Y, Ji W, Wei H. Machine learning for combustion. Energy AI 2022;7:100128. https://doi.org/10.1016/j.egyai.2021.100128.
[29] Ihme M, Chung WT, Mishra AA. Combustion machine learning: principles, progress and prospects. Prog Energy Combust Sci 2022;91:101010. https://doi.org/10.1016/j.pecs.2022.101010.
[30] Ganter S, Straßacker C, Kuenne G, Meier T, Heinrich A, Maas U, Janicka J. Laminar near-wall combustion: analysis of tabulated chemistry simulations by means of detailed kinetics. Int J Heat Fluid Flow 2018;70:259–70. https://doi.org/10.1016/j.ijheatfluidflow.2018.02.015.
[31] Strassacker C, Bykov V, Maas U. Comparative analysis of reaction-diffusion manifold based reduced models for head-on- and side-wall-quenching flames. Proc Combust Inst 2021;38(1):1025–32. https://doi.org/10.1016/j.proci.2020.06.130.
[32] Steinhausen M, Luo Y, Popp S, Strassacker C, Zirwes T, Kosaka H, et al. Numerical investigation of local heat-release rates and thermo-chemical states in side-wall quenching of laminar methane and dimethyl ether flames. Flow Turbul Combust 2020. https://doi.org/10.1007/s10494-020-00146-w.
[33] Palulli R, Talei M, Gordon RL. Unsteady flame–wall interaction: impact on CO emission and wall heat flux. Combust Flame 2019;207:406–16. https://doi.org/10.1016/j.combustflame.2019.06.012.
[34] Steinhausen M, Zirwes T, Ferraro F, Scholtissek A, Bockhorn H, Hasse C. Flame-vortex interaction during turbulent side-wall quenching and its implications for flamelet manifolds. Proc Combust Inst. https://doi.org/10.1016/j.proci.2022.09.026.
[35] Kaddar D, Steinhausen M, Zirwes T, Bockhorn H, Hasse C, Ferraro F. Combined effects of heat loss and curvature on turbulent flame-wall interaction in a premixed dimethyl ether/air flame. Proc Combust Inst. https://doi.org/10.1016/j.proci.2022.08.060.
[36] Efimov DV, de Goey P, van Oijen JA. QFM: quenching flamelet-generated manifold for modelling of flame–wall interactions. Combust Theor Model 2019;24(1):72–104. https://doi.org/10.1080/13647830.2019.1658901.
[37] Mairal J, Bach F, Ponce J, Sapiro G. Online dictionary learning for sparse coding. In: Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09. ACM Press; 2009. https://doi.org/10.1145/1553374.1553463.
[38] Smith G, Golden D, Frenklach M, Moriarty N, Eiteneer B, Goldenberg M, Bowman C, Hanson R, Song S, Gardiner W, Lissianski V, Qin Z. GRI-Mech 3.0. URL https://www.me.berkeley.edu/gri_mech/.
[39] Zschutschke A, Messig D, Scholtissek A, Hasse C. Universal laminar flame solver (ULF); 2017. https://doi.org/10.6084/M9.FIGSHARE.5119855.V2.
[40] Luo Y, Strassacker C, Wen X, Sun Z, Maas U, Hasse C. Strain rate effects on head-on quenching of laminar premixed methane-air flames. Flow Turbul Combust 2020;106(2):631–47. https://doi.org/10.1007/s10494-020-00179-1.
[41] Kosaka H, Zentgraf F, Scholtissek A, Bischoff L, Häber T, Suntz R, et al. Wall heat fluxes and CO formation/oxidation during laminar and turbulent side-wall quenching of methane and DME flames. Int J Heat Fluid Flow 2018;70:181–92. https://doi.org/10.1016/j.ijheatfluidflow.2018.01.009.
[42] Kosaka H, Zentgraf F, Scholtissek A, Hasse C, Dreizler A. Effect of flame-wall interaction on local heat release of methane and DME combustion in a side-wall quenching geometry. Flow Turbul Combust 2019;104(4):1029–46. https://doi.org/10.1007/s10494-019-00090-4.
[43] Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Köpf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S. PyTorch: an imperative style, high-performance deep learning library. arXiv:1912.01703.
[44] Sutherland JC, Parente A. Combustion modeling using principal component analysis. Proc Combust Inst 2009;32(1):1563–70. https://doi.org/10.1016/j.proci.2008.06.147.
[45] Scholtissek A, Domingo P, Vervisch L, Hasse C. A self-contained progress variable space solution method for thermochemical variables and flame speed in freely-propagating premixed flamelets. Proc Combust Inst 2018;37:1529–36. https://doi.org/10.1016/j.proci.2018.06.168.
[46] Versteeg HK, Malalasekera W. An introduction to computational fluid dynamics: the finite volume method. 2nd ed. Pearson Education Ltd; 2007.
[47] Zirwes T, Häber T, Zhang F, Kosaka H, Dreizler A, Steinhausen M, Hasse C, Stagni A, Trimis D, Suntz R, Bockhorn H. Numerical study of quenching distances for side-wall quenching using detailed diffusion and chemistry. Flow Turbul Combust 2020;106(2):649–79. https://doi.org/10.1007/s10494-020-00215-0.
[48] Nikolaou Z, Vervisch L, Domingo P. Criteria to switch from tabulation to neural networks in computational combustion. Combust Flame 2022;246:112425. https://doi.org/10.1016/j.combustflame.2022.112425.