Applications in Energy and Combustion Science 13 (2023) 100113
Available online 9 February 2023
2666-352X/© 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Application of dense neural networks for manifold-based modeling of flame-wall interactions
Julian Bissantz a, Jeremy Karpowski a,1, Matthias Steinhausen a, Yujuan Luo a, Federica Ferraro a,*, Arne Scholtissek a, Christian Hasse a, Luc Vervisch b

a Technical University of Darmstadt, Department of Mechanical Engineering, Simulation of reactive Thermo-Fluid Systems, Otto-Berndt-Str. 2, 64287 Darmstadt, Germany
b CORIA - CNRS, Normandie Université, INSA de Rouen, Technopôle du Madrillet, BP 8, Saint-Étienne-du-Rouvray 76801, France
ARTICLE INFO
Keywords:
Machine learning
Data-driven modeling
Manifold methods
Head-on quenching
Side-wall quenching
ABSTRACT
Artical neural networks (ANNs) are universal approximators capable of learning any correlation between
arbitrary input data with corresponding outputs, which can also be exploited to represent a low-dimensional
chemistry manifold in the eld of combustion. In this work, a procedure is developed to simulate a premixed
methane-air ame undergoing side-wall quenching utilizing an ANN chemistry manifold. In the investigated
case, the ame characteristics are governed by two canonical problems: the adiabatic ame propagation in the
core ow and the non-adiabatic ame- wall interaction governed by enthalpy losses to the wall. Similar to the
tabulation of a Quenching Flamelet-Generated Manifold (QFM), the neural network is trained on a 1D head-on
quenching ame database to learn the intrinsic chemistry manifold. The control parameters (i.e. the inputs) of
the ANN are identied from thermo-chemical state variables by a sparse principal component analysis (PCA)
without using prior knowledge about the ame physics. These input quantities are then transported in the
coupled CFD solver and used for manifold access during simulation runtime. The chemical source terms are
corrected at the manifold boundaries to ensure bounded- ness of the thermo-chemical state at all times. Finally,
the ANN model is assessed by comparison to simulation results of the 2D side-wall quenching (SWQ) congu-
ration with detailed chemistry and with a amelet-based manifold (QFM).
1. Introduction
In the transition towards sustainable combustion technologies, numerical simulations play a crucial role in the rapid design of carbon-neutral or carbon-free combustion systems for power generation and transportation. While highly resolved detailed chemistry (DC) simulations are essential to understand physical phenomena and can serve as the basis for model development and validation, their application to practical combustors is often unfeasible due to prohibitive computational costs. Consequently, there is a demand for accurate and numerically efficient chemistry reduction approaches. One option is reduced-order models utilizing chemistry manifolds [1-4]. These methods are based on the tabulation of pre-calculated thermo-chemical states, which are parameterized and accessed by control variables. The control variables are then usually transported in the CFD simulation and used to retrieve the thermo-chemical state from the manifold during simulation runtime. Given that a suitable manifold is used, these methods combine the high accuracy of a detailed chemistry (DC) simulation with low computational costs. However, for complex configurations involving a significant number of different combustion phenomena (e.g., multi-phase combustion, pollutant formation modeling, or flame-wall interactions), more control variables and thus additional manifold dimensions are required. The memory demand to store the manifold quickly becomes intractable considering memory-per-core limitations on high-performance computers. Additionally, the tabulation of the manifold becomes increasingly difficult since the control variables are often not linearly independent, which complicates a non-overlapping data arrangement and efficient manifold access.
* Corresponding author.
E-mail address: ferraro@stfs.tu-darmstadt.de (F. Ferraro).
1 Joint first author.
https://doi.org/10.1016/j.jaecs.2023.100113
Received 2 August 2022; Received in revised form 12 November 2022; Accepted 6 January 2023
An emerging alternative is data-driven modeling, or machine learning (ML), using artificial neural networks (ANN). Neural networks are universal approximators capable of learning any correlation between arbitrary input data (control variables) and corresponding outputs. This holds true even if the control variables are not linearly independent. This property of ANNs can be exploited to represent a chemistry manifold. An ANN is memory efficient, and its storage size changes only slightly with the number of control variables. Furthermore, ANNs profit from an enormous performance increase on specialized accelerator hardware, which is one focus of current computational hardware development.
The rst ANN application for chemistry modeling in a Large Eddy
Simulation (LES) was performed by Flemming et al. [5], who used the
approach to model the Sandia Flame D. The authors achieved a memory
reduction by three orders of magnitude with a minimal increase in
computational time in comparison with a tabulated manifold approach.
Ihme et al. [6] focused on the optimization of ANN architectures for each
output variable using a generalized pattern search. These models were
subsequently applied in an LES of the Sydney bluff-body swirl- stabilized
SMH1 ame. Again, the ANN showed a comparable accuracy to a
tabulated manifold with acceptable computational overhead for LES
applications. Recently, several investigations have been carried out
applying machine learning in reactive ow simulations for manifold
representation [713], turbulence-chemistry interactions [1417] and
modeling chemical kinetics [1822]. This was made possible by the
development of open-source deep learning frame- works that followed
the breakthrough of deep learning in the eld of computer vision in the
past decade. Although the training of ANNs has been simplied by these
frameworks, many works have reported a lack of model accuracy in case
input values approach the manifold boundaries. This applies in partic-
ular to non-linear quantities, such as chemical source terms. In a direct
numerical simulation (DNS) of a turbulent syngas oxy-ame, Wan et al.
[23,24] trained additional neural networks for a normalized oxygen
mass fraction Y
O2
>0.9 to improve the model accuracy close to the
unburnt conditions, where low, but non-zero, reaction rates occured.
Similarly, Ding et al. [10] used a multiple multilayer perceptron (MMP)
approach, where additional models were trained on increasingly smaller
intervals around zero, where the relative error of the prediction is large
compared to the absolute error. During the simulation, a model cascade
is employed, where the decision which output is used is based on the
output value of the previous model. This methodology was applied in
LES of the Sandia ame series. Another approach to increase the overall
prediction accuracy of ML models is to divide the manifold using clus-
tering algorithms such as self-organizing maps [8,12,25,26] or k-means
clustering [27] and to use different ANNs for the prediction of the
subsets of the manifold. However, in some cases, hundreds of networks
had to be trained [8,25], which introduces additional overhead for the
model selection during simulation runtime. For thorough overviews of
machine learning approaches in the context of combustion, the reader is
referred to Zhou et al. [28] and Ihme et al. [29].
While there exist many coupled simulations with models based on tabulated reduced-order manifolds, the literature on ML-based manifold modeling is still scarce. First results have been encouraging, but several challenges, such as the handling of non-linear terms, a robust feature selection (i.e., suitable control variables), or the required number of networks for an accurate manifold representation, are active areas of research.
In this work, an ML model based on dense neural networks (DNN) is coupled to a CFD solver and utilized for the 2D simulation of a laminar premixed methane-air flame undergoing side-wall quenching (SWQ). The 2D SWQ case is well-established [30-32] and includes two essential flame regimes: an unstretched adiabatic flame regime and a non-adiabatic flame quenching at the wall. Even when ignoring unsteady [33] or turbulent effects [34,35], standard flamelet models fail to capture some physics, as discussed by Efimov et al. [36]. Significant modeling efforts were required to develop advanced flamelet models that can accurately predict the pollutant formation, specifically CO [31,36].
With this background, the objective of this work is threefold:
- to demonstrate the application of a purely data-driven approach for modeling low-order chemistry manifolds in the simulation of the laminar side-wall quenching configuration;
- to identify suitable input parameters for the ML-based manifold without using prior knowledge about the flame physics (applying a sparse principal component analysis [37]);
- to develop a reliable treatment of source terms at the manifold boundaries to ensure boundedness of the thermo-chemical state during the coupled simulation.
The laminar SWQ configuration provides a benchmark for combustion modeling that is sufficiently challenging and therefore suitable for assessing the predictive capabilities of the ML-based approach developed in this study. Analogous simulation results obtained with detailed chemistry and with a flamelet-based model (QFM) serve as reference datasets.
The paper is structured as follows: Section 2 describes the numerical setup of the investigated configurations, namely the 1D head-on quenching (HOQ) and the 2D SWQ. In Section 3, the machine learning methods are outlined. In Section 4, the results for the HOQ and the SWQ cases are analyzed. First, the wall heat flux is compared for the HOQ case in order to verify the model. Thereafter, an analysis of the local heat release and the thermo-chemical state for the SWQ configuration is carried out. Finally, conclusions are drawn in Section 5.
2. Numerical setups
In this section, an overview of the numerical setups is provided. First, the 1D HOQ configuration is addressed, which is used both for the ANN training and the QFM generation. Thereafter, the more complex 2D SWQ configuration is described.
2.1. 1D head-on quenching (HOQ)
In the 1D HOQ configuration, a premixed laminar flame propagates perpendicularly towards a wall, where it extinguishes due to heat losses. The 2 cm long domain is discretized by an equidistant mesh with 2000 points (resolution of 10 µm), and time integration is realized by a fully implicit backward differentiation formula (BDF). For the initialization, an adiabatic, stoichiometric freely propagating methane-air flame with a fresh gas temperature of 300 K is used. The wall temperature is fixed at 300 K and molecular transport is modeled using the unity Lewis number assumption. Furthermore, the detailed GRI 3.0 mechanism is used [38]. All flamelet simulations are performed with an in-house solver [39], and the setup has been validated previously by Luo et al. [40].
2.2. 2D side-wall quenching (SWQ)
Following the setup of previous works [30,32], the simulation domain of the generic SWQ configuration consists of a two-dimensional rectilinear mesh (30 mm x 6 mm) with a uniform cell size of Δ = 50 µm, see Fig. 1. The numerical setup has been validated extensively [30,32] against experimental data [41,42]. The inlet flow is divided into a fresh gas (5.5 mm) and a burned gas section (0.5 mm), where the latter is used for flame stabilization. The fresh gas is initialized with a premixed stoichiometric methane-air mixture at ambient conditions (T = 300 K; p = 1 atm), while the burned gas consists of hot exhaust gases at equilibrium conditions. A parabolic inflow profile is used as the inlet velocity, and the burned gas velocity is set to 3.81 m/s, compensating for the density difference between fresh and burned gases. The velocity profile is shown in Fig. 1. At the wall, a constant temperature of 330 K is assumed in accordance with previous studies [32,42]. This represents an additional challenge for the DNN model, which has to capture the preheating at the wall, and this effect must be considered in the training. For the species mass fractions a zero-gradient and for the velocity a no-slip boundary condition is applied. At the outlets, zero-gradient boundary conditions are applied to temperature, species mass fractions, and velocity. The spatial and temporal discretization scheme, as well as the time step control, were adapted from Steinhausen et al. [32].
A detailed chemistry reference solution and corresponding simulation results with a tabulated manifold approach were obtained with in-house solvers based on OpenFOAM (v2006). Different tabulated manifold approaches have been developed and validated for this setup [32], from which the quenching flamelet-generated manifold (QFM) by Steinhausen et al. [32] is selected to serve as a reference model. The QFM is constructed from the same 1D HOQ dataset that is subsequently used for the training of the neural network, and it is parameterized by the control variables enthalpy and CO2 mass fraction.
3. Machine learning methodology
This section gives an overview of the ML methodology employed in this work. First, the generation of the ML training dataset is described. Thereafter, the training parameters, the chosen network architecture, and the method for identifying suitable model inputs are specified. A correction method for the non-linear source terms is proposed and discussed in Section 3.3. Afterwards, the assessment of the ML model accuracy is described and, finally, the coupling of the ML model to the CFD code is briefly outlined.
3.1. Training data generation
The amelet-based training data is generated by a transient HOQ
simulation, as described in Section 2.1. The left plot in Fig. 2 displays the
training data points in the original physical space (x) and time (t)
colored by the CO mass fraction. The isothermal wall is located at the left
boundary (x =0 mm). Additionally, this plot shows the writing interval
of the ame solution. Initially, the solutions are written at a constant
time step, whereas when the ame approaches the wall and heat losses
start to occur, the writing interval is then determined by the change of
enthalpy at the wall by 1 10
4
J/kg. Additionally, 10 freely propagating
ames with varying inlet temperatures ranging from 300 K to 340 K are
included in the training dataset. This extension of the manifold is carried
out in order to account for elevated enthalpy levels in proximity of the
isothermal wall in the SWQ conguration (T
wall
=330 K). The same
procedure was employed for QFM tables in [32] and is visualized in the
right plot in Fig. 2, where the HOQ manifold is displayed in the trans-
formed PV and T space, with the added freely propagating ames
located above the red line. It can be observed that the additional
amelets extend the manifold at the upper boundary.
In total, the dataset consists of 310 1D flamelets of different enthalpy levels, resulting in 0.62 million points. Furthermore, the data is randomly split into a training (90%) and a validation (10%) dataset. The latter is used to evaluate the predictive capabilities of the model and to monitor the training process to prevent overfitting. Additionally, all dataset entries are normalized to an interval of [0, 1] (min-max scaling), ensuring similar feature value ranges, which promotes convergence of the gradient-based optimization algorithm.
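As an illustration, the normalization and split described above could be realized along the following lines; the array standing in for the flamelet data and the number of state variables are placeholders, not the authors' actual data handling.

```python
import numpy as np

# Placeholder array standing in for the 0.62 million HOQ/flamelet points:
# rows are grid points, columns are thermo-chemical state variables.
rng = np.random.default_rng(seed=0)
data = rng.random((620_000, 20))

# Min-max scaling of every variable to [0, 1].
data_min = data.min(axis=0)
data_max = data.max(axis=0)
data_scaled = (data - data_min) / (data_max - data_min + 1e-12)

# Random 90% / 10% split into training and validation sets.
idx = rng.permutation(len(data_scaled))
n_train = int(0.9 * len(idx))
train_set = data_scaled[idx[:n_train]]
valid_set = data_scaled[idx[n_train:]]
```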
3.2. Neural network architecture and training
In order to represent the manifold that is inherent to the dataset described above, two requirements have to be met for the ML approach: (1) a suitable ANN architecture has to be chosen together with a training algorithm, and (2) proper input quantities (i.e., features, or control variables) have to be identified.
Here, the architecture of a Dense Neural Network (DNN) is chosen. Fig. 3 shows the structure of a DNN, which is defined by several sequential layers of neurons. Table 1 summarizes the chosen hyperparameters for the ML training, which is carried out using the PyTorch library [43]. The learning rate is reduced by a factor of 5 if the loss does not decrease for 20 epochs.
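For illustration, a network and training loop consistent with Table 1 and with the architecture described later in this section (three hidden layers of 64 neurons, tanh activations, Adam, MSE loss, learning-rate reduction by a factor of 5 after 20 stagnating epochs) could be set up in PyTorch roughly as follows; the placeholder tensors and the number of outputs are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ManifoldDNN(nn.Module):
    """Dense network mapping the scaled control variables (PV, T) to the manifold outputs."""
    def __init__(self, n_inputs: int = 2, n_outputs: int = 8, n_neurons: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_neurons), nn.Tanh(),
            nn.Linear(n_neurons, n_neurons), nn.Tanh(),
            nn.Linear(n_neurons, n_neurons), nn.Tanh(),
            nn.Linear(n_neurons, n_outputs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Placeholder tensors standing in for the scaled training and validation data.
x_train, y_train = torch.rand(10000, 2), torch.rand(10000, 8)
x_valid, y_valid = torch.rand(1000, 2), torch.rand(1000, 8)
loader = DataLoader(TensorDataset(x_train, y_train), batch_size=5000, shuffle=True)

model = ManifoldDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Reduce the learning rate by a factor of 5 (multiply by 0.2) if the
# validation loss stagnates for 20 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=20)
loss_fn = nn.MSELoss()

for epoch in range(1000):
    for x_batch, y_batch in loader:
        optimizer.zero_grad()
        loss_fn(model(x_batch), y_batch).backward()
        optimizer.step()
    with torch.no_grad():
        scheduler.step(loss_fn(model(x_valid), y_valid))
```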
For the selection of suitable inputs (or control variables) for the DNN, a sparse principal component analysis (SPCA) [37] is performed. Contrary to a regular PCA, for which the principal components are a dense linear combination, i.e., a combination of all thermo-chemical state variables [44], the SPCA algorithm attempts to minimize the number of variables contributing to the principal components, i.e., a sparse linear combination, which makes the result more interpretable. As a result, the main principal components identified from the SPCA only rely on a few of the variables that define the thermo-chemical state. The variables are normalized as described in Section 3.1 prior to the SPCA.
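Reference [37] describes an online dictionary-learning algorithm for sparse coding; scikit-learn's MiniBatchSparsePCA is one publicly available implementation built on that formulation and is used here only to sketch the idea. The state matrix, the number of components, and the sparsity parameter are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchSparsePCA

# Placeholder for the min-max-scaled thermo-chemical state matrix
# (samples x variables), e.g., temperature and species mass fractions.
rng = np.random.default_rng(0)
state_scaled = rng.random((5000, 55))
var_names = [f"var_{i}" for i in range(state_scaled.shape[1])]

# Sparse PCA: the l1 penalty (alpha) drives most loadings to zero, so each
# component depends on only a few state variables and stays interpretable.
spca = MiniBatchSparsePCA(n_components=3, alpha=1.0, random_state=0)
spca.fit(state_scaled)

# Report the non-zero constituents of each sparse principal component.
for k, loading in enumerate(spca.components_, start=1):
    nonzero = np.flatnonzero(np.abs(loading) > 1e-3)
    terms = " + ".join(f"{loading[j]:+.3f}*{var_names[j]}" for j in nonzero)
    print(f"SPC{k}: {terms}")
```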
The rst three sparse principal components (SPCs) extracted from
the training dataset are shown in Table 2 together with their constitu-
ents and associated explained variance. The latter describes the ratio of
variance contained in the individual sparse principal component
compared to the sum of all sparse principal component variances in the
dataset, i.e. a measure of the information contained in the variable. The
variances of all SPCs sum up to unity. Interestingly, the rst sparse
principal component (SPC1) resembles a progress variable (PV) con-
sisting of the species involved in the global methane oxidation reaction.
Similar combinations of species have been used for the denition of a
primary progress variable in the modeling of methane-air combustion
[45] and it is emphasized that SPC1 is obtained here without using prior
knowledge about the ame physics. The second sparse principal
component (SPC2) mainly consists of the temperature (T) and negligible
Fig. 1. Schematic view of the SWQ burner and the numerical subdomain used
for the simulation. All scales are given in mm. The velocity prole at the inlet is
also shown.
J. Bissantz et al.
Applications in Energy and Combustion Science 13 (2023) 100113
4
contributions of three species. Both, SPC1 and SPC2, are in accordance
with the two governing ame regimes contained in the 1D HOQ training
dataset: (1) the propagation of a freely propagating ame (characterized
by PV), and (2) ame quenching caused by heat losses to the wall
(characterized by T). Therefore, PV and T are chosen as model inputs,
see Fig. 3. These principal components also agree well with the inputs of
the tabulated manifold approaches, where a progress variable and
enthalpy have been chosen [32,36]. For convenience, PV is dened as 2
SPC1 Y
H2O
+Y
CO2
Y
O2
Y
CH4
in the following.
In the coupled simulation, SPC1 and SPC2 are solved in addition to the velocity and pressure in the employed PIMPLE algorithm, a combination of the SIMPLE and PISO algorithms [46]. Assuming unity Lewis number diffusion for all species, the transport equations for SPC1, i.e., the progress variable (PV), and SPC2, i.e., the temperature (T), read

\frac{\partial (\rho Y_{\mathrm{PV}})}{\partial t} + \nabla \cdot (\rho \mathbf{u} Y_{\mathrm{PV}}) = \nabla \cdot (\rho D \nabla Y_{\mathrm{PV}}) + \dot{\omega}_{Y_{\mathrm{PV}}}    (1)

\frac{\partial (\rho T)}{\partial t} + \nabla \cdot (\rho \mathbf{u} T) = \nabla \cdot \left( \frac{\lambda}{c_p} \nabla T \right) + D_{\mathrm{diff},s} + \dot{\omega}_T    (2)

where ρ is the density, u the velocity, D the diffusion coefficient, \dot{\omega}_{Y_{\mathrm{PV}}} the progress variable source term, c_p the heat capacity, λ the heat conductivity, and \dot{\omega}_T the source term of the temperature, i.e., the heat release rate (HRR). The term D_{\mathrm{diff},s} in Eq. (2) represents the temperature diffusion caused by the species diffusion. During the numerical solution of Eqs. (1) and (2), PV and T are used as inputs for the employed DNN to retrieve the remaining thermo-chemical quantities appearing in Eqs. (1) and (2) as outputs from the DNN. For the training process, the number of layers and neurons was varied until a network architecture with three hidden layers containing 64 neurons each proved sufficient to accurately represent the thermo-chemical variables included in the chemistry manifold.
Fig. 2. Scatter plot colored by the CO mass fraction for the head-on quenching flame in the physical space (x) versus time (t) (left). The same dataset is mapped into progress variable (PV) and temperature (T) coordinates on the right. The additional preheated flamelets, which are incorporated into the manifold, are shown on the right above the red line.
Fig. 3. Network architecture of a dense neural network (DNN). The in- and outputs used in this work are shown for the respective layer.
Table 1
Hyperparameters for the training of the ML model.
Hyperparameter Property
Epochs 1000
Batch size 5000
GPU 1x Tesla K20Xm
Optimizer Adam algorithm
Initial learning rate 0.001
Loss function Mean-Squared Error (MSE)
Activation function Hyperbolic tangent (tanh)
Table 2
Results of the SPCA for the 1D HOQ training dataset. Only the first three SPCs are shown here. SPC3 is truncated after the first four constituents.
SPC | Thermo-chemical variables | Explained variance
SPC1 | -0.5 Y_O2 - 0.5 Y_CH4 + 0.5 Y_H2O + 0.51 Y_CO2 | 0.365
SPC2 | -0.004 Y_H2 + 0.009 Y_O2 - 0.001 Y_H2O2 + 1.0 T | 0.121
SPC3 | 0.503 Y_C2H6 - 0.492 Y_C3H8 - 0.446 Y_CH2O - 0.335 Y_C2H4 + ... (truncated) | 0.113
Furthermore, a different training strategy, namely the training of one DNN for each output, was evaluated without observable improvement of the accuracy or computational performance for the manifold investigated here. It is emphasized that this could be different for other cases.
3.3. Source term modeling
The accurate description of the chemical source terms can be challenging for ML approaches based on neural networks. Specifically, in a transient simulation, small, but non-zero, source terms for fresh gas conditions or equilibrium conditions on the burnt side of a flame (i.e., thermo-chemical states at the boundaries of the manifold) can lead to an error accumulation for the transported control variables (PV, T). Subsequently, the transported variables can drift to unphysical states outside the training interval, where the predictions by the DNN degrade and exacerbate the drift further. Due to their inherent, gradient-based optimization, neural networks asymptotically approach but cannot reach perfect accuracy in regression tasks, which would be required here to accurately capture the flame physics. This aspect can easily be missed in the a-priori validation step, since the error at the boundary is small.
Previous works proposed to train additional networks for subsets of the manifold [8,12,25-27]. This can decrease the error made at the manifold boundaries to an acceptable threshold, but it does not completely solve the issue and further introduces computational overhead, since the appropriate model has to be identified for every computational element in every time step at simulation runtime.
PV boundaries. The predicted output of all source terms, namely ˙
ω
PV and
˙
ω
T is then conditioned on the progress variable at simulation runtime:
˙
ω
=
0 if
¯
PV 0
DNN(PV,T)if 0 ¯
PV 1,
0 if
¯
PV 1,
(3)
where ¯
PV is the scaled progress variable. ¯
PV =0 and ¯
PV =1 correspond
to PVmin = 0.275 and PVmax =0.271 respectively. These values are
given by the manifold and are constants in this case, because only one
mixture fraction level is considered. Generally, PV
min
and PV
max
could
also be dened as a function of the mixture fraction if required. The
above constraint is realized by a mask vector, which is described in more
detail in the next section. It ensures that the numerical solution of the
input values of the DNN stays bounded to the known training interval. It
is noted that this measure can be easily applied to arbitrary inputs of the
DNN.
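A minimal sketch of this runtime conditioning is given below; it assumes the network already handles its internal input/output scaling (cf. Section 3.5), and the tensor names and function signature are illustrative rather than the solver's actual interface.

```python
import torch

PV_MIN, PV_MAX = -0.275, 0.271  # manifold bounds of the progress variable

def corrected_source_terms(dnn: torch.nn.Module, pv: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """Evaluate the DNN and zero the source terms outside the manifold, cf. Eq. (3)."""
    pv_scaled = (pv - PV_MIN) / (PV_MAX - PV_MIN)   # scaled progress variable
    inputs = torch.stack((pv, T), dim=-1)
    omega = dnn(inputs)                             # predicted source terms per cell
    # Mask vector: True where the state lies at or beyond the manifold boundaries.
    mask = (pv_scaled <= 0.0) | (pv_scaled >= 1.0)
    omega[mask] = 0.0                               # enforce zero source terms there
    return omega
```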
3.4. Model validation
In order to measure and quantify its accuracy, the ML model is tested with the validation dataset (not included in the training). The prediction quality is measured with the coefficient of determination, defined as

R^2 = 1 - \left[ \sum_{i=1}^{N} (\Phi_i - \hat{\Phi}_i)^2 \right] \left[ \sum_{i=1}^{N} (\Phi_i - \bar{\Phi})^2 \right]^{-1},

where \Phi_i, \bar{\Phi}, \hat{\Phi}_i, and N are the true value, the mean value of \Phi, the network's prediction, and the number of samples, respectively. For the ML model employed here, all model outputs reach an R^2 score of 0.999.
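For reference, the coefficient of determination can be evaluated per model output as sketched below; the arrays are placeholders.

```python
import numpy as np

def r2_score(phi_true: np.ndarray, phi_pred: np.ndarray) -> float:
    """Coefficient of determination R^2 for one model output."""
    ss_res = np.sum((phi_true - phi_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((phi_true - phi_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Example with placeholder validation data and predictions.
phi_true = np.array([0.010, 0.020, 0.050, 0.040])
phi_pred = np.array([0.011, 0.019, 0.049, 0.041])
print(r2_score(phi_true, phi_pred))
```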
In particular, CO has been investigated and identified as a relevant quantity for flame-wall interactions in previous studies [30,32,36]. Fig. 4 allows an assessment of the overall predictive capabilities of the tabulated manifold (QFM) and the ML model for the CO mass fraction in comparison to the two reference datasets (DC) of HOQ and SWQ. Extracting the inputs from the DC datasets, the QFM and ML model are utilized to predict the CO mass fraction, which is then compared to the DC references (a-priori analysis). Optimal predictions are located on the diagonal line in Fig. 4, where the predicted CO is identical to the reference DC value. It can be observed that both the QFM and ML model show high accuracy for the HOQ and SWQ datasets. Only slight overestimates are observed for the SWQ for both the ML and QFM models, as indicated by a few gray points above the diagonal. This over-prediction of CO is caused by a substantially (factor of 2) higher heat loss to the wall in the HOQ configuration (i.e., the training data) compared to the SWQ configuration. This difference can be attributed to the different directions of the temperature gradients in the respective cases [32,36,47]. Overall, the ML model yields satisfying results and is applied next to coupled simulations of both reference configurations.
3.5. Model coupling
Before coupling the model to the CFD code, two post-processing steps are performed:
1. In order to apply the network in the CFD code, the scaling and rescaling operations of the inputs and outputs are added to the exported network.
2. All source terms are corrected based on the input of the progress variable according to Eq. (3).
The first step allows a generic, case-independent implementation of the ML interface into the CFD code. As outlined in the previous section, the source term correction is realized by means of a mask vector that determines at runtime which cells contain a thermo-chemical state located at the boundary of the manifold. Subsequently, the output vector of the source term is modified for positive values of the mask vector. Finally, the DNN is coupled to the OpenFOAM-based solver via the PyTorch C++ API for usage in the coupled simulations.
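One possible way to realize the first step is to wrap the trained network in a module that applies the input scaling and output rescaling internally and to export it with TorchScript, which the PyTorch C++ API (libtorch) can then load in the solver via torch::jit::load. The class, the scaling bounds, and the file name below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ScaledManifoldDNN(nn.Module):
    """Wraps a trained DNN with min-max scaling of inputs and rescaling of outputs."""
    def __init__(self, core: nn.Module, x_min, x_max, y_min, y_max):
        super().__init__()
        self.core = core
        # Stored as buffers so they are exported together with the weights.
        self.register_buffer("x_min", torch.as_tensor(x_min, dtype=torch.float32))
        self.register_buffer("x_max", torch.as_tensor(x_max, dtype=torch.float32))
        self.register_buffer("y_min", torch.as_tensor(y_min, dtype=torch.float32))
        self.register_buffer("y_max", torch.as_tensor(y_max, dtype=torch.float32))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_scaled = (x - self.x_min) / (self.x_max - self.x_min)
        y_scaled = self.core(x_scaled)
        return y_scaled * (self.y_max - self.y_min) + self.y_min

# Placeholder core network standing in for the trained DNN of Section 3.2;
# the scaling bounds would come from the training data (values here are examples).
core = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                     nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 8))
wrapped = ScaledManifoldDNN(core, x_min=[-0.275, 300.0], x_max=[0.271, 2500.0],
                            y_min=[0.0] * 8, y_max=[1.0] * 8)

# Export for loading from the OpenFOAM/libtorch side (file name is an assumption).
scripted = torch.jit.script(wrapped)
scripted.save("manifold_dnn.pt")
```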
4. Results and discussion
In this section, two cases are simulated using the previously described ML model to represent the manifold. First, the model is verified on the 1D HOQ configuration, which was also used to generate the training data. Second, the model is applied to the laminar SWQ configuration described in Section 2.2. By this means, the predictive capabilities of the ML model are assessed for a flame-wall interaction case which is different from the training database.

Fig. 4. A-priori analysis of the tabulated manifold (QFM) and the machine learning model (ML) on the detailed chemistry dataset of head-on quenching (HOQ) and side-wall quenching (SWQ) for the mass fraction of CO.
4.1. Results for the 1D HOQ configuration

A snapshot of the coupled HOQ simulation with (ML) and without the source term correction (ML*) is shown together with the DC result in Fig. 5. The ML* model predicts small, non-zero temperature (and progress variable) source terms in the preheat zone of the flame, which accumulate to an unphysical temperature increase over the simulation runtime. At the beginning of the simulation, ML* underpredicts the flame propagation speed by approximately 30%, which later turns into an overprediction of a similar order of magnitude after a considerable temperature increase. This effect can hardly be identified from the a-priori analysis, and it depends on the initialization and duration of the simulation, and on the absolute prediction error of the ML model for source terms at the manifold boundaries. In comparison, the ML model, which utilizes the source term correction described in Section 3.3, accurately describes the temperature profile and flame speed (< 1% deviation compared to the DC reference).
Furthermore, Fig. 6 shows the wall heat flux for the ML model, the tabulated manifold (QFM), and the DC reference result. It is found that both the ML and QFM models recover the overall trend shown by the DC reference result, but overpredict the wall heat flux at the quenching point, characterized as the point in time of maximum wall heat flux, by a similar order of magnitude. It is thereby verified that the ML model yields comparable results to a tabulated manifold approach generated from the same dataset used in the training of the neural network.
4.2. Results for the 2D SWQ configuration

The coupled simulation results obtained with the ML model for the 2D SWQ configuration are compared with a detailed chemistry and a tabulated chemistry calculation (QFM, cf. Section 2.2). The numerical results are analyzed with respect to a relative coordinate system that uses the quenching height as the origin of the wall-parallel direction. The quenching height is defined by the maximum wall heat flux [30,32].
Fig. 7 depicts the temperature contour of the SWQ configuration for the ML model as well as the DC and QFM references. The contours of the reduced models show qualitatively good agreement at the quenching point as well as in the shape and position of the adiabatic flame branch. The isolines for three temperature levels (310 K, 320 K, and 330 K) indicate the heating of the fresh mixture (300 K) in close proximity of the wall, caused by the slightly elevated wall temperature (330 K). These preheated states are captured by adding preheated adiabatic flamelets to the training dataset (cf. Section 3.1). Interestingly, the isoline for 330 K for the QFM shows a small deviation at the point where it meets the wall boundary. This difference can be attributed to the transport of enthalpy for the QFM model instead of temperature for the ML model. The transport of enthalpy requires special treatment of the constant temperature wall boundary condition. In this case, a secondary table is used to retrieve the wall enthalpy for a given progress variable and constant temperature. For the ML model, on the other hand, the temperature is transported and the isothermal boundary condition is directly imposed.
4.2.1. Analysis of local heat release rate
Following previous investigations on SWQ [30,32,36], the local HRR as a global flame property is used to assess the predictive capabilities of the ML model. The 2D contours of the normalized HRR at the quenching point are displayed in the top row of Fig. 8. The HRR decreases considerably within 0.5 mm distance to the wall, showing that the flame quenching process occurs very close to the wall. A detailed assessment of quenching distances for the given SWQ configuration is beyond the scope of this work. Here, the reader is referred to Zirwes et al. [47], who investigated the influence of several factors (e.g., chemical mechanism, diffusion model, etc.) on the quenching distance of a SWQ configuration. Overall, a qualitatively good agreement of the HRR contour of the ML model with the DC and QFM references can be observed.
To allow a quantitative comparison, HRR profiles are extracted at three different distances from the quenching point (y_q = 0), indicated by the horizontal white dotted lines in Fig. 8. The extracted profiles are shown in Fig. 9. At the three quenching heights y_q, both the peak position and value of the HRR profiles obtained with the ML model align well with the DC reference solution. Small deviations can be observed at the quenching height y_q = 0 mm, where the peak of the ML model is slightly shifted towards the wall, and for y_q = 0.5 mm, where the peak is slightly overestimated. However, the model yields similar results as the QFM, demonstrating that it is able to accurately capture the local HRR.
4.2.2. Analysis of the thermo-chemical state
As previously mentioned, CO has been identified as a relevant quantity for flame-wall interactions in previous works [30,32,36]. The CO mass fraction and the temperature fields, which are obtained for the SWQ configuration by the DC, QFM, and ML modeling approaches, are analyzed next.
Contours of the CO mass fraction around the quenching point are displayed in the bottom row of Fig. 8. Here, it can be observed that the CO production has its maximum within the reaction zone of the flame. The high CO concentration in the quenching zone at the wall indicates incomplete combustion, where further oxidation of CO was no longer possible.
The local thermo-chemical state is further analyzed by comparing the CO mass fraction over temperature at different wall distances in Fig. 10. The locations are indicated by the vertical white dotted lines drawn in Fig. 8. An adiabatic freely propagating premixed flame is additionally included as a reference. Close to the wall at x = 0.1 mm, significant differences between all the simulations and the adiabatic flame can be observed, indicating an incomplete combustion process. When moving away from the wall (x = 0.5 mm and x = 2 mm), the CO profile shifts from a quenching state due to heat loss to the wall to an almost adiabatic state.
A slight over-prediction of CO can be observed for both the ML and QFM profiles. This is in agreement with the conclusions drawn from the a-priori analysis (cf. Section 3.4). This over-prediction can be attributed to the different rates of heat transfer to the wall in HOQ and SWQ [32,36,47]. To improve the prediction of CO, Efimov et al. [36] introduced an additional reactive control variable to account for this effect of varying rates of heat transfer, which could be added to the ML approach in future works. The ML model adequately predicts regions close to and far from the quenching point, accurately accounting for the heat losses. Overall, the results show good agreement with the DC reference solution and are qualitatively and quantitatively comparable to the results obtained with the tabulated manifold model.

Fig. 5. Snapshot of coupled HOQ simulations with a DC and two machine learning models with (ML) and without the source term correction (ML*) according to Eq. (3).

Fig. 6. Comparison of the wall heat flux obtained for coupled HOQ simulations with a DC model, a tabulated manifold approach (QFM), and a machine learning model (ML), relative to the quenching time.

Fig. 7. 2D contours of the temperature for the SWQ configuration obtained from simulations with detailed chemistry (DC, left), with the machine learning model (ML, middle) and with the tabulated manifold (QFM, right). Isolines for three temperature levels (310 K, 320 K, and 330 K) are added to highlight the heating of the fresh gas mixture (300 K) by the slightly elevated wall temperature (330 K). Note that the domain is cropped to aid the visual inspection of the quenching point.

Fig. 8. 2D contours of the normalized local HRR (top) and CO mass fraction (bottom) for a detailed chemistry (DC, left), a tabulated manifold (QFM, right), and a machine learning (ML, middle) model in proximity of the quenching point for the SWQ configuration. In the upper row, additional isolines of temperature T are shown (dashed white lines).

Fig. 9. Comparison of the HRR of the SWQ for the detailed (DC), tabulated (QFM), and machine learning (ML) chemistry. The profiles are extracted along wall-normal lines at different heights y_q, which are displayed as white solid lines in Fig. 8.

Fig. 10. Comparison of the thermo-chemical state of the SWQ for the detailed (DC), tabulated (QFM), and machine learning (ML) chemistry at different axial positions. The profiles were extracted at the vertical white solid lines in Fig. 8. Additionally, the thermo-chemical state of an adiabatic premixed flame (FP) is added as a reference state.
5. Conclusions
A data-driven approach has been presented that includes the parameterization and the training of a machine learning model representing a low-order chemistry manifold. The model is coupled to a CFD solver and utilized for the 2D simulation of a premixed methane-air flame undergoing side-wall quenching. With an emphasis on the ML modeling techniques, procedures for the selection of suitable input parameters (based on a sparse PCA) and an efficient method for the correction of non-linear source terms at the manifold boundaries have been demonstrated. It was shown for a 1D head-on quenching flame how the accumulation of errors caused by the incorrect prediction of source terms at the manifold boundaries leads to an unphysical increase of the transported control variables, which eventually results in an incorrect flame speed. In contrast, the results of the ML model, which includes the proposed correction method, showed good agreement for flame speed and wall heat flux with the detailed chemistry reference solution. Subsequently, the model was applied to a generic SWQ configuration, where its predictive capabilities for the local heat release rate and the CO production near the wall were analyzed. The ML model showed results comparable to the tabulated manifold approach when compared to the detailed chemistry reference results. This underlines the ability of ML approaches to accurately capture complex combustion phenomena, such as flame-wall interaction.
In summary, ML chemistry models based on neural networks provide a promising alternative to the conventional approach of manifold tabulation, compensating for some of its shortcomings. The DNN requires only 2% of the QFM's memory, while the computational cost remains similar. However, the performance heavily depends on the DNN architecture [48] and hardware, which will be the subject of future studies. This work can serve as the basis to investigate more complex flame configurations in future works, involving aspects such as differential diffusion and stretch effects, turbulent combustion, or hydrogen/hydrocarbon fuel blends.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data will be made available on request.
Acknowledgments
The work was supported by the Graduate School Computational Engineering and by the Graduate School of Energy and Science at the Technical University of Darmstadt. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project Number 237267381 - TRR 150, by the project Center of Excellence in Combustion, which received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 952181, and by the Federal Ministry of Education and Research (BMBF) and the state of Hesse as part of the NHR Program.
References
[1] van Oijen J, Donini A, Bastiaans R, Thije Boonkkamp, de Goey L. State-of-the-art in premixed combustion modeling using flamelet generated manifolds. Prog Energy Combust Sci 2016;57:30-74. https://doi.org/10.1016/j.pecs.2016.07.001.
[2] Maas U, Pope S. Simplifying chemical kinetics: intrinsic low-dimensional manifolds in composition space. Combust Flame 1992;88(3-4):239-64. https://doi.org/10.1016/0010-2180(92)90034-m.
[3] Gicquel O, Darabiha N, Thévenin D. Laminar premixed hydrogen/air counterflow flame simulations using flame prolongation of ILDM with differential diffusion. Proc Combust Inst 2000;28(2):1901-8. https://doi.org/10.1016/s0082-0784(00)80594-9.
[4] Bykov V, Maas U. The extension of the ILDM concept to reaction-diffusion manifolds. Combust Theor Model 2007;11(6):839-62. https://doi.org/10.1080/13647830701242531.
[5] Flemming F, Sadiki A, Janicka J. LES using artificial neural networks for chemistry representation. Prog Comput Fluid Dyn 2005;5(7):375. https://doi.org/10.1504/pcfd.2005.007424.
[6] Ihme M, Schmitt C, Pitsch H. Optimal artificial neural networks and tabulation methods for chemistry representation in LES of a bluff-body swirl-stabilized flame. Proc Combust Inst 2009;32(1):1527-35. https://doi.org/10.1016/j.proci.2008.06.100.
[7] Hansinger M, Ge Y, Pfitzner M. Deep residual networks for flamelet/progress variable tabulation with application to a piloted flame with inhomogeneous inlet. Combust Sci Technol 2020:1-27. https://doi.org/10.1080/00102202.2020.1822826.
[8] Franke LL, Chatzopoulos AK, Rigopoulos S. Tabulation of combustion chemistry via artificial neural networks (ANNs): methodology and application to LES-PDF simulation of Sydney flame L. Combust Flame 2017;185:245-60. https://doi.org/10.1016/j.combustflame.2017.07.014.
[9] Readshaw T, Ding T, Rigopoulos S, Jones WP. Modeling of turbulent flames with the large eddy simulation-probability density function (LES-PDF) approach, stochastic fields, and artificial neural networks. Phys Fluids 2021;33(3):035154. https://doi.org/10.1063/5.0041122.
[10] Ding T, Readshaw T, Rigopoulos S, Jones W. Machine learning tabulation of thermochemistry in turbulent combustion: an approach based on hybrid flamelet/random data and multiple multilayer perceptrons. Combust Flame 2021;231:111493. https://doi.org/10.1016/j.combustflame.2021.111493.
[11] Owoyele O, Kundu P, Ameen MM, Echekki T, Som S. Application of deep artificial neural networks to multi-dimensional flamelet libraries and spray flames. Int J Engine Res 2019;21(1):151-68. https://doi.org/10.1177/1468087419837770.
[12] Ranade R, Li G, Li S, Echekki T. An efficient machine-learning approach for PDF tabulation in turbulent combustion closure. Combust Sci Technol 2019:1-20. https://doi.org/10.1080/00102202.2019.1686702.
[13] Bhalla S, Yao M, Hickey J-P, Crowley M. Compact representation of a multi-dimensional combustion manifold using deep neural networks. In: Brefeld U, Fromont E, Hotho A, Knobbe A, Maathuis M, Robardet C, editors. Machine learning and knowledge discovery in databases. Cham: Springer International Publishing; 2020. p. 602-17.
[14] Lapeyre CJ, Misdariis A, Cazard N, Veynante D, Poinsot T. Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates. Combust Flame 2019;203:255-64. https://doi.org/10.1016/j.combustflame.2019.02.019.
[15] Seltz A, Domingo P, Vervisch L, Nikolaou ZM. Direct mapping from LES resolved scales to filtered-flame generated manifolds using convolutional neural networks. Combust Flame 2019;210:71-82. https://doi.org/10.1016/j.combustflame.2019.08.014.
[16] Nikolaou ZM, Chrysostomou C, Vervisch L, Cant S. Progress variable variance and filtered rate modelling using convolutional neural networks and flamelet methods. Flow Turbul Combust 2019;103(2):485-501. https://doi.org/10.1007/s10494-019-00028-w.
[17] Shin J, Ge Y, Lampmann A, Pfitzner M. A data-driven subgrid scale model in large eddy simulation of turbulent premixed combustion. Combust Flame 2021;231:111486. https://doi.org/10.1016/j.combustflame.2021.111486.
[18] Sinaei P, Tabejamaat S. Large eddy simulation of methane diffusion jet flame with representation of chemical kinetics using artificial neural network. Proc Inst Mech Eng E: J Process Mech Eng 2016;231(2):147-63. https://doi.org/10.1177/0954408915580505.
[19] Chi C, Janiga G, Thévenin D. On-the-fly artificial neural network for chemical kinetics in direct numerical simulations of premixed combustion. Combust Flame 2021;226:467-77. https://doi.org/10.1016/j.combustflame.2020.12.038.
[20] Owoyele O, Pal P. ChemNODE: a neural ordinary differential equations framework for efficient chemical kinetic solvers. Energy AI 2022;7:100118. https://doi.org/10.1016/j.egyai.2021.100118.
[21] Haghshenas M, Mitra P, Santo ND, Schmidt DP. Acceleration of chemical kinetics computation with the learned intelligent tabulation (LIT) method. Energies 2021;14(23):7851. https://doi.org/10.3390/en14237851.
[22] Ji W, Qiu W, Shi Z, Pan S, Deng S. Stiff-PINN: physics-informed neural network for stiff chemical kinetics. J Phys Chem A 2021;125(36):8098-106. https://doi.org/10.1021/acs.jpca.1c05102.
[23] Wan K, Barnaud C, Vervisch L, Domingo P. Chemistry reduction using machine learning trained from non-premixed micro-mixing modeling: application to DNS of a syngas turbulent oxy-flame with side-wall effects. Combust Flame 2020;220:119-29. https://doi.org/10.1016/j.combustflame.2020.06.008.
[24] Wan K, Barnaud C, Vervisch L, Domingo P. Machine learning for detailed chemistry reduction in DNS of a syngas turbulent oxy-flame with side-wall effects. Proc Combust Inst 2021;38(2):2825-33. https://doi.org/10.1016/j.proci.2020.06.047.
[25] Chatzopoulos A, Rigopoulos S. A chemistry tabulation approach via rate-controlled constrained equilibrium (RCCE) and artificial neural networks (ANNs), with application to turbulent non-premixed CH4/H2/N2 flames. Proc Combust Inst 2013;34(1):1465-73. https://doi.org/10.1016/j.proci.2012.06.057.
[26] Ranade R, Echekki T. A framework for data-based turbulent combustion closure: a posteriori validation. Combust Flame 2019;210:279-91. https://doi.org/10.1016/j.combustflame.2019.08.039.
[27] Nguyen H-T, Domingo P, Vervisch L, Nguyen P-D. Machine learning for integrating combustion chemistry in numerical simulations. Energy and AI 2021;5:100082. https://doi.org/10.1016/j.egyai.2021.100082.
[28] Zhou L, Song Y, Ji W, Wei H. Machine learning for combustion. Energy AI 2022;7:100128. https://doi.org/10.1016/j.egyai.2021.100128.
[29] Ihme M, Chung WT, Mishra AA. Combustion machine learning: principles, progress and prospects. Prog Energy Combust Sci 2022;91:101010. https://doi.org/10.1016/j.pecs.2022.101010.
[30] Ganter S, Straßacker C, Kuenne G, Meier T, Heinrich A, Maas U, Janicka J. Laminar near-wall combustion: analysis of tabulated chemistry simulations by means of detailed kinetics. Int J Heat Fluid Flow 2018;70:259-70. https://doi.org/10.1016/j.ijheatfluidflow.2018.02.015.
[31] Strassacker C, Bykov V, Maas U. Comparative analysis of reaction-diffusion manifold based reduced models for head-on- and side-wall-quenching flames. Proc Combust Inst 2021;38(1):1025-32. https://doi.org/10.1016/j.proci.2020.06.130.
[32] Steinhausen M, Luo Y, Popp S, Strassacker C, Zirwes T, Kosaka H, et al. Numerical investigation of local heat-release rates and thermo-chemical states in side-wall quenching of laminar methane and dimethyl ether flames. Flow Turbul Combust 2020. https://doi.org/10.1007/s10494-020-00146-w.
[33] Palulli R, Talei M, Gordon RL. Unsteady flame-wall interaction: impact on CO emission and wall heat flux. Combust Flame 2019;207:406-16. https://doi.org/10.1016/j.combustflame.2019.06.012.
[34] Steinhausen M, Zirwes T, Ferraro F, Scholtissek A, Bockhorn H, Hasse C. Flame-vortex interaction during turbulent side-wall quenching and its implications for flamelet manifolds. Proc Combust Inst. https://doi.org/10.1016/j.proci.2022.09.026.
[35] Kaddar D, Steinhausen M, Zirwes T, Bockhorn H, Hasse C, Ferraro F. Combined effects of heat loss and curvature on turbulent flame-wall interaction in a premixed dimethyl ether/air flame. Proc Combust Inst. https://doi.org/10.1016/j.proci.2022.08.060.
[36] Efimov DV, de Goey P, van Oijen JA. QFM: quenching flamelet-generated manifold for modelling of flame-wall interactions. Combust Theor Model 2019;24(1):72-104. https://doi.org/10.1080/13647830.2019.1658901.
[37] Mairal J, Bach F, Ponce J, Sapiro G. Online dictionary learning for sparse coding. In: Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09. ACM Press; 2009. https://doi.org/10.1145/1553374.1553463.
[38] Smith G, Golden D, Frenklach M, Moriarty N, Eiteneer B, Goldenberg M, Bowman C, Hanson R, Song S, Gardiner W, Lissianski V, Qin Z. GRI-Mech 3.0. URL https://www.me.berkeley.edu/gri_mech/.
[39] Zschutschke A, Messig D, Scholtissek A, Hasse C. Universal laminar flame solver (ULF); 2017. https://doi.org/10.6084/M9.FIGSHARE.5119855.V2.
[40] Luo Y, Strassacker C, Wen X, Sun Z, Maas U, Hasse C. Strain rate effects on head-on quenching of laminar premixed methane-air flames. Flow Turbul Combust 2020;106(2):631-47. https://doi.org/10.1007/s10494-020-00179-1.
[41] Kosaka H, Zentgraf F, Scholtissek A, Bischoff L, Häber T, Suntz R, et al. Wall heat fluxes and CO formation/oxidation during laminar and turbulent side-wall quenching of methane and DME flames. Int J Heat Fluid Flow 2018;70:181-92. https://doi.org/10.1016/j.ijheatfluidflow.2018.01.009.
[42] Kosaka H, Zentgraf F, Scholtissek A, Hasse C, Dreizler A. Effect of flame-wall interaction on local heat release of methane and DME combustion in a side-wall quenching geometry. Flow Turbul Combust 2019;104(4):1029-46. https://doi.org/10.1007/s10494-019-00090-4.
[43] Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, et al. PyTorch: an imperative style, high-performance deep learning library. arXiv:1912.01703.
[44] Sutherland JC, Parente A. Combustion modeling using principal component analysis. Proc Combust Inst 2009;32(1):1563-70. https://doi.org/10.1016/j.proci.2008.06.147.
[45] Scholtissek A, Domingo P, Vervisch L, Hasse C. A self-contained progress variable space solution method for thermochemical variables and flame speed in freely-propagating premixed flamelets. Proc Combust Inst 2018;37:1529-36. https://doi.org/10.1016/j.proci.2018.06.168.
[46] Versteeg HK, Malalasekera W. An introduction to computational fluid dynamics: the finite volume method. 2nd ed. Pearson Education Ltd; 2007.
[47] Zirwes T, Häber T, Zhang F, Kosaka H, Dreizler A, Steinhausen M, Hasse C, Stagni A, Trimis D, Suntz R, Bockhorn H. Numerical study of quenching distances for side-wall quenching using detailed diffusion and chemistry. Flow Turbul Combust 2020;106(2):649-79. https://doi.org/10.1007/s10494-020-00215-0.
[48] Nikolaou Z, Vervisch L, Domingo P. Criteria to switch from tabulation to neural networks in computational combustion. Combust Flame 2022;246:112425. https://doi.org/10.1016/j.combustflame.2022.112425.
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
Progress in combustion science and engineering has led to the generation of large amounts of data from large-scale simulations, high-resolution experiments, and sensors. This corpus of data offers enormous opportunities for extracting new knowledge and insights—if harnessed effectively. Machine learning (ML) techniques have demonstrated remarkable success in data analytics, thus offering a new paradigm for data-intense analyses and scientific investigations through combustion machine learning (CombML). While data-driven methods are utilized in various combustion areas, recent advances in algorithmic developments, the accessibility of open-source software libraries, the availability of computational resources, and the abundance of data have together rendered ML techniques ubiquitous in scientific analysis and engineering. This article examines ML techniques for applications in combustion science and engineering. Starting with a review of sources of data, data-driven techniques, and concepts, we examine supervised, unsupervised, and semi-supervised ML methods. Various combustion examples are considered to illustrate and to evaluate these methods. Next, we review past and recent applications of ML approaches to problems in combustion, spanning fundamental combustion investigations, propulsion and energy-conversion systems, and fire and explosion hazards. Challenges unique to CombML are discussed and further opportunities are identified, focusing on interpretability, uncertainty quantification, robustness, consistency, creation and curation of benchmark data, and the augmentation of ML methods with prior combustion-domain knowledge.
Article
Full-text available
Combustion science is an interdisciplinary study that involves nonlinear physical and chemical phenomena in time and length scales, including complex chemical reactions and fluid flows. Combustion widely supplies energy for powering vehicles, heating houses, generating electricity, cooking food, etc. The key to studying combustion is to improve the combustion efficiency with minimum emission of pollutants. Machine learning facilitates data-driven techniques for handling large amounts of combustion data, either through experiments or simulations under multiple spatiotemporal scales, thereby finding the hidden patterns underlying these data and promoting combustion research. This work presents an overview of studies on the applications of machine learning in combustion science fields over the past several decades. We introduce the fundamentals of machine learning and its usage in aiding chemical reactions, combustion modeling, combustion measurement, engine performance prediction and optimization, and fuel design. The opportunities and limitations of using machine learning in combustion studies are also discussed. This paper aims to provide readers with a portrait of what and how machine learning can be used in combustion research and to inspire researchers in their ongoing studies. Machine learning techniques are rapidly advancing in this era of big data, and there is high potential for exploring the combination between machine learning and combustion research and achieving remarkable results.
Article
Full-text available
In this work, a data-driven methodology for modeling combustion kinetics, Learned Intelligent Tabulation (LIT), is presented. LIT aims to accelerate the tabulation of combustion mechanisms via machine learning algorithms such as Deep Neural Networks (DNNs). The high-dimensional composition space is sampled from high-fidelity simulations covering a wide range of initial conditions to train these DNNs. The input data are clustered into subspaces, while each subspace is trained with a DNN regression model targeted to a particular part of the high-dimensional composition space. This localized approach has proven to be more tractable than having a global ANN regression model, which fails to generalize across various composition spaces. The clustering is performed using an unsupervised method, Self-Organizing Map (SOM), which automatically subdivides the space. A dense network comprised of fully connected layers is considered for the regression model, while the network hyper parameters are optimized using Bayesian optimization. A nonlinear transformation of the parameters is used to improve sensitivity to minor species and enhance the prediction of ignition delay. The LIT method is employed to model the chemistry kinetics of zero-dimensional H2–O2 and CH4-air combustion. The data-driven method achieves good agreement with the benchmark method while being cheaper in terms of computational cost. LIT is naturally extensible to different combustion models such as flamelet and PDF transport models.
Article
Full-text available
Solving for detailed chemical kinetics remains one of the major bottlenecks in computational fluid dynamics simulations of reacting flows using a finite-rate-chemistry approach. This has motivated the use of fully connected artificial neural networks to predict stiff chemical source terms as functions of the thermochemical state of the combustion system. However, due to the nonlinearities and multi-scale nature of combustion, the predicted solution often diverges from the true solution when these deep learning models are coupled with a computational fluid dynamics solver, because such approaches minimize the error during training without guaranteeing successful integration with ordinary differential equation solvers. In the present work, a novel neural ordinary differential equation approach to modeling chemical kinetics, termed ChemNODE, is developed. In this deep learning framework, the chemical source terms predicted by the neural networks are integrated during training, and by computing the required derivatives, the network weights are adjusted to minimize the difference between the predicted and ground-truth solutions. A proof-of-concept study is performed with ChemNODE for homogeneous autoignition of a hydrogen-air mixture over a range of compositions and thermodynamic conditions. It is shown that ChemNODE accurately captures the correct physical behavior and reproduces the results obtained with the full chemical kinetic mechanism at a fraction of the computational cost.
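The core idea, integrating the network-predicted source term during training so that the loss acts on the resulting trajectory rather than on pointwise source terms, can be sketched as follows. A scalar decay problem, explicit Euler integration, and all parameter choices are illustrative stand-ins under simplifying assumptions, not the authors' ChemNODE implementation or a stiff-chemistry solver.

```python
# Toy sketch of the neural-ODE training idea: the network output (a source term)
# is integrated in time, and the loss is applied to the integrated trajectory.
# A scalar decay problem and explicit Euler stand in for real chemistry.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

dt, n_steps = 0.05, 40
y0 = torch.tensor([[1.0]])
t = torch.arange(n_steps + 1) * dt
y_true = torch.exp(-2.0 * t).reshape(-1, 1, 1) * y0   # ground truth: dy/dt = -2 y

for epoch in range(300):
    y = y0
    traj = [y]
    for _ in range(n_steps):
        y = y + dt * net(y)          # integrate the predicted source term
        traj.append(y)
    loss = ((torch.stack(traj) - y_true) ** 2).mean()
    opt.zero_grad()
    loss.backward()                  # gradients flow through the integration
    opt.step()

print(float(loss))
```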
Article
This study investigates the effects of curvature on the local heat release rate and mixture fraction during turbulent flame-wall interaction of a lean dimethyl ether/air flame using a fully resolved simulation with a reduced skeletal chemical reaction mechanism and mixture-averaged transport. The region in which turbulent flame-wall interaction affects the flame is found to be restricted to a wall distance less than twice the laminar flame thickness. In regions without heat losses, heat release rate and curvature, as well as mixture fraction and curvature, are negatively correlated, which is in accordance with experimental findings. Flame-wall interaction alters the correlation between heat release rate and curvature. An inversion in the sign of the correlation from negative to positive is observed as the flame starts to experience heat losses to the wall. The correlation between mixture fraction and curvature, however, is unaffected by flame-wall interactions and remains negative. Similarly to experimental findings, the investigated turbulent side-wall quenching flame shows both head-on quenching and side-wall quenching-like behavior. The different quenching events are associated with different curvature values in the near-wall region. Furthermore, for medium heat loss, the correlations between heat release rate and curvature are sensitive to the quenching scenario.
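As a rough illustration of the kind of conditional statistics reported above, the following sketch computes the Pearson correlation between heat release rate and curvature within wall-distance bins. The arrays are random placeholders standing in for DNS samples; this is not the study's data or post-processing.

```python
# Illustrative post-processing sketch: correlation between heat release rate
# and curvature, conditioned on wall-distance bins. All arrays are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
wall_dist = rng.uniform(0.0, 5.0, n)          # in units of laminar flame thickness
curvature = rng.normal(0.0, 1.0, n)
heat_release = -0.3 * curvature + rng.normal(0.0, 1.0, n)   # synthetic coupling

bins = [(0.0, 2.0), (2.0, 5.0)]               # near-wall vs. core-flow samples
for lo, hi in bins:
    mask = (wall_dist >= lo) & (wall_dist < hi)
    r = np.corrcoef(heat_release[mask], curvature[mask])[0, 1]
    print(f"wall distance [{lo}, {hi}): corr(HRR, curvature) = {r:+.2f}")
```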
Article
In this study, the thermochemical state during turbulent flame-wall interaction of a stoichiometric methane-air flame is investigated using a fully resolved simulation with detailed chemistry. The turbulent side-wall quenching flame shows both head-on quenching and side-wall quenching-like behavior that significantly affects the CO formation in the near-wall region. The detailed insights from the simulation are used to evaluate a recently proposed flame (tip) vortex interaction mechanism identified from experiments on turbulent side-wall quenching. It describes the entrainment of burnt gases into the fresh gas mixture near the flame's quenching point. The flame behavior and thermochemical states observed in the simulation are similar to the phenomena observed in the experiments. A novel chemistry manifold is presented that accounts for both the effects of flame dilution due to exhaust gas recirculation in the flame-vortex interaction area and enthalpy losses to the wall. The manifold is validated in an a priori analysis using the simulation results as a reference. The incorporation of exhaust gas recirculation effects in the manifold leads to significantly increased prediction accuracy in the near-wall regions of flame-vortex interactions.
Article
Motivated by the need to reduce computational costs, look-up tables are widely used in numerical simulations of laminar and turbulent flames, for the thermodynamics of the mixture, for detailed chemistry, and for turbulent combustion closures. At the same time, many studies have trained artificial neural networks to replace the classic tabulation approach, and their performance against tabulation is typically evaluated a posteriori. In the majority of applications the focus is on accuracy, and the objective is to obtain the network structure that minimises the inference error during training. Computational efficiency, however, is also important, and criteria are needed to decide whether it is worthwhile to employ neural networks in the first place and, if so, what the potential bounds on the computational time and memory gains (if any) over tabulation are. This is examined analytically in this work by developing scaling laws for the computational cost of tabulation and of neural networks, including the effect of network structure. The scaling laws are validated using both model test data and data from a canonical problem that involves inferring laminar flame speeds of methane/hydrogen mixtures at off-training conditions. The proposed scaling laws lead naturally to a framework for effective decision-making between adopting look-up tables or neural networks.
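A back-of-the-envelope comparison in the spirit of such scaling arguments is sketched below. The assumed scalings (table memory growing as the number of grid points per dimension to the power of the number of dimensions, and MLP inference cost as the sum of layer matrix-vector products) are simplifications introduced here for illustration and do not reproduce the paper's actual scaling laws.

```python
# Back-of-the-envelope comparison of look-up table memory vs. MLP size and cost.
# Assumed (not the paper's) scalings: a d-dimensional table with N points per
# dimension stores N**d entries per output; MLP inference costs roughly the sum
# of its layer matrix-vector products.
def table_entries(n_points_per_dim, n_dims, n_outputs):
    return n_points_per_dim ** n_dims * n_outputs

def mlp_params_and_flops(layer_sizes):
    params = sum((a + 1) * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))
    flops = sum(2 * a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))
    return params, flops

# Example: 4 control variables, 100 grid points each, 10 tabulated quantities
print("table entries:", table_entries(100, 4, 10))       # 1e9 stored values
print("MLP:", mlp_params_and_flops([4, 64, 64, 10]))      # parameters, approx FLOPs per query
```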
Article
A new machine learning methodology is proposed for speeding up thermochemistry computations in simulations of turbulent combustion. The approach is suited to a range of methods including Direct Numerical Simulation (DNS), Probability Density Function (PDF) methods, unsteady flamelet, Conditional Moment Closure (CMC), Multiple Mapping Closure (MMC), Linear Eddy Model (LEM), Thickened Flame Model, the Partially Stirred Reactor (PaSR) method (as in OpenFOAM) and the computation of laminar flames. In these methods, the chemical source term must be evaluated at every time step, and is often the most expensive element of a simulation. The proposed methodology has two main objectives: to offer enhanced capacity for generalisation and to improve the accuracy of the ANN prediction. To accomplish the first objective, we propose a hybrid flamelet/random data (HFRD) method for generating the training set. The random element endows the resulting ANNs with increased capacity for generalisation. Regarding the second objective, a multiple multilayer perceptron (MMP) approach is developed where different multilayer perceptrons (MLPs) are trained to predict states that result in smaller or larger composition changes, as these states feature different dynamics. It is shown that the multiple MLP method can greatly reduce the prediction error, especially for states yielding small composition changes. The approach is used to simulate flamelets of varying strain rates, one-dimensional premixed flames with differential diffusion and varying equivalence ratio, and finally the Large Eddy Simulation (LES) of CH4/air piloted flames Sandia D, E and F, which feature different levels of local extinction. The simulation results show very good agreement with those obtained from direct integration, while the range of problems simulated indicates that the approach has great capacity for generalisation. Finally, a speed-up ratio of 12 is attained for the reaction step.
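A minimal sketch of the two ingredients named above, a hybrid flamelet/random training set and multiple MLPs specialised by the magnitude of the composition change, is given below. The surrogate data, the perturbation amplitude, the routing threshold, and the simple inference heuristic are illustrative assumptions, not the HFRD or MMP procedures themselves.

```python
# Sketch of the hybrid flamelet/random + multiple-MLP idea: augment flamelet-like
# samples with random perturbations, then train two MLPs specialised for "small"
# vs. "large" composition changes and route queries by a threshold. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
flamelet_states = rng.uniform(size=(2000, 3))                    # e.g. scaled (T, Y_fuel, Y_O2)
random_states = flamelet_states + rng.normal(0, 0.05, flamelet_states.shape)
X = np.vstack([flamelet_states, random_states])                  # hybrid training set
dY = 0.1 * np.tanh(X @ np.array([1.0, -0.5, 0.3]))               # surrogate composition change

threshold = np.median(np.abs(dY))
small, large = np.abs(dY) <= threshold, np.abs(dY) > threshold
mlp_small = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X[small], dY[small])
mlp_large = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X[large], dY[large])

# Inference heuristic: try the small-change model first and switch to the
# large-change model if its prediction exceeds the threshold.
x = rng.uniform(size=(1, 3))
pred = mlp_small.predict(x)[0]
if abs(pred) > threshold:
    pred = mlp_large.predict(x)[0]
print(pred)
```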
Article
A strategy based on machine learning is discussed to close the gap between the detailed description of combustion chemistry and the numerical simulation of combustion systems. Indeed, the partial differential equations describing chemical kinetics are stiff and involve many degrees of freedom, making their solution in three-dimensional unsteady simulations very challenging. It is discussed in this work how a reduction of the computing cost by an order of magnitude can be achieved using a set of neural networks trained for solving chemistry. The thermochemical database used for training is composed of time evolutions of stochastic particles carrying chemical species mass fractions and temperature according to a turbulent micro-mixing problem coupled with complex chemistry. The novelty of the work lies in the decomposition of the thermochemical hyperspace into clusters to facilitate the training of the neural networks. This decomposition is performed with the K-means algorithm, and a local principal component analysis is then applied to each cluster. This new methodology for combustion chemistry reduction is tested under conditions representative of a non-premixed syngas oxy-flame.
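The decomposition step can be illustrated with a short sketch: K-means partitions synthetic thermochemical samples, and a local PCA is then fitted within each cluster. The cluster count, number of retained components, and data are arbitrary placeholder choices rather than the settings used in the cited work.

```python
# Sketch of the clustering step described above: K-means partitions the
# thermochemical samples, then a local PCA is fitted within each cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
samples = rng.normal(size=(10000, 8))        # e.g. scaled T and species mass fractions

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(samples)

local_pca = {}
for c in range(km.n_clusters):
    cluster_data = samples[km.labels_ == c]
    local_pca[c] = PCA(n_components=3).fit(cluster_data)
    print(f"cluster {c}: explained variance "
          f"{local_pca[c].explained_variance_ratio_.sum():.2f}")
```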