Discover Artificial Intelligence (2024) 4:80 | https://doi.org/10.1007/s44163-024-00177-6
Discover Artificial Intelligence
Review
Applications of machine learning in the brewing process: a systematic review
Philipp Nettesheim1 · Peter Burggräf1 · Fabian Steinberg1
Received: 28 July 2024 / Accepted: 15 October 2024
© The Author(s) 2024 OPEN
Abstract
The high cost pressure caused by the level of competition poses major challenges for breweries. While microbreweries can develop local strengths and brewery groups can develop synergies, this does not represent a decisive improvement. The application of machine learning, on the other hand, could give breweries a significant advantage in their brewing process. Several approaches to the application of machine learning in the brewing process have already been proposed in the literature. To guide possible areas of application and the respective available solution approaches for improving the brewing process based on machine learning, a systematic review of the application of machine learning in the brewing process is presented in this paper. In this systematic review, all potentially relevant publications were included at first. Subsequently, irrelevant publications were filtered out by using a clustering approach. Afterward, the remaining 21 publications were analyzed and synthesized. Based on a developed framework considering the brewing process steps, areas of improvement, machine learning tasks, and machine learning algorithms, these publications were classified. Upon the classification, a descriptive analysis was performed to identify common approaches in the existing literature. One result was that research on artificial intelligence in brewing lags significantly behind the general trend of artificial intelligence research. Additionally, there is very limited research into the association between the recipe and the desired chemical properties of the beer. Furthermore, it was noticeable that machine learning tasks utilizing artificial neural networks or support vector machines were preferred over others.
Keywords Artificial intelligence · Machine learning · Biological processes · Food manufacturing · Food industry · Reviews · Brewing process
1 Introduction
During the past century, the methods of brewing beer remained nearly unchanged. Only in recent decades, due to advanced technologies, process optimization, and improved efficiency, has the brewing industry seen an increase in productivity [1]. Today, cultivated varieties of the yeast genus Saccharomyces are used by producers of beer, sake, or wine worldwide [2], and the production of alcoholic beverages contributes considerably to the economies of many countries [3]. While the main ingredients have remained the same over thousands of years, the technologies used for brewing slowly changed [1]. With a steadily increasing interest in optimization algorithms and the integration of machine learning into production processes [4], it is necessary to understand their value and impact on the modern brewing industry. The implementation of new technologies could help to further improve the brewing process.
* Philipp Nettesheim, philipp.nettesheim@uni-siegen.de; Peter Burggräf, peter.burggraef@uni-siegen.de; Fabian Steinberg, fabian.steinberg@uni-siegen.de | 1 University of Siegen, Siegen, Germany.
Since there are already several process-improving use cases of machine learning in manufacturing across industries, such as prediction of an optimal operating point, processability, and machine failure [58–60], one key factor for improvement in production systems seems to be the use of machine learning [5]. With increasing automation, large volumes of data have been accumulated. However, it was not economically feasible to use this data for statistical evaluations and the identification of patterns and relationships regarding biochemistry. With the development of the Internet of Things (IoT) and especially in the areas of data mining and machine learning, it is now possible to analyze these databases for complex patterns and to initiate process optimization [6]. In consequence, applications of artificial intelligence could significantly improve the brewing process from raw material to bottling. Possible fields of application include the automatic quality control of barley [7–9], more efficient processes during wort production [10, 11], a more precise prediction of the fermentation process [12–14], optimized energy usage [15], or an empty bottle inspection system [16].
Some approaches to the application of machine learning in the brewing process have already been proposed in the literature over the past three decades. A systematic review delivers a clear and comprehensive overview of the available evidence on a given topic. Such an overview of the possible areas of application of machine learning in the brewing process and the respective available solution approaches could enable breweries to select the right applications to improve their brewing process based on machine learning. A systematic review is therefore helpful both for the scientific community as basic literature and for the practical implementation of machine learning in brewing. Nevertheless, to the best of our knowledge, no systematic literature review exists on this topic. Thus, we provide the community with the first systematic review on applications of artificial intelligence in the brewing process.
The aim of this paper is therefore to answer the two following research questions by conducting a systematic literature
review:
• In which areas of the brewing process are machine learning tasks used?
• Which machine learning tasks are used in the respective areas?
To answer these questions, we conduct the systematic review following the structure of Vom Brocke et al. [17], supplemented by dedicated review concepts from other authors, namely the procedure model of Moher et al. [18] and the clustering approach of Weißer et al. [19], which has recently proven itself several times [20, 21]. The above-mentioned review structure and review concepts ensure that the literature review is systematic and that all potentially relevant publications are included in the literature search results. In addition, the clustering approach is used to reduce the time spent on screening the literature. Based on the aim of this paper, the a priori hypothesis of the literature review is derived: we expect to find publications for different brewing process steps as well as different machine learning algorithms used in the brewing process. Through the systematic literature review, 21 publications were finally identified that are relevant to the topic of this paper and can answer the research questions. In this paper we develop a framework for the classification of these publications. Based on the results of the literature review and the hypothesis, we then perform a descriptive analysis, which is used to identify the main topics in the given literature, answer the research questions, and derive implications for further research.
Our paper is structured as follows. Section 2 gives an overview of brewing and artificial intelligence. Section 3 elaborates the systematic literature review and details the applied methodological approach. In Sect. 4 a framework is derived as a result of the systematic review and a descriptive analysis of the literature is carried out. Based on this, a detailed analysis of the current state of the art in the body of literature is conducted, and implications for further research are derived in Sect. 5. Finally, a summary is given in the last section.
2 Brewing and machine learning
Brewing has a long history alongside the evolution of humans, and its beginnings lie in the Neolithic period, dating back further than the invention of writing [2]. Alcoholic drinks are amongst the most comprehensively produced and enjoyed items in history and rank third among the most popular drinks, behind only water and tea [22, 23]. A study published in October 2018 in the Journal of Archaeological Science even showed that beer brewed on a wheat/barley basis was used for ritual feasts as early as 13,000 years ago [24]. The mind-altering properties boosted creativity and encouraged the development of language, art, and religion since the Stone Age rituals [2].
To briefly describe the brewing process, we follow the often-cited book by Briggs et al. [25], which provides an in-depth description of the principles of the various brewing processes, as well as Esslinger [26] and his handbook of brewing. At the beginning of the brewing process stands the raw material, the grain. It defines the later style and type of beer to a large extent. Many of the beer's flavors are developed during the malting process, where barley is germinated and heated. The malted barley is milled and mixed with water in the mashing tun. During the mashing process, starch is released from the malt, and when the mash is heated, enzymes are activated that convert the starch into fermentable sugars. The wort is boiled in the brew kettle, stabilizing it and killing bacteria. In the same step, the hops are added, which release stabilizing, bitter alpha acids and essential oils to give the beer the desired aroma and bitterness. To separate the wort from solid compounds such as hops or remnants of the malt, it is filtered before the fermentation process begins. The steps of mashing, lautering, and boiling are often referred to as wort production. To start the fermentation process, the wort is cooled down to a certain temperature at which the yeast is added. The biological reactions then lead to the production of ethanol and some other aromatizing metabolic by-products. Depending on the type of beer, it is now stored for weeks or months to develop the correct flavor. The whole process of beer production is finished by a filtration step for final clarification before the bottles are filled.
Machine learning (ML) belongs to the field of Artificial Intelligence (AI), a term coined in 1955 by John McCarthy, who is widely regarded as the founder of the term and the respective field of research [27]. According to the Cambridge Dictionary, machine learning is defined as "the process of computers changing the way they carry out tasks by learning from new data, without a human being needing to give instructions in the form of a program" [28]. The professor of computer science and machine learning Tom Mitchell gives a more scientific definition by saying "A computer program is said to learn from experience E with respect to some class of task T and performance measure P, if its performance at the task in T, as measured by P, improves with experience E" [29]. Generally speaking, machine learning describes a computer program or algorithm that learns and therefore improves automatically from experience [30, 31]. Two of the most important areas in which it makes sense to use programs that learn from experience and improve themselves are complex problems and the need for adaptability. Complex problems include tasks that are performed by humans, such as driving, speech recognition, or image understanding, as well as tasks that are beyond human capabilities, such as the analysis of large and complex data sets. The rigidity of a computer program is a limiting factor once it has been written and installed. There are many tasks, such as decoding handwritten text, speech recognition, or spam detection, where the program needs to adapt to changes in the environment it interacts with and therefore requires adaptivity [32].
Machine learning is typically performed in three phases. First, the training phase, where training data sets are used to teach the model to map the given input to the desired output. Second, the test or validation phase, where a validation data set is used to measure how well the model has been trained and to test properties such as precision or errors. Third and finally, the application phase, where the model is applied to real-world data to derive the results [29].
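To make these phases concrete, the following minimal sketch runs through them with scikit-learn on synthetic data; the data set, model choice, and split ratio are illustrative assumptions and are not taken from any of the reviewed publications.

```python
# Minimal sketch of the three machine learning phases on synthetic data.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
import numpy as np

# Synthetic stand-in for labelled process data (inputs X, targets y).
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Phase 1 (training): fit the model on the training set.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Phase 2 (test/validation): measure how well the model has been trained.
val_mae = mean_absolute_error(y_val, model.predict(X_val))
print(f"validation MAE: {val_mae:.3f}")

# Phase 3 (application): apply the trained model to new, unseen data.
X_new = np.random.default_rng(1).normal(size=(5, 8))
predictions = model.predict(X_new)
```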
Because of the numerous different types of machine learning, it is useful to divide them into broad categories. The distinction can be made based on the types of available training data, the method and order by which the training data is fed, or based on the test data that is used for the evaluation of the learning algorithm. The most commonly used differentiator is the extent to which the algorithm is trained with human supervision: supervised, unsupervised, semi-supervised, and reinforcement learning [29, 33].
• Supervised learning: Predictions are based on a set of training data that includes the desired solutions (called labels). Typical tasks are classification, regression, and ranking problems.
• Unsupervised learning: Predictions are solely based on unlabeled training data with the intent to find "interesting patterns" within the data set. Typical tasks are clustering and dimensional reduction problems.
• Semi-supervised learning: Predictions are based on both labeled and unlabeled data. It is used in areas where unlabeled data is widely accessible but labeled data is expensive to obtain.
• Reinforcement learning: The learning system (called agent) can observe the environment and learn how to behave or act by occasional reward or punishment [33, 34].
Supervised learning methods, especially for classification and regression problems, have already proven successful in improving the brewing process. Kozlowski et al. [7] implemented convolutional neural networks for varietal classification of barley to replace human expertise in the sensitive and time-consuming detection of defective grains. Their results show that the recognition of an individual kernel variety can be achieved by the convolutional neural network with satisfying accuracy, outperforming state-of-the-art methods by over 40%. Ermi et al. [35] showed that deep
learning provides a promising approach to the highly non-linear mapping from the domain of recipes to the domain of chemical properties to produce desired chemical properties for beer. They used two deep learning architectures, a standard Deep Neural Network (DNN) and a Long Short-Term Memory (LSTM) Deep Neural Network, to model the non-linear relationships between beer type and to predict ranges for original gravity, final gravity, alcohol by volume, international bitterness units, and color.
These and other successful applications of machine learning in the brewing process have already been proposed
in the literature. To get a good overview of the vast literature, we performed a systematic literature review based on a
framework developed to bring relevant concepts into a higher-level structure.
3 Conducting the review
This study employs a systematic literature review as its methodology. A systematic review uses explicit methods for the identification and selection of relevant research data, is reproducible, and answers a specifically formulated research question. Statistical methods for the analysis of the included studies or the summarization of results can be used [18]. The methodology used in this review follows the procedure model of Vom Brocke et al., which consists of five steps: (I) definition of review scope, (II) conceptualization of topic, (III) literature search, (IV) literature analysis and synthesis as well as (V) deduction of research agenda [17]. It is widely accepted within review theory [36] and not least it grants freedom of action for domain- and process-specific examinations.
3.1 Definition of review scope
The scope of this review is defined along the taxonomy of literature reviews by Cooper [37] (Fig. 1).
The aim of this systematic literature review is first to aggregate the latest state of the art for the application of artificial intelligence in the brewing process and second to develop an integrative framework for further analysis and synthesis of the relevant publications. Here, we want to focus on the brewing process only and leave out approaches describing final product assessment methods, such as chemical characterization with an electronic nose or tongue, since these approaches are based on an already brewed beer. Accordingly, this leads to the following research questions (RQ):
Fig. 1 Taxonomy of literature reviews following Cooper [37]
• RQ1: In which areas of the brewing process is machine learning used?
• RQ2: Which machine learning tasks are used in the respective areas?
Furthermore, the corresponding hypotheses are derived:
• HP1: Publications for different brewing process steps will be found.
• HP2: Different machine learning algorithms are used in different brewing process steps.
3.2 Conceptualization of topic
Prior knowledge about the topic needs to be acquired before conducting a review in order to synthesize knowledge from the literature properly [38]. Based on the explanations and definitions provided in Sects. 1 and 2, we identified the concepts most relevant to our field of observation and mapped them to the topic. Furthermore, we reviewed publications with an explorative approach to ensure the use of a wide range of key terms that exist within the literature. As a result, we generated a concept map [39] for the brewing process and machine learning (see Fig. 2). The concept map lists relevant synonyms and terms for the further literature search.
3.3 Literature search
Based on the concept map, the search terms were transferred into the following search string including Boolean operators and wildcards: [("beer*" OR "brewery" OR "brewing" OR "fermentation" OR "wort production") AND ("artificial intelligence" OR "AI" OR "machine learning" OR "supervised learning" OR "unsupervised learning" OR "reinforcement learning" OR "ANN" OR "NN" OR "SVM" OR "KNN" OR "RF" OR "DT" OR "digitalization")]. We used AND operators to exclude publications focusing on a single area of the search field only, to increase the thematic relevance. The search strategy was enhanced by the elements of the STARLITE mnemonic framework [40]: We focus on journal articles and conference proceedings published in English in the electronic databases IEEE Xplore, Web of Science, EBSCO, ScienceDirect, and Wiley Online Library. For each of them, the search string was adjusted to fit the database search requirements. A special adjustment to sharpen the search string was made for EBSCO. Due to a larger number of titles including the phrase "except beer", the decision was made to eliminate those irrelevant publications, and (NOT "except beer") was added to the search string. The query includes all publications until June 2024. In order not to exclude any relevant publications and to be able to analyze the distribution of papers over time, no start date was defined, and each of the five databases was searched from the oldest document listed.
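As an illustration of how such a query can be kept consistent across databases, the following sketch assembles the search string from the two concept groups; the field syntax of the individual databases is not reproduced here, and only the EBSCO-specific exclusion described above is shown.

```python
# Sketch of assembling the Boolean search string from the two concept groups.
brewing_terms = ["beer*", "brewery", "brewing", "fermentation", "wort production"]
ml_terms = [
    "artificial intelligence", "AI", "machine learning", "supervised learning",
    "unsupervised learning", "reinforcement learning", "ANN", "NN", "SVM",
    "KNN", "RF", "DT", "digitalization",
]

def or_group(terms):
    """Join a concept group with OR and quote each term."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

base_query = f"[{or_group(brewing_terms)} AND {or_group(ml_terms)}]"
# Database-specific adjustment for EBSCO, as described in the text.
ebsco_query = f'[{or_group(brewing_terms)} AND {or_group(ml_terms)} AND (NOT "except beer")]'
print(base_query)
```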
The application of the search string to the metadata title, abstract, and keywords, considering the additional criteria from the STARLITE mnemonic, identified a total of 13,839 publications in all databases. Afterward, we followed the procedure given in the PRISMA flow diagram according to Moher, Liberati et al. to consider relevant publications only [18] (see Fig. 3). The procedure recommends removing duplicates followed by a literature screening and a detailed assessment of relevance based on the full text. The following quality criteria (QC) were defined for the screening and the detailed assessment:
Fig. 2 Conceptualization map for the brewing process and machine learning according to the procedure of Rowley and Slack [39]
• QC1: Addresses the domain of brewing.
• QC2: Fermentation is applied in a context with beer brewing. Other domains such as wine or tea fermentation are
excluded.
• QC3: Publications focusing on the use of machine learning to improve the brewing process. Papers that solely describe
digitalization improvements without touching the topic of machine learning are excluded.
• QC4: Papers focusing on the final product assessment rather than on the brewing process are excluded. Final product assessment in this context means research on an already brewed beer that has completed the brewing process, for example, chemical characterization to identify the type of beer.
The total number of publications included 2193 duplicates. In the remaining 11,646 publications, we identified various publications that do not comply with the applied search criteria. It turned out that some databases apply the search string to the full text in addition to the title, abstract, and keywords. To comply with the search criteria, we additionally applied the search string to the title, abstract, and keywords manually. After removing duplicates and the manual application of the search string, a total number of 4178 publications remained for the screening phase.
For the screening of the remaining publications, we used a clustering approach from Weißer et al. [19] that is based on Natural Language Processing (NLP). A k-means clustering is executed after successful tokenization (word separation), the removal of stop words (they do not contain relevant information), and a TF-IDF vectorization, to identify the most relevant words (top words) per cluster. The top words characterize each cluster and indicate its thematic relevance. The basis for the clustering is title, abstract, and keywords. Due to the resulting large text corpus, we performed a dimensionality reduction by latent semantic analysis (LSA), as proposed by [41] and [42], to achieve better clustering results.
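A minimal sketch of this screening pipeline with scikit-learn is shown below; the placeholder corpus, the number of LSA components, and the number of clusters are assumptions for illustration, since the exact preprocessing settings are not reproduced here.

```python
# Sketch of the NLP-based screening step: tokenization and stop word removal
# inside the TF-IDF vectorizer, latent semantic analysis (truncated SVD) for
# dimensionality reduction, k-means clustering, and top words per cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

documents = [  # placeholder corpus; in the review: title, abstract, and keywords per publication
    "machine learning for wort production monitoring and control",
    "neural network prediction of beer fermentation progress",
    "cocoa bean fermentation and moisture content analysis",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)

lsa = TruncatedSVD(n_components=2, random_state=0)  # LSA; far more components in practice
reduced = lsa.fit_transform(tfidf)

kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(reduced)  # k chosen per iteration

# Top words per cluster: project the cluster centroids back into TF-IDF space.
terms = vectorizer.get_feature_names_out()
centroids = lsa.inverse_transform(kmeans.cluster_centers_)
for c, centroid in enumerate(centroids):
    top = [terms[i] for i in centroid.argsort()[::-1][:5]]
    print(f"cluster {c}: {', '.join(top)}")
```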
In addition to using top words to exclude irrelevant clusters, as suggested by Weißer et al., we improved our process by introducing a stricter sampling method. We assumed that the articles within each cluster are homogeneous, i.e. they have a similar topic. Therefore, we selected a representative sample of publications from each cluster. We calculated the sample with a confidence level of 90%, a margin of error of 10%, and a standard deviation of 25%, as the clustering already provided a coherence of content within a cluster. Based on this calculation, for each cluster 15% of the publications, but at least five, are selected by a random sample generator to be reviewed. This ensured that our sampling approach was robust and statistically sound. We assessed the abstracts of all publications selected in the sample for each cluster. To ensure a high quality standard, a cluster was only excluded if all publications in the sample did not meet the predefined quality criteria (QC1-4). This iterative and sample-based assessment allowed us to minimise the risk of excluding relevant publications while maintaining efficiency. This approach has already been successfully demonstrated by Burggräf et al. [20], as well as Burggräf et al. [21].
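The following sketch illustrates the per-cluster sampling rule; the classical sample size formula is shown only for orientation, as the exact calculation used cannot be inferred beyond the stated parameters.

```python
# Sketch of the per-cluster sampling rule: 15% of each cluster's publications,
# but at least five, drawn with a random sample generator.
import math
import random

def sample_cluster(publications, seed=0):
    n = max(5, math.ceil(0.15 * len(publications)))
    n = min(n, len(publications))  # clusters smaller than five publications
    return random.Random(seed).sample(publications, n)

# Orientation only: classical sample size for 90% confidence (z = 1.645),
# 10% margin of error, and a standard deviation of 25%.
z, sigma, margin = 1.645, 0.25, 0.10
n_classic = math.ceil((z * sigma / margin) ** 2)  # roughly 17 publications
```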
Fig. 3 Flowchart of the review process
Deviating from the described approach of the authors in [19] and [20], we decided on an iterative procedure and thus repeated the clustering approach a total of five times to further reduce the number of irrelevant publications in our clusters. We applied a k-means algorithm, which offers excellent time complexity and a good cluster purity, to divide the publications by minimizing the sum of squared errors. To determine the appropriate number of clusters for each iteration, we used a combination of the elbow method and the silhouette score. The number of clusters always includes one incomplete cluster marked '–', containing those publications whose Euclidean distance to the centroids of the neighboring clusters is too large for an assignment to any cluster. This cluster automatically moves on to the next iteration.
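A sketch of how the elbow method and silhouette score can be combined to pick the number of clusters per iteration is given below; the candidate range of k is an assumption, and "reduced" refers to the LSA-reduced matrix from the screening sketch above.

```python
# Sketch of choosing k per iteration via the elbow method (inertia, i.e. the
# sum of squared errors) combined with the silhouette score.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evaluate_cluster_counts(reduced, k_candidates):
    results = []
    for k in k_candidates:
        km = KMeans(n_clusters=k, random_state=0, n_init=10).fit(reduced)
        results.append((k, km.inertia_, silhouette_score(reduced, km.labels_)))
    return results  # look for the elbow in inertia and the peak silhouette score

# Example call (illustrative range of candidate cluster counts):
# scores = evaluate_cluster_counts(reduced, range(10, 31))
```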
For the remaining 4178 publications, a total of five iterations was performed until no more clusters were irrelevant. Table 1 shows the first iteration with its top words, cluster size, and relevance, while Table 2 shows the fifth and final iteration. As displayed in Table 3, the number of relevant publications was gradually reduced through the iterative approach from 4178 to 596.
Next, we analyzed the abstracts of the 596 publications concerning QC1-4. The remaining 92 publications were then further analyzed by reading the full text, resulting in 19 relevant publications. With the relevant 19 publications, we performed a forward and backward search to identify models, theories, and constructs that may not have been covered by the database search terms [43]. This resulted in two additional publications, so that our systematic literature review ultimately covers a total of 21 publications. By applying the methods described above, all potentially relevant publications were included in the literature search results; 21 publications were retained because they met the predefined quality criteria for the further classification, while the other publications were excluded as irrelevant.
Table 1 Iteration 1: Clusters with top words, cluster size, and assessed relevance
Cluster no. Top words Cluster Size Relevance
1 Biomass, concentration, ethanol, process, rate 188 Not relevant
2 Users, express, listserv, abridged, warranty 107 Not relevant
3 Model, process, network, models, batch 298 Not relevant
4 Model, process, learning, data, sensor 186 Relevant
5 Control, controller, process, system, batch 184 Relevant
6 Company, industry new, de, business 293 Relevant
7 Process, estimation, industrial, control, line 141 Not relevant
8 Analysis, alcohol, samples, electronic, classification 196 Relevant
9 Production, rsm, ga, optimization, medium 144 Not relevant
10 Fault, diagnosis, process, data, seed 61 Not relevant
11 Soft, sensor, sensing, process, variables 53 Not relevant
12 Method, determination, spectrophotometric, mu, ml 74 Not relevant
13 Flavor, compound, aroma, volatile, soy 51 Not relevant
14 Optical, light, film, wavelength, absorption 39 Not relevant
15 Antioxidant, fermented, sourdough, activity, bread 210 Not relevant
16 Cocoa, bean, moistur, chocolat, cacao 23 Not relevant
17 Coffee, ccma, bean, altitud, yeast 34 Not relevant
18 Win, yeast, grap, saccaromyc, fermentation 70 Not relevant
19 Rum, diet, ruminal, digestibility, feed 156 Not relevant
20 Sludg, pretreatment, anaerobic, production, wast 76 Not relevant
21 Tea, black, kombucha, catechin, fermentation 43 Not relevant
22 Pim, medication, criteria, old, patient 56 Not relevant
23 Beer, machin, craft, learning, model 99 Relevant
24 Ann, neural, network, artificial, rsm 74 Relevant
25 Food, safety, enzym, tos, genetically 29 Not relevant
26 Gut, microbiota, polysaccharid, vitro, intestinal 106 Not relevant
– Protein, activity, production, strains, high 1187 Relevant
4 Results
This theoretical overview brings relevant concepts into a higher-level structure, maps the contribution of literature to our research questions, and provides starting points for future research [36]. A framework to classify the publications is therefore defined before the different concepts are analyzed and assigned considering their contribution to our research questions.
4.1 Definition of framework
Setting up a framework is a common approach to structure literature, as recommended by [44] and [45]. Our framework is separated into the following four dimensions (see Fig. 4): Brewing Process, Area of Improvement, ML-Task, and ML-Algorithm. The first two dimensions represent the target area in terms of process step and type of optimization for improvements within the brewing process, while the ML-Task and ML-Algorithm represent the application side. These dimensions are designed to function independently of each other, as each focuses on a distinct aspect of the framework. The Brewing Process and Area of Improvement define what is being optimized, while the ML-Task and ML-Algorithm focus on how this optimization is achieved. This separation ensures that the solution methods can be applied flexibly across various brewing processes without being constrained by the specific details of the improvement areas.
4.1.1 Brewing process
Our systematic literature review focuses on machine learning in the brewing process. Therefore, the first criterion to classify the publications is their respective brewing process step. As shown in Fig. 5, which provides a schematic overview of the brewing process, our classification follows the main processes of a brewery as described by [26] but aggregates thematically similar steps for a better overview: Starchy raw material, hops, brew water, and yeast are summarized as
Table 2 Iteration 5: Clusters with top words, cluster size, and assessed relevance
Cluster no. Top words Cluster size Relevance
1 Recognition, electronic, tongue, pattern, array 15 Relevant
2 Light, red, intensity, plants, plant 3 Not relevant
3 Cocoa, beans, fermented, pheromone, coprophilous 7 Not relevant
4 Beers, aging, alcoholic, nir, primary 18 Relevant
5 Noise, classification, spinosad, quality, methods 37 Relevant
6 Bottle, inspection, inspector, empty, vision 4 Relevant
7 Nose, electronic, sensors, discrimination, made 30 Relevant
8 Barley, process, dough, feature, energy 36 Relevant
9 Foam, sensory, robobeer, assess, using 12 Relevant
10 Beer, quality, craft, hops, machin 58 Relevant
11 Data, process, sensor, monitoring, soft 48 Relevant
12 Ann, production, neural, network, rsm 70 Relevant
- Models, stars, fouling, foods, control 268 Relevant
Table 3 Overview of the publications per iteration
Iteration no. | Initial publications | No. of clusters | No. of clusters excluded | Remaining publications
1 | 4178 | 27 | 20 | 2219
2 | 2219 | 24 | 17 | 1118
3 | 1118 | 20 | 10 | 748
4 | 748 | 12 | 5 | 606
5 | 606 | 13 | 2 | 596
Fig. 4 Dimensions of the developed framework
Fig. 5 Schematic overview of the brewing process
raw material for the framework. We therefore divide the brewing process into raw material, malting, wort production, fermentation, filtration and stabilization, filling and labeling, and—as a supporting process step—recipe.
Raw Material considers the main ingredients for the brewing process: starchy raw material, hops, yeast, and brew water—from incoming goods to quality control. Malting is the process step of steeping, germinating, and drying the barley and includes the milling. Wort Production aggregates mashing, lautering, and boiling as previously described in chapter two. Fermentation completes the brewing and includes maturation and storage. Filtration and stabilization conclude the brewing; it is the process to stabilize the flavor and clear the beer before filling and labeling take place. Since the recipe is the basis of the entire brewing process and adaptations can also occur in this area through machine learning, we have included the supporting process step recipe in the framework, which contains all information about the recipe and its development.
4.1.2 Area of improvement
Since we are looking at existing processes that are to be optimized through machine learning, our second dimension of the framework relates to the area of improvement. This aspect is derived from the introduction to total productive maintenance and the overall equipment effectiveness by Nakajima [46]. Availability, performance, and quality are the three components of a process that can be targeted for improvement. All activities that belong to the preparation of the actual process, such as availability of the system, maintenance, energy consumption, servicing, or set-up time, are summarized under availability. Performance includes all improvements or performance optimizations that directly affect the process execution, while improved methods of quality monitoring or quality measurements are summarized under quality.
4.1.3 ML‑task
Learning algorithms can be grouped based on the approach that they are following. Since the main tasks have been extensively studied, several overviews are available in the literature. For our framework, we consider the basic work by Mohri et al. [33]. They differentiate between classification, regression, clustering, and dimensionality reduction. Classification is the problem of identifying to which category an item belongs. One of the best-known examples of classification is spam / no-spam detection for emails (cf. [47]). Regression is the problem in which a continuous value for each item needs to be predicted. The result of the prediction depends on the magnitude of the difference between true and predicted values. A typical example would be the prediction of stock values (cf. [48]). Clustering is the problem of partitioning objects into homogeneous subsets based on similar attribute values. The analysis of large data sets is the main usage (cf. [49]). Dimensionality reduction is the problem of transforming data from a higher dimension into a lower-dimensional space while retaining meaningful properties from the initial representation. This is commonly used for the preprocessing of digital images for further computer vision tasks (cf. [31]).
4.1.4 ML‑algorithm
To subdivide the machine learning algorithms, we follow the often-cited overview of supervised learning algorithms by Caruana and Niculescu-Mizil [50]. They subdivide into Artificial Neural Networks (ANN), Logistic Regression (LOGREG), K-Nearest-Neighbor (KNN), Support Vector Machines (SVM), Random Forest (RF), Decision Trees (DT), and Bagged Trees (BAG-DT). In addition to that, we extended the field of Logistic Regression by Linear Regression (LINREG), added Naïve Bayes (NB) and k-means (KM) as well as Principal Component Analysis (PCA) together with Linear Discriminant Analysis (LDA).
4.2 Analysis and synthesis
With the help of the above-mentioned literature search method, all potentially relevant publications were included and the irrelevant ones were filtered out, so that at the end of the literature search 21 publications considered relevant remained. We classified all publications following the before-mentioned dimensions and performed a descriptive analysis to identify the current state of the art for applications of artificial intelligence in the brewing process, specifically to discover in which areas of the brewing process machine learning tasks are being used (cf. RQ1). Additionally, we further studied the machine learning tasks that are applied in those different areas (cf. RQ2). The chronological development of publications towards a given research topic usually gives a good overview of
the development within that area (see Fig. 6). Burggräf et al. have already successfully applied this [21]. Therefore, we looked at the publishing trend of the 21 publications from our systematic review and compared them to the publishing trend for artificial intelligence from 1999 to 2023, using the data from 1999 as the basis (ML publications n = 11,479; ML in brewing n = 1).
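For orientation, the indexing to the 1999 base year can be sketched as follows; the yearly counts other than the stated 1999 values are placeholders, not data from the review.

```python
# Minimal sketch of indexing yearly publication counts to the 1999 base year
# (ML overall: n = 11,479; ML in brewing: n = 1). Later years are placeholders.
def index_to_base_year(counts_per_year, base_year=1999):
    base = counts_per_year[base_year]
    return {year: count / base for year, count in sorted(counts_per_year.items())}

ml_overall = {1999: 11479, 2011: 40000, 2023: 200000}  # placeholder values
ml_in_brewing = {1999: 1, 2011: 2, 2023: 3}            # placeholder values
trend_overall = index_to_base_year(ml_overall)
trend_in_brewing = index_to_base_year(ml_in_brewing)
```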
The result shows that the research on ML applications in brewing (blue) does not follow the general increasing trend of overall artificial intelligence research (grey). From this, we conclude that artificial intelligence is underrepresented in the brewing sector and that there is currently no focus on ML applications in brewing.
Next, we analyzed the dimensions of the framework (see Fig. 4) individually and subsequently combined two dimensions to identify common approaches and implications for further research. Since one publication can be assigned to more than one class within a dimension, we allow multiple allocations for our framework. The following paragraphs are structured according to the framework.
4.2.1 Brewing process
Looking at the process steps and numbers of publications (see Fig. 7) reveals that most of the papers focus on the core brewing process steps of raw material and wort production (five publications each) and fermentation with eight publications. Szczypinski et al. [8, 51], for example, intensively study the identification of barley to detect defects and to ensure a varietal uniformity that is essential for the brewing process. Hou and Zhou [52, 53] investigated a soft-sensing approach for periodic fouling during wort production, and Zhang and Jia [14] predicted the acetic acid that is significant to beer taste and produced mainly by yeast during fermentation. Another publication deals with predicting the alcohol concentration during the process of fermentation [54]. In contrast, only three publications are dealing with the
Fig. 6 Chronological development of publications
Fig. 7 Overview of the brewing process steps
process step of filling and labeling, mainly focusing on the inspection of empty bottles as described by Duan and Wang [16, 55], while filtration and stabilization (2), recipe (1), and malting (1) are relatively rarely covered in the publications.
One could assume that improving the recipe to brew a better beer that more closely meets the customer's taste plays an important role. However, only the publication by Ermi et al. [35] deals with the beer recipe, mapping between the recipe and the chemical attribute domains of the beer to model the non-linear relationship between them. We also noticed that most of the publications focus on improving one process step. Bai et al. [15] are the only authors covering the processes from malting to wort production, fermentation, and finally filtration and stabilization, focusing on the overall energy consumption. Analyzing the number of considered brewing process steps in more detail reveals that only one additional study covers two or more process steps. Gonzales Viejo et al. [56] explored how foamability properties are affected by the use of soundwaves during fermentation as well as during filtration and stabilization. Overall, as expected, the core process steps of beer brewing from raw material to wort production and fermentation are primarily considered in the publications. On the other hand, recipe optimization as an essential tool for meeting customer needs is hardly considered.
4.2.2 Area of improvement
Regarding the area of improvement, it was particularly noticeable that only one publication relates to the category of availability [15]. All others belong to either performance or quality, with a few more in the latter category (see Fig. 8). This can be explained by the importance of the ingredients' quality, which makes up half of this category's publication assignments. An example of the high importance of the quality of the raw material is given by Szczypinski et al. [51], claiming the varietal uniformity of barley to be crucial to ensure the production of high-quality malt. In addition to the raw material, filling and labeling [16, 55], fermentation [14, 56], and the recipe [35] are covered in the category of quality. Improvements in the area of performance mainly occur in wort production [10, 11, 52, 53].
4.2.3 ML‑task
Considering the machine learning task, we recognized an equal distribution between classification and regression (see Fig. 9). They are both used eleven times and therefore account for 81% of the total ML tasks, putting supervised learning far ahead of unsupervised learning, where dimensional reduction is used four times and clustering only once. Additionally, none of the publications exclusively utilizes unsupervised learning tasks.
For example, Hou and Zhou [52] in their publication about periodic fouling in recipe alternation use a clustering approach but also combine it with a regression approach. Likewise, a dimensional reduction is once used with regression [56] and three times for the inspection of barley with a classification approach [8, 51, 57]. These have in common that they use dimensional reduction for image recognition before the data is further analyzed. A possible explanation for the predominant use of classification and regression might be that there is no need to find new patterns within the brewing process but to optimize the performance based on experience and therefore the use of labeled input data.
Fig. 8 Overview of the area of improvement
4.2.4 ML‑algorithm
Looking at the machine learning algorithms in detail reveals that ANN, with 14 applications, is by far the most widely used algorithm (see Fig. 10). Moreover, only four publications use neither ANN nor SVM [11, 12, 54, 58], and LOGREG/LINREG, RF, and BAG-DT, as well as other algorithms, are not applied at all. Furthermore, we identified authors using more than one approach within a publication. PCA and LDA are used twice with ANN [51, 56] and also twice with SVM [8, 57]. For example, Zhang et al. [14] combine ANN and SVM to predict the acetic acid content in the final beer, while Hou et al. [52] couple ANN with k-means clustering. Song and Ciesielski [11] simultaneously use DT, KNN, and NB to get the best results for image analysis to differentiate between different states of the mashing process. Bowler et al. use LSTMs for predicting the alcohol concentration during fermentation [54]. Another publication applies an MT-DNN to improve the model accuracy for an industrial fermentation process by using laboratory-scale fermentation data [58].
4.2.5 Brewing process and ML-algorithm
Combining the brewing process step with the applied machine learning algorithms revealed that artificial neural networks were used in every process step (see Fig. 11). Once several publications on a process step were available, additional machine learning algorithms such as SVM, KNN, MT-DNN, or PCA were applied. This led us to the assumption that ANN was applied as a universal solution because it has a wide range of applicability, but at some process steps it has been outperformed by other algorithms. This also means that further research is needed for the process steps where so far only ANN has been applied.
4.2.6 Area of improvement with ML-task and ML-algorithm
Combining the area of improvement with the used machine learning tasks and machine learning algorithms (see Fig. 12) showed that dimensional reduction, and with it PCA/LDA and SVM, were only used for quality improvement purposes.
Fig. 9 Overview of the used machine learning task
Fig. 10 Overview of applied machine learning algorithms
Performance improvements had the biggest spread amongst the machine learning algorithms even though they only account for nine out of 21 publications.
This theoretical overview aimed to bring relevant concepts into a higher-level structure, mapped the contribution of literature to our research questions, and provided starting points for further research. Despite the general increasing trend for overall artificial intelligence research, the research in ML applications in brewing has hardly increased in the past 20 years. Nevertheless, we showed that there are publications on artificial intelligence for every brewing process step but that most of them focus on the core steps of raw material, wort production, and fermentation (cf. RQ1). Concerning the applied machine learning task (cf. RQ2), classification and regression were most used, whereas artificial neural networks were the most predominantly applied algorithm, followed by support vector machines and PCA/LDA.
Fig. 11 Overview of brewing process steps combined with the applied machine learning algorithm
Fig. 12 Overview of area of improvement combined with machine learning task and algorithm
5 Discussion
Following the descriptive analysis, we further summarized the considered publications grouped by their area of improvement. In order to provide a comprehensive overview of the publications reviewed in this study, we categorized them according to their contribution to different steps of the brewing process and areas of improvement. Table 4 presents a matrix that assigns the considered publications to specific brewing process steps, such as raw material handling, wort production, fermentation, and others, along with the respective areas of improvement, namely quality, performance, and availability. This categorization allows for a structured understanding of where machine learning has been applied and what benefits have been achieved across the brewing process.
Similarly, Table 5 summarizes the publications according to the machine learning tasks they address, such as classification, regression, clustering, and dimensionality reduction, in relation to the same areas of improvement. By analyzing the assignment of publications to these categories, this matrix provides insights into which machine learning tasks have been most effective in improving the quality, performance, and availability of brewing processes. This classification enables a better understanding of the role different machine learning models play in addressing specific challenges within the brewing industry.
Building on the insights provided in both tables, the following sections delve deeper into the specific impact of machine learning on different aspects of the brewing process. Each of the following sections explores how machine learning algorithms have been applied to improve availability, performance, and quality in the brewing industry, illustrating both the successes and the ongoing challenges in these areas. This detailed analysis highlights the practical applications of various models and the specific process improvements achieved through their implementation. Furthermore, the prevalence of ANN and SVM is discussed.
5.1 Impact of machine learning to improve the availability of brewing processes
The publication of Bai et al. [15] demonstrates a solution on how to improve the availability, more specifically the energy consumption, of the beer brewing process. As written in the framework's definition, all activities that belong to the preparation of the actual process, such as availability of the system, maintenance, energy consumption, servicing, or set-up time, are summarized under availability. According to the authors, the brewing processes from raw material to fermentation are the most important but also account for the main energy consumption. Therefore, to reduce energy consumption and save costs, a detailed analysis of historical production data was performed to obtain the
Table 4 Matrix overview of the assignment of publications to the categories brewing process step and area of improvement
Brewing process step | Quality | Performance | Availability
Raw material | [7–9, 51, 57] | – | –
Malting | – | – | [15]
Wort production | – | [10, 11, 52, 53] | [15]
Fermentation | [14, 54, 56] | [12, 13, 58, 59] | [15]
Filtration and stabilization | [56] | – | [15]
Filling and labeling | [16, 55] | [60] | –
Recipe | [35] | – | –
Table 5 Matrix overview of the assignment of publications to the categories ML-Task and Area of Improvement
ML-task | Quality | Performance | Availability
Classification | [7–9, 16, 51, 55, 57] | [11, 12, 58, 59] | [15]
Regression | [14, 35, 54, 56] | [10, 13, 52, 53, 58, 60] | –
Clustering | – | [52] | –
Dimensional reduction | [8, 51, 56, 57] | – | –
energy-saving and energy-wasting production batches. The energy consumption of the batches was analyzed using PCA and data envelopment analysis to evaluate the relative effectiveness of the energy consumption based on linear programming. The final modeling of the brewing process's energy consumption was carried out using a radial basis function neural network, which shows that the method can be used to effectively predict and analyze the energy consumption of brewing processes based on their historical production data.
5.2 Impact of machine learning to improve the performance of brewing processes
Nine of the 21 publications discussed the use of machine learning to improve the performance of brewing processes. The authors of [10] described the use of neural networks as a control system to obtain high extract efficiency in the lauter tun. To ensure an even differential pressure during lautering—which restricts the turbidity of the wort during filtering—adjustable blades are applied. To test the performance of a developed neural network controller for the blades, a simplified neural network model has been used to represent the real plant. The model was successful in maintaining optimal conditions, such as reducing turbidity and ensuring effective filtration. The low training error of 0.0529 and testing error of 0.1578 reflect the model's high accuracy in learning and predicting process behaviors, which led to the optimization of the lautering process. This optimization improved extract efficiency and minimized process disruptions, resulting in a more stable and efficient operation. The results showed that neural networks simplify the management of the complex relations between plant variables, but an analysis with a more complex model needs to be done to adjust the blade controller even more closely to the real plant.
Neural networks have also been used in [53] and [52] to reduce fouling during wort production. A soft-sensing approach based on fuzzy neural networks and orthogonal experimental design was investigated by [53] to measure the batch heat exchanger fouling resistance online. The approach was successfully and effectively applied in the fouling measurement of a brewery wort evaporator. [52] presented a measurement scheme for predicting the periodic fouling in recipe alternation and batch production, investigating an approach based on fuzzy neural networks and fuzzy c-means clustering. The latter was used to cluster 15 recipes with different quantities of raw material and different parameters of the evaporation process into classes, while two fuzzy neural networks were trained to learn the formation trends of short- and long-term fouling. Their experimental results indicated that the hybrid FNN reduced the overall fouling prediction error to less than 7%, with short-term fouling errors below 8% and long-term fouling errors below 4%, proving the effectiveness of this method in improving heat exchanger performance.
The development of a machine vision application for automatic process assessment during mashing was described in [11]. Different classifiers that represent three groups of classification models, namely decision trees (C4.5, PART), statistical methods (KNN, Naïve Bayes), and rule models (1R, decision table), were applied to discriminate between seven different stages of the mashing process. The experiments showed that using a reduced set of 11 selected image features, a classification accuracy of 71.6% was achieved for differentiating seven stages of the mashing process. For binary classification between the finished state and all other stages, the accuracy increased to 92.0%, demonstrating that machine learning could effectively optimize the timing of mashing termination and improve process control.
Another combination of different methods was used by [12] to characterize beer fermentations. The goal was to predict whether or not the target present gravity of the beer was reached within a given time. This was achieved by combining three approaches: modelling by a mathematical function, the nearest neighbour approach, and the generation of centile curves. Using all three approaches combined proved to be the most effective way to judge whether a given fermentation deviates from normal behaviour, because the confidence in the prediction increased if all three methods agreed.
Also looking at the fermentation process, [13] described an online set-point optimization cooperating with model predictive control and its application to a yeast fermentation process. A structure with an adaptive steady-state target optimization that is linearized online and a model predictive control algorithm in which two neural models of the process are used was proposed. The neural network approach showed results comparable to those achieved in a computationally demanding structure with nonlinear optimization.
[60] considered the estimation of the overall heat transfer coefficient in a turbulent heat exchanger under fouling as an application in a flash pasteurizer. They demonstrated an advancement in the prediction of the overall heat transfer coefficient using a model based on neural networks with the key factors that affect beer fouling during the pasteurization process.
Using laboratory ultrasonic data, [58] found that all three domain adaptation methodologies studied improved the
accuracy of machine learning models for predicting industrial scale fermentation progress compared to training on the
industrial data alone. Federated learning performed the best, with an accuracy of nearly 100% for 14 out of 16 tasks
Vol.:(0123456789)
Discover Artificial Intelligence (2024) 4:80 | https://doi.org/10.1007/s44163-024-00177-6 Review
compared to the base case model. This approach may be particularly useful for performance improvement for the craft
beer industry, where a wider range of beers are produced at smaller volumes and where data privacy may be a concern.
In [59], it was found that both RSM and ANN were effective tools for optimizing the bioprocessing parameters of umqombothi, with good correlation between the experimental and predicted values. A coupled approach using both RSM and ANN may be beneficial for optimizing the bioprocess and improving the final product, although further investigation of other key parameters may be necessary.
5.3 Impact of machine learning to improve the quality of brewing processes
The remaining eleven of our 21 publications offered solutions and suggestions on how to improve the quality of brewing processes.
The quality of barley is a key factor for successful brewing, and therefore the visual inspection of grain before malting is highly important. In [8] and [51] the authors presented a method for the identification of barley defects with computer vision based on the barley's colour, texture, and shape. Discriminant analysis, a linear classifier ensemble, and an artificial neural network were applied [51], with the result showing that the proposed method is useful for discriminating barley varieties and that the artificial neural network is superior to the linear classifier ensemble. A similar approach was followed in [8], applying linear classifier ensembles and support vector machines with linear and polynomial kernel functions for the classification of barley grain defects. It showed that the support vector machine classification was able to determine the defect classes with an accuracy of 97%, a more than satisfactory result.
Another option to replace the currently employed visual inspection of barley was presented in [7]. The authors compared nine implementations of convolutional neural networks, covering deep learning methods and transfer learning models, in their application to varietal classification. Their results in recognizing the variety of an individual kernel with convolutional neural networks outperformed state-of-the-art methods by over 40%, demonstrating the value of applying them in the malting industry.
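A transfer-learning setup of this kind can be sketched as follows, assuming PyTorch and torchvision (version 0.13 or later, with a network connection to download the pretrained weights); the number of varieties, the backbone, and the training loop are illustrative choices and not those of [7].

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_VARIETIES = 8  # illustrative number of barley varieties

# Pretrained backbone; the convolutional layers are frozen and only a new head is trained
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_VARIETIES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (B, 3, 224, 224) kernel photographs, labels: (B,) variety indices
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for real kernel images
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_VARIETIES, (4,))))
```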
Furthermore, two publications covered yeast cell viability analysis, aiming at a more compact and cost-effective device [9] and at in-situ real-time monitoring [57]. The approach in [9], where a compact automatic yeast analysis platform was used to rapidly measure viability and cell concentration, reached results comparable to industry standards. The authors applied a support vector machine model coupled with a lens-free computational imaging technology to classify live and dead cells. With a similar purpose, [57] developed an automatic way to determine cell viability from the analysis of time-lapse images taken by dark-field microscopy. Like the aforementioned publication, they applied a support vector machine to predict viability, although a principal component analysis was used to investigate the dynamic information of intracellular movements before the SVM was applied. Results showed that their system was resistant to disturbances and therefore proved robust and applicable.
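The PCA-followed-by-SVM pipeline can be expressed compactly, for example with scikit-learn; the per-cell motion features below are placeholders for the intracellular-movement descriptors extracted from time-lapse images in [57], and the labelling rule is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# Placeholder data: per-cell descriptors of intracellular movement over a time-lapse sequence;
# label 1 = viable, 0 = non-viable (synthetic rule for illustration only)
X = rng.normal(size=(400, 50))
y = (X[:, :5].sum(axis=1) + rng.normal(0.0, 0.5, size=400) > 0).astype(int)

# PCA compresses the dynamic features before the SVM draws the viability decision boundary
viability_clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(viability_clf, X, y, cv=5).mean().round(3))
```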
Taste, colour, and foam appearance are key to any beer and important quality criteria. Three of the publications addressed these aspects. The authors in [14] applied artificial neural networks and support vector machines to predict the acetic acid content after fermentation. Acetic acid significantly influences the beer's taste, and its control during fermentation is therefore important for high-quality beer production. Partial least squares regression, a back-propagation neural network, a radial basis function neural network, and a least squares support vector machine were compared, with the latter performing best for predicting the acetic acid content in the study.
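A simplified comparison in this spirit is sketched below, contrasting partial least squares regression with a kernel support vector regressor (used here as a stand-in for the least squares SVM) on synthetic fermentation measurements; the feature set and the data-generating process are assumptions, not those of [14].

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Synthetic process measurements (e.g. temperature, pH, gravity, yeast count, ...) -> acetic acid [mg/L]
X = rng.normal(size=(300, 8))
y = 60 + 8 * X[:, 0] - 5 * X[:, 2] + 3 * X[:, 4] ** 2 + rng.normal(0.0, 2.0, size=300)

models = {
    "PLS regression": PLSRegression(n_components=4),
    "SVR (RBF kernel)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```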
Predicting chemical attributes and their effect on the taste of beer based on the brewing recipe was covered in the publication by [35]. The work focused on a mapping between the recipe and a range of five chemical attribute domains in order to optimize recipes and produce the desired chemical properties of the beer. Two deep learning architectures were used, a deep neural network and a recurrent neural network. They were tested on three tasks: classifying coarse- and fine-grained beer types as well as predicting the chemical attributes. For this highly non-linear mapping, both outperformed standard baselines, with the deep neural network giving the best performance and representing a promising approach for further research.
[56] assessed the effect of audible soundwaves on foam quality and beer bubbles, with five different frequencies applied during fermentation and carbonation. Samples were analyzed using a robotic pourer and multivariate data analysis to assess foam- and colour-related parameters, as well as by a trained sensory panel. Two artificial neural network models were tested on data from the robotic pourer to predict the fermentation type and the intensity of sensory descriptors. Without changing the aroma or flavour profile, the soundwaves increased the number of small bubbles, foamability, and foam stability.
In [16] and [55], similar machine vision applications for the inspection of empty beer bottles were presented. For the real-time determination of the inspection area, a method based on the histogram of edge points was used. To detect possible defects of the bottle wall and bottom, [16] proposed an algorithm based on local statistical characteristics, whereas [55] suggested an algorithm derived from the Canny edge detector. For low- and high-level inspection, both used two artificial neural networks. The performance and value of the algorithms were tested and proven with a developed prototype, where the experimental results showed the feasibility of the machine vision application.
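The edge-based part of such an inspector can be sketched with OpenCV as below. The threshold and the simple edge-density decision rule are illustrative placeholders, not the published algorithms; the cited systems additionally feed such features into two artificial neural networks.

```python
import cv2
import numpy as np

def inspect_bottle(gray_image: np.ndarray, edge_ratio_threshold: float = 0.02) -> bool:
    """Return True if the bottle image is suspected to contain a defect (illustrative rule)."""
    # Canny edge detection; scratches, cracks or residues raise the edge density,
    # whereas a clean wall or bottom produces few edge points
    edges = cv2.Canny(gray_image, 50, 150)
    edge_ratio = np.count_nonzero(edges) / edges.size
    return edge_ratio > edge_ratio_threshold

# Smoke test with a synthetic grey frame containing a bright "scratch"
frame = np.full((240, 240), 30, dtype=np.uint8)
cv2.line(frame, (40, 40), (200, 200), 255, 2)
print("defect suspected:", inspect_bottle(frame))
```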
In [54], ultrasonic measurements and machine learning were used to predict the alcohol concentration during beer fermentation. This approach can improve product quality, increase efficiency, and reduce the burden on operators by providing real-time, automatic alcohol concentration measurements, allowing for early detection of anomalous batches and more effective scheduling of production equipment. By providing real-time data on alcohol concentration, this approach can also help identify and address issues with the fermentation process before they result in defective products.
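The general shape of such a pipeline, extracting a few features from an ultrasonic echo and regressing the alcohol concentration, is sketched below with entirely synthetic waveforms; the features, the toy echo model, and the regressor are assumptions rather than the method of [54].

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)

def waveform_features(signal: np.ndarray) -> np.ndarray:
    """Condense an ultrasonic echo into a few descriptive features (illustrative choice)."""
    energy = float(np.sum(signal ** 2))
    peak = float(np.max(np.abs(signal)))
    arrival_index = int(np.argmax(np.abs(signal)))  # crude time-of-flight proxy
    return np.array([energy, peak, arrival_index])

def synthetic_echo(abv: float) -> np.ndarray:
    # Toy waveform whose echo arrival shifts with alcohol content [% ABV]
    t = np.arange(1000)
    return np.exp(-((t - (400 + 20 * abv)) ** 2) / 200.0) + rng.normal(0.0, 0.02, size=t.size)

abv_values = rng.uniform(0.0, 8.0, size=200)
X = np.vstack([waveform_features(synthetic_echo(a)) for a in abv_values])
model = GradientBoostingRegressor(random_state=0).fit(X, abv_values)
print("predicted ABV:", model.predict(waveform_features(synthetic_echo(5.0)).reshape(1, -1)).round(2))
```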
5.4 Critical discussion on the prevalence of ANN and SVM
The widespread use of Artificial Neural Networks (ANN) and Support Vector Machines (SVM) in brewing process optimisation, as identified in this review, can be attributed to the distinct advantages these methods offer over other machine learning algorithms. ANNs have a high capacity to handle complex, non-linear relationships, which are abundant in the brewing process. For example, during fermentation or recipe optimisation, the interactions between raw materials, environmental conditions, and process parameters are highly non-linear. ANNs can effectively model these relationships, allowing precise control and prediction of outcomes such as alcohol content, flavour profiles, and fermentation efficiency. In addition, ANNs are adaptive and scalable, making them suitable for real-time process monitoring and optimisation as more data becomes available.
Support Vector Machines, on the other hand, excel at classification tasks that are critical to quality control processes in brewing. SVMs are particularly effective in dealing with high-dimensional data sets, such as those used to analyse raw material quality (e.g. barley defect detection) or to monitor fermentation stages. Their ability to create clear decision boundaries ensures high accuracy in distinguishing between different classes or states in the brewing process, such as identifying optimal yeast cell viability or predicting the presence of unwanted fermentation by-products such as acetic acid.
Both ANN and SVM have demonstrated superior performance in improving process efficiency, reducing variability, and ensuring consistent product quality, making them the preferred choice in brewing-related machine learning research. Their versatility and robustness across different brewing applications provide clear advantages over traditional machine learning models such as decision trees or k-nearest neighbours, which may not perform as well when dealing with complex, dynamic, or high-dimensional data.
6 Limitations of the paper and implications for further research
During the analysis of the identified publications, we found that different ML algorithms are used at various stages of the brewing process. However, the conclusions that can be drawn from these publications are limited, as only 21 relevant papers were identified through the systematic literature review. The review also focused on evaluating the papers within the developed framework, providing a general overview of the current state of research in the brewing process. The limitations described here refer specifically to the review process itself, rather than to the identified publications. Several aspects were beyond the scope of our review. For instance, the review did not delve into the mathematical details of the algorithms considered, nor did it consider other applications of artificial intelligence in the beer industry, such as chemical characterization using electronic noses or tongues. Additionally, the review only included published literature and did not consider any unpublished or grey literature, which may contain valuable insights and information on the use of machine learning in the brewing process. The review also did not extensively examine the practical implementation or effectiveness of the machine learning tasks discussed in the included studies, but rather provided a broad overview of their application in various stages of the brewing process.
Despite the review's individual limitations, it is clear from the publications analysed that further research is needed on the practical implementation and effectiveness of machine learning in the brewing industry. This research could focus on evaluating the real-world impact of these approaches on efficiency and quality in the brewing process. In addition, exploring the application of other machine learning algorithms, such as Random Forests (RF), K-Nearest Neighbours (KNN), and reinforcement learning, could open up new optimisation opportunities. Among these methods, Random Forest stands out as particularly suited to multi-variable decision-making tasks, as it creates multiple decision trees and combines them to improve prediction accuracy. In brewing, RF could be used in areas such as predicting batch quality or identifying optimal process parameters based on numerous input variables such as temperature, pH, and raw material quality. Similarly, K-Nearest Neighbours is valuable for clustering and pattern recognition, making it suitable for classifying different brewing stages or identifying similarities between brewing processes. It could also be used to group similar recipes or process variations, enabling more efficient experimentation and recipe development. Investigating these methods may yield tailored solutions that complement or enhance the effectiveness of ANN and SVM in the brewing process. Reinforcement learning, which focuses on optimising actions through trial and error to maximise long-term benefits, has significant potential for continuous, dynamic processes such as fermentation or energy management. By continuously learning and adapting to new data, reinforcement learning algorithms could optimise brewing conditions in real time, improving both the efficiency and consistency of production. Additionally, it would be useful to explore ways in which the availability of brewing products can be increased through the use of AI. So far, this topic has only been addressed in one paper.
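As a concrete, hedged starting point for the Random Forest direction suggested above, the sketch below predicts an off-spec label from routinely logged batch variables; the variable names, thresholds, and data are synthetic and purely illustrative, not taken from any of the reviewed studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Per-batch process log: mash temperature [degC], wort pH, pitch rate, fermentation temperature [degC], OG
X = rng.normal(loc=[65.0, 5.3, 1.0, 19.0, 1.05],
               scale=[1.5, 0.15, 0.2, 1.0, 0.005], size=(500, 5))
# Synthetic quality label: batches drifting far from target mash temperature or pH are "off-spec"
y = (np.abs(X[:, 0] - 65.0) + 10.0 * np.abs(X[:, 1] - 5.3) > 2.5).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
rf.fit(X, y)
importances = dict(zip(["mash_temp", "wort_pH", "pitch_rate", "ferm_temp", "OG"],
                       rf.feature_importances_.round(3)))
print("feature importances:", importances)
```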
The identified gaps in the literature, as well as potential future research directions, are summarized in Table 6.
7 Conclusion
In this article, a first systematic literature review on ML in the brewing process was conducted to offer a condensed overview of the publications on the topic. Furthermore, this review serves the research community as basic literature: it gives an overview of the current state of the art and shows research gaps. It also provides practitioners with a clear overview of solutions that have already been applied and shows where these have been successfully implemented. The research guides possible areas of application and the respective available approaches to improve the brewing process based on machine learning. We conducted the review according to vom Brocke et al. [17] and implemented dedicated SLR concepts from other authors. 4,178 publications from five different databases were identified, of which 21 publications were further analyzed and synthesized in the core of our literature research. Based on our developed framework, considering the brewing process step, area of improvement, machine learning task, and machine learning algorithm, the publications were classified.
The main finding of the research is that, despite the general trend of increasing artificial intelligence research, the use of ML in the brewing industry has seen little growth in the past 20 years. Machine learning has been applied in various areas of the brewing process, including raw material selection and fermentation, with the aim of improving process efficiency, performance, and quality. Another main finding regarding the applied machine learning tasks and algorithms is that classification and regression are the most commonly used tasks, while the artificial neural network is the most frequently applied algorithm, followed by support vector machines. These approaches and algorithms have been utilized for tasks such as predicting energy consumption and optimizing brewing recipes, with the goal of improving process efficiency, performance, and quality. A further main finding is that there is only one publication on the relationship between the recipe and the desired chemical properties of beer.
Furthermore, we identified theoretical and practical implications for further research. The review showed that some of the presented applications are still in their development phase and that there is much potential for further research. However, despite its limitations, the paper still offers, to a certain extent, practical guidance to practitioners in the brewing industry on how to implement machine learning applications in their breweries.
Table 6 Overview of implications for future research

Research area | Identified gaps | Future research directions
Machine learning algorithms | Algorithms such as Random Forest (RF), K-Nearest Neighbours (KNN), and reinforcement learning are underexplored in brewing applications | Investigation of the potential of these algorithms for optimization tasks such as predicting quality and identifying process parameters
Application across brewing steps | ML has been applied unevenly across the different brewing process steps, with some steps receiving little attention | Exploration of the application of machine learning at under-researched stages of brewing, such as fermentation and packaging
Diversity of algorithms | Most studies focus on a narrow set of algorithms such as ANN and SVM | Widening the scope to include alternative machine learning algorithms, including those optimized for clustering and decision-making
Real-world impact of ML applications | No studies have examined the entire brewing process as a cohesive system for AI application | Investigation of how AI can be applied across the entire brewing process in a real brewery, providing insights into the system-wide benefits of AI
Author contributions All authors contributed to the study conception and design. Data collection and analysis was performed by Philipp Nettesheim. The first draft of the manuscript was written by Philipp Nettesheim and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding Not applicable.
Data availability No datasets were generated or analysed during the current study.
Code availability Not applicable.
Declarations
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
References
1. Stewart GG. Brewing: the evolution of a tradition into a technology. Ingenia, no. 11, pp. 31–35, Feb. 2002. Available: https://www.ingenia.org.uk/Ingenia/Articles/21ca7fbb-3407-4fe4-82be-a6aab7bf7541.
2. Curry A. A 9,000-year love affair. NGS. 321(2):30–53. Available: https://www.nationalgeographic.com/magazine/2017/02/alcohol-discovery-addiction-booze-human-culture/.
3. Willaert R. The beer brewing process: wort production and beer fermentation. In: Hui YH, editor. Handbook of food products manufacturing. Hoboken: Wiley; 2007. p. 443–506.
4. Wuest T, Weimer D, Irgens C, Thoben K-D. Machine learning in manufacturing: advantages, challenges, and applications. Prod Manuf Res. 2016;4(1):23–45. https://doi.org/10.1080/21693277.2016.1192517.
5. Weichert D, Link P, Stoll A, Rüping S, Ihlenfeldt S, Wrobel S. A review of machine learning for the optimization of production processes. Int J Adv Manuf Technol. 2019;104:1889–902. https://doi.org/10.1007/s00170-019-03988-5.
6. Woestmann R, Reckelkamm T, Deuse J, Kimberger J, Temme F, Schlunder P, Klinkenberg R. Datengetriebene Prozessoptimierung in der Getränkeindustrie. Fabriksoftware. 2019;24(3):21–4.
7. Kozłowski M, Górecki P, Szczypiński PM. Varietal classification of barley by convolutional neural networks. Biosyst Eng. 2019;184:155–65. https://doi.org/10.1016/j.biosystemseng.2019.06.012.
8. Szczypiński PM, Klepaczko A, Kociolek M. Barley defects identification. Proc 10th Int Image Sig, Ljubljana, Slovenia. 2017. https://doi.org/10.1109/ISPA.2017.8073598.
9. Feizi A, et al. Rapid, portable and cost-effective yeast cell viability and concentration analysis using lensfree on-chip microscopy and machine learning. Lab Chip. 2016;16(22):4350–8. https://doi.org/10.1039/c6lc00976j.
10. Sarabia EG, Llata JR, Fernandez D, Oria JP, Landaluce R. Lauter tun control by neural networks. Proc ICIIS. 1999. https://doi.org/10.1109/ICIIS.1999.810268.
11. Song A, Ciesielski V, Rogers P. Vision system development by machine learning: mashing assessment in brewing. Appl Artif Intell. 2001;15(8):777–95. https://doi.org/10.1080/088395101317018609.
12. Defernez M, Foxall RJ, O'Malley CJ, Montague G, Ring SM, Kemsley EK. Modelling beer fermentation variability. J Food Eng. 2007;83(2):167–72. https://doi.org/10.1016/j.jfoodeng.2007.02.033.
13. Ławryńczuk M. Online set-point optimisation cooperating with predictive control of a yeast fermentation process: a neural network approach. Eng Appl Artif Intell. 2011;24(6):968–82. https://doi.org/10.1016/j.engappai.2011.04.007.
14. Zhang Y, Jia S, Zhang W. Predicting acetic acid content in the final beer using neural networks and support vector machine. J Inst Brew. 2012;118(4):361–7. https://doi.org/10.1002/jib.50.
15. Bai J, Pu T, Xing J, Niu G, Zhang S, Liu Q. Research on energy consumption analysis of beer brewing process. Proc EMEIT, Harbin, China, pp. 182–185, 2011. https://doi.org/10.1109/EMEIT.2011.6022892.
16. Duan F, Wang Y-N, Liu H-J, Li Y-G. A machine vision inspector for beer bottle. Eng Appl Artif Intell. 2007;20(7):1013–21. https://doi.org/10.1016/j.engappai.2006.12.008.
17. vom Brocke J, et al. Reconstructing the giant: on the importance of rigour in documenting the literature search process. Proc ECIS. 2009. Available: https://aisel.aisnet.org/ecis2009/161/.
18. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. https://doi.org/10.1371/journal.pmed.1000097.
19. Weißer T, Saßmannshausen T, Ohrndorf D, Burggräf P, Wagner J. A clustering approach for topic filtering within systematic literature reviews. MethodsX. 2020;22(7):100831. https://doi.org/10.1016/j.mex.2020.100831.
20. Burggräf P, Wagner J, Koke B, Steinberg F. Approaches for the prediction of lead times in an engineer to order environment—a systematic review. IEEE Access. 2020;8:142434–45. https://doi.org/10.1109/ACCESS.2020.3010050.
21. Burggräf P, Wagner J, Heinbach B. Bibliometric study on the use of machine learning as resolution technique for facility layout problems. IEEE Access. 2021;9:22569–86. https://doi.org/10.1109/ACCESS.2021.3054563.
22. Meussdoerffer FG. A comprehensive history of beer brewing. In: Eßlinger HM, editor. Handbook of brewing: processes, technology, markets. Weinheim: Wiley; 2009. p. 1–42.
23. Nelson M. The barbarian's beverage: a history of beer in ancient Europe. New York: Routledge; 2005.
24. Liu L, Wang J, Rosenberg D, Zhao H, Lengyel G, Nadel D. Fermented beverage and food storage in 13,000 y-old stone mortars at Raqefet Cave, Israel: investigating Natufian ritual feasting. J Archaeol Sci Rep. 2018;21:783–93. https://doi.org/10.1016/j.jasrep.2018.08.008.
25. Briggs DE, Brookes PA, Stevens R, Boulton CA. Brewing: science and practice. Amsterdam: Elsevier; 2004.
26. Eßlinger HM, editor. Handbook of brewing: processes, technology, markets. Weinheim: Wiley; 2009. Available: http://deposit.d-nb.de/cgi-bin/dokserv?id=3152687&prov=M&dok_var=1&dok_ext=htm.
27. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 2006;27(4):12. https://doi.org/10.1609/aimag.v27i4.1904.
28. Cambridge Advanced Learner's Dictionary & Thesaurus. Machine learning. [Online]. Available: https://dictionary.cambridge.org/dictionary/english/machine-learning. Accessed 29 Dec 2020.
29. Mitchell TM. Machine learning. New York: McGraw-Hill; 1997.
30. Michie D, Spiegelhalter DJ, Taylor CC. Machine learning, neural and statistical classification. 1994.
31. Alpaydin E. Introduction to machine learning. 4th ed. Cambridge: The MIT Press; 2020.
32. Shalev-Shwartz S, Ben-David S. Understanding machine learning: from theory to algorithms. New York: Cambridge University Press; 2014.
33. Mohri M, Rostamizadeh A, Talwalkar A. Foundations of machine learning. 2nd ed. Cambridge: The MIT Press; 2018.
34. Murphy KP. Machine learning: a probabilistic perspective. Cambridge: The MIT Press; 2012.
35. Ermi G, Ayton E, Price N, Hutchinson B. Deep learning approaches to chemical property prediction from brewing recipes. IJCNN. 2018. https://doi.org/10.1109/IJCNN.2018.8489492.
36. Paré G, Trudel M-C, Jaana M, Kitsiou S. Synthesizing information systems knowledge: a typology of literature reviews. Inf Manage. 2015;52(2):183–99. https://doi.org/10.1016/j.im.2014.08.008.
37. Cooper HM. Organizing knowledge syntheses: a taxonomy of literature reviews. Knowl Soc. 1988;1(1):104–26. https://doi.org/10.1007/BF03177550.
38. Torraco RJ. Writing integrative literature reviews: guidelines and examples. Hum Resour Dev Rev. 2005;4(3):356–67. https://doi.org/10.1177/1534484305278283.
39. Rowley J, Slack F. Conducting a literature review. Manage Res News. 2004;27(6):31–9. https://doi.org/10.1108/01409170410784185.
40. Booth A. "Brimful of STARLITE": toward standards for reporting literature searches. J Med Libr Assoc. 2006;94(4):421-e205.
41. Aggarwal CC, Zhai C. A survey of text clustering algorithms. In: Aggarwal CC, Zhai C, editors. Mining text data. Boston: Springer; 2012. p. 77–128.
42. Adinugroho S, Sari YA, Fauzi MA, Adikara PP. Optimizing K-means text document clustering using latent semantic indexing and pillar algorithm. In: Proc. 5th Int. Symp. Comput. Bus. Intell., Dubai, United Arab Emirates, pp. 81–85, 2017.
43. Levy Y, Ellis TJ. A systems approach to conduct an effective literature review in support of information systems research. Inf Sci. 2006;9:181–212. https://doi.org/10.28945/479.
44. Salipante P, Notz W, Bigelow J. A matrix approach to literature reviews. Res Organiz Behav Annu Ser Anal Essays Crit Rev. 1982;4(1):321–48.
45. Webster J, Watson RT. Analyzing the past to prepare for the future: writing a literature review. MIS Quart. 2002;26(2):13–23.
46. Nakajima S. Introduction to TPM: total productive maintenance. Cambridge: Productivity Press; 1988.
47. Duda RO, Hart PE. Pattern classification. 2nd ed. New York: Wiley; 2006.
48. Draper NR, Smith H. Applied regression analysis. 3rd ed. New York: Wiley; 1998.
49. Jain AK, Murty MN, Flynn PJ. Data clustering: a review. ACM Comput Surv. 1999;31(3):264–323. https://doi.org/10.1145/331499.331504.
50. Caruana R, Niculescu-Mizil A. An empirical comparison of supervised learning algorithms. Proc. of the 23rd ICML, Pittsburgh, PA, USA, pp. 161–168, 2006. https://doi.org/10.1145/1143844.1143865.
51. Szczypiński PM, Klepaczko A, Zapotoczny P. Identifying barley varieties by computer vision. Comput Electron Agric. 2015;110:1–8. https://doi.org/10.1016/j.compag.2014.09.016.
52. Hou D, Zhou Z. A novel measurement scheme for periodic fouling in recipe alternation based on hybrid fuzzy neural network. In: Int Conf Syst Man Cybern, Waikoloa, HI, USA, pp. 1280–1285, 2005. https://doi.org/10.1109/ICSMC.2005.1571323.
53. Hou D-B, Zhou Z-K, Zhang G-X. On-line measurement for the BHE fouling of brewery wort evaporator using a soft sensing approach. In: Int. Conf. Ind. Tech., Maribor, Slovenia, pp. 95–98, 2003. https://doi.org/10.1109/ICIT.2003.1290248.
54. Bowler A, Escrig J, Pound M, Watson N. Predicting alcohol concentration during beer fermentation using ultrasonic measurements and machine learning. Fermentation. 2021;7:34. https://doi.org/10.3390/fermentation7010034.
55. Duan F, Wang Y-N, Liu H-J, Tan W. Empty bottle inspector based on machine vision. Proc Int Conf Mach Learn Cyb, Shanghai, China. 2004;6:3845–50. https://doi.org/10.1109/ICMLC.2004.1380507.
56. Gonzalez Viejo C, et al. The effect of soundwaves on foamability properties and sensory of beers with a machine learning modeling approach. Beverages. 2018;4(3):53. https://doi.org/10.3390/beverages4030053.
57. Wei N, Flaschel E, Saalbach A, Twellmann T, Nattkemper TW. Reagent-free automatic cell viability determination using neural networks based machine vision and dark-field microscopy in Saccharomyces cerevisiae. Conf. Proc. IEEE Eng. Med. Biol. Soc., Shanghai, China, pp. 6305–6308, 2005. https://doi.org/10.1109/IEMBS.2005.1615939.
58. Bowler AL, Pound MP, Watson NJ. Domain adaptation and federated learning for ultrasonic monitoring of beer fermentation. Fermentation. 2021;7(4):253. https://doi.org/10.3390/fermentation7040253.
59. Hlangwani E, Doorsamy W, Adebiyi JA, Fajimi LI, Adebo OA. A modeling method for the development of a bioprocess to optimally produce umqombothi (a South African traditional beer). Sci Rep. 2021;11(1):20626. https://doi.org/10.1038/s41598-021-00097-w.
60. Riverol C, Napolitano V. Estimation of the overall heat transfer coefficient in a tubular heat exchanger under fouling using neural networks. Application in a flash pasteurizer. Int Commun Heat Mass Transfer. 2002;29(4):453–7. https://doi.org/10.1016/S0735-1933(02)00342-1.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.