FULL PAPER
Flow chemistry for process optimisation using design of experiments
Connor J. Taylor¹ · Alastair Baker¹ · Michael R. Chapman¹ · William R. Reynolds¹ · Katherine E. Jolley¹ · Graeme Clemens² · Gill E. Smith³ · A. John Blacker¹ · Thomas W. Chamberlain¹ · Steven D. R. Christie⁴ · Brian A. Taylor² · Richard A. Bourne¹
Received: 6 September 2020 / Accepted: 14 December 2020
© The Author(s) 2021
Abstract
Implementing statistical training into undergraduate or postgraduate chemistry courses can provide high-impact learning expe-
riences for students. However, the opportunity to reinforce this training with a combined laboratory practical can significantly
enhance learning outcomes by providing a practical bolstering of the concepts. This paper outlines a flow chemistry laboratory
practical for integrating design of experiments optimisation techniques into an organic chemistry laboratory session in which
students construct a simple flow reactor and perform a structured series of experiments followed by computational processing and
analysis of the results.
Keywords Flow chemistry · Design of experiments · Optimisation · Curriculum · Hands-on learning
Introduction
Flow chemistry is increasing in popularity as synthetic chem-
ists continue to discover the numerous advantages afforded to
them by swapping their round-bottom flasks and condensers
for pumps and tubes [1]. The rate of adoption of continuous
flow chemistry is continuing to grow, as more enabling tech-
nologies are developed and more research groups are building
their own reactor platforms [2–4]. Whilst there are many instances of harnessing the capabilities of flow chemistry for synthesis over traditional batch methods [5–7], there are also groups that have used these setups to optimise chemical processes. Flow optimisation by Design of Experiments (DoE) is a very useful and efficient method, where examples have been reported with varying experimental criteria, whether this be yield, purity, E-factor etc. [8–12]. DoE as a technique can be used for a number of applications aside from chemical reactions, and has been reported often in the pedagogical literature [13–16]. The teaching of this technique provides students with a statistical basis for their experimentation, with the aim of solidifying transferrable skills that ultimately lead them to becoming well-rounded scientists. However, this optimisation resource is still under-utilised in a lab setting, with one-factor-at-a-time (OFAT) optimisation approaches often substituting as a method for chemical process optimisation and understanding [17–19]. The aim of this paper is to provide a flow-
chemistry-specific optimisation case study, where students
can learn DoE and statistics by performing a continuous flow
based practical experiment, as well as experiencing and over-
coming challenges that they would not usually encounter if
they were running a synthetic experiment in batch.
The OFAT approach is a common method of optimisation,
especially in academia, in which experiments guided by scien-
tific intuition are performed, by fixing all process factors except
for one [20]. These factors are experimental conditions (such as
temperature, reagent stoichiometry, reaction time etc.) which
when combined, make a multi-dimensional space where there
are a large number of possible combinations of these factors to
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s41981-020-00135-0.

* Richard A. Bourne
R.A.Bourne@leeds.ac.uk

1 Institute of Process Research and Development, School of Chemistry and School of Chemical and Process Engineering, University of Leeds, Leeds LS2 9JT, UK
2 Chemical Development, Pharmaceutical Technology & Development, Operations, AstraZeneca, Macclesfield, UK
3 University of Southampton, University Road, Southampton SO17 1BJ, UK
4 Department of Chemistry, Loughborough University, Epinal Way, Loughborough LE11 3TU, UK
make up one experiment - this is called the parameter space,
and is constrained by the lower and upper limits of each factor
(e.g. max and min temperature). After the best value for one factor has been identified, another set of experiments is executed to optimise another factor, and so on until all factors have been optimised and the scientist believes that they have arrived at the optimal reaction conditions [21, 22]. However, this method gives an incomplete picture of the chemical process, as it disregards any synergistic effects between factors in the multi-dimensional and complex parameter space. This means that interactions between the experimental factors are not considered. An example of this is a reaction run at high temperature: the effect of temperature may differ between a short and a long reaction time, so there may not be a linear relationship between these factors and the desired output, meaning that a change in temperature after the optimisation of the reaction time could lead to a suboptimal result [17, 23, 24].
As research laboratories are diversifying their equipment,
by incorporating flow and automation technologies, it is also
necessary for chemists to evolve at the same pace, by diversi-
fying their skillsets to fully harness the capabilities and ways
of working enabled by this new equipment. Synthetic chem-
ists are embracing facets of process chemistry, chemical engi-
neering, analytical chemistry, and programming, to name a
few. Concurrently, better understanding and adoption of reac-
tion optimisation methods should be implemented: OFAT op-
timisations need to be replaced by more robust and more ef-
ficient techniques [25–28].
In this paper, the use of a structured experimental
design integrated with a flow chemistry platform is re-
ported. It is shown how this can be used as a teaching
resource to introduce students to performing flow chem-
istry experiments and to better understand the type of
data required for the optimisation of chemical processes.
We herein report a demonstration of Design of Experiments for teaching the next generation of chemistry in a practical lab setting, whereby a chemical process with a number of possible products is optimised for the highest yield of a particular product. The chemical process chosen to be optimised was the SNAr reaction between 2,4-difluoronitrobenzene, 1, and pyrrolidine, 2, to form the desired ortho-substituted product, 3, and impurities 4 and 5, shown in Scheme 1.
Learning objectives
• To set up a flow chemistry system to execute flow experiments.
• To methodically plan flow experiments using DoE.
• To statistically analyse DoE results and generate empirical models for an experimental data set.
• To use DoE models to optimise an SNAr flow process.
Design of experiments
DoE is a statistical method of reaction optimisation that is often practiced in industry [25], but is less commonly used in academia, where an OFAT approach is much more common.
Although OFAT can give an idea as to how particular factors
influence the yield of a reaction, the parameters of interest are
explored less comprehensively and no indication of how these
factors affect each other and themselves at varying levels is
obtained. When using a DoE approach, however, the entire
parameter space is mapped in an efficient manner, which ex-
plores multidimensional space at the same time. This is be-
cause in each sequential experiment, multiple factors are
changed at the same time. A comparison of the parameter
space exploration by these two methods is shown in Fig. 1,
by observing three experimental factors for the optimisation of
the demonstration reaction: temperature (°C), residence time
(min) and pyrrolidine equivalents. It is typically the case that
OFAT experiments mostly explore individual planes of pa-
rameter space which makes it difficult to infer the overall
space behaviour, whereas experimental designs can interpo-
late factor interactions much more effectively. This is true
regardless of the number of experiments undertaken in either
approach. The face-centred central composite (CCF) design
shown in Fig. 1 splits each experimental factor into different levels. These levels, (−1, 0, +1), are named by convention and correspond to the degree of the experimental factor, where −1 is the lower bound of the parameter space, +1 is the upper bound and 0 is the midpoint. For example, if the experimental bounds for residence time were between 1 and 5 minutes, the levels would be: 1 minute (−1), 3 minutes (0) and 5 minutes (+1). These levels are defined to ensure that all areas of parameter space can be explored, regardless of the range of the factors.
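For readers who prefer to generate the design outside a dedicated DoE package, the coded CCF point set is easy to construct directly. The sketch below is a minimal illustration (not the authors' workflow): it builds the 17 coded runs (8 factorial corners, 6 face-centred axial points, 3 centre points) with numpy and maps them onto the factor ranges used later in this practical.

```python
import itertools
import numpy as np

# Coded face-centred central composite (CCF) design for three factors:
# 8 factorial corners, 6 face-centred axial points and 3 centre points.
corners = np.array(list(itertools.product([-1, 1], repeat=3)))
axial = np.array([[sign * (i == j) for j in range(3)]
                  for i in range(3) for sign in (-1, 1)])
centres = np.zeros((3, 3))
coded = np.vstack([corners, axial, centres])               # 17 runs, coded units

# Map coded levels (-1, 0, +1) onto the real factor bounds used in this paper.
names = ["residence time /min", "temperature /degC", "pyrrolidine eq."]
lows = np.array([0.5, 30.0, 2.0])
highs = np.array([3.5, 70.0, 10.0])
real = lows + (coded + 1) / 2 * (highs - lows)

for run, settings in enumerate(real, start=1):
    print(f"run {run:2d}: " + ", ".join(
        f"{name} = {value:g}" for name, value in zip(names, settings)))
```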
These structured experiments allow statistical models to be
constructed from the experimental results that accurately de-
scribe the changes in responses to experimental factor chang-
es. If a DoE analysis tool such as MODDE (from Umetrics) or
Design-Expert (from Stat-Ease) is used, the generation of
these models is performed easily and intuitively. Empirical
models, made up of experimental responses, can then be used
to predict further experimental results based on how the model
weights a particular input variable. These variables can simply
be the experimental factors, but can also be interaction terms
between the different factors, or squared interactions of the
same factor. These interaction and squared terms indicate
how experimental factors influence the reaction output, when
other factors are changed alongside them. In the case of our SNAr example, it may be insufficient to describe the experimental data by simply incorporating model terms of residence time, temperature and pyrrolidine equivalents. It may be significant for the modelling of the data to include an interaction term between residence time and temperature, meaning in real terms that there is a higher influence of residence time/temperature at higher residence times/temperatures. Similarly, a squared temperature model term could better describe larger effects of temperature changes when the temperature is generally higher, meaning that temperature has a non-linear effect.
These interaction considerations can give a better descrip-
tion of the experimental data, as the synergistic effects be-
tween the factors are also incorporated into the empirical mod-
el. In this case, an empirical model is a purely statistical rep-
resentation of the experiments and their outcomes, as opposed
to a physical model determined by the underlying chemistry.
This model can then allow response surfaces to be plotted and
optimum operating regions to be identified, by interpolating
the areas between the equidistant experimental points.
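In generic notation (not tied to MODDE's internal representation), such an empirical model is a quadratic response surface, where y is a measured response such as the yield of 3, the x_i are the factors, the β coefficients are estimated from the experimental results and ε is the residual error:

```latex
y = \beta_0 + \sum_{i}\beta_i x_i + \sum_{i<j}\beta_{ij} x_i x_j + \sum_{i}\beta_{ii} x_i^2 + \varepsilon
```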
In this paper, we describe the use of a CCF design in the
MODDE software. After already determining that the three
factors of residence time, temperature and pyrrolidine equiv-
alents are significant, the CCF optimisation design identifies
all interaction and squared terms between these factors. The
generated model will then be used to portray the entire param-
eter space, and hence identify the optimum operating condi-
tions for the highest yield of the ortho-substituted product, 3.
There are also other experimental designs one can consider
using depending on the outcomes that are desired, but these
are not covered in this paper [29].
Necessary equipment
In order to run the experiment as described, it is recommended
to have the following equipment and chemicals. A full list of
recommended vendors is located in the ESI.
• PTFE tubing, 1/16″ internal diameter.
• Tubing fittings.
• A tubing cutter.
• Two syringe pumps, or equivalent.
• Three stirrer-hotplates to place water baths on.
• Three water baths, 500 mL.
• 2,4-Difluoronitrobenzene (CAS: 446-35-5).
• Pyrrolidine (CAS: 123-75-1).
Scheme 1 The SNAr reaction of interest, where the yield of the ortho-substituted product, 3, is to be optimised in a flow setup (1: 2,4-difluoronitrobenzene, 2: pyrrolidine, 3: desired ortho-substituted product, 4 and 5: impurities)
Fig. 1 A comparison of the parameter space exploration when conducting an OFAT optimisation alongside a structured DoE design, where each plotted point represents an experiment. Note also that an OFAT optimisation does not require a pre-determined number of experiments, and may or may not exceed the number of experiments in an experimental design
• Triethylamine (CAS: 121-44-8).
• Hydrochloric acid (CAS: 7647-01-0).
• Common laboratory solvents: ethanol, water, isopropyl alcohol.
• Access to HPLC, or an equivalent quantitative analytical technique.
• MODDE Pro, or equivalent DoE software.
Experimental setup
The experimental bounds for each of the factors are: residence
time (0.5 to 3.5 minutes), temperature (30 to 70 °C) and equiv-
alents of pyrrolidine (2 to 10). The concentrations of 2,4-
difluoronitrobenzene and triethylamine are kept constant.
The rationale behind these pre-determined experimental
bounds came from the kinetic understanding of the work re-
ported by Hone et al. on the same reaction [30]. The HPLC
peak areas are converted to relative concentration percent for each of the species, each of which is reported as an output for that particular experiment. The run order of the experiments was randomised, to prevent any extraneous (uncontrolled) variables affecting the results, as shown in Table 1.
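As a minimal illustration of this randomisation step (any DoE package will do this automatically; the run labels below simply follow the N1–N17 convention of Table 1), the planned runs can be shuffled once before execution:

```python
import random

# Planned experiments, labelled as in Table 1 (N1-N17).
planned = [f"N{i}" for i in range(1, 18)]

# Shuffle once, before any experiments are run, so that slow drifts
# (e.g. stock solution ageing) are not confounded with the factor settings.
random.seed(7)   # fixed seed only so that the printed order is reproducible
run_order = random.sample(planned, k=len(planned))
for position, experiment in enumerate(run_order, start=1):
    print(f"{position:2d}: {experiment}")
```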
When running the experiments, undergraduate students can
be placed into groups of 5 or 6. Recommended tasks within
the group can be split into: making up stock solutions, prepar-
ing the tubing, connecting the tubing, running/timing the ex-
periments, experimental sampling, running HPLC analysis
etc. However, in our case the HPLC calibration was conduct-
ed in advance by a trained instructor but this could be a task
for the students as part of the experimental procedure. It is
recommended also for students to read introductions to DoE
papers or seek advice from postgraduates or academic super-
visors prior to experimentation. Key introductory reading could include the reports by Krawczyk et al. [16] and Aggarwal et al. [20], as well as the book written by Antony [31], which are all useful resources.
The experimental flow setup is shown schematically in
Fig. 2, and pictorially in Fig. 3. Four reservoirs were used,
one containing 2,4-difluoronitrobenzene (0.1 M) and
triethylamine (0.11 M) in ethanol, then three other reservoirs
containing triethylamine (0.11 M) and varied pyrrolidine con-
centration (0.1 M, 0.5 M and 1 M) in ethanol. Each experi-
ment setup contains the 2,4-difluoronitrobenzene solution in
one syringe, and one of the triethylamine/pyrrolidine in etha-
nol solutions in the second syringe, depending on the low/
medium/high equivalents of pyrrolidine that were investigated
in a particular run. Harvard syringe pumps are used in each
experiment to pump the solutions into a length of PTFE tubing (1/16″ internal diameter, 6.3 cm, equal to 1 mL volume),
submerged in one of three water baths at 30 °C, 50 °C or
70 °C. Three water baths were set up so that there are no
waiting times between experiments for the water baths to
achieve the desired temperature. This is important as the lab time is the most crucial resource, and the experiments must be executed in a specific order; running each block of temperature experiments (for example, all 30 °C experiments at once) could introduce extraneous variables, and must be avoided.
Table 1 The experimental conditions run to perform the DoE study

ID  Run order  Residence time /min  Temperature /°C  Pyrrolidine eq.  Pump 1 flow /mL min⁻¹  Pump 2 flow /mL min⁻¹  Pump 2 conc. /M
N1 3 0.5 30 2 0.096 1.920 0.1
N2 7 3.5 30 2 0.014 0.274 0.1
N3 12 2 30 6 0.038 0.463 0.5
N4 16 0.5 30 10 0.180 1.820 1
N5 2 3.5 30 10 0.026 0.260 1
N6 8 2 50 2 0.024 0.480 0.1
N7 13 0.5 50 6 0.150 1.850 0.5
N8 1 2 50 6 0.038 0.463 0.5
N9 11 2 50 6 0.038 0.463 0.5
N10 17 2 50 6 0.038 0.463 0.5
N11 4 3.5 50 6 0.021 0.264 0.5
N12 9 2 50 10 0.045 0.455 1
N13 5 0.5 70 2 0.096 1.920 0.1
N14 14 3.5 70 2 0.014 0.274 0.1
N15 6 2 70 6 0.038 0.463 0.5
N16 10 0.5 70 10 0.180 1.820 1
N17 15 3.5 70 10 0.026 0.260 1
The pump flow rates and the concentration in Pump 2 were changed for each experiment to vary residence time and pyrrolidine equivalents, and the tubular reactor was placed in a different temperature water bath to vary temperature. The flow rates are calculated for a 1 mL reactor. Run order should be generated randomly. See Fig. 2 for further details.
Each experiment was allowed to reach steady-state by
equilibrating for 2 reactor volumes, meaning that for each
flow experiment, a wait time of two residence times is neces-
sary before collection of material for analysis. For example, if
the residence time for the reaction is 0.5 minutes, then 1 min-
ute of reaction mixture is purged to waste before steady-state
is established. For each experiment, the desired temperature is
reached by placing the tubular reactor in a separate water bath
at the corresponding temperature. Samples can then be taken
from the end of the reactor, by immediately quenching a few
droplets of material into a vial containing a drop of hydrochlo-
ric acid at the outlet of the flow system. This can then be
diluted with methanol before transferring to analysis. These samples can then be analysed by HPLC, or by other analytical techniques such as GC, provided that quantitative yields of each of the species can be obtained - this is shown in Fig. 2.
HPLC analysis was performed using an Ascentis Express C18 column (5 cm x 4.6 mm x 2.7 µm), using an isocratic method (51% water/49% acetonitrile, each reservoir containing 0.1% TFA) at a 1.5 mL min⁻¹ flow rate over a 2 minute HPLC run time. It is beneficial to have short analytical methods to allow fast analysis and turnaround between different sets of experimental conditions.
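For planning purposes, the combined flow rate and the steady-state purge for each run follow directly from the reactor volume and the residence time. The sketch below is an illustrative calculation only (it assumes the 1 mL reactor volume quoted for Table 1 and the two-reactor-volume equilibration described above); the split between the two pumps is then set separately by the required pyrrolidine equivalents.

```python
# Illustrative helper: combined pump flow rate and steady-state purge for a
# given residence time, assuming the 1 mL tubular reactor used for Table 1.
REACTOR_VOLUME_ML = 1.0          # assumed reactor volume
EQUILIBRATION_VOLUMES = 2        # reactor volumes purged before sampling

def flow_settings(residence_time_min: float) -> dict:
    total_flow = REACTOR_VOLUME_ML / residence_time_min        # mL/min, both pumps combined
    purge_time = EQUILIBRATION_VOLUMES * residence_time_min    # min before sampling
    purge_volume = EQUILIBRATION_VOLUMES * REACTOR_VOLUME_ML   # mL sent to waste
    return {"total flow /mL min-1": round(total_flow, 3),
            "purge time /min": purge_time,
            "purge volume /mL": purge_volume}

for tau in (0.5, 2.0, 3.5):      # the three residence-time levels in Table 1
    print(tau, flow_settings(tau))
```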
Hazards
Safety goggles and lab coats should be worn throughout the
course of the experiment. All handling of organic solvents and
preparation of solutions should be conducted inside the fume
hoods. Special care should be taken when handling concen-
trated hydrochloric acid to quench the reaction in the HPLC
vial. If any reagent is spilled on the body, wash the area with
copious amounts of water for at least 15 minutes. Consult the
MSDS for the specific guidance on handling each of the
chemicals. After experimentation, any tubing can be washed by pumping isopropyl alcohol through the reactor for 10 reactor volumes if the tubing is to be kept, or it can be discarded.
Fig. 2 A schematic of the experimental flow setup used for the SNAr reaction. The pyrrolidine concentration is changed for varying equivalent experiments, and the reactor is moved by hand into different water baths corresponding to the temperature that the experiment requires
Fig. 3 The flow setup used for the SNAr experimentation, where the tubular reactor is submerged in one of three different temperature water baths
Analysis, results and discussion
The full CCF DoE (shown in Fig. 1) was run using the experimental setup described in Figs. 2 and 3, which consisted of running the experiments shown in Table 2. Three centre-point experiments were also run throughout the course of data acquisition, to monitor the reproducibility of the experiments as time passed. These repeated experiments, or replicates, ensure that any extraneous variables are identified (uncontrolled variables that are being changed unknowingly, e.g. stock solution contamination or degradation). The outputs are shown as molar percentages of the starting material 2,4-difluoronitrobenzene (1), the desired product (3), the para-substituted impurity (4), and the di-substituted impurity (5). We assumed that each of the materials has an equivalent HPLC response and did not run prior calibrations with standards, although this could be done with additional time. Molar percentages were calculated using internal normalisation for each of the species, where the area of the HPLC peak for the species of interest was divided by the total summed HPLC area for all peaks, multiplied by 100. This is shown in the equation below, where (x) is the HPLC area for the species of interest, and (1)/(3)/(4)/(5) are the HPLC areas of the species in this study:

Molar percentage = (x) / [(1) + (3) + (4) + (5)] × 100
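The same internal normalisation can be written as a short helper function; the peak areas used below are purely hypothetical values for illustration.

```python
def molar_percentages(peak_areas: dict) -> dict:
    """Internal normalisation: each HPLC peak area divided by the summed area, times 100.

    Assumes equal HPLC response factors for all species, as in the main text.
    """
    total = sum(peak_areas.values())
    return {species: 100.0 * area / total for species, area in peak_areas.items()}

# Hypothetical peak areas for species (1), (3), (4) and (5) from one experiment.
example = {"(1)": 120.0, "(3)": 690.0, "(4)": 0.0, "(5)": 45.0}
print(molar_percentages(example))   # -> approximately 14.0, 80.7, 0.0, 5.3 %
```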
Using this dataset, MODDE can fit a model automatically using the 'Analysis wizard' tool. Full instructions can be found in the ESI. MODDE then fits a saturated model for each of the responses given. A saturated model is one in which all model terms, including all interactions and squared terms, are included in the model. When a saturated model is initially generated, the R² value is the largest value it can be. R² is a measure of how well a given model fits the data, usually represented as a number between 0 and 1. When a model uses all of the possible terms available to it, the variation in the experimental response is best described. This means that as R² tends to 1, more of the variation is explained by terms in the model, as closer to 100% of the experimental variation can be attributed to specific terms. However, saturated models typically contain non-significant model terms that lead to a low Q² value. The Q² value is the percentage of the variation of the response predicted by the model using cross validation, represented as a number between 0 and 1; simply put, Q² tells you how well the model can predict new data. For a useful model, it is necessary to have a high R² that explains the dataset well, as well as a high Q² that can interpolate new data points accurately. To achieve this, the model for each response must be edited to remove any non-significant terms. Figure 4 shows the coefficients plots for a particular response, in this case the response for the amount of the desired product, (3), which graphically indicates each model term (x axis) and its respective significance (y axis). Each of the model terms is scaled and centred, meaning factors with different units can be compared to determine the influence of model terms over the range of the factors studied.
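Q² is not specific to MODDE: a comparable statistic can be computed for any linear model as Q² = 1 − PRESS/SS_tot, where PRESS is the prediction error sum of squares from cross validation. The sketch below is a generic leave-one-out version on synthetic data (MODDE uses its own cross-validation grouping, so the values will not match its output exactly); it simply illustrates how adding meaningless terms inflates the fit but degrades predictive ability.

```python
import numpy as np

def q_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Leave-one-out Q2 = 1 - PRESS / total sum of squares for a linear model y ~ X."""
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

# Synthetic response that genuinely depends on x1, x2 and their interaction.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
y = 80 + 8 * x1 + 5 * x2 + 3 * x1 * x2 + rng.normal(0, 1, 30)

X_useful = np.column_stack([np.ones(30), x1, x2, x1 * x2])
X_bloated = np.column_stack([X_useful, rng.normal(size=(30, 20))])  # 20 junk terms

print(f"Q2, meaningful terms only: {q_squared(X_useful, y):.3f}")
print(f"Q2, with many junk terms:  {q_squared(X_bloated, y):.3f}")
```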
Table 2 The experimental dataset generated from the running of the DoE for the SNAr reaction
Run Run order Residence time /min Temperature /°C Pyrrolidine eq. (1) /% (3) /% (4) /% (5) /%
N1 3 0.5 30 2 79.7 20.3 0.0 0.0
N2 7 3.5 30 2 36.3 60.0 0.0 3.6
N3 12 2 30 6 29.6 66.4 0.0 4.1
N4 16 0.5 30 10 52.7 44.6 0.0 2.7
N5 2 3.5 30 10 10.9 83.9 0.0 5.2
N6 8 2 50 2 34.0 62.0 0.0 4.0
N7 13 0.5 50 6 41.2 55.3 0.0 3.5
N8 1 2 50 6 13.8 80.9 0.0 5.3
N9 11 2 50 6 14.9 79.9 0.0 5.2
N10 17 2 50 6 14.9 79.9 0.0 5.2
N11 4 3.5 50 6 6.9 87.3 0.0 5.8
N12 9 2 50 10 9.1 84.9 0.4 5.6
N13 5 0.5 70 2 49.1 47.8 0.0 3.1
N14 14 3.5 70 2 11.8 82.2 0.4 5.6
N15 6 2 70 6 4.7 88.1 1.2 6.0
N16 10 0.5 70 10 15.8 78.8 0.0 5.4
N17 15 3.5 70 10 0.5 91.0 2.5 6.0
The replicates in this data set are experiments: N8, N9, N10
Each model term has a respective uncertainty (represented in the plot as an error bar), and if that uncertainty overlaps with y = 0, then that model term can be deemed to be statistically non-significant; Fig. 4a illustrates this point. This is because there is a probability that the relative effect of the model term could be zero. The saturated model for the response of (3) is shown in Fig. 4b, where there are several significant model terms and two non-significant terms: Temp² and Temp*Eq. The R² and Q² measures are shown alongside the coefficients plot as a green bar and a blue bar respectively. Upon removal of the two non-significant terms, shown in Fig. 4c, the Q² value rises from 0.764 to 0.894, meaning that the predictability of the model is increased for an insignificant decrease in R².
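Outside MODDE, the same fit-and-prune workflow can be reproduced with standard statistics libraries. The sketch below uses Python's statsmodels on the response for (3) taken from Table 2; the column names are our own shorthand, and because the factors are left in raw units the coefficients will not match MODDE's scaled and centred plot (the pattern of significant terms should, however, be comparable).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Factor settings and the response for the desired product (3) from Table 2 (molar %).
data = pd.DataFrame({
    "time": [0.5, 3.5, 2, 0.5, 3.5, 2, 0.5, 2, 2, 2, 3.5, 2, 0.5, 3.5, 2, 0.5, 3.5],
    "temp": [30, 30, 30, 30, 30, 50, 50, 50, 50, 50, 50, 50, 70, 70, 70, 70, 70],
    "eq":   [2, 2, 6, 10, 10, 2, 6, 6, 6, 6, 6, 10, 2, 2, 6, 10, 10],
    "y3":   [20.3, 60.0, 66.4, 44.6, 83.9, 62.0, 55.3, 80.9, 79.9, 79.9,
             87.3, 84.9, 47.8, 82.2, 88.1, 78.8, 91.0],
})

# Saturated quadratic model: main effects, two-factor interactions and squared terms.
saturated = ("y3 ~ time + temp + eq + time:temp + time:eq + temp:eq"
             " + I(time**2) + I(temp**2) + I(eq**2)")
full = smf.ols(saturated, data=data).fit()
print(full.pvalues.round(3))     # terms with large p-values are candidates for removal

# Reduced model after dropping the temperature-squared and temperature*equivalents
# terms, mirroring the refinement described for Fig. 4c.
reduced = smf.ols("y3 ~ time + temp + eq + time:temp + time:eq"
                  " + I(time**2) + I(eq**2)", data=data).fit()
print(f"R2 saturated = {full.rsquared:.3f}, R2 reduced = {reduced.rsquared:.3f}")
```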
This process is then repeated for the other responses of compounds (4) and (5), shown in Fig. 5a/b and Fig. 6 respectively. For the response of (4), the saturated model (Fig. 5a) appears to describe the data well, as the R² is high; however, there are many non-significant terms. Because of this, the Q² is negative, meaning there is no acceptable degree of predictability to be obtained from the model. As these non-significant terms are removed, even more terms become non-significant, until the only significant term that remains is temperature (Fig. 5b), but the R² and Q² values are still very low. This is because the response for (4) remained largely unchanged throughout our experimentation, meaning it is difficult to model well, as there were no factors that could be shown to have a strong effect on the outcome of this response.
Fig. 4 The significance of model terms on the response for the desired product, (3). a The difference between significant and non-significant model terms. b The saturated model, R² = 0.990, Q² = 0.764. c The optimised model, R² = 0.986, Q² = 0.894. Time = residence time, Temp = temperature, Eq. = pyrrolidine equivalents
Conversely, the model for the response of (5) was found to be excellent without any need for further optimisation - the R² and Q² measures were both high, and the saturated model contained no non-significant model terms. This means that by using the same experimental data set, a secondary response can also be modelled and optimised for. This means that response surfaces for (5) can also be predicted without any further experimentation. Interestingly, it is not possible to do the same for the response for (4) due to the low formation of this product, as there are no changes in the experimental conditions that lead to a significant amount of this product being generated. This manifests itself in the uncertainty of the model terms, as most of the error bars for these model terms intersect y = 0 and therefore their relative effects are non-significant.
As the models were further optimised to have the highest R² and Q² possible, the optimum operating conditions for the production of the desired ortho-substituted product (3) could then be identified. By selecting the '4D Contour' option in MODDE, the response for (3) can be interpolated across the entire landscape of the parameter space, providing a total insight into the chemistry that could not be achieved by other means such as OFAT. This contour plot is shown in Fig. 7, which indicates clearly the yield of (3) that would be achieved with varying experimental factors. Figure 8 shows a similar plot of how the yield of the di-substituted impurity, (5), also changes with these differing inputs. It is important to note that to sensibly use contour plots, DoE model performance metrics such as R² and Q² must be good. This is also a significant point for the student learning and can be adapted into leading questions such as: 'Using the 4D Contour Plot, predict the yield of the major product at x, y and z experimental conditions?'.
Fig. 5 The significance of model terms on the response for (4). a The saturated model, R² = 0.864, Q² = −0.200. b The optimised model, R² = 0.246, Q² = −0.047
The optimum operating region for the highest yield of (3) has been identified using this DoE approach, whilst giving a full picture of the parameter space. The results show that high temperature, high residence times and high pyrrolidine equivalents lead to the highest yield of the desired product (3), as well as the highest yield of the di-substituted impurity (5).
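The same optimum-finding step can be reproduced programmatically by evaluating a fitted model over a dense grid of the parameter space and reporting the best point. In the sketch below the response function is a purely hypothetical stand-in for the fitted empirical model (substitute the predict() method of a real fit, for example the reduced model from the earlier sketch); only the grid-search pattern is the point here.

```python
import numpy as np

def predicted_yield(time, temp, eq):
    """Hypothetical stand-in for the fitted empirical model; replace with a real predict()."""
    return (15 + 10 * time - 1.4 * time**2 + 0.3 * temp
            + 2.4 * eq - 0.1 * eq**2 + 0.02 * time * temp)

# Dense grid over the experimental bounds used in this practical.
times = np.linspace(0.5, 3.5, 31)
temps = np.linspace(30, 70, 41)
eqs = np.linspace(2, 10, 33)
T, P, E = np.meshgrid(times, temps, eqs, indexing="ij")

yields = predicted_yield(T, P, E)
best = np.unravel_index(np.argmax(yields), yields.shape)
print(f"highest predicted yield {yields[best]:.1f}% at "
      f"{T[best]:.1f} min, {P[best]:.0f} degC, {E[best]:.1f} eq.")
```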
There are still other aspects of DoE that can be explored, such as model validity and reproducibility, predicted kinetic plots, 'Sweet Spot' visualisations and 'Optimizer' usage in MODDE. These tools can use the same data set to give further process understanding, and the empirical model can be exported to further explore responses such as E-factor, space-time yield etc. The same data set can also be used to build further models on multiple responses, each of which can be refined to give further understanding and predictability. This could be warranted if there were additional experimental needs, such as productivity of material. This can highlight areas where the highest yields are obtained in the shortest residence time, by trading some yield for quicker product generation.
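As a simple example of such a derived response, a productivity-style metric can be computed directly from the model output. The sketch below uses the standard definition of space-time yield (product formed per reactor volume per unit time); the product molar mass, the in-reactor substrate concentration and the two yield/residence-time pairs are illustrative assumptions only, loosely based on runs N17 and N16.

```python
# Illustrative derived response: space-time yield from predicted yield and residence time.
MW_PRODUCT_G_MOL = 210.2      # assumed molar mass of the ortho-substituted product (3)

def space_time_yield(yield_fraction, substrate_conc_M, residence_time_min):
    """Grams of product per litre of reactor volume per hour."""
    product_conc = yield_fraction * substrate_conc_M            # mol/L at the reactor outlet
    return product_conc * MW_PRODUCT_G_MOL / (residence_time_min / 60.0)

# A lower yield at a much shorter residence time can still give higher productivity.
print(space_time_yield(0.91, 0.05, 3.5))   # high yield, long residence time
print(space_time_yield(0.79, 0.05, 0.5))   # lower yield, short residence time
```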
Upon completion of the experimental work, students were asked to prepare a report on their findings - this can be in a word document or a research article format. Conveying their ability to report on statistical models and find optimum reaction conditions for the production of (3) serves as the main assessment criterion for this work, where > 90% of students were successful. Correct assignment of the optimum parameter regions indicates that they have performed the experiments correctly and should be considered when grading the report. Further questions can also be posed to the students, such as 'What are the advantages of running this reaction in flow?' and 'Why perform a DoE?'.
Fig. 6 The significance of model terms on the response for (5), showing the saturated model with no non-significant terms, R² = 0.997, Q² = 0.944
Fig. 7 The contour plot for the response of (3), showing how the yield of the ortho-substituted product changes with varying experimental conditions
These questions can enhance the student
learning experience as they are asked to reflect upon their work
directly. Sample questions and full answers with suggested
grading criteria are provided in the ESI.
Student feedback
This example has formed part of the EPSRC Dial-A-Molecule
Summer School in 2018 and 2019 targeted at 1st year PhD
students. The summer school was a lively and interactive event
and in addition to the experiment/analysis outlined also included
a series of lectures from academic and industrial experts.
Furthermore, practical sessions on 3D printing and an evening
session outlining Design of Experiments by designing and mak-
ing paper helicopters and optimising the helicopter geometry
were also conducted, simply to solidify the concepts of DoE
and their applications to various real-life scenarios. These exer-
cises create an equal baseline of background knowledge that
drives the use of the DoE methodology, which forces hypotheses
to be made from understandings of factor selection and level
setting, rather than undisclosed assumptions based on prior ex-
periences. The content was very well received, with 88% of the feedback rating the course as 'Good' or 'Very Good', and the students particularly valued the combination of practical and theoretical examples detailed in this publication.
Conclusions
It has been shown that by using a simple continuous flow setup,
consisting of syringe pumps, water baths and a method of quan-
titative analysis, alongside a methodical experimental technique
such as DoE, multistep chemical processes can be optimised
for a desired output. The effect of varying reaction conditions on
the outcome of a chemical reaction is explored, and this therefore allows a better understanding of the reaction system than an OFAT
approach. This particular experiment is run annually as part of the
undergraduate chemistry course at the University of Leeds, but
can be used as an exercise in teaching flow chemistry and opti-
misation to researchers at any level. Third year undergraduates
that select this optional project learn the theory of DoE as part of
the pre-laboratory preparation - these theory PowerPoint slides are
provided in the ESI. Depending on the experience of the students,
the experiment can be altered to constrain what each participant
will conduct experimentally and what is provided for them. The
experimental setup requires low cost equipment alongside com-
mon laboratory analytical equipment, and the experimentation
itself is suitable for undergraduates and upwards; all experimental
results can be obtained in a 2–3 hour lab session.
This experiment demonstrates that the outlined statistical
modelling methodologies provide a greater insight into process
optimisation than can be achieved by an OFAT approach and
represent some of the most efficient and effective data analysis
techniques to explain the chemistry and identify regions of
interest. As many exercises in undergraduate courses are based
around synthetic batch experiments, this continuous flow ex-
periment can be incorporated into the course as a different ap-
proach to carrying out a synthetic reaction and obtaining reac-
tion data, and simultaneously provide an opportunity to learn
about statistics and optimisation techniques. This also enables the students to work as part of a group to design and perform the
experiment, working towards a common goal, broadening their
skills and encouraging new ways of thinking.
Fig. 8 The contour plot for the response of (5), showing how the yield of the di-substituted product changes with varying experimental conditions

It is the hope of the authors that as the skillset required of a chemist is diversifying and expanding in sync with the increasing capabilities and technologies of a typical laboratory - so will the teaching of both chemistry and optimisation techniques for process development. We are making strides
globally in a positive and constructive way towards laborato-
ries which contain scientists with a wide variety of skillsets,
and this paper aims to serve as a guide to teaching a number of
these key skills, i.e. continuous flow synthesis, statistical data
analysis, experimental design and reaction optimisation. The
evolution of curricula, the paradigm shift of academic labs and
overall increased awareness of other methodologies means it
is now a very exciting time to be in a chemistry setting: where
being a chemist is more than just being a chemist.
Acknowledgements The authors thank the School of Chemical and
Process Engineering and the School of Chemistry at the University of
Leeds for their support, and EPSRC and AstraZeneca for their funding
and support. We would like to thank the following people for helping to
make the 2019 Dial-A-Molecule event such a success: Anna Slater
(University of Liverpool), Dan Tray (GlaxoSmithKline), Adam Price
(University of Loughborough), Laurence Coles (Added Scientific),
Matt Penny (UCL) and John Blacker (University of Leeds).
Compliance with ethical standards
Conflict of interest On behalf of all authors, the corresponding author
states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons
Attribution 4.0 International License, which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as
long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons licence, and indicate if
changes were made. The images or other third party material in this article
are included in the article's Creative Commons licence, unless indicated
otherwise in a credit line to the material. If material is not included in the
article's Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will
need to obtain permission directly from the copyright holder. To view a
copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
1. Plutschack MB et al (2017) The hitchhiker's guide to flow chemistry. Chem Rev 117(18):11796–11893
2. O'Brien M et al (2017) Harnessing open-source technology for low-cost automation in synthesis: flow chemical deprotection of silyl ethers using a homemade autosampling system. Tetrahedron Lett 58(25):2409–2413
3. Christensen M et al (2019) Development of an automated kinetic profiling system with online HPLC for reaction optimization. React Chem Eng 4(9):1555–1558
4. Noël T (2019) Flow into the chemistry curriculum. Chemistry World. https://www.chemistryworld.com/opinion/flow-into-the-chemistry-curriculum/4010382.article
5. McQuade DT, Seeberger PH (2013) Applying flow chemistry: methods, materials, and multistep synthesis. J Org Chem 78(13):6384–6389
6. Wegner J, Ceylan S, Kirschning A (2012) Flow chemistry – a key enabling technology for (multistep) organic synthesis. Adv Synth Catal 354(1):17–57
7. Baumann M, Baxendale IR (2015) The synthesis of active pharmaceutical ingredients (APIs) using continuous flow chemistry. Beilstein J Org Chem 11(1):1194–1219
8. Holmes N et al (2016) Online quantitative mass spectrometry for the rapid adaptive optimisation of automated flow reactors. React Chem Eng 1(1):96–100
9. Ingham RJ et al (2014) Integration of enabling methods for the automated flow preparation of piperazine-2-carboxamide. Beilstein J Org Chem 10(1):641–652
10. Mostarda S et al (2014) Glucuronidation of bile acids under flow conditions: design of experiments and Koenigs–Knorr reaction optimization. Org Biomol Chem 12(47):9592–9600
11. Cyr P et al (2013) Flow Heck reactions using extremely low loadings of phosphine-free palladium acetate. Org Lett 15(17):4342–4345
12. Nieuwland PJ et al (2011) Fast scale-up using microreactors: pyrrole synthesis from micro to production scale. Org Process Res Dev 15(4):783–787
13. Penteado JC, Masini JC (2011) Exploring liquid sequential injection chromatography to teach fundamentals of separation methods: a very fast analytical chemistry experiment. J Chem Educ 88(2):235–238
14. Hope WW, Johnson C, Johnson LP (2004) Tetraglyme trap for the determination of volatile organic compounds in urban air. Projects for undergraduate analytical chemistry. J Chem Educ 81(8):1182
15. McAllister GD, Parsons AF (2019) Going green in process chemistry: optimizing an asymmetric oxidation reaction to synthesize the antiulcer drug esomeprazole. J Chem Educ 96(11):2617–2621
16. Krawczyk T, Słupska R, Baj S (2015) Applications of chemiluminescence in the teaching of experimental design. J Chem Educ 92(2):317–321
17. Lendrem DW et al (2015) Lost in space: design of experiments and scientific exploration in a Hogarth universe. Drug Discov Today 20(11):1365–1371
18. Lendrem D, Owen MR, Godbert S (2001) DoE (design of experiments) in development chemistry: potential obstacles. Org Process Res Dev 5(3):324–327
19. Czitrom V (1999) One-factor-at-a-time versus designed experiments. Am Stat 53(2):126–131
20. Aggarwal VK, Staubitz AC, Owen MR (2006) Optimization of the Mizoroki–Heck reaction using design of experiment (DoE). Org Process Res Dev 10(1):64–69
21. Lovallo D, Kahneman D (2003) Delusions of success. Harv Bus Rev 81(7):56–63
22. Mynatt CR, Doherty ME, Tweney RD (1977) Confirmation bias in a simulated research environment: an experimental study of scientific inference. Q J Exp Psychol 29(1):85–95
23. Owen MR et al (2001) Efficiency by design: optimisation in process research. Org Process Res Dev 5(3):308–323
24. Wahid Z, Nadir N (2013) Improvement of one factor at a time through design of experiments. World Appl Sci J 21:56–61
25. Weissman SA, Anderson NG (2015) Design of experiments (DoE) and process optimization. A review of recent publications. Org Process Res Dev 19(11):1605–1633
26. Elliott P et al (2013) Quality by design for biopharmaceuticals: a historical review and guide for implementation. Pharm Bioprocess 1(1):105–122
27. Tye H (2004) Application of statistical design of experiments methods in drug discovery. Drug Discov Today 9(11):485–491
28. Murray PM et al (2016) The application of design of experiments (DoE) reaction optimisation and solvent selection in the development of new synthetic chemistry. Org Biomol Chem 14(8):2373–2384
29. Engineering Statistics Handbook: 5.3.3.6.3. Comparisons of response surface designs. https://www.itl.nist.gov/div898/handbook/pri/section3/pri3363.htm. Accessed online: 20/03/2020
30. Hone CA et al (2017) Rapid multistep kinetic model generation from transient flow data. React Chem Eng 2(2):103–108
31. Antony J (2014) Design of experiments for engineers and scientists. Elsevier, Amsterdam
Connor J. Taylor is currently a
postdoctoral research associate
at Astex Pharmaceuticals in
collaboration with the
University of Cambridge. He
completed an M.Chem degree
at the University of Leeds
(2017) and his PhD under the
supervision of Richard Bourne
and Thomas Chamberlain
(2021). During his PhD he
founded the process optimisa-
tion company, Compunetics,
in partnership with the
University of Leeds and main-
tains a strong interest in optimisation, kinetic analysis and reaction
modelling.
Richard A. Bourne is currently an
Associate Professor at the
University of Leeds. He completed an M.Chem degree at the
University of Nottingham (2004)
and his PhD under the supervision
of Prof. Martyn Poliakoff, CBE,
FRS. He is now a Royal
Academy of Engineering Senior
Research Fellow working on the
development of new sustainable
processes with focus on continu-
ous flow routes to pharmaceutical
and fine chemical products. His
group is based within the
Institute of Process Research and Development (IPRD) at the
University of Leeds, a joint institute between Chemical Engineering
and Chemistry.
A Hogarth, or "wicked", universe is an irregular environment generating data to support erroneous beliefs. In this paper we argue that development scientists often work in such a universe. We demonstrate that exploring these multidimensional spaces using small experiments guided by scientific intuition alone, gives rise to an illusion of validity and a misplaced confidence in scientific intuition. In contrast, Design of Experiments permits the efficient mapping of such complex, multidimensional spaces. We describe simulation tools allowing research scientists to explore these spaces in relative safety.