A beam-search-based constructive heuristic for the
PFSP to minimise total flowtime
Victor Fernandez-Viagas1, Jose M. Framinan1
1Industrial Management, School of Engineering, University of Seville,
Camino de los Descubrimientos s/n, 41092 Seville, Spain, {vfernandezviagas,framinan}@us.es
January 9, 2017
Abstract
In this paper we present a beam-search-based constructive heuristic to solve the permutation flowshop scheduling problem with total flowtime minimisation as objective. This well-known problem is NP-hard, and several heuristics have been developed for it in the literature. The proposed algorithm is inspired by the logic of beam search, although it remains a fast constructive heuristic.
The results obtained by the proposed algorithm outperform those obtained by other constructive heuristics in the literature, thus substantially modifying the state of the art of efficient approximate procedures for the problem. In addition, the proposed algorithm even outperforms two of the best metaheuristics for many instances of the problem, using much less computational effort. The excellent performance of the proposal is also evidenced by the fact that the new heuristic found new best upper bounds for 35 of the 120 instances in Taillard's benchmark.
Keywords: Scheduling, Flowshop, Heuristics, Flowtime, PFSP, Beam Search, Total Completion Time
Preprint submitted to Computers & Operations Research. http://dx.doi.org/10.1016/j.cor.2016.12.020
Corresponding author.
1 Introduction
The permutation flowshop scheduling problem, denoted as PFSP, is one of the most studied
optimization problems in the literature. In this problem, n jobs must be processed in a shop of m machines following the same order. Since the sequence of jobs must be the same for
all machines, the goal of the problem is to find a sequence of jobs optimizing one or several
objectives. Traditionally, the most common criteria are: minimisation of makespan (see e.g.
Fernandez-Viagas and Framinan, 2014; Ruiz and Stützle, 2007; Nawaz et al., 1983; Dong et al.,
2008), minimisation of total flowtime (see e.g. Allahverdi and Aldowaisan, 2002; Framinan et al.,
2005; Dong et al., 2013; Rajendran, 1993), and minimisation of total tardiness (see e.g. Vallada et al.,
2008; Armentano and Ronconi, 1999; Framinan and Leisten, 2008; Fernandez-Viagas and Framinan,
2015a). Among these, PFSP with makespan minimisation as objective was initially proposed
by Johnson (1954), and has been employed in many works since (see e.g. the reviews by
Framinan et al., 2004; Ruiz and Maroto, 2005; Reza Hejazi and Saghafian, 2005). Here we focus on total flowtime minimisation, which is considered to be among the most relevant and
meaningful for today’s dynamic production environments (Liu and Reeves, 2001).
The PFSP to minimise total flowtime is denoted as Fm|prmu|ΣC_j according to the standard
notation for scheduling problems (see e.g. Framinan et al., 2014). Since this problem was shown
to be strongly NP-hard for two or more machines by Garey et al. (1976), numerous heuristics and
metaheuristics have been proposed in the literature trying to achieve good solutions in reasonable
CPU times. In an exhaustive analysis, Pan and Ruiz (2013) evaluate the existing algorithms for the problem in order to obtain a so-called efficient set of heuristics, taking as criteria the quality of the solutions obtained by each heuristic and its computational requirements. This efficient set was
later improved by Fernandez-Viagas and Framinan (2015b) by means of a constructive heuristic
of complexity O(n²·m) that can be used as an initial solution in composite heuristics.
The goal of this paper is to substantially improve the existing efficient set of heuristics for the Fm|prmu|ΣC_j problem by proposing a new beam-search-based constructive heuristic. The proposed heuristic is inspired by beam search, which was first used in artificial intelligence problems by Lowerre (1976). Beam search is a derivative of the branch-and-bound method
where only a subset of the most promising nodes is kept in each iteration, and it has been successfully adapted to several scheduling problems in the literature (see e.g. Della Croce and T'kindt,
2002; Valente and Alves, 2005; Valente and Alves, 2008; Valente, 2010). Its performance is highly
scalable with the decision interval, thus serving to obtain fast solutions in very short times, or to
yield very good-quality solutions if longer CPU times are allowed.
The remainder of the paper is organised as follows: the problem under consideration is described and the state of the art is presented in Section 2. In Section 3, the proposed heuristic is explained in detail, and it is compared with the state-of-the-art heuristics in Section 4. Finally,
conclusions are discussed in Section 5.
2 Problem Statement and State of the Art
The problem under study can be stated as follows: n jobs have to be scheduled in a flowshop with m machines. A job j has a processing time p_ij on machine i. The completion time of job j on machine i is denoted as C_ij, whereas C_i[j] indicates the completion time of the job scheduled in position j on machine i. C_mj represents the completion time of job j.
As mentioned in Section 1, many heuristics have been proposed for the problem, and an
excellent review on these heuristics is provided by Pan and Ruiz (2013). In the following, we just
outline the basic aspects of the main heuristics and refer the interested reader to the paper by
Pan and Ruiz (2013) for a more detailed description of all existing heuristics.
Among the so-found efficient heuristics, the fastest one is the Raj heuristic by Rajendran (1993), where a sequence is constructed by iteratively trying to insert a non-scheduled job in several positions of an existing partial sequence. More specifically, given a partial sequence of k jobs, positions from k/2 to k+1 are tried. The list of non-scheduled jobs is arranged in non-descending order of the indicator T_j, see Equation (1):

T_j = \sum_{i=1}^{m} (m - i + 1) \cdot p_{ij}    (1)
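As a quick illustration, Equation (1) can be computed as follows. This is our own sketch (function name and data layout are ours, not from the paper), assuming p[i][j] holds the processing time of job j on machine i with 0-indexed machines:

```python
def raj_order(p):
    """Order jobs by non-descending T_j (Eq. 1); p[i][j] is the processing
    time of job j on machine i (0-indexed machines and jobs)."""
    m, n = len(p), len(p[0])
    # With 1-indexed machines the weight is (m - i + 1); 0-indexed it is (m - i).
    T = [sum((m - i) * p[i][j] for i in range(m)) for j in range(n)]
    return sorted(range(n), key=lambda j: T[j])
```

Earlier machines receive larger weights, so jobs that are short on the first machines are ranked first.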
A different approach is adopted by the LR(x) heuristic by Liu and Reeves (2001), where x final sequences are constructed by iteratively adding jobs one by one at the end of x partial sequences. The job to be inserted in iteration k is chosen so that its value of the indicator ξ_{jk}, see Equation (2), is the minimum among the unscheduled jobs. The first job of the i-th final sequence (with i ∈ {1, ..., x}) is the job with the i-th minimal indicator ξ_{j0}.

ξ_{jk} = (n - k - 2) \cdot IT_{jk} + AT_{jk}    (2)

In Equation (2), IT_{jk} estimates the weighted idle time induced if job j is scheduled in the last position of the partial sequence (i.e. position k+1). AT_{jk} is the artificial flowtime, which is the sum of the completion time of job j plus the completion time of an artificial job p whose processing time on machine i is equal to the average processing time of the unscheduled jobs on that machine (excluding job j). More specifically, IT_{jk} and AT_{jk} are defined as:

IT_{jk} = \sum_{i=2}^{m} \frac{m \cdot \max\{C_{i-1,j} - C_{i,[k]}, 0\}}{i + k \cdot (m - i)/(n - 2)}    (3)

AT_{jk} = C_{m,j} + C_{m,p}
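The evaluation of one candidate job under Equations (2)-(3) can be sketched as follows. This is our own illustrative implementation (names are ours), assuming 0-indexed machines, that C_prev holds the completion times of the current partial sequence on each machine, and that k jobs are already scheduled:

```python
def lr_index(p, C_prev, k, job, unscheduled):
    """Liu & Reeves' xi_jk (Eq. 2) for appending `job` to a partial sequence
    of k jobs; p[i][j] = processing time, C_prev[i] = completion time of the
    partial sequence on machine i (0-indexed machines)."""
    m, n = len(p), len(p[0])
    # Completion times of `job` if placed in position k+1, plus idle time (Eq. 3)
    C = [0.0] * m
    C[0] = C_prev[0] + p[0][job]
    IT = 0.0
    for i in range(1, m):
        C[i] = max(C[i - 1], C_prev[i]) + p[i][job]
        i1 = i + 1  # 1-indexed machine number, as in Eq. (3)
        IT += m * max(C[i - 1] - C_prev[i], 0) / (i1 + k * (m - i1) / (n - 2))
    # Artificial job p: average processing times of the remaining unscheduled jobs
    rest = [u for u in unscheduled if u != job]
    Ca = C[0] + (sum(p[0][u] for u in rest) / len(rest) if rest else 0)
    for i in range(1, m):
        avg = sum(p[i][u] for u in rest) / len(rest) if rest else 0
        Ca = max(Ca, C[i]) + avg
    AT = C[m - 1] + Ca  # artificial flowtime of Eq. (2)
    return (n - k - 2) * IT + AT
```

At each iteration, the unscheduled job with the lowest index is appended to the partial sequence.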
The NEH heuristic was originally proposed by Nawaz et al. (1983) for the Fm|prmu|C_max problem and later adapted to the Fm|prmu|ΣC_j problem by Framinan et al. (2002). In the NEH, jobs are initially sorted according to the non-descending sum of their processing times. Using this order, each unscheduled job is inserted in the partial sequence in the position that minimises the total flowtime of the resulting partial sequence.
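The NEH adaptation can be sketched as below. This is a minimal illustration of ours, assuming the partial flowtime is recomputed from scratch at each trial insertion (efficient implementations in the literature avoid this recomputation):

```python
def neh_flowtime(p):
    """NEH adapted to flowtime (Framinan et al., 2002): sort jobs by
    non-descending sum of processing times, then best-insert each one."""
    m, n = len(p), len(p[0])

    def flowtime(seq):
        # total flowtime of a (partial) sequence on the m machines
        C = [0] * m
        total = 0
        for j in seq:
            C[0] += p[0][j]
            for i in range(1, m):
                C[i] = max(C[i], C[i - 1]) + p[i][j]
            total += C[m - 1]
        return total

    order = sorted(range(n), key=lambda j: sum(p[i][j] for i in range(m)))
    seq = []
    for j in order:
        # try every insertion position and keep the best partial sequence
        cands = [seq[:pos] + [j] + seq[pos:] for pos in range(len(seq) + 1)]
        seq = min(cands, key=flowtime)
    return seq, flowtime(seq)
```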
In view of the good performance of both LR(x) and NEH, Pan and Ruiz (2013) propose the composite LR-NEH(x) heuristic, which schedules the first 3·n/4 jobs of x sequences according to an LR(x) procedure, and the remaining jobs according to the NEH.
The rest of the efficient heuristics in the set identified by Pan and Ruiz (2013) include a local search method after the construction of the initial solution. More specifically, regarding local search methods based on job insertion, the RZ heuristic by Rajendran and Ziegler (1997) uses the ascending order of total processing times as the initial sequence and improves that sequence by inserting each job of the sequence in the rest of the positions, updating the sequence if a better solution is found (this improvement phase is denoted in the following as RZ). The IC1 heuristic by Li et al. (2009) applies the RZ local search method after the LR heuristic until no further improvement is found (this iterated variant is denoted as iRZ).
Regarding local search methods based on the exchange of positions among jobs, Liu and Reeves (2001) propose the heuristic LR(x)-FPE(y), in which x sequences are generated according to the LR(x) procedure and then the solutions are improved by employing a Forward Pairwise Exchange (FPE) procedure, i.e. each job in position k in the sequence is exchanged with each one of the y jobs in positions k+1, k+2, ..., k+y. The procedure is repeated until there are no more improvements in a complete iteration. The IC2 and IC3 heuristics, proposed by Li et al. (2009), are similar to IC1, but at the end of each iteration the FPE and FPER procedures, respectively, are performed, FPER being a variant of FPE where the procedure is restarted after an improvement of the solution. Finally, regarding the combination of insertion and exchange movements, Pan and Ruiz (2013) propose two variants, denoted as PR2(x) and PR4(x), of a VND local search method where the resulting solutions are embedded in LR-NEH procedures until x iterations are reached, or the CPU time exceeds a given value. Other variations, such as the PR1 heuristic, which performs the iRZ procedure instead of the VND method to improve each sequence obtained by the LR-NEH procedure, were not found to be efficient. Recently, Abedinnia et al. (2016) presented a new simple heuristic which outperforms the simple heuristic of Laha and Sarin (2009). However, its results in terms of quality of solutions and computational effort are still far from this set of efficient heuristics.
All the aforementioned heuristics have a complexity of at least O(n³·m), and most of them use the LR heuristic to generate a seed solution. Using a procedure similar to that of the LR heuristic, Fernandez-Viagas and Framinan (2015b) propose a fast constructive heuristic, denoted FF in the following, which inserts jobs step by step at the end of the sequence according to the index ξ'_{jk} (see Equation 4) in order to reduce the complexity to O(n²·m):

ξ'_{jk} = \frac{n - k - 2}{4} \cdot IT'_{jk} + C_{m,j}    (4)

where IT'_{jk} is defined by (5):

IT'_{jk} = \sum_{i=2}^{m} \frac{m \cdot \max\{C_{i-1,j} - C_{i,[k]}, 0\}}{i - 1 + k \cdot (m - i + 1)/(n - 2)}    (5)
By means of this new FF heuristic, it is possible to obtain a completely new set of efficient heuristics by replacing LR with FF in the rest of the heuristics. More specifically, the new set includes the FF, FF-FPE, FF-IC1, FF-IC2, IC2, IC3 and PR1 heuristics obtained by replacing LR with FF in the corresponding heuristics. In the next section, we propose a new heuristic which substantially improves the above-described set of efficient heuristics.
3 Proposed Heuristic
In this section, we propose a Beam-Search-based Constructive Heuristic, denoted BSCH, for the PFSP to minimise total flowtime. BSCH works with several candidate nodes in parallel in each iteration. The number of selected nodes is controlled by the parameter x (beam width). The heuristic operates by performing n-1 iterations. At iteration k, each selected node l (l ∈ {1, ..., x}) is formed by a set S^l_k of k scheduled jobs (s^l_jk denotes the job placed in position j of selected node l in iteration k). Consequently, for each selected node l in iteration k there is a set U^l_k of n-k unscheduled jobs. Let us denote by u^l_jk the j-th unscheduled job of selected node l in iteration k.
For each iteration k ∈ {1, ..., n-1}, |U^l_k| candidate nodes can be obtained from each selected node l by inserting each one of the jobs in U^l_k in position k+1 of S^l_k. In total, (n-k)·x candidate nodes can be obtained. The idea is to retain the most promising x candidate nodes for the next iteration (the selected nodes). The rest of the nodes are discarded for the next iterations. However, comparing candidate nodes may or may not be straightforward, depending on the specific situation:

If the candidate nodes to be compared have been obtained by appending different jobs in U^l_k to the same node l, then their corresponding partial sequences are identical with the exception of the last job. Therefore, they can be compared in terms of the completion time of the added job, or of the new idle time induced by the added job.
If the candidate nodes to be compared have been obtained from different nodes (e.g. one candidate node is the subsequence (1,2) and another candidate node is the subsequence (2,3)), both the scheduled and the unscheduled jobs are different for each candidate node. In such a case, it is useless to perform a straightforward comparison among candidate nodes taking into account either the job to be inserted, or just the scheduled jobs.
Clearly, the key to selecting the best x candidate nodes is being able to compare partial sequences composed of different jobs. Since in iteration k a candidate node is formed by the partial sequence S^l_k of selected node l plus a job inserted in position k+1, both l and the inserted job contribute to the value of the flowtime of a final sequence obtained from this candidate.
Regarding the contribution of the inserted job, there are two elements that largely influence the value of the sum of completion times in the complete sequence (Fernandez-Viagas and Framinan, 2015b), i.e.: the weighted idle time induced by the newly inserted job u^l_jk, and the completion time of the new job u^l_jk. Note that the evaluation of these elements can be done in O(m).
Regarding the contribution of each selected node l in iteration k to the flowtime of the final sequence, denoted F_kl or forecast index in the following, it is clear that such a contribution is related to both the scheduled and the unscheduled jobs. On the one hand, the contribution due to the scheduled jobs can be computed by means of a function of the idle times and completion times of the previous jobs. On the other hand, an 'artificial' completion time, denoted as CTλ_kl, can be used to identify the contribution of the unscheduled jobs. The computation of F_kl is developed in Section 3.5.
Hence, the steps of the constructive heuristic can be summarised as follows:
Obtain a set of initial nodes
During n-1 iterations:
Generate candidate nodes
Evaluate candidate nodes
Select the best x candidate nodes
Update the forecast index
These steps are elaborated in the next subsections.
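The loop above can be sketched in a few lines. The following is our own simplified skeleton in which candidate nodes are ranked by partial flowtime alone; BSCH instead ranks them with the index B_jkl of Equation (7), which adds the forecast and idle-time terms:

```python
def beam_constructive(p, x):
    """Simplified beam-search constructive skeleton (beam width x);
    p[i][j] = processing time of job j on machine i."""
    m, n = len(p), len(p[0])

    def extend(C, total, j):
        # append job j: update machine completion times and partial flowtime
        C2 = C[:]
        C2[0] += p[0][j]
        for i in range(1, m):
            C2[i] = max(C2[i], C2[i - 1]) + p[i][j]
        return C2, total + C2[m - 1]

    beam = [(0, [], [0] * m)]  # node = (partial flowtime, sequence, completions)
    for _ in range(n):
        cands = []
        for total, seq, C in beam:
            for j in range(n):
                if j in seq:
                    continue
                C2, t2 = extend(C, total, j)
                cands.append((t2, seq + [j], C2))
        cands.sort(key=lambda node: node[0])  # elitist selection
        beam = cands[:x]  # keep the x most promising nodes
    return beam[0][1], beam[0][0]
```

With x = 1 this degenerates to a greedy constructive heuristic; larger x trades CPU time for solution quality, as discussed in Section 4.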
3.1 Generation of the Initial Nodes
Jobs are initially sorted in non-descending order of the indicator ξ'_{j,0} (see Section 2), breaking ties in favour of jobs with lower IT'_{j,0}. Let us denote by α_i the component i of that sorted list (α := (α_1, ..., α_i, ..., α_n)). Hence, to obtain the first x nodes (each consisting of one job), the job in position l of the sorted list is placed in the first position of the partial sequence of selected node l, i.e. s^l_{1,1} = α_l. The rest of the jobs form the unscheduled jobs of this selected node l, i.e. u^l_{j,1} with j ∈ {1, ..., n-1}.
3.2 Candidate Nodes Generation
New candidate nodes are formed by adding an unscheduled job at the end of the partial sequence of each selected node. More specifically, from each selected node l ∈ {1, ..., x}, n-k candidate nodes are obtained at iteration k, where each candidate node j is obtained from selected node l by adding one of the jobs in U^l_k at the end of the scheduled jobs.
3.3 Candidate Nodes Evaluation
Once the candidate nodes are formed, they are evaluated. This evaluation is performed taking into account two factors:

Influence of the selected node: as already discussed, the influence of selected node l in iteration k is measured by means of the forecast index F_{kl}, which is explained further in Section 3.5.

Influence of the inserted job: this influence is due to the new job inserted at the end of the scheduled jobs and is measured by CT_{jkl}, the completion time of the unscheduled job u^l_{jk}, which is the additional completion time incurred when inserting job u^l_{jk} in the selected node, i.e.:

CT_{jkl} = C_{m,u^l_{jk}}

and by IT_{jkl}, the weighted idle time induced by the insertion of job u^l_{jk}:

IT_{jkl} = \sum_{i=2}^{m} \frac{m \cdot \max\{C_{i-1,u^l_{jk}} - C_{i,[k]}, 0\}}{i - 1 + k \cdot (m - i + 1)/(n - 2)}    (6)
Hence, in iteration k, given a selected node l, the insertion of unscheduled job u^l_{jk} is evaluated according to the following index:

B_{jkl} = F_{kl} + c \cdot CT_{jkl} + IT_{jkl} \cdot (n - k - 2)    (7)

The parameter c has been introduced in the equation in order to balance the completion time and the idle time of the newly introduced job (the calibration of this parameter is addressed in Section 3.6). Additionally, the idle time is weighted by (n - k - 2) to reduce its importance as an indicator as the sequence contains more jobs.
In the beam search literature (see e.g. Sabuncuoglu and Bayiz, 1999), this type of evaluation
method where each unscheduled job is taken into account is denoted as total cost evaluation
function. However, note that, in our case, in order to speed up the computation of the cost
function, an estimate of the actual cost function is used.
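For reference, Equation (7) is a one-liner. The sketch below is ours; it plugs in the calibrated value c = 7 from Section 3.6 as a default:

```python
def b_index(F_kl, CT_jkl, IT_jkl, n, k, c=7):
    """Eq. (7): evaluation of inserting an unscheduled job into node l at
    iteration k; c = 7 is the calibrated value from Section 3.6."""
    return F_kl + c * CT_jkl + IT_jkl * (n - k - 2)
```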
3.4 Candidate Nodes Selection
The procedure to select the candidate nodes that will constitute the selected nodes of the next iteration is very simple: we adopt an elitist selection procedure where the x candidate nodes with the lowest values of B are selected, i.e. in iteration k we look for the combinations of j and l achieving the lowest values of B_{jkl} as defined in Equation (7). The rest of the candidate nodes are removed from the population, and the chosen candidate nodes become the selected nodes for the next iteration. Let us denote by branch[l] and job[l] the values of l and j, respectively, of the l-th best B_{jkl} in iteration k.
3.5 Forecasting Phase
The forecast index, F, is used to be able to compare candidate nodes with different scheduled and unscheduled jobs. It balances the following indicators:
1. the idle time of each scheduled job in the candidate node,
2. the completion time of each scheduled job in the candidate node, and
3. the completion time of the unscheduled jobs in the candidate node.
The influence of 1) and 2) changes across the iterations of the algorithm. Recall that the
influence of the idle time allows us to compare candidate nodes with different jobs. In the first
iterations there are few scheduled jobs, and these scheduled jobs may be quite different. Therefore,
the idle time between jobs is expected to have a larger influence in the comparison between nodes,
as compared to the sum of completion times (which is strongly schedule-dependent). In contrast,
in the last iterations the candidate nodes are almost complete sequences, so they are very similar
in terms of scheduled jobs and therefore, a direct evaluation of the completion times of the jobs
to compare nodes would be more related to the final objective. Thereby, in the equation of the forecast index, the cumulated idle time, denoted as SIT (8), is reduced as the number of scheduled jobs grows (its increments are weighted by n - k - 2), while the cumulated completion time, so-called SCT (9), remains the same along the iterations. More specifically:

SIT_{k,l} = \frac{n - b}{n} \cdot SIT_{k-1,branch[l]} + IT_{job[l],k,branch[l]} \cdot (n - k - 2),   k = 1, ..., n-1,  l = 1, ..., x    (8)

SCT_{k,l} = SCT_{k-1,branch[l]} + CT_{job[l],k,branch[l]} + CTλ_{k,branch[l]},   k = 1, ..., n-1,  l = 1, ..., x    (9)

where SIT_{0,l} = SCT_{0,l} = 0 for l = 1, ..., x, and CTλ_{k,l} is the completion time of an artificial job placed at the end of the sequence of selected node l in iteration k. The processing times of this artificial job are equal to the average processing times of the unscheduled jobs (u^l_{j,k} ∀j).
Taking these indicators into account, the forecast index can then be defined as follows:

F_{k,l} = a \cdot SCT_{k,l} + SIT_{k,l},   k = 1, ..., n-1,  l = 1, ..., x    (10)

Figure 1: Example of BSCH
where a and b are parameters designed to better balance the components of the forecast index. Parameter a balances the influence of SIT and SCT. Parameter b is introduced in the fraction (n - b)/n of SIT in order to diminish the weight of the idle time as the iterations advance, given that 1) the idle time of the last jobs is less important than that of the first ones under the flowtime objective, and 2) the importance of the cumulated idle time as an indicator also decreases as the number of scheduled jobs grows.
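The recursions (8)-(10) translate into a compact update. The sketch below is ours, with the calibrated a = 9, b = 3 from Section 3.6 as defaults; IT, CT and CTlam stand for the terms IT_{job[l],k,branch[l]}, CT_{job[l],k,branch[l]} and CTλ_{k,branch[l]}:

```python
def forecast_update(SIT_prev, SCT_prev, IT, CT, CTlam, n, k, a=9, b=3):
    """Eqs. (8)-(10): update the cumulated idle time SIT, the cumulated
    completion time SCT and the forecast index F of the new selected node
    formed at iteration k. Returns (SIT, SCT, F)."""
    SIT = (n - b) / n * SIT_prev + IT * (n - k - 2)   # Eq. (8)
    SCT = SCT_prev + CT + CTlam                       # Eq. (9)
    return SIT, SCT, a * SCT + SIT                    # Eq. (10)
```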
The calibration of a and b is discussed in Section 3.6. An example of the algorithm is presented in Figure 1. We use the third instance of the benchmark by Taillard (1993), where the last 16 jobs have been removed so that only the first 4 jobs are considered. Selected nodes are shown in lilac, while candidate nodes are shown in orange. The pseudo-code of the algorithm is shown in Figure 2.
3.6 Experimental parameter tuning
Parameters a, b and c have been included to better adjust the performance of the proposed heuristic. In this subsection, a full factorial design of experiments is performed to set proper values for these parameters. For each of them, the following levels are tested:
a ∈ {1, 3, 5, 7, 9, 11, 13}
b ∈ {0, 1, 2, 3, 4, 5, 6}
c ∈ {1, 3, 5, 7, 9, 11, 13}
Procedure BSCH(x)
  // Initial order
  Determine IT'_{j,0}, CT'_{j,0} and ξ'_{j,0};
  IT_{j,0,l} := IT'_{j,0} and CT_{j,0,l} := CT'_{j,0} ∀l;
  α := jobs ordered according to non-descending ξ'_{j,0}, breaking ties in favour of jobs with lower IT'_{j,0};
  Update S^l_1 (s^l_{1,1} = α_l) ∀l and U^l_1 with the remaining jobs;
  Determine CTλ_{0,l} ∀l. Note that the processing times of the artificial job for selected node l are equal to the average processing times of all jobs with the exception of s^l_{1,1};
  for l = 1 to x do
    SIT_{1,l} := (n - b)/n · IT_{α_l,0,l} · (n - 0 - 2);
    SCT_{1,l} := CT_{α_l,0,l} + CTλ_{0,l};
    F_{1,l} := a · SCT_{1,l} + SIT_{1,l};
  end
  for k = 1 to n - 1 do
    // Candidate nodes creation
    Determine IT_{jkl} and CT_{jkl};
    // Candidate nodes evaluation
    B_{jkl} := F_{kl} + c · CT_{jkl} + IT_{jkl} · (n - k - 2), l = 1, ..., x and j = 1, ..., n - k;
    // Candidate nodes selection
    for l = 1 to x do
      Determine the l-th best candidate node according to non-descending B_{jkl} in iteration k. Denote by branch[l] the value of the index l of that candidate node and by job[l] the value of j;
    end
    // Forecasting phase: update of the forecast index
    for l = 1 to x do
      Update S^l_{k+1} and U^l_{k+1} by removing job u^{branch[l]}_{job[l],k} from U^{branch[l]}_k and including it in S^{branch[l]}_k;
      Determine CTλ_{k,branch[l]} for the new selected node l formed by the old selected node branch[l] plus job job[l]. Note that the processing times of the artificial job are equal to the average processing times of all unscheduled jobs (U^l_{k+1});
      SIT_{k+1,l} := (n - b)/n · SIT_{k,branch[l]} + IT_{job[l],k,branch[l]} · (n - k - 2);
      SCT_{k+1,l} := SCT_{k,branch[l]} + CT_{job[l],k,branch[l]} + CTλ_{k,branch[l]};
      F_{k+1,l} := a · SCT_{k+1,l} + SIT_{k+1,l};
    end
  end
  // Final evaluation
  Evaluate the flowtime of the scheduled jobs of each selected node and return the best one.
end
Figure 2: BSCH
Source         Significance
Parameter a    0.000
Parameter b    1.000
Parameter c    0.007
Table 1: Kruskal-Wallis test for parameters a, b and c
representing 343 combinations of values. For each combination, five instances have been generated for several values of n and m, n ∈ {20, 50, 100, 200, 500} and m ∈ {5, 10, 20}, where the processing times of each job on each machine are uniformly distributed between 1 and 99. A non-parametric Kruskal-Wallis analysis is performed, since the normality and homoscedasticity assumptions required for ANOVA were not fulfilled. In the experiments, x = n/10 is used in order to avoid excessive CPU time requirements for parameter tuning. Results are shown in Table 1, indicating that there are significant differences between the levels of parameters a and c, but not for parameter b. The best combination is obtained for a = 9, b = 3 and c = 7. These values are used for the BSCH heuristic in the next section regardless of the value of x.
4 Computational Evaluation
The proposed heuristic is compared with the current set of efficient heuristics, formed by 17 heuristics (see Section 2). In order to have a fair comparison, each heuristic is implemented again under the same computing conditions, which means:
using the same computer in the computational evaluation (an Intel Core i7-3770 PC with 3.4 GHz and 16 GB RAM),
the same programming language (C# under Visual Studio 2013), and
the same libraries and common functions for all heuristics.
Experiments have been performed on the 120 instances of the benchmark by Taillard (1993), which is composed of 12 problem sizes varying the number of jobs and machines according to n ∈ {20, 50, 100, 200, 500} and m ∈ {5, 10, 15, 20} respectively, with 10 instances for each size. Processing times are uniformly distributed from 1 to 99 in this testbed. To better fit the computational time of each heuristic, 5 runs are carried out for each instance and the average values are collected.
Additionally, the parameter x of the proposed heuristic must be set. As shown in Section 3, x indicates the number of selected nodes in each iteration and, therefore, it is proportional to the CPU time required by the heuristic. For x > n, additional indications would have to be provided in the first iteration of the algorithm (i.e. at least it should be indicated which is the first job of the last x - n selected nodes after the first iteration), so here we restrict x to values in [1, n]. More specifically, we use the values of x also employed in the literature for the LR heuristic, i.e. x ∈ {2, 5, 10, 15, n/10, n/m, n} (see e.g. Liu and Reeves, 2001, Pan and Ruiz, 2013, and Fernandez-Viagas and Framinan, 2015b). Note that x = 1 has been removed from the analysis since BSCH(1) is equivalent to FF(1) (with a different combination of parameters), so it is already included in the computational evaluation.
4.1 Comparison between BSCH and the efficient heuristics
The comparison among the heuristics is performed in terms of the quality of the solutions and the computational effort. On the one hand, the former is commonly evaluated by means of the Relative Percentage Deviation, RPD1, which is defined for heuristic h on instance i as:

RPD1_{ih} = \frac{C^{ih}_{sum} - \min_{1 \le h \le H} C^{ih}_{sum}}{\min_{1 \le h \le H} C^{ih}_{sum}} \cdot 100,   i = 1, ..., I,  h = 1, ..., H    (11)
where H is the number of heuristics considered in the evaluation, I is the number of instances in the testbed, and C^{ih}_{sum} is the total flowtime obtained by heuristic h on instance i. Note that ARPD1 denotes the average RPD1. On the other hand, the most common indicator for the computational effort is the average CPU time. However, Fernandez-Viagas and Framinan (2015b) detected that this indicator presents several problems when used to evaluate heuristics with different stopping criteria, and proposed the Relative Percentage Time (RPT) indicator instead:

RPT_{ih} = \frac{T_{ih} - ACT_i}{ACT_i},   i = 1, ..., I,  h = 1, ..., H    (12)
where T_{ih} is the CPU time required by heuristic h on instance i, and:

ACT_i = \sum_{h=1}^{H} T_{ih} / H,   i = 1, ..., I    (13)
In this paper, a slightly different indicator, also denoted as RPT in the following, is used so that the results can be graphically represented on a logarithmic scale:

RPT_{ih} = \frac{T_{ih} - ACT_i}{ACT_i} + 1,   i = 1, ..., I,  h = 1, ..., H    (14)

The average value of RPT, i.e. ARPT, can be defined as follows:

ARPT_h = \sum_{i=1}^{I} \frac{RPT_{ih}}{I},   h = 1, ..., H    (15)
Nevertheless, in order to provide additional information on the experiments, raw CPU times are also used together with ARPT.
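Equations (11) and (13)-(15) translate directly into code. The sketch below is ours, assuming the results are collected in instance-by-heuristic matrices:

```python
def arpd1(flowtimes):
    """Eq. (11): flowtimes[i][h] = total flowtime of heuristic h on
    instance i; returns the ARPD1 of each heuristic across instances."""
    H = len(flowtimes[0])
    rpd = [[(row[h] - min(row)) / min(row) * 100 for h in range(H)]
           for row in flowtimes]
    return [sum(r[h] for r in rpd) / len(rpd) for h in range(H)]

def arpt(times):
    """Eqs. (13)-(15): times[i][h] = CPU time of heuristic h on instance i;
    returns the ARPT of each heuristic across instances."""
    H = len(times[0])
    rpt = [[(row[h] - sum(row) / H) / (sum(row) / H) + 1 for h in range(H)]
           for row in times]
    return [sum(r[h] for r in rpt) / len(rpt) for h in range(H)]
```

Note that RPT is centred around 1 on each instance, so heuristics with very different stopping criteria remain comparable.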
The RPD1 values obtained for each algorithm are shown in Tables 2 and 3. The last row indicates the average value, i.e. the ARPD1 of each algorithm. As can be seen, the ARPD1 of the current set of efficient heuristics ranges from 3.84 to 1.22, where the best value (1.22) is obtained by FF-PR1(15). Regarding BSCH, the worst ARPD1 is 2.51 while the best one is 0.19. In order to perform a fair comparison among the heuristics, CPU times (in seconds) are summarised in Tables 4 and 5 (the last two rows represent the average CPU time and the ARPT, respectively). The average values are indicated in Table 6 and graphically shown in Figure 4 using ARPT as a measure of the computational effort, as well as in Figure 3 using average CPU times.
Considering ARPT, the current set of efficient heuristics is updated by including a completely new set of heuristics, all of them variants of BSCH for different values of x. The following conclusions can be obtained:
BSCH(2) (with ARPD1 = 2.51) improves on heuristics FF(n/m), FF(n/10) and FF(n/10)-FPE(1), with ARPD1 equal to 3.11, 3.02 and 2.70 respectively, while using less ARPT.
BSCH(n/m), BSCH(5) and BSCH(n/10), with ARPD1 of 1.46, 1.35 and 1.21 respectively, outperform FF(2)-FPE(n/10) and FF(n/10)-FPE(n/10) using less ARPT.
BSCH(10), with ARPD1 and ARPT equal to 0.88 and 0.13, clearly outperforms FF(15)-FPE(n/10), which has an ARPD1 of 2.35 and an ARPT of 0.17.
BSCH(15) (ARPD1 = 0.64) outperforms, with less computational effort, FF(n/10)-FPE(n), FF(n/m)-FPE(n), FF-IC1 and FF-IC2, which have a minimal ARPD1 of 1.61.
The best heuristic, BSCH(n), with ARPD1 = 0.19, clearly outperforms heuristics IC2, FF-IC3, IC3, FF-PR1(5), FF-PR1(10) and FF-PR1(15).
As a consequence, it can be stated that our proposal outperforms the heuristics considered efficient for the problem up to now.
In order to establish the statistical significance of these results, Holm's procedure (Holm, 1979) is used, where each hypothesis is analysed using a non-parametric Mann-Whitney test (see e.g. Pan et al., 2008). In Holm's procedure, the hypotheses are sorted in non-descending order of the p-values found in the Mann-Whitney test. Hypothesis i is rejected if its p-value is lower than α/(k - i + 1), where k is the total number of hypotheses. The results of Holm's procedure are shown in Table 7, where the fourth and sixth columns indicate whether the hypothesis is rejected (denoted as R in such a case) by the Mann-Whitney test and/or Holm's procedure. As can be seen, the hypothesis BSCH(2) = FF(n/10)-FPE(1) is the only one that cannot be rejected by Holm's procedure, but it has to be noted that the computational effort required by FF(n/10)-FPE(1) is much higher than that of BSCH(2). In summary, it can be concluded that BSCH(n/10), BSCH(10), BSCH(15) and BSCH(n) are statistically efficient and that BSCH(2) is not inefficient. Note that BSCH(2) would be statistically efficient when considering the Pareto frontier using the average CPU time instead of the ARPT.
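The step-down rule described above can be sketched as follows. This is an illustration of ours, 0-indexed so that the threshold α/(k - i + 1) of the text becomes α/(k - rank):

```python
def holm(pvalues, alpha=0.05):
    """Holm's (1979) step-down procedure: sort p-values in ascending order
    and reject hypotheses while p < alpha / (k - rank); stop at the first
    non-rejection. Returns the set of rejected hypothesis indices."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    k = len(pvalues)
    rejected = set()
    for rank, i in enumerate(order):
        if pvalues[i] < alpha / (k - rank):
            rejected.add(i)
        else:
            break  # all remaining hypotheses are retained
    return rejected
```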
Instance FF(1) FF(2) FF(n/10) FF(n/m) FF(2)-FPE(n/10) FF(15)-FPE(n/10) FF(n/10)-FPE(1) FF(n/10)-FPE(n/10) FF(n/10)-FPE(n) FF(n/m)-FPE(n) FF-IC1
20 x 5 3.20 2.71 2.71 2.45 1.90 1.54 1.95 1.90 1.24 1.42 1.30
20 x 10 3.20 3.38 3.38 3.38 2.37 1.94 2.49 2.37 1.76 1.76 1.07
20 x 20 3.06 2.68 2.68 3.26 1.91 2.24 2.13 1.91 1.54 1.79 1.02
50 x 5 2.28 2.10 2.03 2.01 1.51 1.55 1.91 1.62 1.38 1.36 1.05
50 x 10 3.49 3.60 3.12 3.12 2.82 2.64 2.84 2.63 2.26 2.26 1.59
50 x 20 3.19 3.44 3.15 3.44 2.76 2.48 2.83 2.48 1.96 2.16 1.50
100 x 5 1.92 1.61 1.55 1.55 1.41 1.41 1.50 1.41 1.25 1.25 1.25
100 x 10 3.63 3.70 3.34 3.34 2.76 2.55 3.17 2.59 2.41 2.41 2.20
100 x 20 5.55 5.24 4.06 4.42 4.02 3.13 3.76 3.28 2.40 2.53 1.84
200 x 10 3.59 3.24 2.95 2.95 2.41 2.37 2.85 2.35 2.25 2.25 2.14
200 x 20 5.93 5.46 4.13 4.32 4.33 3.42 3.89 3.42 3.03 3.07 2.69
500 x 20 4.12 3.83 3.12 3.15 3.21 2.90 3.05 2.75 2.54 2.55 2.46
Average 3.84 3.42 3.02 3.11 2.62 2.35 2.70 2.39 2.00 2.07 1.68
Table 2: RPD1 of heuristics (I)
Instance FF-IC2 FF-IC3 IC2 IC3 FF-PR1(5) FF-PR1(10) FF-PR1(15) BSCH(2) BSCH(5) BSCH(10) BSCH(15) BSCH(n/10) BSCH(n/m) BSCH(n)
20 x 5 1.20 1.20 0.65 0.57 0.46 0.34 0.26 2.44 1.44 0.82 0.61 2.44 1.23 0.96
20 x 10 1.16 1.16 1.09 0.98 0.67 0.53 0.29 2.28 1.03 0.82 0.43 2.28 2.28 0.46
20 x 20 1.02 1.01 1.13 1.20 0.24 0.12 0.06 1.96 0.81 0.62 0.44 1.96 3.34 0.46
50 x 5 0.91 0.91 1.06 1.05 0.95 0.88 0.88 1.74 0.81 0.54 0.45 0.81 0.54 0.12
50 x 10 1.69 1.66 1.59 1.58 1.33 1.27 1.20 2.50 1.38 0.65 0.44 1.38 1.38 0.06
50 x 20 1.45 1.45 1.12 1.08 0.86 0.81 0.71 2.65 1.15 0.96 0.73 1.15 2.65 0.10
100 x 5 1.12 1.11 1.15 1.15 1.20 1.16 1.14 1.39 0.93 0.54 0.50 0.54 0.39 0.07
100 x 10 1.96 2.00 1.82 1.76 1.86 1.76 1.74 2.92 1.54 0.80 0.53 0.80 0.80 0.04
100 x 20 1.81 1.79 2.18 2.04 1.67 1.48 1.41 3.89 2.20 1.29 0.87 1.29 2.20 0.00
200 x 10 2.01 1.99 2.07 2.04 2.06 1.92 1.92 2.34 1.31 0.87 0.71 0.58 0.58 0.00
200 x 20 2.61 2.56 2.60 2.59 2.50 2.41 2.33 3.36 1.84 1.43 1.09 0.80 1.43 0.00
500 x 20 2.40 2.36 2.49 2.48 2.66 2.66 2.66 2.69 1.70 1.19 0.92 0.49 0.69 0.00
Average 1.61 1.60 1.58 1.54 1.37 1.28 1.22 2.51 1.35 0.88 0.64 1.21 1.46 0.19
Table 3: RPD1 of heuristics (II)
Instance FF(1) FF(2) FF(n/10) FF(n/m) FF(2)-FPE(n/10) FF(15)-FPE(n/10) FF(n/10)-FPE(1) FF(n/10)-FPE(n/10) FF(n/10)-FPE(n) FF(n/m)-FPE(n) FF-IC1
20 x 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20 x 10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20 x 20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
50 x 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.01 0.01
50 x 10 0.00 0.00 0.00 0.00 0.00 0.01 0.00 0.00 0.01 0.02 0.03
50 x 20 0.00 0.00 0.00 0.00 0.01 0.01 0.00 0.01 0.03 0.03 0.04
100 x 5 0.00 0.00 0.01 0.01 0.01 0.02 0.01 0.01 0.06 0.07 0.08
100 x 10 0.00 0.00 0.01 0.01 0.03 0.04 0.01 0.03 0.15 0.14 0.23
100 x 20 0.00 0.00 0.02 0.01 0.05 0.08 0.03 0.06 0.33 0.33 0.60
200 x 10 0.01 0.01 0.09 0.09 0.22 0.27 0.12 0.30 1.30 1.31 2.13
200 x 20 0.01 0.02 0.19 0.09 0.54 0.57 0.23 0.63 3.26 3.17 5.69
500 x 20 0.06 0.12 2.85 1.42 9.14 9.30 3.12 10.43 57.79 55.96 81.24
Average 0.01 0.01 0.26 0.14 0.83 0.86 0.29 0.96 5.25 5.09 7.51
ARPT 0.01 0.01 0.02 0.03 0.06 0.17 0.04 0.08 0.34 0.34 0.58
Table 4: Computational times of heuristics I
Instance FF-IC2 FF-IC3 IC2 IC3 FF-PR1(5) FF-PR1(10) FF-PR1(15) BSCH(2) BSCH(5) BSCH(10) BSCH(15) BSCH(n/10) BSCH(n/m) BSCH(n)
20 x 5 0.00 0.00 0.00 0.00 0.01 0.02 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20 x 10 0.00 0.00 0.00 0.00 0.01 0.03 0.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20 x 20 0.00 0.00 0.00 0.00 0.02 0.03 0.05 0.00 0.00 0.00 0.00 0.00 0.00 0.00
50 x 5 0.02 0.03 0.03 0.04 0.06 0.13 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.03
50 x 10 0.04 0.05 0.06 0.07 0.13 0.27 0.39 0.00 0.00 0.01 0.01 0.00 0.00 0.03
50 x 20 0.07 0.08 0.14 0.15 0.25 0.53 0.82 0.00 0.00 0.01 0.02 0.00 0.00 0.06
100 x 5 0.16 0.26 0.30 0.35 0.52 1.03 1.56 0.00 0.01 0.01 0.02 0.01 0.03 0.31
100 x 10 0.36 0.79 0.45 0.94 1.07 2.20 3.35 0.00 0.01 0.02 0.03 0.02 0.02 0.40
100 x 20 0.81 1.64 0.83 1.91 2.56 5.05 7.58 0.01 0.02 0.04 0.06 0.04 0.02 0.68
200 x 10 3.17 14.54 4.39 15.01 10.21 19.59 28.92 0.02 0.04 0.08 0.13 0.18 0.19 7.25
200 x 20 7.66 32.59 8.43 23.84 22.74 45.59 68.98 0.03 0.07 0.15 0.23 0.33 0.15 8.57
500 x 20 120.86 1079.65 174.03 1090.81 378.95 390.46 391.29 0.19 0.44 0.95 1.62 7.32 2.88 215.44
Average 11.10 94.14 15.72 94.43 34.71 38.74 41.93 0.02 0.05 0.11 0.18 0.66 0.27 19.40
ARPT 0.85 2.04 1.14 2.26 2.82 5.57 8.12 0.02 0.05 0.13 0.20 0.05 0.05 1.02
Table 5: Computational times of heuristics II
Figure 3: ARPD1 versus average CPU times. Average computational time (X-axis) is shown in logarithmic scale.
4.2 Comparison between BSCH and metaheuristics
An additional series of experiments has been conducted to compare the BSCH heuristic with
an iterated local search (denoted as MRSILS) and an iterated greedy algorithm (denoted as
IGRIS). These two are among the best metaheuristics for the problem (see Dong et al., 2013 and
Pan et al., 2008). In order to analyse the impact of BSCH, we separately run both metaheuristics
with a stopping criterion of 60 · n · m/2 milliseconds. For each instance, five runs are considered
and the average flowtime values are recorded. Both metaheuristics have again been implemented
under the same computer conditions, and the comparison has been performed for all instances of
the benchmark. Results in terms of ARPD2 and average CPU times are shown in Table 8. Note
that the last column indicates the ratio between the CPU time needed by the metaheuristics
and by the BSCH(n) heuristic for each instance size. ARPD2 is the average RPD2, which is
calculated by Equation (16):
RPD2_ih = (C^sum_ih − UB_i) / UB_i · 100,    i = 1, . . . , I, h = 1, . . . , H    (16)
Heuristic             ARPD1   ARPT   Avg. Time
FF(1)                 3.84    0.01   0.01
FF(2)                 3.42    0.01   0.01
FF(n/10)              3.02    0.02   0.26
FF(n/m)               3.11    0.03   0.14
FF(2)-FPE(n/10)       2.62    0.06   0.83
FF(15)-FPE(n/10)      2.35    0.17   0.86
FF(n/10)-FPE(1)       2.70    0.04   0.29
FF(n/10)-FPE(n/10)    2.39    0.08   0.96
FF(n/10)-FPE(n)       2.00    0.34   5.25
FF(n/m)-FPE(n)        2.07    0.34   5.09
FF-IC1                1.68    0.58   7.51
FF-IC2                1.61    0.85   11.10
FF-IC3                1.60    2.04   94.14
IC2                   1.58    1.14   15.72
IC3                   1.54    2.26   94.43
FF-PR1(5)             1.37    2.82   34.71
FF-PR1(10)            1.28    5.57   38.74
FF-PR1(15)            1.22    8.12   41.93
BSCH(2)               2.51    0.02   0.02
BSCH(5)               1.35    0.05   0.05
BSCH(10)              0.88    0.13   0.11
BSCH(15)              0.64    0.20   0.18
BSCH(n/10)            1.21    0.05   0.66
BSCH(n/m)             1.46    0.05   0.27
BSCH(n)               0.19    1.02   19.40
Table 6: Summary of results of the heuristics.
i    Hypothesis H_i                     p-value   Mann-Whitney   α/(k − i + 1)   Holm's procedure
1    BSCH(2) = FF(n/m)                  0.000     R              0.0031          R
2    BSCH(n/10) = FF(2)-FPE(n/10)       0.000     R              0.0033          R
3    BSCH(n/10) = FF(n/10)-FPE(n/10)    0.000     R              0.0036          R
4    BSCH(10) = FF(15)-FPE(n/10)        0.000     R              0.0038          R
5    BSCH(15) = FF(n/10)-FPE(n)         0.000     R              0.0042          R
6    BSCH(15) = FF(n/m)-FPE(n)          0.000     R              0.0045          R
7    BSCH(15) = FF-IC1                  0.000     R              0.0050          R
8    BSCH(15) = FF-IC2                  0.000     R              0.0056          R
9    BSCH(n) = IC2                      0.000     R              0.0063          R
10   BSCH(n) = FF-IC3                   0.000     R              0.0071          R
11   BSCH(n) = IC3                      0.000     R              0.0083          R
12   BSCH(n) = FF-PR1(5)                0.000     R              0.0100          R
13   BSCH(n) = FF-PR1(10)               0.000     R              0.0125          R
14   BSCH(n) = FF-PR1(15)               0.000     R              0.0167          R
15   BSCH(2) = FF(n/10)                 0.001     R              0.0250          R
16   BSCH(2) = FF(n/10)-FPE(1)          0.163                    0.0500
Table 7: Holm's procedure.
Figure 4: ARPD1 versus ARPT. ARPT (X-axis) is shown in logarithmic scale.
where UB_i is the best known upper bound for instance i, taken from Pan and Ruiz (2012).
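A minimal sketch of how RPD2 and its average are obtained from an algorithm's total flowtime C^sum_ih on an instance and that instance's best known upper bound UB_i (the function names are ours; negative values mean the algorithm improved on the known bound):

```python
def rpd2(flowtime, ub):
    """Relative percentage deviation of a total flowtime value from the
    best known upper bound of the instance, as in Equation (16)."""
    return (flowtime - ub) / ub * 100

def arpd2(flowtimes, ubs):
    """Average RPD2 of one algorithm over a set of instances."""
    return sum(rpd2(c, ub) for c, ub in zip(flowtimes, ubs)) / len(ubs)
```

For example, a flowtime of 110 against a bound of 100 gives an RPD2 of 10.0, while 95 against 100 gives −5.0, i.e. a new best bound.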
As can be seen, both the ARPD2 values and the average CPU times of the metaheuristics are
clearly improved upon by the proposed constructive heuristic. On the one hand, the best ARPD2
value among the metaheuristics is 0.76, while the ARPD2 value of the BSCH(n) heuristic is 0.40
(the difference between the algorithms is statistically significant under a non-parametric Mann-Whitney
test, with a p-value of 0.004). Additionally, 35 new best upper bounds have been
found (see Table 9). This fact clearly highlights the excellent performance of the
proposed heuristic: for comparison, only 12 upper bounds were updated when Pan and Ruiz (2012) ran
several metaheuristics with a stopping criterion of 400 · m · n milliseconds (i.e. an average
CPU time of 731.7 seconds). On the other hand, large differences are found when analysing the
average CPU time of the algorithms: 19.4 seconds for the BSCH(n) heuristic
against 54.88 seconds for the metaheuristics. This difference in average CPU time appears
moderate only because an instance-size-dependent indicator is used to compare algorithms
with different stopping criteria (see Fernandez-Viagas and Framinan, 2015b for a more detailed
explanation). In fact, regarding the ratio of the CPU time between the metaheuristics and the
                     ARPD2                                         Avg. time
Instance    MRSILS   IGRIS   BSCH(n)   MRSILS(BSCH)    MRSILS,IGRIS   BSCH(n)   (MRSILS,IGRIS)/BSCH(n)
20 x 5 0.01 0.05 1.25 0.01 3.00 0.00 1704.55
20 x 10 0.00 0.08 0.75 0.00 6.00 0.00 2500.00
20 x 20 0.00 0.01 0.75 0.00 12.00 0.00 3508.77
50 x 5 0.57 0.69 0.75 0.28 7.50 0.03 291.60
50 x 10 0.70 0.90 1.04 0.47 15.00 0.03 438.34
50 x 20 0.69 0.99 1.48 0.63 30.00 0.06 529.10
100 x 5 1.11 1.17 0.30 0.22 15.00 0.31 48.49
100 x 10 1.44 1.60 0.57 0.27 30.00 0.40 74.63
100 x 20 1.50 1.89 1.14 0.83 60.00 0.68 87.60
200 x 10 1.10 1.35 -0.61 -0.71 60.00 7.25 8.28
200 x 20 1.24 1.46 -0.76 -0.83 120.00 8.57 14.01
500 x 20 0.79 0.85 -1.87 -1.90 300.00 215.44 1.39
Average 0.76 0.92 0.40 -0.06 54.88 19.40 767.23
Table 8: ARPD2 and average CPU time, for each instance size, required by the BSCH(n)
heuristic and by the metaheuristics MRSILS and IGRIS.
proposed heuristic, the computational effort of the metaheuristics is 767.23 times greater than that of
the proposed heuristic. This also explains the good performance of the metaheuristics
on the 60 smallest instances as compared with the proposed constructive heuristic, since a huge
computational effort is spent by the former (e.g. approximately 3,500 times higher in instances
Ta21–Ta30). In contrast, the CPU time of the proposed heuristic is always less than 1 second,
and its average CPU time over the first 90 instances is 0.17 seconds, against the 19.83 seconds required
by the metaheuristics.
Finally, the excellent behaviour of the proposed heuristic is also confirmed in a last experiment.
We measure the variation in solution quality of the metaheuristic MRSILS
when the BSCH(n) heuristic is used to provide its initial sequence, denoted as
MRSILS(BSCH). Results are shown in the fifth column of Table 8. The ARPD2 found by
MRSILS(BSCH) is –0.06, as compared to the 0.76 found by the original MRSILS.
5 Conclusions
In this paper, we have presented BSCH(x), a beam-search-based constructive heuristic to solve
the PFSP to minimise total flowtime. The algorithm constructs sequences and, at the same time,
combines them and selects the best x ones. Since the nodes are formed by partial sequences, a
forecast index is introduced in order to be able to compare nodes with different sets of unscheduled and scheduled
Instance Best Bound Instance Best Bound Instance Best Bound Instance Best Bound
TA1 14033 TA31 64802 TA61 253232 TA91 1042494
TA2 15151 TA32 68051 TA62 242093 TA92 1028957
TA3 13301 TA33 63162 TA63 237832 TA93 1043467
TA4 15447 TA34 68226 TA64 227738 TA94 1029244
TA5 13529 TA35 69351 TA65 240301 TA95 1029384
TA6 13123 TA36 66841 TA66 232342 TA96 999241
TA7 13548 TA37 66253 TA67 240366 TA97 1042663
TA8 13948 TA38 64332 TA68 230945 TA98 1035981
TA9 14295 TA39 62981 TA69 247677 TA99 1015389
TA10 12943 TA40 68770 TA70 242933 TA100 1022277
TA11 20911 TA41 87114 TA71 298385 TA101 1223860
TA12 22440 TA42 82820 TA72 273826 TA102 1234081
TA13 19833 TA43 79931 TA73 288114 TA103 1259866
TA14 18710 TA44 86446 TA74 301044 TA104 1228060
TA15 18641 TA45 86377 TA75 284279 TA105 1219886
TA16 19245 TA46 86587 TA76 269686 TA106 1219432
TA17 18363 TA47 88750 TA77 279463 TA107 1234366
TA18 20241 TA48 86727 TA78 290908 TA108 1240627
TA19 20330 TA49 85441 TA79 301970 TA109 1220873
TA20 21320 TA50 87998 TA80 291283 TA110 1235462
TA21 33623 TA51 125831 TA81 365463 TA111 6558547
TA22 31587 TA52 119247 TA82 372449 TA112 6679507
TA23 33920 TA53 116459 TA83 370027 TA113 6624893
TA24 31661 TA54 120261 TA84 372393 TA114 6649855
TA25 34557 TA55 118184 TA85 368915 TA115 6590021
TA26 32564 TA56 120586 TA86 370908 TA116 6603691
TA27 32922 TA57 122880 TA87 373408 TA117 6576201
TA28 32412 TA58 122489 TA88 384525 TA118 6629393
TA29 33600 TA59 121872 TA89 374423 TA119 6589205
TA30 32262 TA60 123954 TA90 379296 TA120 6626342
Table 9: New best bounds (in bold) found by the proposed algorithm.
jobs.
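The construction scheme just described can be sketched as follows. This is a deliberately simplified illustration, not the authors' implementation: the guide used here (partial flowtime plus the total unscheduled processing time) is a crude stand-in for the forecast index of the actual BSCH, and the function names are ours.

```python
def completion_times(seq, p):
    """Machine completion times and total flowtime of partial sequence
    `seq`, where p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    c = [0] * m
    total_flowtime = 0
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
        total_flowtime += c[m - 1]  # completion time on the last machine
    return c, total_flowtime

def beam_search_pfsp(p, x):
    """Beam-search-style construction: at each level, extend every kept
    partial sequence with each unscheduled job and keep the best x nodes
    according to a simple guide (partial flowtime + unscheduled work)."""
    n = len(p)
    beam = [((), frozenset(range(n)))]  # (partial sequence, unscheduled jobs)
    for _ in range(n):
        candidates = []
        for seq, free in beam:
            for j in free:
                new_seq = seq + (j,)
                _, ft = completion_times(new_seq, p)
                # crude forecast of the remaining cost
                forecast = sum(sum(p[u]) for u in free if u != j)
                candidates.append((ft + forecast, new_seq, free - {j}))
        candidates.sort(key=lambda t: t[0])
        beam = [(s, f) for _, s, f in candidates[:x]]
    best_seq = beam[0][0]
    return best_seq, completion_times(best_seq, p)[1]
```

With x = 1 the sketch behaves as a greedy constructive heuristic, while a beam width large enough to cover all candidates degenerates into full enumeration; the interest of the approach lies in the intermediate values of x, which trade solution quality for CPU time.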
Under the same computer conditions, the proposed heuristic outperforms every other efficient
heuristic for the problem, both in quality of the solutions and in computational effort (e.g. the
ARPD1 and ARPT of the BSCH(n) heuristic are 0.19 and 0.02 respectively, which are much
lower than those obtained by the most efficient heuristic so far, FF-PR1(15), with 1.22 and
7.13). When comparing BSCH(x) with the hitherto most efficient heuristics in the literature, there
are statistically significant differences for each new efficient heuristic, with the only exception of BSCH(2).
Thereby, the set of efficient heuristics for the problem has been reduced from 17 heuristics to
seven, of only two types: the existing FF heuristic with parameters 1 and 2, which is
efficient for the smallest CPU times, and our proposal with x ∈ {2, n/10, 10, 15, n}.
The excellent performance of the proposed heuristic is also shown by comparing it
against two of the best metaheuristics for the problem. Our proposal statistically outperforms
both metaheuristics (i.e. the ARPD2 of BSCH(n) is 0.40 against 0.76 for the best metaheuristic),
using much less computational effort on each instance of the benchmark. Additionally, the pro-
posed heuristic found new best upper bounds for 35 of the 120 instances in Taillard's benchmark.
Acknowledgements
This research has been funded by the Spanish Ministry of Science and Innovation, under projects
“ADDRESS” with reference DPI2013-44461-P and “PROMISE” with reference DPI2016-80750-P.
References
Abedinnia, H., Glock, C., and Brill, A. (2016). New simple constructive heuristic algorithms for
minimizing total flow-time in the permutation flowshop scheduling problem. Computers and
Operations Research, 74:165–174.
Allahverdi, A. and Aldowaisan, T. (2002). New heuristics to minimize total completion time in
m-machine flowshops. International Journal of Production Economics, 77(1):71–83.
Armentano, V. and Ronconi, D. (1999). Tabu search for total tardiness minimization in flowshop
scheduling problems. Computers and Operations Research, 26(3):219–235.
Della Croce, F. and T’kindt, V. (2002). A recovering beam search algorithm for the one-machine
dynamic total completion time scheduling problem. Journal of the Operational Research Soci-
ety, 53(11):1275–1280.
Dong, X., Chen, P., Huang, H., and Nowak, M. (2013). A multi-restart iterated local search
algorithm for the permutation flow shop problem minimizing total flow time. Computers and
Operations Research, 40(2):627–632.
Dong, X., Huang, H., and Chen, P. (2008). An improved NEH-based heuristic for the permutation
flowshop problem. Computers and Operations Research, 35(12):3962–3968.
Fernandez-Viagas, V. and Framinan, J. (2015a). NEH-based heuristics for the permutation flow-
shop scheduling problem to minimise total tardiness. Computers and Operations Research,
60:27–36.
Fernandez-Viagas, V. and Framinan, J. M. (2014). On insertion tie-breaking rules in heuristics for
the permutation flowshop scheduling problem. Computers and Operations Research, 45(0):60
– 67.
Fernandez-Viagas, V. and Framinan, J. M. (2015b). A new set of high-performing heuristics to
minimise flowtime in permutation flowshops. Computers and Operations Research, 53(0):68 –
80.
Framinan, J., Gupta, J., and Leisten, R. (2004). A review and classification of heuristics for per-
mutation flow-shop scheduling with makespan objective. Journal of the Operational Research
Society, 55(12):1243–1255.
Framinan, J. and Leisten, R. (2008). Total tardiness minimization in permutation flow shops:
A simple approach based on a variable greedy algorithm. International Journal of Production
Research, 46(22):6479–6498.
Framinan, J., Leisten, R., and Ruiz, R. (2014). Manufacturing Scheduling Systems: An Integrated
View on Models, Methods and Tools. Springer.
Framinan, J., Leisten, R., and Ruiz-Usano, R. (2002). Efficient heuristics for flowshop sequencing
with the objectives of makespan and flowtime minimisation. European Journal of Operational
Research, 141(3):559–569.
Framinan, J., Leisten, R., and Ruiz-Usano, R. (2005). Comparison of heuristics for flowtime
minimisation in permutation flowshops. Computers and Operations Research, 32(5):1237–1254.
Garey, M., Johnson, D., and Sethi, R. (1976). Complexity of flowshop and jobshop scheduling.
Mathematics of Operations Research, 1(2):117–129.
Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal
of Statistics, 6:65–70.
Johnson, S. M. (1954). Optimal two- and three-stage production schedules with setup times
included. Naval Research Logistics Quarterly, 1:61–68.
Laha, D. and Sarin, S. (2009). A heuristic to minimize total flow time in permutation flow shop.
Omega, 37(3):734–739.
Li, X., Wang, Q., and Wu, C. (2009). Efficient composite heuristics for total flowtime minimiza-
tion in permutation flow shops. Omega, 37(1):155–164.
Liu, J. and Reeves, C. (2001). Constructive and composite heuristic solutions to the P||ΣCi
scheduling problem. European Journal of Operational Research, 132:439–452.
Lowerre, B. T. (1976). The HARPY speech recognition system. PhD thesis, Carnegie-Mellon
University, USA.
Nawaz, M., Enscore Jr., E., and Ham, I. (1983). A heuristic algorithm for the m-machine, n-job
flow-shop sequencing problem. OMEGA, The International Journal of Management Science,
11(1):91–95.
Pan, Q.-K. and Ruiz, R. (2012). Local search methods for the flowshop scheduling problem with
flowtime minimization. European Journal of Operational Research, 222(1):31–43.
Pan, Q.-K. and Ruiz, R. (2013). A comprehensive review and evaluation of permutation flowshop
heuristics to minimize flowtime. Computers and Operations Research, 40(1):117–128.
Pan, Q.-K., Tasgetiren, M., and Liang, Y.-C. (2008). A discrete differential evolution algorithm
for the permutation flowshop scheduling problem. Computers and Industrial Engineering,
55(4):795–816.
Rajendran, C. (1993). Heuristic algorithm for scheduling in a flowshop to minimize total flowtime.
International Journal of Production Economics, 29(1):65–73.
Rajendran, C. and Ziegler, H. (1997). An efficient heuristic for scheduling in a flowshop to
minimize total weighted flowtime of jobs. European Journal of Operational Research, 103:129–
138.
Reza Hejazi, S. and Saghafian, S. (2005). Flowshop-scheduling problems with makespan criterion:
A review. International Journal of Production Research, 43(14):2895–2929.
Ruiz, R. and Maroto, C. (2005). A comprehensive review and evaluation of permutation flowshop
heuristics. European Journal of Operational Research, 165(2):479–494.
Ruiz, R. and Stützle, T. (2007). A simple and effective iterated greedy algorithm for the permu-
tation flowshop scheduling problem. European Journal of Operational Research, 177(3):2033–
2049.
Sabuncuoglu, I. and Bayiz, M. (1999). Job shop scheduling with beam search. European Journal
of Operational Research, 118(2):390–412.
Taillard, E. (1993). Benchmarks for basic scheduling problems. European Journal of Operational
Research, 64(2):278–285.
Valente, J. (2010). Beam search heuristics for quadratic earliness and tardiness scheduling. Jour-
nal of the Operational Research Society, 61(4):620–631.
Valente, J. and Alves, R. (2005). Filtered and recovering beam search algorithms for the
early/tardy scheduling problem with no idle time. Computers and Industrial Engineering,
48(2):363–375.
Valente, J. and Alves, R. (2008). Beam search algorithms for the single machine total weighted
tardiness scheduling problem with sequence-dependent setups. Computers and Operations
Research, 35(7):2388–2405.
Vallada, E., Ruiz, R., and Minella, G. (2008). Minimising total tardiness in the m-machine
flowshop problem: A review and evaluation of heuristics and metaheuristics. Computers and
Operations Research, 35(4):1350–1373.
... It was later improved to reduce its complexity from ( 3 ) to ( 2 ), later called the FF algorithm [6]. Later, this scheme was integrated into a beam search algorithm (more on that later) that obtained state-of-the-art performance [7]. Recently, this beam search was integrated within a biased random-key genetic algorithm as a warm-start procedure [1]. ...
... As it is noted in many works [25,7], another interesting guidance strategy is to combine both guidance strategies discussed earlier (i.e. the bound and idle time guides). Indeed, while the bound guide is usually ineffective to guide the search close to the root, it is very precise close to feasible solutions. ...
... However, the LR heuristic cannot be directly applied in a general tree search context. Indeed, it is sometimes noted [7] that algorithms like the beam search usually compare nodes from different parents, thus, it is needed to adapt the LR heuristic guidance that only compares nodes with the same parent. We propose a simple yet efficient ways to implement similar ideas. ...
Article
Full-text available
We study an iterative beam search algorithm for the permutation flowshop (makespan and flowtime minimization). This algorithm combines branching strategies inspired by recent branch-and-bounds and a guidance strategy inspired by the LR heuristic. It obtains competitive results on large instances compared to the state-of-the-art algorithms, reports many new-best-so-far solutions on the VFR benchmark (makespan minimization) and the Taillard benchmark (flowtime minimization) without using any NEH-based branching or iterative-greedy strategy. The source code is available at: https://github.com/librallu/dogs-pfsp.
... • For the total flowtime objective (F m|prmu, r j | F j problem), we adapt for P D the M RSILS algorithm proposed by Dong et al. (2013) due to their excellent performance in the F m|prmu| C j problem (see Fernandez-Viagas and Framinan, 2017). ...
Article
The technological advances recently brought to the manufacturing arena (collectively known as Industry 4.0) offer huge possibilities to improve decision-making processes in the shop floor by enabling the integration of information in real-time. Among these processes, scheduling is often cited as one of the main beneficiaries, given its data-intensive and dynamic nature. However, in view of the extremely high implementation costs of Industry 4.0, these potential benefits should be properly assessed, also taking into account that there are different approaches and solution procedures that can be employed in the scheduling decision-making process, as well as several information sources (i.e. not only shop floor status data, but also data from upstream/downstream processes). In this paper, we model various decision-making scenarios in a shop floor with different degrees of uncertainty and diverse efficiency measures, and carry out a computational experience to assess how real-time and advance information can be advantageously integrated in the Industry 4.0 context. The extensive computational experiments (equivalent to 6.3 years of CPU time) show that the benefits of using real-time, integrated shop floor data and advance information heavily depend on the proper choice of both the scheduling approach and the solution procedures, and that there are scenarios where this usage is even counterproductive. The results of the paper provide some starting points for future research regarding the design of approaches and solution procedures that allow fully exploiting the technological advances of Industry 4.0 for decision-making in scheduling.
... One of the most popular optimisations of PFSP was minimization of makespan [1] and initially proposed by Johnson [2]. Many literature also set the criteria as minimization of total ow time [3], or minimization of total tardiness [4]. In the past decades, one-time optimisation of sequence was acknowledged in most literature on optimisation of PFSP. ...
Preprint
Full-text available
The automobile disassembly line is a typical permutation flow shop problem (PFSP). In the automobile disassembly line, the goal of the research is to minimize the completion time of scheduling with optimizing the disassembly order of vehicles. However, disassembly time was uncertain at the initial stage as the consuming time of cars’ specific sequence only could be roughly estimated, which make the PFSP in automobile disassembly line unlike the traditional ones. In this study, the real-time PFSP problem in automobile disassembly line was defined and well solved with the proposed online Bees Algorithm (O-BA). The algorithm has been prepared in Matlab/Simulink to work in real-time. Time consumed by each component of each vehicle was roughly estimated based on engineer’s experience. First optimisation was carried out to decide the disassembly order of vehicles to be disassembled. When each component was disassembled, the real consuming time value was updated with the detecting system which was realized in Simulink. Then the O-BA was activated and created the new solution for the disassembly order of vehicles which were still not entering the disassembly line. The proposed O-BA algorithm has an online structure and conducted the optimisation with distinguishing whether the vehicles to be disassembled were entering the disassembly line. The result shows the O-BA with the detecting system was succeeded in realizing real-time PFSP in the disassembly line. Moreover, the method proposed in this study was suitable for solving the same kind of real-time PFSP in the assembly line or disassembly line.
Article
The flow-shop scheduling problem (FSSP) has received a considerable amount of attention due to its wide-ranging applications. However, the omission of uncertainty significantly diminishes the practicality of scheduling results, underscoring its the necessity to address uncertainty in the flow shop problem. In this paper, a fuzzy two-machine flow-shop problem is considered and an effective algorithm with a fuzzy ranking method is proposed to minimize the total waiting time. The processing times are represented using trapezoidal membership functions. Furthermore, a two-stage flow shop scheduling problem is used in the proposed algorithm and various categories of fuzzy mean techniques. The experimental results and statistical comparisons demonstrate that the proposed algorithm exhibits significant advantages in effectively solving the FFSSP (Fuzzy Flow-Shop Scheduling Problem).
Chapter
Production planningProduction planningand scheduling of flexible manufacturing plantsScheduling of flexible manufacturing plant are still highly manual labor-intensive tasks. The production efficiency is constrained due to the large number of combinations of feasible machine selectionMachine selectionand operation sequenceOperation sequence arrangement. In this study, a mathematic model approximating the real working environment and two different Bees AlgorithmsBees Algorithm were compared. In the improved Bees Algorithm with site abandonment technologyBees Algorithm with site abandonment technology, different strategies were used for the abandonment of initial sites and elite sites. The simulation results based on actual factory data from Trumpf (China) show that the mathematical model and the Bees AlgorithmBees Algorithm, THE could help to improve production effectiveness. Moreover, the improved Bees Algorithm with site abandonment technologyBees Algorithm with site abandonment technology shows its excellent ability to solve problems such as production planning issues in flexible manufacturing plants.
Article
In this paper we address the non-permutation flow shop scheduling problem, a more general variant of the flow shop problem in which the machines can have different sequences of jobs. We aim to minimize the total completion time. We propose a template to generate iterated greedy algorithms, and use an automatic algorithm configuration to obtain efficient methods. This is the first automated approach in the literature for the non-permutation flow shop scheduling problem. The algorithms start by building a high-quality permutation solution, which is then improved during a second phase that generates non-permutation solutions by changing the job order on some machines. The obtained algorithms are evaluated against two well-known benchmarks from the literature. The results show that they can find better schedules than the state-of-the-art methods for both the permutation and non-permutation flow shop scheduling problems in comparable experimental conditions, as evidenced by comprehensive computational and statistical testing. We conclude that using non-permutation schedules is a viable alternative to reduce the total completion time that production managers should consider.
Article
This paper presents a heuristic and a metaheuristic algorithm for solving the challenging single machine scheduling problem with release dates and sequence-dependent setup times to minimise the makespan. Notably, the former is a population-based constructive heuristic based on a beam search strategy that evolves through a variable number of partial sequences. The solutions obtained in the last iteration feed the initial pool of the latter proposal consisting of a population-based iterated-greedy procedure. The two proposed approximate algorithms are evaluated considering both anticipatory and non-anticipatory setup times, against the most promising algorithms of the related scheduling problems. The computational experience confirms the effectiveness of the proposed approaches, regardless of the adopted setup strategy.
Article
Full-text available
Since Johnson׳s seminal paper in 1954, scheduling jobs in a permutation flowshop has been receiving the attention of hundreds of practitioners and researchers, being one of the most studied topics in the Operations Research literature. Among the different objectives that can be considered, minimising the total tardiness (i.e. the sum of the surplus of the completion time of each job over its due date) is regarded as a key objective for manufacturing companies, as it entails the fulfilment of the due dates committed to customers. Since this problem is known to be NP-hard, most research has focused on proposing approximate procedures to solve it in reasonable computation times. Particularly, several constructive heuristics have been proposed, with NEHedd being the most efficient one, serving also to provide an initial solution for more elaborate approximate procedures. In this paper, we first analyse in detail the decision problem depending on the generation of the due dates of the jobs, and discuss the similarities with different related decision problems. In addition, for the most characteristic tardiness scenario, the analysis shows that a huge number of ties appear during the construction of the solutions done by the NEHedd heuristic, and that wisely breaking the ties greatly influences the quality of the final solution. Since no tie-breaking mechanism has been designed for this heuristic up to now, we propose several mechanisms that are exhaustively tested. The results show that some of them outperform the original NEHedd by about 25% while keeping the same computational requirements.
Article
Full-text available
This paper addresses the problem of scheduling jobs in a permutation flowshop with the objective of total completion time minimisation. Since this problem is known to be NP-hard, most research has focussed on obtaining procedures – heuristics – able to provide good, but not necessarily optimal, solutions with a reasonable computational effort. Therefore, a full set of heuristics efficiently balancing both aspects (quality of solutions and computational effort) has been developed. 12 out of these 14 efficient procedures are composite heuristics based on the LR heuristic by Liu and Reeves (2001), which is of complexity n3mn3m. In our paper, we propose a new heuristic of complexity n2mn2m for the problem, which turns out to produce better results than LR. Furthermore, by replacing the heuristic LR by our proposal in the aforementioned composite heuristics, we obtain a new set of 17 efficient heuristics for the problem, with 15 of them incorporating our proposal. Additionally, we also discuss some issues related to the evaluation of efficient heuristics for the problem and propose an alternative indicator.
Article
The most efficient approximate procedures so far for the flowshop scheduling problem with makespan objective – i.e. the NEH heuristic and the iterated greedy algorithm – are based on constructing a sequence by iteratively inserting, one by one, the non-scheduled jobs into all positions of an existing subsequence, and then, among the so obtained subsequences, selecting the one yielding the lowest (partial) makespan. This procedure usually causes a high number of ties (different subsequences with the same best partial makespan) that must be broken via a tie-breaking mechanism. The particular tie-breaking mechanism employed is known to have a great influence in the performance of the NEH, therefore different procedures have been proposed in the literature. However, to the best of our knowledge, no tie-breaking mechanism has been proposed for the iterated greedy. In our paper, we present a new tie-breaking mechanism based on an estimation of the idle times of the different subsequences in order to pick the one with the lowest value of the estimation. The computational experiments carried out show that this mechanism outperforms the existing ones both for the NEH and the iterated greedy for different CPU times. Furthermore, embedding the proposed tie-breaking mechanism into the iterated greedy provides the most efficient heuristic for the problem so far.
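An idle-time-based tie-break of this kind can be sketched as below for the insertion step. Note that the paper estimates the idle times of the candidate subsequences, whereas this sketch computes them exactly, which is simpler to follow but slower.

```python
def makespan_and_idle(seq, p):
    """Makespan and total machine idle time of a permutation (computed exactly).
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    C = [0] * m
    idle = 0
    for j in seq:
        prev = 0
        for k in range(m):
            start = max(prev, C[k])
            idle += start - C[k]      # machine k sits idle from C[k] until start
            C[k] = start + p[j][k]
            prev = C[k]
    return C[-1], idle

def insert_with_tiebreak(seq, j, p):
    """NEH-style insertion of job j: lowest partial makespan first,
    ties broken by lowest total idle time (then by earliest position)."""
    best_key, best_seq = None, None
    for pos in range(len(seq) + 1):
        cand = seq[:pos] + [j] + seq[pos:]
        ms, idle = makespan_and_idle(cand, p)
        key = (ms, idle, pos)
        if best_key is None or key < best_key:
            best_key, best_seq = key, cand
    return best_seq
```

With identical jobs, every insertion position yields the same partial makespan, so the decision is made entirely by the idle-time criterion and the positional tie-break.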
Article
This paper develops a set of new simple constructive heuristic algorithms to minimize total flow-time for an n-job × m-machine permutation flowshop scheduling problem. We first propose a new iterative algorithm based on the best existing simple heuristic algorithm, and then integrate new indicator variables for weighting jobs into this algorithm. We also propose new decision criteria to select the best partial sequence in each iteration of our algorithm. A comprehensive numerical experiment reveals that our modifications and extensions improve the effectiveness of the best existing simple heuristic without affecting its computational efficiency.
Book
The book is devoted to the problem of manufacturing scheduling, which is the efficient allocation of jobs (orders) over machines (resources) in a manufacturing facility. It offers a comprehensive and integrated perspective on the different aspects required to design and implement systems to efficiently and effectively support manufacturing scheduling decisions. Obtaining economic and reliable schedules constitutes the core of excellence in customer service and efficiency in manufacturing operations. Therefore, scheduling forms an area of vital importance for competition in manufacturing companies. However, only a fraction of scheduling research has been translated into practice, due to several reasons. First, the inherent complexity of scheduling has led to an excessively fragmented field in which different subproblems and issues are treated in an independent manner as goals in themselves, therefore lacking a unifying view of the scheduling problem. Furthermore, mathematical brilliance and elegance have sometimes taken precedence over practical, general purpose, hands-on approaches when dealing with these problems. Moreover, the paucity of research on implementation issues in scheduling has restricted translation of valuable research insights into industry. "Manufacturing Scheduling Systems: An Integrated View on Models, Methods and Tools" presents the different elements constituting a scheduling system, along with an analysis of the manufacturing context in which the scheduling system is to be developed. Examples and case studies from real implementations of scheduling systems are presented in order to drive the presentation of the theoretical insights. The book is intended for an ample readership including industrial engineering/operations post-graduate students and researchers, business managers, and readers seeking an introduction to the field.
Article
In this work we present a review and comparative evaluation of heuristics and metaheuristics for the well-known permutation flowshop problem with the makespan criterion. A number of reviews and evaluations have already been proposed. However, the evaluations do not include the latest heuristics available and there is still no comparison of metaheuristics. Furthermore, since no common benchmarks and computing platforms are used, the results cannot be generalised. We propose a comparison of 25 methods, ranging from the classical Johnson's algorithm and dispatching rules to the most recent metaheuristics, including tabu search, simulated annealing, genetic algorithms, iterated local search and hybrid techniques. For the evaluation we use the standard test of Taillard [Eur. J. Oper. Res. 64 (1993) 278] composed of 120 instances of different sizes. In the evaluations we use the experimental design approach to obtain valid conclusions on the effectiveness and efficiency of the different methods tested.
Article
In this paper, we present beam search heuristics for the single machine scheduling problem with quadratic earliness and tardiness costs, and no machine idle time. These heuristics include classic beam search procedures, as well as filtered and recovering algorithms. We consider three dispatching heuristics as evaluation functions, in order to analyse the effect of different rules on the performance of the beam search procedures. The computational results show that using better dispatching heuristics improves the effectiveness of the beam search algorithms. The performance of the several heuristics is similar for instances with low variability. For high variability instances, however, the detailed, filtered and recovering beam search (RBS) procedures clearly outperform the best existing heuristic. The detailed beam search algorithm performs quite well, and is recommended for small- to medium-sized instances. For larger instances, however, this procedure requires excessive computation times, and the RBS algorithm then becomes the heuristic of choice.
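The classic fixed-width beam search idea, expanding every partial sequence in the beam by one job, evaluating the children, and keeping only the best `width` of them, can be sketched as follows. To keep the example short it uses a plain single-machine total-tardiness objective rather than the quadratic earliness-tardiness costs studied in the paper.

```python
def beam_search(p, d, width=3):
    """Fixed-width beam search for single-machine total tardiness (illustrative).
    p[j] is the processing time and d[j] the due date of job j."""
    n = len(p)
    beam = [(0, 0, ())]                  # (tardiness so far, current time, partial seq)
    for _ in range(n):
        children = []
        for tard, t, seq in beam:
            for j in range(n):
                if j in seq:
                    continue
                t2 = t + p[j]
                children.append((tard + max(0, t2 - d[j]), t2, seq + (j,)))
        children.sort()                  # evaluation: total tardiness of the partial node
        beam = children[:width]          # prune down to the beam width
    tard, _, seq = beam[0]
    return list(seq), tard
```

With `width=1` this degenerates to a greedy dispatching rule, and with an unbounded width it becomes exhaustive enumeration; the beam width trades solution quality against computation time, which is exactly the lever the variable-width variants adjust.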
Article
Flowshop scheduling is a very active research area. This problem still attracts a considerable amount of interest despite the sheer amount of available results. Total flowtime minimization of a flowshop has been actively studied and many effective algorithms have been proposed in the last few years. New best solutions have been found for common benchmarks at a rapid pace. However, these improvements many times come at the cost of sophisticated algorithms. Complex methods hinder potential applications and are difficult to extend to small problem variations. Replicability of results is also a challenge. In this paper, we examine simple and easy to implement methods that at the same time result in state-of-the-art performance. The first two proposed methods are based on the well known Iterated Local Search (ILS) and Iterated Greedy (IG) frameworks, which have been applied with great success to other flowshop problems. Additionally, we present extensions of these methods that work over populations, something that we refer to as population-based ILS (pILS) and population-based IG (pIGA), respectively. We calibrate the presented algorithms by means of the Design of Experiments (DOE) approach. Extensive comparative evaluations are carried out against the most recent techniques for the considered problem in the literature. The results of a comprehensive computational and statistical analysis show that the presented algorithms are very effective. Furthermore, we show that, despite their simplicity, the presented methods are able to improve 12 out of 120 best known solutions of Taillard’s flowshop benchmark with total flowtime criterion.
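The destruction-construction loop of Iterated Greedy (IG) is itself very simple. A minimal sketch for total flowtime could look like the following; the parameter names (`iters`, `destruct`) are hypothetical, and the strictly-improving acceptance rule is a simplification of the acceptance criteria calibrated in the paper.

```python
import random

def flowtime(seq, p):
    """Sum of job completion times on the last machine of a permutation flowshop."""
    m = len(p[0])
    C = [0] * m
    total = 0
    for j in seq:
        prev = 0
        for k in range(m):
            prev = max(prev, C[k]) + p[j][k]
            C[k] = prev
        total += prev
    return total

def best_insertion(seq, j, p):
    """Reinsert job j at the position giving the lowest total flowtime."""
    cands = [seq[:pos] + [j] + seq[pos:] for pos in range(len(seq) + 1)]
    return min(cands, key=lambda s: flowtime(s, p))

def iterated_greedy(p, iters=100, destruct=2, seed=0):
    rng = random.Random(seed)
    seq = list(range(len(p)))            # arbitrary initial solution
    best, best_f = seq, flowtime(seq, p)
    for _ in range(iters):
        removed = rng.sample(best, min(destruct, len(best)))   # destruction
        partial = [j for j in best if j not in removed]
        for j in removed:                                      # construction
            partial = best_insertion(partial, j, p)
        f = flowtime(partial, p)
        if f < best_f:                                         # greedy acceptance
            best, best_f = partial, f
    return best, best_f
```

The population-based variants mentioned above apply this same loop to a set of solutions instead of a single incumbent.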
Article
In this paper, we consider the single machine weighted tardiness scheduling problem with sequence-dependent setups. We present heuristic algorithms based on the beam search technique. These algorithms include classic beam search procedures, as well as the filtered and recovering variants. Previous beam search implementations use fixed beam and filter widths. We consider the usual fixed width algorithms, and develop new versions that use variable beam and filter widths. The computational results show that the beam search versions with a variable width are marginally superior to their fixed value counterparts, even when a lower average number of beam and filter nodes is used. The best results are given by the recovering beam search algorithms. For large problems, however, these procedures require excessive computation times. The priority beam search algorithms are much faster, and can therefore be used for the largest instances.
Scope and purpose: We consider the single machine weighted tardiness scheduling problem with sequence-dependent setups. In the current competitive environment, it is important that companies meet the shipping dates, as failure to do so can result in a significant loss of goodwill. The weighted tardiness criterion is a standard way of measuring compliance with the due dates. Also, the importance of sequence-dependent setups in practical applications has been established in several studies. In this paper, we present several heuristics based on the beam search technique. In previous beam search implementations, fixed beam and filter widths have been used. We consider the usual fixed width algorithms, and also develop new versions with variable beam and filter widths. The computational tests show that the beam search versions with a variable width are marginally superior to their fixed value counterparts. The recovering beam search procedures are the heuristic of choice for small and medium size instances, but require excessive computation times for large problems. The priority beam search algorithm is the fastest of the beam search heuristics, and can be used for the largest instances.