Highlights

► BLS is an effective Max-Cut algorithm based on iterated local search.
► BLS alternates between a descent phase and a dedicated diversification phase.
► Diversification is adaptive and combines guided and random perturbations.
► BLS finds new record-breaking solutions for 34 out of 71 benchmark instances.
► The source code and the results of BLS are available online.
Breakout Local Search for the Max-Cut
Problem
Una Benlic and Jin-Kao Hao
LERIA, Université d'Angers
2 Boulevard Lavoisier, 49045 Angers Cedex 01, France
Abstract
Given an undirected graph G = (V, E) where each edge of E is weighted with an integer number, the maximum cut problem (Max-Cut) is to partition the vertices of V into two disjoint subsets so as to maximize the total weight of the edges between the two subsets. As one of Karp's 21 NP-complete problems, Max-Cut has attracted considerable attention over the last decades. In this paper, we present Breakout Local Search (BLS) for Max-Cut. BLS explores the search space by a joint use of local search and adaptive perturbation strategies. The proposed algorithm shows excellent performance on the set of well-known maximum cut benchmark instances in terms of both solution quality and computational time. Out of the 71 benchmark instances, BLS is capable of finding new improved results in 34 cases and attaining the previous best-known result for 35 instances, within computing times ranging from less than one second to 5.6 hours for the largest instance with 20000 vertices.

Keywords: Max-Cut; local search and heuristics; adaptive diversification; metaheuristics.
1 Introduction
The maximum cut problem (Max-Cut) is one of Karp's 21 NP-complete problems with numerous practical applications [12]. Let G = (V, E) be an undirected graph with the set of vertices V and the set of edges E, each edge (i, j) ∈ E being associated with a weight w_ij. Max-Cut consists in partitioning the vertices of V into two disjoint subsets V1 and V2 such that the total weight of the edges whose endpoints belong to different subsets is maximized, i.e.,

$$ f(V_1, V_2) = \max \sum_{i \in V_1,\; j \in V_2} w_{ij} \qquad (1) $$

* Corresponding author.
Email addresses: benlic@info.univ-angers.fr (Una Benlic), hao@info.univ-angers.fr (Jin-Kao Hao).
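To make the objective concrete, here is a small self-contained C++ example (ours, purely illustrative; the graph and the side assignment are made up, not taken from the benchmarks) that evaluates f(V1, V2) for a given two-way partition:

```cpp
#include <iostream>
#include <vector>

struct Edge { int u, v, w; };

// Cut value: total weight of edges whose endpoints lie in different subsets.
// side[i] is 0 if vertex i is in V1, and 1 if it is in V2.
int cutValue(const std::vector<Edge>& edges, const std::vector<int>& side) {
    int f = 0;
    for (const Edge& e : edges)
        if (side[e.u] != side[e.v]) f += e.w;
    return f;
}

int main() {
    // Illustrative 4-vertex graph with integer (possibly negative) weights.
    std::vector<Edge> edges = {{0,1,1}, {1,2,-1}, {2,3,1}, {0,3,1}, {0,2,1}};
    std::vector<int> side = {0, 1, 0, 1};              // V1 = {0,2}, V2 = {1,3}
    std::cout << "f(V1, V2) = " << cutValue(edges, side) << "\n";  // prints 2
    return 0;
}
```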
Given its theoretical and practical importance, Max-Cut has received considerable attention over the last decades. Well-known exact methods for Max-Cut, such as the Branch and Price procedure [17], are capable of solving to optimality medium-size instances (i.e., |V| = 500). For larger instances, different heuristic methods have been proposed, including global equilibrium search [21], the projected gradient approach [6], the rank-2 relaxation heuristic [7], and greedy heuristics [11]. Other well-known algorithms are based on popular metaheuristics such as variable neighbourhood search [8], tabu search [1,16], scatter search [19], GRASP [22], and different hybrid approaches [8,22].
In this work, we propose a new heuristic algorithm for Max-Cut, using the Breakout Local Search (BLS) [4,5]. Based on the framework of Iterated Local Search (ILS) [18], BLS combines local search (i.e., the steepest descent) with a dedicated and adaptive diversification mechanism. Its basic idea is to use local search to discover local optima and to employ adaptive perturbations to continually move from one attractor to another in the search space. The continual exploration of new search areas is achieved by alternating between random and directed, and weak and strong perturbations depending on the current search state. Despite its simplicity, BLS shows excellent performance on the set of well-known Max-Cut instances in terms of both solution quality and computational time. Out of 71 benchmark instances, the proposed approach is capable of improving the previous best-known solutions in 34 cases and reaching the previous best-known results for 35 instances, within a computational time ranging from less than one second to 5.6 hours for the largest instance with 20000 vertices.
In the next section, we present in detail the breakout local search approach for the Max-Cut problem. Section 3 shows extensive computational results and comparisons with the state-of-the-art Max-Cut algorithms. In Section 4, we provide a parameter sensitivity analysis and justify the parameter settings used to obtain the reported results. Moreover, we investigate the efficiency of the proposed diversification mechanism of BLS, and highlight the importance of excluding diversification schemes during the local search phase. Conclusions are given in the last section.
2 Breakout Local Search (BLS)
Our Breakout Local Search (BLS) approach is conceptually rather simple and transits from one basin of attraction to another basin by a combined use of local search (to reach local optima) and dedicated perturbations (to discover new promising regions).
Algorithm 1 The Breakout Local Search for the Max-Cut Problem
Require: Graph G = (V, E), initial jump magnitude L0, max. number T of non-improving attractors visited before strong perturb.
Ensure: A partition of V.
1: C ← generate_initial_solution(V) /* C is a partition of V into two subsets V1 and V2 */
2: fc ← f(C) /* fc records the objective value of the solution */
3: Cbest ← C /* Cbest records the best solution found so far */
4: fbest ← fc /* fbest records the best objective value reached so far */
5: Cp ← C /* Cp records the solution obtained after the last descent */
6: ω ← 0 /* Counter of consecutive non-improving local optima */
7: Iter ← 0 /* Counter of iterations */
8: while stopping condition not reached do
9:   Let m be the best move eligible for C /* See Section 2.1 */
10:   while f(C ⊕ m) > f(C) do
11:     fc ← f(C ⊕ m) /* Record the objective value of the current solution */
12:     C ← C ⊕ m /* Perform the best-improving move */
13:     Update bucket sorting structure /* Section 2.2 */
14:     Hv ← Iter + γ /* Update tabu list for the moved vertex v; γ is the tabu tenure */
15:     Iter ← Iter + 1
16:   end while
17:   if fc > fbest then
18:     Cbest ← C; fbest ← fc /* Update the recorded best solution */
19:     ω ← 0 /* Reset the counter of consecutive non-improving local optima */
20:   else
21:     ω ← ω + 1
22:   end if
23:   /* Determine the number of perturbation moves L to be applied to C */
24:   if ω > T then
25:     /* Search seems to be stagnating, random perturbation required */
26:     ω ← 0
27:   end if
28:   if C = Cp then
29:     /* Search returned to previous local optimum, increase the number of perturbation moves */
30:     L ← L + 1
31:   else
32:     /* Search escaped from the previous local optimum, reinitialize the number of perturbation moves */
33:     L ← L0
34:   end if
35:   /* Perturb the current local optimum C with L perturbation moves */
36:   Cp ← C
37:   C ← Perturbation(C, L, H, Iter, ω) /* Section 2.3.1 */
38: end while
Recall that given a search space S and an objective function f, a neighborhood N is a function N : S → P(S) that associates to each solution C of S a subset N(C) of S. A local optimum C with respect to the given neighborhood N is a solution such that ∀C′ ∈ N(C), f(C) ≥ f(C′), where f is the maximization function. A basin of attraction of a local optimum C can be defined as the set B_C of solutions that lead the local search to the given local optimum C, i.e., B_C = {C′ ∈ S | LocalSearch(C′) = C} [3]. Since a local optimum C acts as an attractor with respect to the solutions of B_C, the terms attractor and local optimum will be used interchangeably throughout this paper. Notice that in practice, for a given solution C, a neighboring solution C′ is typically generated by applying a move operator to C. Let m be the move applied to C; we use C′ ← C ⊕ m to denote the transition from the current solution C to the new neighboring solution C′.
BLS follows the general scheme of iterated local search [18] and alternates
between a local search phase, which uses the steepest descent [20] to discover
an attractor, and a perturbation phase, which guides the search to escape
from the current basin of attraction. The general BLS algorithm is shown in
Algorithm 1.
After generation of an initial solution (line 1) and an initialization step (lines 2-7), BLS applies the steepest descent to reach a local optimum (lines 9-16). Each iteration of this local search procedure identifies the best move m among those that are applicable to the current solution C, and applies m to C to obtain a new solution which replaces the current solution (lines 9-12). Updates are performed to reflect the current state of the search (lines 13-22). In particular, if the last discovered local optimum is better than the best solution found so far (recorded in Cbest), Cbest is updated with the last local optimum (lines 17-18).
If no improving neighbor exists, local optimality is reached. At this point, BLS tries to escape from the basin of attraction of the current local optimum and to go into another basin of attraction. For this purpose, BLS applies a number L of dedicated moves to the current optimum C (we say that C is perturbed, see Section 2.3 for details). Each time an attractor is perturbed (line 37), the perturbed solution becomes the new starting point for the next round of the local search procedure (a new round of the outer while structure, line 8). The algorithm stops when a prefixed condition is satisfied. This can be, for example, a cutoff time, an allowed maximum number of iterations or a target objective value to be attained. In this paper, an allowed maximum number of iterations is used (see Section 3).
To determine the most appropriate perturbation (its type and strength), we distinguish two situations. First, if the search returns to the immediate previous attractor (recorded in Cp), BLS perturbs C more strongly by increasing the number of perturbation moves L to be applied (lines 28-30). Otherwise (i.e., the search succeeded in escaping from the current attractor), the number of perturbation moves L is reduced to its initial value L0 (L0 is a parameter). Second, if the search cannot improve the best solution found so far after visiting a certain number T (T is a parameter) of local optima, BLS applies a significantly stronger perturbation in order to drive the search definitively towards a new and more distant region of the search space (lines 24-27).
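For illustration, this adaptation can be condensed into a few lines of C++ (our own rendering of lines 24-34 of Algorithm 1; the function and variable names are ours):

```cpp
// Jump-magnitude adaptation, condensing lines 24-34 of Algorithm 1.
// omega counts consecutive non-improving local optima; resetting it to 0
// signals the perturbation procedure to apply a strong random perturbation.
void adaptJumpMagnitude(int& L, int L0, bool returnedToPrevious,
                        int& omega, int T) {
    if (omega > T)
        omega = 0;                        // stagnation detected: restart-like perturbation follows
    L = returnedToPrevious ? L + 1 : L0;  // grow the jump only if the last escape failed
}
```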
The success of the described method depends basically on two key factors. First, it is important to determine the number L of perturbation moves (also called "perturbation strength" or "jump magnitude") to be applied to change or perturb the solution. Second, it is equally important to consider the type of perturbation moves to be applied. While conventional perturbations are often based on random moves, more focused perturbations using dedicated information could be more effective. The degree of diversification introduced by a perturbation mechanism depends both on the jump magnitude and on the type of moves used for perturbation. If the diversification is too weak, the local search has greater chances to end up cycling between two or more locally optimal solutions, leading to search stagnation. On the other hand, a too strong diversification will have the same effect as a random restart, which usually results in a low probability of finding better solutions in the following local search phase. For its perturbation mechanism, the proposed BLS takes advantage of the information related to the search status and history. We explain the perturbation mechanism in Section 2.3.
2.1 The neighbourhood relations and their exploration
For solution transformations, BLS employs three distinct move operators (moves for short) M1-M3 whose basic idea is to generate a new cut C by moving vertices to the opposite partition subset. To define these move operators, we first introduce the notion of move gain, which indicates how much a partition is improved, according to the optimization objective, if a vertex is moved to another subset. For each vertex v ∈ V, we determine the gain g_v of moving v to the opposite partition subset. As we show in Section 2.2, the vertex with the highest (best) gain can be easily determined using a special bucket data structure that has been extensively used for the related (and different) graph partitioning problem.
Given a partition (cut) C = {V1, V2}, the three move operators are defined as follows:

M1: Select a vertex v_m with the highest gain. Move the selected vertex v_m from its current subset to the opposite partition subset.

M2: Select a highest-gain vertex v1 from V1 and a highest-gain vertex v2 from V2. Move v1 to V2, and v2 to V1.

M3: Randomly select a vertex v. Move the selected vertex v from its current subset to the opposite partition subset.
Each iteration of the local search consists in identifying the best move m from M1 and applying it to C to obtain a new solution. This process is repeated until a local optimum is reached (see lines 9-16 of Alg. 1). The two directed perturbations of BLS apply a move m from M1 and M2 respectively, while the strong perturbation, which acts as a restart, performs moves from M3 (see Section 2.3.1 for the three perturbation strategies).
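As an illustration of these operators, the following C++ sketch (our own, not the authors' code) computes the gain g_v and applies M1 by a naive linear scan; BLS avoids this scan with the bucket structure described in Section 2.2:

```cpp
#include <vector>

// Gain of moving vertex v to the opposite subset: edges to same-side
// neighbours become cut edges (+w), edges to opposite-side neighbours
// stop being cut (-w). adj[v] holds (neighbour, weight) pairs.
int gain(int v, const std::vector<std::vector<std::pair<int,int>>>& adj,
         const std::vector<int>& side) {
    int g = 0;
    for (auto [u, w] : adj[v])
        g += (side[u] == side[v]) ? w : -w;
    return g;
}

// Apply move operator M1 naively: scan all vertices for the highest gain
// and flip that vertex. (This O(|V| * deg) scan only illustrates the move
// semantics; the bucket structure of Section 2.2 makes selection O(1).)
int applyBestMove(std::vector<int>& side,
                  const std::vector<std::vector<std::pair<int,int>>>& adj) {
    int best = 0;
    for (int v = 1; v < (int)side.size(); ++v)
        if (gain(v, adj, side) > gain(best, adj, side)) best = v;
    int g = gain(best, adj, side);
    side[best] = 1 - side[best];   // move to the opposite subset
    return g;                      // f changes by g (may be negative)
}
```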
Fig. 1. An illustrative example of the bucket sorting data structure on a graph with 6 vertices.
2.2 Bucket sorting
To ensure a fast evaluation of the neighbouring moves, our implementation uses the bucket sorting data structure which keeps vertices ordered by their gains. This structure is used to avoid unnecessary search for the highest-gain vertex and to minimize the time needed for updating the gains of vertices affected by each move.

The bucket sorting structure was first proposed by Fiduccia and Mattheyses [9] to improve the Kernighan-Lin algorithm [14] for the minimum graph bisection problem. We adopt this technique for our Max-Cut problem. The idea is to put all the vertices with the same gain g in a bucket that is ranked g. Then, to determine a vertex with the maximum gain, it suffices to go to the non-empty bucket with the highest rank, and select a vertex from the bucket. After each move, the bucket structure is updated by recomputing the gains of the selected vertex and its neighbors, and transferring these vertices to the appropriate buckets.
The bucket data structure consists of two arrays of buckets, one for each partition subset, where each bucket of an array is represented by a doubly linked list. An example of the bucket data structure is illustrated in Figure 1. The arrays are indexed by the possible gain values for a move, ranging from gmax to gmin. A special pointer maxgain points to the highest index in the array whose bucket is not empty, and thus enables the selection of the best improving move in constant time. The structure also keeps an additional array of vertices where each element (vertex) points to its corresponding vertex in the doubly linked lists. This enables direct access to the vertices in buckets and their transfer from one bucket to another in constant time.
Each time a move involving a vertex v is performed, only the gains of the vertices adjacent to v are recalculated (each in O(1)) and updated in the bucket structure in constant time (delete and insert operations in the bucket are both of O(1) complexity). Therefore, the complexity of moving vertex v from its current subset to the opposite subset is upper-bounded by the number of vertices adjacent to v.
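For concreteness, here is a simplified single-array C++ sketch of such a bucket structure (the actual implementation keeps one array per partition subset; the class and member names are ours, and gains are assumed to lie in [-gmax, gmax]):

```cpp
#include <list>
#include <vector>

// Simplified bucket structure for one partition subset. A gain g is stored
// at index g + gmax, and each vertex keeps an iterator to its list node so
// that delete/insert (and thus bucket transfers) are O(1), as in Section 2.2.
class Buckets {
    std::vector<std::list<int>> bucket;         // bucket[g + gmax] = vertices with gain g
    std::vector<std::list<int>::iterator> pos;  // direct-access array into the lists
    std::vector<int> gainOf;
    int gmax, maxgain;                          // maxgain tracks the highest non-empty rank
public:
    Buckets(int n, int gmax_)
        : bucket(2 * gmax_ + 1), pos(n), gainOf(n, 0), gmax(gmax_), maxgain(-gmax_) {}
    void insert(int v, int g) {
        gainOf[v] = g;
        bucket[g + gmax].push_front(v);
        pos[v] = bucket[g + gmax].begin();
        if (g > maxgain) maxgain = g;
    }
    void updateGain(int v, int g) {             // O(1) transfer between buckets
        bucket[gainOf[v] + gmax].erase(pos[v]);
        insert(v, g);
    }
    int popBest() {                             // caller ensures the structure is non-empty;
        while (bucket[maxgain + gmax].empty())  // a popped vertex must be re-inserted before
            --maxgain;                          // updateGain is called on it again
        int v = bucket[maxgain + gmax].front();
        bucket[maxgain + gmax].pop_front();
        return v;                               // vertex with the highest gain
    }
};
```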
2.3 Adaptive perturbation mechanism
The perturbation mechanism plays a crucial role within BLS since the local
search alone cannot escape from a local optimum. BLS thus tries to move to
the next basin of attraction by applying a weak or strong, directed or random
perturbation depending on the state of the search (lines 23–37 of Alg. 1). The
pseudo-code of this adaptive perturbation-based diversification procedure is
given in Algorithms 2 and 3.
The perturbation procedure (Alg. 2) takes as its input the following parameters: the current solution C which will be perturbed, the jump magnitude L determined in the main BLS algorithm (Alg. 1, lines 30 and 33), the tabu list H, the global iteration counter Iter, and the number ω of consecutive non-improving local optima visited. Based on this information, the perturbation procedure determines the type of moves to be applied. The perturbation moves can either be random or directed. First, if the search fails to update the best solution after consecutively visiting a certain number T of local optima (indicated by ω = 0, Alg. 2), the search is considered to be trapped in a non-promising search-space region and a strong perturbation is applied (Alg. 2, line 3), which basically displaces randomly a certain number (fixed by the jump magnitude L) of vertices from one side of the two-way partition to the other side. Here no constraint is imposed on the choice of the displaced vertices and any vertex can take part in this perturbation process. We will explain this random perturbation in Section 2.3.1.
Second, if the number of consecutively visited local optima does not exceed the threshold T, we allow the search to explore the current search region more thoroughly by adaptively choosing between weaker (directed) and stronger (random) perturbation moves (the adaptive mechanism is described in Section 2.3.1). Basically, the directed perturbation is more oriented towards search intensification than a random perturbation, since perturbation moves are chosen by also considering the quality criterion so as not to deteriorate the current solution too much. Two different types of directed perturbation moves are distinguished within BLS and are explained in the next section.
Once the type of perturbation is determined, BLS modifies the current solution C by applying to it L perturbation moves, which are chosen from the corresponding set of moves defined in the next section (Alg. 3). Notice that, as in the case of moves performed during the local search phase, perturbation moves are added to the tabu list to avoid reconsidering them for the next γ iterations (Alg. 3, line 4, see also Section 2.3.1). The perturbed solution is then used as the new starting point for the next round of the local search.
Algorithm 2 The perturbation procedure Perturbation(C, L, H, Iter, ω)
Require: Local optimum C, jump magnitude L, tabu list H, global iteration counter Iter, number ω of consecutive non-improving local optima visited.
Ensure: A perturbed solution C.
1: if ω = 0 then
2:   /* Best solution not improved after a certain number of visited local optima */
3:   C ← Perturb(C, L, B) /* Random perturb. with moves from set B, see Section 2.3.1 for the definition of set B */
4: else
5:   Determine probability P according to Formula (2) /* Section 2.3.1 */
6:   With probability P·Q, C ← Perturb(C, L, A1) /* Directed perturb. with moves of set A1, see Section 2.3.1 for the definition of set A1 */
7:   With probability P·(1 − Q), C ← Perturb(C, L, A2) /* Directed perturb. with moves of set A2, see Section 2.3.1 for the definition of set A2 */
8:   With probability (1 − P), C ← Perturb(C, L, B) /* Random perturb. with moves of set B */
9: end if
10: Return C
Algorithm 3 Perturbation operator Perturb(C, L, M)
Require: Local optimum C, perturbation strength L, tabu list H, global iteration counter Iter, the set of perturbation moves M.
Ensure: A perturbed solution C.
1: for i := 1 to L do
2:   Take move m ∈ M
3:   C ← C ⊕ m
4:   Hv ← Iter + γ /* Update tabu list for the moved vertex v; γ is the tabu tenure */
5:   Update bucket sorting structure /* Section 2.2 */
6:   Iter ← Iter + 1
7: end for
8: Return C
2.3.1 The perturbation strategies
As mentioned above, BLS employs two types of directed perturbations and a
random perturbation to guide the search towards new regions of the search
space.
Directed perturbations are based on the idea of the tabu list from tabu search [10]. These perturbations use a selection rule that favors the moves that minimize the degradation of the objective, under the constraint that the moves are not prohibited by the tabu list. Move prohibition is determined in the following way. Each time a vertex v is moved from its current subset Vc, it is forbidden to place it back into Vc for γ iterations (γ is called the tabu tenure), where γ takes a random value from a given range.
The information for move prohibition is maintained in the tabu list H, where the ith element is the iteration number when vertex i was last moved plus γ. The tabu status of a move is ignored only if the move leads to a new solution better than the best solution found so far (this is called aspiration in the terminology of tabu search). The directed perturbations thus rely both on 1) history information which keeps track, for each move, of the last time (iteration) when it was performed, and 2) the quality of the moves to be applied for perturbation, in order not to deteriorate the perturbed solution too much.
The eligible moves for the first type of directed perturbation (applied in Alg. 2, line 6) are identified by the set A1 such that:

A1 = {m | m ∈ M1, max{g_m}, (H_m + γ) < Iter or (g_m + f_c) > f_best}

where g_m is the gain of performing move m (see Sections 2.1 and 2.2), f_c the objective value of the current solution, and f_best the objective value of the best found solution. Note that the first directed perturbation considers a subset of eligible moves obtained by applying the move operator M1 (see Section 2.1).
The second type of directed perturbation (applied in Alg. 2, line 7) is almost the same as the first type. The only difference is that the second directed perturbation considers eligible moves obtained with the move operator M2 (see Section 2.1). These moves are identified by the set A2 such that:

A2 = {m | m ∈ M2, max{g_m}, (H_m + γ) < Iter or (g_m + f_c) > f_best}
Finally, the random perturbation consists in performing randomly selected moves (i.e., M3 from Section 2.1). More formally, moves performed during the random perturbation are identified by the set B such that:

B = {m | m ∈ M3}

Since random perturbations can displace any vertex of the partition without constraint, the quality of the resulting solution could be severely affected. In this sense, this perturbation is significantly stronger than the directed perturbations.
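The selection rule behind A1 (and, with M2-type moves, A2) can be sketched in C++ as follows; this is our own rendering, with tabuUntil playing the role of the tabu list H and gains assumed to be kept up to date by the bucket structure:

```cpp
#include <limits>
#include <vector>

// Pick an M1-type directed perturbation move (set A1): among the vertices
// whose move is not forbidden by the tabu list, take one with the highest
// gain; a tabu move stays eligible if it would improve on fbest (aspiration).
int pickDirectedMove(const std::vector<int>& gain,
                     const std::vector<long long>& tabuUntil, // role of H
                     long long iter, int fc, int fbest) {
    int best = -1, bestGain = std::numeric_limits<int>::min();
    for (int v = 0; v < (int)gain.size(); ++v) {
        bool tabu = tabuUntil[v] >= iter;        // still within the tabu tenure
        bool aspiration = fc + gain[v] > fbest;  // overrides the tabu status
        if ((!tabu || aspiration) && gain[v] > bestGain) {
            bestGain = gain[v];
            best = v;
        }
    }
    return best;   // -1 if every move is tabu and none passes aspiration
}
```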
As soon as search stagnation is detected, i.e., the best found solution has not been improved after consecutively visiting T local optima, BLS applies moves of the random perturbation in order to drive the search towards distant regions of the search space (lines 1-3 of Alg. 2). Otherwise, BLS applies these three types of perturbations probabilistically. The probability of applying a particular perturbation is determined dynamically, depending on the current number of consecutive non-improving attractors visited (indicated by ω in Alg. 1). The idea is to apply directed perturbations more often (with a higher probability) at the beginning of the search, i.e., as the search progresses towards improved new local optima (the non-improving consecutive counter ω is small). With the increase of ω, the probability of using directed perturbations progressively decreases, while the probability of applying random moves increases for the purpose of a stronger diversification.
Additionally, it has been observed from an experimental analysis that it is often useful to guarantee a minimum number of applications of a directed perturbation. Therefore, we constrain the probability P of applying directed perturbations to take values no smaller than a threshold P0:

$$ P = \begin{cases} e^{-\omega/T} & \text{if } e^{-\omega/T} > P_0 \\ P_0 & \text{otherwise} \end{cases} \qquad (2) $$

where T is the maximum number of non-improving local optima visited before carrying out a stronger perturbation.
Given the probability P of applying a directed perturbation, the probabilities of applying the first and the second type of directed perturbation are determined respectively by P·Q and P·(1 − Q), where Q is a constant from [0, 1] (see Alg. 2). BLS then generates a perturbed solution by applying the perturbation operator accordingly, to make the dedicated moves from the sets A1, A2 or B (see Alg. 3).
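In code, the adaptive choice between the three perturbation types might look as follows (a C++ sketch with our own names, following Formula (2) and Algorithm 2):

```cpp
#include <algorithm>
#include <cmath>
#include <random>

enum class PerturbType { DirectedA1, DirectedA2, Random };

// Choose the perturbation type: directed perturbations are applied with
// probability P = max(e^{-omega/T}, P0) as in Formula (2), split between
// the sets A1 and A2 by the constant Q; otherwise perturb randomly (set B).
PerturbType choosePerturbation(int omega, int T, double P0, double Q,
                               std::mt19937& rng) {
    double P = std::max(std::exp(-(double)omega / T), P0);
    double r = std::uniform_real_distribution<double>(0.0, 1.0)(rng);
    if (r < P * Q) return PerturbType::DirectedA1;   // probability P*Q
    if (r < P)     return PerturbType::DirectedA2;   // probability P*(1-Q)
    return PerturbType::Random;                      // probability 1-P
}
```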
In Section 4.2, we provide an experimental study showing the influence of this
perturbation strategy on the performance of our search algorithm.
2.4 Discussion
The general BLS procedure combines features from several well-established metaheuristics: iterated local search [18], tabu search [10] and simulated annealing [15]. We briefly discuss the similarities and differences between our BLS approach and these methods.

Following the general framework of ILS, BLS uses local search to discover local optima and perturbation to diversify the search. However, BLS distinguishes itself from most ILS algorithms by the combination of multiple perturbation strategies triggered according to the search status, leading to variable levels of diversification. Moreover, with BLS each locally optimal solution returned by the local search procedure is always accepted as the new starting solution regardless of its quality, which completely eliminates the acceptance criterion component of ILS.
A further distinction of BLS is the way an appropriate perturbation strategy is selected at a given stage of the search. As explained in Section 2.3.1, BLS applies a weak perturbation with a higher probability P as long as the search progresses towards improved solutions. As the number ω of consecutively visited non-improving local optima increases, indicating a possible search stagnation, we progressively decrease the probability of a weak perturbation and increase the probability of a strong perturbation. The idea of this adaptive change of probability finds its inspiration in simulated annealing and enables a better balance between an intensified and a diversified search.
To direct the search towards more promising regions of the search space, BLS employs perturbation strategies based on the notion of the tabu list that is borrowed from tabu search. The tabu list enables BLS to perform perturbation moves that do not deteriorate the solution quality too much, in such a way that the search does not return to the previous local optimum. However, BLS does not consider the tabu list during its local search (descent) phases, while each iteration of tabu search is constrained by the tabu list. As such, BLS and tabu search may explore different trajectories during their respective searches, leading to different local optima. In fact, one of the keys to the effectiveness of BLS is that it completely excludes diversification during local search, unlike tabu search and simulated annealing for which intensification and diversification are always intertwined. We argue that during local search, diversification schemes may not be relevant, and that the compromise between search exploration and exploitation is critical only once a local optimum is reached. Other studies supporting this idea can be found in [2,13]. We show computational evidence to support this assumption in Section 4.2.2.
As we will see in the next section, BLS is able to attain highly competitive results on the set of well-known benchmarks for the Max-Cut problem in comparison with state-of-the-art algorithms.
3 Experimental results
In this section, we report extensive computational results of our BLS approach and show comparisons with some state-of-the-art methods from the literature. We conduct our experiments on a set of 71 benchmark instances that has been widely used to evaluate Max-Cut algorithms. These instances can be downloaded from http://www.stanford.edu/~yyye/yyye/Gset/ and include toroidal, planar and random graphs, with the number of vertices ranging from |V| = 800 to 20,000, and edge weights of values 1, 0, or -1.
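For readers who wish to reproduce the setup, a loader for these instances might look as follows in C++ (assuming the usual Gset text format: a header line with |V| and |E|, then one "i j w" line per edge with 1-indexed endpoints; this format is our assumption, not stated in the paper):

```cpp
#include <fstream>
#include <string>
#include <vector>

struct Edge { int u, v, w; };

// Reads a Gset-style instance: a "|V| |E|" header, then one edge per line.
std::vector<Edge> loadGset(const std::string& path, int& n) {
    std::ifstream in(path);
    int m;
    in >> n >> m;
    std::vector<Edge> edges(m);
    for (Edge& e : edges) {
        in >> e.u >> e.v >> e.w;
        --e.u; --e.v;                 // convert to 0-indexed vertices
    }
    return edges;
}
```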
Table 1
Settings of parameters

Parameter  Description  Setting
L0  initial jump magnitude  0.01|V|
T  max. number of non-improving attractors visited before strong perturb. (restart)  1000
γ  tabu tenure  rand[3, |V|/10]
P0  smallest probability for applying directed perturb.  0.8
Q  probability for applying directed perturb. I over directed perturb. II  0.5
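As a convenience, the defaults of Table 1 can be bundled in code (a C++ sketch; the struct and member names are ours):

```cpp
#include <random>

// Default BLS parameter settings from Table 1 (names are ours).
struct BlsParams {
    double L0factor = 0.01;  // initial jump magnitude L0 = 0.01 * |V|
    int    T        = 1000;  // non-improving attractors before a strong (restart) perturbation
    double P0       = 0.8;   // smallest probability of a directed perturbation
    double Q        = 0.5;   // split between the two directed perturbations
    int jumpMagnitude(int nVertices) const {
        return (int)(L0factor * nVertices);
    }
    int tabuTenure(int nVertices, std::mt19937& rng) const {
        // tabu tenure drawn uniformly from rand[3, |V|/10]; assumes |V| >= 30
        return std::uniform_int_distribution<int>(3, nVertices / 10)(rng);
    }
};
```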
3.1 Parameter settings and comparison criteria
The parameter settings of BLS used in our experiments are given in Table 1. These parameter values were determined by performing a preliminary experiment on a selection of 15 problem instances from the set of the first 54 graphs (G1-G54). In this experiment, we tested different values for each of the five parameters (L0, T, γ, P0 and Q), while fixing the rest of the parameters to their default values given in Table 1. In Section 4.1, we provide a parameter sensitivity analysis and justify the setting of parameters that is used to obtain the reported results.
Given its stochastic nature, we run our BLS approach 20 times on each of the 71 benchmark instances, each run being limited to 200000·|V| iterations, where |V| is the number of vertices of the given graph instance. The assessment of BLS performance is based on two comparisons: one against the best-known results ever reported in the literature, and the other against four state-of-the-art methods. In addition to information related to the quality criteria (best objective values, average objective values and standard deviation), we also show computing times for indicative purposes. Our computing times are based on a C++ implementation of our BLS algorithm which is compiled with GNU gcc under GNU/Linux running on an Intel Xeon E5440 at 2.83 GHz with 2GB of RAM. Following the DIMACS machine benchmark 1, our machine requires 0.23 CPU seconds for r300.5, 1.42 CPU seconds for r400.5, and 5.42 CPU seconds for r500.5.
It should be noted that a fully fair comparative analysis with the existing Max-Cut algorithms from the literature is not a straightforward task because of the differences in computing hardware, programming language, termination criterion, etc. For this reason, the evaluation is mainly based on the best-known results obtained with different state-of-the-art Max-Cut algorithms (Table 2). The comparison with individual algorithms is presented only for indicative purposes and should be interpreted with caution (Table 3). Nevertheless, our experimental study provides interesting indications about the performance of the proposed BLS algorithm relative to these state-of-the-art approaches.
1 dmclique, ftp://dimacs.rutgers.edu in directory /pub/dsj/clique
Table 2
Computational results of BLS on the 71 Max-Cut instances. Column fprev shows the best-known result reported in the literature; columns fbest and favg give the best and average result obtained with BLS over 20 runs; column σ shows the standard deviation; column t(s) indicates the average time (in seconds) required by BLS to reach the best result from fbest.

Name  |V|  fprev  fbest  favg  σ  t(s)
G1 800 11624 11624(9) 11612.4 11.16 13
G2 800 11620 11620(11) 11615 5.74 41
G3 800 11622 11622(19) 11621.1 3.92 83
G4 800 11646 11646(16) 11642.8 6.65 214
G5 800 11631 11631(20) 11631 0 14
G6 800 2178 2178(20) 2178 0 18
G7 800 2006 2006(12) 2001.05 6.55 317
G8 800 2005 2005(18) 2004.4 1.8 195
G9 800 2054 2054(14) 2049.95 6.22 97
G10 800 2000 2000(13) 1996.05 5.84 79
G11 800 564 564(20) 564 0 1
G12 800 556 556(20) 556 0 2
G13 800 582 582(20) 582 0 2
G14 800 3064 3064(6) 3062.85 0.91 119
G15 800 3050 3050(20) 3050 0 43
G16 800 3052 3052(8) 3051.1 1.14 70
G17 800 3047 3047(16) 3046.7 0.64 96
G18 800 992 992(14) 991.7 0.46 106
G19 800 906 906(6) 904.55 1.56 20
G20 800 941 941(20) 941 0 9
G21 800 931 931(14) 930.2 1.29 42
G22 2000 13359 13359(1) 13344.45 16.14 560
G23 2000 13342 13344(10) 13340.6 4.08 278
G24 2000 13337 13337(5) 13329.8 6.76 311
G25 2000 13326 13340(1) 13333.4 3.68 148
G26 2000 13314 13328(3) 13320 6.98 429
G27 2000 3325 3341(10) 3332.25 9.79 449
G28 2000 3287 3298(8) 3293.85 5.31 432
G29 2000 3394 3405(1) 3388.2 5.94 17
G30 2000 3403 3412(10) 3404.85 9.61 283
G31 2000 3299 3309(8) 3305.3 4.79 285
G32 2000 1410 1410(13) 1409.3 0.95 336
G33 2000 1382 1382(1) 1380.1 0.44 402
G34 2000 1384 1384(20) 1384 0 170
G35 2000 7684 7684(2) 7680.85 0.57 442
G36 2000 7677 7678(1) 7673.6 1.37 604
G37 2000 7689 7689(1) 7685.85 2.1 444
G38 2000 7681 7687(4) 7684.95 2.33 461
G39 2000 2397 2408(13) 2405.35 3.72 251
G40 2000 2392 2400(1) 2394.6 4.65 431
3.2 Comparisons with the current best-known solutions
Table 2 summarizes the computational results of BLS on the set of 71 Max-Cut instances (G1-G81) in comparison with the current best-known results (column fprev), which are from references [7,16,19,21,22]. For BLS, we report the best objective value fbest, the average objective value favg, the standard deviation σ, and the average CPU time in seconds required for reaching fbest over 20 executions. The results from Table 2 show that BLS is able to improve
Table 2
Continued.

Name  |V|  fprev  fbest  favg  σ  t(s)
G41 2000 2398 2405(17) 2403 4.98 73
G42 2000 2474 2481(2) 2475.4 2.97 183
G43 1000 6660 6660(18) 6658.15 5.57 26
G44 1000 6650 6650(14) 6647.7 3.66 43
G45 1000 6654 6654(15) 6652.15 4.67 104
G46 1000 6649 6649(13) 6647.75 2.26 67
G47 1000 6656 6657(11) 6654.35 3.53 102
G48 3000 6000 6000(20) 6000 0 0
G49 3000 6000 6000(20) 6000 0 0
G50 3000 5880 5880(19) 5879.9 0.44 169
G51 1000 3847 3848(17) 3847.85 0.36 81
G52 1000 3850 3851(19) 3850.85 0.65 78
G53 1000 3848 3850(13) 3849.5 0.74 117
G54 1000 3850 3852(10) 3850.6 1.74 131
G55 5000 10236 10294(2) 10282.4 5.67 842
G56 5000 3949 4012(1) 3998.65 7.19 786
G57 5000 3460 3492(2) 3488.6 2.11 1440
G58 5000 19248 19263(1) 19255.9 4.06 1354
G59 5000 6019 6078(1) 6067.9 5.99 2485
G60 7000 14057 14176(1) 14166.8 4.53 2822
G61 7000 5680 5789(1) 5773.35 7.23 7420
G62 7000 4822 4868(2) 4863.8 2.4 5465
G63 7000 26963 26997(1) 26980.7 6.25 6318
G64 7000 8610 8735(1) 8735.0 9.6 4090
G65 8000 5518 5558(2) 5551.2 2.63 4316
G66 9000 6304 6360(1) 6350.2 4.37 6171
G67 10000 6894 6940(1) 6935.3 2.85 3373
G70 10000 9499 9541(1) 9527.1 7.89 11365
G72 10000 6922 6998(2) 6935.3 2.85 12563
G77 14000 - 9926(1) 9916.1 4.17 9226
G81 20000 - 14030(1) 14021.7 5.77 20422
# Improved 34
# Matched 35
# Worse 0
the previous best-known results for 34 instances 2, and to reach the best-known solution in 35 cases. As far as we know, solutions for the two largest Max-Cut instances (G77 and G81) have not been reported in the literature. However, for future comparisons, we include in Table 2 the results obtained by BLS for these two instances. As for the computing time required to reach its best solution from column fbest, BLS takes on average from less than one second to 10 minutes for instances with up to 3000 vertices. For the large and very large instances with 5000 to 20000 vertices, the computing time needed goes from 0.2 to 5.6 hours.
2 Our best results are available at http://www.info.univ-angers.fr/pub/hao/BLS_max_cut.html
3.3 Comparisons with the current best performing approaches
To further evaluate the performance of BLS, we compare it with the following algorithms that achieve state-of-the-art performance:

(1) Two very recent GRASP-Tabu search algorithms [22]: a GRASP-Tabu Search algorithm working with a single solution (GRASP-TS) and its reinforcement (GRASP-TS/PM) by a population management strategy. The reported results were obtained on a PC running Windows XP with a Pentium 2.83GHz CPU and 2GB RAM (the same computing platform as the one we used).

(2) Scatter search (SS) [19]: an advanced scatter search incorporating several innovative features. The evaluation of SS was performed on a machine with a 3.2 GHz Intel Xeon processor and 2GB of RAM (a computer comparable to the one we used).

(3) Rank-2 relaxation heuristic (CirCut) [7]: a method based on a relaxation of the problem. The results reported in this paper for CirCut were obtained under the same conditions as those of SS and are taken from [19].
Since large Max-Cut instances (G55-G81) are very rarely used in the literature
for algorithm evaluation, we limit this comparison to the first 54 Max-Cut
instances which are also the most commonly used in the past.
Table 3 provides the results of this comparison with the four reference approaches. For each approach, we report the percentage deviation ρ from the best-known solution (column fprev from Table 2), computed as %ρ = 100 × (fprev − f)/fprev, where f is the best objective value attained by the given approach. For instance, on G5 (fprev = 11631) a best value of f = 11627 would give %ρ = 100 × 4/11631 ≈ 0.034, which is the deviation reported for CirCut on that instance. Moreover, we show for each algorithm the required average time in seconds, taken from the corresponding papers.
GRASP-TS/PM is one of the most effective current algorithms for the Max-Cut problem; it attains the previous best-known result for 41 out of 54 instances, with an average percentage deviation of 0.048. GRASP-TS also provides excellent performance on these instances compared to the current state-of-the-art Max-Cut heuristics. It is able to reach the previous best-known solution for 22 instances with an average percentage deviation of 0.199. The other two popular Max-Cut approaches, SS and CirCut, obtain the previous best-known result in 11 and 12 cases respectively, with average percentage deviations of 0.289 and 0.339.
The results from Table 3 show that BLS outperforms the four reference algorithms in terms of solution quality. Indeed, the average percentage deviation of the best results obtained with BLS is -0.066, meaning that BLS improves the best-known result by 0.066% on average. Moreover, BLS also appears to be highly competitive with the other approaches in terms of computing time. The
Table 3
Comparison of BLS with the four reference approaches on the 54 most commonly used benchmark instances. For each approach, we report the percentage deviation %ρ of the best result obtained by the given algorithm from the best-known result reported in the literature. Column t(s) shows the average time in seconds required for reaching the best result.

      BLS         GRASP-TS [22]  GRASP-TS/PM [22]  SS [19]      CirCut [7]
Name  %ρ  t(s)    %ρ  t(s)       %ρ  t(s)          %ρ  t(s)     %ρ  t(s)
G1 0 13 0 100 0 47 0 139 0 352
G2 0 41 0 677 0 210 0 167 0.026 283
G3 0 83 0 854 0 297 0 180 0 330
G4 0 214 0 155 0 49 0 194 0.043 524
G5 0 14 0 235 0 232 0 205 0.034 1128
G6 0 18 0 453 0 518 0.597 176 0 947
G7 0 317 0 304 0 203 1.196 176 0.150 867
G8 0 195 0 565 0 596 0.848 195 0.099 931
G9 0 97 0 581 0 559 0.682 158 0.292 943
G10 0 79 0 845 0 709 0.35 210 0.3 881
G11 0 1 0 18 0 10 0.355 172 0.709 74
G12 0 2 0 723 0 233 0.719 241 0.719 58
G13 0 1 0 842 0 516 0.687 228 1.374 62
G14 0 119 0.065 812 0 1465 0.131 187 0.196 128
G15 0 43 0.328 419 0 1245 0.033 143 0.033 155
G16 0 70 0.098 1763 0 335 0.229 162 0.229 142
G17 0 96 0.131 1670 0 776 0.131 313 0.328 366
G18 0 106 0 977 0 81 0.403 174 1.411 497
G19 0 20 0 490 0 144 0.331 128 1.989 507
G20 0 9 0 578 0 80 0 191 0 503
G21 0 42 0.430 484 0 667 0.107 233 0 524
G22 0 560 0.097 983 0.075 1276 0.097 1336 0.097 493
G23 -0.015 278 0.180 1668 0.075 326 0.187 1022 0.187 457
G24 0 311 0.104 643 0.097 1592 0.255 1191 0.172 521
G25 -0.105 148 0.083 767 0 979 0.045 1299 0 1600
G26 -0.105 429 0.060 1483 0.008 1684 0.150 1415 0 1569
G27 -0.481 449 0.271 256 0 832 0.211 1437 0.571 1456
G28 -0.335 432 0.365 81 0 1033 0.061 1314 0.821 1543
G29 -0.324 17 0.236 21 0 993 0.147 1266 0.530 1512
G30 -0.264 283 0.235 1375 0.029 1733 0 1196 0.529 1463
G31 -0.303 285 0.394 904 0 888 0.333 1336 0.424 1448
G32 0 336 1.135 903 0.284 1232 0.851 901 1.418 221
G33 0 401 1.013 1501 0.579 506 1.447 926 1.592 198
G34 0 170 0.578 1724 0.578 1315 1.445 950 1.156 237
G35 0 442 0.403 1124 0.299 1403 0.208 1258 0.182 440
G36 -0.013 604 0.404 543 0.221 1292 0.221 1392 0.221 400
G37 0 444 0.325 983 0.247 1847 0.325 1387 0.299 382
average run-time required by BLS for the 54 instances is 176 seconds, which is significantly less than the average time required by the four reference approaches, considering that the reported results were obtained on comparable computers.

To see whether there exists a significant performance difference in terms of solution quality among BLS and the reference algorithms, we apply the Friedman non-parametric statistical test followed by the Post-hoc test on the results from Table 3. From the Friedman test, we observe that there is a significant performance difference among the compared algorithms (with a p-value less than
Table 3
Continued.

      BLS         GRASP-TS [22]  GRASP-TS/PM [22]  SS [19]      CirCut [7]
Name  %ρ  t(s)    %ρ  t(s)       %ρ  t(s)          %ρ  t(s)     %ρ  t(s)
G38 -0.078 461 0.365 667 0.143 1296 0 1012 0.456 1189
G39 -0.459 251 0.375 911 0 742 0.167 1311 0.083 852
G40 -0.334 431 0.585 134 0 1206 0.753 1166 0.209 901
G41 -0.292 73 1.293 612 0 1490 0.500 1016 0 942
G42 -0.283 183 0.849 1300 0 1438 0.687 1458 0.202 875
G43 0 26 0 969 0 931 0.060 406 0.060 213
G44 0 43 0.015 929 0.015 917 0.030 356 0.105 192
G45 0 104 0 1244 0 1791 0.180 354 0.030 210
G46 0 67 0.015 702 0 405 0.226 498 0.060 639
G47 -0.015 102 0 1071 0 725 0.105 359 0 633
G48 0 0 0 13 0 4 0 20 0 119
G49 0 0 0 27 0 6 0 35 0 134
G50 0 169 0 80 0 14 0 27 0 231
G51 -0.026 81 0.104 628 0 701 0.026 513 0.260 497
G52 -0.026 78 0.156 1274 0 1228 0.026 551 0.442 507
G53 -0.052 117 0.026 1317 0 1419 0.052 424 0.156 503
G54 -0.052 131 0.052 1231 0 1215 0.104 429 0.208 524
Avg. -0.066 176 0.199 771 0.048 804 0.289 621 0.339 617
2.2e-16). Moreover, the Post-hoc analysis shows that BLS statistically outperforms GRASP-TS, SS and CirCut with p-values of 7.081000e-09, 9.992007e-16 and 0.000000e+00 respectively. However, the difference between BLS and GRASP-TS/PM is statistically less significant, with a p-value of 9.547157e-02.
4 Experimental analyses
4.1 Parameter sensitivity analysis
The parameter sensitivity analysis is based on a subset of 15 selected Max-Cut instances from the set of the first 54 graphs (G1-G54). For each BLS parameter (i.e., L0, T, γ, P0 and Q), we test a number of possible values while fixing the other parameters to their default values from Table 1. We test values for L0 in the range [0.0025|V|, 0.32|V|], T in the range [250, 2000], P0 in the range [0.5, 0.9] and Q in the range [0.2, 0.8]. Similarly, for the tabu tenure γ we tried several ranges which induce increasingly larger degrees of diversification into the search. For each instance and each parameter setting, we perform 20 independent runs with the time limit per run set to 30 minutes.
We use the Friedman statistical test to see whether there is any difference in BLS performance, in terms of its average result, when varying the value of a single parameter as mentioned above. The Friedman test shows that there is no statistical difference in performance (with p-value > 0.8) when varying respectively the values of T, γ, and Q. This implies that these three parameters exhibit no particular sensitivity. On the other hand, when varying the values of
Table 4
Post-hoc test for solution sets obtained by varying the value of L0

L0 =           0.0025|V|  0.005|V|  0.01|V|  0.02|V|  0.04|V|  0.08|V|  0.16|V|
L0 = 0.005|V|  0.85863
L0 = 0.01|V|   0.19961    0.95858
L0 = 0.02|V|   0.02021    0.53532   0.99145
L0 = 0.04|V|   0.01468    0.47563   0.98450  0.99999
L0 = 0.08|V|   0.65622    0.99997   0.99567  0.76698  0.71316
L0 = 0.16|V|   0.99986    0.97971   0.44552  0.07605  0.05944  0.89527
L0 = 0.32|V|   0.99999    0.74057   0.11999  0.00931  0.00704  0.50545  0.99804
Table 5
Post-hoc test for solution sets obtained by varying the value of P0

P0 =       0.50     0.55     0.60     0.65     0.70     0.75     0.80     0.85
P0 = 0.55  0.99999
P0 = 0.60  0.94419  0.98065
P0 = 0.65  0.77176  0.87550  0.99998
P0 = 0.70  0.77154  0.87545  0.99998  1.00000
P0 = 0.75  0.22748  0.33802  0.94411  0.99529  0.99529
P0 = 0.80  0.69552  0.81722  0.99985  0.99999  0.99999  0.99840
P0 = 0.85  0.10606  0.17438  0.81728  0.96221  0.96221  0.99999  0.98063
P0 = 0.90  0.00557  0.01134  0.22809  0.47073  0.47048  0.94417  0.55630  0.99128
parameters L0 and P0 respectively, the Friedman test revealed a statistical difference in performance, with p-value = 0.006906 and p-value = 0.005237. We thus perform the Post-hoc test on the solution sets obtained respectively with different settings of parameters L0 and P0. The results of these analyses are provided in Tables 4 and 5 for L0 and P0 respectively, where each table entry shows the p-value for two sets of average results obtained with two different values of the corresponding parameter.
From the results in Table 4 for L0, we observe that for any two tested values of L0 the p-value is generally large (often p-value > 0.5), except in 4 cases where the p-value < 0.05. This implies that L0 is not highly sensitive to different settings. However, the analytical results from Table 5 show that P0 is even less sensitive than L0. Indeed, the p-value for two solution sets obtained with different values of P0 is often very close to 1. Only in 2 cases is the difference statistically significant (with p-value < 0.01).
To further investigate the performance of BLS with different values of L0 and P0, we show in Figures 2 and 3 the box and whisker plots which indicate, for each tested parameter value, the distribution and range of the results obtained on the 15 selected instances. For the sake of clarity, these results are expressed as the percentage deviation of the average result from the best-known solution fbest reported in the literature.
From the box and whisker plot in Figure 2, we observe a visible difference in the distribution of results among the data sets obtained with different settings of parameter L0. For the set of results generated with small values of L0 (≤ 0.01|V|), the plot indicates a significantly smaller variation compared to the results obtained with larger values of L0. For instance, the comparison between
Fig. 2. Box and whisker plot of the results obtained with different settings of parameter L0 for 15 selected instances.
the two sets of results obtained with L0 = 0.01|V| (set S1) and L0 = 0.04|V| (set S2) indicates that the percentage deviations of the results from S1 and S2 range from -0.25% to 0.07% and from -0.45% to 0.5% respectively. More precisely, around 25% of the results from S2 have a lower percentage deviation from fbest than any of the results from S1, while another 37% of the results from S2 have a higher percentage deviation from fbest than any of the results from S1. We can thus conclude that a lower value for L0 (e.g., L0 = 0.01|V|) is a better choice, since the deviations from the best-known result do not vary much from one instance to another.
From the box and whisker plot in Figure 3, we observe that the difference in the distribution and variation of results among the solution sets generated with different settings of parameter P0 is less evident than in Figure 2. This confirms our previous observation from the Post-hoc analysis that P0 is slightly less sensitive than L0.
Fig. 3. Box and whisker plot of the results obtained with different settings of parameter P0 for 15 selected instances.
4.2 Influence of diversification strategies: comparisons and discussions
The objective of this section is twofold. First, we wish to highlight the contribution of the proposed diversification mechanism to the overall performance of the BLS method. In Section 4.2.1, we thus provide a statistical comparison between several variants of BLS integrating different perturbation mechanisms. Second, we try to justify our statement made in Section 2.4 that diversification schemes are crucial only once a local optimum is reached and should be excluded during local search. For this purpose, we provide in Section 4.2.2 a comparison with tabu search and iterated tabu search (ITS) methods which, because of the tabu list, induce a certain degree of diversification at each iteration. These two methods are obtained with minimal changes to our BLS algorithm.
We perform the comparisons using the set of the 54 Max-Cut instances (G1-
G54). In every experiment and for each instance, the reported results are
obtained under the same conditions, i.e., after 20 independent executions with
the maximum time limit per run set to 30 minutes.
Table 6
Computational results obtained with four different diversification mechanisms. For each algorithm, columns fbest and favg show the best and average result over 20 runs.

      M3 (V1)       M1M3 (V2)     M1M2 (V3)     Complete
Name  fbest  favg   fbest  favg   fbest  favg   fbest  favg
G1 0 0.407 0 0.091 0 0.072 0 0.101
G2 0.026 0.478 0 0.113 0 0.038 0 0.046
G3 0.043 0.443 0 0.052 0 0 0 0.023
G4 0.172 0.505 0 0.062 0.010 0.035 0 0.012
G5 0.043 0.432 0 0.031 0 0 0 0.012
G6 0.138 2.162 0 0.191 0 0 0 0.044
G7 0.648 2.806 0 0.501 0 0 0 0.187
G8 0.998 2.117 0 0.304 0 0 0 0.067
G9 0 2.347 0 0.613 0 0 0 0.204
G10 0.4 2.123 0 0.238 0 0.120 0 0.270
G11 0 0 0 0 0 0 0 0
G12 0 0 0 0 0 0 0 0
G13 0 0 0 0 0 0 0 0
G14 0.033 0.165 0 0.042 0.033 0.119 0 0.036
G15 0 0.256 0 0.011 0.011 0.175 0 0
G16 0 0.224 0 0.034 0 0.106 0 0.021
G17 0 0.184 0 0.026 0.033 0.138 0 0.021
G18 0.101 0.968 0 0.297 0.210 0.877 0 0.066
G19 0 1.176 0 0.348 0 0.717 0 0.270
G20 0 1.509 0 0.122 0.210 1.190 0 0
G21 0 1.241 0 0.521 0 1.004 0 0.064
G22 0.467 0.844 0.067 0.467 0.030 0.367 0 0.257
G23 0.232 0.476 0.015 0.061 0.019 0.002 -0.015 -0.001
G24 0.067 0.473 0 0.104 0 0.017 0 0.058
G25 0.038 0.336 -0.105 -0.012 -0.105 -0.052 -0.105 -0.051
G26 0.158 0.321 -0.105 0.004 -0.105 -0.077 -0.105 -0.040
G27 0.451 1.463 -0.331 0.053 -0.481 -0.457 -0.481 -0.331
G28 1.065 1.771 -0.335 -0.090 -0.335 -0.240 -0.335 -0.129
G29 0.619 1.499 -0.324 0.355 -0.324 -0.066 -0.324 0.080
G30 0.558 1.604 -0.294 0.066 -0.294 -0.239 -0.264 -0.076
G31 0.606 1.552 -0.251 0.027 -0.303 -0.174 -0.303 -0.177
G32 0.142 0.277 0 0.007 0 0 0 0.057
G33 0.145 0.412 0 0.109 0 0.015 0 0.137
G34 0 0.195 0 0 0 0 0 0
G35 0.104 0.228 0 0.010 0.013 0.142 0 0.038
G36 0.091 0.231 0 0.032 0.026 0.138 -0.013 0.049
G37 0.104 0.256 0 0.032 0.065 0.161 0 0.029
G38 0.065 0.182 -0.091 -0.055 0.013 0.098 -0.065 -0.040
G39 0.167 0.766 -0.459 -0.200 -0.167 0.465 -0.459 -0.348
G40 0.293 1.248 -0.334 -0.115 0.084 0.717 -0.293 -0.123
G41 -0.291 1.372 0 0.056 -0.292 0.922 -0.292 -0.265
G42 0.121 1.071 -0.283 0.196 0.081 0.879 -0.283 -0.071
G43 0.090 0.437 0 0.052 0 0 0 0.045
G44 0.075 0.452 0 0.055 0 0.023 0 0.042
G45 0 0.325 0 0.106 0 0 0 0.033
G46 0.211 0.426 0 0.119 0 0.006 0 0.043
G47 0.060 0.500 -0.015 0.064 -0.015 0.002 -0.015 0.053
G48 0 0 0 0 0 0 0 0
G49 0 0.100 0 0 0 0 0 0
G50 0 0.607 0 0 0 0 0 0.003
G51 -0.026 0.207 -0.026 -0.009 -0.026 0.119 -0.026 -0.017
G52 0.026 0.148 -0.026 -0.018 -0.026 0.095 -0.026 -0.021
G53 0.026 0.173 -0.052 -0.031 -0.052 0.084 -0.052 -0.045
G54 0.052 0.197 -0.052 0.019 0.026 0.142 -0.052 -0.004
4.2.1 Comparison with different variants of the perturbation mechanism
In this section, we perform a comparison between several versions of our algorithm, which employ different diversification strategies to escape local optima. The first version of the diversification mechanism (call it V1) is based solely on random moves of type M3 (from the set of moves B, see Section 2.3.1). The second version (call it V2) adaptively switches between the directed perturbation, which performs moves of type M1 from the set A1 (see Section 2.3.1), and the random perturbation. The third version (call it V3) integrates a diversification strategy that combines the two types of directed perturbations, which perform respectively moves from the set A1 (moves of type M1) and the set A2 (moves of type M2). The last version is our default BLS algorithm detailed in Section 2. Please note that the strongest diversification is induced by V1, since only random moves are considered for perturbation. On the other hand, the weakest diversification is introduced with V3, since all the perturbation moves consider the quality criterion so as not to degrade the resulting solution too much.
The results of this experiment are shown in Table 6. Columns fbest and favg provide respectively the best and average results obtained by each algorithm. The results indicate that the three algorithms V2, V3 and default BLS clearly outperform algorithm V1, which combines the descent local search with the most basic perturbation mechanism based on random moves. More precisely, the best results reported in columns fbest indicate that our default BLS algorithm reports a better solution than V1 for 36 out of the 54 instances and attains the same result as V1 for the other 18 instances. However, the difference between V2, V3 and the default BLS is much less obvious. Indeed, we observe from columns fbest that the default BLS outperforms V2 and V3 in only 5 and 6 cases respectively (out of 54), and is outperformed on 4 and 5 instances respectively. To see whether there is a statistically significant difference in the average results (from column favg) obtained by the four algorithms, we perform the Friedman test followed by the Post-hoc test. The Friedman test revealed a significant difference with a p-value < 2.2e-16. As expected, the Post-hoc test showed a significant difference between the sets of average solutions obtained with V1 and the default BLS algorithm, with a p-value of 0.000000e+00. Moreover, the Post-hoc test showed that the default BLS algorithm statistically outperforms V2 in terms of average performance, with a p-value of 1.701825e-03. However, there is no statistical difference in average performance between V3 and the default BLS algorithm (p-value of 7.734857e-01). From Table 6, we observe that V3 outperforms the three other algorithms, in terms of average results, for a number of small instances (G1-G10). This implies that the weakest diversification ensures the best performance for these instances. On the other hand, the default BLS algorithm provides better average results than V3 for a number of more difficult instances (e.g., G36-G42, G51-G54).
To conclude, the directed perturbation, which uses dedicated information, is more beneficial than the random perturbation for the tested Max-Cut instances. Nevertheless, for some difficult instances, an even better average performance with BLS is obtained when combining perturbation strategies that introduce varying degrees of diversification into the search process.
4.2.2 Comparison with tabu search (TS) and iterated tabu search (ITS)
As previously mentioned, one of the keys to the effectiveness of BLS is that it completely excludes diversification during its local search phase. Indeed, the descent phases of BLS are carried out by moves that are selected by considering only the quality criterion. To provide some justification for this idea, we perform a comparison with a tabu search algorithm (TS) and an iterated tabu search algorithm (ITS), which are obtained by making minor modifications to our BLS algorithm.
The TS algorithm used for this comparison consists in performing moves identified by the two sets A1 and A2 (see Section 2.3.1 for the definition of sets A1 and A2). With equal probability, TS performs a move either from A1 or from A2. Note that this procedure is simply the directed perturbation of BLS. For ITS, we modify our BLS algorithm by excluding the adaptive perturbation strategy. The local search phase of ITS is the above-mentioned TS procedure, while the perturbation mechanism corresponds to the random perturbation performed by BLS. The perturbation phase of ITS is triggered if the best found solution is not improved after 10000 iterations of the TS procedure. It is important to note that the tabu list of TS and ITS ensures a certain degree of diversification at each iteration.
For each approach, we report in Table 7 the best and average results in columns fbest and favg respectively. From column fbest, we observe that for 19 instances, BLS finds better results than both TS and ITS. In the other 35 cases, the best solution attained by BLS is at least as good as that reached by both TS and ITS.
To see whether BLS statistically outperforms TS and ITS in terms of solution quality, we apply the Friedman non-parametric statistical test followed by the Post-hoc test on the average results from Table 7. The Friedman test discloses that there is a significant difference among the compared algorithms, with a p-value of 4.724e-05. Moreover, the Post-hoc analysis shows that BLS is statistically better than TS and ITS, with p-values of 6.554058e-04 and 4.615015e-05 respectively. Although we did not include the average time required by each approach to reach the best result from fbest, BLS remains highly competitive with TS and ITS also in terms of computational time. While TS and ITS need on average 236 and 330 seconds respectively to reach their best
Table 7
Comparison of BLS with tabu search (TS) and iterated tabu search (ITS). For each algorithm, columns fbest and favg show the best and average result over 20 runs.

      BLS           TS            ITS
Name  fbest  favg   fbest  favg   fbest  favg
G1 0 0.101 0 0.001 0 0
G2 0 0.046 0 0.004 0 0.009
G3 0 0.023 0 0.008 0 0.009
G4 0 0.012 0 0.004 0 0.004
G5 0 0.012 0 0.003 0 0.004
G6 0 0.044 0 0.087 0 0.071
G7 0 0.187 0 0.032 0 0.017
G8 0 0.067 0 0.082 0 0.092
G9 0 0.204 0.049 0.119 0 0.097
G10 0 0.270 0.05 0.125 0 0.118
G11 0 0 0 0 0 0
G12 0 0 0 0 0 0
G13 0 0 0 0 0 0
G14 0 0.036 0.033 0.080 0 0.059
G15 0 0 0 0.102 0 0.113
G16 0 0.021 0 0.090 0 0.057
G17 0 0.021 0 0.062 0 0.108
G18 0 0.066 0.101 0.307 0 0.161
G19 0 0.270 0 0.199 0 0.221
G20 0 0 0 0.367 0 0.499
G21 0 0.064 0 0.585 0 0.585
G22 0 0.257 0.212 0.542 0.198 0.421
G23 -0.015 -0.001 0.142 0.200 0.142 0.208
G24 0 0.058 0.060 0.109 0.075 0.127
G25 -0.105 -0.051 -0.015 0.026 -0.015 0.026
G26 -0.105 -0.040 -0.030 0.035 -0.045 0.032
G27 -0.481 -0.331 -0.271 0.002 -0.241 0.036
G28 -0.335 -0.129 0 0.141 -0.122 0.105
G29 -0.324 0.080 -0.147 0.088 -0.059 0.153
G30 -0.264 -0.076 -0.088 0.101 -0.029 0.084
G31 -0.303 -0.177 -0.091 0.192 -0.061 0.196
G32 0 0.057 0 0 0 0
G33 0 0.137 0 0 0 0
G34 0 0 0 0 0 0
G35 0 0.038 0.104 0.159 0.065 0.146
G36 -0.013 0.049 0.091 0.153 0.078 0.177
G37 0 0.029 0.039 0.185 0.117 0.179
G38 -0.065 -0.040 -0.052 0.131 0.052 0.131
G39 -0.459 -0.348 -0.167 0.325 -0.417 0.280
G40 -0.293 -0.123 -0.167 0.355 -0.084 0.337
G41 -0.292 -0.265 -0.291 0.542 -0.292 0.588
G42 -0.283 -0.071 -0.081 0.400 -0.040 0.758
G43 0 0.045 0 0.022 0 0.0188
G44 0 0.042 0 0.038 0.015 0.040
G45 0 0.033 0 0.026 0 0.032
G46 0 0.043 0 0.045 0.015 0.050
G47 -0.015 0.053 0 0.025 0 0.026
G48 0 0 0 0 0 0
G49 0 0 0 0 0 0
G50 0 0.003 0 0 0 0.005
G51 -0.026 -0.017 -0.026 0.057 -0.026 0.077
G52 -0.026 -0.021 0 0.069 0 0.064
G53 -0.052 -0.045 -0.052 0.012 -0.052 0.012
G54 -0.052 -0.004 0 0.108 -0.026 0.082
results reported in fbest, BLS requires on average around 180 seconds for the
54 instances.
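For illustration, a Friedman test of this kind can be run, for instance, with SciPy. The sketch below uses only the first three rows of Table 7 (G1-G3) as sample data; the test reported above is of course computed over the favg columns of all 54 instances.

    from scipy.stats import friedmanchisquare

    # Average results (favg) per instance for the three compared algorithms;
    # only the first three rows of Table 7 are shown here for brevity.
    bls = [0.101, 0.046, 0.023]
    ts  = [0.001, 0.004, 0.008]
    its = [0.000, 0.009, 0.009]

    stat, p_value = friedmanchisquare(bls, ts, its)
    print("Friedman statistic = %.4f, p-value = %.3e" % (stat, p_value))

    # If p_value falls below the chosen significance level (e.g., 0.05),
    # a post-hoc procedure (e.g., pairwise comparisons with a correction
    # such as Holm's) identifies which pairs of algorithms differ.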
5 Conclusion
In this paper, we presented the Breakout Local Search approach for the Max-Cut problem. BLS alternates between a local search phase (to find local optima) and a perturbation-based diversification phase (to jump from one local optimum to another). In addition to the descent-based local search procedure, the diversification phase is of critical importance for the performance of BLS, since the local search alone is unable to escape a local optimum. The diversification mechanism of the proposed approach adaptively controls the jumps towards new local optima according to the state of the search, by varying the magnitude of a jump and selecting the most suitable perturbation for each diversification phase, as sketched below.
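Schematically, the overall procedure can be summarized as follows (a sketch only; the callables descent, adapt and perturb are hypothetical placeholders standing for the components detailed in Section 2):

    def breakout_local_search(s, f, descent, adapt, perturb, max_iters=100000):
        # Alternate a pure descent phase with an adaptively chosen
        # perturbation: adapt inspects the search state (e.g., the number
        # of consecutive non-improving visits of local optima) and returns
        # the perturbation type together with the jump magnitude to apply.
        best, best_f, state = s, f(s), {}
        for _ in range(max_iters):
            s = descent(s)                     # reach a local optimum
            if f(s) > best_f:
                best, best_f = s, f(s)
            kind, magnitude = adapt(s, state)  # choose the next jump
            s = perturb(s, kind, magnitude)    # move towards a new attractor
        return best, best_f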
Experimental evaluations on a popular set of benchmark instances showed that, despite its simplicity, our approach outperforms the current Max-Cut algorithms in terms of solution quality. Out of the 71 benchmark instances, BLS improves the current best results in 34 cases and attains the previous best-known result for 35 instances. Moreover, the computing time required by BLS to reach the reported results compares very favorably with that of other state-of-the-art approaches. To attain its best results reported in the paper, BLS needs less than one second to 10 minutes for the graphs with up to 3000 vertices, and 0.2 to 5.6 hours for the large and very large instances with 5000 to 20000 vertices.
We also provided experimental evidence highlighting the importance of the adaptive perturbation strategy employed by the proposed BLS approach, and the benefit of completely separating diversification from intensification during the local search phases.
Acknowledgment
We are grateful to the anonymous referees for valuable suggestions and comments which helped us improve the paper. This work is partially supported by the RaDaPop (2009-2013) and LigeRO (2009-2013) projects from the Region of Pays de la Loire, France.
References
[1] E. Arráiz, O. Olivo. Competitive simulated annealing and tabu search algorithms for the max-cut problem. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2009), pages 1797-1798, 2009.
[2] R. Battiti, M. Protasi. Reactive search, a history-based heuristic for max-sat.
ACM Journal of Experimental Algorithmics, 2:2, 1996.
[3] R. Battiti, M. Brunato, F. Mascia. Reactive search and intelligent optimization.
Operations Research/Computer Science Interfaces Series 45, 2009.
[4] U. Benlic, J.K. Hao. Breakout local search for maximum clique problems. Computers & Operations Research, (in press, http://dx.doi.org/10.1016/j.cor.2012.06.002).
[5] U. Benlic, J.K. Hao. A study of breakout local search for the minimum sum
coloring problem. To appear in L.T. Bui et al. (Eds.): SEAL 2012, Lecture
Notes in Computer Science, 2012.
[6] S. Burer, R.D.C. Monteiro. A projected gradient algorithm for solving the
maxcut SDP relaxation. Optimization Methods and Software, 15: 175–200, 2001.
[7] S. Burer, R.D.C. Monteiro, Y. Zhang. Rank-two relaxation heuristics for MAX-
CUT and other binary quadratic programs. SIAM Journal on Optimization,
12: 503–521, 2002.
[8] P. Festa, P.M. Pardalos, M.G.C. Resende, C.C. Ribeiro. Randomized heuristics
for the maxcut problem. Optimization Methods and Software, 17: 1033–1058,
2002.
[9] C. Fiduccia, R. Mattheyses. A linear-time heuristic for improving network partitions. In Proceedings of the 19th Design Automation Conference, 171-185, 1982.
[10] F. Glover, M. Laguna. Tabu Search, Kluwer Academic Publishers, Boston, 1997.
[11] S. Kahruman, E. Kolotoglu, S. Butenko, I.V. Hicks. On greedy construction
heuristics for the MAX-CUT problem. International Journal of Computational
Science and Engineering, 3(3): 211–218, 2007.
[12] R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller, J.W. Thatcher (Eds.), Complexity of Computer Computations, Plenum Press, 85-103, 1972.
[13] J.P. Kelly, M. Laguna, F. Glover. A study of diversification strategies for the quadratic assignment problem. Computers and Operations Research, 21(8): 885-893, 1994.
[14] B.W. Kernighan, S. Lin. An efficient heuristic procedure for partitioning graphs. Bell System Technical Journal, 49: 291-307, 1970.
[15] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi. Optimization by simulated annealing. Science, 220: 671-680, 1983.
[16] G.A. Kochenberger, J.K. Hao, Z. Lü, H. Wang, F. Glover. Solving large scale Max Cut problems via tabu search. Journal of Heuristics, (in press, http://dx.doi.org/10.1007/s10732-011-9189-8).
[17] K. Krishnan, J.E. Mitchell. A Semidefinite Programming Based Polyhedral Cut
and Price Approach for the Maxcut Problem. Computational Optimization and
Applications, 33(1): 51–71, 2006.
[18] H.R. Lourenço, O. Martin, T. Stützle. Iterated local search. Handbook of Metaheuristics, Springer-Verlag, Berlin Heidelberg, 2003.
[19] R. Martí, A. Duarte, M. Laguna. Advanced Scatter Search for the Max-Cut Problem. INFORMS Journal on Computing, 21(1): 26-38, 2009.
[20] C.H. Papadimitriou, K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Second edition, Dover, 1998.
[21] V.P. Shylo, O.V. Shylo. Solving the maxcut problem by the global equilibrium search. Cybernetics and Systems Analysis, 46(5): 744-754, 2010.
[22] Y. Wang, Z. Lü, F. Glover, J.K. Hao. Probabilistic GRASP-tabu search algorithms for the UBQP problem. Computers and Operations Research, (in press, http://dx.doi.org/10.1016/j.cor.2011.12.006).