A Fast Large Neighborhood Search for
Disjunctively Constrained Knapsack Problems
M. Hifi, S. Saleh, and L. Wu
EPROAD EA 4669, Université de Picardie Jules Verne,
7 rue du Moulin Neuf, 80039 Amiens, France
{mhand.hifi,sagvan.saleh,lei.wu}@u-picardie.fr
Abstract. In this paper a fast large neighborhood search-based heuristic is proposed for solving the Disjunctively Constrained Knapsack Problem (DCKP). The proposed method combines a two-phase procedure and a large neighborhood search. First, a two-phase procedure is applied in order to construct a starting feasible solution of the DCKP. Its first phase serves to determine a feasible solution by combining two complementary problems: the weighted independent set problem and the classical binary knapsack problem. Its second phase uses a descent method that tries to improve the current solution by applying both degrading and re-optimizing strategies. Second, a large neighborhood search is used for diversifying the search space. Finally, the performance of the proposed method is computationally analyzed on a set of benchmark instances from the literature, and its results are compared to those reached by both the Cplex solver and recent algorithms of the literature. Several improved solutions have been obtained within small average runtimes.
Keywords: Heuristic; knapsack; neighborhood; re-optimization.
1 Introduction
In this paper we investigate the use of a large neighborhood search-based heuristic for solving the disjunctively constrained knapsack problem (DCKP). DCKP is characterized by a knapsack of fixed capacity $c$, a set $I$ of $n$ items, and a set $E$ of incompatible couples of items, where $E \subseteq \{(i, j) \in I \times I,\; i < j\}$. Each item $i \in I$ is represented by a nonnegative weight $w_i$ and a profit $p_i$. The goal of the DCKP is to maximize the total profit of the items that can be placed into the knapsack without exceeding its capacity, where all items included in the knapsack must be pairwise compatible. Formally, DCKP can be defined as follows:
$$
(P_{DCKP})\qquad \max \sum_{i \in I} p_i x_i
$$
$$
\text{s.t.}\quad \sum_{i \in I} w_i x_i \le c, \tag{1}
$$
$$
x_i + x_j \le 1,\quad \forall (i, j) \in E, \tag{2}
$$
$$
x_i \in \{0, 1\},\quad \forall i \in I,
$$
where $x_i$, $\forall i \in I$, is equal to 1 if item $i$ is included in the knapsack (solution), and 0 otherwise. Inequality (1) denotes the knapsack constraint with capacity $c$, and inequalities (2) are the disjunctive constraints which ensure that all items belonging to a feasible solution must be pairwise compatible. We can observe that the knapsack polytope is the one obtained by combining inequality (1) with the integrality constraints $x_i \in \{0, 1\}$, $\forall i \in I$, while that of the weighted independent set problem is obtained by associating inequalities (2) with the same integrality constraints. Without loss of generality, we assume that (i) all input data $c, p_i, w_i$, $\forall i \in I$, are strictly positive integers, and (ii) $\sum_{i \in I} w_i > c$, in order to avoid trivial solutions.
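To make the model concrete, the following C++ sketch stores a DCKP instance and checks constraints (1) and (2) for a candidate 0-1 assignment; the struct layout and the names DCKPInstance and isFeasible are illustrative choices of ours, not taken from the authors' implementation.

```cpp
#include <utility>
#include <vector>

// A DCKP instance: capacity c, profits p, weights w, and the set E of
// incompatible item pairs (i, j) with i < j.
struct DCKPInstance {
    long long c;
    std::vector<long long> p, w;
    std::vector<std::pair<int, int>> E;
};

// A candidate solution x (x[i] == 1 iff item i is packed) is feasible iff
// it satisfies the knapsack constraint (1) and every disjunctive
// constraint (2): x_i + x_j <= 1 for all (i, j) in E.
bool isFeasible(const DCKPInstance& inst, const std::vector<int>& x) {
    long long load = 0;
    for (std::size_t i = 0; i < x.size(); ++i)
        if (x[i]) load += inst.w[i];
    if (load > inst.c) return false;            // violates (1)
    for (const auto& [i, j] : inst.E)
        if (x[i] + x[j] > 1) return false;      // violates (2)
    return true;
}
```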
The remainder of the paper is organized as follows. Section 2 reviews some previous works on the DCKP. Section 3 discusses the two-phase procedure that provides a starting solution for $P_{DCKP}$. Section 4 describes the large neighborhood search-based heuristic for the DCKP. Section 5 evaluates the performance of the proposed method on the instances taken from the literature, and analyzes the obtained results. Finally, Section 6 summarizes the contents of the paper.
2 Background
The DCKP is an NP-hard combinatorial optimization problem. It reduces to the maximum weighted independent set problem (Garey and Johnson [1]) when the knapsack capacity constraint is omitted, and to the classical knapsack problem when $E = \emptyset$. It is easy to show that the DCKP is a more complex extension of the multiple-choice knapsack problem, which arises either as a stand-alone problem or as a component of more difficult combinatorial optimization problems. Its induced structure in complex problems allows the computation of upper bounds and the design of heuristic and exact methods for these complex instances. For example, the DCKP was used in a Dantzig-Wolfe decomposition formulation for the two-dimensional bin packing problem (Pisinger and Sigurd [10]). It served as a local optimization subproblem for the pricing problem, which consists in finding a feasible packing of a single bin with the smallest reduced cost. The same problem was also used in Sadykov and Vanderbeck [12] as the pricing problem for solving the bin packing problem with conflicts.
Due to the complexity and hardness of the DCKP, most results on this topic are based on heuristics, although exact methods have also been proposed. Among the papers addressing the resolution of the DCKP, we find the work of Yamada et al. [14, 15], in which the problem was tackled with approximate and exact methods. The approximate heuristic generates an initial feasible solution and improves it using a 2-opt neighborhood search. The exact algorithm starts its search from the solution obtained by the approximate algorithm and undertakes an implicit enumeration combined with an interval reduction technique.
Hifi and Michrafy [3] proposed three exact algorithms in which reduction
strategies, an equivalent model and a dichotomous search cooperate to solve
DCKP. The first algorithm reduces the size of the original problem by starting with a lower bound and successively solving relaxed DCKPs. The second
algorithm combines a reduction strategy with a dichotomous search in order to
accelerate the search process. The third algorithm tackles instances with a large
number of disjunctive constraints using two cooperating equivalent models.
Hifi and Michrafy [4] proposed a three-step reactive local search. The first step of the algorithm starts by determining an initial solution using a greedy procedure. The second step is based on an intensification procedure which removes an item from the solution and inserts other ones; it adopts a memory list that stores swaps and/or a hashing function, thus forbidding cycling. The third step diversifies the search process by accepting to temporarily degrade the quality of the solution in the hope of escaping from local optima.
Pferschy and Schauer [9] presented pseudo-polynomial algorithms for special cases of the disjunctively constrained knapsack problem which are mainly based on a graph representation: trees, graphs with bounded tree-width, and chordal graphs. The authors extended their algorithms to establish fully polynomial time approximation schemes (FPTAS).
Hifi et al. [7] investigated the use of a rounding solution procedure and an effective local branching. The method combines two procedures: (i) a rounding solution procedure and (ii) a restricted exact solution procedure. Hifi and Otmani [5] investigated the use of scatter search for approximately solving the DCKP. The approach tried to exploit some characteristics of two problems in order to tackle the DCKP: the independent set problem and the single knapsack problem. The performance of the approach was evaluated on the same instances as considered in [7], showing that such an approach was able to improve the solution quality of some instances. Hifi and Otmani [6] adapted the same approach as in [5], but considered an equivalent model of the DCKP already proposed by Hifi and Michrafy [4]. The equivalent model was solved by applying a first-level scatter search in which the model was refined by injecting some valid constraints.
Finally, Hifi [2] investigated an iterative rounding search-based algorithm. The method can be viewed as an alternative to both approaches considered in Hifi et al. [5, 7], where three strategies were combined: (i) a variable-fixing technique using the rounding method applied to the linear relaxation of the DCKP, (ii) the injection of successive valid constraints with bounding of the objective function, and (iii) a neighborhood search around the solutions characterizing a series of reduced subproblems. The aforementioned steps are iterated until some stopping criteria are satisfied.
3 A Two-Phase Solution Procedure
This section describes an efficient algorithm for approximately solving the DCKP by combining two procedures. The first procedure is applied for constructing a starting feasible solution, while the second one is used in order to improve the current solution. For the rest of the paper, we assume that all items are ranked in decreasing order of their profits.
3.1 The first phase
The first phase determines a feasible solution of the DCKP by solving two optimization problems:
– A weighted independent set problem (noted $P_{WIS}$), extracted from $P_{DCKP}$, is first solved to determine an independent set solution, noted $IS$.
– A classical binary knapsack problem (noted $P_K$), associated with both $IS$ and the corresponding capacity constraint (i.e., $\sum_{i \in IS} w_i x_i \le c$), is solved in order to provide a feasible solution of $P_{DCKP}$.
Let $S_{IS} = (s_1, \ldots, s_n)$ be a feasible solution of $P_{WIS}$, where $s_i$ is the binary value assigned to $x_i$, $\forall i \in I$. Let $IS \subseteq I$ be the restricted set of items denoting the items of $S_{IS}$ whose values are fixed to 1. Then, the linear programs referring to both $P_{WIS}$ and $P_K$ may be defined as follows:
$$
(P_{WIS})\qquad \max \sum_{i \in I} p_i x_i \quad \text{s.t.}\; x_i + x_j \le 1,\ \forall (i, j) \in E;\quad x_i \in \{0, 1\},\ \forall i \in I,
$$
$$
(P_K)\qquad \max \sum_{i \in IS} p_i x_i \quad \text{s.t.}\; \sum_{i \in IS} w_i x_i \le c;\quad x_i \in \{0, 1\},\ \forall i \in IS.
$$
On the one hand, we can observe that the solution domain of $P_{WIS}$ includes the solution domain of $P_{DCKP}$. On the other hand, an optimal solution of $P_{WIS}$ is not necessarily an optimal solution of $P_{DCKP}$. Therefore, in order to quickly build a solution $IS$, the following procedure is applied.
Algorithm 1: An independent set as a solution of $P_{WIS}$
Input: An instance $I$ of $P_{DCKP}$.
Output: A feasible solution (independent set) $IS$ for $P_{WIS}$.
1: Initialization: Set $IS = \emptyset$ and $I = \{1, \ldots, n\}$.
2: while $I \ne \emptyset$ do
3:   Let $i = \arg\max\{p_i \mid p_i \ge p_k,\ k \in I\}$.
4:   Set $IS = IS \cup \{i\}$.
5:   Remove $i$ and all items $j$ such that $(i, j) \in E$ from $I$.
6: end while
7: return $IS$ as a feasible solution of $P_{WIS}$.
Specifically, Algorithm 1 starts by initializing $IS$ to an empty set (a feasible solution of $P_{WIS}$). It then iteratively selects the item realizing the greatest profit and drops the selected item $i$, together with its incompatible items, from $I$. The process is iterated until no item can be added to the current solution $IS$. In this case, the algorithm stops and exits with a feasible solution $IS$ for $P_{WIS}$.
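A compact C++ rendering of Algorithm 1 could look as follows; it assumes the incompatibility relation $E$ is stored as an adjacency list (conflicts), a representation we choose here for convenience rather than one prescribed by the paper.

```cpp
#include <set>
#include <vector>

// Greedy independent set (Algorithm 1): repeatedly pick the remaining item
// with the largest profit, then discard it and all items conflicting with
// it. conflicts[i] lists the items j with (i, j) or (j, i) in E.
std::vector<int> greedyIndependentSet(
        const std::vector<long long>& p,
        const std::vector<std::vector<int>>& conflicts) {
    std::set<int> remaining;
    for (int i = 0; i < (int)p.size(); ++i) remaining.insert(i);

    std::vector<int> IS;
    while (!remaining.empty()) {
        // argmax of profit over the remaining items
        int best = *remaining.begin();
        for (int i : remaining)
            if (p[i] > p[best]) best = i;

        IS.push_back(best);
        remaining.erase(best);
        for (int j : conflicts[best]) remaining.erase(j);  // drop incompatibles
    }
    return IS;
}
```

If the items are pre-sorted by decreasing profit, as assumed at the beginning of Section 3, the inner argmax scan can be replaced by simply taking the first remaining item.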
As mentioned above, $IS$ may violate the capacity constraint of $P_{DCKP}$. Then, in order to provide a feasible solution for $P_{DCKP}$, the knapsack problem
Algorithm 2: A starting DCKP feasible solution
Input: $IS$, an independent set of $P_{WIS}$, and $I$, an instance of $P_{DCKP}$.
Output: $S_{DCKP}$, a DCKP feasible solution.
1: Initialization: Let $P_K$ be the resulting knapsack problem constructed according to the items belonging to $IS$.
2: if $S_{IS}$ satisfies the capacity constraint (1) of $P_{DCKP}$ then
3:   Set $S_{DCKP} = S_{IS}$;
4: else
5:   Let $S_{DCKP}$ be the resulting solution of $P_K$.
6: end if
7: return $S_{DCKP}$ as a starting feasible solution of $P_{DCKP}$.
$P_K$ is solved. Herein, $P_K$ is solved using the exact solver of Martello et al. [8]. Algorithm 2 describes the main steps used for determining a feasible solution of $P_{DCKP}$.
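Since the benchmark capacities are small ($c \le 2000$), the restricted knapsack $P_K$ can also be solved by a textbook dynamic program over the capacity, as in the sketch below; this is merely a stand-in for the exact solver of Martello et al. [8] used by the authors, and the function name solveKnapsackDP is ours.

```cpp
#include <vector>

// Solve the binary knapsack P_K restricted to the items of IS by dynamic
// programming over the capacity -- a simple stand-in for the exact solver
// of Martello et al. [8]. Returns the subset of IS that is kept.
std::vector<int> solveKnapsackDP(const std::vector<int>& IS,
                                 const std::vector<long long>& p,
                                 const std::vector<long long>& w,
                                 long long c) {
    // best[j] = best profit reachable with total weight <= j
    std::vector<long long> best(c + 1, 0);
    // take[k][j] = 1 iff item IS[k] improved state j at its stage
    std::vector<std::vector<char>> take(IS.size(),
                                        std::vector<char>(c + 1, 0));
    for (std::size_t k = 0; k < IS.size(); ++k) {
        int i = IS[k];
        for (long long j = c; j >= w[i]; --j)       // reverse loop: 0/1 items
            if (best[j - w[i]] + p[i] > best[j]) {
                best[j] = best[j - w[i]] + p[i];
                take[k][j] = 1;
            }
    }
    // Backtrack from the last stage to recover the chosen items.
    std::vector<int> kept;
    long long j = c;
    for (int k = (int)IS.size() - 1; k >= 0; --k)
        if (take[k][j]) { kept.push_back(IS[k]); j -= w[IS[k]]; }
    return kept;
}
```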
3.2 The second phase
In order to improve the quality of the solution obtained from the first phase (i.e., the starting solution $S_{DCKP}$ returned by Algorithm 2), a local search is performed. The local search used can be considered as a descent method that tends to improve a solution by alternately calling two procedures: a degrading and a re-optimizing procedure. The degrading procedure serves to build a $k$-neighborhood of $S_{DCKP}$ by dropping $k$ fixed items from $S_{DCKP}$, while the re-optimizing procedure tries to determine an optimal solution over the current neighborhood. The descent procedure is stopped when no better solution can be reached.
Algorithm 3: A descent method
Input: $S_{DCKP}$, a starting solution of the DCKP.
Output: $S^\star_{DCKP}$, a local optimal solution of the DCKP.
1: Set $S^\star_{DCKP}$ as an initial feasible solution, where all variables are fixed to 0.
2: while $S_{DCKP}$ is better than $S^\star_{DCKP}$ do
3:   Update $S^\star_{DCKP}$ with $S_{DCKP}$.
4:   Set $\alpha|I|$ fixed variables of $S_{DCKP}$ as free.
5:   Define the corresponding neighborhood of $S_{DCKP}$.
6:   Determine the optimal solution in the current neighborhood and update $S_{DCKP}$.
7: end while
8: return $S^\star_{DCKP}$.
Algorithm 3 shows how an improved solution can be computed by using a descent method. Indeed, let $S_{DCKP}$ be the current feasible solution obtained in the first phase. Let $\alpha$ be a constant belonging to the interval $[0, 100]$, denoting the percentage of unassigned decision variables, i.e., some variables are
set free according to the current solution $S_{DCKP}$. The core of the descent method is represented by the main loop (cf. lines 2–7). At line 3, the best solution found so far, $S^\star_{DCKP}$, is updated with the solution $S_{DCKP}$ returned at the last iteration. Line 4 determines the $\alpha|I|$ unassigned variables regarding the current solution $S_{DCKP}$, where items with the highest degree (i.e., items with the largest neighborhood) are favored. Let $i$ be an item realizing the highest degree; that is, an item $i$ whose variable $x_i$ is fixed to 1 in $S_{DCKP}$. Then $x_i$ is set as a free variable, together with its incompatible variables $x_j$ such that $(i, j) \in E$ and $(j, k) \notin E$, where $k \ne i$ corresponds to the index of any variable whose value is equal to 1 in $S_{DCKP}$. At line 6, $S_{DCKP}$ is replaced by the best solution found in the current neighborhood. Finally, the process is iterated until no better solution can be reached (in this case, Algorithm 3 exits with the best solution $S^\star_{DCKP}$).
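The following C++ fragment sketches one possible reading of the degrading step (line 4 of Algorithm 3): it frees roughly $\alpha|I|$ variables, favoring packed items of highest degree, and also frees those conflicting items that are compatible with every other packed item. The function name degrade, the fraction-valued alpha, and the tie-breaking details are our assumptions, not the authors' code.

```cpp
#include <algorithm>
#include <vector>

// Degrading step (our reading of line 4 of Algorithm 3): release about
// alpha * |I| variables, favoring packed items of highest degree. Freeing a
// packed item i also frees its conflicting items j, provided j conflicts
// with no other packed item, so the re-optimization may swap them in.
// alpha is given as a fraction (e.g. 0.05 for the paper's 5%).
std::vector<int> degrade(const std::vector<int>& x,   // current 0/1 solution
                         const std::vector<std::vector<int>>& conflicts,
                         double alpha) {
    const int n = (int)x.size();
    const std::size_t target = (std::size_t)(alpha * n);

    // Packed items, sorted by decreasing degree.
    std::vector<int> packed;
    for (int i = 0; i < n; ++i)
        if (x[i]) packed.push_back(i);
    std::sort(packed.begin(), packed.end(), [&](int a, int b) {
        return conflicts[a].size() > conflicts[b].size();
    });

    std::vector<char> freed(n, 0);
    std::vector<int> result;
    for (int i : packed) {
        if (result.size() >= target) break;
        freed[i] = 1; result.push_back(i);
        for (int j : conflicts[i]) {
            if (freed[j]) continue;
            bool compatible = true;      // with all other still-packed items
            for (int k : conflicts[j])
                if (x[k] && k != i && !freed[k]) { compatible = false; break; }
            if (compatible) { freed[j] = 1; result.push_back(j); }
        }
    }
    return result;
}
```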
Algorithm 4: Remove $\beta|I|$ variables of $S_{DCKP}$
Input: $S_{DCKP}$, a starting solution of $P_{DCKP}$.
Output: An independent set $IS$ and a reduced instance $I_r$ of $P_{DCKP}$.
1: Set $counter = 0$, $I_r = \emptyset$, and $IS$ to the set of items whose decision variables are fixed to 1 in $S_{DCKP}$.
2: Sort the items of $IS$ in non-decreasing order of their profit-per-weight ratio.
3: while $counter < \beta|I|$ do
4:   Let $r$ be a real number randomly generated in the interval $[0, 1]$ and $i = \lfloor |IS| \times r^\gamma \rfloor$.
5:   Set $IS = IS \setminus \{i\}$, $I_r = I_r \cup \{i\}$, and increment $counter = counter + 1$.
6:   for all items $j$ such that $(i, j) \in E$ do
7:     if item $j$ is compatible with all items belonging to $IS$ then
8:       Set $I_r = I_r \cup \{j\}$ and $counter = counter + 1$.
9:     end if
10:   end for
11: end while
12: return $IS$ and $I_r$.
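Line 4 of Algorithm 4 is the classical biased random selection of Shaw [13]: a uniform draw $r$ raised to the power $\gamma$ concentrates the choice near the front of the list sorted by non-decreasing profit-per-weight ratio, i.e., on the least efficient packed items. A minimal C++ sketch, with an illustrative function name biasedIndex, is given below; it assumes the set is non-empty.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Biased removal position as in line 4 of Algorithm 4: draw r uniformly in
// [0,1) and map it to position floor(setSize * r^gamma) in the list sorted
// by non-decreasing profit-per-weight ratio. A large gamma (the paper uses
// gamma = 20, following Shaw [13]) strongly biases the choice towards the
// front of the list. Assumes setSize >= 1.
int biasedIndex(std::size_t setSize, double gamma, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double r = uni(rng);
    int pos = (int)(setSize * std::pow(r, gamma));
    return std::min(pos, (int)setSize - 1);   // guard the boundary r ~ 1
}
```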
Note that, on the one hand, the runtime of Algorithm 3 may increase when $\alpha$ tends to 100, since freeing a large percentage of items means that the reduced DCKP is close to the original one. On the other hand, since Algorithm 3 is called at each iteration of the large neighborhood search (cf. Section 4), a large reduced DCKP can slow the large neighborhood search down. Therefore, we favor a fast algorithm which is able to converge towards a good local optimum. This is why our choice is oriented towards moderate values of $\alpha$, as shown in the experimental part (cf. Section 5).
4 A Large Neighborhood Search
LNS is a heuristic that has proven to be effective on wide range of combinatorial
optimization problems. A simplest version of LNS has been presented in Shaw
[13] for solving the vehicle routing problem (cf., also Pisinger [11]). LNS is based
on the concepts of building and exploring a neighborhood; that is, a neighborhood defined implicitly by a destroy and a repair procedure. Unlike descent methods, which may stagnate in local optima, using large neighborhoods makes it possible to reach better solutions and to explore a more promising search space. For instance, the descent method discussed in Section 3.2 (cf. Algorithm 3) may explore some regions and stagnate in a local optimum because both the degrading and the re-optimizing steps consider a single criterion. In order to increase the chance of reaching a series of improved solutions, and to escape from a series of local optima, a random destroying strategy, which depends on the profit-per-weight values of the items, is applied. Algorithm 5 summarizes the main steps of the LNS (noted LNSBH), which uses Algorithm 4 for determining the neighborhood of a given solution.
Algorithm 5: A large neighborhood search-based heuristic
Input: $S_{DCKP}$, a starting solution of $P_{DCKP}$.
Output: $S^\star_{DCKP}$, a local optimum of $P_{DCKP}$.
1: Set $S^\star_{DCKP}$ as a starting feasible solution, where all variables are assigned to 0.
2: while the time limit is not reached do
3:   Call Algorithm 4 in order to find $IS$ and $I_r$ according to $S_{DCKP}$.
4:   Call Algorithm 1 with argument $I_r$ to complete $IS$.
5:   Call Algorithm 2 with argument $IS$ to reach a new solution $S_{DCKP}$.
6:   Improve $S_{DCKP}$ by applying Algorithm 3.
7:   Update $S^\star_{DCKP}$ with the best solution.
8: end while
9: return $S^\star_{DCKP}$.
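A skeleton of the LNSBH main loop might be organized as follows in C++; the four building blocks are injected as callables standing for Algorithms 4, 1, 2 and 3, and the Solution type and all signatures are our own sketch rather than the authors' code.

```cpp
#include <chrono>
#include <functional>
#include <vector>

struct Solution { std::vector<int> x; long long profit = 0; };

// Outline of the LNSBH main loop (Algorithm 5). The four callables stand
// for the paper's building blocks -- destroy (Algorithm 4), complete
// (Algorithm 1 on the reduced instance), repair (Algorithm 2) and descent
// (Algorithm 3) -- and are injected so the skeleton compiles on its own.
Solution lnsbh(Solution s, double timeLimitSeconds,
               const std::function<Solution(const Solution&)>& destroy,
               const std::function<Solution(const Solution&)>& complete,
               const std::function<Solution(const Solution&)>& repair,
               const std::function<Solution(const Solution&)>& descent) {
    Solution best;                                   // all variables at 0
    const auto start = std::chrono::steady_clock::now();
    auto elapsed = [&] {
        return std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - start).count();
    };
    while (elapsed() < timeLimitSeconds) {           // line 2 of Algorithm 5
        s = descent(repair(complete(destroy(s))));   // lines 3-6
        if (s.profit > best.profit) best = s;        // line 7: incumbent
    }
    return best;
}
```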
5 Computational Results
This section evaluates the effectiveness of the proposed Large Neighborhood Search-Based Heuristic (LNSBH) on two groups of instances (taken from the literature [4] and generated following the scheme used by Yamada et al. [14, 15]). The first group contains twenty medium instances with $n = 500$ items, a capacity $c = 1800$, and different densities (assessed in terms of the number of disjunctive constraints). The second group contains thirty large instances, where each instance contains 1000 items, with $c$ taken in the discrete interval $\{1800, 2000\}$ and with various densities. The proposed LNSBH was coded in C++ and run on an Intel Pentium Core i5-2500 at 3.3 GHz.

LNSBH uses several parameters: the percentage $\alpha$ of items dropped in the descent method, the percentage $\beta$ of items removed from the solution when the large neighborhood search is applied, the constant $\gamma$ used by Algorithm 4, and the fixed runtime limit $t$ used for stopping the resolution.
5.1 Effect of both degrading and re-optimizing procedures
This section evaluates the effect of the descent method based upon the degrading and re-optimizing procedures (as used in Algorithm 3) on the starting solution provided by Algorithm 2. We recall that the re-optimization procedure tries to solve a reduced $P_{DCKP}$, which is a problem of small size. So, in order to balance the quality of the complementary solution against the runtime, i.e., to maintain a fast resolution, we solve it using the Cplex solver v12.4.
Table 1 shows the variation of Av. Sol., the average value of the solutions provided by the considered algorithm over all treated instances, and Av. time, the average runtime needed by each algorithm for reaching these results. Since the choice of the value of $\alpha$ may influence the behavior of the descent method, and thus the performance of the algorithm, its effect is assessed by varying $\alpha$ in the discrete interval $\{5, 10, 15, 20\}$.

            Algo. 1-2    The descent method
                         α = 5%   α = 10%  α = 15%  α = 20%
  Av. Sol.  2014.62      2129.98  2188.80  2232.36  2217.04
  Av. time  ≈0.001       0.15     2.03     19.72    118.30

Table 1. Effect of the descent method on the starting DCKP solution.

From Table 1, we can observe that the best average solution value is realized for $\alpha = 15\%$, but it needs an average runtime of 19.72 seconds. Note that LNSBH's runtime depends on the descent method's runtime. Therefore, according to the results displayed in Table 1, one can observe that the value of 5% favors a quick resolution (0.15 seconds) with an interesting average solution value of 2129.98. Since the goal herein is to propose a fast LNSBH, we retained the version of Algorithm 5 with $\alpha = 5\%$.
5.2 Behavior of LNSBH on both groups of instances
Remark that, according to the results shown in Shaw [13], the LNS works reasonably well when $\gamma$ varies over the integer interval $[5, 20]$. In our tests, we set $\gamma = 20$ for Algorithm 4. Therefore, in order to evaluate the performance of LNSBH, we focus on the two parameters $\beta$ and $t$ used in Algorithm 5. The study is conducted by varying the value of $\beta$ in the discrete interval $\{10, 15, 20, 25, 30\}$ and $t$ in $\{25, 50, 100, 150, 200\}$ (seconds). Table 2 displays the average solution values realized by LNSBH using the different values of $(\beta, t)$.
From Table 2, we observe what follows:
– Setting $\beta = 10\%$ provides the best average solution value. Further, the solution quality increases when the runtime limit is extended.
  t \ β   10%       15%       20%       25%       30%
  25      2395.38   2394.48   2392.5    2391.14   2389.06
  50      2397.52   2397.44   2394.74   2393.34   2391.18
  100     2399.16   2398.98   2396.36   2394.2    2393.9
  150     2399.28   2399      2397.82   2395.62   2394.58
  200     2400.22   2399.62   2398.76   2396.8    2395.74

Table 2. The quality of the average values when varying the values of the couple (β, t).
– All other values of $\beta$ induce smaller average values than those of $\beta = 10\%$ within 200 seconds.
Therefore, according to the results displayed in Table 2, the objective values of the solutions determined by setting $(\beta, t) = (10\%, 100)$ and $(10\%, 200)$ are displayed in Table 3. In our computational study, ten random trials of LNSBH are performed on the fifty literature instances, and each trial is stopped after 100 and 200 seconds, respectively.
Table 3 shows the objective values reached by LNSBH and Cplex compared to the best solutions of the literature (taken from Hifi et al. [2, 6]). Column 1 of Table 3 displays the instance label, column 2 reports the value of the best solution (denoted VCplex) reached by Cplex v12.4 after one hour of runtime, and column 3 displays the best known solution of the literature, denoted VIRS. Finally, column 4 (resp. 5) reports Max. Sol. (resp. Av. Sol.), denoting the maximum (resp. average) solution value obtained by LNSBH over the ten trials for the first runtime limit of 100 seconds, and columns 6 and 7 report the same values of LNSBH for the second runtime limit of 200 seconds.
From Table 3, we observe what follows:
1. First, we can observe the inferiority of the Cplex solver, since it realizes an average value of 2317.88 compared to the average value of the best known solutions of the literature (2390.40). In this case, Cplex matches the best known solution on 5 instances out of 50, representing 10% of the best solutions of the literature.
2. Second, for both runtime limits (100 and 200 seconds, respectively), LNSBH realizes better average values than the average value of the best solutions of the literature. Indeed, LNSBH realizes an average value of 2393.63 (resp. 2395.94) with the first (resp. second) runtime limit.
3. Third, over all trials and with the first runtime limit of 100 seconds, LNSBH is able to reach 30 new solutions; it matches 14 instances and fails on 6 occasions. On the other hand, running LNSBH with the second runtime limit of 200 seconds increases the percentage of new solutions. Indeed, in this case, LNSBH realizes 33 new solutions, matches 13 solutions, and fails on 4 occasions.
4. Fourth and last, LNSBH with the second runtime limit (200 seconds) is able to improve 10 solutions compared to those reached by LNSBH with the first runtime limit of 100 seconds.
                               LNSBH
                      β = 10% and t = 100      β = 10% and t = 200
Instance  VCplex  VIRS  Max. Sol.  Av. Sol.     Max. Sol.  Av. Sol.
1I1 2567 2567 2567 2564.2 2567 2564.6
1I2 2594 2594 2594 2594 2594 2594
1I3 2320 2320 2320 2319 2320 2319
1I4 2298 2303 2310 2310 2310 2310
1I5 2310 2310 2330 2328 2330 2329
2I1 2080 2100 2117 2116.1 2118 2117
2I2 2070 2110 2110 2110 2110 2110
2I3 2098 2128 2119 2110.2 2132 2118.1
2I4 2070 2107 2109 2106.9 2109 2108.2
2I5 2090 2103 2110 2109.7 2114 2111.2
3I1 1667 1840 1845 1788.4 1845 1814
3I2 1681 1785 1779 1759.9 1779 1769.2
3I3 1461 1742 1774 1759.3 1774 1762.9
3I4 1567 1792 1792 1792 1792 1792
3I5 1563 1772 1775 1751.6 1775 1759.2
4I1 1053 1321 1330 1330 1330 1330
4I2 1199 1378 1378 1378 1378 1378
4I3 1212 1374 1374 1374 1374 1374
4I4 1066 1353 1353 1352.7 1353 1353
4I5 1229 1354 1354 1336.4 1354 1336.4
5I1 2680 2690 2690 2684 2690 2686
5I2 2690 2690 2690 2683.9 2690 2685.9
5I3 2670 2689 2680 2675.7 2690 2679.7
5I4 2680 2690 2698 2683.2 2698 2689.2
5I5 2660 2680 2670 2668 2670 2669.9
6I1 2820 2840 2850 2850 2850 2850
6I2 2800 2820 2830 2823.9 2830 2827.7
6I3 2790 2820 2830 2819.9 2830 2821.9
6I4 2790 2800 2820 2817 2822 2820.2
6I5 2800 2810 2830 2823.7 2830 2825.6
7I1 2700 2750 2780 2771.9 2780 2773
7I2 2720 2750 2770 2769 2770 2770
7I3 2718 2747 2760 2759 2760 2760
7I4 2728 2773 2800 2791 2800 2793
7I5 2730 2757 2770 2763 2770 2765
8I1 2638 2720 2720 2719.1 2720 2719.1
8I2 2659 2709 2720 2719 2720 2720
8I3 2664 2730 2740 2733 2740 2734
8I4 2620 2710 2710 2708.7 2719 2709.9
8I5 2644 2710 2710 2709 2710 2710
9I1 2589 2650 2676 2670.9 2677 2671.3
9I2 2580 2640 2665 2661.5 2665 2663
9I3 2580 2635 2670 2665.8 2670 2668.6
9I4 2540 2630 2660 2659.8 2660 2659.9
9I5 2594 2630 2669 2663.5 2670 2664.9
10I1 2500 2610 2620 2616.7 2620 2619.7
10I2 2549 2642 2630 2627.5 2630 2629.9
10I3 2527 2618 2620 2617.1 2627 2620.5
10I4 2509 2621 2620 2617 2620 2618.6
10I5 2530 2606 2620 2619.3 2625 2620.5
Av. Sol. 2317.88 2390.40 2399.16 2393.63 2400.22 2395.94
Table 3. Performance of LNSBH vs Cplex and IRS on the benchmark instances of the
literature.
6 Conclusion
In this paper, we proposed a fast large neighborhood search-based heuristic for solving the disjunctively constrained knapsack problem. The proposed method combines a two-phase procedure and a large neighborhood search. First, the two-phase procedure serves to determine an initial feasible solution by combining the resolution of two complementary combinatorial optimization problems: the weighted independent set problem and the classical binary knapsack problem. Second, a descent method, based upon degrading and re-optimization strategies, is applied in order to improve the quality of the solutions. Third, a large neighborhood search is used for diversifying the search space. Finally, the computational results showed that the proposed algorithm performs better than the Cplex solver and yields high-quality solutions, improving several best known solutions of the literature.
References
1. M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the
Theory of NP-completeness, W.H. Freeman and Comp., San Francisco, 1979.
2. M. Hifi. An iterative rounding search-based algorithm for the disjunctively con-
strained knapsack problem. Engineering Optimization, to appear (2012).
3. M. Hifi and M. Michrafy. Reduction strategies and exact algorithms for the dis-
junctively constrained knapsack problem. Computers and Operations Research 34:
2657–2673, 2007.
4. M. Hifi and M. Michrafy. A reactive local search algorithm for the disjunctively
constrained knapsack problem. Journal of the Operational Research Society 57:
718–726, 2006.
5. M. Hifi and N. Otmani. An algorithm for the disjunctively constrained knapsack problem. International Journal of Operational Research, 13: 22–43, 2012.
6. M. Hifi and N. Otmani. An algorithm for the disjunctively constrained knapsack
problem, IEEE - International Conference on Communications, Computing and
Control Applications, pp. 1-6, 2011.
7. M. Hifi, S. Negre and M. Ould Ahmed Mounir. Local branching-based algorithm
for the disjunctively constrained knapsack problem, IEEE, Proceedings of the In-
ternational Conference on Computers & Industrial Engineering, pp. 279–284, 2009.
8. S. Martello, D. Pisinger and P. Toth. Dynamic programming and strong bounds for
the 0-1 knapsack problem, Management Science, Vol. 45, pp. 414-424, 1999.
9. U. Pferschy and J. Schauer. The knapsack problem with conflict graphs, Journal of
Graph Algorithms and Applications, 13: 233–249, 2009.
10. D. Pisinger and M. Sigurd. Using decomposition techniques and constraint program-
ming for solving the two-dimensional bin-packing problem. INFORMS Journal on
Computing 19: 36–51, 2007.
11. D. Pisinger and S. Ropke. Large Neighborhood Search, Handbook of Metaheuristics,
International Series in Operations Research & Management Science Volume 146,
399-419, 2010.
12. R. Sadykov and F. Vanderbeck. Bin packing with conflicts: a generic branch-and-price algorithm. INFORMS Journal on Computing (published online May 4, 2012, doi: 10.1287/ijoc.1120.0499).
13. P. Shaw. Using constraint programming and local search methods to solve vehicle routing problems. In: CP-98 (Fourth International Conference on Principles and Practice of Constraint Programming), Lect. Notes Comput. Sci., 1520, 417–431, 1998.
14. T. Yamada, S. Kataoka and K. Watanabe. Heuristic and exact algorithms for
the disjunctively constrained knapsack problem, Information Processing Society of
Japan Journal, 43: 2864–2870, 2002.
15. T. Yamada and S. Kataoka. Heuristic and exact algorithms for the disjunctively
constrained knapsack problem. EURO 2001, Rotterdam, The Netherlands, July 9–
11, 2001.