A Fast Large Neighborhood Search for
Disjunctively Constrained Knapsack Problems
M. Hifi, S. Saleh, and L. Wu
EPROAD EA 4669, Université de Picardie Jules Verne,
7 rue du Moulin Neuf, 80039 Amiens, France
{mhand.hifi,sagvan.saleh,lei.wu}@u-picardie.fr
Abstract. In this paper a fast large neighborhood search-based heuris-
tic is proposed for solving the Disjunctively Constrained Knapsack Prob-
lem (DCKP). The proposed method combines a two-phase procedure and
a large neighborhood search. First, a two-phase procedure is applied in
order to construct a starting feasible solution of the DCKP. Its first phase
serves to determine a feasible solution by combining two complementary
problems: the weighted independent set problem and the classical binary
knapsack problem. Its second phase uses a descent method trying to im-
prove the current solution by applying both degrading and re-optimizing
strategies. Second, a large neighborhood search is used for diversifying
the search space. Finally, the performance of the proposed method is
computationally analyzed on a set of benchmark instances from the literature,
and its results are compared to those reached by both the Cplex solver
and recent algorithms of the literature. Several improved solutions
have been obtained within small average runtimes.
Keywords: Heuristic; knapsack; neighborhood; re-optimization.
1 Introduction
In this paper we investigate the use of a large neighborhood search-based
heuristic for solving the disjunctively constrained knapsack problem (DCKP).
DCKP is characterized by a knapsack of fixed capacity c, a set I of n items, and
a set E of incompatible couples of items, where E ⊆ {(i, j) ∈ I × I : i < j}.
Each item i ∈ I is represented by a nonnegative weight w_i and a profit p_i. The
goal of the DCKP is to maximize the total profit of items that can be placed
into the knapsack without exceeding its capacity, where all items included in the
knapsack must be pairwise compatible. Formally, DCKP can be defined as follows:
(P_DCKP)   max  Σ_{i ∈ I} p_i x_i
           s.t. Σ_{i ∈ I} w_i x_i ≤ c                  (1)
                x_i + x_j ≤ 1,   ∀ (i, j) ∈ E          (2)
                x_i ∈ {0, 1},    ∀ i ∈ I,
where x_i, i ∈ I, is equal to 1 if item i is included in the knapsack (solution),
and 0 otherwise. Inequality (1) denotes the knapsack constraint with capacity c,
and inequalities (2) are the disjunctive constraints, which ensure that all items
belonging to a feasible solution are pairwise compatible. We can observe that the
knapsack polytope is the one obtained by combining inequality (1) with the
integrality constraints x_i ∈ {0, 1}, i ∈ I, and that of the weighted independent
set problem is obtained by associating inequalities (2) with the same integrality
constraints. Without loss of generality, we assume that (i) all input data
c, p_i, w_i, i ∈ I, are strictly positive integers and (ii) Σ_{i ∈ I} w_i > c,
so as to avoid trivial solutions.
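As a toy illustration (ours, not taken from the paper): consider n = 4 items with
profits p = (10, 8, 6, 4), weights w = (5, 4, 3, 2), capacity c = 9 and
E = {(1, 2)}, so items 1 and 2 are incompatible. The set {2, 3, 4} is feasible
(weight 4 + 3 + 2 = 9 ≤ c and no pair of its items belongs to E) with profit 18,
whereas {1, 3, 4} respects E but violates the capacity (5 + 3 + 2 = 10 > 9); here
{2, 3, 4} is optimal.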
The remainder of the paper is organized as follows. Section 2 reviews some
previous works on the DCKP. Section 3 discusses the two-phase procedure that
provides a starting solution for P_DCKP. Section 4 describes the large
neighborhood search-based heuristic for the DCKP. Section 5 evaluates the performance
of the proposed method on the instances taken from the literature, and analyzes
the obtained results. Finally, Section 6 summarizes the contents of the paper.
2 Background
The DCKP is an NP-hard combinatorial optimization problem. It reduces to
the maximum weighted independent set problem (Garey and Johnson [1]) when
the knapsack capacity constraint is omitted, and to the classical knapsack problem
when E = ∅. The DCKP can be viewed as a complex extension of the
multiple choice knapsack problem, and it arises either as a stand-alone problem
or as a component of more difficult combinatorial optimization problems. Its
induced structure in complex problems allows the computation of upper bounds
and the design of heuristic and exact methods for these complex instances. For
example, the DCKP was used in the Dantzig-Wolfe decomposition formulation of the
two-dimensional bin packing problem (Pisinger and Sigurd [10]), where it served as
a local optimization subproblem for the pricing problem, which consists in finding
a feasible packing of a single bin with the smallest reduced cost. The same
problem was also used by Sadykov and Vanderbeck [12] as the pricing problem for
solving the bin packing problem with conflicts.
Due to the complexity and hardness of the DCKP, most results on this topic
are based on heuristics, although exact methods have been proposed. Among the
papers addressing the DCKP, Yamada et al. [14, 15] tackled the problem with
approximate and exact methods. The approximate heuristic generates an initial
feasible solution and improves it using a 2-opt neighborhood search. The exact
algorithm starts its search from the solution obtained by the approximate
algorithm and undertakes an implicit enumeration combined with an interval
reduction technique.
Hifi and Michrafy [3] proposed three exact algorithms in which reduction
strategies, an equivalent model and a dichotomous search cooperate to solve
DCKP. The first algorithm reduces the size of the original problem by start-
ing with a lower bound and successively solving relaxed DCKPs. The second
algorithm combines a reduction strategy with a dichotomous search in order to
accelerate the search process. The third algorithm tackles instances with a large
number of disjunctive constraints using two cooperating equivalent models.
Hifi and Michrafy [4] proposed a three-step reactive local search. The first
step of the algorithm determines an initial solution using a greedy procedure.
The second step is an intensification procedure which removes an item from the
solution and inserts other ones; it adopts a memory list that stores swaps and/or
a hashing function, thus forbidding cycling. The third step diversifies the search
process by accepting to temporarily degrade the quality of the solution in the
hope of escaping local optima.
Pferschy and Schauer [9] presented pseudo-polynomial algorithms for special
cases of the disjunctively constrained knapsack problem defined on particular
conflict graphs: trees, graphs with bounded tree-width and chordal graphs. The
authors also extended their algorithms to fully polynomial time approximation
schemes (FPTAS).
Hifi et al. [7] investigated the use of a rounding solution procedure and an
effective local branching. The method combines two procedures: (i) a rounding
solution procedure and (ii) a restricted exact solution procedure. Hifi and
Otmani [5] investigated the use of scatter search for approximately solving the
DCKP. The approach exploits characteristics of two problems in order to tackle
the DCKP: the independent set problem and the single knapsack problem. The
performance of the approach was evaluated on the same instances as considered
in [7], and it was able to improve the solution quality on some instances. Hifi
and Otmani [6] adapted the same approach as in [5], but considered an equivalent
model of the DCKP already proposed by Hifi and Michrafy [4]. The equivalent model
was solved by applying a first-level scatter search in which the model was
refined by injecting some valid constraints.
Finally, Hifi [2] investigated an iterative rounding search-based algorithm.
The method can be viewed as an alternative to the approaches considered in
Hifi et al. [5, 7], where three strategies were combined: (i) a variable-fixing
technique using the rounding method applied to the linear relaxation of the DCKP,
(ii) the injection of successive valid constraints combined with bounding of the
objective function, and (iii) a neighborhood search around solutions
characterizing a series of reduced subproblems. The aforementioned steps are
iterated until some stopping criteria are satisfied.
3 A Two-Phase Solution Procedure
This section describes an efficient algorithm for approximately solving the DCKP
by combining two complementary procedures. The first procedure is applied for
constructing a starting feasible solution, while the second one is used in order
to improve the current solution. For the rest of the paper, we assume that all
items are ranked in decreasing order of their profits.
3.1 The first phase
The first phase determines a feasible solution of the DCKP by solving two
optimization problems:
– A weighted independent set problem (noted P_WIS), extracted from P_DCKP,
  is first solved to determine an independent set solution, noted IS.
– A classical binary knapsack problem (noted P_K), associated with both IS and
  the corresponding capacity constraint (i.e., Σ_{i ∈ IS} w_i x_i ≤ c), is solved
  in order to provide a feasible solution of P_DCKP.
Let S_IS = (s_1, ..., s_n) be a feasible solution of P_WIS, where s_i is the
binary value assigned to x_i, i ∈ I. Let IS ⊆ I be the restricted set of items of
S_IS whose values are fixed to 1. Then, the linear programs referring to both
P_WIS and P_K may be defined as follows:
(P_WIS)    max  Σ_{i ∈ I} p_i x_i
           s.t. x_i + x_j ≤ 1,   ∀ (i, j) ∈ E
                x_i ∈ {0, 1},    ∀ i ∈ I,

(P_K)      max  Σ_{i ∈ IS} p_i x_i
           s.t. Σ_{i ∈ IS} w_i x_i ≤ c,
                x_i ∈ {0, 1},    ∀ i ∈ IS.
On the one hand, we can observe that the solution domain of P_WIS includes
the solution domain of P_DCKP. On the other hand, an optimal solution of P_WIS
is not necessarily an optimal solution of P_DCKP. Therefore, in order to quickly
compute a solution IS, the following procedure is applied.
Algorithm 1: An independent set as a solution of P_WIS
Input: An instance I of P_DCKP.
Output: A feasible solution (independent set) IS for P_WIS.
1: Initialization: Set IS = ∅ and I = {1, ..., n}.
2: while I ≠ ∅ do
3:   Let i = argmax{p_i | p_i ≥ p_k, ∀ k ∈ I}.
4:   Set IS = IS ∪ {i}.
5:   Remove i and all items j such that (i, j) ∈ E from I.
6: end while
7: return IS as a feasible solution of P_WIS.
Specifically, Algorithm 1 starts by initializing IS to an empty set (a feasible
solution of P_WIS). It then iteratively selects the item realizing the greatest
profit and drops the selected item i, together with its incompatible items, from
I. The process is iterated until no item can be added to the current solution IS.
In this case, the algorithm stops and exits with a feasible solution IS for P_WIS.
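To make the greedy selection rule concrete, the following C++ sketch (ours, with
a hypothetical adjacency-set representation of E assumed symmetric; not the
authors' implementation) mirrors the steps of Algorithm 1:

    #include <cstdio>
    #include <set>
    #include <vector>

    // Greedy construction of an independent set for P_WIS (Algorithm 1).
    // 'profitOrder' lists item ids in decreasing profit order;
    // 'conflicts[i]' holds the items j with (i, j) in E.
    std::vector<int> greedyIndependentSet(
            const std::vector<int>& profitOrder,
            const std::vector<std::set<int>>& conflicts) {
        std::vector<int> IS;
        std::vector<bool> removed(conflicts.size(), false);
        for (int i : profitOrder) {
            if (removed[i]) continue;          // dropped by an earlier pick
            IS.push_back(i);                   // take the best remaining item
            for (int j : conflicts[i]) removed[j] = true; // drop incompatible items
        }
        return IS;
    }

    int main() {
        // Toy instance: 4 items, profits already sorted, item 0 conflicts with 1.
        std::vector<std::set<int>> conflicts = {{1}, {0}, {}, {}};
        std::vector<int> order = {0, 1, 2, 3};
        for (int i : greedyIndependentSet(order, conflicts)) std::printf("%d ", i);
        std::printf("\n");                     // prints: 0 2 3
        return 0;
    }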
As mentioned above, IS may violate the capacity constraint of P_DCKP.
Then, in order to provide a feasible solution for P_DCKP, the knapsack problem
Algorithm 2: A starting DCKP feasible solution
Input: IS, an independent set of P_WIS, and I, an instance of P_DCKP.
Output: S_DCKP, a DCKP feasible solution.
1: Initialization: Let P_K be the knapsack problem constructed from the items
   belonging to IS.
2: if S_IS satisfies the capacity constraint (1) of P_DCKP then
3:   Set S_DCKP = S_IS;
4: else
5:   Let S_DCKP be the resulting solution of P_K.
6: end if
7: return S_DCKP as a starting feasible solution of P_DCKP.
P_K is solved; herein, we use the exact solver of Martello et al. [8].
Algorithm 2 describes the main steps used for determining a feasible solution of
P_DCKP.
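The paper relies on the exact solver of Martello et al. [8] for P_K; as a
self-contained stand-in (ours, not the solver of [8]), the sketch below solves
P_K over the items of IS with the textbook O(|IS| · c) dynamic program:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Solve P_K restricted to the items 'ids' under capacity c;
    // returns the best reachable profit.
    int solvePK(const std::vector<int>& ids, const std::vector<int>& w,
                const std::vector<int>& p, int c) {
        std::vector<int> best(c + 1, 0);      // best[r] = max profit with capacity r
        for (int i : ids)
            for (int r = c; r >= w[i]; --r)   // reverse scan keeps each item 0-1
                best[r] = std::max(best[r], best[r - w[i]] + p[i]);
        return best[c];
    }

    int main() {
        std::vector<int> w = {5, 4, 3, 2}, p = {10, 8, 6, 4};
        std::vector<int> IS = {0, 2, 3};      // e.g., an output of the first phase
        std::printf("%d\n", solvePK(IS, w, p, 9)); // items 0 and 2: profit 16
        return 0;
    }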
3.2 The second phase
In order to improve the quality of the solution obtained from the first phase
(i.e., the starting solution S_DCKP returned by Algorithm 2), a local search is
performed. This local search can be considered as a descent method that improves
a solution by alternately calling two procedures: a degrading and a re-optimizing
procedure. The degrading procedure builds a k-neighborhood of S_DCKP by dropping
k fixed items from S_DCKP, while the re-optimizing procedure tries to determine
an optimal solution within the current neighborhood. The descent procedure stops
when no better solution can be reached.
Algorithm 3: A descent method
Input: S_DCKP, a starting solution of DCKP.
Output: S*_DCKP, a local optimal solution of DCKP.
1: Set S*_DCKP as an initial feasible solution, where all variables are fixed to 0.
2: while S_DCKP is better than S*_DCKP do
3:   Update S*_DCKP with S_DCKP.
4:   Set α|I| fixed variables of S_DCKP as free.
5:   Define the corresponding neighborhood of S_DCKP.
6:   Determine the optimal solution in the current neighborhood and update S_DCKP.
7: end while
8: return S*_DCKP.
Algorithm 3 shows how an improved solution can be computed by using a
descent method. Indeed, let S_DCKP be the current feasible solution obtained at
the first phase. Let α be a constant belonging to the interval [0, 100] and
denoting the percentage of unassigned decision variables, i.e., some variables
are set free according to the current solution S_DCKP. The core of the descent
method is represented by the main loop (cf. lines 2-7). At line 3, the best
solution found so far, S*_DCKP, is updated with the solution S_DCKP returned at
the last iteration. Line 4 determines the α|I| unassigned variables regarding the
current solution S_DCKP, where items with the highest degree (i.e., items with
the largest neighborhood) are favored. Let i be an item realizing the highest
degree, that is, an item i whose variable x_i is fixed to 1 in S_DCKP. Then, x_i
is set free together with its incompatible variables x_j such that (i, j) ∈ E and
(j, k) ∉ E, where k ≠ i denotes the index of any variable whose value is equal to
1 in S_DCKP. At line 6, S_DCKP is replaced by the best solution found in the
current neighborhood. Finally, the process is iterated until no better solution
can be reached (in this case, Algorithm 3 exits with the best solution S*_DCKP).
Algorithm 4: Remove β|I| variables of S_DCKP
Input: S_DCKP, a starting solution of P_DCKP.
Output: An independent set IS and a reduced instance I_r of P_DCKP.
1: Set counter = 0, I_r = ∅ and IS to the set of items whose decision variables
   are fixed to 1 in S_DCKP.
2: Sort IS in nondecreasing order of profit per weight.
3: while counter < β|I| do
4:   Let r be a real number randomly generated in the interval [0, 1] and
     i = ⌈|IS| × r^γ⌉.
5:   Set IS = IS \ {i}, I_r = I_r ∪ {i} and increment counter = counter + 1.
6:   for all items j such that (i, j) ∈ E do
7:     if item j is compatible with all items belonging to IS then
8:       Set I_r = I_r ∪ {j} and counter = counter + 1.
9:     end if
10:  end for
11: end while
12: return IS and I_r.
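The selection formula of line 4 is garbled in the extracted text; our reading,
following the biased sampling used by Shaw [13], is i = ⌈|IS| × r^γ⌉, the
position in the sorted list of the item to remove. The C++ fragment below (ours)
illustrates how γ skews the draw towards the front of the list:

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Shaw-style biased position: returns a 0-based index into a sorted list
    // of size n; larger gamma concentrates the mass on the first positions.
    int biasedIndex(int n, double gamma, std::mt19937& rng) {
        std::uniform_real_distribution<double> U(0.0, 1.0);
        int pos = (int)std::ceil(n * std::pow(U(rng), gamma)); // 1-based position
        return std::max(pos, 1) - 1;                           // clamp, 0-based
    }

    int main() {
        std::mt19937 rng(42);
        int hits[10] = {0};
        for (int k = 0; k < 100000; ++k) ++hits[biasedIndex(10, 20.0, rng)];
        for (int i = 0; i < 10; ++i) std::printf("position %d: %d\n", i, hits[i]);
        // With gamma = 20, roughly 89% of the draws fall on position 0.
        return 0;
    }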
Note that, on the one hand, the runtime of Algorithm 3 may increase when α tends
to 100, since freeing a large percentage of items implies that the reduced DCKP
is close to the original one. On the other hand, since Algorithm 3 is called at
each iteration of the large neighborhood search (cf. Section 4), a large reduced
DCKP can slow the large neighborhood search down. Therefore, we favor a fast
algorithm which is able to converge towards a good local optimum. This is why our
choice is oriented towards moderate values of α, as shown in the experimental
part (cf. Section 5); a sketch of the degrading step follows.
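As a rough illustration of the degrading step of line 4 of Algorithm 3 (our
simplified reading, not the authors' code: only packed items are counted against
the α|I| budget, and high-degree items are freed first), consider this C++ sketch:

    #include <algorithm>
    #include <cstdio>
    #include <set>
    #include <vector>

    // Pick the packed items to set free: highest conflict degree first,
    // until (approximately) alpha * |I| variables have been released.
    std::vector<int> itemsToFree(std::vector<int> packed,
                                 const std::vector<std::set<int>>& conflicts,
                                 double alpha, int n) {
        std::sort(packed.begin(), packed.end(), [&](int a, int b) {
            return conflicts[a].size() > conflicts[b].size(); // large degree first
        });
        int budget = (int)(alpha * n);
        if ((int)packed.size() > budget) packed.resize(budget);
        return packed;                        // these x_i become free variables
    }

    int main() {
        std::vector<std::set<int>> conflicts = {{1, 2}, {0}, {0}, {}};
        std::vector<int> packed = {0, 3};     // items currently fixed to 1
        for (int i : itemsToFree(packed, conflicts, 0.25, 4)) std::printf("%d ", i);
        std::printf("\n");                    // prints: 0 (degree 2 beats degree 0)
        return 0;
    }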
4 A Large Neighborhood Search
LNS is a heuristic that has proven to be effective on a wide range of
combinatorial optimization problems. A simple version of LNS was presented by
Shaw [13] for solving the vehicle routing problem (cf. also Pisinger and Ropke
[11]). LNS is based on the concepts of building and exploring a neighborhood,
that is, a neighborhood defined implicitly by a destroy procedure and a repair
procedure. Unlike descent methods, which may stagnate in local optima, using
large neighborhoods makes it possible to reach better solutions and explore a
more promising search space. For instance, the descent method discussed in
Section 3.2 (cf. Algorithm 3) may explore some regions and stagnate in a local
optimum, because both the degrading and the re-optimizing procedures follow a
single criterion. In order to increase the chance of reaching a series of
improved solutions, or to escape from a series of local optima, a random
destroying strategy, which depends on the profit per weight of the items, is
applied. Algorithm 5 summarizes the main steps of LNS (noted LNSBH), which uses
Algorithm 4 for determining the neighborhood of a given solution.
Algorithm 5: A large neighborhood search-based heuristic
Input: S_DCKP, a starting solution of P_DCKP.
Output: S*_DCKP, a local optimum of P_DCKP.
1: Set S*_DCKP as a starting feasible solution, where all variables are assigned
   to 0.
2: while the time limit is not reached do
3:   Call Algorithm 4 in order to find IS and I_r according to S_DCKP.
4:   Call Algorithm 1 with argument I_r to complete IS.
5:   Call Algorithm 2 with argument IS for reaching a new solution S_DCKP.
6:   Improve S_DCKP by applying Algorithm 3.
7:   Update S*_DCKP with the best solution.
8: end while
9: return S*_DCKP.
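The overall control flow of Algorithm 5 can be summarized by the following C++
skeleton (ours; the Solution type and the two callbacks are placeholders standing
for Algorithms 1-4, not the authors' implementation):

    #include <chrono>
    #include <cstdio>
    #include <functional>

    struct Solution { int profit = 0; };       // placeholder solution type

    // LNSBH skeleton: destroy/repair (Algorithms 4, 1 and 2), then descend
    // (Algorithm 3), keeping the best incumbent until the time limit.
    Solution lnsbh(Solution start, double timeLimitSec,
                   const std::function<Solution(Solution)>& destroyAndRepair,
                   const std::function<Solution(Solution)>& descent) {
        auto t0 = std::chrono::steady_clock::now();
        Solution best = start, current = start;
        auto elapsed = [&] {
            return std::chrono::duration<double>(
                std::chrono::steady_clock::now() - t0).count();
        };
        while (elapsed() < timeLimitSec) {
            current = descent(destroyAndRepair(current));
            if (current.profit > best.profit) best = current; // update incumbent
        }
        return best;
    }

    int main() {
        // Dummy callbacks: no perturbation, "improve" by one unit, capped at 100.
        auto perturb = [](Solution s) { return s; };
        auto improve = [](Solution s) { if (s.profit < 100) ++s.profit; return s; };
        std::printf("%d\n", lnsbh(Solution{}, 0.01, perturb, improve).profit);
        return 0;
    }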
5 Computational Results
This section evaluates the effectiveness of the proposed large neighborhood
search-based heuristic (LNSBH) on two groups of instances, taken from the
literature [4] and generated following the scheme used by Yamada et al. [14, 15].
The first group contains twenty medium instances with n = 500 items, a capacity
c = 1800, and different densities (assessed in terms of the number of disjunctive
constraints). The second group contains thirty large instances, where each
instance contains 1000 items, with c taken in {1800, 2000} and with various
densities. The proposed LNSBH was coded in C++ and run on an Intel Core i5-2500
at 3.3 GHz.
LNSBH uses several parameters: the percentage α of items dropped in the descent
method, the percentage β of items removed from the solution when the large
neighborhood search is applied, the constant γ used by Algorithm 4, and the fixed
runtime limit t used for stopping the resolution.
5.1 Effect of both degrading and re-optimizing procedures
This section evaluates the effect of the descent method based upon the degrading
and re-optimizing procedures (as used in Algorithm 3) on the starting solution
provided by Algorithm 2. We recall that the re-optimization procedure solves a
reduced P_DCKP, i.e., a problem of small size. In order to balance the quality of
the complementary solution against the runtime needed to maintain a fast
resolution, we solve it using the Cplex solver v12.4.
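As an illustration (ours, not the authors' code) of how such a reduced model can
be handed to Cplex through the Concert C++ API; the instance data and the fixed
variable below are hypothetical:

    #include <ilcplex/ilocplex.h>
    #include <utility>
    #include <vector>

    int main() {
        IloEnv env;
        try {
            int n = 4, c = 9;
            std::vector<int> p = {10, 8, 6, 4}, w = {5, 4, 3, 2};
            std::vector<std::pair<int, int>> E = {{0, 1}}; // incompatible couples
            IloModel model(env);
            IloBoolVarArray x(env, n);
            IloExpr profit(env), weight(env);
            for (int i = 0; i < n; ++i) {
                profit += p[i] * x[i];
                weight += w[i] * x[i];
            }
            model.add(IloMaximize(env, profit));
            model.add(weight <= c);                        // knapsack constraint (1)
            for (auto& e : E)                              // disjunctive constraints (2)
                model.add(x[e.first] + x[e.second] <= 1);
            x[2].setLB(1);  // a variable kept fixed to 1 in the reduced problem
            IloCplex cplex(model);
            cplex.solve();
            env.out() << "objective = " << cplex.getObjValue() << std::endl;
        } catch (IloException& ex) { env.out() << ex << std::endl; }
        env.end();
        return 0;
    }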
Table 1 shows the variation of Av. Sol., the average value of the solutions
provided by the considered algorithm over all treated instances, and Av. time,
the average runtime needed by each algorithm for reaching these results. Since
the choice of the value of α may influence the behavior of the descent method,
the performance of the algorithm is assessed by varying α in the discrete set
{5, 10, 15, 20}.

                          The descent method, α =
              Algo. 1-2     5%       10%      15%      20%
    Av. Sol.   2014.62    2129.98  2188.80  2232.36  2217.04
    Av. time     0.001       0.15     2.03    19.72   118.30

Table 1. Effect of the descent method on the starting DCKP solution.

From Table 1, we can observe that the best average solution value is realized
for α = 15%, but it requires an average runtime of 19.72 seconds. Note that
LNSBH's runtime depends on the descent method's runtime. Therefore, according to
the results displayed in Table 1, one can observe that the value of 5% favors a
quick resolution (0.15 seconds) with an interesting average solution value of
2129.98. Since the goal herein is to propose a fast LNSBH, we retained the
version of Algorithm 5 with the value α = 5%.
5.2 Behavior of LNSBH on both groups of instances
Remark that, according to the results shown in Shaw [13], LNS works reasonably
well when γ varies over the integer interval [5, 20]. In our tests, we set
γ = 20 for Algorithm 4. Therefore, in order to evaluate the performance of
LNSBH, we focus on the two parameters β and t used in Algorithm 5. The study is
conducted by varying the value of β in the discrete set {10, 15, 20, 25, 30}
(percent) and t in {25, 50, 100, 150, 200} (seconds). Table 2 displays the
average solution values realized by LNSBH for the different values of (β, t).
From Table 2, we observe what follows:
– Setting β = 10% provides the best average solution value. Further, the
  solution quality increases when the runtime limit is extended.
                       Variation of β
   t (s)      10%       15%       20%       25%       30%
    25      2395.38   2394.48   2392.5    2391.14   2389.06
    50      2397.52   2397.44   2394.74   2393.34   2391.18
   100      2399.16   2398.98   2396.36   2394.2    2393.9
   150      2399.28   2399      2397.82   2395.62   2394.58
   200      2400.22   2399.62   2398.76   2396.8    2395.74

Table 2. The quality of the average values when varying the values of the couple
(β, t).
– All other values of β induce smaller average values than those of β = 10%
  within 200 seconds.
Therefore, according to the results displayed in Table 2, the objective values of
the solutions determined by setting (β, t) = (10%, 100) and (10%, 200) are
displayed in Table 3. In our computational study, ten random trials of LNSBH are
performed on the fifty literature instances, and each trial is stopped after 100
and 200 seconds, respectively.
Table 3 shows the objective values reached by LNSBH and Cplex, compared to the
best solutions of the literature (taken from Hifi et al. [2, 6]). Column 1 of
Table 3 displays the instance label, column 2 reports the value of the best
solution (denoted V_Cplex) reached by Cplex v12.4 after one hour of runtime, and
column 3 displays the best known solution of the literature, denoted V_IRS.
Finally, column 4 (resp. 5) reports Max. Sol. (resp. Av. Sol.), the maximum
(resp. average) solution value obtained by LNSBH over the ten trials for the
first runtime limit of 100 seconds, and columns 6 and 7 state those of LNSBH for
the second runtime limit of 200 seconds.
From Table 3, we observe what follows:
1. First, we can observe the inferiority of the Cplex solver, which realizes an
average value of 2317.88, below the average of the best known solutions of the
literature (2390.40). Cplex matches the best known solution on 5 instances out of
50, i.e., on 10% of the instances.
2. Second, for both runtime limits (100 and 200 seconds, respectively), LNSBH
realizes better average values than the average of the best solutions of the
literature. Indeed, LNSBH realizes an average value of 2393.63 (resp. 2395.94)
with the first (resp. second) runtime limit.
3. Third, over all trials and with the first runtime limit of 100 seconds, LNSBH
is able to reach 30 new solutions; it matches 14 instances and fails on 6
occasions. On the other hand, running LNSBH with the second runtime limit of 200
seconds increases the number of new solutions: in this case, LNSBH realizes 33
new solutions, matches 13 solutions and fails on 4 occasions.
4. Fourth and last, LNSBH with the second runtime limit (200 seconds) is able
to reach 10 new solutions compared to the solutions reached by LNSBH with
the first runtime limit of 100 seconds.
                                        LNSBH
                            β = 10%, t = 100 s    β = 10%, t = 200 s
Instance  V_Cplex  V_IRS    Max. Sol.  Av. Sol.   Max. Sol.  Av. Sol.
1I1        2567    2567       2567     2564.2       2567     2564.6
1I2        2594    2594       2594     2594         2594     2594
1I3        2320    2320       2320     2319         2320     2319
1I4        2298    2303       2310     2310         2310     2310
1I5        2310    2310       2330     2328         2330     2329
2I1        2080    2100       2117     2116.1       2118     2117
2I2        2070    2110       2110     2110         2110     2110
2I3        2098    2128       2119     2110.2       2132     2118.1
2I4        2070    2107       2109     2106.9       2109     2108.2
2I5        2090    2103       2110     2109.7       2114     2111.2
3I1        1667    1840       1845     1788.4       1845     1814
3I2        1681    1785       1779     1759.9       1779     1769.2
3I3        1461    1742       1774     1759.3       1774     1762.9
3I4        1567    1792       1792     1792         1792     1792
3I5        1563    1772       1775     1751.6       1775     1759.2
4I1        1053    1321       1330     1330         1330     1330
4I2        1199    1378       1378     1378         1378     1378
4I3        1212    1374       1374     1374         1374     1374
4I4        1066    1353       1353     1352.7       1353     1353
4I5        1229    1354       1354     1336.4       1354     1336.4
5I1        2680    2690       2690     2684         2690     2686
5I2        2690    2690       2690     2683.9       2690     2685.9
5I3        2670    2689       2680     2675.7       2690     2679.7
5I4        2680    2690       2698     2683.2       2698     2689.2
5I5        2660    2680       2670     2668         2670     2669.9
6I1        2820    2840       2850     2850         2850     2850
6I2        2800    2820       2830     2823.9       2830     2827.7
6I3        2790    2820       2830     2819.9       2830     2821.9
6I4        2790    2800       2820     2817         2822     2820.2
6I5        2800    2810       2830     2823.7       2830     2825.6
7I1        2700    2750       2780     2771.9       2780     2773
7I2        2720    2750       2770     2769         2770     2770
7I3        2718    2747       2760     2759         2760     2760
7I4        2728    2773       2800     2791         2800     2793
7I5        2730    2757       2770     2763         2770     2765
8I1        2638    2720       2720     2719.1       2720     2719.1
8I2        2659    2709       2720     2719         2720     2720
8I3        2664    2730       2740     2733         2740     2734
8I4        2620    2710       2710     2708.7       2719     2709.9
8I5        2644    2710       2710     2709         2710     2710
9I1        2589    2650       2676     2670.9       2677     2671.3
9I2        2580    2640       2665     2661.5       2665     2663
9I3        2580    2635       2670     2665.8       2670     2668.6
9I4        2540    2630       2660     2659.8       2660     2659.9
9I5        2594    2630       2669     2663.5       2670     2664.9
10I1       2500    2610       2620     2616.7       2620     2619.7
10I2       2549    2642       2630     2627.5       2630     2629.9
10I3       2527    2618       2620     2617.1       2627     2620.5
10I4       2509    2621       2620     2617         2620     2618.6
10I5       2530    2606       2620     2619.3       2625     2620.5
Av. Sol.  2317.88  2390.40   2399.16   2393.63     2400.22   2395.94

Table 3. Performance of LNSBH vs Cplex and IRS on the benchmark instances of the
literature.
6 Conclusion
In this paper, we proposed a fast large neighborhood search-based heuristic for
solving the disjunctively constrained knapsack problem. The proposed method
combines a two-phase procedure and a large neighborhood search. First, the
two-phase procedure determines an initial feasible solution by combining the
resolution of two complementary combinatorial optimization problems: the weighted
independent set problem and the classical binary knapsack problem. Second, a
descent method, based upon degrading and re-optimizing strategies, is applied in
order to improve the quality of the solutions. Third, a large neighborhood search
is used for diversifying the search space. Finally, the computational results
showed that the proposed algorithm performs better than the Cplex solver and
yields high-quality solutions, improving several best known solutions of the
literature.
References
1. M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the
Theory of NP-completeness, W.H. Freeman and Comp., San Francisco, 1979.
2. M. Hifi. An iterative rounding search-based algorithm for the disjunctively con-
strained knapsack problem. Engineering Optimization, to appear (2012).
3. M. Hifi and M. Michrafy. Reduction strategies and exact algorithms for the dis-
junctively constrained knapsack problem. Computers and Operations Research 34:
2657–2673, 2007.
4. M. Hifi and M. Michrafy. A reactive local search algorithm for the disjunctively
constrained knapsack problem. Journal of the Operational Research Society 57:
718–726, 2006.
5. M. Hifi and N. Otmani. An algorithm for the disjunctively constrained knapsack
problem, International Journal of Operational Research, 13: 22–43, 2012.
6. M. Hifi and N. Otmani. An algorithm for the disjunctively constrained knapsack
problem, IEEE - International Conference on Communications, Computing and
Control Applications, pp. 1-6, 2011.
7. M. Hifi, S. Negre and M. Ould Ahmed Mounir. Local branching-based algorithm
for the disjunctively constrained knapsack problem, IEEE, Proceedings of the In-
ternational Conference on Computers & Industrial Engineering, pp. 279–284, 2009.
8. S. Martello, D. Pisinger and P. Toth. Dynamic programming and strong bounds for
the 0-1 knapsack problem, Management Science, 45: 414–424, 1999.
9. U. Pferschy and J. Schauer. The knapsack problem with conflict graphs, Journal of
Graph Algorithms and Applications, 13: 233–249, 2009.
10. D. Pisinger and M. Sigurd. Using decomposition techniques and constraint program-
ming for solving the two-dimensional bin-packing problem. INFORMS Journal on
Computing 19: 36–51, 2007.
11. D. Pisinger and S. Ropke. Large neighborhood search, in Handbook of
Metaheuristics, International Series in Operations Research & Management Science,
Vol. 146, pp. 399–419, 2010.
12. R. Sadykov and F. Vanderbeck. Bin packing with conflicts: A generic branch-and-
price algorithm. INFORMS Journal on Computing (published online May 4, 2012,
doi: 10.1287/ijoc.1120.0499).
13. P. Shaw. Using constraint programming and local search methods to solve
vehicle routing problems. In: CP-98 (Fourth International Conference on
Principles and Practice of Constraint Programming), Lecture Notes in Computer
Science, 1520, pp. 417–431, 1998.
14. T. Yamada, S. Kataoka and K. Watanabe. Heuristic and exact algorithms for
the disjunctively constrained knapsack problem, Information Processing Society of
Japan Journal, 43: 2864–2870, 2002.
15. T. Yamada and S. Kataoka. Heuristic and exact algorithms for the disjunctively
constrained knapsack problem. EURO 2001, Rotterdam, The Netherlands, July 9–
11, 2001.