Computers & Operations Research

Published by Elsevier BV

Print ISSN: 0305-0548

Articles


Table 1: Statistics for the destroy and repair operators
Table 2: Results for the three 2E-VRP sets compared to the best known solutions from the literature
Table 3: Results for the new set of larger 2E-VRP instances
Table 4: Average % deviation from the BKS and CPU times in seconds over all instances of the three LRP instance sets
Table 5: Characteristics of the 2E-VRP and LRP instances


An adaptive large neighborhood search heuristic for Two-Echelon Vehicle Routing Problems arising in city logistics

December 2012 · 1,488 Reads · Vera C Hemmelmayr

In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP.
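
For readers unfamiliar with the mechanics, the adaptive layer of an ALNS can be condensed to a short loop: choose a destroy and a repair operator by roulette-wheel selection, score the outcome, and smooth the scores into the selection weights. The Python sketch below is a generic illustration rather than the authors' implementation; the scores, the reaction factor, the cooling schedule, and the per-iteration weight update (real implementations typically update per segment of iterations) are all illustrative assumptions.

```python
import math
import random

def alns(initial, destroy_ops, repair_ops, cost, iters=1000, reaction=0.1):
    """Generic ALNS skeleton: roulette-wheel operator selection with
    score-based weight smoothing (all parameter values illustrative)."""
    best = current = initial
    weights = {op: 1.0 for op in destroy_ops + repair_ops}
    temperature = 1.0                       # SA-style acceptance, a common choice
    for _ in range(iters):
        # Pick one destroy and one repair operator, proportionally to weight.
        d = random.choices(destroy_ops, [weights[o] for o in destroy_ops])[0]
        r = random.choices(repair_ops, [weights[o] for o in repair_ops])[0]
        candidate = r(d(current))
        # Score the operator pair: new best > improving > accepted > rejected.
        if cost(candidate) < cost(best):
            best = current = candidate
            score = 10.0
        elif cost(candidate) < cost(current):
            current, score = candidate, 5.0
        elif random.random() < math.exp((cost(current) - cost(candidate)) / temperature):
            current, score = candidate, 2.0
        else:
            score = 0.0
        for op in (d, r):                   # the "adaptive" part
            weights[op] = (1 - reaction) * weights[op] + reaction * score
        temperature *= 0.999                # geometric cooling
    return best
```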

Figures: example of two adjacent solutions on the Pareto front for instance Mbayene; a solution close to the end of the Pareto front for instance Meouane; comparison of exact and H1 for instances Diender Guedj and Ndiagagniao; size, Pareto front size and CPU effort for the Senegal instances.
The bi-objective stochastic covering tour problem

July 2012 · 145 Reads

We formulate a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In the model, it is assumed that depending on the distance, a certain percentage of clients go from their homes to the nearest distribution center. An application in humanitarian logistics is envisaged. For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, a branch-and-cut technique, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors, is used within an epsilon-constraint algorithm. Computational results on real-world data for rural communities in Senegal show the viability of the approach.

Lower and upper bounds for the two-echelon capacitated location-routing problem

December 2012 · 353 Reads

In this paper, we introduce two algorithms to address the two-echelon capacitated location-routing problem (2E-CLRP). We introduce a branch-and-cut algorithm based on the solution of a new two-index vehicle-flow formulation, which is strengthened with several families of valid inequalities. We also propose an adaptive large-neighbourhood search (ALNS) meta-heuristic with the objective of finding good-quality solutions quickly. The computational results on a large set of instances from the literature show that the ALNS outperforms existing heuristics. Furthermore, the branch-and-cut method provides tight lower bounds and is able to solve small- and medium-size instances to optimality within reasonable computing times.

Metaheuristics for the dynamic stochastic dial-a-ride problem with expected return transports

December 2011 · 146 Reads

The problem of transporting patients or elderly people has been widely studied in the literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15%. Moreover, improvements of up to 41% can be achieved for some test instances.

A FTA-Based Method for Risk Decision Making in Emergency Response

May 2011 · 93 Reads

Emergency decision making is a crucial issue in emergency management and a valuable academic research topic. Although some research has been conducted, there has been no attempt to solve emergency decision making problems on the basis of analyzing how alternatives influence the development and evolution of the emergency. In this paper, a novel FTA-based method is proposed for risk decision making in emergency response. In the method, a fault tree of the undesirable state of the emergency is first constructed, by which the influence of alternatives on the emergency can be analyzed. On this basis, the probabilities that the undesirable state will occur, given that different alternatives are chosen, are estimated. Then, according to these probabilities, the overall ranking values of the alternatives are calculated based on multiple criteria risk decision making (MCRDM), and a ranking of the alternatives is determined. Finally, a practical example is used to illustrate the feasibility and validity of the proposed method. The proposed method overcomes a limitation of existing methods, namely that the influence of alternatives on the emergency is not considered, and it enriches the theory and methods of emergency decision making.
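
To make the fault-tree step concrete: given basic-event probabilities under a chosen response alternative, the probability of the undesirable top event is evaluated bottom-up through the gates. The sketch below assumes independent basic events; the gate structure and the probabilities are hypothetical, not taken from the paper.

```python
def ft_prob(node, p_basic):
    """Evaluate a fault tree bottom-up, assuming independent basic events.
    A node is either a basic-event name or a tuple (gate, [children])."""
    if isinstance(node, str):
        return p_basic[node]
    gate, children = node
    probs = [ft_prob(c, p_basic) for c in children]
    if gate == "AND":                       # top occurs only if all children occur
        result = 1.0
        for p in probs:
            result *= p
        return result
    if gate == "OR":                        # top occurs if at least one child occurs
        result = 1.0
        for p in probs:
            result *= 1.0 - p
        return 1.0 - result
    raise ValueError(f"unknown gate {gate!r}")

# Hypothetical tree: the undesirable state occurs if E1 occurs AND (E2 OR E3).
tree = ("AND", ["E1", ("OR", ["E2", "E3"])])
# Basic-event probabilities under one alternative (invented numbers).
print(ft_prob(tree, {"E1": 0.2, "E2": 0.1, "E3": 0.05}))   # -> 0.029
```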

An M/G/C/C state-dependent network simulation model

April 2005 · 158 Reads

A discrete-event digital simulation model is developed to study traffic flows in M/G/C/C state-dependent queueing networks. Several performance measures are evaluated, namely (i) the blocking probability, (ii) throughput, (iii) the expected number of customers in the system, and (iv) the expected travel (service) time. Series, merge, and split topologies are examined, with special application to pedestrian evacuation planning problems in buildings. Extensive computational experiments are presented showing that the simulation model is an effective and insightful tool for validating analytical expressions and for analyzing general accessibility in network evacuation problems, especially in high-rise buildings.
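
The single-node analytical results that such a simulation validates follow from birth-death balance equations: with arrival rate lambda, state-dependent service rate n·mu·f(n), and capacity C, the unnormalized state probabilities satisfy p_n = p_{n-1}·lambda/(n·mu·f(n)), and the blocking probability is p_C. A minimal solver is sketched below; the linear slowdown function f is a hypothetical stand-in for a pedestrian congestion curve, not the model used in the paper.

```python
def blocking_probability(lam, mu, C, f):
    """Blocking probability p_C of a single loss node with state-dependent
    service rate n * mu * f(n), from birth-death balance equations."""
    unnorm = [1.0]                          # p_0, up to normalization
    for n in range(1, C + 1):
        unnorm.append(unnorm[-1] * lam / (n * mu * f(n)))
    return unnorm[C] / sum(unnorm)

# Hypothetical congestion curve: service speed halves as the node fills up.
f = lambda n, C=50: max(1.0 - 0.5 * (n - 1) / (C - 1), 0.5)
print(blocking_probability(lam=8.0, mu=0.2, C=50, f=f))
```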

Core problems in bi-criteria {0,1}-knapsack problems

January 2004 · 64 Reads

The most efficient algorithms for solving the single-criterion {0,1}-knapsack problem are based on the core concept (i.e., on a small number of relevant variables), but this concept has not been used in problems with more than one criterion. The main purpose of this paper is to validate the existence of such a set of variables in bi-criteria {0,1}-knapsack instances. Numerical experiments were performed on five types of {0,1}-knapsack instances. The results are presented for the supported and non-supported solutions as well as for the entire set of efficient solutions. A description of an approximate and an exact method is also presented.

Modeling and Solving Several Classes of Arc Routing Problems as Traveling Salesman Problems. Computers & Operations Research, 24, 1057-1061

November 1997 · 55 Reads

Several important types of arc routing problems can be transformed into traveling salesman problems. Computational results indicate that the approach works well on low density graphs containing few edges. Instances involving up to 220 vertices, 660 arcs, and a few edges were solved to optimality. It constitutes the only known approach for solving Mixed Rural Postman Problems and Stacker Crane Problems to optimality.

Paolucci, M.: Parallel machine total tardiness scheduling with a new hybrid metaheuristic approach. Computers & Operations Research 34(11), 3471-3490

November 2007 · 104 Reads

This work proposes a hybrid metaheuristic (HMH) approach which integrates several features from tabu search (TS), simulated annealing (SA) and variable neighbourhood search (VNS) in a new configurable scheduling algorithm. In particular, either a deterministic or a random candidate list strategy can be used to generate the neighbourhood of a solution, both a tabu list mechanism and the SA probabilistic rule can be adopted to accept solutions, and the dimension of the explored neighbourhood can be dynamically modified. The considered class of scheduling problems is characterized by a set of independent jobs to be executed on a set of parallel machines with non-zero ready times and sequence dependent setups. In particular, the NP-hard generalized parallel machine total tardiness problem (GPMTP) recently defined by Bilge et al. [A tabu search algorithm for parallel machine total tardiness problem. Computers & Operations Research 2004;31:397–414], is faced. Several alternative configurations of the HMH have been tested on the same benchmark set used by Bilge et al. The results obtained highlight the appropriateness of the proposed approach.
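
The configurable acceptance mechanism at the heart of the approach, a tabu list combined with the SA probabilistic rule, can be sketched in a few lines. This is a hedged illustration: the move encoding, the tabu tenure of 7, and the aspiration criterion are assumptions, not the paper's exact configuration.

```python
import math
import random
from collections import deque

def hybrid_accept(move, delta, tabu, temperature, best_delta):
    """Hybrid TS/SA acceptance: a tabu move passes only via aspiration
    (it would improve on the best solution found so far); a non-tabu
    worsening move is accepted with the SA probability exp(-delta/T)."""
    if move in tabu:
        return delta < best_delta           # aspiration criterion
    if delta <= 0:
        return True                         # improving or sideways move
    return random.random() < math.exp(-delta / temperature)

tabu = deque([("swap", 3, 5)], maxlen=7)    # illustrative tenure of 7
# Worsening, non-tabu move: accepted with probability exp(-0.4), about 0.67.
print(hybrid_accept(("swap", 1, 2), delta=4.0, tabu=tabu,
                    temperature=10.0, best_delta=-1.0))
```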

Genetic algorithms and Tabu Search: hybrids for optimization. Computers & Operations Research, 22: 111-134

January 1995 · 318 Reads

Genetic algorithms and tabu search have a number of significant differences. They also have some common bonds, often unrecognized. We explore the nature of the connections between the methods, and show that a variety of opportunities exist for creating hybrid approaches to take advantage of their complementary features. Tabu search has pioneered the systematic exploration of memory functions in search processes, while genetic algorithms have pioneered the implementation of methods that exploit the idea of combining solutions. There is also another approach, related to both of these, that is frequently overlooked. The procedure called scatter search, whose origins overlap with those of tabu search (and roughly coincide with the emergence of genetic algorithms) also proposes mechanisms for combining solutions, with useful features that offer a bridge between tabu search and genetic algorithms. Recent generalizations of scatter search concepts, embodied in notions of structured combinations and path relinking, have produced effective strategies that provide a further basis for integrating GA and TS approaches. A prominent TS component called strategic oscillation is susceptible to exploitation by GA processes as a means of creating useful degrees of diversity and of allowing effective transitions between feasible and infeasible regions. The independent success of genetic algorithms and tabu search in a variety of applications suggests that each has features that are valuable for solving complex problems. The thesis of this paper is that the study of methods that may be created from their union can provide useful benefits in diverse settings.

Artificial neural network representations for hierarchical preference structures. Computers and Operations Research, 23(12), 1191-1201

December 1996 · 158 Reads

In this paper, we introduce two artificial neural network formulations that can be used to assess the preference ratings from the pairwise comparison matrices of the Analytic Hierarchy Process. First, we introduce a modified Hopfield network that can determine the vector of preference ratings associated with a positive reciprocal comparison matrix. The dynamics of this network are mathematically equivalent to the power method, a widely used numerical method for computing the principal eigenvectors of square matrices. However, this Hopfield network representation is incapable of generalizing the preference patterns, and consequently is not suitable for approximating the preference ratings if the pairwise comparison judgments are imprecise. Second, we present a feed-forward neural network formulation that does have the ability to accurately approximate the preference ratings. We use a simulation experiment to verify the robustness of the feed-forward neural network formulation with respect to imprecise pairwise judgments. From the results of this experiment, we conclude that the feed-forward neural network formulation appears to be a powerful tool for analyzing discrete alternative multicriteria decision problems with imprecise or fuzzy ratio-scale preference judgments.
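
Since the Hopfield dynamics are equivalent to the power method, the preference vector itself is easy to reproduce: repeatedly multiply the comparison matrix by a trial vector and renormalize until it stabilizes. The 3x3 reciprocal matrix below is a made-up example, not data from the paper.

```python
import numpy as np

def power_method(A, iters=100, tol=1e-10):
    """Principal eigenvector of a positive square matrix by power iteration,
    normalized to sum to 1 (the usual AHP preference-rating convention)."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w_new = A @ w
        w_new /= w_new.sum()
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w_new

# Made-up pairwise comparison matrix (reciprocal, mildly inconsistent).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(power_method(A))                      # approximate preference ratings
```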

Ramanathan, R.: Data envelopment analysis for weight derivation and aggregation in the analytic hierarchy process. Computers & Operations Research 33(5), 1289-1307

May 2006 · 341 Reads

Data envelopment analysis (DEA) is proposed in this paper to generate local weights of alternatives from pair-wise comparison judgment matrices used in the analytic hierarchy process (AHP). The underlying assumption behind the approach is explained, and some salient features are explored. It is proved that DEA correctly estimates the true weights when applied to a consistent matrix formed using a known set of weights. DEA is further proposed to aggregate the local weights of alternatives in terms of different criteria to compute final weights. It is proved further that the proposed approach, called DEAHP in this paper, does not suffer from rank reversal when irrelevant alternatives are added or removed.

Future Paths for Integer Programming and Links to Artificial Intelligence. Computers & Operations Research 13, 533-549

January 1986 · 3,792 Reads

Integer programming has benefited from many innovations in models and methods. Some of the promising directions for elaborating these innovations in the future may be viewed from a framework that links the perspectives of artificial intelligence and operations research. To demonstrate this, four key areas are examined: (1) controlled randomization, (2) learning strategies, (3) induced decomposition and (4) tabu search. Each of these is shown to have characteristics that appear usefully relevant to developments on the horizon.

Approximate solutions for the Capacitated Arc Routing Problem. Computers and Operations Research 16, 589-600

December 1989 · 53 Reads

The capacitated arc routing problem (CARP) is a capacitated variation of arc routing problems in which there is a capacity constraint associated with each vehicle. Due to the computational complexity of the problem, recent research has focussed on developing and testing heuristic algorithms which solve the CARP approximately. In this paper, we review some of the existing solution procedures, analyze their complexity, and present two modifications of the existing methods to obtain near-optimal solutions for the CARP. Extensive computational results are presented and analyzed.

Törn, A.: Population Set-Based Global Optimization Algorithms: Some Modifications and Numerical Studies. Computers & Operations Research 31(10), 1703-1725

September 2004 · 201 Reads

This paper studies the efficiency and robustness of some recent and well known population set-based direct search global optimization methods such as Controlled Random Search, Differential Evolution and the Genetic Algorithm. Some modifications are made to Differential Evolution and to the Genetic Algorithm to improve their efficiency and robustness. All methods are tested on two sets of test problems, one composed of easy but commonly used problems and the other of a number of relatively difficult problems.
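
As a point of reference for the methods studied, the core of Differential Evolution fits in a few lines: each trial point is one population member perturbed by a scaled difference of two others, followed by binomial crossover and greedy selection. The sketch uses the classic DE/rand/1/bin scheme with illustrative parameter settings; it does not include the modifications proposed in the paper.

```python
import numpy as np

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin sketch (parameter values illustrative)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)    # differential mutation
            cross = rng.random(len(lo)) < CR             # binomial crossover
            cross[rng.integers(len(lo))] = True          # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# Example: minimize the 2-D sphere function on [-5, 5]^2.
print(de_minimize(lambda x: float(np.sum(x ** 2)), [(-5, 5), (-5, 5)]))
```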

Thompson, J.: On the application of graph colouring techniques in round-robin sports scheduling. Computers & Operations Research 38(1), 190-204

January 2011 · 218 Reads

The purpose of this paper is twofold. First, it explores the issue of producing valid, compact round-robin sports schedules by considering the problem as one of graph colouring. Using this model, which can also be extended to incorporate additional constraints, the difficulty of such problems is then gauged by considering the performance of a number of different graph colouring algorithms. Second, neighbourhood operators are then proposed that can be derived from the underlying graph colouring model and, in an example application, we show how these operators can be used in conjunction with multi-objective optimisation techniques to produce high-quality solutions to a real-world sports league scheduling problem encountered at the Welsh Rugby Union in Cardiff, Wales.
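
The underlying model is easy to reproduce in miniature: matches are vertices, two matches conflict when they share a team, and a proper colouring of the conflict graph assigns matches to rounds. The sketch below applies a simple greedy largest-first colouring to a made-up six-team league; it illustrates the model only and, unlike the algorithms compared in the paper, is not guaranteed to reach the minimum number of rounds.

```python
from itertools import combinations

def rounds_by_colouring(teams):
    """Greedy largest-first colouring of the round-robin conflict graph:
    vertices are matches, edges join matches sharing a team, colours are
    rounds. Heuristic only; may use more rounds than the optimum."""
    matches = list(combinations(teams, 2))
    conflicts = {m: [n for n in matches if n != m and set(m) & set(n)]
                 for m in matches}
    colour = {}
    for m in sorted(matches, key=lambda m: -len(conflicts[m])):
        used = {colour[n] for n in conflicts[m] if n in colour}
        colour[m] = next(c for c in range(len(matches)) if c not in used)
    rounds = {}
    for m, c in colour.items():
        rounds.setdefault(c, []).append(m)
    return rounds

# Six hypothetical teams; a compact single round robin needs 5 rounds.
for rnd, games in sorted(rounds_by_colouring("ABCDEF").items()):
    print(rnd, games)
```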

Analysis of the market for access to broadband telecommunications in the year 2000

February 1998 · 3 Reads

This article presents a case study of the application of the Delphi Method to forecasting the market for broadband telecommunications in the year 2000. Specifically, we analyse demand for a customer premises switch, which takes traffic from subscriber premises networks (e.g. LANs, PBXs, and video systems) and aggregates it in order to be transported over a wide area broadband telecommunications network. This product is an ATM premises switch, which has only recently become commercially available and therefore has negligible history of subscriber demand, so that forecasts cannot be made by extrapolation. In order to assess the market for such a product, a quantitative analysis of subscriber demand during the latter half of the 1990s was performed. This article describes that analysis based upon evaluating market demand from seven different viewpoints. In order to ensure consistency between these approaches, a methodology based upon the Delphi Method was applied to the problem. Results were obtained for the market in the whole of North America with a focussed case study of the Toronto urban core. This article broadens the applicability of the Delphi method to situations in which individual ‘experts’ on each viewpoint are unavailable.

Extensions of a Tabu Search Adaptation to the Quadratic Assignment Problem. Computers and Operations Research, 21(8), 855-865

October 1994 · 21 Reads

The adaptation of tabu search to the Quadratic Assignment Problem (QAP) is refined by incorporating knowledge about permutations yielding good objective function values into the solution process. This knowledge serves two purposes. First, it is used to guide the nonimproving moves toward targeted “good” permutations. Second, it allows the possibility to restrict search in the solution space by fixing those parts of permutations that are common in good quality solutions. Fixing and freeing parts of permutations provides an interplay between intensification and diversification of search. Restricting the search neighborhood becomes essential in solution attempts for QAPs of larger dimensions due to the increase in computational time. Computational results for QAPs of dimension 100 are very promising. An implementation of the approach to dynamic tabu list sizes that creates “moving gaps” in the tabu list is also described. Combining these ingredients, the method obtains very good results for all Skorin-Kapov's problems of dimensions 42–90.

Performance Analysis of Scheduling Policies in Re-entrant Manufacturing Systems, Computers and Operations Research, 23, 37-51

January 1996 · 9 Reads

Re-entrant lines are a class of non-traditional queueing network models that are congenial for the modeling of manufacturing systems with distinct multiple visits to work centers. Analyzing the performance of scheduling policies in re-entrant lines is a problem of significant research interest. Re-entrant lines are non-product-form owing to priority scheduling, and all the existing performance studies have used simulation for analysis. In this paper we present an approximate technique for analytical performance prediction of re-entrant lines. The technique is based on MVA (Mean Value Analysis). The running time of the algorithm is linear in the product of the system population and the number of operations, which makes it overwhelmingly efficient compared to simulation. A detailed comparison of performance values obtained through simulation and the proposed technique shows that the analytical estimates are quite accurate.
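
The exact MVA recursion that such an approximation builds on is short enough to state in code. Below is textbook single-class exact MVA for a closed network of FCFS single-server stations, with made-up visit ratios and rates; a re-entrant visit is modeled simply as a visit ratio above 1. The paper's actual contribution, approximating priority scheduling, is not captured by this plain version. Note that the running time is linear in the product of the population and the number of stations, matching the complexity the abstract describes.

```python
def mva(visits, rates, N):
    """Exact Mean Value Analysis for a closed single-class network of
    FCFS single-server stations. visits[i] is the visit ratio and
    rates[i] the service rate of station i; N is the job population."""
    M = len(visits)
    q = [0.0] * M                         # mean queue lengths at population k-1
    for k in range(1, N + 1):
        # Residence time per visit: own service plus queue seen on arrival.
        w = [(1.0 + q[i]) / rates[i] for i in range(M)]
        X = k / sum(visits[i] * w[i] for i in range(M))   # system throughput
        q = [X * visits[i] * w[i] for i in range(M)]      # Little's law
    return X, q

# Hypothetical 3-station re-entrant line: station 0 is visited twice per job.
X, q = mva(visits=[2.0, 1.0, 1.0], rates=[1.5, 1.0, 0.8], N=10)
print(X, q)
```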

Hifi, M.: An Improvement of Viswanathan and Bagchi's Exact Algorithm for Constrained Two-Dimensional Cutting Stock. Computers & Operations Research 24(8), 727-736

August 1997 · 51 Reads

Viswanathan and Bagchi [Operations Research, 1993, 41(4), 768–776] have proposed a bottom-up algorithm that combines Gilmore and Gomory's algorithm, called at each node of a tree-search procedure, to solve the constrained two-dimensional cutting problem exactly. This algorithm is one of the best exact algorithms known today. In this paper, we propose an improved version of this algorithm by introducing one-dimensional bounded knapsacks into the original algorithm. Then, by exploiting dynamic programming properties, we obtain good lower and upper bounds which lead to significant branching cuts. Finally, the improved version is compared to the standard version of Viswanathan and Bagchi on some small and medium instances.

Heuristic factory planning algorithm for advanced planning and scheduling. Computers & Operations Research 36(9): 2513-2530

September 2009 · 195 Reads

This study focuses on solving the factory planning (FP) problem for product structures with multiple final products. In situations in which the capacity of the work center is limited and multiple job stages are sequentially dependent, the algorithm proposed in this study is able to plan all the jobs, while minimizing delay time, cycle time, and advance time. Though mixed integer programming (MIP) is a popular way to solve supply chain factory planning problems, the MIP model becomes insolvable for complex FP problems, due to the time and computer resources required. For this reason, this study proposes a heuristic algorithm, called the heuristic factory planning algorithm (HFPA), to solve the supply chain factory planning problem efficiently and effectively. HFPA first identifies the bottleneck work center and sorts the work centers according to workload, placing the work center with the heaviest workload ahead of the others. HFPA then groups and sorts jobs according to various criteria, for example, dependency on the bottleneck work center, the workload at the bottleneck work center, and the due date. HFPA plans jobs individually in three iterations. First, it plans jobs without preempting, advancing, and/or delaying. Jobs that cannot be scheduled under these conditions are scheduled in the second iteration, which allows preemption. In the final iteration, which allows jobs to be preempted, advanced, and delayed, all the remaining jobs are scheduled. A prototype was constructed and tested to show HFPA's effectiveness and efficiency. This algorithm's power was demonstrated using computational and complexity analysis.

Zheng, D.: An effective hybrid optimisation strategy for job-shop scheduling problems. Computers and Operations Research 28, 585-596

May 2001 · 84 Reads

Simulated annealing is a naturally serial algorithm, but its behavior can be controlled by the cooling schedule. The genetic algorithm exhibits implicit parallelism and can retain useful redundant information about what is learned from previous searches through its representation in individuals in the population, but GA may lose solutions and substructures due to the disruptive effects of genetic operators, and it is not easy to regulate GA's convergence. By reasonably combining these two global probabilistic search algorithms, we develop a general, parallel and easily implemented hybrid optimization framework, and apply it to job-shop scheduling problems. Based on an effective encoding scheme and some specific optimization operators, some benchmark job-shop scheduling problems are well solved by the hybrid optimization strategy, and the results are competitive with the best literature results. Besides the effectiveness and robustness of the hybrid strategy, the combination of different search mechanisms and structures can relax the parameter dependence of GA and SA.

Scope and purpose: The job-shop scheduling problem (JSP) is one of the most well-known machine scheduling problems and one of the strongly NP-hard combinatorial optimization problems. Developing effective search methods is always important and valuable work. The scope and purpose of this paper is to present a parallel and easily implemented hybrid optimization framework, which reasonably combines the genetic algorithm with simulated annealing. Based on an effective encoding scheme and some specific optimization operators, job-shop scheduling problems are well solved by the hybrid optimization strategy.

Aldowaisan, T.: A new heuristic and dominance relations for no-wait flowshops with setups. Computers and Operations Research 28, 563-584

May 2001 · 29 Reads

The two-machine no-wait flowshop problem, where setup times are considered separate from processing times and sequence independent, is addressed with respect to minimizing total flowtime. A local and a global dominance relation are developed and a new heuristic is provided. Furthermore, a lower bound is obtained and used along with the dominance relations in a branch-and-bound algorithm in order to evaluate the efficiency of the heuristic. Computational experience demonstrates the superiority of the local dominance relation and the new heuristic.

Scope and purpose: No-wait flowshop problems, where jobs have to be processed without interruption between consecutive machines, represent an important area in scheduling. There are several industries where the no-wait flowshop problem applies, including the metal, plastic, and chemical industries. For instance, in the case of steel production, the heated metal must continuously go through a sequence of operations before it is cooled in order to prevent defects in the composition of the material. Another important area arises when setup time is considered separate from processing time. Such a consideration is particularly justified when the ratio of setup to processing time is non-negligible. Many applications warrant separate consideration of setup; examples include the re-tooling of multi-tool equipment. Other applications can be found in the textile, plastic, chemical, and semi-conductor industries. This paper develops a new heuristic and dominance relations for the two-machine no-wait separate setup flowshop problem, where the performance criterion is total flowtime.

Chabrier, A.: Vehicle routing problem with elementary shortest path based column generation. Computers & Operations Research 33(10), 2972-2990

October 2006 · 276 Reads

The usual column generation model for a Vehicle Routing Problem involves an elementary shortest-path subproblem. The worst-case complexity of the known algorithms for this problem being too high, the elementary-path constraint is usually relaxed. Indeed, as each customer must be visited exactly once, the two problems with and without the elementary-path constraint have the same optimal integer solutions. In this article, we propose one theoretical and several practical improvements to the algorithm for elementary paths. We obtain better lower bounds and pruning of the search tree, and these improvements allowed us to find an exact solution to 17 instances of the Solomon benchmark suite which were previously open.

A Common Framework for Deriving Preference Values from Pairwise Comparison Matrices. Computers and Operations Research 31, 893-908

May 2004 · 315 Reads

Pairwise comparison is commonly used to estimate preference values of finite alternatives with respect to a given criterion. We discuss 18 estimating methods for deriving preference values from pairwise judgment matrices under a common framework of effectiveness: distance minimization and correctness in error-free cases. We point out the importance of commensurate scales when aggregating all the columns of a judgment matrix and the desirability of weighting the columns according to the preference values. The common framework is useful in differentiating the strengths and weaknesses of the estimating methods. Some comparison results of these 18 methods on two sets of judgment matrices with small and large errors are presented. We also give insight regarding the underlying mathematical structure of some of the methods.
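
One of the best-known estimating methods of this kind, the row geometric mean (the logarithmic least-squares estimate), takes only a few lines. The judgment matrix below is a made-up, perfectly consistent example, for which the method recovers the underlying weights exactly.

```python
import numpy as np

def row_geometric_mean(A):
    """Preference values as normalized row geometric means of a pairwise
    judgment matrix (the logarithmic least-squares estimate)."""
    g = np.exp(np.log(A).mean(axis=1))
    return g / g.sum()

# Consistent matrix built from the weight vector (0.6, 0.3, 0.1).
A = np.array([[1.0, 2.0, 6.0],
              [1/2, 1.0, 3.0],
              [1/6, 1/3, 1.0]])
print(row_geometric_mean(A))                # -> [0.6, 0.3, 0.1]
```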

Hentenryck, P.V.: A two-stage hybrid algorithm for pickup and delivery vehicle routing problems with time windows. Computers & Operations Research 33(4), 875-893

April 2006 · 136 Reads

This paper presents a two-stage hybrid algorithm for pickup and delivery vehicle routing problems with time windows and multiple vehicles (PDPTW). The first stage uses a simple simulated annealing algorithm to decrease the number of routes, while the second stage uses large neighborhood search (LNS) to decrease total travel cost. Experimental results show the effectiveness of the algorithm, which has produced many new best solutions on problems with 100, 200, and 600 customers. In particular, it has improved 47% and 76% of the best solutions on the 200- and 600-customer benchmarks, sometimes by as much as 3 vehicles. These results further confirm the benefits of two-stage approaches in vehicle routing. They also answer positively the open issue in the original LNS paper, which advocated the use of LNS for the PDPTW, and they argue for the robustness of LNS with respect to side constraints.

Vandaele, N.: Reverse logistics network design with stochastic lead times. Computers & Operations Research 34(2), 395-416

February 2007 · 529 Reads

This work is concerned with the efficient design of a reverse logistics network using an extended version of models currently found in the literature. Those traditional, basic models are formulated as mixed integer linear programs (MILP models) and determine which facilities to open so as to minimize the investment, processing, transportation, disposal and penalty costs while supply, demand and capacity constraints are satisfied. However, we show that they can be improved when combined with a queueing model, because this makes it possible to account for (1) dynamic aspects such as lead times and inventory positions, and (2) the higher degree of uncertainty inherent to reverse logistics. Since this extension introduces nonlinear relationships, the problem is defined as a mixed integer nonlinear program (MINLP model). Due to this additional complexity, the MINLP model is presented for a single-product, single-level network. Several examples are solved with a genetic algorithm based on the technique of differential evolution.

A new approach to the learning effect: beyond the learning curve restrictions. Computers and Operations Research, 35, 3727-3736

November 2008 · 51 Reads

In this paper, we bring into the scheduling field a new model of the learning effect that generalizes the existing approach in two ways. First, we relax one of the rigorous constraints, so that in our model each job can provide a different amount of experience to the processor. Second, we formulate the job processing time as a non-increasing k-stepwise function that, in general, is not restricted to a certain learning curve and can therefore accurately fit every possible shape of a learning function. Furthermore, we prove that the problem of makespan minimization with the considered model is polynomially solvable if every job provides the same experience to the processor, and that it becomes NP-hard if the experiences are diversified. The most essential result is a pseudopolynomial time algorithm that optimally solves the makespan minimization problem for any experience-based learning model reduced to the form of a k-stepwise function.
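
A k-stepwise processing-time function is simple to picture in code: a job's time drops to a new plateau each time the processor's accumulated experience crosses a breakpoint, and different jobs may contribute different amounts of experience. The breakpoints, step factors, and job data below are illustrative, not taken from the paper.

```python
import bisect

def stepwise_time(base_time, experience, breakpoints, factors):
    """Non-increasing k-stepwise learning function: processing time is
    base_time * factors[j], where j is the number of experience
    breakpoints already crossed. factors must be non-increasing."""
    j = bisect.bisect_right(breakpoints, experience)
    return base_time * factors[j]

# Illustrative 3-step curve: full time, then 80%, then 65% of base time.
breakpoints, factors = [10.0, 25.0], [1.0, 0.8, 0.65]
experience = 0.0
for base, gain in [(5, 4), (5, 8), (5, 9), (5, 6)]:   # jobs differ in experience
    print(stepwise_time(base, experience, breakpoints, factors))
    experience += gain                                 # processor learns
```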

Sequential linear goal programming: Implementation via MPSX/370E

December 1991 · 18 Reads

This note shows how one can solve linear goal programming problems using a preemptive priority structure in a very efficient manner. In particular, the entire sequence of computational steps needed can easily be incorporated in MPSX/370E's control program. The procedure does not require the user to interface a FORTRAN program to the user's MPSX/370 program as previous studies have suggested.

A genetic algorithm-based heuristic for the dynamic integrated forward/reverse logistics network for 3PLs

February 2007 · 515 Reads

Today's competitive business environment has resulted in increasing cooperation among individual companies as members of a supply chain. Accordingly, third party logistics providers (3PLs) must operate supply chains for a number of different clients who want to improve their logistics operations for both forward and reverse flows. As a result of the dynamic environment in which these supply chains must operate, 3PLs must make a sequence of inter-related decisions over time. However, in the past, the design of distribution networks has been conducted independently for forward and reverse flows. Thus, this paper presents a mixed integer nonlinear programming model for the design of a dynamic integrated distribution network that optimizes the forward and return networks simultaneously. Since such network design problems belong to a class of NP-hard problems, a genetic algorithm-based heuristic is presented, with associated numerical results, and tested on a set of problems against an exact algorithm. Finally, the resulting network plan can help determine various resource plans, such as capacities for material handling equipment and human resources.

Genetic programming for anticancer therapeutic response prediction using the NCI-60 dataset

August 2010 · 123 Reads

Statistical methods, and in particular machine learning, have been increasingly used in the drug development workflow. Among the existing machine learning methods, we have been specifically concerned with genetic programming. We present a genetic programming-based framework for predicting anticancer therapeutic response. We use the NCI-60 microarray dataset and we look for a relationship between gene expressions and responses to oncology drugs Fluorouracil, Fludarabine, Floxuridine and Cytarabine. We aim at identifying, from genomic measurements of biopsies, the likelihood to develop drug resistance. Experimental results, and their comparison with the ones obtained by Linear Regression and Least Square Regression, hint that genetic programming is a promising technique for this kind of application. Moreover, genetic programming output may potentially highlight some relations between genes which could support the identification of biological meaningful pathways. The structures that appear more frequently in the “best” solutions found by genetic programming are presented.

A case-based model for multi-criteria ABC analysis

March 2008 · 453 Reads

In ABC analysis, a well-known inventory planning and control technique, stock-keeping units (SKUs) are sorted into three categories. Traditionally, the sorting is based solely on annual dollar usage. The aim of this paper is to introduce a case-based multiple-criteria ABC analysis that improves on this approach by accounting for additional criteria, such as lead time and criticality of SKUs, thereby providing more managerial flexibility. Using decisions from cases as input, preferences over alternatives are represented intuitively using weighted Euclidean distances which can be easily understood by a decision maker. Then a quadratic optimization program finds optimal classification thresholds. This system of multiple criteria decision aid is demonstrated using an illustrative case study.
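
The classification step can be illustrated concisely: score each SKU by a weighted Euclidean distance and cut the scores at thresholds. In the sketch below, the ideal point, criterion weights, and thresholds are placeholders for the values that the case-based quadratic program would produce.

```python
import math

def abc_class(sku, ideal, weights, thresholds):
    """Classify an SKU by weighted Euclidean distance to a hypothetical
    ideal point; smaller distance means more important. The weights and
    thresholds stand in for values fitted from past decisions."""
    d = math.sqrt(sum(w * (x - i) ** 2
                      for x, i, w in zip(sku, ideal, weights)))
    t_ab, t_bc = thresholds
    return "A" if d <= t_ab else ("B" if d <= t_bc else "C")

# Criteria: (annual dollar usage, lead time, criticality), scaled to [0, 1].
ideal = (1.0, 1.0, 1.0)
weights = (0.5, 0.3, 0.2)                   # illustrative criterion weights
print(abc_class((0.9, 0.7, 1.0), ideal, weights, thresholds=(0.35, 0.7)))
```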

A branch and bound procedure to minimize mean absolute lateness on a single processor

February 1996 · 18 Reads

This paper presents a solution procedure to minimize mean absolute lateness in the single machine scheduling problem. A branch-and-bound methodology is utilized in conjunction with a one-pass linear program to find an optimal solution. In order to fathom branches, several theorems are developed to establish dominance between adjacent job pairs and, in some cases, between three adjacent jobs. In addition, several theorems are presented to establish lower bounds on the solution to further limit the enumeration. A simple decomposition procedure is shown to reduce problem size in many cases. Solution results indicate that problem characteristics, in addition to size, have a major impact on solution requirements. As the tightness of due dates increases, a corresponding increase in solution requirements is evident. For example, for problems of size N = 25, the solution requirement when tightness was 0.1 averaged 0.035 CPU seconds, while 3234.0 CPU seconds were required to solve problems with a tightness of 0.9. Conversely, as the due date coefficient of variation (CV) increases, solution requirements lessen. For problems of size N = 20 and a CV of 0.2, the CPU time required to find an optimal solution was 16.8 seconds, compared to 0.01 seconds for a CV of 0.6. Solution times are reported for certain problems up to N = 30.

Scheduling in a two-machine flowshop for the minimization of the mean absolute deviation from a common due date

January 2009 · 45 Reads

This paper addresses the minimization of the mean absolute deviation from a common due date in a two-machine flowshop scheduling problem. We present heuristics that use an algorithm, based on proposed properties, which obtains an optimal schedule for a given job sequence. A new set of benchmark problems is presented with the purpose of evaluating the heuristics. Computational experiments show that the developed heuristics outperform results found in the literature for problems up to 500 jobs.

Single machine scheduling to minimize mean absolute lateness: A heuristic solution

January 1990 · 39 Reads

This paper presents a heuristic solution procedure based on the well-known methodology of adjacent pairwise interchange (API) to minimize mean absolute lateness (MAL) on a single machine. MAL is a nonregular measure of performance, and schedules with inserted machine idle time may contain the global optimal solution. The heuristic solution is compared to the optimal solution for 192 randomly generated problems to investigate the effects of problem size, due date coefficient of variation, and due date tightness on the quality of the heuristic. Results indicate that none of the treatments tested significantly affected the heuristic solution. The heuristic solution was found to average about 2.49% greater than the optimal. Also, the heuristic found the optimal for 122 of the 192 randomly generated test problems.
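
The API mechanism itself is easy to sketch. The version below evaluates MAL without inserted idle time, although, as the abstract notes, the global optimum for this nonregular measure may require idle time, so treat it as a simplified illustration on made-up data.

```python
def mal(seq, proc, due):
    """Mean absolute lateness of a sequence, with no inserted idle time."""
    t = total = 0.0
    for j in seq:
        t += proc[j]
        total += abs(t - due[j])
    return total / len(seq)

def api(seq, proc, due):
    """Adjacent pairwise interchange: keep swapping neighbours while a
    swap reduces MAL (local search; not guaranteed globally optimal)."""
    seq, improved = list(seq), True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if mal(cand, proc, due) < mal(seq, proc, due):
                seq, improved = cand, True
    return seq

proc = {1: 3, 2: 5, 3: 2, 4: 4}             # made-up processing times
due = {1: 4, 2: 6, 3: 10, 4: 12}            # made-up due dates
print(api([1, 2, 3, 4], proc, due))
```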

Academic Departments Efficiency via DEA

May 1994 · 156 Reads

This paper presents a case study in which academic departments at Ben-Gurion University were evaluated via Data Envelopment Analysis using the CCR model. Extensive post-analyses were performed in several directions. First, various sets of data were used to identify efficient and inefficient departments. New efficiency measures are suggested in relation to the reference set included in the analyses of academic departments. We measured the efficiency of departments relative to other departments within the same school. We applied cluster analysis to divide the departments into several sets, and discriminant analysis to test the match of the efficiency/inefficiency division of the CCR ratio. We further tested organizational changes in which an inefficient department was closed and merged into other departments. Finally, we compared the CCR model to the pure economic approach: the cost-per-student ratio.

Accelerating column generation for aircraft scheduling using constraint propagation

October 2006 · 83 Reads

We discuss how constraint programming can improve the performance of a column generation solution process for the NP-hard Tail Assignment problem in aircraft scheduling. Combining a constraint model of a relaxed Tail Assignment problem with column generation, we achieve substantially improved performance. A generalized preprocessing technique based on constraint propagation is presented that can dramatically reduce the size of the flight network. We also present a heuristic preprocessing method based on the costs of connections, and show how constraint propagation can be used to improve fixing heuristics. Proof of concept is provided using real world Tail Assignment instances.

Decomposition schemes and acceleration techniques in application to production–assembly–distribution system design

December 2008 · 21 Reads

The purpose of this paper is to study several schemes for applying Dantzig–Wolfe decomposition (DWD) to the production–assembly–distribution system design problem (PADS). Each scheme exploits selected embedded structures. The research objective is to enhance the rate of DWD convergence in application to PADS through formulating a rationale for decomposition by analyzing potential schemes, adopting acceleration techniques, and assessing the impacts of schemes and techniques computationally. Test results provide insights that may be relevant to other applications of DWD.

Order acceptance with weighted tardiness

October 2007 · 129 Reads

Over the past decade the strategic importance of order acceptance has been widely recognized in practice as well as academic research. This paper examines order acceptance decisions when capacity is limited, customers receive a discount for late delivery, but early delivery is neither penalized nor rewarded. We model a manufacturing facility that considers a pool of orders, and chooses for processing the subset that results in the highest profit. We present several solution methods, beginning with a straightforward application of an approach which separates sequencing and job acceptance. We then develop an optimal branch-and-bound procedure that uses a linear (integer) relaxation for bounding and performs the sequencing and job acceptance decisions jointly. We develop a variety of fast and high-quality heuristics based on this approach. For small problems, beam search runs almost 20 times faster than the benchmark, with a high degree of accuracy, and a branch-and-bound heuristic using Vogel's method for bounding is over 100 times faster with very high accuracy. For larger problems, a myopic heuristic based on the relaxation runs 2000 times faster than the beam-search benchmark, with comparable accuracy.

Design of economically optimal acceptance sampling plans with inspection error

September 2002 · 160 Reads

Models to determine economically optimal quality control systems have appeared in the literature for a number of years and have involved a variety of approaches. This approach is unique for the following reasons: (1) a continuous loss function is used to quantify deviations between a quality characteristic and its target level; (2) inspector error is modeled explicitly, as is the ability to influence this error with resources; (3) the models are used to construct graphs that allow practitioners to design near-optimal inspection plans with a minimal understanding of the model details.

Scope and purpose: Designing economically optimal acceptance sampling plans has not been widely addressed even though sampling remains a commonly used technique in certain quality engineering systems. In this research, we develop mathematical models that can be used to design both 100% inspection and single sampling plans. Since many actual implementations involve human inspectors, an important consideration that is frequently ignored is inspection error. In this research, inspection error is explicitly included in the model, as is the ability to mitigate its consequences by expending resources. For the 100% inspection case, the design parameters are the inspection tolerance and the resources expended to reduce inspection error. For single sampling, the appropriate model can be used to determine the optimal inspection tolerance and resource expenditure given a sampling plan, or it can be solved for the sample size and inspection number given a prescribed inspection tolerance and resource expenditure. The paper illustrates all of these uses through simple examples that can easily be modified to specific situations.

Order acceptance using genetic algorithms

June 2009 · 78 Reads

This paper uses a genetic algorithm to solve the order-acceptance problem with tardiness penalties. We compare the performance of a myopic heuristic and a genetic algorithm, both of which do job acceptance and sequencing, using an upper bound based on an assignment relaxation. We conduct a pilot study, in which we determine the best settings for diversity operators (clone removal, mutation, immigration, population size) in connection with different types of local search. Using a probabilistic local search provides results that are almost as good as exhaustive local search, with much shorter processing times. Our main computational study shows that the genetic algorithm always dominates the myopic heuristic in terms of objective function, at the cost of increased processing time. We expect that our results will provide insights for the future application of genetic algorithms to scheduling problems.

Scope and purpose: The importance of the order-acceptance decision has gained increasing attention over the past decade. This decision is complicated by the trade-off between the benefits of the revenue associated with an order, on one hand, and the costs of capacity, as well as potential tardiness penalties, on the other. In this paper, we use a genetic algorithm to solve the problem of which orders to choose to maximize profit, when there is limited capacity and an order delivered after its due date incurs a tardiness penalty. The genetic algorithm improves upon the performance of previous methods for large problems.

A hub location problem with fully interconnected backbone and access networks

August 2007 · 72 Reads

This paper considers the design of two-layered fully interconnected networks. A two-layered network consists of clusters of nodes, each defining an access network, and a backbone network. We consider the integrated problem of determining the access networks and the backbone network simultaneously. A mathematical formulation is presented, but as its linear programming relaxation is weak, a formulation based on the set partitioning model and a column generation approach is also developed. The column generation subproblems are solved by solving a series of quadratic knapsack problems. The column generation approach yields superior bounds compared with the linear programming relaxation and is therefore developed into an exact approach using the branch-and-price framework. With this approach we are able to solve problems consisting of up to 25 nodes in reasonable time. Given the difficulty of the problem, the results are encouraging.

Designing radio-mobile access networks based on synchronous digital hierarchy rings

February 2005 · 47 Reads

In this paper, we address the SDH network design problem (SDHNDP), which arises while designing the fixed part of global system for mobile communications (GSM) access networks using synchronous digital hierarchy (SDH) rings. An SDH ring is a simple cycle that physically links a subset of antennae to a single concentrator. Inside a ring, a concentrator handles the total traffic induced by the antennae. Technological considerations limit the number of antennae and the total length of a ring. The SDHNDP is a new problem. It belongs to a class of location-routing problems that introduce location into the multi-depot vehicle routing problem. In this paper, we precisely describe the SDHNDP and propose a mixed integer programming-based model for it. Furthermore, we devise a heuristic algorithm that computes a feasible solution. We report the results of our computational experiments using the CPLEX software, on instances comprising up to 70 antennae or six concentrator sites. An analysis provides insight into the behavior of the lower bound obtained by the LP relaxation of the model, in response to the network density. This lower bound can be improved by adding some valid inequalities. We show that an interesting cut can be obtained by approximating the minimum number of rings in any feasible solution. This can be achieved by solving a "minimum capacitated partition problem". Finally, we compare the lower bound to the heuristic solution value for a set of instances.

An efficient algorithm for a capacitated subtree of a tree problem in local access telecommunication networks

August 1997 · 9 Reads

Given a rooted tree T with node profits and node demands, the capacitated subtree of a tree problem (CSTP) consists of finding a rooted subtree of maximum profit, subject to having total demand no larger than the given capacity H. We first define the so-called critical item for CSTP and find upper bounds on the optimal value of CSTP in O(n²) time, where n is the number of nodes in T. We then present our branch-and-bound algorithm for solving CSTP and illustrate the algorithm by using an example. Finally, we implement our branch-and-bound algorithm by using one of the developed upper bounds and compare the computational results with those given by the branch-and-bound version of CPLEX and by a dynamic programming algorithm for CSTP whose complexity is O(nH). The comparison shows that our branch-and-bound algorithm performs much better than both CPLEX and the dynamic programming algorithm, especially when n and H are large, for example, in the range of [50, 500] and [5000, 10,000], respectively.

Figures: overview of the M/M/m/m node model; overview of the modeling process; impact of call limits (1100 modems); impact of the hybrid solution on blocking.
Modeling dialup Internet access: an examination of user-to-modem ratios, blocking probability, and capacity planning in a modem pool

November 2003 · 222 Reads

In the near future, dialup connections will remain as one of the most popular methods of remote access to the Internet as well as to enterprise networks. The dimensioning of modem pools to support this type of access is of particular importance to commercial Internet service providers as well as to universities that maintain their own modem pools to support access by faculty, staff, and students. The primary contribution of this paper is to analyze how network administrators may decrease the probability of blocking for access to the pool by imposing session limits restricting the maximum duration of the online session. Session limits may provide a viable alternative to improving network performance without necessarily adding capacity to the pool. Blocking probability is examined under a number of different scenarios to assess potential improvements in the quality of service by imposing session limitations during peak-period operations.
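
The M/M/m/m loss model behind such an analysis has a closed-form blocking probability, the Erlang B formula, which is usually computed with a numerically stable recursion. The traffic figures in the example are invented, not taken from the study.

```python
def erlang_b(m, offered_load):
    """Erlang B blocking probability of an M/M/m/m loss system via the
    stable recursion B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, m + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Invented sizing: 1100 modems, 1000 calls/hour, 1-hour mean sessions,
# so the offered load is a = 1000 * 1.0 = 1000 erlangs.
print(erlang_b(1100, 1000.0))               # probability a dial-in is blocked
```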

Analyzing tradeoffs between zonal constraints and accessibility in facility location

January 1994 · 11 Reads

One recent extension of the p-median problem (PMP) is the zonally constrained median problem. This model recognizes that site selection is often influenced by the desire to distribute the impacts or benefits of facilities equitably by locating them among multiple regions, districts, or zones. Zonal constraints can be used in one form to ensure a minimum number of facilities in any zone and in another form to prevent too many facilities in any zone. However, a planner's desire to meet zonal constraints can conflict with the desire to maximize system-wide public accessibility (minimize total distance traveled). Non-inferior compromise solutions which partially enforce zonal constraints could be most helpful to decision-makers, especially in a sensitive political climate. This paper presents a constrained multiobjective model (denoted the extended zonally constrained median problem, or EZCOMP) which can identify both supported and unsupported non-dominated solutions (the latter would be missed using the weighting approach to multiple objectives). A special Lagrangian relaxation is exploited in the proposed solution methodology. This is a first attempt at using a Lagrangian-based approach to identify unsupported non-dominated solutions to a location model. Results on two data sets with different types of zones show the Lagrangian approach to be efficient compared to linear-integer programming and a vertex substitution heuristic, even in the solution of problems of over 23,000 variables and 23,000 constraints.

Inventory systems for deteriorating items with shortages and a linear trend in demand - taking account of time value

August 2001 · 134 Reads

This paper derives an inventory model for deteriorating items with a linear trend in demand and shortages during a finite planning horizon, considering the time value of money. A simple solution algorithm using a line search is presented to determine the optimal interval that has positive inventories. Numerical examples are given to explain the solution algorithm. Sensitivity analysis is performed to study the effect of changes in the system parameters.

Scope and purpose: The traditional inventory model considers the ideal case in which depletion of inventory is caused by a constant demand rate. However, in real-life situations there is inventory loss due to deterioration. In a realistic product life cycle, demand increases with time and eventually reaches zero. Most of the classical inventory models did not take into account the effects of inflation and the time value of money. However, the economic situation of most countries has changed to such an extent, due to large-scale inflation and the consequent sharp decline in the purchasing power of money, that these effects can no longer be ignored. The purpose of this article is to present a solution procedure for the inventory problem of deteriorating items with shortages and a linear trend in demand, taking account of the time value of money.

Planning working time accounts under demand uncertainty

February 2011 · 30 Reads

Working time accounts (WTAs) are employer-oriented flexibility systems that have been applied in industry but could be used far more. WTAs enable capacity to be adapted to fluctuations in demand. The required capacity, which is needed to plan WTAs, usually depends on several factors. It is often impossible to reliably predict the required capacity or unrealistic to adjust it to a probability distribution. In some cases, a set of required-capacity scenarios can be determined, each with a related probability. This paper presents a multistage stochastic optimisation model that is robust (i.e., provides a solution that is feasible for any possible scenario) and minimises the expected total cost (which includes the cost of overtime and the cost of the capacity shortage).

An iterative mixed integer programming method for classification accuracy maximizing discriminant analysis

February 2003 · 82 Reads

Linear discriminant functions which maximize the number of correctly classified observations in a training sample can be generated by a mixed integer programming (MIP) discriminant analysis model in which a binary variable is associated with each observation, but because of the computational requirements this model can only be applied to relatively small problems. In this paper, an iterative MIP method is developed to allow classification accuracy maximizing discriminant functions to be generated for problems with many more observations than can be considered by the standard MIP formulation. Using minimization of the sum of deviations as the objective, a mathematical programming discriminant analysis model is first used to generate a discriminant function for the complete set of observations. A neighborhood of observations about this function is then defined and a MIP model is used to generate a discriminant function that maximizes classification accuracy within this neighborhood. The process of defining a neighborhood about the most recently generated discriminant function and solving a neighborhood MIP model is repeated until there is no improvement in the total number of observations classified correctly. This new iterative MIP method is applied to a two-group problem involving 690 observations.

Analysis of population size in the accuracy and performance of genetic training for rule-based control systems

January 1995 · 4 Reads

In off-line training of a rule-based controller, the significant measure of successful training is the quality of control provided by the generated rule set. In an adaptive or on-line control environment, performance is also measured by the ability to accurately maintain a satisfactory rule set, but within constraints on speed and/or resource availability. Very small population genetic algorithms, or microGAs, have been proposed as a means of capitalizing on the hill-climbing characteristics of faster local optimization techniques while requiring less memory and retaining much of the robustness of traditional, larger-population genetic search. A traditional genetic algorithm and a similar microGA are developed and applied to two control problems. The performance of these algorithms is analyzed with respect to (1) the quality of the rules learned, (2) the rate at which learning occurs, and (3) the memory resources required during learning.
