
We introduce two new combinatorial optimization problems, which are generalizations of the Traveling Salesman Problem (TSP) and the Assignment Problem (AP) and which we call Traveling Salesman Problem of Second Order (TSP2) and Assignment Problem of Second Order (AP2). TSP2 is motivated by an important application in bioinformatics, especially the Permuted Variable Length Markov model. While TSP2 is trivially NP-hard, we show the NP-hardness of AP2 by a reduction from SAT. We propose seven elementary heuristics for the TSP2, some of which are generalizations of similar algorithms for the Traveling Salesman Problem and some of which are new ideas. Furthermore, we give four exact algorithms for the TSP2, namely a Branch-and-Bound (BnB) algorithm, an Integer Programming (IP) algorithm, a Branch-and-Cut (BnC) algorithm, and an algorithm based on a polynomial reduction to the original TSP (TSP-R). Finally, we experimentally compare the algorithms on many different random instances and real instances from the already mentioned application in bioinformatics. Our experiments show that for real instances most heuristics lead to optimal or almost-optimal solutions. For both random and real classes, our most sophisticated exact approach BnC is the leading algorithm. In particular, the BnC algorithm is able to solve real instances up to size 80 in reasonable time, proving the applicability of this approach.
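As a toy illustration of the second-order cost structure described above, a tour's cost can be evaluated over triples of successively visited nodes. The sketch below is illustrative only: the nested cost map `q` and the tiny brute-force solver are assumptions for exposition, not the paper's BnB, IP, BnC, or TSP-R algorithms.

```python
from itertools import permutations

def tsp2_tour_cost(tour, q):
    """Cost of a closed tour under second-order costs: q[i][j][k] is the
    (hypothetical) cost of visiting nodes i, j, k in succession."""
    n = len(tour)
    return sum(q[tour[i]][tour[(i + 1) % n]][tour[(i + 2) % n]]
               for i in range(n))

def tsp2_brute_force(n, q):
    """Exact solution by enumeration, viable only for tiny n; fix node 0 as
    the start to avoid counting rotations of the same tour."""
    best = min(permutations(range(1, n)),
               key=lambda p: tsp2_tour_cost((0,) + p, q))
    return (0,) + best, tsp2_tour_cost((0,) + best, q)
```

Since every node appears in exactly three consecutive triples of a closed tour, a cost like `q[i][j][k] = i + j + k` makes all tours equally expensive, which is a handy sanity check for the evaluator.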


... The QCCP is introduced by Jäger and Molitor [42]. Fischer et al. [25] show that the problem is NP-hard. This result is later strengthened by De Meijer and Sotirov [43], who prove that the QCCP is strongly NP-hard and not approximable within any constant factor. ...

... For the existence of a directed 2-factor in a graph, see e.g., Chiba and Yamashita [17]. The QCCP is shown to be NP-hard in the strong sense and not approximable within any constant factor [25,43]. The linear problem corresponding to the QCCP is called the cycle cover problem (CCP). ...

We study the Quadratic Cycle Cover Problem (QCCP), which aims to find a node-disjoint cycle cover in a directed graph with minimum interaction cost between successive arcs. We derive several semidefinite programming (SDP) relaxations and use facial reduction to make these strictly feasible. We investigate a nontrivial relationship between the transformation matrix used in the reduction and the structure of the graph, which is exploited in an efficient algorithm that constructs this matrix for any instance of the problem. To solve our relaxations, we propose an algorithm that incorporates an augmented Lagrangian method into a cutting plane framework by utilizing Dykstra's projection algorithm. Our algorithm is suitable for solving SDP relaxations with a large number of cutting planes. Computational results show that our SDP bounds and our efficient cutting plane algorithm outperform other QCCP bounding approaches from the literature. Finally, we provide several SDP-based upper bounding techniques, among which a sequential Q-learning method that exploits a solution of our SDP relaxation within a reinforcement learning environment.

... Since we assume that all cycle covers in this paper are disjoint, we use the term cycle cover to denote this concept throughout this work. The QCCP is proven to be NP-hard (Fischer et al. 2009). The corresponding linear problem is called the cycle cover problem (CCP), in which one wants to find a minimum cycle cover with respect to linear arc costs. ...

... For the existence of such a directed 2-factor in a directed graph, see e.g., Chiba and Yamashita (2018). The quadratic cycle cover problem is NP-hard (Fischer et al. 2009). Also, the related problems Angle-CCP and the MinRC3 problem are shown to be NP-hard (Aggarwal et al. 1999) and strongly NP-hard (Galbiati et al. 2014), respectively. ...

The quadratic cycle cover problem is the problem of finding a set of node-disjoint cycles visiting all the nodes such that the total sum of interaction costs between consecutive arcs is minimized. In this paper we study the linearization problem for the quadratic cycle cover problem and related lower bounds. In particular, we derive various sufficient conditions for the quadratic cost matrix to be linearizable, and use these conditions to compute bounds. We also show how to use a sufficient condition for linearizability within an iterative bounding procedure. In each step, our algorithm computes the best equivalent representation of the quadratic cost matrix and its optimal linearizable matrix with respect to the given sufficient condition for linearizability. Further, we show that the classical Gilmore–Lawler type bound belongs to the family of linearization based bounds, and therefore apply the above mentioned iterative reformulation technique. We also prove that the linearization vectors resulting from this iterative approach satisfy the constant value property. The best among here introduced bounds outperform existing lower bounds when taking both quality and efficiency into account.

... The QCCP is introduced by Jäger and Molitor (2008). Fischer et al. (2009) show that the problem is NP-hard. This result is later strengthened by De Meijer and Sotirov (2020), who prove that the QCCP is strongly NP-hard and not approximable within any constant factor. ...

Summary of Contribution: The quadratic cycle cover problem (QCCP) is the problem of finding a set of node-disjoint cycles covering all the nodes in a graph such that the total interaction cost between successive arcs is minimized. The QCCP has applications in many fields, among which are robotics, transportation, energy distribution networks, and automatic inspection. Besides this, the problem has a high theoretical relevance because of its close connection to the quadratic traveling salesman problem (QTSP). The QTSP has several applications, for example, in bioinformatics, and is considered to be among the most difficult combinatorial optimization problems nowadays. After removing the subtour elimination constraints, the QTSP boils down to the QCCP. Hence, an in-depth study of the QCCP also contributes to the construction of strong bounds for the QTSP. In this paper, we study the application of semidefinite programming (SDP) to obtain strong bounds for the QCCP. Our strongest SDP relaxation is very hard to solve by any SDP solver because of the large number of involved cutting-planes. Because of that, we propose a new approach in which an augmented Lagrangian method is incorporated into a cutting-plane framework by utilizing Dykstra’s projection algorithm. We emphasize an efficient implementation of the method and perform an extensive computational study. This study shows that our method is able to handle a large number of cuts and that the resulting bounds are currently the best QCCP bounds in the literature. We also introduce several upper bounding techniques, among which is a distributed reinforcement learning algorithm that exploits our SDP relaxations.
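The QCCP objective, the total interaction cost between successive arcs of a node-disjoint cycle cover, can be made concrete with a short sketch. The cost map `w` over ordered pairs of successive arcs is a hypothetical flat representation of the quadratic cost matrix used in the papers above.

```python
def qccp_cost(cycles, w):
    """Total interaction cost of a node-disjoint cycle cover.
    cycles: list of cycles, each a list of node labels.
    w: dict mapping (arc, next_arc) pairs to their interaction cost
    (assumed input; a stand-in for the quadratic cost matrix)."""
    total = 0
    for cyc in cycles:
        n = len(cyc)
        for i in range(n):
            arc = (cyc[i], cyc[(i + 1) % n])          # current arc
            nxt = (cyc[(i + 1) % n], cyc[(i + 2) % n])  # successor arc
            total += w[(arc, nxt)]
    return total
```

Dropping the subtour elimination constraints of the QTSP leaves exactly this objective over arbitrary cycle covers, which is why bounds for this evaluation carry over to the QTSP.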

Nowadays more and more scientific projects use volunteer computing to complete complex tasks. Volunteer computing is a method of obtaining more computation power by partitioning an application into several small tasks and distributing them among volunteer clients in parallel. This method yields faster computation at a relatively low budget, since it is based on volunteering participants. This paper discusses how volunteer computing can be used to find shortest paths in the Jakarta landscape. The project is designed on the BOINC platform and can therefore be implemented directly on BOINC middleware with small changes.

Lagrangian relaxation is a successful solution approach for many combinatorial optimisation problems, one of them being the train timetabling problem (TTP). We model this problem using time expanded networks for the single train schedules and coupling constraints to enforce restrictions like station capacities and headway times. Lagrangian relaxation of these coupling constraints leads to shortest path subproblems in the time expanded networks and is solved using a proximal bundle method. However, large instances of our practical partner Deutsche Bahn lead to computationally intractable models. In this thesis we develop two new algorithmic techniques to improve the solution process for this kind of optimisation problems.
The first new technique, Dynamic Graph Generation (DGG), aims at improving the computation of the shortest path subproblems in large time expanded networks. Without sacrificing any accuracy, DGG allows us to store only small parts of the networks and to dynamically extend them whenever the stored part proves to be too small. This is possible by exploiting the property of the objective function in many scheduling applications to prefer early paths or due times, respectively. We prove that DGG can be implemented very efficiently, and that its running time and the number of nodes that have to be stored additionally do not depend on the size of the time expanded network but only on the length of the train routes.
The second technique is an asynchronous and parallel bundle method (APBM). Traditional bundle methods require one solution of each subproblem in each iteration. However, many practical applications, e.g. the TTP, consist of rather loosely coupled subproblems. The APBM chooses only small subspaces corresponding to the Lagrange multipliers of strongly violated coupling constraints and optimises only these variables while keeping all other variables fixed. Several subspaces of disjoint variables may be chosen simultaneously and are optimised in parallel. The solutions of the subspace problems are incorporated into the global data as soon as they are available, without any synchronisation mechanism. However, in order to guarantee convergence, the algorithm automatically detects dependencies between different subspaces and respects these dependencies in future subspace selections. We prove the convergence of the APBM under reasonable assumptions for both the dual and the associated primal aggregate data. The APBM is then further extended to problems with unknown dependencies between subproblems and constraints in the Lagrangian relaxation problem. The algorithm automatically detects these dependencies and respects them in future iterations. Again we prove the convergence of this algorithm under reasonable assumptions.
Finally, we test our solution approach for the TTP on some real-world instances of Deutsche Bahn. Using an iterative rounding heuristic based on the approximate fractional solutions obtained by the Lagrangian relaxation, we are able to compute feasible schedules for all trains in a subnetwork of about 10% of the whole German network in about 12 hours. In these timetables, 99% of all passenger trains could be scheduled with no significant delay, and the travel time of the freight trains could be reduced by about one hour on average.
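The core Lagrangian idea above, relaxing the coupling constraints so that each train's subproblem can be solved independently, can be sketched in miniature. This is plain subgradient ascent on a single shared-resource constraint with two toy subproblems, not the proximal bundle method (let alone the APBM) used in the thesis; all names and data are illustrative.

```python
def solve_subproblem(costs, uses, lam):
    """Each subproblem independently picks its cheapest option, with the
    multiplier lam pricing use of the shared resource."""
    vals = [c + lam * u for c, u in zip(costs, uses)]
    i = vals.index(min(vals))
    return i, vals[i], uses[i]

def lagrangian_dual(costs_a, uses_a, costs_b, uses_b, steps=50, step=0.5):
    """Subgradient ascent on the multiplier of the coupling constraint
    use_a + use_b <= 1 (a toy stand-in for station-capacity coupling)."""
    lam, best = 0.0, float('-inf')
    for _ in range(steps):
        _, va, ua = solve_subproblem(costs_a, uses_a, lam)
        _, vb, ub = solve_subproblem(costs_b, uses_b, lam)
        best = max(best, va + vb - lam)              # dual value L(lam)
        lam = max(0.0, lam + step * (ua + ub - 1))   # subgradient step
    return best, lam
```

With each "train" choosing between a cheap path through the shared resource and a dearer free path, the dual bound climbs until the multiplier is high enough that only one subproblem still claims the resource.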

In the quadratic traveling salesman problem a cost is associated with any three nodes traversed in succession. This structure arises, e.g., if the succession of two edges represents energetic conformations, a change of direction or a possible change of transportation means. In the symmetric case, costs do not depend on the direction of traversal. We study the polyhedral structure of a linearized integer programming formulation of the symmetric quadratic traveling salesman problem. Our constructive approach for establishing the dimension of the underlying polyhedron is rather involved but offers a generic path towards proving facetness of several classes of valid inequalities. We establish relations to facets of the Boolean quadric polytope, exhibit new classes of polynomial time separable facet defining inequalities that exclude conflicting configurations of edges, and provide a generic strengthening approach for lifting valid inequalities of the usual traveling salesman problem to stronger valid inequalities for the symmetric quadratic traveling salesman problem. Applying this strengthening to subtour elimination constraints gives rise to facet defining inequalities, but finding a maximally violated inequality among these is NP-complete. For the simplest comb inequality with three teeth the strengthening is no longer sufficient to obtain a facet. Preliminary computational results indicate that the new cutting planes may help to considerably improve the quality of the root relaxation in some important applications.

In this paper we deal with sensitivity analysis of combinatorial optimization problems and its fundamental term, the tolerance. For three classes of objective functions (Σ, Π, MAX) we give some basic properties on upper and lower tolerances. We show that the upper tolerance of an element is well defined, how to compute the upper tolerance of an element, and give equivalent formulations when the upper tolerance is +∞ or > 0. Analogous results are given for the lower tolerance, and some results on the relationship between lower and upper tolerances are given.

This thesis presents an integrated approach combining constraint programming (CP) and mixed integer programming (MIP), which we call constraint integer programming (CIP). Modelling as well as solving techniques from both fields flow into the new integrated approach in order to combine the complementary strengths of the two areas. As a further contribution, we make the software SCIP available to the scientific community; it is a framework for constraint integer programming that additionally incorporates SAT-solving techniques. SCIP is freely available in source code for academic and non-commercial purposes. Our constraint integer programming approach is a generalization of MIP that additionally allows arbitrary constraints, as long as they can be expressed by linear conditions once all integer variables are fixed to specific values. The constraints are handled by an arbitrary combination of CP and MIP techniques. This includes, in particular, domain propagation, the relaxation of the constraints by linear inequalities, and the strengthening of the relaxation by dynamically generated cutting planes. The current version of SCIP contains all components required for the efficient solving of mixed integer programs. This thesis provides a detailed description of these components and evaluates different variants with respect to their influence on the overall solving behaviour in extensive practical experiments, with particular attention to the algorithmic aspects. A further major part of the thesis addresses chip design verification, an important topic within the field of electronic design automation. Chip manufacturers have to ensure that the logical design of a circuit conforms to the given specification.
Otherwise the chip would exhibit erroneous behaviour that may lead to malfunctions within the device in which it is used. An important subproblem in this field is the property verification problem, which checks whether the given circuit design has a desired property. We show how this problem can be modelled as a constraint integer program and present a number of problem-specific algorithms that exploit the structure of the individual constraints and of the overall circuit. Computational experiments on industrial instances compare our approach with the SAT techniques used so far and demonstrate the success of our method.

Most research on algorithms for combinatorial optimization uses the costs of the elements in the ground set for making decisions about the solutions that the algorithms output. For traveling salesman problems, this implies that algorithms generally use arc lengths to decide whether an arc is included in a partial solution or not. In this paper we study the effect of using element tolerances for making these decisions. We choose the traveling salesman problem as a model combinatorial optimization problem and propose several greedy algorithms for it based on tolerances. We report extensive computational experiments on benchmark instances that clearly demonstrate that our tolerance-based algorithms outperform their weight-based counterparts. This indicates that the potential for using tolerance-based algorithms for various optimization problems is high and motivates further investigation of the approach.

Non-Euclidean traveling salesman problem (TSP) construction heuristics, and especially asymmetric TSP construction heuristics, have been neglected in the literature by comparison with the extensive efforts devoted to studying Euclidean TSP construction heuristics. This state of affairs is at odds with the fact that asymmetric models are relevant to a wider range of applications, and indeed are uniformly more general than symmetric models. Moreover, common construction approaches for the Euclidean TSP have been shown to produce poor quality solutions for non-Euclidean instances. Motivation for remedying this gap in the study of construction approaches is increased by the fact that such methods are a great deal faster than other TSP heuristics, which can be important for real time problems requiring continuously updated response. The purpose of this paper is to describe two new construction heuristics for the asymmetric TSP and a third heuristic based on combining the other two. Extensive computational experiments are performed for several different families of TSP instances, disclosing that our combined heuristic clearly outperforms well-known TSP construction methods and proves significantly more robust in obtaining (relatively) high quality solutions over a wide range of problems.

A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. We show that a large number of classic unsolved problems of covering, matching, packing, routing, assignment and sequencing are equivalent, in the sense that either each of them possesses a polynomial-bounded algorithm or none of them does.

In this paper we improve the quality of a recently suggested class of construction heuristics for the Asymmetric Traveling Salesman Problem (ATSP), namely the Contract-or-Patch heuristic. Our improvement is based on replacing the selection of each path to be contracted after deleting a heaviest arc from each short cycle in an Optimal Assignment Problem Solution (OAPS) by contracting a single arc from a short cycle in an OAPS with the largest upper tolerance with respect to a relaxation of the ATSP. The improved algorithm produces higher-quality tours than all previous COP versions and clearly outperforms all other construction heuristics on robustness.

In this paper we deal with sensitivity analysis of combinatorial optimization problems and its fundamental term, the tolerance. For three classes of objective functions (Σ,Π, MAX) we give some basic properties on upper and lower tolerances. We show that the upper tolerance of an element is well defined, how to compute the upper tolerance of an element, and give equivalent formulations when the upper tolerance is + ∞ or > 0. Analogous results are given for the lower tolerance and some results on the relationship between lower and upper tolerances are given.

Several polynomial time algorithms finding "good," but not necessarily optimal, tours for the traveling salesman problem are considered. We measure the closeness of a tour by the ratio of the obtained tour length to the minimal tour length. For the nearest neighbor method, we show the ratio is bounded above by a logarithmic function of the number of nodes. We also provide a logarithmic lower bound on the worst case. A class of approximation methods we call insertion methods are studied, and these are also shown to have a logarithmic upper bound. For two specific insertion methods, which we call nearest insertion and cheapest insertion, the ratio is shown to have a constant upper bound of 2, and examples are provided that come arbitrarily close to this upper bound. It is also shown that for any n ≥ 8, there are traveling salesman problems with n nodes having tours which cannot be improved by making n/4 edge changes, but for which the ratio is 2(1 − 1/n).
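The nearest neighbor method analyzed above admits a compact sketch (illustrative; `dist` is an assumed symmetric distance matrix):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor construction: repeatedly move to the closest
    unvisited node. The resulting tour can be a logarithmic factor
    longer than optimal in the worst case."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[cur][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    """Length of the closed tour, including the edge back to the start."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
```

Note the characteristic failure mode: the last node visited may be far from everything, and the closing edge back to the start is not chosen greedily at all.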

Motivated by applications in robotics, we formulate the problem of minimizing the total angle cost of a TSP tour for a set of points in Euclidean space, where the angle cost of a tour is the sum of the direction changes at the points. We establish the NP-hardness of both this problem and its relaxation to the cycle cover problem. We then consider the issue of designing approximation algorithms for these problems and show that both problems can be approximated to within a ratio of O(log n) in polynomial time. We also consider the problem of simultaneously approximating both the angle and the length measure for a TSP tour. In studying the resulting tradeoff, we choose to focus on the sum of the two performance ratios and provide tight bounds on the sum. Finally, we consider the extremal value of the angle measure and obtain essentially tight bounds for it. In this paper we restrict our attention to the planar setting, but all our results are easily extended to higher dimensions.
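The angle cost of a planar tour, i.e. the sum of the direction changes at the points, can be computed directly (a sketch; the point list and tour order are assumed inputs):

```python
import math

def angle_cost(points, tour):
    """Sum of turning angles (in radians) along a closed tour of planar
    points: at each point, the angle between the incoming and outgoing
    direction vectors."""
    n = len(tour)
    total = 0.0
    for i in range(n):
        ax, ay = points[tour[i - 1]]          # previous point
        bx, by = points[tour[i]]              # turning point
        cx, cy = points[tour[(i + 1) % n]]    # next point
        u, v = (bx - ax, by - ay), (cx - bx, cy - by)
        dot = u[0] * v[0] + u[1] * v[1]
        norms = math.hypot(*u) * math.hypot(*v)
        # clamp guards against tiny floating-point overshoot outside [-1, 1]
        total += math.acos(max(-1.0, min(1.0, dot / norms)))
    return total
```

For any convex tour the turning angles sum to exactly 2π, which makes convex positions a natural baseline when trading off the angle measure against tour length.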

Even though every cell in an organism contains the same genetic material, each cell does not express the same cohort of genes. Therefore, one of the major problems facing genomic research today is to determine not only which genes are differentially expressed and under what conditions, but also how the expression of those genes is regulated. The first step in determining differential gene expression is the binding of sequence-specific DNA binding proteins (i.e. transcription factors) to regulatory regions of the genes (i.e. promoters and enhancers). An important aspect to understanding how a given transcription factor functions is to know the entire gamut of binding sites and subsequently potential target genes that the factor may bind/regulate. In this study, we have developed a computer algorithm to scan genomic databases for transcription factor binding sites, based on a novel Markov chain optimization method, and used it to scan the human genome for sites that bind to hepatocyte nuclear factor 4 α (HNF4α). A list of 71 known HNF4α binding sites from the literature were used to train our Markov chain model. By looking at the window of 600 nucleotides around the transcription start site of each confirmed gene on the human genome, we identified 849 sites with varying binding potential and experimentally tested 109 of those sites for binding to HNF4α. Our results show that the program was very successful in identifying 77 new HNF4α binding sites with varying binding affinities (i.e. a 71% success rate). Therefore, this computational method for searching genomic databases for potential transcription factor binding sites is a powerful tool for investigating mechanisms of differential gene regulation.
Contact: jiang@cs.ucr.edu

In this note we deal with sensitivity analysis of combinatorial optimization problems and its fundamental term, the tolerance. For three classes of objective functions (Σ, Π, MAX) we prove some basic properties on upper and lower tolerances. We show that the upper tolerance of an element is well defined, how to compute the upper tolerance of an element, and give equivalent formulations when the upper tolerance is +∞ or > 0. Analogous results are proven for the lower tolerance, and some results on the relationship between lower and upper tolerances are given.

The Maximum Traveling Salesman Problem (MAX TSP), also known informally as the "taxicab ripoff problem", is stated as follows: given an n × n real matrix c = (c_ij), called a weight matrix, find a hamiltonian cycle i_1 → i_2 → … → i_n → i_1 for which the value c_{i_1 i_2} + c_{i_2 i_3} + … + c_{i_{n−1} i_n} + c_{i_n i_1} is maximized. Here (i_1, …, i_n) is a permutation of the set {1, …, n}. Of course, in this general setting, the Maximum Traveling Salesman Problem is equivalent to the Minimum Traveling Salesman Problem, since the maximum weight hamiltonian cycle with the weight matrix c corresponds to the minimum weight hamiltonian cycle with the weight matrix −c. What makes the MAX TSP special is that there are some interesting and natural special cases of the weights c_ij, not preserved by the sign reversal, where much more can be said about the problem than in the general case.

The cost scaling push-relabel method has been shown to be efficient for solving minimum-cost flow problems. In this paper we apply the method to the assignment problem. We investigate implementations of the method that take advantage of the problem structure. The results show that the method is very promising for practical use; it outperforms all other codes on almost all problems in our study.

Consider the following restricted (symmetric or asymmetric) traveling salesman problem: given an initial ordering of the n cities and an integer k > 0, find a minimum cost tour such that if city i precedes city j by at least k positions in the initial ordering, then city i precedes city j in any optimal tour. Balas [5] has proposed a dynamic programming algorithm that solves this problem in time linear in n, though exponential in k. Some important real-world problems are amenable to this model or some of its close relatives. The algorithm of [5] constructs a layered network with a layer of nodes for each position in the tour, such that source-sink paths in this network are in 1-1 correspondence with tours that satisfy the postulated precedence constraints. In this paper we discuss an implementation of the dynamic programming algorithm for the general case when the integer k is replaced with city-specific integers k(j), j = 1, …, n. One important feature of our implementation is that we construct in advance, without knowledge of any particular problem instance, a typical layer of an auxiliary supernetwork that can be stored and subsequently used to solve any instance with max_j k(j) ≤ K for some large K associated with this auxiliary structure. This advance construction, which needs to be done only once, absorbs the bulk of the computing time. We discuss applications to, and computational experience with, TSPs with time windows, a model frequently used in vehicle routing as well as in scheduling with setup, release and delivery times. We also introduce a new model, the TSP with target times, applicable to Just-in-Time scheduling problems. For TSPs that do not satisfy the postulated precedence constraints, we use the algorithm as a heuristic that finds in linear time...

In this paper we report on a cutting plane procedure with which we solved symmetric travelling salesman problems of up to 1000 cities to optimality. Our implementation is based on a fast LP-solver (IBM's MPSX) and makes effective use of polyhedral results on the symmetric travelling salesman polytope. We describe the important ingredients of our code and give an extensive documentation of its computational performance.

In this paper we further investigate tour construction algorithms for the Asymmetric Traveling Salesman Problem (ATSP). In [4] we introduced a new algorithm, called Contract-or-Patch (COP). We have tested the algorithm together with other well-known and new heuristics on a variety of families of ATSP instances. In our study, COP has demonstrated good performance, clearly outperforming all other algorithms on robustness. It has either produced the shortest tours or came close to the leader on each of the seven families tested, while each of the remaining algorithms failed on at least two families of instances. In this paper we introduce three new variants of the COP algorithm, and perform an extensive computational study of the original as well as new versions of the algorithm on a variety of ATSP instances. We also study the influence of the threshold parameter on the quality of tours produced by COP. We conclude the study by recommending one of the new versions of COP as a replacement for the original algorithm. The modified algorithm produces higher-quality tours than the original version, and has a nice property of being a much simpler algorithm. We also recommend a good universal choice of the parameter value.

This book presents the latest findings on one of the most intensely investigated subjects in computational mathematics--the traveling salesman problem. It sounds simple enough: given a set of cities and the cost of travel between each pair of them, the problem challenges you to find the cheapest route by which to visit all the cities and return home to where you began. Though seemingly modest, this exercise has inspired studies by mathematicians, chemists, and physicists. Teachers use it in the classroom. It has practical applications in genetics, telecommunications, and neuroscience. The authors of this book are the same pioneers who for nearly two decades have led the investigation into the traveling salesman problem. They have derived solutions to almost eighty-six thousand cities, yet a general solution to the problem has yet to be discovered. Here they describe the method and computer code they used to solve a broad range of large-scale problems, and along the way they demonstrate the interplay of applied mathematics with increasingly powerful computing platforms. They also give the fascinating history of the problem--how it developed, and why it continues to intrigue us.

It is shown that a certain tour of 49 cities, one in each of the 48 states and Washington, D.C., has the shortest road distance.
Operations Research, ISSN 0030-364X, was published as Journal of the Operations Research Society of America from 1952 to 1955 under ISSN 0096-3984.

An algorithm is described for solving large-scale instances of the Symmetric Traveling Salesman Problem (STSP) to optimality. The core of the algorithm is a "polyhedral" cutting-plane procedure that exploits a subset of the system of linear inequalities defining the convex hull of the incidence vectors of the hamiltonian cycles of a complete graph. The cuts are generated by several identification procedures that have been described in a companion paper. Whenever the cutting-plane procedure does not terminate with an optimal solution the algorithm uses a tree-search strategy that, as opposed to branch-and-bound, keeps on producing cuts after branching. The algorithm has been implemented in FORTRAN. Two different linear programming (LP) packages have been used as the LP solver. The implementation of the algorithm and the interface with one of the LP solvers is described in sufficient detail to permit the replication of our experiments. Computational results are reported for 42 STSPs with sizes ranging from 48 to 2,392 nodes. Most of the medium-sized test problems are taken from the literature; all others are large-scale real-world problems. All of the instances considered in this study were solved to optimality by the algorithm in "reasonable" computation times.

Linear programming is used to reduce the combinatorial magnitude of travelling-salesman problems. To illustrate the method, a step-by-step solution of Barachet's ten-city example is presented.

Assuming that numerical scores are available for the performance of each of n persons on each of n jobs, the "assignment problem" is the quest for an assignment of persons to jobs so that the sum of the n scores so obtained is as large as possible. It is shown that ideas latent in the work of two Hungarian mathematicians may be exploited to yield a new method of solving this problem.
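To make the problem statement above concrete, the following sketch finds the maximizing assignment by brute-force enumeration of all permutations; the 3x3 score matrix is made up for illustration, and this exhaustive search is deliberately naive (the Hungarian method solves the same problem in polynomial time, which this sketch does not attempt).

```python
from itertools import permutations

def best_assignment(score):
    """Exhaustive search for the assignment maximizing the total score.
    score[i][j] is the score of person i on job j; feasible only for small n."""
    n = len(score)
    best = (float("-inf"), None)
    for perm in permutations(range(n)):   # perm[i] = job given to person i
        total = sum(score[i][perm[i]] for i in range(n))
        best = max(best, (total, perm))
    return best

# Made-up 3x3 score matrix for illustration.
score = [[9, 2, 7],
         [6, 4, 3],
         [5, 8, 1]]
print(best_assignment(score))  # → (21, (2, 0, 1))
```

The factorial search space is exactly what makes polynomial-time methods such as the Hungarian algorithm interesting for this problem.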


The RAND Corporation in the early 1950s contained "what may have been the most remarkable group of mathematicians working on optimization ever assembled" [6]: Arrow, Bellman, Dantzig, Flood, Ford, Fulkerson, Gale, Johnson, Nash, Orchard-Hays, Robinson, Shapley, Simon, Wagner, and other household names. Groups like this need their challenges. One of them appears to have been the traveling salesman problem (TSP) and particularly its instance of finding a shortest route through Washington, DC, and the 48 states [4, 7].

We propose a new algorithm for the classical assignment problem. The algorithm resembles in some ways the Hungarian method but differs substantially in other respects. The average computational complexity of an efficient implementation of the algorithm seems to be considerably better than the one of the Hungarian method. In a large number of randomly generated problems the algorithm has consistently outperformed an efficiently coded version of the Hungarian method by a broad margin. The factor of improvement increases with the problem dimensionN and reaches an order of magnitude forN equal to several hundreds.
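The auction mechanism described in this abstract can be sketched in a few lines: each unassigned person bids for its most valuable object, raising that object's price by the bid margin plus a small ε. This is only a minimal illustration of the idea (a Gauss–Seidel variant with fixed ε and no ε-scaling, on a made-up benefit matrix), not the paper's implementation; for integer benefits and ε < 1/n the final assignment is optimal.

```python
def auction_assignment(benefit):
    """Minimal auction sketch for the max-total assignment problem (n >= 2).
    benefit[i][j] is the integer benefit of giving object j to person i."""
    n = len(benefit)
    eps = 1.0 / (n + 1)            # eps < 1/n => optimal for integer benefits
    prices = [0.0] * n             # current price of each object
    owner = [None] * n             # owner[j]: person currently holding object j
    assigned = [None] * n          # assigned[i]: object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # Net values of all objects for person i at current prices.
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=values.__getitem__)
        v_second = max(v for j, v in enumerate(values) if j != j_best)
        # Bid: raise the price by the margin over the second-best value plus eps.
        prices[j_best] += values[j_best] - v_second + eps
        if owner[j_best] is not None:      # outbid the previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned

# Made-up benefit matrix; the unique optimum assigns
# person 0 -> object 2, person 1 -> object 0, person 2 -> object 1.
print(auction_assignment([[9, 2, 7],
                          [6, 4, 3],
                          [5, 8, 1]]))   # → [2, 0, 1]
```

Since every bid raises a price by at least ε, the auction terminates; the ε-scaling refinements of the full algorithm are what give it its favorable average-case behavior.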

We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems, computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented.

The paper considers the classic linear assignment problem with a min-sum objective function, and the most efficient and easily available codes for its solution. We first give a survey describing the different approaches in the literature, presenting their implementations, and pointing out similarities and differences. Then we select eight codes and we introduce a wide set of dense instances containing both randomly generated and benchmark problems. Finally we discuss the results of extensive computational experiments obtained by solving the above instances with the eight codes, both on a workstation with Unix operating system and on a personal computer running under Windows 95.

This report describes an implementation of the Lin-Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem. Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 7397-city problem (the largest nontrivial problem instance solved to optimality to date). Furthermore, the algorithm has improved the best known solutions for a series of large-scale problems with unknown optima, among these an 85900-city problem.

We discuss the relationship between Gilmore–Lawler lower bounds with decomposition for the quadratic assignment problem and a Lagrangian relaxation of a particular integer programming formulation.

This paper has been presented with the Best Paper Award. It will appear in print in Volume 52, No. 1, February 2005.

We describe a computer code and data that together certify the optimality of a solution to the 85,900-city traveling salesman problem pla85900, the largest instance in the TSPLIB collection of challenge problems.

This paper discusses a highly effective heuristic procedure for generating optimum and near-optimum solutions for the symmetric traveling-salesman problem. The procedure is based on a general approach to heuristics that is believed to have wide applicability in combinatorial optimization problems. The procedure produces optimum solutions for all problems tested, "classical" problems appearing in the literature, as well as randomly generated test problems, up to 110 cities. Run times grow approximately as n²; in absolute terms, a typical 100-city problem requires less than 25 seconds for one case (GE635), and about three minutes to obtain the optimum with above 95 per cent confidence.
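The Lin-Kernighan procedure generalizes the basic 2-opt exchange to variable-depth moves. The sketch below shows only that simplest building block, plain 2-opt, on a made-up four-city Euclidean instance; it is not the Lin-Kernighan algorithm itself.

```python
import math

def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Reverse tour segments while doing so shortens the tour (2-opt)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # When i == 0, stop at n - 2 so the two removed edges are disjoint.
            for k in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[k], tour[(k + 1) % n]
                # Replace edges (a,b),(c,d) by (a,c),(b,d) if that is shorter.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    tour[i + 1:k + 1] = reversed(tour[i + 1:k + 1])
                    improved = True
    return tour

# Four corners of the unit square; the optimal tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(two_opt([0, 2, 1, 3], dist))   # → [0, 1, 2, 3]
```

Lin-Kernighan's contribution is to chain such edge exchanges into deeper moves whose depth is decided adaptively, which is what makes it so much stronger than fixed 2-opt or 3-opt.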

Many short DNA motifs, such as transcription factor binding sites (TFBS) and splice sites, exhibit strong local as well as nonlocal dependence. We introduce permuted variable length Markov models (PVLMM) which could capture the potentially important dependencies among positions and apply them to the problem of detecting splice and TFB sites. They have been satisfactory from the viewpoint of prediction performance and also give ready biological interpretations of the sequence dependence observed. The issue of model selection is also studied.


B. Goldengorin, G. Jäger, P. Molitor: Tolerances Applied in Combinatorial Optimization. J. Comput. Sci. 2(9), 716–734, 2006.

D.P. Bertsekas: A New Algorithm for the Assignment Problem. Math. Program. 21, 152–171, 1981.

D. Ghosh, B. Goldengorin, G. Gutin, G. Jäger: Tolerance-Based Greedy Algorithms for the Traveling Salesman Problem. Chapter 5 in: Mathematical Programming and Game Theory for Decision Making. S.K. Neogy, R.B. Bapat, A.K. Das, T. Parthasarathy (Eds.). World Scientific, New Jersey, 47–59, 2008.

I. Grosse, J. Keilwagen: Private communication. University of Halle-Wittenberg, Chair for Bioinformatics, and Leibniz Institute of Plant Genetics and Crop Plant Research, Gatersleben.
