Article

The traveling-salesman problem and minimum spanning trees: Part II

Authors:
Michael Held, Richard M. Karp

Abstract

The relationship between the symmetric traveling-salesman problem and the minimum spanning tree problem yields a sharp lower bound on the cost of an optimum tour. An efficient iterative method for approximating this bound closely from below is presented. A branch-and-bound procedure based upon these considerations has easily produced proven optimum solutions to all traveling-salesman problems presented to it, ranging in size up to sixty-four cities. The bounds used are so sharp that the search trees are minuscule compared to those normally encountered in combinatorial problems of this type.
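To make the method concrete, the following is a minimal Python sketch of the 1-tree bound and the iterative ascent described in the abstract. It is a simplified reading of the paper, not the authors' code: the special 1-tree vertex is fixed to city 0, `dist` is assumed to be a symmetric NumPy float matrix, and the step-size schedule (geometric decay with an assumed factor of 0.95) and iteration count are placeholder choices.

```python
import numpy as np

def prim_mst(c, nodes):
    """Prim's algorithm on the complete subgraph induced by `nodes`
    (weights taken from matrix c). Returns total weight and the degree
    of every vertex in the resulting tree."""
    degree = np.zeros(c.shape[0], dtype=int)
    in_tree = {nodes[0]}
    nearest = {v: nodes[0] for v in nodes[1:]}  # nearest in-tree neighbor
    cost = 0.0
    while len(in_tree) < len(nodes):
        v = min(nearest, key=lambda u: c[u, nearest[u]])
        cost += c[v, nearest[v]]
        degree[v] += 1
        degree[nearest[v]] += 1
        in_tree.add(v)
        del nearest[v]
        for u in nearest:
            if c[u, v] < c[u, nearest[u]]:
                nearest[u] = v
    return cost, degree

def one_tree_bound(dist, pi):
    """w(pi): cost of a minimum 1-tree under the transformed costs
    c[i,j] = dist[i,j] + pi[i] + pi[j], minus 2*sum(pi). For every pi
    this is a lower bound on the cost of an optimum tour."""
    n = dist.shape[0]
    c = dist + pi[:, None] + pi[None, :]
    cost, degree = prim_mst(c, list(range(1, n)))
    j1, j2 = sorted(range(1, n), key=lambda j: c[0, j])[:2]
    cost += c[0, j1] + c[0, j2]   # attach special vertex 0 by its
    degree[0] += 2                # two cheapest edges
    degree[j1] += 1
    degree[j2] += 1
    return cost - 2.0 * pi.sum(), degree

def held_karp_bound(dist, iters=200, step=1.0):
    """Iteratively approximate max_pi w(pi) from below."""
    pi = np.zeros(dist.shape[0])
    best = -np.inf
    for _ in range(iters):
        w, degree = one_tree_bound(dist, pi)
        best = max(best, w)
        if np.all(degree == 2):    # the 1-tree is itself an optimum tour
            break
        pi += step * (degree - 2)  # raise prices at overloaded vertices
        step *= 0.95               # assumed geometric step decay
    return best
```

For a symmetric distance matrix, `held_karp_bound(dist)` never exceeds the optimum tour cost; the sharpness of this bound is what keeps the branch-and-bound search trees small.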


... If k edges in the tour are simultaneously replaced, this is known as the k-opt move [Hel09]. To prune the search space, the algorithm relies on minimum spanning trees [HK70,HK71] to identify edges that are more likely to be in the tour. This "importance" metric for edges is called α-nearness and described subsequently. ...
... This transformation of the distance matrix does not change the shortest tour and leads to significantly improved α-nearness values [Hel98]. This method can also be used to compute a lower bound which is in general very close to the optimal tour length [HK70,HK71]. Figure 3 shows the impact of the subgradient optimization. In the example, most of the edges of the optimal tour are already present in the optimized 1-tree. ...
... In the example, most of the edges of the optimal tour are already present in the optimized 1-tree. For a more detailed description of α-nearness, 1-trees, and the subgradient optimization scheme, we refer to [HK70,HK71,Hel98]. ...
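As a sketch of the α-nearness computation quoted above (our simplification of the idea in [Hel98], ignoring the corrections for the special 1-tree vertex): a tree edge gets α = 0, and any other edge (i, j) gets α(i, j) = c(i, j) − β(i, j), where β(i, j) is the weight of the heaviest edge on the tree path between i and j. The `tree_adj` adjacency structure is an assumed input built from the minimum 1-tree.

```python
import numpy as np

def alpha_nearness(c, tree_adj):
    """Simplified alpha values: alpha[i, j] = c[i, j] - beta[i, j], where
    beta[i, j] is the largest edge weight on the i-j path of the spanning
    tree `tree_adj` (dict: vertex -> list of (neighbor, weight) pairs).
    Tree edges get alpha = 0; the special-vertex corrections of the full
    method are omitted in this sketch."""
    n = c.shape[0]
    beta = np.zeros((n, n))
    for root in tree_adj:
        stack = [(root, None)]      # DFS carrying the max edge weight
        while stack:
            v, parent = stack.pop()
            for u, w in tree_adj[v]:
                if u != parent:
                    beta[root, u] = max(beta[root, v], w)
                    stack.append((u, v))
    return c - beta
```

Edges with small α values form the candidate set that the heuristic searches first, which is the pruning role described in the excerpts above.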
Preprint
In this work, we aim to explore connections between dynamical systems techniques and combinatorial optimization problems. In particular, we construct heuristic approaches for the traveling salesman problem (TSP) based on embedding the relaxed discrete optimization problem into appropriate manifolds. We explore multiple embedding techniques -- namely, the construction of new dynamical systems on the manifold of orthogonal matrices and associated Procrustes approximations of the TSP cost function. Using these dynamical systems, we analyze the local neighborhood around the optimal TSP solutions (which are equilibria) using computations to approximate the associated \emph{stable manifolds}. We find that these flows frequently converge to undesirable equilibria. However, the solutions of the dynamical systems and the associated Procrustes approximation provide an interesting biasing approach for the popular Lin--Kernighan heuristic which yields fast convergence. The Lin--Kernighan heuristic is typically based on the computation of edges that have a `high probability' of being in the shortest tour, thereby effectively pruning the search space. Our new approach, instead, relies on a natural relaxation of the combinatorial optimization problem to the manifold of orthogonal matrices and the subsequent use of this solution to bias the Lin--Kernighan heuristic. Although the initial cost of computing these edges using the Procrustes solution is higher than existing methods, we find that the Procrustes solution, when coupled with a homotopy computation, contains valuable information regarding the optimal edges. We explore the Procrustes based approach on several TSP instances and find that our approach often requires fewer k-opt moves than existing approaches. Broadly, we hope that this work initiates more work in the intersection of dynamical systems theory and combinatorial optimization.
... The value of $\beta_d$ is presently unknown, although previous studies have provided numerical Monte Carlo estimates for $\beta_2$ [41,63,67,75,77,93,96], as well as for $\beta_3$ and $\beta_4$ [63,77]. These estimations primarily rely on the Held-Karp linear programming relaxation [56,57]. The most contemporary estimates obtained in [8,34] are derived from simulating exceedingly large instances, and show with a high degree of confidence that $\beta_2 \approx 0.71$. ...
... In their subsequent endeavors [56,57] ...
... The approximation of the Beardwood-Halton-Hammersley constant $\beta_d$ for dimensions higher than two (i.e., d > 2) became increasingly attainable with the significant advancements in computing power during the late 1990s. One of the key studies in this area was conducted by [63], whose work built upon the Held-Karp lower bound [56,57] and undertook a thorough asymptotic experimental analysis of the bounds of the travelling salesman problem. Their research provided refined estimates of $\beta_2 \approx 0.7124 \pm 0.0002$, $\beta_3 \approx 0.6980 \pm 0.0003$, ...
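For context, the constant being estimated is the one in the Beardwood-Halton-Hammersley theorem (stated here in our notation): for $n$ points drawn independently and uniformly from $[0,1]^d$, the optimal tour length $L_n$ satisfies, almost surely,

$$\lim_{n \to \infty} \frac{L_n}{n^{1-1/d}} = \beta_d .$$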
... As shown in Fig. 2, the layout information in matrix form can be converted into a solution vector and solved as a TSP. Karp (1972) proved that the TSP is an NP-hard problem, and therefore, the problem addressed in this paper can also be understood as an NP-hard problem [38,39]. Figure 2 shows an example of converting layout information into a solution vector. ...
Article
Full-text available
The automotive industry is experiencing rapid changes due to the rise of the Industry 4.0 manufacturing paradigm, which requires strategic implementation of advanced manufacturing systems to meet diverse customer needs. The Matrix Manufacturing System, characterized by modular facilities and autonomous mobile robots, offers greater flexibility compared to traditional dedicated production systems. This paper conducts a multi-objective optimization of facility layout planning within the matrix manufacturing system to enhance efficiency and responsiveness to market volatility. To solve the optimization problem, three heuristic algorithms—Simulated Annealing, Particle Swarm Optimization, and Non-dominated Sorting Genetic Algorithm-II are employed and their performance is compared. For the comparative analysis, frequency maps are used, visualizing the optimization processes and outcomes between metaheuristic algorithms. The framework with methodologies presented in this report is expected to improve productivity and flexibility of a matrix manufacturing system in the automotive industry.
... An integral variable assignment that satisfies constraints (10), (12), and constraints (11) for neighboring residues, i.e., for $j = i + 1$, encodes a path $p$ in the corresponding $k$-partite graph from a node in $V_1$ to a node in $V_k$ that traverses exclusively edges between neighboring residues. The remaining constraints of (11) involve $y$-variables that do not appear in any other constraint and can thus be chosen independently of each other. ...
... We apply a standard subgradient optimization technique [12] to find those Lagrangian multipliers $\lambda_v^i$ that yield the largest lower bound to our relaxation. This iterative adaptation of the Lagrangian multipliers only requires the profits of a small fraction of the vertices to be recomputed from scratch in each iteration. ...
Preprint
Computational protein design aims at constructing novel or improved functions on the structure of a given protein backbone and has important applications in the pharmaceutical and biotechnical industry. The underlying combinatorial side-chain placement problem consists of choosing a side-chain placement for each residue position such that the resulting overall energy is minimum. The choice of the side-chain then also determines the amino acid for this position. Many algorithms for this NP-hard problem have been proposed in the context of homology modeling, which, however, reach their limits when faced with large protein design instances. In this paper, we propose a new exact method for the side-chain placement problem that works well even for large instance sizes as they appear in protein design. Our main contribution is a dedicated branch-and-bound algorithm that combines tight upper and lower bounds resulting from a novel Lagrangian relaxation approach for side-chain placement. Our experimental results show that our method outperforms alternative state-of-the art exact approaches and makes it possible to optimally solve large protein design instances routinely.
... The combinatorial algorithm we study is an edge-elimination/fixing method developed by Hougardy and Schroeder [13]. The central idea was proposed by Jonker and Volgenant [14], in the context of an implementation of the Held-Karp [11] branch-and-bound algorithm. In a study of Euclidean and road-map instances ranging in size from 40 to 120 points, Jonker and Volgenant begin by eliminating edges based on an optimal 1-tree, similar to the Dantzig et al. elimination via reduced costs; for details see Volgenant and Jonker [23]. ...
... In our implementation, we adopt the Held-Karp 1-tree code [11] included with the Concorde TSP solver [1]. The branch-and-bound search is terminated early if either a tour shorter than $T_F(\sigma, b)$ is found or a specified maximum number of search nodes is reached. ...
Article
Full-text available
Hougardy and Schroeder (WG 2014) proposed a combinatorial technique for pruning the search space in the traveling salesman problem, establishing that, for a given instance, certain edges cannot be present in any optimal tour. We describe an implementation of their technique, employing an exact TSP solver to locate k-opt moves in the elimination process. In our computational study, we combine LP reduced-cost elimination together with the new combinatorial algorithm. We report results on a set of geometric instances, with the number of points n ranging from 3038 up to 115,475. The test set includes all TSPLIB instances having at least 3000 points, together with 250 randomly generated instances, each with 10,000 points, and three currently unsolved instances having 100,000 or more points. In all but two of the test instances, the complete-graph edge sets were reduced to under 3n edges. For the three large unsolved instances, repeated runs of the elimination process reduced the graphs to under 2.5n edges.
... The starting point for solving the g-TSP, which is enhanced in section 3, is the classical branch and bound scheme proposed by Held and Karp [HK71], with some improvements from more recent works [VJ97;Shu01]. ...
... Following [HK71], we use a lower bound based on a Lagrangian relaxation combined with an iterative sub-gradient improvement, which achieves the optimum of the original g-TSP problem through branching iterations. Upon choosing a starting node, e.g. 1, for computing the Lagrangian relaxation, the degree constraints (2) for $i \neq 1$ are relaxed into the objective function via multipliers $\lambda = (\lambda_i : i \in V \setminus \{1\})$. ...
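In the notation of [HK71] (the symbols here are ours), relaxing the degree constraints yields, for any multipliers $\pi$, the 1-tree bound

$$ w(\pi) \;=\; \min_{T \in \mathcal{T}_1} \Bigl( c(T) + \sum_{i} \pi_i \bigl( d_i(T) - 2 \bigr) \Bigr) \;\le\; C^*, $$

where $\mathcal{T}_1$ is the set of 1-trees, $d_i(T)$ is the degree of vertex $i$ in $T$, and $C^*$ is the cost of an optimum tour; the sub-gradient iterations then maximize $w(\pi)$.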
Preprint
Full-text available
This article demonstrates the effectiveness of employing a deep learning model in an optimization pipeline. Specifically, in a generic exact algorithm for a NP problem, multiple heuristic criteria are usually used to guide the search of the optimum within the set of all feasible solutions. In this context, neural networks can be leveraged to rapidly acquire valuable information, enabling the identification of a more expedient path in this vast space. So, after the explanation of the tackled traveling salesman problem, the implemented branch and bound for its classical resolution is described. This algorithm is then compared with its hybrid version termed "graph convolutional branch and bound" that integrates the previous branch and bound with a graph convolutional neural network. The empirical results obtained highlight the efficacy of this approach, leading to conclusive findings and suggesting potential directions for future research.
... Lagrangian relaxation (LR) is an efficient method for solving large-scale problems. In 1970 and 1971, Held and Karp (1970, 1971) applied Lagrangian relaxation to the traveling-salesman problem. Their success in implementing a Lagrangian method motivated other researchers to apply this approach, and Fisher (1981), Fisher (1985), and Fisher (2004) applied this method to scheduling and general integer programming problems. ...
... Moreover, we use Polyak's stepsize update [47], see (12), where $U^*$ is computed using the heuristic introduced in Section 6.1.4. For the value of $\mu_1^\ell$ we use the approach of Held and Karp [35], implying that $\mu_1^0 = 1$, and we halve its value each time the obtained bound has not increased for $N_{\text{step}} = 40$ subsequent iterations. As starting point $S^0$, we use the (approximate) dual solution that we obtain from the implementation of the ADMM mentioned above. ...
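The halving rule described above is straightforward to state in code. The following schematic sketch is ours; `evaluate_bound` and `update_multipliers` are hypothetical placeholders for the problem-specific relaxation and multiplier step, and only the constants mirror the quoted choices ($\mu_1^0 = 1$, $N_{\text{step}} = 40$):

```python
def subgradient_with_halving(evaluate_bound, update_multipliers,
                             lam0, mu=1.0, n_step=40, max_iter=1000):
    """Subgradient ascent where the step scale mu is halved whenever the
    best bound has not improved for n_step consecutive iterations."""
    lam, best, stall = lam0, float("-inf"), 0
    for _ in range(max_iter):
        bound, subgrad = evaluate_bound(lam)
        if bound > best:
            best, stall = bound, 0
        else:
            stall += 1
            if stall >= n_step:
                mu, stall = mu / 2.0, 0   # halve the step scale
        lam = update_multipliers(lam, mu, subgrad)
    return best, lam
```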
Preprint
Full-text available
This paper presents the Lagrangian duality theory for mixed-integer semidefinite programming (MISDP). We derive the Lagrangian dual problem and prove that the resulting Lagrangian dual bound dominates the bound obtained from the continuous relaxation of the MISDP problem. We present a hierarchy of Lagrangian dual bounds by exploiting the theory of integer positive semidefinite matrices and propose three algorithms for obtaining those bounds. Our algorithms are variants of well-known algorithms for minimizing non-differentiable convex functions. The numerical results on the max-k-cut problem show that the Lagrangian dual bounds are substantially stronger than the semidefinite programming bound obtained by relaxing integrality, already for lower levels in the hierarchy. Computational costs for computing our bounds are small.
... The model structure for very large optimization problems can often be exploited by using Lagrangian duality and Lagrangian relaxation, which first came to light with the seminal work of Held and Karp [25]. Some decomposition methods relating to electrical energy applications are also covered by Sagastizábal [53]. ...
Thesis
Full-text available
The global production of electricity contributes significantly to the release of CO2 emissions. Therefore, a transformation of the electricity system is of vital importance in order to restrict global warming. This thesis concerns modelling and methodology of electricity systems which contain a large share of variable renewable electricity generation (i.e. wind and solar power). The two models developed in this thesis concern optimization of long-term investments in the electricity system. They aim at minimizing investment and production costs under electricity production constraints, using different spatial resolutions and technical detail, while meeting the electricity demand. These models are very large in nature due to the 1) high temporal resolution needed to capture the wind and solar variations while maintaining chronology in time, and 2) need to cover a large geographical scope in order to represent strategies to manage these variations (e.g. electricity trade). Thus, different decomposition methods are applied to reduce computation times. We develop three different decomposition methods: Lagrangian relaxation combined with variable splitting solved using either i) a subgradient algorithm or ii) an ADMM algorithm, and iii) a heuristic decomposition using a consensus algorithm. In all three cases, the decomposition is done with respect to the temporal resolution by dividing the year into 2-week periods. The decomposition methods are tested and evaluated for cases involving regions with different energy mixes and conditions for wind and solar power. Numerical results show faster computation times compared to the non-decomposed models and capacity investment options similar to the optimal solutions given by the latter models. However, the reduction in computation time may not be sufficient to motivate the increase in complexity and uncertainty of the decomposed models.
... Approximate algorithms [34] come with a worst-case approximation factor. Two traditional methods include a Minimum Spanning Tree (MST) based algorithm [35], which achieves a factor-2 approximation, and a combination of MST and the Minimum Matching Problem (MMP), which achieves a factor-3/2 approximation. ...
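The factor-2 method can be illustrated with the classical "double tree" construction (a folklore algorithm, not necessarily the cited implementation [35]): build an MST, walk it in preorder, and shortcut repeated vertices; under the triangle inequality the resulting tour costs at most twice the MST weight, hence at most twice the optimum. A self-contained Python sketch:

```python
def tsp_two_approx(dist):
    """Double-tree 2-approximation: build an MST (Prim), then visit the
    vertices in preorder of the tree and close the cycle. Assumes `dist`
    is a symmetric matrix satisfying the triangle inequality."""
    n = len(dist)
    parent = [0] * n
    key = list(dist[0])
    in_tree = [False] * n
    in_tree[0] = True
    children = [[] for _ in range(n)]
    for _ in range(n - 1):                 # Prim's MST
        v = min((u for u in range(n) if not in_tree[u]), key=lambda u: key[u])
        in_tree[v] = True
        children[parent[v]].append(v)
        for u in range(n):
            if not in_tree[u] and dist[v][u] < key[u]:
                key[u], parent[u] = dist[v][u], v
    tour, stack = [], [0]
    while stack:                           # preorder walk of the MST
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, cost
```

The 3/2 factor (Christofides' algorithm) additionally requires a minimum-weight perfect matching on the odd-degree MST vertices, which is omitted here.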
... Another evolutionary algorithm used in this work for the optimal synthesis of reversible circuits is the ant colony algorithm (ACO), which has proven its promise in solving combinatorial problems, such as the traveling-salesman problem [14], protein-ligand docking [15], scheduling [16], etc. ACO combines a probabilistic decision-making model as a tool for overcoming stagnation problems with a distributed memory model, which is an effective way to find the attainable optimum. Thus, ACO can be used to build efficient and effective solutions, which has led to a more detailed analysis of its use for the synthesis of reversible devices. ...
Article
Full-text available
Considering the relentless progress in the miniaturization of electronic devices and the need to reduce energy consumption, technical challenges in the synthesis of circuit design solutions have become evident. According to Moore's Law, the reduction of transistor sizes to the atomic scale faces physical limits, which complicate further development. Additionally, reducing transistor sizes causes current leakage, leading to increased thermal noise, which can disrupt the proper functioning of digital devices. A promising solution to these problems is the application of reversible logic in circuit design. Reversible logic allows for a reduction in energy and information losses because logical reversible operations are performed without loss. The research synthesized optimal reversible circuits based on reversible gates using evolutionary algorithms and compared them with existing analogues. The focus of this study is on logical circuits built using reversible gates, which can significantly reduce energy losses, which is critical for modern and future electronic devices. The synthesis of reversible circuits is closely related to quantum computing, where quantum gates also possess a reversible nature. This enables the use of synthesis methods to create quantum reversible logical computing devices, which in turn promotes the development of quantum technologies. The study focuses on the application of evolutionary artificial intelligence algorithms, specifically genetic algorithms and ant colony optimization algorithms, for the optimal synthesis of reversible circuits. As a result, a detailed description of the key concepts of the improved algorithms, simulation results, and comparison of the two methods is provided. The efficiency of the reversible device synthesis was evaluated using the proposed implementation of the genetic algorithm and the ant colony optimization algorithm. The obtained results were compared to existing analogues and verified using the Qiskit framework in the IBM quantum computing laboratory. The conclusions describe the developed algorithms, which demonstrate high efficiency in solving circuit topology optimization problems. A genetic algorithm was developed, featuring multi-component mutation and a matrix approach to chromosome encoding combined with Tabu search to avoid local optima. The ant colony optimization algorithms were improved, including several changes to the proposed data representation model, structure, and operational principles of the synthesis algorithm, enabling effective synthesis of devices on the NCT basis along with Fredkin gates. An improved structure for storing and using pheromones was developed to enable multi-criteria navigation in the solution space.
... A minimal spanning tree (MST) is a subset of edges that connects all vertices in a weighted graph with the minimum possible total edge weight, forming a tree structure without any cycles. It ensures every vertex is reachable while minimizing the overall connection cost [81,138,156,157,163,265,355]. The maximum spanning tree problem has also been studied in a similar manner [29,223]. ...
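As a concrete rendering of this definition, here is a short Kruskal sketch with union-find (ours, for an ordinary weighted graph; the fuzzy and neutrosophic extensions cited above are beyond the scope of this sketch):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm with union-find. `edges` is a list of
    (weight, u, v) tuples; returns the MST edge list and total weight."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst, total = [], 0.0
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:
                break
    return mst, total
```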
Chapter
Full-text available
In this paper, we conduct a comprehensive study of Trees, Forests, and Paths within the framework of Fuzzy and Neutrosophic Graphs. Graph theory, known for its wide-ranging applications across various fields, is extended here to account for uncertainties through the use of fuzzy and neutrosophic structures. These graphs introduce degrees of truth, indeterminacy, and falsity to better model real-world complexities. Our research delves into the classification and properties of graph structures like trees and paths in this context, aiming to offer deeper insights and contribute to ongoing developments in the field of graph theory.
... It turns out that almost all our results lie within the widely accepted margin of error caused by rounding distances to the nearest integer. Furthermore, the (relatively time-consuming) standard Held-Karp bound (see [Held and Karp 1971]) is outperformed by our methods for most instances. This is remarkable, as it usually performs quite well, and has been studied widely, even for geometric instances of the TSP. ...
Preprint
We consider geometric instances of the Maximum Weighted Matching Problem (MWMP) and the Maximum Traveling Salesman Problem (MTSP) with up to 3,000,000 vertices. Making use of a geometric duality relationship between MWMP, MTSP, and the Fermat-Weber-Problem (FWP), we develop a heuristic approach that yields, in near-linear time, solutions as well as upper bounds. Using various computational tools, we get solutions within considerably less than 1% of the optimum. An interesting feature of our approach is that, even though an FWP is hard to compute in theory and Edmonds' algorithm for maximum weighted matching yields a polynomial solution for the MWMP, the practical behavior is just the opposite, and we can solve the FWP with high accuracy in order to find a good heuristic solution for the MWMP.
... Lagrangian relaxation has been used extensively in the design of approximation algorithms for a variety of problems such as TSP [20,21], k-MST [13,2,8,12], partial vertex cover [23], k-median [26,6,1], MST with degree constraints [32] and budgeted MST [36]. ...
Preprint
Lagrangian relaxation has been used extensively in the design of approximation algorithms. This paper studies its strengths and limitations when applied to Partial Cover.
... Consider, for example, the classical Traveling Salesperson Problem (TSP): given a distance metric on a set of n points, the task is to find a shortest path visiting all n points. This problem can be solved in time $2^n \cdot n^{O(1)}$ using a well-known dynamic programming algorithm of Bellman [5] and of Held and Karp [22] that works for any metric. However, in the special case when the points are in d-dimensional Euclidean space, TSP can be solved in time $2^{d^{O(d)}} \cdot n^{O(d n^{1-1/d})}$ by an algorithm of Smith and Wormald [38]; that is, treating d as a fixed constant, the running time is $n^{O(n^{1-1/d})} = 2^{O(n^{1-1/d} \cdot \log n)} = 2^{O(n^{1-1/d+\epsilon})}$ for every $\epsilon > 0$. This means that, as the dimension d grows, the running time quickly converges to the $2^n \cdot n^{O(1)}$ time of the standard dynamic programming algorithm that does not exploit any geometric property of the problem. ...
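The Bellman-Held-Karp dynamic program mentioned above is short enough to state in full; this is the standard bitmask formulation, in $O(2^n \cdot n^2)$ time and $O(2^n \cdot n)$ space (so only practical for small n):

```python
from itertools import combinations

def held_karp_dp(dist):
    """Exact TSP by dynamic programming. dp[(S, j)] is the length of a
    shortest path that starts at city 0, visits exactly the vertex set S
    (a bitmask containing 0 and j), and ends at city j."""
    n = len(dist)
    dp = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = 1 | sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)          # S with city j removed
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 1
    # close the tour by returning to city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```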
Preprint
We are studying d-dimensional geometric problems that have algorithms with $1-1/d$ appearing in the exponent of the running time, for example, in the form of $2^{n^{1-1/d}}$ or $n^{k^{1-1/d}}$. This means that these algorithms perform somewhat better in low dimensions, but the running time is almost the same for all large values d of the dimension. Our main result is showing that for some of these problems the dependence on $1-1/d$ is best possible under a standard complexity assumption. We show that, assuming the Exponential Time Hypothesis, --- d-dimensional Euclidean TSP on n points cannot be solved in time $2^{O(n^{1-1/d-\epsilon})}$ for any $\epsilon>0$, and --- the problem of finding a set of k pairwise nonintersecting d-dimensional unit balls/axis-parallel unit cubes cannot be solved in time $f(k)\,n^{o(k^{1-1/d})}$ for any computable function f. These lower bounds essentially match the known algorithms for these problems. To obtain these results, we first prove lower bounds on the complexity of Constraint Satisfaction Problems (CSPs) whose constraint graphs are d-dimensional grids. We state the complexity results on CSPs in a way to make them convenient starting points for problem-specific reductions to particular d-dimensional geometric problems and to be reusable in the future for further results of similar flavor.
... For CYTOB, the DP time is 8640 seconds, an order of magnitude larger.) Table 5 also shows that Zhang's tour always had cost close to the Held-Karp lower bound [13,14] on the cost of the optimum TSP tour. ...
Preprint
We study the problem of compressing massive tables within the partition-training paradigm introduced by Buchsbaum et al. [SODA'00], in which a table is partitioned by an off-line training procedure into disjoint intervals of columns, each of which is compressed separately by a standard, on-line compressor like gzip. We provide a new theory that unifies previous experimental observations on partitioning and heuristic observations on column permutation, all of which are used to improve compression rates. Based on the theory, we devise the first on-line training algorithms for table compression, which can be applied to individual files, not just continuously operating sources; and also a new, off-line training algorithm, based on a link to the asymmetric traveling salesman problem, which improves on prior work by rearranging columns prior to partitioning. We demonstrate these results experimentally. On various test files, the on-line algorithms provide 35-55% improvement over gzip with negligible slowdown; the off-line reordering provides up to 20% further improvement over partitioning alone. We also show that a variation of the table compression problem is MAX-SNP hard.
... • Spanning tree: A spanning tree is a subgraph that connects all vertices in a graph without any cycles, using the minimum edges [314,465,523,524,567,935,1163]. Related concepts include fuzzy minimum spanning tree [374,475,628], intuitionistic fuzzy minimum spanning tree [833], Single-valued neutrosophic minimum spanning tree [1167], and neutrosophic minimum spanning tree graph [742]. ...
Chapter
Full-text available
Graph theory is a fundamental branch of mathematics that studies networks consisting of nodes (vertices) and their connections (edges). Extensive research has been conducted on various graph classes within this field. Fuzzy Graphs and Neutrosophic Graphs are specialized models developed to address uncertainty in relationships. Intersection graphs, such as Unit Square Graphs, Circle Graphs, Ray Intersection Graphs, Grid Intersection Graphs, and String Graphs, play a critical role in analyzing graph structures. In this paper, we explore intersection graphs within the frameworks of Fuzzy Graphs, Intuitionistic Fuzzy Graphs, Neutrosophic Graphs, Turiyam Graphs, and Plithogenic Graphs, highlighting their mathematical properties and interrelationships. Additionally, we provide a comprehensive survey of the graph classes and hierarchies related to intersection graphs and uncertain graphs, reflecting the increasing number of graph classes being developed in these areas.
... Classically, the problem has been tackled by exact as well as heuristic algorithms. Notably, seminal work in linear programming in Ref. [2] introduced cutting planes, laying the groundwork for branch-and-cut methods [3]-[5] and branch-and-bound algorithms [6], [7]. In particular, Ref. [8] discussed an implementation of the method from Ref. [2], suitable for TSP instances having a million or more cities. ...
Preprint
The traveling salesman problem is the problem of finding the shortest route in a network of cities that a salesman needs to travel to cover all the cities, without visiting the same city more than once. This problem is known to be NP-hard, with a brute-force complexity of $O(N^N)$ or $O(N^{2N})$ for N cities. This problem is equivalent to finding the shortest Hamiltonian cycle in a given graph, if at least one Hamiltonian cycle exists in it. Quantum algorithms for this problem typically provide only a quadratic speedup, using Grover's search, thereby having a complexity of $O(N^{N/2})$ or $O(N^N)$. We present a bounded-error quantum polynomial-time (BQP) algorithm for solving the problem, providing an exponential speedup. The overall complexity of our algorithm is $O(N^3 \log(N)\kappa/\epsilon + 1/\epsilon^3)$, where the errors $\epsilon$ are $O(1/\mathrm{poly}(N))$, and $\kappa$ is the not-too-large condition number of the matrix encoding all Hamiltonian cycles.
... • Spanning tree: A spanning tree is a subgraph that connects all vertices in a graph without any cycles, using the minimum edges [314,465,523,524,567,935,1163]. Related concepts include fuzzy minimum spanning tree [374,475,628], intuitionistic fuzzy minimum spanning tree [833], Single-valued neutrosophic minimum spanning tree [1167], and neutrosophic minimum spanning tree graph [742]. ...
Preprint
Full-text available
Graph theory is a fundamental branch of mathematics that studies networks consisting of nodes (vertices) and their connections (edges). Extensive research has been conducted on various graph classes within this field. Fuzzy Graphs and Neutrosophic Graphs are specialized models developed to address uncertainty in relationships. Intersection graphs, such as Unit Square Graphs, Circle Graphs, Ray Intersection Graphs, Grid Intersection Graphs, and String Graphs, play a critical role in analyzing graph structures. In this paper, we explore intersection graphs within the frameworks of Fuzzy Graphs, Intuitionistic Fuzzy Graphs, Neutrosophic Graphs, Turiyam Graphs, and Plithogenic Graphs, highlighting their mathematical properties and interrelationships. Additionally, we provide a comprehensive survey of the graph classes and hierarchies related to intersection graphs and uncertain graphs, reflecting the increasing number of graph classes being developed in these areas.
... The success story of the Lagrangian relaxation principle, see e.g. [10,13], began in the early 1970s with the pioneering work of Held and Karp [17] on the travelling salesman problem, and it has since then become a standard tool within the field of mathematical optimization, and especially within discrete optimization. The success is due to its flexibility and ability to decompose complex optimization problems by exploiting their structural properties. ...
Article
Full-text available
Lagrangian heuristics for discrete optimization work by modifying Lagrangian relaxed solutions into feasible solutions to an original problem. They are designed to identify feasible, and hopefully also near-optimal, solutions and have proven to be highly successful in many applications. Based on a primal-dual global optimality condition for non-convex optimization problems, we develop a meta-heuristic extension of the Lagrangian heuristic framework. The optimality condition characterizes (near-)optimal solutions in terms of near-optimality and near-complementarity measures for Lagrangian relaxed solutions. The meta-heuristic extension amounts to constructing a weighted combination of these measures, thus creating a parametric auxiliary objective function, which is a close relative to a Lagrangian function, and embedding a Lagrangian heuristic in a search procedure in the space of the weight parameters. We illustrate and make a first assessment of this meta-heuristic extension by applying it to the generalized assignment and set covering problems. Our computational experience shows that the meta-heuristic extension of a standard Lagrangian heuristic can significantly improve upon solution quality.
... A minimal spanning tree (MST) is a subset of edges that connects all vertices in a weighted graph with the minimum possible total edge weight, forming a tree structure without any cycles. It ensures every vertex is reachable while minimizing the overall connection cost [87,147,166,167,173,277,360]. The maximum spanning tree problem has also been studied in a similar manner [28,231]. ...
Preprint
Full-text available
In this paper, we conduct a comprehensive study of Trees, Forests, and Paths within the framework of Fuzzy and Neutrosophic Graphs. Graph theory, known for its wide-ranging applications across various fields, is extended here to account for uncertainties through the use of fuzzy and neutrosophic structures. These graphs introduce degrees of truth, indeterminacy, and falsity to better model real-world complexities. Our research delves into the classification and properties of graph structures like trees and paths in this context, aiming to offer deeper insights and contribute to ongoing developments in the field of graph theory.
... Thus, the brute force method of calculating the distance of each route is intractable on a classical computer for even a relatively small set of cities. Many classical algorithms have been developed that approximately solve the TSP, such as branch-and-bound techniques [1][2][3][4] and other heuristic algorithms [5][6][7][8][9][10][11]; however, there is no known classical algorithm that solves the general TSP exactly with polynomial resources. ...
Preprint
Full-text available
We propose an amplitude encoding of the traveling salesperson problem along with a method for calculating the cost function. Our encoding requires a number of qubits which grows logarithmically with the number of cities. We propose to calculate the cost function classically based on the probability distribution extracted from measuring the quantum computer. This is in contrast to the typical method of evaluating the cost function by taking the expectation value of a quantum operator. We demonstrate our method using a variational quantum eigensolver algorithm to find the shortest route for a given graph. We find that there is a broad range in the hyperparameters of the optimization procedure for which the best route is found.
... A detailed description of both SetCoverPy and SCP is beyond the scope of this paper; however, we encourage readers to refer to Zhu (2016) for a detailed discussion. It uses the Lagrangian relaxation algorithm (Held & Karp 1971;Geoffrion 1974;Caprara et al. 1999;Fisher 2004) to find the minimum number of archetypes that span the parent sample. One example of such use of generating archetypes using SetCoverPy is Brodzeller & Dawson (2022), where it was used to identify quasar archetypes for the purpose of creating physical models of quasar spectra. ...
Article
Full-text available
We present a computationally efficient galaxy archetype-based redshift estimation and spectral classification method for the Dark Energy Survey Instrument (DESI) survey. The DESI survey currently relies on a redshift fitter and spectral classifier using a linear combination of principal component analysis–derived templates, which is very efficient in processing large volumes of DESI spectra within a short time frame. However, this method occasionally yields unphysical model fits for galaxies and fails to adequately absorb calibration errors that may still be occasionally visible in the reduced spectra. Our proposed approach improves upon this existing method by refitting the spectra with carefully generated physical galaxy archetypes combined with additional terms designed to absorb data reduction defects and provide more physical models to the DESI spectra. We test our method on an extensive data set derived from the survey validation (SV) and Year 1 (Y1) data of DESI. Our findings indicate that the new method delivers marginally better redshift success for SV tiles while reducing catastrophic redshift failure by 10%–30%. At the same time, results from millions of targets from the main survey show that our model has relatively higher redshift success and purity rates (0.5%–0.8% higher) for galaxy targets while having similar success for QSOs. These improvements also demonstrate that the main DESI redshift pipeline is generally robust. Additionally, it reduces the false-positive redshift estimation by 5%−40% for sky fibers. We also discuss the generic nature of our method and how it can be extended to other large spectroscopic surveys, along with possible future improvements.
... Increasing the distance from one node to its nearest neighbors will thus lead to fewer connections to this node in a minimum spanning tree. With this, a refinement through subgradient optimization can be applied such that the minimum 1-tree more closely resembles a tour, via weights per node added to the distances of all edges incident to the node [7,8,9]. Essentially, the subgradient algorithm iteratively increases the weights, i.e., the distance in the distance matrix, of nodes with a degree of 3 or more and decreases the weights of leaf nodes within the minimum 1-tree. ...
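In symbols, writing $d_i(T^k)$ for the degree of node $i$ in the current minimum 1-tree $T^k$ and $t_k > 0$ for the step size, the quoted update is the usual rule (our notation):

$$ \pi_i^{k+1} = \pi_i^{k} + t_k \bigl( d_i(T^k) - 2 \bigr), $$

which makes nodes of degree 3 or more costlier to connect to and leaf nodes cheaper, nudging the next 1-tree toward degree 2 everywhere.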
Preprint
Full-text available
Solving the Traveling Salesperson Problem (TSP) remains a persistent challenge, despite its fundamental role in numerous generalized applications in modern contexts. Heuristic solvers address the demand for finding high-quality solutions efficiently. Among these solvers, the Lin-Kernighan-Helsgaun (LKH) heuristic stands out, as it complements the performance of genetic algorithms across a diverse range of problem instances. However, frequent timeouts on challenging instances hinder the practical applicability of the solver. Within this work, we investigate a previously overlooked factor contributing to many timeouts: The use of a fixed candidate set based on a tree structure. Our investigations reveal that candidate sets based on Hamiltonian circuits contain more optimal edges. We thus propose to integrate this promising initialization strategy, in the form of POPMUSIC, within an efficient restart version of LKH. As confirmed by our experimental studies, this refined TSP heuristic is much more efficient - causing fewer timeouts and improving the performance (in terms of penalized average runtime) by an order of magnitude - and thereby challenges the state of the art in TSP solving.
... Moreover, the solutions produced by this method can be assessed without requiring knowledge of the true optimal solutions, which is a significant advantage since computing optimal solutions for large BTAMC instances can be computationally prohibitive, as is the case in this study. The application of Lagrangean relaxation to integer programming was first introduced by Held and Karp [21,22]. Later, Fisher [23] and Geoffrion [24] played important roles in advocating its effectiveness for solving integer programs. ...
Article
Full-text available
This paper studies the Bundled Task Assignment Problem in Mobile Crowdsensing (BTAMC), a significant extension of the traditional Task Assignment Problem in Mobile Crowdsensing (TAMC). Unlike TAMC, BTAMC introduces a more realistic scenario where requesters present bundles of two tasks to the platform, giving the platform the flexibility to accept both tasks, accept one, or reject both. This added complexity reflects the multifaceted nature of task assignment in mobile crowdsensing. To address the challenges inherent in BTAMC, we examine two pricing strategies—discount and premium pricing—available to platform operators for pricing task bundles. Additionally, we delve into the critical issue of task quality, emphasizing the quality of workers assigned to each task. This is achieved by ensuring that the overall quality of workers assigned to each task consistently meets a predefined quality threshold, which, in turn, offers a more favorable outcome for all task requesters. The paper presents an integer programming formulation for the BTAMC. This formulation serves as the foundation for a Lagrangean-based solution approach, which has proven to be remarkably effective. Notably, it provides near-optimal solutions even for instances considerably larger than those traditionally encountered in the literature. These contributions offer valuable insights for platform operators and stakeholders in the mobile crowdsensing domain, presenting opportunities to augment profitability and enhance system performance.
... The Lagrangian dual $\max_{\lambda \ge 0} L_R(\lambda)$ is solved by the subgradient optimization introduced by Held and Karp (1971). To each Lagrange multiplier $\lambda_{cs}$, an associated subgradient component is defined by $v_{cs} = 1 - \sum_{p \in P} g_{pcs}$, for $s \in S$, $c \in C(s)$. ...
Article
This paper explores the complex logistics of ship routing and scheduling for fertilizer companies in Brazil, considering the challenges posed by a dynamic international trade landscape affected by factors such as the COVID-19 pandemic and geopolitical conflicts. It expands previous research by incorporating cargo allocation to the heterogeneous fleet in the cargo routing and scheduling problem, which is critical for the contemporary operating scenario. The primary objective is to minimize transportation costs by optimizing ship routes, schedules, and cargo allocation to compartments, while addressing specific operational constraints, such as the capacity of ships’ compartments, delivery delays, change of direction in a ship’s route on the Brazilian coast, segregated storage, and ship stability. Considering the complexity of the generated models for real-life problems, a Lagrangian-based solution method, incorporating a modified relax-and-fix matheuristics and several heuristics to improve the solution process of mixed-integer linear programming solvers, is developed. Experimentation with previous real-life instances reveals that the optimization method obtains implementable operational plans with transportation costs reduced by 47.66%, on average, compared to the amounts paid by companies for sea freight services. A case study demonstrates the applicability of the optimization approach as a decision-support tool for real-life planning.
... A detailed description of both SetCoverPy and SCP is beyond the scope of this paper; however, we encourage readers to refer to Zhu (2016) for a detailed discussion. It uses the Lagrangian relaxation algorithm (Held & Karp 1971; Geoffrion 1974; Caprara et al. 1999; Fisher 2004) to find the minimum number of archetypes that span the parent sample. One example of such use of generating archetypes using SetCoverPy is Brodzeller & Dawson (2022). (LRGs are usually passive and have no or very low star formation activity; therefore, they lack many emission lines.) ...
Preprint
Full-text available
We present a computationally efficient galaxy archetype-based redshift estimation and spectral classification method for the Dark Energy Survey Instrument (DESI) survey. The DESI survey currently relies on a redshift fitter and spectral classifier using a linear combination of PCA-derived templates, which is very efficient in processing large volumes of DESI spectra within a short time frame. However, this method occasionally yields unphysical model fits for galaxies and fails to adequately absorb calibration errors that may still be occasionally visible in the reduced spectra. Our proposed approach improves upon this existing method by refitting the spectra with carefully generated physical galaxy archetypes combined with additional terms designed to absorb data reduction defects and provide more physical models to the DESI spectra. We test our method on an extensive dataset derived from the survey validation (SV) and Year 1 (Y1) data of DESI. Our findings indicate that the new method delivers marginally better redshift success for SV tiles while reducing catastrophic redshift failure by 10-30%. At the same time, results from millions of targets from the main survey show that our model has relatively higher redshift success and purity rates (0.5-0.8% higher) for galaxy targets while having similar success for QSOs. These improvements also demonstrate that the main DESI redshift pipeline is generally robust. Additionally, it reduces the false positive redshift estimation by 5-40% for sky fibers. We also discuss the generic nature of our method and how it can be extended to other large spectroscopic surveys, along with possible future improvements.
... The relaxed constraint set is added to the objective function, attached to multipliers (Lagrange multipliers). The method was pioneered by [Held and Karp, 1970], followed by [Held and Karp, 1971], who applied it to solve the traveling salesman problem. The method has been applied to classical problems such as facility location and set partitioning. ...
... A schematic overview of the decomposition approach is given (Held and Karp 1971; Fisher 2004). We call this the Benders Lagrangian Decomposition (BLD) method. ...
Article
Full-text available
Maintenance activities are inevitable and costly in integrated mining operations. Conducting maintenance may require the whole system, or sub-units of the system, to be shut down temporarily. These maintenance activities not only disrupt the unit being shut down, but they also have consequences for inventory levels and product flow downstream. In this paper, we consider an interconnected mining system in which there are complicated maintenance relationships and stock accumulation at intermediate nodes. We propose a time-indexed mixed-integer linear programming formulation to optimize the long-term integrated maintenance plan and maximize the total throughput. We also devise an algorithm, which combines Benders decomposition and Lagrangian relaxation, to accelerate the computational speed. To validate our mathematical model, we perform simulations for a real-world case study in the iron ore industry. The results show that our method can yield better solutions than CPLEX optimization solver alone in faster time.
... Mixed-integer programming problems are usually solved exactly using the branch-and-bound method. Nikolaev A. et al. [35] proposed a branch-and-bound algorithm based on the 1-tree Lagrangian relaxation, with a new branching strategy: branching on the 1-tree edges incident to the highest-degree vertex of the 1-tree, choosing the edge with the largest tolerance. Testing on benchmark instances from TSPLIB shows higher solution quality compared to branching on the shortest edge and to the strong branching proposed by Held M. et al. [36]. However, exact algorithms remain limited for large-scale TSP because of the growth in the number of variables and constraints. ...
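Schematically, a 1-tree-based branch-and-bound combines the relaxation bound with a branching dichotomy on an edge of the current 1-tree. The sketch below is our illustration of the control flow only: `bound_fn` is an assumed callable solving the 1-tree relaxation under edge inclusion/exclusion constraints, edges are frozensets {u, v}, and the naive choice of branching edge stands in for the tolerance-based rules discussed above.

```python
import math

def branch_and_bound(n, bound_fn, incumbent=math.inf):
    """Skeleton of 1-tree-based branch-and-bound. bound_fn(included,
    excluded) is assumed to return (lower_bound, one_tree_edges, degree)
    for the 1-tree relaxation respecting the edge constraints."""
    def recurse(included, excluded, best):
        lb, edges, degree = bound_fn(included, excluded)
        if lb >= best:
            return best               # prune: bound cannot beat incumbent
        if all(d == 2 for d in degree):
            return lb                 # the 1-tree is a tour: new incumbent
        v = next(i for i in range(n) if degree[i] > 2)
        e = next((e for e in edges if v in e and e not in included), None)
        if e is None:
            return best               # no undecided edge left to branch on
        # dichotomy: either e is in the tour or it is not
        best = recurse(included, excluded | {e}, best)
        best = recurse(included | {e}, excluded, best)
        return best
    return recurse(frozenset(), frozenset(), incumbent)
```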
Article
Full-text available
The ant colony algorithm faces dimensional catastrophe problems when solving the large-scale traveling salesman problem, which leads to unsatisfactory solution quality and convergence speed. To solve this problem, an adaptive ant colony optimization for large-scale traveling salesman problem (AACO-LST) is proposed. First, AACO-LST improves the state transfer rule to make it adaptively adjust with the population evolution, thus accelerating its convergence speed; then, the 2-opt operator is used to locally optimize the part of better ant paths to further optimize the solution quality of the proposed algorithm. Finally, the constructed adaptive pheromone update rules can significantly improve the search efficiency and prevent the algorithm from falling into local optimal solutions or premature stagnation. The simulation based on 45 traveling salesman problem instances shows that AACO-LST improves the solution quality by 79% compared to the ant colony system (ACS), and in comparison with other algorithms, the PE of AACO-LST is not more than 1% and the Err is not more than 2%, which indicates that AACO-LST can find high-quality solutions with high stability. Finally, the convergence speed of the proposed algorithm was tested. The data shows that the average convergence speed of AACO-LST is more than twice that of the comparison algorithm. The relevant code can be found on our project homepage.
... We combine the subgradient method [48] and the surrogate subgradient method [49], the most popular, state-of-the-art techniques for obtaining solutions to the Lagrangian dual, with two different formulations for the NAC constraints in (2a) and (2b) (specifically, the sequential formulation as given in (4a) and the asymmetric formulation provided in (4b) [39]). In this context, we call the second method LDSS, the combination of the subgradient method and the sequential NAC formulation, while the third method, LDSA, is composed of the subgradient method and the asymmetric NAC formulation. ...
Article
Full-text available
Chronic disease patients often require revisits for long-term care. Online medical services shift revisits to online, which can improve the access to chronic care and reduce the burden on offline medical services. However, whether Internet healthcare can truly match the medical supply and demand, one of the critical issues is the efficient advance scheduling of the integrated online and offline systems. This study investigates the advance scheduling problem for the first visit and revisit patients in chronic care. The uncertainty of revisit status (i.e., online or offline) and heterogeneity of online and offline revisits (i.e., revisit interval, continuity of care violation penalty) are considered. A stochastic mixed-integer programming model is formulated for assigning patients to a specific physician on a specific day over the course of a finite planning period. The aim is to minimize the expected sum of three cost components related to offline and online services: overtime and idle time, continuity of care violation penalty, and fixed setup. This study proposes a modified progressive hedging algorithm and applies a sequential decision-making framework to obtain rolling time advance schedules. Results of the numerical analysis demonstrate the effectiveness of our algorithm compared to both the published state-of-the-art Lagrangian decomposition embedded with surrogate subgradient method and the commercial solver Gurobi. The insight obtained from the experiments is that a capacity allocation scheme with all physicians assigned with both offline and online capacities would be a good choice for considerable cost savings. Note to Practitioners —Internet healthcare is becoming increasingly popular. Operation and management issues have arisen in the integrated online and offline appointment systems. A sequential decision-making method embedded with a stochastic programming model and a modified PHA is proposed to help decision-makers generate the first visit and revisit advance schedules for chronic care. The performance of this approach and the system is thoroughly verified. Results show that the developed decision technique can lessen the operational cost generated by scheduling and realize the goal of continuity of care. This study offers a useful tool to help with intelligent patient advance scheduling in an integrated management system of online and offline chronic care.
Thesis
The dissertation provides a detailed account of our research on developing a quantum algorithm for the Traveling Salesperson Problem (TSP). TSP involves finding the shortest route through a city network, a known NP-hard problem. We introduce a quantum polynomial time algorithm addressing the existence of a Hamiltonian cycle in a graph and the more complex task of determining the shortest Hamiltonian cycle in the context of TSP. Preliminary results suggest our quantum algorithm offers exponential speedup over classical counterparts, albeit with a small probability.
Article
Full-text available
Biofuel has gained significant attention as a potential source to meet fuel demands instead of fossil fuel. The price of biofuel and alternative fuel has a considerable impact on biofuel demand. Thus, it is important to design a biofuel supply chain network that incorporates the biofuel price into an elastic demand. More precisely, a variable demand, including customer importance level (to the environment), biofuel price, and substituted fuel price, is considered in this research. Furthermore, this research presents a bi-objective mixed-integer quadratic formulation that aims to maximize the total profit of the supply chain and carbon absorption in harvesting areas. The problem is solved by the ε-constraint and Lagrangian relaxation methods due to its complexity. Moreover, substituted fuel price uncertainty is addressed by two-stage stochastic programming. Finally, a real case study utilizing the data envelopment analysis approach is applied to assess the efficiency and currency of the addressed model. Several consequences are illustrated in the case study, such as rich areas for exporting algae, suggesting hub locations for biofuel production, etc.
Article
Intelligent reflecting surface (IRS) alters the wireless channel by dynamically adjusting the propagation of wireless signals, thereby having the ability to enhance communication performance. Due to its low power consumption, IRS is expected to play an important role in improving energy efficiency (EE). In this paper, both optical IRS (OIRS) and radio frequency (RF) IRS are employed to assist aggregated visible light communication (VLC)-RF systems, and then a resource allocation algorithm is proposed to improve EE. To this end, a model for the IRS-assisted aggregated VLC-RF system is first established, followed by the formulated EE maximization problem. Furthermore, the EE maximization problem is decomposed into four subproblems, focusing on RF IRS configuration, optical subchannel assignment, OIRS arrangement, and joint power allocation. By employing block coordinate descent (BCD), the four subproblems are solved iteratively. Moreover, simulation results show the significant EE improvement brought by the IRS for aggregated VLC-RF systems. In addition, the convergence and effectiveness of the proposed BCD-based resource allocation algorithm, as well as the influence of key system parameters on EE are also demonstrated.
Article
The state-of-the-art auxiliary calibration algorithms can perform comprehensive calibration of various array non-ideal characteristics, such as mutual coupling, gain/phase uncertainties, and sensor position errors, employing a set of cooperative calibration sources with known direction of arrival (DOA). However, the task of deploying calibration sources at precisely measured DOAs is complex. Otherwise, the calibration source DOA errors would seriously degrade the performance of these algorithms. In this paper, a dimension-reduction maximum likelihood calibration algorithm with inaccurate cooperative sources is proposed to overcome this issue. First, a maximum likelihood calibration model is established including both the unknown array non-ideal parameters and 2-D DOAs of all calibration sources. Next, the ambiguity of sensor position estimation caused by inaccurate 2-D DOAs of calibration sources is analyzed. Furthermore, a dimension-reduction maximum likelihood calibration model is proposed to resolve the ambiguity under the zero mean Gaussian distribution assumption of the calibration source elevation errors. Then, since the proposed multi-parameter dimension-reduction model is non-convex and multimodal, a new filled function method is proposed to cope with its local extrema attractors. The proposed single-parameter filled function has a single form without an exponential term and is second-order continuously differentiable, which is stable for numerical calculations and easy to optimize by local optimization tools. Finally, the closed-form hybrid Cramer-Rao lower-bound expressions of array parameters under unknown source DOAs are derived in detail. Numerical results verify the effectiveness of the proposed algorithm.
Article
Full-text available
A group of satellites, with either homogeneous or heterogeneous orbital characteristics and/or hardware specifications, can undertake a reconfiguration process due to variations in operations pertaining to Earth observation missions. This paper investigates the problem of optimizing a satellite constellation reconfiguration process against two competing mission objectives: 1) the maximization of the total coverage reward, and 2) the minimization of the total cost of the transfer. The decision variables for the reconfiguration process include the design of the new configuration and the assignment of satellites from one configuration to another. We present a novel biobjective integer linear programming formulation that combines constellation design and transfer problems. The formulation lends itself to the use of generic mixed-integer linear programming (MILP) methods such as the branch-and-bound algorithm for the computation of provably optimal solutions; however, these approaches become computationally prohibitive even for moderately sized instances. In response to this challenge, this paper proposes a Lagrangian relaxation-based heuristic method that leverages the assignment problem structure embedded in the problem. The results from the computational experiments attest to the near-optimality of the Lagrangian heuristic solutions and a significant improvement in the computational runtime as compared to a commercial MILP solver.
Article
Let A be a closed set of points in the n-dimensional Euclidean space E^n. If p and p1 are points of E^n such that condition (1.1) holds, then p1 is said to be point-wise closer than p to the set A. If p is such that there is no point p1 which is point-wise closer than p to A, then p is called a closest point to the set A.
Article
This paper explores a dynamic programming approach to the solution of three sequencing problems: a scheduling problem involving arbitrary cost functions, the traveling-salesman problem, and an assembly line balancing problem. Each of the problems is shown to admit of numerical solution through the use of a simple recursion scheme; these recursion schemes also exhibit similarities and contrasts in the structures of the three problems. For large problems, direct solution by means of dynamic programming is not practical, but procedures are given for obtaining good approximate results by solving sequences of smaller derived problems. Experience with a computer program for the solution of traveling-salesman problems is presented.
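The recursion scheme for the traveling-salesman case is not spelled out in the abstract; as a hedged illustration, the following Python sketch implements the standard subset dynamic program associated with this line of work, assuming a symmetric distance matrix d indexed from 0 (the names held_karp and C are illustrative, not the paper's notation).

    from itertools import combinations

    def held_karp(d):
        """Subset dynamic program for the TSP on distance matrix d.

        C[(S, j)] holds the cost of the cheapest path that starts at city 0,
        visits every city of frozenset S exactly once, and ends at j in S.
        """
        n = len(d)
        C = {(frozenset([j]), j): d[0][j] for j in range(1, n)}
        for size in range(2, n):
            for subset in combinations(range(1, n), size):
                S = frozenset(subset)
                for j in S:
                    C[(S, j)] = min(C[(S - {j}, k)] + d[k][j] for k in S - {j})
        full = frozenset(range(1, n))
        return min(C[(full, j)] + d[j][0] for j in full)

The table has O(n·2^n) entries, which is exactly why direct solution is impractical for large problems and the paper resorts to sequences of smaller derived problems.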
Article
It is shown that a certain tour of 49 cities, one in each of the 48 states and Washington, D.C., has the shortest road distance.
Article
In various numerical problems one is confronted with the task of solving a system of linear inequalities (1.1), i = 1, …, m, assuming, of course, that the system is consistent. Sometimes one has, in addition, to minimize a given linear form l(x). Thus, in linear programming one obtains a problem of the latter type.
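The abstract states the problem but not the iteration; as a hedged sketch of the relaxation step usually associated with this method, the following projects the current point onto (or past) the bounding hyperplane of the most violated inequality, assuming the system is written as A x ≥ b (function name and parameters are illustrative):

    import numpy as np

    def relaxation_method(A, b, x0, lam=1.0, tol=1e-9, max_iter=100000):
        """Solve a consistent system A @ x >= b by successive projections."""
        A = np.asarray(A, dtype=float)
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            residuals = b - A @ x            # positive entries are violated rows
            i = int(np.argmax(residuals))
            if residuals[i] <= tol:          # every inequality (nearly) holds
                return x
            # move toward the hyperplane a_i . x = b_i; lam = 1 lands on it,
            # 1 < lam < 2 overshoots ("over-relaxation")
            x = x + lam * residuals[i] / np.dot(A[i], A[i]) * A[i]
        return x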
Article
A survey and synthesis of research on the traveling salesman problem is given. The problem is defined and several theorems are presented. This is followed by a general classification of the solution techniques and a description of some of the proven methods. A summary of computational results is given.
Article
This paper explores new approaches to the symmetric traveling-salesman problem in which 1-trees, which are a slight variant of spanning trees, play an essential role. A 1-tree is a tree together with an additional vertex connected to the tree by two edges. We observe that (i) a tour is precisely a 1-tree in which each vertex has degree 2, (ii) a minimum 1-tree is easy to compute, and (iii) the transformation on "intercity distances" c_ij → c_ij + π_i + π_j leaves the traveling-salesman problem invariant but changes the minimum 1-tree. Using these observations, we define an infinite family of lower bounds w(π) on C*, the cost of an optimum tour. We show that max_π w(π) = C* precisely when a certain well-known linear program has an optimal solution in integers. We give a column-generation method and an ascent method for computing max_π w(π), and construct a branch-and-bound method in which the lower bounds w(π) control the search for an optimum tour.
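A minimal sketch of the two computational ingredients named above, assuming a symmetric cost matrix c with the special 1-tree vertex taken as city 0 (function names are illustrative): a minimum 1-tree is a minimum spanning tree on the remaining vertices plus the two cheapest edges at the special vertex, and w(π) is the minimum 1-tree cost under the transformed distances minus 2·Σπ.

    def min_one_tree_cost(c):
        """Minimum spanning tree on vertices 1..n-1 (Prim), plus the two
        cheapest edges incident to the special vertex 0."""
        n = len(c)
        in_tree = [False] * n
        in_tree[1] = True
        best = [c[1][j] for j in range(n)]   # cheapest link into the partial tree
        cost = 0.0
        for _ in range(n - 2):
            j = min((v for v in range(2, n) if not in_tree[v]),
                    key=lambda v: best[v])
            in_tree[j] = True
            cost += best[j]
            for k in range(2, n):
                if not in_tree[k] and c[j][k] < best[k]:
                    best[k] = c[j][k]
        return cost + sum(sorted(c[0][1:])[:2])

    def w(c, pi):
        """Lower bound w(pi) on the optimal tour cost."""
        n = len(c)
        ct = [[c[i][j] + pi[i] + pi[j] for j in range(n)] for i in range(n)]
        return min_one_tree_cost(ct) - 2 * sum(pi)

Since every tour is a 1-tree in which each vertex has degree 2, w(π) never exceeds C*, and maximizing over π (for instance by the ascent method) tightens the bound.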
Article
A code for solving the travelling salesman problem employing heuristic ideas is described. Cyclic permutations of the cities are constructed by first choosing two cities at random for a permutation of length two, putting the remaining cities in a random list, and then inserting cities from the list into the partially constructed permutations so that they add least to the length of the partial tour. A second heuristic idea used in the code is that of breaking the problem up into convex, or almost convex, subproblems and employing the above-mentioned heuristic on these subproblems. Numerical experience with the code is described, as well as weaknesses and strengths of the method.
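The first heuristic reads as a cheapest-insertion construction; a minimal sketch under that reading, leaving out the decomposition into convex subproblems (names illustrative):

    import random

    def cheapest_insertion(d, seed=None):
        """Start from a random two-city tour, then insert each remaining city,
        in random order, at the position that adds least to the tour length."""
        rng = random.Random(seed)
        n = len(d)
        tour = rng.sample(range(n), 2)
        pending = [v for v in range(n) if v not in tour]
        rng.shuffle(pending)                 # the "random list" of the abstract
        for v in pending:
            m = len(tour)
            # added length if v goes between tour[i] and tour[(i + 1) % m]
            best_i = min(range(m),
                         key=lambda i: d[tour[i]][v] + d[v][tour[(i + 1) % m]]
                                       - d[tour[i]][tour[(i + 1) % m]])
            tour.insert(best_i + 1, v)
        return tour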
Chapter
The RAND Corporation in the early 1950s contained “what may have been the most remarkable group of mathematicians working on optimization ever assembled” [6]: Arrow, Bellman, Dantzig, Flood, Ford, Fulkerson, Gale, Johnson, Nash, Orchard-Hays, Robinson, Shapley, Simon, Wagner, and other household names. Groups like this need their challenges. One of them appears to have been the traveling salesman problem (TSP) and particularly its instance of finding a shortest route through Washington, DC, and the 48 states [4, 7].
Article
We consider a graph with n vertices, all pairs of which are connected by an edge; each edge is of given positive length. The following two basic problems are solved. Problem 1: construct the tree of minimal total length between the n vertices. (A tree is a graph with one and only one path between any two vertices.) Problem 2: find the path of minimal total length between two given vertices.
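Problem 1 is the minimum spanning tree construction that runs through this whole document; for Problem 2, the following is a modern heap-based rendering of the shortest-path idea (the original paper predates such data structures, so this is an illustration rather than the paper's procedure):

    import heapq

    def shortest_path_length(d, s, t):
        """Length of the shortest path from s to t in a complete graph with
        positive edge lengths d[i][j]."""
        n = len(d)
        dist = [float("inf")] * n
        dist[s] = 0.0
        heap = [(0.0, s)]
        while heap:
            du, u = heapq.heappop(heap)
            if u == t:
                return du
            if du > dist[u]:
                continue                     # stale heap entry
            for v in range(n):
                if v != u and du + d[u][v] < dist[v]:
                    dist[v] = du + d[u][v]
                    heapq.heappush(heap, (dist[v], v))
        return dist[t]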
Article
Let X be a finite ordered set and let Φ be a function from X to subsets of a set Y. A subset A of X is called assignable if there is an injection f from A to Y such that f(a) ∈ Φ(a) for all a in A. It is shown that the assignable sets form a matroid on X, and using this it is shown that there exists an optimal assignable set Ā, meaning that if A is any other assignable set then there is an injection f from A to Ā such that f(a) ≥ a for all a in A.
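Whether a given set is assignable is precisely a bipartite matching question: an injection f with f(a) ∈ Φ(a) exists iff an augmenting-path search succeeds for every element. A small sketch, with Phi given as a dict from elements of X to collections of elements of Y (names illustrative):

    def is_assignable(A, Phi):
        """True iff there is an injection f from A to Y with f(a) in Phi[a]."""
        match = {}                           # y -> element of A currently using y

        def augment(a, seen):
            for y in Phi[a]:
                if y not in seen:
                    seen.add(y)
                    if y not in match or augment(match[y], seen):
                        match[y] = a
                        return True
            return False

        return all(augment(a, set()) for a in A)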
Article
The traveling-salesman problem is a generalized form of the simple problem of finding the smallest closed loop that connects a number of points in a plane. Efforts in the past to find an efficient method for solving it have met with only partial success. The present paper describes a method of solution that has the following properties: (a) it is applicable to both symmetric and asymmetric problems with random elements; (b) it does not use subjective decisions, so it can be completely mechanized; (c) it is appreciably faster than any other method proposed; (d) it can be terminated at any point where the solution obtained so far is deemed sufficiently accurate.
Article
Two algorithms for solving the (symmetric distance) traveling salesman problem have been programmed for a high-speed digital computer. The first produces guaranteed optimal solutions for problems involving no more than 13 cities; the time required (IBM 7094 II) varies from 60 milliseconds for a 9-city problem to 1.75 seconds for a 13-city problem. The second algorithm produces precisely characterized, locally optimal solutions for large problems (up to 145 cities) in an extremely short time and is based on a general heuristic approach believed to be of general applicability to various optimization problems. The average time required to obtain a locally optimal solution is under 30n³ microseconds, where n is the number of cities involved. Repeated runs on a problem from random initial tours result in a high probability of finding the optimal solution among the locally optimal solutions obtained. For large problems, where many locally optimal solutions have to be obtained in order to be reasonably assured of having the optimal solution, an efficient reduction scheme is incorporated in the program to reduce the total computation time by a substantial amount.
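The second algorithm's structure, local optimization from repeated random starts, can be sketched as follows; for brevity this descends with 2-opt moves where Lin's program uses the stronger 3-opt, and the reduction scheme is omitted:

    import random

    def tour_length(d, tour):
        n = len(tour)
        return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def two_opt(d, tour):
        """Descend to a 2-opt local optimum by reversing tour segments."""
        n = len(tour)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n if i > 0 else n - 1):
                    a, b = tour[i], tour[i + 1]
                    c, e = tour[j], tour[(j + 1) % n]
                    if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour

    def restart_search(d, runs=20, seed=0):
        """Repeated runs from random initial tours; keep the best local optimum."""
        rng = random.Random(seed)
        best = None
        for _ in range(runs):
            tour = list(range(len(d)))
            rng.shuffle(tour)
            tour = two_opt(d, tour)
            if best is None or tour_length(d, tour) < tour_length(d, best):
                best = tour[:]
        return best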