
In this article, the first approach for solving the vertex-biconnectivity augmentation problem (V2AUG) to optimality is proposed. Given a spanning subgraph of an edge-weighted graph, we search for the cheapest subset of edges to augment this subgraph to make it vertex-biconnected. The problem is reduced to augmentation of the corresponding block-cut tree [Khuller and Thurimella, J Algorithms 14 (1993), 214–225], and its connectivity properties are exploited to develop two minimum-cut-based ILP formulations: a directed and an undirected one. In contrast to the recently obtained result for the more general vertex-biconnected Steiner network problem [Chimani et al., Proceedings of 2nd Annual International Conference on Combinatorial Optimization and Applications, Lecture Notes in Computer Science, Vol. 5165, Springer, 2008, pp. 190–200], our theoretical comparison shows that orienting the undirected graph does not help in improving the quality of lower bounds. Hence, starting from the undirected cut formulation, we develop a branch-and-cut-and-price (BCP) algorithm which represents the first exact approach to V2AUG. Our computational experiments show the practical feasibility of BCP: complete graphs with more than 400 vertices can be solved to provable optimality. Furthermore, BCP is even faster than state-of-the-art metaheuristics and approximation algorithms for graphs up to 200 vertices. For large graphs with more than 2000 vertices, optimality gaps that are strictly below 2% are reported. © 2009 Wiley Periodicals, Inc. NETWORKS, 2010
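
The block-cut tree reduction this approach builds on can be illustrated with a short, self-contained sketch (stdlib Python only; the function names and adjacency-dict input format are my own, and the leaf-counting bound shown is the classical lower bound on the number of augmenting edges, not the paper's ILP):

```python
# Illustrative sketch of the block-cut decomposition underlying V2AUG.

def biconnected_blocks(adj):
    """Return (blocks, cut_vertices) via the classic Hopcroft-Tarjan DFS."""
    disc, low, cut, blocks, stack, timer = {}, {}, set(), [], [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:
                stack.append((u, v))
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:
                    # u separates v's subtree: close off one block
                    if parent is not None or children > 1:
                        cut.add(u)
                    block = set()
                    while True:
                        e = stack.pop()
                        block.update(e)
                        if e == (u, v):
                            break
                    blocks.append(block)
            elif disc[v] < disc[u]:
                stack.append((u, v))
                low[u] = min(low[u], disc[v])

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return blocks, cut

def leaf_blocks(adj):
    """Leaves of the block-cut tree: blocks containing exactly one cut vertex.

    Any vertex-biconnectivity augmentation must attach every leaf block,
    so ceil(leaves / 2) new edges is a simple lower bound.
    """
    blocks, cut = biconnected_blocks(adj)
    return sum(1 for b in blocks if len(b & cut) == 1)

# A path on five vertices has two leaf blocks (its two end edges).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(leaf_blocks(path))  # 2
```
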


We consider the Network Design Problem with Vulnerability Constraints (NDPVC), which simultaneously addresses resilience against failures (network survivability) and bounds on the lengths of each communication path (hop constraints). Solutions to the NDPVC are subgraphs containing a path of length at most H_st for each commodity {s, t}, and a path of length at most H'_st between s and t after at most k − 1 edge failures. We first show that a related and well-known problem from the literature, the Hop-Constrained Survivable Network Design Problem (kHSNDP), which addresses the same two measures, produces solutions that are too conservative, in the sense that they might be too expensive in practice or may even fail to provide feasible solutions. We also explain that the reason for this difference is that Mengerian-like theorems do not hold in general when considering hop constraints. Three graph-theoretical characterizations of feasible solutions to the NDPVC are derived and used to propose integer linear programming formulations. In a computational study, we compare these alternatives with respect to the lower bounds obtained from the corresponding linear programming relaxations and their capability of solving instances to proven optimality. In addition, we show that in many cases the solutions produced by solving the NDPVC are cheaper than those obtained by the related kHSNDP.

This chapter analyzes optimal attack-tolerant network design and augmentation strategies for bounded-diameter networks. In the definitions of attack tolerance used in this chapter, we generally require that a network has a guaranteed ability to maintain not only the overall connectivity, but also preserve the same diameter after multiple failures of network components (nodes and/or edges), regardless of whether these failures are random or targeted. This property is referred to as “strong” attack tolerance, whereas the property of a network to maintain just the regular connectivity after node/edge failures (with no explicit restriction on the diameter), such as in the case of K-connected networks, is referred to as “weak” attack tolerance. We analyze necessary and sufficient conditions for guaranteed “weak” and “strong” attack tolerance properties for fixed-diameter networks, including the important special case of diameter-2 (two-hop) networks. We demonstrate that the recently introduced concept of an R-robust 2-club is the only diameter-2 network configuration that is guaranteed to have a strong attack tolerance property (i.e., maintain both connectivity and diameter 2) after any R-1 edges are deleted. Furthermore, we demonstrate that if all edges have the same construction cost, the problem of optimal R-robust 2-club network design has an exact analytical solution that requires O(Rn) constructed edges, which makes this configuration asymptotically as cost-efficient as a regular sparse connected network. We also give linear 0-1 formulations for related network design and augmentation problems with different edge construction costs, which are NP-hard in the general case. Illustrative examples are provided to demonstrate the considered concepts and results.

This book presents recent developments and results found by participants of the Third International Conference on the Dynamics of Information Systems, which took place at the University of Florida, Gainesville FL, USA on February 16-18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and universities to exchange knowledge and results in a broad range of topics relevant to the theory and practice of the dynamics of information systems. Dynamics of Information plays an increasingly critical role in our society. The influence of information on social, biological, genetic, and military systems must be better understood to achieve large advances in the capability and understanding of these systems. Applications are widespread and include: research in evolutionary theory, optimization of information workflow, military applications, climate networks, collision work, and much more.

We consider the problems of minimum-cost design and augmentation of directed network clusters that have diameter 2 and maintain the same diameter after the deletion of up to R elements (nodes or arcs) anywhere in the cluster. The property of a network to maintain not only the overall connectivity, but also the same diameter after the deletion of multiple nodes/arcs is referred to as strong attack tolerance. This paper presents the proof of NP-completeness of the decision version of the problem, derives tight theoretical bounds, and develops a heuristic algorithm for the considered problems, which are extremely challenging to solve to optimality even for small networks. Computational experiments suggest that the proposed heuristic algorithm does identify high-quality near-optimal solutions; moreover, in the special case of undirected networks with identical arc construction costs, the algorithm provably produces an exact optimal solution to the strongly attack-tolerant two-hop network design problem, regardless of the network size.

One way to achieve reliability with low latency is through multi-path routing and transport protocols that build redundant delivery channels (or data paths) to reduce end-to-end packet losses and retransmissions. However, the applicability and effectiveness of such protocols are limited by the topological constraints of the underlying communication infrastructure. Multiple data delivery paths can only be constructed over networks that are capable of supporting multiple paths. In mission-critical wireless networks, the underlying network topology is directly affected by the terrain, location, and environmental interferences; however, the settings of the wireless radios at each node can be properly configured to compensate for these effects for multi-path support. In this work we investigate optimization models for topology designs that enable end-to-end dual-path support on a distributed wireless sensor network. We consider the case of a fixed sensor network with isotropic antennas, where the control variable for topology management is the transmission power on network nodes. For optimization modeling, the network metrics of relevance are coverage, robustness, and power utilization. The optimization models proposed in this work eliminate some of the typical assumptions made in the pertinent network design literature that are too strong in this application context.

Integer and combinatorial optimization deals with problems of maximizing or minimizing a function of many variables subject to (a) inequality and equality constraints and (b) integrality restrictions on some or all of the variables. Because of the robustness of the general model, a remarkably rich variety of problems can be represented by discrete optimization models. This chapter is concerned with the formulation of integer optimization problems, which means how to translate a verbal description of a problem into a mathematical statement in the form of a linear mixed-integer programming problem (MIP), a linear (pure) integer programming problem (IP), or a combinatorial optimization problem (CP). The chapter presents two important uses of binary variables in the modeling of optimization problems. The first concerns the representation of nonlinear objective functions of the form ∑_j f_j(y_j) using linear functions and binary variables. The second concerns the modeling of disjunctive constraints.
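
The two uses of binary variables mentioned above can be sketched concretely (a generic textbook-style illustration, not taken verbatim from the chapter):

```latex
% First use: a separable objective \sum_j f_j(y_j) with a fixed-charge jump
% c_j at y_j > 0 is linearized by a binary x_j and an upper bound M_j:
\min \sum_j \left( p_j\, y_j + c_j\, x_j \right)
\quad \text{s.t.}\quad 0 \le y_j \le M_j\, x_j,\qquad x_j \in \{0,1\}.

% Second use: the disjunction "either a^\top y \le b_1 or c^\top y \le b_2"
% is modeled with binaries x_1, x_2 and big-M constants M_1, M_2:
a^\top y \le b_1 + M_1 (1 - x_1),\qquad
c^\top y \le b_2 + M_2 (1 - x_2),\qquad
x_1 + x_2 \ge 1,\quad x_i \in \{0,1\}.
```
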

In this paper we describe an implementation of a cutting plane algorithm for the perfect matching problem which is based on the simplex method. The algorithm has the following features:
-It works on very sparse subgraphs of K_n which are determined heuristically; global optimality is checked using the reduced cost criterion.
-Cutting plane recognition is usually accomplished by heuristics. Only if these fail is the Padberg-Rao procedure invoked to guarantee finite convergence.
Our computational study shows that, on average, very few variables and very few cutting planes suffice to find a globally optimal solution. We could solve in this way matching problems on complete graphs with up to 1000 nodes. Moreover, it turned out that our cutting plane algorithm is competitive with the fast combinatorial matching algorithms known to date.

We consider {0,1,2}-Survivable Network Design problems with node-connectivity constraints. In the most prominent variant, we are given an edge-weighted graph and two customer sets R1 and R2; we ask for a minimum-cost subgraph that connects all customers and guarantees two-node-connectivity for the R2 customers. We also consider an alternative of this problem, in which 2-node-connectivity is only required w.r.t. a certain root node, and its prize-collecting variant. The central result of this paper is a novel graph-theoretical characterization of 2-node-connected graphs via orientation properties. This allows us to derive two classes of ILP formulations based on directed graphs, one using multi-commodity flow and one using cut inequalities. We prove the theoretical advantages of these directed models compared to the previously known ILP approaches. We show that our two concepts are equivalent from the polyhedral point of view. On the other hand, our experimental study shows that the cut formulation is much more powerful in practice. Moreover, we propose a collection of benchmark instances that can be used for further research on this topic.
Keywords: Graph orientation; 2-connected networks; ILP formulations; Branch-and-cut
Mathematics Subject Classification (2000): 90C27, 90C57, 90C90

We consider a survivable network design problem known as the 2-Node-Connected Steiner Network Problem (2NCON): we are given a weighted undirected graph with a node partition into two sets of customer nodes and one set of Steiner nodes. We ask for the minimum weight connected subgraph containing all customer nodes, in which the nodes of the second customer set are nodewise 2-connected. This problem class has received lively attention in the past, especially with regard to exact ILP formulations and their polyhedral properties.
In this paper, we present a transformation of 2NCON into a related problem on directed graphs and use this to establish two novel ILP formulations, based on multi-commodity flow and on directed cuts, respectively. We prove the strength of our formulations over the known formulations, and compare our ILPs theoretically and experimentally. This paper thereby constitutes the first experimental study of exact 2NCON algorithms considering more than ~100 nodes, and shows that graphs with up to 4900 nodes can be solved to provable optimality.

Branch-and-cut(-and-price) algorithms belong to the most successful techniques for solving mixed integer linear programs and combinatorial optimization problems to optimality (or, at least, with certified quality). In this unit, we concentrate on sequential branch-and-cut for hard combinatorial optimization problems, while branch-and-cut for general mixed integer linear programming is treated in [→ Martin] and parallel branch-and-cut is treated in [→ Ladányi/Ralphs/Trotter]. After telling our most recent story of a successful application of branch-and-cut in Section 1, we give in Section 2 a brief review of the history, including the contributions of pioneers with an emphasis on the computational aspects of their work. In Section 3, the components of a generic branch-and-cut algorithm are described and illustrated on the traveling salesman problem. In Section 4, we first elaborate a bit on the important separation problem, where we use the traveling salesman problem and the maximum cut problem as examples; then we show how branch-and-cut can be applied to problems with a very large number of variables (branch-and-cut-and-price). Section 5 is devoted to the design and applications of the ABACUS software framework for the implementation of branch-and-cut algorithms. Finally, in Section 6, we make a few remarks on the solution of the exercise consisting of the design of a simple TSP solver in ABACUS.

This paper considers the problem of augmenting a given graph by a cheapest possible set of additional edges in order to make the graph vertex-biconnected. A real-world instance of this problem is the enhancement of an already established computer network to become robust against single node failures. The presented memetic algorithm includes an effective preprocessing of problem data and a fast local improvement strategy which is applied during initialization, mutation, and recombination. Only feasible, locally optimal solutions are created as candidates. Empirical results indicate the superiority of the new approach over two previous heuristics and an earlier evolutionary method.

The Prize-Collecting Steiner Tree Problem (PCST) on a graph with edge costs and vertex profits asks for a subtree minimizing the sum of the total cost of all edges in the subtree plus the total profit of all vertices not contained in the subtree. PCST appears frequently in the design of utility networks where profit generating customers and the network connecting them have to be chosen in the most profitable way.
Our main contribution is the formulation and implementation of a branch-and-cut algorithm based on a directed graph model where we combine several state-of-the-art methods previously used for the Steiner tree problem. Our method outperforms the previously published results on the standard benchmark set of problems.
We can solve all benchmark instances from the literature to optimality, including some of them for which the optimum was not known. Compared to a recent algorithm by Lucena and Resende, our new method is faster by more than two orders of magnitude. We also introduce a new class of more challenging instances and present computational results for them. Finally, for a set of large-scale real-world instances arising in the design of fiber optic networks, we also obtain optimal solution values.

We study the polyhedron associated with a network design problem which consists in determining at minimum cost a two-connected network such that the shortest cycle to which each edge belongs (a “ring”) does not exceed a given length K. We present here a new formulation of the problem and derive facet results for different classes of valid inequalities. We study the separation problems associated with these inequalities and their integration in a Branch-and-Cut algorithm, and provide extensive computational results.

This paper considers the problem of augmenting a given graph by a cheapest possible set of additional edges in order to make the graph vertex-biconnected. A real-world instance of this problem is the enhancement of an already established computer network to become robust against single node failures. The presented memetic algorithm includes effective preprocessing of problem data and a fast local improvement strategy which is applied before a solution is included into the population. In this way, the memetic algorithm's population always consists of only feasible, locally optimal solution candidates. Empirical results on two sets of test instances indicate the superiority of the new approach over two previous heuristics and an earlier genetic algorithm.

Motivation.- Network survivability models using node types.- Survivable network design under connectivity constraints - a survey.- Decomposition.- Basic inequalities.- Lifting theorems.- Partition inequalities.- Node partition inequalities.- Lifted r-cover inequalities.- Comb inequalities.- How to find valid inequalities.- Implementation of the cutting plane algorithm.- Computational results.

We consider the important practical and theoretical problem of designing a low-cost communications network which can survive failures of certain network components. Our initial interest in this area was motivated by the need to design certain “two-connected” survivable topologies for fiber optic communication networks of interest to the regional telephone companies. In this paper, we describe some polyhedral results for network design problems with higher connectivity requirements. We also report on some preliminary computational results for a cutting plane algorithm for various real-world and random problems with high connectivity requirements, which shows promise for providing good solutions to these difficult problems.

Publisher Summary This chapter focuses on the important practical and theoretical problem of designing survivable communication networks, i.e., communication networks that are still functional after the failure of certain network components. A very general model (for undirected networks) is presented which includes practical, as well as theoretical, problems, including the well-studied minimum spanning tree, Steiner tree, and minimum cost k-connected network design problems. The development of this area starts with outlining structural properties which are useful for the design and analysis of algorithms for designing survivable networks. These lead to worst-case upper and lower bounds. Heuristics that work well in practice are also described. Polynomially-solvable special cases of the general survivable network design problem are summarized. The chapter discusses polyhedral results from the study of these problems as integer programming models. The chapter provides complete and nonredundant descriptions of a number of polytopes related to network survivability problems of small dimensions. The computational results using cutting plane approaches based on the polyhedral results are given. The results show that these methods are efficient and effective in producing optimal or near-optimal solutions to real-world problems. A brief review of the work on survivability models of directed networks is given.

An algorithm is described for solving large-scale instances of the Symmetric Traveling Salesman Problem (STSP) to optimality. The core of the algorithm is a "polyhedral" cutting-plane procedure that exploits a subset of the system of linear inequalities defining the convex hull of the incidence vectors of the hamiltonian cycles of a complete graph. The cuts are generated by several identification procedures that have been described in a companion paper. Whenever the cutting-plane procedure does not terminate with an optimal solution, the algorithm uses a tree-search strategy that, as opposed to branch-and-bound, keeps on producing cuts after branching. The algorithm has been implemented in FORTRAN. Two different linear programming (LP) packages have been used as the LP solver. The implementation of the algorithm and the interface with one of the LP solvers is described in sufficient detail to permit the replication of our experiments. Computational results are reported for 42 STSPs with sizes ranging from 48 to 2,392 nodes. Most of the medium-sized test problems are taken from the literature; all others are large-scale real-world problems. All of the instances considered in this study were solved to optimality by the algorithm in "reasonable" computation times.

Given a finite undirected graph with nonnegative edge capacities, the minimum capacity cut problem consists of partitioning the graph into two nonempty sets such that the sum of the capacities of edges connecting the two parts is minimum among all possible partitionings. The standard algorithm to calculate a minimum capacity cut, due to Gomory and Hu (1961), runs in O(n^4) time and is difficult to implement. We present an alternative algorithm with the same worst-case bound which is easier to implement and which was found empirically to be far superior to the standard algorithm. We report computational results for graphs with up to 2000 nodes.
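
For a sense of what such a global min-cut computation involves, here is a compact Python sketch of the Stoer-Wagner algorithm, a later and even simpler method than the one described in the paper above; it is offered only as an illustration of the problem being solved, and all names are my own:

```python
def stoer_wagner(weights):
    """Global minimum capacity cut of a connected undirected graph.

    `weights` is a symmetric dict-of-dicts of nonnegative edge capacities,
    e.g. {0: {1: 3}, 1: {0: 3}}. Not the paper's algorithm; an
    illustrative sketch of the minimum capacity cut problem.
    """
    w = {u: dict(nbrs) for u, nbrs in weights.items()}  # mutable copy
    best = float("inf")
    while len(w) > 1:
        # maximum-adjacency ordering from an arbitrary start vertex
        start = next(iter(w))
        order = [start]
        cand = {v: w[start].get(v, 0) for v in w if v != start}
        while cand:
            u = max(cand, key=cand.get)
            order.append(u)
            del cand[u]
            for v, c in w[u].items():
                if v in cand:
                    cand[v] += c
        s, t = order[-2], order[-1]
        # "cut of the phase": t against everything else
        best = min(best, sum(w[t].values()))
        # contract t into s and repeat
        for v, c in w[t].items():
            if v != s:
                w[s][v] = w[s].get(v, 0) + c
                w[v][s] = w[v].get(s, 0) + c
        for v in list(w[t]):
            del w[v][t]
        del w[t]
    return best

# Unit-capacity 4-cycle: every cut crosses at least two edges.
print(stoer_wagner({0: {1: 1, 3: 1}, 1: {0: 1, 2: 1},
                    2: {1: 1, 3: 1}, 3: {2: 1, 0: 1}}))  # 2
```
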

The determination of true optimum solutions of combinatorial optimization problems is seldom required in practical applications. The majority of users of optimization software would be satisfied with solutions of guaranteed quality, in the sense that it can be proven that the given solution is at most a few percent off an optimum solution. This paper presents a general framework for practical problem solving with emphasis on this aspect. A detailed discussion along with a report about extensive computational experiments is given for the traveling salesman problem.

We study the problem of increasing the connectivity of a graph at an optimal cost. Since the general problem is NP-hard, we focus on efficient approximation schemes that come within a constant factor from the optimal. Previous algorithms either do not take edge costs into consideration, or run slower than our algorithm. Our algorithm takes as input an undirected graph G0 = (V, E0) on n vertices, that is not necessarily connected, and a set Feasible of m weighted edges on V, and outputs a subset Aug of edges which when added to G0 make it 2-connected. The weight of Aug, when G0 is initially connected, is no more than twice the weight of the least weight subset of edges of Feasible that increases the connectivity to 2. The running time of our algorithm is O(m + n log n). We also study the problem of increasing the edge connectivity of any graph G to k, within a factor of 2 (for any k > 0). The running time of this algorithm is O(nk log n(m + n log n)). We observe that when k is odd we can use different techniques to obtain an approximation factor of 2 for increasing edge connectivity from k to k+1 in O(kn^2) time.

The problem of increasing both edge and vertex connectivity of a graph at an optimal cost is studied. Since the general problem is NP-hard, we focus on efficient approximation schemes that come within a constant factor from the optimal. Previous algorithms either do not take edge costs into consideration, or they run slower than our algorithm. Our algorithm takes as input an undirected graph G0 = (V, E0) on n vertices, that is not necessarily connected, and a set Feasible of m weighted edges on V, and outputs a subset Aug of edges which when added to G0 make it two-connected. The weight of Aug, when G0 is initially connected, is no more than twice the weight of the least weight subset of edges of Feasible that increases the connectivity to two. The running time of our algorithm is O(m + n log n). As a consequence of our results, we can find an approximation to the least-weight two-connected spanning subgraph of a two-connected weighted graph.

Very frequently in practical applications we are faced with large or very large scale traveling salesman problems where the number of cities may range from several hundreds or thousands up to even millions. Due to timing restrictions in the production process there may be situations where good approximative solutions to an instance of the traveling salesman problem have to be found very fast, and where it is not feasible to call even O(n²)-time procedures too often. In this paper we discuss several ideas to handle such large traveling salesman problems under time restrictions. We will consider Euclidean traveling salesman problems in the plane and show how their geometric structure can be exploited to derive fast heuristics.
INFORMS Journal on Computing, ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.
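
One classic way to exploit the geometric structure discussed above is a strip (boustrophedon) heuristic, which builds a tour in O(n log n). The sketch below is a generic illustration of that idea, not the authors' specific procedures; all names and parameters are my own:

```python
import math

def strip_tour(points, strips=None):
    """Boustrophedon strip heuristic for Euclidean TSP instances.

    Cities are bucketed into vertical strips; strips are visited left to
    right, alternating the sweep direction in y. An O(n log n) sketch of
    a sub-quadratic geometric construction heuristic.
    """
    n = len(points)
    k = strips or max(1, round(math.sqrt(n)))
    xs = [p[0] for p in points]
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / k or 1.0          # avoid zero width if all x equal
    buckets = [[] for _ in range(k)]
    for i, (x, y) in enumerate(points):
        buckets[min(k - 1, int((x - lo) / width))].append(i)
    tour = []
    for s, bucket in enumerate(buckets):
        # sweep up in even strips, down in odd strips
        bucket.sort(key=lambda i: points[i][1], reverse=(s % 2 == 1))
        tour.extend(bucket)
    return tour

pts = [(0.0, 0.0), (3.0, 1.0), (1.0, 2.0), (2.0, 0.5), (0.5, 1.5), (2.5, 2.5)]
print(strip_tour(pts))  # a permutation of 0..5 in strip order
```
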

Graph augmentation problems on a weighted graph involve determining a minimum-cost set of edges to add to a graph to satisfy a specified property, such as biconnectivity, bridge-connectivity or strong connectivity. These augmentation problems are shown to be NP-complete in the restricted case of the graph being initially connected. Approximation algorithms with favorable time complexity are presented and shown to have constant worst-case performance ratios.

This paper presents a new and simple technique to solve the problem of adding a minimum number of edges to an undirected graph in order to obtain a biconnected, i.e., 2-vertex-connected, resulting graph. Our technique results in a simpler algorithm, which runs in sequential linear time, that is also faster in parallel than the previous result. Previous approaches for the problem require the use of sorting and advanced data structures to dynamically maintain either (1) a rooted tree when vertices are collapsing, or (2) the largest two sets among a collection of sets when an element from each of the largest two sets is being deleted. Our algorithm only needs to find a maximum integer among a set of O(n) non-negative integers that are less than n, and to compute various simple tree functions, e.g., the number of vertices and a consecutive numbering of the degree-1 vertices in a rooted subtree, on a rooted tree. No sorting routine or dynamic data structure is used in the algorithm. Our simple algorithm implies a linear-time sequential implementation. For a parallel implementation, all but the step for finding connected components in our algorithm can be done optimally in O(log n) time on an EREW PRAM, where n is the number of vertices in the input graph. Hence our parallel implementation runs in either O(log n) time using O((n+m)·α(m,n)/log n) processors on a CRCW PRAM, or O(log n) time using O(n+m) processors on an EREW PRAM, where m is the number of edges in the input graph and α is the inverse Ackermann function. The previous best parallel algorithm for solving this problem runs in O(log² n) time using O(n+m) processors on an EREW PRAM.

We consider the 2-edge-connectivity augmentation problem: given a graph S = (V, E) which is not 2-edge-connected and a set of new edges E' ⊆ V × V with non-negative weights, find a minimum-cost subset X of E' such that adding the edges of X to S results in a 2-edge-connected graph. A practical application is the extension of an existing telecommunication network to become robust against single link failures. We compare, computationally, different algorithms for solving general and large-scale instances. This includes exact methods based on mathematical programming, simple construction heuristics, and metaheuristics. As part of the design of metaheuristics, we consider different neighborhood structures for local search, among them a very large-scale neighborhood. In all cases, we exploit approaches through the graph formulation as well as through an equivalent set covering formulation. The results indicate that exact solutions through a basic integer programming model can be obtained in reasonably short time even on networks with 800 vertices and around 287,000 edges. Alternatively, an advanced heuristic algorithm based on subgradient optimization and iterated greedy consistently finds the optimal solution and is very fast. All previous benchmark instances are easily solved to optimality, and new, larger instances are introduced.
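
In the set covering view mentioned above, the elements to be covered are the bridges of S: a candidate edge (u, v) covers every bridge on the u-v path of the bridge tree. A minimal stdlib sketch of finding those bridges (the function name and adjacency-dict input are my own, and the paper's models are richer than this):

```python
def find_bridges(adj):
    """All bridges of a simple undirected graph, via DFS low-links.

    A bridge (u, v) is a tree edge whose subtree has no back edge
    climbing above u; these are exactly the elements that any feasible
    2-edge-connectivity augmentation has to cover.
    """
    disc, low, bridges, timer = {}, {}, [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:          # assumes no parallel edges
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])   # back edge
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return bridges

# Two triangles joined by a single edge: that edge is the only bridge.
barbell = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(find_bridges(barbell))  # [(2, 3)]
```
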

The network design problem with connectivity requirements (NDC) models a wide variety of celebrated combinatorial optimization problems including the minimum spanning tree, Steiner tree, and survivable network design problems. We develop strong formulations for two versions of the edge-connectivity NDC problem: unitary problems requiring connected network designs, and nonunitary problems permitting non-connected networks as solutions. We (i) present a new directed formulation for the unitary NDC problem that is stronger than a natural undirected formulation, (ii) project out several classes of valid inequalities—partition inequalities, odd-hole inequalities, and combinatorial design inequalities—that generalize known classes of valid inequalities for the Steiner tree problem to the unitary NDC problem, and (iii) show how to strengthen and direct nonunitary problems. Our results provide a unifying framework for strengthening formulations for NDC problems, and demonstrate the strength and power of flow-based formulations for network design problems with connectivity requirements.

Ph.D. thesis in Operations Research, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. By S. Raghavan. Includes bibliographical references (pp. 249–259).

The representation of candidate solutions and the variation operators are fundamental design choices in an evolutionary algorithm (EA). This paper proposes a novel representation technique and suitable variation operators for the degree-constrained minimum spanning tree problem. For a weighted, undirected graph G(V, E), this problem seeks to identify the shortest spanning tree whose node degrees do not exceed an upper bound d ⩾ 2. Within the EA, a candidate spanning tree is simply represented by its set of edges. Special initialization, crossover, and mutation operators are used to generate new, always feasible candidate solutions. In contrast to previous spanning tree representations, the proposed approach provides substantially higher locality and is nevertheless computationally efficient; an offspring is always created in O(|V|) time. In addition, it is shown how problem-dependent heuristics can be effectively incorporated into the initialization, crossover, and mutation operators without increasing the time complexity. Empirical results are presented for hard problem instances with up to 500 vertices. Usually, the new approach identifies solutions superior to those of several other optimization methods within few seconds. The basic ideas of this EA are also applicable to other network optimization tasks.
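
The edge-set representation lends itself to a simple union-then-Kruskal style of crossover. The sketch below is my own minimal reading of that idea, not the paper's exact O(|V|) operator; note that the greedy pass can fail under tight degree bounds on adversarial instances, so a real implementation needs a repair step:

```python
def edge_set_crossover(n, parent1, parent2, all_edges, weight, d):
    """Build an offspring degree-<=d spanning tree, preferring edges that
    appear in either parent (cheapest first), then repairing from the
    remaining graph edges. Edges are normalized (u, v) tuples with u < v;
    `weight` maps an edge to its cost. Illustrative sketch only."""
    comp = list(range(n))                      # union-find forest

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]            # path halving
            x = comp[x]
        return x

    inherited = sorted(set(parent1) | set(parent2), key=weight)
    repair = sorted(set(all_edges) - set(inherited), key=weight)
    deg = [0] * n
    tree = []
    for u, v in inherited + repair:
        if len(tree) == n - 1:
            break                              # spanning tree complete
        if deg[u] < d and deg[v] < d and find(u) != find(v):
            comp[find(u)] = find(v)
            deg[u] += 1
            deg[v] += 1
            tree.append((u, v))
    return tree

# Two parent trees on K4, degree bound d = 2:
w = {(0, 1): 1, (0, 2): 2, (0, 3): 6, (1, 2): 1, (1, 3): 3, (2, 3): 4}
child = edge_set_crossover(
    4, [(0, 1), (1, 2), (2, 3)], [(0, 2), (1, 2), (1, 3)],
    list(w), w.get, 2)
print(sorted(child))  # [(0, 1), (1, 2), (2, 3)]
```

Because the offspring is assembled only from edges of the graph and checked against both the cycle and the degree constraints as it grows, every crossover product is itself a feasible candidate, mirroring the "always feasible" property described above.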

In this paper we present a genetic algorithm (GA) for the NP-hard
biconnectivity problem for graphs. Suppose a 2-connected, undirected,
weighted graph G(V, E) and a spanning subset of edges E₀ ⊂ E
are given. The goal is to augment set E₀ with a set
AUG ⊂ E − E₀ of minimal weight, such that the graph G(V, E₀ ∪ AUG)
is biconnected. To our knowledge, this is the first time
a GA is applied to this problem. First, a straightforward
“pure” GA improved with caching is introduced, which is then
hybridized with a greedy, problem-dependent heuristic. The proposed
approaches are tested on problem instances with up to 1160 feasible
edges. While the pure GA performs well, significantly better solutions
can be obtained by the hybrid strategy.
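The feasibility test such a GA needs — is G(V, E₀ ∪ AUG) biconnected? — reduces to checking connectivity and the absence of articulation points. A minimal sketch via the standard DFS lowpoint computation (the helper is illustrative, not code from the paper):

```python
import sys

def is_biconnected(n, adj):
    """A simple graph with n >= 3 vertices is biconnected iff it is
    connected and has no articulation point.  adj maps each vertex to
    its list of neighbours."""
    if n < 3:
        raise ValueError("biconnectivity is considered for n >= 3 here")
    sys.setrecursionlimit(max(10000, 2 * n))
    disc = [0] * n   # DFS discovery times (0 = unvisited)
    low = [0] * n    # lowest discovery time reachable via one back edge
    timer = [1]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v]:                      # back edge
                low[u] = min(low[u], disc[v])
                continue
            children += 1
            if not dfs(v, u):
                return False
            low[u] = min(low[u], low[v])
            if parent != -1 and low[v] >= disc[u]:
                return False                 # u is an articulation point
        if parent == -1 and children >= 2:
            return False                     # root with >= 2 DFS children
        return True

    return dfs(0, -1) and all(disc)          # all(disc): connectivity
```

Caching, as mentioned above, pays off because many candidate AUG sets share this (linear-time, but repeatedly invoked) feasibility check.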

We introduce a new algorithmic technique that applies to several graph connectivity problems. Its power is demonstrated by experimental studies of the minimum-weight strongly-connected spanning subgraph problem and the minimum-weight augmentation problem. Even though we are unable to improve the approximation ratios for these problems, our studies indicate that the new method generates significantly better solutions than the current known approximation algorithms, and yields solutions very close to optimal. We believe that our technique will eventually lead to algorithms that improve the performance ratios as well. 1 Introduction Let a weighted graph G = (V; E) represent all the feasible links of a potential communications network. A minimum spanning tree in G is the cheapest connected subgraph, i.e., the cheapest network that will allow the sites to communicate. Such a network is highly susceptible to failures, since it cannot even survive a single link or site failure. For more rel...

NP-hard problems are problems that are computationally expensive to solve optimally. In order to get the exact solution, one has to consume a lot of time and resources. So in practice, we consider heuristics that produce solutions very close to the optimal solution while running in manageable (polynomial) time. We measure the heuristics by the quality of the solutions they produce. A heuristic has an approximation factor α if the cost of the solution is guaranteed to be no more than α times the cost of the optimal solution over all instances. Often people design heuristics for individual NP-hard problems. We introduce a new general algorithmic technique that applies to a family of NP-hard optimization problems [8]. A single algorithmic approach appears to apply successfully to a diverse collection of graph connectivity problems. We demonstrate the power of this method by doing an experimental study of the minimum-weight strongly-connected spanning subgraph problem, as well...

We study the problem of designing at minimum cost a two-connected network such that each edge belongs to a cycle whose length does not exceed a given bound. This problem was first studied by Fortz, Labbé and Maffioli [7]. Several classes of valid inequalities for this problem were proposed [5--7]. We study here the separation problems associated to these inequalities and their integration in a Branch-and-Cut algorithm. We also present a Tabu Search heuristic, and provide extensive computational results. 1. Introduction The Two-Connected Network with Bounded Rings (or meshes) problem (2CNBR) consists in designing a minimum cost network N with the following constraints: (a) N contains at least two node-disjoint paths between every pair of nodes (2-connectivity constraints), and (b) each edge of N must belong to at least one cycle whose length is bounded by a given constant K (ring constraints). This problem, first studied by Fortz et al. [7], arises in the context of designing su...
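The ring constraint (b) above can be checked separately per edge: edge (u, v) of length w lies on a cycle of total length ≤ K iff the shortest u-v path that avoids the edge itself has length ≤ K − w. A sketch of this check (the helper name and edge-list format are illustrative, not from the paper):

```python
import heapq

def ring_violations(n, edges, K):
    """Return the edges of an undirected weighted graph (list of
    (u, v, w) triples) that do NOT lie on any cycle of length <= K."""
    adj = [[] for _ in range(n)]
    for i, (u, v, w) in enumerate(edges):
        adj[u].append((v, w, i))
        adj[v].append((u, w, i))

    def dijkstra(src, dst, banned):
        # shortest src-dst distance avoiding edge index `banned`
        dist = [float('inf')] * n
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            d, x = heapq.heappop(pq)
            if x == dst:
                return d
            if d > dist[x]:
                continue
            for y, w, i in adj[x]:
                if i == banned:
                    continue  # the edge may not close its own cycle
                if d + w < dist[y]:
                    dist[y] = d + w
                    heapq.heappush(pq, (d + w, y))
        return float('inf')

    return [e for i, e in enumerate(edges)
            if dijkstra(e[0], e[1], i) + e[2] > K]
```

In a Branch-and-Cut setting the analogous separation works on fractional edge values, but the same shortest-path viewpoint underlies it.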

In the late eighties and early nineties, three major exciting new developments (and some ramifications) in the computation of minimum capacity cuts occurred and these developments motivated us to evaluate the old and new methods experimentally. We provide a brief overview of the most important algorithms for the minimum capacity cut problem and compare these methods both on problem instances from the literature and on problem instances originating from the solution of the traveling salesman problem by branch-and-cut.
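Among the deterministic methods from that period is the Stoer–Wagner algorithm, which finds a global minimum-capacity cut without any max-flow computations. A compact O(n³) sketch (adjacency-matrix input; an illustrative implementation, not the paper's code):

```python
def stoer_wagner_min_cut(weights):
    """Global minimum-capacity cut of an undirected graph given as a
    symmetric n x n weight matrix (0 = no edge).  Repeats 'minimum cut
    phases': build a maximum-adjacency ordering, record the cut that
    separates the last-added vertex, then merge the last two vertices."""
    n = len(weights)
    w = [row[:] for row in weights]   # work on a copy; vertices get merged
    vertices = list(range(n))
    best = float('inf')
    while len(vertices) > 1:
        a = [vertices[0]]
        rest = vertices[1:]
        conn = {v: w[a[0]][v] for v in rest}  # connectivity to set a
        cut_of_phase = 0
        while rest:
            t = max(rest, key=conn.__getitem__)  # most tightly connected
            cut_of_phase = conn.pop(t)
            rest.remove(t)
            a.append(t)
            for v in rest:
                conn[v] += w[t][v]
        s, t = a[-2], a[-1]
        best = min(best, cut_of_phase)   # cut-of-the-phase isolates t
        for v in vertices:               # merge t into s
            w[s][v] += w[t][v]
            w[v][s] = w[s][v]
        vertices.remove(t)
    return best
```

Hao–Orlin-style flow methods and randomized contraction are the other main contenders evaluated in such experimental comparisons.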

S. Kersting, G.R. Raidl, and I. Ljubić, "A memetic algorithm for vertex-biconnectivity augmentation," Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2002, Lecture Notes in Computer Science, Vol. 2279, Springer, New York, 2002, pp. 102–111.

- Harary