# Discrete Applied Mathematics

Print ISSN: 0166-218X
An information theory based multiple alignment ("Malign") method was used to align the DNA binding sequences of the OxyR and Fis proteins, whose sequence conservation is so spread out that it is difficult to identify the sites. In the algorithm described here, the information content of the sequences is used as a unique global criterion for the quality of the alignment. The algorithm uses look-up tables to avoid recalculating computationally expensive functions such as the logarithm. Because there are no arbitrary constants and because the results are reported in absolute units (bits), the best alignment can be chosen without ambiguity. Starting from randomly selected alignments, a hill-climbing algorithm can track through the immense space of s^n combinations, where s is the number of sequences and n is the number of positions possible for each sequence. Instead of producing a single alignment, the algorithm is fast enough that one can afford to use many start points and to classify the solutions. Good convergence is indicated by the presence of a single well-populated solution class having higher information content than other classes. The existence of several distinct classes for the Fis protein indicates that those binding sites have self-similar features.
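As a minimal illustration of the global criterion described above (not the authors' Malign code), the information content of a set of aligned DNA sequences can be computed as the sum, over alignment columns, of 2 bits minus the Shannon entropy of the base frequencies in that column:

```python
from math import log2

def information_content(aligned):
    """Sum over columns of (2 - H) bits, where H is the entropy of the
    base frequencies in the column (2 bits is the maximum for DNA)."""
    total = 0.0
    for column in zip(*aligned):
        n = len(column)
        freqs = [column.count(b) / n for b in "ACGT"]
        entropy = -sum(f * log2(f) for f in freqs if f > 0)
        total += 2.0 - entropy
    return total

# Perfectly conserved columns contribute 2 bits each.
print(information_content(["ACG", "ACG", "ACG"]))  # 6.0
```

A hill climber would repeatedly shift individual sequences and keep moves that increase this single number, which is what makes the criterion free of arbitrary constants.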

Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0,1}^n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.
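For toy instances, the exact minimum that the bounds C(2), C(3), … approximate from below can be found by brute force over {0,1}^n. This hedged sketch (the flow-based computations of the paper are not reproduced here) represents the quadratic as a coefficient dictionary, with diagonal entries supplying the linear terms:

```python
from itertools import product

def qubo_min(Q, n):
    """Minimize x^T Q x over x in {0,1}^n.
    Q is a dict {(i, j): coeff}; (i, i) entries act as linear terms."""
    best_val, best_x = None, None
    for x in product((0, 1), repeat=n):
        val = sum(c * x[i] * x[j] for (i, j), c in Q.items())
        if best_val is None or val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# x0*x1 - 2*x0 is minimized by x = (1, 0) with value -2.
print(qubo_min({(0, 1): 1, (0, 0): -2}, 2))  # (-2, (1, 0))
```

Any valid lower bound C(k) computed for this instance would have to be at most the -2 returned here.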

Let p be a graph parameter that assigns a positive integer value to every graph. The inverse problem for p asks for a graph within a prescribed class (here, we will only be concerned with trees), given the value of p. In this context, it is of interest to know whether such a graph can be found for all or at least almost all integer values of p. We will provide a very general setting for this type of problem over the set of all trees, describe some simple examples and finally consider the interesting parameter "number of subtrees", where the problem can be reduced to some number-theoretic considerations. Specifically, we will prove that every positive integer, with only 34 exceptions, is the number of subtrees of some tree.
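The parameter "number of subtrees" can be illustrated with a standard rooted dynamic program (this is only a way to evaluate the parameter on a given tree; the paper's inverse result rests on number-theoretic arguments). Each non-empty subtree is counted once at its vertex closest to the root, and f(v) = ∏ over children (1 + f(child)):

```python
def count_subtrees(adj, root=0):
    """Count non-empty connected subgraphs (subtrees) of a tree given by
    adjacency lists, via a rooted DP accumulated over all vertices."""
    total = 0
    def f(v, parent):
        nonlocal total
        prod = 1
        for w in adj[v]:
            if w != parent:
                prod *= 1 + f(w, v)
        total += prod
        return prod
    f(root, None)
    return total

# The path on three vertices has 6 subtrees: three single vertices,
# two edges, and the whole path.
path = {0: [1], 1: [0, 2], 2: [1]}
print(count_subtrees(path))  # 6
```

For a star with three leaves the count is 11, already hinting that not every integer arises equally easily as a subtree count.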

This paper concerns classification by Boolean functions. We investigate the classification accuracy obtained by standard classification techniques on unseen points (elements of the domain, {0,1}^n, for some n) that are similar, in particular senses, to the points that have been observed as training observations. Explicitly, we use a new measure of how similar a point x in {0,1}^n is to a set of such points to restrict the domain of points on which we offer a classification. For points sufficiently dissimilar, no classification is given. We report on experimental results which indicate that the classification accuracies obtained on the resulting restricted domains are better than those obtained without restriction. These experiments involve a number of standard data-sets and classification techniques. We also compare the classification accuracies with those obtained by restricting the domain on which classification is given by using the Hamming distance.
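The Hamming-distance baseline mentioned at the end can be sketched as follows (the paper's own similarity measure is more refined; the classifier and threshold here are placeholders): a prediction is offered only when the query point is close enough to some training point, and otherwise the classifier abstains.

```python
def hamming(x, y):
    """Number of coordinates in which two {0,1}-vectors differ."""
    return sum(a != b for a, b in zip(x, y))

def restricted_predict(x, training, classify, threshold):
    """Return classify(x) only if x is within `threshold` Hamming distance
    of some training point; otherwise abstain (None)."""
    if min(hamming(x, t) for t, _ in training) <= threshold:
        return classify(x)
    return None

train = [((0, 0, 1), 1), ((1, 1, 1), 0)]
majority = lambda x: 1  # stand-in for any trained classifier
print(restricted_predict((0, 1, 1), train, majority, 1))  # 1
print(restricted_predict((1, 0, 0), train, majority, 1))  # None
```

Accuracy is then measured only over the points on which a classification is actually given, which is the comparison the experiments report.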

Multi-labeled trees are a generalization of phylogenetic trees that are used, for example, in the study of gene versus species evolution and as the basis for phylogenetic network construction. Unlike phylogenetic trees, in a leaf-multi-labeled tree it is possible to label more than one leaf by the same element of the underlying label set. In this paper we derive formulae for generating functions of leaf-multi-labeled trees and use these to derive recursions for counting such trees. In particular, we prove results which generalize previous theorems by Harding on so-called tree-shapes, and by Otter on relating the number of rooted and unrooted phylogenetic trees.

Motivated by topology control in ad-hoc wireless networks, power assignment is a family of problems, each defined by a certain connectivity constraint (such as strong connectivity). These problems have been studied in the past. In this paper we consider delay bounds as an additional constraint to provide quality of service. Delay is measured by the number of hops on a path between two nodes. We present an algorithm for minimum power bounded-hops broadcast with a guaranteed bicriteria ratio of (O(log n), O(log n)) for general graphs. That is, in the solution produced by our algorithm, the number of hops between the root and any other node is at most O(log n) times the given bound and the power is at most O(log n) times the power of the optimal solution. Our bicriteria results extend to min-power bounded-hops strong connectivity (the solution must have a path of at most d edges between any two nodes) and min-power bounded-hops symmetric connectivity (the undirected graph having an edge uv iff the solution has both uv and vu is required to have diameter at most d). Previous work on min-power bounded-hops strong connectivity consists only of constant or better approximations for special cases of the Euclidean setting. We also provide better guarantees for the Euclidean cases by post-processing solutions of the main algorithm.

We consider the broadcasting operation in point-to-point packet-switched parallel and distributed networks of processors. We develop a general technique for the design of optimal broadcast algorithms on a wide range of such systems. Our technique makes it easier to design such algorithms and, furthermore, provides tools that can be used to derive precise analyses of their running times. As direct applications of this method we give an exact analysis of a known algorithm for the POSTAL model, and design and analyze an optimal broadcast algorithm for the MULTI PORT MULTI-MEDIA model. We then show how our method can be applied to networks with different underlying topologies, by designing and giving an exact analysis of an optimal broadcast algorithm for the OPTICAL RING.

Minimum cost multicommodity flows are a useful model for bandwidth allocation problems. These problems are arising more frequently as regional service providers wish to carry their traffic over some national core network. We describe a simple and practical combinatorial algorithm to find a minimum cost multicommodity flow in a ring network. Apart from 1- and 2-commodity flow problems, this seems to be the only such “combinatorial augmentation algorithm” for a version of exact mincost multicommodity flow. The solution it produces is always half-integral, and by increasing the capacity of each link by one, we may also find an integral routing of no greater cost. The “pivots” in the algorithm are determined by choosing an ε>0, increasing and decreasing sets of variables, and adjusting these variables up or down accordingly by ε. In this sense, it generalizes the cycle cancelling algorithms for (single source) mincost flow. Although the algorithm is easily stated, the proofs of its correctness and polynomially bounded running time are more complex.

A negative event graph, introduced by Lee et al. (2002), is a timed event graph that allows negative places and negative tokens for modeling time window constraints between any two transitions. Such time constrained discrete event systems are found in cluster tool scheduling for semiconductor manufacturing or microcircuit design. We examine the steady state behavior of the feasible firing schedules of a negative event graph that satisfy the time window constraints. We develop a recurrence equation for the feasible firing epochs based on the minimax algebra. By extending the steady state results of a conventional timed event graph based on the minimax algebra, we show that there are four classes of steady states that correspond to the earliest and latest feasible steady firing schedules for each of the minimum and maximum cycle times. We characterize how the cycle times and the steady schedules are computed through some matrix algebra and the associated graph algorithms.

Assume that each vertex of a graph G is either a supply vertex or a demand vertex and is assigned a positive integer, called a supply or a demand. Each demand vertex can receive "power" from at most one supply vertex. One thus wishes to partition G into connected components by deleting edges from G so that each component C has exactly one supply vertex whose supply is no less than the sum of demands of all demand vertices in C. If G has no such partition, one wishes to partition G into connected components so that each component C either has no supply vertex or has exactly one supply vertex whose supply is no less than the sum of demands in C, and wishes to maximize the sum of demands in all components with supply vertices. We deal with such a maximization problem, which is NP-hard even for trees and strongly NP-hard for general graphs. In this paper, we give a pseudo-polynomial-time algorithm to solve the problem for series-parallel graphs. The algorithm can be easily extended to partial k-trees, that is, graphs with bounded tree-width.

We give nearly optimal algorithms for matrix transpose on meshes with wormhole and XY routing and with a 1-port or 2-port communication model. For an N×N mesh, where N = 3·2^n and each mesh node has a submatrix of size m to be transposed, our algorithms take Nm/2 time steps for the 1-port model, and about Nm/3.27 time steps for the 2-port model. The lower bound is Nm/3.414. While there is no previously known algorithm for matrix transpose on meshes with wormhole and XY routing, a naive algorithm, naturally adapted from the well-known Recursive Exchange Algorithm, has a complexity of about Nm. That is, our best algorithm improves over the naive algorithm by about a factor of 3.27, and is within about a factor of 3.414/3.27 of the lower bound.

The uncapacitated facility location problem in the following formulation is considered: where I and J are finite sets, and bij, ci⩾0 are rational numbers. Let Z∗ denote the optimal value of the problem and let . Cornuejols et al. (Ann. Discrete Math. 1 (1977) 163–178) prove that for the problem with the additional cardinality constraint |S|⩽K, a simple greedy algorithm finds a feasible solution S such that . We suggest a polynomial-time approximation algorithm for the unconstrained version of the problem, based on the idea of randomized rounding due to Goemans and Williamson (SIAM J. Discrete Math. 7 (1994) 656–666). It is proved that the algorithm delivers a solution S such that . We also show that there exists ε>0 such that it is NP-hard to find an approximate solution S with .

The problem of finding the minimum size 2-connected subgraph is a classical problem in network design. It is known to be NP-hard even on cubic planar graphs and MAX SNP-hard. We study the generalization of this problem, where requirements of 1 or 2 edge or vertex disjoint paths are specified between every pair of vertices, and the aim is to find a minimum size subgraph satisfying these requirements. For both problems we give -approximation algorithms. This improves on the straightforward 2-approximation algorithms for these problems, and generalizes earlier results for 2-connectivity. We also give analyses of the classical local optimization heuristics for these two network design problems.

The “lambda method” is a well-known method for using integer linear-programming methods to model separable piecewise-linear functions in the context of optimization formulations. We extend the lambda method to the nonseparable case, and we use polyhedral methods to strengthen the formulation.

This paper addresses shop scheduling problems with deteriorating jobs, i.e. jobs whose processing times are an increasing function of their starting time. A simple linear deterioration is assumed and our objective is makespan minimization. We provide a complete analysis of the complexity of flow-shop, open-shop and job-shop problems. We introduce a polynomial-time algorithm for the two-machine flow-shop, and prove NP-hardness when an arbitrary number of machines (three and above) is assumed. Similarly, we introduce a polynomial-time algorithm for the two-machine open-shop, and prove NP-hardness when an arbitrary number of machines (three and above) is assumed. Finally, we prove NP-hardness of the job-shop problem even for two machines.
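One common form of simple linear deterioration (the paper's precise model may differ in details) takes a job started at time t to require α·t time units, so the completion time multiplies by (1 + α). A small numeric sketch shows the well-known single-machine consequence, that the makespan is then independent of the job order:

```python
def single_machine_makespan(alphas, t0=1.0):
    """Completion time after processing jobs in the given order under
    p_j = alpha_j * t: each job updates C <- C * (1 + alpha_j)."""
    c = t0
    for a in alphas:
        c *= 1 + a
    return c

# The product form makes the result order-independent.
print(single_machine_makespan([0.5, 1.0, 0.25]))  # 3.75
print(single_machine_makespan([1.0, 0.25, 0.5]))  # 3.75
```

It is precisely the multi-machine shop settings, where this product structure breaks down, that create the complexity landscape the paper maps out.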

In the maximum clique problem, one desires to find, in a given graph, a largest set of vertices any two of which are adjacent. A branch-and-bound algorithm for the maximum clique problem—which is computationally equivalent to the maximum independent (stable) set problem—is presented, with the vertex order taken from a coloring of the vertices and with a new pruning strategy. The algorithm performs successfully for many instances when applied to random graphs and DIMACS benchmark graphs.

In classical deterministic scheduling problems, the job processing times are assumed to be constant parameters. In many practical cases, however, processing times are controllable by allocating a resource (that may be continuous or discrete) to the job operations. In such cases, each processing time is a decision variable to be determined by the scheduler, who can take advantage of this flexibility to improve system performance. Since scheduling problems with controllable processing times are very interesting both from the practical and theoretical point of view, they have received a lot of attention from researchers over the last 25 years. This paper aims to give a unified framework for scheduling with controllable processing times by providing an up-to-date survey of the results in the field.

We describe the recursive structures of the set of two-stack sortable permutations which avoid 132 and the set of two-stack sortable permutations which contain 132 exactly once. Using these results and standard generating function techniques, we enumerate two-stack sortable permutations which avoid (or contain exactly once) 132 and which avoid (or contain exactly once) an arbitrary permutation τ. In most cases the number of such permutations is given by a simple formula involving Fibonacci or Pell numbers.

With the advent of large-scale DNA physical mapping and sequencing, studies of genome rearrangements are becoming increasingly important in evolutionary molecular biology. From a computational perspective, the study of evolution based on rearrangements leads to a rearrangement distance problem, i.e., computing the minimum number of rearrangement events required to transform one genome into another. Different types of rearrangement events give rise to a spectrum of interesting combinatorial problems. The complexity of most of these problems is unknown. Multichromosomal genomes frequently evolve by a rearrangement event called translocation which exchanges genetic material between different chromosomes. In this paper we study the translocation distance problem, modeling the evolution of genomes evolving by translocations. The translocation distance problem was recently studied for the first time by Kececioglu and Ravi, who gave a 2-approximation algorithm for computing translocation distance. In this paper we prove a duality theorem leading to a polynomial time algorithm for computing translocation distance for the case when the orientations of the genes are known. This leads to an algorithm generating a most parsimonious (shortest) scenario, transforming one genome into another by translocations.

It is now well-documented that the structure of evolutionary relationships between a set of present-day species is not necessarily tree-like. The reason for this is that reticulation events such as hybridizations mean that species are a mixture of genes from different ancestors. Since such events are relatively rare, a fundamental problem for biologists is to determine the smallest number of hybridization events required to explain a given (input) set of data in a single (hybrid) phylogeny. The main results of this paper show that computing this smallest number is APX-hard, and thus NP-hard, in the case the input is a collection of phylogenetic trees on sets of present-day species. This answers a problem which was raised at a recent conference (Phylogenetic Combinatorics and Applications, Uppsala University, 2004). As a consequence of these results, we also correct a previously published NP-hardness proof in the case the input is a collection of binary sequences, where each sequence represents the attributes of a particular present-day species. The APX-hardness of these problems means that it is unlikely that there is an efficient algorithm for either computing the result exactly or approximating it to any arbitrary degree of accuracy.

A new class of distances for graph vertices is proposed. This class contains parametric families of distances which reduce to the shortest-path, weighted shortest-path, and the resistance distances at the limiting values of the family parameters. The main property of the class is that all distances it comprises are graph-geodetic: d(i,j)+d(j,k)=d(i,k) if and only if every path from i to k passes through j. The construction of the class is based on the matrix forest theorem and the transition inequality.
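The graph-geodetic property can be sanity-checked in one limiting case of the proposed family, the shortest-path distance on a tree: there every pair of vertices is joined by a unique path, so d(i,j)+d(j,k)=d(i,k) holds exactly when j lies on the i–k path. A small BFS-based check (an illustration, not the matrix-forest construction of the paper):

```python
from collections import deque

def bfs_dist(adj, s):
    """Single-source shortest-path distances by breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

# A star-like tree: 0 - 1, with 2 and 3 also attached to 1.
tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
d = {v: bfs_dist(tree, v) for v in tree}
print(d[0][1] + d[1][2] == d[0][2])  # True: vertex 1 is on every 0-2 path
print(d[0][2] + d[2][3] == d[0][3])  # False: vertex 2 is not on the 0-3 path
```

The interest of the class is that the same equivalence persists for distances, like the resistance distance, that are not defined via paths at all.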

We show that the independent spanning tree conjecture on digraphs is true if we restrict ourselves to line digraphs. Also, we construct independent spanning trees with small depths in iterated line digraphs. From the results, we can obtain independent spanning trees with small depths in de Bruijn and Kautz digraphs that improve the previously known upper bounds on the depths.

This paper presents a fast algorithm that provides optimal or near-optimal solutions to the minimum perimeter problem on a rectangular grid. The minimum perimeter problem is to partition a grid of size M × N into P equal-area regions while minimizing the total perimeter of the regions. The approach taken here is to divide the grid into stripes that can be filled completely with an integer number of regions. This striping method gives rise to a knapsack integer program that can be efficiently solved by existing codes. The solution of the knapsack problem is then used to generate the grid region assignments. An implementation of the algorithm partitioned a 1000 × 1000 grid into 1000 regions to a provably optimal solution in less than one second. With sufficient memory to hold the M × N grid array, extremely large minimum perimeter problems can be solved easily.

An L(2,1)-labeling of a graph is an assignment of nonnegative integers to its vertices so that adjacent vertices get labels at least two apart and vertices at distance two get distinct labels. The λ-number of a graph G, denoted by λ(G), is the minimum range of labels taken over all of its L(2,1)-labelings. We show that the λ-number of the Cartesian product of any two cycles is 6, 7 or 8. In addition, we provide complete characterizations for the products of two cycles with λ-number exactly equal to each one of these values.
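For very small graphs, the λ-number can be computed directly from the definition by exhaustive search. The following hedged sketch (the results above for cycle products rely on structural arguments, not on search like this) tries spans k = 0, 1, 2, … until an L(2,1)-labeling exists:

```python
from itertools import product
from collections import deque

def lambda_number(adj):
    """Smallest k admitting an L(2,1)-labeling into {0,...,k}: adjacent
    labels differ by >= 2, distance-two labels are distinct."""
    verts = sorted(adj)
    dist = {}
    for s in verts:                      # all-pairs distances by BFS
        d = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in d:
                    d[w] = d[v] + 1
                    q.append(w)
        dist[s] = d
    k = 0
    while True:
        for lab in product(range(k + 1), repeat=len(verts)):
            f = dict(zip(verts, lab))
            ok = all(
                abs(f[u] - f[v]) >= 2 if dist[u][v] == 1
                else f[u] != f[v] if dist[u][v] == 2
                else True
                for u in verts for v in verts if u < v)
            if ok:
                return k
        k += 1

# The 4-cycle needs span 4, e.g. labels 0, 3, 1, 4 around the cycle.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(lambda_number(c4))  # 4
```

The exponential search space is exactly why the papers in this area work with product structure and characterizations instead.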

An L(2,1)-coloring of a graph G is a coloring of G's vertices with integers in {0,1,…,k} so that adjacent vertices’ colors differ by at least two and colors of distance-two vertices differ. We refer to an L(2,1)-coloring as a coloring. The span λ(G) of G is the smallest k for which G has a coloring, a span coloring is a coloring whose greatest color is λ(G), and the hole index ρ(G) of G is the minimum number of colors in {0,1,…,λ(G)} not used in a span coloring. We say that G is full-colorable if ρ(G)=0. More generally, a coloring of G is a no-hole coloring if it uses all colors between 0 and its maximum color. Both colorings and no-hole colorings were motivated by channel assignment problems. We define the no-hole span μ(G) of G as ∞ if G has no no-hole coloring; otherwise μ(G) is the minimum k for which G has a no-hole coloring using colors in {0,1,…,k}. Let n denote the number of vertices of G, and let Δ be the maximum degree of vertices of G. Prior work shows that all non-star trees with Δ⩾3 are full-colorable, all graphs G with n=λ(G)+1 are full-colorable, μ(G)⩽λ(G)+ρ(G) if G is not full-colorable and n⩾λ(G)+2, and G has a no-hole coloring if and only if n⩾λ(G)+1. We prove two extremal results for colorings. First, for every m⩾1 there is a G with ρ(G)=m and μ(G)=λ(G)+m. Second, for every m⩾2 there is a connected G with λ(G)=2m, n=λ(G)+2 and ρ(G)=m.

An L(2,1)-labeling of a graph G is an assignment of labels from {0,1,…,λ} to the vertices of G such that vertices at distance two get different labels and adjacent vertices get labels that are at least two apart. The λ-number λ(G) of G is the minimum value λ such that G admits an L(2,1)-labeling. Let G×H denote the direct product of G and H. We compute the λ-numbers for each of C7i×C7j, C11i×C11j×C11k, P4×Cm, and P5×Cm. We also show that for n⩾6 and m⩾7, λ(Pn×Cm)=6 if and only if m=7k, k⩾1. The results are partially obtained by a computer search.

Rotagraphs generalize all standard products of graphs in which one factor is a cycle. A computer-based approach for searching graph invariants on rotagraphs is proposed and two of its applications are presented. First, the λ-numbers of the Cartesian product of a cycle and a path are computed, where the λ-number of a graph G is the minimum number of colors needed in a (2,1)-coloring of G. The independence numbers of the family of the strong product graphs C7 ⊠ C7 ⊠ C2k+1 are also obtained.

In the classical channel assignment problem, transmitters that are sufficiently close together are assigned transmission frequencies that differ by prescribed amounts, with the goal of minimizing the span of frequencies required. This problem can be modeled through the use of an L(2,1)-labeling, which is a function f from the vertex set of a graph G to the non-negative integers such that |f(x)–f(y)|⩾2 if x and y are adjacent vertices and |f(x)–f(y)|⩾1 if x and y are at distance two. The goal is to determine the λ-number of G, which is defined as the minimum span over all L(2,1)-labelings of G, or equivalently, the smallest number k such that G has an L(2,1)-labeling using integers from {0,1,…,k}. Recent work has focused on determining the λ-number of generalized Petersen graphs (GPGs) of order n. This paper provides exact values for the λ-numbers of GPGs of orders 5, 7, and 8, closing all remaining open cases for orders at most 8. It is also shown that there are no GPGs of order 4, 5, 8, or 11 with λ-number exactly equal to the known lower bound of 5; however, a construction is provided to obtain examples of GPGs with λ-number 5 for all other orders. This paper also provides an upper bound for the number of distinct isomorphism classes for GPGs of any given order. Finally, the exact values for the λ-number of n-stars, a subclass of the GPGs inspired by the classical Petersen graph, are also determined. These generalized stars have a useful representation on Möbius strips, which is fundamental in verifying our results.

A {\it path covering} of a graph $G$ is a set of vertex disjoint paths of $G$ containing all the vertices of $G$. The {\it path covering number} of $G$, denoted by $P(G)$, is the minimum number of paths in a path covering of $G$. A {\sl $k$-L(2,1)-labeling} of a graph $G$ is a mapping $f$ from $V(G)$ to the set $\{0,1,\ldots,k\}$ such that $|f(u)-f(v)|\ge 2$ if $d_G(u,v)=1$ and $|f(u)-f(v)|\ge 1$ if $d_G(u,v)=2$. The {\sl L(2,1)-labeling number $\lambda (G)$} of $G$ is the smallest number $k$ such that $G$ has a $k$-L(2,1)-labeling. The purpose of this paper is to study the path covering number and the L(2,1)-labeling number of graphs. Our main work extends most of the results in [On island sequences of labelings with a condition at distance two, Discrete Applied Mathematics 158 (2010), 1–7] and answers an open problem in [On the structure of graphs with non-surjective L(2,1)-labelings, SIAM J. Discrete Math. 19 (2005), 208–223].

The (2,1)-total labelling number of a graph G is the width of the smallest range of integers that suffices to label the vertices and the edges of G such that no two adjacent vertices have the same label, no two adjacent edges have the same label and the difference between the labels of a vertex and its incident edges is at least 2. In this paper we prove that if G is an outerplanar graph with maximum degree Δ(G), then if Δ(G)⩾5, or Δ(G)=3 and G is 2-connected, or Δ(G)=4 and G contains no intersecting triangles.

Given a graph with edge costs, the power of a node is the maximum cost of an edge leaving it, and the power of a graph is the sum of the powers of its nodes. Motivated by applications in wireless networks, we consider several fundamental undirected network design problems under the power minimization criteria. The Minimum-Power Edge-Cover (MPEC) problem is: given a graph G=(V,E) with edge costs {c(e):e∈E} and a subset S⊆V of nodes, find a minimum-power subgraph H of G containing an edge incident to every node in S. We give a 3/2-approximation algorithm for MPEC, improving over the 2-approximation by [M.T. Hajiaghayi, G. Kortsarz, V.S. Mirrokni, Z. Nutov, Power optimization for connectivity problems, Mathematical Programming 110 (1) (2007) 195–208]. For the Min-Power k-Connected Subgraph problem we obtain the following results. For k=2 and k=3, we improve the previously best known ratios of 4 [G. Calinescu, P.J. Wan, Range assignment for biconnectivity and k-edge connectivity in wireless ad hoc networks, Mobile Networks and Applications 11 (2) (2006) 121–128] and 7 [M.T. Hajiaghayi, G. Kortsarz, V.S. Mirrokni, Z. Nutov, Power optimization for connectivity problems, Mathematical Programming 110 (1) (2007) 195–208] to and , respectively. Finally, we give a 4r_max-approximation algorithm for the Minimum-Power Steiner Network (MPSN) problem: find a minimum-power subgraph that contains r(u,v) pairwise edge-disjoint paths for every pair u,v of nodes.

Graph-theoretical models are given for constructing the season schedule of a sports league; we consider in particular the problem of having an alternation of home-games and away-games which is as regular as possible, and we characterize schedules with a minimum number of breaks in the alternations. Some related graph-theoretical properties are discussed and a solvable case with preassignments of home-games and away-games is presented.

A set of vertices S in a graph G is independent if no neighbor of a vertex of S belongs to S. The independence number α is the maximum cardinality of an independent set of G. A series of best possible lower and upper bounds on α and some other common invariants of G are obtained by the system AGX 2, and proved either automatically or by hand. In the present paper, we report on such lower and upper bounds considering, as second invariant, minimum, average and maximum degree, diameter, radius, average distance, spread of eccentricities, chromatic number and matching number.

Some combinatorial problems occurring in scheduling the games of a sports league are presented; solutions are obtained by constructing oriented factorisations of complete graphs. One considers schedules with a minimum number of breaks in the sequences of home-games and away-games and schedules with minimum number of days with breaks. Some open problems are also mentioned.

In this paper we prove that there are exactly 7 inequivalent [20,10,6] even extremal formally self-dual binary codes and that there are over 1000 inequivalent [22,11,6] even extremal formally self-dual binary codes. We give properties of these codes and present a summary of what is known about the classification of even extremal formally self-dual codes of small length.

A set of vertices S in a graph G is independent if no neighbor of a vertex of S belongs to S. A set of vertices U in a graph G is irredundant if each vertex v of U has a private neighbor, which may be v itself, i.e., a neighbor of v which is not a neighbor of any other vertex of U. The independence number α (resp. upper irredundance number IR) is the maximum number of vertices of an independent (resp. irredundant) set of G. In previous work, a series of best possible lower and upper bounds on α and some other usual invariants of G were obtained by the system AGX 2, and proved either automatically or by hand. These results are strengthened in the present paper by systematically replacing α by IR. The resulting conjectures were tested by AGX which could find no counter-example to an upper bound nor any case where a lower bound could not be shown to remain tight. Some proofs for the bounds on α carry over. In all other cases, new proofs are provided.

Tabu search is a general heuristic procedure for global optimization which has been successfully applied to several types of difficult combinatorial optimization problems (scheduling, graph coloring, etc.). Based on this technique, an efficient algorithm for obtaining almost optimal solutions to large traveling salesman problems is proposed. The algorithm uses the intermediate- and long-term memory concepts of tabu search as well as a new kind of move. Experimental results are presented for problems of 500–100 000 cities and a new estimation of the asymptotic normalized length of the shortest tour through points uniformly distributed in the unit square is given. Finally, as the algorithm is well suited for parallel computation, an implementation on a transputer network is described. Numerical results and speedups obtained show the efficiency of the parallel algorithm.

The paper deals with sequencing problems in which the job processing times, along with the processing order, are decision variables having their own associated linearly varying costs. The existing results in this area are surveyed and some new results are provided. Attention is focused on the computational complexity aspects, polynomial algorithms and the worst-case analysis of approximation algorithms.

The bin packing problem, in which a set of items of various sizes has to be packed into a minimum number of identical bins, has been extensively studied during the past fifteen years, mainly with the aim of finding fast heuristic algorithms to provide good approximate solutions. We present lower bounds and a dominance criterion and derive a reduction algorithm. Lower bounds are evaluated through an extension of the concept of worst-case performance. For both lower bounds and reduction algorithm an experimental analysis is provided.
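The simplest lower bound that work of this kind strengthens is the continuous bound, the total item size divided by the bin capacity, rounded up. A minimal sketch (the paper's bounds and dominance-based reduction are sharper than this):

```python
from math import ceil

def continuous_lower_bound(sizes, capacity):
    """L1 bound: no packing can use fewer bins than total size / capacity."""
    return ceil(sum(sizes) / capacity)

# Three items of size 6 in bins of capacity 10: the continuous bound
# gives 2 bins, while any feasible packing needs 3.
print(continuous_lower_bound([6, 6, 6], 10))  # 2
```

The gap in this example (2 versus 3) is exactly the kind of weakness that motivates stronger lower bounds and the extended worst-case analysis mentioned above.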

We present results and conjectures on the van der Waerden numbers w(2;3,t) and on the new palindromic van der Waerden numbers pdw(2;3,t). We have computed the new number w(2;3,19) = 349, and we provide lower bounds for 20 <= t <= 39, where for t <= 30 we conjecture these lower bounds to be exact. The lower bounds for 24 <= t <= 30 refute the conjecture that w(2;3,t) <= t^2, and we present an improved conjecture. We also investigate regularities in the good partitions (certificates) to better understand the lower bounds. Motivated by such regularities, we introduce *palindromic van der Waerden numbers* pdw(k; t_0,...,t_{k-1}), defined as ordinary van der Waerden numbers w(k; t_0,...,t_{k-1}), however only allowing palindromic solutions (good partitions), defined as reading the same from both ends. Unlike for ordinary van der Waerden numbers, these "numbers" actually need to be pairs of numbers. We compute pdw(2;3,t) for 3 <= t <= 27, and we provide lower bounds, which we conjecture to be exact, for t <= 35. All computations are based on SAT solving, and we discuss the various relations between SAT solving and Ramsey theory. In particular, we introduce a novel (open-source) SAT solver, the tawSolver, which performs best on the SAT instances studied here, and which is actually the original DLL-solver, but with an efficient implementation and a modern heuristic typical for look-ahead solvers (applying the theory developed in the SAT handbook article of the second author).
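A lower bound w(2;3,t) > n is certified by a good partition of {1,…,n}, and such a certificate is easy to verify independently of any SAT solver. A hedged sketch of the check (a naive verifier, not the paper's tooling):

```python
def has_ap(positions, length):
    """True if the set of integers contains an arithmetic progression
    of the given length."""
    s = set(positions)
    if len(s) < length:
        return False
    for a in s:
        for d in range(1, max(s)):
            if all(a + i * d in s for i in range(length)):
                return True
    return False

def is_good_partition(colour, t):
    """colour[i] in {0,1} for i = 1..n (index 0 unused): good for w(2;3,t)
    iff colour 0 has no 3-term and colour 1 no t-term progression."""
    block0 = [i for i in range(1, len(colour)) if colour[i] == 0]
    block1 = [i for i in range(1, len(colour)) if colour[i] == 1]
    return not has_ap(block0, 3) and not has_ap(block1, t)

# w(2;3,3) = 9: the colouring 00110011 of {1,...,8} is a good partition.
print(is_good_partition([None, 0, 0, 1, 1, 0, 0, 1, 1], 3))  # True
```

Verifying all certificates this way is what makes the reported lower bounds reproducible; only the exactness claims require exhaustive search.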

Repetitive substructures in two-dimensional arrays emerge in speeding up searches and have recently been studied also independently, in an attempt to parallel some of the classical derivations concerning repetitions in strings. The present paper focuses on repetitions in two dimensions that manifest themselves in the form of two "tandem" occurrences of the same primitive rectangular pattern W, where the two replicas touch each other along either a side or a corner. Being primitive here means that W cannot itself be expressed as a repeated tiling of another array. The main result of the paper is an O(n³ log n) algorithm for detecting all "side-sharing" repetitions in an n×n array. This is optimal, based on bounds on the number of such repetitions established in previous work. With easy adaptations, these constructions lead to an equally optimal O(n⁴) algorithm for repetitions of the second type.
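The primitivity condition on W can be checked directly by trying every smaller tile size; a minimal sketch (not part of the paper's detection algorithm, just the definition made executable):

```python
def is_primitive(W):
    """W (a list of equal-length rows) is primitive iff it is not a
    repeated tiling of any strictly smaller rectangular array."""
    m, n = len(W), len(W[0])
    for p in range(1, m + 1):
        for q in range(1, n + 1):
            if (p, q) == (m, n) or m % p or n % q:
                continue  # tile must evenly divide both dimensions
            if all(W[i][j] == W[i % p][j % q]
                   for i in range(m) for j in range(n)):
                return False  # W tiles by its top-left p x q sub-array
    return True
```

For example, a 2×2 checkerboard is primitive, while a constant array or a row repeating a shorter period is not.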

A new syntactic model, called pure two-dimensional (2D) context-free grammar (P2DCFG), is introduced based on the notion of pure context-free string grammar. The rectangular picture generative power of this 2D grammar model is investigated. Certain closure properties are obtained. An analogue of this 2D grammar model called pure 2D hexagonal context-free grammar (P2DHCFG) is also considered to generate hexagonal picture arrays on triangular grids.

In this paper, we propose an algorithm for shattering a set of disjoint line segments of arbitrary length and orientation placed arbitrarily in the plane. The time and space complexities of our algorithm are O(n²) and O(n), respectively. This improves on the algorithm proposed by R. Freimer, J.S.B. Mitchell, and C.D. Piatko (On the complexity of shattering using arrangements, Canadian Conference on Computational Geometry, 1990, pp. 218–222). A minor modification of this algorithm applies when the objects are simple polygons, keeping the time and space complexities invariant.

The class of 2K2-free graphs includes several interesting subclasses, such as split, pseudo-split, and threshold graphs, and the complements of chordal, interval, or trivially perfect graphs. The fundamental property of 2K2-free graphs is that they contain polynomially many maximal independent sets. As a consequence, several important problems that are NP-hard in general graphs, such as 3-colorability, maximum weight independent set (WIS), and minimum weight independent dominating set (WID), become polynomial-time solvable when restricted to the class of 2K2-free graphs. In the present paper, we extend 2K2-free graphs to larger classes with polynomial-time solvable WIS or WID. In particular, we show that WIS can be solved in polynomial time for (K2+K1,3)-free graphs and WID for (K2+K1,2)-free graphs. The latter result is in contrast with the fact that independent domination is NP-hard in the class of 2K1,2-free graphs, which has recently been proven by Zverovich.

In this paper we describe and analyse an algorithm for solving the satisfiability problem. If F is a Boolean formula in conjunctive normal form with n variables and r clauses, then we show that this algorithm solves the satisfiability problem for formulas with at most k literals per clause in time O(|F|·α_k^n), where α_k is the greatest number satisfying α_k = 2 − 1/α_k^{k−1} (in the case of 3-satisfiability, α_3 ≈ 1.6181).
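The base α_k of the running-time bound can be computed numerically by iterating the defining equation as a fixed point; a small sketch (the iteration scheme is mine, not from the paper):

```python
def alpha(k, iterations=100):
    """Greatest solution of a = 2 - 1/a^(k-1), found by fixed-point
    iteration.  For k = 3 this is the golden ratio, about 1.6181."""
    a = 2.0  # start above the fixed point; the map is a contraction here
    for _ in range(iterations):
        a = 2.0 - 1.0 / a ** (k - 1)
    return a
```

For k = 3 the equation α = 2 − 1/α² is solved by the golden ratio (since φ² = φ + 1), matching the stated 1.6181; α_k increases toward 2 as k grows, reflecting the weaker bound for longer clauses.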

The MINIMUM 2SAT-DELETION problem is to delete the minimum number of clauses in a 2SAT instance to make it satisfiable. It is one of the prototypes in the approximability hierarchy of minimization problems of Khanna et al. [Constraint satisfaction: the approximability of minimization problems, Proceedings of the 12th Annual IEEE Conference on Computational Complexity, Ulm, Germany, 24–27 June, 1997, pp. 282–296], and its approximability is largely open. We prove a lower approximation bound of , improving the previous bound of by Dinur and Safra [The importance of being biased, Proceedings of the 34th Annual ACM Symposium on Theory of Computing (STOC), May 2002, pp. 33–42, also ECCC Report TR01-104, 2001]. For highly restricted instances with exactly four occurrences of every variable we provide a lower bound of . Both inapproximability results apply to instances with no mixed clauses (the literals in every clause are either both negated or both unnegated). We further prove that any k-approximation algorithm for the MINIMUM 2SAT-DELETION problem polynomially reduces to a (2-2/(k+1))-approximation algorithm for the MINIMUM VERTEX COVER problem. One ingredient of these improvements is our proof that the MINIMUM VERTEX COVER problem is hardest to approximate on graphs with a perfect matching. More precisely, the problem of designing a ρ-approximation algorithm for MINIMUM VERTEX COVER on general graphs polynomially reduces to the same problem on graphs with a perfect matching. This also improves on the results by Chen and Kanj [On approximating minimum vertex cover for graphs with perfect matching, Proceedings of the 11th ISAAC, Taipei, Taiwan, Lecture Notes in Computer Science, vol. 1969, Springer, Berlin, 2000, pp. 132–143].
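To pin down the objective being approximated: MINIMUM 2SAT-DELETION is the number of clauses minus MAX-2SAT, i.e., the minimum over assignments of the number of unsatisfied clauses. A brute-force definition check (illustrative only; the problem is NP-hard, and the paper concerns its inapproximability):

```python
from itertools import product

def min_2sat_deletion(n, clauses):
    """Minimum number of clauses to delete from a 2SAT instance over
    variables 0..n-1 so that the remainder is satisfiable.
    A literal is a pair (variable, negated); brute force over assignments."""
    best = len(clauses)
    for assignment in product((False, True), repeat=n):
        value = lambda lit: assignment[lit[0]] != lit[1]
        unsat = sum(1 for c in clauses if not any(value(l) for l in c))
        best = min(best, unsat)
    return best
```

For instance, the contradictory pair (x) and (¬x) forces one deletion, while any satisfiable instance needs none.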

We propose a method for computing the cohomology ring of three-dimensional (3D) digital binary-valued pictures. We obtain the cohomology ring of a 3D digital binary-valued picture I via a simplicial complex K(I) topologically representing (up to isomorphisms of pictures) the picture I. The usefulness of a simplicial description of the "digital" cohomology ring of 3D digital binary-valued pictures is tested by means of a small program visualizing the different steps of the method. Some examples are shown concerning topological thinning, the visualization of representative (co)cycles of (co)homology generators, and the computation of the cup product on the cohomology of simple pictures.

We recall the definition of simple points which uses the digital fundamental group introduced by T. Y. Kong in [Kon89]. Then, we prove that a no less restrictive definition can be given. Indeed, we prove that there is no need to consider the fundamental group of the complement of an object in order to characterize its simple points. In order to prove this result, we do not use the fact that "the number of holes of X is equal to the number of holes of its complement", which is not sufficient for our purpose, but we use the linking number defined in [FM00]. In so doing, we formalize the proofs of several results stated without proof in the literature (Bertrand, Kong, Morgenthaler).

A binary three-dimensional (3D) image $I$ is well-composed if the boundary surface of its continuous analog is a 2D manifold. Since 3D images are often not well-composed, there are several voxel-based methods ("repairing" algorithms) for turning them into well-composed ones, but these methods either do not guarantee the topological equivalence between the original image and its corresponding well-composed one or involve sub-sampling the whole image. In this paper, we present a method to locally "repair" the cubical complex $Q(I)$ (embedded in $\mathbb{R}^3$) associated to $I$ to obtain a polyhedral complex $P(I)$ homotopy equivalent to $Q(I)$ such that the boundary of every connected component of $P(I)$ is a 2D manifold. The repair is performed via a new codification system for $P(I)$ in the form of a 3D grayscale image that allows efficient access to cells and their faces.

In this paper, we propose a new methodology for conceiving a thinning scheme based on the parallel deletion of P-simple points. This scheme needs neither a preliminary labelling nor an extended neighborhood, unlike the previously proposed thinning algorithms based on P-simple points. Moreover, from an existing thinning algorithm A, we construct another thinning algorithm A′ such that A′ deletes at least all the points removed by A, while preserving the same end points. In fact, we propose a 12-subiteration thinning algorithm which deletes at least the points removed by the one proposed by Palágyi and Kuba (Graphical Models Image Process. 61 (1999) 199).
