ACM Transactions on Algorithms

Published by Association for Computing Machinery
Print ISSN: 1549-6325
Publications
In several applications such as databases, planning, and sensor networks, parameters such as selectivity, load, or sensed values are known only with some associated uncertainty. The performance of such a system (as captured by some objective function over the parameters) is significantly improved if some of these parameters can be probed or observed. In a resource constrained situation, deciding which parameters to observe in order to optimize system performance, itself becomes an interesting and important optimization problem. This general problem is the focus of this article. One of the most important considerations in this framework is whether adaptivity is required for the observations. Adaptive observations introduce blocking or sequential operations in the system whereas nonadaptive observations can be performed in parallel. One of the important questions in this regard is to characterize the benefit of adaptivity for probes and observations. We present general techniques for designing constant factor approximations to the optimal observation schemes for several widely used scheduling and metric objective functions. We show a unifying technique that relates this optimization problem to the outlier version of the corresponding deterministic optimization. By making this connection, our technique shows constant factor upper bounds for the benefit of adaptivity of the observation schemes. We show that while probing yields significant improvement in the objective function, being adaptive about the probing is not beneficial beyond constant factors.
 
Figure: Multi-input multi-output generate gate adder for r = k = 2.
We consider the problem of constructing fast and small binary adder circuits. Among widely-used adders, the Kogge-Stone adder is often considered the fastest, because it computes the carry bits for two $n$-bit numbers (where $n$ is a power of two) with a depth of $2\log_2 n$ logic gates, size $4 n\log_2 n$, and all fan-outs bounded by two. Fan-outs of more than two are avoided, because they lead to the insertion of repeaters for repowering the signal and additional depth in the physical implementation. However, the depth bound of the Kogge-Stone adder is off by a factor of two from the lower bound of $\log_2 n$. This bound is achieved asymptotically in two separate constructions by Brent and Krapchenko. Brent's construction gives neither a bound on the fan-out nor the size, while Krapchenko's adder has linear size, but can have up to linear fan-out. In this paper we introduce the first family of adders with an asymptotically optimum depth of $\log_2 n + o(\log_2 n)$, linear size $\mathcal {O}(n)$, and a fan-out bound of two.
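The parallel-prefix structure referred to above is easiest to see in code. The following is a minimal Python sketch of the classical Kogge-Stone carry computation (the baseline construction the paper compares against, not the new adder family it introduces); bit lists are least-significant-bit first.

    def kogge_stone_add(a_bits, b_bits):
        # a_bits, b_bits: lists of n bits, least significant bit first.
        n = len(a_bits)
        g = [a & b for a, b in zip(a_bits, b_bits)]   # generate signals
        p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate signals
        d = 1
        while d < n:
            # one round of the prefix operator (g, p) o (g', p') = (g | (p & g'), p & p')
            g = [g[i] | (p[i] & g[i - d]) if i >= d else g[i] for i in range(n)]
            p = [p[i] & p[i - d] if i >= d else p[i] for i in range(n)]
            d *= 2
        carries = [0] + g                    # carry into position i is g[i - 1]
        s = [a_bits[i] ^ b_bits[i] ^ carries[i] for i in range(n)]
        return s, carries[n]                 # sum bits and the carry-out

    # 11 + 5 = 16: sum bits [0, 0, 0, 0] with carry-out 1
    print(kogge_stone_add([1, 1, 0, 1], [1, 0, 1, 0]))

The while loop runs log2(n) times, which is exactly where the 2·log2(n) depth of the Kogge-Stone adder comes from.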
 
We study the admission control problem in general networks. Communication requests arrive over time, and the online algorithm accepts or rejects each request while maintaining the capacity limitations of the network. The admission control problem has usually been analyzed as a benefit problem, where the goal is to devise an online algorithm that accepts the maximum number of requests possible. The problem with this objective function is that even algorithms with optimal competitive ratios may reject almost all of the requests when it would have been possible to reject only a few. This could be inappropriate for settings in which rejections are intended to be rare events. In this article, we consider preemptive online algorithms whose goal is to minimize the number of rejected requests. Each request arrives together with the path it should be routed on. We show an O(log²(mc))-competitive randomized algorithm for the weighted case, where m is the number of edges in the graph and c is the maximum edge capacity. For the unweighted case, we give an O(log m log c)-competitive randomized algorithm. This settles an open question of Blum et al. [2001]. We note that allowing preemption and handling requests with given paths are essential for avoiding trivial lower bounds. The admission control problem is a generalization of the online set cover with repetitions problem, whose input is a family of m subsets of a ground set of n elements. Elements of the ground set are given to the online algorithm one by one, possibly requesting each element multiple times. (If each element arrives at most once, this corresponds to the online set cover problem.) The algorithm must cover each element by different subsets, according to the number of times it has been requested. We give an O(log m log n)-competitive randomized algorithm for the online set cover with repetitions problem. This matches a recent lower bound of Ω(log m log n) given by Korman [2005] (based on Feige [1998]) for the competitive ratio of any randomized polynomial time algorithm, under the BPP ≠ NP assumption. Given any constant ϵ > 0, an O(log m log n)-competitive deterministic bicriteria algorithm is shown that covers each element by at least (1 − ϵ)k sets, where k is the number of times the element is covered by the optimal solution.
 
We study algorithmic problems that are motivated by bandwidth trading in next generation networks. Typically, bandwidth trading involves sellers (e.g., network operators) interested in selling bandwidth pipes that offer to buyers a guaranteed level of service for a specified time interval. The buyers (e.g., bandwidth brokers) are looking to procure bandwidth pipes to satisfy the reservation requests of end-users (e.g., Internet subscribers). Depending on what is available in the bandwidth exchange, the goal of a buyer is to either spend the least amount of money to satisfy all the reservations made by its customers, or to maximize its revenue from whatever reservations can be satisfied. We model the above as a real-time non-preemptive scheduling problem in which machine types correspond to bandwidth pipes and jobs correspond to the end-user reservation requests. Each job specifies a time interval during which it must be processed and a set of machine types on which it can be executed. If necessary, multiple machines of a given type may be allocated, but each must be paid for. Finally, each job has a revenue associated with it, which is realized if the job is scheduled on some machine. There are two versions of the problem that we consider. In the cost minimization version, the goal is to minimize the total cost incurred for scheduling all jobs, and in the revenue maximization version the goal is to maximize the revenue of the jobs that are scheduled for processing on a given set of machines. We consider several variants of the problems that arise in practical scenarios, and provide constant factor approximations.
 
We investigate the parameterized complexity of Vertex Cover parameterized by the difference between the size of the optimal solution and the value of the linear programming (LP) relaxation of the problem. By carefully analyzing the change in the LP value in the branching steps, we argue that combining previously known preprocessing rules with the most straightforward branching algorithm yields an O*(2.618^k) algorithm for the problem. Here, k is the excess of the vertex cover size over the LP optimum, and we write O*(f(k)) for a time complexity of the form O(f(k)·n^{O(1)}). We proceed to show that a more sophisticated branching algorithm achieves a running time of O*(2.3146^k). Following this, using previously known as well as new reductions, we give O*(2.3146^k) algorithms for the parameterized versions of Above Guarantee Vertex Cover, Odd Cycle Transversal, Split Vertex Deletion, and Almost 2-SAT, and O*(1.5214^k) algorithms for König Vertex Deletion and Vertex Cover parameterized by the size of the smallest odd cycle transversal and König vertex deletion set. These algorithms significantly improve the best known bounds for these problems. The most notable improvement among these is the new bound for Odd Cycle Transversal—this is the first algorithm that improves on the dependence on k of the seminal O*(3^k) algorithm of Reed, Smith, and Vetta. Finally, using our algorithm, we obtain a kernel for the standard parameterization of Vertex Cover with at most 2k − c·log k vertices. Our kernel is simpler than previously known kernels achieving the same size bound.
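As a concrete illustration of the parameter, the LP optimum in question can be computed directly. Below is a minimal Python sketch (assuming scipy is available; the solver choice is incidental) that evaluates the above-LP excess; it is an illustration of the parameterization, not of the algorithms in the paper.

    from scipy.optimize import linprog

    def vertex_cover_lp_opt(n, edges):
        # min sum_v x_v  subject to  x_u + x_v >= 1 for every edge, 0 <= x_v <= 1
        A_ub = []
        for u, v in edges:
            row = [0.0] * n
            row[u] = row[v] = -1.0   # rewrite x_u + x_v >= 1 as -(x_u + x_v) <= -1
            A_ub.append(row)
        res = linprog([1.0] * n, A_ub=A_ub, b_ub=[-1.0] * len(edges),
                      bounds=[(0.0, 1.0)] * n)
        return res.fun

    # A triangle: integral optimum 2, LP optimum 1.5 (the all-half solution),
    # so the above-LP excess is k = 0.5.
    print(2 - vertex_cover_lp_opt(3, [(0, 1), (1, 2), (0, 2)]))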
 
We consider connectivity-augmentation problems in a setting where each potential new edge has a nonnegative cost associated with it, and the task is to achieve a certain connectivity target with at most p new edges of minimum total cost. The main result is that the minimum cost augmentation of edge-connectivity from k-1 to k with at most p new edges is fixed-parameter tractable parameterized by p and admits a polynomial kernel. We also prove the fixed-parameter tractability of increasing edge-connectivity from 0 to 2, and increasing node-connectivity from 1 to 2.
 
The list update problem is a classical online problem, with an optimal competitive ratio that is still open, known to be somewhere between 1.5 and 1.6. An algorithm with competitive ratio 1.6, the smallest known to date, is COMB, a randomized combination of BIT and the TIMESTAMP algorithm TS. This and almost all other list update algorithms, like MTF, are projective in the sense that they can be defined by looking only at any pair of list items at a time. Projectivity (also known as "list factoring") simplifies both the description of the algorithm and its analysis, and so far seems to be the only way to define a good online algorithm for lists of arbitrary length. In this paper we characterize all projective list update algorithms and show that their competitive ratio is never smaller than 1.6 in the partial cost model. Therefore, COMB is a best possible projective algorithm in this model.
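For concreteness, here is a minimal Python sketch of the BIT component of COMB in the partial cost model, under the common formulation in which an item moves to the front when its bit flips to 1; it illustrates the projective style of algorithm and is not new material from the paper.

    import random

    def bit_algorithm(initial_list, requests):
        # BIT: each item carries a bit, initialized uniformly at random.
        # On access the bit is flipped; the item moves to the front iff its
        # bit becomes 1.  (Projective: the relative order of any two items
        # depends only on the requests to those two items.)
        lst = list(initial_list)
        bit = {x: random.randint(0, 1) for x in lst}
        cost = 0
        for x in requests:
            i = lst.index(x)
            cost += i            # partial cost model: accessing position i costs i
            bit[x] ^= 1
            if bit[x]:
                lst.insert(0, lst.pop(i))
        return cost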
 
We study algorithms for spectral graph sparsification. The input is a graph G with n vertices and m edges, and the output is a sparse graph G̃ that approximates G in an algebraic sense. Concretely, for all vectors x and any ε > 0, the graph G̃ satisfies (1−ε)·x^T L_G x ≤ x^T L_G̃ x ≤ (1+ε)·x^T L_G x, where L_G and L_G̃ are the Laplacians of G and G̃, respectively. The first contribution of this article applies to all existing sparsification algorithms that rely on solving linear systems on graph Laplacians; these algorithms are the fastest known to date. Specifically, we show that less precision is required in the solution of the linear systems, leading to speedups by an O(log n) factor. We also present faster sparsification algorithms for slightly dense graphs:
— An O(m log n) time algorithm that generates a sparsifier with O(n log³ n/ε²) edges.
— An O(m log log n) time algorithm for graphs with more than n log⁵ n log log n edges.
— An O(m) algorithm for graphs with more than n log¹⁰ n edges.
— An O(m) algorithm for unweighted graphs with more than n log⁸ n edges.
These bounds hold up to factors that are in O(poly(log log n)) and are conjectured to be removable.
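The guarantee above is a statement about Laplacian quadratic forms, which is easy to spot-check numerically. The following Python sketch (using numpy) tests the inequality on random vectors for two given weighted graphs; passing such random tests is evidence rather than proof, and this is not the sparsification algorithm itself.

    import numpy as np

    def laplacian(n, weighted_edges):
        # L = D - A for an undirected weighted graph on vertices 0..n-1.
        L = np.zeros((n, n))
        for u, v, w in weighted_edges:
            L[u, u] += w; L[v, v] += w
            L[u, v] -= w; L[v, u] -= w
        return L

    def looks_like_sparsifier(LG, LH, eps, trials=1000, seed=0):
        # Check (1-eps) x^T L_G x <= x^T L_H x <= (1+eps) x^T L_G x
        # on random Gaussian test vectors.
        rng = np.random.default_rng(seed)
        for _ in range(trials):
            x = rng.standard_normal(LG.shape[0])
            qG, qH = x @ LG @ x, x @ LH @ x
            if not ((1 - eps) * qG - 1e-9 <= qH <= (1 + eps) * qG + 1e-9):
                return False
        return True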
 
Figure: Basic anatomy of a protrusion.
Figure: A sketch of how the marking algorithm obtains a protrusion decomposition; X denotes a treewidth-modulator (edges among the individual vertex sets are not depicted).
Table: Kernelization results for problems with finite integer index on sparse graph classes, with their corresponding additional conditions.
We present a linear-time algorithm to compute a decomposition scheme for graphs G that have a set X ⊆ V(G), called a treewidth-modulator, such that the treewidth of G − X is bounded by a constant. Our decomposition, called a protrusion decomposition, is the cornerstone in obtaining the following two main results. Our first result is that any parameterized graph problem (with parameter k) that has finite integer index and such that positive instances have a treewidth-modulator of size O(k) admits a linear kernel on the class of H-topological-minor-free graphs, for any fixed graph H. This result partially extends previous meta-theorems on the existence of linear kernels on graphs of bounded genus and H-minor-free graphs. Let \(\mathcal{F}\) be a fixed finite family of graphs containing at least one planar graph. Given an n-vertex graph G and a non-negative integer k, Planar \(\mathcal{F}\)-Deletion asks whether G has a set X ⊆ V(G) such that \(|X|\leqslant k\) and G − X is H-minor-free for every \(H\in \mathcal{F}\). As our second application, we present the first single-exponential algorithm to solve Planar \(\mathcal{F}\)-Deletion. Namely, our algorithm runs in time 2^{O(k)}·n², which is asymptotically optimal with respect to k. So far, single-exponential algorithms were only known for special cases of the family \(\mathcal{F}\).
 
We study the k-route cut problem: given an undirected edge-weighted graph G=(V,E), a collection {(s_1,t_1),(s_2,t_2),...,(s_r,t_r)} of source-sink pairs, and an integer connectivity requirement k, the goal is to find a minimum-weight subset E' of edges to remove, such that the connectivity of every pair (s_i, t_i) falls below k. Specifically, in the edge-connectivity version, EC-kRC, the requirement is that there are at most (k-1) edge-disjoint paths connecting s_i to t_i in G \ E', while in the vertex-connectivity version, NC-kRC, the same requirement is for vertex-disjoint paths. Prior to our work, poly-logarithmic approximation algorithms were known for the special case where k ≤ 3, but no non-trivial approximation algorithms were known for any value k > 3, except in the single-source setting. We show an O(k log^{3/2} r)-approximation algorithm for EC-kRC with uniform edge weights, and several polylogarithmic bi-criteria approximation algorithms for EC-kRC and NC-kRC, where the connectivity requirement k is violated by a constant factor. We complement these upper bounds by proving that NC-kRC is hard to approximate to within a factor of k^ε for some fixed ε > 0. We then turn to study a simpler version of NC-kRC, where only one source-sink pair is present. We give a simple bi-criteria approximation algorithm for this case, and show evidence that even this restricted version of the problem may be hard to approximate. For example, we prove that the single source-sink pair version of NC-kRC has no constant-factor approximation, assuming Feige's Random k-AND assumption.
 
Figure: An example of a linear binary deterministic network.
Figure: Path P1, identified during the first iteration, is depicted in bold; during the second iteration, path P2 reaches node A3.
Figure: Resulting configuration after performing the L_x function for edge (x5, y4) and the φ-function at node A2; the potential path P2 now reaches node A1.
Figure: Resulting configuration after performing the L_x function for edge (x3, y4), and continuing P2 from node A3 to node B2 and D.
A long-standing open question in information theory is to characterize the unicast capacity of a wireless relay network. The difficulty arises due to the complex signal interactions induced in the network, since the wireless channel inherently broadcasts the signals and there is interference among transmissions. Recently, Avestimehr et al. [2007b] proposed a linear deterministic model that takes into account the shared nature of wireless channels, focusing on the signal interactions rather than the background noise. They generalized the min-cut max-flow theorem for graphs to networks of deterministic channels and proved that the capacity can be achieved using information-theoretic tools. They showed that the value of the minimum cut is in this case the minimum rank of all the adjacency matrices describing source-destination cuts. In this article, we develop a polynomial-time algorithm that discovers the relay encoding strategy to achieve the min-cut value in linear deterministic (wireless) networks, for the case of a unicast connection. Our algorithm crucially uses a notion of linear independence between channels to calculate the capacity in polynomial time. Moreover, we can achieve the capacity by using very simple one-symbol processing at the intermediate nodes, thereby constructively yielding finite-length strategies that achieve the unicast capacity of the linear deterministic (wireless) relay network.
 
Figure: Graphs of the potentials g*_v, f_{uv}, and f*_v, where w_{u1,v} > w_{u2,v} > w_{u3,v} > ⋯. These functions have values only at integer points; for the sake of presentation, they are plotted as lines.
We consider the problem of finding a semi-matching in bipartite graphs, which is also extensively studied under various names in the scheduling literature. We give faster algorithms for both the weighted and the unweighted case. For the weighted case, we give an O(nm log n)-time algorithm, where n is the number of vertices and m is the number of edges, by exploiting the geometric structure of the problem. This improves the classical O(n³)-time algorithms by Horn [1973] and Bruno et al. [1974b]. For the unweighted case, the bound can be improved even further. We give a simple divide-and-conquer algorithm that runs in O(√n m log n) time, improving two previous O(nm)-time algorithms by Abraham [2003] and Harvey et al. [2003, 2006]. We also extend this algorithm to solve the Balanced Edge Cover problem in O(√n m log n) time, improving the previous O(nm)-time algorithm by Harada et al. [2008].
 
Assume that a group of n people is going to an excursion and our task is to seat them into buses with several constraints each saying that a pair of people does not want to see each other in the same bus. This is a well-known graph coloring problem (with n being the number of vertices) and it can be solved in O*(2ⁿ) time by the inclusion-exclusion principle as shown by Björklund, Husfeldt, and Koivisto in 2009. Another approach to solve this problem in O*(2ⁿ) time is to use the Fast Fourier Transform (FFT). For this, given a graph G one constructs a polynomial P_G(x) of degree O*(2ⁿ) with the following property: G is k-colorable if and only if the coefficient of x^m (for some particular value of m) in the k-th power of P_G(x) is nonzero. Then, it remains to compute this coefficient using FFT. Assume now that we have additional constraints: the group of people contains several infants and these infants should be accompanied by their relatives in a bus. We show that if the number of infants is linear, then the problem can be solved in O*((2 − ϵ)ⁿ) time, where ϵ is a positive constant independent of n. We use this approach to improve known bounds for several NP-hard problems (the traveling salesman problem, the graph coloring problem, the problem of counting perfect matchings) on graphs of bounded average degree, as well as to simplify the proofs of several known results.
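A compact way to see the inclusion-exclusion principle at work is the Björklund-Husfeldt-Koivisto identity: the number of ways to cover V by k independent sets equals Σ_{X⊆V} (−1)^{|X|} s(V∖X)^k, where s(Y) counts independent sets inside Y, and G is k-colorable iff this sum is positive. The Python sketch below evaluates the identity by brute force, so it is far slower than O*(2ⁿ) and is for intuition only; the FFT and infant-constrained variants are not shown.

    def count_independent_sets(adj, allowed, n):
        # adj[v] is a bitmask of v's neighbours; count independent subsets
        # of the vertex set encoded by the bitmask `allowed`.
        cnt, sub = 0, allowed
        while True:
            if all(not (adj[v] & sub) for v in range(n) if (sub >> v) & 1):
                cnt += 1                     # sub is an independent set
            if sub == 0:
                return cnt
            sub = (sub - 1) & allowed        # next subset of `allowed`

    def is_k_colorable(adj, n, k):
        total = 0
        for X in range(1 << n):
            s = count_independent_sets(adj, ((1 << n) - 1) ^ X, n)
            total += (-1) ** bin(X).count('1') * s ** k
        return total > 0

    # A triangle is 3-colorable but not 2-colorable.
    triangle = [0b110, 0b101, 0b011]
    print(is_k_colorable(triangle, 3, 2), is_k_colorable(triangle, 3, 3))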
 
The restricted max-min fair allocation problem (also known as the restricted Santa Claus problem) is one of few problems that enjoy the intriguing status of having a better estimation algorithm than approximation algorithm. Indeed, Asadpour et al. [1] proved that a certain configuration LP can be used to estimate the optimal value within a factor 1/(4 + ε), for any ε > 0, but at the same time it is not known how to efficiently find a solution with a comparable performance guarantee. A natural question that arises from their work is whether the difference between these guarantees is inherent or due to a lack of suitable techniques. We address this problem by giving a quasi-polynomial approximation algorithm with the mentioned performance guarantee. More specifically, we modify the local search of [1] and provide a novel analysis that lets us significantly improve the bound on its running time: from 2^{O(n)} to n^{O(log n)}. Our techniques also have the interesting property that although we use the rather complex configuration LP in the analysis, we never actually solve it and therefore the resulting algorithm is purely combinatorial.
 
Figure: Tunnel between the routes of two agents.
Two mobile agents (robots) with distinct labels have to meet in an arbitrary, possibly infinite, unknown connected graph or in an unknown connected terrain in the plane. Agents are modeled as points, and the route of each of them only depends on its label and on the unknown environment. The actual walk of each agent also depends on an asynchronous adversary that may arbitrarily vary the speed of the agent, stop it, or even move it back and forth, as long as the walk of the agent in each segment of its route is continuous, does not leave it and covers all of it. Meeting in a graph means that both agents must be at the same time in some node or in some point inside an edge of the graph, while meeting in a terrain means that both agents must be at the same time in some point of the terrain. Does there exist a deterministic algorithm that allows any two agents to meet in any unknown environment in spite of this very powerful adversary? We give deterministic rendezvous algorithms for agents starting at arbitrary nodes of any anonymous connected graph (finite or infinite) and for agents starting at any interior points with rational coordinates in any closed region of the plane with path-connected interior. While our algorithms work in a very general setting (agents can, indeed, meet almost everywhere), we show that none of the above few limitations imposed on the environment can be removed. On the other hand, our algorithm also guarantees the following approximate rendezvous for agents starting at arbitrary interior points of a terrain as above: agents will eventually get to within an arbitrarily small positive distance from each other.
 
The problems of random projections and sparse reconstruction have much in common and have individually received much attention. Surprisingly, until now they progressed in parallel and remained mostly separate. Here, we employ new tools from probability in Banach spaces that were successfully used in the context of sparse reconstruction to advance on an open problem in random projection. In particular, we generalize and use an intricate result by Rudelson and Vershynin for sparse reconstruction which uses Dudley's theorem for bounding Gaussian processes. Our main result states that any set of $N = \exp(\tilde{O}(n))$ real vectors in $n$-dimensional space can be linearly mapped to a space of dimension $k = O(\log N \cdot \mathrm{polylog}(n))$, while (1) preserving the pairwise distances among the vectors to within any constant distortion and (2) being able to apply the transformation in time $O(n\log n)$ on each vector. This improves on the best known $N = \exp(\tilde{O}(n^{1/2}))$ achieved by Ailon and Liberty and $N = \exp(\tilde{O}(n^{1/3}))$ by Ailon and Chazelle. The dependence in the distortion constant, however, is believed to be suboptimal and subject to further investigation. For constant distortion, this settles the open question posed by these authors up to a $\mathrm{polylog}(n)$ factor while considerably simplifying their constructions.
 
The Internet has emerged as perhaps the most important network in modern computing, but rather miraculously, it was created through the individual actions of a multitude of agents rather than by a central planning authority. This motivates the game theoretic study of network formation, and our paper considers one of the most well-studied models, originally proposed by Fabrikant et al. In it, each of n agents corresponds to a vertex, which can create edges to other vertices at a cost of α each, for some parameter α. Every edge can be freely used by every vertex, regardless of who paid the creation cost. To reflect the desire to be close to other vertices, each agent’s cost function is further augmented by the sum total of all (graph theoretic) distances to all other vertices. Previous research proved that for many regimes of the (α,n) parameter space, the total social cost (sum of all agents’ costs) of every Nash equilibrium is bounded by at most a constant multiple of the optimal social cost. In algorithmic game theoretic nomenclature, this approximation ratio is called the price of anarchy. In our paper, we significantly sharpen some of those results, proving that for all constant non-integral α > 2, the price of anarchy is in fact 1 + o(1), i.e., not only is it bounded by a constant, but it tends to 1 as n → ∞. For constant integral α ≥ 2, we show that the price of anarchy is bounded away from 1. We provide quantitative estimates on the rates of convergence for both results.
 
We present a $(1+\epsilon)$-approximation algorithm running in $O(f(\epsilon)\cdot n \log^4 n)$ time for finding the diameter of an undirected planar graph with non-negative edge lengths.
 
Rank-width was defined by Oum and Seymour to investigate clique-width; they showed that graphs have bounded rank-width if and only if they have bounded clique-width. It is known that many hard graph problems have polynomial-time algorithms for graphs of bounded clique-width; however, these require a given decomposition corresponding to clique-width (a k-expression). Oum and Seymour removed this requirement by constructing an algorithm that either outputs a rank-decomposition of width at most f(k) for some function f or confirms that rank-width is larger than k, in O(|V|^9 log |V|) time for an input graph G = (V,E) and a fixed k. This can be reformulated in terms of clique-width as an algorithm that either outputs a (2^{f(k)+1} − 1)-expression or confirms that clique-width is larger than k in O(|V|^9 log |V|) time for fixed k. In this paper, we develop two separate algorithms of this kind with faster running times. We construct an O(|V|^4)-time algorithm with f(k) = 3k + 1 by constructing a subroutine for the previous algorithm; this avoids the general submodular function minimization algorithms used by Oum and Seymour. The other is an O(|V|^3)-time algorithm with f(k) = 24k, obtained by reducing graphs to binary matroids and then using an approximation algorithm for matroid branch-width by Hliněný.
 
We study the communication complexity and streaming complexity of approximating unweighted semi-matchings. A semi-matching in a bipartite graph G = (A, B, E), with n = |A|, is a subset of edges S ⊆ E that matches all A vertices to B vertices, with the goal usually being to do this as fairly as possible. While the term semi-matching was coined in 2003 by Harvey et al. [WADS 2003], the problem had already previously been studied in the scheduling literature under different names. We present a deterministic one-pass streaming algorithm that for any 0 ≤ ε ≤ 1 uses space Õ(n^{1+ε}) and computes an O(n^{(1−ε)/2})-approximation to the semi-matching problem. Furthermore, with o(log n) passes it is possible to compute an O(log n)-approximation with space Õ(n). In the one-way two-party communication setting, we show that for every ε > 0, deterministic communication protocols for computing an O(n^{1/((1+ε)c+1)})-approximation require a message of size more than cn bits. We present two deterministic protocols communicating n and 2n edges that compute an O(√n)- and an O(n^{1/3})-approximation, respectively. Finally, we improve on results of Harvey et al. [Journal of Algorithms 2006] and prove new links between semi-matchings and matchings. While it was known that an optimal semi-matching contains a maximum matching, we show that there is a hierarchical decomposition of an optimal semi-matching into maximum matchings. A similar result holds for semi-matchings that do not admit length-two degree-minimizing paths.
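To fix ideas, a semi-matching simply assigns every A-vertex to one of its B-neighbours, and fairness is measured by the loads this induces. Below is a minimal Python sketch of the natural offline greedy rule (send each A-vertex to its currently least-loaded neighbour); it is a baseline illustration, not the streaming or communication protocols of the paper, and an optimal semi-matching would additionally eliminate degree-minimizing paths.

    def greedy_semi_matching(neighbors):
        # neighbors: dict mapping each A-vertex to a non-empty list of B-vertices.
        # Returns a dict A-vertex -> chosen B-vertex covering all of A.
        load, assignment = {}, {}
        for a, nbrs in neighbors.items():
            b = min(nbrs, key=lambda v: load.get(v, 0))  # least-loaded neighbour
            assignment[a] = b
            load[b] = load.get(b, 0) + 1
        return assignment

    print(greedy_semi_matching({'a1': ['b1'], 'a2': ['b1', 'b2'], 'a3': ['b2']}))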
 
We study the well-known Label Cover problem under the additional requirement that problem instances have large girth. We show that if the girth is some $k$, the problem is roughly $2^{\log^{1-\epsilon} n/k}$ hard to approximate for all constant $\epsilon > 0$. A similar theorem was claimed by Elkin and Peleg [ICALP 2000], but their proof was later found to have a fundamental error. We use the new proof to show inapproximability for the basic $k$-spanner problem, which is both the simplest problem in graph spanners and one of the few for which super-logarithmic hardness was not known. Assuming $NP \not\subseteq BPTIME(2^{polylog(n)})$, we show that for every $k \geq 3$ and every constant $\epsilon > 0$ it is hard to approximate the basic $k$-spanner problem within a factor better than $2^{(\log^{1-\epsilon} n) / k}$ (for large enough $n$). A similar hardness for basic $k$-spanner was claimed by Elkin and Peleg [ICALP 2000], but the error in their analysis of Label Cover made this proof fail as well. Thus for the problem of Label Cover with large girth we give the first non-trivial lower bound. For the basic $k$-spanner problem we improve the previous best lower bound of $\Omega(\log n)/k$ by Kortsarz [Algorithmica 1998]. Our main technique is subsampling the edges of 2-query PCPs, which allows us to reduce the degree of a PCP to be essentially equal to the soundness desired. This turns out to be enough to essentially guarantee large girth.
 
In this article, we consider the fault-tolerant k-median problem and give the first constant factor approximation algorithm for it. In the fault-tolerant generalization of the classical k-median problem, each client j needs to be assigned to at least r_j ≥ 1 distinct open facilities. The service cost of j is the sum of its distances to the r_j facilities, and the k-median constraint restricts the number of open facilities to at most k. Previously, a constant factor was known only for the special case when all r_j are equal, and a logarithmic approximation ratio was known for the general case. In addition, we present the first polynomial time algorithm for the fault-tolerant k-median problem on a path or an HST by showing that the corresponding LP always has an integral optimal solution. We also consider the fault-tolerant facility location problem, in which the service cost of j can be a weighted sum of its distances to the r_j facilities. We give a simple constant factor approximation algorithm, generalizing several previous results that work only for nonincreasing weight vectors.
 
We introduce the st-cut version of the sparsest-cut problem, where the goal is to find a cut of minimum sparsity in a graph G(V, E) among those separating two distinguished vertices s, t ∈ V. Clearly, this problem is at least as hard as the usual (non-st) version. Our main result is a polynomial-time algorithm for the product-demands setting that produces a cut of sparsity O(√OPT), where OPT ⩽ 1 denotes the optimum when the total edge capacity and the total demand are assumed (by normalization) to be 1. Our result generalizes the recent work of Trevisan [arXiv, 2013] for the non-st version of the same problem (sparsest cut with product demands), which in turn generalizes the bound achieved by the discrete Cheeger inequality, a cornerstone of Spectral Graph Theory that has numerous applications. Indeed, Cheeger’s inequality handles graph conductance, the special case of product demands that are proportional to the vertex (capacitated) degrees. Along the way, we obtain an O(log |V|) approximation for the general-demands setting of sparsest st-cut.
 
We consider an optimization problem consisting of an undirected graph, with cost and profit functions defined on all vertices. The goal is to find a connected subset of vertices with maximum total profit, whose total cost does not exceed a given budget. The best result known prior to this work guaranteed a (2, O(log n)) bicriteria approximation; i.e., the solution’s profit is at least a fraction 1/O(log n) of that of an optimum solution respecting the budget, while its cost is at most twice the given budget. We improve these results and present a bicriteria tradeoff that, given any ε ∈ (0,1], guarantees a (1+ε, O((1/ε) log n))-approximation.
 
Local moments are used for local regression, to compute statistical measures such as sums, averages, and standard deviations, and to approximate probability distributions. We consider the case where the data source is a very large I/O array of size n and we want to compute the first N local moments, for some constant N. Without precomputation, this requires O(n) time. We develop a sequence of algorithms of increasing sophistication that use precomputation and additional buffer space to speed up queries. The simpler algorithms partition the I/O array into consecutive ranges called bins, and they are applicable not only to local-moment queries but also to algebraic queries (MAX, AVERAGE, SUM, etc.). With N buffers of size √n, time complexity drops to O(√n). A more sophisticated approach uses hierarchical buffering and has logarithmic time complexity O(b log_b n), when using N hierarchical buffers of size n/b. Using Overlapped Bin Buffering, we show that only a single buffer is needed, as with wavelet-based algorithms, but using much less storage. Applications exist in multidimensional and statistical databases over massive data sets, interactive image processing, and visualization.
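The simplest of the bin-buffering schemes is easy to make concrete. The Python sketch below partitions the array into bins of size about √n, buffers one sum per bin, and answers range-SUM queries in O(√n) time; it illustrates only the flat binning idea, not the hierarchical or overlapped variants.

    import math

    class BinBufferedSum:
        def __init__(self, data):
            self.data = list(data)
            self.b = max(1, math.isqrt(len(self.data)))   # bin size ~ sqrt(n)
            self.bins = [sum(self.data[i:i + self.b])     # one buffered sum per bin
                         for i in range(0, len(self.data), self.b)]

        def range_sum(self, lo, hi):
            # Sum of data[lo:hi]: whole bins are answered from the buffer,
            # the two ragged ends are scanned element by element.
            s, i = 0, lo
            while i < hi:
                if i % self.b == 0 and i + self.b <= hi:
                    s += self.bins[i // self.b]
                    i += self.b
                else:
                    s += self.data[i]
                    i += 1
            return s

    bbs = BinBufferedSum(range(100))
    print(bbs.range_sum(3, 97) == sum(range(3, 97)))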
 
Figure: Two-level MSB bucketing.
We consider the indexable dictionary problem, which consists of storing a set $S \subseteq \{0,\ldots,m-1\}$ for some integer $m$, while supporting the operations $\mathrm{rank}(x)$, which returns the number of elements in $S$ that are less than $x$ if $x \in S$, and $-1$ otherwise; and $\mathrm{select}(i)$, which returns the $i$-th smallest element in $S$. We give a data structure that supports both operations in $O(1)$ time on the RAM model and requires $\mathcal{B}(n,m) + o(n) + O(\lg \lg m)$ bits to store a set of size $n$, where $\mathcal{B}(n,m) = \lceil \lg \binom{m}{n} \rceil$ is the minimum number of bits required to store any $n$-element subset from a universe of size $m$. Previous dictionaries taking this space only supported (yes/no) membership queries in $O(1)$ time. In the cell probe model we can remove the $O(\lg \lg m)$ additive term in the space bound, answering a question raised by Fich and Miltersen, and Pagh. We present extensions and applications of our indexable dictionary data structure, including: an information-theoretically optimal representation of a $k$-ary cardinal tree that supports standard operations in constant time; a representation of a multiset of size $n$ from $\{0,\ldots,m-1\}$ in $\mathcal{B}(n,m+n) + o(n)$ bits that supports (appropriate generalizations of) $\mathrm{rank}$ and $\mathrm{select}$ operations in constant time; and a representation of a sequence of $n$ non-negative integers summing up to $m$ in $\mathcal{B}(n,m+n) + o(n)$ bits that supports prefix sum queries in constant time.
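The operation semantics above are subtle (note the $-1$ convention for rank), so a plain, non-succinct Python reference implementation may help; achieving the stated $\mathcal{B}(n,m) + o(n)$ space bound with $O(1)$ time is of course the hard part and is not attempted here.

    import bisect

    class IndexableDictionary:
        def __init__(self, elements):
            self.elems = sorted(set(elements))   # S as a sorted list

        def rank(self, x):
            # Number of elements of S smaller than x if x is in S; -1 otherwise.
            i = bisect.bisect_left(self.elems, x)
            return i if i < len(self.elems) and self.elems[i] == x else -1

        def select(self, i):
            # The i-th smallest element of S (1-indexed).
            return self.elems[i - 1]

    d = IndexableDictionary([3, 9, 4])
    print(d.rank(4), d.rank(5), d.select(3))     # 1, -1, 9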
 
The construction of perfect hash functions is a well-studied topic. In this paper, this concept is generalized with the following definition. We say that a family of functions from $[n]$ to $[k]$ is a $\delta$-balanced $(n,k)$-family of perfect hash functions if for every $S \subseteq [n]$, $|S|=k$, the number of functions that are 1-1 on $S$ is between $T/\delta$ and $\delta T$ for some constant $T>0$. The standard definition of a family of perfect hash functions requires that there be at least one function that is 1-1 on $S$, for each $S$ of size $k$. In the new notion of balanced families, we require the number of 1-1 functions to be almost the same (taking $\delta$ to be close to 1) for every such $S$. Our main result is that for any constant $\delta > 1$, a $\delta$-balanced $(n,k)$-family of perfect hash functions of size $2^{O(k \log \log k)} \log n$ can be constructed in time $2^{O(k \log \log k)} n \log n$. Using the technique of color-coding we can apply our explicit constructions to devise approximation algorithms for various counting problems in graphs. In particular, we exhibit a deterministic polynomial time algorithm for approximating both the number of simple paths of length $k$ and the number of simple cycles of size $k$ for any $k \leq O(\frac{\log n}{\log \log \log n})$ in a graph with $n$ vertices. The approximation is up to any fixed desirable relative error.
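The color-coding route to counting paths can be sketched quickly: color vertices with k colors, count "colorful" k-vertex paths (all colors distinct) by dynamic programming over color subsets, and rescale by the probability k!/k^k that a fixed simple path is colorful. The Python sketch below uses random colorings; the paper's contribution is to replace them by an explicit balanced family, which turns this Monte Carlo estimate into a deterministic approximation.

    import random
    from math import factorial

    def count_colorful_paths(adj, colors, k):
        # dp maps (endpoint v, set of colors used) -> number of colorful paths.
        # adj is a list of neighbour lists; assumes k >= 2.
        dp = {(v, 1 << colors[v]): 1 for v in range(len(adj))}
        for _ in range(k - 1):
            ndp = {}
            for (v, mask), cnt in dp.items():
                for u in adj[v]:
                    if not (mask >> colors[u]) & 1:      # color unused so far
                        key = (u, mask | (1 << colors[u]))
                        ndp[key] = ndp.get(key, 0) + cnt
            dp = ndp
        return sum(dp.values()) // 2    # each undirected path counted twice

    def estimate_k_paths(adj, k, trials=300):
        p = factorial(k) / k ** k       # Pr[a fixed simple k-path is colorful]
        total = 0
        for _ in range(trials):
            colors = [random.randrange(k) for _ in range(len(adj))]
            total += count_colorful_paths(adj, colors, k)
        return total / trials / p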
 
We deal with exact algorithms for Bandwidth, a long-studied NP-hard problem. For a long time nothing better than the trivial O*(n!) exhaustive search was known. In 2000, Feige and Kilian [Feige 2000] came up with an O*(10^n)-time and polynomial-space algorithm. In this article we present a new algorithm that solves Bandwidth in O*(5^n) time and O*(2^n) space. Then, we take a closer look and introduce a major modification that makes it run in O(4.83^n) time at the cost of O*(4^n) space complexity. This modification allowed us to perform the Measure & Conquer analysis for the time complexity, which had not been used for graph layout problems before.
 
Figure: An instance of FlexDraw requiring linearly many edges to have four bends; flexibilities are 1 except for the thick edges with flexibility 4.
Figure: The path between the new and the old root in the SPQR-tree containing µ (left); the whole graph G containing the principal split component H corresponding to µ with respect to the new root, and the principal split component H of the new root with respect to the old root (right).
Traditionally, the quality of orthogonal planar drawings is quantified by the total number of bends, or the maximum number of bends per edge. However, this neglects that in typical applications, edges have varying importance. We consider the problem OptimalFlexDraw that is defined as follows. Given a planar graph G on n vertices with maximum degree 4 (4-planar graph) and for each edge e a cost function \({\rm cost}_{e}: \mathbb{N}_{0} \longrightarrow \mathbb{R}\) defining costs depending on the number of bends e has, compute an orthogonal drawing of G of minimum cost. In this generality OptimalFlexDraw is NP-hard. We show that it can be solved efficiently if (1) the cost function of each edge is convex and (2) the first bend on each edge does not cause any cost. Our algorithm takes time O(n · T_flow(n)) and O(n² · T_flow(n)) for biconnected and connected graphs, respectively, where T_flow(n) denotes the time to compute a minimum-cost flow in a planar network with multiple sources and sinks. Our result is the first polynomial-time bend-optimization algorithm for general 4-planar graphs optimizing over all embeddings. Previous work considers restricted graph classes and unit costs.
 
We consider the maximization problem in the value oracle model of functions defined on k-tuples of sets that are submodular in every orthant and r-wise monotone, where k ⩾ 2 and 1 ⩽ r ⩽ k. We give an analysis of a deterministic greedy algorithm that shows that any such function can be approximated to a factor of 1/(1 + r). For r = k, we give an analysis of a randomized greedy algorithm that shows that any such function can be approximated to a factor of 1/(1 + √(k/2)). In the case of k = r = 2, the considered functions correspond precisely to bisubmodular functions, in which case we obtain an approximation guarantee of 1/2. We show that, as in the case of submodular functions, this result is the best possible both in the value query model and under the assumption that NP ≠ RP. Extending a result of Ando et al., we show that for any k ⩾ 3, submodularity in every orthant and pairwise monotonicity (i.e., r = 2) precisely characterize k-submodular functions. Consequently, we obtain an approximation guarantee of 1/3 (and thus independent of k) for the maximization problem of k-submodular functions.
 
Figure: The bipartite graph in the proof of Theorem 3.
A well-studied special case of bin packing is the 3-partition problem, where n items of size > 1/4 have to be packed in a minimum number of bins of capacity one. The famous Karmarkar-Karp algorithm transforms a fractional solution of a suitable LP relaxation for this problem into an integral solution that requires at most O(log n) additional bins. The three-permutations problem of Beck is the following. Given any three permutations on n symbols, color the symbols red and blue, such that in any interval of any of those permutations, the number of red and blue symbols is roughly the same. The necessary difference is called the discrepancy. We establish a surprising connection between bin packing and Beck’s problem: The additive integrality gap of the 3-partition linear programming relaxation can be bounded by the discrepancy of three permutations. This connection yields an alternative method to establish an O(log n) bound on the additive integrality gap of the 3-partition LP. Conversely, making use of a recent example of three permutations for which a discrepancy of Ω(log n) is necessary, we prove the following: The O(log² n) upper bound on the additive gap for bin packing with arbitrary item sizes cannot be improved by any technique that is based on rounding up items. This lower bound holds for a large class of algorithms, including the Karmarkar-Karp procedure.
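Beck's quantity is simple to compute for a given coloring, which makes the connection concrete. The Python sketch below measures the maximum prefix imbalance of a ±1 coloring over three permutations (imbalances of arbitrary intervals are within a factor of two of prefix imbalances); finding a coloring that keeps this small is the hard combinatorial problem.

    def three_permutation_discrepancy(perms, coloring):
        # perms: three permutations of range(n); coloring[s] is +1 (red) or -1 (blue).
        # Returns the largest |#red - #blue| over all prefixes of all permutations.
        disc = 0
        for perm in perms:
            prefix = 0
            for sym in perm:
                prefix += coloring[sym]
                disc = max(disc, abs(prefix))
        return disc

    perms = [[0, 1, 2, 3], [2, 0, 3, 1], [3, 2, 1, 0]]
    print(three_permutation_discrepancy(perms, {0: +1, 1: -1, 2: +1, 3: -1}))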
 
In this paper we further investigate the well-studied problem of finding a perfect matching in a regular bipartite graph. The first non-trivial algorithm, with running time $O(mn)$, dates back to K\"{o}nig's work in 1916 (here $m=nd$ is the number of edges in the graph, $2n$ is the number of vertices, and $d$ is the degree of each node). The currently most efficient algorithm takes time $O(m)$, and is due to Cole, Ost, and Schirra. We improve this running time to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$; this minimum can never be larger than $O(n^{1.75}\sqrt{\ln n})$. We obtain this improvement by proving a uniform sampling theorem: if we sample each edge in a $d$-regular bipartite graph independently with a probability $p = O(\frac{n\ln n}{d^2})$ then the resulting graph has a perfect matching with high probability. The proof involves a decomposition of the graph into pieces which are guaranteed to have many perfect matchings but do not have any small cuts. We then establish a correspondence between potential witnesses to non-existence of a matching (after sampling) in any piece and cuts of comparable size in that same piece. Karger's sampling theorem for preserving cuts in a graph can now be adapted to prove our uniform sampling theorem for preserving perfect matchings. Using the $O(m\sqrt{n})$ algorithm (due to Hopcroft and Karp) for finding maximum matchings in bipartite graphs on the sampled graph then yields the stated running time. We also provide an infinite family of instances to show that our uniform sampling result is tight up to poly-logarithmic factors (in fact, up to $\ln^2 n$).
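The sampling theorem suggests a directly implementable two-step procedure: thin the graph, then run any bipartite matching routine on what survives. The Python sketch below keeps each edge with probability $p = \min(1, c \cdot n \ln n / d^2)$ and then looks for a perfect matching with simple augmenting paths (Kuhn's algorithm, used here for brevity in place of Hopcroft-Karp); the constant $c$ is an assumption for illustration.

    import random
    from math import log

    def sample_and_match(n, edges, d, c=3.0):
        # Vertices are range(n) on each side; edges are (left, right) pairs
        # of a d-regular bipartite multigraph.
        p = min(1.0, c * n * log(max(n, 2)) / d ** 2)
        adj = [[] for _ in range(n)]
        for u, v in edges:
            if random.random() < p:          # keep each edge independently
                adj[u].append(v)

        match_of_right = [-1] * n

        def augment(u, visited):             # try to match left vertex u
            for v in adj[u]:
                if v not in visited:
                    visited.add(v)
                    if match_of_right[v] == -1 or augment(match_of_right[v], visited):
                        match_of_right[v] = u
                        return True
            return False

        matched = sum(augment(u, set()) for u in range(n))
        return matched == n, match_of_right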
 
We show how to test the bipartiteness of an intersection graph of n line segments or simple polygons in the plane, or of balls in R^d, in time O(n log n). More generally we find subquadratic algorithms for connectivity and bipartiteness testing of intersection graphs of a broad class of geometric objects. For unit balls in R^d, connectivity testing has equivalent randomized complexity to construction of Euclidean minimum spanning trees, and hence is unlikely to be solved as efficiently as bipartiteness testing. For line segments or planar disks, testing k-colorability of intersection graphs for k>2 is NP-complete.
 
We consider the problem of computing the distance between two piecewise-linear bivariate functions $f$ and $g$ defined over a common domain $M$. We focus on the distance induced by the $L_2$-norm, that is $\|f-g\|_2=\sqrt{\iint_M (f-g)^2}$. If $f$ is defined by linear interpolation over a triangulation of $M$ with $n$ triangles, while $g$ is defined over another such triangulation, the obvious na\"ive algorithm requires $\Theta(n^2)$ arithmetic operations to compute this distance. We show that it is possible to compute it in $O(n\log^4 n)$ arithmetic operations, by reducing the problem to multi-point evaluation of a certain type of polynomials. We also present an application to terrain matching.
 
Figure: Illustrating the proof of coverage of edges of length ≤ (b − a)r/c.
Figure: Illustrating the upper bound on the degree: C = 1 and degree ∆ + 1 for bridges; C = b/2 and degree ∆ · (spanner degree) for non-bridges.
Figure: Strip between nodes u and v showing bin covering and slices.
Sensor nodes are very weak computers that get distributed at random on a surface. Once deployed, they must wake up and form a radio network. Sensor network bootstrapping research thus has three parts: one must model the restrictions on sensor nodes; one must prove that the connectivity graph of the sensors has a subgraph that would make a good network; and one must give a distributed protocol for finding such a network subgraph that can be implemented on sensor nodes. Although many particular restrictions on sensor nodes are implicit or explicit in many papers, there remain many inconsistencies and ambiguities from paper to paper. The lack of a clear model means that solutions to the network-bootstrapping problem in both the theory and systems literature all violate constraints on sensor nodes. For example, random geometric graph results on sensor networks predict the existence of subgraphs on the connectivity graph with good route-stretch, but these results do not address the degree of such a graph, and sensor networks must have constant degree. Furthermore, proposed protocols for actually finding such graphs require that nodes have too much memory, whereas others assume the existence of a contention-resolution mechanism. We present a formal Weak Sensor Model that summarizes the literature on sensor node restrictions, taking the most restrictive choices when possible. We show that sensor connectivity graphs have low-degree subgraphs with good hop-stretch, as required by the Weak Sensor Model. Finally, we give a Weak Sensor Model-compatible protocol for finding such graphs. Ours is the first network initialization algorithm that is implementable on sensor nodes.
 
We show that the travelling salesman problem in bounded-degree graphs can be solved in time $O\bigl((2-\epsilon)^n\bigr)$, where $\epsilon > 0$ depends only on the degree bound but not on the number of cities, $n$. The algorithm is a variant of the classical dynamic programming solution due to Bellman, and, independently, Held and Karp. In the case of bounded integer weights on the edges, we also present a polynomial-space algorithm with running time $O\bigl((2-\epsilon)^n\bigr)$ on bounded-degree graphs.
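For reference, the classical Bellman-Held-Karp dynamic program that the paper speeds up looks as follows in Python; it runs in O(2ⁿ·n²) time and exponential space, and makes no use of the degree bound.

    def held_karp(dist):
        # dist: n x n matrix of edge weights; returns the length of a
        # shortest tour visiting every city exactly once (n >= 2).
        n = len(dist)
        INF = float('inf')
        # dp[mask][v]: shortest path starting at city 0, visiting exactly
        # the cities in `mask`, and ending at v.
        dp = [[INF] * n for _ in range(1 << n)]
        dp[1][0] = 0
        for mask in range(1 << n):
            for v in range(n):
                if dp[mask][v] == INF or not (mask >> v) & 1:
                    continue
                for u in range(n):
                    if not (mask >> u) & 1:
                        nxt = mask | (1 << u)
                        cand = dp[mask][v] + dist[v][u]
                        if cand < dp[nxt][u]:
                            dp[nxt][u] = cand
        full = (1 << n) - 1
        return min(dp[full][v] + dist[v][0] for v in range(1, n))

    print(held_karp([[0, 1, 9], [1, 0, 2], [9, 2, 0]]))   # tour of length 12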
 
Sequence representations supporting queries $access$, $select$ and $rank$ are at the core of many data structures. There is a considerable gap between the various upper bounds and the few lower bounds known for such representations, and how they relate to the space used. In this article we prove a strong lower bound for $rank$, which holds for rather permissive assumptions on the space used, and give matching upper bounds that require only a compressed representation of the sequence. Within this compressed space, operations $access$ and $select$ can be solved in constant or almost-constant time, which is optimal for large alphabets. Our new upper bounds dominate all of the previous work in the time/space map.
 
We give an O(n³) time algorithm for constructing a minimum-width branch-decomposition of a given planar graph with n vertices. This is achieved through a refinement of the previously best known algorithm of Seymour and Thomas, which runs in O(n⁴) time.
 
Figure: An undirected graph in which the capacities of all edges are identically equal to B.
In this paper, we focus our attention on the large-capacities unsplittable flow problem in a game theoretic setting. In this setting, there are selfish agents, which control some of the requests' characteristics, and may be dishonest about them. It is worth noting that in game theoretic settings many standard techniques, such as randomized rounding, violate certain monotonicity properties, which are imperative for truthfulness, and therefore cannot be employed. In light of this state of affairs, we design a monotone deterministic algorithm, based on primal-dual machinery, which attains an approximation ratio of $\frac{e}{e-1}$, up to an additive disparity of $\epsilon$. This implies an improvement on the current best truthful mechanism, as well as an improvement on the current best combinatorial algorithm for the problem under consideration. Surprisingly, we demonstrate that any algorithm in the family of reasonable iterative path-minimizing algorithms cannot yield a better approximation ratio. Consequently, it follows that in order to achieve a monotone PTAS, if one exists, one would have to employ different techniques. We also consider the large-capacities \textit{single-minded multi-unit combinatorial auction problem}. This problem is closely related to the unsplittable flow problem, since it can be formulated as a special case of the integer linear program of the unsplittable flow problem. Accordingly, we obtain a comparable performance guarantee by refining the algorithm suggested for the unsplittable flow problem.
 
Suppose that a random n-bit number V is multiplied by an odd constant M, greater than or equal to 3, by adding shifted versions of the number V corresponding to the 1s in the binary representation of the constant M. Suppose further that the additions are performed by carry-save adders until the number of summands is reduced to two, at which time the final addition is performed by a carry-propagate adder. We show that in this situation the distribution of the length of the longest carry-propagation chain in the final addition is the same (up to terms tending to 0 as n tends to infinity) as when two independent n-bit numbers are added, and in particular the mean and variance are the same (again up to terms tending to 0). This result applies to all possible orders of performing the carry-save additions.
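A quick simulation illustrates the statistic in question. The Python sketch below measures, for two independent random n-bit summands, the longest chain through which a single carry propagates (one standard definition; the paper's precise statistic for the constant-multiplication setting may differ); for independent random inputs its mean is known to be close to log₂ n.

    import random

    def longest_carry_chain(a, b, n):
        # Longest block i..j (bit 0 = LSB) where a carry generated at
        # position i (both bits 1) propagates through i+1..j (bits differ).
        best = 0
        for i in range(n):
            if (a >> i) & 1 and (b >> i) & 1:
                j = i + 1
                while j < n and ((a >> j) & 1) != ((b >> j) & 1):
                    j += 1
                best = max(best, j - i)
        return best

    def mean_chain_length(n, trials=5000):
        return sum(longest_carry_chain(random.getrandbits(n),
                                       random.getrandbits(n), n)
                   for _ in range(trials)) / trials

    print(mean_chain_length(64))   # empirically close to log2(64) = 6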
 
We provide efficient constant factor approximation algorithms for the problems of finding a hierarchical clustering of a point set in any metric space, minimizing the sum of minimum spanning tree lengths within each cluster, and in the hyperbolic or Euclidean planes, minimizing the sum of cluster perimeters. Our algorithms for the hyperbolic and Euclidean planes can also be used to provide a pants decomposition, that is, a set of disjoint simple closed curves partitioning the plane minus the input points into subsets with exactly three boundary components, with approximately minimum total length. In the Euclidean case, these curves are squares; in the hyperbolic case, they combine our Euclidean square pants decomposition with our tree clustering method for general metric spaces.
 
We show conditional lower bounds for well-studied #P-hard problems: The number of satisfying assignments of a 2-CNF formula with n variables cannot be computed in time exp(o(n)), and the same is true for computing the number of all independent sets in an n-vertex graph. The permanent of an n × n matrix with entries 0 and 1 cannot be computed in time exp(o(n)). The Tutte polynomial of an n-vertex multigraph cannot be computed in time exp(o(n)) at most evaluation points (x, y), and for simple graphs it cannot be computed in time exp(o(n/poly log n)). Our lower bounds are relative to (variants of) the Exponential Time Hypothesis (ETH), which says that the satisfiability of n-variable 3-CNF formulas cannot be decided in time exp(o(n)). We relax this hypothesis by introducing its counting version #ETH, namely that the satisfying assignments cannot be counted in time exp(o(n)). In order to use #ETH for our lower bounds, we transfer the sparsification lemma for d-CNF formulas to the counting setting.
 
We investigate the combinatorial complexity of geodesic Voronoi diagrams on polyhedral terrains using a probabilistic analysis. Aronov et al. [ABT08] prove that, if one makes certain realistic input assumptions on the terrain, this complexity is Θ(n + m√n) in the worst case, where n denotes the number of triangles that define the terrain and m denotes the number of Voronoi sites. We prove that under a relaxed set of assumptions the Voronoi diagram has expected complexity O(n + m), given that the sites have a uniform distribution on the domain of the terrain (or the surface of the terrain). Furthermore, we present a worst-case construction of a terrain which implies a lower bound of Ω(n m^{2/3}) on the expected worst-case complexity if these assumptions on the terrain are dropped. As an additional result, we show that the expected fatness of a cell in a random planar Voronoi diagram is bounded by a constant.
 
In this paper, a fully compressed pattern matching problem is studied. The compression is represented by straight-line programs (SLPs), i.e., context-free grammars generating exactly one string; the term fully means that both the pattern and the text are given in the compressed form. The problem is approached using a recently developed technique of local recompression: the SLPs are refactored so that substrings of the pattern and text are encoded in both SLPs in the same way. To this end, the SLPs are locally decompressed and then recompressed in a uniform way. This technique yields an \(\mathcal{O}((n+m)\log M \log(n+m))\) algorithm for compressed pattern matching, where n (m) is the size of the compressed representation of the text (pattern, respectively), while M is the size of the decompressed pattern. Since M ≤ 2^m, this substantially improves the previously best \(\mathcal{O}(m^2n)\) algorithm. Since the LZ compression standard reduces to SLPs with log(N/n) overhead and in \(\mathcal{O}(n \log(N/n))\) time, the presented algorithm can be applied also to the fully LZ-compressed pattern matching problem, yielding an \(\mathcal{O}(s \log s \log M)\) running time, where s = n log(N/n) + m log(M/m).
 
Given a sequence of n bits with binary zero-order entropy H₀, we present a dynamic data structure that requires nH₀ + o(n) bits of space and is able to perform rank and select, as well as inserting and deleting bits at arbitrary positions, in O(log n) worst-case time. This extends previous results by Hon et al. [ISAAC 2003], achieving O(log n/log log n) time for rank and select but Θ(polylog(n)) amortized time for inserting and deleting bits, and requiring n + o(n) bits of space; and by Raman et al. [SODA 2002], which have constant query time but a static structure. In particular, our result becomes the first entropy-bounded dynamic data structure for rank and select over bit sequences. We then show how the above result can be used to build a dynamic full-text self-index for a collection of texts over an alphabet of size σ, of overall length n and zero-order entropy H₀. The index requires nH₀ + o(n log σ) bits of space, and can count the number of occurrences of a pattern of length m in time O(m log n log σ). Reporting the occ occurrences can be supported in O(occ log² n log σ) time, paying O(n) extra space. Insertion of text into the collection takes O(log n log σ) time per symbol, which becomes O(log² n log σ) for deletions. This improves a previous result by Chan et al. [CPM 2004]. As a consequence, we obtain an O(n log n log σ) time construction algorithm for a compressed self-index requiring nH₀ + o(n log σ) bits of working space during construction.
 
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most $k$ of its vertices. In a breakthrough result Reed, Smith, and Vetta (Operations Research Letters, 2004) gave a $O(4^k kmn)$ time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed $k$. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most $O(4^k)$, a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in $k$, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in $k$. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size $k$. The process is randomized with one-sided error exponentially small in $k$, where the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an $O(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size $O(k^{4.5})$, implying a randomized polynomial kernelization.
 
We describe a new sampling-based method to determine cuts in an undirected graph. For a graph (V, E), its cycle space is the family of all subsets of E that have even degree at each vertex. We prove that with high probability, sampling the cycle space identifies the cuts of a graph. This leads to simple new linear-time sequential algorithms for finding all cut edges and cut pairs (a set of 2 edges that form a cut) of a graph. In the model of distributed computing in a graph G = (V, E) with O(log |V|)-bit messages, our approach yields faster algorithms for several problems. The diameter of G is denoted by D, and the maximum degree by Δ. We obtain simple O(D)-time distributed algorithms to find all cut edges, 2-edge-connected components, and cut pairs, matching or improving upon previous time bounds. Under natural conditions these new algorithms are universally optimal—that is, an Ω(D)-time lower bound holds on every graph. We obtain an O(D + Δ/log |V|)-time distributed algorithm for finding cut vertices; this is faster than the best previous algorithm when Δ, D = O(√|V|). A simple extension of our work yields the first distributed algorithm with sub-linear time for 3-edge-connected components. The basic distributed algorithms are Monte Carlo, but they can be made Las Vegas without increasing the asymptotic complexity. In the model of parallel computing on the EREW PRAM, our approach yields a simple algorithm with optimal time complexity O(log |V|) for finding cut pairs and 3-edge-connected components.
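The sequential version of the sampling idea can be sketched compactly: fix a spanning forest, give every non-tree edge an independent random bit, and label each tree edge with the XOR of the bits of the non-tree edges whose fundamental cycle uses it. Each such labeling is a uniform sample from the cycle space, and an edge is a cut edge iff its label is 0 in every sample (a non-cut edge escapes detection in a single sample with probability 1/2). A minimal Python sketch:

    import random

    def cut_edges_by_sampling(n, edges, trials=40):
        # Monte Carlo: returns the edges that received label 0 in every
        # sampled cycle-space element; w.h.p. exactly the cut edges.
        adj = [[] for _ in range(n)]
        for idx, (u, v) in enumerate(edges):
            adj[u].append((v, idx))
            adj[v].append((u, idx))

        # DFS spanning forest: parent_edge[v] = index of v's tree edge.
        parent_edge = [-1] * n
        preorder, seen = [], [False] * n
        for root in range(n):
            if seen[root]:
                continue
            seen[root] = True
            stack = [root]
            while stack:
                u = stack.pop()
                preorder.append(u)
                for v, idx in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        parent_edge[v] = idx
                        stack.append(v)

        tree = {e for e in parent_edge if e != -1}
        ever_nonzero = set()
        for _ in range(trials):
            mark = [0] * n
            label = {}
            for idx, (u, v) in enumerate(edges):
                if idx not in tree:
                    b = random.randint(0, 1)   # random bit per non-tree edge
                    label[idx] = b
                    mark[u] ^= b
                    mark[v] ^= b
            # The tree edge above u gets the XOR of marks over u's subtree.
            for u in reversed(preorder):
                e = parent_edge[u]
                if e != -1:
                    label[e] = mark[u]
                    a, b2 = edges[e]
                    parent = b2 if a == u else a
                    mark[parent] ^= mark[u]
            ever_nonzero |= {idx for idx, bit in label.items() if bit}
        return [edges[i] for i in range(len(edges)) if i not in ever_nonzero]

    # Two triangles joined by a single edge: only that edge is a cut edge.
    es = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
    print(cut_edges_by_sampling(6, es))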
 
A fundamental problem in computational geometry is to compute an obstacle-avoiding Euclidean shortest path between two points in the plane. The case of this problem on polygonal obstacles is well studied. In this article, we consider the problem version on curved obstacles, which are commonly modeled as splinegons. A splinegon can be viewed as replacing each edge of a polygon by a convex curved edge (polygons are special splinegons), and the combinatorial complexity of each curved edge is assumed to be O(1). Given in the plane two points s and t and a set S of h pairwise disjoint splinegons with a total of n vertices, after a bounded degree decomposition of S is obtained, we compute a shortest s-to-t path avoiding the splinegons in O(n + h log h + k) time, where k is a parameter sensitive to the geometric structures of the input and is upper bounded by O(h²). The bounded degree decomposition of S, which is similar to the triangulation of polygonal domains, can be computed in O(n log n) time or O(n + h log^{1+ϵ} h) time for any ϵ > 0. In particular, when all splinegons are convex, the decomposition can be computed in O(n + h log h) time, and k is linear in the number of common tangents in the free space (called “free common tangents”) among the splinegons. Our techniques also improve several previous results: (1) For the polygon case (i.e., when all splinegons are polygons), the shortest path problem was previously solved in O(n log n) time, or in O(n + h² log n) time. Thus, our algorithm improves the O(n + h² log n) time result, and is faster than the O(n log n) time solution for sufficiently small h, for example, h = o(√(n log n)). (2) Our techniques produce an optimal output-sensitive algorithm for a basic visibility problem of computing all free common tangents among h pairwise disjoint convex splinegons with a total of n vertices. Our algorithm runs in O(n + h log h + k) time and O(n) working space, where k is the number of all free common tangents. Note that k = O(h²). Even for the special case where all splinegons are convex polygons, the previously best algorithm for this visibility problem takes O(n + h² log n) time. (3) We improve the previous work for computing the shortest path between two points among convex pseudodisks of O(1) complexity each. In addition, a by-product of our techniques is an optimal O(n + h log h) time and O(n) space algorithm for computing the Voronoi diagram of a set of h pairwise disjoint convex splinegons with a total of n vertices.
 
We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with $n$ vertices, among which $r$ are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in $O(n (\log n)\log r)$ time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected $O(n \sqrt{h+1}\log^2 n)$ time for a polygon with $h$ holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in $O(n (\log n) \log r + r^{4/3+\varepsilon})$ time for any $\varepsilon>0$. On degenerate input, our time bound increases to $O(n (\log n) \log r + r^{17/11+\varepsilon})$.
 
Consider the following problem: given a graph with edge-weights and a subset Q of vertices, find a minimum-weight subgraph in which there are two edge-disjoint paths connecting every pair of vertices in Q. The problem is a failure-resilient analog of the Steiner tree problem, and arises in telecommunications applications. A more general formulation, also employed in telecommunications optimization, assigns a number (or requirement) r_v ∈ {0,1,2} to each vertex v in the graph; for each pair u, v of vertices, the solution network is required to contain min{r_u, r_v} edge-disjoint u-to-v paths. We address the problem in planar graphs, considering a popular relaxation in which the solution is allowed to use multiple copies of the input-graph edges (paying separately for each copy). The problem is SNP-hard in general graphs and NP-hard in planar graphs. We give the first polynomial-time approximation scheme in planar graphs. The running time is O(n log n). Under the additional restriction that the requirements are in {0,2} for vertices on the boundary of a single face of a planar graph, we give a linear-time algorithm to find the optimal solution.
 
Top-cited authors
Saket Saurabh
  • The Institute of Mathematical Sciences
Venkatesh Raman
  • The Institute of Mathematical Sciences
Gonzalo Navarro
  • University of Chile
Rajeev Raman
  • University of Leicester
Srinivasa Rao Satti
  • Seoul National University