Polynomial Time Approximation Schemes for Dense Instances of NP-Hard Problems

Journal of Computer and System Sciences (Impact Factor: 1). 01/1999; 58(1):193-210. DOI: 10.1006/jcss.1998.1605
Source: CiteSeer

ABSTRACT We present a unified framework for designing polynomial time approximation schemes (PTASs) for “dense” instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability. By dense graphs we mean graphs with minimum degree Ω(n), although our algorithms solve most of these problems so long as the average degree is Ω(n). Denseness for nongraph problems is defined similarly. The unified framework begins with the idea of exhaustive sampling: picking a small random set of vertices, guessing where they go on the optimum solution, and then using their placement to determine the placement of everything else. The approach then develops into a PTAS for approximating certain smooth integer programs, where the objective function and the constraints are “dense” polynomials of constant degree.
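The exhaustive-sampling idea can be sketched in a few lines for MAX-CUT. This is a simplified illustration, not the paper's full algorithm: the function name, the fixed sample size, and the greedy placement rule are assumptions made for the sketch (the paper chooses the sample size as a function of the accuracy parameter and uses the sample to estimate each vertex's contribution).

```python
import itertools
import random

def exhaustive_sampling_maxcut(adj, sample_size=8, seed=0):
    """Sketch of exhaustive sampling for MAX-CUT on a graph given as
    {vertex: set_of_neighbors}. Sample a few vertices, try every
    two-sided placement of the sample ("guess where they go in the
    optimum"), then put each remaining vertex on the side that cuts
    more of its edges into the vertices placed so far."""
    rng = random.Random(seed)
    vertices = list(adj)
    sample = rng.sample(vertices, min(sample_size, len(vertices)))
    rest = [v for v in vertices if v not in sample]
    best_cut, best_side = -1, None
    for bits in itertools.product([0, 1], repeat=len(sample)):
        side = dict(zip(sample, bits))
        for v in rest:
            # edges v would cut if placed on side 0 vs. side 1,
            # judged against already-placed vertices only
            cut0 = sum(1 for u in adj[v] if side.get(u) == 1)
            cut1 = sum(1 for u in adj[v] if side.get(u) == 0)
            side[v] = 0 if cut0 >= cut1 else 1
        # count each edge once
        cut = sum(1 for v in adj for u in adj[v]
                  if v < u and side[v] != side[u])
        if cut > best_cut:
            best_cut, best_side = cut, dict(side)
    return best_cut, best_side
```

On a dense graph the sample's estimate of each vertex's preferred side is accurate with high probability, which is what makes the greedy completion step work.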

  • ABSTRACT: Let A be a real symmetric n × n matrix with eigenvalues λ_1, ⋯, λ_n ordered by decreasing absolute value, and b an n × 1 vector. We present an algorithm finding approximate solutions to min xᵀ(Ax + b) and max xᵀ(Ax + b) over x ∈ {−1, 1}ⁿ, with an absolute error of at most (c_1|λ_1| + |λ_{⌈c_2 log n⌉}|)·2n + O((αn + β)√(n log n)), where α and β are the largest absolute values of the entries in A and b, respectively, for any positive constants c_1 and c_2, in time polynomial in n. We demonstrate that the algorithm yields a PTAS for MAXCUT in regular graphs on n vertices of degree d = ω(√(n log n)), as long as they contain O(d⁴ log n) 4-cycles. The strongest previous result showed that graphs of average degree Ω(n/log n) admit a PTAS. We also show that smooth n-variate polynomial integer programs of constant degree k can always be approximated in polynomial time leaving an absolute error of o(nᵏ), answering in the affirmative a suspicion of Arora, Karger, and Karpinski in STOC 1995.
    Algorithms - ESA 2005, 13th Annual European Symposium, Palma de Mallorca, Spain, October 3-6, 2005, Proceedings; 01/2005
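The objective this entry approximates can be pinned down by brute force for tiny n. The sketch below only enumerates the exact optimum of xᵀ(Ax + b) over {−1, 1}ⁿ so the quantity being approximated is concrete; it is not the cited polynomial-time algorithm, and the function name is made up for the example.

```python
import itertools

def quad_opt(A, b):
    """Exact min and max of x^T (A x + b) over x in {-1,1}^n by
    enumeration; feasible only for tiny n. The cited result achieves
    an additive approximation of this in polynomial time."""
    n = len(b)
    def val(x):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        return sum(x[i] * (Ax[i] + b[i]) for i in range(n))
    vals = [val(x) for x in itertools.product([-1, 1], repeat=n)]
    return min(vals), max(vals)
```

For A the adjacency matrix of a graph (and b = 0), max xᵀAx over {−1, 1}ⁿ is an affine shift of MAX-CUT, which is the connection the abstract exploits.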
  • ABSTRACT: In this paper, we consider the rank aggregation problem for information retrieval over the Web, using a metric called coherence that accounts for both the normalized Kendall-τ distance and the size of the overlap between two partial rankings. In general, the top-d coherence aggregation problem is defined as follows: given a collection of partial rankings Π = {τ_1, τ_2, ⋯, τ_K}, find a final ranking π of specified length d that maximizes the total coherence Φ(π, Π) = Σ_{i=1}^{K} Φ(π, τ_i). The corresponding complexity and algorithmic issues are discussed in this paper. Our main technical contribution is a polynomial time approximation scheme (PTAS) for a restricted top-d coherence aggregation problem.
    4th international conference on Frontiers in algorithmics, Wuhan, China; 07/2010
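The objective Φ(π, Π) can be made concrete with a small sketch. The abstract only says coherence combines overlap size with normalized Kendall-τ distance, so the particular combination below (|overlap| × (1 − normalized Kendall-τ distance on the overlap)) is an assumption for illustration, not the paper's definition.

```python
from itertools import combinations

def coherence(pi, tau):
    """Hypothetical coherence between two partial rankings, given as
    lists of items, most-preferred first. Rewards a large overlap and
    penalizes pairwise disagreements (Kendall-tau) on that overlap."""
    common = set(pi) & set(tau)
    m = len(common)
    if m < 2:
        return float(m)
    pos_pi = {x: i for i, x in enumerate(pi)}
    pos_tau = {x: i for i, x in enumerate(tau)}
    # pairs ordered oppositely in the two rankings
    discordant = sum(
        1 for a, b in combinations(common, 2)
        if (pos_pi[a] - pos_pi[b]) * (pos_tau[a] - pos_tau[b]) < 0)
    pairs = m * (m - 1) // 2
    return m * (1 - discordant / pairs)

def total_coherence(pi, rankings):
    """The objective from the abstract: sum of Phi(pi, tau_i) over i."""
    return sum(coherence(pi, tau) for tau in rankings)
```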
  • ABSTRACT: We address the problem of partitioning a set of n points into clusters so as to minimize the sum, over all intracluster pairs of points, of the cost associated with each pair. We obtain a randomized approximation algorithm for this problem for the cost functions ℓ₂², ℓ₁, and ℓ₂, as well as any cost function isometrically embeddable in ℓ₂². In the 2-cluster case the algorithm computes, with high probability, a solution which differs in its labelling of no more than an ε fraction of the points from a clustering whose cost is within (1 + ε) times optimal. Given a fixed approximation parameter ε, the runtime is linear in n for ℓ₂² problems of dimension o(log n / log log n), and n^O(log log n) in the general case. The ℓ₂² case is addressed by combining three elements: (a) variable-probability sampling of the given points, to reduce the size of the data set; (b) near-isometric dimension reduction; (c) a deterministic exact algorithm which runs in time exponential in the dimension (rather than the number of points). The remaining cases are addressed by reduction to ℓ₂².
    Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, May 21-23, 2000, Portland, OR, USA; 01/2000
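The clustering objective from this entry is straightforward to state in code. A minimal sketch of the cost being minimized, with squared Euclidean distance standing in for the ℓ₂² case; function names are made up for the example, and no approximation algorithm is implemented here.

```python
from itertools import combinations

def sq_euclidean(p, q):
    """Squared Euclidean distance: the l_2^2 pairwise cost."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def intracluster_cost(points, labels, cost):
    """Objective from the abstract: the sum of cost(p, q) over all
    pairs of points assigned to the same cluster."""
    total = 0.0
    for i, j in combinations(range(len(points)), 2):
        if labels[i] == labels[j]:
            total += cost(points[i], points[j])
    return total
```

Note that, unlike k-means, this objective charges every intracluster pair rather than distances to a centroid, which is why the paper needs sampling and dimension reduction rather than centroid-based techniques.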
