Algorithms and Complexity
Staff
Faculty:
Univ.-Prof. Dr. Berthold Vöcking (chair)
Priv. Doz. Dr. Walter Unger
Dr. Matthias Westermann (DFG Research Group)
http://www-i1.informatik.rwth-aachen.de
Secretary:
Helga Jussen
Phone: +49 241 8021101
Fax: +49 241 8022216
Email: jussen@cs.rwth-aachen.de
Researchers:
Dipl. Inform. Heiner Ackermann (since July 2005)
Dipl. Inform. Helge Bals
Dr. Hans-Joachim Böckenhauer (until February 2005)
Dipl. Inform. Dirk Bongartz
Dipl. Inform. Matthias Englert (DFG Research Group)
Dipl. Inform. Simon Fischer
Dipl. Inform. Thomas Franke
Dr. (PhD) Alantha Newman (until August 2005)
Dr. Harald Räcke (July – August 2005)
Dipl. Inform. Heiko Röglin
Guests:
Nir Ailon (Princeton University)
Artur Czumaj (NJIT Newark)
Prahladh Harsha (TTI Chicago)
Matthias Ruhl (Google, California)
Technical Staff:
Viktor Keil
Overview
The group focuses, in both research and teaching, on the following topics:
randomized algorithms
approximation and online algorithms
algorithms for interconnection networks
probabilistic analysis of algorithms
algorithmic game theory
Approaches for the design of algorithmic solutions to hard problems are manifold. For
optimization problems, a particularly suitable concept is that of approximation algorithms,
which compute provably good solutions in the sense that the cost of the computed solution
is within a guaranteed factor of the cost of an optimal one. Another approach is to apply
randomized algorithms, which are designed to return an optimal (or provably good
approximate) solution with high probability. Besides such positive results obtained through
the design of algorithms, the corresponding hardness results with respect to these concepts
are also of high interest, since they indicate which algorithmic approaches are appropriate.
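For reference, the approximation guarantee described above can be stated formally; the following is a standard textbook definition added for illustration (the symbols α, A and OPT are generic and not notation used elsewhere in this report):

```latex
% A is an alpha-approximation algorithm for a minimization problem if,
% for every instance I,
\[
  \mathrm{cost}\bigl(A(I)\bigr) \;\le\; \alpha \cdot \mathrm{OPT}(I),
  \qquad \alpha \ge 1 .
\]
```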
In many applications the input data of an optimization problem is not completely given
in advance but is revealed step by step. Nevertheless, the algorithm must make its
decisions based on the partial input alone. Typical problems in this area include, for
instance, elevator movement planning and paging strategies. Such algorithms are
referred to as online algorithms, and their performance can be evaluated by comparing
their solutions to those of an optimal offline strategy, i.e., a strategy that is assumed
to know the complete input in advance.
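The comparison with an optimal offline strategy is commonly quantified by the competitive ratio; the standard formulation below (for a minimization problem, with generic symbols not taken from this report) makes this precise:

```latex
% An online algorithm ALG is c-competitive if there exists a constant b
% such that, for every request sequence sigma,
\[
  \mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + b .
\]
```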
In particular, the merging of economic game theory and algorithmics for modelling
problems arising, for instance, in today’s networks has opened up a completely new field
of algorithmic research and has received a lot of attention in recent years. Here, one focus
is the comparison between the cost of optimal solutions obtained by a globally coordinating
operator on the one hand and the cost of equilibria reached by selfish agents on the other
hand. Another focus is the design of algorithms for optimization problems whose input
data is not necessarily reliable, since it is provided by selfish agents. In this setting, the
goal is to design algorithms that solve the optimization problem and additionally force
the agents to “reveal” their true input data; such “algorithms” are usually called
mechanisms. In this context, the analysis and design of auctions, and in particular of
combinatorial auctions, yields interesting insights.
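The first of these comparisons is commonly measured by the price of anarchy; the following standard definition is included for illustration (the symbols are generic, not notation from this report):

```latex
% Price of anarchy: worst-case ratio between the social cost of an
% equilibrium and the cost of a socially optimal state s*.
\[
  \mathrm{PoA} \;=\; \max_{s \in \mathcal{E}} \frac{C(s)}{C(s^{*})},
  \qquad \mathcal{E} = \text{set of equilibria}.
\]
```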
Besides classes on the above-mentioned topics, the department regularly offers courses
on algorithmic cryptography and parallel algorithms.
Research Projects
DFG Research Group: Flexible Online Algorithms
M. Englert, M. Westermann
(funded by DFG)
Online algorithms studied in theory are characterized by the fact that they do not have
knowledge about the whole input sequence of jobs in advance. Instead, the input
sequence is generated job by job, and a new job is not issued until the previous one
is handled by the online algorithm. In real applications, jobs can usually be delayed
for a short amount of time, and hence the input sequence of jobs can be rearranged
in a limited fashion to optimize the performance. This flexible online scenario occurs
in many applications in computer science and economics, e.g., in computer graphics:
A rendering system displays a sequence of primitives. The number of state changes
of such a system is a significant factor for its performance. State changes occur
when two consecutively rendered primitives differ in their attribute values, e.g., in their
texture or shader program. With the help of a reordering buffer, in which primitives
can be held temporarily, the sequence of primitives can be reordered online in such a
way that the number of state changes is reduced.
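The following is a minimal sketch of the reordering-buffer idea (my own illustration, not the strategy studied in the project): primitives enter a bounded buffer, and whenever the buffer overflows one primitive is evicted to the output, preferably one matching the current rendering state so that a state change is avoided.

```python
def reorder(stream, buffer_size):
    """Reorder a stream of attribute values with a bounded buffer so that
    equal attributes are emitted consecutively whenever possible."""
    buffer, output, state = [], [], None

    def evict():
        nonlocal state
        # Prefer an item matching the current rendering state; otherwise
        # fall back to the oldest buffered item.
        idx = next((i for i, x in enumerate(buffer) if x == state), 0)
        state = buffer.pop(idx)
        output.append(state)

    for item in stream:
        buffer.append(item)
        if len(buffer) > buffer_size:
            evict()
    while buffer:
        evict()
    return output

def state_changes(seq):
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

stream = list("abacabcbca")
print(state_changes(stream), state_changes(reorder(stream, 3)))
```

On this sample input the buffer of size 3 reduces the number of state changes from 9 to 3.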
In line with the research topic described above, the group in particular offers regular
courses and seminars on the design and analysis of algorithms.
vtraffic: Managing Variable Data Streams in Networks
(Management variabler Datenströme in Netzwerken)
S. Fischer, T. Franke, B. Vöcking
(funded by DFG)
This project deals with dynamic routing algorithms in large networks like the Internet.
The goal is to improve our understanding of communication patterns as well
as to design algorithms routing the data in such a way that the communication load
is as evenly distributed over the available resources as possible. This gives us the
opportunity to avoid congestion on the one hand and to guarantee a fair treatment
of all participating users on the other hand. In particular, we aim at the design of
algorithms for allocating streams of data on web servers as well as for performing
intra-domain routing in networks. The resulting research problems will be tackled
theoretically, practically, and experimentally. The project is part of the DFG research
program “Algorithmik großer und komplexer Netzwerke”. We closely cooperate with
the networking group of TU München headed by Anja Feldmann; our particular
focus in this cooperation is mainly on the theoretical part.
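As a toy illustration of the allocation goal, and only that (the project's actual algorithms are not reproduced here), a greedy rule that assigns each incoming stream to the currently least-loaded server already spreads load quite evenly:

```python
import heapq

def assign_streams(stream_loads, num_servers):
    """Greedy least-loaded assignment: each incoming stream goes to the
    server with the smallest current load (a classic online heuristic)."""
    heap = [(0.0, s) for s in range(num_servers)]  # (current load, server id)
    heapq.heapify(heap)
    assignment = []
    for load in stream_loads:
        current, server = heapq.heappop(heap)
        assignment.append(server)
        heapq.heappush(heap, (current + load, server))
    return assignment

print(assign_streams([5, 3, 8, 2, 7, 4], num_servers=3))
```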
DELIS: Dynamically Evolving Large Scale Information Systems
S. Fischer, A. Newman, B. Vöcking
(funded by European Union, Integrated Project)
Most of the existing and foreseen complex networks are built, operated and used by
a multitude of diverse economic interests. A prime example is the Internet, perhaps
the most complex computational artifact of our times. The (possibly) selfish nature of
the participating entities calls for a deeper understanding of the network dynamics
in order to achieve their cooperation efficiently, possibly taking bounded-rationality
aspects into account. In the past few years, there has been a flourishing body of work
at the border of Computer Science, Economics, Game Theory and Biology that has
started to address these issues. For example, (a) selfish network routing (and flows)
was addressed in a number of recent research papers, (b) mechanism design for
algorithmic cooperation of selfish users was proposed by many authors, (c) evolutionary
economics addresses the dynamics of self-organization in large networks, and (d) the
interplay between the bounded rationality of machines and their ability to play games
was examined by several research groups, including work by the Nobel Prize-winning
economists of 2001 and 2002.
Activities within the project can be grouped into two main classes:
Basic Research: basic research to understand the dynamics of the network and the
effect of concepts like self-organization, selfishness and bounded rationality, as well
as the structure of equilibria (and the form of dynamics) in such systems (a toy
example of such dynamics is sketched below).
Efficient Algorithms: design of mechanisms and algorithms that efficiently achieve
the cooperation between the involved selfish entities, possibly applying results from
evolutionary models.
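To give a flavour of the “dynamics and equilibria” theme, the sketch below (my own illustration, not code from the project) runs best-response dynamics in a simple load-balancing game with identical machines; the process is guaranteed to reach a pure Nash equilibrium.

```python
def best_response_dynamics(weights, num_machines):
    """Each job repeatedly moves to a machine minimizing its own completion
    time; when no job can improve, the state is a pure Nash equilibrium."""
    assign = [0] * len(weights)          # start with every job on machine 0
    loads = [0] * num_machines
    loads[0] = sum(weights)
    improved = True
    while improved:
        improved = False
        for j, w in enumerate(weights):
            current = assign[j]
            # Load job j would experience on each machine.
            best = min(range(num_machines),
                       key=lambda m: loads[m] + (w if m != current else 0))
            if best != current and loads[best] + w < loads[current]:
                loads[current] -= w
                loads[best] += w
                assign[j] = best
                improved = True
    return assign, loads

print(best_response_dynamics([3, 3, 2, 2, 2], num_machines=2))
```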
Probabilistic Analysis of Discrete Optimization Problems
H. Röglin, B. Vöcking
(funded by the DFG)
Many algorithmic problems are hard from a worst-case point of view but can be solved
quite well on typical inputs by heuristic approaches. Hence, worst-case complexity
does not seem to be an appropriate measure for the complexity of these problems. This
research project deals with the probabilistic analysis of such problems and heuristics in
order to narrow the gap between the observations made in practice and the theoretical
understanding of these problems.
For many problems, average-case analyses do not provide much insight either, since
inputs occurring in practice usually possess certain properties and a certain structure
that cannot be captured by an average-case analysis alone; it is, in general, not clear
how to choose the underlying probability distribution over the set of possible inputs.
In this project, we turn our attention to more general probabilistic input models such
as the model of smoothed analysis. The semi-random input model used in a smoothed
analysis consists of two stages: first an adversary chooses an input, and then this input
is randomly perturbed. In particular, the adversary can specify a worst-case input with
certain properties, which is then only slightly perturbed in the second stage.
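Formally, the smoothed complexity resulting from this two-stage model is usually defined as the worst case over adversarial instances of the expected running time under the random perturbation; the notation below is a standard formulation, not taken verbatim from the project:

```latex
% Smoothed running time of algorithm A: worst case over adversarial
% instances I of size n of the expected time after a random perturbation
% of magnitude sigma.
\[
  T_{\mathrm{smooth}}(n,\sigma)
  \;=\;
  \max_{|I| = n}\;
  \mathbb{E}\bigl[\,T\bigl(A,\,\mathrm{perturb}_{\sigma}(I)\bigr)\,\bigr].
\]
```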
The focus of our research is on problems that can be expressed as integer linear
programs. In our previous analyses we have characterized the class of integer
optimization problems with polynomial smoothed complexity. The algorithms with
polynomial smoothed complexity that we designed, however, are clearly outperformed
by common heuristics used in practice, such as Branch-and-Bound and Branch-and-Cut
approaches. One of the main goals of this research project is the probabilistic analysis
of these heuristics in order to understand why they perform so extraordinarily well in
practice. Our approach consists of two steps: first, structural parameters such as the
number of Pareto-optimal solutions or the integrality gap are analyzed; then the
running time of the heuristics is analyzed in terms of these parameters.
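One of these structural parameters, the number of Pareto-optimal solutions, is also algorithmically useful: for the knapsack problem, the classical Nemhauser-Ullmann algorithm enumerates all Pareto-optimal (weight, profit) pairs, and its running time is essentially proportional to their number. A compact sketch (my own illustration):

```python
def pareto_knapsack(items):
    """Nemhauser-Ullmann: maintain the Pareto front of (weight, profit)
    pairs over all subsets of the items processed so far."""
    front = [(0, 0)]                                  # the empty subset
    for w, p in items:
        candidates = front + [(fw + w, fp + p) for fw, fp in front]
        candidates.sort(key=lambda x: (x[0], -x[1]))  # by weight, higher profit first
        front, best_profit = [], -1
        for weight, profit in candidates:             # keep only non-dominated pairs
            if profit > best_profit:
                front.append((weight, profit))
                best_profit = profit
    return front

# Example: items given as (weight, profit) pairs.
print(pareto_knapsack([(2, 3), (3, 4), (4, 5)]))
```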
GRAAL: Graphs and Algorithms in Communication Networks
W. Unger
(funded by European Science Foundation, Cost Action)
The main objective of this Action is to create a discussion space between applied
communities and theorists in the context of communication networks in which models
and assumptions can be reviewed and formalized into the appropriate language.
Within the context of communication networks, the Action focuses on, but is not
restricted to, the following specific fields:
1. QoS networks: Quality of Service (QoS) refers to a broad collection of networking
technologies and techniques. The goal of QoS is to provide guarantees on traffic
transmission. Elements of network performance within the scope of QoS include
availability (uptime), bandwidth (throughput), latency (delay), delay jitter, and error rate.
2. Optimization in optical networks: Optical networks using light paths in optical
fibers as communication media induce a number of problems that cannot be directly
resolved with standard solutions from electronic networks but require new approaches
and techniques instead. These problems include routing techniques, wavelength
assignment on switches and cross-connects, signalling, topology design, and path
recovery (backup) for protection and restoration.
3. Optimization in wireless networks: Wireless networks were traditionally associated
with voice and telephony. Nowadays, packet-based services are also supported in
mobile networks, for instance via GPRS and UMTS technologies. Trends in wireless
networks include QoS for multimedia transmission and backup paths. Therefore,
problems known from static networks, such as delay minimization, traffic engineering,
frequency assignment and localization, carry over to the wireless setting (a small
frequency-assignment sketch follows this list). In addition, there are several challenges
specific to wireless networks, for instance the coordination of the individual,
uncontrolled agents participating in the network.
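Frequency assignment, mentioned in item 3, is classically modelled as graph coloring: transmitters are vertices, interfering transmitters are joined by edges, and colors correspond to frequencies. The greedy sketch below is my own minimal illustration under these assumptions, not an algorithm of the Action:

```python
def greedy_frequency_assignment(interference):
    """Assign each transmitter the smallest frequency (color) not used by an
    already-processed interfering neighbour.

    interference: dict mapping transmitter -> set of interfering transmitters.
    """
    frequency = {}
    for node in interference:          # processing order influences quality
        used = {frequency[n] for n in interference[node] if n in frequency}
        f = 0
        while f in used:
            f += 1
        frequency[node] = f
    return frequency

# Example: a 4-cycle interference graph needs only two frequencies.
graph = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C", "A"}}
print(greedy_frequency_assignment(graph))
```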
Algorithmics in Computational Biology
D. Bongartz
This project is devoted to the study of algorithmic problems arising in the area of
molecular biology. Most of these problems are computationally hard and are therefore
approached by various algorithmic techniques, for instance approximation algorithms.
Special focus is given to the design and analysis of algorithms for problems arising in
the area of protein folding. One of the basic tasks here, and in bioinformatics in general,
is to first model the problem in a mathematically suitable way. Our focus is in particular
on modelling protein folding as a special kind of embedding problem.
Further interests include problems arising in genome analysis and comparison. One
example is the family of haplotyping problems, where the goal is to reassign DNA
sequencing data to the paternal and maternal chromosomes, respectively. During DNA
sequencing this information gets lost and has to be regained, for instance to improve
the understanding of genetic diseases.
To analyze the similarity between species on the level of genes (rather than on the level
of DNA), one searches for the minimum number of genome rearrangements needed
to transform one genome into the other. Several types of rearrangements have been
introduced in the literature; here we focus on reversal and transposition operations.
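For sorting by reversals, a standard structural quantity is the number of breakpoints of a permutation: neighbouring genes that are not adjacent in the target order. Since a single reversal can remove at most two breakpoints, half the breakpoint count is a lower bound on the reversal distance. A small sketch of mine illustrating this bound:

```python
def breakpoints(perm):
    """Count breakpoints of a permutation of 1..n relative to the identity.
    The permutation is framed by 0 and n+1; a breakpoint is a pair of
    neighbours that are not consecutive integers."""
    framed = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(framed, framed[1:]) if abs(a - b) != 1)

perm = [3, 1, 2, 5, 4]
b = breakpoints(perm)
print(b, "breakpoints; sorting by reversals needs at least", (b + 1) // 2, "reversals")
```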
Other Activities
Courses
Our group offered the following lectures and seminars:
Summer semester 2005
Lecture on Optimization and Game Theory
Lecture on Parallel Algorithms
Lecture on Online Algorithms
Seminar on Auctions, Games, Algorithms — Algorithmic Game Theory and the
Internet
Seminar on Combinatorial Optimization
Seminar on Algorithmic Cryptography
Seminar on Online Algorithms
Proseminar on Algorithms and Data Structures
Winter semester 2005/06
Lecture on Computability and Complexity
Lecture on Algorithmic Graph Theory
Seminar on Algorithmic Game Theory
Seminar on Parallel Algorithms
Seminar on Randomized Algorithms
Proseminar on Algorithm Design
PC Memberships
B. Vöcking served as a PC member for the following conferences:
46th Annual IEEE Symposium on Foundations of Computer Science
(FOCS 2005)
13th Annual European Symposium on Algorithms (ESA 2005)
Third Workshop on Approximation and Online Algorithms (WAOA 2005)
Talks and Publications
Talks
Heiner Ackermann: Decision Making Based on Approximate and Smoothed Pareto Curves, invited by Prof. Dr. Eckart Zitzler, ETH Zurich, Switzerland, December 2, 2005.
Heiner Ackermann: Decision Making Based on Approximate and Smoothed Pareto Curves, 16th International Symposium on Algorithms and Computation (ISAAC 2005), Sanya, China, December 19-21, 2005.
Matthias Englert: Reordering Buffer Management for Non-uniform Cost Models, 32nd International Colloquium on Automata, Languages and Programming (ICALP 2005), Lisbon, Portugal, July 11-15, 2005.
Simon Fischer: Adaptive Routing with Stale Information, 24th Annual ACM Symposium on Principles of Distributed Computing (PODC 2005), Las Vegas, USA, July 17-20, 2005.
Simon Fischer: On the Structure and Complexity of Worst-Case Equilibria, 1st International Workshop on Internet and Network Economics (WINE 2005), Hong Kong, China, December 15-17, 2005.
Simon Fischer: Evolutionary Game Theory with Applications to Adaptive Routing, European Conference on Complex Systems (ECCS 2005), Paris, France, November 14-18, 2005.
Simon Fischer: Adaptive Routing with Stale Information, Jahrestreffen DFG Schwerpunktprogramm 1126, Paderborn, Germany, March 10-12, 2005.
Heiko Röglin: Smoothed Analysis of Integer Programming, 11th International IPCO Conference (IPCO 2005), Berlin, Germany, June 8-10, 2005.
Heiko Röglin: Smoothed Analysis of Integer Programming, 51. Workshop über Datenstrukturen, Effiziente Algorithmen und Komplexitätstheorie, Erlangen-Nürnberg, Germany, March 15, 2005.
Walter Unger: A 1.5-Approximation of the Minimal Manhattan Network Problem, 16th International Symposium on Algorithms and Computation (ISAAC 2005), Sanya, China, December 19-21, 2005.
Berthold Vöcking: Approximation Techniques for Utilitarian Mechanism Design, Dagstuhl Seminar: Computing and Markets, No. 05011, Schloss Dagstuhl, Germany, January 3-7, 2005.
Berthold Vöcking: Selfish Routing and Evolutionary Game Theory, DFG Workshop on Selfish Routing in Networks, Universität zu Kiel, Germany, January 28-29, 2005.
Berthold Vöcking: Typical Properties of Winners and Losers in Discrete Optimization, Annual Meeting of the EU Integrated Project DELIS, Prague, Czech Republic, February 10-11, 2005.
Berthold Vöcking: Approximation Techniques for Utilitarian Mechanism Design, Dagstuhl Seminar: Design and Analysis of Randomized and Approximation Algorithms, No. 05201, Schloss Dagstuhl, Germany, May 15-20, 2005.
Berthold Vöcking: Approximation Techniques for Utilitarian Mechanism Design, Foundations of Computational Mathematics (FOCM) conference, Santander, Spain, June 30 - July 9, 2005.
Berthold Vöcking: Approximation Techniques for Utilitarian Mechanism Design, Graduiertenkolleg “Combinatorics, Geometry and Computation”, Humboldt-Universität zu Berlin, Germany, June 27, 2005.
Berthold Vöcking: Selfish Routing with Evolutionary Strategies, DELIS Workshop: Modeling, adjusting and predicting the evolution of dynamic networks, Schloss Dagstuhl, Germany, September 3-4, 2005.
Berthold Vöcking: Mechanism Design for Routing Unsplittable Flow, Dagstuhl Seminar: Algorithmic Aspects of Large and Complex Networks, No. 05361, Schloss Dagstuhl, Germany, September 4-9, 2005.
Berthold Vöcking: On the Structure and Complexity of Worst-Case Equilibria, DELIS Workshop: Analysis and Design of Selfish and Complex Systems, Patras, Greece, December 2005.
Publications
Heiner Ackermann, Alantha Newman, Heiko Röglin, Berthold Vöcking: Decision Making Based on Approximate and Smoothed Pareto Curves, Proc. of the 16th International Symposium on Algorithms and Computation (ISAAC 2005), Lecture Notes in Computer Science 3827, Springer 2005, pp. 675–684.
Nir Ailon, Moses Charikar, Alantha Newman: Aggregating Inconsistent Information: Ranking and Clustering, Proc. of the 37th Annual ACM Symposium on Theory of Computing (STOC 2005), ACM 2005, pp. 684–693.
Patrick Briest, Piotr Krysta, Berthold Vöcking: Approximation Techniques for Utilitarian Mechanism Design, Proc. of the 37th Annual ACM Symposium on Theory of Computing (STOC 2005), ACM 2005, pp. 39–48.
Matthias Englert, Matthias Westermann: Reordering Buffer Management for Non-uniform Cost Models, Proc. of the 32nd International Colloquium on Automata, Languages and Programming (ICALP 2005), Lecture Notes in Computer Science 3580, Springer 2005, pp. 627–638.
Simon Fischer, Berthold Vöcking: Adaptive Routing with Stale Information, Proc. of the 24th Annual ACM Symposium on Principles of Distributed Computing (PODC 2005), ACM 2005, pp. 276–283.
Simon Fischer, Berthold Vöcking: On the Structure and Complexity of Worst-Case Equilibria, Proc. of the 1st International Workshop on Internet and Network Economics (WINE 2005), Lecture Notes in Computer Science 3828, Springer 2005, pp. 151–160.
Simon Fischer, Berthold Vöcking: Evolutionary Game Theory with Applications to Adaptive Routing, Proc. of the European Conference on Complex Systems (ECCS 2005), to appear.
Simon Fischer, Ingo Wegener: The One-dimensional Ising Model: Mutation versus Recombination, Theoretical Computer Science 344(2-3), 2005, pp. 208–225.
Heiko Röglin, Berthold Vöcking: Smoothed Analysis of Integer Programming, Proc. of the 11th International IPCO Conference (IPCO 2005), Lecture Notes in Computer Science 3509, Springer 2005, pp. 276–290.
Sebastian Seibert, Walter Unger: A 1.5-Approximation of the Minimal Manhattan Network Problem, Proc. of the 16th International Symposium on Algorithms and Computation (ISAAC 2005), Lecture Notes in Computer Science 3827, Springer 2005, pp. 246–255. Best paper award of ISAAC 2005.

Chapters (32)

In this invited talk we address the algorithmic problems behind a truly distributed Web search engine. The main goal is to reduce the cost of a Web search engine while keeping all the benefits of a centralized search engine in spite of the intrinsic network latency imposed by Internet. The key ideas to achieve this goal are layered caching, online prediction mechanisms and exploit the locality and distribution of queries.
Starting with two models fifty years ago, the discrete marriage game [1] and the continuous assignment game [2], the study of stable matchings has evolved into a rich theory with applications in many areas. Most notably, it has lead to a number of truthful mechanisms that have seen a recent rejuvenation in the context of sponsored search. In this paper we survey the history of these problems and provide several links to ongoing research in the field.
Modern memory devices may suffer from faults, where some bits may arbitrarily flip and corrupt the values of the affected memory cells. The appearance of such faults may seriously compromise the correctness and performance of computations. In recent years, many algorithms for computing in the presence of memory faults have been introduced in the literature: in particular, an algorithm or a data structure is called resilient if it is able to work correctly on the set of uncorrupted values. In this invited talk I will survey recent work on resilient algorithms and data structures.
In the Connected Red-Blue Dominating Set problem we are given a graph G whose vertex set is partitioned into two parts R and B (red and blue vertices), and we are asked to find a connected subgraph induced by a subset S of B such that each red vertex of G is adjacent to some vertex in S. The problem can be solved in O*(2n - |B|)\mathcal{O}^*(2^{n - |B|}) time by reduction to the Weighted Steiner Tree problem. Combining exhaustive enumeration when |B| is small with the Weighted Steiner Tree approach when |B| is large, solves the problem in O*(1.4143n)\mathcal{O}^*(1.4143^n). In this paper we present a first non-trivial exact algorithm whose running time is in O*(1.3645n)\mathcal{O}^*(1.3645^n). We use our algorithm to solve the Connected Dominating Set problem in O*(1.8619n)\mathcal{O}^*(1.8619^n). This improves the current best known algorithm, which used sophisticated run-time analysis via the measure and conquer technique to solve the problem in O*(1.8966n)\mathcal{O}^*(1.8966^n). Keywordsexact algorithms-dominating set-weighted steiner tree
For a given node t in a directed graph G(V G ,E G ) and a positive integer k we study the problem of computing a set of k new links pointing to t – so called backlinks to t – producing the maximum increase in the PageRank value of t. This problem is known as Link Building in the www context. We present a theorem describing how the topology of the graph comes in to play when evaluating potential new backlinks. Based on the theorem we show that no FPTAS exists for Link Building under the assumption NP≠P and we also show that Link Building is W[1]-hard.
In this paper, we consider an arbitrary class H{\cal H} of rooted graphs such that each biconnected component is given by a representation with reflectional symmetry, which allows a rooted graph to have several different representations, called embeddings. We give a general framework to design algorithms for enumerating embeddings of all graphs in H{\cal H} without repetition. The framework delivers an efficient enumeration algorithm for a class H{\cal H} if the class B{\cal B} of biconnected graphs used in the graphs in H{\cal H} admits an efficient enumeration algorithm. For example, for the class B{\cal B} of rooted cycles, we can easily design an algorithm of enumerating rooted cycles so that delivers the difference between two consecutive cycles in constant time in a series of all outputs. Hence our framework implies that, for the class H{\cal H} of all rooted cacti, there is an algorithm that enumerates each cactus in constant time.
In this paper, we consider variants of the traveling salesman problem with precedence constraints. We characterize hard input instances for Christofides’ algorithm and Hoogeveen’s algorithm by relating the two underlying problems, i. e., the traveling salesman problem and the problem of finding a minimum-weight Hamiltonian path between two prespecified vertices. We show that the sets of metric worst-case instances for both algorithms are disjoint in the following sense. There is an algorithm that, for any input instance, either finds a Hamiltonian tour that is significantly better than 1.5-approximative or a set of Hamiltonian paths between all pairs of endpoints, all of which are significantly better than 5/3-approximative. In the second part of the paper, we give improved algorithms for the ordered TSP, i. e., the TSP, where the precedence constraints are such that a given subset of vertices has to be visited in some prescribed linear order. For the metric case, we present an algorithm that guarantees an approximation ratio of 2.5 − 2/k, where k is the number of ordered vertices. For near-metric input instances satisfying a β-relaxed triangle inequality, we improve the best previously known ratio to kblog2 (3k-3)k\beta^{\log_2 (3k-3)}.
Inclusion/exclusion and measure and conquer are two of the most important recent new developments in the field of exact exponential time algorithms. Algorithms that combine both techniques have been found very recently, but thus far always use exponential space. In this paper, we try to obtain fast exponential time algorithms for graph domination problems using only polynomial space. Using a novel treewidth based annotation procedure to deal with sparse instances, we give an algorithm that counts the number of dominating sets of each size κ in a graph in O(1.5673n)\mathcal{O}(1.5673^n) time and polynomial space. We also give an algorithm for the domatic number problem running in OO(2.7139n)\mathcal{O}O(2.7139^n) time and polynomial space.
In this paper, we investigate the parameterized complexity of the problem of finding k edges (vertices) in a graph G to form a subgraph (respectively, induced subgraph) H such that H belongs to one the following four classes of graphs: even graphs, Eulerian graphs, odd graphs, and connected odd graphs. We also study the parameterized complexity of their parametric dual problems. Among these sixteen problems, we show that eight of them are fixed parameter tractable and four are W[1]-hard. Our main techniques are the color-coding method of Alon, Yuster and Zwick, and the random separation method of Cai, Chan and Chan.
Popular matchings have recently been a subject of study in the context of the so-called House Allocation Problem, where the objective is to match applicants to houses over which the applicants have preferences. A matching M is called popular if there is no other matching Mwith the property that more applicants prefer their allocation in Mto their allocation in M. In this paper we study popular matchings in the context of the Roommates Problem, including its special (bipartite) case, the Marriage Problem. We investigate the relationship between popularity and stability, and describe efficient algorithms to test a matching for popularity in these settings. We also show that, when ties are permitted in the preferences, it is NP-hard to determine whether a popular matching exists in both the Roommates and Marriage cases.
Consider the following coloring process in a simple directed graph G(V,E) with positive indegrees. Initially, a set S of vertices are white. Thereafter, a black vertex is colored white whenever the majority of its in-neighbors are white. The coloring process ends when no additional vertices can be colored white. If all vertices end up white, we call S an irreversible dynamic monopoly (or dynamo for short). We derive upper bounds of 0.7732|V| and 0.727|V| on the minimum sizes of irreversible dynamos depending on whether the majority is strict or simple. When G is an undirected connected graph without isolated vertices, upper bounds of ⌈|V|/2 ⌉ and ë|V|/2 û\lfloor |V|/2 \rfloor are given on the minimum sizes of irreversible dynamos depending on whether the majority is strict or simple. Let ε> 0 be any constant. We also show that, unless \textNP Í \textTIME(nO(lnlnn)),\text{NP}\subseteq \text{TIME}(n^{O(\ln \ln n)}), no polynomial-time, ((1/2 − ε)ln |V|)-approximation algorithms exist for finding a minimum irreversible dynamo.
Given an arbitrary graph G = (V,E) and an additional set of admissible edges F, the Chordal Sandwich problem asks whether there exists a chordal graph (V,E ( F 0 ) such that F 0 µ F. This problem arises from perfect phy- logeny in evolution and from sparse matrix computations in numerical analysis, and it generalizes the widely studied problems of completions and deletions of arbitrary graphs into chordal graphs. As many related problems, Chordal Sandwich is NP-complete. In this paper we show that the problem becomes tractable when parameterized with a suitable natural measure on the set of admissible edges F. In particular, we give an algorithm with running time O(2 k n 5 ) to solve this problem, where k is the size of a minimum vertex cover of the graph (V,F). Hence we show that the problem is fixed parameter tractable when parameterized by k. Note that the parameter does not assume any re- striction on the input graph, and it concerns only the additional edge set F.
Property testing is concerned with deciding whether an object (e.g. a graph or a function) has a certain property or is “far” (for a prespecified distance measure) from every object with that property. In this work we design and analyze an algorithm for testing functions for the property of being computable by a read-once width-2 Ordered Binary Decision Diagram (OBDD), also known as a branching program, where the order of the variables is not known to us. That is, we must accept a function f if there exists an order of the variables according to which a width-2 OBDD can compute f. The query complexity of our algorithm is [(O)\tilde](log n)poly(1/e)\tilde{O}({\rm log n}){\rm poly}(1/\epsilon). In previous work (in Proceedings of RANDOM, 2009) we designed an algorithm for testing computability by an OBDD with a fixed order, which is known to the algorithm. Thus, we extend our knowledge concerning testing of functions that are characterized by their computability using simple computation devices and in the process gain some insight concerning these devices.
We investigate the relationship between two kinds of vertex colorings of graphs: unique-maximum colorings and conflict-free colorings. In a unique-maximum coloring, the colors are ordered, and in every path of the graph the maximum color appears only once. In a conflict-free coloring, in every path of the graph there is a color that appears only once. We also study computational complexity aspects of conflict-free colorings and prove a completeness result. Finally, we improve lower bounds for those chromatic numbers of the grid graph.
We study a strategic game in which every node of a graph is owned by a player who has to choose a color. A player’s payoff is 0 if at least one neighbor selected the same color; otherwise, it is the number of players who selected the same color. The social cost of a state is defined as the number of distinct colors that the players use. It is ideally equal to the chromatic number of the graph, but it can substantially deviate because every player cares about his own payoff, however bad the social cost may be. Following previous work in [Panagopoulou and Spirakis 0817. Panagopoulou , [Panagopoulou and Spirakis 08] P. N. and Spirakis , P. G. 2008. “A Game Theoretic Approach for Efficient Graph Coloring.”. In Proc. of ISAAC 2008, 183–195. Springer. LNCS 5369 [CrossRef]View all references] on the Nash equilibria of the coloring game, we give worst-case bounds on the social cost of stable states. Our main contribution is an improved (tight) bound for the worst-case social cost of a Nash equilibrium, as well as the study of strong equilibria, their existence, and how far they are from social optima.
Various forms of multicut problems are of great importance in the area of network design. In general, these problems are intractable. However, several parameters have been identified which lead to fixed-parameter tractability (FPT). Recently, Gottlob and Lee have proposed the treewidth of the structure representing the graph and the set of pairs of terminal vertices as one such parameter. In this work, we show how this theoretical FPT result can be turned into efficient algorithms for optimization, counting, and enumeration problems in this area.
In this paper, we deal with several reoptimization variants of the Steiner tree problem in graphs obeying a sharpened β-triangle inequality. A reoptimization algorithm exploits the knowledge of an optimal solution to a problem instance for finding good solutions for a locally modified instance. We show that, in graphs satisfying a sharpened triangle inequality (and even in graphs where edge-costs are restricted to the values 1 and 1 + γ for an arbitrary small γ> 0), Steiner tree reoptimization still is NP-hard for several different types of local modifications, and even APX-hard for some of them. As for the upper bounds, for some local modifications, we design linear-time (1/2 + β)-approximation algorithms, and even polynomial-time approximation schemes, whereas for metric graphs (β= 1), none of these reoptimization variants is known to permit a PTAS. As a building block for some of these algorithms, we employ a 2β-approximation algorithm for the classical Steiner tree problem on such instances, which might be of independent interest since it improves over the previously best known ratio for any β
In this paper we consider a natural generalization of the well-known Max Leaf Spanning Tree problem. In the generalized Weighted Max Leaf problem we get as input an undirected connected graph G = (V,E), a rational number k ≥ 1 and a weight function w: V -> Q_{>=1} on the vertices, and are asked whether a spanning tree T for G exists such that the combined weight of the leaves of T is at least k. We show that it is possible to transform an instance (G,w,k) of Weighted Max Leaf in linear time into an equivalent instance (G′,w′,k′) such that |V′| ≤ 5.5k′ and k′ ≤ k. In the context of fixed parameter complexity this means that Weighted Max Leaf admits a kernel with 5.5k vertices. The analysis of the kernel size is based on a new extremal result which shows that every graph G that excludes some simple substructures always contains a spanning tree with at least |V|/5.5 leaves.
The linear arboricity la(G) of a graph G is the minimum number of linear forests (graphs where every connected component is a path) that partition the edges of G. In 1984, Akiyama et al. [1] stated the Linear Arboricity Conjecture (LAC), that the linear arboricity of any simple graph of maximum degree Δ is either \(\big \lceil \tfrac{\Delta}{2} \big \rceil\) or \(\big \lceil \tfrac{\Delta+1}{2} \big \rceil\). In [14,15] it was proven that LAC holds for all planar graphs. LAC implies that for Δ odd, \({\rm la}(G)=\big \lceil \tfrac{\Delta}{2} \big \rceil\). We conjecture that for planar graphs this equality is true also for any even Δ ≥ 6. In this paper we show that it is true for any Δ ≥ 10, leaving open only the cases Δ= 6, 8. We present also an O(nlogn) algorithm for partitioning a planar graph into max {la(G), 5} linear forests, which is optimal when Δ ≥ 9.
We study the sensor and movement capabilities that simple robots need in order to create a map of an unknown polygon of size n, and to meet. We consider robots that can move from vertex to vertex, can backtrack movements, and see distant vertices in counter-clockwise order but have no means of visibly identifying them. We show that such robots can always solve the weak rendezvous problem and reconstruct the visibility graph, given an upper bound on n. Our results are tight: The strong rendezvous problem, in which robots need to gather at a common location, cannot be solved in general, and without a bound on n, not even n can be determined. In terms of mobile agents exploring a graph, our result implies that they can reconstruct any graph that is the visibility graph of a simple polygon. This is in contrast to the known result that the reconstruction of arbitrary graphs is impossible in general, even if n is known.
The study of simple stochastic games (SSGs) was initiated by Condon for analyzing the computational power of randomized space-bounded alternating Turing machines. The game is played by two players, MAX and MIN, on a directed multigraph, and when the play terminates at a sink s, MAX wins from MIN a payoff p(s) ∈ [0,1]. Condon showed that the SSG value problem, which given a SSG asks whether the expected payoff won by MAX exceeds 1/2 when both players use their optimal strategies, is in NP ∩ coNP. However, the exact complexity of this problem remains open as it is not known whether the problem is in P or is hard for some natural complexity class. In this paper, we study the computational complexity of a strategy improvement algorithm by Hoffman and Karp for this problem. The Hoffman-Karp algorithm converges to optimal strategies of a given SSG, but no nontrivial bounds were previously known on its running time. We show a bound of O(2 n /n) on the convergence time of this algorithm, and a bound of O(20.78 n ) on a randomized variant. These are the first non-trivial upper bounds on the convergence time of these strategy improvement algorithms.
The problem of sharing the cost of a common infrastructure among a set of strategic and cooperating players has been the subject of intensive research in recent years. However, most of these studies consider cooperative cost sharing games in an offline setting, i.e., the mechanism knows all players and their respective input data in advance. In this paper, we consider cooperative cost sharing games in an online setting: Upon the arrival of a new player, the mechanism has to take instantaneous and irreversible decisions without any knowledge about players that arrive in the future. We propose an online model for general demand cost sharing games and give a complete characterization of both weakly group-strategyproof and group-strategyproof online cost sharing mechanisms for this model. Moreover, we present a simple method to derive incremental online cost sharing mechanisms from online algorithms such that the competitive ratio is preserved. Based on our general results, we develop online cost sharing mechanisms for several binary demand and general demand cost sharing games.
We study the complexity of local search in the max-cut problem with FLIP neighborhood, in which exactly one node changes the partition. We introduce a technique of constructing instances which enforce certain sequences of improving steps. Using our technique we can show that already graphs with maximum degree four satify the following two properties. 1 There are instances with initial solutions for which every local search takes exponential time to converge to a local optimum. 1 The problem of computing a local optimum reachable from a given solution by a sequence of improving steps is PSPACE-complete. Schäffer and Yannakakis (JOC ’91) showed via a so called “tight” PLS-reduction that the properties (1) and (2) hold for graphs with unbounded degree. Our improvement to the degree four is the best possible improvement since Poljak (JOC ’95) showed for cubic graphs that every sequence of improving steps has polynomial length, whereby his result is easily generalizable to arbitrary graphs with maximum degree three. In his paper Poljak also asked whether (1) holds for graphs with maximum degree four, which is settled by our result. Many tight PLS-reductions in the literature are based on the max-cut problem. Via some of them our constructions carry over to other problems and show that the properties (1) and (2) already hold for very restricted sets of feasible inputs of these problems. Since our paper provides the two results that typically come along with tight PLS-reductions it does naturally put the focus on the question whether it is even PLS-complete to compute a local optimum on graphs with maximum degree four – a question that was recently asked by Ackermann et al. We think that our insights might be helpful for tackling this question.
For a set S\mathcal{S} of graphs, a perfect S\mathcal{S}-packing (S\mathcal{S}-factor) of a graph G is a set of mutually vertex-disjoint subgraphs of G that each are isomorphic to a member of S\mathcal{S} and that together contain all vertices of G. If G allows a covering (locally bijective homomorphism) to a graph H, then G is an H-cover. For some fixed H let S(H)\mathcal{S}(H) consist of all H-covers. Let K k,ℓ be the complete bipartite graph with partition classes of size k and ℓ, respectively. For all fixed k,ℓ ≥ 1, we determine the computational complexity of the problem that tests if a given bipartite graph has a perfect S(Kk,l)\mathcal{S}(K_{k,\ell})-packing. Our technique is partially based on exploring a close relationship to pseudo-coverings. A pseudo-covering from a graph G to a graph H is a homomorphism from G to H that becomes a covering to H when restricted to a spanning subgraph of G. We settle the computational complexity of the problem that asks if a graph allows a pseudo-covering to K k,ℓ for all fixed k,ℓ ≥ 1.
In this paper we provide algorithms faster than O(2n ) for two variants of the Irredundant Set problem. More precisely, we give: a branch-and-reduce algorithm solving Largest Irredundant Set in \(\mathcal{O}(1.9657^{n})\) time and polynomial space; the time complexity can be reduced using memoization to \(\mathcal{O}(1.8475^{n})\) at the cost of using exponential space, and a simple iterative-DFS algorithm for Smallest Inclusion-Maximal Irredundant Set that solves it in \(\mathcal{O}(1.999956^{n})\) time and polynomial space. Inside the second algorithm time complexity analysis we use a structural approach which allows us to break the O(2n ) barrier. We find this structural approach more interesting than the algorithm itself. Despite the fact that the discussed problems are quite similar to the Dominating Set problem solving them faster than the obvious O(2n ) solution seemed harder; that is why they were posted as an open problems at the Dagstuhl seminar in 2008.
Given a binary dominance relation on a set of alternatives, a common thread in the social sciences is to identify subsets of alternatives that satisfy certain notions of stability. Examples can be found in areas as diverse as voting theory, game theory, and argumentation theory. Brandt and Fischer[1] proved that it is NP-hard to decide whether an alternative is contained in some inclusion-minimal unidirectional (i.e., either upward or downward) covering set. For both problems, we raise this lower bound to the Q2p\Theta_2^p level of the polynomial hierarchy and provide a S2p\Sigma_2^p upper bound. Relatedly, we show that a variety of other natural problems regarding minimal or minimum-size unidirectional covering sets are hard or complete for either of NP, coNP, andQ2p\Theta_2^p. An important consequence of our results is that neither minimal upward nor minimal downward covering sets (even when guaranteed to exist) can be computed in polynomial time unless P=NP. This sharply contrasts with Brandt and Fischer’s result that minimal bidirectional covering sets are polynomial-time computable.
The lower and the upper irredundance numbers of a graphG, denoted ir(G) and IR(G) respectively, are conceptually linked to domination and independence numbers and have numerous relations to other graph parameters. It is a long-standing open question whether determining these numbers for a graph G on n vertices admits exact algorithms running in time less than the trivial Ω(2 n ) enumeration barrier. We solve this open problem by devising parameterized algorithms for the duals of the natural parameterizations of the problems with running times faster than O*(4k)\mathcal{O}^*(4^{k}). For example, we present an algorithm running in time O*(3.069k))\mathcal{O}^*(3.069^{k})) for determining whether IR(G) is at least n − k. Although the corresponding problem has been shown to be in FPT by kernelization techniques, this paper offers the first parameterized algorithms with an exponential dependency on the parameter in the running time. Furthermore, these seem to be the first examples of a parameterized approach leading to a solution to a problem in exponential time algorithmics where the natural interpretation as exact exponential-time algorithms fails.
It is shown how to compute the lexicographically maximum suffix of a string of n≥2 characters over a totally ordered alphabet using at most (4/3)n−5/3 three-way character comparisons. The best previous bound, which has stood unchallenged for more than 25 years, is (3/2)n−O(1) comparisons. We also prove an interesting property of an algorithm for computing the maximum suffix both with respect to a total order < and with respect to its inverse order >.
A weighted sequence is a string in which a set of characters may appear at each position with respective probabilities of occurrence. A common task is to locate a given motif in a weighted sequence in exact, approximate or bounded gap form, with presence probability not less than a given threshold. The motif could be a normal non-weighted string or even a string with don’t care symbols. We give an algorithmic framework that is capable of tackling above motif discovery problems. Utilizing the notion of maximal factors, the framework provides an approach for reducing each problem to equivalent problem in non-weighted strings without any time degradation.
A flow on a directed network is said to be confluent if the flow uses at most one outgoing arc at each node. Confluent flows arise naturally from destination-based routing. We study the Maximum Confluent Flow Problem (MaxConf) with a single commodity but multiple sources and sinks. Unlike previous results, we consider heterogeneous arc capacities. The supplies and demands of the sources and sinks can also be bounded. We give a pseudo-polynomial time algorithm and an FPTAS for graphs with constant treewidth. Somewhat surprisingly, MaxConf is NP-hard even on trees, so these algorithms are, in a sense, best possible. We also show that it is NP-complete to approximate MaxConf better than 3/2 on general graphs.
During the last years, preprocessing-based techniques have been developed to compute shortest paths between two given points in a road network. These speed-up techniques make the computation a matter of microseconds even on huge networks. While there is a vast amount of experimental work in the field, there is still large demand on theoretical foundations. The preprocessing phases of most speed-up techniques leave open some degree of freedom which, in practice, is filled in a heuristical fashion. Thus, for a given speed-up technique, the problem arises of how to fill the according degree of freedom optimally. Until now, the complexity status of these problems was unknown. In this work, we answer this question by showing NP-hardness for the recent techniques.
We study the stable marriage problem in a distributed environment, in which there are 2n players, n men and n women, each holding a private ranking of the n persons of the opposite set, and there is a server who communicates with the players and finds a matching for them. We restrict our attention on two communication models: the sketch model and the query model. In the sketch model, each player compresses his/her ranking into a sketch and sends it to the server, while in the query model, the server itself adaptively queries individual bits on each player’s ranking. We show that for the server to output even a slightly stable matching, in which a small constant fraction of matched pairs are stable, it must receive Ω(n 2logn) bits from the players in the sketch model, and it must query Ω(n 2logn) bits on their rankings in the query model. This implies that even to find a slightly stable matching, it is impossible to have an algorithm which compresses the input into a sketch of sub-linear size or to have an algorithm which runs in sub-linear time.
Article
Full-text available
The upcoming fifth generation (5G) wireless networks making use of higher-frequency spectrum bands suffer from serious propagation issues due to high path loss and beam directivity requirements. This promotes the device-to-device communications to boost the transmission reliability at the network edges, providing remarkable benefits in terms of the energy and spectrum efficiency, essential for a wide class of sensors networks and Internet-of-Things. More in general, applications where devices are usually constrained in computational and transmission range capabilities. In such a context, the selection of the proper number of devices arranged as a relay plays a crucial role. Towards this goal, this paper proposes an efficient relay selection scheme minimizing both the delivery transmission delay and the overall energy consumption, i.e., the overall number of relays to be used. By focusing on a multicast content delivery application scenario the problem of interest is formulated as a one-sided preferences matching game. In addition, the strategy designed takes into account specific information, named reputation coefficient, associated to each device jointly with link propagation conditions for allowing the selection of suitable relays for disseminating the content among the devices. The effectiveness of the proposed solution is underpinned by computer simulations, and the performance is evaluated in terms of power consumption, end-to-end delay, and number of selected relays. As confirmed by results, the proposed approach improves network performance compared to the greedy approach, the random algorithm, a scheme previously proposed in literature, and with two game theory-based strategies.
Article
The adiabatic quantum computation (AQC) is based on the adiabatic theorem to approximate solutions of the Schrodinger equation. The design of an AQC algorithm involves the construction of a Hamiltonian that describes the behavior of the quantum system. This Hamiltonian is expressed as a linear interpolation of an initial Hamiltonian whose ground state is easy to compute, and a final Hamiltonian whose ground state corresponds to the solution of a given combinatorial optimization problem. The adiabatic theorem asserts that if the time evolution of a quantum system described by a Hamiltonian is large enough, then the system remains close to its ground state. An AQC algorithm uses the adiabatic theorem to approximate the ground state of the final Hamiltonian that corresponds to the solution of the given optimization problem. In this book, we investigate the computational simulation of AQC algorithms applied to the MAX-SAT problem. A symbolic analysis of the AQC solution is given in order to understand the involved computational complexity of AQC algorithms. This approach can be extended to other combinatorial optimization problems and can be used for the classical simulation of an AQC algorithm where a Hamiltonian problem is constructed. This construction requires the computation of a sparse matrix of dimension 2n 2n, by means of tensor products, where n is the dimension of the quantum system. Also, a general scheme to design AQC algorithms is proposed, based on a natural correspondence between optimization Boolean variables and quantum bits. Combinatorial graph problems are in correspondence with pseudo-Boolean maps that are reduced in polynomial time to quadratic maps. Finally, the relation among NP-hard problems is investigated, as well as its logical representability, and is applied to the design of AQC algorithms. It is shown that every monadic second-order logic (MSOL) expression has associated pseudo-Boolean maps that can be obtained by expanding the given expression, and also can be reduced to quadratic forms.
Article
Full-text available
As cyberspace becomes an integral part of our daily life, its mastering becomes harder. To help, cyberspace can be represented by resources arranged in a multid i- mensional space. With geographical maps to exhibit the topology of this virtual space, people can have a better visual understanding. In this paper, methods focu s- ing on the construction of lower dimension representations of this space are exa m- ined and illustrated with the World-Wide Web. It is expected that this work will contribute to addressing issues of navigation in cyberspace and, especially, avoiding the lost-in-cyberspace syndrome.
Article
Full-text available
The transfer-matrix technique is a convenient way for studying strip lattices in the Potts model since the compu- tational costs depend just on the periodic part of the lattice and not on the whole. However, even when the cost is reduced, the transfer-matrix technique is still an NP-hard problem since the time T(|V|, |E|) needed to compute the matrix grows ex- ponentially as a function of the graph width. In this work, we present a parallel transfer-matrix implementation that scales performance under multi-core architectures. The construction of the matrix is based on several repetitions of the deletion- contraction technique, allowing parallelism suitable to multi-core machines. Our experimental results show that the multi-core implementation achieves speedups of 3.7X with p = 4 processors and 5.7X with p = 8. The efficiency of the implementation lies between 60% and 95%, achieving the best balance of speedup and efficiency at p = 4 processors for actual multi-core architectures. The algorithm also takes advantage of the lattice symmetry, making the transfer matrix computation to run up to 2X faster than its non-symmetric counterpart and use up to a quarter of the original space.
Conference Paper
The deletion-contraction algorithm is perhaps the most popular method for computing a host of fundamental graph invariants such as the chromatic, flow, and reliability polynomials in graph theory, the Jones polynomial of an alternating link in knot theory, and the partition functions of the models of Ising, Potts, and Fortuin-Kasteleyn in statistical physics. Prior to this work, deletion-contraction was also the fastest known general-purpose algorithm for these invariants, running in time roughly proportional to the number of spanning trees in the input graph.Here, we give a substantially faster algorithm that computes the Tutte polynomial-and hence, all the aforementioned invariants and more-of an arbitrary graph in time within a polynomial factor of the number of connected vertex sets. The algorithm actually evaluates a multivariate generalization of the Tutte polynomial by making use of an identity due to Fortuin and Kasteleyn. We also provide a polynomial-space variant of the algorithm and give an analogous result for Chung and Graham's cover polynomial.
Conference Paper
We study notions of equivalence and refinement for probabilistic programs formalized in the second-order fragment of Probabilistic Idealized Algol. Probabilistic programs implement randomized algorithms: a given input yields a probability distribution on the set of possible outputs. Intuitively, two programs are equivalent if they give rise to identical distributions for all inputs. We show that equivalence is decidable by studying the fully abstract game semantics of probabilistic programs and relating it to probabilistic finite automata. For terms in β-normal form our decision procedure runs in time exponential in the syntactic size of programs; it is moreover fully compositional in that it can handle open programs (probabilistic modules with unspecified components). In contrast, we show that the natural notion of program refinement, in which the input-output distributions of one program uniformly dominate those of the other program, is undecidable.
Article
Full-text available
The massive increase in computation power over the last few decades has substantially enhanced our ability to solve complex problems with their performance evaluations in diverse areas of science and engineering. With the recent developments in the field of optimizations, these methods are now become lucrative to make decisions. Dynamic Programming is one of the elegant algorithm design standards and is powerful tool which yields classic algorithms for a variety of combinatorial optimization problems. In this paper fundamental working principles, major area of applications of this approach has been introduced. The strengths which make it more prevailing than the others is also opened up. Focusing the imperative drawbacks afterward comparison study of this algorithm design technique in this paper brings a general awareness to the implementation strategies.
Article
We survey the applications of an elementary identity used by Euler in one of his proofs of the Pentagonal Number Theorem. Using a suitably reformulated version of this identity that we call Euler's Telescoping Lemma, we give alternate proofs of all the key summation theorems for terminating Hypergeometric Series and Basic Hypergeometric Series, including the terminating Binomial Theorem, the Chu-Vandermonde sum, the Pfaff-Saalschütz sum, and their q-analogues. We also give a proof of Jackson's q-analog of Dougall's sum, the sum of a terminating, balanced, very-well-poised ₈ϕ₇ series. Our proofs are conceptually the same as those obtained by the WZ method, but done without using a computer. We survey identities for Generalized Hypergeometric Series given by Macdonald, and prove several identities for q-analogs of Fibonacci numbers and polynomials and Pell numbers that have appeared in combinatorial contexts. Some of these identities appear to be new.
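For orientation, the elementary telescoping principle and the Chu-Vandermonde sum mentioned above can be written as follows; the precise formulation of the Telescoping Lemma used in the paper may differ:

\sum_{k=1}^{n} \left( t_k - t_{k+1} \right) \;=\; t_1 - t_{n+1},
\qquad
\sum_{k=0}^{n} \binom{m}{k} \binom{p}{n-k} \;=\; \binom{m+p}{n}.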
Article
A computable economist's view of the world of computational complexity theory is described. This means the model of computation underpinning theories of computational complexity plays a central role. The emergence of computational complexity theories from diverse traditions is emphasised. The unifications that emerged in the modern era were codified by means of the notions of efficiency of computations, non-deterministic computations, completeness, reducibility and verifiability; all three of the latter concepts had their origins in what may be called 'Post's Program for Research for Higher Recursion Theory'. The recent real model of computation as a basis for studying computational complexity in the domain of the reals is also presented and discussed, albeit critically. A brief sceptical section on algorithmic complexity theory is included in an appendix.
Article
One of the most important problems in large communication networks like the Internet is the problem of routing traffic through the network. Current Internet technology based on the TCP protocol does not route traffic adaptively to the traffic pattern but uses fixed end-to-end routes and adjusts only the injection rates in order to avoid congestion. A more flexible approach uses load-adaptive rerouting policies that reconsider their routing strategies from time to time depending on the observed latencies. In this manuscript, we survey recent results from [1, 2] about the application of methods from evolutionary game theory to such adaptive traffic management.
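A toy discretisation of such a load-adaptive rerouting policy on two parallel links may help fix ideas. The update rule below is a simple replicator-style heuristic of our own (the function replicator_step and the step parameter are illustrative) and not the exact dynamics analysed in [1, 2].

def replicator_step(x, latencies, step=0.1):
    # One discrete step of a replicator-style rerouting policy on parallel links.
    # x[i] is the flow fraction on link i; latencies[i] maps a fraction to a latency.
    # Flow drifts towards links whose latency is below the current average latency.
    ell = [f(xi) for f, xi in zip(latencies, x)]
    avg = sum(xi * li for xi, li in zip(x, ell))
    scale = max(ell) or 1.0
    new_x = [xi * (1 + step * (avg - li) / scale) for xi, li in zip(x, ell)]
    total = sum(new_x)
    return [xi / total for xi in new_x]

# Pigou-style example: link 0 has latency equal to its load, link 1 has constant latency 1.
latencies = [lambda load: load, lambda load: 1.0]
x = [0.5, 0.5]
for _ in range(200):
    x = replicator_step(x, latencies)
print(x)   # drifts towards the Wardrop equilibrium [1.0, 0.0]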
Article
We present a probabilistic analysis of integer linear programs (ILPs). More specifically, we study ILPs in a so-called smoothed analysis in which it is assumed that first an adversary specifies the coefficients of an integer program and then (some of) these coefficients are randomly perturbed, e.g., using a Gaussian or a uniform distribution with small standard deviation. In this probabilistic model, we investigate structural properties of ILPs and apply them to the analysis of algorithms. For example, we prove a lower bound on the slack of the optimal solution. As a result of our analysis, we are able to specify the smoothed complexity of classes of ILPs in terms of their worst case complexity. This way, we obtain polynomial smoothed complexity for packing and covering problems with any fixed number of constraints. Previous results of this kind were restricted to the case of binary programs.
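The perturbation model can be illustrated in a few lines. The sketch below merely adds independent Gaussian noise to adversarially chosen coefficients; which coefficients are perturbed, and how, is as specified in the paper, and the helper smooth_coefficients is ours.

import random

def smooth_coefficients(coefficients, sigma=0.01):
    # Smoothed-analysis style perturbation (illustration only): each adversarially
    # chosen coefficient is perturbed by independent Gaussian noise with small
    # standard deviation sigma before the instance reaches the algorithm.
    return [c + random.gauss(0.0, sigma) for c in coefficients]

# An adversarial packing constraint a^T x <= b is handed over slightly perturbed.
a = [1.0, 2.0, 3.0]
print(smooth_coefficients(a))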
Conference Paper
We present a probabilistic analysis of integer linear programs (ILPs). More specifically, we study ILPs in a so-called smoothed analysis in which it is assumed that first an adversary specifies the coefficients of an integer program and then (some of) these coefficients are randomly perturbed, e.g., using a Gaussian or a uniform distribution with small standard deviation. In this probabilistic model, we investigate structural properties of ILPs and apply them to the analysis of algorithms. For example, we prove a lower bound on the slack of the optimal solution. As a result of our analysis, we are able to specify the smoothed complexity of classes of ILPs in terms of their worst case complexity. For example, we obtain polynomial smoothed complexity for packing and covering problems with any fixed number of constraints. Previous results of this kind were restricted to the case of binary programs.
Conference Paper
We consider bicriteria optimization problems and investigate the relationship between two standard approaches to solving them: (i) computing the Pareto curve and (ii) the so-called decision maker’s approach in which both criteria are combined into a single (usually non-linear) objective function. Previous work by Papadimitriou and Yannakakis showed how to efficiently approximate the Pareto curve for problems like Shortest Path, Spanning Tree, and Perfect Matching. We wish to determine for which classes of combined objective functions the approximate Pareto curve also yields an approximate solution to the decision maker’s problem. We show that an FPTAS for the Pareto curve also gives an FPTAS for the decision maker’s problem if the combined objective function is growth bounded like a quasi-polynomial function. If these functions, however, show exponential growth then the decision maker’s problem is NP-hard to approximate within any factor. In order to bypass these limitations of approximate decision making, we turn our attention to Pareto curves in the probabilistic framework of smoothed analysis. We show that in a smoothed model, we can efficiently generate the (complete and exact) Pareto curve with a small failure probability if there exists an algorithm for generating the Pareto curve whose worst case running time is pseudopolynomial. This way, we can solve the decision maker’s problem w.r.t. any non-decreasing objective function for randomly perturbed instances of, e.g., Shortest Path, Spanning Tree, and Perfect Matching.
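The decision maker's approach on top of an (approximate) Pareto curve amounts to minimising the combined objective over the points of the curve, as in the following illustrative sketch; the function names and the sample data are ours, and the quality of the result depends on the growth condition discussed above.

def decide_from_pareto(pareto_points, combine):
    # Decision maker's approach on top of a (possibly approximate) Pareto set:
    # return the point minimising the combined objective combine(cost1, cost2).
    return min(pareto_points, key=lambda point: combine(*point))

pareto = [(1, 9), (2, 5), (4, 4), (7, 2), (10, 1)]
print(decide_from_pareto(pareto, lambda a, b: a + 2 * b))   # (7, 2)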
Conference Paper
We study an intensively studied resource allocation game introduced by Koutsoupias and Papadimitriou where n weighted jobs are allocated to m identical machines. It was conjectured by Gairing et al. that the fully mixed Nash equilibrium is the worst Nash equilibrium for this game w. r. t. the expected maximum load over all machines. The known algorithms for approximating the so-called “price of anarchy” rely on this conjecture. We present a counter-example to the conjecture showing that fully mixed equilibria cannot be used to approximate the price of anarchy within reasonable factors. In addition, we present an algorithm that constructs so-called concentrated equilibria that approximate the worst-case Nash equilibrium within constant factors.
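The social cost measure in question, the expected maximum load of a mixed assignment, can be computed exactly for tiny instances by enumeration, as in the sketch below; this only illustrates the measure, not the counterexample construction or the concentrated equilibria of the paper.

from itertools import product

def expected_max_load(weights, probs):
    # Expected maximum machine load under a product (mixed) assignment: job j is
    # placed on machine i with probability probs[j][i], independently of the others.
    # Exhaustive enumeration, so only sensible for tiny instances.
    m = len(probs[0])
    total = 0.0
    for assignment in product(range(m), repeat=len(weights)):
        p = 1.0
        loads = [0.0] * m
        for j, i in enumerate(assignment):
            p *= probs[j][i]
            loads[i] += weights[j]
        total += p * max(loads)
    return total

# Fully mixed profile for two jobs of weight 1 on two identical machines.
print(expected_max_load([1.0, 1.0], [[0.5, 0.5], [0.5, 0.5]]))   # 1.5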
Conference Paper
A sequence of objects which are characterized by their color has to be processed. Their processing order influences how efficiently they can be processed: each color change between two consecutive objects produces non-uniform cost. A reordering buffer, which is a random access buffer with storage capacity for k objects, can be used to rearrange this sequence in such a way that the total cost is minimized. This concept is useful for many applications in computer science and economics. We show that a reordering buffer reduces the cost of each sequence by a factor of at most 2k-1. This result even holds for cost functions modeled by arbitrary metric spaces. In addition, a matching lower bound is presented. From this bound it follows that each strategy that does not increase the cost of a sequence is at least (2k-1)-competitive. As our main result, we present the deterministic Maximum Adjusted Penalty (MAP) strategy, which is O(log k)-competitive. Previous strategies only achieve a competitive ratio of k in the non-uniform model. For the upper bound on MAP, we introduce a basic proof technique. We believe that this technique can be interesting for other problems.
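The model can be illustrated by a small simulation. The sketch below uses a naive "most frequent colour first" eviction rule of our own; it is not the MAP strategy from the paper and carries no competitiveness guarantee, and the function reorder_cost is purely illustrative.

from collections import Counter

def reorder_cost(sequence, k, cost):
    # Simulate a reordering buffer of size k with a simple greedy eviction rule:
    # repeatedly output all buffered items of the currently most frequent colour.
    # cost(a, b) is the (possibly non-uniform) cost of switching from colour a to b.
    buffer, total, current = [], 0, None
    stream = iter(sequence)
    while True:
        for item in stream:           # refill the buffer up to capacity k
            buffer.append(item)
            if len(buffer) == k:
                break
        if not buffer:
            break
        colour = Counter(buffer).most_common(1)[0][0]
        if current is not None and current != colour:
            total += cost(current, colour)
        current = colour
        buffer = [x for x in buffer if x != colour]
    return total

seq = list("aabbaabbcc")
print(reorder_cost(seq, 3, lambda a, b: 1))   # 4 colour switches under this rule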
Article
In the resource allocation game introduced by Koutsoupias and Papadimitriou, n jobs of different weights are assigned to m identical machines by selfish agents. For this game, it has been conjectured by several authors that the fully mixed Nash equilibrium (FMNE) is the worst possible w.r.t. the expected maximum load over all machines. Assuming the validity of this conjecture, computing a worst-case Nash equilibrium for a given instance would be trivial, and approximating the Price of Anarchy for this instance would be possible by approximating the expected social cost of the FMNE by applying a known FPRAS. We present a counter-example to this conjecture showing that fully mixed Nash equilibria cannot be used to approximate the Price of Anarchy. We show that the factor between the social cost of the worst Nash equilibrium and the social cost of the FMNE can be as large as the Price of Anarchy itself, up to a constant factor. In addition, we present an algorithm that constructs so-called concentrated equilibria that approximate the worst-case Nash equilibria within constant factors.
Article
We investigate the behaviour of load-adaptive rerouting policies in the Wardrop model where decisions must be made on the basis of stale information. In this model, each one of an infinite number of agents controls an infinitesimal amount of flow, thus contributing to a network flow which induces latency. In our dynamic extension of this model, agents are activated in a concurrent and asynchronous fashion and may reroute their flow with the aim of reducing their sustained latency. It is a well-known problem that in settings where latency information is not always up-to-date, such behaviour may lead to oscillation effects which seriously harm network performance. Two quantities determine the difficulty of avoiding oscillation: the steepness of the latency functions and the maximum possible age of the information, T. In this work we ask for conditions that the rerouting policies must adhere to in order to converge to an equilibrium despite the information being stale. We consider simple policies which sample another path in a first step and then migrate from the current path to the new one with a probability that is a function of the anticipated latency gain. In fact we can show that our class of policies guarantees convergence if the latter migration probability function satisfies a certain smoothness condition that resembles Lipschitz continuity. It turns out that for smooth adaptation policies where the migration probability is chosen small enough relative to the inverse of the steepness of the latency functions and T, the population actually converges to an equilibrium. In addition, we analyse the speed of convergence towards approximate equilibria of two specific variants of smooth adaptive routing policies, e.g., for a replication policy adopted from evolutionary game theory.
Article
The investigation of genetic and evolutionary algorithms on Ising model problems gives much insight into how these algorithms work as adaptation schemes. The one-dimensional Ising model with periodic boundary conditions has been considered as a typical example with a clear building block structure suited well for two-point crossover. It has been claimed that GAs based on recombination and appropriate diversity-preserving methods by far outperform EAs based on mutation only. Here, a rigorous analysis of the expected optimization time proves that mutation-based EAs are surprisingly effective. The (1+λ) EA with an appropriate λ-value is almost as efficient as typical GAs. Moreover, it is proved that specialized GAs do even better and this holds for two-point crossover as well as for one-point crossover.
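A minimal implementation of the (1+λ) EA on the one-dimensional Ising ring (fitness = number of equal neighbouring pairs under periodic boundary conditions) looks as follows; the parameter choices and the iteration cap are illustrative, not taken from the paper.

import random

def ising_fitness(bits):
    # Number of equal neighbouring pairs on a cycle (1D Ising model with periodic
    # boundary conditions); maximal, namely len(bits), iff all bits agree.
    n = len(bits)
    return sum(bits[i] == bits[(i + 1) % n] for i in range(n))

def one_plus_lambda_ea(n=30, lam=4, max_generations=100000, seed=0):
    # (1+lambda) EA with standard bit mutation: each offspring flips every bit
    # independently with probability 1/n; the best offspring replaces the parent
    # if it is at least as fit (accepting ties lets the search cross plateaus).
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    generations = 0
    while ising_fitness(x) < n and generations < max_generations:
        generations += 1
        offspring = [[b ^ (rng.random() < 1.0 / n) for b in x] for _ in range(lam)]
        best = max(offspring, key=ising_fitness)
        if ising_fitness(best) >= ising_fitness(x):
            x = best
    return generations

print(one_plus_lambda_ea())   # generations until all spins agree (or the cap is hit)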
Conference Paper
This paper deals with the design of efficiently computable incentive-compatible mechanisms for combinatorial optimization problems with single-minded agents each possibly having multiple private parameters. We focus on approximation algorithms for NP-hard mechanism design problems. These algorithms need to satisfy certain monotonicity properties to ensure truthfulness. Since most of the known approximation techniques do not fulfill these properties, we study alternative techniques. Our first contribution is a quite general method to transform a pseudopolynomial algorithm into a monotone fully polynomial time approximation scheme (FPTAS). This can be applied to various problems like, e.g., knapsack, constrained shortest path, or job scheduling with deadlines. For example, the monotone FPTAS for the knapsack problem gives a very efficient, truthful mechanism for single-minded multi-unit auctions. The best previous result for such auctions was a 2-approximation. In addition, we present a monotone PTAS for the generalized assignment problem with any constant number of private parameters per agent. The most efficient way to solve packing integer programs (PIPs) is linear programming-based randomized rounding, which is also in general not monotone. We show that primal-dual greedy algorithms achieve almost the same approximation ratios for PIPs as randomized rounding. The advantage is that these algorithms are inherently monotone. This way, we can significantly improve the approximation ratios of truthful mechanisms for various fundamental mechanism design problems like single-minded combinatorial auctions (CAs), unsplittable flow routing, and multicast routing. Our primal-dual approximation algorithms can also be used for the winner determination in CAs with general bidders specifying their bids through an oracle.
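The pseudopolynomial-to-FPTAS step can be illustrated with the standard profit-scaling FPTAS for knapsack, sketched below; the monotonicity adjustments that turn such a scheme into a truthful mechanism (the paper's actual contribution) are not reproduced here, and the function knapsack_fptas is ours.

def knapsack_fptas(values, weights, capacity, eps):
    # Standard FPTAS for 0/1 knapsack by profit scaling: round profits down to
    # multiples of K = eps * max(values) / n and run an exact dynamic program
    # over the rounded profits; the result is a (1 - eps)-approximation.
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v // K) for v in values]
    max_p = sum(scaled)
    INF = float("inf")
    # dp[p] = minimum weight needed to reach rounded profit exactly p.
    dp = [0] + [INF] * max_p
    for v, w in zip(scaled, weights):
        for p in range(max_p, v - 1, -1):
            if dp[p - v] + w < dp[p]:
                dp[p] = dp[p - v] + w
    best = max(p for p in range(max_p + 1) if dp[p] <= capacity)
    return best * K   # lower bound on the value of the selected item set

print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, eps=0.1))   # 220.0 on this toy instance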