ACM SIGACT News

Published by Association for Computing Machinery
Print ISSN: 0163-5700
Publications
Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how under plausible assumptions deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice.
 
How can complexity theory and algorithms benefit from practical advances in computing? We give a short overview of some prior work using practical computing to attack problems in computational complexity and algorithms, informally describe how linear program solvers may be used to help prove new lower bounds for satisfiability, and suggest a research program for developing new understanding in circuit complexity.
 
We draw two incomplete, biased maps of challenges in computational complexity lower bounds.
 
A decade ago, a beautiful paper by Wagner developed a ``toolkit'' that in certain cases allows one to prove problems hard for parallel access to NP. However, the problems his toolkit applies to most directly are not overly natural. During the past year, problems that previously were known only to be NP-hard or coNP-hard have been shown to be hard even for the class of sets solvable via parallel access to NP. Many of these problems are longstanding and extremely natural, such as the Minimum Equivalent Expression problem (which was the original motivation for creating the polynomial hierarchy), the problem of determining the winner in the election system introduced by Lewis Carroll in 1876, and the problem of determining on which inputs heuristic algorithms perform well. In the present article, we survey this recent progress in raising lower bounds.
 
We discuss the history and uses of the parallel census technique---an elegant tool in the study of certain computational objects having polynomially bounded census functions. A sequel will discuss advances (including Cai, Naik, and Sivakumar [CNS95] and Glasser [Gla00]), some related to the parallel census technique and some due to other approaches, in the complexity-class collapses that follow if NP has sparse hard sets under reductions weaker than (full) truth-table reductions.
 
This issue's column, Part II of the article started in the preceding issue, is about progress on the question of whether NP has sparse hard sets with respect to weak reductions. Upcoming Complexity Theory Column articles include A. Werschulz on information-based complexity; J. Castro, R. Gavaldà, and D. Guijarro on what complexity theorists can learn from learning theory; S. Ravi Kumar and D. Sivakumar on a to-be-announced topic; M. Holzer and P. McKenzie on alternating stack machines; and R. Paturi on the complexity of k-SAT.
 
We give a quantum algorithm that finds collisions in arbitrary r-to-one functions after only O(∛(N/r)) expected evaluations of the function, where N is the cardinality of the domain. Assuming the function is given by a black box, this is more efficient than the best possible classical algorithm, even allowing probabilism. We also give a similar algorithm for finding claws in pairs of functions. Further, we exhibit a space-time tradeoff for our technique. Our approach uses Grover's quantum searching algorithm in a novel way.
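To make the black-box collision problem concrete, the following is a minimal classical baseline (random birthday sampling), not the quantum algorithm: for an r-to-one function on a domain of size N it needs on the order of sqrt(N/r) evaluations in expectation, which is what the O(∛(N/r)) quantum bound improves upon. The toy function and domain below are illustrative only.

    import random

    def classical_collision(f, domain):
        """Classical birthday-sampling baseline: evaluate f on random inputs
        until two distinct inputs map to the same value."""
        seen = {}                      # value -> one preimage observed so far
        evaluations = 0
        while True:
            x = random.choice(domain)
            y = f(x)
            evaluations += 1
            if y in seen and seen[y] != x:
                return (seen[y], x), evaluations
            seen[y] = x

    # toy 2-to-one function on {0, ..., 99}: f(x) = x // 2
    pair, cost = classical_collision(lambda x: x // 2, list(range(100)))
    print(pair, cost)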
 
The continuously increasing amount of digital data generated by today's society calls for better storage solutions. This survey looks at a new generation of coding techniques designed specifically for the needs of distributed networked storage systems, trying to reach the best compromise among storage space efficiency, fault tolerance, and maintenance overheads. Four families of codes tailor-made for distributed settings, namely pyramid, hierarchical, regenerating, and self-repairing codes, are presented at a high level, emphasizing the main ideas behind each of these codes and discussing their pros and cons, before concluding with a quantitative comparison among them. This survey deliberately excludes technical details of the codes and does not provide an exhaustive summary of the numerous works in the area. Instead, it provides an overview of the major code families in a manner easily accessible to a broad audience, by presenting the big picture of advances in coding techniques for distributed storage solutions.
 
The well-studied local postage stamp problem (LPSP) is the following: given a positive integer k, a set of positive integers 1 = a1 < a2 < ... < ak, and an integer h >= 1, what is the smallest positive integer which cannot be represented as a linear combination x1 a1 + ... + xk ak where x1 + ... + xk <= h and each xi is a non-negative integer? In this note we prove that LPSP is NP-hard under Turing reductions, but can be solved in polynomial time if k is fixed.
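For illustration, here is a brute-force dynamic program for LPSP over the values themselves; it is pseudo-polynomial (exponential in the bit-length of the input), so it is not the fixed-k polynomial-time algorithm the note refers to, and the example instance is hypothetical.

    def lpsp(stamps, h):
        """Smallest positive integer not expressible as x1*a1 + ... + xk*ak
        with x1 + ... + xk <= h and each xi >= 0.  Assumes 1 is a stamp."""
        limit = h * max(stamps) + 1     # h*max(stamps) is the largest reachable sum
        need = [h + 1] * (limit + 1)    # need[v] = fewest stamps summing to v
        need[0] = 0
        for v in range(1, limit + 1):
            for a in stamps:
                if a <= v:
                    need[v] = min(need[v], need[v - a] + 1)
        return next(v for v in range(1, limit + 1) if need[v] > h)

    print(lpsp([1, 4, 9], 3))   # 7: no combination of at most three stamps sums to 7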
 
Information complexity is the interactive analogue of Shannon's classical information theory. In recent years this field has emerged as a powerful tool for proving strong communication lower bounds, and for addressing some of the major open problems in communication complexity and circuit complexity. A notable achievement of information complexity is the breakthrough in understanding of the fundamental direct sum and direct product conjectures, which aim to quantify the power of parallel computation. This survey provides a brief introduction to information complexity, and overviews some of the recent progress on these conjectures and their tight relationship with the fascinating problem of compressing interactive protocols.
 
We discuss the use of projects in first-year graduate complexity theory courses.
 
The study of semifeasible algorithms was initiated by Selman's work a quarter of a century ago [Sel79,Sel81,Sel82]. Informally put, this research stream studies the power of those sets L for which there is a deterministic polynomial-time function f (or, in some cases, a function belonging to one of various nondeterministic function classes) such that when at least one of x and y belongs to L, then f(x,y) ∈ L ∩ {x,y}. The intuition here is that it is saying: ``Regarding membership in L, if you put a gun to my head and forced me to bet on one of x or y as belonging to L, my money would be on f(x,y).'' In this article, we present a number of open problems from the theory of semifeasible algorithms. For each we present its background and review what partial results, if any, are known.
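As a concrete, textbook-style instance of such a selector function (illustrative only, not taken from the article): if L is a "left cut" {s : s <= r} for some fixed threshold string r, then always betting on the smaller of the two inputs is a correct polynomial-time selector.

    # Minimal selector sketch, assuming L is downward closed under the same
    # ordering that min uses (a "left cut").  Standard example, for illustration.
    def selector(x: str, y: str) -> str:
        """If at least one of x, y lies in L = {s : s <= r}, then so does
        min(x, y), so f(x, y) = min(x, y) is always a safe bet."""
        return min(x, y)

    print(selector("0110", "1010"))   # "0110" is the safer bet for a left cut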
 
This short note reports a master theorem on tight asymptotic solutions to divide-and-conquer recurrences with more than one recursive term: for example, T(n) = 1/4 T(n/16) + 1/3 T(3n/5) + 4 T(n/100) + 10 T(n/300) + n^2.
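As a sanity check on the example recurrence (using the Akra-Bazzi method, a standard tool for multi-term divide-and-conquer recurrences; the note's own theorem may be stated differently), one can numerically solve sum_i a_i * b_i^p = 1 for p and compare p with the exponent of the driving term n^2:

    # T(n) = (1/4) T(n/16) + (1/3) T(3n/5) + 4 T(n/100) + 10 T(n/300) + n^2
    terms = [(1/4, 1/16), (1/3, 3/5), (4, 1/100), (10, 1/300)]   # (a_i, b_i) pairs

    def g(p):
        return sum(a * b**p for a, b in terms) - 1.0

    lo, hi = 0.0, 1.0                 # g is decreasing, with g(0) > 0 > g(1)
    for _ in range(60):               # bisection for the characteristic exponent
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    print(f"characteristic exponent p ~= {(lo + hi) / 2:.3f}")
    # p is about 0.57 < 2, so the driving term dominates and T(n) = Theta(n^2).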
 
This paper describes the complete problems that motivate interest in these classes, discusses some surprising recent discoveries, and points out open problems where progress can reasonably be expected.
 
Introduction The intellectual foundations of computer science, like those of other engineering disciplines, are theoretical. Theoretical computer science plays the same role for computer science as do linear system theory, circuit theory, semiconductor theory, and electromagnetic theory for electrical engineering [6]. This essay will explore the past achievements, present challenges, and future opportunities for research in theoretical computer science. The next section, which explains the goals of theoretical computer science, is addressed to all scientists at NSF. The remaining sections are intended for computer scientists. What Is Theoretical Computer Science? From theory, we get models and terminology for talking about the basic phenomena of the field. We provide the fundamentals for thinking about computing. . . . Theory occasionally yields valuable insights, sometimes that the problem we have been desperately trying to solve for the last few years has
 
This report is concerned only with logic activities related to computer science, and Europe here usually means Western Europe (one can learn only so much in one semester). The idea of such a visit may seem ridiculous to some. The modern world is quickly growing into a global village. There is plenty of communication between Europe and the US. Many European researchers visit the US, and many American researchers visit Europe. Neither Americans nor Europeans make a secret of their logic research. Quite the opposite is true. They advertise their research. From ESPRIT reports, the Bulletin of the European Association for Theoretical Computer Science, the Newsletter of the European Association for Computer Science Logics, publications of the European Foundation for Logic, Language and Information, publications of particular European universities, etc., one can get a good idea of what is going on in Europe and who is doing what. Some European colleagues asked me jokingly if I was on a reconnaissance mission. Well, sometimes a cow wants to suckle more than the calf wants to suck (a Hebrew proverb). It is amazing, however, how different computer science is, especially theoretical computer science, in Europe and the US. American theoretical computer science centers heavily around complexity theory. The two prime American theoretical conferences --- ACM Symposium on Theory of Computing (STOC) and IEEE Foundations of Computer Science Conference (FOCS) --- are largely devoted to complexity theory (in a wider sense of the term). That does not exclude logic. As a matter of fact, important logic results have been published in those conferences. However, STOC and FOCS logic papers belong, as a rule, to branches of logic intimately related to complexity. Finite model theory is a good example of that; ...
 
In this article, we review some of the characteristic features of ad hoc networks, formulate problems, and survey research work done in the area. We focus on two basic problem domains: topology control, the problem of computing and maintaining a connected topology among the network nodes, and routing. This article is not intended to be a comprehensive survey on ad hoc networking. The choice of problems discussed in this article is somewhat biased by the research interests of the author.
 
Intel's new 64-bit chip Itanium has a new instruction set which includes a fused multiply-add instruction x_1 + x_2 · x_3 (see, e.g., [7]). In this short article, we explain the empirical reasons behind the choice of this instruction, give a possible theoretical explanation for this choice, and mention a related theoretical challenge. Empirical reasons behind the new basic operation. A natural reason for introducing a new basic operation is to speed up commonly occurring time-consuming computations. The selection of such operations was done a decade or so ago, when decisions were made which operations to implement in a (speedy) math co-processor. The main operation selected for this implementation is the dot product, which transforms two arrays a_1, ..., a_n and b_1, ..., b_n into their dot (scalar) product c = a_1 · b_1 + ... + a_n · b_n. A natural sequential computation of the dot product consists of sequentially computing s_1 := a_1 · b_1 and s_k := s_{k-1} + a_k · b_k ...
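The pattern behind that empirical argument is the multiply-add step in the inner loop of a dot product; the sketch below is plain Python floating point, not the hardware instruction, and is meant only to show where a fused x_1 + x_2 · x_3 fits.

    def dot(a, b):
        """Sequential dot product: each iteration performs exactly one
        multiply-add, s := s + a[k] * b[k], which a fused multiply-add
        instruction can evaluate in a single step."""
        s = 0.0
        for ak, bk in zip(a, b):
            s = s + ak * bk            # one multiply-add per element
        return s

    print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0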
 
A semi-membership algorithm for a set A is, informally, a program that when given any two strings determines which is logically more likely to be in A. A flurry of interest in this topic in the late seventies and early eighties was followed by a relatively quiescent half-decade. However, in the 1990s there has been a resurgence of interest in this topic. We survey recent work on the theory of semi-membership algorithms. 1 Introduction A membership algorithm M for a set A takes as its input any string x and decides whether x ∈ A. Informally, a semi-membership algorithm M for a set A takes as its input any strings x and y and decides which is "no less likely" to belong to A in the sense that if exactly one of the strings is in A, then M outputs that one string. Semi-membership algorithms have been studied in a number of settings. Recursive semi-membership algorithms (and the associated semi-recursive sets---those sets having recursive semi-membership algorithms) were introduced in the 1...
 
The twin disciplines of Pessimal Algorithm Design and Simplexity Analysis are introduced and illustrated by means of representative problems. 1. Introduction Consider the following problem: we are given a table of n integer keys A_1, A_2, ..., A_n and a query integer X. We want to locate X in the table, but we are in no particular hurry to succeed; in fact, we would like to delay success as much as possible. We might consider using the trivial algorithm, namely test X against A_1, A_2, etc. in turn. However, it might happen that X = A_1, in which case the algorithm would terminate right away. This shows the naive algorithm has O(1) best-case running time. The question is, can we do better, that is, worse? Of course, we can get very slow algorithms by adding spurious loops before the first test of X against the A_i. However, such easy solutions are unacceptable, partly because any fool can see that the algorithm is just wasting time (which would be very embarrassing to it...
 
Inspired by Ian Parberry's "How to present a paper in theoretical computer science" (SIGACT News 19, 2 (1988), pp. 42-47), we provide some advice on how to present results from experimental and empirical research on algorithms. 1 Introduction This note is written primarily for researchers in algorithms who find themselves called upon to present the results of computational experiments. While there has been much recent growth in the amount and quality of experimental research on algorithms, there is still some uncertainty about how to describe the research and present the conclusions. For general advice on presenting papers in theoretical computer science, read Ian Parberry's excellent paper [6]. Here we focus on aspects directly relevant to experimentation and data analysis. Of course, the quality of the talk depends on the quality of the research. For advice on conducting respectable experimental research on algorithms, read McGeoch [5], or Barr et al. [1], or the articles on m...
 
: We describe a method, which we call the Pruning Method, for designing dynamic programming algorithms that does not require the algorithm designer to be comfortable with recursion. 1 Introduction In teaching algorithms courses, dynamic programming is the topic that maximizes the ratio of my students' perceived difficulty of the topic to my perceived difficulty of the topic. Most of the standard textbooks (e.g. [1, 2, 3, 5]) on algorithms offer the following strategy for designing a dynamic programming algorithm for an optimization problem P: 1. Find a recursive algorithm/formula/property that computes/defines/characterizes the optimal solution to an instance of P. 2. Then determine how to compute an optimal solution in a bottom-up iterative manner. My students experience great difficulty with devising a recursive algorithm when the inductive hypothesis has to be strengthened. As an example, consider the following Longest Increasing Subsequence (LIS) Problem: INPUT: A sequence X...
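For readers unfamiliar with the LIS example, here is the standard bottom-up dynamic program (not the Pruning Method itself); the strengthened subproblem is the length of the longest increasing subsequence ending exactly at position i.

    def lis_length(x):
        """Bottom-up DP for Longest Increasing Subsequence.  end_at[i] is the
        length of the longest increasing subsequence that ends at index i,
        which is the strengthened inductive hypothesis mentioned above."""
        end_at = [1] * len(x)
        for i in range(len(x)):
            for j in range(i):
                if x[j] < x[i]:
                    end_at[i] = max(end_at[i], end_at[j] + 1)
        return max(end_at, default=0)

    print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))   # 4, e.g. 1, 4, 5, 9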
 
Introduction The last few years have seen much progress in proving "non-approximability results" for well-known NP-hard optimization problems. As we know, the breakthrough has come by the application of results from probabilistic proof checking. It is an area that seems to continue to surprise: since the connection was discovered in 1991 (Feige et al. [21]), not only have non-approximability results emerged for a wide range of problems, but the factors shown hard steadily increase. Today, tight results are known for central problems like Max-Clique and Min-Set-Cover. (That is, the approximation algorithms we have for these problems can be shown to be the best possible.) Such results also seem to be in sight for Chrom-Num. These are remarkable things, especially in the light of our knowledge of just five years ago. And meanwhile we continue to make progress on the Max-SNP front, where both the algor
 
Introduction Can we store an infinite set in a database? Clearly not, but instead we can store a finite representation of an infinite set and write queries as if the entire infinite set were stored. This is the key idea behind constraint databases, which emerged relatively recently as a very active area of database research. The primary motivation comes from geographical and temporal databases: how does one store a region in a database? More importantly, how does one design a query language that makes the user view a region as if it were an infinite collection of points stored in the database? Finite representations used in constraint databases are first-order formulae; in geographical applications, one often uses Boolean combinations of linear or polynomial inequalities. One of the most challenging questions in the development of the theory of constraint databases was that of the expressive power: what are the limitations of qu
 
In 1986, Babai, Frankl and Simon [BFS86] defined the polynomial hierarchy in communication complexity and asked whether Σ₂^cc = Π₂^cc. In order to tackle this problem, researchers have looked at an infinite version. We recently became aware of a paper from 1979 where Miller [Mil79] shows that this infinite version is independent of the axioms of set theory. In this note we will describe Miller's result and give a simplified proof of one direction by showing that the continuum hypothesis implies that Σ₂^r = Π₂^r = P(R × R). One approach to solving problems in complexity theory is to look at infinite versions of problems where the solutions may be easier. One can then try to apply these proof techniques to the finite complexity theory question. In one of the best examples of this technique, Sipser (see [Sip83]) showed that an infinite version of parity does not have bounded depth countable-size circuits. Furst, Saxe and Sipser [FSS84] used the techniques ...
 
Most database theory has focused on investigating databases containing sets of tuples. In practice, databases often implement relations using bags, i.e., sets with duplicates. In this paper we study how database query languages are affected by the use of duplicates. We consider query languages that are simple extensions of the (nested) relational algebra, and investigate their resulting expressive power and complexity. 1 Introduction In the standard approach to database modeling, relations are assumed to be sets, and no duplicates are allowed. For real applications, many systems relax this restriction [Fis87, HM81] and support bags in their data model, often to save the cost of duplicate elimination. Efforts have been made for providing a theoretical framework for such systems. Algebras for manipulating bags were developed by extending the relational algebra [Alb91, Klu82, OOM87], and optimization techniques for these algebras were studied [BK90, Mum90, Alb91]. Computational aspects of...
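A minimal illustration of the set-versus-bag distinction (not the paper's algebra): the same relation gives different answers depending on whether duplicates are eliminated.

    from collections import Counter

    R = ["a", "a", "b"]             # a bag: the tuple "a" occurs twice

    set_semantics = set(R)          # duplicates eliminated: {'a', 'b'}
    bag_semantics = Counter(R)      # multiplicities kept: Counter({'a': 2, 'b': 1})

    print(set_semantics, bag_semantics)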
 
The Cracker Barrel peg game is a simple, one-player game commonly found on tables at pancake restaurants. In this paper, we consider the computational complexity of the Cracker Barrel problem.
 
As a service to our readers, SIGACT News has an agreement with Computing Reviews to reprint reviews of books and articles of interest to the theoretical computer science community. Computing Reviews is a monthly journal that publishes critical reviews on a broad range of computing subjects, including models of computation, formal languages, computational complexity theory, analysis of algorithms, and logics and semantics of programs. ACM members can receive a subscription to Computing Reviews for $45 per year by writing to ACM headquarters.
 
When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.
 
this paper to give simple proofs, in a uniform format, of the major known (pre-1992) results relating how polynomial-time reductions of SAT to sparse sets collapse the polynomial-time hierarchy. To help the reader familiar with basic facts of complexity theory follow the main flow of ideas, while keeping the exposition self-contained, straightforward proofs from elementary complexity theory are relegated to footnotes. We treat polynomial-time Turing reductions (i.e., Cook reductions) in Section 2. Bounded truth-table reductions (and many-one reductions) are treated in Section 3. Sections 2 and 3 may be read independently of each other. Section 4 uses the definitions of Section 3 to give simple proofs of results on conjunctive and disjunctive reductions. A comprehensive discussion of early work on how reductions to sparse sets collapse the polynomial-time hierarchy may be found in [Ma-89]. Additional discussions of this topic, as well as extensive bibliographies, may be found in [JY-90] and in [Yo-90]. 2 Polynomial-Time Turing Reductions 2.1 Introduction In [Lo-82], Long explicitly used the result (KL-1) to prove:
 
Machine models. Suppose that for every machine M1 in model M1 running in time t = t(n) there is a machine M2 in M2 which computes the same partial function in time g = g(t, n). If g = O(t) + O(n) we say that model M2 simulates M1 linearly. If g = O(t) the simulation has constant-factor overhead; if g = O(t log t) it has a factor-of-O(log t) overhead, and so on. The simulation is on-line if each step of M1 is simulated by step(s) of M2.
 
truly abstract the essential contribution of the paper? Are the author's grammar, syntax, semantics, and spelling correct? Does the
 
The principles underlying this report can be summarized as follows: 1. A strong theoretical foundation is vital to computer science. 2. Theory can be enriched by practice. 3. Practice can be enriched by theory. 4. If we consider (2) and (3), the value, impact, and funding of theory will be enhanced. In order to achieve a greater synergy between theory and application, and to sustain and expand on the remarkable successes of Theory of Computing (TOC), we consider it essential to increase the impact of theory on key application areas. This requires additional financial resources in support of theory, and closer interaction between theoreticians and researchers in other areas of computer science and in other disciplines. The report does not make a detailed assessment of the overall state of theoretical computer science or fully chronicle the achievements of this field. Instead, it has the specific objective of recommending ways to harness these remarkable achievements for the solution of ...
 
The Major Results. Describe the key results of the paper. You may present the statements of the major theorems, but not their proofs. You will probably have to get a little technical here, but do so gradually and carefully.
 
In this paper, I wish to discuss a number of issues related to the support of research in Theory of Computing, including funding, evaluation of research, communication, interaction with other research areas, education, and employment. It is directed primarily to researchers in Theory of Computing and is meant to complement the Infrastructure Section of the committee report "Strategic Directions for Research in Theory of Computing", which is directed primarily to people outside our community. I will examine some things our community has done well, some recent innovations, and some suggestions to enhance our research environment. Finally, I will address the relationship of Theory of Computing with experimental computing and algorithmic engineering. Funding and Evaluation of Research
 
We consider the problem of scheduling a conference to achieve the benefits in time compression of parallel sessions, but without the associated high degree of conflict between talks. We describe a randomized construction meeting these goals that we analyze based on an expansion property of an associated graph. We also give algorithms for attendees scheduling their time within such a conference, and algorithms for verifying a proposed conference schedule. Finally, we present simulation results for typical conference sizes. 1 Introduction Single sessions or parallel sessions? It is a continuing debate in the scheduling of our premier research conferences. On the one hand, parallel sessions allow for more talks to be presented in a given time period. On the other hand, parallel sessions result in many attendees being frustrated by conflicts. For instance, if an attendee is interested in a constant fraction of the talks, then on average (say the schedule is determined randomly) this...
 
ation is to use digital signatures. Here you would verify a digital signature that is computed over the program using TrustMe's private key. But this is not much help in the scenario above. It merely provides you with confirmation that the program came from TrustMe so that they can be held accountable if some day you discover that the program did misbehave. By that time there is no telling how many "data warehouses" [13] already store the information. But suppose we have a formal system, or logic, in which to reason about a program's ability to preserve privacy. Then our trust in a program could be based on the program itself, not on some digital signature for it. Further, depending on the logic, we might even have an algorithm for deciding whether programs have "privacy proofs" in the logic. And this in turn could lead to an efficient static program analyzer. All this req
 
> 0 (v is nonempty); c) uv^i wx^i y is in L for all i ≥ 0. This simplification seems to be unobserved in standard textbooks [Ha78] [HU79] [LP81] [Su88], and sometimes reduces the number of cases for students to consider when proving a language is not context-free.
 
[Figures: round-based register example, round-based consensus example, weak leader election example, and architecture]
This paper presents a deconstruction of the Paxos algorithm by factoring out its fundamental algorithmic principles within two abstractions: an eventual leader election abstraction and an eventual register abstraction. In short, the leader election abstraction encapsulates the liveness property of Paxos, whereas the register abstraction encapsulates its safety property. Our deconstruction is faithful in that it preserves the resilience and efficiency of the original Paxos algorithm in terms of stable storage logs, message complexity, and communication steps. In a companion paper, we show how to use our abstractions to reconstruct powerful variants of Paxos.
 
Behavior of a system scheduling 1000 parallel jobs with two different schedulers. The offered load is 0.7 of the system's capacity. With the default scheduler the system saturates, and only completes the work because of the finite number of jobs involved. In effect, the system's capacity with this scheduler is less than 0.7 of the theoretical maximum, but considering the makespan or average completion time of these 1000 jobs may lead to the erroneous conclusion that the system is within its operating range. With the memory-cognizant scheduler, the system indeed remains stable. Figure courtesy of Anat Batat [1]. 
The conventional model of on-line scheduling postulates that jobs have non-trivial release dates, and are not known in advance. However, it fails to impose any stability constraints, leading to algorithms and analyses that must deal with unrealistic load conditions arising from trivial release dates as a special case. In an effort to make the model more realistic, we show how stability can be expressed as a simple constraint on release times and processing times. We then give empirical and theoretical justifications that such a constraint can close the gap between theory and practice. As it turns out, this constraint seems to trivialize the scheduling problem.
 
We survey the background and challenges of a number of open problems in the theory of relativization. Among the topics covered are pseudorandom generators, time hierarchies, the potential collapse of the polynomial hierarchy, and the existence of complete sets. Relativization (i.e., oracle) theory has seen its share of ups and downs. Extensive surveys of current knowledge [Ver94] and debates as to relativization theory's merits [Har85,All90, HCC + 92,For94] can be found in the literature. However, in a nutshell, one could rather fairly say that as ups and downs go, relativization theory is on the mat. Still, that is not to say that relativization theory has no interesting open issues left with which to challenge theoretical computer scientists. It does, and here are a few such issues. Problem 1: Show that with probability one, the polynomial hierarchy is proper. The above statement is, to say the least, elliptic. However, the problem is well-known in this formulation. The underl...
 
This article focuses on routing messages in distributed networks using efficient data structures. After an overview of the various results in the literature, we point out some interesting open problems.
 
Yes, the lucky 13th column is here, and it is a guest column written by J. Goldsmith, M. Levy, and M. Mundhenk on the topic of limited nondeterminism---classes and hierarchies derived when nondeterminism itself is viewed as a quantifiable resource (as it indeed is!). Coming up in the Complexity Theory Column in the very special 100th issue of SIGACT News: A forum on the future of complexity theory. Many of the field's leading lights share their exciting insights on what lies ahead, so please be there in three!
 
"It is remarkable to see how different paths have led to rather similar results so close in time." -- Kalai, 1992 ([8]). Three papers were published in 1992, each providing a combinatorial, randomized algorithm solving linear programming in subexponential expected time. Bounds on independent algorithms were proven, one by Kalai, and the other by Matousek, Sharir, and Welzl. Results by Gartner combined techniques from these papers to solve a much more general optimization problem in similar time bounds. Although the algorithms by Kalai and Sharir--Welzl seem remarkably different in style and evolution, this paper demonstrates that one of the variants of Kalai's algorithm is identical (although dual) to the algorithm of Sharir--Welzl. Also the implication of Gartner's framework on future improvements is examined more carefully. 1 Introduction Linear programming has long been an important problem in computer science. Since 1950, when the simplex method was introduced by Dantzig [4], thou...
 
Introduction There are now numerous sites on the web that contain useful information for theoretical computer scientists. Among the more popular sites are those maintaining bibliographies and surveys for specific subject areas; many journals and conferences now maintain an online presence as well, allowing easy access to published papers. It is now common to perform literature searches directly on the web, as well as on specialized databases like INSPEC. There are sites that maintain links for specific subject areas [3, 5, 7], as well as sites that maintain information about conference announcements and deadlines [24, 7, 13]. In addition, there are paper and bibliography databases like the Hypertext Bibliography Project [14], the Computing Research Repository [17], and the Computer Science Research Paper Search Engine [19]. Searching for relevant material however is still a time-consuming task, given the volume of information available and the lack of contextual precision of mo
 
A superpolylogarithmic subexponential function is any function that asymptotically grows faster than any polynomial of any logarithm but slower than any exponential. We present a recently discovered nineteenth-century manuscript about these functions, which, in part because of their applications in cryptology, have received considerable attention in contemporary computer science research. Attributed to the little-known yet highly-suspect composer/mathematician Maria Poopings, the manuscript can be sung to the tune of "Supercalifragilisticexpialidocious" from the musical Mary Poppins. In addition, we prove three ridiculous facts about superpolylogarithmic subexponential functions. Using novel extensions to the popular DTIME notation from complexity theory, we also define the complexity class SuperPolyLog/SubExp, which consists of all languages that can be accepted within deterministic superpolylogarithmic subexponential time. We show that this class is notationally intractabl...
 
Data Structures is a first book on algorithms and data structures, using an object-oriented approach. The target audience for the book is a second-year CS class introducing fundamental data structures and their associated algorithms. This second ...
 