# Journal of Computer and System Sciences

Published by Elsevier

Online ISSN: 1090-2724

Print ISSN: 0022-0000

## Publications

The authors study two different ways to restrict the power of NP. They consider languages accepted by nondeterministic polynomial-time machines with a small number of accepting paths in the case of acceptance, and they investigate three subclasses of NP that are low for complexity classes not known to be in the polynomial-time hierarchy. The subclasses, UP, FewP, and Few, are all defined in terms of nondeterministic machines with a bounded number of accepting paths for every input string, but for the last two classes this number is not known beforehand and can range over a space of polynomial size. The authors prove lowness properties of the class Few and of some other interesting sets that are low for the class PP. The lowness results are used to obtain positive relativizations of complexity classes.

…

We introduce a new model for computing polynomials: a depth-2 circuit with a symmetric gate at the top and plus gates at the bottom, i.e., a circuit that computes a symmetric function of linear functions, S<sub>m</sub><sup>d</sup>(l<sub>1</sub>, l<sub>2</sub>, …, l<sub>m</sub>), where S<sub>m</sub><sup>d</sup> is the d-th elementary symmetric polynomial in m variables and the l<sub>i</sub>'s are linear functions. We refer to this model as the symmetric model. This new model is related to standard models of arithmetic circuits, especially to depth-3 circuits. In particular, we show that in order to improve the results of Shpilka and Wigderson (1999), i.e., to prove super-quadratic lower bounds for depth-3 circuits, one must first prove a super-linear lower bound for the symmetric model. We prove two nontrivial linear lower bounds for our model: the first for computing the determinant, and the second for computing the sum of two monomials. The main technical contribution relates the maximal dimension of linear subspaces on which S<sub>m</sub><sup>d</sup> vanishes to lower bounds in the symmetric model. In particular, we show that an answer to the following problem (which is very natural and of independent interest) would imply lower bounds on symmetric circuits for many polynomials: “what is the maximal dimension of a linear subspace of C<sup>m</sup> on which S<sub>m</sub><sup>d</sup> vanishes?” We give two partial solutions to this problem, each of which enables us to prove a different lower bound.
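To make the model concrete, here is a small sketch (illustrative only, with hypothetical names; not code from the paper) that evaluates a symmetric-model circuit: it computes S<sub>m</sub><sup>d</sup> applied to the values of m linear forms at a point.

```python
import math
from itertools import combinations

def elementary_symmetric(d, values):
    """S_m^d: sum over all d-element subsets of the product of entries."""
    return sum(math.prod(c) for c in combinations(values, d))

def symmetric_model(d, linear_forms, x):
    """Evaluate S_m^d(l_1(x), ..., l_m(x)). Each linear form is given as
    (coeffs, constant): l(x) = sum_i coeffs[i] * x[i] + constant."""
    values = [sum(a * xi for a, xi in zip(coeffs, x)) + b
              for coeffs, b in linear_forms]
    return elementary_symmetric(d, values)
```

For example, with the two forms l<sub>1</sub> = x<sub>1</sub> and l<sub>2</sub> = x<sub>2</sub> and d = 2, the model computes the single monomial x<sub>1</sub>x<sub>2</sub>.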

…

A goal of research on DNA computing is to solve problems that are beyond the capabilities of the fastest silicon-based supercomputers. Adleman and Lipton present exhaustive search algorithms for 3Sat and 3-Coloring, which can only be run on small instances and hence are not practical. In this paper, we show how improved algorithms can be developed for the 3-Coloring and Independent Set problems. Our algorithms use only the DNA operations proposed by Adleman and Lipton, but combine them in more powerful ways, and use polynomial preprocessing on a standard computer to tailor them to the specific instance to be solved. The main contribution of this paper is a more general model of DNA algorithms than that proposed by Lipton. We show that DNA computation for NP-complete problems can do more than just exhaustive search. Further research in this direction will help determine whether or not DNA computing is viable for NP-hard problems. A second contribution is the first analysis of errors that arise in generating the solution space for DNA computation.

…

We consider the complexity of computing Boolean functions by
analog circuits of bounded fan-in, i.e. by circuits of gates computing
real-valued functions, either exactly or as a sign-representation. Sharp
upper bounds are obtained for the complexity of the most difficult
n-variable function over certain bases (sign-representation by
arithmetic circuits and exact computation by piecewise linear circuits).
Bounds are given for the computational power gained by adding
discontinuous gate functions and nondeterminism. We also prove explicit
nonlinear lower bounds for the formula size of analog circuits over
bases containing addition, subtraction, multiplication, the sign
function and all real constants.

…

A parallel pointer machine (PPM) is a parallel model having pointers as its principal data type. PPMs have been characterized as PRAMs obeying two restrictions: restricted arithmetic capabilities and the CROW (concurrent read, owner write) memory access restriction. Results concerning the relative power of PPMs (and other arithmetically restricted PRAMs) versus CROW PRAMs having ordinary arithmetic capabilities are presented. First, a lower bound separating PPMs from CROW PRAMs is proved. Second, it is shown that this lower bound is tight. As a corollary, sharply improved PPM algorithms are obtained for a variety of problems, including deterministic context-free language recognition.

…

We study the problem of constructing a sorting circuit, network, or PRAM algorithm that is tolerant to faults. For the most part, we focus on fault patterns that are random, e.g., where the result of each comparison is independently faulty with probability upper-bounded by some constant. All previous fault-tolerant sorting circuits, networks, and parallel algorithms require Ω(log<sup>2</sup> n) depth (time) and/or Ω(n log<sup>2</sup> n) comparisons to sort n items. In this paper, we construct a passive-fault-tolerant sorting circuit with O(n log n log log n) comparators, a reversal-fault-tolerant sorting network with O(n log<sup>log<sub>2</sub> 3</sup> n) comparators, and a deterministic O(log n)-step, O(n)-processor EREW PRAM fault-tolerant sorting algorithm. The results are based on a new analysis of the AKS circuit, which uses a much weaker notion of expansion that can be preserved in the presence of faults. Previously, the AKS circuit was not believed to be fault-tolerant because the expansion properties that were believed to be crucial for the performance of the circuit are destroyed by random faults. Extensions of our results for worst-case faults are also presented.
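The constructions above are involved, but the passive fault model itself is easy to simulate. The sketch below (an illustration, not the AKS-based circuit) runs an odd-even transposition sorting network in which each comparator independently fails, a failed comparator simply passing its inputs through:

```python
import random

def odd_even_transposition_sort(items, fault_prob=0.0, rng=None):
    """Sort n items with an n-round odd-even transposition comparator
    network. Each comparator is independently faulty with probability
    fault_prob; a passive fault means it leaves its inputs unchanged."""
    rng = rng or random.Random(0)
    a = list(items)
    n = len(a)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            if rng.random() < fault_prob:
                continue  # passive fault: no comparison takes place
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

With `fault_prob=0` this is the classical O(n)-depth, O(n<sup>2</sup>)-comparator network; with a positive fault probability the output may be only approximately sorted, which is exactly the failure mode the paper's constructions are designed to survive with far fewer comparators.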

…

We investigate a model of gate failure for Boolean circuits in which a faulty gate is restricted to output one of its input values. For some types of gates, the model (which we call the short-circuit model of gate failure) is weaker than the traditional von Neumann model where faulty gates always output precisely the wrong value. Our model has the advantage that it allows us to design Boolean circuits that can tolerate worst-case faults, as well as circuits that have arbitrarily high success probability in the case of random faults. Moreover, the short-circuit model captures a particular type of fault that commonly appears in practice, and it suggests a simple method for performing post-test alterations to circuits that have more severe types of faults. A variety of bounds on the size of fault-tolerant circuits are proved in the paper. Perhaps the most important is a proof that any k-fault-tolerant circuit for any input-sensitive function using any type of gates (even arbitrarily powerful, multiple-input gates) must have size at least Ω(k log k/log log k). Obtaining a tight bound on the size of a circuit for computing the AND of two values if up to k of the gates are faulty is one of the central questions left open in the paper.
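A minimal sketch of the short-circuit fault model (illustrative only, hypothetical encoding; not from the paper): a faulty gate outputs one of its own input values instead of applying its operation.

```python
def evaluate(circuit, inputs, out, faults=None):
    """Evaluate a Boolean circuit given as {gate: (op, in1, in2)}.
    In the short-circuit fault model, a faulty gate outputs one of its
    inputs: faults[gate] = 0 or 1 selects which input is copied."""
    faults = faults or {}
    values = dict(inputs)

    def val(wire):
        if wire not in values:
            op, a, b = circuit[wire]
            x, y = val(a), val(b)
            if wire in faults:
                values[wire] = (x, y)[faults[wire]]  # copy an input
            else:
                values[wire] = {"and": x & y, "or": x | y}[op]
        return values[wire]

    return val(out)
```

Note how a short-circuited AND gate with inputs 1 and 0 can output 1, which is weaker than a von Neumann fault that would force the wrong value regardless of the inputs.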

…

This paper gives nearly optimal, logarithmic upper and lower bounds on the minimum degree of Nullstellensatz refutations (i.e., polynomials) of the propositional induction principle.

…

The authors propose and develop a complexity theory of feasible closure properties. For each of the classes #P, SpanP, OptP, and MidP, they establish complete characterizations, in terms of complexity class collapses, of the conditions under which the class has all feasible closure properties. In particular, #P is P-closed if and only if PP=UP; SpanP is P-closed if and only if NP=PP; MidP is P-closed if and only if P<sup>PP</sup>=NP; and OptP is P-closed if and only if NP=co-NP. Furthermore, for each of these classes, the authors show natural operations, such as subtraction and division, to be hard closure properties, in the sense that if a class is closed under one of these, then it has all feasible closure properties. They also study potentially intermediate closure properties for #P. These properties (maximum, minimum, median, and decrement) seem neither to be possessed by #P nor to be #P-hard.

…

It is shown that while absolute answers to open questions about relationships between counting classes seem hard to get, it is still possible to obtain relative answers that help us develop intuition about, or an understanding of, these relationships. In particular, a structural approach to extending such understanding is proposed.

…

A logic-based framework for defining counting problems is given,
and it is shown that it exactly captures the problems in Valiant's
counting class #P. The expressive power of the framework is studied
under natural syntactic restrictions, and it is shown that some of the
subclasses obtained in this way contain problems in #P with interesting
computational properties. In particular, using syntactic conditions, a
class of polynomial-time-computable #P problems is isolated, as well as
a class in which every problem is approximable by a polynomial-time
randomized algorithm. These results set the foundation for further study
of the descriptive complexity of the class #P. In contrast, it is shown,
under reasonable complexity theoretic assumptions, that it is an
undecidable problem to tell if a counting problem expressed in the
framework is polynomial-time computable or is approximable by a
randomized polynomial-time algorithm. Some open problems are discussed.

…

We demonstrate an oracle relative to which there are one-way
functions but every paddable 1-li-degree collapses to an isomorphism
type, thus yielding a relativized failure of the Joseph-Young (1985)
conjecture (JYC). We then use this result to construct an oracle
relative to which the isomorphism conjecture (IC) is true but one-way
functions exist, which answers an open question of Fenner, Fortnow, and
Kurtz (1992). Thus, there are now relativizations realizing every one of
the four possible states of affairs between the IC and the existence of
one-way functions.

…

Finding the connected components of an undirected graph G=(V,E) on n=|V| vertices and m=|E| edges is a fundamental computational problem. The best known parallel algorithm for the CREW PRAM model runs in O(log<sup>2</sup> n) time using n<sup>2</sup>/log<sup>2</sup> n processors. For the CRCW PRAM model, in which concurrent writing is permitted, the best known algorithm runs in O(log n) time using slightly more than (n+m)/log n processors. Simulating this algorithm on the weaker CREW model increases its running time to O(log<sup>2</sup> n). We present here a simple algorithm that runs in O(log<sup>3/2</sup> n) time using n+m CREW processors. Finding an o(log<sup>2</sup> n) parallel connectivity algorithm for this model was an open problem for many years.
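For reference, the problem itself has a simple sequential solution; the sketch below (a standard union-find baseline, not the parallel algorithm of the paper) computes the components in near-linear time.

```python
def connected_components(n, edges):
    """Sequential union-find with path halving: near-linear in n + m."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return {find(v) for v in range(n)}  # one root per component
```

The parallel difficulty discussed in the abstract is precisely that this pointer-chasing process is hard to perform in polylogarithmic time without concurrent writes.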

…

We present new expressibility lower bounds for a logic with a weak form of ordering, using model-theoretic games. Our lower bound is on first-order logic augmented with counting quantifiers, a logical language that, over structures with a total ordering, has exactly the power of the class TC<sup>0</sup>. We prove that it cannot express a property ORD in L over structures with a successor relation. This holds even in light of the fact that the class L itself has a logical characterization as the properties expressible in first-order logic with a deterministic transitive closure operator over structures with a successor relation. The proof uses an extension of the well-known Ehrenfeucht-Fraïssé games for logics with counting. We also show that ORD is actually complete for L (via quantifier-free projections), a fact of independent interest.

…

We present a membership-query algorithm for efficiently learning DNF with respect to the uniform distribution. In fact, the algorithm properly learns the more general class of functions computable as a majority of polynomially many parity functions. We also describe extensions of this algorithm for learning DNF over certain nonuniform distributions and from noisy examples, as well as for learning a class of geometric concepts that generalizes DNF. The algorithm utilizes one of Freund's boosting techniques and relies on the fact that boosting does not require a completely distribution-independent weak learner. The boosted weak learner is a nonuniform extension of a Fourier-based algorithm due to Kushilevitz and Mansour (1991).
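As background for the Fourier-based weak learner mentioned above, the following sketch (illustrative, with hypothetical names; not the paper's algorithm) estimates a single Fourier coefficient E[f·chi_S] of a ±1-valued Boolean function under the uniform distribution by random sampling:

```python
import random

def parity(S, x):
    """chi_S(x) = (-1)^(sum of x[i] for i in S), with x a 0/1 tuple."""
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_fourier_coeff(f, n, S, samples=20000, rng=None):
    """Estimate the Fourier coefficient E_x[f(x) * chi_S(x)] of a
    +/-1-valued function f on n bits by uniform random sampling."""
    rng = rng or random.Random(1)
    total = 0
    for _ in range(samples):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        total += f(x) * parity(S, x)
    return total / samples
```

A large coefficient for some parity chi_S is exactly the kind of weak approximator that boosting can then amplify.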

…

Let G=(V, A) be a directed planar graph, let s, t ∈ V, s≠t, and let c<sub>a</sub>>0 be the capacity of an arc a∈A. The problem is to find a maximum flow from s to t in G subject to these capacities. The fastest algorithm known so far requires 𝒪(|V|·<sup>3</sup>√|V|·log |V|) time, whereas the algorithm introduced in this paper requires only 𝒪(|V| log |V|) time.
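For contrast with the planar bound, a textbook maximum-flow baseline for arbitrary directed graphs (Edmonds-Karp, O(|V|·|E|<sup>2</sup>); this is not the paper's algorithm) can be sketched as:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp on an arbitrary directed graph. `capacity` is a dict
    {(u, v): c}. A generic baseline only, far slower than the paper's
    planar algorithm."""
    residual = dict(capacity)
    for (u, v) in list(capacity):
        residual.setdefault((v, u), 0)  # reverse arcs for the residual graph
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for (a, b), c in residual.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if t not in parent:
            return flow
        # Recover the path and its bottleneck capacity, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck
```

The point of the paper is that planarity permits a radically faster, 𝒪(|V| log |V|) approach than such augmenting-path schemes on general graphs.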

…

It is shown that every language in PSPACE, or equivalently every
language accepted by an unbounded round interactive proof system, has a
one-round, two-prover interactive proof with exponentially small error
probability. To obtain this result, the correctness of a simple but
powerful method for parallelizing two-prover interactive proofs to
reduce their error is proved.

…

The power of randomness to save a query to an NP-complete set is studied. Error probabilities for the random reduction of the ⩽<sub>m</sub><sup>P</sup>-complete language for P<sub>||</sub><sup>SAT[k]</sup> to a language in P<sub>||</sub><sup>SAT[k-1]</sup> are obtained. It is proved that these probability bounds are tight unless PH collapses. Tight performance bounds on several randomized reductions between classes in the Boolean hierarchy are also obtained. These bounds provide probability thresholds for completeness under randomized reductions in these classes. Using these thresholds, hardness properties are proved for some languages in the Boolean hierarchy that are not known to be ⩽<sub>m</sub><sup>P</sup>-complete. It is also shown that randomness is far less effective in saving a query in bounded query function computations.

…

We consider the problem of computing the permanent of an n by n 0,1 matrix. For a class of matrices corresponding to constant-degree expanders we construct a deterministic polynomial-time approximation algorithm to within a multiplicative factor (1+ϵ)<sup>n</sup>, for arbitrary ϵ>0. This is an improvement over the best known approximation factor e<sup>n</sup> obtained in Linial, Samorodnitsky and Wigderson (2000) [9], though the latter result was established for arbitrary non-negative matrices. Our results use a recently developed deterministic approximation algorithm for counting partial matchings of a graph (Bayati, Gamarnik, Katz, Nair and Tetali (2007) [2]) and the Jerrum–Vazirani method (Jerrum and Vazirani (1996) [8]) of approximating the permanent by near-perfect matchings.
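As a baseline, the permanent can be computed exactly, though in exponential time, by Ryser's inclusion-exclusion formula; the sketch below (illustrative only) uses O(2<sup>n</sup>·n<sup>2</sup>) arithmetic operations.

```python
def permanent(M):
    """Exact permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula: perm(M) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum of row i restricted to S)."""
    n = len(M)
    total = 0
    for mask in range(1, 1 << n):  # nonempty column subsets S
        prod = 1
        for i in range(n):
            prod *= sum(M[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total
```

The #P-hardness of this quantity is what makes deterministic polynomial-time approximation, even to within an exponential factor, a meaningful goal.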

…

One of the most promising ways to determine evolutionary distance between two organisms is to compare the order of appearance of orthologous genes in their genomes. The resulting genome rearrangement problem calls for finding a shortest sequence of rearrangement operations that sorts one genome into the other. In this paper we provide a 1.5-approximation algorithm for the problem of sorting by transpositions and transreversals, improving on a five-year-old 1.75 ratio for this problem. Our algorithm is also faster than current approaches and requires time for n genes.

…

The translocation operation is one of the popular operations for genome rearrangement. In this paper, we present a 1.75-approximation algorithm for computing unsigned translocation distance, which improves upon the best known 2-approximation algorithm [1].

Keywords: Unsigned translocation distance; Approximation algorithm

…

We prove the following conjecture stated by Harrison and Ibarra (Inform. and Control 13 (1968), 462): there are languages accepted by (k+1)-head 1-way deterministic pushdown automata ((k+1)-DPDAs) but not by k-head 1-way pushdown automata (k-PDAs), for every k. On the assumption that their conjecture holds, Harrison and Ibarra also derived some other consequences; all those consequences now become theorems. For example, the class of languages accepted by k-PDAs is not closed under intersection and complementation. Several other interesting consequences also follow: CFL ⊄ ∪<sub>k</sub> DPDA(k) and FA(2) ⊄ ∪<sub>k</sub> DPDA(k), where DPDA(k) = {L | L is accepted by a k-DPDA} and FA(2) = {L | L is accepted by a 2-head finite automaton}. Our proof is constructive (that is, not based on diagonalization). Previously, the “k+1 versus k heads” problem was solved by diagonalization and translation methods for stronger machines (2-way, etc.) and by traditional counting arguments for weaker machines (k-FA, k-head counter machines, etc.).

…

The surjectivity problem for 2D cellular automata was proved undecidable in 1989 by Jarkko Kari. The proof consists of a reduction from a problem concerning finite tilings to the surjectivity problem, using a special and very sophisticated tile set. In this article, we present a much simpler tile set that can play the same role.

…

The concept of intuitionistic fuzzy sets generalizes that of fuzzy sets, and the theory of intuitionistic fuzzy sets is well suited to dealing with vagueness. Recently, intuitionistic fuzzy sets have been used to build soft decision-making models that can accommodate imprecise information, and two solution concepts, the intuitionistic fuzzy core and the consensus winner for group decision making, have been developed by other researchers. However, there has been little investigation of multicriteria and/or group decision making using intuitionistic fuzzy sets with multiple criteria being explicitly taken into account. In this paper, multiattribute decision making using intuitionistic fuzzy sets is investigated: multiple criteria are explicitly considered, several linear programming models are constructed to generate optimal weights for the attributes, and corresponding decision-making methods are proposed. The feasibility and effectiveness of the proposed method are illustrated with a numerical example.

…

It is shown that the validity problem for propositional dynamic logic (PDL), which is decidable and actually DEXPTIME-complete for the usual class of regular programs, becomes highly undecidable, viz. Π<sup>1</sup><sub>1</sub>-complete, when the single nonregular one-letter program L = {a<sup>2<sup>i</sup></sup> | i ⩾ 0} is added. This answers a question of Harel, Pnueli, and Stavi.

…

We consider the problem of determining if two finite groups are isomorphic. The groups are assumed to be represented by their multiplication tables. We present an O(n) algorithm that determines if two Abelian groups with n elements each are isomorphic. This improves upon the previous upper bound of O(n log n) [Narayan Vikas, An O(n) algorithm for Abelian p-group isomorphism and an O(n log n) algorithm for Abelian group isomorphism, J. Comput. System Sci. 53 (1996) 1–9] known for this problem. We solve a more general problem of computing the orders of all the elements of any group (not necessarily Abelian) of size n in O(n) time. Our algorithm for isomorphism testing of Abelian groups follows from this result. We use the property that our order finding algorithm works for any group to design a simple O(n) algorithm for testing whether a group of size n, described by its multiplication table, is nilpotent. We also give an O(n) algorithm for determining if a group of size n, described by its multiplication table, is Abelian.
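A naive version of the order-finding step is easy to state; the sketch below (illustrative only, identity assumed to be element 0) repeatedly multiplies and so runs in time proportional to the sum of the orders rather than the paper's O(n).

```python
def element_orders(table):
    """Orders of all elements of a finite group given by its
    multiplication table (table[i][j] = i*j, identity assumed to be 0).
    Naive repeated multiplication: O(sum of orders) overall."""
    orders = []
    for g in range(len(table)):
        x, k = g, 1
        while x != 0:
            x, k = table[x][g], k + 1
        orders.append(k)
    return orders

def is_abelian(table):
    """Check commutativity directly from the table, O(n^2) entries."""
    n = len(table)
    return all(table[i][j] == table[j][i] for i in range(n) for j in range(n))
```

For the cyclic group Z<sub>4</sub> (table[i][j] = (i+j) mod 4), the element orders are 1, 4, 2, 4, and the multiset of orders is exactly the isomorphism invariant the Abelian-group test relies on.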

…

We consider new parameterizations of NP-optimization problems that have nontrivial lower and/or upper bounds on their optimum solution size. The natural parameter, we argue, is the quantity above the lower bound or below the upper bound. We show that for every problem in MAX SNP, the optimum value is bounded below by an unbounded function of the input size, and that the above-guarantee parameterization with respect to this lower bound is fixed-parameter tractable. We also observe that approximation algorithms give nontrivial lower or upper bounds on the solution size and that the above- or below-guarantee question with respect to these bounds is fixed-parameter tractable for a subclass of NP-optimization problems. We then introduce the notion of ‘tight’ lower and upper bounds and exhibit a number of problems for which the above-guarantee and below-guarantee parameterizations with respect to a tight bound are fixed-parameter tractable or W-hard. We show that if we parameterize “sufficiently” above or below the tight bounds, then these parameterized versions are not fixed-parameter tractable unless P=NP, for a subclass of NP-optimization problems. We also list several directions to explore in this paradigm.

…

We study ordinal embedding relaxations in the realm of parameterized complexity. We prove the existence of a quadratic kernel for the Betweenness problem parameterized above its tight lower bound, which is stated as follows. For a set V of variables and a set C of constraints “vi is between vj and vk”, decide whether there is a bijection from V to the set {1,…,|V|} satisfying at least |C|/3+κ of the constraints in C. Our result solves an open problem attributed to Benny Chor in Niedermeier's monograph “Invitation to Fixed-Parameter Algorithms”. The Betweenness problem is of interest in molecular biology. An approach developed in this paper can be used to determine the parameterized complexity of a number of other optimization problems on permutations parameterized above or below tight bounds.
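A brute-force illustration of the problem statement (not the kernelization): count the betweenness constraints satisfied by a bijection, and note that a uniformly random bijection satisfies each constraint with probability 1/3, which is why |C|/3 is the tight lower bound parameterized above.

```python
from itertools import permutations

def satisfied(position, constraints):
    """Count constraints (i, j, k), read 'v_i lies between v_j and v_k',
    satisfied by the placement position[v] -> slot."""
    count = 0
    for i, j, k in constraints:
        a, b, c = position[i], position[j], position[k]
        if b < a < c or c < a < b:
            count += 1
    return count

def best_over_all(n, constraints):
    """Brute-force the best bijection V -> {0, ..., n-1}; exponential,
    for illustrating the objective only."""
    return max(satisfied(dict(enumerate(p)), constraints)
               for p in permutations(range(n)))
```

Of the 6 orderings of a triple, exactly 2 place the designated variable in the middle, giving the 1/3 expectation per constraint.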

…

Absolutely parallel grammars are defined, and it is shown that the family of languages generated is equal to the family of languages generated by two-way deterministic finite-state transducers (abbreviated 2ft). Furthermore, it is shown that this family forms a full AFL closed under substitution. It is shown that the family of languages generated by two-way nondeterministic finite-state transducers is equal to the family of checking automata languages and that it properly contains the family of languages generated by 2ft.

…

In this paper we analyze the behavior of quantum random walks. In particular, we present several new results for the absorption probabilities in systems with both one and two absorbing walls for the one-dimensional case. We compute these probabilities both by employing generating functions and by use of an eigenfunction approach. The generating function method is used to determine some simple properties of the walks we consider, but appears to have limitations. The eigenfunction approach works by relating the problem of absorption to a unitary problem that has identical dynamics inside a certain domain, and can be used to compute several additional interesting properties, such as the time dependence of absorption. The eigenfunction method has the distinct advantage that it can be extended to arbitrary dimensionality. We outline the solution of the absorption probability problem of a (D−1)-dimensional wall in a D-dimensional space.
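A direct simulation of the one-wall case is straightforward; the sketch below (illustrative only: Hadamard coin, walker started one site from an absorbing wall at 0) accumulates the absorption probability, which remains bounded away from 1, in contrast with the classical walk, which is absorbed with probability 1.

```python
import math

def absorption_probability(steps, start=1):
    """Hadamard walk on the integer line with an absorbing wall at 0.
    Each time step: the amplitude at position 0 is measured out
    (absorbed), then a Hadamard coin flip and a shift are applied.
    Returns the probability absorbed within `steps` steps."""
    h = 1.0 / math.sqrt(2.0)
    state = {start: [0.0, 1.0]}  # position -> [left-amp, right-amp]
    absorbed = 0.0
    for _ in range(steps):
        if 0 in state:  # measure at the wall
            l, r = state.pop(0)
            absorbed += l * l + r * r
        new = {}
        for pos, (l, r) in state.items():
            nl, nr = h * (l + r), h * (l - r)  # Hadamard coin
            new.setdefault(pos - 1, [0.0, 0.0])[0] += nl  # shift left
            new.setdefault(pos + 1, [0.0, 0.0])[1] += nr  # shift right
        state = new
    return absorbed
```

After two steps the walker is absorbed with probability exactly 1/2, and the cumulative probability then creeps up toward a limit strictly below 1, the quantum analogue of the absorption question analyzed in the paper.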

…

An instance of a control structure is a mapping which takes one or more programs into a new program whose behavior is based on that of the original programs. An instance of a control structure is effective iff it is effectively computable. In order to study the interrelationships of control structures, we consider abstract programming systems (numberings of the partial recursive functions) in which some control structures, effective or otherwise, are present, but others are not. This paper uses the techniques of recursive function theory, including recursion theorems and priority arguments, to prove the independence of certain control structures in abstract programming systems. For example, we have obtained the following results. In effective numberings of the partial recursive functions, the one-one effective Kleene recursion theorem and the one-one effective (partial) if-then-else control structure are independent, but together they yield all effective control structures. In any effective numbering, the effective Kleene form of the double recursion theorem yields all effective control structures.

…

Implementations of abstract data types are defined via enrichments of a target type. We propose to use an extended typed λ-calculus for enrichments in order to meet the conceptual requirement that an implementation has to bring us closer to a (functional) program. Composability of implementations is investigated, the main result being that composition of correct implementations is correct if terminating programs are implemented by terminating programs. Moreover, we provide syntactical criteria to guarantee correctness of composition. The proof is based on strong normalization and Church-Rosser results of the extended λ-calculus which seem to be of interest in their own right.

…

We initiate the study of incentives in a general machine learning framework. We focus on a game-theoretic regression learning setting where private information is elicited from multiple agents with different, possibly conflicting, views on how to label the points of an input space. This conflict potentially gives rise to untruthfulness on the part of the agents. In the restricted but important case when every agent cares about a single point, and under mild assumptions, we show that agents are motivated to tell the truth. In a more general setting, we study the power and limitations of mechanisms without payments. We finally establish that, in the general setting, the VCG mechanism goes a long way in guaranteeing truthfulness and economic efficiency.

…

We introduce an abstract model of exact learning via queries that can be instantiated to all the query learning models currently in use, while being closer to them than previous unifying attempts. We present a characterization of those Boolean function classes learnable in this abstract model, in terms of a new combinatorial notion that we introduce, the abstract identification dimension. Then we prove that the particularization of our notion to specific known protocols such as equivalence, membership, and membership and equivalence queries results in exactly the same combinatorial notions currently known to characterize learning in these models, such as strong consistency dimension, extended teaching dimension, and certificate size. Our theory thus fully unifies all these characterizations. For models enjoying a specific property that we identify, the notion can be simplified while keeping the same characterizations. From our results we can derive combinatorial characterizations of all those other models for query learning proposed in the literature. We can also obtain the first polynomial-query learning algorithms for specific interesting problems such as learning DNF with proper subset and superset queries.

…

The properties of a simple and natural notion of observational equivalence of algebras and the corresponding specification-building operation are studied. We begin with a definition of observational equivalence which is adequate to handle reachable algebras only, and show how to extend it to cope with unreachable algebras and also how it may be generalised to make sense under an arbitrary institution. Behavioural equivalence is treated as an important special case of observational equivalence, and its central role in program development is shown by means of an example.

…

The reducibility “polynomial time computable in” for arbitrary functions is introduced. It generalizes Cook's definition from sets to arbitrary functions. A complexity-theoretic as well as a syntactic characterization is given and their equivalence is shown. This equivalence and the naturalness of both definitions give evidence that our notion is “correct.” The computable functions are classified into polynomial classes according to this reducibility. The ordering of these classes under set inclusion is studied. Honest classes are introduced and, using them, the classification is related to the computational complexity of the functions classified. The algebraic structure of honest classes is also investigated. In Section II, abstract subrecursive reducibilities are introduced. An axiomatic definition in purely recursion-theoretic terms is given; in particular, no reference to a particular machine model is needed. The definition is in the spirit of Strong's and Wagner's characterizations of basic recursive function theories. All known reducibilities are abstract reducibilities. The algebraic structure of abstract classes is explored.

…

In the context of abstract geometrical computation (computing with colored line segments), we study the possibility of having an accumulation with small signal machines, i.e., signal machines having only a very limited number of distinct speeds. The cases of 2 and 4 speeds are trivial: we provide a proof that no machine can produce an accumulation in the case of 2 speeds and exhibit an accumulation with 4 speeds. The main result is the twofold case of 3 speeds. On the one hand, we prove that accumulations cannot happen when all ratios between speeds and all ratios between initial distances are rational. On the other hand, we provide examples of an accumulation in the case of an irrational ratio between 2 speeds and in the case of an irrational ratio between two distances in the initial configuration. This dichotomy is explained by the presence of a phenomenon computing Euclid's algorithm (gcd): it stops if and only if its input is commensurate (i.e., of rational ratio).
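The commensurability criterion is exactly the termination condition of subtractive Euclid; a sketch (illustrative only, not the signal-machine construction):

```python
from fractions import Fraction

def euclid_steps(a, b, max_steps=10000):
    """Subtractive Euclid on two positive magnitudes. It terminates
    (reaching the gcd scale) exactly when a/b is rational, i.e. when a
    and b are commensurable; returns the step count, or None if it does
    not terminate within max_steps."""
    steps = 0
    while a != 0 and b != 0 and steps < max_steps:
        if a >= b:
            a -= b
        else:
            b -= a
        steps += 1
    return steps if (a == 0 or b == 0) else None
```

On an irrational ratio such as the golden ratio the subtraction sequence never reaches 0, which mirrors the accumulating (non-halting) behavior of the 3-speed signal machines above.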

…

Databases and other transaction-processing systems use concurrency control and recovery algorithms to ensure that transactions are atomic (i.e., serializable and recoverable). We present a new algorithm based on locking that permits more concurrency than existing commutativity-based algorithms. The algorithm exploits type-specific properties of objects; necessary and sufficient constraints on lock conflicts are derived directly from a data type specification. In addition, the algorithm permits operations to be both partial and non-deterministic, and it permits the lock mode for an operation to be determined by its results as well as its name and arguments. We give a complete formal description of the algorithm, encompassing both concurrency control and recovery, and prove that the algorithm satisfies hybrid atomicity, a local atomicity property that combines aspects of static and dynamic atomic algorithms. We also show that the algorithm is optimal in the sense that no hybrid atomic locking scheme can permit more concurrency.

…

We show that the uniform validity is equivalent to the non-uniform validity for Blass' semantics of [A. Blass, A game semantics for linear logic, Ann. Pure Appl. Logic 56 (1992) 183–220]. We present a shorter proof (than that of [G. Japaridze, The intuitionistic fragment of computability logic at the propositional level, Ann. Pure Appl. Logic 147 (3) (2007) 187–227]) of the completeness of the positive fragment of intuitionistic logic for this semantics, computability logic semantics, and the abstract resource semantics.

…

This paper is concerned with a method for expanding (or reducing) a Petri net representation to the desired level of detail using step-by-step refinement of transitions and places (or abstraction of subnets to transitions). In particular, we present conditions under which a subnet can be substituted for a single transition while preserving properties such as liveness and boundedness. The present method is general enough to include previously reported methods as special cases. The refinement technique can be used as a top-down approach for synthesizing Petri net models of concurrent systems, while the abstraction technique can be used as a “divide-and-conquer” approach to the analysis of Petri nets.

…

We prove that the perfect matching for regular graphs (even if restricted to degree 3 and 2-connected 4-regular graphs) is AC0-equivalent with the general perfect matching problem for arbitrary graphs.

…

We answer the question: What are the Boolean functions that can be computed with a constant number of bit exchanges in a two-processor environment, no matter how the input bits are distributed among the processors? The characterization uses “programs over a monoid M,” a construction introduced by D. Barrington. We prove that if the symmetric communication complexity of a Boolean function f is at most c (i.e., the communication complexity is at most c for all possible partitions of the input into two parts) then there is a commutative monoid M of size at most exp(exp(exp(exp(exp c)))) such that a program over the monoid M can be built that computes f. We also give size and depth upper bounds for synchronous circuits that compute functions with bounded symmetric communication complexity, as well as width upper bounds for read-only once branching programs that compute these functions.
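The model of a program over a monoid can be illustrated with a toy example. The sketch below shows only the model, not the paper's construction: each instruction reads one input bit and emits a monoid element, and the program's value is the product of the emitted elements. The monoid here, ({0, 1}, max), is commutative and invented for illustration:

```python
# A minimal sketch of a "program over a monoid": instructions are pairs
# (input index, map from bit to monoid element); the program's output is
# determined by the monoid product of the emitted elements.

def run_program(program, monoid_op, identity, x):
    acc = identity
    for i, f in program:
        acc = monoid_op(acc, f(x[i]))
    return acc

# Program over ({0,1}, max) computing OR(x0, x1, x2):
# emit the queried bit itself at each step.
or_program = [(i, lambda b: b) for i in range(3)]

for x in [(0, 0, 0), (0, 1, 0), (1, 1, 1)]:
    print(x, run_program(or_program, max, 0, x))
```

Because the monoid is commutative, the product is independent of the order in which the bits are queried, which is what makes such programs compatible with arbitrary partitions of the input between two processors.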

…

This paper concerns a generalization of finite automata, the “tree acceptors,” which have as their inputs finite trees of symbols rather than the usual sequences of symbols. Ordinary finite automata prove to be special cases of tree acceptors, and many of the results of finite automata theory continue to hold in their appropriately generalized forms. The tree acceptors provide new characterizations of the classes of regular sets and of context-free languages. The theory of tree acceptors is applied to a decision problem of mathematical logic. It is shown here that the weak second-order theory of two successors is decidable, thus settling a problem of Büchi. This result is in turn applied to obtain positive solutions to the decision problems for various other theories, e.g., the weak second-order theories of order types built up from the finite types, ω, and η (the type of the rationals) by finitely many applications of the operations of order type addition, multiplication, and converse; and the weak second-order theory of locally free algebras with only unary operations.
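A bottom-up tree acceptor can be sketched in a few lines. The example below is illustrative and not from the paper: states are assigned to leaves by their symbol and propagated upward by a transition table on (symbol, tuple of child states); the invented acceptor recognizes exactly the Boolean trees over {and, or} that evaluate to true:

```python
# A minimal sketch of a bottom-up tree acceptor. Trees are tuples
# (symbol, child, child, ...); a symbol with no children is a leaf.

def run(tree, leaf_delta, delta):
    sym, children = tree[0], tree[1:]
    if not children:
        return leaf_delta[sym]
    return delta[(sym, tuple(run(c, leaf_delta, delta) for c in children))]

# States: "T" (evaluates to true) and "F". Accepting state: "T".
leaf_delta = {"0": "F", "1": "T"}
delta = {("and", (a, b)): "T" if a == b == "T" else "F"
         for a in "TF" for b in "TF"}
delta.update({("or", (a, b)): "T" if "T" in (a, b) else "F"
              for a in "TF" for b in "TF"})

t = ("or", ("0",), ("and", ("1",), ("1",)))
print(run(t, leaf_delta, delta))  # reaches the accepting state "T"
```

Restricting every symbol to one child recovers an ordinary finite automaton reading a string, which is the sense in which finite automata are a special case.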

…

Complexity classes of formal languages defined by time- and tape-bounded Turing acceptors are studied. Sufficient conditions for these classes to be AFLs are given. Further, it is shown that a time-bounded nondeterministic Turing acceptor need have only two storage tapes.

…

The RAM, an abstract model for a random access computer, is introduced. A unique feature of the model is that the execution time of an instruction is defined in terms of l(n), a function of the size of the numbers manipulated by the instruction. This model has a fixed program, but it is shown that the computing speeds of this model and a stored-program model can differ by no more than a constant factor. It is proved that a T(n) time-bounded Turing machine can be simulated by an O(T(n)·l(T(n))) time-bounded RAM, and that a T(n) time-bounded RAM can be simulated by a Turing machine whose execution time is bounded by (T(n))^3 if l(n) is constant, or (T(n))^2 if l(n) is logarithmic. The main result states that if T2(n) is a function such that there is a RAM that computes T2(n) in time O(T2(n)), and if T1(n) is any function such that , then there is a set S that can be recognized by some RAM in time O(T2(n)), but no RAM recognizes S in time O(T1(n)). This is a sharper diagonal result than has been obtained for Turing machines. The proofs of most of the above results are constructive and are aided by the introduction of an ALGOL-like programming language for RAMs.
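The effect of the cost function l(n) can be illustrated with a toy interpreter. This is not the paper's model, only a sketch under the logarithmic-cost assumption l(n) = bit length of n: each instruction is charged l() of the operands it touches, so arithmetic on large numbers costs more than arithmetic on small ones:

```python
# A toy RAM with logarithmic instruction cost: each step is charged
# l(n) = bit length of the operand values it manipulates (at least 1).

def l(n):
    return max(abs(n).bit_length(), 1)

def run(program, x):
    # registers r[0..]; instructions are (op, src_a, src_b, dst)
    r = {0: x}
    time = 0
    for op, a, b, dst in program:
        va, vb = r.get(a, 0), r.get(b, 0)
        r[dst] = va + vb if op == "add" else va * vb
        time += l(va) + l(vb)  # charge by operand size, not per step
    return r, time

# doubling x three times: the charge grows with the operand bit length
prog = [("add", 0, 0, 0)] * 3
regs, t = run(prog, 5)
print(regs[0], t)  # 40 24  (steps cost 3+3, 4+4, 5+5)
```

Under the unit-cost assumption (l constant) the same program would be charged a flat amount per instruction, which is why the two simulations in the abstract incur different polynomial overheads.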

…

Graph-based specification formalisms for access control (AC) policies combine the advantages of an intuitive visual framework with a rigorous semantical foundation that allows the detailed comparison of different policy models. A security policy framework specifies a set of (constructive) rules to build the system states and sets of positive and negative (declarative) constraints to specify wanted and unwanted substates. Several models for AC (e.g. role-based, lattice-based or an access control list) can be specified in this framework. The framework is used for an accurate analysis of the interaction between policies and of the behavior of their integration with respect to the problem of inconsistent policies. Using formal properties of graph transformations, it is possible to systematically detect inconsistencies between constraints, between rules and between a rule and a constraint and lay the foundation for their resolutions.
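The role of declarative negative constraints can be shown with a tiny invented example (the state, roles, and constraint below are illustrative, not from the paper): a system state is a set of labeled edges, and a negative constraint is a forbidden substate whose occurrence marks an inconsistency:

```python
# A minimal sketch of checking a negative (declarative) constraint
# against a graph-like system state, in the spirit of graph-based
# access control frameworks. Edges are (source, label, target) triples.

state = {("alice", "member_of", "admins"),
         ("alice", "member_of", "auditors"),
         ("bob", "member_of", "auditors")}

def violates_sod(state, user):
    # negative constraint (separation of duty): no user may hold
    # both the admins and the auditors role at once
    forbidden = {(user, "member_of", "admins"),
                 (user, "member_of", "auditors")}
    return forbidden <= state  # forbidden substate embeds in the state

print(violates_sod(state, "alice"))  # True
print(violates_sod(state, "bob"))    # False
```

A constructive rule that adds a `member_of` edge would conflict with this constraint exactly when its application creates such an embedding, which is the kind of rule-versus-constraint inconsistency the framework detects systematically.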

…

The ROBUST PRAM is a concurrent-read concurrent-write (CRCW) parallel random access machine in which any value might appear in a memory cell as a result of a write conflict. This paper addresses the question of whether a PRAM with such a weak form of write conflict resolution can compute functions faster than the concurrent-read exclusive-write (CREW) PRAM. We prove a lower bound on the time required by the ROBUST PRAM to compute Boolean functions in terms of the number of different values each memory cell of the PRAM can contain and the degree of the function when expressed as a polynomial over a finite field. In the case of 1-bit memory cells, our lower bound for the problem of computing the OR of n Boolean variables exactly matches Cook, Dwork, and Reischuk's upper bound on the CREW PRAM. We extend our result to obtain a lower bound, depending on the number of processors, for computing Boolean functions on the ROBUST PRAM, even with memory cells of unbounded size. A particular consequence is that the ROBUST PRAM with [formula] processors requires [formula] steps to compute OR. These results are obtained by defining a class of CRCW PRAMs, the fixed adversary PRAMs, all of which are at least as powerful as the ROBUST PRAM. We prove our lower bounds using carefully chosen PRAMs from this class. We also show the limitations of this technique by describing how, with n-bit memory cells, any fixed adversary PRAM can compute OR and, more generally, simulate a PRIORITY PRAM in constant time. Finally, we consider the effect of adding randomization to the ROBUST PRAM. For any algorithm that computes OR without error, its expected running time on its worst input is no better than the worst case deterministic time complexity of computing OR. However, allowing a small probability of error enables the ROBUST PRAM with single bit memory cells to compute OR in almost constant time.

…

On any general sequential model of computation with random-access input (e.g., a logarithmic cost RAM or a Turing machine with random-access input heads) the product time · space is: (1) not o(N^2), hence not [formula], for computing the discrete Fourier transform over finite prime fields, even when each entry in the input vector has length O(log N); here N denotes the number of entries and n denotes the input length; and (2) Ω(M^3), hence not [formula], for M by M matrix multiplication over the integers or over finite prime fields, even when each entry in the matrices has length O(log M). For this range of entry lengths these lower bounds on time · space coincide, up to a log^{o(1)} n factor, with the upper bounds achieved by the straightforward arithmetic algorithms. Time-space tradeoffs for the discrete Fourier transform and for matrix multiplication on the restricted model of a straight-line algorithm were previously obtained by Grigoryev (“Notes on Scientific Seminars 60,” pp. 38–48, Steklov Math. Inst., Leningrad, 1976 [Russian]), Ja'Ja' (“Proceedings 12th Annual ACM Sympos. Theory Comput., 1980,” pp. 339–349), and Tompa (University of Toronto, Dept. of Comput. Sci. Tech. Report). The model considered is general, meaning that it is not restricted to performing a sequence of arithmetic operations. Arbitrary bit processing and branching is allowed.

…

We study the problem of label ranking, a machine learning task that consists of inducing a mapping from instances to rankings over a finite number of labels. Our learning method, referred to as ranking by pairwise comparison (RPC), first induces pairwise order relations (preferences) from suitable training data, using a natural extension of so-called pairwise classification. A ranking is then derived from a set of such relations by means of a ranking procedure. In this paper, we first elaborate on a key advantage of such a decomposition, namely the fact that it allows the learner to adapt to different loss functions without re-training, by using different ranking procedures on the same predicted order relations. In this regard, we distinguish between two types of errors, called, respectively, ranking error and position error. Focusing on the position error, which has received less attention so far, we then propose a ranking procedure called ranking through iterated choice as well as an efficient pairwise implementation thereof. Apart from a theoretical justification of this procedure, we offer empirical evidence in favor of its superior performance as a risk minimizer for the position error.
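The ranking step that follows pairwise classification can be sketched concisely. The example below shows only the simplest such ranking procedure, weighted voting, with made-up preference values; it is not the paper's iterated-choice procedure:

```python
# A minimal sketch of the voting step in ranking by pairwise comparison
# (RPC): P[(a, b)] is a predicted degree of preference for label a over
# label b (values here are invented); labels are ranked by total votes.

def rpc_rank(labels, P):
    votes = {a: sum(P[(a, b)] for b in labels if b != a) for a in labels}
    return sorted(labels, key=votes.get, reverse=True)

labels = ["x", "y", "z"]
P = {("x", "y"): 0.9, ("y", "x"): 0.1,
     ("x", "z"): 0.8, ("z", "x"): 0.2,
     ("y", "z"): 0.6, ("z", "y"): 0.4}
print(rpc_rank(labels, P))  # ['x', 'y', 'z']
```

The decomposition's key advantage is visible here: the predicted relations P are computed once, and adapting to a different loss function only means replacing `rpc_rank` with a different ranking procedure over the same P.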

…
