Information and Computation

Published by Elsevier
Online ISSN: 1090-2651
Print ISSN: 0890-5401
Publications
Gives a new denotational semantics for a shared-variable parallel programming language and proves full abstraction. The semantics gives identical meanings to commands if and only if they induce the same partial correctness behavior in all program contexts. The meaning of a command is a set of transition traces, which record the ways in which a command may interact with and be affected by its environment. It is shown how to modify the semantics to incorporate new program constructs, to allow for different levels of granularity or atomicity, and to model fair infinite computation, in each case achieving full abstraction with respect to an appropriate notion of program behavior.
 
In distributed systems subject to random communication delays and component failures, atomic broadcast can be used to implement the abstraction of synchronous replicated storage, a distributed storage that displays the same contents at every correct processor as of any clock time. This paper presents a systematic derivation of a family of atomic broadcast protocols that are tolerant of increasingly general failure classes: omission failures, timing failures, and authentication-detectable Byzantine failures. The protocols work for arbitrary point-to-point network topologies, and can tolerate any number of link and process failures up to network partitioning. After proving their correctness, we also prove two lower bounds that show that the protocols provide in many cases the best possible termination times. (C) 1995 Academic Press, Inc.
 
A finitary axiomatization of the algebra of regular events, involving only equations and equational implications, that is sound for all interpretations over Kleene algebras is given. Axioms for Kleene algebra are presented, and some basic consequences are derived. Matrices over a Kleene algebra are considered. The notion of an automaton over an arbitrary Kleene algebra is defined and used to derive the classical results of the theory of finite automata as consequences of the axioms. The completeness of the axioms for the algebra of regular events is treated. Open problems are indicated.
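The equational flavor of these axioms can be made concrete. The following is a minimal sketch (illustrative names, and a length-truncation bound N chosen here purely for finiteness) that checks two classical regular-event identities in the language model of Kleene algebra:

```python
# Sketch: checking Kleene-algebra identities in the regular-language model.
# Languages are sets of strings truncated to length <= N, so star can be
# computed from finitely many powers; the bound N is an arbitrary choice.

N = 6

def trunc(L):
    return frozenset(w for w in L if len(w) <= N)

def plus(L1, L2):          # + is union
    return trunc(L1 | L2)

def dot(L1, L2):           # . is concatenation
    return trunc({u + v for u in L1 for v in L2})

def star(L):               # a* as the limit of finite powers (stabilizes)
    result, power = frozenset({""}), frozenset({""})
    while True:
        power = dot(power, L)
        new = plus(result, power)
        if new == result:
            return result
        result = new

a, b = frozenset({"a"}), frozenset({"b"})

# a* = 1 + a.a*   (the unfolding identity)
assert star(a) == plus(frozenset({""}), dot(a, star(a)))
# (a + b)* = (a*.b)*.a*   (denesting, a classical regular-event identity)
assert star(plus(a, b)) == dot(star(dot(star(a), b)), star(a))
```

Both identities hold exactly in this truncated model because every witness string of length at most N decomposes into pieces of length at most N.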
 
Much structural work on NP-complete sets has exploited SAT's d-self-reducibility. We exploit the additional fact that SAT is a d-cylinder to show that NP-complete sets are p-superterse unless P=NP. In fact, every set that is NP-hard under polynomial-time n<sup>o(1)</sup>-tt reductions is p-superterse unless P=NP. In particular, no p-selective set is NP-hard under polynomial-time n<sup>o(1)</sup>-tt reductions unless P=NP. In addition, no easily countable set is NP-hard under Turing reductions unless P=NP. Self-reducibility does not seem to suffice for our main result: in a relativized world, we construct a d-self-reducible set in NP − P that is polynomial-time 2-tt reducible to a p-selective set.
 
We investigate properties of functions that are good measures of the CRCW PRAM complexity of computing them. While the block sensitivity is known to be a good measure of the CREW PRAM complexity, no such measure is known for CRCW PRAMs. We show that the complexity of computing a function is related to its everywhere sensitivity, introduced by Vishkin and Wigderson (1985). Specifically, we show that the time required to compute a function f: D<sup>n</sup> → R of everywhere sensitivity es(f) with P ≥ n processors and unbounded memory is Ω(log[log es(f)/(log 4P|D| − log es(f))]). This improves previous results of Azar (1992), and of Vishkin and Wigderson. We use this lower bound to derive new lower bounds for some approximate problems. These problems can often be solved faster than their exact counterparts, and for many applications it is sufficient to solve the approximate problem. We show that approximate selection requires time Ω(log[log n/log k]) with kn processors, and that approximate counting with accuracy λ ≥ 2 requires time Ω(log[log n/(log k + log λ)]) with kn processors. In particular, for constant accuracy, no lower bounds were known for these problems.
 
The generalized firing squad synchronization problem (GFSSP) is the well-known firing squad synchronization problem extended to arbitrarily connected networks of finite automata. When the transmission delays associated with the links of a network are allowed to be arbitrary nonnegative integers, the problem is called GFSSP-NUD (GFSSP with nonuniform delays). A solution of GFSSP-NUD is given for the first time. The solution is independent of the structure of the network and the actual delays of the links. The firing time of the solution is bounded by O(Δ<sup>3</sup> + τ<sub>max</sub>), where τ<sub>max</sub> is the maximum transmission delay of any single link and Δ is the maximum transmission delay between the general and any other node of a given network. Extensions of GFSSP and GFSSP-NUD to networks with more than one general are presented.
 
We show that for any graph G, k nontrivial automorphisms of G, if that many exist, can be computed in time |G|<sup>O(log k)</sup> with nonadaptive queries to GA, the decision problem for Graph Automorphism. As a consequence, we show that some problems related to GA and GI are polynomial-time truth-table equivalent to GA.
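For contrast with the oracle-based bound above, here is a hedged illustration of the object being computed: the nontrivial automorphisms of a small graph, found by brute-force enumeration (the paper's point is to find k of them with |G|<sup>O(log k)</sup> nonadaptive GA queries, not by enumeration, which is only feasible for toy inputs):

```python
# Illustration only: brute-force enumeration of nontrivial automorphisms
# of a small undirected graph. Feasible only for tiny vertex counts.
from itertools import permutations

def nontrivial_automorphisms(n, edges):
    """All non-identity permutations of {0..n-1} that preserve the edge set."""
    eset = {frozenset(e) for e in edges}
    autos = []
    for p in permutations(range(n)):
        if p != tuple(range(n)) and {frozenset((p[u], p[v])) for u, v in edges} == eset:
            autos.append(p)
    return autos

# The 4-cycle 0-1-2-3-0 has the dihedral automorphism group of order 8,
# hence 7 nontrivial automorphisms.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert len(nontrivial_automorphisms(4, c4)) == 7
```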
 
The authors consider whether it is possible to devise a complete normalization algorithm that minimizes (rather than eliminates) the wasteful reductions for the entire class of regular systems. A solution is proposed to this problem using the concept of a necessary set of redexes. In such a set, at least one of the redexes must be reduced to normalize a term. An algorithm is devised to compute a necessary set for any term not in normal form, and it is shown that a strategy that repeatedly reduces all redexes in such a set is complete for regular programs. It is also shown that the algorithm is optimal among all normalization algorithms that are based on left-hand sides alone. This means that the algorithm is lazy (like Huet-Levy's) on strongly sequential parts of a program, relaxes laziness minimally to handle the other parts, and thus does not sacrifice generality for the sake of efficiency.
 
A timing-based variant of the mutual-exclusion problem is considered. In this variant, only an upper bound on the time it takes to release the resource is known, and no explicit signal is sent when the resource is released; furthermore, the only mechanism to measure real time is an inaccurate clock, whose tick intervals lie between two known constants. A new technique involving shifting and shrinking executions is combined with a careful analysis of the best allocation policy to prove a corresponding lower bound when control is distributed among processes connected by communication lines with an upper bound on message delivery time. These combinatorial results shed some light on modeling and verification issues related to real-time systems.
 
The use of the lambda calculus in richer settings, possibly involving parallelism, is examined in terms of its effect on the equivalence between lambda terms, focusing on S. Abramsky's (Ph.D. thesis, Univ. of London, 1987) lazy lambda calculus. First, the lambda calculus is studied within a process calculus by examining the equivalence induced by R. Milner's (1992) encoding into the π-calculus. Exact operational and denotational characterizations for this equivalence are given. Second, Abramsky's applicative bisimulation is examined when the lambda calculus is augmented with (well-formed) operators, i.e., symbols equipped with reduction rules describing their behavior. Then, maximal discrimination is obtained when all operators are considered; it is shown that this discrimination coincides with the one given by the above equivalence and that the adoption of certain nondeterministic operators is sufficient and necessary to induce it.
 
This research brings together, in a methodical way, several approaches to giving a compositional theory of Petri nets using category theory and to the use of linear logic in specifying and reasoning about Petri nets. The authors construct categories of nets based on V.C.V. de Paiva's dialectica category models (1989) of linear logic, in which they are able to exploit the structure of de Paiva's models to give constructions on categories of nets. Using a category of safe nets as an example, it is shown how this approach yields both existing and novel constructions on nets, and their computational interpretation is discussed. The authors also indicate how more general categories of nets can be expressed in this framework.
 
An extension of Milner's CCS with a priority choice operator called prisum is investigated. This operator is very similar to the PRIALT construct of Occam. The binary prisum operator only allows execution of its second component in the case in which the environment is not ready to allow the first component to proceed. This dependency on the set of actions the environment is ready to perform goes beyond that encountered in traditional CCS. Its expression leads to a novel operational semantics in which transitions carry read-sets (of the environment) as well as the normal action symbols from CCS. A notion of strong bisimulation is defined on agents with priority by means of this semantics. It is a congruence and satisfies new equational laws (including a new expansion law) which are shown to be complete for finite agents with prisum. The laws are conservative over agents of traditional CCS.
 
A model-checking method for linear-time temporal logic that avoids the state explosion due to the modeling of concurrency by interleaving is presented. The method relies on the concept of the Mazurkiewicz trace as a semantic basis and uses automata-theoretic techniques, including automata that operate on words of ordinality higher than ω. In particular, automata operating on words of length ω × n, n ∈ ω, are defined. These automata are studied, and an efficient algorithm to check whether such automata are nonempty is given. It is shown that when it is viewed as an ω × n automaton, the trace automaton can be substituted for the production automaton in linear-time model checking. The efficiency of the method of P. Godefroid (Proc. Workshop on Computer Aided Verification, 1990) is thus fully available for model checking.
 
R. Beigel et al. (1991) showed that PP is closed under intersection and a variety of special cases of truth-table closure. In the present work, the authors extend the techniques of Beigel et al. to show that PP is closed under general polynomial-time truth-table reductions.
 
A unifying framework for the study of real-time logics is developed. In analogy to the untimed case, the underlying classical theory of timed state sequences is identified, it is shown to be nonelementarily decidable, and its complexity and expressiveness are used as a point of reference. Two orthogonal extensions of PTL (timed propositional temporal logic and metric temporal logic) that inherit its appeal are defined: they capture elementary, yet expressively complete, fragments of the theory of timed state sequences, and thus are excellent candidates for practical real-time specification languages.
 
Specific maximization problems, such as the maximal independent set problem and the minimal unsatisfiability problem, are studied in a general framework. The goal is to show what factors make maximization problems hard or easy to solve and how these factors influence the complexity of solving the problems. Maximization problems are divided into several classes, and both upper and lower bounds for them are proved. An important consequence of the results is that finding an X-minimal satisfying truth assignment to a given CNF Boolean formula is complete for NPMV/OptP[O(log n)], solving an open question of C.H. Papadimitriou (1991).
 
For a monoid G, the iterated multiplication problem is the computation of the product of n elements from G. By refining known completeness arguments, we show that as G varies over a natural series of important groups and monoids, the iterated multiplication problems are complete for most natural, low-level complexity classes. The completeness is with respect to "first-order projections", low-level reductions that do not obscure the algebraic nature of these problems. (C) 1995 Academic Press, Inc.
 
A proof that a simple compiler correctly uses the static properties in its symbol table is presented. This is done by regarding the target code produced by the compiler as a syntactic variant of a λ-term. In general, this λ-term C may not be equal to the semantics S of the source program: they need to be equal only when the information in the symbol table is valid. Rules of inference for conditional λ-judgements are presented, and their soundness is proven. These rules are then used to prove the correctness of a simple compiler that relies on a symbol table. The form of the proof suggests that such proofs may be largely mechanizable.
 
Managing a connection between two hosts in a network is an important service to provide in order to make the network useful for many applications. The two main subproblems are the management of serial incarnations of a connection and the transfer of messages within an incarnation. This paper investigates whether it is necessary for connection management protocols to retain state information across node crashes and between incarnations. The following results were obtained: When information is not retained across node crashes, incarnation management is not possible at all. When information is not retained between incarnations, incarnation management is possible if the network is FIFO and not possible if the network is non-FIFO. When information is not retained across node crashes, message transfer can be accomplished in networks that lose packets if the network is FIFO and the protocol is allowed a variable-length grace period after a crash during which it need not deliver messages. However, message transfer cannot be accomplished if the network is non-FIFO.
 
The problem of minimizing the number of late tasks in the imprecise computation model is considered. Each task consists of two subtasks, mandatory and optional. A task is said to be on-time if its mandatory part is completed by its deadline; otherwise, it is said to be late. An on-time task incurs an error if its optional part is not computed by the deadline, and the error is simply the execution time of the unfinished portion. The authors consider the problem of finding a preemptive schedule for a set of tasks on p ≥ 1 identical processors, such that the number of on-time tasks is maximized (or, equivalently, the number of late tasks is minimized) and the total error of the on-time tasks is no more than a given threshold K. Such a schedule is called an optimal schedule. It is shown that the problem of finding an optimal schedule is NP-hard for each fixed p ≥ 1, even if all tasks have the same ready time and the same deadline.
 
An n-variable Boolean formula can have anywhere from 0 to 2<sup>n</sup> satisfying assignments. The question of whether a polynomial-time machine, given such a formula, can reduce this exponential number of possibilities to a small number of possibilities is explored. Such a machine is called an enumerator, and it is proved that if there is a good polynomial-time enumerator for #P (i.e., one where the small set has at most O(|f|<sup>1−ε</sup>) numbers), then P = NP = P<sup>#P</sup> and probabilistic polynomial time equals polynomial time. Furthermore, #P and enumerating #P are polynomial-time Turing equivalent.
 
Decidability and expressiveness issues for two first-order logics of probability are considered. In one the probability is on possible worlds, whereas in the other it is on the domain. It turns out that in both cases it takes very little to make reasoning about probability highly undecidable. It is shown that, when the probability is on the domain, if the language contains only unary predicates, then the validity problem is decidable. However, if the language contains even one binary predicate, the validity problem is Π<sub>1</sub><sup>2</sup>-hard, i.e., as hard as elementary analysis with free predicate and function symbols. With equality in the language, even with no other symbol, the validity problem is at least as hard as that for elementary analysis, Π<sub>∞</sub><sup>1</sup>. Thus, the logic cannot be axiomatized in either case. When the probability is on the set of possible worlds, the validity problem is Π<sub>1</sub><sup>2</sup>-complete with as little as one unary predicate in the language, even without equality. With equality, Π<sub>∞</sub><sup>1</sup>-hardness is obtained with only a constant symbol. In many applications it suffices to restrict attention to domains of a bounded size; it is shown that the logics are decidable in this case.
 
The type and effect discipline, a framework for reconstructing the principal type and the minimal effect of expressions in implicitly typed polymorphic functional languages that support imperative constructs, is introduced. The type and effect discipline outperforms other polymorphic type systems. Just as types abstract collections of concrete values, effects denote imperative operations on regions. Regions abstract sets of possibly aliased memory locations. Effects are used to control type generalization in the presence of imperative constructs, while regions delimit observable side effects. The observable effects of an expression range over the regions that are free in its type environment and its type; effects related to local data structures can be discarded during type reconstruction. The type of an expression can be generalized with respect to the variables that are not free in the type environment or in the observable effect.
 
A procedure is given for extracting from a GSOS specification of an arbitrary process algebra a complete axiom system for bisimulation equivalence (equational, except for possibly one conditional equation). The methods apply to almost all SOS specifications for process algebras that have appeared in the literature, and the axiomatizations compare reasonably well with most axioms that have been presented. In particular, they discover the left-merge characterization of parallel composition. It is noted that completeness results for equational axiomatizations are tedious and have become rather standard in many cases. A generalization of extant completeness results shows that in principle this burden can be completely removed if one gives a GSOS description of a process algebra.
 
A new formal embodiment of J.-Y. Girard's (1989) geometry of interaction program is given. The geometry of interaction interpretation considered is defined, and the computational interpretation is sketched in terms of dataflow nets. Some examples that illustrate the key ideas underlying the interpretation are given. The results, which include the semantic analogue of cut-elimination, stated in terms of a finite convergence property, are outlined.
 
The intuitionistic notion of context is refined by using a fragment of J.-Y. Girard's (Theor. Comput. Sci., vol. 50, p. 1-102, 1987) linear logic that includes additive and multiplicative conjunction, linear implication, universal quantification, the "of course" exponential, and the constants for the empty context and for the erasing contexts. It is shown that the logic has a goal-directed interpretation. It is also shown that the nondeterminism that results from the need to split contexts in order to prove a multiplicative conjunction can be handled by viewing proof search as a process that takes a context, consumes part of it, and returns the rest (to be consumed elsewhere). Examples taken from theorem proving, natural language parsing, and database programming are presented; each example requires a linear, rather than intuitionistic, notion of context to be modeled adequately.
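The input/output treatment of contexts described above can be sketched in a few lines. This is a hedged toy (the connective encoding and function names are ad hoc, covering only atoms and tensor): proving a goal consumes part of the linear context and returns the leftover, so a tensor goal needs no a priori context split:

```python
# Sketch of the input/output model of linear context management: prove()
# takes a context, consumes what the goal needs, and returns the rest.
# Goals are either atoms (strings) or ("tensor", A, B); encoding is ad hoc.

def prove(goal, ctx):
    """Return the leftover context if goal is provable from ctx, else None."""
    if isinstance(goal, str):              # atomic goal: consume one copy
        if goal in ctx:
            rest = list(ctx)
            rest.remove(goal)
            return rest
        return None
    op, left, right = goal
    if op == "tensor":                     # A (x) B: thread leftovers through
        rest = prove(left, ctx)
        return prove(right, rest) if rest is not None else None
    raise ValueError("unknown connective")

def provable(goal, ctx):
    """Linear provability: the whole context must be consumed."""
    return prove(goal, ctx) == []

assert provable(("tensor", "a", "b"), ["b", "a"])   # a (x) b from {a, b}
assert not provable(("tensor", "a", "a"), ["a"])    # linearity: no duplication
assert not provable("a", ["a", "b"])                # leftover b blocks the proof
```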
 
Petri nets are known to be useful for modeling concurrent systems. Once modeled by a Petri net, the behavior of a concurrent system can be characterized by the set of all executable transition sequences, which in turn can be viewed as a language over an alphabet of symbols corresponding to the transitions of the underlying Petri net. In this paper, we study the language issue of Petri nets from a computational complexity viewpoint. We analyze the complexity of the regularity problem (i.e., the problem of determining whether a given Petri net defines an irregular language or not) for a variety of classes of Petri nets, including conflict-free, trap-circuit, normal, sinkless, extended trap-circuit, BPP, and general Petri nets. (Extended trap-circuit Petri nets are trap-circuit Petri nets augmented with a specific type of circuits.) As it turns out, the complexities for these Petri net classes range from NL (nondeterministic logspace), PTIME (polynomial time), and NP (nondeterministic polynomial time), to EXPSPACE (exponential space). In the process of deriving the complexity results, we develop a decomposition approach which, we feel, is interesting in its own right, and might have other applications to the analysis of Petri nets as well. As a by-product, an NP upper bound of the reachability problem for the class of extended trap-circuit Petri nets (which properly contains that of trap-circuit (and hence, conflict-free) and BPP-nets, and is incomparable with that of normal and sinkless Petri nets) is derived.
 
The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of algorithmic learning? Returning to wrong conjectures complements the paradigm of U-shaped learning, when a learner returns to old correct conjectures. We explore our problem for classical models of learning in the limit from positive data: explanatory learning (when a learner stabilizes in the limit on a correct grammar) and behaviourally correct learning (when a learner stabilizes in the limit on a sequence of correct grammars representing the target concept). In both cases we show that returning to wrong conjectures is necessary to achieve full learning power. In contrast, one can modify learners (without losing learning power) such that they never show inverted U-shaped learning behaviour, that is, never return to an old wrong conjecture with a correct conjecture in-between. Furthermore, one can also modify a learner (without losing learning power) such that it does not return to old “overinclusive” conjectures containing non-elements of the target language. We also consider our problem in the context of vacillatory learning (when a learner stabilizes on a finite number of correct grammars) and show that each of the following four constraints is restrictive (that is, reduces learning power): the learner does not return to old wrong conjectures; the learner is not inverted U-shaped; the learner does not return to old overinclusive conjectures; the learner does not return to old overgeneralizing conjectures. We also show that learners that are consistent with the input seen so far can be made decisive: on any text, they do not return to any old conjectures, wrong or right.
 
An important problem in genome rearrangements is sorting permutations by transpositions. The complexity of the problem is still open, and two rather complicated 1.5-approximation algorithms for sorting linear permutations are known (Bafna and Pevzner, 1998; Christie, 1999). The fastest known algorithm is the quadratic algorithm of Bafna and Pevzner. In this paper, we observe that the problem of sorting circular permutations by transpositions is equivalent to the problem of sorting linear permutations by transpositions. Hence, all algorithms for sorting linear permutations by transpositions can be used to sort circular permutations. Our main result is a new 1.5-approximation algorithm, which is considerably simpler than the previous ones, and whose analysis is significantly less involved.
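The basic operation can be illustrated directly. A transposition exchanges two adjacent blocks of a permutation; the sketch below (illustrative only, not the paper's algorithm) computes exact transposition distances by breadth-first search, which is feasible only for tiny permutations, whereas the paper's algorithms approximate the distance within a factor of 1.5:

```python
# Sketch: a transposition exchanges two adjacent blocks p[i:j] and p[j:k].
# Exact distance by BFS over the permutation graph (tiny n only).
from itertools import combinations
from collections import deque

def neighbors(p):
    """All permutations reachable from p by a single transposition."""
    n = len(p)
    for i, j, k in combinations(range(n + 1), 3):
        yield p[:i] + p[j:k] + p[i:j] + p[k:]   # swap blocks p[i:j], p[j:k]

def transposition_distance(p):
    target = tuple(sorted(p))
    seen, frontier = {p}, deque([(p, 0)])
    while frontier:
        q, d = frontier.popleft()
        if q == target:
            return d
        for r in neighbors(q):
            if r not in seen:
                seen.add(r)
                frontier.append((r, d + 1))

assert transposition_distance((1, 2, 3)) == 0
assert transposition_distance((3, 2, 1)) == 2   # e.g. 321 -> 213 -> 123
```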
 
In Information and Computation 204 (2006), 1756–1781, the structure of Eilenberg-Moore algebras for the Giry monad for subprobabilities on Polish spaces is investigated in some detail by the present author. This note corrects a gap in one of the proofs. Additionally, it adapts the general results for the discrete Giry monad.
 
Many different methods have been devised for automatically verifying finite state systems by examining state-graph models of system behavior. These methods all depend on decision procedures that explicitly represent the state space using a list or a table that grows in proportion to the number of states. We describe a general method that represents the state space symbolically instead of explicitly. The generality of our method comes from using a dialect of the Mu-Calculus as the primary specification language. We describe a model checking algorithm for Mu-Calculus formulas that uses Bryant's Binary Decision Diagrams (Bryant, R. E., 1986, IEEE Trans. Comput. C-35) to represent relations and formulas. We then show how our new Mu-Calculus model checking algorithm can be used to derive efficient decision procedures for CTL model checking, satisfiability of linear-time temporal logic formulas, strong and weak observational equivalence of finite transition systems, and language containment for finite ω-automata. The fixed point computations for each decision procedure are sometimes complex, but can be concisely expressed in the Mu-Calculus. We illustrate the practicality of our approach to symbolic model checking by discussing how it can be used to verify a simple synchronous pipeline circuit.
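The fixed-point computations at the heart of this approach have a simple shape. The sketch below (a toy with explicit Python sets standing in for the BDD-represented sets of the paper) computes the CTL property EF p as the least fixpoint μZ. p ∨ EX Z:

```python
# Sketch of a mu-calculus fixpoint: EF p = muZ. p \/ EX Z, iterated from the
# empty set. Sets of states are explicit here; in the paper they are BDDs.

def pre_exists(trans, Z):
    """EX Z: states with at least one successor in Z."""
    return {s for (s, t) in trans if t in Z}

def ef(trans, p):
    """Least fixpoint of Z -> p | EX Z (Knaster-Tarski iteration)."""
    Z = set()
    while True:
        new = p | pre_exists(trans, Z)
        if new == Z:
            return Z
        Z = new

# Toy transition system: 0 -> 1 -> 2, and 3 -> 3 (state 3 cannot reach 2).
trans = {(0, 1), (1, 2), (3, 3)}
assert ef(trans, {2}) == {0, 1, 2}
```

The same iteration scheme, with BDD operations replacing the set operations, is what makes the symbolic procedures in the paper concise.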
 
The problem of identifying an unknown regular set from examples of its members and nonmembers is addressed. It is assumed that the regular set is presented by a minimally adequate Teacher, which can answer membership queries about the set and can also test a conjecture and indicate whether it is equal to the unknown set and provide a counterexample if not. (A counterexample is a string in the symmetric difference of the correct set and the conjectured set.) A learning algorithm L∗ is described that correctly learns any regular set from any minimally adequate Teacher in time polynomial in the number of states of the minimum DFA for the set and the maximum length of any counterexample provided by the Teacher. It is shown that in a stochastic setting the ability of the Teacher to test conjectures may be replaced by a random sampling oracle, EX(). A polynomial-time learning algorithm is shown for a particular problem of context-free language identification.
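The Teacher's equivalence test can be made concrete: a counterexample is a string in the symmetric difference of the target and the conjecture, and a shortest one can be found by breadth-first search over the product automaton. The DFA encoding below is an ad hoc choice (initial state, transition dict, accepting set), not the paper's notation:

```python
# Sketch of the Teacher's equivalence query: return a shortest string accepted
# by exactly one of two complete DFAs, or None if they are equivalent.
from collections import deque

def counterexample(dfa1, dfa2, alphabet):
    (q1, d1, f1), (q2, d2, f2) = dfa1, dfa2
    seen = {(q1, q2)}
    frontier = deque([(q1, q2, "")])
    while frontier:
        s1, s2, w = frontier.popleft()
        if (s1 in f1) != (s2 in f2):
            return w                     # w is in the symmetric difference
        for a in alphabet:
            nxt = (d1[s1, a], d2[s2, a])
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt[0], nxt[1], w + a))
    return None                          # equivalent: the conjecture is correct

# Target: strings over {a} with an even number of a's.
target = (0, {(0, "a"): 1, (1, "a"): 0}, {0})
# Conjecture: all strings (a single accepting state).
conjecture = (0, {(0, "a"): 0}, {0})
assert counterexample(target, conjecture, "a") == "a"
assert counterexample(target, target, "a") is None
```

In L∗ such a counterexample drives the refinement of the learner's observation table.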
 
In this paper, we develop a unified approach for deriving complexity results for a number of Petri net problems. We first define a class of formulas for paths in Petri nets. We then show that the satisfiability problem for our formulas is EXPSPACE complete. Since a wide range of Petri net problems can be reduced to the satisfiability problem in a straightforward manner, our approach offers an umbrella under which many Petri net problems can be shown to be solvable in EXPSPACE.
 
The quantified constraint satisfaction problem (QCSP) is a framework for modelling PSPACE computational problems. The general intractability of the QCSP has motivated the pursuit of restricted cases that avoid its maximal complexity. In this paper, we introduce and study a new model for investigating QCSP complexity in which the types of constraints given by the existentially quantified variables are restricted. Our primary technical contribution is the development and application of a general technology for proving positive results on parameterizations of the model, namely inclusion in the complexity class coNP.
 
This paper proposes a fast algorithm for computing multiplicative inverses in GF(2<sup>m</sup>) using normal bases. Normal bases have the following useful property: when an element x in GF(2<sup>m</sup>) is represented in a normal basis, the 2<sup>k</sup>-th power of x can be computed by k cyclic shifts of its vector representation. C. C. Wang et al. proposed an algorithm for computing multiplicative inverses using normal bases, which requires (m − 2) multiplications in GF(2<sup>m</sup>) and (m − 1) cyclic shifts. The fast algorithm proposed in this paper also uses normal bases, and computes multiplicative inverses by iterating multiplications in GF(2<sup>m</sup>). It requires at most 2⌊log<sub>2</sub>(m − 1)⌋ multiplications in GF(2<sup>m</sup>) and (m − 1) cyclic shifts, which is much less than Wang's method requires. The same idea is applicable to the general power operation in GF(2<sup>m</sup>) and to the computation of multiplicative inverses in GF(q<sup>m</sup>) (q = 2<sup>n</sup>).
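The identity underlying both Wang's method and the fast algorithm is that inversion is the exponentiation x<sup>−1</sup> = x<sup>2<sup>m</sup>−2</sup>. The sketch below demonstrates that identity only: it uses a polynomial basis (m = 4 and the reduction polynomial x<sup>4</sup> + x + 1 are illustrative choices) and plain square-and-multiply, whereas the paper's savings come from the normal-basis addition chain in which each squaring is a free cyclic shift:

```python
# Sketch: inversion in GF(2^m) as a^(2^m - 2), Fermat-style. Polynomial basis
# and square-and-multiply are used for simplicity; the paper uses normal bases.
M = 4
POLY = 0b10011          # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication modulo the reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def gf_inv(a):
    """a^(2^M - 2) by square-and-multiply."""
    result, e = 1, (1 << M) - 2
    while e:
        if e & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        e >>= 1
    return result

# Every nonzero element times its inverse gives 1.
assert all(gf_mul(a, gf_inv(a)) == 1 for a in range(1, 1 << M))
```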
 
Almost perfect nonlinear (APN) mappings are of interest for applications in cryptography. We prove, for odd n and the exponent d = 2<sup>2r</sup> + 2<sup>r</sup> − 1 where 4r + 1 ≡ 0 mod n, that the power function x<sup>d</sup> on GF(2<sup>n</sup>) is APN. The given proof is based on a new class of permutation polynomials which might be of independent interest. Our result supports a conjecture of Niho stating that the power function x<sup>d</sup> is even maximally nonlinear or, in other terms, that the crosscorrelation function between a binary maximum-length linear shift register sequence of degree n and a decimation of that sequence by d takes on precisely the three values −1 and −1 ± 2<sup>(n+1)/2</sup>.
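The APN property can be checked exhaustively on a small instance. For n = 5 and r = 1 we have 4r + 1 = 5 ≡ 0 (mod n) and d = 2<sup>2r</sup> + 2<sup>r</sup> − 1 = 5, so x<sup>5</sup> should have differential uniformity 2 on GF(2<sup>5</sup>), i.e. every equation f(x + a) + f(x) = b with a ≠ 0 has at most two solutions. The reduction polynomial x<sup>5</sup> + x<sup>2</sup> + 1 is an illustrative choice:

```python
# Sanity check: x^5 is APN on GF(2^5) (here d = 2^(2r) + 2^r - 1 with r = 1).
# APN means the differential uniformity is exactly 2.
N = 5
POLY = 0b100101         # x^5 + x^2 + 1, irreducible over GF(2)

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= POLY
    return r

def f(x):               # x^5 = (x^2)^2 * x
    x2 = mul(x, x)
    return mul(mul(x2, x2), x)

def differential_uniformity():
    best = 0
    for a in range(1, 1 << N):
        counts = {}
        for x in range(1 << N):
            b = f(x ^ a) ^ f(x)     # addition in GF(2^n) is XOR
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best

assert differential_uniformity() == 2   # uniformity 2 is exactly APN
```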
 
We study the problem of scheduling transmissions on the downlink of IEEE 802.16/WiMAX systems that use the OFDMA technology. These transmissions are scheduled using a matrix whose dimensions are frequency and time, where every matrix cell is a time slot on some carrier channel. The IEEE 802.16 standard mandates that: (i) every transmission occupies a rectangular set of cells, and (ii) transmissions must be scheduled according to a given order. We show that if the number of cells required by a transmission is not limited (up to the matrix size), the problem of maximizing matrix utilization is very hard to approximate. On the positive side we show that if the number of cells of every transmission is limited to some constant fraction of the matrix area, the problem can be approximated to within a constant factor. As far as we know this is the first paper that considers this sequential rectangle placement problem.
 
As evidence of the power of finite unary substitutions, we show that the inclusion problem for finite substitutions on the language L = ab*c is undecidable, i.e., it is undecidable whether, for two finite substitutions ϕ and ψ, the relation ϕ(w) ⊆ ψ(w) holds for all w in L.
 
Cryptographic protocols are small programs which involve a high level of concurrency and which are difficult to analyze by hand. The most successful methods to verify such protocols are based on rewriting techniques and automated deduction in order to implement or mimic the process calculus describing the execution of a protocol. We are interested in the intruder deduction problem, that is, vulnerability to passive attacks in the presence of equational theories which model the protocol specification and properties of the cryptographic operators. In the present paper, we consider the case where the encryption distributes over the operator of an Abelian group or over an exclusive-or operator. We prove decidability of the intruder deduction problem in both cases. We obtain a PTIME decision procedure in a restricted case, the so-called binary case. These decision procedures are based on a careful analysis of the proof system modeling the deductive power of the intruder, taking into account the algebraic properties of the equational theories under consideration. The analysis of the deduction rules interacting with the equational theory relies on the manipulation of Z-modules in the general case, and on results from prefix rewriting in the binary case.
 
It is well known that simulation equivalence is an appropriate abstraction to be used in model checking because it strongly preserves ACTL* and provides a better space reduction than bisimulation equivalence. However, computing simulation equivalence is harder than computing bisimulation equivalence. A number of algorithms for computing simulation equivalence exist. Let Σ denote the state space, → the transition relation, and P<sub>sim</sub> the partition of Σ induced by simulation equivalence. The algorithms by Henzinger, Henzinger, and Kopke and by Bloom and Paige run in O(|Σ||→|) time and, as far as time complexity is concerned, they are the best available algorithms. However, these algorithms have the drawback of a quadratic space complexity that is bounded from below by Ω(|Σ|<sup>2</sup>). The algorithm by Gentilini, Piazza, and Policriti appears to be the best algorithm when both time and space complexities are taken into account. Gentilini et al.'s algorithm runs in O(|P<sub>sim</sub>|<sup>2</sup>|→|) time while the space complexity is in O(|P<sub>sim</sub>|<sup>2</sup> + |Σ| log |P<sub>sim</sub>|). We present here a new efficient simulation equivalence algorithm that is obtained as a modification of Henzinger et al.'s algorithm and whose correctness is based on some techniques used in recent applications of abstract interpretation to model checking. Our algorithm runs in O(|P<sub>sim</sub>||→|) time and O(|P<sub>sim</sub>||Σ|) space. Thus, while retaining a space complexity which is lower than quadratic, our algorithm improves the best known time bound. An experimental evaluation showed very good comparative results with respect to Henzinger, Henzinger, and Kopke's algorithm.
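For orientation, the object being computed can be sketched with the naive, quadratic-space fixpoint that the partition-based algorithms above improve on: start from all pairs and delete (s, t) whenever t cannot match some move of s. The example LTS (names are ad hoc) shows two processes that are simulation equivalent but not bisimilar:

```python
# Naive fixpoint for the simulation preorder: the textbook quadratic-space
# method, shown only to illustrate what the efficient algorithms compute.

def simulation_preorder(states, trans):
    """trans: set of (state, action, state). Returns {(s, t) : t simulates s}."""
    succ = {}
    for s, a, t in trans:
        succ.setdefault((s, a), set()).add(t)
    R = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            for (s0, a, s1) in trans:
                if s0 != s:
                    continue
                # t must have an a-successor t1 still related to s1
                if not any((s1, t1) in R for t1 in succ.get((t, a), ())):
                    R.discard((s, t))
                    changed = True
                    break
    return R

# a.b versus a.b + a.0: simulation equivalent, though not bisimilar.
states = {"p0", "p1", "p2", "q0", "q1", "q2", "q3"}
trans = {("p0", "a", "p1"), ("p1", "b", "p2"),
         ("q0", "a", "q1"), ("q0", "a", "q2"), ("q1", "b", "q3")}
R = simulation_preorder(states, trans)
assert ("p0", "q0") in R and ("q0", "p0") in R
```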
 
Using a genealogically ordered infinite regular language, we know how to represent an interval of real numbers. Numbers having an ultimately periodic representation play a special role in classical numeration systems. The aim of this paper is to characterize the numbers having an ultimately periodic representation in generalized systems built on a regular language. The syntactic properties of these words are also investigated. Finally, we show the equivalence of the classical θ-expansions with our generalized representations in a special case related to a Pisot number θ.
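As a reminder of the classical notion the paper generalizes, the greedy θ-expansion of a real x in [0, 1) is obtained by repeatedly multiplying by θ and splitting off the integer part. A minimal sketch (the function name is ours, and this is the standard β-expansion recipe, not the paper's generalized construction):

```python
import math

def theta_expansion(x, theta, n):
    """First n digits of the greedy theta-expansion of x in [0, 1):
    repeatedly set d_i = floor(theta * x) and x = theta * x - d_i."""
    digits = []
    for _ in range(n):
        x *= theta
        d = math.floor(x)
        digits.append(d)
        x -= d
    return digits

# With theta the golden ratio, every digit lies in {0, 1}:
theta = (1 + 5 ** 0.5) / 2
digits = theta_expansion(0.5, theta, 20)
```

For a Pisot number θ, such as the golden ratio, it is in exactly this setting that the paper relates ultimately periodic θ-expansions to ultimately periodic representations in systems built on a regular language.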
 
The notion of uniform closure operator is introduced, and it is shown how this concept surfaces in two different areas of application of abstract interpretation, notably in semantics design for logic programs and in the theory of abstract domain refinements. In logic programming, uniform closures permit generalization, from an order-theoretic perspective, of the standard hierarchy of declarative semantics. In particular, we show how to reconstruct the model-theoretic characterization of the well-known s-semantics using pure order-theoretic concepts only. As far as the systematic refinement operators on abstract domains are concerned, we show that uniform closures capture precisely the property of a refinement of being invertible, namely of admitting a related operator that simplifies as much as possible a given abstract domain of input for that refinement. Exploiting the same argument used to reconstruct the s-semantics of logic programming, we obtain a precise relationship between refinements and their inverse operators: we demonstrate that they form an adjunction with respect to a conveniently modified complete order among abstract domains.
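For readers less familiar with the underlying order theory: a closure operator rho on a complete lattice is extensive (X <= rho(X)), monotone, and idempotent (rho(rho(X)) = rho(X)); uniform closures are a restricted subclass of these. A minimal concrete example of an ordinary closure operator, on the lattice of sets of sets ordered by inclusion (the function name is ours, and this example does not by itself exhibit uniformity):

```python
# A concrete closure operator: close a family of sets under binary union.
# It is extensive, monotone, and idempotent on the inclusion lattice.

def close_under_union(X):
    """Smallest superfamily of X closed under set union."""
    X = set(map(frozenset, X))
    changed = True
    while changed:
        changed = False
        for a in list(X):
            for b in list(X):
                if a | b not in X:
                    X.add(a | b)
                    changed = True
    return X

# Closing {{1}, {2}, {3}} adds all unions, e.g. {1,2,3}:
C = close_under_union([{1}, {2}, {3}])
```

In abstract interpretation, abstract domains are standardly identified with (upper) closure operators on the concrete domain, which is what lets the paper phrase both the hierarchy of logic-program semantics and domain refinements in the same order-theoretic language.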
 
In this paper, we present a lattice of graphs particularly suitable for semantic analysis of dynamic data structures. We consider LISP-like structures only, since the generalization to arbitrary dynamic structures is straightforward. Viewing those structures as data graphs, we introduce a special kind of graphs (called heap-graphs or h-graphs), each of which represents a set of data graphs. Then we build a finite subset of those h-graphs by means of a notion of normalized h-graphs, and define an algebraic structure on this subset, thus building a (finite) lattice. Finally, we define abstract operations on this subset, corresponding to (some) LISP primitives. We show how this analysis can be used, and give some results produced by an experimental analyzer.
 
The Paige and Tarjan algorithm (PT) for computing the coarsest refinement of a state partition which is a bisimulation on some Kripke structure is well known. It is also well known in model checking that bisimulation is equivalent to strong preservation of CTL, or, equivalently, of Hennessy-Milner logic. Drawing on these observations, we analyze the basic steps of the PT algorithm from an abstract interpretation perspective, which allows us to reason about strong preservation in the context of generic inductively defined (temporal) languages and of possibly non-partitioning abstract models specified by abstract interpretation. This leads us to design a generalized Paige-Tarjan algorithm, called GPT, for computing the minimal refinement of an abstract interpretation-based model that strongly preserves some given language. It turns out that PT is a straight instance of GPT on the domain of state partitions for the case of strong preservation of Hennessy-Milner logic. We provide a number of examples showing that GPT is of general use. We first show how a well-known efficient algorithm for computing stuttering equivalence can be viewed as a simple instance of GPT. We then instantiate GPT in order to design a new efficient algorithm for computing simulation equivalence that is competitive with the best available algorithms. Finally, we show how GPT makes it possible to compute new strongly preserving abstract models, by providing an efficient algorithm that computes the coarsest refinement of a given partition that strongly preserves the language generated by the reachability operator.
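The fixpoint that PT computes can be sketched with a naive splitting loop: repeatedly pick a block B as splitter and separate, inside every other block, the states with a successor in B from those without. The sketch below reaches the same coarsest bisimulation but deliberately ignores PT's clever "process the smaller half" splitter strategy that yields the O(|->| log |Sigma|) bound (function names are ours):

```python
# Naive partition refinement to the coarsest bisimulation on a Kripke
# structure. A partition is a list of frozensets of states; pre(B) returns
# the set of states with at least one successor inside block B.

def coarsest_bisimulation(partition, pre):
    P = [frozenset(B) for B in partition]
    changed = True
    while changed:
        changed = False
        for splitter in list(P):
            preB = pre(splitter)
            newP = []
            for block in P:
                inter, diff = block & preB, block - preB
                if inter and diff:          # splitter distinguishes states
                    newP += [inter, diff]
                    changed = True
                else:
                    newP.append(block)
            P = newP
    return P

# Chain 0 -> 1 -> 2: starting from the trivial partition, all three states
# end up in distinct blocks (they have distinct distances to the deadlock).
succ = {0: {1}, 1: {2}, 2: set()}
pre = lambda B: frozenset(s for s in succ if succ[s] & set(B))
P = coarsest_bisimulation([{0, 1, 2}], pre)
```

GPT generalizes exactly this refinement step from state partitions to arbitrary abstract domains, which is what allows the non-partitioning instances described in the abstract.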
 
We study syntax-free models for name-passing processes. For interleaving semantics, we identify the indexing structure required of an early labelled transition system to support the usual π-calculus operations, defining Indexed Labelled Transition Systems. For non-interleaving causal semantics we define Indexed Labelled Asynchronous Transition Systems, smoothly generalizing both our interleaving model and the standard Asynchronous Transition Systems model for CCS-like calculi. In each case we relate a denotational semantics to an operational view, for bisimulation and causal bisimulation respectively. We establish completeness properties of, and adjunctions between, categories of the two models. Alternative indexing structures and possible applications are also discussed. These are first steps towards a uniform understanding of the semantics and operations of name-passing calculi.
 
We study the notion of stratification, as used in subsystems of linear logic with low complexity bounds on the cut-elimination procedure (the so-called light logics), from an abstract point of view, introducing a logical system in which stratification is handled by a separate modality. This modality, which is a generalization of the paragraph modality of Girard's light linear logic, arises from a general categorical construction applicable to all models of linear logic. We thus learn that stratification may be formulated independently of exponential modalities; when it is forced to be connected to exponential modalities, it yields interesting complexity properties. In particular, from our analysis stem three alternative reformulations of Baillot and Mazza's linear logic by levels: one geometric, one interactive, and one semantic.
 
In abstract interpretation-based static analysis, approximation is encoded by abstract domains. They provide systematic guidelines for designing abstract semantic functions that approximate some concrete system behaviors under analysis. It may happen that an abstract domain contains redundant information for the specific purpose of approximating a given concrete semantic function. This paper introduces the notion of correctness kernel of abstract interpretations, a methodology for simplifying abstract domains, i.e., removing abstract values from them in a maximal way, while retaining exactly the same approximate behavior of the system under analysis. We show that in abstract model checking correctness kernels provide a simplification paradigm for the abstract state space that is guided by examples, meaning that this simplification preserves spuriousness of examples (i.e., abstract paths). In particular, we show how correctness kernels can be integrated with the well-known CEGAR (CounterExample-Guided Abstraction Refinement) methodology.
 
We introduce an abstract interpretation framework for Mobile Ambients, based on a new semantics called normal semantics. Then, we derive within this setting two analyses computing a safe approximation of the run-time topological structure of processes. Such static information can be successfully used to establish interesting security properties.
 
Probabilistic Automata (PAs) are a widely recognized mathematical framework for the specification and analysis of systems with non-deterministic and stochastic behaviors. This paper proposes Abstract Probabilistic Automata (APAs), a novel abstraction model for PAs. In APAs, uncertainty of the non-deterministic choices is modeled by may/must modalities on transitions, while uncertainty of the stochastic behavior is expressed by (underspecified) stochastic constraints. We have developed a complete abstraction theory for PAs, and also propose the first specification theory for them. Our theory supports both satisfaction and refinement operators, together with classical stepwise design operators. In addition, we study how specification theories and abstraction can be combined to avoid the state-space explosion problem.
 
Abstract state machines (ASMs) form a relatively new computation model holding the promise that they can simulate any computational system in lockstep. In particular, an instance of the ASM model has recently been introduced for computing queries to relational databases. This model, to which we refer as the BGS model, provides a powerful query language in which all computable queries can be expressed. In this paper, we show that when one is only interested in polynomial-time computations, BGS is strictly more powerful than both QL and while_new, two well-known computationally complete query languages. We then show that when a language such as while_new is extended with a duplicate elimination mechanism, polynomial-time simulations between the language and BGS become possible.
 
Top-cited authors
Joseph Sifakis
  • Université Grenoble Alpes
Xavier Nicollin
  • École Nationale Supérieure d'Informatique et de Mathématiques
Janka Chlebikova
  • University of Portsmouth
Miroslav Chlebik
  • University of Sussex
Giorgi Japaridze
  • Villanova University