# Logical Methods in Computer Science

Published by IfCoLog

We study the equational theory of Parigot's second-order λμ-calculus in connection with a call-by-name continuation-passing style (CPS) translation into a fragment of the second-order λ-calculus. It is observed that relational parametricity on the target calculus induces a natural notion of equivalence on the λμ-terms. On the other hand, unconstrained relational parametricity on the λμ-calculus turns out to be inconsistent with this CPS semantics. In light of these facts, we propose to formulate relational parametricity on the λμ-calculus in a constrained way, which might be called "focal parametricity".

…

The linear continuous functionals $F:C[0;1]\to\mathbb R$ can be characterized
by functions $g:[0;1]\to \mathbb R$ of bounded variation, $F(h)=\int h{\:\rm d}
g$ (Riesz representation theorem with Riemann-Stieltjes integral), or by signed
measures $\mu$ on the Borel-subsets, $F(h)= \int h{\:\rm d}\mu$. Each of these
objects has an (even minimal) Jordan decomposition into non-negative or
non-decreasing objects ($F=F^+-F^-$, $g=g^+-g^-$, $\mu=\mu^+-\mu^-$). Using the
representation approach to computable analysis, a computable version of the
Riesz representation theorem has been proved by Jafarikhah, Lu and Weihrauch.
In this article we extend this result. We study the computable relation between
three Banach spaces, the space of linear continuous functionals with operator
norm, the space of (normalized) functions of bounded variation with total
variation norm, and the space of bounded signed Borel measures with variation
norm. We introduce natural representations for defining computability. We prove
that the canonical linear bijections $F\mapsto g$, $g\mapsto\mu$ and
$\mu\mapsto F$ between these spaces and their inverses are computable. We also
prove that Jordan decomposition is computable on each of these spaces.
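The Jordan decomposition $\mu=\mu^+-\mu^-$ discussed above can be illustrated in miniature for a finitely supported signed measure, where the minimal decomposition is obtained by splitting atoms by sign. This is only our own sketch of the classical notion; the paper itself works with representations of general Borel measures in computable analysis.

```python
def jordan(mu):
    """Minimal Jordan decomposition mu = mu_plus - mu_minus of a finitely
    supported signed measure, encoded as a dict point -> signed mass.
    On a finite space the Hahn decomposition simply splits atoms by sign."""
    mu_plus = {x: m for x, m in mu.items() if m > 0}
    mu_minus = {x: -m for x, m in mu.items() if m < 0}
    return mu_plus, mu_minus

def total_variation(mu):
    """Variation norm ||mu|| = mu_plus(X) + mu_minus(X)."""
    mu_plus, mu_minus = jordan(mu)
    return sum(mu_plus.values()) + sum(mu_minus.values())
```

For example, for `mu = {"a": 2.0, "b": -0.5, "c": 1.5}` the decomposition is `{"a": 2.0, "c": 1.5}` and `{"b": 0.5}`, and the variation norm is 4.0, the norm used on the space of bounded signed measures.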

…

The Algebraic Dichotomy Conjecture states that the Constraint Satisfaction
Problem over a fixed template is solvable in polynomial time if the algebra of
polymorphisms associated to the template lies in a Taylor variety, and is
NP-complete otherwise. This paper provides two new characterizations of
finitely generated Taylor varieties. The first characterization uses
absorbing subalgebras; the second uses cyclic terms. These new conditions
allow us to reprove the conjecture of Bang-Jensen and Hell (proved by the
authors) and the characterization of locally finite Taylor varieties using weak
near-unanimity terms (proved by McKenzie and Mar\'oti) in an elementary and
self-contained way.

…

The notion of absorption was developed a few years ago by Barto and Kozik and
immediately found many applications, particularly in topics related to the
constraint satisfaction problem. We investigate the behavior of absorption in
semigroups and n-ary semigroups (that is, algebras with one n-ary associative
operation). In the case of semigroups, we give a simple necessary and
sufficient condition for a semigroup to be absorbed by its subsemigroup. We
then proceed to n-ary semigroups, where we conjecture an analogue of this
necessary and sufficient condition, and prove that the conjectured condition is
indeed necessary and sufficient for B to absorb A (where A is an n-ary
semigroup and B is its n-ary subsemigroup) in the following three cases: when A
is commutative, when |A-B|=1 and when A is an idempotent ternary semigroup.

…

Initial Semantics aims at interpreting the syntax associated to a signature
as the initial object of some category of 'models', yielding induction and
recursion principles for abstract syntax. Zsid\'o proves an initiality result
for simply-typed syntax: given a signature S, the abstract syntax associated to
S constitutes the initial object in a category of models of S in monads.
However, the iteration principle her theorem provides accounts only for
translations between two languages over a fixed set of object types. We
generalize Zsid\'o's notion of model such that object types may vary, yielding
a larger category, while preserving initiality of the syntax therein. Thus we
obtain an extended initiality theorem for typed abstract syntax, in which
translations between terms over different types can be specified via the
associated category-theoretic iteration operator as an initial morphism. Our
definitions ensure that translations specified via initiality are type-safe,
i.e. compatible with the typing in the source and target language in the
obvious sense. Our main example is given via the propositions-as-types
paradigm: we specify propositions and inference rules of classical and
intuitionistic propositional logics through their respective typed signatures.
Afterwards we use the category-theoretic iteration operator to specify a
double negation translation from the former to the latter. A second example is
given by the signature of PCF. For this particular case, we formalize the
theorem in the proof assistant Coq. Afterwards we specify, via the
category-theoretic iteration operator, translations from PCF to the untyped
lambda calculus.

…

Nominal abstract syntax is a popular first-order technique for encoding, and
reasoning about, abstract syntax involving binders. Many of its applications
involve constraint solving. The most commonly used constraint solving algorithm
over nominal abstract syntax is the Urban-Pitts-Gabbay nominal unification
algorithm, which is well-behaved, has a well-developed theory and is applicable
in many cases. However, certain problems require a constraint solver which
respects the equivariance property of nominal logic, such as Cheney's
equivariant unification algorithm. This is more powerful but is more
complicated and computationally hard. In this paper we present a novel
algorithm for solving constraints over a simple variant of nominal abstract
syntax which we call non-permutative. This constraint problem has similar
complexity to equivariant unification but without many of the additional
complications of the equivariant unification term language. We prove our
algorithm correct, paying particular attention to issues of termination, and
present an explicit translation of name-name equivariant unification problems
into non-permutative constraints.

…

Terminal coalgebras for a functor serve as semantic domains for state-based
systems of various types. For example, behaviors of CCS processes, streams,
infinite trees, formal languages and non-well-founded sets form terminal
coalgebras. We present a uniform account of the semantics of recursive
definitions in terminal coalgebras by combining two ideas: (1) abstract GSOS
rules $\ell$ specify additional algebraic operations on a terminal coalgebra; (2)
terminal coalgebras are also initial completely iterative algebras (cias). We
also show that an abstract GSOS rule leads to new extended cia structures on
the terminal coalgebra. Then we formalize recursive function definitions
involving given operations specified by $\ell$ as recursive program schemes for $\ell$,
and we prove that unique solutions exist in the extended cias. From our results
it follows that the solutions of recursive (function) definitions in terminal
coalgebras may be used in subsequent recursive definitions which still have
unique solutions. We call this principle modularity. We illustrate our results
by the five concrete terminal coalgebras mentioned above, e.g., a finite
stream circuit defines a unique stream function.

…

We present a formalization of modern SAT solvers and their properties in the
form of abstract state transition systems. SAT solving procedures are described
as transition relations over states that represent the values of the solver's
global variables. Several different SAT solvers are formalized, including both
the classical DPLL procedure and its state-of-the-art successors. The
formalization is made within the Isabelle/HOL system and the total correctness
(soundness, termination, completeness) is shown for each presented system (with
respect to a simple notion of satisfiability that can be manually checked). The
systems are defined in a general way and cover procedures used in a wide range
of modern SAT solvers. Our formalization builds on previous work on
state transition systems for SAT, but it gives machine-verifiable proofs,
somewhat more general specifications, and weaker assumptions that ensure the
key correctness properties. The presented proofs of formal correctness of the
transition systems can be used as a key building block in proving correctness
of SAT solvers by using other verification approaches.
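As a toy illustration of the classical DPLL procedure that the formalized transition systems generalize, the following is our own Python sketch (not the Isabelle/HOL development): exhaustive unit propagation followed by splitting on a literal.

```python
def dpll(clauses, assignment=None):
    """A sketch of classical DPLL: unit propagation plus splitting.
    A CNF formula is a set of clauses; each clause is a frozenset of
    nonzero integer literals (DIMACS-style: -n is the negation of n)."""
    if assignment is None:
        assignment = []
    clauses = set(clauses)
    # Unit propagation.
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment.append(lit)
        new_clauses = set()
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied
            if -lit in c:
                c = c - {-lit}                # literal falsified
                if not c:
                    return None               # empty clause: conflict
            new_clauses.add(c)
        clauses = new_clauses
    if not clauses:
        return assignment                     # all clauses satisfied
    # Split: try both polarities of some literal.
    lit = next(iter(next(iter(clauses))))
    for choice in (lit, -lit):
        result = dpll(clauses | {frozenset([choice])}, list(assignment))
        if result is not None:
            return result
    return None                               # both branches failed: unsat
```

`dpll` returns a satisfying partial assignment as a list of literals, or `None` when the input is unsatisfiable, matching the simple notion of satisfiability the abstract mentions.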

…

In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. In Part I (Interactive Small-Step Algorithms I: Axiomatization), the axiomatization was extended to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend here the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. We prove the characterization theorem for extended abstract state machines with respect to general algorithms as axiomatized in Part I.

…

We present a new approach for performing predicate abstraction based on symbolic decision procedures. A symbolic decision procedure for a theory T ($\mathrm{SDP}_T$) takes sets of predicates G and E and symbolically executes a decision procedure for T on G′ ∪ {¬e | e ∈ E}, for all the subsets G′ of G. The result of $\mathrm{SDP}_T$ is a shared expression (represented by a directed acyclic graph) that implicitly represents the answer to a predicate abstraction query.
We present symbolic decision procedures for the logic of Equality and Uninterpreted Functions (EUF) and Difference logic (DIF) and show that these procedures run in pseudo-polynomial (rather than exponential) time. We then provide a method to construct SDPs for simple mixed theories (including EUF + DIF) using an extension of the Nelson-Oppen combination method. We present a preliminary evaluation of our procedure on predicate abstraction benchmarks from device driver verification in SLAM.

…

In previous work with Pous, we defined a semantics for CCS which may both be
viewed as an innocent form of presheaf semantics and as a concurrent form of
game semantics. We define in this setting an analogue of fair testing
equivalence, which we prove fully abstract w.r.t. standard fair testing
equivalence.
The proof relies on a new algebraic notion called playground, which
represents the 'rule of the game'. From any playground, we derive two languages
equipped with labelled transition systems, as well as a strong, functional
bisimulation between them.

…

We propose an abstraction-based model checking method which relies on
refinement of an under-approximation of the feasible behaviors of the system
under analysis. The method preserves errors to safety properties, since all
analyzed behaviors are feasible by definition. The method does not require an
abstract transition relation to be generated, but instead executes the concrete
transitions while storing abstract versions of the concrete states, as
specified by a set of abstraction predicates. For each explored transition the
method checks, with the help of a theorem prover, whether there is any loss of
precision introduced by abstraction. The results of these checks are used to
decide termination or to refine the abstraction by generating new abstraction
predicates. If the (possibly infinite) concrete system under analysis has a
finite bisimulation quotient, then the method is guaranteed to eventually
explore an equivalent finite bisimilar structure. We illustrate the application
of the approach for checking concurrent programs.

…

The symmetric interaction combinators are an equally expressive variant of
Lafont's interaction combinators. They are a graph-rewriting model of
deterministic computation. We define two notions of observational equivalence
for them, analogous to normal form and head normal form equivalence in the
lambda-calculus. Then, we prove a full abstraction result for each of the two
equivalences. This is obtained by interpreting nets as certain subsets of the
Cantor space, called edifices, which play the same role as Boehm trees in the
theory of the lambda-calculus.

…

We study the semantics of a resource-sensitive extension of the lambda calculus in a canonical reflexive object of a category of sets and relations, a relational version of Scott's original model of the pure lambda calculus. This calculus is related to Boudol's resource calculus and is derived from Ehrhard and Regnier's differential extension of Linear Logic and of the lambda calculus. We extend it with new constructions, to be understood as implementing
a very simple exception mechanism, and with a "must" parallel composition. These new operations allow us to associate a context of this calculus with any point of the model and to prove full abstraction for the finite sub-calculus where ordinary lambda calculus application is not allowed. The result is then extended to the full calculus by means of a Taylor expansion formula. As an
intermediate result we prove that the exception mechanism is not essential in the finite sub-calculus.

…

The goal of this work is to formally abstract a Markov process evolving in
discrete time over a general state space as a finite-state Markov chain, with
the objective of precisely approximating its state probability distribution in
time, which allows for its approximate, faster computation by that of the
Markov chain. The approach is based on formal abstractions and employs an
arbitrary finite partition of the state space of the Markov process, and the
computation of average transition probabilities between partition sets. The
abstraction technique is formal, in that it comes with guarantees on the
introduced approximation that depend on the diameters of the partitions: as
such, they can be tuned at will. Further, in the case of Markov processes with
unbounded state spaces, a procedure for precisely truncating the state space
within a compact set is provided, together with an error bound that depends on
the asymptotic properties of the transition kernel of the original process. The
overall abstraction algorithm, which practically hinges on piecewise constant
approximations of the density functions of the Markov process, is extended to
higher-order function approximations: these can lead to improved error bounds
and associated lower computational requirements. The approach is practically
tested to compute probabilistic invariance of the Markov process under study,
and is compared to a known alternative approach from the literature.
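A rough sketch of the partition-based abstraction described above, assuming a one-dimensional process given by a transition density and using a midpoint rule as the piecewise constant approximation. The kernel and parameters below are illustrative assumptions, not the paper's case studies; the paper's error guarantees depend on the partition diameter.

```python
import math

def abstract_chain(density, lo, hi, n):
    """Abstract a continuous-state Markov process on a truncated interval
    [lo, hi] as an n-state Markov chain: partition the interval uniformly
    and approximate the transition probability between cells with a
    midpoint rule (a piecewise constant approximation of the density).
    Rows are renormalized to compensate for truncating the state space."""
    h = (hi - lo) / n
    mids = [lo + (i + 0.5) * h for i in range(n)]
    P = [[density(x, y) * h for y in mids] for x in mids]
    for row in P:
        s = sum(row)
        for j in range(n):
            row[j] /= s
    return P

def gauss_kernel(x, y, sigma=0.5):
    """Hypothetical example kernel: a Gaussian random walk, y ~ N(x, sigma^2)."""
    return math.exp(-((y - x) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
```

On the truncated domain [-3, 3] with 60 cells this yields a 60-state stochastic matrix whose powers approximate the state distribution of the original process in time.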

…

We propose a method for automatically generating abstract transformers for
static analysis by abstract interpretation. The method focuses on linear
constraints on programs operating on rational, real or floating-point variables
and containing linear assignments and tests. Given the specification of an
abstract domain, and a program block, our method automatically outputs an
implementation of the corresponding abstract transformer. It is thus a form of
program transformation. In addition to loop-free code, the same method also
applies for obtaining least fixed points as functions of the precondition,
which permits the analysis of loops and recursive functions. The motivation of
our work is data-flow synchronous programming languages, used for building
control-command embedded systems, but it also applies to imperative and
functional programming. Our algorithms are based on quantifier elimination and
symbolic manipulation techniques over linear arithmetic formulas. We also give
less general results for nonlinear constraints and nonlinear program
constructs.
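Quantifier elimination over linear arithmetic, the core tool mentioned above, can be illustrated by a bare-bones Fourier-Motzkin step. This is only a sketch of the underlying technique; the paper's algorithms, which also compute least fixed points and handle floating-point semantics, are considerably more involved.

```python
def fourier_motzkin(ineqs, k):
    """One step of Fourier-Motzkin elimination over linear arithmetic:
    eliminate variable x_k from a conjunction of inequalities.  Each
    inequality (a, b) encodes sum_i a[i]*x_i <= b.  The output constrains
    the remaining variables exactly when some value of x_k satisfies the
    input (i.e. it is the projection, eliminating the quantifier on x_k)."""
    lower, upper, rest = [], [], []
    for a, b in ineqs:
        (upper if a[k] > 0 else lower if a[k] < 0 else rest).append((a, b))
    out = list(rest)
    for al, bl in lower:
        for au, bu in upper:
            cl, cu = -al[k], au[k]      # positive multipliers cancelling x_k
            out.append(([cu * al[i] + cl * au[i] for i in range(len(al))],
                        cu * bl + cl * bu))
    return out
```

For example, eliminating x from {x - y <= 0, -x <= -2} (i.e. x <= y and x >= 2) yields -y <= -2, i.e. y >= 2.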

…

An infinite run of a timed automaton is Zeno if it spans only a finite amount
of time. Such runs are considered unfeasible and hence it is important to
detect them, or dually, find runs that are non-Zeno. Over the years important
improvements have been obtained in checking reachability properties for timed
automata. We show that some of these very efficient optimizations make testing
for Zeno runs costly. In particular we show NP-completeness for the
LU-extrapolation of Behrmann et al. We analyze the source of this complexity in
detail and give general conditions on extrapolation operators that guarantee a
(low) polynomial complexity of Zenoness checking. We propose a slight weakening
of the LU-extrapolation that satisfies these conditions.

…

AC-completion efficiently handles equality modulo associative and commutative
function symbols. When the input is ground, the procedure terminates and
provides a decision algorithm for the word problem. In this paper, we present a
modular extension of ground AC-completion for deciding formulas in the
combination of the theory of equality with user-defined AC symbols,
uninterpreted symbols and an arbitrary signature disjoint Shostak theory X. Our
algorithm, called AC(X), is obtained by augmenting in a modular way ground
AC-completion with the canonizer and solver present for the theory X. This
integration rests on canonized rewriting, a new relation reminiscent of
normalized rewriting, which integrates canonizers in rewriting steps. AC(X) is
proved sound, complete and terminating, and is implemented to extend the core
of the Alt-Ergo theorem prover.

…

We study mechanisms that permit program components to express role
constraints on clients, focusing on programmatic security mechanisms, which
permit access controls to be expressed, in situ, as part of the code realizing
basic functionality. In this setting, two questions immediately arise: (1) The
user of a component faces the issue of safety: is a particular role sufficient
to use the component? (2) The component designer faces the dual issue of
protection: is a particular role demanded in all execution paths of the
component? We provide a formal calculus and static analysis to answer both
questions.

…

Compact categories have lately seen renewed interest via applications to
quantum physics. Being essentially finite-dimensional, they cannot accommodate
(co)limit-based constructions. For example, they cannot capture protocols such
as quantum key distribution, that rely on the law of large numbers. To overcome
this limitation, we introduce the notion of a compactly accessible category,
relying on the extra structure of a factorisation system. This notion allows
for infinite dimension while retaining key properties of compact categories:
the main technical result is that the choice-of-duals functor on the compact
part extends canonically to the whole compactly accessible category. As an
example, we model a quantum key distribution protocol and prove its correctness
categorically.

…

The Algebraic lambda-calculus and the Linear-Algebraic lambda-calculus extend
the lambda-calculus with the possibility of making arbitrary linear
combinations of terms. In this paper we provide a fine-grained, System F-like
type system for the linear-algebraic lambda-calculus. We show that this
"scalar" type system enjoys both the subject-reduction property and the
strong-normalisation property, our main technical results. The latter yields a
significant simplification of the linear-algebraic lambda-calculus itself, by
removing the need for some restrictions in its reduction rules. But the more
important, original feature of this scalar type system is that it keeps track
of 'the amount of a type' that is present in each term. As an example of its
use, we show that it can serve as a guarantee that the normal form of a term
is barycentric, i.e. that its scalars sum to one.

…

We introduce a new domain for finding precise numerical invariants of
programs by abstract interpretation. This domain, which consists of level sets
of non-linear functions, generalizes the domain of linear "templates"
introduced by Manna, Sankaranarayanan, and Sipma. In the case of quadratic
templates, we use Shor's semi-definite relaxation to derive computable yet
precise abstractions of semantic functionals, and we show that the abstract
fixpoint equation can be solved accurately by coupling policy iteration and
semi-definite programming. We demonstrate the usefulness of our approach on a
series of examples (filters, integration schemes) including a degenerate one
(symplectic scheme).

…

We present a state-based regression function for planning domains where an
agent does not have complete information and may have sensing actions. We
consider binary domains and employ a three-valued characterization of domains
with sensing actions to define the regression function. We prove the soundness
and completeness of our regression formulation with respect to the definition
of progression. More specifically, we show that (i) a plan obtained through
regression for a planning problem is indeed a progression solution of that
planning problem, and that (ii) for each plan found through progression, using
regression one obtains that plan or an equivalent one.

…

We present a restriction of the solos calculus which is stable under
reduction and expressive enough to contain an encoding of the pi-calculus. As a
consequence, it is shown that equalizing names that are already equal is not
required by the encoding of the pi-calculus. In particular, the induced solo
diagrams bear an acyclicity property that induces a faithful encoding into
differential interaction nets. This gives a (new) proof that differential
interaction nets are expressive enough to contain an encoding of the
pi-calculus. All this is worked out in the case of finitary (replication free)
systems without sum, match, or mismatch.

…

We propose the concept of adaptable processes as a way of overcoming the
limitations that process calculi have for describing patterns of dynamic
process evolution. Such patterns rely on direct ways of controlling the
behavior and location of running processes, and so they are at the heart of the
adaptation capabilities present in many modern concurrent systems. Adaptable
processes have a location and are sensitive to actions of dynamic update at
runtime; this allows us to express a wide range of evolvability patterns for
concurrent processes. We introduce a core calculus of adaptable processes and
propose two verification problems for them: bounded and eventual adaptation.
While the former ensures that the number of consecutive erroneous states that
can be traversed during a computation is bounded by some given number k, the
latter ensures that if the system enters into a state with errors then a state
without errors will be eventually reached. We study the (un)decidability of
these two problems in several variants of the calculus, which result from
considering dynamic and static topologies of adaptable processes as well as
different evolvability patterns. Rather than a specification language, our
calculus is intended to be a basis for investigating the fundamental properties of
evolvable processes and for developing richer languages with evolvability
capabilities.

…

For the additive real BSS machines using only constants 0 and 1 and order
tests we consider the corresponding Turing reducibility and characterize some
semi-decidable decision problems over the reals. In order to refine,
step-by-step, a linear hierarchy of Turing degrees with respect to this model,
we define several halting problems for classes of additive machines with
different abilities and construct further suitable decision problems. In the
construction we use methods of the classical recursion theory as well as
techniques for proving bounds resulting from algebraic properties. In this way
we extend a known hierarchy of problems below the halting problem for the
additive machines using only equality tests and we present a further
subhierarchy of semi-decidable problems between the halting problems for the
additive machines using only equality tests and using order tests,
respectively.

…

Checking the admissibility of quasiequations in a finitely generated (i.e.,
generated by a finite set of finite algebras) quasivariety Q amounts to
checking validity in a suitable finite free algebra of the quasivariety, and is
therefore decidable. However, since free algebras may be large even for small
sets of small algebras and very few generators, this naive method for checking
admissibility in Q is not computationally feasible. In this paper,
algorithms are introduced that generate a minimal (with respect to a multiset
well-ordering on their cardinalities) finite set of algebras such that the
validity of a quasiequation in this set corresponds to admissibility of the
quasiequation in Q. In particular, structural completeness (validity and
admissibility coincide) and almost structural completeness (validity and
admissibility coincide for quasiequations with unifiable premises) can be
checked. The algorithms are illustrated with a selection of well-known finitely
generated quasivarieties, and adapted to handle also admissibility of rules in
finite-valued logics.

…

This paper investigates what is essentially a call-by-value version of PCF under a complexity-theoretically motivated type system. The programming formalism, ATR, has its first-order programs characterize the polynomial-time computable functions, and its second-order programs characterize the type-2 basic feasible functionals of Mehlhorn and of Cook and Urquhart. (The ATR-types are confined to levels 0, 1, and 2.) The type system comes in two parts, one that primarily restricts the sizes of values of expressions and a second that primarily restricts the time required to evaluate expressions. The size-restricted part is motivated by Bellantoni and Cook's and Leivant's implicit characterizations of polynomial-time. The time-restricting part is an affine version of Barber and Plotkin's DILL. Two semantics are constructed for ATR. The first is a pruning of the naive denotational semantics for ATR. This pruning removes certain functions that cause otherwise feasible forms of recursion to go wrong. The second semantics is a model for ATR's time complexity relative to a certain abstract machine. This model provides a setting for complexity recurrences arising from ATR recursions, the solutions of which yield second-order polynomial time bounds. The time-complexity semantics is also shown to be sound relative to the costs of interpretation on the abstract machine.

…

Logics for security protocol analysis require the formalization of an
adversary model that specifies the capabilities of adversaries. A common model
is the Dolev-Yao model, which considers only adversaries that can compose and
replay messages, and decipher them with known keys. The Dolev-Yao model is a
useful abstraction, but it suffers from some drawbacks: it cannot handle the
adversary knowing protocol-specific information, and it cannot handle
probabilistic notions, such as the adversary attempting to guess the keys. We
show how we can analyze security protocols under different adversary models by
using a logic with a notion of algorithmic knowledge. Roughly speaking,
adversaries are assumed to use algorithms to compute their knowledge; adversary
capabilities are captured by suitable restrictions on the algorithms used. We
show how we can model the standard Dolev-Yao adversary in this setting, and how
we can capture more general capabilities including protocol-specific knowledge
and guesses.
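The standard Dolev-Yao adversary described above — compose, decompose, and decrypt with known keys — can be sketched as a small closure computation. The message encoding and helper names below are our own illustrative assumptions, not the logic of the paper.

```python
def analyze(knowledge):
    """Decomposition closure of a Dolev-Yao adversary: split pairs, and
    decrypt a ciphertext whenever its key is already known.  Messages are
    atoms (strings) or tagged tuples ('pair', m1, m2) / ('enc', m, key)."""
    K = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(K):
            new = set()
            if isinstance(m, tuple) and m[0] == "pair":
                new = {m[1], m[2]} - K
            elif isinstance(m, tuple) and m[0] == "enc" and m[2] in K:
                new = {m[1]} - K
            if new:
                K |= new
                changed = True
    return K

def derivable(m, knowledge):
    """Can the adversary synthesize m by pairing and encrypting messages
    obtained from its analyzed knowledge?"""
    K = analyze(knowledge)
    if m in K:
        return True
    if isinstance(m, tuple) and m[0] in ("pair", "enc"):
        return all(derivable(part, K) for part in m[1:])
    return False
```

For instance, an adversary knowing `("enc", ("pair", "secret", "nonce"), "k")` together with the key `"k"` can derive `"secret"`, but not an atom it has never seen.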

…

In a previous work Baillot and Terui introduced Dual light affine logic
(DLAL) as a variant of Light linear logic suitable for guaranteeing complexity
properties on lambda calculus terms: all typable terms can be evaluated in
polynomial time by beta reduction and all Ptime functions can be represented.
In the present work we address the problem of typing lambda-terms in
second-order DLAL. For that we give a procedure which, starting with a term
typed in system F, determines whether it is typable in DLAL and outputs a
concrete typing if one exists. We show that our procedure can be run in
time polynomial in the size of the original Church typed system F term.

…

Given a universal Horn formula of Kleene algebra with hypotheses of the form
r = 0, it is already known that we can efficiently construct an equation which
is valid if and only if the Horn formula is valid. This is an example of
elimination of hypotheses, which is useful because the equational theory
of Kleene algebra is decidable while the universal Horn theory is not. We show
that hypotheses of the form r = 0 can still be eliminated in the presence of
other hypotheses. This lets us extend any technique for eliminating hypotheses
to include hypotheses of the form r = 0.

…

Terms are a concise representation of tree structures. Since they can be
naturally defined by an inductive type, they offer data structures in
functional programming and mechanised reasoning with useful principles such as
structural induction and structural recursion. However, for graphs or
"tree-like" structures - trees involving cycles and sharing - it remains
unclear what kinds of inductive structures exist and how we can faithfully
assign a term representation to them. In this paper we propose a simple term
syntax for cyclic sharing structures that admits structural induction and
recursion principles. We show that the obtained syntax is directly usable in
the functional language Haskell and the proof assistant Agda, just as
ordinary data structures such as lists and trees are. To achieve this goal, we use
a categorical approach to initial algebra semantics in a presheaf category.
That approach follows the line of Fiore, Plotkin and Turi's models of abstract
syntax with variable binding.

…

Burkart, Caucal, Steffen (1995) showed a procedure deciding bisimulation
equivalence of processes in Basic Process Algebra (BPA), i.e. of sequential
processes generated by context-free grammars. They improved the previous
decidability result of Christensen, H\"uttel, Stirling (1992), since their
procedure obviously has elementary time complexity and the authors claim
that a close analysis would reveal a double exponential upper bound. Here a
self-contained direct proof of the membership in 2-ExpTime is given. This is
done via a Prover-Refuter game which shows that there is an alternating Turing
machine deciding the problem in exponential space. The proof uses similar
ingredients (size-measures, decompositions, bases) as the previous proofs, but
one new simplifying factor is an explicit addition of infinite regular strings
to the state space. An auxiliary claim also shows an explicit exponential upper
bound on the equivalence level of nonbisimilar normed BPA processes. The
importance of clarifying the 2-ExpTime upper bound for BPA bisimilarity has
recently increased due to the shift of the known lower bound from PSpace (Srba,
2002) to ExpTime (Kiefer, 2012).

…

We introduce two-sorted theories in the style of [CN10] for the complexity
classes $\oplus L$ and DET, whose complete problems include determinants over Z2
and Z, respectively. We then describe interpretations of Soltys' linear algebra
theory LAp over arbitrary integral domains, into each of our new theories. The
result shows equivalences of standard theorems of linear algebra over Z2 and Z
can be proved in the corresponding theory, but leaves open the interesting
question of whether the theorems themselves can be proved.
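One of the complete problems mentioned, the determinant over Z2, reduces to Gaussian elimination: over Z2 the determinant is 1 exactly when the 0/1 matrix has full rank. A minimal sketch (not the paper's proof-theoretic treatment, just the underlying computation):

```python
def det_mod2(matrix):
    """Determinant over Z2: 1 iff the 0/1 matrix is invertible mod 2."""
    M = [row[:] for row in matrix]
    n = len(M)
    for c in range(n):
        # find a pivot row with a 1 in column c
        piv = next((r for r in range(c, n) if M[r][c] & 1), None)
        if piv is None:
            return 0  # no pivot: the matrix is singular mod 2
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            if M[r][c] & 1:
                M[r] = [x ^ y for x, y in zip(M[r], M[c])]  # row addition mod 2
    return 1
```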

…

The study of finite automata and regular languages is a privileged meeting
point of algebra and logic. Since the work of B\"uchi, regular languages have
been classified according to their descriptive complexity, i.e. the type of
logical formalism required to define them. The algebraic point of view on
automata is an essential complement to this classification: by providing
alternative, algebraic characterizations for the classes, it often yields the
only opportunity for the design of algorithms that decide expressibility in
some logical fragment.
We survey the existing results relating the expressibility of regular
languages in logical fragments of MSO[S] with algebraic properties of their
minimal automata. In particular, we show that many of the best known results in
this area share the same underlying mechanics and rely on a very strong
relation between logical substitutions and block-products of pseudovarieties of
monoids. We also explain the impact of these connections on circuit complexity
theory.
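A concrete instance of such an algebraic decision procedure is Sch\"utzenberger's theorem: a regular language is first-order definable (equivalently, star-free) iff the transition monoid of its minimal automaton is aperiodic. A sketch, assuming a hypothetical integer-state DFA encoding:

```python
def transition_monoid(n, delta, alphabet):
    """Transition monoid of a DFA with states 0..n-1.

    delta maps (state, letter) to a state; monoid elements are tuples f
    with f[s] = the state reached from s.
    """
    gens = [tuple(delta[(s, a)] for s in range(n)) for a in alphabet]
    elems, frontier = set(gens), list(gens)
    while frontier:
        f = frontier.pop()
        for g in gens:
            h = tuple(g[fs] for fs in f)  # apply f, then g
            if h not in elems:
                elems.add(h)
                frontier.append(h)
    return elems

def is_aperiodic(elems):
    """Aperiodic iff every element m satisfies m^k = m^(k+1) for some k."""
    bound = len(elems) + 2
    for m in elems:
        p = m
        for _ in range(bound):
            nxt = tuple(m[ps] for ps in p)  # next power of m
            if nxt == p:
                break
            p = nxt
        else:
            return False  # the powers of m cycle without stabilising
    return True
```

The parity language (aa)* yields the group Z2 and fails the test, while "contains an a" passes it.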

…

The theory of regular cost functions is a quantitative extension to the
classical notion of regularity. A cost function associates to each input a
non-negative integer value (or infinity), as opposed to languages which only
associate to each input the two values "inside" and "outside". This theory is a
continuation of the works on distance automata and similar models. These models
of automata have been successfully used for solving the star-height problem,
the finite power property, the finite substitution problem, the relative
inclusion star-height problem and the boundedness problem for monadic
second-order logic over words. Our notion of regularity can be -- as in the classical
theory of regular languages -- equivalently defined in terms of automata,
expressions, algebraic recognisability, and by a variant of the monadic
second-order logic. These equivalences are strict extensions of the
corresponding classical results. The present paper introduces the cost monadic
logic, the quantitative extension to the notion of monadic second-order logic
we use, and shows that some problems of existence of bounds are decidable for
this logic. This is achieved by introducing the corresponding algebraic
formalism: stabilisation monoids.
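A distance automaton, the historical starting point of this theory, is a nondeterministic automaton whose transitions carry costs in the (min,+) semiring; the cost of a word is the least total cost over accepting runs, or infinity if none exists. A minimal sketch with a hypothetical encoding:

```python
INF = float('inf')

def word_cost(word, initial, final, delta):
    """Minimal cost over accepting runs of a distance automaton.

    delta maps (state, letter) to a list of (successor, cost) pairs.
    """
    current = {q: 0 for q in initial}
    for letter in word:
        nxt = {}
        for q, c in current.items():
            for r, w in delta.get((q, letter), []):
                # tropical semiring: "product" is +, "sum" is min
                nxt[r] = min(nxt.get(r, INF), c + w)
        current = nxt
    return min((c for q, c in current.items() if q in final), default=INF)
```

For a one-state automaton charging 1 per 'a' and 0 per 'b', the cost of a word is its number of 'a's; boundedness of this cost function over a language is the kind of question the paper's logic can express.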

…

We provide a computational definition of the notions of vector space and
bilinear function. We use these definitions to introduce a minimal language
combining higher-order computation and linear algebra. This language extends
the Lambda-calculus with the possibility to make arbitrary linear combinations
of terms alpha.t + beta.u. We describe how to "execute" this language in terms
of a few rewrite rules, and justify them through the two fundamental
requirements that the language be a language of linear operators, and that it
be higher-order. We discuss the perspectives of this work in the field of
quantum computation, and show that quantum circuits can be easily encoded in
the calculus. Finally, we prove the confluence of the entire calculus.
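The flavour of the rewrite rules can be sketched by keeping terms in normal form as formal linear combinations (maps from base terms to scalars): addition merges coefficients, alpha.t scales them, and application distributes bilinearly over sums, as in (alpha.t + beta.u) v -> alpha.(t v) + beta.(u v). A toy sketch, with hypothetical encodings and no beta-reduction:

```python
def add(c1, c2):
    """Sum of two linear combinations; alpha.t + beta.t collapses to (alpha+beta).t."""
    out = dict(c1)
    for term, coef in c2.items():
        out[term] = out.get(term, 0) + coef
    return {t: c for t, c in out.items() if c != 0}

def smul(scalar, combo):
    """Scalar multiple: scale every coefficient, dropping zeros."""
    return {t: scalar * c for t, c in combo.items() if scalar * c != 0}

def app(c1, c2):
    """Application, distributing bilinearly over both linear combinations."""
    out = {}
    for t, a in c1.items():
        for u, b in c2.items():
            key = ('app', t, u)
            out[key] = out.get(key, 0) + a * b
    return out

def var(name):
    """A base term with coefficient 1."""
    return {('var', name): 1}
```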

…

We examine the relationship between the algebraic lambda-calculus, a fragment
of the differential lambda-calculus and the linear-algebraic lambda-calculus, a
candidate lambda-calculus for quantum computation. Both calculi are algebraic:
each one is equipped with an additive and a scalar-multiplicative structure,
and their set of terms is closed under linear combinations. However, the two
languages were built using different approaches: the former is a call-by-name
language whereas the latter is call-by-value; the former considers algebraic
equalities whereas the latter approaches them through rewrite rules. In this
paper, we analyse how these different approaches relate to one another. To this
end, we propose four canonical languages based on each of the possible choices:
call-by-name versus call-by-value, algebraic equality versus algebraic
rewriting. We show that the various languages simulate one another. Due to
the subtle interaction between beta-reduction and algebraic rewriting,
additional hypotheses such as confluence or normalisation might be required
to keep the languages consistent. We carefully devise the required properties
for each proof, making them general enough to be valid for any sub-language
satisfying the corresponding properties.

…

We present an effect inference algorithm for Eff, an ML-style language with
handlers not only of exceptions, but of any other algebraic effect, such as
input and output, mutable references, non-determinism and many others.
Our main aim is to offer the programmer a useful insight into the effectful
behaviour of programs. Handlers help here by cutting down possible effects and
the resulting lengthy output that often plagues precise effect systems.
Additionally, we present a set of methods that further simplify the displayed
types, some even by deliberately hiding inferred information from the
programmer.

…

We present an effect system for core Eff, a simplified variant of Eff, which
is an ML-style programming language with first-class algebraic effects and
handlers. We define an expressive effect system and prove safety of operational
semantics with respect to it. Then we give a domain-theoretic denotational
semantics of core Eff, using Pitts's theory of minimal invariant relations, and
prove it adequate. We use this fact to develop tools for finding useful
contextual equivalences, including an induction principle. To demonstrate their
usefulness, we use these tools to derive the usual equations for mutable state,
including a general commutativity law for computations using non-interfering
references. We have formalized the effect system, the operational semantics,
and the safety theorem in Twelf.

…

Traces and their extension called combined traces (comtraces) are two formal
models used in the analysis and verification of concurrent systems. Both models
are based on concepts originating in the theory of formal languages, and they
are able to capture the notions of causality and simultaneity of atomic actions
which take place during the process of a system's operation. The aim of this
paper is to transfer to the domain of comtraces, and to develop there, some
fundamental notions that have proved successful in the theory of traces. In
particular, we introduce and then apply the notion of indivisible steps, the
lexicographical canonical form of comtraces, as well as the representation of a
comtrace utilising its linear projections to binary action subalphabets. We
also provide two algorithms related to the new notions. Using them, one can
solve, in an efficient way, the problem of step sequence equivalence in the
context of comtraces. One may view our results as a first step towards the
development of infinite combined traces, as well as recognisable languages of
combined traces.
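For ordinary Mazurkiewicz traces, which comtraces extend, the lexicographical canonical form already yields an equivalence test: two words represent the same trace iff their normal forms coincide. A sketch, assuming a symmetric independence relation on letters:

```python
def lex_normal_form(word, independent):
    """Lexicographically least word in the trace equivalence class of `word`.

    `independent` is a symmetric set of letter pairs that may be swapped.
    Greedily move the least movable letter to the front.
    """
    rest, out = list(word), []
    while rest:
        best_i = None
        for i, a in enumerate(rest):
            # `a` can be moved to the front iff it is independent of
            # every letter occurring before it
            if all((b, a) in independent for b in rest[:i]):
                if best_i is None or a < rest[best_i]:
                    best_i = i
        out.append(rest.pop(best_i))
    return ''.join(out)

def trace_equivalent(w1, w2, independent):
    """Step-free analogue of the paper's step sequence equivalence test."""
    return lex_normal_form(w1, independent) == lex_normal_form(w2, independent)
```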

…

We extend the higher-order termination method of dynamic dependency pairs to
Algebraic Functional Systems (AFSs). In this setting, simply typed lambda-terms
with algebraic reduction and separate {\beta}-steps are considered. For
left-linear AFSs, the method is shown to be complete. For so-called local AFSs
we define a variation of usable rules and an extension of argument filterings.
All these techniques have been implemented in the higher-order termination tool
WANDA.

…

Five algebraic notions of termination are formalised, analysed and compared:
wellfoundedness or Noetherity, L\"ob's formula, absence of infinite iteration,
absence of divergence and normalisation. The study is based on modal semirings,
which are additively idempotent semirings with forward and backward modal
operators. To model infinite behaviours, idempotent semirings are extended to
divergence semirings, divergence Kleene algebras and omega algebras. The
resulting notions and techniques are used in calculational proofs of classical
theorems of rewriting theory. These applications show that modal semirings are
powerful tools for reasoning algebraically about the finite and infinite
dynamics of programs and transition systems.
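For a finite transition relation these notions collapse to acyclicity: an infinite iteration exists iff some cycle does. A sketch checking wellfoundedness of a hypothetical successor map by depth-first search (the algebraic, semiring-based treatment in the paper is of course far more general):

```python
def is_wellfounded(succ):
    """A finite relation is wellfounded (Noetherian) iff it has no cycle,
    i.e. no infinite iteration x0 -> x1 -> x2 -> ... exists.

    succ maps each node to an iterable of successors.
    """
    color = {}  # absent: unvisited, 'gray': on the DFS stack, 'black': done

    def dfs(n):
        color[n] = 'gray'
        for m in succ.get(n, ()):
            c = color.get(m)
            if c == 'gray':
                return False  # back edge: a cycle, hence divergence
            if c is None and not dfs(m):
                return False
        color[n] = 'black'
        return True

    return all(color.get(n) == 'black' or dfs(n) for n in succ)
```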

…

Nested words, a model for recursive programs proposed by Alur and Madhusudan,
have recently gained much interest. In this paper we introduce quantitative
extensions and study nested word series which assign to nested words elements
of a semiring. We show that regular nested word series coincide with series
definable in weighted logics as introduced by Droste and Gastin. For this we
establish a connection between nested words and the free bisemigroup. Applying
our result, we obtain characterizations of algebraic formal power series in
terms of weighted logics. This generalizes results of Lautemann, Schwentick and
Th\'erien on context-free languages.
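A nested word can be encoded as a linear word whose positions are tagged as calls, returns, or internal actions, with calls and returns well matched. A sketch of this encoding and of recovering the matching relation (hypothetical tags, and no weights):

```python
def well_matched(nested_word):
    """Check the call/return discipline of a nested word.

    nested_word is a sequence of (tag, letter) pairs with tag in
    {'call', 'ret', 'int'}.
    """
    depth = 0
    for tag, _letter in nested_word:
        if tag == 'call':
            depth += 1
        elif tag == 'ret':
            if depth == 0:
                return False  # unmatched return
            depth -= 1
    return depth == 0  # no pending calls

def matching(nested_word):
    """The matching relation of a well-matched nested word:
    pairs (call position, return position)."""
    stack, pairs = [], []
    for i, (tag, _letter) in enumerate(nested_word):
        if tag == 'call':
            stack.append(i)
        elif tag == 'ret':
            pairs.append((stack.pop(), i))
    return pairs
```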

…

Let \Gamma be a structure with a finite relational signature and a
first-order definition in (R;*,+) with parameters from R, that is, a relational
structure over the real numbers where all relations are semi-algebraic sets. In
this article, we study the computational complexity of the constraint
satisfaction problem (CSP) for \Gamma: the problem of deciding whether a given
primitive positive sentence is true in \Gamma. We focus on those structures \Gamma that
contain the relations \leq, {(x,y,z) | x+y=z} and {1}. Hence, all CSPs studied
in this article are at least as expressive as the feasibility problem for
linear programs. The central concept in our investigation is essential
convexity: a relation S is essentially convex if for all a,b \in S, there are
only finitely many points on the line segment between a and b that are not in
S. If \Gamma contains a relation S that is not essentially convex and this is
witnessed by rational points a,b, then we show that the CSP for \Gamma is
NP-hard. Furthermore, we characterize essentially convex relations in logical
terms. This different view may open up new ways for identifying tractable
classes of semi-algebraic CSPs. For instance, we show that if \Gamma is a
first-order expansion of (R;*,+), then the CSP for \Gamma can be solved in
polynomial time if and only if all relations in \Gamma are essentially convex
(unless P=NP).
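The hardness criterion can be explored computationally in one dimension: given S as a membership predicate and rational a, b in S, enumerating rational points on the segment that fall outside S exhibits witnesses against essential convexity. A sketch with a hypothetical predicate; since only denominators up to a bound are examined, it can refute, never prove, essential convexity:

```python
from fractions import Fraction
from math import gcd

def non_convexity_witnesses(in_S, a, b, max_den=10):
    """Rational points strictly between a and b that are not in S."""
    witnesses = []
    for q in range(2, max_den + 1):
        for p in range(1, q):
            if gcd(p, q) != 1:
                continue  # skip duplicate fractions
            x = a + Fraction(p, q) * (b - a)
            if not in_S(x):
                witnesses.append(x)
    return witnesses

# Hypothetical example: S = {x : x <= 0 or x >= 1} excludes the whole open
# segment between the rational points a = -1 and b = 2 of S, so it is not
# essentially convex (infinitely many excluded points).
in_S = lambda x: x <= 0 or x >= 1
wit = non_convexity_witnesses(in_S, Fraction(-1), Fraction(2))
```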

…

This paper describes a formalization of discrete real closed fields in the
Coq proof assistant. This abstract structure captures for instance the theory
of real algebraic numbers, a decidable subset of real numbers with good
algorithmic properties. The theory of real algebraic numbers and more generally
of semi-algebraic varieties is at the core of a number of effective methods in
real analysis, including decision procedures for non-linear arithmetic and
optimization methods for real-valued functions. After defining an abstract
structure of discrete real closed field and the elementary theory of real roots
of polynomials, we describe the formalization of an algebraic proof of
quantifier elimination based on pseudo-remainder sequences following the
standard computer algebra literature on the topic. This formalization covers a
large part of the theory which underlies the efficient algorithms implemented
in practice in computer algebra. The success of this work paves the way for
formal certification of these efficient methods.
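The pseudo-remainder sequences at the heart of this formalization generalize Sturm's classical root-counting method: the number of distinct real roots of p in (a, b] equals the drop in sign variations of the Sturm chain between a and b. A small sketch over the rationals, using plain remainders with exact Fraction arithmetic rather than pseudo-remainders:

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (lists are low degree first)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def polyval(p, x):
    return sum(c * x**i for i, c in enumerate(p))

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def polyrem(a, b):
    """Remainder of polynomial division of a by b."""
    a = trim(a[:])
    while len(a) >= len(b):
        coef, d = a[-1] / b[-1], len(a) - len(b)
        for i, c in enumerate(b):
            a[i + d] -= coef * c
        a = trim(a)
        if not a:
            break
    return a

def sturm_chain(p):
    """p, p', then negated remainders until a constant is reached."""
    chain = [trim(p[:]), trim(deriv(p))]
    while len(chain[-1]) > 1:
        r = polyrem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return [q for q in chain if q]

def count_real_roots(p, a, b):
    """Distinct real roots of p in the interval (a, b]."""
    chain = sturm_chain([Fraction(c) for c in p])

    def variations(x):
        signs = [s for s in (polyval(q, x) for q in chain) if s != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if (s < 0) != (t < 0))

    return variations(a) - variations(b)
```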

…

Algebraic effects are computational effects that can be represented by an
equational theory whose operations produce the effects at hand. The free model
of this theory induces the expected computational monad for the corresponding
effect. Algebraic effects include exceptions, state, nondeterminism,
interactive input/output, and time, and their combinations. Exception handling,
however, has so far received no algebraic treatment.
We present such a treatment, in which each handler yields a model of the
theory for exceptions, and each handling construct yields the homomorphism
induced by the universal property of the free model. We further generalise
exception handlers to arbitrary algebraic effects. The resulting programming
construct includes many previously unrelated examples from both theory and
practice, including relabelling and restriction in Milner's CCS, timeout,
rollback, and stream redirection.
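The generalisation can be sketched operationally: a computation is a tree of operation calls, and a handler interprets each operation with access to the (already handled) continuation, so discarding it handles an exception while invoking it resumes the computation. A toy free-model sketch with hypothetical names, far removed from the categorical treatment in the paper:

```python
class Return:
    """A finished computation carrying a value."""
    def __init__(self, value):
        self.value = value

class Op:
    """An operation call: name, parameter, and continuation."""
    def __init__(self, name, arg, cont):
        self.name, self.arg, self.cont = name, arg, cont

def handle(comp, clauses, ret=lambda v: v):
    """Fold a computation tree through a handler.

    Each clause receives the operation's argument and the handled
    continuation k; exception-style clauses simply ignore k.
    """
    if isinstance(comp, Return):
        return ret(comp.value)
    k = lambda x: handle(comp.cont(x), clauses, ret)
    return clauses[comp.name](comp.arg, k)

# An exception handler discards the continuation ...
failing = Op('raise', 'div0', lambda _x: Return(None))
recovered = handle(failing, {'raise': lambda exc, k: -1})

# ... while an effect such as output resumes it.
printing = Op('print', 'hi', lambda _x: Return(3))
resumed = handle(printing, {'print': lambda msg, k: k(None)})
```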

…

Using coalgebraic methods, we extend Conway's theory of games to possibly
non-terminating, i.e. non-wellfounded games (hypergames). We take the view that
a play which goes on forever is a draw, and hence rather than focussing on
winning strategies, we focus on non-losing strategies. Hypergames are a
fruitful metaphor for non-terminating processes, Conway's sum being similar to
shuffling. We develop a theory of hypergames, which extends in a non-trivial
way Conway's theory; in particular, we generalize Conway's results on game
determinacy and characterization of strategies. Hypergames have a rather
interesting theory, already in the case of impartial hypergames, for which we
give a compositional semantics, in terms of a generalized Grundy-Sprague
function and a system of generalized Nim games. Equivalences and congruences on
games and hypergames are discussed. We indicate a number of intriguing
directions for future work. We briefly compare hypergames with other notions of
games used in computer science.
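In the classical, wellfounded setting the Grundy-Sprague function is computed by the minimum-excludant recursion, and a sum of impartial games is lost by the player to move iff the XOR of the components' Grundy values is zero; it is this picture that the paper generalizes to non-wellfounded hypergames. A finite sketch, with a hypothetical subtraction game as example:

```python
def mex(values):
    """Minimum excludant: least non-negative integer not in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

def grundy(pos, moves, memo=None):
    """Grundy-Sprague value of a position in a finite impartial game.

    `moves` maps a position to the positions reachable in one move;
    positions with no moves are lost for the player to move (value 0).
    """
    memo = {} if memo is None else memo
    if pos not in memo:
        memo[pos] = mex({grundy(p, moves, memo) for p in moves(pos)})
    return memo[pos]

# Hypothetical example: take 1 or 2 tokens from a heap of n.
take_1_or_2 = lambda n: [n - k for k in (1, 2) if n - k >= 0]
```

For this game the Grundy value of a heap of n is n mod 3; a sum of heaps of sizes 3 and 4 has XOR 0 ^ 1 = 1, so the first player wins.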

…

C*-algebras form rather general and rich mathematical structures that can be studied with different morphisms (preserving multiplication, or not), and with different properties (commutative, or not). These various options can be used to incorporate various styles of computation (set-theoretic, probabilistic, quantum) inside categories of C*-algebras. This paper concentrates on the commutative case and shows that there are functors from several Kleisli categories, of monads that are relevant to model probabilistic computations, to categories of C*-algebras. This yields a new probabilistic version of Gelfand duality, involving the “Radon” monad on the category of compact Hausdorff spaces. We also show that a commutative C*-algebra is isomorphic to the space of convex continuous functionals from its state space to the complex numbers. This allows us to obtain an appropriately commuting state-and-effect triangle for commutative C*-algebras.

…

In this paper we revise and simplify Simpson and Schr\"oder's notion of
observationally induced algebra, introduced for the purpose of modelling
computational effects, for the particular case where the ambient category is
given by classical domain theory. As examples of the general framework we
consider the various powerdomains. For the particular case of the Plotkin
powerdomain the general recipe leads to a somewhat unexpected result which,
however, makes sense from a Computer Science perspective. We analyze this
"deviation" and show how to reobtain the original Plotkin powerdomain by
imposing further conditions previously considered by R.~Heckmann and
J.~Goubault-Larrecq.

…
