# States and exceptions considered as dual effects



Jean-Guillaume Dumas∗, Dominique Duval†, Laurent Fousse‡, Jean-Claude Reynaud§

May 19, 2011

## Abstract

In this paper we consider the two major computational effects of states and exceptions,

from the point of view of diagrammatic logics. We get a surprising result: there exists a symmetry

between these two effects, based on the well-known categorical duality between products and coproducts.

More precisely, the lookup and update operations for states are respectively dual to the throw and catch

operations for exceptions. This symmetry is deeply hidden in the programming languages; in order to

unveil it, we start from the monadic equational logic and we add progressively the logical features which

are necessary for dealing with either effect. This approach gives rise to a new point of view on states and

exceptions, which bypasses the problems due to the non-algebraicity of handling exceptions.

## Introduction

In this paper we consider two major computational effects: states and exceptions. We get a surprising

result: there exists a symmetry between these two effects, based on the well-known categorical duality

between products and coproducts (or sums).

In order to get these results we use the categorical approach of diagrammatic logics, as introduced

in [Duval 2003] and developed in [Domínguez & Duval 2010]. For instance, in [Dumas et al. 2011] this approach is used for studying an issue related to computational effects: controlling the order of evaluation of the arguments of a function. This paper provides one more application of diagrammatic logics to computational effects; a preliminary approach can be found in [Duval & Reynaud 2005].

To our knowledge, the first categorical treatment of computational effects is due to Moggi [Moggi 1989, Moggi 1991]; this approach relies on monads and is implemented in the programming language Haskell [Wadler 1992, Haskell]. Although monads are not used in this paper, the basic ideas underlying our approach rely on Moggi's remarks about notions of computations and monads. In view of comparing Moggi's approach and ours, let us quote [Moggi 1991, section 1]:

The basic idea behind the categorical semantics below is that, in order to interpret a programming language in a category C, we distinguish the object A of values (of type A) from the object TA of computations (of type A), and take as denotations of programs (of type A) the elements of TA. In particular, we identify the type A with the object of values (of type A) and obtain the object of computations (of type A) by applying an unary type-constructor T to A. We call T a notion of computation, since it abstracts away from the type of values computations may produce. There are many choices for TA corresponding to different notions of computations. [...] Since the denotation of programs of type B are supposed to be elements of TB, programs of type B with a parameter of type A ought to be interpreted by morphisms with codomain TB, but for their domain there are two alternatives, either A or TA, depending on whether parameters of type A are identified with values or computations of type A.

∗ LJK, Université de Grenoble, France. Jean-Guillaume.Dumas@imag.fr

† LJK, Université de Grenoble, France. Dominique.Duval@imag.fr

‡ LJK, Université de Grenoble, France. Laurent.Fousse@imag.fr

§ Malhivert, Claix, France. Jean-Claude.Reynaud@imag.fr


hal-00445873, version 4 - 19 May 2011


We choose the first alternative, because it entails the second. Indeed computations of type A are the same as values of type TA. The examples proposed by Moggi include the side-effects monad TA = (A × S)^S, where S is the set of states, and the exceptions monad TA = A + E, where E is the set of exceptions.
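As a concrete illustration (ours, not the paper's), both monads can be sketched in a few lines of Python, representing a stateful computation of type A as a function S → A × S and an exceptional computation as a tagged value in A + E:

```python
# A minimal sketch of Moggi's two example monads.
# States: a computation of type A is a function St -> (A, St), i.e. TA = (A x S)^S.
# Exceptions: a computation of type A is either a value of A or an exception, TA = A + E.

def state_unit(a):
    """Value a seen as a stateful computation: it leaves the state unchanged."""
    return lambda s: (a, s)

def state_bind(m, f):
    """Sequence m : S -> (A, S) with f : A -> (S -> (B, S))."""
    def run(s):
        a, s1 = m(s)
        return f(a)(s1)
    return run

def exc_unit(a):
    """Value a seen as a non-exceptional computation in A + E."""
    return ("ok", a)

def exc_bind(m, f):
    """Sequence in A + E: exceptions are propagated, values are passed to f."""
    tag, v = m
    return f(v) if tag == "ok" else m

# a value is a trivial computation
assert state_unit(5)(0) == (5, 0)

# increment a single integer state, then read it
inc = lambda s: (None, s + 1)
read = lambda s: (s, s)
assert state_bind(inc, lambda _: read)(0) == (1, 1)

# an exception short-circuits the rest of the computation
boom = ("exc", "division by zero")
assert exc_bind(boom, lambda v: exc_unit(v + 1)) == boom
assert exc_bind(exc_unit(2), lambda v: exc_unit(v + 1)) == ("ok", 3)
```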

Later on, using the correspondence between monads and algebraic theories, Plotkin and Power proposed

to use Lawvere theories for dealing with the operations and equations related to computational effects

[Plotkin & Power 2002, Hyland & Power 2007]. The operations lookup and update are related to states, and

the operations raise and handle are related to exceptions. In this framework, an operation is called algebraic

when it satisfies some relevant genericity properties. It happens that lookup, update and raise are algebraic,

while handle is not [Plotkin & Power 2003]. It follows that the handling of exceptions is quite difficult to

formalize in this framework; several solutions are proposed in [Schröder & Mossakowski 2004, Levy 2006, Plotkin & Pretnar 2009]. In these papers, the duality between states and exceptions does not show up. One reason might be that, as we will see in this paper, exception catching is encapsulated in several nested conditionals which hide this duality.

Let us look more closely at the monad of exceptions TA = A + E. According to the point of view

of monads for effects, a morphism from A to TB provides a denotation for a program of type B with a

parameter of type A. Such a program may raise an exception, by mapping some a ∈ A to an exception

e ∈ E. In order to catch an exception, it should also be possible to map some e ∈ E to a non-exceptional

value b ∈ B. We formalize this property by choosing the second alternative in Moggi’s discussion: programs

of type B with a parameter of type A are interpreted by morphisms with codomain TB and with domain TA,

where the elements of TA are seen as computations of type A rather than values of type TA. This example

highlights one of the reasons why we generalize Moggi's approach. What is kept, and even emphasized, is

the distinction between several kinds of programs. In fact, for states as well as for exceptions, we distinguish

three kinds of programs, and moreover two kinds of equations. A computational effect is seen as an apparent

lack of soundness: the intended denotational semantics is not sound, in the sense that it does not satisfy the

given axioms, however it becomes sound when some additional information is given.

In order to focus on the effects, our study of states and exceptions is based on a very simple logic: the

monadic equational logic. First we provide a detailed description of the intended denotational semantics

of states and exceptions, using explicitly a set of states and a set of exceptions (claims 1.1 and 1.5). The

duality between states and exceptions derives in an obvious way from our presentation (proposition 1.6).

It is a duality between the lookup and update operations for states, on the one hand, and the key throwing

and catching operations for exceptions, on the other hand. The key part in throwing an exception is the

mapping of some non-exceptional value to an exception, while the key part in catching an exception is the

mapping of some exception to a non-exceptional value. Then these key operations have to be encapsulated in

order to get the usual raising and handling of exceptions: handling exceptions is obtained by encapsulating

the key catching operation inside conditionals. Then we describe the syntax of states and exceptions. The

computational effects lie in the fact that this syntax does not mention any “type of states” or “type of

exceptions”, respectively. There are two variants for this syntax: the intended semantics is not a model of

the apparent syntax, but this lack of soundness is fixed in the decorated syntax by providing some additional

information (propositions 3.5 and 4.7). The duality between states and the key part of exceptions holds at the

syntax level as a duality of effects (theorem 5.1), from which the duality at the semantics level derives easily.

We use three different logics for formalizing each computational effect: the intended semantics is described in

the explicit logic, the apparent syntax in the apparent logic and the decorated syntax in the decorated logic.

The explicit and apparent logics are “usual” logics; in order to focus on the effects we choose two variants

of the monadic equational logic. The framework of diagrammatic logics provides a simple description of

the three logics, including the “unusual” decorated logic; most importantly, it provides a relevant notion of

morphisms for relating these three logics.

The paper is organized as follows. The intended semantics of states and exceptions is given in section 1, and the duality is described at the semantics level. Then a simplified version of the framework of diagrammatic logics for effects is presented in section 2, together with a motivating example in order to introduce the notion of "decoration". Section 3 is devoted to states and section 4 to exceptions. In section 5, the duality is extended to the syntax level. In appendix A, some fundamental properties of states and exceptions are proved in the decorated logic. In this paper, the word "apparent" is used in the sense of "seeming"

(“appearing as such but not necessarily so”).

## 1 States and exceptions: duality of denotational semantics

In this section, the symmetry between states and exceptions is presented as a duality between their intended

denotational semantics (proposition 1.6). The aim of the next sections is to extend this result so as to get

a symmetry between the syntax of states and exceptions, considered as computational effects, from which

the duality between their semantics can be derived (theorem 5.1). In this section we are dealing with sets

and functions; the symbols × and ∏ are used for cartesian products, + and ∐ for disjoint unions; cartesian products are products in the category of sets and disjoint unions are sums or coproducts in this category.

### 1.1 States

Let St denote the set of states. Let Loc denote the set of locations (also called variables or identifiers). For each location i, let Val_i denote the set of possible values for i. For each i ∈ Loc there is a lookup function l_i : St → Val_i for reading the value of location i in the given state. In addition, for each i ∈ Loc there is an update function u_i : Val_i × St → St for setting the value of location i to the given value, without modifying the values of the other locations in the given state. This is summarized as follows. For each i ∈ Loc there are:

• a set Val_i (values)

• two functions l_i : St → Val_i (lookup) and u_i : Val_i × St → St (update)

• and two equalities:

∀a ∈ Val_i, ∀s ∈ St, l_i(u_i(a,s)) = a

∀a ∈ Val_i, ∀s ∈ St, l_j(u_i(a,s)) = l_j(s) for every j ≠ i ∈ Loc    (1)

Let us assume that St = ∏_{i∈Loc} Val_i with the l_i's as projections. Then two states s and s′ are equal if and only if l_i(s) = l_i(s′) for each i, and the equalities (1) form a coinductive definition of the functions u_i.

Claim 1.1. This description provides the intended semantics of states.
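These data can be illustrated by a small Python model (our own sketch: states are represented as dictionaries from locations to values, and update returns a new state), in which the two equalities (1) can be checked directly:

```python
# A toy model of the states St = prod_{i in Loc} Val_i:
# a state is a dict mapping each location to its value.

def lookup(i, s):
    """l_i : St -> Val_i, reads the value of location i."""
    return s[i]

def update(i, a, s):
    """u_i : Val_i x St -> St, sets location i to a (returns a new state)."""
    s2 = dict(s)
    s2[i] = a
    return s2

s = {"i": 0, "j": 42}
# first equality of (1): l_i(u_i(a, s)) = a
assert lookup("i", update("i", 7, s)) == 7
# second equality of (1): l_j(u_i(a, s)) = l_j(s) for j != i
assert lookup("j", update("i", 7, s)) == lookup("j", s)
```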

In [Plotkin & Power 2002] an equational presentation of states is given, with seven families of equations.

In [Melliès 2010] these equations are expressed as follows.

1. Annihilation lookup-update: reading the value of a location i and then updating the location i with the

obtained value is just like doing nothing.

2. Interaction lookup-lookup: reading twice the same location i is the same as reading it once.

3. Interaction update-update: storing a value a and then a value a′ at the same location i is just like storing the value a′ in that location.

4. Interaction update-lookup: when one stores a value a in a location i and then reads the location i, one

gets the value a.

5. Commutation lookup-lookup: The order of reading two different locations i and j does not matter.

6. Commutation update-update: the order of storing in two different locations i and j does not matter.

7. Commutation update-lookup: the order of storing in a location i and reading in another location j does

not matter.


These equations can be translated into our framework as follows, with l_i^(2) : St → Val_i × St defined by l_i^(2)(s) = (l_i(s), s) and prr_{Val_i} : Val_i × St → St by prr_{Val_i}(a,s) = s.

(1) ∀i ∈ Loc, ∀s ∈ St, u_i(l_i^(2)(s)) = s ∈ St

(2) ∀i ∈ Loc, ∀s ∈ St, l_i(prr_{Val_i}(l_i^(2)(s))) = l_i(s) ∈ Val_i

(3) ∀i ∈ Loc, ∀s ∈ St, ∀a, a′ ∈ Val_i, u_i(a′, u_i(a,s)) = u_i(a′, s) ∈ St

(4) ∀i ∈ Loc, ∀s ∈ St, ∀a ∈ Val_i, l_i(u_i(a,s)) = a ∈ Val_i

(5) ∀i ≠ j ∈ Loc, ∀s ∈ St, (l_i(s), l_j(l_i^(2)(s))) = (l_i(l_j^(2)(s)), l_j(s)) ∈ Val_i × Val_j

(6) ∀i ≠ j ∈ Loc, ∀s ∈ St, ∀a ∈ Val_i, ∀b ∈ Val_j, u_j(b, u_i(a,s)) = u_i(a, u_j(b,s)) ∈ St

(7) ∀i ≠ j ∈ Loc, ∀s ∈ St, ∀a ∈ Val_i, l_j^(2)(u_i(a,s)) = (l_j(s), u_i(a,s)) ∈ Val_j × St    (2)

Proposition 1.2. Let us assume that St = ∏_{i∈Loc} Val_i with the l_i's as projections. Then equations (1) and (2) are equivalent.

In fact, we prove that, without the assumption about St, equations (1) are equivalent to equations (2) considered as observational equations: two states s and s′ are observationally equivalent when l_k(s) = l_k(s′) for each location k. These properties are revisited in proposition 3.6 and in appendix A.

Proof. Equations (2) and (5) follow immediately from prr_{Val_i}(l_i^(2)(s)) = s. Equation (4) is the first equation in (1). Equation (7) is (l_j(u_i(a,s)), u_i(a,s)) = (l_j(s), u_i(a,s)), which is equivalent to l_j(u_i(a,s)) = l_j(s): this is the second equation in (1). For the remaining equations (1), (3) and (6), which return states, it is easy to check that by applying l_k to both members and using equations (1) we get the same value in Val_k for each location k.
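For a quick sanity check (ours, in the same dictionary-based representation of states), a few of the seven equations can be tested on concrete values:

```python
# Checking some of the seven equations in the dict-based model of states;
# dict equality plays the role of observational equality (equal values at
# every location).

def lookup(i, s): return s[i]          # l_i
def update(i, a, s): return {**s, i: a}  # u_i (returns a new state)

s = {"i": 1, "j": 2}
# (3) interaction update-update: u_i(a', u_i(a, s)) = u_i(a', s)
assert update("i", 5, update("i", 4, s)) == update("i", 5, s)
# (4) interaction update-lookup: l_i(u_i(a, s)) = a
assert lookup("i", update("i", 4, s)) == 4
# (6) commutation update-update (i != j)
assert update("j", 9, update("i", 4, s)) == update("i", 4, update("j", 9, s))
# (7) commutation update-lookup (i != j): l_j(u_i(a, s)) = l_j(s)
assert lookup("j", update("i", 4, s)) == lookup("j", s)
```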

### 1.2 Exceptions

The syntax for exceptions heavily depends on the language. For instance:

• In ML-like languages there are several exception names, called constructors; the keywords for raising

and handling exceptions are raise and handle, which are used in syntactic constructions like:

`raise i a` and `... handle i a => g(a) | j b => h(b) | ...`

where i,j are exception constructors, a,b are parameters and g,h are functions.

• In Java there are several exception types; the keywords for raising and handling exceptions are throw

and try-catch which are used in syntactic constructions like:

`throw new i(a)` and `try { ... } catch (i a) g catch (j b) h ...`

where i,j are exception types, a,b are parameters and g,h are programs.

In spite of the differences in the syntax, the semantics of exceptions is rather similar in many languages.

A major point is that there are two kinds of values: the ordinary (i.e., non-exceptional) values and the

exceptions; it follows that the operations may be classified according to the way they may, or may not,

interchange these two kinds of values.

First let us focus on the raising of exceptions. Let Exc denote the set of exceptions. Let ExCstr denote the set of exception constructors. For each exception constructor i, there is a set of parameters Par_i and a function t_i : Par_i → Exc for building the exception t_i(a) of constructor i with the given parameter a ∈ Par_i, called the key throwing function. Then the function raise_{i,Y} : Par_i → Y + Exc for raising (or throwing) an exception of constructor i into a type Y is made of the key throwing function t_i followed by the inclusion inr_Y : Exc → Y + Exc:

raise_{i,Y} = throw_{i,Y} = inr_Y ◦ t_i : Par_i → Y + Exc

(Diagram (3): the triangle formed by t_i : Par_i → Exc, inr_Y : Exc → Y + Exc and raise_{i,Y} : Par_i → Y + Exc commutes.)


Claim 1.3. The function t_i : Par_i → Exc is the key function for throwing an exception: in the construction of the raising function raise_{i,Y}, only t_i turns a non-exceptional value a ∈ Par_i into an exception t_i(a) ∈ Exc.

Given a function f : X → Y + Exc and an element x ∈ X, if f(x) = raise_{i,Y}(a) ∈ Y + Exc for some a ∈ Par_i then one says that f(x) raises an exception of constructor i with parameter a into Y. One says that a function f : X + Exc → Y + Exc propagates exceptions when it is the identity on Exc. Clearly, any function f : X → Y + Exc can be extended by propagating exceptions: the extended function Ppg(f) : X + Exc → Y + Exc coincides with f on X and with the identity on Exc.
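A minimal Python sketch (our own encoding: an exception is a pair (constructor, parameter), and elements of Y + Exc are tagged ("val", y) or ("exc", e)) of the key throwing function, of raise, and of the extension Ppg(f):

```python
# Key throwing and exception propagation, in a tagged-union encoding.

def t(i, a):
    """Key throwing t_i : Par_i -> Exc; an exception is a pair (i, a)."""
    return (i, a)

def raise_(i, a):
    """raise_{i,Y} = inr_Y o t_i : Par_i -> Y + Exc."""
    return ("exc", t(i, a))

def Ppg(f):
    """Extend f : X -> Y + Exc to X + Exc -> Y + Exc by propagating exceptions."""
    def g(x):
        tag, v = x
        return x if tag == "exc" else f(v)
    return g

# a function that raises an exception on negative inputs
f = lambda x: ("val", x + 1) if x >= 0 else raise_("neg", x)
g = Ppg(f)
assert g(("val", 2)) == ("val", 3)
assert g(("val", -2)) == ("exc", ("neg", -2))
assert g(("exc", ("neg", -5))) == ("exc", ("neg", -5))  # identity on Exc
```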

Now let us study the handling of exceptions, starting from its description in Java [Java, Ch. 14].

A try statement without a finally block is executed by first executing the try block. Then there is a choice:

1. If execution of the try block completes normally, then no further action is taken and the try statement

completes normally.

2. If execution of the try block completes abruptly because of a throw of a value V , then there is a choice:

(a) If the run-time type of V is assignable to the parameter of any catch clause of the try statement,

then the first (leftmost) such catch clause is selected. The value V is assigned to the parameter of

the selected catch clause, and the block of that catch clause is executed.

i. If that block completes normally, then the try statement completes normally;

ii. if that block completes abruptly for any reason, then the try statement completes abruptly for

the same reason.

(b) If the run-time type of V is not assignable to the parameter of any catch clause of the try statement,

then the try statement completes abruptly because of a throw of the value V .

3. If execution of the try block completes abruptly for any other reason, then the try statement completes

abruptly for the same reason.

In fact, points 2(a)i and 2(a)ii can be merged. Our treatment of exceptions is similar to the one in Java when execution of the try block completes normally (point 1) or completes abruptly because of a throw of an exception of constructor i ∈ ExCstr (point 2). Thus, for handling exceptions of constructors i_1, ..., i_n raised by some function f : X → Y + Exc, using functions g_1 : Par_{i_1} → Y + Exc, ..., g_n : Par_{i_n} → Y + Exc, for every n ≥ 1, the handling process builds a function:

f handle i_1⇒g_1 | ... | i_n⇒g_n = try{f} catch i_1{g_1} catch i_2{g_2} ... catch i_n{g_n}

which may be seen, equivalently, either as a function from X to Y + Exc or as a function from X + Exc to Y + Exc which propagates the exceptions. We choose the second case, and we use compact notations:

f handle (i_k⇒g_k)_{1≤k≤n} = try{f} catch i_k{g_k}_{1≤k≤n} : X + Exc → Y + Exc

This function can be defined as follows.

For each x ∈ X + Exc, (f handle (i_k⇒g_k)_{1≤k≤n})(x) ∈ Y + Exc is defined by:

```
if x ∈ Exc then return x ∈ Exc ⊆ Y + Exc;
// now x is not an exception
compute y := f(x) ∈ Y + Exc;
if y ∈ Y then return y ∈ Y ⊆ Y + Exc;
// now y is an exception
for k = 1..n repeat
    if y = t_{i_k}(a) for some a ∈ Par_{i_k} then return g_k(a) ∈ Y + Exc;
// now y is an exception not constructed from any i ∈ {i_1, ..., i_n}
return y ∈ Exc ⊆ Y + Exc.
```
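The pseudocode above can be transcribed directly in Python (same tagged encoding as before; the names handle, handlers, f, g are ours):

```python
# Direct transcription of the handling pseudocode:
# elements of Y + Exc are ("val", y) or ("exc", (i, a)).

def handle(f, handlers):
    """f : X -> Y + Exc; handlers: list of (constructor, g_k).
    Returns a function X + Exc -> Y + Exc that propagates exceptions."""
    def h(x):
        tag, v = x
        if tag == "exc":                 # x is already an exception: propagate
            return x
        y = f(v)                         # compute y := f(x)
        if y[0] == "val":                # normal completion
            return y
        i, a = y[1]                      # y is an exception of constructor i
        for ik, gk in handlers:
            if i == ik:                  # first matching catch clause
                return gk(a)
        return y                         # unhandled: propagate
    return h

f = lambda x: ("val", 10 // x) if x != 0 else ("exc", ("div", x))
h = handle(f, [("div", lambda a: ("val", 0))])
assert h(("val", 5)) == ("val", 2)
assert h(("val", 0)) == ("val", 0)       # caught and turned into a value
assert h(("exc", ("other", 1))) == ("exc", ("other", 1))
```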


In order to express more clearly the appearance of the parameter a when y is an exception of constructor i_k, we introduce for each i ∈ ExCstr the function c_i : Exc → Par_i + Exc, called the key catching function, defined as follows:

For each e ∈ Exc, c_i(e) ∈ Par_i + Exc is defined by:

```
if e = t_i(a) then return a ∈ Par_i ⊆ Par_i + Exc;
// now e is an exception not constructed from i
return e ∈ Exc ⊆ Par_i + Exc.
```

This means that the function c_i tests whether the given exception e has constructor i; if so then it catches the exception by returning the parameter a ∈ Par_i such that e = t_i(a), otherwise c_i propagates the exception e. Using the key catching function c_i, the definition of the handling function can be restated as follows, with the three nested conditionals numbered from the innermost to the outermost, for future use.

For each x ∈ X + Exc, (f handle (i_k⇒g_k)_{1≤k≤n})(x) ∈ Y + Exc is defined by:

```
(3) if x ∈ Exc then return x ∈ Exc ⊆ Y + Exc;
// now x is not an exception
compute y := f(x) ∈ Y + Exc;
(2) if y ∈ Y then return y ∈ Y ⊆ Y + Exc;
// now y is an exception
for k = 1..n repeat
    compute y := c_{i_k}(y) ∈ Par_{i_k} + Exc;
    (1) if y ∈ Par_{i_k} then return g_k(y) ∈ Y + Exc;
// now y is an exception not constructed from any i ∈ {i_1, ..., i_n}
return y ∈ Exc ⊆ Y + Exc.
```

Note that whenever several i's are equal in (i_1, ..., i_n), only the first corresponding g_k may be used.
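The key catching function c_i and the restated handler can be sketched in the same encoding (again our own illustration; the tags "par" and "exc" play the role of the two summands of Par_i + Exc):

```python
# The key catching function and the handler rebuilt on top of it.

def c(i, e):
    """c_i : Exc -> Par_i + Exc; catches constructor i, propagates otherwise."""
    j, a = e
    return ("par", a) if j == i else ("exc", e)

def handle(f, handlers):
    """f handle (i_k => g_k), defined through the key catching functions."""
    def h(x):
        if x[0] == "exc":                # conditional (3): propagate
            return x
        y = f(x[1])
        if y[0] == "val":                # conditional (2): normal completion
            return y
        e = y[1]
        for ik, gk in handlers:
            r = c(ik, e)                 # key catching
            if r[0] == "par":            # conditional (1): caught
                return gk(r[1])
        return ("exc", e)                # unhandled: propagate
    return h

f = lambda x: ("exc", ("i1", x))
h = handle(f, [("i2", lambda a: ("val", -a)), ("i1", lambda a: ("val", a + 1))])
assert h(("val", 3)) == ("val", 4)       # caught by the i1 clause
assert h(("exc", ("i1", 0))) == ("exc", ("i1", 0))  # already an exception
```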

Claim 1.4. The function c_i : Exc → Par_i + Exc is the key function for catching an exception: in the construction of the handling function (f handle i ⇒ g), only c_i may turn an exception e ∈ Exc into a non-exceptional value c_i(e) ∈ Par_i; the other parts of the construction propagate all exceptions.

The definition of the handling function is illustrated by the following diagrams; each diagram corresponds to one of the three nested conditionals, from the innermost to the outermost. The inclusions are denoted by inl_A : A → A + Exc and inr_A : Exc → A + Exc (subscripts may be dropped) and for every a : A → B and e : Exc → B the corresponding conditional is denoted by [a | e] : A + Exc → B; it is characterized by the equalities [a | e] ◦ inl_A = a and [a | e] ◦ inr_A = e.

1. The catching functions catch i_k{g_k}_{p≤k≤n} : Exc → Y + Exc are defined recursively by:

catch i_k{g_k}_{p≤k≤n} = [g_n | inr_Y] ◦ c_{i_n}  when p = n

catch i_k{g_k}_{p≤k≤n} = [g_p | catch i_k{g_k}_{p+1≤k≤n}] ◦ c_{i_p}  when p < n

(Diagram (4): catch i_k{g_k}_{p≤k≤n} is the composite [g_p | ...] ◦ c_{i_p} : Exc → Y + Exc, where ... stands for inr_Y when p = n and for catch i_k{g_k}_{p+1≤k≤n} when p < n.)
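The recursive definition of the catching functions can be sketched as follows (our encoding; the list handlers holds the pairs (i_k, g_k) for p ≤ k ≤ n):

```python
# Recursive construction of catch i_k{g_k}_{p<=k<=n} : Exc -> Y + Exc.

def c(i, e):
    """Key catching c_i : Exc -> Par_i + Exc."""
    j, a = e
    return ("par", a) if j == i else ("exc", e)

def catch(handlers):
    """handlers: non-empty list of (i_k, g_k); returns Exc -> Y + Exc."""
    (ip, gp), rest = handlers[0], handlers[1:]
    def k(e):
        r = c(ip, e)
        if r[0] == "par":
            return gp(r[1])              # [g_p | ...] on the Par_{i_p} branch
        # Exc branch: inr_Y when p = n, recursive call when p < n
        return catch(rest)(e) if rest else ("exc", e)
    return k

k = catch([("i1", lambda a: ("val", a)), ("i2", lambda a: ("val", -a))])
assert k(("i1", 5)) == ("val", 5)
assert k(("i2", 5)) == ("val", -5)
assert k(("i3", 5)) == ("exc", ("i3", 5))  # propagated
```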


2. Then the function H : X → Y + Exc, which defines the handling function on non-exceptional values, is defined as:

H = [inl_Y | catch i_k{g_k}_{1≤k≤n}] ◦ f : X → Y + Exc

(Diagram (5): H is obtained by composing f : X → Y + Exc with the conditional [inl_Y | catch i_k{g_k}_{1≤k≤n}] : Y + Exc → Y + Exc.)

3. Finally the handling function is the extension of H which propagates exceptions:

try{f} catch i_k{g_k}_{1≤k≤n} = [H | inr_Y] : X + Exc → Y + Exc

(Diagram (6): try{f} catch i_k{g_k}_{1≤k≤n} : X + Exc → Y + Exc coincides with H on X and with inr_Y on Exc.)
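The three steps can be assembled exactly as above in a short sketch (ours): the conditional [a | e] becomes a case analysis on the tag, and try{f}catch is [H | inr]:

```python
# Assembling the handler: [a | e] : A + Exc -> B, then H = [inl | catch] o f,
# then try{f}catch = [H | inr] : X + Exc -> Y + Exc.

def cond(a, e):
    """The conditional [a | e], on tags "val" (A) and "exc" (Exc)."""
    return lambda x: a(x[1]) if x[0] == "val" else e(x[1])

inl = lambda y: ("val", y)
inr = lambda e: ("exc", e)

def try_catch(f, catcher):
    """f : X -> Y + Exc, catcher : Exc -> Y + Exc."""
    H = lambda x: cond(inl, catcher)(f(x))   # H = [inl | catch] o f, on X
    return cond(H, inr)                      # [H | inr] : X + Exc -> Y + Exc

f = lambda x: ("exc", ("i1", x))
catcher = lambda e: ("val", e[1] + 1) if e[0] == "i1" else ("exc", e)
h = try_catch(f, catcher)
assert h(("val", 3)) == ("val", 4)           # raised by f, caught by catcher
assert h(("exc", ("i9", 0))) == ("exc", ("i9", 0))  # propagated by [H | inr]
```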

The next claim is based on our previous analysis of Java exceptions; it is also related to the notion of

monadic reflection in [Filinski 1994].

Claim 1.5. This description provides the intended semantics of exceptions.

Let us come back to the key operations t_i and c_i for throwing and catching exceptions. For each i ∈ ExCstr there are:

• a set Par_i (parameters)

• two functions t_i : Par_i → Exc (key throwing) and c_i : Exc → Par_i + Exc (key catching)

• and two equalities:

∀a ∈ Par_i, c_i(t_i(a)) = a ∈ Par_i ⊆ Par_i + Exc

∀b ∈ Par_j, c_i(t_j(b)) = t_j(b) ∈ Exc ⊆ Par_i + Exc for every j ≠ i ∈ ExCstr    (7)

This means that, given an exception e of the form t_i(a), the corresponding key catcher c_i recovers the non-exceptional value a while the other key catchers propagate the exception e. Let us assume that Exc = ∐_{i∈ExCstr} Par_i with the t_i's as coprojections. Then the equalities (7) form an inductive definition of the functions c_i.

### 1.3 States and exceptions: the duality

Figure 1 recapitulates the properties of the functions lookup (l_i) and update (u_i) for states on the left, and the functions key throw (t_i) and key catch (c_i) for exceptions on the right. Intuitively: for looking up the value of a location i, only the previous updating of this location is necessary, and dually, when throwing an exception of constructor i only the next catcher for this constructor is necessary (see section 5.2). The next result follows immediately from figure 1.


| States | Exceptions |
| --- | --- |
| i ∈ Loc, Val_i, St (= ∏_{i∈Loc} Val_i) | i ∈ ExCstr, Par_i, Exc (= ∐_{i∈ExCstr} Par_i) |
| cartesian products: Val_i ←prl_i− Val_i × St −prr_i→ St | disjoint unions: Par_i −inl_i→ Par_i + Exc ←inr_i− Exc |
| l_i : St → Val_i | Exc ← Par_i : t_i |
| u_i : Val_i × St → St | Par_i + Exc ← Exc : c_i |
| l_i ◦ u_i = prl_i : Val_i × St → Val_i | c_i ◦ t_i = inl_i : Par_i → Par_i + Exc |
| l_j ◦ u_i = l_j ◦ prr_i : Val_i × St → Val_j (j ≠ i) | c_i ◦ t_j = inr_i ◦ t_j : Par_j → Par_i + Exc (j ≠ i) |

Figure 1: Duality of semantics

Proposition 1.6. The well-known duality between categorical products and coproducts extends to a duality between the semantics of the lookup and update functions for states on one side, and the semantics of the key throwing and catching functions for exceptions on the other.

It would be unfair to consider states and exceptions only from this denotational point of view. Indeed,

states and exceptions are computational effects, which do not appear explicitly in the syntax: in an imperative

language there is no type of states, and in a language with exceptions the type of exceptions that may be

raised by a program is not seen as a return type for this program. In fact, our result (theorem 5.1) is that

there is a duality between states and exceptions considered as computational effects, which provides the

above duality (proposition 1.6) between their semantics.

## 2 Computational effects

In sections 3 and 4 we will deal with states and exceptions as computational effects. In this section, we present

our point of view on computational effects. First a motivating example from object-oriented programming

is given, then a simplified version of the framework of diagrammatic logics is presented, and finally this

framework is applied to effects.

### 2.1 An example

In this section we use a toy example dealing with the state of an object in an object-oriented language, in order to outline our approach to computational effects. Let us build a class BankAccount for managing (very simple!) bank accounts. We use the types int and void, and we assume that int is interpreted as the set of integers ℤ and void as a singleton {⋆}. In the class BankAccount, there is a method balance() which returns the current balance of the account and a method deposit(x) for the deposit of x Euros on the account. The deposit method is a modifier, which means that it can use and modify the state of the current account. The balance method is an inspector, or an accessor, which means that it can use the state of the


current account but it is not allowed to modify this state. In the object-oriented language C++, a method is called a member function; by default a member function is a modifier, and when it is an accessor it is called a constant member function, declared with the keyword const. So, the C++ syntax for declaring the member functions of the class BankAccount looks like:

```cpp
int balance () const ;
void deposit (int) ;
```

Forgetting the keyword const, this piece of C++ syntax can be translated as a signature Σ_bank,app, which we call the apparent signature:

Σ_bank,app:  balance : void → int,  deposit : int → void    (8)

In a model (or algebra) of the signature Σ_bank,app, the operations would be interpreted as functions:

[[balance]] : {⋆} → ℤ,  [[deposit]] : ℤ → {⋆}

which clearly is not the intended interpretation.

In order to get the right semantics, we may use another signature Σ_bank,expl, which we call the explicit signature, with a new symbol state for the "type of states":

Σ_bank,expl:  balance : state → int,  deposit : int × state → state    (9)

The intended interpretation is a model of the explicit signature Σ_bank,expl, with St denoting the set of states of a bank account:

[[balance]] : St → ℤ,  [[deposit]] : ℤ × St → St

So far, in this example, we have considered two different signatures. On the one hand, the apparent signature Σ_bank,app is simple and quite close to the C++ code, but the intended semantics is not a model of Σ_bank,app. On the other hand, the semantics is a model of the explicit signature Σ_bank,expl, but Σ_bank,expl is far from the C++ syntax: actually, the very nature of the object-oriented language is lost by introducing a "type of states". Let us now define a decorated signature Σ_bank,deco, which is still closer to the C++ code than the apparent signature and which has a model corresponding to the intended semantics. The decorated signature is not exactly a signature in the classical sense, because there is a classification of its operations. This classification is provided by superscripts called decorations: the decorations "(1)" and "(2)" correspond respectively to the object-oriented notions of accessor and modifier.

Σ_bank,deco:  balance^(1) : void → int,  deposit^(2) : int → void    (10)

The decorated signature is similar to the C++ code, with the decoration “(1)” corresponding to the keyword

“const”. In addition, we claim that the intended semantics can be seen as a decorated model of this decorated

signature.

In order to add to the signature the constants of type int like 0, 1, 2, ... and the usual operations on integers, a third decoration is used: the decoration "(0)" for pure functions, that is, functions which neither inspect nor modify the state of the bank account. So, we add to the apparent and explicit signatures the constants 0, 1, ... : void → int and the operations +, -, ∗ : int × int → int, and we add to the decorated signature the pure constants 0^(0), 1^(0), ... : void → int and the pure operations +^(0), -^(0), ∗^(0) : int × int → int. For instance, in the C++ expressions

deposit(7); balance() and 7 + balance()


composition is expressed in several different ways: in the functional way f(a), in the infix way a f b and in the imperative way c; c′. In the explicit signature, these expressions can be seen as the terms balance ◦ deposit ◦ (7 × id_state) and + ◦ (7 × balance), with void × state identified with state:

state ≃ void × state −7 × id_state→ int × state −deposit→ state −balance→ int

state ≃ void × state −7 × balance→ int × int −+→ int

In the decorated signature, they can be seen as the decorated terms balance(1) ◦ deposit(2) ◦ 7(0) and +(0) ◦ ⟨7(0), balance(1)⟩:

void −−7(0)−→ int −−deposit(2)−→ void −−balance(1)−→ int

void −−⟨7(0),balance(1)⟩−→ int × int −−+(0)−→ int

These two expressions have different effects: the first one is a modifier while the second one is an accessor;

however, both return the same result (an integer). We introduce the symbol ∼ for the relation “same result,

maybe distinct effects”; the relation ∼ will be considered as a decorated version of the equality.

balance(1) ◦ deposit(2) ◦ 7(0) ∼ +(0) ◦ ⟨7(0), balance(1)⟩
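In C++ terms, the weak equation says that, from any given state, both expressions evaluate to the same integer, while only the first one changes the state. A self-contained sketch (the toy bank-account class and the two function names are our own):

```cpp
// Hypothetical sketch of the weak equation ~: same result, distinct effects.
class BankAccount {
    int state = 0;
public:
    int  balance() const { return state; }
    void deposit(int x)  { state += x; }
};

// balance ∘ deposit ∘ 7 : a modifier expression
int modifierExpr(BankAccount& a) { a.deposit(7); return a.balance(); }

// + ∘ ⟨7, balance⟩ : an accessor expression (note the "const")
int accessorExpr(const BankAccount& a) { return 7 + a.balance(); }
```

Started from the same initial state, the two expressions return the same integer, but only the first leaves the account in a new state.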

2.2 Simplified diagrammatic logics

In this paper, as in [Domínguez & Duval 2010] and [Dumas et al. 2011], we use the point of view of diagrammatic logics for dealing with computational effects. One fundamental feature of the theory of diagrammatic logics is the distinction between a logical theory and its presentations (or specifications). This is the usual point of view in the framework of algebraic specifications [Ehrig & Mahr 1985], but not always in logic, as mentioned by F.W. Lawvere in his foreword to [Adámek et al. 2011]: “Yet many works in general algebra (and model theory generally) continue anachronistically to confuse a presentation in terms of signatures with the presented theory itself.” A second fundamental feature of the theory of diagrammatic logics is the definition of a rich family of morphisms of logics. Computational effects, from our point of view, heavily depend on some morphisms of logics. Thus, in this paper, in order to focus on states and exceptions as effects, we use a simplified version of diagrammatic logics by dropping the distinction between a logical theory and its presentations. It is only in remark 2.9 that we give some hints about non-simplified diagrammatic logics. On the other hand, with the same goal of focusing on states and exceptions as effects, in sections 3 and 4 the base logic is the very simple (multi-sorted) monadic equational logic, where a theory is made of types, unary terms and equations. We will occasionally mention the equational logic, where in addition a theory may have terms of any finite arity. In order to keep the syntactic aspect of the logics, we use a congruence relation between terms rather than equality; in the denotational semantics, this congruence is usually interpreted as equality.

Definition 2.1. A simplified diagrammatic logic is a category T with colimits; its objects are called the

T-theories and its morphisms the morphisms of T-theories. A morphism of simplified diagrammatic logics

F : T → T′ is a left adjoint functor. This yields the category of simplified diagrammatic logics.

Example 2.2 (Monadic equational logic). A monadic equational theory might be called a “syntactic cat-

egory”: it is a category where the axioms hold only up to some congruence relation. Precisely, a monadic

equational theory is a directed graph (its vertices are called objects or types and its edges are called morphisms

or terms) with an identity term idX : X → X for each type X and a composed term g ◦ f : X → Z for each pair of consecutive terms (f : X → Y, g : Y → Z); in addition it is endowed with equations f ≡ g : X → Y

that form an equivalence relation on parallel terms, denoted by ≡, which is a congruence with respect to

the composition and such that the associativity and identity axioms hold up to congruence. This definition

of the monadic equational logic can be described by a set of inference rules, as in figure 2. A morphism of

monadic equational theories might be called a “syntactic functor”: it maps types to types, terms to terms

and equations to equations.


(comp)     f : X → Y    g : Y → Z  ⊢  g ◦ f : X → Z

(id)       X  ⊢  idX : X → X

(assoc)    f : X → Y    g : Y → Z    h : Z → W  ⊢  h ◦ (g ◦ f) ≡ (h ◦ g) ◦ f

(id-src)   f : X → Y  ⊢  f ◦ idX ≡ f

(id-tgt)   f : X → Y  ⊢  idY ◦ f ≡ f

(≡-refl)   ⊢  f ≡ f

(≡-sym)    f ≡ g  ⊢  g ≡ f

(≡-trans)  f ≡ g    g ≡ h  ⊢  f ≡ h

(≡-subs)   f : X → Y    g1 ≡ g2 : Y → Z  ⊢  g1 ◦ f ≡ g2 ◦ f : X → Z

(≡-repl)   f1 ≡ f2 : X → Y    g : Y → Z  ⊢  g ◦ f1 ≡ g ◦ f2 : X → Z

Figure 2: Rules of the monadic equational logic

Example 2.3 (Equational logic). An equational theory might be called a “syntactic category with finite products”. Precisely, an equational theory is a monadic equational theory with in addition, for each finite family (Yi)1≤i≤n of types, a product (up to congruence) made of a cone (qi : ∏j=1..n Yj → Yi)1≤i≤n such that for each cone (fi : X → Yi)1≤i≤n with the same base there is a term ⟨f1,...,fn⟩ : X → ∏j=1..n Yj such that qi ◦ ⟨f1,...,fn⟩ ≡ fi for each i, and whenever some g : X → ∏j=1..n Yj is such that qi ◦ g ≡ fi for each i then g ≡ ⟨f1,...,fn⟩. When n = 0 this means that in an equational theory there is a terminal type 𝟙 such that for each type X there is a term ⟨⟩X : X → 𝟙, which is unique up to congruence in the sense that every g : X → 𝟙 satisfies g ≡ ⟨⟩X. A morphism of equational theories is a morphism of monadic equational theories which preserves products. This definition can be described by a set of inference rules, as in figure 3. When there are several parts in the conclusion of a rule, this must be understood as a conjunction (which might be avoided by writing several rules). The monadic equational logic may be seen as the restriction of the equational logic to terms with exactly one “variable”. The functor which maps each monadic equational theory to its generated equational theory is a morphism of simplified diagrammatic logics, with right adjoint the forgetful functor.
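For n = 2, the tupling operation and its characteristic equations can be sketched at the level of sets (an illustration of our own, with int standing in for all types):

```cpp
#include <functional>
#include <utility>

// Sketch of a binary product cone: projections q1, q2 and the tupling
// ⟨f1, f2⟩, satisfying qi ∘ ⟨f1, f2⟩ ≡ fi.
using Fun  = std::function<int(int)>;
using Pair = std::pair<int, int>;

int q1(Pair p) { return p.first;  }  // projection q1
int q2(Pair p) { return p.second; }  // projection q2

std::function<Pair(int)> tupling(Fun f1, Fun f2) {  // ⟨f1, f2⟩
    return [f1, f2](int x) { return Pair{f1(x), f2(x)}; };
}
```

The uniqueness condition says that any g with q1 ∘ g ≡ f1 and q2 ∘ g ≡ f2 is congruent to this tupling.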

Given a simplified diagrammatic logic, we define the associated notions of model and inference system.

We often write “logic” instead of “simplified diagrammatic logic”.

Definition 2.4. Let T be a logic and let Φ and Θ be T-theories. A model of Φ in Θ is a morphism M from Φ to Θ in T. Then the triple Λ = (Φ, Θ, M) is a language on T, with syntax Φ and semantics M. The set of models of Φ in Θ is denoted by ModT(Φ, Θ).

Remark 2.5. The definitions are such that every simplified diagrammatic logic T has the soundness prop-

erty: in every language, the semantics is a model of the syntax.

Definition 2.6. Let T be a logic. An inference rule is a morphism ρ : C → H in T; then H is the hypothesis and C is the conclusion of the rule ρ. Let Φ0 and Φ be T-theories. An instance of Φ0 in Φ is a morphism κ : Φ0 → Φ in T. The inference step applying a rule ρ : C → H to an instance κ : H → Φ of H in Φ is the composition in T, which builds the instance κ ◦ ρ : C → Φ of C in Φ.


Rules of the monadic equational logic, and for each n ∈ ℕ:

Y1 ... Yn  ⊢  (qi : ∏j=1..n Yj → Yi)1≤i≤n

(qi : ∏j=1..n Yj → Yi)1≤i≤n    (fi : X → Yi)1≤i≤n  ⊢  ⟨f1,...,fn⟩ : X → ∏j=1..n Yj ,   ∀i  qi ◦ ⟨f1,...,fn⟩ ≡ fi

(qi : ∏j=1..n Yj → Yi)1≤i≤n    (fi : X → Yi)1≤i≤n    g : X → ∏j=1..n Yj    ∀i  qi ◦ g ≡ fi  ⊢  g ≡ ⟨f1,...,fn⟩

i.e., when n = 0:

⊢  𝟙          X  ⊢  ⟨⟩X : X → 𝟙          g : X → 𝟙  ⊢  g ≡ ⟨⟩X

Figure 3: Rules of the equational logic

Remark 2.7. The rule ρ : C → H may be represented in the usual way as a “fraction” H / ρ(C), or as H1,...,Hk / ρ(C) when H is the colimit of several theories, see example 2.8. In addition, in [Domínguez & Duval 2010] it is explained why an inference rule written in the usual way as a “fraction” H / ρ(C) is really a fraction in the categorical sense of [Gabriel & Zisman 1967], but with H on the denominator side and C on the numerator side.

Example 2.8 (Composition rule). Let us consider the equational logic Teq, as in example 2.3. The category of sets can be seen as an equational theory Θset, with the equalities as equations and the cartesian products as products. Let us define the equational theory “of integers” Φint as the equational theory generated by a type I, three terms z : 𝟙 → I and s, p : I → I, and two equations s ◦ p ≡ idI and p ◦ s ≡ idI. Then there is a unique model Mint of Φint in Θset which interprets the sort I as the set ℤ of integers, the constant term z as 0 and the terms s and p as the functions x ↦ x + 1 and x ↦ x − 1. In the equational logic Teq, let us consider the composition rule:

f : X → Y    g : Y → Z  ⊢  g ◦ f : X → Z

Let H be the equational theory generated by three types X, Y, Z and two consecutive terms f : X → Y, g : Y → Z; let C be the equational theory generated by two types T, T′ and a term t : T → T′. The composition rule corresponds to the morphism of equational theories from C to H which maps t to g ◦ f. Let us consider the instance κ of H in Φint which maps f and g respectively to z and s; then the inference step applying the composition rule to this instance κ builds the instance of C in Φint which maps t to s ◦ z, as required. Moreover, H can be obtained as the pushout of H1 (generated by X, Y and f : X → Y) and H2 (generated by Y, Z and g : Y → Z) over their common part (the equational theory generated by Y). Then the instance κ of H in Φint can be built from the instance κ1 of H1 in Φint mapping f to z and the instance κ2 of H2 in Φint mapping g to s.
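The model Mint can be checked directly in code; a small sketch of our own, with int standing in for ℤ:

```cpp
// The generators of Φint, interpreted by the model Mint: I ↦ int,
// z ↦ 0, s ↦ successor, p ↦ predecessor.
int z()      { return 0; }
int s(int x) { return x + 1; }
int p(int x) { return x - 1; }

// The inference step of the composition rule builds the term s ∘ z:
int s_after_z() { return s(z()); }
```

The two equations of Φint, s ◦ p ≡ idI and p ◦ s ≡ idI, hold in this model, and the derived term s ◦ z is interpreted as the integer 1.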

Remark 2.9. In this simplified version of diagrammatic logic, the morphisms of theories serve for many

purposes. However in the non-simplified version there is a distinction between theories and their presentations

(called specifications), which results in more subtle definitions. This is outlined here; more details can be found in [Domínguez & Duval 2010]. This will not be used in the next sections. As usual, a locally presentable

category is a category C which is equivalent to the category of set-valued realizations (or models) of a

limit sketch [Gabriel & Ulmer 1971]. In addition, a functor F : C1 → C2 which is the left adjoint to the

precomposition with some morphism of limit sketches [Ehresmann 1968] will be called a locally presentable

functor.


• A diagrammatic logic is defined as a locally presentable functor L : S → T such that its right adjoint

R is full and faithful. This means that L is a localization, up to an equivalence of categories: it

consists of adding inverse morphisms for some morphisms, constraining them to become isomorphisms

[Gabriel & Zisman 1967]. The categories S and T are called the category of specifications and the

category of theories, respectively, of the diagrammatic logic L. A specification Σ presents a theory Θ

if Θ is isomorphic to L(Σ). The fact that R is full and faithful means that every theory Θ, when seen

as a specification R(Θ), presents itself.

• A model M of a specification Σ in a theory Θ is a morphism of theories M : LΣ → Θ or equivalently,

thanks to the adjunction, a morphism of specifications M : Σ → RΘ.

• An entailment is a morphism τ in S such that Lτ is invertible in T; a similar notion can be found

in [Makkai 1997]. An instance κ of a specification Σ0 in a specification Σ is a cospan in S made of

a morphism σ : Σ0 → Σ′ and an entailment τ : Σ → Σ′. It is also called a fraction with numerator

σ and denominator τ [Gabriel & Zisman 1967]. The instances can be composed in the usual way as

cospans, thanks to pushouts in S. This forms the bicategory of instances of the logic, and T is, up

to equivalence, the quotient category of this bicategory. An inference rule ρ with hypothesis H and

conclusion C is an instance of C in H. Then an inference step is a composition of fractions.

• An inference system for a diagrammatic logic L is a morphism of limit sketches which gives rise to

the locally presentable functor L. The elementary inference rules are the rules in the image of the

inference system by the Yoneda contravariant functor. Then a derivation, or proof, is the description

of a fraction in terms of elementary inference rules.

• A morphism of logics F : L1 → L2, where L1 : S1 → T1 and L2 : S2 → T2, is a pair of locally presentable functors (FS, FT) with FS : S1 → S2 and FT : T1 → T2, together with a natural isomorphism FT ◦ L1 ≅ L2 ◦ FS induced by a commutative square of limit sketches.

2.3 Diagrammatic logics for effects

Now let us come back to computational effects. Our point of view is that a language with computational

effect is a kind of language with an apparent lack of soundness: a language with computational effect is made

of a syntax, called the apparent syntax, and a semantics which (in general) is not a model of the apparent

syntax, together with some additional information which may be added to the apparent syntax in order to

get another syntax, called the decorated syntax, such that the semantics is a model of the decorated syntax.

This approach leads to a new point of view about effects, which can be seen as a generalization of the point

of view of monads: the distinction between values and computations provided by the monad can be seen

as a kind of decoration. In our framework every logic is sound (remark 2.5), and a computational effect is

defined with respect to a span of logics, that is, a pair of morphisms of logics with the same domain.

Definition 2.10. Let Z be a span in the category of simplified diagrammatic logics:

    Tapp  ←−−Fapp−−  Tdeco  −−Fexpl−→  Texpl

We call Tapp the apparent logic, Tdeco the decorated logic and Texpl the explicit logic. Let Gexpl denote the right adjoint of Fexpl. A language with effect with respect to Z is a language Λdeco = (Φdeco, Θdeco, Mdeco) in Tdeco together with a theory Θexpl in Texpl such that Θdeco = Gexpl Θexpl. The apparent syntax of Λdeco is Φapp = Fapp Φdeco in Tapp. The expansion of Λdeco is the language Λexpl = (Φexpl, Θexpl, Mexpl) in Texpl with Φexpl = Fexpl Φdeco and Mexpl = ϕ Mdeco, where ϕ : ModTdeco(Φdeco, Θdeco) → ModTexpl(Φexpl, Θexpl) is the bijection provided by the adjunction Fexpl ⊣ Gexpl.


    Φapp  ←−Fapp−−  Φdeco  −−Mdeco−→  Θdeco
                      │                  ↑
                    Fexpl             Gexpl
                      ↓                  │
                    Φexpl  −−Mexpl−→  Θexpl

Remark 2.11. Since a language with effect Λdeco is defined as a language on Tdeco, according to remark 2.5 it is sound. Similarly, the expansion Λexpl of Λdeco is a language on Texpl, hence it is sound. Both languages are equivalent from the point of view of semantics, thanks to the bijection ϕ. This may be used for formalizing a computational effect when the decorated syntax corresponds to the programs while the explicit syntax does not, as in the bank account example in section 2.1.

Remark 2.12. It is tempting to look for a language Λapp = (Φapp, Θapp, Mapp) on Tapp, where Φapp = Fapp Φdeco is the apparent syntax of Λdeco. However, in general such a language does not exist (as for instance in remark 3.4).

3 States

In the syntax of an imperative language there is no type of states (the state is “hidden”), while the interpretation of this language involves a set of states St. More precisely, if the types X and Y are interpreted as the sets [[X]] and [[Y]], then each term f : X → Y is interpreted as a function [[f]] : [[X]] × St → [[Y]] × St. In Moggi’s papers introducing monads for effects [Moggi 1989, Moggi 1991] such a term f : X → Y is called a computation, and whenever the function [[f]] is [[f]](0) × idSt for some [[f]](0) : [[X]] → [[Y]] then f is called a value. We keep this distinction, using modifier and pure term instead of computation and value, respectively. In addition, an accessor (or inspector) is a term f : X → Y that is interpreted by a function [[f]] = ⟨[[f]](1), prr[[X]]⟩, for some [[f]](1) : [[X]] × St → [[Y]], where prr[[X]] : [[X]] × St → St is the projection. It follows that every pure term is an accessor and every accessor is a modifier. We will use the decorations (0), (1) and (2), written as superscripts, for pure terms, accessors and modifiers, respectively. Moreover, we distinguish two kinds of equations: when f, g : X → Y are parallel terms, a strong equation f ≡ g is interpreted as the equality [[f]] = [[g]] : [[X]] × St → [[Y]] × St, while a weak equation f ∼ g is interpreted as the equality prl[[Y]] ◦ [[f]] = prl[[Y]] ◦ [[g]] : [[X]] × St → [[Y]], where prl[[Y]] : [[Y]] × St → [[Y]] is the projection. Clearly, both notions coincide on accessors, hence on pure terms.
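The three interpretations can be sketched as explicit state-passing functions. Here is a minimal illustration of our own, taking St = int (a single memory cell) and [[X]] = [[Y]] = int; the three function names are hypothetical:

```cpp
#include <utility>

// Explicit state-passing semantics, with St = int (an assumption for
// this sketch). A modifier is any function [[X]]×St → [[Y]]×St.
using St  = int;
using Val = std::pair<int, St>;  // a result paired with the resulting state

Val pureSem(Val xs)     { return { xs.first + 1, xs.second }; } // (0): [[f]](0) × idSt
Val accessorSem(Val xs) { return { xs.second, xs.second };    } // (1): ⟨[[f]](1), prr⟩
Val modifierSem(Val xs) { return { xs.first, xs.first };      } // (2): may change the state
```

The pure term never looks at the state; the accessor reads it but returns it unchanged; the modifier may overwrite it.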

3.1 A span of logics for states

Let Loc be a given set, called the set of locations. Let us define a span of logics for dealing with states (with

respect to the set of locations Loc) denoted by Zst:

    Tapp,st  ←−−Fapp,st−−  Tdeco,st  −−Fexpl,st−→  Texpl,st

In this section the subscript “st” will be omitted. First the decorated logic is defined, then the apparent logic

and the morphism Fapp, and finally the explicit logic and the morphism Fexpl. For each logic the definition

of the morphisms of theories is omitted, since it derives in a natural way from the definition of the theories.

In order to focus on the fundamental properties of states as effects, these logics are based on the monadic

equational logic (as in example 2.2).

The logic Tdeco is the decorated monadic equational logic for states (with respect to Loc), defined as follows. A theory Θdeco for this logic is made of:

• Three nested monadic equational theories Θ(0) ⊆ Θ(1) ⊆ Θ(2) with the same types, such that the congruence on Θ(0) and on Θ(1) is the restriction of the congruence ≡ on Θ(2). The objects of any of


the three categories are called the types of the theory; the terms in Θ(2) are called the modifiers, those in Θ(1) may be called the accessors, and those in Θ(0) may be called the pure terms. The relations f ≡ g are called the strong equations.

• An equivalence relation ∼ between parallel terms, which satisfies the properties of substitution and

pure replacement (defined in figure 4). The relations f ∼ g are called the weak equations. Every strong

equation is a weak equation and every weak equation between accessors is a strong equation.

• A distinguished type 𝟙 which has the following decorated terminality property: for each type X there is a pure term ⟨⟩X : X → 𝟙 such that every modifier f : X → 𝟙 satisfies f ∼ ⟨⟩X.

• And Θ may have decorated products on Loc, where a decorated product on Loc is defined as a cone of accessors (qi : Y → Yi)i∈Loc such that for each cone of accessors (fi : X → Yi)i∈Loc with the same base there is a modifier ⟨fj⟩j∈Loc : X → Y such that qi ◦ ⟨fj⟩j∈Loc ∼ fi for each i, and whenever some modifier g : X → Y is such that qi ◦ g ∼ fi for each i then g ≡ ⟨fj⟩j∈Loc.

Figure 4 provides the decorated rules for states, which describe the properties of the decorated theories. We use the following conventions: X, Y, Z, ... are types, f, g, h, ... are terms, f(0) means that f is a pure term, f(1) means that f is an accessor, and similarly f(2) means that f is a modifier (this is always the case, but the decoration may be used for emphasis). Decoration hypotheses may be grouped with other hypotheses: for instance, “f(1) ∼ g(1)” means “f(1) and g(1) and f ∼ g”. A decorated product on Loc is denoted by (qi(1) : ∏j Yj → Yi)i∈Loc.

Remark 3.1. There is no general replacement rule for weak equations: if f1 ∼ f2 : X → Y and g : Y → Z then in general g ◦ f1 ̸∼ g ◦ f2, except when g is pure.
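The failure of replacement can be illustrated with the explicit state-passing semantics (a sketch of our own, with St = int): f1 and f2 below return the same result from every state, so f1 ∼ f2, yet post-composing the accessor g separates them, while post-composing a pure function does not:

```cpp
#include <utility>

using St  = int;
using Val = std::pair<int, St>;

Val f1(St s)  { return { 7, s };  }  // pure: returns 7, keeps the state
Val f2(St)    { return { 7, 99 }; }  // modifier: returns 7, overwrites the state

Val g(Val xs)     { return { xs.second, xs.second };   }  // accessor: reads the state
Val gpure(Val xs) { return { xs.first + 1, xs.second }; } // pure: ignores the state
```

Since g inspects the state, it detects the hidden effect of f2 even though the results of f1 and f2 agree.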

Example 3.2. Let us derive the following rule, which says that ⟨⟩X is the unique accessor from X to 𝟙, up to strong equations:

(≡-final)    f(1) : X → 𝟙  ⊢  f ≡ ⟨⟩X

The derivation tree is:

                    X
           ──────────────────── (0-final)
            ⟨⟩X(0) : X → 𝟙
           ──────────────────── (0-to-1)       f : X → 𝟙
                ⟨⟩X(1)                        ──────────── (∼-final)
    f(1)                                       f ∼ ⟨⟩X
    ─────────────────────────────────────────────────────── (∼-to-≡)
                         f ≡ ⟨⟩X

Now let us describe the “apparent” side of the span. The logic Tapp extends the monadic equational logic as follows: a theory of Tapp is a monadic equational theory with a terminal object 𝟙 which may have products on Loc (i.e., with their base indexed by Loc). The morphism Fapp : Tdeco → Tapp maps each theory Θdeco of Tdeco to the theory Θapp of Tapp made of:

• A type X̃ for each type X in Θdeco.

• A term f̃ : X̃ → Ỹ for each modifier f : X → Y in Θdeco (which includes the accessors and the pure terms), such that (idX)~ = idX̃ for each type X and (g ◦ f)~ = g̃ ◦ f̃ for each pair of consecutive modifiers (f, g).

• An equation f̃ ≡ g̃ for each weak equation f ∼ g in Θdeco (which includes the strong equations).

• A product (q̃i : ∏j Ỹj → Ỹi)i∈Loc for each decorated product (qi(1) : ∏j Yj → Yi)i∈Loc in Θdeco.
