Under consideration for publication in J. Functional Programming

Categorical semantics for arrows

Bart Jacobs, Chris Heunen, Ichiro Hasuo

Institute for Computing and Information Sciences

Radboud University, Nijmegen, the Netherlands

(e-mail: {b.jacobs,c.heunen,i.hasuo}@cs.ru.nl)

Abstract

Arrows are an extension of the well-established notion of a monad in functional programming languages. This article presents several examples and constructions, and develops denotational semantics of arrows as monoids in categories of bifunctors C^op × C → C. Observing similarities to monads – which are monoids in categories of endofunctors C → C – it then considers Eilenberg-Moore and Kleisli constructions for arrows. The latter yields Freyd categories, mathematically formulating the folklore claim "arrows are Freyd categories".

Contents

1 Introduction
2 Haskell examples
   2.1 Monads
   2.2 Arrows
   2.3 Monads versus arrows
3 Arrow constructions and examples
4 Categorical formulation
   4.1 Analysing arrow behaviour categorically
   4.2 Monoidal structure in the ambient category
   4.3 Internal strength
   4.4 The categorical definition
5 Biarrows
6 Kleisli and Eilenberg-Moore constructions for arrows
   6.1 Arrows are Freyd categories
   6.2 Eilenberg-Moore algebras for arrows
   6.3 Freyd is Kleisli, for arrows
7 Conclusion
A Coends
B Bicategorical characterisation
C 2-categorical details in the Kleisli construction for arrows
References


1 Introduction

The motivation to introduce the concept of an arrow comes from functional programming (Hughes, 2000; Paterson, 2001). It is intended as a uniform interface to certain types of computations, streamlining the infrastructure. This enables a high level of abstraction to uniformly capture for instance quantum computing (Vizzotto et al., 2006+). It also facilitates language extensions like secure information flow (Li & Zdancewic, 2008): instead of building a domain-specific programming language from the ground up, it can be defined within normal Haskell using the arrow interface. After all, arrows provide an abstract interface supporting familiar programming constructs like composition, conditional branches and iteration. Haskell even incorporates convenient syntax to ease the use of such language extensions. The name "arrow" reflects the focus on the provided infrastructure, especially compositionality.[1]

[1] In a categorical context, the name is a bit unfortunate, however. We consistently use "arrow" for the programming construct and "morphism" for the categorical notion.

Here is a more mathematical intuition. Monoids are probably the most fundamental mathematical structures used in computer science. The basic example (A, ;, skip) is given by a set A ∈ Set of programs or actions, with sequential composition ; as binary operation, and an empty statement skip as neutral element for composition. Such a monoid A does not capture input and output. We may like to add it via parametrisation A(X,Y), where X,Y are type variables. Since input is contravariant and output covariant, we may consider such an indexed monoid A(−,+) as a bifunctor C^op × C → Set for a suitable category C of types for input and output. But of course, we still want it to have a monoid structure for composition. Hence we are led to consider monoids in the functor category C^op × C → Set. Our first main result – stemming from (Heunen & Jacobs, 2006) – is that such monoids are in fact arrows as introduced by Hughes.

A special case of the above is when there is only output, but no input: these singly indexed monoids are (categorical) monads. They correspond to the well-known notion of a monad in Haskell (Moggi, 1989; Wadler, 1992). Arrows are thus similar to monads in that both are monoids in suitable categories: monads live in categories of endofunctors C → C. Hence we are led to ask: what are the Eilenberg-Moore and Kleisli constructions – two very basic constructions on monads – for arrows? Our second main result – from (Jacobs & Hasuo, 2006) – is that the Kleisli construction for arrows corresponds to Freyd categories (Robinson & Power, 1997), and moreover the correspondence is isomorphic. Thus, to the folklore claim "arrows are Freyd categories" that we put in precise terms, we add the slogan "Freyd is Kleisli, for arrows".

These main results are streamlined versions of (Heunen & Jacobs, 2006) and (Jacobs & Hasuo, 2006). The current article proceeds as follows. In Section 2 we introduce the concepts of monads and arrows in Haskell more thoroughly, gradually moving towards a more mathematical mindset instead of a functional programming perspective. We also motivate why one can in fact achieve more with arrows than


with monads, and give settings where this is useful. Section 3 investigates, still in a somewhat discursive style, combinations of arrows. It leads up to a deconstruction into elementary parts of the particular program that motivated Hughes to use arrows in the first place (Swierstra & Duponcheel, 1996). The formal, categorical, analysis of arrows takes place in Section 4, culminating in our first main result mentioned above, Corollary 4.1. An example showing the elegance of this approach is discussed in Section 5, namely arrows facilitating bidirectional computation. Section 6 then considers algebra constructions for arrows, and contains the second main result, Theorem 6.2. We conclude in Section 7. Appendix A contains a proof of a result used in Section 4 but only sketched there. Next, Appendix B considers a bicategorical characterisation of the notion of arrow that elegantly exemplifies its naturality, but is somewhat out of the scope of the main line of this article. Finally, Appendix C gives the missing details of Section 6.

2 Haskell examples

This section introduces arrows and their use in functional programming languages. We briefly consider monads first in Subsection 2.1, since this construction from category theory historically paved the way for arrows (Subsection 2.2). Subsection 2.3 then considers the advantages of arrows over monads.

2.1 Monads

A major reason for the initial reluctance to adopt functional programming languages is the need to pass state data around explicitly. Monadic programming provides an answer to this inconvenience (Moggi, 1989; Wadler, 1992). Through the use of a monad one can encapsulate the changes to the state data, the "side-effects", without explicitly carrying states around. Monads can efficiently structure functional programs while improving genericity. This mechanism is even deemed important enough to be incorporated into Haskell syntax (Peyton Jones, 2003). A monad in Haskell is defined as a so-called type class:


class Monad M where
  return :: X → M X
  (>>=)  :: M X → (X → M Y) → M Y

To ensure the desired behaviour, the programmer herself should prove certain monad laws about the operations return and >>= (pronounced bind). These boil down to the axioms that M be a monad, in the categorical sense. Using the formulation that is standard in the functional programming community, a categorical monad consists of a mapping X ↦ M(X) on types, together with "return" and "bind" functions

X --rt--> MX,    (X → MY) --bd--> (MX → MY)

satisfying

bd(f) ◦ rt = f,    bd(rt) = id,    bd(f) ◦ bd(g) = bd(bd(f) ◦ g).


In categorical style one defines M to be a functor, with multiplication maps µ = bd(id_MX) : M²X → MX satisfying suitable laws. The above equations are more convenient for equational reasoning. Often one writes u >>= f for bd(f)(u).

The most familiar monads are powerset, list, lift, state and distribution:

P:          rt(x) = {x}          bd(f)(a) = ∪{f(x) | x ∈ a}
(−)*:       rt(x) = ⟨x⟩          bd(f)(⟨x1,...,xn⟩) = f(x1) · ... · f(xn)
1 + (−):    rt(x) = up(x)        bd(f)(v) = ⊥ if v = ⊥, and f(x) if v = up(x)
(− × S)^S:  rt(x) = λs.⟨x,s⟩     bd(f)(h) = λs. f(π1 h(s))(π2 h(s))
D:          rt(x) = λy. (1 if x = y, 0 if x ≠ y)
                                 bd(f)(ϕ) = λy. Σ_x ϕ(x) · f(x)(y)

In the latter case we write D for the 'subdistribution' monad D(X) = {ϕ : X → [0,1] | supp(ϕ) is finite and Σ_x ϕ(x) ≤ 1}, where the support supp(ϕ) is the set of x ∈ X with ϕ(x) > 0.

Monads are often considered with strength, i.e. come equipped with a suitable natural transformation st : M(X) × Y → M(X × Y). For later reference, we use that in our present informal setting each functor M is strong, as its strength can be described explicitly as:

st(u,y) = M(λx.⟨x,y⟩)(u).    (1)

It satisfies the following basic equations:

M(f × g) ◦ st = st ◦ (M(f) × g),
M(π1) ◦ st = π1,
M(α⁻¹) ◦ st = st ◦ (st × id) ◦ α⁻¹,

where we use πi : X1 × X2 → Xi and α : (X × Y) × Z ≅ X × (Y × Z) for the familiar product maps fst, snd and assoc.

In other, non-set-theoretic settings one may have to require such strength maps explicitly. The monad operations interact appropriately with the above strength map, in the sense that the following equations hold:

st ◦ (rt × id) = rt,    st ◦ (bd(f) × g) = bd(st ◦ (f × g)) ◦ st.

In effect, monads are thus functional combinators. They enable the combination of functions very generally, without many assumptions on the precise functions to combine. However, the assumptions that remain are severe enough to exclude certain classes of libraries from implementation with a monadic interface.
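In code, the rt/bd formulation can be sketched for the lift monad 1 + (−), i.e. Haskell's Maybe; the names rt, bd, st and safeHalf below are local to this sketch, not from the paper:

```haskell
-- A sketch of the rt/bd presentation of a monad, for the lift monad
-- 1 + (-) (Haskell's Maybe). All names are local to this example.
rt :: a -> Maybe a
rt = Just

bd :: (a -> Maybe b) -> Maybe a -> Maybe b
bd f (Just x) = f x
bd _ Nothing  = Nothing

-- The canonical strength: st(u, y) = M(\x -> (x, y))(u)
st :: (Maybe a, b) -> Maybe (a, b)
st (u, y) = fmap (\x -> (x, y)) u

safeHalf :: Int -> Maybe Int
safeHalf n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (bd safeHalf (rt 4) == safeHalf 4)   -- bd(f) . rt = f, so True
  print (bd rt (Just 3) == Just (3 :: Int))  -- bd(rt) = id, so True
  print (st (Just (1 :: Int), 'y'))          -- Just (1,'y')
```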

2.2 Arrows

Arrows are even more general functional combinators, and can be seen as a generalisation of monads (Hughes, 2000; Hughes, 2005). An arrow in Haskell is a type class of the form

class Arrow A where
  arr   :: (X → Y) → A X Y
  (>>>) :: A X Y → A Y Z → A X Z
  first :: A X Y → A (X,Z) (Y,Z)

where (X,Z) in Haskell denotes the Cartesian product type X × Z. Analogous to monads, an arrow must furthermore satisfy the following arrow laws, the proof of which is up to the programmer.

(a >>> b) >>> c = a >>> (b >>> c),    (2)
arr (g ◦ f) = arr f >>> arr g,    (3)
arr id >>> a = a = a >>> arr id,    (4)
first a >>> arr π1 = arr π1 >>> a,    (5)
first a >>> arr (id × f) = arr (id × f) >>> first a,    (6)
first (first a) >>> arr α = arr α >>> first a,    (7)
first (arr f) = arr (f × id),    (8)
first (a >>> b) = first a >>> first b.    (9)

In fact, as Section 6.1 shows, less structure than Cartesian products suffices, eliminating the need for projections πi in the above arrow laws. Sometimes, arr(id × f) is written as second(arr(f)), where

second(a) = arr(γ) >>> first(a) >>> arr(γ),

and γ : X × Y ≅ Y × X is the well-known swap map. The arrow laws (2)–(9) are sometimes given names (Paterson, 2003). Especially noteworthy are the names "exchange" for (6) and "extension" for (8).

Examples of arrows will be given in Section 3.
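A minimal sketch of this interface in plain Haskell, with ordinary functions as the simplest instance; the class below is a local stand-in for the library one, so the file needs only the Prelude:

```haskell
-- A local sketch of the arrow interface from the text, instantiated
-- with ordinary functions (the simplest arrow).
infixr 1 >>>
class Arrow a where
  arr   :: (x -> y) -> a x y
  (>>>) :: a x y -> a y z -> a x z
  first :: a x y -> a (x, z) (y, z)

newtype Fun x y = Fun { runFun :: x -> y }

instance Arrow Fun where
  arr f           = Fun f
  Fun f >>> Fun g = Fun (g . f)
  first (Fun f)   = Fun (\(x, z) -> (f x, z))

main :: IO ()
main = do
  -- law (8) on a sample input: first (arr f) = arr (f × id)
  print (runFun (first (arr (+1))) (3 :: Int, "ctx"))   -- (4,"ctx")
  -- law (3) on a sample input: arr (g . f) = arr f >>> arr g
  print (runFun (arr ((*2) . (+1))) (5 :: Int) == runFun (arr (+1) >>> arr (*2)) 5)
```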

2.3 Monads versus arrows

This article is concerned with a categorical understanding of this notion of arrow. At this stage we shall reveal some of the structure involved, but are deliberately a bit vague about the general setting in which we are working. In doing so we move to a more mathematical notation, for instance writing A(X,Y) for A X Y in functional style.

It is not hard to show that an arrow is "bifunctorial" (Lemma 4.1). This means that for f : X' → X and g : Y → Y' one also has a map A(X,Y) → A(X',Y'). The maps arr : Y^X → A(X,Y) then form natural transformations (Lemma 4.2). Even more, composition can also be seen as a natural transformation A ⊗ A → A, for a suitable tensor product ⊗ of bifunctors (Proposition 4.2). In this way one can describe the triple (A, arr, >>>) as a monoid in a category of bifunctors. Here we shall not need these details yet. But in the remainder of this section we shall introduce arrows as bifunctors of the form C^op × C → Set.


Here is a first trivial example. Let (P, m, e) be a monoid, consisting of an associative operation m : P × P → P with two-sided unit e ∈ P. It yields probably the most elementary example of an arrow, namely a constant one. We shall also write it as P, formally as the functor P(X,Y) = P, with operations:

arr(f) = e,    a >>> b = m(a,b),    first(a) = a.

Standard examples of monoids P are the singleton type 1 (with trivial operations), the type 2 = {0,1} of truth values or Booleans (with either conjunction ⊤,∧ or disjunction ⊥,∨), or the type X* of lists of an arbitrary type X (with the empty list ⟨⟩ and concatenation ·).

Every monad (M, rt, bd) with a strength gives rise to an arrow M by

M(X,Y) = M(Y)^X,    (10)

with obvious operations (see e.g. Hughes, 2000) – strength is used to provide the operation first.
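Construction (10) can be sketched for the lift monad Maybe; the Kleisli wrapper and safeDiv below are local illustrations, with first obtained from the canonical strength:

```haskell
-- Sketch of construction (10): a strong monad M yields an arrow
-- M(X,Y) = M(Y)^X, shown for the lift monad Maybe. The class and
-- all names are local to this example.
infixr 1 >>>
class Arrow a where
  arr   :: (x -> y) -> a x y
  (>>>) :: a x y -> a y z -> a x z
  first :: a x y -> a (x, z) (y, z)

newtype Kleisli x y = Kleisli { runKleisli :: x -> Maybe y }

instance Arrow Kleisli where
  arr f                   = Kleisli (Just . f)
  Kleisli f >>> Kleisli g = Kleisli (\x -> f x >>= g)
  -- 'first' uses the strength st(u, z) = fmap (\y -> (y, z)) u
  first (Kleisli f)       = Kleisli (\(x, z) -> fmap (\y -> (y, z)) (f x))

safeDiv :: Int -> Kleisli Int Int
safeDiv n = Kleisli (\d -> if d == 0 then Nothing else Just (n `div` d))

main :: IO ()
main = do
  print (runKleisli (safeDiv 10 >>> arr (+1)) 2)    -- Just 6
  print (runKleisli (safeDiv 10 >>> arr (+1)) 0)    -- Nothing
  print (runKleisli (first (safeDiv 10)) (5, "z"))  -- Just (2,"z")
```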

Dual to a monad, a comonad is given by a mapping X ↦ N(X) with "coreturn" and "cobind" operations crt : NX → X and cbd : (NX → Y) → (NX → NY) satisfying

crt ◦ cbd(f) = f,    cbd(crt) = id,    cbd(f) ◦ cbd(g) = cbd(f ◦ cbd(g)).

It gives rise to an arrow by (X,Y) ↦ Y^N(X) – no strength is needed.

Comonads are less well-known, but are fundamental structures for handling contexts (among other things), in which the "counit" ε = crt : NX → X is used for weakening, and the "comultiplication" δ = cbd(id_NX) : NX → N²X for contraction (Jacobs, 1999). The following diagram presents the main comonads X ↦ ··· for handling streams with discrete time input (Uustalu & Vene, 2005).

X* × X  ←──  X^N × N  ──→  X^N
(⟨α(0),...,α(n−1)⟩, α(n))  ↤  (α,n)  ↦  λm.α(n+m)    (11)

Here the left-hand comonad captures causality ("no future") and the right-hand one anti-causality ("no past"). The intuition for a pair (α,n) ∈ X^N × N is that n represents the present stage in the stream α = ⟨α(0), α(1), ..., α(n−1), α(n), α(n+1), ...⟩, where everything before n is past input, and everything after n is future input. The two morphisms in the previous diagram are homomorphisms of comonads, commuting with the relevant comonad/context structure. There is a similar real-time analogue.

A strong monad M and a comonad N can also be combined to form arrows. As illustrated for instance in (Uustalu & Vene, 2005; Heunen & Jacobs, 2006), this happens via a so-called distributive law NM ⇒ MN that commutes with the (co)monad operations. Then one can define an arrow (M,N) via

(M,N)(X,Y) = M(Y)^N(X).    (12)

It combines the previous two constructions with monads and comonads separately. This mapping (X,Y) ↦ M(Y)^N(X) leads to an appealing picture of an arrow in which the monad M is used for structuring the outputs and the comonad N for


the inputs. But arrows are more general than this. For instance, if we wish to do "non-deterministic dataflow" we may consider at first maps of the form

X^N × N → P(Y),    (13)

with the comonad on the left-hand side structuring the input of streams, and the monad on the right-hand side producing non-deterministic output. However, this requires a distributive law of the form

P(X)^N × N → P(X^N × N).

While it is possible to construct such a function – for instance the power law from (Jacobs, 2006) – it does not commute with the comonad structure. As a result, composition is not associative.

The way out is to realise that co-Kleisli maps X^N × N → Y correspond to maps X^N → Y^N via Currying. But then non-determinism can be introduced easily into dataflow, namely by looking at maps

X^N → P(Y^N)    (14)

instead of maps (13). The corresponding assignment (X,Y) ↦ P(Y^N)^(X^N) indeed forms an arrow – with associative composition. It is however not of the form (X,Y) ↦ M(Y)^N(X). Arrows thus have more to offer than monad-comonad combinations. As an aside: it is not so clear how to combine the other comonads in (11) with non-determinism.
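The co-Kleisli construction (X,Y) ↦ Y^N(X) above can be sketched with the simplest context comonad N X = X × E, for a fixed environment type E; all names below are local to this sketch:

```haskell
-- Sketch of the co-Kleisli arrow Y^{N(X)} for the environment comonad
-- N X = X × E, with E fixed to Int. No strength is needed; the class
-- and all names are local to this example.
infixr 1 >>>
class Arrow a where
  arr   :: (x -> y) -> a x y
  (>>>) :: a x y -> a y z -> a x z
  first :: a x y -> a (x, z) (y, z)

type E = Int

-- For this comonad: crt (x, e) = x and cbd f (x, e) = (f (x, e), e).
newtype CoKleisli x y = CoKleisli { runCoKleisli :: (x, E) -> y }

instance Arrow CoKleisli where
  arr f                       = CoKleisli (f . fst)
  CoKleisli f >>> CoKleisli g = CoKleisli (\(x, e) -> g (f (x, e), e))
  first (CoKleisli f)         = CoKleisli (\((x, z), e) -> (f (x, e), z))

scale :: CoKleisli Int Int
scale = CoKleisli (\(x, e) -> x * e)   -- reads the shared environment

main :: IO ()
main = do
  print (runCoKleisli (scale >>> scale) (3, 10))    -- 3*10*10 = 300
  print (runCoKleisli (first scale) ((3, 'k'), 2))  -- (6,'k')
```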

3 Arrow constructions and examples

This section continues in the discursive style of the previous one. It introduces several elementary ways to combine arrows, and uses these constructions to obtain some well-known examples. The first construction is obvious, but useful. Its proof is straightforward and left to the reader.

Lemma 3.1
Let (A1, arr1, >>>1) and (A2, arr2, >>>2) be arrows. Then so is their product A = A1 × A2, described by

A(X,Y) = A1(X,Y) × A2(X,Y)

with operations

arr(f) = ⟨arr1(f), arr2(f)⟩
⟨a1,a2⟩ >>> ⟨b1,b2⟩ = ⟨a1 >>>1 b1, a2 >>>2 b2⟩
first(⟨a1,a2⟩) = ⟨first1(a1), first2(a2)⟩.
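Lemma 3.1 can be sketched directly in Haskell; the local Prod wrapper below pairs two component arrows (here both components are function arrows):

```haskell
-- Sketch of Lemma 3.1: the pointwise product of two arrows is again
-- an arrow. The class and all names are local to this example.
infixr 1 >>>
class Arrow a where
  arr   :: (x -> y) -> a x y
  (>>>) :: a x y -> a y z -> a x z
  first :: a x y -> a (x, z) (y, z)

newtype Fun x y = Fun { runFun :: x -> y }
instance Arrow Fun where
  arr             = Fun
  Fun f >>> Fun g = Fun (g . f)
  first (Fun f)   = Fun (\(x, z) -> (f x, z))

newtype Prod a b x y = Prod { runProd :: (a x y, b x y) }
instance (Arrow a, Arrow b) => Arrow (Prod a b) where
  arr f                           = Prod (arr f, arr f)
  Prod (a1, a2) >>> Prod (b1, b2) = Prod (a1 >>> b1, a2 >>> b2)
  first (Prod (a1, a2))           = Prod (first a1, first a2)

pairA :: Prod Fun Fun Int Int
pairA = arr (+1) >>> arr (*2)

main :: IO ()
main = do
  let (l, r) = runProd pairA
  print (runFun l 3, runFun r 3)   -- both components compute (3+1)*2 = 8
```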

The next result now follows from the observation in the previous section that

each monoid forms a (constant) arrow. The result is mentioned explicitly because

it will be used later in this form, in Example 3.1.

Corollary 3.1


Let (A, arr, >>>) be an arrow, and (P, m, e) be a monoid. Then A' = P × A, given by

A'(X,Y) = (P × A)(X,Y) = P × A(X,Y),

is again an arrow, with the following operations.

arr'(f) = ⟨e, arr(f)⟩
⟨x,a⟩ >>>' ⟨y,b⟩ = ⟨m(x,y), a >>> b⟩
first'(⟨x,a⟩) = ⟨x, first(a)⟩.

For the next result we consider functors F that preserve products. This means that the obvious maps

⟨F(π1), F(π2)⟩ : F(X × Y) → F(X) × F(Y)

are isomorphisms. In that case we shall write β = β_X,Y : F(X) × F(Y) → F(X × Y) for the inverse.

Lemma 3.2
Let (A, arr, >>>) be an arrow, and F be a product preserving functor. Defining

A_F(X,Y) = A(F(X), F(Y))

yields a new arrow A_F with the following operations.

arr'(f) = arr(F(f))
a >>>' b = a >>> b
first'(a) = arr(⟨F(π1), F(π2)⟩) >>> first(a) >>> arr(β).

Proof
Checking the relevant equations is not hard. For instance:

first'(a >>> b)
  = arr(⟨F(π1),F(π2)⟩) >>> first(a >>> b) >>> arr(β)
(9)
  = arr(⟨F(π1),F(π2)⟩) >>> first(a) >>> first(b) >>> arr(β)
(4)
  = arr(⟨F(π1),F(π2)⟩) >>> first(a) >>> arr(⟨F(π1),F(π2)⟩ ◦ β) >>> first(b) >>> arr(β)
(3)
  = arr(⟨F(π1),F(π2)⟩) >>> first(a) >>> arr(β) >>> arr(⟨F(π1),F(π2)⟩) >>> first(b) >>> arr(β)
  = first'(a) >>>' first'(b).

Lemma 3.3
Let (A, arr, >>>) be an arrow and S an arbitrary type. The definition

A_S×(X,Y) = A(S × X, S × Y)

again yields an arrow, with corresponding structure:

arr_S×(f) = arr(id_S × f)
a >>>_S× b = a >>> b
first_S×(a) = arr(α⁻¹) >>> first(a) >>> arr(α)


where α is the associativity isomorphism for products from Subsection 2.1.
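Lemma 3.3 can be sketched with plain functions as the base arrow and S fixed to Int; the StateA wrapper and tick below are local illustrations, with first conjugating by the associativity isomorphism as in the lemma:

```haskell
-- Sketch of Lemma 3.3: from a base arrow A, the mapping
-- A_Sx(X,Y) = A(S × X, S × Y) is again an arrow. Base arrow: plain
-- functions; S fixed to Int. The class and all names are local.
infixr 1 >>>
class Arrow a where
  arr   :: (x -> y) -> a x y
  (>>>) :: a x y -> a y z -> a x z
  first :: a x y -> a (x, z) (y, z)

newtype Fun x y = Fun { runFun :: x -> y }
instance Arrow Fun where
  arr             = Fun
  Fun f >>> Fun g = Fun (g . f)
  first (Fun f)   = Fun (\(x, z) -> (f x, z))

type S = Int
newtype StateA x y = StateA { runStateA :: Fun (S, x) (S, y) }

instance Arrow StateA where
  arr f                 = StateA (arr (\(s, x) -> (s, f x)))
  StateA f >>> StateA g = StateA (f >>> g)
  -- 'first' conjugates by the associativity isomorphism, inlined here
  first (StateA (Fun f)) =
    StateA (Fun (\(s, (x, z)) -> let (s', y) = f (s, x) in (s', (y, z))))

-- A sample stateful step: multiply the input by the counter, then bump it.
tick :: StateA Int Int
tick = StateA (Fun (\(s, x) -> (s + 1, x * s)))

main :: IO ()
main = print (runFun (runStateA (tick >>> tick)) (1, 5))  -- (3,10)
```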

This particular construction A_S× occurs, in a slightly different formulation, already in (Hughes, 2000, §9), where it is introduced via a 'state functor'. A similar construction (X,Y) ↦ A(X,S)^A(Y,S) is defined there for special arrows with suitable apply operations A(A(X,Y) × X, Y).

At this stage we can already see how one of the motivating examples for the notion of arrow can be obtained from the previous constructions.

Example 3.1
In (Hughes, 2000, §4.2) an arrow SD is introduced to describe a special parser defined by (Swierstra & Duponcheel, 1996). This arrow can be described as

SD(X,Y) = (2 × S*) × (1 + S* × Y)^(S* × X).    (15)

We show that this arrow SD can be obtained by successive application of the constructions in this section.

First, the set 2 × S* – with 2 = {0,1} – is used as monoid, not with the standard structure, but with unit and composition given by:

e = (1, ⟨⟩),    m((b,σ),(c,τ)) = (b ∧ c, σ · (if b = 1 then τ else ⟨⟩)).

It is not hard to see that this yields a monoid. Corollary 3.1 then tells us that (15) is an arrow if the rightmost part (X,Y) ↦ (1 + S* × Y)^(S* × X) is. Using the lift monad 1 + (−) we get an arrow (X,Y) ↦ (1 + Y)^X, as shown in Subsection 2.3. By applying Lemma 3.3 with the set S* we obtain the rightmost part, as required.

When we go into the details of these constructions we can also reconstruct the associated operations of the arrow (15) as follows.

arr(f) = ⟨(1,⟨⟩), λ(s,x) ∈ S* × X . up(s, f(x))⟩

⟨(b,σ),f⟩ >>> ⟨(c,τ),g⟩ = ⟨(b ∧ c, σ · (if b = 1 then τ else ⟨⟩)),
    λ(s,x) ∈ S* × X . { ⊥ if f(s,x) = ⊥ ; g(t,y) if f(s,x) = up(t,y) }⟩

first(⟨(b,σ),f⟩) = ⟨(b,σ),
    λ(s,(x,y)) ∈ S* × (X × Y) . { ⊥ if f(s,x) = ⊥ ; up(t,(z,y)) if f(s,x) = up(t,z) }⟩.

These operations are precisely as described (in Haskell notation) in (Hughes, 2000, §4.2).

Example 3.2
Quantum computing (Vizzotto et al., 2006+) can be modelled within a functional programming language. The states of a quantum program are so-called density matrices, which we can understand as elements of the monad application D(X)^X, for some set X. These states evolve into each other by superoperators, which can be modelled as arrows (X,Y) ↦ D(Y × Y)^(X × X). The previous lemmas also enable us to show that this quantum computation arrow is indeed an arrow, by decomposing it into elementary parts, without checking the arrow laws by hand.

First, recall that the mapping (X,Y) ↦ D(Y)^X yields an arrow, induced by the distribution monad D. Next, notice that the diagonal functor X ↦ X × X preserves products, so that the mapping (X,Y) ↦ (Y × Y)^(X × X) yields an arrow, with

first(a) = λ((x,z),(x',z')) ∈ (X × Z) × (X × Z) . ((π1 a(x,x'), z), (π2 a(x,x'), z'))

for given a : X × X → Y × Y.

Thus, according to Lemma 3.2, the mapping (X,Y) ↦ D(Y × Y)^(X × X) is an arrow. If we follow through the construction, we obtain the following arrow operations.

arr(f) = rt ◦ (f × f)
  = λ(x,x').λ(y,y'). { 1 if f(x) = y ∧ f(x') = y' ; 0 otherwise }

a >>> b = bd(b) ◦ a
  = λ(x,x').λ(z,z'). Σ_(y,y') a(x,x')(y,y') · b(y,y')(z,z')

first(a) = D(⟨π1 × π1, π2 × π2⟩) ◦ st ◦ (a × id) ◦ ⟨π1 × π1, π2 × π2⟩
  = λ((x,z1),(x',z'1)).λ((y,z2),(y',z'2)). { a(x,x')(y,y') if z1 = z2 and z'1 = z'2 ; 0 otherwise }.

These indeed coincide exactly with the ones given in (Vizzotto et al., 2006+).

4 Categorical formulation

In this section we shall move towards a categorical formulation of the notion of

arrow. We shall do so by first analysing the structure in a Haskell-like setting. We

denote by HT the category with Haskell types as objects. A morphism σ → τ in

this category is a Haskell function f = λx : σ .f(x) : τ taking input in σ to output

in τ. Composition of such maps is performed by substitution. Essentially, this is a

Cartesian closed category of types and terms, but for the fact that some functions do

not terminate, much like a lambda calculus. Of course there is much more structure

(like general recursion) in Haskell than the types with type variables and terms, like

in system F. Below we shall analyse the behaviour of Haskell arrows as bifunctors

on HT, leading to a more general definition of an arrow over any category C.

4.1 Analysing arrow behaviour categorically

First and foremost, let us show that a Haskell arrow is indeed bifunctorial.

Lemma 4.1

The operation A(−,−) extends to a functor HT^op × HT → Set by

(X,Y) ↦ {a : A(X,Y) | a closed term},

whose action A(f,g) : A(X,Y) → A(X',Y') on maps f : X' → X and g : Y → Y' is given by

A(f,g) = λa. arr(f) >>> a >>> arr(g).

Proof
Using equations (2), (3) and (4) one easily derives the functorial properties for identity, A(id,id) = id, and composition, A(f ◦ f', g' ◦ g) = A(f',g') ◦ A(f,g).
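The bifunctorial action can be written directly against the arrow interface; bimapA below is a local name for it, and the class is a local sketch needing only arr and >>>:

```haskell
-- Sketch of Lemma 4.1: A(f,g) = \a -> arr f >>> a >>> arr g is the
-- bifunctorial action of an arrow A, contravariant in the input and
-- covariant in the output. The class and names are local.
infixr 1 >>>
class Arrow a where
  arr   :: (x -> y) -> a x y
  (>>>) :: a x y -> a y z -> a x z

bimapA :: Arrow a => (x' -> x) -> (y -> y') -> a x y -> a x' y'
bimapA f g a = arr f >>> a >>> arr g

newtype Fun x y = Fun { runFun :: x -> y }
instance Arrow Fun where
  arr             = Fun
  Fun f >>> Fun g = Fun (g . f)

main :: IO ()
main = do
  print (runFun (bimapA id id (arr (+1))) 3)       -- A(id,id) acts as id: 4
  print (runFun (bimapA (*2) show (arr (+1))) 5)   -- (5*2)+1, shown: "11"
```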

We now examine the arrow operations arr and first in the light of the bifunctoriality of A.

Lemma 4.2
The maps arr : HT(X,Y) → A(X,Y) form a natural transformation HT(−,+) ⇒ A(−,+) from exponents to arrows, where HT(−,+) is the homset functor.

Similarly, the maps first : A(X,Y) → A(X × Z, Y × Z) are natural in X and Y. This may be formulated as: first yields a natural transformation ⟨first⟩ from A to the functor A× given by (X,Y) ↦ Π_Z A(X × Z, Y × Z). Of course, this functor A× only makes sense in a small category with arbitrary (set-indexed) products Π.

Proof
For maps f : X' → X, g : Y → Y' in HT and h : HT(X,Y) we have

(A(f,g) ◦ arr)(h) = arr(f) >>> arr(h) >>> arr(g)
(3)
  = arr(g ◦ h ◦ f) = arr(g^f(h)) = (arr ◦ g^f)(h),

and

(A×(f,g) ◦ ⟨first⟩)(a) = ⟨A(f × id, g × id) ◦ π_Z⟩_Z(⟨first(a)⟩)
  = ⟨A(f × id, g × id)(first(a))⟩
  = ⟨arr(f × id) >>> first(a) >>> arr(g × id)⟩
(8)
  = ⟨first(arr(f)) >>> first(a) >>> first(arr(g))⟩
(9)
  = ⟨first(arr(f) >>> a >>> arr(g))⟩
  = ⟨first(A(f,g)(a))⟩
  = (⟨first⟩ ◦ A(f,g))(a).

The next lemma shows that the maps >>> : A(X,P) × A(P,Y) → A(X,Y) are natural in X and Y, just like the maps arr and first in the previous lemma. In the parameter P they are what is called dinatural (Mac Lane, 1971, Section IX.4). This means that for each map f : P → Q the following two composites A(X,P) × A(Q,Y) → A(X,Y) coincide:

A(X,P) × A(Q,Y) --id × A(f,id)--> A(X,P) × A(P,Y) -->>>--> A(X,Y)
A(X,P) × A(Q,Y) --A(id,f) × id--> A(X,Q) × A(Q,Y) -->>>--> A(X,Y)


Lemma 4.3
The maps >>> : A(X,P) × A(P,Y) → A(X,Y) are natural in X and Y, and dinatural in P.

Proof
Naturality is trivial. As for dinaturality, for a : A(X,P) and b : A(Q,Y), we have

(>>> ◦ (id × A(f,id)))(a,b) = a >>> A(f,id)(b)
  = a >>> arr(f) >>> b
  = A(id,f)(a) >>> b
  = (>>> ◦ (A(id,f) × id))(a,b).

Intuitively, dinaturality in P signifies that >>> is parametric in its middle argument type, and that this middle parameter is auxiliary; it could just as well have been another one, as long as it is the same across the second argument of the first factor, and the first argument of the second.

4.2 Monoidal structure in the ambient category

Extending from the category HT of (Haskell) types and terms, we would like to define an arrow over any suitable category C as a monoid in the functor category Cat(C^op × C, Set) of bifunctors that carries an internal strength. However, to do so we need to ensure that the ambient category, Cat(C^op × C, Set), has monoidal structure. The most elegant way to achieve this is to employ the notion of (parametrized) coends, see Appendix A. This approach generalises to the V-enriched situation, when an arrow is a suitable bifunctor C^op × C → V. Such enrichment is necessary if we are to consider (instead of HT) a categorical model of Haskell, which is most probably Cpo-enriched. At this stage we shall present the construction for the reasonably concrete case where V = Set, mostly to give some intuition about the monoidal structure.

Proposition 4.1

Let C be a small category. Then the category Cat(Cop× C,Set) of Set-valued

bifunctors has a monoidal structure with unit I and tensor product ⊗.

Proof

The naturality of HT(−,+) ⇒ A(−,+) observed in Lemma 4.2 suggests that the

(internal) homfunctor could serve as the unit of the intended monoidal structure

on Cat(Cop× C,Set). Thus we define I : Cop× C → Set to be HomC; explicitly,

I(X,Y ) = C(X,Y ) and I(f,g) = gf= λh.g ◦ h ◦ f. This requires C to be locally

small.

The main idea now is to let the monoidal product of two bifunctors A, B : C^op × C → Set be the quotient of their pointwise coproduct that identifies elements according to dinaturality in the middle parameter. More explicitly, composition >>> is a collection of morphisms

    >>> : A(X,P) × A(P,Y) → A(X,Y),

which can be combined, using the (arbitrary set-indexed) coproduct in Set, into one natural transformation with the following component at X,Y ∈ C:

    ∐_{P∈C} A(X,P) × A(P,Y)  -->>>-->  A(X,Y).

This requires C to have a (small) set of objects. We take the dinaturality of Lemma 4.3 into account by defining the components of the monoidal product A ⊗ B as the coequalizer c of (obvious cotuples of) the morphisms (in Set)

    ∐_{P,Q∈C} A(X,P) × C(P,Q) × B(Q,Y)  ==d1,d2==>  ∐_{P∈C} A(X,P) × B(P,Y)  --c-->  (A ⊗ B)(X,Y),

where

    d1 = λ(a,f). A(id,f)(a) : A(X,P) × C(P,Q) → A(X,Q),
    d2 = λ(f,b). B(f,id)(b) : C(P,Q) × B(Q,Y) → B(P,Y),

for all P,Q ∈ C. The composition maps >>> then reappear as the components of the unique map A ⊗ A ⇒ A induced by the coequalizer.
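A Haskell rendering of the tensor helps intuition: an element of (A ⊗ B)(X,Y) pairs an A(X,P) with a B(P,Y) over a hidden middle object P. In this sketch the coequalizer identifications (sliding a morphism f across the middle via d1 and d2) are left implicit; the type is illustrative, essentially Procompose from the profunctors package:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- (A ⊗ B)(X,Y): an A(X,P) and a B(P,Y) with the middle object P hidden.
-- The coequalizer of d1 and d2 would additionally identify
-- (A(id,f)(a), b) with (a, B(f,id)(b)); that quotient stays implicit here.
data Tensor a b x y = forall p. Tensor (a x p) (b p y)

-- For the function arrow, the monoid multiplication (>>>) collapses one
-- Tensor layer by ordinary composition.
mult :: Tensor (->) (->) x y -> (x -> y)
mult (Tensor f g) = g . f

main :: IO ()
main = print (mult (Tensor (+ 1) (* 2)) 3)
```

The existential quantifier over p mirrors the coproduct (coend) over the middle object P in the construction above.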

Remark

The situation sketched in the previous proposition and proof is that of profunctors, which are also known as distributors or bimodules (Bénabou, 2000). Profunctors and natural transformations form a bicategory Prof, which is a well-studied generalisation of the category of sets and relations. The monoidal structure of Prof (as described above) is well known. The basic idea is that composition of profunctors, and hence the tensor product in the above proposition, can also be written in terms of standard functor composition using left Kan extension along the Yoneda embedding. See (Day, 1970) for the original account, or (Borceux, 1994, Section 7.8) for a modern record.

The previous proposition puts us in a position to make precise our intuition that arrow laws (2)–(4) resemble monoid equations.

Proposition 4.2

An instantiation of the Haskell arrow class, given by natural transformations >>> : A ⊗ A ⇒ A and arr : I ⇒ A satisfying (2)–(4), is a monoid in the category Cat(HT^op × HT, Set) of bifunctors HT^op × HT → Set.

Proof

We have to check that the monoid equations hold for the maps >>> : A ⊗ A ⇒ A and arr : I ⇒ A. Here we exhibit one of the equations, namely commutation of the diagram

    A  --ρ⁻¹-->  A ⊗ I  --id⊗arr-->  A ⊗ A  -->>>-->  A,

whose composite is the identity on A,

Jacobs, Heunen, Hasuo

which for a : A(X,Y) becomes

    a  ↦  (a, id)  ↦  (a, arr(id))  ↦  a >>> arr(id).

Hence commutation of this diagram amounts to arrow law (4), which states that a >>> arr(id) = a.
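Law (4) can be spot-checked in Haskell for a concrete arrow. A sketch using the Kleisli arrow of Maybe (halve is an illustrative name, not from the text):

```haskell
import Control.Arrow (Kleisli(..), arr, (>>>))

-- An effectful arrow: integer halving that fails on odd inputs.
halve :: Kleisli Maybe Int Int
halve = Kleisli (\n -> if even n then Just (n `div` 2) else Nothing)

-- Arrow law (4): post-composing with arr id changes nothing.
main :: IO ()
main = print [ runKleisli (halve >>> arr id) n == runKleisli halve n
             | n <- [0 .. 5] ]
```

Sample checks on a few inputs are of course no proof; the proposition above is what establishes the law as a monoid equation.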

Remark

Although the proof of Proposition 4.1 requires a restriction to small categories, we will often relax this to locally small categories. We are only after A ⊗ A anyway, and indeed, in the construction of A ⊗ A above we used a large coproduct for clarity, where we could instead have formulated the composition operation >>> of A via collections of maps A(X,P) × A(P,Y) → A(X,Y) that are natural in X,Y, dinatural in P, and satisfy the arrow equations (2)–(9). In this way one can include the domain-theoretic models that are standardly used for Haskell semantics.

4.3 Internal strength

Now that we have seen that arrow laws (2)–(4) correspond to the monoid equations

on the semantical side, we investigate the remaining laws (5)–(9) concerning first

in more detail.

Recall that a monad T : C → C on a monoidal category C is called strong when there is a natural transformation "strength" with components st_{X,Y} : T(X) ⊗ Y → T(X ⊗ Y) satisfying suitable coherence conditions. This section shows that the availability of the function first is equivalent to an analogous form of strength for bifunctors, which we call internal strength. Its emergence is motivated in Appendix B.
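For comparison, in Haskell such a monad strength (with ⊗ taken as the product) is definable uniformly for any Functor, which is why it is usually left implicit there. A minimal sketch:

```haskell
-- Strength st : T(X) × Y → T(X × Y), definable for any Functor t
-- by mapping the pairing-with-y function over the container.
st :: Functor t => (t x, y) -> t (x, y)
st (tx, y) = fmap (\x -> (x, y)) tx

main :: IO ()
main = print (st ([1, 2, 3 :: Int], 'a'))
```

The coherence conditions mentioned above then hold automatically by naturality.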

Definition 4.1

Let C be a category with finite products. The carrier A : C^op × C → Set of a monoid (A, >>>, arr) in Cat(C^op × C, Set) is said to carry an internal strength if there is a natural transformation with components ist_{X,Y} : A(X,Y) → A(X, Y × X) satisfying

    ist(arr(f)) = arr(⟨f, id⟩),                                      (16)
    ist(a) >>> arr(π1) = a,                                          (17)
    ist(a >>> b) = ist(a) >>> ist(arr(π1) >>> b) >>> arr(id × π2),   (18)
    ist(ist(a)) = ist(a) >>> arr(⟨id, π2⟩).                          (19)

Using the techniques of Appendix A this can again be extended to bifunctors A : C^op × C → V, for a category C with finite products and a suitable category V. The following proposition shows that having an internal strength is in fact equivalent to having a first operation for arrows, as originally introduced by Hughes.

Proposition 4.3


Let (A, >>>, arr) be an instantiation of the Haskell arrow class satisfying (2)–(4). Maps first : A(X,Y) → A(X × Z, Y × Z) satisfying equations (5)–(9) correspond to maps ist : A(X,Y) → A(X, Y × X) which are natural in Y, dinatural in X, and satisfy (16)–(19).

Proof

The proof of the equivalence of first and ist involves many basic calculations, of which we only present a few exemplary cases.

Given maps first satisfying (5)–(9), define internal strength on a : A(X,Y) as

    ist(a) = arr(∆) >>> first(a),

where ∆ = ⟨id, id⟩. One then checks naturality in Y, dinaturality in X, and (16)–(19). The (di)naturality equations can be formulated as:

    ist(a) >>> arr(g × id) = ist(a >>> arr(g)),                (20)
    arr(f) >>> ist(a) = ist(arr(f) >>> a) >>> arr(id × f).     (21)

By way of illustration we check equation (17):

    ist(a) >>> arr(π1)
      = arr(∆) >>> first(a) >>> arr(π1)
      = arr(∆) >>> arr(π1) >>> a        by (5)
      = arr(π1 ◦ ∆) >>> a               by (3)
      = arr(id) >>> a
      = a                               by (4).
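For the function arrow, this definition of ist and the check of equation (17) can be rendered directly. A sketch on sample inputs (the specialisation of a to a plain function is ours):

```haskell
import Control.Arrow (arr, first, (>>>))

-- ist(a) = arr ∆ >>> first a, with ∆ x = (x, x).
ist :: (x -> y) -> (x -> (y, x))
ist a = arr (\x -> (x, x)) >>> first a

-- Equation (17): ist(a) >>> arr π1 = a, checked on sample inputs.
main :: IO ()
main = print [ (ist (* 3) >>> arr fst) n == (* 3) n | n <- [1 .. 5 :: Int] ]
```

Here ist a computes \x -> (a x, x), so projecting out the first component recovers a, exactly as the derivation shows.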

Conversely, given internal strength maps ist satisfying (16)–(19), define:

    first(a) = ist(arr(π1) >>> a) >>> arr(id × π2),

where π1 : X × Z → X and id × π2 : Y × (X × Z) → Y × Z. This yields a natural operation, in the sense that:

    arr(f × id) >>> first(a) >>> arr(g × id) = first(arr(f) >>> a >>> arr(g)).
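For the function arrow, the first recovered from ist by this formula agrees with the familiar one. A sketch, reusing the ist of the previous example (first' is our illustrative name for the recovered operation):

```haskell
import Control.Arrow (arr, first, (>>>))

-- ist for the function arrow, as before: ist a = \x -> (a x, x).
ist :: (x -> y) -> (x -> (y, x))
ist a = arr (\x -> (x, x)) >>> first a

-- first(a) = ist(arr π1 >>> a) >>> arr (id × π2), recovered from ist alone.
first' :: (x -> y) -> ((x, z) -> (y, z))
first' a = ist (arr fst >>> a) >>> arr (\(y, (_, z)) -> (y, z))

main :: IO ()
main = print [ first' (* 2) p == first (* 2) p
             | p <- [(1, 'a'), (2, 'b'), (3, 'c') :: (Int, Char)] ]
```

Unfolding: first' a sends (x, z) to (a x, (x, z)) and then discards the duplicated x, giving (a x, z).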

We shall prove equation (9) in detail, and leave the rest to the interested reader.

    first(a) >>> first(b)
      = ist(arr(π1) >>> a) >>> arr(id × π2) >>> ist(arr(π1) >>> b)
          >>> arr(id × π2)
      = ist(arr(π1) >>> a) >>> ist(arr(id × π2) >>> arr(π1) >>> b)
          >>> arr(id × (id × π2)) >>> arr(id × π2)                   by dinaturality (21)
      = ist(arr(π1) >>> a) >>> ist(arr(π1) >>> b)
          >>> arr(id × π2) >>> arr(id × π2)                          by (3)
      = ist(arr(π1) >>> a >>> b) >>> arr(id × π2)                    by (18)
      = first(a >>> b).
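Law (9) itself admits the same kind of concrete spot-check for the function arrow, on a few sample pairs:

```haskell
import Control.Arrow (first, (>>>))

-- Law (9): first a >>> first b = first (a >>> b).
lhs, rhs :: (Int, Char) -> (Int, Char)
lhs = first (* 2) >>> first (+ 1)
rhs = first ((* 2) >>> (+ 1))

main :: IO ()
main = print [ lhs p == rhs p | p <- [(1, 'x'), (2, 'y'), (3, 'z')] ]
```

Both sides double the first component, add one, and pass the second component through untouched.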

The alternative formulation in terms of internal strength ist in the previous
