## No full-text available

To read the full-text of this research, you can request a copy directly from the author.


... To model these concepts, Moggi used the categorical notion of monad, abstractly representing the extension of the space of values to that of computations, and the associated Kleisli category, whose morphisms are functions from values to computations and serve as the denotations of programs. Syntactically, following [Wad95], we can express these ideas by means of a call-by-value λ-calculus with two sorts of terms: values, ranged over by V, W, namely variables or abstractions, and computations, ranged over by L, M, N. Computations are formed by means of two operators: values are embedded into computations by the operator unit (written return in the Haskell programming language), whose name refers to the unit of a monad in categorical terms; and a computation M ⋆ (λx.N) is formed by the binary operator ⋆, called bind (>>= in Haskell), representing the application to M of the extension to computations of the function λx.N. ...
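The two-sorted discipline described above can be sketched directly in Haskell. In this minimal, illustrative example (the names safeDiv and example are ours), Maybe plays the role of the monad T: return embeds a value into a computation, and M >>= \x -> N is the bind M ⋆ (λx.N):

```haskell
-- A minimal sketch: computations over values, with Maybe as the monad T.
-- 'return' embeds a value into a computation (the unit of the monad);
-- 'm >>= \x -> n' is the bind M ⋆ (λx.N) from the text.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing             -- a failing computation
safeDiv n d = return (n `div` d)  -- a value embedded via unit

example :: Maybe Int
example = safeDiv 10 2 >>= \x ->  -- M ⋆ (λx.N)
          safeDiv x 0  >>= \y ->
          return (x + y)

main :: IO ()
main = print example  -- Nothing: failure propagates through bind
```

Note how the failure of the second computation propagates through bind without any explicit error handling in the final return.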

... This is the first of the three monadic laws in [Wad95]. To understand the others, let us define the composition of the functions λx.M and λy.N as ...

... Indeed, the calculus in [Mog91] is the internal language of a suitable category equipped with a (strong) monad T , and with enough structure to internalize the morphisms of the respective Kleisli category. As such, it is a simply typed λ-calculus, where T is the type constructor associating to each type A the type T A of computations over A. Therefore, unit and ⋆ are polymorphic operators with respective types (see [Wad92,Wad95]): ...
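In Haskell notation the truncated types read return :: a -> m a (unit) and (>>=) :: m a -> (a -> m b) -> m b (bind), with m playing the role of the type constructor T. The following sketch checks the three monadic laws of [Wad95] on the Maybe instance; the functions f and g are illustrative choices of ours:

```haskell
-- unit and bind have the polymorphic types (m standing for T):
--   return :: Monad m => a -> m a                  -- unit
--   (>>=)  :: Monad m => m a -> (a -> m b) -> m b  -- bind
-- The three monadic laws, checked here on the Maybe instance:

f :: Int -> Maybe Int
f x = Just (x + 1)

g :: Int -> Maybe Int
g x = if x > 0 then Just (x * 2) else Nothing

leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = (return 3 >>= f) == f 3                             -- unit is left identity
rightIdentity = (Just 4 >>= return) == Just 4                       -- unit is right identity
associativity = ((Just 2 >>= f) >>= g) == (Just 2 >>= \x -> f x >>= g)

main :: IO ()
main = print (leftIdentity && rightIdentity && associativity)  -- True
```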

We study reduction in a lambda-calculus derived from Moggi's computational one, which we call the computational core. The reduction relation consists of rules obtained by orienting three monadic laws. These laws, in particular associativity and identity, introduce intricacies into the operational analysis. We investigate the central notions of returning a value versus having a normal form, and address the question of normalizing strategies. Our analysis relies on factorization results.

... Function treeSum uses the r end of the channel and transform uses w, the other end. In these calls, both functions are applied to type Skip (lines 25 and 26). Analysing the signatures of the two functions (they both return a channel of type α), we see that the channel ends r and w are both consumed to Skip. ...

... Analysing the signatures of the two functions (they both return a channel of type α), we see that the channel ends r and w are both consumed to Skip. Type Skip is unrestricted in nature (of kind unrestricted), hence its values can be safely discarded (cf. the two wildcards in the lets on lines 25 and 26). In addition to the residual of channel end w, function transform also returns a new tree t, which becomes the result of the main function. ...

... This section concentrates on the runtime system, which turns out to be surprisingly compact. We build on the modules Control.Concurrent and Unsafe.Coerce, and make particular use of the monadic combinators below [26]. The do notation is built on top of these combinators. ...

FreeST is an experimental concurrent programming language. Based on a core linear functional programming language, it features primitives for forking new threads and for channel creation and communication. A powerful type system of context-free session types governs the interaction on channels. The compiler builds on a novel algorithm for deciding type equivalence of context-free session types. This abstract provides a gentle introduction to the language and discusses the validation process and runtime system.

... As said before, in Moggi's construction C is cartesian. When looking at Wadler's type-theoretic definition of monads [18,4], which is at the basis of their successful implementation in the Haskell language, a natural interpretation of the calculus is into a cartesian closed category (ccc) equipped with two families of combinators, i.e. a pair of polymorphic operators called "unit" and "bind", satisfying the monad laws, namely (the syntactic counterpart of) the three equations in Definition 2.1 below (see also Proposition 3.4). This is more directly expressed by defining the interpretation of Wadler's version of the λc-calculus into a (locally small) subcategory of Set which is a ccc: here C will be called a concrete ccc. ...

... The monadic approach is not only useful for building compilers modularly with respect to various kinds of effects [2], for interpreting languages with effects like control operators via a CPS translation [3], or for writing effectful programs in a purely functional language such as Haskell [4], but also for reasoning about such programs. In this respect, the typed computational lambda-calculus has been related to static program analysis and type and effect systems [5,6] and to PER-based relational semantics [7]; more recently, co-inductive methods for reasoning about effectful programs have been investigated [8]. ...

... It might appear nonsensical to speak of monads w.r.t. an untyped calculus, as the monad T interprets a type constructor both in Moggi's and in Wadler's formulation of the computational λ-calculus [2,4]. However, much as the untyped λ-calculus can be seen as a calculus with a single type, as observed long ago by Scott [9], the untyped computational λ-calculus, here dubbed λ^u_c, has two types: the type D of values and the type T D of computations. Semantically this involves the existence of a call-by-value reflexive object in the categorical model [1]. ...

We study a Curry style type assignment system for untyped λ-calculi with effects, based on Moggi's monadic approach. Starting from the abstract definition of monads, we introduce a version of the call-by-value computational λ-calculus based on Wadler's variant, without let, and with unit and bind operators. We define a notion of reduction for the calculus and prove it confluent.
We then introduce an intersection type system inspired by the system of Barendregt, Coppo and Dezani for the ordinary untyped λ-calculus, establishing type invariance under conversion.
Finally, we introduce a notion of convergence, which is precisely related to reduction, and characterize convergent terms via their types.

... A random generator is not a pure function, because every time it is sampled, the output may be different. A way to describe computations that "change the world" in a pure setting is to use the monadic approach [16]. ...
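The "change the world" reading can be sketched concretely: a random generator becomes a pure function once the generator state is threaded through a state monad, so that the same seed always reproduces the same samples. The following is a hedged, self-contained sketch with a hand-rolled state monad and illustrative linear-congruential constants (the names Gen, nextInt and pair are ours):

```haskell
-- A linear congruential generator threaded through a hand-rolled state
-- monad, making sampling a pure, reproducible computation.
newtype Gen a = Gen { runGen :: Int -> (a, Int) }

instance Functor Gen where
  fmap f (Gen g) = Gen $ \s -> let (a, s') = g s in (f a, s')

instance Applicative Gen where
  pure a = Gen $ \s -> (a, s)
  Gen gf <*> Gen ga = Gen $ \s ->
    let (f, s')  = gf s
        (a, s'') = ga s'
    in (f a, s'')

instance Monad Gen where
  Gen g >>= f = Gen $ \s -> let (a, s') = g s in runGen (f a) s'

-- One step of the generator: update the seed and return it as the sample.
nextInt :: Gen Int
nextInt = Gen $ \s -> let s' = (1103515245 * s + 12345) `mod` 2147483648
                      in (s', s')

pair :: Gen (Int, Int)
pair = nextInt >>= \x -> nextInt >>= \y -> return (x, y)

main :: IO ()
main = print (fst (runGen pair 42))  -- same seed, same "random" pair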

... Mutation effects applied to the list-machine implementation:

| Id | Mutation effect |
| --- | --- |
| 1 | In typecheck_instr, instead of fetching the type of the first value and of the second, the first value is fetched twice |
| 3 | stepFunc does not update the store for cons instructions |
| 6 | After a step the instructions in instr_seq are wrongly reordered |
| 7 | Deleted the step clause for the instr_fetch_field v1 1 v2 instruction |
| 13 | The listcons type cannot be a subtype of another listcons type |
| 15 | Copy-paste error in value-has-ty bool: using constructor ty_listcons instead of ty_list in the third value-has-ty predicate clause |
| 16 | The instr_fetch_field v1 1 v2 instruction does not update the type of v2 with the list type |
| 17 | Checking the first instruction of a block instead of the whole block |
| 18 | Blocks are not checked |
...

Software testing can be rather expensive, so there is a strong incentive to automate it. A common approach to partially automating the testing process is to reason in terms of properties that must hold for every input of the program, instead of listing inputs and their expected outputs. This technique is called property-based testing. The programmer finds some rule that must always be respected and, using combinators provided by a testing library such as QuickCheck for Haskell, encodes it into an executable property, which is then tested over a large set of inputs. As explained in [14], property-based testing can also be useful in theorem proving. Theorem proving can be tiring, and failed proof attempts especially so: why try to prove a false statement? This is where property-based testing helps: before proving a theorem, the programmer rewrites it as an executable property and tests it on a large set of inputs with the intention of finding a counterexample, thus reducing the number of failed proof attempts. This is the main goal of QuickChick, a clone of QuickCheck for Coq programs. It provides most of the functionality of QuickCheck and a few unique features, such as the automatic derivation of generators from inductive predicates. However, compared to other QuickCheck clones, such as FsCheck for F#, and even to the property-based testing tools provided by other proof assistants such as Isabelle [6], QuickChick can still be considered experimental. The goal of this thesis is to evaluate whether QuickChick is mature enough to test complex programs and to compare its features with those provided by FsCheck. As a benchmark we used the list-machine [1], whose original goal was to compare theorem-proving systems on their ability to express proofs of compiler correctness [1]; for this reason it contains several theorems.
Instead of proving the theorems, we used QuickChick to test the progress, preservation and soundness statements, and we compare our implementation with that of Francesco Komauli [7] in F# using the tool FsCheck. Finally, we performed mutation testing on the list-machine implementation in order to assess the ability of our properties to find errors.

... Monads are a popular pattern [35] (especially in Haskell) which combinator libraries in other domains routinely exploit. Introducing monadic composition to BX programming significantly expands the expressiveness of BX languages and opens up a route for programmers to explore the connection between BX programming and mainstream uni-directional programming. ...

... It is well-known that such parsers are monadic [35], i.e., they have a notion of monadic sequential composition embodied by the interface: ...
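The monadic interface of such parsers can be sketched concretely: a parser is a function from input to possible (result, remaining input) pairs, and bind runs one parser and feeds its result to the next. This is a minimal sketch in the style of monadic parser combinators; the names Parser, item and twoChars are illustrative, not taken from the paper:

```haskell
-- A parser maps input to possible (result, remaining input) pairs.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f p = Parser $ \s -> [ (f a, r) | (a, r) <- runParser p s ]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]
  pf <*> pa = Parser $ \s ->
    [ (f a, r') | (f, r) <- runParser pf s, (a, r') <- runParser pa r ]

instance Monad Parser where
  -- bind: run p, then run (f a) on the leftover input
  p >>= f = Parser $ \s ->
    [ (b, r') | (a, r) <- runParser p s, (b, r') <- runParser (f a) r ]

item :: Parser Char                  -- consume one character
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

twoChars :: Parser (Char, Char)      -- monadic sequential composition
twoChars = item >>= \a -> item >>= \b -> return (a, b)

main :: IO ()
main = print (runParser twoChars "abc")  -- [(('a','b'),"c")]
```

The sequential composition embodied by >>= is exactly what makes a grammar read like a program: each step consumes input and passes both its result and the remaining input to the next step.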

Software frequently converts data from one representation to another and vice versa. Naively specifying both conversion directions separately is error prone and introduces conceptual duplication. Instead, bidirectional programming techniques allow programs to be written which can be interpreted in both directions. However, these techniques often employ unfamiliar programming idioms via restricted, specialised combinator libraries. Instead, we introduce a framework for composing bidirectional programs monadically, enabling bidirectional programming with familiar abstractions in functional languages such as Haskell. We demonstrate the generality of our approach applied to parsers/printers, lenses, and generators/predicates. We show how to leverage compositionality and equational reasoning for the verification of round-tripping properties for such monadic bidirectional programs.

... A full technical definition of a monad and its role in programming is far beyond the scope of this paper (see e.g. [32,52]), but we will give a quick sketch of the state monad, which is of relevance in this section. Suppose we are working in a simple functional calculus like System T and we want to capture some overriding global state which keeps track of certain aspects of the computations. ...

... We can, more generally, translate the finite types as a whole to a corresponding hierarchy of monadic types and, using the unit and bind operations, define a translation of pure terms of System T to monadic terms in a variety of ways, depending on what kind of computation we are trying to simulate and what kind of information we aim to capture in our state. Again, this is presented in detail in [32,52]. ...

The problem of giving a computational meaning to classical reasoning lies at the heart of logic. This article surveys three famous solutions to this problem - the epsilon calculus, modified realizability and the dialectica interpretation - and re-examines them from a modern perspective, with a particular emphasis on connections with algorithms and programming.

... In programming, an alternative, but equivalent, definition of monads became popular shortly after the original appeared. The following is adapted from Wadler [58]. Definition 3. A monad is a triple (M, unit, >>=) consisting of a type constructor M and two operations of the following types: ...
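In Haskell notation the two operations of the triple have types unit :: a -> M a and (>>=) :: M a -> (a -> M b) -> M b. As a hedged illustration of a concrete triple, here is a minimal output (writer-style) monad, where a computation carries a log alongside its value; the names Output, tell and demo are ours, not Wadler's:

```haskell
-- A concrete instance of the triple (M, unit, >>=): a computation is a
-- value paired with accumulated output, and bind concatenates the logs.
newtype Output a = Output (a, String) deriving (Eq, Show)

instance Functor Output where
  fmap f (Output (a, w)) = Output (f a, w)

instance Applicative Output where
  pure a = Output (a, "")                                -- unit: empty log
  Output (f, w) <*> Output (a, w') = Output (f a, w ++ w')

instance Monad Output where
  Output (a, w) >>= f = let Output (b, w') = f a in Output (b, w ++ w')

tell :: String -> Output ()
tell w = Output ((), w)

demo :: Output Int
demo = tell "start;" >>= \_ -> tell "add;" >>= \_ -> return (1 + 2)

main :: IO ()
main = print demo  -- Output (3,"start;add;")
```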

... Monad as a formalist entity The implementations of bind and unit for a concrete monad can perform a range of different things. Some authors, such as Wadler [58], try to avoid interpreting what the bind and unit operations of a monad represent and only describe concrete examples. ...

Computer science provides an in-depth understanding of technical aspects of programming concepts, but if we want to understand how programming concepts evolve, how programmers think and talk about them and how they are used in practice, we need to consider a broader perspective that includes historical, philosophical and cognitive aspects. In this paper, we develop such a broader understanding of monads, a programming concept that has an infamous formal definition, syntactic support in several programming languages and a reputation for being elegant and powerful, but also intimidating and difficult to grasp. This paper is not a monad tutorial. It will not tell you what a monad is. Instead, it helps you understand how computer scientists and programmers talk about monads and why they do so. To answer these questions, we review the history of monads in the context of programming and study the development through the perspectives of philosophy of science, philosophy of mathematics and cognitive sciences. More generally, we present a framework for understanding programming concepts that considers them at three levels: formal, metaphorical and implementation. We base such observations on established results about the scientific method and mathematical entities -- cognitive sciences suggest that the metaphors used when thinking about monads are more important than widely accepted, while philosophy of science explains how the research paradigm from which monads originate influences and restricts their use. Finally, we provide evidence for why a broader philosophical, sociological look at programming concepts should be of interest to programmers. It lets us understand programming concepts better and, fundamentally, choose more appropriate abstractions, as illustrated in a number of case studies that conclude the paper.

... Pure programming languages also benefit from lazy evaluation, which ensures that values only get calculated once they are actually needed [Wad95]. ...

... The following explanation of monads is based on Wadler [Wad95]. Since all the data flow in pure languages has to be expressed explicitly, there tends to be a lot of code that only deals with moving data from its point of creation to its point of use. ...

Interval arithmetic and affine arithmetic are methods in numerical analysis that deal with ranges of numerical values. Affine arithmetic is often used instead of interval arithmetic since it can result in smaller errors. The result of this thesis is an affine arithmetic library written in Haskell. This library is written in a way that makes it more difficult to make errors when using it. The library was tested using certain mathematical properties of affine arithmetic.

... • Semigroup [6,14] • Monoid [48] • Functor [25] • Applicative Functor [25] • Monad [43,44] The Haskell [23] programming language introduced the monad in order to track effects within the language. Haskell is a lazily-evaluated, pure FP language and exploited the sequencing ability of monads to sequence effects in the language. ...

... As the tracking of the randomness effect is crucial to allow for reproducible experimental work, an abstraction to represent values with randomness applied was required. The resulting abstraction is a data structure known as RVar and is implemented as a state monad [43,44], specialized to manage the state of the random number generator. ...

Reproducible experimental work is a vital part of the scientific method. It is a concern that is often, however, overlooked in modern computational intelligence research. Scientific research within the areas of programming language theory and mathematics has made advances that are directly applicable to the research areas of evolutionary and swarm intelligence. Through the use of functional programming and the established abstractions that functional programming provides, it is possible to define the elements of evolutionary and swarm intelligence algorithms as compositional computations. These compositional blocks then compose together to allow the declaration of an algorithm, whilst considering the declaration as a "sub-program". These sub-programs may then be executed at a later time and provide the blueprints of the computation. Storing experimental results within a robust data-set file format, which is widely supported by analysis tools, provides additional flexibility and allows different analysis tools to access datasets in the same efficient manner. This paper presents an open-source software library for evolutionary and swarm-intelligence algorithms which allows the type-safe, compositional, monadic and functional declaration of algorithms while tracking and managing effects (e.g. usage of a random number generator) that directly influence the execution of an algorithm.

... Even with a fixed programming language environment, like Haskell, at hand, the FRP paradigm can still be realized in many different ways, each with well-defined levels of expressive power. In the functional programming world, these levels are usually realized through so-called design patterns, like Applicative [103], Monads [145], or Arrows [67], that support standardized ways in which data and control flow are structured and executed within the programming language. Depending on which design pattern is used, there are different advantages, but also restrictions on how the stream processing network is built and eventually executed. ...

... Remember that the primary problem with purely Applicative FRP is that there is a missing link: how the temporal behavior is executed as part of an application-specific executable in the end. This problem can be solved by using a Monad [145], which introduces the missing evaluation context. In Haskell, the Monad class is defined as follows: ...
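The truncated class definition can be filled in from the Haskell base library: since GHC 7.10, Applicative is a superclass of Monad, and return defaults to pure. The following self-contained sketch restates the class (hiding the Prelude version so it compiles on its own) and re-derives the standard Maybe instance against it:

```haskell
import Prelude hiding (Monad, (>>=), (>>), return)

-- The Monad class essentially as defined in the base library (abridged):
class Applicative m => Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b   -- bind: sequence two computations
  (>>)   :: m a -> m b -> m b          -- then: bind, ignoring the value
  m >> k = m >>= \_ -> k
  return :: a -> m a                   -- unit; defaults to pure
  return = pure

-- The standard Maybe instance, re-derived against this class:
instance Monad Maybe where
  Nothing >>= _ = Nothing
  Just a  >>= f = f a

main :: IO ()
main = print (Just 1 >>= \x -> return (x + 1))  -- Just 2
```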

... The validation checking mechanism includes both checking the validity of the results (including memory states and memory values) and checking the execution condition. Because all functions are vulnerable to undefined conditions due to various causes, we develop functions with the help of monads [38]. Here, all functions are tagged with an option type. ...
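The option-typed discipline described above can be sketched in Haskell (the paper itself works in Coq). In this hedged sketch, every function's result is tagged as meaningful (Some), empty (None), or an error message (Error); the Result type and the readCell example are illustrative names of ours:

```haskell
-- Every function result is tagged: Some (meaningful), None, or Error.
data Result a = Some a | None | Error String deriving (Eq, Show)

instance Functor Result where
  fmap f (Some a)  = Some (f a)
  fmap _ None      = None
  fmap _ (Error e) = Error e

instance Applicative Result where
  pure = Some
  Some f  <*> r = fmap f r
  None    <*> _ = None
  Error e <*> _ = Error e

instance Monad Result where
  Some a  >>= f = f a       -- proceed with the meaningful value
  None    >>= _ = None      -- emptiness propagates
  Error e >>= _ = Error e   -- errors propagate

-- Illustrative memory lookup: undefined conditions surface as tags.
readCell :: [(Int, Int)] -> Int -> Result Int
readCell mem addr
  | addr < 0  = Error "negative address"
  | otherwise = maybe None Some (lookup addr mem)

main :: IO ()
main = do
  print (readCell [(0, 7)] 0)     -- Some 7
  print (readCell [(0, 7)] 1)     -- None
  print (readCell [(0, 7)] (-1))  -- Error "negative address"
```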

This paper reports on the development of a formal symbolic process virtual machine (FSPVM) denoted as FSPVM-E for verifying the reliability and security of Ethereum-based services at the source code level of smart contracts, and the Coq proof assistant is employed both for programming the system and for proving its correctness. The current version of FSPVM-E adopts execution-verification isomorphism, which is an application extension of Curry-Howard isomorphism, as its fundamental theoretical framework to combine symbolic execution and higher-order logic theorem proving. The four primary components of FSPVM-E include a general, extensible, and reusable formal memory framework, an extensible and universal formal intermediate programming language denoted as Lolisa, which is a large subset of the Solidity programming language using generalized algebraic datatypes, the corresponding formally verified interpreter of Lolisa, denoted as FEther, and assistant tools and libraries. The self-correctness of all components is certified in Coq. Currently, FSPVM-E supports the ERC20 token standard, and can automatically and symbolically execute Ethereum-based smart contracts, scan their standard vulnerabilities, and verify their reliability and security properties with Hoare-style logic in Coq. To the best of the authors' knowledge, the present work represents the first hybrid formal verification system implemented in Coq for Ethereum smart contracts that is applied at the Solidity source code level.

... VI]) has found multiple applications in mathematical foundations of programming science, and they have become an important design pattern in languages such as Haskell [12,33] or Scala [27]. Depending on the context, monads can be viewed as abstract notions of computational effects [26,34], or as collections to gather computed values [24], or as structures of values to be computed upon [3]. These perspectives are not mutually exclusive: for example, the (covariant) powerset monad P can be seen either as a very simple kind of unstructured collections, or as a carrier of nondeterminism as a computational effect. ...
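The two readings of the powerset monad mentioned above can be sketched with Haskell's list monad (finite powersets): unit forms a singleton, and bind unions an indexed family, i.e. it runs every nondeterministic branch. A minimal sketch, with coins and twoFlips as illustrative names:

```haskell
-- The list monad as nondeterminism: a computation returns all outcomes.
coins :: [Int]
coins = [0, 1]

-- As a collection effect: every combination of two nondeterministic
-- choices, gathered by bind (the union of an indexed family of sets).
twoFlips :: [(Int, Int)]
twoFlips = coins >>= \x -> coins >>= \y -> return (x, y)

main :: IO ()
main = print twoFlips  -- [(0,0),(0,1),(1,0),(1,1)]
```

The same value can be read either as a collection (the set of pairs) or as a nondeterministic computation (all possible runs), matching the two perspectives in the text.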

We prove that the double covariant powerset functor PP does not admit any monad structure. The same applies to the n-fold composition of P for any n>1.

... Our refinement calculus is implemented in only 350 lines of Coq (proof scripts included), by a shallow embedding of our GCL † K which combines computational reflection of weakest preconditions [11] with monads [32]. However, it can be understood in a much simpler setting using binary relations instead of monads and weakest preconditions, and classical set theory instead of Coq. ...

Our concern is the modular development of a certified static analyzer in the Coq proof assistant. We focus on the extension of the Verified Polyhedra Library—a certified abstract domain of convex polyhedra—with a linearization procedure to handle polynomial guards. Based on ring rewriting strategies and interval arithmetic, this procedure partitions the variable space to infer precise affine terms which over-approximate polynomials. In order to help formal development, we propose a proof framework, embedded in Coq, that implements a refinement calculus. It is dedicated to the certification of parts of the analyzer—like our linearization procedure—whose correctness does not depend on the implementation of the underlying certified abstract domain. Like standard refinement calculi, it introduces data-refinement diagrams. These diagrams relate “abstract states” computed by the analyzer to “concrete states” of the input program. However, our notions of “specification” and “implementation” are exchanged w.r.t. standard uses: the “specification” (computing on “concrete states”) refines the “implementation” (computing on “abstract states”). Our stepwise refinements of specifications hide several low-level aspects of the computations on abstract domains. In particular, they ignore that the latter may use hints from external untrusted imperative oracles (e.g. a linear programming solver). Moreover, refinement proofs are naturally simplified thanks to computations of weakest preconditions. Using our refinement calculus, we elegantly define our partitioning procedure with a continuation-passing style, thus avoiding an explicit datatype of partitions. This illustrates that our framework is convenient to prove the correctness of such higher-order imperative computations on abstract domains.

... So far, we can express side-effect-free computations on faceted values. To express programs that manipulate both faceted values and mutable reference cells, we introduce the FIO monad; a monad (e.g., [56]) is just a special-purpose data type designed to express computations with side effects in pure functional languages like Haskell. In this light, the type FIO T characterizes side-effectful secure computations that yield a T value. ...

To enforce non-interference, both Secure Multi-Execution (SME) and Multiple Facets (MF) rely on the introduction of multi-executions. The attractiveness of these techniques is that they are precise: secure programs running under SME or MF do not change their behavior. Although MF was intended as an optimization for SME, it does provide a weaker security guarantee for termination leaks. This paper presents Faceted Secure Multi Execution (FSME), a novel synthesis of MF and SME that combines the stronger security guarantees of SME with the optimizations of MF. The development of FSME required a unification of the ideas underlying MF and SME into a new multi-execution framework (Multef), which can be parameterized to provide MF, SME, or our new approach FSME, thus enabling an apples-to-apples comparison and benchmarking of all three approaches. Unlike the original work on MF and SME, Multef supports arbitrary (and possibly infinite) lattices necessary for decentralized labeling models---a feature needed in order to make possible the writing of applications where each principal can impose confidentiality and integrity requirements on data. We provide some micro-benchmarks for evaluating Multef and write a file hosting service, called ProtectedBox, whose functionality can be securely extended via third-party plugins.

... In this section, we define a soundness proposition for Kleisli arrows [Hughes 2000]. Kleisli arrows are functions A → M(B) parameterized by a monad M. It is well known that monads are expressive enough to describe a wide range of effects in programming languages [Liang et al. 1995; Moggi 1991; Wadler 1995]. For example, we can describe the two interpreter arrows of Section 2.2 as Kleisli arrows: ...
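Kleisli arrows A → M(B) compose with the standard combinator (>=>) from Control.Monad. This hedged sketch uses M = Maybe; parseDigit and half are illustrative stand-ins for the interpreter arrows mentioned in the text, not the paper's own definitions:

```haskell
import Control.Monad ((>=>))

-- Two Kleisli arrows of type a -> Maybe b.
parseDigit :: Char -> Maybe Int
parseDigit c
  | c >= '0' && c <= '9' = Just (fromEnum c - fromEnum '0')
  | otherwise            = Nothing

half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- Kleisli composition: (f >=> g) a = f a >>= g
pipeline :: Char -> Maybe Int
pipeline = parseDigit >=> half

main :: IO ()
main = print (pipeline '8', pipeline '3', pipeline 'x')  -- (Just 4,Nothing,Nothing)
```

Composition in the Kleisli category is exactly this (>=>), with return as its identity, which is why soundness statements can be phrased arrow by arrow.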

Abstract interpretation is a technique for developing static analyses. Yet, proving abstract interpreters sound is challenging for interesting analyses, because of the high proof complexity and proof effort. To reduce complexity and effort, we propose a framework for abstract interpreters that makes their soundness proof compositional. Key to our approach is to capture the similarities between concrete and abstract interpreters in a single shared interpreter, parameterized over an arrow-based interface. In our framework, a soundness proof is reduced to proving reusable soundness lemmas over the concrete and abstract instances of this interface; the soundness of the overall interpreters follows from a generic theorem.
To further reduce proof effort, we explore the relationship between soundness and parametricity. Parametricity not only provides us with useful guidelines for how to design non-leaky interfaces for shared interpreters, but also provides us soundness of shared pure functions as free theorems. We implemented our framework in Haskell and developed a k-CFA analysis for PCF and a tree-shape analysis for Stratego. We were able to prove both analyses sound compositionally with manageable complexity and effort, compared to a conventional soundness proof.

... Implicit state machines bear some similarity to state monads in functional programming [36]. From the programmer's perspective, there are three main differences: (1) state monads require programmers to thread the state through the program explicitly, while there is no such requirement for implicit state machines; (2) state monads do not allow programmers to specify the initial state in a decentralized way; (3) composing two state monads incurs overhead, while composability is a feature of implicit state machines. ...

Finite-state machines (FSM) are a simple yet powerful abstraction widely used for modeling, programming and verifying real-time and reactive systems that control modern factories, power plants, transportation systems and medical equipment.
However, traditionally finite-state machines are either encoded indirectly in an imperative language, such as C and Verilog, or embedded as an imperative extension of a declarative language, such as Lustre. Given the widely accepted advantage of declarative programming, can we have a declarative design of finite-state machines to facilitate design, construction, and verification of embedded programs?
By sticking to the design principle of declarativeness, we show that a novel abstraction emerges, implicit state machines, which is declarative in nature and at the same time supports recursive composition. Given its simplicity and universality, we believe it may serve as a new foundation for programming embedded systems.

... For the parser implementation we followed [13] and used a recursive-descent parsing approach. This kind of parser can be expressed using monadic parsing [14]. ...

... Asynchronous programming is becoming increasingly important, with applications ranging from actor systems [1,14], futures and network programming [8,15], user interfaces [21], to functional stream processing [23]. Traditionally, these programming models were realized either by blocking execution threads (which can be detrimental to performance [4]), or callback-style APIs [8,15,18], or with monads [53]. However, these approaches often feel unnatural, and the resulting programs can be hard to understand and maintain. ...

Coroutines are a general control flow construct that can eliminate control flow fragmentation inherent in event-driven programs, and are still missing in many popular languages. Coroutines with snapshots are a first-class, type-safe, stackful coroutine model, which unifies many variants of suspendable computing, and is sufficiently general to express iterators, single-assignment variables, async-await, actors, event streams, backtracking, symmetric coroutines and continuations. In this paper, we develop a formal model called λ⇝ (lambda-squiggly) that captures the essence of type-safe, stackful, delimited coroutines with snapshots. We prove the standard progress and preservation safety properties. Finally, we show a formal transformation from the λ⇝ calculus to the simply-typed lambda calculus with references.


... However, functional programming languages extend typed λ-calculus with many notions of computation that are indispensable for programming. Besides simple and pure functions, all realistic programming languages include some kind of computational effects, and there is a long debate about how to structure the semantics of these computational effects in the context of functional programming languages (see papers about pure and impure functional programming languages [8,9]). In this context, the monadic approach by Moggi was elegantly used to structure computational effects in pure functional programming languages [10], like Haskell [11]. ...

To help the understanding and development of quantum algorithms there is an effort focused on the investigation of new semantic models and programming languages for quantum computing. Researchers in computer science face the challenge of developing programming languages to support the creation, analysis, modeling and simulation of high-level quantum algorithms. Building on previous works that use monads inside the programming language Haskell to elegantly explain the odd characteristics of quantum computation (like superposition and entanglement), in this work we present a monadic Java library for quantum programming. We use the extension of the programming language Java called BGGA Closure, which allows the manipulation of anonymous functions (closures) inside Java. We exemplify the use of the library with an implementation of the Toffoli quantum circuit.
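The monadic structuring of effects referred to in the snippet above can be made concrete with a small sketch. The following is an illustrative hand-rolled state monad in Python (not code from any of the cited works): a computation is a function from a state to a pair of a value and an updated state, and `unit` and `bind` play the roles of Haskell's `return` and `>>=`.

```python
# Illustrative sketch of a state monad: a computation is a function
# state -> (value, new_state); unit and bind thread the state through.

def unit(v):
    """Embed a value into a computation: the state passes through unchanged."""
    return lambda s: (v, s)

def bind(m, f):
    """Run m, feed its value to f, and thread the updated state onward."""
    def computation(s):
        v, s1 = m(s)
        return f(v)(s1)
    return computation

def tick(s):
    """An effectful primitive: return the current counter and increment it."""
    return (s, s + 1)

# Two ticks sequenced with bind; running from state 0 yields ((0, 1), 2).
two_ticks = bind(tick, lambda a: bind(tick, lambda b: unit((a, b))))
```

Sequencing with `bind` makes the state threading explicit while each step stays a pure function, which is the essence of the monadic approach to effects.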

... Clearly, we must know whether the behavior of programs is correct if we wish to reason in a higher-order logic world. Therefore, with reference to the basic API definitions of the GERM framework, we employ an option monad [30] to represent the different conditions signaled by return values. Here, a return value is annotated as Some if it is meaningful, None if it is nothing, and otherwise it is assigned an error message Error. ...

The security of blockchain smart contracts is one of the most pressing issues of greatest interest for researchers. This article presents an intermediate specification language for the formal verification of Ethereum-based smart contracts in Coq, denoted as Lolisa. The formal syntax and semantics of Lolisa contain a large subset of the Solidity programming language developed for the Ethereum blockchain platform. To enhance type safety, the formal syntax of Lolisa adopts a stronger static type system than Solidity. In addition, Lolisa includes a large subset of Solidity syntax components as well as general-purpose programming language features. Therefore, Solidity programs can be directly translated into Lolisa with line-by-line correspondence. Lolisa is inherently generalizable and can be extended to express other programming languages. Finally, the syntax and semantics of Lolisa have been encapsulated as an interpreter in the proof assistant Coq. Hence, smart contracts written in Lolisa can be symbolically executed and verified in Coq.
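The three-way return-value tagging described in the snippet above (Some / None / Error) can be sketched as a small monad. This is a hypothetical Python illustration whose names merely mirror the snippet; it is not the GERM framework's actual code.

```python
# Hypothetical sketch: a result type with Some (meaningful value),
# None (nothing), and Error (message), plus monadic bind.

def some(v):
    return ("Some", v)

def error(msg):
    return ("Error", msg)

NONE = ("None",)

def bind(m, f):
    """Propagate Some values; None and Error short-circuit unchanged."""
    if m[0] == "Some":
        return f(m[1])
    return m

def safe_div(x, y):
    """Example partial function: division guarded against zero."""
    return error("division by zero") if y == 0 else some(x // y)
```

Chaining with `bind` lets every API call report its execution condition without exceptions: `bind(safe_div(10, 2), lambda q: safe_div(100, q))` yields `("Some", 20)`, while a failing step carries its error message through the rest of the chain.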

... For parser implementation we used, following [13], a recursive descent parsing approach. This kind of parser can be expressed using monadic parsing [14]. ...

This paper describes how one can implement a distributed λ-calculus interpreter from scratch. First, we describe how to implement a monadic parser; then the Krivine machine is introduced for the interpretation part, and the actor model is used for distribution. In this work we do not provide a general solution for parallelism, but we consider particular patterns which can always be parallelized. As a result, a basic extensible implementation of a call-by-name distributed machine is introduced and a prototype is presented. We achieved a computation speed improvement in some cases, but an efficient distributed version was not achieved; the problems are discussed in the evaluation section. This work provides a foundation for further research: completing the implementation, it is possible to add concurrency for non-determinism, improve the interpreter using call-by-need semantics, or study optimal auto-parallelization to generalize what can be done efficiently in parallel.
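The monadic parsing the snippet refers to can be sketched with a few combinators in the Hutton-and-Meijer style. In this illustrative Python version (not the paper's code), a parser is a function from an input string to `None` on failure or a `(result, remaining_input)` pair on success.

```python
# Minimal monadic parser combinators: a parser maps a string to
# None (failure) or (result, rest_of_input) (success).

def unit(v):
    """Succeed without consuming input."""
    return lambda s: (v, s)

def bind(p, f):
    """Run p; on success, feed its result to f and parse the rest."""
    def parser(s):
        r = p(s)
        if r is None:
            return None
        v, rest = r
        return f(v)(rest)
    return parser

def item(s):
    """Consume one character, failing on empty input."""
    return (s[0], s[1:]) if s else None

def satisfy(pred):
    """Consume one character satisfying pred, else fail."""
    return bind(item, lambda c: unit(c) if pred(c) else (lambda s: None))

digit = satisfy(str.isdigit)

# Two digits in sequence, combined monadically into one result:
two_digits = bind(digit, lambda a: bind(digit, lambda b: unit(a + b)))
```

For example, `two_digits("42x")` returns `("42", "x")`, and a non-digit anywhere makes the whole composite parser fail, which is exactly the sequencing behaviour recursive descent needs.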


... Asynchronous programming is becoming increasingly important, with applications ranging from actor systems [1,21], futures and network programming [10,22], user interfaces [30], to functional stream processing [32]. Traditionally, these programming models were realized either by blocking execution threads (which can be detrimental to performance [4]), or callback-style APIs [10,22,26], or with monads [67]. However, these approaches often feel unnatural, and the resulting programs can be hard to understand and maintain. ...

While event-driven programming is a widespread model for asynchronous computing, its inherent control flow fragmentation makes event-driven programs notoriously difficult to understand and maintain. Coroutines are a general control flow construct that can eliminate control flow fragmentation. However, coroutines are still missing in many popular languages. This gap is partly caused by the difficulties of supporting suspendable computations in the language runtime.
We introduce first-class, type-safe, stackful coroutines with snapshots, which unify many variants of suspendable computing. Our design relies solely on the static metaprogramming support of the host language, without modifying the language implementation or the runtime. We also develop a formal model for type-safe, stackful and delimited coroutines, and we prove the respective safety properties. We show that the model is sufficiently general to express iterators, single-assignment variables, async-await, actors, event streams, backtracking, symmetric coroutines and continuations. Performance evaluations reveal that the proposed metaprogramming-based approach has decent performance, with workload-dependent overheads of 1.03–2.11× compared to equivalent manually written code, and improvements of up to 6× compared to other approaches.

... There is an imperative level (the action layer) which sequentializes computation using Haskell's monadic programming features (see e.g. [26,40,23]) and permits the execution of side effects like starting further threads and modifying external storage. The pure functional level is the core part. ...

We propose a model for measuring the runtime of concurrent programs by the minimal number of evaluation steps. The focus of this paper is improvements: program transformations that improve this number in every context, where we distinguish between sequential and parallel improvements, for one or more processors, respectively. We apply the methods to CHF, a model of Concurrent Haskell extended by futures. The language CHF is a typed higher-order functional language with concurrent threads, monadic IO and MVars as synchronizing variables. We show that all deterministic reduction rules and 15 further program transformations are sequential and parallel improvements. We also show that the introduction of deterministic parallelism is a parallel improvement, and its inverse a sequential improvement, provided it is applicable. This is a step towards more automated precomputation of concurrent programs during compile time, which is also formally proven to be correctly optimizing.
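The action-layer pattern described above (an imperative monadic level that starts threads and synchronizes on variables, around a pure core) can be sketched outside Haskell as well. The following illustrative Python snippet uses a one-slot queue in the role of an MVar-like synchronizing variable; it is an analogy, not code from the cited works.

```python
# Sketch of the "action layer" idea: an imperative layer sequences
# effects and starts a further thread, while the core computation
# stays pure; a one-slot Queue plays the role of an MVar.
import threading
import queue

def concurrent_sum():
    result = queue.Queue(maxsize=1)       # MVar-like synchronizing variable

    def worker():
        n = sum(range(1, 101))            # pure core computation
        result.put(n)                     # fill the "MVar" exactly once

    threading.Thread(target=worker).start()
    return result.get()                   # blocks until the thread writes

print(concurrent_sum())                   # prints 5050
```

The blocking `get` is what makes the slot behave like a future: the caller suspends until the spawned thread has delivered its value.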

... The validation checking mechanism includes both checking the validation of the results (including memory states and memory values) and checking the execution condition. Because all functions are vulnerable to undefined conditions arising from various causes, we develop functions with the help of a monad [40]. Here, all functions are tagged with an option type. ...

This paper reports a formal symbolic process virtual machine (FSPVM) denoted as FSPVM-E for verifying the reliability and security of Ethereum-based services at the source code level of smart contracts. A Coq proof assistant is employed for programming the system and for proving its correctness. The current version of FSPVM-E adopts execution-verification isomorphism, which is an application extension of Curry-Howard isomorphism, as its fundamental theoretical framework to combine symbolic execution and higher-order logic theorem proving. The four primary components of FSPVM-E include a general, extensible, and reusable formal memory framework, an extensible and universal formal intermediate programming language denoted as Lolisa, which is a large subset of the Solidity programming language using generalized algebraic datatypes, the corresponding formally verified interpreter of Lolisa, denoted as FEther, and assistant tools and libraries. The self-correctness of all components is certified in Coq. FSPVM-E supports the ERC20 token standard, and can automatically and symbolically execute Ethereum-based smart contracts, scan their standard vulnerabilities, and verify their reliability and security properties with Hoare-style logic in Coq.

... This formulation of state handling is analogous to the standard monadic implementation of state handling (Wadler, 1995). In the context of handlers, the implementation uses a technique known as parameter passing (Pretnar, 2015). ...

Plotkin and Pretnar’s effect handlers offer a versatile abstraction for modular programming with user-defined effects. This paper focuses on foundations for implementing effect handlers, for the three different kinds of effect handlers that have been proposed in the literature: deep, shallow, and parameterised. Traditional deep handlers are defined by folds over computation trees and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. We formulate the extensions both directly and via encodings in terms of deep handlers and illustrate how the direct implementations avoid the generation of unnecessary closures. We give two distinct foundational implementations of all the kinds of handlers we consider: a continuation-passing style (CPS) transformation and a CEK-style abstract machine. In both cases, the key ingredient is a generalisation of the notion of continuation to accommodate stacks of effect handlers. We obtain our CPS translation through a series of refinements as follows. We begin with a first-order CPS translation into untyped lambda calculus which manages a stack of continuations and handlers as a curried sequence of arguments. We then refine the initial CPS translation by uncurrying it to yield a properly tail-recursive translation and then moving towards more and more intensional representations of continuations in order to support different kinds of effect handlers. Finally, we make the translation higher order in order to contract administrative redexes at translation time. Our abstract machine design then uses the same generalised continuation representation as the CPS translation. We have implemented both the abstract machine and the CPS transformation (plus extensions) as backends for the Links web programming language.

... The first declaration M : Type → Type specifies the uncertainty monad M. Discussing the notion of monad here would go well beyond the scope of this manuscript, and we refer interested readers to [39] and [40]. The idea is that M accounts for the uncertainties that affect the decision process. ...

We propose a new method for estimating how much decisions under monadic uncertainty matter. The method is generic and suitable for measuring responsibility in finite horizon sequential decision processes. It fulfills “fairness” requirements and three natural conditions for responsibility measures: agency, avoidance and causal relevance. We apply the method to study how much decisions matter in a stylized greenhouse gas emissions process in which a decision maker repeatedly faces two options: start a “green” transition to a decarbonized society or further delay such a transition. We account for the fact that climate decisions are rarely implemented with certainty and that their consequences on the climate and on the global economy are uncertain. We discover that a “moral” approach towards decision making – doing the right thing even though the probability of success becomes increasingly small – is rational over a wide range of uncertainties.

... Monads have various interpretations, but we shall follow those of Moggi, Plotkin and Power, stating that a monad is a notion of computation [109] or a computational effect [119]. With respect to this interpretation, monads are concretely used in purely functional programming languages [157] to implement imperative effects such as exceptions, input, or output. The language Haskell, for instance, has a class Monad that can be instantiated to recover some monads presented in the sequel, such as the maybe monad, the list monad, and the reader monad. ...

Monads are a concept from category theory allowing to model abstractly the notion of computational effect. The non-compositionality of monads is well-known, but the theory of distributive laws is a classical tool that has proved useful to combine effects of several monads. In frequent cases, there is no way of defining a distributive law between a pair of specific monads. When it feels like there almost exists one, a weaker form of distributive law can be used. This thesis studies theoretical properties of weak distributive laws, introduces a dual notion called coweak distributive laws, and provides applications to coalgebra theory: generalised determinisation and up-to techniques for bisimulations, with examples for alternating automata and probabilistic automata. Some specific weak distributive laws are also studied. The unique monotone weak distributive law between the powerset monad and the distribution monad is derived, allowing to combine probabilistic choice and non-deterministic choice in a canonical way. Although it is known that the powerset monad weakly distributes over itself, this result is generalised to arbitrary toposes and to compact Hausdorff spaces, where the role of the powerset is played by the Vietoris monad.
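The distributive laws discussed in the abstract above can be stated compactly. In one common convention (directions of $\lambda$ vary in the literature), a distributive law of a monad $(S,\eta^S,\mu^S)$ over a monad $(T,\eta^T,\mu^T)$ is a natural transformation $\lambda : ST \Rightarrow TS$ subject to four axioms:

```latex
% Distributive law \lambda : ST \Rightarrow TS of S over T (one convention):
\begin{align*}
\lambda \circ \eta^S T &= T\eta^S
  & \text{(unit of } S\text{)}\\
\lambda \circ S\eta^T  &= \eta^T S
  & \text{(unit of } T\text{)}\\
\lambda \circ \mu^S T  &= T\mu^S \circ \lambda S \circ S\lambda
  & \text{(multiplication of } S\text{)}\\
\lambda \circ S\mu^T   &= \mu^T S \circ T\lambda \circ \lambda T
  & \text{(multiplication of } T\text{)}
\end{align*}
```

A weak distributive law, in the sense studied in the thesis above, drops one of the unit axioms, which is what allows canonical combinations (such as non-deterministic with probabilistic choice) where no ordinary distributive law exists.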

... Code colouring is pretty similar to (and can be viewed as a particular case of) computations with coeffects [39] and the guarantees it provides are very similar to functional monads [59] (e.g., IO monad in Haskell [40]). ...

... The collections considered so far are actually monads. A monad represents a value in a context which can be manipulated in a consistent way (Wadler, 1995). Monads are integral to safe purely functional programming and are used to encapsulate unsafe program behaviour such as IO (input output), async programs, random number generators and partial functions. ...

Bayesian inference involves the specification of a statistical model by a statistician or practitioner, with careful thought about what each parameter represents. This results in particularly interpretable models which can be used to explain relationships present in the observed data. Bayesian models are useful when an experiment has only a small number of observations and in applications where transparency of data driven decisions is important. Traditionally, parameter inference in Bayesian statistics has involved constructing bespoke MCMC (Markov chain Monte Carlo) schemes for each newly proposed statistical model. This results in plausible models not being considered since efficient inference schemes are challenging to develop or implement. Probabilistic programming aims to reduce the barrier to performing Bayesian inference by developing a domain specific language (DSL) for model specification which is decoupled from the parameter inference algorithms. This paper introduces functional programming principles which can be used to develop an embedded probabilistic programming language. Model inference can be carried out using any generic inference algorithm. In this paper Hamiltonian Monte Carlo (HMC) is used, an efficient MCMC method requiring the gradient of the un-normalised log-posterior, calculated using automatic differentiation. The concepts are illustrated using the Scala programming language.

... Here are examples of monads modeling some of the computational effects discussed in Section 1. Further examples, such as global stores and exceptions, can be found in, e.g., [49,71]. ...

We investigate program equivalence for linear higher-order(sequential) languages endowed with primitives for computational effects. More specifically, we study operationally-based notions of program equivalence for a linear $\lambda$-calculus with explicit copying and algebraic effects \emph{\`a la} Plotkin and Power. Such a calculus makes explicit the interaction between copying and linearity, which are intensional aspects of computation, with effects, which are, instead, \emph{extensional}. We review some of the notions of equivalences for linear calculi proposed in the literature and show their limitations when applied to effectful calculi where copying is a first-class citizen. We then introduce resource transition systems, namely transition systems whose states are built over tuples of programs representing the available resources, as an operational semantics accounting for both intensional and extensional interactive behaviors of programs. Our main result is a sound and complete characterization of contextual equivalence as trace equivalence defined on top of resource transition systems.

... We prefer the original term, not least because it lends itself nicely to adjectival uses, as in 'idiomatic traversal'.) Monads [24,29] allow the expression of effectful computations within a purely functional language, but they do so by encouraging an imperative [27] programming style; in fact, Haskell's monadic do notation is explicitly designed to give an imperative feel. Since idioms generalize monads, they provide the same access to effectful computations; but they encourage a more applicative programming style, and so fit better within the functional programming milieu. ...
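The contrast drawn above between applicative (idiomatic) and monadic styles can be sketched for an optional-value type: applicative combination handles independent effects, while bind lets later computations depend on earlier values. This is an illustrative Python sketch, not code from the cited works.

```python
# Applicative vs monadic style for a Maybe-like type:
# values are ("just", v); failure is ("nothing",).

def unit(v):
    return ("just", v)

NOTHING = ("nothing",)

def bind(m, f):
    """Monadic: the continuation f may inspect the value of m."""
    return f(m[1]) if m[0] == "just" else m

def ap(mf, mv):
    """Applicative: combine two independent effects, no data flow between them."""
    if mf[0] == "just" and mv[0] == "just":
        return unit(mf[1](mv[1]))
    return NOTHING

# Applicative style: both effects declared up front, combined with ap.
add_a = ap(ap(unit(lambda x: lambda y: x + y), unit(2)), unit(3))

# Monadic style: the second computation is built after seeing x.
add_m = bind(unit(2), lambda x: bind(unit(3), lambda y: unit(x + y)))
```

Both compute `("just", 5)`, but only the monadic version could have chosen its second step based on `x`; that extra power is exactly what gives monadic code its imperative feel.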

... In [Mog91] Moggi proposed a unified framework to reason about λ-calculi embodying various kinds of effects, including side-effects, that has been used by Wadler [Wad92,Wad95] to cleanly implement non-functional aspects into Haskell, a purely functional programming language. Moggi's approach is based on the categorical notion of computational monad: instead of adding impure effects to the semantics of a pure functional calculus, effects are subsumed by the abstract concept of "notion of computation" represented by the monad T . ...

We study the semantics of an untyped lambda-calculus equipped with operators representing read and write operations from and to a global state. We adopt the monadic approach to model side effects and treat read and write as algebraic operations over a computational monad. We introduce an operational semantics and a type assignment system of intersection types, and prove that types are invariant under reduction and expansion of term and state configurations, and characterize convergent terms via their typings.

We present a formal study of semantics for the relational programming language miniKanren. First, we formulate a denotational semantics which corresponds to the minimal Herbrand model for definite logic programs. Second, we present an operational semantics which models interleaving, the distinctive feature of the miniKanren implementation, and prove its soundness and completeness w.r.t. the denotational semantics. Our development is supported by a Coq specification, from which a reference interpreter can be extracted. We also derive from our main result a certified semantics (and a reference interpreter) for SLD resolution with cut and prove its soundness.

We discuss how mathematical semantics has evolved, and suggest some new directions for future work. As an example, we discuss some recent work on encapsulating model comparison games as comonads, in the context of finite model theory.

Logic is not only of foundational importance in mathematics, it is also playing a big role in software engineering and formal verification. Its different roles influence its teaching, which has to take into consideration the recent developments in category theory and proof theory. We show that teaching set theory from a categorical viewpoint, in contrast with Zermelo-Fraenkel axioms, helps develop proper skills that are essential in mathematics and software engineering. The use of a proof assistant provides students with another perspective on both subjects: basic category theory and proof theory.

Package repositories for a programming language are increasingly common. A repository can keep a register of the evolution of its packages. In the programming language Haskell, with its defining characteristic monads, we can find the Stackage repository, a curated repository for stable Haskell packages from the Hackage repository. Despite the widespread industrial use of Stackage, we are not aware of much empirical research about how this repository has evolved, including the use of monads. This paper presents an empirical study that covers the evolution of fourteen Long-Term Support (LTS) releases (period 2014-2020) of available packages (12.46 gigabytes), including the use of monads from the mtl package that provides the standard monad core (e.g., state, reader, continuations). To the best of our knowledge, this is the first large-scale analysis of the evolution of the Stackage repository with regard to packages used and monads. Our findings show, for example, that a growing number of packages depend on other packages whose versions are not available in a particular release of Stackage, opening a potential stability issue. Like previous studies, these results may evidence how developers use Haskell and give guidelines to Stackage maintainers.

We present a formal study of semantics for the relational programming language miniKanren. First, we formulate a denotational semantics which corresponds to the minimal Herbrand model for definite logic programs. Second, we present an operational semantics which models the distinctive feature of the miniKanren implementation, interleaving, and prove its soundness and completeness w.r.t. the denotational semantics. Our development is supported by a Coq specification, from which a reference interpreter can be extracted. We also derive from our main result a certified semantics (and a reference interpreter) for SLD resolution with cut and prove its soundness.

Tic-Tac-Toe is a simple, familiar, classic game enjoyed by many. This pearl is designed to give a flavour of the world of dependent types to the uninitiated functional programmer. We cover a journey from Tic-Tac-Terrible implementations in the harsh world of virtually untyped Strings, through the safe haven of vectors that know their own length, and into a Tic-Tac-Titanium version that is too strongly typed for its own good. Along the way we discover something we knew all along; types are great, but in moderation. This lesson is quickly put to use in a more complex recursive version.

We present the first formal verification of a networked server implemented in C. Interaction trees, a general structure for representing reactive computations, are used to tie together disparate verification and testing tools (Coq, VST, and QuickChick) and to axiomatize the behavior of the operating system on which the server runs (CertiKOS). The main theorem connects a specification of acceptable server behaviors, written in a straightforward "one client at a time" style, with the CompCert semantics of the C program. The variability introduced by low-level buffering of messages and interleaving of multiple TCP connections is captured using network refinement, a variant of observational refinement.

We introduce a new, diagrammatic notation for representing the result of algebraic effectful computations. Our notation explicitly separates the effects produced during a computation from the possible values returned, this way simplifying the extension of definitions and results on pure computations to an effectful setting. We give a formal foundation for our notation in terms of Lawvere theories and generic effects.

Handlers of algebraic effects aspire to be a practical and robust programming construct that allows one to define, use, and combine different computational effects. Interestingly, a critical problem that still bars the way to their popular adoption is how to combine different uses of the same effect in a program, particularly in a language with a static type-and-effect system. For example, it is rudimentary to define the “mutable memory cell” effect as a pair of operations, put and get, together with a handler, but it is far from obvious how to use this effect a number of times to operate a number of memory cells in a single context. In this paper, we propose a solution based on lexically scoped effects in which each use (an “instance”) of an effect can be singled out by name, bound by an enclosing handler and tracked in the type of the expression. Such a setting proves to be delicate with respect to the choice of semantics, as it depends on the explosive mixture of effects, polymorphism, and reduction under binders. Hence, we devise a novel approach to Kripke-style logical relations that can deal with open terms, which allows us to prove the desired properties of our calculus. We formalise our core results in Coq, and introduce an experimental surface-level programming language to show that our approach is applicable in practice.

Interaction trees (ITrees) are a general-purpose data structure for representing the behaviors of recursive programs that interact with their environments. A coinductive variant of “free monads,” ITrees are built out of uninterpreted events and their continuations. They support compositional construction of interpreters from event handlers, which give meaning to events by defining their semantics as monadic actions. ITrees are expressive enough to represent impure and potentially nonterminating, mutually recursive computations, while admitting a rich equational theory of equivalence up to weak bisimulation. In contrast to other approaches such as relationally specified operational semantics, ITrees are executable via code extraction, making them suitable for debugging, testing, and implementing software artifacts that are amenable to formal verification.
We have implemented ITrees and their associated theory as a Coq library, mechanizing classic domain- and category-theoretic results about program semantics, iteration, monadic structures, and equational reasoning. Although the internals of the library rely heavily on coinductive proofs, the interface hides these details so that clients can use and reason about ITrees without explicit use of Coq’s coinduction tactics.
To showcase the utility of our theory, we prove the termination-sensitive correctness of a compiler from a simple imperative source language to an assembly-like target whose meanings are given in an ITree-based denotational semantics. Unlike previous results using operational techniques, our bisimulation proof follows straightforwardly by structural induction and elementary rewriting via an equational theory of combinators for control-flow graphs.

Normal form bisimulation, also known as open bisimulation, is a coinductive technique for higher-order program equivalence in which programs are compared by looking at their essentially infinitary tree-like normal forms, i.e. at their Böhm or Lévy-Longo trees. The technique has been shown to be useful not only when proving metatheorems about \(\lambda \)-calculi and their semantics, but also when looking at concrete examples of terms. In this paper, we show that there is a way to generalise normal form bisimulation to calculi with algebraic effects, à la Plotkin and Power. We show that some mild conditions on monads and relators, which have already been shown to guarantee effectful applicative bisimilarity to be a congruence relation, are enough to prove that the obtained notion of bisimilarity, which we call effectful normal form bisimilarity, is a congruence relation, and thus sound for contextual equivalence. Additionally, contrary to applicative bisimilarity, normal form bisimilarity allows for enhancements of the bisimulation proof method, hence proving a powerful reasoning principle for effectful programming languages.

Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically.
This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two industrial case studies.
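The selective interface described above can be sketched for an optional-value type: both effects are declared statically, but `select` executes the function effect only when the scrutinee turns out to be a `Left`. This is an illustrative Python sketch of the behaviour, not the paper's Haskell interface.

```python
# Sketch of select :: f (Either a b) -> f (a -> b) -> f b
# for a Maybe-like f: values are ("just", v); failure is ("nothing",).

def unit(v):
    return ("just", v)

NOTHING = ("nothing",)

def left(a):
    return ("left", a)

def right(b):
    return ("right", b)

def select(m_either, m_func):
    """Execute m_func only when m_either yields a Left value."""
    if m_either == NOTHING:
        return NOTHING
    tag, v = m_either[1]
    if tag == "right":
        return unit(v)            # result already known: second effect skipped
    if m_func == NOTHING:         # only a Left forces the second effect
        return NOTHING
    return unit(m_func[1](v))
```

Note that `select(unit(right(3)), NOTHING)` succeeds with `("just", 3)` even though the second effect would fail: that dynamic skipping of a statically declared effect is precisely what distinguishes selective functors from applicative composition.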

We study polymorphic type assignment systems for untyped lambda-calculi with effects. We introduce an intersection type assignment system for Moggi's computational lambda-calculus, where a generic monad T is considered, and provide a concrete model of the calculus via a filter model construction. We prove soundness and completeness of the type system, together with subject reduction and expansion properties.

We present a novel method for ensuring that relational database queries in monadic embedded languages are well-scoped, even in the presence of arbitrarily nested joins and aggregates. Demonstrating our method, we present a simplified version of Selda, a monadic relational database query language embedded in Haskell, with full support for nested inner queries. To our knowledge, Selda is the first relational database query language to support fully general inner queries using a monadic interface.
