• Peter T Breuer added an answer:
    What are the differences between the Prolog Haskell languages?

    I am a Prolog programmer looking at the Haskell language -- what are the similarities and differences? Especially around the Prolog predicates "bagof" and "findall".

    Peter T Breuer · Birmingham City University

    Your question is not particular to Haskell; it applies to modern functional languages generally. You also need to take into account that the proper comparison is not with basic Prolog, which is untyped, but with a typed logic programming language (take your pick). Running Prolog is like running some ancient untyped functional language (SASL?). There are advantages and disadvantages.

    As to "findall", I have no idea what it does. The name suggests that it finds all the elements of a list that satisfy a certain predicate. That would be

    [ x | x <- list, p x ]

    in a modern functional language.
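    For the record, Prolog's findall/3 collects all solutions of a goal, which this comprehension models reasonably well when the goal ranges over a list. A runnable sketch (the predicate even here is just an illustrative choice, and keepMatching is a made-up name):

    ```haskell
    -- filtering a list with a comprehension, and the equivalent Prelude filter
    keepMatching :: (a -> Bool) -> [a] -> [a]
    keepMatching p list = [ x | x <- list, p x ]

    main :: IO ()
    main = do
      print (keepMatching even [1 .. 10 :: Int])  -- [2,4,6,8,10]
      print (filter even [1 .. 10 :: Int])        -- same result via filter
    ```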

    Similarly, I don't know what "bagof" does. You need to describe these things! From the name, I presume it makes a bag out of a list. A bag is only a list reorganised to count repetitions instead of repeating them! So I would define bagof :: Eq x => [x] -> [(x, Int)] as follows (the Eq constraint is needed to merge repeated elements):

    bagof (x : xs) = x `bagadd` bagof xs

    bagof [] = []

    bagadd x [] = [(x, 1)]

    bagadd x ((y, n) : ys)
      | x == y    = (y, n + 1) : ys
      | otherwise = (y, n) : bagadd x ys

    I suppose Haskell already has a library of predefined "standard" bag representations and functions much more sophisticated than my invention above. In particular, I made a decision as to how to represent bags that may or may not be what you like. I would have expected that Haskellers had already defined an interface called Bag x which I could access via

    bagof :: [x] -> Bag x

    (whatever they call their function - I have no idea if "bagof" is what they would have chosen). Then I would declare my particular construction above to be something that instantiates the Bag type class, and define the required interface functions.
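    In practice there is no standard Bag class in base, but the containers library's Data.Map gives exactly the counting representation in one line, at the price of an Ord rather than Eq constraint. The name countBag is mine, not a library function:

    ```haskell
    import qualified Data.Map as Map

    -- count repetitions with Data.Map (from the containers library);
    -- fromListWith merges duplicate keys with the supplied function
    countBag :: Ord a => [a] -> Map.Map a Int
    countBag = Map.fromListWith (+) . map (\x -> (x, 1))

    main :: IO ()
    main = print (Map.toList (countBag "abracadabra"))
    -- [('a',5),('b',2),('c',1),('d',1),('r',2)]
    ```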

    Indeed, both list and bag are clearly monadic, because it's no problem for me to imagine every single entity in the universe as a list, and/or to imagine every single entity as a bag. So I can certainly apply a bag-producing function to a bag and get not a bag of bags but a bag, by unioning the pointwise results. That's monadic.

    f * b = (| [ (y, n * m) | (x, n) <- b, (y, m) <- f x ] |)

    (| bs |) = bag * bs

    bag x = [(x, 1)]

    Blah, blah [ I decided to mystify readers by inventing a (| ... |) notation for the union of bags and leaving you to guess what it means from the context]. I would hope that's all been set up by Haskell folks.
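    Spelling the (| ... |) union out concretely for the pair-list representation above (the names bagAddN, bagReturn and bagBind are mine, not standard library functions; note the Eq constraint is also why this representation cannot be made a Monad instance directly):

    ```haskell
    type Bag a = [(a, Int)]

    -- add n copies of x to a bag, merging with an existing entry if present
    bagAddN :: Eq a => a -> Int -> Bag a -> Bag a
    bagAddN x n [] = [(x, n)]
    bagAddN x n ((y, m) : ys)
      | x == y    = (y, m + n) : ys
      | otherwise = (y, m) : bagAddN x n ys

    bagReturn :: a -> Bag a
    bagReturn x = [(x, 1)]

    -- monadic bind: apply a bag-producing function pointwise, weight the
    -- results by the multiplicities, and union them (the (| ... |) step)
    bagBind :: Eq b => Bag a -> (a -> Bag b) -> Bag b
    bagBind b f = foldr (\(y, m) acc -> bagAddN y m acc) []
                    [ (y, n * m) | (x, n) <- b, (y, m) <- f x ]

    main :: IO ()
    main = print (bagBind [(1, 2), (2, 1)] (\x -> bagReturn (x `mod` 2)))
    -- [(0,1),(1,2)]
    ```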


    May I point out that the major difference between LP and FP is not over lists and bags and other such low-level things, but lies in the fact that FP is a higher-order language. `Functions are first-class data' is the mantra. In LP, by contrast, predicates are not first-class data ... you can't really make them on the fly and pass them around and so on. In FP, you want a function that produces a function that produces a function? No trouble. A list of functions that produce lists of functions? No trouble.

    So you can in one sense think of FP as a sort of improved Prolog, in that predicates can be results of computations as well as inputs to them. There is no two-level hierarchy of predicates and the things to which predicates apply.

    In another sense, LP is a sort of improved FP, in that it allows multi-valued functions (predicates). In FP, functions produce one result given certain inputs. In LP, more than one output may satisfy the same predicate for given inputs.
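    The list monad is the standard FP encoding of that multi-valuedness: a function returning [b] behaves like a predicate with any number of solutions, including none. A small sketch (the relation is invented for illustration):

    ```haskell
    -- a Prolog-style multi-solution "predicate": all (x, y) pairs drawn
    -- from 0..n whose sum is n, written in the list monad
    pairsSummingTo :: Int -> [(Int, Int)]
    pairsSummingTo n = do
      x <- [0 .. n]      -- x ranges over candidates, like a Prolog choice point
      y <- [0 .. n]
      if x + y == n then return (x, y) else []  -- [] = failure on this branch

    main :: IO ()
    main = print (pairsSummingTo 3)
    -- [(0,3),(1,2),(2,1),(3,0)]
    ```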

    And I won't even mention directionality, or the operational semantics.

    Yes, of course there are FP/LP languages that bridge those gaps. I am particularly reminded of some of the languages that use narrowing, instead of graph reduction/combinators or unification, as their operational mechanism. They can be seen as higher-order LP, or multi-valued FP. They're pretty ambiguous about direction too! I've seen the Newton iteration for square root coded up and then asked "what number gives 2 as a square root", and the answer "4" came back.


    Note that if you want to translate my definitions above to Prolog, it's easy for first-order functions such as list operators, and impossible for the higher-order operators I defined. The important area of comparison is where you don't know that the possibility exists!

    bagof(B, [X | XS]) :- bagof(B1, XS), bagadd(X, B1, B).

    bagof([], []).


  • Philip K.F. Hölzenspies added an answer:
    Haskell For Parallel Algorithms

    My friend says Haskell is better than Java for parallel programming. Haskell is a functional programming language. I would like to know which language is better.

    Philip K.F. Hölzenspies · Universiteit Twente

    I agree largely with what David Wonnacott said in his post, but there are a few things that maybe add a bit of value.

    -- Straw man: Purity = Free Concurrency

    First, let me take away a straw-man argument: the idea that functional programming is great for concurrent programming because purity means that everything is essentially concurrent. This is a very old ideal, which has largely been abandoned. The problem, as it turns out, is that too much concurrency is no good. Threads would be spawned to compute identity functions or very simple arithmetic. We still don't really know how to automatically derive the cost of a computation through static analysis, so we don't know where the profitable fork-points are. Maybe progress in work-stealing and lightweight dispatching warrants a revisit of this problem, but I'm sceptical.

    -- Laziness

    Second, let me say a few things about laziness. David correctly remarks that Haskell is a lazy functional language and that the example of adding elements to a collection is a little less obvious in this context. As it turns out, this is precisely one of those cases where it is tricky to assign meaningful costs. Suppose your collection is a binary tree type. Let's make stuff concrete:

    data Tree a = Node a (Tree a) (Tree a) | Leaf

    add :: Ord a => a -> Tree a -> Tree a
    add x Leaf = Node x Leaf Leaf
    add x (Node y l r)
      | x <= y = Node y (add x l) r
      | otherwise = Node y l (add x r)

    ex5LL, ex15LL, ex30LL, ex20_15_30, example :: Tree Int

    ex5LL = Node 5 Leaf Leaf
    ex15LL = Node 15 Leaf Leaf
    ex30LL = Node 30 Leaf Leaf
    ex20_15_30 = Node 20 ex15LL ex30LL

    example = Node 10 ex5LL ex20_15_30

    In this example, the root of the tree is 10. Suppose that David's example is made more concrete by saying

    add 2 (add 50 example)

    To a certain extent, you can't fully concurrently add these two elements to the tree, because "the tree" does not exist; you're telling your computer to add 2 to precisely that tree that results from adding 50 to the tree 'example'. In a strict language, calling a function first triggers the evaluation of its arguments, so the computation is fully sequential. In a lazy language, however, results of calling (or, as we prefer to call it, applying) a function are not evaluated until they are actually required. Therefore, 'add 50 example' immediately returns, but it returns an unevaluated result. The same holds for 'add 2 <said-result>'. Evaluating the "outer" result forces the (partial!) evaluation of the inner result. What happens is this (I'm not writing out every single step, just the biggies):

    add 2 (add 50 example)
    --> evaluate application of inner-add
    add 2 (Node 10 ex5LL (add 50 ex20_15_30))
    --> evaluate application of outer-add
    Node 10 (add 2 ex5LL) (add 50 ex20_15_30)

    From this point on, the insertions are independent of each other. This part of the concurrency-problem is not language-related; this comes from the data-structure, because it is defined to render different results for different sequences of adding (the same) elements. The beauty of laziness, though, is that you do not need to think further about the order of these computations.
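    Repeating the definitions from above, the end result of the two independent insertions can be checked with an inorder traversal (toList is my helper, not part of the original example):

    ```haskell
    data Tree a = Node a (Tree a) (Tree a) | Leaf

    add :: Ord a => a -> Tree a -> Tree a
    add x Leaf = Node x Leaf Leaf
    add x (Node y l r)
      | x <= y    = Node y (add x l) r
      | otherwise = Node y l (add x r)

    example :: Tree Int
    example = Node 10 (Node 5 Leaf Leaf)
                      (Node 20 (Node 15 Leaf Leaf) (Node 30 Leaf Leaf))

    -- inorder traversal: yields the keys in ascending order for a search tree
    toList :: Tree a -> [a]
    toList Leaf = []
    toList (Node x l r) = toList l ++ [x] ++ toList r

    main :: IO ()
    main = print (toList (add 2 (add 50 example)))
    -- [2,5,10,15,20,30,50]
    ```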

    That being said, there is a giant down-side to laziness when it comes to fork-lock-concurrency. Let's make things concrete again. Haskell has so-called MVars. An MVar is a mutable cell protected by a lock: it is either full or empty. Taking from an empty MVar blocks until some other thread fills it, and putting into a full one blocks until it is emptied. One naive way of creating a parallel map - the same function is applied to all elements of a list, but each in a separate thread - is this:

    parMap f [] = return []
    parMap f (x : xs) = do
      resultVar <- newEmptyMVar
      forkIO (putMVar resultVar (f x))
      fmap (resultVar :) (parMap f xs)

    The problem is that laziness prevents the evaluation of the actual results. Even if 'f' is some really, really expensive function to compute, all threads terminate almost instantly; they simply store the unevaluated application of 'f' to 'x', without computing the result. There are some clever tricks to force evaluation (seq, Control.Exception.evaluate, deepseq), but, from a software engineering perspective, they are a bit hit-and-miss (mostly because all data types you use, including those from external libraries, need to play along).
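    A sketch of a version that forces the work inside the forked thread and also collects the results (parMapForced is my name for it; evaluate only forces to weak head normal form, which suffices for flat results like Int but not for nested structures, where you would reach for deepseq's force):

    ```haskell
    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Exception (evaluate)

    -- fork one thread per element; evaluate (f x) makes the forked thread
    -- actually do the work instead of returning an unevaluated thunk
    parMapForced :: (a -> b) -> [a] -> IO [b]
    parMapForced f xs = do
      vars <- mapM (\x -> do v <- newEmptyMVar
                             _ <- forkIO (evaluate (f x) >>= putMVar v)
                             return v)
                   xs
      mapM takeMVar vars  -- blocks until every thread has delivered its result

    main :: IO ()
    main = parMapForced (* 2) [1 .. 5 :: Int] >>= print
    -- [2,4,6,8,10]
    ```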

    -- Concurrency models & tooling

    The general trend seems to be that locks should be considered the assembly-level of concurrency; it's typically not worth the pain of programming directly at that level, unless you are extremely performance conscious and know exactly what you're doing. Depending on what type of concurrency-problem you have, you can use higher-level concurrency models. David alluded to actors and I got the impression he suggested they were a Haskell thing. I know of quite a few Java-libraries that actually implement actor-models for concurrency. The basic idea of specifying concurrent applications as a network of actors (or tasks) that are connected through (possibly bounded) communication channels is not specific to any language.

    It should be said, though, that purity does give you a great advantage here: if your actors all have their own lexical scope (i.e. if you write what they do in different places), purity guarantees adherence to this model. In Java, an actor could accidentally modify an object that it received from, or already sent to, some other actor. You truly have to make an effort to make the same mistake in Haskell. I think, to a large extent, this is the type of problem a lot of Java-tools try to catch for you. The fact that fewer of such tools exist for Haskell does not mean the Haskell ecosystem is less developed (although, to some extent it really is), but rather that Haskell doesn't have nearly as many problems that tooling must catch (other than, of course, the type checker).

    There are many other models of concurrency. One that I commonly use is Software Transactional Memory. In this model, all updates are safe, because when a race occurs, you roll back and try again. This brings along a cost, but if conflicts are rare and a lot of concurrency is exposed this way, I often find it worth it. If you know more about dependencies, you may also want to explicitly annotate your program using the "pseq" and "par" operators. These are mostly used on shared-memory (SMP) machines.
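    A minimal STM sketch using the stm package (a GHC boot library): ten threads increment a shared counter with no explicit locks; conflicting transactions are rolled back and retried automatically by the runtime.

    ```haskell
    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM
    import Control.Monad (forM_, replicateM_)

    main :: IO ()
    main = do
      counter  <- newTVarIO (0 :: Int)
      finished <- newTVarIO (0 :: Int)
      forM_ [1 .. 10 :: Int] $ \_ -> forkIO $ do
        -- each transaction is atomic; on a conflict it is retried
        replicateM_ 100 (atomically (modifyTVar' counter (+ 1)))
        atomically (modifyTVar' finished (+ 1))
      -- block until all ten workers are done, then read the total
      atomically (readTVar finished >>= check . (== 10))
      readTVarIO counter >>= print
    -- prints 1000
    ```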

    Because my answer is starting to be a tad rambly, this seems to be a good time to refer to a bit of further reading. There's quite a decent overview of concurrency-mechanisms in a tutorial by Simon Peyton Jones:


    This par/pseq SMP-stuff is elaborated on in this paper:


    For everything SPJ published on parallel Haskell, have a look here:


    There's also a book by Simon Marlow. I haven't (yet) read it. It's probably worth a look, considering that he seems to be the performance-guy in Haskell-land (if such labels are to be used at all):


    Right... I hope this helps ;)

  • Alleyne Thompson asked a question:
    Haskell programming

    haskell parallel

  • Robin T. Bye added an answer:
    What are the advantages and disadvantages of using Haskell for artificial intelligence?
    Specifically, using Haskell for autonomous mobile robots and cooperative control.
    Robin T. Bye · Aalesund University College
    Some nice blog posts comparing Haskell with other languages used for AI:
    Haskell vs. Scala vs. Python: http://blog.samibadawi.com/2013/02/scala-vs-haskell-vs-python.html
    Haskell vs. Scheme: http://scienceblogs.com/goodmath/2006/10/24/haskell-and-scheme-which-one-a/
  • Stefan Savev added an answer:
    Scala implicits and type classes
    Do you know of any good material describing systematically the relationship between scala implicits and type classes?
    Stefan Savev · ResearchGate
    Thanks Michael! It's a good reference.
  • Robin T. Bye added an answer:
    Any suggestions for a good book on Haskell?
    Any book or page that contains good material for language Haskell.
    Robin T. Bye · Aalesund University College

    There is also "Real World Haskell" by Bryan O'Sullivan, John Goerzen, and Don Stewart. That book is published by O'Reilly, and like many O'Reilly books, it is available for free online: http://book.realworldhaskell.org/.

    The online version is particularly useful, as it also contains comments by readers, which have ensured that even the tiniest errors are caught and the explanations continually refined.

  • Peter Padawitz added an answer:
    Is there any logic language to implement quantum programming?
    I am currently interested in exploring the capabilities of quantum programming. I found some implementations in the literature based on C or C++. There is also a very promising language, QML, on top of the functional language Haskell that is based on linear logic, but I would rather use a logic language. Does anyone know of work along those lines?
    Peter Padawitz · Technische Universität Dortmund
    Maybe this helps: http://www.mathstat.dal.ca/~selinger/quipper/

About Haskell

Group for Haskell learners and practitioners

Topic Followers: 79