In exchange, the compiler catches a lot of bugs and the code is blazing fast.
Curry is (roughly) a superset of Haskell. It takes Haskell's pattern matching and makes it extremely general (full unification), and extends it to non-determinism with choice points. It does have a REPL, like GHCi.
Like Haskell, Curry is lazy. Mercury (like Prolog) uses mostly eager, depth-first evaluation (SLDNF resolution). Clause order doesn't matter in Curry, which uses a strategy of "needed narrowing": variables are narrowed only when their values are needed.
Unlike Mercury (and Prolog), and like Haskell and other FP languages, Curry draws a distinction between function inputs and outputs. You can do relational programming via guards and pattern matching, but it doesn't feel as Prolog-y.
Curry is more niche than Mercury; Mercury is at least being used to build Souffle (a static analysis language built on Datalog), which sees some actual industry use. That's a shame, because Curry has a lot to offer, especially to Haskellers. They're both worth checking out, though.
% This generates or recognizes any (odd-length) palindrome:
pal --> [_].
pal --> [X], pal, [X].
% Here we try it out and press ; to generate more answers.
?- phrase(pal, P).
P = [A] ;
P = [B, A, B] ;
...
% Here we plug in a value. It fails with [A], fails with [B,A,B], etc.,
% until it gets to [D,C,B,A,B,C,D], which can be unified with "racecar":
?- phrase(pal, "racecar").
true.
Another example is just (X=a;X=b),(Y=b;Y=a),X=Y. This has two answers: X=a,Y=a and X=b,Y=b. What happens is that it first tries X=a, then moves on to the second clause and tries Y=b, then moves on to the third clause and fails, because a≠b! So we backtrack to the last choicepoint and try Y=a, which succeeds. If we tell Prolog we want more answers (by typing ;), we have exhausted both options for Y, so we go back to the first clause and try X=b, then start afresh with Y (Y=b), and we get the second solution.
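For concreteness, here's that session at an SWI-Prolog prompt:

?- (X = a ; X = b), (Y = b ; Y = a), X = Y.
X = Y, Y = a ;
X = Y, Y = b.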
Prolog goes in order, and goes deep. This is notoriously problematic, because it's incomplete. Curry only evaluates choicepoints that a function's output depends on, and only when that output is needed. Curry does have disjunctions (using ? rather than Prolog's ;), unification (by =:= rather than =), and pattern guards rather than clause heads, and the evaluation strategy is different because of laziness, but in terms of the fundamentals this is what "non-determinism" means in logic programming. It doesn't mean random; it means decisions are left to the machine to satisfy your constraints.
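To ground that: Curry's ? is literally defined by two overlapping rules (x ? _ = x and _ ? y = y). In Prolog terms that's nothing more exotic than a predicate with two clauses; choice/3 here is a hypothetical name, not a standard predicate:

choice(X, _, X).
choice(_, Y, Y).

% The machine is free to pick whichever clause satisfies
% the rest of the query:
?- choice(a, b, Z), Z = b.
Z = b.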
Off the top of my head, but I think that should be backticks, not double quotes? So that `racecar` is read as a list of character codes? I might try it later.
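For reference: under SWI-Prolog's default flags, double quotes denote string objects and backticks denote lists of character codes, so the backtick version is the one that hands the DCG a list (on systems where the double_quotes flag is set to codes or chars, the double-quoted original works as-is):

?- phrase(pal, `racecar`).
true.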
>> Prolog goes in order, and goes deep. This is notoriously problematic, because it's incomplete.
Yes, because it can get stuck in left-recursive loops. On the upside, that makes it fast and lightweight in terms of memory use. Tabled execution with memoization (a.k.a. SLG-Resolution) avoids the incompleteness but trades time for space, so you now risk running out of RAM. There's no perfect solution.
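For instance, a left-recursive reachability program loops forever under plain SLD resolution but terminates once tabled. A sketch using the table/1 directive (as in SWI-Prolog or XSB); edge/2 is a made-up relation:

:- table path/2.

edge(a, b).
edge(b, c).

% Left recursion: fatal under DFS, fine under tabling.
path(X, Y) :- edge(X, Y).
path(X, Y) :- path(X, Z), edge(Z, Y).

?- path(a, Where).
Where = b ;
Where = c.

(Answer order under tabling may vary.)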
Welcome to classical AI. Note the motto over the threshold: "Soundness, completeness, efficiency: choose two".
https://curry-lang.org/docs/report/curry-report.pdf
Interesting, the email at the end of this thread: https://news.ycombinator.com/item?id=12668591
Speed, comparing code samples of small algorithms, any notable dependencies, features (immutable data, static typing, etc.), and so on.
I really feel like Prolog and its Horn clause syntax are underappreciated. For as much as Lispers will rant and rave about macros, and how their code is data, it always struck me as naive cope. How can you say that code is data (outside of the obvious von Neumann meaning), but still require a special atomic operation to distinguish the two? In Prolog, there is no such thing as a quote. It literally doesn't make sense as a concept. Code is just data. There is no distinguishing between the two; they're fully unified as concepts (pun intended). It's a special facet of Prolog that only makes sense in its exotic execution model, which doesn't even have a concept of a "function".
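To make that concrete: a goal is just a term you can build, take apart, and then run, with no quoting operator anywhere in sight:

?- Goal = append(Xs, [c], [a,b,c]),  % Goal is plain data here
   Goal =.. [F|_Args],               % take it apart like any other term
   call(Goal).                       % and now it's code
Goal = append([a,b], [c], [a,b,c]),
Xs = [a,b],
F = append.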
For that reason, I tend to have a pessimistic outlook on things like Curry. Static types are nice, and they don't work well with Horn clauses (without abusing atoms/terms as a kind of type system), but that's really not relevant enough to the paradigm that replacing beautiful Horn clauses with ISWIM/ML syntax makes sense to me. Quite frankly, I have great disdain even for Elixir, which trades the beautiful Prolog-derived syntax of Erlang for a pseudo-Ruby.
One thing I really would like to see is further development of the abstract architectures used for logic programming systems. The WAM is cool, but it's absolutely ancient, and theory has progressed light-years since it was designed. The interaction calculus, or any graph reduction architecture, promises huge boons for a neo-Prolog system. GHC has incidentally paved the way for a brand new generation of logic programming. Sometimes I feel crazy for being the only one who sees it.
I'm speaking from personal experience here. DFS with backtracking has always featured very prominently in discussions I've had with functional programming folks about logic programming and Prolog, and for a while I didn't understand why. Well, it's because they have an extremely simplified, reductive model of logic programming in mind. As a consequence there's a certain tendency to dismiss logic programming as overly simplistic. I remember a guy telling me the simplest exercise in some or other of the classic functional programming books is implementing Prolog in (some kind of) Lisp [1], and it's so simple! I told him the simplest exercise in Prolog is implementing Prolog in Prolog, but I don't think he got what I meant, because what the hell is a Prolog meta-interpreter anyway [2]?
I've also noticed that functional programmers are scared of unification: weird pattern matching on both sides, why would anyone ever need that? They're also freaked out by the concept of logic variables and what they call "patterns with holes", like [a,b,C,D,_,E], which are magickal and mysterious, presumably because you have to jump through hoops to do something like that in Lisp. Like you have to jump through hoops to treat your code as data, as you say.
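And yet in Prolog a "pattern with holes" is just a term with variables; unification fills in the holes from whichever side has the information:

?- [a, b, C, D, _, E] = [a, b, c, d, anything, e].
C = c,
D = d,
E = e.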
And of course if you drop Resolution, you drop SLD-Resolution, and if you drop SLD-Resolution you drop the Horn clauses, whose big advantage is that they make SLD-Resolution a piece of cake. Hence the monstrous abomination of "logic programming" languages that look like ... Haskell. Or sometimes like Scheme.
Beh, rant over. It's late. Go to sleep grandma. yes yes you did it all with Horn clauses in your time yadda yadda...
___________
[1] Like in this MIT lecture by H. Abelson, I believe with G. Sussman looking on:
https://youtu.be/rCqMiPk1BJE?si=VBOWeS-K62qeWax8
[2] It's a Prolog interpreter written in Prolog. Like this:
prove(true):-
!. %OMG
prove((Literal,Literals)):-
prove(Literal)
,prove(Literals).
prove(Literal):-
Literal \= (_,_)
,clause(Literal,Body)
,prove(Body).
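To see it run, give it something to prove. Here's a made-up nat/1, declared dynamic so that clause/2 can inspect it portably (ISO restricts clause/2 to dynamic predicates):

:- dynamic nat/1.
nat(z).
nat(s(X)) :- nat(X).

?- prove(nat(s(s(z)))).
true.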
Doubles as a programmatic definition of SLD-Resolution.

>> I remember a guy telling me the simplest exercise in some or other of the classic functional programming books is implementing Prolog in (some kind of) Lisp and it's so simple!
it's really easy to underestimate just how well engineered prolog's grammar is, because it's so deceptively simple. the only way you're getting simpler is like, assembly. and it's a turing equivalent kind of machine, but because if you squint your eyes you can delude yourself into thinking it kind of looks procedural, people can fool themselves into satisfaction that they "get" it, without actually getting it.
but the moment NAF and resolution as a concept clicks, it's like you brushed up against the third rail of the universe. it's insane to me we let these paradigms rot in the stuffy archives of history. the results this language pulls with natural language processing should raise any sensible person's alarm bells to maximum volume: something is Very Different here. if lisp comes from another planet, prolog came from an alternate dimension. technological zenith will be reached when we push a prolog machine into an open time-like curve and make our first hypercomputation.
Well, hello fellow traveler :)
>> but the moment NAF and resolution as a concept clicks, it's like you brushed up against the third rail of the universe.
I know, it's mind blowing. Maybe one day there will be a resurgence.
Been messing with it & Answer Set Programming recently; still trying to work out my own thoughts on it.