Logic programming is experiencing something of a resurgence in the Clojure community, sparked by core.logic, a port of miniKanren, an embedding of Prolog-style logic programming in Scheme. When Clojure programmers first hear about core.logic, they go rushing off in search of a problem to solve, like a hammer looking for a nail.

This means that every few months, we see a new blog post demonstrating how to use core.logic to solve a logic puzzle. The latest one to catch my attention was this blog post by Kris Jenkins detailing a logic puzzle tackled by the London Clojure group. The blog post begins by describing the puzzle and says:

It cries out, "Logic Programming", doesn't it?

**NO, NO, NO!!!!** It doesn't cry out logic programming, for reasons I'll get into in a moment.
The blog post explains that the London group was unable to solve the puzzle in a single session, and goes on to explain the complications they ran into -- for example, negation rules and rules involving inequalities proved harder than expected to encode. They also needed to introduce macros to make the code more readable. You'd think that, after detailing all these problems, the post would end with the observation that logic programming turned out to be a convoluted way to express the logic puzzle and failed to live up to expectations; yet, amazingly, it does not.

The reason why logic programming is so rarely useful is that, essentially, core.logic is just a complex DSL for doing exhaustive search. Clojure already *has* an elegant, compact DSL for doing exhaustive search -- it is called the `for` comprehension.

So let's take a look at just how much more cleanly and easily this logic puzzle can be solved just using Clojure's built-in `for` comprehension. Take a look at this code. To run it, you'll need to `(use 'clojure.math.combinatorics)` for the `permutations` function.

As you read the code, I want you to look at how clear and concise the translation is from the English constraints to Clojure. You've got a few lines setting up the variables, and then every single line of Clojure code is a straightforward expression of the original problem statement. Notice how trivial it is to handle things like negation, ordering, and inequalities.
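The post's actual solution is in Clojure, but the shape of the technique carries over directly. Here is a rough Python analogue -- a sketch only, using a made-up three-house toy puzzle rather than the puzzle from the post, with `itertools.permutations` standing in for `clojure.math.combinatorics`:

```python
# Hypothetical toy puzzle: Alice, Bob, and Carol each live in one of
# houses 1..3. Alice's house number is odd; Bob does not live in house 1;
# Carol lives next door to Bob.
from itertools import permutations

solutions = [
    (alice, bob, carol)
    for alice, bob, carol in permutations([1, 2, 3])
    if alice % 2 == 1           # Alice's house number is odd
    if bob != 1                 # negation: Bob is NOT in house 1
    if abs(carol - bob) == 1    # inequality/arithmetic: Carol is next to Bob
]
print(solutions)
```

Each `if` clause is one constraint, read straight off the problem statement; negation, ordering, and inequalities are just ordinary boolean expressions.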

Now go back and read the core.logic version. Admit it, it's ugly in comparison.

I think a lot of people mistakenly believe that core.logic is working some magic behind the scenes to solve puzzles in some amazingly efficient manner. On the contrary, it's just brute-force search, using complex machinery that only slows it down relative to a for comprehension. Not only is the for version elegant, but it's also faster.

But the disadvantages of core.logic for solving logic puzzles don't end there. core.logic is very sensitive to the way that goals are ordered, in a bad way. Certain orderings, for example, will cause the program to go into an infinite loop, and the DSL is complex enough that this is not always readily apparent.

The for comprehension version is also sensitive to ordering, but in a good way. In the for version, the code is evident and there is no mystery about how it will execute, so it is a trivial matter to rearrange the rules to *improve the program's running time*. The rule of thumb is simple: move each constraint up so that it occurs just after the definition of any variables it depends on. Here is a version where the rules have been shifted around in this straightforward way, making the search run ten times faster.
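The rule of thumb can be sketched in Python with made-up constraints (not the puzzle from the post): in a nested comprehension, a filter placed just after the last variable it mentions prunes branches early, while the same filter at the end forces every full tuple to be generated first.

```python
# Same search written two ways; both return identical results,
# but early() prunes each branch as soon as its variables exist.
DIGITS = range(10)

def late():
    # all constraints at the end: every (a, b, c) triple is generated first
    return [(a, b, c)
            for a in DIGITS for b in DIGITS for c in DIGITS
            if a + b == 7 if a < b if c == a * b]

def early():
    # each constraint moved up next to the last variable it depends on
    return [(a, b, c)
            for a in DIGITS
            for b in DIGITS if a + b == 7 and a < b
            for c in DIGITS if c == a * b]

assert late() == early()
```

`late()` examines all 1000 triples, while `early()` reaches the `c` loop only for the four `(a, b)` pairs that survive -- the same rearrangement, and the same kind of speedup, as in the Clojure version.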

So all this is to say that I think logic programming is overrated, at least for solving logic puzzles. I write logic puzzles for a living, and I haven't yet found a practical use for core.logic. So far, every logic puzzle I've seen tackled in a core.logic blog post could have been solved just as easily, or better, with a for comprehension.

Does that mean core.logic is useless? Of course not. Logic programming is good for running programs backwards (I've seen some wonderful toy examples of this, like the program that generates Quines, but have yet to find a real-world example where this is useful), unification is important for writing type checkers, and the new constraint programming piece of core.logic based on cKanren has wonderful potential to be directly useful to me for the kinds of things I program (although probably not until it comes with tools for guiding and visualizing the search strategy).

So the purpose of this article is not to demean core.logic, but rather, to elevate the level of discourse surrounding it. Can we move past solving logic puzzles which would be better solved with a for comprehension? Please, show me things I can do with core.logic that would be hard to express any other way. I look forward to it!

OK, so I'll bite, though you must be ready for it. The canonical sudoku example (http://news.ycombinator.com/item?id=4325746) using for?

That example uses the new constraint propagation and solving engine (infd and distinctfd), so really isn't what I was talking about. As I mention toward the end of the article, I like constraint propagation and think it has the potential to be quite useful, once it gets better docs and tools. (For example, take a look at the search explorer tool in Alice ML at http://www.ps.uni-saarland.de/alice/manual/cptutorial/node25.html).

Sudoku is a nice showcase for the new features of core.logic, but you've got to admit that Sudoku is inherently a pure exercise in constraint propagation of numbers, and about as perfectly suited for that kind of engine as one can imagine. It will be nice to eventually see more examples of constraint propagation on real-world problems.

For example, I think it is often just as important or more important for a solving engine to return a tree of its solution process; the solution is not enough. core.logic's constraint solver as it stands might be too much of a "black box". Time will tell.

Also, as a community we really need to gain a better understanding of how core.logic constructs compose with other Clojure functions. In my limited experience, relations and functions don't compose all that well, and everything you want to use in your logic program has to be converted into a relation. Often that's not practical. So if relations turn out not to compose well with larger Clojure systems, then core.logic loses a lot of its appeal and one might as well use one of the many other advanced constraint solvers (e.g., Zinc) or Java constraint libraries (e.g. JaCoP).

But overall, yes, the sudoku example is a good showcase for constraint satisfaction programming, and points towards interesting discussions and discoveries that lie ahead for the Clojure and core.logic communities.

Interesting post. Too many things for me to talk about for a blog comment. I'll try to formulate a proper response blog post in a few days.

I figured you'd have something to say :) I look forward to your response!


Good post Mark, and I like your solution.

I think you're missing the point of the hack night though. The aims are to pick a fun way to solve a fun problem, and learn something new along the way. On those criteria, core.logic met my expectations entirely. It was a great evening.

But the leap from pizza-n-beer motivations to real-world coding is a large one, and there I entirely agree with your point. If this were work instead of play, I'd solve it your way too.

I'll be interested to read David Nolen's response when it comes. I don't have any firm thoughts on where core.logic might be a good fit. It seems to me that there's only been one /really/ successful use of this style of programming -- the what-but-not-how approach -- and that's SQL.

Why has SQL worked well? I think partly because the problem space is rigidly defined, the grammar is small, but the number of resulting queries you can put together is vast. In SQL we have a high-level mini language for exploration of well-known terrain.

If we change problem space every 5 minutes (sudoku today, logic puzzles tomorrow), logic programming will take too much up-front effort just to understand the shape of the solution.

If we don't need to explore because there's only one kind of answer (sudoku, logic puzzles), logic programming is massive overengineering. And yet, we're glad of SQL where we have it, and would be loath to lose it.

So maybe that's one shape to think about. Well defined domain, small grammar, many possible queries.

Can I think of any problem domains that have that structure? Erm...graph analysis maybe? Analyzing abstract syntax trees? Finding significant events from Google Analytics logs? Maybe they'd be fruitful areas to look into...

I've written up a response blog post here - http://swannodette.github.com/2013/03/09/logic-programming-is-underrated/.

Thanks for the response! Everyone who is following this discussion should go read it now.

I think that in my original post, I failed to make clear that I make a distinction between "logic programming" (i.e., the original part of core.logic that is inspired by miniKanren), and "constraint programming" (the new features introduced in cKanren). I believe that "logic programming" is overrated, but not "constraint programming". Towards the end of the article I specifically said, "the new constraint programming piece of core.logic based on cKanren has wonderful potential..." but I think that positive endorsement may have gotten lost among the criticisms I raised.

(To readers who are new to core.logic, the point here is that core.logic started out with only "logic programming", ported from miniKanren. This is what nearly all the blog posts about core.logic focus on, and what most people identify with core.logic. Fortunately, there is new support for constraint programming, ported from cKanren, that vastly improves the utility of core.logic. I'm not certain if it is feature complete, and those features aren't well documented, but it is exciting stuff.)

Your post and your code example are right in line with my comments that constraint programming is where it's at. Constraint propagation does in fact work the kind of "clever magic" that people wrongly associate with logic programming, and is usually more efficient than the brute-force search performed by list comprehensions and/or straightforward logic programming.

If my post, in conjunction with your follow-up, helps readers to understand just how much more expressive power is gained by the constraint functionality, and start focusing on those core.logic features, then "my work here is done."

You expressed concern about my lack of core.logic experience, but despite my limited experience with core.logic, I have a fair amount of related experience to draw upon, including time spent with Mozart, Gecode, AliceML, and of course, core.logic's immediate progenitor -- miniKanren. I assure you that my statement that "logic programming is overrated" comes directly from personal experience, from attempts to employ state-of-the-art logic programming techniques to problems for which one would expect it to be well-suited, only to find a cleaner, more efficient solution by coding my own custom constraint propagators in a functional programming language.

I will leave off with two positive statements about core.logic, and the growth I hope to have with it:

1. One of the design goals for cKanren was to make it easy to design custom constraint propagators. I'm really looking forward to exploring this aspect. In the past, I haven't been able to easily use off-the-shelf constraint packages because the built-in constraints were insufficient.

2. For the most part, I consider logic programming and constraint programming to be orthogonal. There have been plenty of logic programming languages without constraints, and plenty of constraint libraries for ordinary languages. In core.logic, the features coexist in a potentially interesting way, and I'm looking forward to exploring whether the combination is greater than the sum of its parts.

Thanks for the long thoughtful reply. I'll only add that traditional logic programming is in fact constraint logic programming over tree terms (potentially infinite) with only one constraint - unification! This was known in the late 80s and constraint logic programming is simply a generalization of the original idea - not a completely new direction.

OK, so this is a very good discussion. Google brought me here because I was googling the phrase: "aren't the constraints in constraint logic programming just relations?"

You can see why I came to this post: I am bothered by what I view as an unnecessary distinction of _paradigms_ (not technologies) called "logic programming" and "constraint logic programming", when I don't feel that distinction is warranted for reasons other than 1) historical and 2) implementation considerations.

I am either bothered because I don't quite understand (possible) or there really is no _valid_ distinction as explained by (1) and (2) above (also possible). I am hoping someone can add some insight.

In short, as someone new to both fields: when I hear the term 'constraint', and when I see 'constraints' explained in all the examples, they seem to be nothing other than relations with special names (like '>') ranging over well-known domains (like X + 1 > Y).

Facts are tuples, or individual points in a Cartesian space, and thus elements of a subset of that Cartesian space called a 'relation'. It took me a long time to understand this fundamental statement. And importantly, it exposes the fact that a 'relation' is a 'constraint' on the chosen Cartesian space, limiting the space to that subset.

Thus:

likes(daniel, butter).

likes(daniel, jelly).

likes(daniel, avocado).

encodes three points in the entire Cartesian space of humans × food and names this subset 'likes'. In a database, it would be a table called 'likes' with two columns ('human' and 'food') and three rows. These are isomorphic. I have constrained the entire space to three points, and given it a name.
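This "relation = named subset" reading can be sketched in a few lines of Python, with a set of tuples standing in for the Prolog facts / database table:

```python
# A relation is just a named subset of a Cartesian product of domains.
likes = {("daniel", "butter"), ("daniel", "jelly"), ("daniel", "avocado")}

# "what does daniel like?" is a projection over that subset
daniels_foods = {food for (who, food) in likes if who == "daniel"}
assert daniels_foods == {"butter", "jelly", "avocado"}

# a pair outside the subset is simply not in the relation
assert ("daniel", "anchovies") not in likes
```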

The greater-than relation '>' could (if we had infinite lines in our program) be encoded thus:

'>'(4,3).

'>'(4,2).

'>'(4,1).

'>'(4,0).

'>'(3,2).

.... etc

So when I see a constraint, I see a relation that might not explicitly list its facts, but which provides an inversion to _generate_ these facts.
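That extensional-vs-generative distinction might be sketched in Python like so, with a small finite slice standing in for the infinite fact table:

```python
# Extensional form: '>' as a finite table of explicitly enumerated facts.
gt_facts = {(x, y) for x in range(5) for y in range(5) if x > y}

# Intensional form: '>' as a predicate that decides membership without
# ever enumerating the (infinite) set of facts.
def gt(x, y):
    return x > y

# Over the finite slice, the two representations agree...
assert all(gt(x, y) for (x, y) in gt_facts)

# ...but the predicate also covers points no finite table could list.
assert gt(10**9, 3)
```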

So in the wikipedia (http://en.wikipedia.org/wiki/Constraint_logic_programming) example of a constraint:

"A(X,Y) :- X+Y>0, B(X), C(Y). In this clause, X+Y>0 is a constraint; A(X,Y), B(X), and C(Y) are literals as in regular logic programming"

I fail to see how the X+Y>0 constraint is anything more than a built-in arithmetic relation. One could imagine naming that relation together_greater_than_zero and listing out all the (infinite) facts:

together_greater_than_zero(2,-1).

together_greater_than_zero(3,-2).

together_greater_than_zero(4,-3).

together_greater_than_zero(4,-2).

together_greater_than_zero(4,-1).

together_greater_than_zero(4,34).

... etc

Given all these assumptions, the only defining characteristic of 'constraint logic programming' -- and thus the reason we have add-on packages like CLP(Tree), CLP(R), etc. -- is an implementation strategy that stores the constraints in a separate data structure during unification and backtracking (because there is no way to unify against infinite facts). In other words, it is a different implementation strategy for a well-defined domain (integers, or reals) using some well-known arithmetic operators (*, +, /) with well-known order relations (>, <, >=, <=).
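A minimal, purely illustrative Python sketch of that strategy (not core.logic's or any real CLP system's actual machinery): a constraint over an infinite domain goes into a separate store and is only checked once all of its variables are bound.

```python
# Toy constraint store: constraints are parked until their variables
# are ground, then checked; this stands in for unifying infinite facts.
store = []       # deferred (predicate, variable-names) pairs
bindings = {}    # variable name -> value

def constrain(pred, *vars):
    store.append((pred, vars))

def bind(var, value):
    bindings[var] = value
    # wake every constraint whose variables are now all ground
    for pred, vars in store:
        if all(v in bindings for v in vars):
            if not pred(*(bindings[v] for v in vars)):
                raise ValueError("constraint failed on " + str(vars))

constrain(lambda x, y: x + y > 0, "X", "Y")
bind("X", 2)     # constraint still pending: Y is unbound
bind("Y", -1)    # now ground: 2 + (-1) > 0 holds, so no failure
```

Binding Y to, say, -5 instead would trip the stored constraint and raise, which is the store-then-check behavior being described.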

I think I saw this distinction, and the identification of this conflation, in swannodette's last post.

My motivation for this is purely pedagogical. I am brand new to the fields themselves, and was hoping for you to identify a subtlety I might be missing.
