For me, I think the problem is that normal, boring, stupid, unsophisticated, plebeian imperative programming lives in a world of O(1) operations. That is to say, a "normal" line of code you type will be O(1), and then you generally start gluing those together with various things that start stacking on O(n) complexities. As things like the Accidentally Quadratic blog [1] show, in the normal programming world it is considered a bit humorous to accidentally stack even two of those things unnecessarily on top of each other.
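The "accidentally quadratic" pattern is easy to reproduce: every line looks like an O(1) primitive, but one of them quietly hides an O(n) scan inside a loop. A minimal Python sketch (the function names are mine, not from the blog):

```python
# Two innocent-looking primitives stacked into O(n^2): list membership
# ("x not in seen") is itself an O(n) scan, run once per loop iteration.
def dedupe_quadratic(items):
    seen = []
    out = []
    for x in items:           # O(n) iterations ...
        if x not in seen:     # ... times an O(n) list scan = O(n^2)
            seen.append(x)
            out.append(x)
    return out

# Same logic, but the membership test is O(1) on average, so the whole
# loop stays O(n).
def dedupe_linear(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Both return the same answer; only the hidden cost of one line differs.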

Certainly, people manage to make poorly performing things even so, but at least at the base level, your primitives may be stupid, but they are generally fast.

The logic programming world works in a default space of O(n) operations, which stack together more freely than in the imperative world, and that gives easy access to O(2^n). Since exponential runtime is essentially unusable, a great deal of work goes into trying to get that back down, but you're always intrinsically starting behind the eight ball. It is easier to work up from O(1) operations than to write an exponential or super-exponential algorithm and then try to trim it back down to a complexity a normal programmer can live with.
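The "easy access to O(2^n)" shows up the moment you write the most natural search: enumerate every assignment and keep the ones the constraints allow. A toy sketch of that default (the CNF encoding here is mine, not any solver's API):

```python
from itertools import product

# Naive propositional satisfiability by exhaustive enumeration.
# clauses: CNF as a list of clauses, each a list of (variable, polarity)
# literals; e.g. ("x", False) means NOT x.
def brute_force_sat(variables, clauses):
    for assignment in product([False, True], repeat=len(variables)):  # 2^n rows
        env = dict(zip(variables, assignment))
        if all(any(env[v] == pol for v, pol in clause) for clause in clauses):
            return env  # first satisfying assignment found
    return None  # exhausted all 2^n assignments

# (x OR y) AND (NOT x OR y)
result = brute_force_sat(["x", "y"],
                         [[("x", True), ("y", True)],
                          [("x", False), ("y", True)]])
```

Every declarative "find values such that ..." bottoms out in something like this unless the solver can prune; the pruning is the hard part.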

I think this is the root cause of why we generally don't see much logic programming. It's like a wild stallion: it may be powerful, but the effort you pour into just keeping it under control may exceed any benefit you get.

It isn't useless, of course. Amazing work has been done in the field of SAT solvers, and there's certainly a niche for it. The problems that are intrinsically higher-order polynomials or (technically) exponential, well, they are what they are and if you're going to be stuck in that world, logic programming may offer you a much better toolset than conventional programming on its own. But there was a hope a long time ago, in the Prolog era, that it could become part of the normal toolkit of general purpose programming, and I don't think that will ever happen, because of this line of logic.

This is a bit tangential to the article, it's just what I happened to read that finally crystallized this in my mind.

[1]: https://accidentallyquadratic.tumblr.com/

The issue you have circled around here is a bit deeper and more nuanced than trimming down exponential code to something more tractable. In fact, it's often worse, and O(2^n) can be a blessing!

The problem with logic programming is the 'logic' part.

Modern imperative and functional programming languages are constructive.

Logic programming is not, and the expressive power varies with the exact logic being used.

Elementary logic gives you the following intuition. Propositional logic -- easy. First order logic -- easy (with caveats). And (some kinds of) second order logic -- only easy if you get lucky with the problem you are trying to solve.

For logic based programming languages and systems, both the language implementer and the programmer have to be careful about supporting and using language constructs which boil down to tractable computation.

This is much more difficult than it seems.

For example, when using SMT solvers you quickly learn that multiplication and division by constants are very fast, while the same operations between variables can be intractable. I.e., reasoning about x * C is easy, while x * y is often (but not always) going to hang.
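The gap can be illustrated without an SMT solver at all (this toy search is my sketch of the asymmetry, not how any real solver works internally): with a constant multiplier the constraint inverts directly, but with two unknowns there is nothing to invert, and a naive approach degenerates into search.

```python
# x * c == k with a constant c: one arithmetic step, O(1).
def solve_const_mul(c, k):
    if c != 0 and k % c == 0:
        return k // c
    return None  # no integer solution

# x * y == k with two unknowns: nothing to invert, so this naive version
# searches the whole grid, O(bound^2) here. Over unbounded integers or
# wide bit-vectors, this blowup is roughly where solvers start to hang.
def solve_var_mul(k, bound):
    return [(x, y)
            for x in range(1, bound + 1)
            for y in range(1, bound + 1)
            if x * y == k]
```

Real solvers do far better than a grid scan, but nonlinear constraints still push them out of the cheap, well-behaved linear fragment.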

Of the logic programming languages, Prolog is actually pretty friendly when it comes to construction. You can easily force evaluation with a directive (a clause with an empty head), i.e. ":- thing1(x), thing2(y).", and Prolog gives you a lot of tools to control the current database with assert/retract (handy for memoization). miniKanren (which core.logic is based on) is much more restrictive and really wants things to be pure, though it does have the advantage of working as a library in a host language, with only a small runtime.

Neat, I wonder if this idea could be developed into a more refined programming experience where you spend time in the imperative world by default and only express some iterations as logic puzzles to be solved like this.

I've actually seen this usage pattern in the practical usage of logic programming libraries. Common Lisp's Screamer [1], for instance, or Clojure's less-mature core.logic [2].

Though to be fair, both of these libraries are in Lisp languages, which inherently support the design pattern of slipping into/out-of DSLs.
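In less Lisp-y languages you can approximate the same slip-into/out-of pattern with generators: imperative code by default, a declarative constraint search only where it pays off. This helper is my own sketch, not Screamer's or core.logic's API:

```python
from itertools import product

# A hypothetical one-function "logic block": yield every assignment drawn
# from the cartesian product of the domains that satisfies the constraint.
def solutions(domains, constraint):
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        env = dict(zip(names, values))
        if constraint(env):
            yield env

# Imperative world: ordinary control flow around a small declarative core.
pairs = []
for env in solutions({"x": range(1, 6), "y": range(1, 6)},
                     lambda e: e["x"] + e["y"] == 5):
    pairs.append((env["x"], env["y"]))
```

It lacks everything that makes real logic engines interesting (unification, constraint propagation), but it shows the ergonomic shape: the "puzzle" is one expression, and you step right back out into loops and mutation.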

[1] http://nikodemus.github.io/screamer/ [2] https://github.com/clojure/core.logic