I've been aware of Marpa since about 2010. Every once in a while I read whatever Kegler has written, and I come away from it each time unconvinced that I should put in the effort to use it (which, for the most part, means first porting it to another language). On the plus side, he does acknowledge some of the prior art in the field. But on the flip side, he makes expansive claims, and there has been essentially zero uptake of Marpa or even its ideas, as far as I am aware.

Does Marpa live up to its advertising? Is it more generally useful than GLR or GLL (which, despite a flurry of effort in the past decade, have themselves not set the parsing world on fire)? It's frustrating, because the parsing world has gone around in circles for many years now (PEG being a great example). It would be nice to have algorithms with superior parsing power that are at least as fast as the less powerful ones virtually everyone uses today.

May I ask what kind of criteria you have in mind for being “more generally useful”? Parsers are something I know next to nothing about and hardly ever have to reach for. Is it mostly speed and power? Is the current crop not sufficient to parse any BNF, or is it that people want something to handle more permissive grammars?

The only parsing library I’ve played with recently is Instaparse in Clojure, which is apparently GLL. It was a delight to use. I found a PDF of the old version of Apache Expression Language that I needed to interpret, typed it almost verbatim into a quite readable text file, and Instaparse did the rest. It was an exciting experience to have unlocked this embedded DSL with about three lines of code, not counting the grammar definition.

https://github.com/Engelberg/instaparse
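
To give a sense of what that "three lines of code" experience looks like: Instaparse's `insta/parser` takes a grammar written as a plain (E)BNF string and returns a callable parser. This is a minimal sketch with a toy arithmetic grammar of my own, not the Apache EL grammar mentioned above:

```clojure
(require '[instaparse.core :as insta])

;; insta/parser accepts the grammar as an ordinary string in
;; (E)BNF notation and returns a function from input strings
;; to parse trees.
(def parse-expr
  (insta/parser
    "expr = term ('+' term)*
     term = #'[0-9]+'"))

;; Calling the parser yields a hiccup-style tree,
;; e.g. [:expr [:term "1"] "+" [:term "2"]]
(parse-expr "1+2")
```

Since the grammar is just data, a spec transcribed from a PDF can be dropped in nearly verbatim, which is presumably what made the experience above so painless.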