This is about evaluating dynamic expressions over (semi-?)structured, row-oriented data for the purpose of filtering.

It contrasts a tree interpreter in C++ with a JIT-compiled, dynamically generated Lisp expression, while hand-waving away what the equivalent JIT in C++ would look like, seemingly dismissing it as taking too long (is that what "unpause cosmic time" alludes to? I'm not sure).

The tree interpreter is a little unorthodox (it isn't how I'd write an AST-walking interpreter), and other interpreter techniques, like compiling to a linear program for a simple virtual machine, aren't considered. Those can be pretty fast, especially with the implementation-specific computed goto available in gcc and clang. A linear VM would also get rid of the author's worries about recursion depth and the lack of TCO, while increasing locality and reducing cache pressure.

But of course there's not much need to write such an interpreter by hand. Why not use a JIT framework for C++? Depending on the library, it wouldn't be much more complex than a traversal of the AST.
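Several such libraries exist (LLVM's ORC, GNU lightning, AsmJit), and even short of a real JIT, one traversal of the AST can "compile" the filter into nested closures, eliminating the per-node type dispatch from the hot path. A sketch, with an AST shape invented for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

// "Closure compilation": one pass over the AST builds a tree of
// std::function objects; evaluating a row then runs straight through
// captured closures with no switch on node kind.
using Row  = std::vector<int64_t>;
using Pred = std::function<bool(const Row&)>;     // boolean filter
using Expr = std::function<int64_t(const Row&)>;  // numeric subexpression

struct Ast {
    enum Kind { FIELD, CONST, GT, AND_ } kind;
    int64_t value = 0;  // field index (FIELD) or literal (CONST)
    std::shared_ptr<Ast> lhs, rhs;
};

Expr compile_expr(const std::shared_ptr<Ast>& n);

Pred compile_pred(const std::shared_ptr<Ast>& n) {
    switch (n->kind) {
    case Ast::GT: {
        Expr l = compile_expr(n->lhs), r = compile_expr(n->rhs);
        return [l, r](const Row& row) { return l(row) > r(row); };
    }
    case Ast::AND_: {
        Pred l = compile_pred(n->lhs), r = compile_pred(n->rhs);
        return [l, r](const Row& row) { return l(row) && r(row); };
    }
    default:
        return [](const Row&) { return false; };
    }
}

Expr compile_expr(const std::shared_ptr<Ast>& n) {
    switch (n->kind) {
    case Ast::FIELD: {
        int64_t i = n->value;
        return [i](const Row& row) { return row[i]; };
    }
    default: {  // Ast::CONST
        int64_t v = n->value;
        return [v](const Row&) { return v; };
    }
    }
}
```

A real JIT library replaces those closure allocations with emitted machine code, but the shape of the compile pass stays the same: one case per node kind, just like the tree-walking eval.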

And the next question is: if the problem is querying plain-text databases, why not use Apache Impala? It's written in C++, uses LLVM to compile SQL expressions into native code, and can evaluate filters (and not just filters, but the full power of SQL) over CSV text.

Maybe Impala and its dependencies are too big, but if that's the case then your data is probably small enough that a simple interpreter would be plenty fast.

Implementing a "simple virtual machine" for a particular task is greenspunning[1]. Introducing a JIT library adds work, complexity, and debugging and portability issues. LLVM compiles to native code much more slowly than SBCL does, as the Clasp[2] developers noticed. And in the end, any of these would be at most "competitive" in speed with the simple Common Lisp implementation (which is even portable across different implementations).

[1] https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

[2] https://github.com/clasp-developers/clasp