Controversial opinion, but I believe these grammars and complex parsers will eventually be found to have been a huge mistake for computer science. A mistake made for the sole reason of making programming languages resemble natural languages, even though they are not meant to be read fluently.

Everyone who has experienced the magic of Lisp understands how beneficial it is to have the textual representation as close as possible to the parsed abstract syntax tree. One can create new languages fit for the purpose of a given class of problems and thus reduce the size of a codebase 10- to 100-fold (or even a million-fold, in the case of modern multi-million-LOC software projects).
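To make the "text is the tree" point concrete, here is a minimal sketch (mine, not from anyone in this thread) of a toy s-expression reader in Python. The names `tokenize` and `read` are my own; the point is only that the nesting of parentheses *is* the AST, so no grammar or complex parser is needed:

```python
def tokenize(src):
    # Split "(plus 1 2)" into ["(", "plus", "1", "2", ")"]
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    # Recursively build nested lists -- the nesting itself is the AST
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(read(tokens))
        tokens.pop(0)  # discard the closing ")"
        return node
    try:
        return int(token)
    except ValueError:
        return token  # a symbol

ast = read(tokenize("(plus 1 (times 2 3))"))
print(ast)  # -> ['plus', 1, ['times', 2, 3]]
```

A real Lisp reader handles strings, quoting, and so on, but it is not much bigger than this, which is the whole appeal.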

The biggest mistake Lisp ever made was its unintuitive parenthesized prefix notation, which could be thrown away by making all operations strictly infix.

I.e., instead of using combinations of the following:

  Lisp: (plus 1 2)
  methods: 1.plus(2)
  Algol, C: plus(1, 2)
  Haskell: plus 1 2
One can use the simplest form of all and achieve an even simpler Lisp:

  1 + 2
but use it consistently for every function call and operator in the language (even for functions of multiple arguments).
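One way such a language could work is a hypothetical sketch like the following: every name is a binary operator, there is no precedence, and evaluation simply folds left to right. The `OPS` table and `eval_infix` are my own illustration, not an existing language:

```python
# Every name is a binary function; no precedence, no parentheses --
# evaluation just folds strictly left to right.
OPS = {
    "plus":  lambda a, b: a + b,
    "times": lambda a, b: a * b,
    "max":   lambda a, b: max(a, b),
}

def eval_infix(src):
    tokens = src.split()
    acc = int(tokens[0])
    # Consume (operator, operand) pairs strictly left to right
    for op, operand in zip(tokens[1::2], tokens[2::2]):
        acc = OPS[op](acc, int(operand))
    return acc

# "1 plus 2 times 3" means (1 + 2) * 3, not 1 + (2 * 3):
print(eval_infix("1 plus 2 times 3"))  # -> 9
```

Functions of more than two arguments would need some further convention (e.g., an operator returning a partially applied function, or Smalltalk-style keyword messages); the sketch only shows the binary, left-to-right core of the idea.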

APL came close, but for some reason decided to ignore the important Phoenician discoveries: (1) reading from left to right, and (2) using a phonetic alphabet instead of hieroglyphs.

> One can create new languages fit for the purpose of a given class of problems and thus reducing size of codebase 10 to 100 fold (or even million times in case of modern multi million LOC software projects).

I find that a bit hard to believe. Take a 100-million-line codebase: I seriously doubt you can express it in 100 lines in any language. Lisp may be good, but it isn't that good.

Yeah, to be honest, one of the things that made me skeptical of the code-compression / productivity claim is looking at the implementations of Chez Scheme and Racket (after also looking at 20+ compilers and interpreters, and working on a language for a few years).

I'm pointing to them as very long-lived and valuable codebases written in Lisp dialects. Chez Scheme is a 35-year-old codebase, and Racket is also decades old.

So I'm not saying there's anything wrong with them, but I am saying that they don't appear to be 10x easier to understand or modify than LLVM or CPython (Chez being a compiler and Racket an interpreter, as far as I remember), or to produce a 10x better result.

Basically, for the claim to be true, why can't you write something like Racket in Racket 10x faster? Like 3 years instead of 30. And why doesn't it do far better things than CPython or Ruby? Those claims might be "slightly" true depending on who you are, but they're not devastatingly true. There's been more than enough time to evaluate them empirically.

In other words, they would already have proven themselves in the market if that were the case. You would have extraordinarily productive teams using these languages, along the lines of what PG hypothesized 15+ years ago.

http://www.paulgraham.com/avg.html

In fact, the thing I found interesting is that at the core of Racket is a big pile of C code, just like CPython. A year or two ago I watched a talk about them self-hosting more and moving to Chez Scheme's backend, but I don't recall the details now.

https://github.com/cisco/ChezScheme

https://en.wikipedia.org/wiki/Chez_Scheme

https://github.com/racket/racket/tree/master/racket/src/rack...

(FWIW I also looked at and hacked on femtolisp around the same time, since I was impressed by how Julia uses it.)

Correction: it looks like Racket has a JIT too, written in C. Still, the same point applies: it's not magic, and it looks a lot like similar codebases in C. Chez is more self-hosted AFAIR, but it's also hundreds of thousands of lines of code.