What does HackerNews think of ChezScheme?

Chez Scheme

Language: Scheme

The main line of Chez Scheme is here:

https://github.com/cisco/ChezScheme

There is more work to be done before release 10.0.

What is yakihonne? Another blogging platform? Rather confusing to use.

Anyway, would have been nice for the article to link to Chez Scheme's project page, which seems to be this one:

https://github.com/cisco/ChezScheme

Also, it's not clear why folks should use Chez. The article barely covered the why, or what successful apps have been written in Chez.

I thought Cisco was invested in Scheme - but perhaps that's at a different level in the stack?

https://github.com/cisco/ChezScheme

Yeah to be honest one of the things that made me skeptical of the code compression / productivity claim is looking at the implementations of Chez Scheme and Racket (after also looking at 20+ compilers / interpreters, and working on a language for a few years).

I'm pointing to them as very long-lived and valuable codebases written in Lisp dialects. Chez Scheme is a 35 year old codebase and Racket is also decades old.

So I'm not saying there's anything wrong with them, but I am saying that it doesn't appear that they're 10x easier to understand or modify than LLVM or CPython (Chez being a compiler and Racket being an interpreter as far as I remember). Or that you can get a 10x better result.

Basically for the claim to be true, why can't you write something like Racket in Racket 10x faster? Like 3 years instead of 30 years. And why doesn't it do way better things than CPython or Ruby? Those claims might be "slightly" true depending on who you are, but they're not devastatingly true. There's been more than enough time to evaluate the claims empirically.

In other words they would have already proven themselves in the market if that were the case. You would have extraordinarily productive teams using these languages -- along the lines of what PG hypothesized 15+ years ago.

http://www.paulgraham.com/avg.html

In fact, the thing I found interesting is that at the core of Racket is a big pile of C code, just like CPython. A year or two ago I watched a talk about them self-hosting more, and moving to Chez Scheme's backend, but I don't recall the details now.

https://github.com/cisco/ChezScheme

https://en.wikipedia.org/wiki/Chez_Scheme

https://github.com/racket/racket/tree/master/racket/src/rack...

(FWIW I also looked at and hacked on femtolisp around the same time, since I was impressed by how Julia uses it.)

Correction: it looks like Racket has a JIT too, written in C. Still, the same point applies: it's not magic, and it looks a lot like similar codebases in C. Chez is more self-hosted AFAIR, but it's also hundreds of thousands of lines of code.

> As a superset of the language described in the Revised6 Report on the Algorithmic Language Scheme (R6RS), Chez Scheme supports all standard features of Scheme [...]

https://github.com/cisco/ChezScheme

Gambit-C [0]. It's an R5RS Scheme with near-C performance. It compiles to C, so embedding it is fairly easy too. It compiles to static executables, which makes distribution easy.

Chez Scheme [1]. It's an R6RS Scheme, so bigger, but Chez has much better embedding support and is backed by Cisco rather than a single dev. Chez doesn't make standalone executables, though, because Chez is jitted. It may be the fastest Scheme. It also includes a compiler, a profiler, a great debugger, live memory introspection, and an enhanced REPL [2] that can dump out its definitions and any comments into a lovely Scheme file.

[0] https://github.com/gambit/gambit

[1] https://github.com/cisco/ChezScheme

[2] https://cisco.github.io/ChezScheme/csug9.5/use.html#./use:h2

Chez Scheme [0] is written using the nanopass framework, and it's regarded as one of the fastest Scheme compilers in existence [1]. Before it was rewritten to use the nanopass system, Chez's compiler was known for its performance in terms of lines of code compiled per second; the rewrite slowed it down a bit, but the quality and performance of the generated machine code improved. Andy Keep and Kent Dybvig wrote a paper about the project [2]. I haven't browsed the Chez source myself, but it would be a good way to answer your question.

[0] https://github.com/cisco/ChezScheme

[1] http://ecraven.github.io/r7rs-benchmarks/benchmark.html

[2] https://www.cs.indiana.edu/~dyb/pubs/commercial-nanopass.pdf

Note that Chez Scheme is now open source. This is a great gift to all compiler hackers, and I sincerely thank Cisco for the release.

https://github.com/cisco/ChezScheme

Chez Scheme was recently released as free software. Not sure how it compares to SBCL, but it blows all the other Scheme implementations out of the water: https://www.nexoid.at/tmp/scheme-benchmark-r7rs.html

https://github.com/cisco/ChezScheme

All the arguments against the CL-style macros are stemming from the fact that it's far too easy to get into a total mess with them, simply because of their infinite flexibility. Yet, if you follow some very simple rules, you'll get the opposite.

Firstly, macros must be simple and must essentially correspond to compiler passes.

E.g., if you're implementing an embedded DSL for regular expressions, one macro expansion pass should lower your DSL down to a syntax-expanded regular expression, the second macro would lower it into an NFA, another one would construct an optimised DFA, and then the next one (or more) would generate the actual automaton code in Lisp out of the DFA.

The best possible case is when your macro pass can be expressed as a set of simple term rewriting rules. And most of the transforms you'd find in DSL compilers can actually be represented as such.
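To make the term-rewriting idea concrete, here is a minimal sketch of one such pass written as a plain function over s-expressions (the names `lower-sugar`, `plus`, `opt`, and `empty` are invented for illustration, not from any real framework). It lowers regex-DSL sugar by two rewrite rules: `(plus e)` becomes `(seq e (star e))` and `(opt e)` becomes `(alt e empty)`; everything else passes through unchanged. In the macro-based style described above, each pass like this would run at macro-expansion time.

```scheme
;; One compiler pass as a set of term rewriting rules
;; (illustrative sketch; all DSL operator names are assumptions):
;;   (plus e) => (seq e (star e))
;;   (opt e)  => (alt e empty)
(define (lower-sugar form)
  (cond
    ((pair? form)
     (let ((form (map lower-sugar form)))  ; rewrite subterms first
       (case (car form)
         ((plus) (let ((e (cadr form))) `(seq ,e (star ,e))))
         ((opt)  `(alt ,(cadr form) empty))
         (else form))))
    (else form)))

;; (lower-sugar '(seq (plus a) (opt b)))
;; => (seq (seq a (star a)) (alt b empty))
```

Later passes would take this desugared form down to an NFA, then a DFA, then generated code, each pass just as small and checkable.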

Of course, there is also an alternative style, which may be preferable if you have debugging tools for designing your compiler passes at least equivalent to the Lisp macro expansion debugging. You can simply write your entire DSL compiler as a single function, and then wrap it into a macro, as `(defmacro mydsl (&rest args) (mydsl-compiler args))`.

This way, the same compiler infrastructure can be reused for an interpreted or partially interpreted version of your DSL, if you need it. Still, all the same rules apply to how the passes are implemented in that compiler function.
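A toy version of that wrapper style, sketched in Chez/R6RS Scheme (the names `mydsl`, `mydsl-compile`, and `mydsl-eval` are invented for illustration): the entire compiler is an ordinary function from s-expressions to Scheme code, the macro is a thin shim over it, and the same function can back a run-time interpreter.

```scheme
;; Toy one-pass compiler for a tiny arithmetic DSL
;; (illustrative sketch; not a real framework).
(define (mydsl-compile form)
  (cond
    ((number? form) form)
    ((and (pair? form) (eq? (car form) 'add))
     `(+ ,@(map mydsl-compile (cdr form))))
    ((and (pair? form) (eq? (car form) 'mul))
     `(* ,@(map mydsl-compile (cdr form))))
    (else (error 'mydsl-compile "unknown form" form))))

;; The macro is just a wrapper around the compiler function:
(define-syntax mydsl
  (lambda (stx)
    (syntax-case stx ()
      ((k form)
       (datum->syntax #'k (mydsl-compile (syntax->datum #'form)))))))

;; (mydsl (add 1 (mul 2 3)))  => 7

;; The same compiler can also drive an interpreted version
;; (requires the (rnrs eval) library for eval/environment):
(define (mydsl-eval form)
  (eval (mydsl-compile form) (environment '(rnrs))))
```

Because all the real work lives in `mydsl-compile`, the compiled and interpreted paths share every pass, which is the reuse point made above.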

Another very useful trick is to make your DSLs composable, which is easy if you split your compilation pipeline into separate macro expansion passes. Multiple DSLs may easily share features of their internal IRs or even front-end languages, so any new DSL would simply cherry-pick features from a number of existing DSLs and wrap them into a simple front-end. This degree of composability is simply impossible with the function-based interpreted eDSLs.

I can recommend taking a look at the Nanopass [1] framework, which is built around this very ideology, or at my own DSL construction framework [2].

[1] http://andykeep.com/pubs/np-preprint.pdf

And some examples:

https://github.com/cisco/ChezScheme

https://github.com/eholk/harlan

[2] https://github.com/combinatorylogic/mbase