What does HackerNews think of femtolisp?

a lightweight, robust, scheme-like lisp implementation

Language: Scheme

> In short, Julia is very similar to Common Lisp, but brings a lot of extra niceties to the table

This is probably because Jeff Bezanson, the creator of Julia, created a Lisp prior to Julia, which I think still exists inside Julia in some fashion

https://github.com/JeffBezanson/femtolisp

Well, let's flip this around: do you think you could write a performant minimal Python in a weekend? Scheme is a very simple and elegant idea. Its power derives from the fact that smart people went to considerable pains to distill computation down to a limited set of things. "Complete" (i.e. RnRS) Schemes build quite a lot of themselves... in Scheme, from a pretty tiny core. I suspect Jeff Bezanson spent more than a weekend writing femtolisp, but that isn't really important. He's one guy who wrote a pretty darned performant lisp that does useful computation as a passion project. Check out his readme; it's fascinating: https://github.com/JeffBezanson/femtolisp

You simply can't say these things about Python (and I generally like Python!). It's truer for PyPy, but PyPy is pretty big and complex itself. Take a look at the source for the scheme or scheme-derived language of your choice sometime. I can't claim to be an expert in any of what's going on in there, but I think you'll be surprised how far down those parens go.

The claim I was responding to asserted that lisps and smalltalks can only be fast because of complex JIT compiling. That is true-ish in practice for Smalltalk and certainly for modern JavaScript... but it simply isn't true for every lisp. JIT-ed lisps can certainly be extremely fast, but JIT compilation is not the only path to a performant lisp. In these benchmarks you'll see a diversity of approaches even among the top performers: https://ecraven.github.io/r7rs-benchmarks/

Given how many performant implementations of Scheme there are, I just don't think you can claim it's all down to complex implementations by well-resourced groups. To me, the logical conclusion is that Scheme (and most other lisps) is intrinsically pretty optimizable compared to Python. If we look at Common Lisp, there are also multiple performant implementations, some approximately competitive with Java, which has had enormous resources poured into making it performant.

  $ julia --lisp
  ;  _
  ; |_ _ _ |_ _ |  . _ _
  ; | (-||||_(_)|__|_)|_)
  ;-------------------|----------------------------------------------------------
  
  > (+ 1 2)
  3
  
  > (exit)

https://github.com/JeffBezanson/femtolisp

A fun Julia easter egg I recently discovered.

Running 'julia --lisp' launches a femtolisp (https://github.com/JeffBezanson/femtolisp) interpreter.

Reminds me of the femtolisp README :)

Almost everybody has their own lisp implementation. Some programmers' dogs and cats probably have their own lisp implementations as well. This is great, but too often I see people omit some of the obscure but critical features that make lisp uniquely wonderful. These include read macros like #. and backreferences, gensyms, and properly escaped symbol names. If you're going to waste everybody's time with yet another lisp, at least do it right damnit.

https://github.com/JeffBezanson/femtolisp
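For reference, the features the README calls out look roughly like this at a femtolisp-style REPL. This is a sketch using Common Lisp-style reader syntax that femtolisp's README claims to support; the exact printed results are illustrative, not verified output:

```lisp
;; #. evaluates the following form at read time, so the reader
;; itself produces (a 3 c) here:
'(a #.(+ 1 2) c)

;; #n= / #n# backreferences label and reuse structure, which lets
;; shared or circular data be written and read back:
'#0=(1 2 . #0#)        ; a circular list

;; gensym returns a fresh, uninterned symbol on each call, so macro
;; expansions can't accidentally capture user variables:
(gensym)

;; |...| escaping allows arbitrary characters in symbol names:
'|hello world|         ; one symbol whose name contains a space
```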

If PicoLisp is too heavy for you, there's always femtolisp[1]. Anybody up for a challenge? Claim attolisp. ;)

[1] https://github.com/JeffBezanson/femtolisp

Jeff (creator of Julia) created femtolisp [0] which is used in the Julia parser.

[0]: https://github.com/JeffBezanson/femtolisp

The “wackiest” Lisp embedding I have seen is of femtolisp [1] into Julia [2] to drive the language parser [3]. In hindsight, this is a fairly sensible design decision, but it did blow my mind when I first spotted it.

    # julia --lisp
    ;  _
    ; |_ _ _ |_ _ |  . _ _
    ; | (-||||_(_)|__|_)|_)
    ;-------------------|----------------------------------------------------------

    > (apply cons '(1 2))
    (1 . 2)

[1]: https://github.com/JeffBezanson/femtolisp

[2]: https://julialang.org/

[3]: https://github.com/JuliaLang/julia/blob/d76a30a7178dd1e9b744...

There are four guys credited as the creators of Julia, but the guy who had the initial idea, Jeff Bezanson, is definitely a lisp enthusiast. He created his own dialect of Scheme (like every lisp enthusiast does), femtolisp: https://github.com/JeffBezanson/femtolisp

This lisp is still used as part of the Julia parser.

Julia isn't really a lisp and it isn't trying to be, though it does have some things in common:

* AST macros

* everything is an expression and has a value

* while infix notation is supported for operators, they are just normal functions.

Still, the language is very much array-oriented and, as far as I know, there isn't even a linked list implementation in the standard library. It's a side effect of trying to be fast (pun strongly intended).
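The three bullet points above can be seen in a short Julia session. This is a minimal sketch of standard Julia features, not anything femtolisp-specific:

```julia
# Operators are ordinary functions: `+` can be bound to a name and
# called with prefix syntax.
f = +
f(1, 2)        # 3
+(1, 2, 3)     # 6

# Everything is an expression: `if` yields a value.
x = if iseven(4) "even" else "odd" end   # "even"

# AST macros receive unevaluated expressions as data; this hypothetical
# macro just hands the expression back, quoted.
macro ast_of(e)
    QuoteNode(e)
end
@ast_of 1 + 2  # :(1 + 2)
```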

If you have some space to spare, consider femtolisp: https://github.com/JeffBezanson/femtolisp

I've been hacking on femtolisp, which is actually what the Julia parser is written in:

https://github.com/JeffBezanson/femtolisp

femtolisp is more compact than Picrin, for better or worse. It's also pretty highly optimized and relies on some (reasonable) hardware assumptions (not just ANSI C, which is fairly restrictive for interpreters).

The bytecode compiler is written in the language itself. The VM is not reentrant. More details on the GitHub page.

That's a fair point. It's never easy, and I had that rough experience too. I mean, I wasn't a real expert on these things, so mine was a reimplementation of existing work, but it was still hard.

Curious: what were your language's interesting properties?

"I guess I am reluctant to realize how much easier it would have been with a lisp."

It might help if you see a modern example. The recent language that has impressed me most with its features is Julia:

http://julialang.org/

Wondering aloud about how they pulled all that off, especially macros, led someone here to tell me it's actually femtolisp internally:

https://github.com/JeffBezanson/femtolisp

So they appear to have built a simple LISP, then used it to incrementally build a compiler for a complex language. They just represent the syntax internally in a LISP form and work with it from there. I don't know much more than that, but it shows the power of the concept.

A better demo is the one below, as it goes step-by-step in stages. One commenter (Orion63) pointed out that the author was reusing the proven cheat: "build a LISP, do it all in LISP, profit." Haha. I've considered duplicating that work with different language options.

https://news.ycombinator.com/item?id=9699065

It was a turn of phrase, but really I just meant that in my experience it's healthy to have something to aspire to – that's why we always compare to fully optimized C and Fortran. It's easy to fall into an "interpreter bubble" and forget that there's a whole other realm of performance out there. And of course, interpreters can be quite fast – CPython is no slouch and Wren is giving it a run for its money, so kudos. Femtolisp [1] (which we use for Julia's parser) is a great example of a small and simple, yet fast and featureful interpreter. I really want Jeff to write a little book about it, walking the reader through the implementation.

[1] https://github.com/JeffBezanson/femtolisp

https://github.com/JeffBezanson/femtolisp

A Lisp implementation from one of the julialang devs.