During Perl’s hegemony as The Glue Language, I feel like the folk wisdom was:

“Performance is a virtue; if Perl ceases to be good enough, or you need to write ‘serious’ software, rewrite in C.”

And during Python’s ascension, the common narrative shifted very slightly:

“Performance is a virtue, but developer productivity is a virtue too. Plus, you can drop to C to write the performance-critical portions.”

Then for our brief all-consuming affair with Ruby, the wisdom shifted more radically:

“Developer productivity is paramount. Any language that delivers computational performance is suspect from a developer productivity standpoint.”

But looking at “high-level” languages (i.e. languages that provide developer-productivity-enhancing abstractions), we can rewind the clock to look at language families that evolved during more resource-constrained times.

Those languages, the lisps, schemes, smalltalks, etc., are now really, really fast compared to Python, and rarely require developers to shift to alternative paradigms (e.g. dropping to C) just to deliver acceptable performance.

Perl and Python exploded right at the time when Lisp/Scheme hadn’t quite shaken the myth that they were slow, with Perl/Python achieving acceptable performance by dropping to C most of the time.

Now the adoption moat is the wealth of libraries that exist for Python—and it’s a hell of a big moat. If I were a billionaire, I’d hire a team of software developers to systematically review libraries that were exemplars in various languages, and write / improve idiomatic, performant, stylistically consistent versions in something modern like Racket. I’d like to imagine that someone would use those things :-)

> Those languages, the lisps, schemes, smalltalks, etc.

The main reason those languages got fast despite being highly dynamic is because of very complex JIT VM implementations. (See also: JavaScript.)

The cost of that is that a complex VM is much less hackable and makes it harder to evolve the language. (See also: JavaScript.)

Python and Ruby have, I think, reasonably chosen to have slower simpler implementations so that they are able to nimbly respond to user needs and evolve the language without needing massive funding from giant corporations in order to support an implementation. (See also: JavaScript.)

There are other effects at play, too, of course.

Once your implementation's strategy for speed is "drop to C and use FFI", then it gets much harder to optimize the core language with stuff like a JIT and inlining because the FFI system itself gets in the way. Not having an FFI for JS on the web essentially forced JavaScript users to push to make the core language itself faster.
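To make that concrete, here's a rough sketch of the "drop to C" pattern via Python's ctypes (assuming a Unix-like system where libm can be located; illustrative, not portable):

```python
# A minimal sketch of the "drop to C via FFI" pattern, using ctypes.
# Assumes a Unix-like system where the C math library can be found.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

# Each call crosses the Python/C boundary: arguments are unboxed, the result
# is re-boxed, and the runtime can't trace, inline, or specialize anything on
# the far side of the call.
print(libm.cos(0.0))  # 1.0
```

Once the hot path looks like that last call, there's much less that a JIT for the host language is allowed to see or specialize, and changing the object representation risks breaking everything that lives on the C side of the boundary.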

Spending a weekend or two writing a Scheme that beats Python in performance has been a pastime for computer science students for at least a couple decades now. I'm not sure that I believe that a performant Scheme implementation has more complexity than e.g. PyPy. In fact, I'd wager the converse.

You're either exaggerating or the computer science students you're familiar with are wizards. I've never known a student who could write a Scheme implementation from scratch, in one weekend, that is both complete and beats Python on performance.

If it's an exaggeration, it's not much of one.

Two parts to your argument:

- Writing a Scheme implementation quickly: Google "Write a Scheme in 48 hours" and "Scheme from scratch." 48 hours to a functioning Scheme implementation seems to be a feat replicated in multiple programming languages.

- Performance: I haven't benchmarked every hobby Scheme, but given the proliferation of Scheme implementations that, despite limited developer resources, beat (pure) Python with its massive pool of developers (CPython, PyPy), I still don't buy the idea that optimizing Scheme is a harder task than optimizing Python. Again, I'd strongly suggest that optimizing Scheme is a much easier task than optimizing Python, simply by virtue of how often the feat has been accomplished.

If you can give me an implementation that implements almost all of R5RS, in 48 hours, beating Python in performance, and all by a single developer, I’ll tip my hat to that guy or gal. But I can’t imagine it’s too commonly done.

Nobody said you can implement a full Scheme implementation in 48 hours or two weeks. That's very much beside the point about how poor CPython performance is.

> Nobody said you can implement a full Scheme implementation in 48 hours or two weeks.

Fair enough, you're right. But if we're only talking about incomplete Scheme implementations, it's not a very interesting claim. As I pointed out in another comment, even I could write a fast Scheme implementation in 48 hours if I kept my scope very limited. That doesn't say much about Scheme performance overall or how it relates to Python.

Well, let's flip this around: do you think you could write a performant minimal Python in a weekend? Scheme is a very simple and elegant idea. Its power derives from the fact that smart people went to considerable pains to distill computation to a limited set of things. "Complete" (i.e. rXrs) Schemes build quite a lot of themselves... in Scheme, from a pretty tiny core. I suspect Jeff Bezanson spent more than a weekend writing femtolisp, but that isn't really important. He's one guy who wrote a pretty darned performant lisp that does useful computation as a passion project. Check out his readme; it's fascinating: https://github.com/JeffBezanson/femtolisp

You simply can't say these things about Python (and I generally like Python!). It's truer for PyPy, but PyPy is pretty big and complex itself. Take a look at the source for the scheme or scheme-derived language of your choice sometime. I can't claim to be an expert in any of what's going on in there, but I think you'll be surprised how far down those parens go.
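For a sense of how tiny that core is, here's a toy, Scheme-flavored evaluator sketched in Python, loosely in the spirit of Norvig's lis.py; the names are made up, the scope is deliberately minimal, and it is nowhere near R5RS:

```python
# A toy Scheme-flavored evaluator: just enough to show how small the core is.
# Deliberately minimal (no macros, strings, tail calls, or error handling).
import math
import operator as op

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    for cast in (int, float):
        try:
            return cast(tok)
        except ValueError:
            pass
    return tok  # a symbol

GLOBAL_ENV = {
    "+": op.add, "-": op.sub, "*": op.mul, "/": op.truediv,
    "<": op.lt, ">": op.gt, "=": op.eq, "sqrt": math.sqrt,
}

def evaluate(x, env=GLOBAL_ENV):
    if isinstance(x, str):                 # symbol: look it up
        return env[x]
    if not isinstance(x, list):            # number: self-evaluating
        return x
    head = x[0]
    if head == "if":                       # (if test conseq alt)
        _, test, conseq, alt = x
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if head == "define":                   # (define name expr)
        _, name, expr = x
        env[name] = evaluate(expr, env)
        return None
    if head == "lambda":                   # (lambda (params ...) body)
        _, params, body = x
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    proc = evaluate(head, env)             # application
    return proc(*[evaluate(arg, env) for arg in x[1:]])

evaluate(parse(tokenize("(define square (lambda (x) (* x x)))")))
print(evaluate(parse(tokenize("(square 7)"))))  # 49
```

The point isn't speed (a tree-walker hosted on CPython is obviously slow); it's that a handful of special forms plus closures plus an environment is the whole core, and most of the rest of a "complete" Scheme can be built on top of that.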

The claim I was responding to asserted that lisps and smalltalks can only be fast because of complex JIT compiling. That is true-ish in practice for Smalltalk and certainly for modern JavaScript... but it simply isn't true for every lisp. Certainly JIT-ed lisps can be extremely fast, but it's not the only path to a performant lisp. In these benchmarks you'll see a diversity of approaches even among the top performers: https://ecraven.github.io/r7rs-benchmarks/

Given how many performant implementations of Scheme there are, I just don't think you can claim it's because of complex implementations by well-resourced groups. To me, the logical conclusion is that Scheme (and other lisps, for the most part) is intrinsically pretty optimizable compared to Python. If we look at Common Lisp, there are also multiple performant implementations, some approximately competitive with Java, which has had enormous resources poured into making it performant.