Sure, there are things that can be "poison pills" for performance, even in JITs. For example, another reason my fib benchmark beats V8 is that V8 has to continually check whether the fib function was redefined (a deoptimization check). I don't allow that in Vaiven, so I can produce faster code. Dart, for instance, was designed to have fewer of these poison pills.
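For concreteness, here's the same hazard in Python (standing in for any dynamic language; this hand-rolled illustration is mine, not V8's actual mechanism). The name fib is just a mutable global binding, so a JIT that inlines or directly calls the recursive body has to guard on that binding and throw the compiled code away if it changes:

    # "fib" is a mutable global binding, so even the recursive call
    # inside fib's own body re-resolves the name on every call.
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(10))  # 55

    # Perfectly legal at any point; a JIT that inlined or directly
    # called the old body must detect this and deoptimize.
    fib = lambda n: 42

    print(fib(10))  # 42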

Note to language designers: Reduce your language's poison pills. For those which can't be avoided, make them easy to profile!

I would be intrigued to see a modern attempt at a dynamic language like Python or Ruby, but with attention paid from day one to ensuring that only very JIT-able constructs are used, and with an eye towards the language helping users stick to those constructs and notice when they deviate. That's the opposite of the usual path: design a language with "runs fast" somewhere around the third or fourth priority, then try to JIT it after 10-15 years of non-JIT development.

Even LuaJIT, which is to my understanding the closest anyone has come to this, was still retrofitting a JIT onto an existing language.

I'm not convinced we're going to see much more performance out of JS, for instance, which is already about as fast as a dynamic language goes today. But I wonder what the real limits of a "dynamic scripting" language would be from this perspective.

Edit: Per my other comment about indirection being a performance poison pill, here's an example of an idea where a new scripting language might be able to get a lot of performance. Suppose you keep the ability to dynamically create classes and load code and so forth, so that (just as an example, not necessarily a good idea) you can write code that dynamically connects to a database and loads in tables as classes with automatically defined properties, etc. But instead of constantly walking through all the layers of indirection that can implement all this, at every call site and for every call, what if you could make something like a "pledge()" call that says: "OK, this is it, I'm all set up, the dynamism is done, you may now assume that all the type analysis you've done is complete"? Now the JIT can drop all of its paranoia code.

It isn't completely obvious to me how to do this correctly (by implication, if you can "connect to a database" before the pledge()-like call is made, you have to have a pretty complete runtime available just for that), nor is it obvious to me how this would affect the type system, etc. Maybe it's not possible. But it's the type of thing I mean that would be interesting to have examined by someone, who might be able to concretely show why it's not possible, or, who knows, make it work. And that's just one idea, basically off the cuff, of how to design a dynamic language for performance from the start; who knows what a smart person who sat down and thought about this for a couple of weeks before even beginning to code could come up with.
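To make the shape of this concrete, here's a hypothetical sketch in Python syntax. Nothing here exists in any shipping language: pledge() is an invented stand-in for the proposed call, and the runtime behavior it would trigger is described only in comments.

    # Hypothetical lifecycle sketch; pledge() is invented, not a real API.

    # Phase 1: full dynamism. Build classes from metadata discovered at
    # startup -- imagine these column lists came from a live database.
    schema = {"User": ["id", "name"], "Order": ["id", "user_id", "total"]}

    classes = {}
    for table, columns in schema.items():
        classes[table] = type(table, (), {col: None for col in columns})

    # Phase 2: the pledge. After this call the (hypothetical) runtime may
    # assume no new classes, fields, or redefinitions will ever appear,
    # so the JIT can drop its guards and compile against fixed layouts.
    def pledge():
        pass  # stand-in; a real runtime would seal layouts here

    pledge()

    # Phase 3: steady state. From here on, attribute access like this
    # could compile to fixed-offset loads, as in a static language.
    user = classes["User"]()
    user.name = "alice"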

Another one: it isn't obvious to me that scripting languages must all be based on hash tables that the JIT laboriously infers back into structs. It seems feasible to go the other way: let users define things as structs, and if you want hash-like access, it's easy to lay hash-like access (iteration, etc.) over the struct members, plus an "overflow" hash table offered implicitly or explicitly if desired. This would give at least a bit of locality control. Do something similar with arrays for layout control, and suddenly you're starting to cook with gas.
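Python's __slots__ is an existing half-step in this direction, and shows the overflow idea is workable: declare the fixed fields up front and instances get a struct-like fixed layout, and including "__dict__" in the slots gives you exactly the optional overflow hash table described above. A minimal sketch:

    class Point:
        # Fixed, struct-like layout for x and y, plus an explicit
        # overflow dict for ad-hoc extra fields.
        __slots__ = ("x", "y", "__dict__")

        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1, 2)
    p.label = "annotation"   # lands in the overflow __dict__

    # Hash-like access over the struct members is easy to lay down:
    for name in ("x", "y"):
        print(name, getattr(p, name))
    print(p.__dict__)        # {'label': 'annotation'} -- the overflow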

The OMR project [1] gave Ruby and Python a try; Ruby got up and running to a decent point, but Python turned out much harder.

The Ruby community went in a different direction here [2] with RTL MJIT.

[1] https://github.com/eclipse/omr

[2] https://medium.com/square-corner-blog/rubys-new-jit-91a5c864...