Looks interesting. Based on prior experience, here are some concerns people will bring up with this approach:

1. Spectre. You may have to assume the plugin code can read anything in the address space, including secrets like passwords, keys, file contents, etc. If the plugin can't communicate with the outside world, this may not be a problem.

2. When you say WASM has a "sandboxing architecture", this is only partially true. It's easy to define a simple language that doesn't provide any useful IO APIs and then claim it's sandboxed - that's practically the default state of any new interpreted language. The problems start when you begin offering actual features to the sandboxed code. The app has to expose APIs to the code running in the WASM engine, and those APIs can and will contain holes through which sandboxed code can escape. If you look at the history of sandboxing, most sandbox escapes were due to bugs in the higher-privileged code that was exposed to sandboxed code to make it useful - and WASM can't help devs with that.

3. WASM is mostly meant for low level languages (C, C++, Rust etc). Not many devs want to write plugins in such low level languages these days; they will often want to use high-level languages. Even game engines are like that: "plugin" code is often written in C#, Lua, Blueprint, etc. This is especially true because WASM doesn't try to solve the API typing/object interop problem (as far as I know?), which is why your example APIs are all C ABI style - the world moved on from those a long time ago. You'll probably end up needing something like COM, because otherwise the APIs the host app can expose will be so limited, and require so much boilerplate, that the plugin extension points will end up being fairly trivial.
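
To make point 3 concrete: core WASM only passes i32/i64/f32/f64 across the boundary, so anything richer - a string, a struct, an object - has to be flattened into integers plus offsets into the module's linear memory. Here's a rough host-side sketch in plain C; the names (`linear_memory`, `host_read_string`) are invented for illustration, not any real runtime's API:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the guest module's linear memory, which the host
   can read as a flat byte array. */
static uint8_t linear_memory[65536];

/* The guest can't pass a char* the host understands, so a "string"
   crosses the boundary as an (offset, length) pair. The host must
   bounds-check against linear memory before copying it out. */
static int host_read_string(uint32_t off, uint32_t len,
                            char *dst, size_t cap) {
    if (len + 1 > cap || (uint64_t)off + len > sizeof linear_memory)
        return -1; /* out of bounds, or too big for the destination */
    memcpy(dst, linear_memory + off, len);
    dst[len] = '\0';
    return 0;
}
```

Every host API ends up looking like this - pairs of integers plus copy-in/copy-out glue - which is exactly the boilerplate problem above.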
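
And going back to point 2, here's a minimal sketch of the kind of hole host-exposed APIs tend to have. All names (`PLUGIN_ROOT`, `plugin_path_naive`, etc.) are made up; this is the shape of the bug, not a real API: a file API meant to confine plugins to one directory, where the naive version forgets to reject path traversal.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define PLUGIN_ROOT "/app/plugin-data/"

/* Naive host API: builds the real path by concatenation, so a
   sandboxed caller can pass "../../etc/passwd" and escape. */
static int plugin_path_naive(const char *untrusted, char *out, size_t n) {
    snprintf(out, n, "%s%s", PLUGIN_ROOT, untrusted);
    return 0; /* never fails: this is the hole */
}

/* Hardened version: reject any path containing "..". (Real code
   would canonicalize the path instead; this is just the shape
   of the fix.) */
static int plugin_path_checked(const char *untrusted, char *out, size_t n) {
    if (strstr(untrusted, "..") != NULL)
        return -1;
    snprintf(out, n, "%s%s", PLUGIN_ROOT, untrusted);
    return 0;
}
```

The WASM engine itself is never at fault here; the escape lives entirely in the host API, which is why sandboxing the interpreter alone isn't enough.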

> 3. WASM is mostly meant for low level languages

I've been saying it for years, but I think finally 2023 has the chance of being the year in which Wasm GC ships and managed languages start targeting Wasm more widely. We've made a lot of progress with the design, and V8 has a basically complete implementation. Google is targeting some internal apps to Wasm GC and seeing perf improvements over compile-to-JS, so I think this will likely be a success.

I think there’s somewhat of a disconnect between the original idea of WASM (in the browser) versus headless use. In the browser, folks get JavaScript for free, which collects its own garbage. WASM is there to supplement a higher-level language for performance-intensive tasks, and as such, “lower level” languages make more sense for those code paths.

I’d like to point out also that providing users a million languages to write plugins in for a product could create a lot of bloat. Imagine an image editor with 5 plugins, each written in its own language and running in its own WASM sandbox: Go, C#, AssemblyScript, Ruby, Python. That’s 5 runtimes, each running its own garbage collection logic.

I can see the value for compute hosts because the very nature of the provided service is allowing users to write sandboxed apps. But I think for stand-alone applications it’s best to support one or two simple targets, whether sandboxed or otherwise.

There are languages (Lua for example) optimized for this already.

I suppose the benefit is that each application which uses the WASM backend can decide on their “official” language and provide a decent built-in IDE experience.

> That’s 5 runtimes, each running its own garbage collection logic.

If the WASM runtime provides a GC, then all of those languages can share a GC.

I don't think WASM should/would unify the GC across memory models, that could be extremely problematic.

The gist of the idea is that polyglot runtimes can leverage libraries across many languages. The fastest code is the code that was already built (that you didn't need to write).

It's unlikely applications would actually pull in libraries from 5 different runtimes (they could, but shouldn't), and if they use Rust libraries, there definitely wouldn't be any GC involved anyway.

The benefit of this tech is it allows a new language to leverage historical codebases quickly without needing to re-invent every common utility library.

This will inevitably speed adoption of newer languages, zero-code tools, etc., and is the epitome of Proebsting's law. It could even accelerate Proebsting's law (which doubles performance roughly every 18 years) toward Moore's law - I'm not specifically saying that will happen, only that it could.

> I don't think WASM should/would unify the GC across memory models

WASM already has a GC proposal[0] which is already at the "Implementation stage"[1], so it looks like this IS going to happen, although it's uncertain whether language runtimes like Go will actually make use of the feature.

[0]: https://github.com/WebAssembly/gc/blob/main/proposals/gc/Ove...

[1]: https://github.com/WebAssembly/proposals

A glance at the overview and spec suggests that WASM will provide some primitive data types, and any GC language can build its implementation on top of them. As I understand it, it's heavily based on Reference Types[3], which allow acting on host-provided types and are already considered part of the spec[4]. It doesn't remove the need for the 5 different runtimes to have their own GC logic, but it lowers the bulk the runtimes need to carry around, and offloads some of it onto the WASM runtime instead.
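
The host-provided-types part works roughly like an opaque handle table: the guest never holds a raw host pointer, only a small integer, and the host resolves it and manages the real object's lifetime - which is approximately what externref gives you. A simulation in plain C, with invented names (`handle_alloc`, `handle_resolve`), not any real runtime's API:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_HANDLES 64

/* Host-side table mapping small integer handles to host objects.
   Guest code only ever sees the indices, never the pointers. */
static void *handle_table[MAX_HANDLES];

/* Register a host object; return its handle, or -1 if full. */
static int handle_alloc(void *obj) {
    for (int i = 0; i < MAX_HANDLES; i++)
        if (handle_table[i] == NULL) {
            handle_table[i] = obj;
            return i;
        }
    return -1;
}

/* Resolve a guest-supplied handle, rejecting out-of-range values. */
static void *handle_resolve(int h) {
    if (h < 0 || h >= MAX_HANDLES)
        return NULL;
    return handle_table[h];
}

/* Invalidate a handle; later resolves of it fail safely. */
static void handle_drop(int h) {
    if (h >= 0 && h < MAX_HANDLES)
        handle_table[h] = NULL;
}
```

Because the guest can only name objects through validated indices, a forged or stale handle resolves to nothing instead of to host memory.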

[3]: https://github.com/WebAssembly/reference-types/blob/master/p...

[4]: https://github.com/WebAssembly/proposals/blob/main/finished-...