I agree with some of the other comments here: while this is a very impressive hack (and this is Hacker News!), the real-world value seems dubious.
It looks like a very clever way of packaging an x86 fat binary for multiple platforms, without actually duplicating the code. To support ARM I assume it’ll need to be an actual fat binary, with both x86 and ARM code. At that point, unless I actually test the code myself on both architectures, how can I be confident it’s going to work properly?
If you’re using very simple C constructs and not doing anything fancy, it should work, but it’s not clear to me that this approach is preferable to e.g. a Python script. If you’re doing fancy stuff, it’s a bit more chancy, as C isn’t memory-safe and has tons of undefined behavior and platform-specific weirdness.
Java is “run anywhere” because they’ve specified the JVM in massive detail and tried to ensure it actually works the same on all platforms. I don’t see how you can have that same confidence if you’re running machine code everywhere.
I guess I just don’t see the use case where this is compelling. If I write a handy Unix utility in C, I’ll just keep the source code around, and compile it as needed.
It might be handy if you need to move such a utility quickly from Unix to Windows, if you don’t have any dev tools set up. But I can’t think of a situation when I’ve needed that.
Isn't this exactly why you don't see the use case? You're willing to compile.
As someone working on a cross-platform, cross-language packaging tool (https://github.com/spack/spack), it's very appealing not to have to build for every OS and Linux distro. Currently we build binaries per-OS/per-distro; this would eliminate a couple of dimensions from our combinatorial builds.
We still care a lot about non-x86_64 architectures, so that's still an issue, but the work here is great for distributors of binaries. It seems like it has the potential to replace more cumbersome techniques like manylinux (https://github.com/pypa/manylinux).