> I've watched friends try Go and immediately uninstall the compiler when they see that the resulting no-op demo program is larger than 2 MiB.

That seems a bit extreme.

> Overhead breeds complacency — if your program is already several megabytes in size, what's a few extra bytes wasted? Such thinking leads to atrocities like writing desktop text editors bundled on top of an entire web browser.

I'm still on board.

I imagine the Go people would have a hard time making the compiled program smaller, because they're bundling in an M:N threading system, inter-thread channels, a garbage collector, a stack resizer, and a syscall parking/reactivation system, all of which are by necessity mutually interdependent.
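For what it's worth, almost any program past the empty one trips over that machinery immediately. A minimal sketch (all standard Go, nothing exotic):

```go
package main

import "fmt"

func main() {
	ch := make(chan int)     // heap-allocates the channel, so the allocator and GC are live
	go func() { ch <- 42 }() // one goroutine: the M:N scheduler plus a growable stack
	fmt.Println(<-ch)        // a blocking receive parks and reactivates goroutines via the runtime
}
```

Stripping symbol tables and DWARF info (`go build -ldflags="-s -w"`) shaves off some size, but the runtime itself has to stay.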

What irks me most about people who complain about large binaries for empty programs is that every program past Hello World will actually use all these features.

So why waste the effort putting in a code path to disable these things except for some philosophical ideal?

Code in asm or C if you need a hello world that can fit on a floppy disk; you're not really doing anything substantial anyway...

There's plenty of acceptable middle ground between Go binaries starting at 2 MB and full Electron apps eating your RAM.

> What irks me most about people who complain about large binaries for empty programs is that every program past Hello World will actually use all these features.

I'm not so sure about that. A lot of embedded systems software works well without allocating any memory at runtime at all, particularly microcontroller programs. The first version of Virgil targeted AVR (via codegen to C) and did exactly this: allocate all data structures up front, at compile time, and bake them into the binary. This is still common in realtime systems.

No dynamic memory allocation = no garbage collector, no non-deterministic allocation/deallocation, no write barriers, no out-of-memory possibilities, no fragmentation. For a surprisingly large class of programs, this is a great situation!
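As a rough sketch of the allocate-everything-up-front style (Go only for consistency with the rest of the thread; a microcontroller target would more realistically use C or Virgil, and the names below are invented for illustration):

```go
package main

import "fmt"

const maxNodes = 128 // the entire "heap", sized at compile time

type Node struct {
	Value int
	Next  int // pool index instead of a pointer; -1 means nil
}

var pool [maxNodes]Node // fixed-size global, lives in the binary's data/bss segments
var next int            // first unused slot

// alloc hands out slots from the static pool. Running out is a
// compile-time sizing bug to fix, not a runtime OOM to handle.
func alloc() (int, bool) {
	if next == maxNodes {
		return -1, false
	}
	next++
	return next - 1, true
}

func main() {
	if i, ok := alloc(); ok {
		pool[i] = Node{Value: 7, Next: -1}
		fmt.Println(pool[i].Value)
	}
}
```

(Go itself still links its GC here, which is rather the point of this thread; the sketch is about the allocation discipline, not the toolchain.)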

> So why waste the effort putting in a code path to disable these things except for some philosophical ideal?

Virgil, in particular, does the opposite: it only includes what the program uses. The analysis starts with nothing and then brings in things called and used from main, tracing through those new things, and so on, until the transitive closure is included. Unreachable code isn't even seen, except by the parser and semantic checker, so it doesn't get included. Admittedly, the GC is a little special in that a single allocation in the program will drag in most of it. And most programs accept command-line arguments, which the default runtime boxes up into an array of strings for "main()", so most programs do end up with the GC included. You can turn that off with a flag or by modifying the runtime startup code (e.g. squirrel away the arguments array pointer and count, pass null to "main()", and parse the arguments as raw pointers without allocating). (But yuk.)
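The analysis described is essentially a transitive closure over the call graph, seeded at main. A toy sketch of that shape, with an invented graph:

```go
package main

import "fmt"

// Toy call graph: which functions each function references.
var calls = map[string][]string{
	"main":   {"parseArgs", "run"},
	"run":    {"log"},
	"helper": {"log"}, // never referenced from main
}

// reachable computes the set of functions transitively referenced from
// the entry point, using a simple worklist.
func reachable(entry string) map[string]bool {
	seen := map[string]bool{entry: true}
	work := []string{entry}
	for len(work) > 0 {
		fn := work[len(work)-1]
		work = work[:len(work)-1]
		for _, callee := range calls[fn] {
			if !seen[callee] {
				seen[callee] = true
				work = append(work, callee)
			}
		}
	}
	return seen
}

func main() {
	fmt.Println(reachable("main")) // "helper" never appears, so it's never compiled in
}
```

Everything outside the returned set ("helper" here) is exactly the code the parser sees but the output never contains.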

> No dynamic memory allocation = no garbage collector, no non-deterministic allocation/deallocation, no write barriers, no out-of-memory possibilities, no fragmentation. For a surprisingly large class of programs, this is a great situation!

I know you know this already, but your statement is a little too broad. Those problems all still exist, but are greatly reduced. Data structures still need to be compacted, caches evicted, scratch space cleared, etc. It is just that one class of intractable issues gets removed when dynamic memory allocation goes away.
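Right; for instance, a fixed-capacity cache built entirely on static storage still needs an eviction policy. A tiny sketch (names invented):

```go
package main

import "fmt"

const capacity = 4

type entry struct {
	key, value uint32
	used       bool
}

// Statically sized cache: no allocation anywhere, but eviction, clearing,
// and "what if it's full" are still the programmer's problem.
var slots [capacity]entry
var hand int // clock-style victim pointer

func put(key, value uint32) {
	// Overwrite whatever the hand points at: static memory didn't make
	// the eviction decision disappear, it just made it explicit.
	slots[hand] = entry{key: key, value: value, used: true}
	hand = (hand + 1) % capacity
}

func main() {
	for k := uint32(0); k < 6; k++ {
		put(k, k*10) // keys 0 and 1 are evicted once the cache wraps
	}
	fmt.Println(slots)
}
```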

On a side note, have you seen this? An extremely compact VM for a version of R4RS: https://github.com/udem-dlteam/ribbit (video: https://www.youtube.com/watch?v=A3r0cYRwrSs)