I once imagined a time in a far, far away land where the new OS secretly in development was nothing more than a thin interface between the hardware and the software. And the software was a VM. And this was codenamed Fuchsia. And was being worked on by Google. They took the lessons learned from ChromeOS, with its LXC containers and Android container, and realized that the OSs of the future could be anything and everything for anyone and everyone. And opening 35 applications meant running 35 different VMs built from 17 unique OSs, and this was called a software's full stack. And then I would check the memory usage, only to be horrified that my 128 GB of RAM was nearly full, and RAM was just not enough. Then I snapped out of this nightmare.

Are we intentionally not thinking about RAM usage in this dystopian world, where we celebrate WASM-Docker progress without considering the drawbacks, namely memory inefficiency?

Actually, Wasm is moving in the direction you're pointing to. A Wasm runtime should add only a small overhead on top of the requirements of the Wasm module itself.

However, it's true that Wasm is not at that point yet. There are open threads about deallocating Wasm memory [1]: today a module's linear memory can grow, but there is no instruction to shrink it, so pages are only released when the instance itself is destroyed. I expect these features, as well as garbage collection [2], to come to the standard over time. This will allow modules and runtimes to properly manage memory usage.
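To make the grow-only behavior in [1] concrete, here is a minimal sketch in TypeScript using the standard WebAssembly JavaScript API; the page counts are made-up example values:

    // One Wasm page is 64 KiB.
    const PAGE_SIZE = 64 * 1024;

    // Start with 1 page, allow growth up to 100 pages (example values).
    const memory = new WebAssembly.Memory({ initial: 1, maximum: 100 });
    console.log(memory.buffer.byteLength / PAGE_SIZE); // 1

    // Growing is standardized: reserve 9 more pages.
    memory.grow(9);
    console.log(memory.buffer.byteLength / PAGE_SIZE); // 10

    // There is no memory.shrink() counterpart: the 10 pages stay
    // reserved until the whole instance is dropped, which is exactly
    // what the open threads in [1] are about.

In practice a module's footprint is its high-water mark of allocations, which is why the deallocation and GC proposals matter for running many modules side by side.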

[1] https://stackoverflow.com/a/51544868

[2] https://github.com/WebAssembly/gc