Everything boils down to dependency management. It doesn't matter whether you're running a distribution, a container system, a VM farm, a herd of machines or anything that loves static linking: a sysadmin needs to be able to ask "what packages are in use right now, what is their provenance, and how can I roll out changes to that with the least amount of tsuris?"

(And the security officer needs to be able to ask a sysadmin to compile that data. And the auditor needs to be able to verify the chain that produced it. And the poor dev trying to fix a bug needs that data to build a version that reproduces the reported issue, so they can demonstrate that their fix actually works. And so on. And so forth.)

Just as interpreters usually beat compilers for speed of debugging, a system designed to properly manage and modularize dependencies will be faster to debug than an equivalent system that just builds the final target as fast as possible.

I think a good step in this direction is linuxkit[0] -- IMO one of the most exciting projects for improving machine build processes (especially if you're still in the build-a-VM/AMI world).
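What makes linuxkit interesting here is that the whole machine is declared up front in a YAML manifest: kernel, init containers, one-shot boot tasks, and long-running services are each an explicit, content-addressed image, so the dependency list *is* the build input. A minimal sketch (the image tags below are placeholders, not pinned releases):

    kernel:
      image: linuxkit/kernel:<tag>
      cmdline: "console=ttyS0"
    init:
      - linuxkit/init:<tag>
      - linuxkit/runc:<tag>
      - linuxkit/containerd:<tag>
    onboot:
      - name: dhcpcd
        image: linuxkit/dhcpcd:<tag>
    services:
      - name: getty
        image: linuxkit/getty:<tag>

Feeding that to `linuxkit build` produces a bootable image whose provenance is exactly the manifest -- the "what's in use and where did it come from" question answers itself.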

If we can scan container filesystems for dependencies, or choose languages that let us build containers minimal enough that they contain only a single statically linked binary, we can start approaching systems whose dependency chains are almost fully cataloged.
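The scanning half of that is already tractable for images built on a package manager: a Debian-based container records every installed package in /var/lib/dpkg/status, a plain-text file of stanzas you can parse offline. A minimal sketch (the `parse_dpkg_status` helper and the sample data are my own illustration, not any particular scanner's API):

```python
def parse_dpkg_status(text):
    """Return {package: version} parsed from a dpkg status file's contents.

    Stanzas are separated by blank lines; we only pull the Package:
    and Version: fields from each one.
    """
    packages = {}
    name = version = None
    for line in text.splitlines():
        if line.startswith("Package: "):
            name = line[len("Package: "):]
        elif line.startswith("Version: "):
            version = line[len("Version: "):]
        elif line == "":  # blank line closes the current stanza
            if name:
                packages[name] = version
            name = version = None
    if name:  # file may not end with a blank line
        packages[name] = version
    return packages

# Toy stanza data in the dpkg status format (illustrative, not from a real image):
sample = """\
Package: openssl
Status: install ok installed
Version: 3.0.11-1

Package: zlib1g
Status: install ok installed
Version: 1:1.2.13
"""
print(parse_dpkg_status(sample))
```

Point the same idea at an unpacked image root (or at Alpine's /lib/apk/db/installed, which has a similar stanza layout) and you have the start of a per-container dependency catalog.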

[0]: https://github.com/linuxkit/linuxkit