If you don't care about saving disk space then just ship all your dynamic libraries with your binaries.
That pretty much sums up what everyone does on Windows. Lots of things are linked dynamically, but apart from the C/C++ runtime library and the OS libraries you just ship all those DLLs with your software.
But this works because on Windows each application is installed in its own folder, and the search path for dynamic linking starts in the binary's folder. That way you can just dump everything in your installation folder without worrying about compatibility with other software. On Unix or Linux this is much harder to achieve. Sure, you can install into your own folder in /opt and add a wrapper script to load libraries from there, but it's hardly idiomatic.
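For what it's worth, the wrapper usually boils down to something like this (a rough Python sketch; /opt/myapp and the binary name are made up):

    # Point the dynamic linker at the libraries bundled under the app's own
    # prefix, then hand off to the real binary with the caller's arguments.
    import os, sys

    prefix = "/opt/myapp"                    # hypothetical install prefix
    libdir = os.path.join(prefix, "lib")     # the libraries we ship ourselves
    env = dict(os.environ)
    env["LD_LIBRARY_PATH"] = libdir + (
        ":" + env["LD_LIBRARY_PATH"] if env.get("LD_LIBRARY_PATH") else "")
    os.execve(os.path.join(prefix, "libexec", "myapp-real"), sys.argv, env)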
I did it for 10+ years at my last job. You need a build system that hammers on everything really hard to set the rpath on everything, but you shouldn't need wrappers. It definitely isn't idiomatic though.
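Concretely, that "hammer on everything" pass looks roughly like this (a sketch, not the actual build system; it assumes patchelf is installed and a prefix/bin + prefix/lib layout):

    # Walk the install prefix and stamp an $ORIGIN-relative rpath onto every
    # ELF file, so each binary finds the libraries shipped next to it.
    import os, subprocess, sys

    prefix = sys.argv[1]            # e.g. /opt/myapp (hypothetical layout)
    rpath = "$ORIGIN/../lib"        # resolved relative to each binary at load time

    for root, _, files in os.walk(prefix):
        for name in files:
            path = os.path.join(root, name)
            if os.path.islink(path):
                continue
            with open(path, "rb") as f:
                if f.read(4) != b"\x7fELF":    # only touch ELF binaries/libraries
                    continue
            subprocess.run(["patchelf", "--set-rpath", rpath, path], check=True)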
What is the philosophy behind why it's not done like this on Linux? Also, what about Nix?
You assume there’s a philosophy or coherent reasoning behind it, rather than “This is the way we did it with static libraries, so when we adopted shared/dynamic libraries we didn’t change anything else.” Because near as I can tell that’s exactly what happened when BSD and Linux implemented Sun-style .so support in the early 1990s, and there hasn’t been any attempt to rethink anything since then.
Probably because the dynamic linker serves the typical OS layout, where there's only one copy of each dynamic lib and everything is linked against it, and packages installed by package managers are authoritative for the things they ship. Distro maintainers want this, and lots of system admins expect packages to behave like this.
There's an alternative universe somewhere in which containerization took a different path and Unix distros supported installing blobbier things into /opt, but without (or optionally) the hard container around it. Then fat apps could ship their own deps.
The problem is that there's a lot of pushback from people who want e.g. only one openssl package on the system to manage, and it legitimately opens up a security-tracking issue where the fat apps have their own security vulns and updates need to get pushed through those channels. It was more important to us, though, to be able to push a modern Ruby out to e.g. CentOS5, so that work was more than an acceptable tradeoff.
Containerization of course has exactly the same problem, and static compilation probably just hides the problem unless security scanners these days can identify statically compiled vulnerable versions of libraries.
I need to look at NixOS and see if it supports stuff like multiple different versions of interpreted languages like ruby/python/etc. linking against multiple different installed versions of e.g. openssl 1.x/3.x properly. That would be even better than just fat apps shoved in /opt, but it requires a complete rethink of package management to properly support N different versions of packages being installed into properly versioned paths (where `alternatives` is a hugely insufficient hack).
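At least on the Python side it's easy to check what each interpreter actually got linked against; run something like this under each installed interpreter and compare (a trivial sketch, the example output is illustrative):

    # Prints which OpenSSL this particular interpreter is linked against;
    # running it under two differently packaged Pythons should show the
    # 1.x vs 3.x split if the packaging really kept them separate.
    import ssl
    print(ssl.OPENSSL_VERSION)          # e.g. "OpenSSL 3.0.13 30 Jan 2024"
    print(ssl.OPENSSL_VERSION_NUMBER)   # same version as a packed integer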
Some scanners like trivy [1] can scan statically compiled binaries, provided they include dependency version information (I think Go does this on its own; for Rust there's [2]; not sure about other languages).
It also looks into your containers.
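On the "provided they include dependency version information" point, you can inspect what a Go toolchain embedded in a binary yourself; a small sketch (assumes a Go toolchain on PATH, and the binary path is made up):

    # Dump the module/dependency versions embedded in a Go binary --
    # roughly the same information a scanner reads.
    import subprocess

    out = subprocess.run(
        ["go", "version", "-m", "./some-static-binary"],   # hypothetical path
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)   # main module plus each dependency with its version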
The problem is what to do when it finds a vulnerability. In a fat app with dynamic linking you could swap out the offending library, check that this doesn't break anything for your use case, and be on your way. But with static linking you need to compile a new version, or get whoever can build it to compile a new version, which seems to be a major drawback of discouraging fat apps.