My first thought on this was good riddance. The dev model of "we've lost track of our dependencies so ship Ubuntu and a load of state" never sat well.
However, it looks like the main effect will be moving more of open source onto GitHub, i.e. under Microsoft's control, and the level of faith people have that Microsoft won't destroy its competitors for profit is surreal.
What state are you thinking of? The containers are ephemeral and the dependencies are well specified in them. You can complain about shipping Ubuntu, but the rest of this doesn't make sense.
Makes perfect sense to me, sadly. The dependencies are specified excessively; that's why everyone is shipping Ubuntu. This is caused by, and further facilitates, the development style of "don't track what we use, just ship everything." Also, the dependencies are specified in container images, which are themselves derivative artifacts rather than the original source code, and these dependencies often change between container builds with no explicit corresponding change.
There are three practical problems as a result:

- huge image sizes, with unused dependencies delivered as part of the artifact;
- limited ability to share dependencies, due to the inheritance-based model of layers instead of the composition-based model of package managers;
- non-reproducibility of Docker images (not containers) due to loosely specified build instructions.
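As an illustrative sketch (not from the thread), a typical Dockerfile shows all three problems at once; the package names here are made up for the example:

```dockerfile
# Illustrative "ship everything" Dockerfile (hypothetical example).
FROM ubuntu:latest
# ^ unpinned tag: two builds on different days can resolve to different
#   base images -- non-reproducibility baked in at line one

RUN apt-get update && \
    apt-get install -y build-essential python3 python3-pip
# ^ pulls in compilers and headers the running app never uses
#   -- huge image sizes with unused dependencies

COPY . /app
RUN pip3 install -r /app/requirements.txt
# ^ layers below COPY are invalidated by any source change, and a layer
#   can only be shared if the entire chain above it is identical
#   -- inheritance-based sharing, not composition like a package manager
```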
Predicting future comments: Nix mostly fixes these issues, but it has a bunch of issues of its own. Most importantly, Nix is incredibly invasive in the development process; adopting it requires heavy time investment. Containers also provide better isolation.
> Most importantly, nix is incredibly invasive in development process, adopting it requires heavy time investments.
Typically yes, but Nix actually allows you to be less pure to save time and pick the most economical point for you on the reproducibility continuum.
I'm fairly sure there was an article about this... ah here it is:
https://www.haskellforall.com/2022/08/incrementally-package-...
This isn't what I was talking about; I'm all for being as pure as possible, dialing the reproducibility and isolation to the max. Unfortunately, Nix itself as an application is not isolated. It requires a unique installation process, because it wants to manage its store at the root level (/nix/store/), though I hear the situation is different on macOS. Applications packaged with Nix also require special treatment to run in the Nix environment, with paths rewritten and binaries patched to support the Nix filesystem structure instead of the traditional Linux one.
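To make the "binaries patched" point concrete, this is roughly what it looks like on a Nix-built ELF binary (an illustrative transcript; the store hash and version are placeholders):

```
$ patchelf --print-interpreter "$(command -v hello)"
/nix/store/<hash>-glibc-<version>/lib/ld-linux-x86-64.so.2
```

The dynamic loader itself lives in the store rather than at /lib64, which is why such binaries won't run on a system without /nix/store.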
Yes, for cache hits to happen it has to be this way as far as I remember.
There is a project called nix-portable though that I've seen some HPC users report success with:
https://github.com/DavHau/nix-portable
> Applications packaged with Nix also require special treatment to run in Nix environment, with paths rewritten and binaries patched to support Nix filesystem structure instead of the traditional Linux one.
If you fully package it, yes. If you use something like buildFHSUserEnv[0], that's not true.
There is also nix-autobahn and nix-alien for automatically running foreign binaries on a more ad-hoc basis or to generate a starting point for packaging.
0: https://nixos.org/manual/nixpkgs/stable/#sec-fhs-environment...
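For reference, a minimal buildFHSUserEnv sketch looks roughly like this; the name and package list are assumptions for illustration, not a drop-in recipe:

```nix
# shell.nix -- illustrative sketch.
# buildFHSUserEnv builds a sandbox where /usr/lib, /bin, etc. exist at
# their traditional FHS paths, so unpatched binaries can run unmodified.
{ pkgs ? import <nixpkgs> {} }:

(pkgs.buildFHSUserEnv {
  name = "fhs-shell";
  # libraries the foreign binary is assumed to expect at FHS paths
  targetPkgs = p: with p; [ zlib openssl stdenv.cc.cc.lib ];
  runScript = "bash";
}).env
```

Entering it with `nix-shell` drops you into a bash session inside the FHS sandbox, where the listed libraries are visible at conventional paths.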