It's a nice idea, but it won't gain traction because it's still designed with 2000s-era computing in mind. Nobody manages an entire complex system within a specific Linux distribution anymore. There is a vast ecosystem of tooling, systems, protocols, networks, and services that comprise modern distributed systems, and all of it exists regardless of the Linux distro.

In fact, I will predict right now the death of Linux as a dominant backend computing platform. It has sort of happened already. Sure, Linux is the kernel that runs the host machines the cloud runs on. But the services are increasingly serverless/kernel-less, or run on micro-VMs running either Linux or another kernel, or run in containers built from a half dozen different base distros. All the Linux I/O is just passed to an outer layer with a universal adapter, so you can mix and match networking, filesystems, logging platforms, policy governance, orchestration, scheduling, etc. I spend nearly all of my time building systems by tuning software and services that have almost nothing to do with Linux.

Linux distros will still be around in one form or another. But long gone are the days when most people defined their system or security posture around a particular distro. Hell, most of the software used in the cloud today isn't even packaged by distros. Linux has become just a kernel again, and the distros just a fancy installer for GNU tools.

They're using a lot of advanced tech and advertising that you'll "maintain fewer virtual computers", whatever that means. But it also seems their tech is more complex and less compatible with existing components. There's no clear story for how a developer is supposed to take their code on their laptop, test it, and ship it to production as one immutable image. If it's not as simple as containers, it's not going to replace them.

Also, it's interesting that they seem to allude to immutability but never mention the principle by name. It seems like whoever is developing this doesn't run large distributed systems.

> Interestingly, of all the buzzwords this distro uses, none include "immutable", which is the single most important concept in modern systems.

Since this system is based on NixOS and the Nix package manager, immutability is implied. Any change you make to a Nix/NixOS system results in a full rebuild of the system, with the pointers then updated to the new build. The current build and all prior builds remain immutable.
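Concretely, the whole system is declared in one expression, every rebuild produces a new generation in the Nix store, and "switching" just repoints a symlink at the new closure, so rollback is trivial. A rough sketch (abbreviated, not a complete config):

    # /etc/nixos/configuration.nix -- the whole system is declared here
    { pkgs, ... }: {
      environment.systemPackages = [ pkgs.nginx ];
      services.openssh.enable = true;
    }

    # Build and activate a new generation; nothing in the old one is mutated
    $ sudo nixos-rebuild switch

    # Each generation is an immutable closure in /nix/store; rollback
    # just points the profile symlink back at the previous build
    $ sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
    $ sudo nixos-rebuild switch --rollback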

Yeah, I've never used Nix, so I don't know how it relates to cloud systems. As a developer, how would I use Nix to develop an app on my machine and ship it to 1,000 cloud-orchestrated systems using the same immutable image? Is there any reason why one would use Nix with Docker instead of Debian with Docker?

Keep in mind that all large-scale systems work by deploying container images. If it's not in a container image, it's not getting deployed.

I'm still relatively new to NixOS, having switched all my personal systems over to it this spring/summer. I don't do cloud dev atm, so haven't explored that use case yet.

But I believe NixOps is the canonical way to do what you're describing in production/at scale:

https://github.com/NixOS/nixops

https://nixos.org/nixops/manual/
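
I haven't deployed with it myself, but going by the manual, you describe every machine declaratively and nixops builds and pushes the same system closure to each target. Roughly (an untested sketch; the hostname is a placeholder):

    # network.nix -- every machine gets an immutable system closure
    # built from this one expression
    {
      network.description = "web cluster";

      webserver = { pkgs, ... }: {
        deployment.targetHost = "203.0.113.10";   # placeholder address
        services.nginx.enable = true;
        networking.firewall.allowedTCPPorts = [ 80 ];
      };
    }

    $ nixops create ./network.nix -d web-cluster
    $ nixops deploy -d web-cluster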

(more below)

If you want a more experienced answer, I suggest asking on the NixOS forum or subreddit; people are quick to answer in both places.

https://discourse.nixos.org/

https://www.reddit.com/r/NixOS/

----

More useful NixOps guides:

https://www.thedroneely.com/posts/nixops-towards-the-final-f...

https://ops.functionalalgebra.com/nixops-by-example/

https://nixops.readthedocs.io/en/latest/
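
----

On the Docker question specifically: from what I've read you don't have to pick one or the other, since nixpkgs ships dockerTools for building ordinary Docker/OCI images from a Nix expression, so the image you test locally is the same immutable artifact you push to the registry. A minimal sketch in the style of the nixpkgs manual (untested; "my-app" and the contents are placeholders):

    # image.nix -- build a Docker image containing just the app's closure
    { pkgs ? import <nixpkgs> {} }:
    pkgs.dockerTools.buildImage {
      name = "my-app";            # placeholder image name
      tag = "latest";
      contents = [ pkgs.hello ];  # swap in your app's package
      config.Cmd = [ "/bin/hello" ];
    }

    $ nix-build image.nix        # produces an image tarball at ./result
    $ docker load < result       # then tag/push to your registry as usual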