I am very much a fan of hot-takes, but this one is trash --

> The money was wasted on hype. The same will eventually be said of Docker. I’ve yet to hear a single benefit attributed to Docker that isn’t also true of other VMs, but standard VMs allow the use of standard operating systems that solved all the hard problems decades ago, whereas Docker is struggling to solve those problems today.

Linux containerization (using the word "docker" for everything isn't right either) is an isolation + sandboxing mechanism, NOT a virtual machine. Even if you talk about things like LXC (orchestrated by LXD), that's basically just the addition of the user namespacing feature. A docker container is not a VM; it is a regular process, isolated with cgroups and namespaces, and possibly protected (like any other process) with SELinux/AppArmor/etc.
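You can see the "container = regular process" point for yourself on any Linux box: every process already carries a full set of namespaces under `/proc/<pid>/ns`, and a "container" is just a process whose set differs from the host default. A minimal sketch (Linux-only; the `docker inspect` lookup in the comment is the usual way to find a container's host PID, shown here only as an aside):

```python
import os

# Every Linux process is already "in" namespaces; list our own set.
names = sorted(os.listdir("/proc/self/ns"))
print(names)  # includes entries like 'cgroup', 'ipc', 'mnt', 'net', 'pid', ...

# With a container runtime running, you could compare inode numbers:
#   pid=$(docker inspect -f '{{.State.Pid}}' <container>)
#   ls -l /proc/$pid/ns   # different inodes => different namespaces
```

If two processes' namespace symlinks point at the same inodes, they share those namespaces; a containerized process simply gets its own.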

Containerization is almost objectively a better way of running applications -- there's really only one question: do you want your process to be isolated, or not? All the other stuff (using Dockerfiles, pulling images, the ease of running languages that require their own interpreters since you package the filesystem) is layered on top of this basic value proposition.

An easy way to tell that someone doesn't know what they're talking about when speaking about containerization is if they call it a VM (and don't qualify/note that they're being fast and loose with terminology).

All this said -- I do think Docker will die, and it should, because Docker is no longer the only game in town for reasonably managing (see: podman, crictl) and running containers (see: containerd/cri-o, and libcontainer, which turned into runc).

[EDIT] - I want to point out that I do not mean Docker the company or Docker the project will "die" -- they have done amazing things for the community and for development as a whole that will literally go down in history as a paradigm shift. What I should have written was that "docker <x>", where <x> is "image", "container", "registry", etc., should be replaced by "container <x>".

> do you want your process to be isolated, or not.

No, not always. Why?

At work I have a few coworkers pushing hard to dockerize (isolate?) everything.

This makes debugging when things go wrong a lot harder.

I see isolation as one of several qualities a process can have, one that is sometimes valuable enough to be worth the sacrifice.

Isolation is not some absolute quality that is without significant tradeoffs.

I avoided saying that processes should always be isolated because there are sometimes very good reasons not to isolate a process with the containerization approach we're talking about, performance being one that came to mind quickly.

Containerization of processes definitely increases complexity, but if you can take the time to understand VMs then you can (and should, IMO) take the time to understand how containers work as well; they are lighter and simpler (for example, you don't need to build a kernel or make an initrd). I would argue that people who think VMs are simpler are actually being fooled by huge advancements in tooling over the years and by the fact that it's become "easy" -- not that it was ever simple.

I also want to point out that containers can actually make tracking down some bugs easier, though in a counter-intuitive way -- they remove whole classes of bugs from ever occurring. You'll never have two programs clobber some shared folder or resource, never have programs fight over dependencies, and never have them struggle for locally-bound ports, if you're running them in containers.
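The locally-bound-ports clash is easy to reproduce on a shared host network, which is exactly what per-container network namespaces eliminate (each container gets its own port space, so two services can both bind, say, :8080 internally). A minimal sketch of the host-side clash:

```python
import errno
import socket

# First service grabs a port on the shared host network.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))          # let the OS pick any free port
port = a.getsockname()[1]
a.listen()

# Second service tries to bind the very same address:port and fails.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    clashed = False
except OSError as e:
    clashed = (e.errno == errno.EADDRINUSE)
finally:
    a.close()
    b.close()

print("clashed:", clashed)
```

Run each service in its own network namespace and this class of failure simply can't happen; the runtime maps container ports to distinct host ports instead.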

Containerization definitely represents an increase in complexity, but it is well worth the effort, most of the time, granted you understand the tooling.

> if you can take the time to understand VMs then you can (and should, IMO) take the time to understand how containers work as well

I don't see it as VMs vs containers.

We have a good devops process to deploy onto our instances, so we rarely have the resource clashes you mention (ports/directories), because none of that is ever configured manually. All our infrastructure is derived from 'scripts', so it hasn't been a problem at all.

Aside from python, I see no advantage in containerizing any of our processes at all.

As for debugging, I always forget how infuriating it is until, in the heat of the moment, I have to open up a shell into someone's badly made docker image and try to use common tools to help diagnose a problem (ps, nslookup, dig, et al.), all missing from the wonderful little container.

It's like being on a big navy ship, stranded in the ocean because the engines broke down, but everyone left all the tools back at the base. Yay!

> I don't see it as VMs vs containers.

It's not? I didn't mean to pit them against each other in competition; I'm saying that if VMs are worth learning about and taking the time to understand, so are containers. It doesn't have to be zero-sum.

> We have a good devops process to deploy onto our instances, so we rarely have resource clashes you mention (ports/directories) because none of that is ever configured manually. All our infrastructure is derived from 'scripts', so it hasn't been a problem at all.

It seems like it was a class of problems that you have fixed with "good devops process". I'd argue that it probably was a problem at one point, and you improved your devops process to make sure it wasn't.

> Aside from python, I see no advantage in containerizing any of our processes at all.

Well, I don't know your infrastructure, so I can't comment on that. I doubt that python is the only thing you run that could benefit from containerization (which, again, means limiting access to system resources through namespaces and cgroups), but if you say so then I have no choice but to believe that's the case.

> As for debugging, I always forget how infuriating it is, till in the heat of the moment I have to open up a shell into someone's badly made docker image and try to use common tools to help diagnose a problem (ps, nslookup, dig, all) all missing from the wonderful little container.

Sounds like you could use some more of that "good devops process" you had when you set up the deploy machinery.

Also, the fact that all of that stuff is missing from the container is actually beneficial from a security point of view -- the inconvenience you're experiencing is the same inconvenience an intruder would face before breaking out of the container (assuming they had the skill to do that). This means you have another chance to catch them downloading and/or running `ps`/`nslookup`/`bash` or whatever tooling, and to flag the suspicious behavior. Whether you're in a VM or not, containers are another line of defense, and that's almost certainly a good thing.
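Worth noting, too: a tool-less container doesn't leave you blind, because the host can read the container's process state straight out of `/proc` without anything installed inside the image. A small sketch (mapping a container to its init's host PID is runtime-specific -- e.g. `docker inspect -f '{{.State.Pid}}' <name>` -- and assumed here, so the demo inspects its own PID):

```python
import os

def cmdline(pid: int) -> str:
    # /proc/<pid>/cmdline is the process's argv, NUL-separated.
    with open(f"/proc/{pid}/cmdline", "rb") as f:
        return f.read().replace(b"\0", b" ").decode(errors="replace").strip()

me = cmdline(os.getpid())
print(me)  # this process's own command line, read via /proc
```

Tools like `nsenter(1)` take the same idea further, letting you run host binaries inside a container's namespaces, so `ps` and `dig` never needed to ship in the image at all.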

> It seems like it was a class of problems that you have fixed with "good devops process". I'd argue that it probably was a problem at one point, and you improved your devops process to make sure it wasn't.

It certainly was but we fixed it and it's not a problem anymore.

> Sounds like you could use some more of that "good devops process" you had when you set up the deploy machinery.

Yeah, there are people within my group who want to 'modernize' things and put them into containers willy-nilly for no real reason.

We have already solved all the difficult problems that containers are supposed to 'save' us from. Many of the proposed containers would just be a single statically linked binary with a config file.

Why?

FYI, our stuff is hosted internally, so security considerations are not such a big deal.

To hear these container advocates, you'd think that till they came around no one ever managed to use linux.

I'm fully expecting linux userland tools to go away, to be replaced by custom 'distributions' with only a kernel and a docker API soon.

> I'm fully expecting linux userland tools to go away, to be replaced by custom 'distributions' with only a kernel and a docker API soon.

They're already here!

- CoreOS Container Linux (now owned by Red Hat)[0]

- RancherOS[1]

- Kubic[2] (more focused on running Kubernetes, but same idea)

There are also tools like LinuxKit[3], which focus on helping you build images that run the containers you want and nothing else at startup, which is pretty cool I think.

[0]: https://coreos.com/os/docs/latest/

[1]: https://rancher.com/rancher-os/

[2]: https://kubic.opensuse.org/

[3]: https://github.com/linuxkit/linuxkit