Ignorance admission time: I still have no idea what problem containers are supposed to solve. I understand VMs. I understand chroot. I understand SELinux. Hell, I even understand monads a little bit. But I have no idea what containers do or why I should care. And I've tried.

Am also a huge newcomer to this.

Yeah, I think a lot of it is better resource utilization compared to VMs. At the same time, though, I don't think containers are the thing, but just a thing that paves the way for something very powerful: datacenter-level operating systems.

In 2011, Zaharia et al. presented [1], which argued that the increasing scale of deployments and the growing variety of distributed applications mean we need better deployment primitives than the individual machine. On the topic of virtualization, it observed:

> The largest datacenter operators, including Google, Microsoft, and Yahoo!, do not appear to use virtualization due to concerns about overhead. However, as virtualization overhead goes down, it is natural to ask whether virtualization could simplify scheduling.

But what they didn't know was that Google had already been using containers for a long time. [2] They're deployed with Borg, an internal cluster scheduler (probably better known as the predecessor of the open-source Kubernetes), which serves as exactly the kind of datacenter operating system that Zaharia et al. described. When you think about it that way, a container is better thought of not as a thinner VM, but as a thicker process.

> Because well-designed containers and container images are scoped to a single application, managing containers means managing applications rather than machines.
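To make the "thicker process" framing concrete, here's a minimal sketch (my own illustration, not how Borg or Docker actually work) of launching a shell as an ordinary process in its own Linux namespaces. Linux-only, and it needs root:

```go
package main

// A "container" at its core: a normal process started in fresh
// namespaces, sharing the host kernel. Real systems add cgroups for
// resource limits and a separate filesystem image on top of this.
import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// New UTS, PID, and mount namespaces: the child gets its own
	// hostname, its own PID 1, and its own mount table.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

The process model is exactly the same fork/exec as always; the extra "thickness" is just kernel-level isolation bolted onto it.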

In the open-source world, we now have projects like Kubernetes and Mesos. Neither is fully mature yet, but they're on the way.

[1] https://cs.stanford.edu/~matei/papers/2011/hotcloud_datacent...

[2] http://queue.acm.org/detail.cfm?id=2898444

The big missing "virtualization" technology is the Apache/CGI model: you upload individual script-language (or compiled-on-the-spot) functions that are then executed directly in the context of the host server process.

This exploits the fact that one webserver differs from another only in the contents of its response method; any other differences are actually unwanted. You can make this a lot more efficient by simply sharing everything except the response method between different customers, as in the sketch below.
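As a rough illustration of that sharing (hypothetical tenant hostnames, and Go instead of Apache modules), here's one server process where the only per-customer piece is the response function:

```go
package main

// One shared server process, many tenants: listening, parsing, and
// scheduling are common; only the response-producing function differs.
import (
	"fmt"
	"net/http"
)

// Each "customer" supplies just the body of the response method.
// The hostnames and handlers are made up for illustration.
var tenants = map[string]http.HandlerFunc{
	"alice.example.com": func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Alice's site")
	},
	"bob.example.com": func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Bob's site")
	},
}

func main() {
	// Dispatch on the Host header; everything else is shared.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if h, ok := tenants[r.Host]; ok {
			h(w, r)
			return
		}
		http.NotFound(w, r)
	})
	http.ListenAndServe(":8080", nil)
}
```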

This sharing meant that the Apache mod_x modules (famously mod_php and mod_perl) could serve websites on behalf of large numbers of customers on extremely limited hardware.

It does make for a challenging security environment, though that can be improved when starting from scratch.

I think the modern equivalent of what you're describing is basically the AWS Lambda model of "serverless" applications. In the open-source world, there are projects like Funktion [1] and IronFunctions [2] for Kubernetes; a minimal example in that style is sketched after the links.

[1] https://github.com/funktionio/funktion

[2] https://github.com/iron-io/functions
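To show how little the customer actually ships in that model, here's a minimal Go handler for AWS Lambda using the official aws-lambda-go library (the event shape is made up for illustration):

```go
package main

// The "serverless" endpoint of the Apache/CGI idea: the platform owns
// the entire server; you upload only the response function.
import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// Event is a hypothetical input shape for this example.
type Event struct {
	Name string `json:"name"`
}

func handler(ctx context.Context, e Event) (string, error) {
	return fmt.Sprintf("Hello, %s", e.Name), nil
}

func main() {
	// lambda.Start hands the function to the Lambda runtime.
	lambda.Start(handler)
}
```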