Someday I would like a powwow with all you hackers about whether 99% of apps need more than a $5 droplet from DigitalOcean, set up the old-fashioned way: LAMP, though feel free to swap out the letters (BSD instead of Linux, Nginx instead of Apache, PostgreSQL instead of MySQL, Ruby or Python instead of PHP).
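For concreteness, most of the apps I have in mind aren't much more than this shape, shown here as a minimal Python sketch (Flask is just my pick for illustration; substitute PHP or Ruby to taste), sitting behind Nginx or Apache on that one droplet:

    # app.py -- the "P" of the stack swapped for Python, purely as a sketch.
    # A real app adds a database and templates, but the shape is the same.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from a $5 droplet"

    if __name__ == "__main__":
        app.run()  # in production, run under gunicorn or uwsgi behind the web server instead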

I manage dozens of apps for thousands of users. The apps all run on one server, with a load average around 0.1. I know, it isn't web-scale. Okay, how about Hacker News? It runs on one server. Moore's Law shrank most of our impressive workloads to a golf ball in a football field years ago.

I understand that some companies need many, many servers: Google, Facebook, Uber, even mid-sized companies like Basecamp. But to the rest I want to ask: what's the load average on the Kubernetes cluster for your Web 2.0 app? If it's high, is that because you're getting 100,000 requests per second, or because of the frameworks you cargo-culted in? What would the load average be if you just wrote a LAMP app?

EDIT: Okay, a floating IP and two servers.

As somebody who has his own colocated server (and has since Bubble 1.0), I definitely agree that the old-fashioned way still works just fine.

On the other hand, I've been building a home Kubernetes cluster to check out the new hotness. And although I don't think Kubernetes provides huge benefits to small-scale operators, I would still probably recommend that newbs look at some container orchestration approach instead of investing in learning old-school techniques.

The problem for me with the old big-server-many-apps approach is how hard it becomes to manage. Five years on, I know I did a bunch of things for a bunch of reasons, but I don't really remember what or why. It mixes intention with execution in a way that gets muddled over time, and moving to a new server or OS becomes more archaeology than engineering.

The rise of virtual servers and tools like Chef and Puppet provided some ways to manage that complexity. But "virtual server" is like "horseless carriage". The term itself indicates that some transition is happening, but that we don't really understand it yet.

I believe containers are at least the next step in that direction. Done well, they separate intent from implementation much more cleanly than older approaches. Something like Kubernetes strongly encourages patterns that make scaling easier, sure. But even if the scaling never happens, it leaves people better prepared for the operational issues that certainly will happen: migrations, upgrades, hardware failures, transfers of control.
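To make "intent vs. implementation" concrete, here's roughly what a deployment boils down to, sketched with the official Kubernetes Python client (most people write the equivalent YAML manifest; the names, image, and replica count below are just placeholders):

    # Requires `pip install kubernetes` and a working kubeconfig (e.g. a home cluster).
    from kubernetes import client, config

    config.load_kube_config()

    # Intent: "two replicas of this image should always be running."
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="example-app"),  # placeholder name
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "example-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "example-app"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]),
            ),
        ),
    )

    # Implementation: the control plane reconciles reality toward that declared state,
    # rescheduling containers across upgrades, hardware failures, and migrations.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

The point isn't the client library; it's that the desired end state lives in one declarative spec instead of being scattered across years of hand-edits on a long-lived box.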

> I've been building a home Kubernetes cluster to check out the new hotness

I tried to do this for the same reason, but all of the writeups seem to stop at "getting a cluster running". That isn't enough to actually run apps, since you also need a load balancer / ingress, DNS, and probably a number of other things (ultimately I was overwhelmed by how much I needed but didn't completely understand). I haven't had any luck finding a soup-to-nuts writeup, so if you have any recommendations, I'd love to hear them.

I've heard good things about Kelsey Hightower's https://github.com/kelseyhightower/kubernetes-the-hard-way