Oh man, the original article went way over the author's head. The point of the original article was that even though Kubernetes is primarily useful for tackling the challenges involved with running many workloads at enterprise scale, it can also be used to run small hobbyist workloads at a price point acceptable for hobbyist projects.

Does that mean that Kubernetes should now be used for all hobbyist projects? No. If I'm thinking of playing around with a Raspberry Pi or other SBC, do I need to install Kubernetes on the SBC first? If I'm thinking of playing around with IoT or serverless, should I dump AWS- or GCE-proprietary tools because nobody will ever run anything that can't run on Kubernetes ever again? If I'm going to play around with React or React Native, should I write up a backend just so I have something I can run in a Kubernetes cluster, because all hobbyist projects must run Kubernetes now that it's cheap enough for hobbyist projects? If I'm going to play around with machine learning at home and buy a machine with a beefy GPU, should I figure out how to get Kubernetes to schedule my machine learning workload correctly instead of just running it directly on that machine, because, uhhh, maybe someday I'll have three such machines with powerful GPUs plus other home servers for all my other hobbyist projects?

No, no, no, no, no. Clearly.

But maybe I envision my side project turning into a full-time startup some day. Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it. Nah, probably too expensive. Oh wait, I can get something running for $5? Hey, that's pretty neat!

Different people will use different solutions for different project requirements.

I do agree with you, but I don't think I really missed the point of the original article. From the original article:

> However popular wisdom would suggest that Kubernetes is an overly complex piece of technology only really suitable for very large clusters of machines; that it carries a large operational burden and that therefore using it for anything less than dozens of machines is overkill. I think that's probably wrong.

I don't think that is wrong. I do think it is probably overkill, and IMO it does introduce operational burden and complexity. That doesn't mean you shouldn't do it, though, if you're interested in exploring the technology, for example.

> using it for anything less than dozens of machines is overkill

The question isn't really whether you need dozens of machines, it's whether you can foresee eventually maybe needing dozens of machines.

Remember the bad old days when people said that relational databases were worthless because they "don't scale", and that using Mongo and other NoSQL databases was practically a necessity for doing anything modern and "web-scale", because otherwise, once you got your big break and got popular, you wouldn't be able to keep up with all the new traffic without crashing? A lot of engineers have this tendency to worry about scalability long before it's ever a problem. Something about the delusions of grandeur incurred by people who got into engineering because they were inspired by great people building big things.

Starting out by running Kubernetes on a three-node cluster is actually the correct call for a small project if you can reasonably foresee needing to elastically scale your cluster in the future, and don't want to waste days or weeks porting to Kubernetes down the line to deal with your scalability problems that you foresaw having in the first place.

Again, that doesn't mean that Kubernetes is right for every hobbyist project. But there is definitely a (small) subset of hobbyist projects for which it is not overkill.

Kubernetes has a definite whiff of NoSQL - a massively hyped tool/technique originating from Google with oversold benefits.

I tried it about 6 months back with the intent of using it in a corporate prod environment, and getting set up was... a massive pain in the ass, to say the least - compared to the existing Ansible setup. It was supposed to solve headaches, not cause them.

I wasn't impressed. I wouldn't be surprised if it ends up being "Angular 1.0" to someone else's React.

i set up a kubernetes cluster 1 year ago at work and a private one last weekend.

last year took, i think, 2 days. my private one was up and running within ~1h, including writing the ansible role to first install binaries/dependencies and join the cluster as a worker node.

either you didn't use kubeadm to set it up or ... i have no idea how you could've possibly failed.

it's pretty much

    (all) ${packagemanager} install docker-ce kubectl kubelet kubeadm
    (master) kubeadm init -> prints token
    (node) kubeadm join ${token}
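
to be fair, the actual join command that kubeadm init prints for the workers has a couple more pieces; roughly, with placeholders instead of real values, it looks like

    # sketch only - the join command as kubeadm init prints it, with placeholder values;
    # 6443 is the default apiserver port
    kubeadm join ${master_ip}:6443 --token ${token} \
        --discovery-token-ca-cert-hash sha256:${hash}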

Jeff Geerling even wrote an Ansible role to do all of the heavy lifting for you. I've used it alongside Vagrant to spin up a three-node cluster in ~15 minutes.

https://github.com/geerlingguy/ansible-role-kubernetes
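
Roughly, the workflow is only a few commands (assuming the role is published on Ansible Galaxy as geerlingguy.kubernetes; the inventory and playbook names below are just illustrative, not from the role's docs):

    # pull the role, boot the Vagrant VMs, then run a playbook that applies the role to each node
    ansible-galaxy install geerlingguy.kubernetes
    vagrant up
    ansible-playbook -i inventory.ini site.yml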

I've used OpenStack a good bit, but not Kubernetes directly, and I have never set it up. Is there an up-to-date, in-depth tutorial around?