The article seems to focus on K8s with reference to micro-services. How well does K8s do if you're running a monolith?

All of the following assumes you plan on running more than a single instance of your monolithic application. If that's not the case, then ignore Kubernetes, and be glad you don't have the problems it was designed to solve.

If you consider what it takes to manage the end-to-end lifecycle of a single application, monolith or micro-service, you need a solution for the following items: deployments, application configuration, high availability, log and metrics aggregation, autoscaling, and load balancing across multiple application instances.

Kubernetes provides an opinionated way of doing all of those things. For example, Kubernetes leverages container images and declarative configs for packaging and deploying applications. For many people this approach is much simpler than what Puppet, Chef, and Ansible bring to the table in terms of managing applications.
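To make the declarative-config point concrete, here's a minimal sketch of what deploying a monolith looks like. Everything here is illustrative — the name, image, port, and replica count are made up, not from any real setup:

```yaml
# Hypothetical Deployment for a monolith packaged as a container image.
# You declare the desired state (3 replicas of this image) and Kubernetes
# works to make reality match it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-monolith
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-monolith
  template:
    metadata:
      labels:
        app: my-monolith
    spec:
      containers:
        - name: app
          image: registry.example.com/my-monolith:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
```

You'd apply it with `kubectl apply -f deployment.yaml`, and rolling out a new version is just changing the image tag and applying again.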

When it comes to high availability Kubernetes provides an orchestration layer across multiple machines, grouped in clusters, that deals with distributing applications based on resource requirements and automatically responding to node and application failures. When applications crash, Kubernetes restarts them. When nodes fail, Kubernetes reschedules the applications to healthy nodes, and avoids the failed nodes in the future.
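Both of those behaviors are driven by a couple of fields on the container spec. This fragment is a sketch (the values and the `/healthz` endpoint are assumptions): the resource requests are what the scheduler uses to place the app on a node with capacity, and the liveness probe is how Kubernetes decides a crashed or wedged instance needs a restart:

```yaml
# Illustrative container spec fragment. Resource requests inform scheduling
# across nodes; a failing liveness probe triggers an automatic restart.
containers:
  - name: app
    image: registry.example.com/my-monolith:1.0.0  # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
    livenessProbe:
      httpGet:
        path: /healthz   # assumes the app exposes a health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
```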

Kubernetes provides many of these application management patterns out of the box, even for monoliths running across a handful of nodes. In essence, Kubernetes is the sum of all the bash scripts and best practices that most system administrators would cobble together over time, presented as a single system behind a declarative set of APIs.
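Two of those out-of-the-box patterns — load balancing across instances and autoscaling, both from the list above — are also just declarative objects. A hedged sketch, with made-up names and thresholds:

```yaml
# A Service load-balances traffic across all pods matching the selector;
# the HorizontalPodAutoscaler grows or shrinks the Deployment based on
# observed CPU utilization. All values here are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-monolith
spec:
  selector:
    app: my-monolith
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-monolith
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-monolith
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```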

One other major caveat to all of this:

Just like I would not recommend standing up OpenStack from the ground up in order to deploy your monolithic application across a set of virtual machines, I don't recommend rolling your own Kubernetes cluster either. You should strongly consider leveraging a fully managed Kubernetes offering such as Google Kubernetes Engine, Digital Ocean's Managed Kubernetes, or Azure Kubernetes Service.

> Just like I would not recommend standing up OpenStack from the ground up in order to deploy your monolithic application across a set of virtual machines, I don't recommend rolling your own Kubernetes cluster either. You should strongly consider leveraging a fully managed Kubernetes offering such as Google Kubernetes Engine, Digital Ocean's Managed Kubernetes, or Azure Kubernetes Service.

The rest seems reasonable, but I disagree strongly with that claim. If it's worth using the tool then it's also worth learning how it works.

Even just kubespraying a cluster myself helped me build a much stronger mental model of how Kubernetes works than trying to take over a colleague's black box Kops setup. GKE or another managed service would have been even worse.

Setting up a small cluster isn't that hard, and it will teach you a lot about the internals and how things can go south (and what to do when that inevitably happens).

FYI: the person you are responding to is the author of Kubernetes The Hard Way [1], which is effectively a tutorial for learning how all the K8s pieces work together. He also co-authored the first book on it [2]. He's also a Google employee, but I would trust his opinion more than most precisely because he's probably seen more use cases than almost anyone else.

[1] https://github.com/kelseyhightower/kubernetes-the-hard-way

[2] https://www.amazon.com/Kubernetes-Running-Dive-Future-Infras...