Thank you for these 30 bullet points with no description or further detail other than "do this". Some are pretty common-sense, others not, and for those, merely saying it's "best practice" with no extra detail, links, or accompanying reasoning is not enough.

Not OP, but I’ve been running a Kubernetes cluster for a few months myself, so I’ll try to give some context.

> Don’t trust arbitrary base images.

Kinda obvious: it’s the equivalent of "don’t trust any random binary from the web", since at worst it can contain malware.

> Use small base image.

Generally, you should look at using Alpine as a base image, to reduce storage size and the memory consumption of the overlay filesystem that Docker uses. But be aware that Alpine uses musl and BusyBox instead of GNU libc and coreutils, so some software might not work there. Generally, this is also common sense. See more at https://alpinelinux.org/about/ and https://hub.docker.com/_/alpine/ – and note that the projects you want to depend on often already publish an Alpine image (e.g., the openjdk, nodejs, and postgres images are all available in an Alpine variant, reducing their size from 500M+ to around 10-20M).
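To make this concrete, here's a minimal sketch of switching a Node.js image to its Alpine variant (tag names and file names are illustrative; check the image's Docker Hub page for current tags):

```dockerfile
# Instead of the full Debian-based image:
# FROM node:18

# Use the Alpine variant, which is much smaller:
FROM node:18-alpine

WORKDIR /app
COPY package.json package-lock.json ./
# musl/BusyBox caveat: native addons may need build tools that
# the full image ships but Alpine does not.
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```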

> Use the builder pattern.

With containers, many people end up including all build dependencies in the final image. Docker has newer syntax (multi-stage builds) to avoid this: you first declare a builder container with its dependencies, run the build there, then declare the final container and COPY the build artifacts over from the builder. This, too, keeps the final image a lot smaller.

You can find more here: https://docs.docker.com/engine/userguide/eng-image/multistag...
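A minimal sketch of the builder pattern using multi-stage syntax, with a Go app as an illustrative example (tags and paths are assumptions):

```dockerfile
# Stage 1: builder image with the full toolchain
FROM golang:1.21-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: final image contains only the compiled artifact,
# none of the build dependencies
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```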

> Use non-root user inside container.

Running as root inside the container is bad security practice (especially when combined with problematic filesystem mounts), so using a non-root user is basically common sense with Docker as the runtime. It’s also recommended by the Docker team: https://docs.docker.com/develop/dev-best-practices/
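A sketch of creating and switching to an unprivileged user in a Dockerfile (the adduser/addgroup flags shown are Alpine's; Debian-based images use useradd/groupadd instead, and the names/paths are illustrative):

```dockerfile
FROM alpine:3.19
# Create an unprivileged system user and group
RUN addgroup -S app && adduser -S -G app app
COPY --chown=app:app . /app
# Everything from here on runs as the non-root user
USER app
CMD ["/app/run.sh"]
```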

> Make the file system read only.

I’m not sure what OP is referring to here, but if it’s the container filesystem, that’s because writing to AuFS or OverlayFS is significantly slower (and more memory-intensive) than writing to a PersistentVolumeClaim or emptyDir volume in Kubernetes. So you should always mount an emptyDir volume for log folders, temporary data, etc., and a PersistentVolumeClaim for all persistent data.

This, too, is recommended by the Docker team: https://docs.docker.com/develop/dev-best-practices/
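In a Kubernetes pod spec you can combine a read-only root filesystem with writable emptyDir mounts. A minimal sketch (image, volume, and claim names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: myapp:latest          # illustrative image name
    securityContext:
      readOnlyRootFilesystem: true   # writes to the overlay FS now fail
    volumeMounts:
    - name: tmp
      mountPath: /tmp            # writable scratch space
    - name: data
      mountPath: /var/lib/app    # writable persistent data
  volumes:
  - name: tmp
    emptyDir: {}
  - name: data
    persistentVolumeClaim:
      claimName: app-data        # assumes an existing PVC
```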

> One process per container, Don’t restart on failure, crash cleanly instead, Log to stdout & stderr

This is related to the logging system (which mostly looks at stdout and stderr), and to the fact that Kubernetes itself was mostly designed to work with a single process per container. Yes, you can spawn multiple processes from a single shell and print their combined stdout, but then you also need to ensure that if one crashes, everything restarts properly.

If you use a single process per container, logging to stdout/stderr, then scaling is a lot simpler, and restarts are handled automatically (and this is required for staged rollout).

> Add dumb-init to prevent zombie processes.

If you run multiple processes and one of their parents dies, the orphaned children get re-parented to PID 1, and if PID 1 never reaps them (via wait()), they linger as zombie processes. A single process per container obviously avoids this, but if you have to use multiple, at least add an init system as PID 1 to reap dead child processes, potentially restart crashed dependent processes, etc.

Normally, Docker supports the --init flag to do this, but the version recommended for use with Kubernetes does not support it yet (EDIT: apparently, since 1.7.0, Kubernetes actually does this for you automatically), so you could add e.g. https://github.com/Yelp/dumb-init or https://github.com/krallin/tini (both officially recommended by the Docker team).
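A sketch of wiring tini in as PID 1 (Alpine ships it as a package installing to /sbin/tini; the app path is illustrative):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache tini
COPY app /usr/local/bin/app
# tini runs as PID 1 and reaps zombie children; the app runs as its child
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/usr/local/bin/app"]
```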