Hey everyone! I have a home server that runs some apps, and I've been installing them directly, but they kept breaking on every update. I wanted to Dockerize them, but I needed something that would manage all the containers without me having to ever log into the machine.
This has also worked very well at work, where we have some simple services and scripts running constantly on a micro AWS server. Deployments are now completely automated, and people can deploy their own services just by adding a line to a config (see the sketch below) instead of having to learn a whole complicated system or SSH in and make changes manually.
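To give a rough idea of what I mean by "adding a line to a config", here's a minimal sketch. The app names and repo URLs are made up, and the exact Harbormaster schema may differ, so check its docs before copying this:

```yaml
# harbormaster.yml (illustrative): each entry points at a repo
# containing that app's Compose file
apps:
  grafana:
    url: https://github.com/example/grafana-deployment
  uptime-monitor:
    url: https://github.com/example/uptime-monitor-deployment
```

Deploying a new service is one more entry; the tool pulls the repo and brings the containers up from there.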
I thought I'd share this with you, in case it's useful to you too.
> I needed something that would manage all the containers without me having to ever log into the machine.
Not saying this would at all replace Harbormaster, but with DOCKER_HOST or `docker context` one can easily run docker and docker-compose commands without "ever logging in to the machine". Well, it does use SSH under the hood, but the complaint here seems to be more of a UX issue, so there you go.
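For reference, setting up a remote context is a one-liner (the user and hostname below are placeholders):

```bash
# Create a context whose daemon endpoint is the remote machine, over SSH
docker context create homeserver --docker "host=ssh://user@homeserver.example"

# Run any docker command against it, one-off or as the default
docker --context homeserver ps
docker context use homeserver
```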
Discovering the DOCKER_HOST env var (it changes which daemon socket the client talks to) has made my usage of Docker much more powerful. Think "spawn a container on the machine with bad data" à la Bryan Cantrill at Joyent.
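Same idea as an env var, so it applies to a whole shell session (the hostname is a placeholder):

```bash
# Point the local docker CLI at the remote daemon over SSH
export DOCKER_HOST=ssh://user@machine-with-bad-data.example

# Every docker command in this shell now runs there
docker run --rm -it alpine sh
```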
Hmm, doesn't that connect your local Docker client to the remote Docker daemon? My goal isn't "don't SSH to the machine" specifically, but "don't have state on the machine that isn't tracked in a repo somewhere", and this seems like it would fail that requirement.
What do you think isn't getting tracked?
You could put your SSH server configuration in a repo. You could put your SSH authorized_keys file in a repo. You could even put your private key in a repo if you really wanted.
How do you track what's supposed to be running and what isn't, for example? Or the environment variables, or anything else you can set through the CLI?
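To make it concrete: `docker run -d -e SOME_VAR=value myapp` leaves no trace in any repo, whereas the equivalent Compose file does. A hypothetical example (names and values made up):

```yaml
# docker-compose.yml, committed to a repo: what runs, which image,
# and which environment variables are all tracked and reviewable
services:
  myapp:
    image: myapp:1.4
    restart: unless-stopped
    environment:
      LOG_LEVEL: info
      API_URL: https://api.example.com
```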
While I haven't used it personally, there is also Watchtower, which aims to automate updating Docker containers.
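From its docs, the usual setup is a single container watching the rest via the Docker socket (a sketch, since as I said I haven't run it myself):

```bash
# Run Watchtower; it polls for newer images and restarts
# running containers when their image tags are updated
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```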