I never understood why "modern" tools like Docker have to provide everything: networking, firewall, repository, you name it...
I understand somebody wanting to type "docker run xxx" and have everything set up automatically, but if you're running anything but default networking and actually care where the xxx image comes from, it's gonna fail miserably. Coming from the VM world, I found it much easier to work with the macvlan interfaces that lxd supports, for example - the container gets its own interface and IP address, and all networking can be prepared and set up on the host instead of some daemon thinking it knows my firewall better than me...
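Roughly what that looks like with lxd, for the curious (container and interface names are just placeholders):

    # attach a macvlan NIC to an existing container, bridged off the host's eth0
    lxc config device add mycontainer eth0 nic nictype=macvlan parent=eth0
    # the container then gets its own MAC and IP on the LAN, and all firewalling
    # stays on the host (or the upstream router), untouched by any container daemon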
Yeah, it feels too monolithic. Just to showcase it can run Hello World in one "docker run" command I guess?
Another thing people use Docker for but shouldn't is application packaging. With Docker you build one fossilized fat package with all OS and app dependencies baked in. Then some day, after years of using that Docker image, you need to upgrade the OS version in the image, but you can't replicate the app build because you didn't pin exact library versions and the global app repository's (pip, npm) later version of the package is no longer compatible with your app.
Application packaging is better done with a proper packaging system like rpm or deb (or another proprietary one) and stored in your organization's package repository. Then you can install those rpm packages in your Docker images and deploy them to the cloud.
The difference between OS dependencies and app dependencies is clear when looking at the contents of actual Dockerfiles. OS dependencies are installed through the rpm or deb ecosystem. Apps are cobbled together with a bunch of bash glue and remote commands to fetch dependencies. Why not use proper packaging for both OS and apps and then just assemble the Docker image from that?
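A rough sketch of that contrast as a made-up Dockerfile (base image, package names and the requirements.txt in the build context are just assumptions for illustration):

    FROM rockylinux:9
    # OS dependencies: declared once, resolved by the distro's package manager
    RUN dnf install -y libxml2 python3-pip && dnf clean all
    # app "packaging": bash glue fetching whatever the global repos serve today
    COPY requirements.txt /tmp/requirements.txt
    RUN pip3 install -r /tmp/requirements.txt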
Well, mostly because some of us are just devs and we don't know about it? How about a blog writeup, I'd read it :)
I don't think I ever came across a developer who made an RPM/DEB package. I'm not sure I came across a devops/sysadmin who made an RPM/DEB package in recent years.
I wouldn't waste my time learning that if I were you. Software like Python needs pip to build; it can't be built with the OS package manager alone.
Then you proceed with bundling in higher-level OS dependencies, because each app is not just a Python package but also a collection of shell scripts, configs, configuration of paths for caches, output directories, system variables, etc. For this we throw everything into one directory tree and run the FPM [3] command on it, which turns the directory tree into an RPM package. We use an fpm parameter to specify the installation location of that tree, e.g. /opt, /usr, or elsewhere.
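Roughly like this (directory layout and names are just an example, not our exact setup):

    # lay the app out exactly as it should land on the target system, e.g.
    #   build/bin/run.sh, build/conf/defaults.env, build/venv/...
    fpm -s dir -t rpm \
        -n my_app_exe -v 2.0 \
        --prefix /opt/my_app \
        -C build .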
The way to bundle it properly is to actually use two rpm packages linked together by an rpm dependency: one for the app and the other for the deployment configuration. The reason is that you only have one executable but many possible ways of deploying it, i.e. depending on the environment (dev, staging, prod), or you simply want to run the same app in parallel with different configurations.
e.g. one rpm package for the app executable and static files

    my_app_exe-2.0-1.noarch.rpm

and many other related config rpms

    my_app_dev-1.2-1.noarch.rpm   (depends on my_app_exe > 2.0)
    my_app_prod-3.0-1.noarch.rpm  (depends on my_app_exe == 2.0)
You then install only the deployment package, and the rpm system fetches the app dependency for you automatically.
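With the example packages above, that's just (same hypothetical names as above):

    # pick the deployment flavour; dnf/yum resolves the app package automatically
    dnf install -y my_app_prod
    # installs my_app_prod-3.0-1 and pulls in my_app_exe-2.0-1 to satisfy its
    # "Requires: my_app_exe = 2.0"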
There are other mature shops that use similar techniques for deployment, for example [4].
All of this can then be wrapped in an even higher level of packaging, namely Docker.
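At that point the Dockerfile shrinks to something like this (base image, repo file and paths are assumptions, not a prescription):

    FROM rockylinux:9
    # point dnf at the organization's internal package repository
    COPY internal.repo /etc/yum.repos.d/internal.repo
    # the app, its config and its OS dependencies all arrive as pinned RPMs
    RUN dnf install -y my_app_prod && dnf clean all
    CMD ["/opt/my_app/bin/run.sh"]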
[1] https://github.com/pantsbuild/pex
[2] https://code.fb.com/data-infrastructure/xars-a-more-efficien...
[3] https://github.com/jordansissel/fpm
[4] https://hynek.me/articles/python-app-deployment-with-native-...