Or just ship a statically compiled binary, with resources inside, without having to mess with anything Docker-related.
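
For example, assuming the service happens to be written in Go (purely an illustration; any toolchain that can produce a self-contained static binary works), the build is a one-liner:

    # assumes a Go project that embeds its assets via //go:embed; paths are made up
    CGO_ENABLED=0 go build -ldflags='-s -w' -o mypackage ./cmd/mypackage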

And don't forget to set up the right resource and namespace constraints, so that the one binary doesn't gobble up memory endlessly, fill the file system, or put your infrastructure completely at risk if/when it's compromised -- whether you do that with systemd units, raw cgroup/namespace finagling, or regular tried-and-true Linux user-based resource segregation.
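
To get a feel for what the raw cgroup route involves, here's a rough sketch using cgroup v2 (assuming a unified hierarchy at /sys/fs/cgroup, the cpu and memory controllers enabled for the parent, and a made-up group name):

    # cap a group at 2 CPUs and 500 MB of memory (cgroup v2)
    mkdir /sys/fs/cgroup/mypackage
    echo "200000 100000" > /sys/fs/cgroup/mypackage/cpu.max    # 200ms of CPU per 100ms period = 2 CPUs
    echo 500M > /sys/fs/cgroup/mypackage/memory.max
    echo $$ > /sys/fs/cgroup/mypackage/cgroup.procs            # move this shell in...
    /usr/local/bin/mypackage                                   # ...so the binary inherits the limits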

Containers do provide benefits.

So ship the static binary and a systemd unit file, like a lot of packages in your Linux distro's repos do.
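
A minimal sketch of such a unit file might look like this (the binary path and names are made up for illustration):

    [Unit]
    Description=My statically linked service
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/mypackage
    DynamicUser=yes
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target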

Yep, and you'd generally expect to spend time understanding the complexity that systemd brings, the subsystems that power it, and how you can separate resources there! There is no free lunch.

Restated, my point here is that while containers can be more complex and have pitfalls (many of which have been at least partly worked out by now), there is no complexity-free lunch -- `[docker|podman|crictl] run --rm --cpus 2 --memory 500mb ...` is pretty darn easy, and more so than writing properly portable, well-considered systemd unit files (and putting them in the right place, with the right permissions, under the right slice, etc.). It's easier than most of the options out there, including the old methods of per-user resource segregation.
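
Spelled out with one of those runtimes, that's roughly (the image name is hypothetical):

    # cap the container at 2 CPUs and 500 MB; --rm cleans it up on exit
    docker run --rm --cpus 2 --memory 500m example.com/mypackage:latest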

Let's say you are installing this from a normal package manager and want to limit it:

To install:

    apt install mypackage

To edit the systemd unit (automatically creating the file in the right place):

    systemctl edit mypackage

Then add these lines to limit it to 2 CPUs and 500 MB of memory:

    [Service]
    CPUAccounting=true
    CPUQuota=200%
    MemoryAccounting=true
    MemoryHigh=500M

And then to start it now and on boot:

    systemctl enable --now mypackage

This is now integrated with package updates, starts on boot, logs to the same place that most other system utilities do, and so on.
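
If you want to double-check that the limits actually took effect, systemd will report them back (property names as of current systemd versions; verify on yours):

    systemctl show mypackage -p CPUQuotaPerSecUSec -p MemoryHigh
    systemd-cgtop   # live per-unit CPU and memory usage
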
But there is a lot of work to be done before you can do that simple apt install. I (gladly) don't know how it is nowadays, but before Docker and Dockerfiles, creating your own packages according to the various distro standards was a pain. Most companies needed a dedicated 'packaging specialist/release engineer' role, as most developers were not up to the task. Solutions like FPM[0] helped somewhat (rough example below), but it was still hard when dealing with non-homogeneous environments. Containers solved that problem universally, for all distributions.

[0] https://github.com/jordansissel/fpm
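
For reference, the fpm route looked roughly like this (package name, version, and paths are made up):

    # turn plain files into a .deb without writing debian/ metadata by hand
    fpm -s dir -t deb -n mypackage -v 1.0.0 \
        ./mypackage=/usr/local/bin/mypackage \
        ./mypackage.service=/lib/systemd/system/mypackage.service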