I'm excited about ARM showing up in more places, but my experience with ARM and Docker hasn't been as easy as I expected

Is it just me? As I've started using ARM more, I've noticed that Docker images are often incomplete or behind the x86 release cycle.

I love the ease of wiring Docker images together for all my services (corollary: never having to understand the myriad packaging issues of whatever language the service is written in: Python, Node.js, etc.).

But when I'm using an ARM image, it is often not the same version as the latest on x86, or worse, it's packaged by someone random on the internet. If I were to install a JavaScript service myself, I could audit it (not that I ever do!) by looking at the package.json file, reviewing the source code, or whatever. There is a clear path to reviewing it. But with a Docker image from the internet, I'm not sure how I would assert that it is a well-behaved service. Docker itself gives me some guarantees, but it still feels less straightforward.
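One concrete first check is whether the tag you're pulling actually publishes an arm64 variant at all, or whether the ARM image lives under someone else's tag. A sketch of that check, parsing the kind of manifest list you'd get back from `docker manifest inspect <image>` or the registry API (the JSON below is hand-written for illustration, digests included):

```python
import json

# Illustrative manifest list, shaped like `docker manifest inspect` output;
# a real one comes from the registry, and the digests here are fake.
MANIFEST_LIST = """
{
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:bbb", "platform": {"architecture": "arm64", "os": "linux"}}
  ]
}
"""

def architectures(manifest_json: str) -> dict:
    """Map architecture -> digest for each platform in a manifest list."""
    doc = json.loads(manifest_json)
    return {
        m["platform"]["architecture"]: m["digest"]
        for m in doc.get("manifests", [])
    }

archs = architectures(MANIFEST_LIST)
# If "arm64" is missing from this dict, any ARM build of that service was
# pushed separately, possibly by a third party under a different tag.
print(sorted(archs))  # → ['amd64', 'arm64']
```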

I've packaged things for an ARM container myself, and the process isn't always exactly the same as for x86.

Is this just me? Am I doing it wrong on ARM?

You can inspect the layers of a Docker image. Tools like dive[0] provide a quick and easy way to navigate through the different components your image of choice is made up of.
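Even without dive, an image is just a stack of tarballs once you `docker save` it, so you can walk the layers with nothing but the standard library. A rough sketch, assuming the classic docker-save layout where each layer is a `layer.tar` (newer OCI-layout saves store layers under `blobs/` instead, and the image name below is made up):

```python
import io
import tarfile

def layer_listing(image_tar: tarfile.TarFile) -> dict:
    """Map each layer tar inside a saved image to the file paths it adds."""
    listing = {}
    for member in image_tar.getmembers():
        if member.name.endswith(".tar"):  # classic layout: <id>/layer.tar
            layer_bytes = image_tar.extractfile(member).read()
            with tarfile.open(fileobj=io.BytesIO(layer_bytes)) as layer:
                listing[member.name] = layer.getnames()
    return listing

# Usage (image and path are hypothetical):
#   docker save myservice:latest -o myservice.tar
# then:
#   with tarfile.open("myservice.tar") as t:
#       print(layer_listing(t))
```

Scanning the listing for surprises (unexpected binaries, cron entries, shell profiles) is a crude but arch-independent way to get a feel for what a third-party image actually contains.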

In terms of functionality once the container is running, you'll have to put some amount of trust in the project maintainers, no more or less than the trust you need on amd64. For containers repackaged by third parties that's quite a pain, but in most cases you can get by just fine with the official container.

If your container of choice has been made by someone real fancy, you may be able to get reproducible builds for all the files inside the container. That would verify that the source and the binary match (though container metadata may not, so a direct image compare would be challenging).

[0]: https://github.com/wagoodman/dive