What does HackerNews think of rules_docker?
Rules for building and handling Docker images with Bazel
They're pretty great and have a lot of the caching and parallelism benefits mentioned in the post for free out of the box, along with determinism (which Dockerfiles don't have, because you can run arbitrary shell commands in them). Our backend stack is also built with Bazel, so we get a nice, tight integration for building our images that is pretty straightforward.
We've also built some nice tooling around this to automatically put our Maven dependencies into different layers using Bazel query and buildozer. Since Maven deps don't change often, we get a lot of nice caching advantages.
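A minimal sketch of that layering idea, assuming rules_docker's java_image and Maven deps fetched via rules_jvm_external (all target names here are hypothetical):

    load("@io_bazel_rules_docker//java:image.bzl", "java_image")

    java_image(
        name = "backend_image",
        srcs = ["Main.java"],
        main_class = "com.example.Main",  # hypothetical entry point
        # Each dep listed in `layers` goes into its own image layer, so
        # rarely-changing Maven deps stay cached across image rebuilds.
        layers = [
            "@maven//:com_google_guava_guava",
            "@maven//:io_grpc_grpc_api",
        ],
    )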
I was very interested in this Bazel-based way of building containers, but its README page says "it is on minimal life support," which does not inspire confidence. How's your experience using it?
    service_binary(
        name = "foobar",
        srcs = ["foobar.jsonnet"],
        deps = ["//jsonnet/service.jsonnet"],
        images = ["//a:image"],
    )
Then make it possible to create a class of `*_test` rules that would start up a bunch of Docker containers in the same network topology they would have in prod, run your test in the same netns, then clean everything up. It could look something like this:

    cc_integration_test(
        name = "...",
        services = ["//a", "//b"],
        srcs = ["main.cc"],
    )
There are some BazelCon talks about people doing similar stuff, but without actually open-sourcing their code. P.S. If you use rules_docker, please feel free to open a PR to add your company to our README: https://github.com/bazelbuild/rules_docker/#adopters
> service_binary
In a nutshell, rules_docker is a set of build rules for the Bazel build system (https://bazel.build). What's pretty nice about these rules is that they don't rely on a Docker daemon: they directly construct image tarballs that you can either load into your local Docker daemon or push to a registry.
What's nice about this approach is that image generation works on any operating system. For example, even on a Mac or Windows system that doesn't have Docker installed, you're able to build Linux containers. They are also fully reproducible, meaning that you often don't need to upload layers when pushing (either because they haven't changed, or because some colleague/CI job already pushed those layers).
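As an illustration, a minimal container_image/container_push pair (the base image, binary target, registry, and repository are assumptions, not from the original comment):

    load("@io_bazel_rules_docker//container:container.bzl", "container_image", "container_push")

    container_image(
        name = "image",
        base = "@distroless_base//image",  # assumed base image
        entrypoint = ["/app/server"],
        files = [":server"],  # hypothetical binary target
    )

    container_push(
        name = "push",
        image = ":image",
        format = "Docker",
        registry = "gcr.io",
        repository = "my-project/server",  # hypothetical
        tag = "latest",
    )

`bazel run :image` loads the tarball into a local Docker daemon, and `bazel run :push` uploads only the layers the registry doesn't already have.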
I guess rules_docker works fine for a variety of programming languages. I've mainly used it with Go, though.
Bazel's caching abilities are by far the best I've ever worked with, because it understands the full source tree. It can also cache test executions. There are some tests in my code that make sure I'm calling out to crypto libraries correctly; these tests take >30 seconds to execute but almost never change. With Bazel I feel free to write as many of those integration tests as I want, since they will only ever be rerun when something affects them (i.e. when I change the version of my crypto library).
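As a sketch of that setup (the target and dependency names are made up), a slow test that Bazel will cache and only rerun when its inputs change:

    load("@io_bazel_rules_go//go:def.bzl", "go_test")

    go_test(
        name = "crypto_integration_test",
        srcs = ["crypto_integration_test.go"],
        deps = ["//third_party/crypto"],  # hypothetical crypto library target
        size = "medium",  # allows a generous timeout for the >30s run
    )

Bazel keys the cached test result on the test's transitive inputs, so bumping the crypto library version is exactly the kind of change that triggers a rerun.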
> Honestly, while the theory is that you can Dockerize your build and you can do remote caching with Bazel I've never seen anyone do it
Yeah, you likely don't want to run Bazel within a Docker container; you want to build a Docker container within Bazel [0]. The performance of this way of doing things is much better. My monorepo has >30 services, and `docker-compose up --build` was becoming super slow. To address this I've written bazel_compose [1] to get the same workflow docker-compose offers, with Bazel as your container build system. It also supports a gradual migration scheme and will build both the Dockerfile AND the Bazel version of your container to make sure they both start.
Unfortunately the Bazel community is mainly populated with companies that are 100x the average size; they already can't run all of their services on their dev machines, so they don't see the value of something like this. This version of bazel_compose is out of sync with HEAD @ Caper, but if you're adventurous I'd recommend checking it out. It has extra features to watch all of the source files using ibazel and will automatically build and restart containers (<<10 seconds in my experience) as you edit and save code.
https://github.com/bazelbuild/rules_docker
rules_docker allows you to create byte-for-byte reproducible container images on your system, without even having a Docker daemon installed. So much cleaner to use than 'docker build' once you get the hang of it!
Rulesets like rules_go (https://github.com/bazelbuild/rules_go) also support pretty decent cross compilation, meaning that you can, for example, build Linux containers containing Go microservices on your Mac and push them into a registry immediately. All without running VMs/Docker daemons/... on your Mac.
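A minimal sketch of that cross-compilation setup, assuming rules_docker's go_image, which forwards attributes like goos/goarch to rules_go's go_binary (the importpath is hypothetical):

    load("@io_bazel_rules_docker//go:image.bzl", "go_image")

    go_image(
        name = "service_image",
        srcs = ["main.go"],
        importpath = "example.com/service",  # hypothetical
        goos = "linux",    # target Linux containers...
        goarch = "amd64",  # ...even when building on a Mac
    )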
A pure Go codebase? Not worth it.
Personally, I use Bazel even for small projects as soon as they involve things like generated code or gRPC.
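For example, a small sketch of the generated-code case, assuming rules_proto and rules_go (target names and importpath are hypothetical):

    load("@rules_proto//proto:defs.bzl", "proto_library")
    load("@io_bazel_rules_go//proto:def.bzl", "go_proto_library")

    proto_library(
        name = "service_proto",
        srcs = ["service.proto"],
    )

    go_proto_library(
        name = "service_go_proto",
        protos = [":service_proto"],
        importpath = "example.com/service",  # hypothetical
        compilers = ["@io_bazel_rules_go//proto:go_grpc"],  # also emit gRPC stubs
    )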
Yes, there are multiple rulesets for deployments, like rules_k8s[1] and rules_docker[2]. Of course, you can easily build your own custom deployment pipeline.
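As an illustration, rules_k8s's k8s_object can template a rules_docker-built image into a Kubernetes manifest (the YAML file, image tag, and image target here are hypothetical):

    load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

    k8s_object(
        name = "deploy",
        kind = "deployment",
        template = ":deployment.yaml",
        # Substitute the image reference in the YAML with the Bazel-built image.
        images = {
            "gcr.io/my-project/server:latest": "//server:image",
        },
    )

With a cluster configured, `bazel run :deploy.apply` builds the image, pushes it, and applies the resolved manifest.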
Bazel has many other benefits too, like creating a full dependency build graph and fully reproducible builds.
Downloading and installing system packages, package lists, etc.
For this reason, Google doesn't use Docker at all.
It writes the OCI images more or less directly. https://github.com/bazelbuild/rules_docker
https://docs.bazel.build/versions/master/be/protocol-buffer....
We use it at work to build a web app whose backend is written in Go and whose frontend is in TypeScript. All of the code gets built and placed in a Docker image using these rules:
You can check out Bazel's Docker rules: https://github.com/bazelbuild/rules_docker
Also, Go already has a very good build system built in, and Bazel really shines when:

- you have a complex, multi-language codebase: you can build a tool in one language and use it as a tool for another language (see the sketch below)
- you simply have a really large codebase
- you can work on the path //service1/my/service while your colleague works on //service2/their/service, and only the paths that changed need to be rebuilt each time
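A minimal sketch of the cross-language point, with made-up names: a Go binary built by Bazel and used as a tool inside a genrule that generates a file for another part of the tree:

    load("@io_bazel_rules_go//go:def.bzl", "go_binary")

    go_binary(
        name = "codegen",
        srcs = ["codegen.go"],
    )

    genrule(
        name = "generated_config",
        srcs = ["config.tmpl"],
        outs = ["config.json"],
        # Bazel builds the Go tool first, then runs it during this rule.
        cmd = "$(location :codegen) < $< > $@",
        tools = [":codegen"],
    )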
The repo links to a talk that goes into more depth, but the basic idea is to use a minimal language-specific base image for your runtime instead of, e.g., statically linking all of Ubuntu into your image.
The base images are built with Bazel's Docker rules [2], so you get reproducible builds.