For context: most, if not all, of our team has built calibration tooling similar to what we're doing now, just at different startups and tied to very specific robotic or sensing configurations.
If anything, once we got CI sorted and started using our own internal registry, I would argue that we are significantly faster in terms of iteration time. This is partly because the team is small, but also because most of our tooling is consistent and easy to keep in lockstep. Pulling libraries is done uniformly across platforms and architectures, and our CI runs (through GitLab) stay up to date with the latest tooling without issue. Having a stronger type system to detect errors early, and a compiler that actually tries to give human-readable messages using that type system (looking at you, C++ linker errors), makes everything so much easier.
Compile time seems like it would be an obvious thing that slows one down, but in practice sccache [1] does what it ought to and we barely notice it (at least, I don't, and I haven't seen team members complaining about build times). Mostly I'd argue that the real thing holding us back is tooling external to the Rust ecosystem itself. Debugging and perf tools are great in Unix land, but if you're building anything cross-platform you need to know more than just perf. That might just be my opinion though; I'll admit I'm still learning how best to apply BPF-based tooling even on Linux alone.
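For anyone who hasn't tried it: wiring sccache into cargo is a one-variable change. A minimal setup sketch, assuming sccache is installed and on your PATH:

```shell
# Tell cargo to invoke sccache as a wrapper around rustc.
export RUSTC_WRAPPER=sccache
cargo build

# Inspect hit/miss counters to confirm the cache is actually being used.
sccache --show-stats
```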
I also realize I'm responding to steveklabnik, so I suspect most of what I'm saying is well-known and that this comment is really more directed at TFA.
[0] https://github.com/rust-lang/rust/pull/84762 [1] https://github.com/mozilla/sccache
Not nearly as flexible or powerful as Bazel, but also vastly simpler to set up if all you want is caching.
Zuul (https://zuul-ci.org/) was created for OpenStack to solve the problem of optimistic merges / PR-queue testing.
When you use Buildkite with your own containers on AWS ECS, you can use EFS to do a git clone with a reference repository (https://git-scm.com/docs/git-clone#Documentation/git-clone.t...). It's essentially what they do with a packed base repo, but you only end up transferring what you need, not more.
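A self-contained sketch of the trick, with local paths standing in for the EFS-hosted packed base repo (all paths here are illustrative):

```shell
set -e
rm -rf /tmp/upstream /tmp/efs-base /tmp/workdir

# Stand-in for the upstream repository.
git init -q /tmp/upstream
git -C /tmp/upstream -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial"

# Stand-in for the packed base repo kept on the EFS mount.
git clone -q --bare /tmp/upstream /tmp/efs-base

# The CI checkout: objects already present in the reference repo are
# borrowed via an alternates entry instead of being transferred again.
git clone -q --reference /tmp/efs-base /tmp/upstream /tmp/workdir
```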
The binary cache is available in other flavours too. If you don't use go, then sccache (https://github.com/mozilla/sccache) may be useful.
Another idea is crate-build caching, so local and CI builds can pull down a pre-built dependency rather than building it locally. This would need to handle Rust versions, feature flags, architectures, compiler settings, etc. This would most help CI, since the result would get cached locally.
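To give a sense of why this is hard, here's a rough sketch of the inputs such a cache key would have to cover. The values are made-up stand-ins (in reality they'd come from `rustc -Vv` and the build configuration), and a real design would likely also need the profile, RUSTFLAGS, and the resolved dependency graph:

```shell
# Example stand-in values; a real implementation would query the toolchain.
rustc_version="rustc 1.75.0 (82e1608df 2023-12-21)"
target="x86_64-unknown-linux-gnu"
features="default,serde"
opt_level="3"

# Hash all of the inputs together into a single cache key; any change to
# any input yields a different key and therefore a fresh build.
key=$(printf '%s|%s|%s|%s' "$rustc_version" "$target" "$features" "$opt_level" \
      | sha256sum | cut -d' ' -f1)
echo "$key"
```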
The last idea I'm aware of in this area is watt (https://github.com/dtolnay/watt). If the design and implementation were finished to allow proc-macros (and maybe `build.rs` scripts) to opt in to a sandboxed wasm environment, we could have a local and networked binary cache for these, which would dramatically improve Rust build times (and security). Some people outright avoid proc-macros because of the build-time impact.
- local caching (like ccache)
- remote caching e.g. to S3, or a LAN redis instance (unique afaik)
- distributed compilation of C/C++ code (like distcc, icecream)
- distributed compilation of Rust (unique afaik)
- distributed compilation on Windows by cross compiling on Linux machines (unique afaik)
Note that I think Bazel also does a bunch of these, but you need to use Bazel as your build system.
[0] https://github.com/mozilla/sccache
[1] quickstart - https://github.com/mozilla/sccache/blob/master/docs/Distribu...
[2] reference docs - https://github.com/mozilla/sccache/blob/master/docs/Distribu...
You can find how to install and use it here: https://github.com/mozilla/sccache
And, LOL, just now finally read the README; didn't even know I could archive the cache over the network... #foreverN00b. That's gonna be awesome.
[0]: https://github.com/icecc/icecream [1]: https://github.com/mozilla/sccache
This already exists and is easy to use: https://github.com/mozilla/sccache
In the context of resource-constrained machines, one can always host it remotely on S3. (or mount an NFS share as the CARGO_TARGET_DIR, if you’re feeling adventurous or want fast CI)
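For reference, the S3 route is just environment variables (bucket and region below are placeholders; credentials come from the usual AWS environment):

```shell
# Placeholders: point sccache at your own bucket and region.
export SCCACHE_BUCKET=my-sccache-bucket
export SCCACHE_REGION=us-east-1
export RUSTC_WRAPPER=sccache   # route cargo's rustc invocations through sccache
```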
So sure, I'm missing the caching I had on Travis. But honestly, it builds reliably fast anyway. On Travis I always needed to wait a LONG time to get macOS agents; on AppVeyor we were limited to 2 parallel builds. Here, everything starts at once and completes in under 10 minutes (and there's quite a lot to build and test).
If I could, I would love to have an sccache ( https://github.com/mozilla/sccache ) compatible distributed free caching solution for my C++ projects. That'd be a killer feature!
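For the caching half at least, sccache can already wrap C/C++ compilers today; with CMake, the launcher hooks make it a one-liner (a sketch, assuming sccache is on your PATH and run from a CMake project root):

```shell
# Prefix every compiler invocation with sccache via CMake's launcher hooks.
cmake -DCMAKE_C_COMPILER_LAUNCHER=sccache \
      -DCMAKE_CXX_COMPILER_LAUNCHER=sccache \
      -B build
cmake --build build
```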