What does HackerNews think of sccache?

sccache is ccache with cloud storage

Language: Rust

https://github.com/mozilla/sccache is another option that addresses the use cases of both icecream and ccache (and also supports Rust, and cloud storage of artifacts, if those are useful for you).
Tangram Vision [0] is also a startup, and we also use Rust. We're using it to develop robotic / autonomous sensor calibration tools that would normally be built as a collection of C / C++ libraries.

For context: most, if not all, of our team has developed similar calibration tooling in the past, just at different startups and tailored to very specific robotic or sensing configurations.

If anything, once we got CI sorted and started using our own internal registry, I would argue that we are significantly faster in terms of iteration time. This is partly because the team is small, but also because most of our tooling is consistent and easy to keep in lockstep. Pulling libraries is done uniformly across platforms and architectures, and our CI runs (through GitLab) stay up to date with the latest tooling without issue. Having a stronger type system to catch errors early, and a compiler that actually uses that type system to give human-readable messages (looking at you, C++ linker errors), makes everything so much easier.

Compile time seems like it would be an obvious bit that slows one down, but in practice sccache [1] does what it ought to and we barely notice it (at least, I don't, and I haven't seen team members complaining about build times). Mostly I'd argue that the real thing holding us back is tooling external to the wider Rust ecosystem. Debugging and perf tools are great in Unix land, but if you're making anything cross-platform you need to know more than just perf. That might just be my opinion though; I'll admit I'm still learning how best to apply BPF-based tooling even on Linux alone.

I also realize I'm responding to steveklabnik, so I suspect most of what I'm saying is well-known and that this comment is really more directed at TFA.

[0] https://tangramvision.com

[1] https://github.com/mozilla/sccache

For the branch-switching use case you might get some mileage out of sccache [1]. For local storage it's just one binary and two lines of configuration to have a cache around rustc, so it's worth testing out.

1: https://github.com/mozilla/sccache
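
For the curious, the "two lines of configuration" are just cargo's rustc wrapper setting; a minimal sketch, assuming the sccache binary is already installed and on your PATH:

    # ~/.cargo/config.toml (or a project's .cargo/config.toml)
    [build]
    rustc-wrapper = "sccache"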

sccache works really well, and there are only two steps to install it and enable it globally. It speeds up compilation time a lot as well:

https://github.com/mozilla/sccache
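
For reference, the two steps are roughly: install the binary, then tell cargo to wrap rustc with it. A sketch (the README also lists prebuilt binaries and other install routes):

    # step 1: install
    cargo install sccache

    # step 2: enable globally, e.g. in ~/.bashrc or ~/.zshrc
    export RUSTC_WRAPPER=sccache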

A recent change[0] on nightly rustc might help with incremental builds. And for repeated clean + full build cycles there's sccache[1].

[0] https://github.com/rust-lang/rust/pull/84762 [1] https://github.com/mozilla/sccache

There's also Mozilla's sccache, which integrates with cargo (by wrapping rustc) to cache artifacts. A local cache is 2 lines of config in your .cargo/config.toml, and if you want, you can have shared caches in Redis or S3/Azure/GCP.

Not nearly as flexible or powerful as Bazel, but also vastly simpler to set up if all you want is caching.

https://github.com/mozilla/sccache
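
A hedged sketch of the shared-cache side: on top of the rustc-wrapper config, sccache selects a storage backend from environment variables, along these lines (the bucket name and Redis host are placeholders; check the README for the exact variables your version supports):

    # S3-backed shared cache
    export SCCACHE_BUCKET=my-team-sccache      # hypothetical bucket
    export SCCACHE_REGION=us-east-1

    # or a Redis instance on the LAN
    export SCCACHE_REDIS=redis://10.0.0.5:6379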

Some alternative solutions to similar issues:

Zuul (https://zuul-ci.org/) was created for OpenStack to solve the issue of optimistic merges / PR queue testing.

When you use Buildkite with your own containers on AWS ECS, you can use EFS to do a git clone with a reference repository. (https://git-scm.com/docs/git-clone#Documentation/git-clone.t...) Essentially what they do with a packed base repo, but you only end up sending what you need, not more.
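
A minimal sketch of that pattern, assuming a base repository is kept on the shared EFS mount (paths and repo URL are made up):

    # kept warm on the EFS volume, updated periodically
    git clone --mirror https://github.com/example/big-repo.git /mnt/efs/big-repo.git

    # in the CI container: borrow objects from the local reference repo,
    # so only objects missing from it are fetched over the network
    git clone --reference /mnt/efs/big-repo.git https://github.com/example/big-repo.git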

The binary cache is available in other flavours too. If you don't use go, then sccache (https://github.com/mozilla/sccache) may be useful.

There is sccache (https://github.com/mozilla/sccache), so a first step would be looking at why it isn't used more, to figure out how to lower that barrier.

Another idea is crate-build caching, so local builds and CI can pull down a pre-built dependency rather than building locally. This would need to handle Rust versions, feature flags, architectures, compiler settings, etc. It would most help CI, since the result would get cached locally.

The last idea I'm aware of in this area is watt (https://github.com/dtolnay/watt). If the design and implementation was finished to allow proc-macros (and maybe `build.rs` scripts) to opt-in to a sandboxed wasm environment, we could have a local and networked binary cache for these which would dramatically improve Rust build times (and security). Some people outright avoid proc-macros because of the build-time impact.

I'm not sure what you mean by a build cache; does sccache suit?

https://github.com/mozilla/sccache

sccache[1] is a similar project, which supports remote execution and caching for C, C++, and Rust. Unfortunately the remote-execution mechanism is designed for Mozilla's internal environment and doesn't support cloud backends like Lambda or Google Cloud Build. But the code is well-structured Rust and not too big, adding a cloud backend would be a nice project.

[1] https://github.com/mozilla/sccache

I find that if you're deliberate about how you define your modules, incremental compiles are usually pretty quick. Yes, a complete build can take a while, but tools like sccache[0] can help with that in CI pipelines and when getting a new dev environment up.

[0]: https://github.com/mozilla/sccache
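
For CI specifically, the usual pattern is to point sccache at storage that survives between jobs; a rough shell sketch (the cache directory and how it gets persisted depend on your CI provider):

    # before the build step
    export RUSTC_WRAPPER=sccache
    export SCCACHE_DIR="$PWD/.sccache"       # any directory your CI caches between runs
    export SCCACHE_CACHE_SIZE="5G"

    cargo build --release

    # afterwards: confirm you're actually getting cache hits
    sccache --show-stats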

I worked on adding distributed compilation to sccache [0]. Docs at [1] and [2]. Compared to existing tools, sccache supports:

- local caching (like ccache)

- remote caching, e.g. to S3 or a LAN Redis instance (unique afaik)

- distributed compilation of C/C++ code (like distcc, icecream)

- distributed compilation of Rust (unique afaik)

- distributed compilation on Windows by cross compiling on Linux machines (unique afaik)

Note that I think Bazel also does a bunch of these, but you need to use Bazel as your build system.

[0] https://github.com/mozilla/sccache

[1] quickstart - https://github.com/mozilla/sccache/blob/master/docs/Distribu...

[2] reference docs - https://github.com/mozilla/sccache/blob/master/docs/Distribu...
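
For anyone curious what the client side of the distributed setup looks like, here's a rough sketch from memory of those docs; treat the file location and field names as assumptions and defer to [1] and [2]:

    # ~/.config/sccache/config on a client machine (sketch, not authoritative)
    [dist]
    # where the sccache-dist scheduler is listening
    scheduler_url = "http://10.0.0.10:10600"

    [dist.auth]
    type = "token"
    token = "shared-secret-configured-on-the-scheduler"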

About long compile times, I recommend using sccache. It has saved me a lot of compile time.

You can find how to install and use it here: https://github.com/mozilla/sccache

Obviously just a band-aid for the symptom, and probably known by most, but FWIW sccache has been pretty freakin' great for me at keeping build times manageable: https://github.com/mozilla/sccache

And, LOL, just now finally read the readme, didn't even know I could archive the cache over the network.... #foreverN00b that's gonna be awesome.

Developers at my company who work on a large shared C++ codebase all use Icecream[0] to distribute the builds across their workstations. I've only hooked into it to build once, since I don't work in that codebase, but it seems to scale pretty well (I think on the order of 50-60 servers and about the same number of clients). As for caching, I think sccache[1] is designed to handle that in a scalable way, although I've personally only used it locally for my Rust builds.

[0]: https://github.com/icecc/icecream [1]: https://github.com/mozilla/sccache

> Thirdly, make use of a build cache across crates, including third party dependencies instead of building the whole world from scratch.

This already exists and is easy to use: https://github.com/mozilla/sccache

Have you tried sccache [0]? It doesn't always choose to cache a dependency, but it helps about 70% of the time. Anecdotally, it sped up a release build of a pretty standard CLI tool (with incremental compilation) by almost 4x.

In the context of resource-constrained machines, one can always host the cache remotely on S3 (or mount an NFS share as the CARGO_TARGET_DIR, if you're feeling adventurous or want fast CI).

[0] https://github.com/mozilla/sccache
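
If you take the adventurous route, the NFS variant is just cargo's standard target-directory override pointed at the shared mount; a sketch (the mount point is made up, and artifacts there are only reusable when toolchain versions and flags match):

    # share one target directory across machines / CI runners
    export CARGO_TARGET_DIR=/mnt/nfs/cargo-target
    cargo build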

In the meantime, you can use sccache to at least save the time spent compiling the same version of a crate more than once.

https://github.com/mozilla/sccache

I've migrated my Boost builds from Travis and AppVeyor to your CI offering recently. It wasn't super smooth, but I got it working quite quickly, and I like it a lot.

So sure, I'm missing the caching that I had on Travis. But honestly, it builds reliably fast anyway. On Travis, I always needed to wait a LONG time to get macOS agents; on AppVeyor, we were limited to 2 parallel builds. Here, everything starts all at once and completes in under 10 minutes (there's quite a lot to build and test).

If I could, I would love to have a free, distributed caching solution compatible with sccache ( https://github.com/mozilla/sccache ) for my C++ projects. That'd be a killer feature!

I've been using sccache (https://github.com/mozilla/sccache) lately, and it mostly solves this problem for me (although I agree with what others have mentioned, that the issue isn't that much of a deal-breaker for me given how infrequently I run `cargo clean`).

You're thinking of https://github.com/mozilla/sccache, which is not quite the same thing.