Hey people, first, congrats on the great work on GCC for Rust. Second, I have a basic question from someone who is not experienced with low-level languages. Would it be possible (and beneficial to the community) to have a "compiler as a service" in the cloud (either GCC- or LLVM-based) that runs on the most powerful hardware available to compile Rust? Really cheap or free, billed per second of compilation, so anyone could compile Rust faster, and any improvement would be added to the service. Once newer, more powerful hardware becomes available, it could be shared and used by the whole community. I know we still have to work on improving compilation times, but maybe having a shared compilation pipeline that everyone can use could alleviate the pain a little. Thanks

There is sccache (https://github.com/mozilla/sccache), so a first step would be looking into why it isn't used more and how to lower that barrier.
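For context, sccache hooks in as a compiler wrapper, so the local setup is already quite small (roughly the steps from the sccache README):

```sh
# Install sccache and tell Cargo to route rustc invocations through it
cargo install sccache
export RUSTC_WRAPPER=sccache

# Build as usual; compiled artifacts are cached and reused on later builds
cargo build

# Inspect cache hits/misses to see whether the wrapper is actually helping
sccache --show-stats
```

The same wrapper can also point at shared storage (e.g. S3 or Redis backends) for a team-wide cache, which is already a step toward the "shared compilation service" idea without running the compiler remotely.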

Another idea is crate-build caching, so both local builds and CI could pull down a pre-built dependency rather than building it from scratch. The cache key would need to account for Rust versions, feature flags, target architectures, compiler settings, etc. This would help CI the most, since for local development the result gets cached on disk anyway.
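To make the "Rust versions, feature flags, architectures, compiler settings" point concrete, here is a purely hypothetical sketch (not any existing tool's API) of what a cache key for a pre-built dependency would have to capture; anything that can change the generated code has to be part of the key:

```rust
/// Hypothetical cache key for a pre-built crate artifact. If any field that
/// affects codegen is left out, two machines could share incompatible builds.
#[derive(Hash, PartialEq, Eq, Debug)]
struct CrateBuildKey {
    crate_name: String,
    crate_version: String,          // exact version from the lockfile
    rustc_version: String,          // e.g. output of `rustc --version`
    target_triple: String,          // e.g. "x86_64-unknown-linux-gnu"
    enabled_features: Vec<String>,  // resolved feature set, sorted for stability
    profile: String,                // debug/release, opt-level, debuginfo, ...
    rustflags: Vec<String>,         // RUSTFLAGS and other codegen options
}

fn main() {
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    let key = CrateBuildKey {
        crate_name: "serde".into(),
        crate_version: "1.0.200".into(),
        rustc_version: "rustc 1.78.0".into(),
        target_triple: "x86_64-unknown-linux-gnu".into(),
        enabled_features: vec!["derive".into(), "std".into()],
        profile: "release,opt-level=3".into(),
        rustflags: vec![],
    };

    // A real cache would use a stable cryptographic hash; DefaultHasher is
    // only used here to keep the sketch self-contained and runnable.
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    println!("cache key: {:016x}", hasher.finish());
}
```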

The last idea I'm aware of in this area is watt (https://github.com/dtolnay/watt). If the design and implementation were finished to allow proc-macros (and maybe `build.rs` scripts) to opt in to a sandboxed wasm environment, we could have a local and networked binary cache for these, which would dramatically improve Rust build times (and security). Some people outright avoid proc-macros because of the build-time impact.
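For readers who haven't seen it, the opt-in today looks roughly like the example in the watt README: the macro's implementation is compiled to a `.wasm` blob ahead of time, and the published proc-macro crate is just a thin shim that runs that blob through the bundled interpreter (crate and file names below are placeholders):

```rust
// lib.rs of the thin proc-macro shim crate; the actual macro logic lives in a
// separate crate that was compiled ahead of time to my_macros.wasm.
extern crate proc_macro;

use proc_macro::TokenStream;
use watt::WasmMacro;

static MACRO: WasmMacro = WasmMacro::new(WASM);
static WASM: &[u8] = include_bytes!("my_macros.wasm");

#[proc_macro_derive(MyDerive)]
pub fn my_derive(input: TokenStream) -> TokenStream {
    // Dispatches to the exported `my_derive` function inside the wasm blob.
    MACRO.proc_macro_derive("my_derive", input)
}
```

Because the wasm blob is target-independent and sandboxed, it is exactly the kind of artifact a local or networked binary cache could serve without the trust problems of shipping native binaries.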