I think what's missing from the tools discussion is the infrastructure that the tools sit on top of:

1) CitC (Client in the Cloud). Mounts your development environment on FUSE filesystems that live in the cloud. The entire monorepo is mapped into your CitC directory. You can access it from your desktop shell, your home laptop, or a web-browser-based IDE. Any edits you make are overlaid onto the (read-only) source repository, seamlessly creating reviewable changelists on the fly. Editing is shared effortlessly between multiple machines. ObjFS, which also sits in your client, allows Blaze (Bazel) build artifacts to be shared as well, between clients and even between users. In other words, if I work on 3 machines, I don't need to "check out" my work 3 times. In fact, I almost never "check out" anything at all. I work in a single monorepo with Mercurial and edit files, which produces reviewable changelists against the main repo. I don't need to decide which files to check out or track, nor which machine I will work on, and I often switch between IntelliJ locally, IntelliJ via Chrome Remote Desktop on my office computer, and a VS Code-like web IDE.

2) Skyframe (https://bazel.build/designs/skyframe.html). Imagine parsing the entire monorepo and every single BUILD file into a massive pre-processed graph that knows every possible build target and its dependencies. This allows ultra-efficient determination of "what do I need to rebuild? what tests need to be re-run?" across all of Google. I guess the closest thing to this is MvnRepository.net or Bintray, but Skyframe doesn't just parse the stuff and give you a search box; it informs CI tools.
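The core graph idea can be sketched in a few lines (a toy model, not Skyframe's actual API or node types):

```python
# Toy sketch of Skyframe-style incrementality: a dependency graph where
# dirtying one target tells you exactly which downstream targets need to
# be rebuilt or retested. Target names below are made up for illustration.

from collections import defaultdict

class BuildGraph:
    def __init__(self):
        self.deps = defaultdict(set)   # target -> its direct dependencies
        self.rdeps = defaultdict(set)  # target -> targets that depend on it

    def add_target(self, target, deps=()):
        for dep in deps:
            self.deps[target].add(dep)
            self.rdeps[dep].add(target)

    def affected_by(self, changed_target):
        """Transitive reverse deps: everything that must be rebuilt/retested."""
        stale, stack = set(), [changed_target]
        while stack:
            node = stack.pop()
            for rdep in self.rdeps[node]:
                if rdep not in stale:
                    stale.add(rdep)
                    stack.append(rdep)
        return stale

g = BuildGraph()
g.add_target("//base:strings")
g.add_target("//net:http", deps=["//base:strings"])
g.add_target("//app:server", deps=["//net:http"])
g.add_target("//app:server_test", deps=["//app:server"])

print(sorted(g.affected_by("//base:strings")))
# -> ['//app:server', '//app:server_test', '//net:http']
```

Skyframe's real nodes are much richer (file state, package loading, configured targets), but this reverse-dependency walk is the essence of "what must be rebuilt?".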

3) CitC/Critique extensions to Mercurial -- take a chain of commits and make them a single code review, or take a chain of commits and make them into a stacked chain of code reviews.

4) Critique presubmit tools (e.g. Error Prone, Tricorder). Google has a huge number of analysis tools that can run on every review update, checking for bugs, security problems, privacy problems, optimizations, data races, etc. Yes, these are usually available outside, but it's just so easy to enable them internally compared to doing it on GitHub. There are lots of other code-health tools too, for automatically applying fixes, removing unused code, and auto-updating build files with correct dependencies.

5) Forge -- basically Blaze's remote build execution (what Bazel calls RBEs). Almost every build at Google is extremely parallelized, and if you need to deflake tests, running a suite of tests 10,000 times is almost as fast as running it once.
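A toy model of what mass repetition buys you (hypothetical test and failure rate; Forge fans the runs out to thousands of remote workers rather than local threads):

```python
# Sketch of deflaking by repetition: run one test many times in parallel
# and report its failure rate. A thread pool stands in for remote workers.

import random
from concurrent.futures import ThreadPoolExecutor

def flaky_test(seed):
    """A stand-in test that fails roughly 10% of the time."""
    return random.Random(seed).random() >= 0.10

def flake_rate(test, runs=1000, workers=64):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(test, range(runs)))
    return results.count(False) / runs

rate = flake_rate(flaky_test)
print(f"failure rate: {rate:.1%}")
```

With open-source Bazel remote execution, the rough equivalent is `bazel test --runs_per_test=1000` on your target, with the runs distributed to remote executors.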

6) Monitoring's been mentioned, but monitoring combined with CodeSearch hasn't been touched on. Depending on configuration, you can often see from Critique or CodeSearch what release or running server a piece of code ended up in and what happened to it (did it cause bugs?). CodeSearch has an insane number of overlays; it can even overlay Google's Sentry-like exception logger, telling you how many times some line of code produced a crash.

A lot of Googlers use maybe 25% of all of the features in CodeSearch and Critique.

Here's an in-depth article from Mike Bland: https://mike-bland.com/2012/10/01/tools.html

> 5) Forge -- basically Blaze's remote build execution (what Bazel calls RBEs). Almost every build at Google is extremely parallelized, and if you need to deflake tests, running a suite of tests 10,000 times is almost as fast as running it once.

This is available outside Google now - start at https://github.com/bazelbuild/remote-apis or https://docs.bazel.build/versions/master/remote-execution.ht... . It's a standardized API, supported by a growing set of build tools (notably Bazel, Pants, and Please), with a variety of OSS and commercial implementations. At this point almost anyone can set up remote execution if they wish to, and remote caching is even easier.
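The caching half boils down to content-addressed action caching; here's a simplified model of the idea (not the actual REAPI protobuf interface):

```python
# Minimal sketch of content-addressed action caching: an action's cache
# key is a digest of its command line plus the digests of its inputs, so
# identical work is never executed twice. The "compiler" is a stand-in.

import hashlib

class ActionCache:
    def __init__(self):
        self._cache = {}    # action digest -> output bytes
        self.executions = 0

    def _digest(self, command, inputs):
        h = hashlib.sha256()
        h.update(" ".join(command).encode())
        for name in sorted(inputs):
            h.update(name.encode())
            h.update(hashlib.sha256(inputs[name]).digest())
        return h.hexdigest()

    def run(self, command, inputs, execute):
        key = self._digest(command, inputs)
        if key not in self._cache:   # cache miss: actually execute
            self.executions += 1
            self._cache[key] = execute(command, inputs)
        return self._cache[key]

cache = ActionCache()
compile_ = lambda cmd, ins: b"object code for " + ins["main.c"]
out1 = cache.run(["cc", "-c", "main.c"], {"main.c": b"int main(){}"}, compile_)
out2 = cache.run(["cc", "-c", "main.c"], {"main.c": b"int main(){}"}, compile_)
assert out1 == out2 and cache.executions == 1  # second build was a cache hit
```

Because the key covers both command and input contents, any edit to `main.c` produces a new digest and forces a real execution, while byte-identical builds (even from different machines sharing the cache) are free.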

Minor terminology correction: RBE generally refers to Google's own implementation of the same name; Remote Execution (RE) and the REAPI are used to refer to the generic concept.

(Disclaimer: I work on this at Google.)