How does this compare to Fuchsia?

Helios author here. I tried to think of a nice spin for this comment, but came up short. If there is a future of computing with Fuchsia in it, it will have everything to do with Google's industry weight and nothing to do with good systems engineering and design.

Fuchsia is an extraordinarily complex kernel design. Hell, it's a "microkernel" which is bigger than Linux! It's fairly typical of Google's inwardly-focused engineering culture, using questionable tools (with large shadows) such as Bazel and gRPC, which were likely chosen simply because they plug into Googler workflows. At the same time, these decisions give rise to a very complex system with heaps of moving parts, a Rube Goldberg design that is emblematic of Google engineering.

Helios is much, much simpler. The kernel itself will probably clock in at under, say, 20,000 lines of code (for x86_64, at least), and it has a very small syscall API: fewer than two dozen calls in the final design. Many of the things Fuchsia does in the kernel will be done in userspace on Helios, in the Mercury component, such as service discovery and capability allocation. Finally, Helios is written in Hare, which is a much simpler language than C++, and kernel hackers at the very least should be convinced of the argument that the complexity of your implementation language contributes to the complexity of your implementation.

> questionable tools (with large shadows) such as Bazel and gRPC,

Bazel and gRPC, whilst both saddled with problems stemming from Google's inwardly-focused engineering culture (and also some sub-optimal historical decisions), are not "questionable tools" in the sense of solving the wrong problems, or of something else solving the same problems much better.

Blaze/Bazel (and its various clones like Buck) are basically the only general-purpose build systems out there that even attempt to do the basic task of a build system, namely actually figuring out what has changed and needs to be rebuilt. So of course they are going to use it; nothing else comes even close (and the main downsides don't apply to them).
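For what it's worth, the core idea is easy to sketch. Here is a toy Python illustration of content-based rebuild decisions, not Bazel's actual implementation, with entirely made-up targets: a target is dirty if the hash of its declared inputs, or of anything it depends on, differs from what was recorded after the last build.

```python
import hashlib

# Made-up, fine-grained dependency graph: each target declares its inputs
# (inlined as bytes to keep the example self-contained) and its deps.
sources = {
    "kernel.c": b"int main(void) { return 0; }",
    "kernel.h": b"#pragma once",
    "link.ld":  b"ENTRY(main)",
}
targets = {
    "kernel.o": {"inputs": ["kernel.c", "kernel.h"], "deps": []},
    "image":    {"inputs": ["link.ld"],              "deps": ["kernel.o"]},
}

def target_digest(name: str) -> str:
    """Hash a target's declared inputs plus the digests of its dependencies."""
    t = targets[name]
    h = hashlib.sha256()
    for path in t["inputs"]:
        h.update(hashlib.sha256(sources[path]).digest())
    for dep in t["deps"]:
        h.update(target_digest(dep).encode())
    return h.hexdigest()

# Digests recorded after the previous build; a real tool persists these.
previous = {name: target_digest(name) for name in targets}

sources["kernel.h"] = b"#pragma once\n#define DEBUG 1"   # edit a header
dirty = [name for name in targets if target_digest(name) != previous[name]]
print(dirty)   # ['kernel.o', 'image'] -- everything downstream of the edit
```

Hashing declared inputs rather than trusting timestamps is also what makes results cacheable and shareable across machines, which is a big part of Bazel's appeal at Google's scale.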

Similarly, gRPC has a lot of warts in how its encoding, type system, API, and transport work. But anything in that space that doesn't completely suck (such as Cap'n Proto) is basically a clone of the core design. Again, what else even attempts to solve the core problem of backwards/forwards-compatible, reasonably efficient RPC, messaging, and data storage?
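To make the compatibility point concrete, here is a deliberately simplified Python sketch of the tag-based idea that protobuf-style encodings rely on (this is not the real wire format; the framing and field numbers here are invented): every field is written with a stable number, so an old reader can skip numbers it has never heard of, and a new reader falls back to defaults for fields that are absent.

```python
import struct

def encode(fields: dict[int, bytes]) -> bytes:
    # Each field is framed as (field_number, length, payload).
    out = b""
    for number, value in fields.items():
        out += struct.pack("<II", number, len(value)) + value
    return out

def decode(data: bytes, known: set[int]) -> dict[int, bytes]:
    result, offset = {}, 0
    while offset < len(data):
        number, length = struct.unpack_from("<II", data, offset)
        offset += 8
        value = data[offset:offset + length]
        offset += length
        if number in known:          # unknown field numbers are skipped,
            result[number] = value   # which lets old readers accept
    return result                    # messages from newer writers

# A v2 writer adds field 3; a v1 reader that only knows fields {1, 2} still works.
message = encode({1: b"helios", 2: b"0.1", 3: b"new-in-v2"})
assert decode(message, known={1, 2}) == {1: b"helios", 2: b"0.1"}
```

Getting that property, plus a schema language, codegen for many languages, and a transport, is the problem gRPC and friends actually solve, warts and all.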

> basically the only general-purpose build systems out there that even attempt to do the basic task of a build system, namely actually figuring out what has changed and needs to be rebuilt.

Extraordinary claim. You will need to explain how Bazel does that and, say, CMake + Ninja don't.

Even [task](https://github.com/go-task/task) (a tiny task-runner tool) does that... I don't know of any build system that doesn't.

Perhaps the OP means full incremental compilation, which really requires "cooperation" from the compilers (or the build tool actually parsing the language's AST, as I believe Gradle does). Or, in the case of Bazel, the build author explicitly explaining to the build tool what the fine-grained dependencies are (I don't use Bazel, so I may be wrong; happy to be corrected).
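For illustration, this is roughly what that explicit declaration looks like in a Bazel BUILD file (Starlark, which uses Python syntax); the targets and file names here are invented. Each target lists its own sources, headers, and the targets it depends on, which gives the tool enough information to decide, per target, whether anything it consumes has changed.

```python
# Hypothetical BUILD file; names are made up for illustration.
cc_library(
    name = "scheduler",
    srcs = ["scheduler.c"],
    hdrs = ["scheduler.h"],
    deps = [":capabilities"],   # changes there also trigger a rebuild of :scheduler
)

cc_library(
    name = "capabilities",
    srcs = ["capabilities.c"],
    hdrs = ["capabilities.h"],
)
```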