I feel so lucky that I found waf[1] a few years ago. It just... solves everything. Build systems are notoriously difficult to get right, but waf is about as close to perfect as you can get. Even when it doesn't do something you need, or it does things in a way that doesn't work for you, the amount of work needed to extend/modify/optimize it to your project's needs is tiny (minus the learning curve ofc, but the core is <10k lines of Python with zero dependencies), and doesn't require you to maintain a fork or anything like that.
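For anyone who hasn't used it: a waf build description is a plain Python file called wscript. A minimal sketch (not from any real project; just the standard compiler_c tool and a made-up target name):

    # wscript: waf's build description is plain Python, no separate DSL
    def options(opt):
        # expose the standard C compiler options (--check-c-compiler, etc.)
        opt.load('compiler_c')

    def configure(conf):
        # detect a C compiler and cache the result for the build step
        conf.load('compiler_c')

    def build(bld):
        # declare one program; waf derives the task graph from source/target
        bld.program(source='main.c', target='app')

Extending it usually means adding a function or a tool module in the same file or alongside it, which is why you rarely need to fork anything.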

The fact that the Buck team felt they had to do a from-scratch rewrite to build the features they needed just goes to show how hard it is to design something robust in this area.

If there are any people from the Buck team here, I'd be curious to hear whether you all happened to evaluate waf before choosing to build Buck. I know FB's scale makes their needs unique, but at least at a surface level, it doesn't seem like Buck offers anything that couldn't have been implemented easily in waf: adding Starlark, optimizing performance, implementing remote task execution, adding fancy console output, implementing hermetic builds, supporting any language, and so on.

[1]: https://waf.io/

I truly believe any build system that uses a general-purpose language by default is too powerful. It lets people do silly stuff too easily. Build systems (for projects with a lot of different contributors) should be easy to understand, with few, if any, project-specific concepts to learn. There can always be an escape hatch to Python (see GN, for example), but 99% of the code should just be boring lists of files to build.
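For concreteness, the "boring lists of files" style looks roughly like a Bazel/Buck-style Starlark file (the target and path names here are made up):

    # BUILD: no control flow, just declarative targets and file lists
    cc_library(
        name = "net",
        srcs = ["socket.cc", "poller.cc"],
        hdrs = ["net.h"],
        deps = ["//base:logging"],
    )

    cc_binary(
        name = "server",
        srcs = ["main.cc"],
        deps = [":net"],
    )

Anyone can read that without knowing the build system, which is the whole point.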

You cannot magic away complexity. Large systems (think thousands of teams with hundreds of commits per minute) require a way to express that complexity. When all is said and done, you'll end up with a Turing-complete build system anyway, so why not go with something readable?

I seriously doubt there's a single repo on the planet that averages hundreds of commits per minute. That's completely unmanageable for any number of reasons.

I didn't mean on average; the build tool has to handle the worst case, and I'm probably understating the worst case.

I'd bet there are more than a few repos that do hit (at least) hundreds of commits as a high-water mark. My guess is lots of engineers + mono-repo + looming code-freeze deadline can do that like clockwork.

Edit: Robots too, as a sibling comment pointed out. A single human action may result in dozens of bot-generated commits.

IMO there's almost never a good reason to have automated commits in repos outside of two cases:

1) Automated refactoring

2) Automated merges when CI passes

Configs that can be generated should just be generated by the build.
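(As a sketch of what that looks like in waf terms, using hypothetical file names: a rule-based task generator rebuilds the config whenever its input changes, so nothing generated ever needs to be committed.)

    def build(bld):
        # regenerate config.h from its source of truth at build time
        bld(
            rule='python gen_config.py ${SRC} > ${TGT}',  # gen_config.py is hypothetical
            source='config.in',
            target='config.h',
        )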

But that's a different topic.

There are at least two other hugely important use cases you missed:

- automatic security / vendoring updates (e.g. https://github.com/renovatebot/renovate)

- automated cross-repo syncs, e.g. Google has processes and tools that bidirectionally sync pieces of Google3 with GitHub repos.