Am I the only one who wants to see a split into a "fast compile" mode and a "spend hours making every optimization possible" mode?

Most code is executed a lot more frequently than it is compiled, so if I can get a 1% speed increase with a 100x compile slowdown, I'll take it.

I don't want to see good PRs that improve LLVM delayed simply because they cause a compile-time regression.

You can already spend as much time as you'd like on optimizations if you are using LLVM. Just use a superoptimizer [0]:

    clang -Xclang -load -Xclang libsouperPass.so -mllvm -z3-path=/usr/bin/z3
, or raise the inlining thresholds, or build your own optimization pipeline the way rustc does [1], or...
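For a flavour of those last two, something like the following rough sketch works (hedged: -inline-threshold is an internal LLVM option and 'default<O3>' is new-pass-manager syntax, so names and defaults can shift between releases; foo.c is just a placeholder):

    # raise the inliner's cost threshold (internal LLVM option, passed via -mllvm)
    clang -O3 -mllvm -inline-threshold=5000 foo.c -o foo

    # or emit IR and drive a pass pipeline of your own choosing through opt
    clang -O1 -S -emit-llvm foo.c -o foo.ll
    opt -passes='default<O3>' foo.ll -S -o foo.opt.ll
    clang foo.opt.ll -o foo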

LLVM is super configurable, so you can make it do whatever you want.

Clang's defaults are tuned for the optimizations that give you the most bang for the time you put in, while still being able to compile a Web browser like Chrome or Firefox, or a whole Linux distribution's packages, in a reasonable amount of time.

If you don't care how long compilation takes, then you are not a "target" clang user, but you can still pass clang extra arguments like those mentioned above, or even fork it to add your own -Oeternity option that takes your project and compiles it for a millennium on a supercomputer for that little extra 0.00001% reduction in code size, at best.

Because often, code compiled with -O3 is slower than code compiled with -O2. "More optimizations" does not necessarily mean "faster code", as you seem to be suggesting.
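If you want to check that on your own code, a quick (and admittedly crude) comparison looks something like this, with hot_loop.c standing in for whatever your hot translation unit is:

    # build the same file at both levels and time your real workload;
    # -O3 mostly adds more aggressive vectorization/unrolling, which can
    # grow code size and hurt instruction-cache behaviour on some programs
    clang -O2 hot_loop.c -o hot_O2
    clang -O3 hot_loop.c -o hot_O3
    time ./hot_O2
    time ./hot_O3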

[0]: https://github.com/google/souper

[1]: https://github.com/rust-lang/rust/blob/master/src/rustllvm/P...