The reason compilation time is a problem is that compiler architecture is stuck in the 1970s, at least for systems programming languages. Compilers make you wait while they repeat the same work over and over again, thousands of times. Trillions, if you count the work globally. How many times has a header like <stdio.h> been parsed over the years?
In a world where compiler architecture had actually advanced, you would download a precompiled binary alongside the source, identical to what you would have produced with a clean build, and that binary would be updated incrementally as you type code. If you made a change affecting a large part of the binary that couldn't be applied immediately, JIT techniques would be used to allow you to run and test the program anyway before it finished compiling.
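To make that concrete, here is a minimal sketch of the function-level idea (in Python purely for brevity): compiled artifacts are cached by a hash of each function's source, and an edited function runs through an interpreted fallback while its native code is rebuilt. The names `compile_native`, `run_interpreted`, and `call` are hypothetical stand-ins, not any real compiler's API.

```python
import hashlib

# Purely illustrative: a per-function build cache with an interpreted
# fallback. compile_native / run_interpreted are hypothetical stand-ins.

native_cache = {}  # sha256 of function source -> "compiled" artifact

def compile_native(name, source):
    # Stand-in for the expensive native compilation of one function.
    return f"<machine code for {name}>"

def run_interpreted(name, args):
    # Stand-in for a JIT/interpreter path that runs an edited function
    # immediately, before its native code is ready.
    return f"interpreted {name}{args}"

def call(name, source, *args):
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in native_cache:
        # The function changed (or is new): run it via the fallback path now,
        # and rebuild native code (in a real system, on a background thread).
        result = run_interpreted(name, args)
        native_cache[key] = compile_native(name, source)
        return result
    return f"ran {native_cache[key]} with {args}"

print(call("add", "def add(a, b): return a + b", 1, 2))  # fallback path
print(call("add", "def add(a, b): return a + b", 3, 4))  # cached native path
```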
There is no fundamental reason why anyone should ever have to wait for a compiler. And if you didn't have to wait, that would free the compiler to spend potentially much more time on optimizations, actually improving the final binary.
The zapcc project shows some of the potential for improving build times, though it only scratches the surface: https://github.com/yrnkrn/zapcc
[1] https://github.com/StanfordSNR/gg: roughly, it checksums inputs and argument lists to make gcc invocations deterministic and cacheable, then farms the work out so that `make -j100` effectively runs on Amazon Lambda.
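For a rough feel of the checksums-plus-argument-list idea, here is a much simplified local sketch (in Python): a cache key derived from the source bytes and the compiler flags is enough to reuse object files. This is not gg's actual implementation; gg hashes the full dependency closure (headers included) and ships the work to Lambda rather than to a local directory, and the cache path here is made up.

```python
import hashlib
import shutil
import subprocess
from pathlib import Path

CACHE_DIR = Path("/tmp/objcache")   # illustrative local cache location
CACHE_DIR.mkdir(exist_ok=True)

def cache_key(source_path: str, args: list[str]) -> str:
    """Checksum of the source bytes plus the argument list (headers omitted
    here for brevity; a real tool must hash the full dependency closure)."""
    h = hashlib.sha256()
    h.update(Path(source_path).read_bytes())
    h.update("\0".join(args).encode())
    return h.hexdigest()

def compile_cached(source_path: str, out_path: str, args: list[str]) -> None:
    cached = CACHE_DIR / (cache_key(source_path, args) + ".o")
    if cached.exists():
        shutil.copyfile(cached, out_path)    # hit: reuse the cached object file
        return
    subprocess.run(["gcc", *args, "-c", source_path, "-o", out_path], check=True)
    shutil.copyfile(out_path, cached)        # miss: compile once, then cache

# compile_cached("main.c", "main.o", ["-O2", "-Wall"])
```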