Only tangentially related to the post, but I don't see it mentioned there: what do people use to run benchmarks on CI? If I understand correctly, standard OSS GH Actions/Azure Pipelines runners aren't going to be uniform enough to provide useful benchmark results. What does the Rust project use? What do other projects use?
However, it's not very good. It seems like most people just write their own custom performance-monitoring tooling.
As for how you actually run it: you can get fairly low-noise runtimes on a dedicated Linux machine, but it takes some tricks, like pinning your benchmark to dedicated CPU cores and making sure nothing else can run on them. You can get under 1% variance that way, but in general I found you can't get wall-time variance low enough to be useful in most cases, so instruction count is a better metric.
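Roughly, the pinning + counting setup looks like this. A minimal Python sketch, assuming Linux with perf installed and cores 2-3 reserved for benchmarking (e.g. via isolcpus); ./my_benchmark is a placeholder:

    import subprocess

    def count_instructions(cmd, cores="2,3"):
        # taskset pins the benchmark to the reserved cores; `perf stat -e
        # instructions` counts retired instructions, which is much less
        # noisy than wall time. -x, switches perf to CSV output.
        result = subprocess.run(
            ["taskset", "-c", cores,
             "perf", "stat", "-e", "instructions", "-x", ","] + cmd,
            capture_output=True, text=True, check=True,
        )
        # perf writes its stats to stderr; the count is the first CSV
        # field of the last line.
        return int(result.stderr.strip().splitlines()[-1].split(",")[0])

    print(count_instructions(["./my_benchmark"]))  # placeholder binary

For the wall-time side you'd additionally want things like a fixed CPU frequency governor and ASLR disabled; the links below cover that.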
I think you could do better than instruction count, though it would be a research project: take all the low-noise performance metrics you can measure (instruction count, branch misses, etc.), measure a load of wall times for different programs on different systems (core count, RAM size, etc.), and feed that into some kind of ML system. That should give you a decent model for producing a low-noise wall-time estimate.
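As a very rough sketch of the idea (all numbers made up, and plain least squares standing in for "some kind of ML system"):

    import numpy as np

    # Made-up training data: one row per benchmark run on some machine.
    # Columns: instructions, branch misses, cache misses, cores, RAM (GB).
    X = np.array([
        [1.2e9, 3.1e6, 4.0e5, 8, 16],
        [5.0e9, 9.8e6, 2.2e6, 8, 16],
        [7.5e8, 1.0e6, 9.0e4, 4, 8],
        [3.3e9, 6.5e6, 1.5e6, 4, 8],
    ])
    y = np.array([0.41, 1.73, 0.29, 1.22])  # measured wall times (seconds)

    # Fit wall_time ~ counters, with an intercept term.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict_wall_time(counters):
        # counters: same five features as the columns of X
        return float(np.append(counters, 1.0) @ coef)

    print(predict_wall_time([2.0e9, 4.0e6, 8.0e5, 8, 16]))

In practice you'd want far more runs, richer per-machine features, and probably something nonlinear, but even a linear fit over stable counters might beat raw wall time for spotting regressions.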
Good tips here:
https://llvm.org/docs/Benchmarking.html
https://easyperf.net/blog/2019/08/02/Perf-measurement-enviro...