I've never understood why folks treat the Benchmarks Game results as indicative or representative of anything useful. The code specimens they use are often neither polished nor idiomatic, and there's no commentary on whether they could be made faster through careful, even Byzantine, hand optimization.

Why does their web site have no contact information or link to where the source code for the project can be checked out, contributed to, or amended?

> I've never understood why…

Perhaps they don't read the website text?

> … no contact information or link…

Search works.

I ran multiple search queries. I wouldn't be so dumb as to post a comment like this here without having done my homework. The best I found after trying numerous keyword permutations was https://salsa.debian.org/benchmarksgame-team/benchmarksgame, but this did not appear to contain all of the benchmarks' source, just the source embedded in HTML, which is hardly a canonical source. This repository looks mostly like frontend HTML and chrome, not the SUT, the executor, or even the individual benchmark code.

At the very least, I couldn't realistically re-run some of the example benchmarks from the source embedded in the HTML, because they did not include vendoring or version information for the external packages they depend on. That made me doubt the provenance of https://salsa.debian.org/benchmarksgame-team/benchmarksgame.
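For what it's worth, the kind of information I was looking for is just an ordinary pinned-dependency manifest. A hypothetical sketch for a Python benchmark (the package and versions here are illustrative, not taken from the actual repository):

```
# requirements.txt — pin exact versions so a benchmark run is reproducible
# (hypothetical example; the real benchmarks don't ship a file like this)
numpy==1.26.4
```

Without something like this per benchmark (or a lockfile, or vendored sources), you can't know whether your re-run uses the same dependency code that produced the published numbers.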

Perhaps these other projects are a better match for what you expect to see:

https://programming-language-benchmarks.vercel.app/

https://github.com/kostya/benchmarks