This is impressive work: it's time-consuming to set up and benchmark so many different systems!

Impressiveness of the effort notwithstanding, I also want to encourage people to do their own research. As a database author myself (I work on Apache Druid) I have really mixed feelings about publishing benchmarks. They're fun, especially when you win. But I always want to caution people not to put too much stock in them. We published one a few months ago showing Druid outperforming ClickHouse (https://imply.io/blog/druid-nails-cost-efficiency-challenge-...) on a different workload, but we couldn't resist writing it in a tongue-in-cheek way that poked fun at the whole concept of published benchmarks. It just seems wrong to take them too seriously. I hope most readers took the closing message to heart: benchmarks are just one data point among many.

That's why I appreciate the comment "All Benchmarks Are Liars" in the "limitations" section of this benchmark -- something we can agree on :)

Benchmarks can be quite difficult to interpret when the test parameters vary between tests. However, I think that is exactly the point of the open-source ClickBench benchmark [https://github.com/ClickHouse/ClickBench]: it lets users do their own research by providing a standardized client and workload that can be run against any SQL-based DBMS. Standardized benchmarking is an important technique, both for comparing different applications and for comparing the same application across different environments (compute, storage, cloud vs. self-managed, etc.). SPEC [https://www.spec.org] used to do a great job of developing and releasing standardized benchmarks, although their activity has waned of late.
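
To make that concrete, here is a rough sketch (in Python, assuming only the standard library plus whatever DB-API 2.0 driver a given database ships) of what a ClickBench-style run boils down to: a fixed set of queries, executed unchanged against each system, with per-query timings recorded for comparison. The query list and connect functions here are illustrative placeholders, not the actual ClickBench harness, which lives in the repo linked above.

    import time

    # A fixed workload: the same SQL, unchanged, for every system under test.
    # (The real ClickBench suite runs 43 queries against a "hits" table.)
    QUERIES = [
        "SELECT COUNT(*) FROM hits",
        "SELECT COUNT(DISTINCT UserID) FROM hits",
    ]

    RUNS_PER_QUERY = 3  # ClickBench executes each query three times

    def benchmark(connect, queries=QUERIES, runs=RUNS_PER_QUERY):
        """Run every query `runs` times; return per-query timings in seconds."""
        conn = connect()
        results = {}
        try:
            cur = conn.cursor()
            for sql in queries:
                timings = []
                for _ in range(runs):
                    start = time.perf_counter()
                    cur.execute(sql)
                    cur.fetchall()  # force the result to be materialized
                    timings.append(time.perf_counter() - start)
                results[sql] = timings
        finally:
            conn.close()
        return results

    if __name__ == "__main__":
        # Hypothetical usage: pass any DB-API 2.0 connect function, e.g.
        #   import psycopg2; benchmark(lambda: psycopg2.connect(dsn))
        # then compare the resulting timings across systems.
        pass

The value of the standardization is that the queries and data never change between runs; only the system under test (and its published configuration) does, which is what makes the numbers comparable at all.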