What does HackerNews think of ClickBench?
ClickBench: a Benchmark For Analytical Databases
It compares Redshift and Athena as well.
ClickHouse's ClickBench is a good general tool. However, it is not the be-all and end-all of performance benchmarking and testing. Its results may or may not predict the performance of your specific use case once you get to production.
It is definitely a stab at an objective suite of tools for the real-time analytics space. But just as YCSB served as a good general performance test, a subset of users eventually wanted something specific to Cassandra and Cassandra-like databases (DSE, ScyllaDB, etc.), so cassandra-stress appeared. We have to consider cases where certain databases need testing suites that really capture their capabilities.
ClickHouse itself publishes a list of limitations that everyone should keep in mind as they run ClickBench:
https://github.com/ClickHouse/ClickBench/#limitations
CelerData (based on StarRocks) also wrote this up:
https://celerdata.com/blog/what-you-should-know-before-using...
Plus, I want to direct people to the discussion generated when ClickBench was first posted to HN:
https://news.ycombinator.com/item?id=32084571
As user AdamProut commented back at the time:
> It looks like the queries are all single table queries with group-bys and aggregates over a reasonably small data set (10s of GB)?
> I'm sure some real workloads look like this, but I don't think it's a very good test case to show the strengths/weaknesses of an analytical database's query processor or query optimizer (no joins, unions, window functions, or complex query shapes?).
> For example, if there were any queries with some complex joins, ClickHouse would likely not do very well right now, given its immature query optimizer (ClickHouse blogs always recommend denormalizing data into tables with many columns to avoid joins).
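To make that concrete, here is a query in the spirit of ClickBench's workload next to the join-heavy shape the suite does not exercise (the normalized sessions table in the second query is hypothetical; ClickBench ships only the hits table):

```sql
-- Typical ClickBench shape: a single-table scan with a group-by and aggregates.
SELECT RegionID, COUNT(DISTINCT UserID) AS u
FROM hits
GROUP BY RegionID
ORDER BY u DESC
LIMIT 10;

-- The shape the suite does not exercise: a join across normalized tables.
-- The "sessions" table is hypothetical; ClickBench ships only "hits".
SELECT h.RegionID, COUNT(*) AS c
FROM hits AS h
INNER JOIN sessions AS s ON h.UserID = s.UserID
WHERE s.Duration > 300
GROUP BY h.RegionID
ORDER BY c DESC
LIMIT 10;
```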
So, again, ClickBench is a good (great) beginning. As an industry, we should not let it be seen as the end. I'd be interested in the community's opinions on what we should be doing better, and how.
You can check the methodology described in the README and review the pull requests submitted by many database vendors.
(I work at ClickHouse)
Disclaimer: I'm the author of ClickBench. It includes comparisons of DataFusion and DuckDB as well, plus 30+ other DB engines, in an open benchmark. The DataFusion results were contributed by their team, and the DuckDB results by theirs.
https://pastila.nl/?0198061e/f2e0e7b2d61d0fe322607b58fc7200b...
There, ClickHouse operates in a "data lake" mode, simply processing a bunch of Parquet files on S3. Obviously, it is faster than Athena. But I also want to add Presto, Trino, Spark, Databricks, Redshift Spectrum, and BoilingData, which are currently missing from the benchmark.
Please help me add them: https://github.com/ClickHouse/ClickBench
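For reference, this "data lake" mode boils down to querying Parquet files in place with the s3 table function. A minimal sketch, assuming a public bucket (the path below is hypothetical):

```sql
-- ClickHouse "data lake" mode: read Parquet directly from S3 with the
-- s3() table function; no ingestion step. The bucket path is hypothetical.
SELECT SearchPhrase, count() AS c
FROM s3('https://example-bucket.s3.amazonaws.com/hits/*.parquet', 'Parquet')
WHERE SearchPhrase != ''
GROUP BY SearchPhrase
ORDER BY c DESC
LIMIT 10;
```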
Also, it includes another mode of ClickHouse, named "web": MergeTree tables hosted on an HTTP server (which is more efficient than Parquet). See https://github.com/ClickHouse/web-tables-demo
About R2: it is currently slow, and also not fully S3-compatible (e.g., no multipart uploads).
Note: BigQuery has a "DeWitt clause"[2] that prevents publishing benchmark results. That's why there are full testing instructions, but no published results.
Nevertheless, BigQuery is good for long-running queries, as the number of query workers is selected dynamically on a per-query basis.
[1] ClickBench: https://github.com/ClickHouse/ClickBench/
[2] https://cube.dev/blog/dewitt-clause-or-can-you-benchmark-a-d...
There are other responses from ClickHouse elsewhere in the comments on pricing, so I'll defer to their expertise on that topic. Thank you for your feedback and ideas; normalizing a benchmark by price is an interesting concept (and one where ClickHouse would expect to lead as well, given its architecture and efficiency).
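For what it's worth, the core of price normalization is just multiplying each system's total runtime by the hourly price of the hardware it ran on. A toy sketch in portable SQL, with entirely hypothetical numbers and system names:

```sql
-- Toy price normalization: cost of one full benchmark run, derived from
-- total runtime and the hourly price of the hardware. All numbers and
-- system names below are hypothetical.
WITH results AS (
    SELECT 'system_a' AS system, 120.0 AS total_runtime_s, 4.00 AS hourly_price_usd
    UNION ALL
    SELECT 'system_b', 300.0, 1.00
)
SELECT
    system,
    total_runtime_s * hourly_price_usd / 3600 AS cost_per_run_usd
FROM results
ORDER BY cost_per_run_usd;
```

A geometric mean over per-query costs would weight outliers less heavily; either way, the hard part is agreeing on comparable hardware prices, not the arithmetic.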