I haven't used either ClickHouse or TimescaleDB, but I thought TimescaleDB was competing with the likes of InfluxDB, QuestDB & Prometheus. I guess I'm not surprised that it loses to an OLAP database on OLAP queries.

Are people using ClickHouse as their time series backend? IIRC, ClickHouse doesn't perform all that well with millions of tiny inserts.

First, let's define `time series`. A time series is a series of (timestamp, value) pairs ordered by timestamp. The `value` may contain arbitrary data - a floating-point value, text, JSON, a data structure with many columns, etc. Each time series is uniquely identified by its name plus an optional set of {label="value"} labels. For example, temperature{city="London",country="UK"} or log_stream{host="foobar",datacenter="abc",app="nginx"}.

ClickHouse is well optimized for storing and querying such time series, including metrics. It's true that ClickHouse isn't optimized for handling millions of tiny inserts per second - it prefers infrequent inserts with a large number of rows per batch. But this isn't a real problem in practice, because:

1) ClickHouse provides the Buffer table engine for frequent inserts (see the first sketch below).

2) It is easy to create a small proxy app or library that buffers incoming data and sends it to ClickHouse in batches (see the second sketch below).
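
A minimal sketch of point 1, assuming the third-party clickhouse-driver Python package and a local ClickHouse instance; the `metrics` table, its schema and the Buffer thresholds are hypothetical and only illustrate the pattern:

```python
# Sketch: front a MergeTree table with a Buffer table so that frequent small
# inserts are accumulated in memory and flushed to disk in batches.
# Assumes the third-party `clickhouse-driver` package and a local ClickHouse;
# the table and column names below are hypothetical.
from datetime import datetime

from clickhouse_driver import Client

client = Client("localhost")

# The underlying table that actually stores the data on disk.
client.execute("""
    CREATE TABLE IF NOT EXISTS default.metrics (
        ts     DateTime,
        name   String,
        labels String,
        value  Float64
    ) ENGINE = MergeTree ORDER BY (name, ts)
""")

# A Buffer table in front of it: data is flushed to default.metrics when any
# of the thresholds is reached (16 buffer layers, 10..100 seconds,
# 10k..1M rows, 10MB..100MB per layer).
client.execute("""
    CREATE TABLE IF NOT EXISTS default.metrics_buffer AS default.metrics
    ENGINE = Buffer(default, metrics, 16, 10, 100,
                    10000, 1000000, 10000000, 100000000)
""")

# Applications write small inserts into the buffer table.
client.execute(
    "INSERT INTO default.metrics_buffer (ts, name, labels, value) VALUES",
    [(datetime.now(), "temperature", '{"city":"London"}', 11.5)],
)
```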

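And a sketch of point 2 - a tiny in-process buffer that turns many small writes into a few big INSERTs over ClickHouse's HTTP interface (port 8123). The table name, flush thresholds and naive TSV row encoding are illustrative assumptions, not a production-ready library:

```python
# Sketch: buffer rows in the application and flush them to ClickHouse in
# large batches instead of issuing many tiny inserts.
import threading
import time

import requests


class BufferedInserter:
    def __init__(self, url="http://localhost:8123", table="default.metrics",
                 max_rows=10000, max_age_seconds=5.0):
        self.url = url
        self.table = table
        self.max_rows = max_rows
        self.max_age = max_age_seconds
        self.rows = []
        self.last_flush = time.monotonic()
        self.lock = threading.Lock()

    def add(self, *values):
        """Queue one row; flush once the buffer is big enough or old enough."""
        with self.lock:
            # Naive TSV encoding; real code must escape tabs/newlines in values.
            self.rows.append("\t".join(str(v) for v in values))
            too_big = len(self.rows) >= self.max_rows
            too_old = time.monotonic() - self.last_flush >= self.max_age
            if too_big or too_old:
                self._flush_locked()

    def _flush_locked(self):
        if not self.rows:
            return
        body = "\n".join(self.rows) + "\n"
        # One batched INSERT via the HTTP interface, data sent as TSV.
        resp = requests.post(
            self.url,
            params={"query": f"INSERT INTO {self.table} FORMAT TabSeparated"},
            data=body.encode(),
        )
        resp.raise_for_status()
        self.rows = []
        self.last_flush = time.monotonic()


# Usage: many small add() calls become a few big INSERTs.
# inserter = BufferedInserter()
# inserter.add("2024-01-01 00:00:00", "temperature", '{"city":"London"}', 11.5)
```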
TimescaleDB provides Promscale [1] - a service that allows using TimescaleDB as a storage backend for Prometheus. Unfortunately, it doesn't show outstanding performance compared to Prometheus itself or to other remote storage solutions for Prometheus: Promscale requires more disk space, disk IO, CPU and RAM according to production tests [2], [3].

[1] https://github.com/timescale/promscale

[2] https://abiosgaming.com/press/high-cardinality-aggregations/

[3] https://valyala.medium.com/promscale-vs-victoriametrics-reso...

Full disclosure: I'm CTO at VictoriaMetrics, a competing solution to TimescaleDB. VictoriaMetrics is built on top of architecture ideas from ClickHouse.