Optimizing ClickHouse: Tactics that worked for us


131 points by podoman on 2024-05-14 | 25 comments

Automated Summary

This article describes the optimization process of ClickHouse at Highlight.io, an open source monitoring distribution. The company was facing scaling issues due to the high volume of observability data from their customers, leading to a significant CPU load on their ClickHouse cluster. The main challenges included handling large volumes, daily spikes in traffic patterns, and ensuring efficient data ingestion for performant querying. The optimization process involved strategies such as reducing merges using larger insert batches, keeping data in wide parts, optimizing ORDER BY granularity, checking merge levels, avoiding use of projections, and TTL optimization for clearing old parts. These strategies helped Highlight.io transition from an un-optimized, CPU-heavy ClickHouse cluster to a more efficient and affordable configuration, significantly improving the customer experience.
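For concreteness, a few of those tactics map onto MergeTree DDL and settings. A minimal sketch (the `logs` table and its columns are made up for illustration, not Highlight's actual schema):

    -- Hypothetical observability table showing wide parts and part-level TTL drops.
    CREATE TABLE logs
    (
        ProjectId UInt32,
        Timestamp DateTime,
        Body      String
    )
    ENGINE = MergeTree
    ORDER BY (ProjectId, Timestamp)
    TTL Timestamp + INTERVAL 30 DAY        -- clear out old data
    SETTINGS min_bytes_for_wide_part = 0,  -- write parts in the wide format from the start
             ttl_only_drop_parts = 1;      -- expire whole parts instead of rewriting rows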

Syntaf on 2024-05-14

We've been using highlight for our bootstrapped member management platform[1] and I gotta say I'm super impressed with the session replay feature, it's really helpful for understanding user behavior at a fraction of the price of competitors.

I remember wanting to use Heap's session replay only to realize they want hundreds of dollars per _month_; my last bill with highlight was $2.38, if I recall correctly.

That's all to say that I'm glad Highlight is figuring out how to scale while still offering their features to the small players of the world.

[1] https://embolt.app

tmzt on 2024-05-15

Very nice and very inspirational for someone bootstrapping a startup.

The pages clearly define what you're building and how to use it.

The explanation of platform fees makes sense, though it could be clearer whether the pricing examples are based only on those fees, or whether accounts are limited by number of members or dues.

You might want to check your terms of service: it doesn't list a jurisdiction and has the placeholder [jurisdiction] instead.

Best of luck with it!

podoman on 2024-05-14

Highlight.io cofounder here. Thanks for the shout out. Glad to hear you like the product; continue to share feedback as you use it!

cpursley on 2024-05-15

Highlight looks great. Does it support Elixir LiveView?

vadman97 on 2024-05-15

We do, via the OpenTelemetry Elixir SDK (and the Phoenix extension): https://opentelemetry.io/docs/languages/erlang/

jkercher on 2024-05-15

clickhouse-local is pretty slick as well. You can operate directly on text files as if they were tables. I made my own toy text-file database and thought I was cool because I could outrun similar programs like q, textql, sqlite, etc. But clickhouse-local beat me by a factor of 10, easily, on every kind of query with every type of data. Those guys know stuff.
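For anyone who hasn't tried it, the gist looks like this (file name and columns invented for illustration):

    # Query a CSV file directly; no server, no schema setup.
    $ clickhouse-local --query \
        "SELECT status, count() FROM file('requests.csv', CSVWithNames) GROUP BY status"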

podoman on 2024-05-15

Have heard good things about local. ClickHouse more broadly is an awesome project, and they're always on standby to help our eng team as well.

Dachande663 on 2024-05-15

We found the "lots of small inserts" issue, and fixed it by just using the Buffer table engine[0]. You can create it as a replica of the destination table, and it stores inserts in memory until they cross a threshold and are written. Super simple and took 5 minutes.

[0] https://clickhouse.com/docs/en/engines/table-engines/special...
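For reference, a minimal sketch of that setup, reusing the hypothetical `logs` table from the earlier sketch (thresholds illustrative; the arguments are database, table, num_layers, then min/max time, rows, and bytes):

    -- In-memory buffer that flushes into the destination table `logs`.
    CREATE TABLE logs_buffer AS logs
    ENGINE = Buffer(default, logs, 16,
                    10, 100,                -- min/max seconds buffered
                    10000, 1000000,         -- min/max rows
                    10000000, 100000000);   -- min/max bytes

    -- Point INSERTs at logs_buffer; a flush happens once all the min
    -- thresholds are met or any max threshold is exceeded.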

dilyevsky on 2024-05-15

Why not just use async inserts that internally do the same thing?

vadman97 on 2024-05-15

Once we hit >100k inserts per second, async inserts didn't work well for us because we had limited control over the background async insert batching happening on the cluster. The background inserts would be too small, so many merges would run, driving CPU up and putting back-pressure (latency) on the async inserts, which just showed up as ingestion delay.

Plus, async inserts are only available on ClickHouse Cloud.

zX41ZdbW on 2024-05-15

Asynchronous INSERTs are available in the open-source ClickHouse version and in ClickHouse Cloud. The feature is independent of the Cloud.

To enable asynchronous INSERTs, set `async_insert` to true.

Here is the documentation: https://clickhouse.com/docs/en/optimize/asynchronous-inserts ; intro: https://clickhouse.com/blog/asynchronous-data-inserts-in-cli... and a hands-on guide: https://clickhouse.com/blog/monitoring-asynchronous-data-ins...
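Per those docs, enabling it is just a settings change. A small sketch, again reusing the hypothetical `logs` table from above (`wait_for_async_insert` controls whether the client blocks until its batch is flushed):

    SET async_insert = 1;            -- buffer small INSERTs server-side
    SET wait_for_async_insert = 1;   -- ack only once the batch has been flushed

    -- Many small inserts like this now get collapsed into larger parts.
    INSERT INTO logs (ProjectId, Timestamp, Body) VALUES (1, now(), 'hello');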

dilyevsky on 2024-05-15

Like the sibling comment says, async inserts are part of the OSS version. Batch flushes are tunable in two dimensions: size and latency. I think it's likely the default size was just too low for your use case.
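Presumably the two knobs meant here are the buffer's size and age thresholds, something like this (values illustrative):

    SET async_insert_max_data_size = 10485760;   -- flush once ~10 MiB is buffered...
    SET async_insert_busy_timeout_ms = 1000;     -- ...or after 1 s, whichever comes first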

vadman97 on 2024-05-15

I forget the exact details, but we ran into issues tuning the two settings you're describing on the Cloud instance (these were user-level settings that seemed to get reset by ClickHouse Cloud because they were updated based on the cluster size). Perhaps this has changed with Cloud, and it isn't an issue with the OSS version.

ericb on 2024-05-15

What sort of insert volume are you handling with that?

banditelol on 2024-05-15

> We opted to use the ClickHouse Kafka Connect Sink that implements batched writes and exactly-once semantics achieved through ClickHouse Keeper.

Just a heads up, you've got a repeated line there.

YZF on 2024-05-15

Might have been a joke on "exactly once" ;)

vadman97 on 2024-05-15

I see what you did there.

jiveturkey on 2024-05-16

Since you're fixing typos, this one too.


ople on 2024-05-15

Very interesting observations! Merge performance tuning often seems overlooked, even though it's a key aspect of sustained ClickHouse performance.

I also like that the blog is quite compact and gets the points across without getting too much into the weeds.

One thing I've also noticed is that bloom filter indexes can be quite costly to merge. In many cases that's acceptable, though, given the massive benefit they provide for text queries. One just has to be mindful of the overhead when adding them.

vadman97 on 2024-05-15

Exploring bloom filter index merges would be an interesting addition. I do wish it were easier to profile merge performance to break down where most of the CPU time is being spent.
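There is at least a coarse view in the system tables; assuming the part_log is enabled, something like this shows where merge time is going per table (though not a per-function CPU breakdown):

    SELECT table, count() AS merges, sum(duration_ms) AS total_ms
    FROM system.part_log
    WHERE event_type = 'MergeParts' AND event_date >= today() - 7
    GROUP BY table
    ORDER BY total_ms DESC;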

JosephRedfern on 2024-05-15

Thanks for sharing! I'm curious as to your approach to changing the ORDER BY key for such large tables without significant downtime, since AFAIK this can't be done in place (see: https://kb.altinity.com/altinity-kb-schema-design/change-ord...). Are you able to share any details?

vadman97 on 2024-05-15

Materialized views can help change the ORDER BY with zero downtime; a rough SQL sketch follows these steps:

* Create a new version of the table with the new ORDER BY.

* Create a materialized view that will insert from the old table to the new table.

* Update your application code to query the new table.

* Start inserting data into the new table.
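In SQL terms, roughly (all names hypothetical; assume an existing table `events(ProjectId, Timestamp, Body)` currently sorted only by Timestamp):

    -- 1. Same schema, new sort key.
    CREATE TABLE events_v2
    (
        ProjectId UInt32,
        Timestamp DateTime,
        Body      String
    )
    ENGINE = MergeTree
    ORDER BY (ProjectId, Timestamp);

    -- 2. Mirror every new insert into the old table over to the new one.
    CREATE MATERIALIZED VIEW events_mirror TO events_v2 AS
    SELECT * FROM events;

    -- 3./4. Point reads at events_v2, then switch writes over and drop the view.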

misiek08 on 2024-05-15

What size is the cluster? Just curious how much hardware is needed to handle such traffic :)

vadman97 on 2024-05-15

3 nodes with 120 GiB of memory and 30 vCPUs each.