Hacker News

ClickHouse gets lazier and faster: Introducing lazy materialization

340 points by tbragin yesterday at 4:03 PM | 107 comments

Comments

tmoertel yesterday at 4:39 PM

This optimization should provide dramatic speed-ups when taking random samples from massive data sets, especially when the wanted columns can contain large values. That's because the basic SQL recipe relies on a LIMIT clause to determine which rows are in the sample (see query below), and this new optimization promises to defer reading the big columns until the LIMIT clause has filtered the data set down to a tiny number of lucky rows.

    SELECT *
    FROM Population
    WHERE weight > 0
    ORDER BY -LN(1.0 - RANDOM()) / weight
    LIMIT 100  -- Sample size.
Can anyone from ClickHouse verify that the lazy-materialization optimization speeds up queries like this one? (I want to make sure the randomization in the ORDER BY clause doesn't prevent the optimization.)
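
For reference, a ClickHouse-flavored rendering of the recipe might look like the sketch below. randCanonical() is ClickHouse's uniform [0, 1) generator, log() is its natural logarithm, and Population/weight are the same placeholder names as above; whether lazy materialization actually kicks in here is exactly the open question.

    -- Sketch only: the same exponential-key trick in ClickHouse syntax.
    -- Each row draws an exponential key with rate proportional to its
    -- weight; the 100 smallest keys form the weighted sample
    -- (the Efraimidis-Spirakis scheme).
    SELECT *
    FROM Population
    WHERE weight > 0
    ORDER BY -log(1.0 - randCanonical()) / weight
    LIMIT 100  -- Sample size.
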
jurgenkesker yesterday at 5:52 PM

I really like ClickHouse. I discovered it recently and, man, it's such a breath of fresh air compared to the suboptimal solutions I'd been using for analytics. It's so fast, and the CLI is also a joy to work with.

simonw yesterday at 4:38 PM

Unrelated to the new materialization option, this caught my eye:

"this query sorts all 150 million values in the helpful_votes column (which isn’t part of the table’s sort key) and returns the top 3, in just 70 milliseconds cold (with the OS filesystem cache cleared beforehand) and a processing throughput of 2.15 billion rows/s"

I clearly need to update my mental model of what might be a slow query on modern hardware and software. It looks like it's so fast because, in a columnar database, only that 150-million-value helpful_votes column has to be loaded. And I guess sorting 150 million integers in 70 ms shouldn't be surprising.

(Also "Peak memory usage: 3.59 MiB" for that? Nice.)
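
For context, the query being described has roughly this shape (table and column names follow the article's Amazon-reviews example, so treat the exact schema as an assumption):

    SELECT helpful_votes
    FROM amazon.amazon_reviews
    ORDER BY helpful_votes DESC
    LIMIT 3;
Only that one column has to be read, and ORDER BY ... LIMIT can be served by top-N heap selection rather than a full sort, which would explain both the throughput and the tiny memory footprint.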

This is a really great article - very clearly explained, good diagrams, I learned a bunch from it.

mmsimanga yesterday at 9:04 PM

IMHO, if ClickHouse had a native Windows release that didn't need WSL or a Linux virtual machine, it would be more popular than DuckDB. I remember MySQL being way more popular than PostgreSQL for years, one of the reasons being that MySQL had a Windows installer.

skeptrune today at 2:00 AM

>Despite the airport drama, I’m still set on that beach holiday, and that means loading my eReader with only the best.

What a nice touch. The technical information and diagrams in this were top-notch, but the fact that there was also some kind of narrative threaded in really put it over the top for me.

xiasongh today at 6:29 AM

Has anyone compared ClickHouse and StarRocks[0]? Join performance seemed a lot better on StarRocks a few months ago, but I'm not sure if that still holds true.

[0] https://www.starrocks.io/

justmarc yesterday at 9:37 PM

ClickHouse is a masterpiece of modern engineering, with absolute attention to performance.

vjerancrnjak yesterday at 7:51 PM

It's quite amazing how a DB like this shows that all of those row-based DBs are doing something wrong: they can't even approach these speeds with B-tree index structures. I know they care more about transactions than ClickHouse does, but it's just amazing to see how fast modern machines are: billions of rows per second.

I'm pretty sure they didn't even bother to compress the dataset aggressively; with some tweaking it could probably have been made much smaller than 30 GB. The speed shows that reading the data is slower than decompressing it.
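
For anyone curious, compressed vs. uncompressed sizes are easy to inspect in ClickHouse, and per-column codecs are the usual tweaking knob. A sketch (the ALTER target table and column names are illustrative):

    -- On-disk vs. logical size per table, from ClickHouse's parts metadata.
    SELECT
        table,
        formatReadableSize(sum(data_compressed_bytes)) AS compressed,
        formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed
    FROM system.parts
    WHERE active
    GROUP BY table;

    -- Example tweak: a heavier codec on one large column.
    ALTER TABLE amazon_reviews MODIFY COLUMN review_body CODEC(ZSTD(3));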

Reminds me of that Cloudflare article where they had a similar idea, that encryption is effectively free (reading is slower than decrypting), and found a bug that, once fixed, made this behavior materialize.

The compute engine (chdb) is a wonder to use.

higeorge13 today at 5:14 AM

That's an awesome change. Will it also work for LIMIT ... OFFSET queries?

ohnoesjmr yesterday at 6:21 PM

I wonder how well this propagates down to subqueries/CTEs.

simianwords yesterday at 5:41 PM

Maybe I'm too inexperienced in this field, but from the description of the mechanism this seems like an obvious optimisation. Is it not?

But credit where it's due: ClickHouse is obviously an industry leader.

jangliss today at 11:00 AM

Thought this was Clickhole.com and was waiting for the payoff to the joke

meta_ai_x yesterday at 6:55 PM

Can we take the "packing your luggage" analogy and only pack the things we actually use on the trip, and apply that to ClickHouse?

Onavo yesterday at 7:02 PM

Reminder that ClickHouse can optionally be embedded; you don't need to reach for DuckDB just because of the hype (it's been buggy as hell every time I've tried it).

https://clickhouse.com/blog/chdb-embedded-clickhouse-rocket-...

apwell23 today at 3:45 AM

Is Apache Druid still a player in this space? I never seem to hear about it anymore. Why would someone choose it over ClickHouse?

dangoodmanUT yesterday at 9:14 PM

God, ClickHouse is such great software. If only it were as ergonomic as DuckDB, and if management weren't doing some questionable things (deleting references to competitors in GH issues, weird legal letters, etc.).

The CH contributors are really stellar, from multiple companies (Altinity, Tinybird, Cloudflare, ClickHouse).

tnolet today at 9:03 AM

We adopted ClickHouse ~4 years ago. We COULD have stayed on just Postgres: with a lot of bells, whistles, aggregation, denormalisation, aggressive retention limits, job queues, etc., we could have gotten acceptable response times for our interactive dashboard.

But we chose ClickHouse and now we just pump in data with little to no optimization.
