Hacker News

amluto | last Tuesday at 4:52 PM

> I guess sorting 150 million integers in 70ms shouldn't be surprising.

I find sorting 150M integers at all to be surprising. The query asks for finding the top 3 elements and returning those elements, sorted. This can be done trivially by keeping the best three found so far and scanning the list. This should operate at nearly the speed of memory and use effectively zero additional storage. I don’t know whether Clickhouse does this optimization, but I didn’t see it mentioned.
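
A minimal sketch of that single-pass scan (the function name and types are illustrative assumptions, not anything from ClickHouse):

    // Illustrative single-pass top-3 scan: one comparison path per element,
    // constant extra storage.
    #include <cstdint>
    #include <vector>

    // Returns the top 3 values in `data`, largest first.
    std::vector<int64_t> top3(const std::vector<int64_t>& data) {
        std::vector<int64_t> best;  // kept sorted, largest first, size <= 3
        for (int64_t x : data) {
            if (best.size() < 3 || x > best.back()) {
                auto it = best.begin();
                while (it != best.end() && *it >= x) ++it;  // find insert position
                best.insert(it, x);
                if (best.size() > 3) best.pop_back();       // drop the 4th-best
            }
        }
        return best;  // already sorted, largest first
    }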

Generically, one can find the kth best of n elements in time O(n):

https://en.m.wikipedia.org/wiki/Selection_algorithm

One can then scan again to collect the top k, plus some extra work if the kth best wasn’t unique; that issue is manageable and, I think, adds at most a factor-of-2 overhead if one is careful (collect up to k elements that compare equal to the kth best and up to k that are better than it). Total complexity is O(n) if you don’t need the result sorted, or O(n + k log k) if you do.
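
A sketch of that approach using std::nth_element (introselect, expected O(n), rather than a guaranteed-linear select; the partition step also handles the tie bookkeeping above, since equal keys land on whichever side keeps exactly k winners). Names are assumptions:

    // Illustrative selection-based top-k; mutates v.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    // Leaves the k largest values at the front of `v`, sorted largest first.
    void topKInPlace(std::vector<int64_t>& v, std::size_t k) {
        if (k >= v.size()) {
            std::sort(v.begin(), v.end(), std::greater<>());
            return;
        }
        // Expected O(n): partition so positions [0, k) hold the k largest.
        std::nth_element(v.begin(), v.begin() + k, v.end(), std::greater<>());
        // O(k log k): sort just the winners if a sorted answer is needed.
        std::sort(v.begin(), v.begin() + k, std::greater<>());
    }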

If you’re not allowed to mutate the input (which probably applies to Clickhouse-style massive streaming reads), you can collect the top k in a separate data structure, and straightforward implementations are O(n log k). I wouldn’t be surprised if using a fancy heap or taking advantage of the data being integers with smallish numbers of bits does better, but I haven’t tried to find a solution or disprove the existence of one.
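
A straightforward version of that O(n log k) bound, using a min-heap of the current top k (a sketch; names are illustrative):

    // Illustrative non-mutating top-k with a min-heap of size k.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <vector>

    std::vector<int64_t> topKStreaming(const std::vector<int64_t>& data,
                                       std::size_t k) {
        if (k == 0) return {};
        // Min-heap: the root is the weakest of the current top k.
        std::priority_queue<int64_t, std::vector<int64_t>, std::greater<>> heap;
        for (int64_t x : data) {
            if (heap.size() < k) {
                heap.push(x);                  // still filling the heap
            } else if (x > heap.top()) {
                heap.pop();                    // evict the weakest, O(log k)
                heap.push(x);
            }
        }
        std::vector<int64_t> out;
        while (!heap.empty()) {
            out.push_back(heap.top());
            heap.pop();
        }
        std::reverse(out.begin(), out.end());  // largest first
        return out;
    }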


Replies

danlark1 | last Tuesday at 7:10 PM

I am the author of the partial sorting and selection optimization in ClickHouse. It uses the Floyd-Rivest algorithm; we tried a lot of different things back at the time, see [1].

Overall, ClickHouse reads blocks of a fixed size (64k), finds the top elements within each block, and then takes the top of the tops until it converges.

[1] https://danlark.org/2020/11/11/miniselect-practical-and-gene...
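
A rough sketch of that block-wise shape; the names, the use of std::nth_element, and the merging details are assumptions for illustration, not ClickHouse's actual implementation (which uses Floyd-Rivest via miniselect):

    // Illustrative block-wise top-k: select within the accumulated candidates
    // after each fixed-size block, i.e. take the "top of the tops".
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    std::vector<int64_t> topKBlockwise(const std::vector<int64_t>& data,
                                       std::size_t k) {
        constexpr std::size_t kBlock = 65536;  // fixed block size, like the 64k above
        std::vector<int64_t> candidates;       // current winners + next block
        for (std::size_t i = 0; i < data.size(); i += kBlock) {
            std::size_t end = std::min(i + kBlock, data.size());
            candidates.insert(candidates.end(),
                              data.begin() + i, data.begin() + end);
            if (candidates.size() > k) {
                // Keep only the k largest candidates seen so far.
                std::nth_element(candidates.begin(), candidates.begin() + k,
                                 candidates.end(), std::greater<>());
                candidates.resize(k);
            }
        }
        std::sort(candidates.begin(), candidates.end(), std::greater<>());
        return candidates;
    }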

kevinventullo | yesterday at 2:56 AM

With non-mutable “streaming” input, there is an O(n) algorithm to obtain the unsorted top k with only O(k) extra memory.

The basic idea is to maintain a buffer of size 2k: run mutable unsorted top-k on it, drop the smaller half (i.e., the lowest k elements), then stream in the next k elements from the main list. Each iteration takes O(k), but it processes k elements at a time, so the overall runtime is O(n).

When you’re done, you can of course sort for an additional k*log(k) cost.
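
A minimal sketch of that 2k-buffer idea (names are mine; note that std::nth_element is only expected-linear, so the strict O(n) bound would need a worst-case linear select such as median of medians):

    // Illustrative 2k-buffer top-k: O(k) extra memory, O(n) expected time.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    std::vector<int64_t> topKBuffered(const std::vector<int64_t>& data,
                                      std::size_t k) {
        if (k == 0) return {};
        std::vector<int64_t> buf;  // never holds more than 2k elements
        buf.reserve(2 * k);
        for (int64_t x : data) {
            buf.push_back(x);
            if (buf.size() == 2 * k) {
                // O(k) expected: move the k largest to the front...
                std::nth_element(buf.begin(), buf.begin() + k, buf.end(),
                                 std::greater<>());
                buf.resize(k);     // ...and drop the smaller half.
            }
        }
        if (buf.size() > k) {      // handle the final partial buffer
            std::nth_element(buf.begin(), buf.begin() + k, buf.end(),
                             std::greater<>());
            buf.resize(k);
        }
        std::sort(buf.begin(), buf.end(), std::greater<>());  // the optional k*log(k) sort
        return buf;
    }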

Akronymus | last Tuesday at 5:13 PM

> This can be done trivially by keeping the best three found so far and scanning the list.

That doesn't seem to guarantee correctness. If you don't track at least all of the unique values, you could be throwing away one of the most common values.

The wiki entry also seems to be specifically about the smallest values, rather than the largest.

simonw | last Tuesday at 5:55 PM

Maybe they do have that optimization, and that explains the 3.59 MiB peak memory usage for ~600 MB of integers.