I found in the article that the column uses 70 MB of storage; if it were sorted (i.e. if it were an index) it would take far less space. What I don't understand, though, is how they loaded 70 MB of data in 70 ms with a 125 MiB/s SSD — at that rate, reading 70 MB should take over 500 ms.
There is no need to load a data block that contains no rows whose column values could end up in the final result set. If every column in every granule has a header with the minimum and maximum values seen in that granule, then ClickHouse can read and check just the column header for each granule, without reading the column data itself.
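A minimal sketch of that pruning idea (assuming a simple integer column and a range predicate; the `GranuleHeader` layout and names here are hypothetical, not ClickHouse's actual on-disk format):

```python
# Hypothetical sketch: prune granules by a min/max header so that
# granules which cannot contain matching values are never read.

from dataclasses import dataclass

@dataclass
class GranuleHeader:
    min_value: int   # smallest column value in the granule
    max_value: int   # largest column value in the granule
    offset: int      # where the granule's column data starts on disk
    length: int      # how many bytes of column data would have to be read

def granules_to_read(headers, lo, hi):
    """Keep only granules whose [min, max] range can overlap [lo, hi];
    every other granule is skipped without reading its data."""
    return [h for h in headers if h.max_value >= lo and h.min_value <= hi]

# Example: four granules, query WHERE value BETWEEN 150 AND 160.
headers = [
    GranuleHeader(0,   99,  offset=0,      length=65536),
    GranuleHeader(100, 199, offset=65536,  length=65536),
    GranuleHeader(200, 299, offset=131072, length=65536),
    GranuleHeader(300, 399, offset=196608, length=65536),
]
print(granules_to_read(headers, 150, 160))  # only the second granule survives
```

Under this assumption, the scan touches only the tiny headers plus the data of the surviving granules, which is why the bytes actually read from the SSD can be far less than the column's total size.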