Hacker News

Sparse File LRU Cache

41 points by paladin314159 | today at 1:00 AM | 11 comments

Comments

electroly | today at 1:19 PM

I simply use SQLite for this. You can store the cache blocks in the SQLite database as blobs. One file, no sparse files. I don't think the "sparse file with separate metadata" approach is necessary here, and sparse files have hidden performance costs that grow with the number of populated extents. A sparse file is not all that different from a directory full of files. It might look like you're avoiding a filesystem lookup, but you're not; you've just moved it into the sparse extent lookup, which you pay for on every seek/read/write, not just once on open. You can simply use a regular file and let SQLite manage it entirely at the application level; this is no worse in performance and better for ops in a bunch of ways. Sparse files have a habit of becoming dense when they leave the filesystem they were created on.
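A minimal sketch of what this could look like, assuming a simple (key, block number) -> blob schema with a last-accessed column for LRU eviction; the table name, block size, and eviction policy here are illustrative choices, not anything from the linked project:

    import sqlite3

    BLOCK_SIZE = 64 * 1024  # assumed block size for the sketch

    db = sqlite3.connect("cache.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS blocks (
            key      TEXT NOT NULL,
            block_no INTEGER NOT NULL,
            data     BLOB NOT NULL,
            accessed INTEGER NOT NULL,  -- unix time, used for LRU eviction
            PRIMARY KEY (key, block_no)
        )
    """)

    def put_block(key: str, block_no: int, data: bytes, now: int) -> None:
        db.execute(
            "INSERT OR REPLACE INTO blocks (key, block_no, data, accessed) "
            "VALUES (?, ?, ?, ?)",
            (key, block_no, data, now),
        )
        db.commit()

    def get_block(key: str, block_no: int, now: int) -> bytes | None:
        row = db.execute(
            "SELECT data FROM blocks WHERE key = ? AND block_no = ?",
            (key, block_no),
        ).fetchone()
        if row is not None:
            # Touch the row so LRU eviction sees this access.
            db.execute(
                "UPDATE blocks SET accessed = ? WHERE key = ? AND block_no = ?",
                (now, key, block_no),
            )
            db.commit()
            return row[0]
        return None

    def evict_oldest(n: int) -> None:
        # Drop the n least-recently-used blocks; SQLite reuses the freed pages.
        db.execute(
            "DELETE FROM blocks WHERE rowid IN "
            "(SELECT rowid FROM blocks ORDER BY accessed LIMIT ?)",
            (n,),
        )
        db.commit()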

uroni | today at 11:06 AM

I’ve used this technique in the past, and the problem is that the way some file systems perform the file‑offset‑to‑disk‑location mapping is not scalable. It might always be fine with 512 MB files, but I worked with large files and millions of extents, and it ran into issues, including out‑of‑memory errors on Linux with XFS.

The XFS issue has since been fixed (though you often have no control over which Linux version your program runs on), but in general I'd say it's better to do such mapping in user space. In this case, a RocksDB instance is present anyway, so doing it there would come at no extra performance cost.
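A rough sketch of the user-space mapping idea, assuming an append-only dense data file plus an index from (key, logical block) to file offset; the in-memory dict below just stands in for whatever KV store is already around (RocksDB in this case), and all names are made up for illustration:

    BLOCK_SIZE = 64 * 1024  # assumed block size

    class UserSpaceMapping:
        def __init__(self, data_path: str):
            # One dense data file; blocks are appended and we remember where they went,
            # instead of relying on the filesystem's extent tree for a sparse file.
            self.data = open(data_path, "a+b")
            self.index: dict[tuple[str, int], int] = {}  # (key, block_no) -> offset

        def write_block(self, key: str, block_no: int, payload: bytes) -> None:
            assert len(payload) == BLOCK_SIZE
            self.data.seek(0, 2)                  # go to the end of the data file
            offset = self.data.tell()
            self.data.write(payload)
            # In practice this mapping entry would be persisted in the KV store.
            self.index[(key, block_no)] = offset

        def read_block(self, key: str, block_no: int) -> bytes | None:
            offset = self.index.get((key, block_no))
            if offset is None:
                return None                        # cache miss
            self.data.seek(offset)
            return self.data.read(BLOCK_SIZE)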

avmich | today at 6:10 AM

We could talk about an even more general idea for saving file space: compression. Ever heard of it being used across whole filesystems?

clawsyndicate | today at 12:39 PM

Sparse files are efficient, but they break NFS quota accounting. We run ~10k pods and found that usage reporting drifts and rehydration latency causes weird timeouts. Strict ext4 project quotas ended up being more reliable for us.

hahahahhaah | today at 11:40 AM

I am guessing the choice here is whether you want the kernel to handle this, and whether that is more performant than just managing a bunch of regular files and a home-grown file allocation table.

Or even just a bunch of little files representing segments of larger files (a rough sketch of that follows below).
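A quick sketch of the "bunch of little files representing segments of larger files" variant, where each segment of a cached object lives in its own small file and eviction is just deleting files; the directory layout and segment size are arbitrary choices for illustration:

    import os

    SEGMENT_SIZE = 4 * 1024 * 1024  # assumed segment size
    CACHE_DIR = "cache"             # assumed cache directory

    def segment_path(key: str, segment_no: int) -> str:
        # One small file per segment of the larger logical file.
        return os.path.join(CACHE_DIR, key, f"{segment_no:08d}.seg")

    def write_segment(key: str, segment_no: int, data: bytes) -> None:
        path = segment_path(key, segment_no)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

    def read_segment(key: str, segment_no: int) -> bytes | None:
        try:
            with open(segment_path(key, segment_no), "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None  # segment not cached; the miss falls through to the backing store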