
MaxCompression · today at 3:29 PM

One underappreciated aspect of zswap vs zram is the compression algorithm choice and its interaction with the data being compressed.

LZ4, the usual low-latency choice for both, is optimized for speed at the expense of ratio: typically 2-2.5x on memory pages. zstd can push that to 3-3.5x, but at significantly higher CPU cost per page fault.
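Both mechanisms expose the compressor choice through sysfs, so you can compare algorithms without rebuilding anything (which algorithms are actually available depends on your kernel config, and writing these files needs root):

```shell
# zswap: the module parameter holds the current compressor name.
cat /sys/module/zswap/parameters/compressor
echo zstd > /sys/module/zswap/parameters/compressor

# zram: per-device; reading lists the built-in options with the
# current one in brackets, e.g. "lzo lzo-rle lz4 [zstd]".
# Must be set before the device is initialized (before disksize is set).
cat /sys/block/zram0/comp_algorithm
echo lz4 > /sys/block/zram0/comp_algorithm
```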

The interesting tradeoff: memory pages are fundamentally different from files. They contain lots of pointer-sized values, stack frames, and heap metadata — data patterns where simple LZ variants actually perform surprisingly well relative to more complex algorithms. Going beyond zstd (e.g., BWT-based or context mixing) would give diminishing returns on memory pages while destroying latency.
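You can see the effect of data shape directly. A rough illustration, using gzip as a stand-in for an LZ-family codec (the lz4/zstd CLIs may not be installed; the point is the data, not the codec) and python3 to synthesize the input; the filenames are arbitrary:

```shell
# "Heap-like" data: 1 MiB of small values stored as 8-byte words,
# so six of every eight bytes are zero -- the kind of pointer/metadata
# pattern a real memory page is full of.
python3 -c 'import struct,sys; sys.stdout.buffer.write(b"".join(struct.pack("<Q",(i*16)&0xFFFF) for i in range(131072)))' > heap.bin

# 1 MiB of incompressible noise for comparison.
head -c 1048576 /dev/urandom > noise.bin

gzip -c heap.bin  > heap.bin.gz
gzip -c noise.bin > noise.bin.gz
ls -l heap.bin.gz noise.bin.gz   # heap.bin.gz is a small fraction of noise.bin.gz
```

Even a simple LZ compressor collapses the word-aligned structure, which is why LZ4's ratio on real pages is better than its performance on general files would suggest.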

So the real question isn't just "zswap vs zram" but "how much CPU are you willing to spend per compressed page, given your workload's memory access patterns?" For latency-sensitive workloads, LZ4 with zswap writeback is hard to beat.
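A minimal setup along those lines might look like this (assumes a real swap device already exists behind zswap, since that is where cold pages get written back; the 20% pool cap is the kernel default, shown here for explicitness):

```shell
echo lz4 > /sys/module/zswap/parameters/compressor
echo 20  > /sys/module/zswap/parameters/max_pool_percent  # cap pool at 20% of RAM
echo 1   > /sys/module/zswap/parameters/enabled

# Sanity-check the resulting configuration.
grep -R . /sys/module/zswap/parameters/
```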