I wrote about this recently too:
https://www.theregister.com/2026/03/13/zram_vs_zswap/
I prefer zswap to zram and as I linked at the end of the piece, it's not just me:
https://linuxblog.io/zswap-better-than-zram/
Maybe I'm overthinking it, but I'm wondering if this piece about myths is in any way a response to my article?
Would be nice if zswap could be configured with no backing swap device so it could completely replace zram. Having two slightly different systems is weird.
There's not really any difference between swap on disk being full and swap in RAM being full; either way something needs to get OOM-killed.
Simplifying the configuration would probably also make it easier to enable by default in most distros. It's kind of backwards that the most common Linux distros other than ChromeOS are behind macOS and Windows in this regard.
A simpler alternative to OOM daemons could be enabling MGLRU's thrashing prevention: https://www.kernel.org/doc/html/next/admin-guide/mm/multigen...
I'm using it together with zram sized to 200% RAM size on a low RAM phone with no disk swap (plus some tuning like the mentioned clustering knob) and it works pretty well if you don't mind some otherwise preventable kills, but I will happily switch to diskless zswap once it's ready.
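For anyone curious what the MGLRU side of that setup looks like, it boils down to two sysfs knobs from the linked kernel admin guide. This is a sketch, not the commenter's exact tuning; the values are illustrative:

```shell
# Enable the multi-gen LRU (on kernels built with CONFIG_LRU_GEN).
echo y > /sys/kernel/mm/lru_gen/enabled

# Thrashing prevention: refuse to evict the working set of the last
# N milliseconds and OOM-kill instead. 1000 ms is just a starting
# point; larger values mean fewer stalls but more kills.
echo 1000 > /sys/kernel/mm/lru_gen/min_ttl_ms
```

Setting `min_ttl_ms` back to 0 disables the thrashing prevention again.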
>They size the zram device to 100% of your physical RAM, capped at 8GB. You may be wondering how that makes any sense at all – how can one have a swap device that's potentially the entire size of one's RAM?
zram size applies to uncompressed data; real usage grows dynamically (plus static bookkeeping). Most memory compresses well, so you may even want the zram device sized larger than physical RAM.
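To make that sizing arithmetic concrete, here's a back-of-the-envelope sketch. The 2:1 and 3:1 compression ratios are illustrative assumptions, not measurements:

```python
# The zram "disksize" caps *uncompressed* data; the physical RAM it
# actually occupies is roughly uncompressed / ratio (plus metadata).

def zram_resident_gb(uncompressed_gb: float, ratio: float) -> float:
    """Approximate RAM consumed by a full zram device (ignoring metadata)."""
    return uncompressed_gb / ratio

# An 8 GB device, completely full, at an assumed 3:1 ratio:
print(f"{zram_resident_gb(8, 3.0):.2f} GB")  # 2.67 GB of real RAM

# Same device at a more pessimistic 2:1 ratio:
print(f"{zram_resident_gb(8, 2.0):.2f} GB")  # 4.00 GB of real RAM
```

So even a device sized at 100% of RAM only eats a fraction of it in practice, which is why sizing beyond 100% can make sense.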
> It only really makes sense for extremely memory-constrained embedded systems
Even "mildly" memory-constrained embedded systems don't use swap, because their resources are tailored to their function. And they are typically not fans [1] of compression either, because the compression ratio is often unpredictable.
[1] Yes, they typically don't need fans because overheating and using a motor for cooling is a double waste of energy.
With zram, I can just use zram-generator[0] and it does everything for me; I don't even need to set anything up, other than installing the systemd generator, which on some distros is installed by default. Is there anything equivalent for zswap? Otherwise, I'm not surprised most people are just using zram, even if it's sub-optimal.
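For reference, a minimal zram-generator config is only a few lines (a sketch using the project's documented keys; the values are illustrative):

```ini
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram                  # size the device to 100% of physical RAM
compression-algorithm = zstd
```

The closest zswap "equivalent" I'm aware of is not a generator but a pair of kernel parameters (`zswap.enabled=1 zswap.compressor=zstd`) plus an ordinary swap file or partition, which is more pieces to wire up by hand.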
So much polemic and no numbers? If it is a performance issue, show me the numbers!
Can you make a follow-up here on the best way to set up swap to support full disk encryption + hibernation?
That's a banger article; I don't even like low-level stuff and yet I read the whole thing. Hopefully I'll have an opportunity to use some of it if I ever get around to switching my personal notebook back to Linux.
There is one more feature zram offers: multiple compression algorithms per device. I use a simple bash script that starts with fast compression and then, after 1h, recompresses with a much stronger algorithm.
Unfortunately you cannot chain it with any additional layer or offload to disk later on, because recompression breaks idle tracking by setting the timestamp to 0 (so it's 1970 again).
https://gist.github.com/Szpadel/9a1960e52121e798a240a9b320ec...
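The two-tier scheme described above uses the recompression interface from the kernel zram docs (6.2+). A rough sketch of the core steps, with illustrative values rather than the linked script's exact ones:

```shell
# Register a stronger secondary algorithm (priority 1 = secondary tier).
echo "algo=zstd priority=1" > /sys/block/zram0/recomp_algorithm

# Mark pages untouched for the last hour as idle (requires the zram
# memory-tracking config option; this is the timestamp that
# recompression later resets to 0).
echo 3600 > /sys/block/zram0/idle

# Recompress the idle pages with the secondary algorithm.
echo "type=idle" > /sys/block/zram0/recompress
```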
I used to put swap on zram when my laptop had one of those early SSDs that people would tell you not to put swap on, for fear of wearing them out.
Setup was tedious.
Thank goodness Kubernetes got support for swap; zswap has been a great boon for one of my workloads.
User here, who also acts as Level 2 support for storage.
The article contains some solid logic plus an assumption that I disagree with.
Solid logic: you should prefer zswap if you have a device that can be used for swap.
Solid logic: zram + other swap = bad due to LRU inversion (zram becomes a dead weight in memory).
Advice that matches my observations: zram works best when paired with a user-space OOM killer.
Bold assumption: everybody who has an SSD has a device that can be used for swap.
The assumption is simply false, and not due to the "SSD wear" argument. Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells. This is much worse than any HDD. If a DRAMless consumer SSD is all that you have, better use zram.