Hacker News

AlanYx · today at 3:09 PM · 2 replies

There are a couple of recent developments in ZFS dedup that help partially mitigate the memory issue: fast dedup, and the ability to use a special vdev to hold the dedup table if it spills out of RAM.
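Roughly, on OpenZFS 2.3+ that looks like the following (pool/dataset names are placeholders, and dedup_table_quota comes from the fast dedup work, so check zpoolprops(7) on your release):

    # Turn dedup on for a dataset; on pools with the fast_dedup feature
    # flag active, newly created dedup tables use the new format
    zfs set dedup=on tank/data

    # Optionally cap how large the on-disk dedup table may grow
    # (fast-dedup pool property; exact name/values may differ by version)
    zpool set dedup_table_quota=auto tank

    # Inspect dedup table size and the dedup histogram
    zpool status -D tank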

But yes, there's almost no case where home users should enable it. Even the traditional 5GB-of-RAM-per-1TB rule can fall over completely on systems with a lot of small files.
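Back of the envelope, using the commonly cited figure of roughly 320 bytes of in-core DDT entry per unique block (the exact per-entry size varies by version):

    1 TB at ~64 KB average block size   -> ~16M unique blocks
    16M blocks x ~320 B per entry       -> ~5 GB of DDT

    1 TB of small files at ~4 KB blocks -> ~256M unique blocks
    256M blocks x ~320 B per entry      -> ~80 GB of DDT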


Replies

dwood_dev · today at 3:35 PM

Nice. I was hoping a dedicated vdev for the dedup table would come along. I've wanted to put the dedup table on Optane and see how it performs.
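Assuming the Optane devices show up as ordinary NVMe block devices, the sketch would be something like this (pool name and device paths are placeholders):

    # Add a mirrored pair of Optane devices as a dedup allocation-class
    # vdev; the on-disk DDT is then stored on them
    zpool add tank dedup mirror \
        /dev/disk/by-id/nvme-optane-0 /dev/disk/by-id/nvme-optane-1

    # Verify the vdev layout and how much space the DDT occupies on it
    zpool list -v tank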

archagon · today at 8:49 PM

I think the asterisk there is that the special vdev requires redundancy and becomes a mandatory part of your pool.

Some ZFS discussions suggest that an L2ARC vdev can cache the DDT. Do you know if this is correct?
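For reference, the recipe usually suggested looks like this; whether DDT blocks actually persist in L2ARC depends on them being cached and evicted like ordinary metadata, so treat it as an experiment (pool name and device path are placeholders):

    # Add an L2ARC (cache) device; unlike a special/dedup vdev it carries
    # no pool data, so it can be added and removed freely
    zpool add tank cache /dev/disk/by-id/nvme-cache-0

    # Restrict L2ARC use for this pool to metadata (which should include
    # DDT blocks, assuming they are cached like other metadata)
    zfs set secondarycache=metadata tank

    # On Linux, watch the L2ARC counters
    grep '^l2_' /proc/spl/kstat/zfs/arcstats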
