There are a couple of recent developments in ZFS dedup that partially mitigate the memory issue: fast dedup, and the ability to use a special vdev to hold the dedup table when it outgrows RAM.
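Rough sketch of what that looks like in practice (pool name "tank" and the device paths are placeholders, and this is from the OpenZFS docs rather than a pool I've actually built this way):

    # Dedicated dedup vdev to hold the DDT; it should match the pool's
    # redundancy, so mirror it:
    zpool add tank dedup mirror /dev/nvme0n1 /dev/nvme1n1

    # Fast dedup (OpenZFS 2.3+) also adds a pool property to cap how
    # large the dedup table is allowed to grow:
    zpool set dedup_table_quota=auto tank

    # Then enable dedup only on datasets that actually benefit from it:
    zfs set dedup=on tank/backups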
But yes, there's almost no case where home users should enable it. Even the traditional rule of thumb of 5 GB of RAM per 1 TB of data can fall over completely on systems with a lot of small files.
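Back-of-the-envelope math shows why (the ~320 bytes per in-core DDT entry figure is the commonly quoted estimate, not an exact number, and real overhead varies by pool):

    # Approximate in-core DDT size for 1 TiB of unique data at two record sizes
    bytes_per_entry=320
    for recsize in 131072 8192; do
        entries=$(( (1 << 40) / recsize ))
        echo "recordsize=${recsize}: ~$(( entries * bytes_per_entry / 1024 / 1024 )) MiB of DDT"
    done
    # 128 KiB records -> ~2.5 GiB per TiB; 8 KiB records -> ~40 GiB per TiB

Lots of small files behave like the small-recordsize case: every file gets at least one block and therefore at least one DDT entry, so the table balloons well past what the 5 GB/TB rule predicts.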
I think the asterisk there is that the special vdev requires redundancy and becomes a mandatory part of your pool.
Some ZFS discussions suggest that an L2ARC vdev can cache the DDT. Do you know if this is correct?
Nice. I was hoping a vdev for the dedup table would come along. I've wanted to put the dedup table on Optane and see how it performs.