> Is the 80% rule real or just passed down across decades like other “x% free” rules?
As I understand it, the primary reason for the 80% figure was that you're getting close to another limit, IIRC around 90%, where the space allocator switches from finding a nearby large-enough region to finding the best-fitting one. That second mode tanks performance and can produce much more fragmentation. And since ZFS has no defrag tool, you're stuck with that fragmentation.
It has also changed: the switch now happens at 96%[1] rather than 90%, and the code has been improved[2] to keep better track of free space.
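Roughly, the policy looks like this (an illustrative sketch in C, not actual ZFS code; the struct, function names, and the threshold constant are all made up, and real ZFS applies this per metaslab rather than pool-wide):

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Illustrative sketch only, not ZFS source: names and the
     * threshold constant are hypothetical.
     */
    #define BEST_FIT_SWITCH_PCT 96   /* switch strategies at 96% full */

    struct free_region {
        uint64_t offset;
        uint64_t size;
    };

    /* First fit: take the first region that is big enough -- cheap to find. */
    static struct free_region *
    alloc_first_fit(struct free_region *regions, size_t n, uint64_t want)
    {
        for (size_t i = 0; i < n; i++)
            if (regions[i].size >= want)
                return &regions[i];
        return NULL;
    }

    /* Best fit: scan every region for the tightest match -- slower, and the
     * leftover slivers it creates tend to worsen fragmentation over time. */
    static struct free_region *
    alloc_best_fit(struct free_region *regions, size_t n, uint64_t want)
    {
        struct free_region *best = NULL;
        for (size_t i = 0; i < n; i++)
            if (regions[i].size >= want &&
                (best == NULL || regions[i].size < best->size))
                best = &regions[i];
        return best;
    }

    /* Pick an allocation strategy based on how full the pool is. */
    struct free_region *
    allocate(struct free_region *regions, size_t n, uint64_t want,
             uint64_t used, uint64_t capacity)
    {
        uint64_t used_pct = used * 100 / capacity;
        if (used_pct >= BEST_FIT_SWITCH_PCT)
            return alloc_best_fit(regions, n, want);
        return alloc_first_fit(regions, n, want);
    }

The point is the cost model: first fit can stop at the first match, while best fit has to consider every candidate and tends to leave behind tiny slivers that are hard to reuse.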
However, performance can start to degrade before you reach this algorithm switch[3], since the less free space you have, the more likely each write is to cause fragmentation.
That said, it was also generic advice that took no account of your specific workload. If you have a lot of cold data, low churn, and fairly uniform file sizes, you're probably less affected than if you have high churn with files of widely varying sizes.
[1]: https://openzfs.github.io/openzfs-docs/Performance%20and%20T...
[2]: https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFra...
[3]: https://www.bsdcan.org/2016/schedule/attachments/366_ZFS%20A...