
rpcope1 yesterday at 4:24 PM

It seems like it's rare to find M.2 drives with the sort of things you'd want in a NAS (PLP, reasonably high DWPD, good controllers, etc.), and you've also got to contend with managing heat in a way I never saw with 2.5" or 3.5" drives. I would imagine the sort of people doing NVMe for NAS/SAN/servers are probably all using U.2 or U.3 (I know I do).


Replies

zamadatix yesterday at 6:57 PM

I've been doing my home NASes on M.2 NVMe for years now, with 12 disks in one and 22 in another (the backup is still HDD though):

DWPD: Between the random TeamGroup drives in the main NAS and the WD Red Pro HDDs in the backup, the write limits are actually about the same. The bonus is that reads are effectively unlimited on the SSDs, so things like scheduled ZFS scrubs don't count as 100 TB of usage across the pool each time.
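
Rough back-of-the-envelope of why those write limits come out comparable. All the capacity/DWPD/workload figures below are illustrative assumptions, not the actual specs of either drive:

```python
# Back-of-the-envelope endurance comparison: SSD DWPD vs. HDD workload rating.
# Every figure here is an illustrative assumption, not a real spec-sheet value.

ssd_capacity_tb = 4.0        # per-drive capacity (assumed)
ssd_dwpd = 0.3               # drive writes per day over the warranty (assumed)
ssd_warranty_years = 5

hdd_workload_tb_per_year = 300.0   # "workload rating" style figure (assumed)
hdd_warranty_years = 5

ssd_lifetime_writes_tb = ssd_capacity_tb * ssd_dwpd * 365 * ssd_warranty_years
hdd_lifetime_workload_tb = hdd_workload_tb_per_year * hdd_warranty_years

print(f"SSD rated writes over warranty:   {ssd_lifetime_writes_tb:.0f} TB")
print(f"HDD rated workload over warranty: {hdd_lifetime_workload_tb:.0f} TB")

# The HDD workload rating counts reads *and* writes, while the SSD figure is
# writes only, which is why read-heavy jobs like scrubs favor the SSDs.
```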

Heat: Actually easier to manage than the HDDs. The drives are smaller (so denser for the same wattage), but the peak wattage is lower than the idle spinning wattage of the HDDs, and there isn't a large physical buffer between the hot parts and the airflow. My normal case airflow keeps them at <60 C under sustained benching of all of the drives raw, and more like <40 C given ZFS doesn't like to go past 8 GB/s in this setup anyways. If you select $600 top-end SSDs with high-wattage controllers that ship with heatsinks you might have more of a problem; otherwise it's like 100 W max for all 22 drives and easy enough to cool.
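
If you want to sanity-check drive temperatures on your own box, here's a minimal sketch (assuming a Linux host where the nvme driver exposes hwmon sensors, which recent kernels do) that dumps each controller's reported temperatures:

```python
#!/usr/bin/env python3
"""Print NVMe temperatures from the Linux hwmon sysfs interface.

Assumes a kernel recent enough that the nvme driver registers a hwmon
device per controller; paths are standard sysfs, nothing exotic.
"""
from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    name = (hwmon / "name").read_text().strip()
    if name != "nvme":
        continue
    # Each tempN_input is in millidegrees C; tempN_label names the sensor.
    for temp_input in sorted(hwmon.glob("temp*_input")):
        label_file = temp_input.with_name(temp_input.name.replace("_input", "_label"))
        label = label_file.read_text().strip() if label_file.exists() else temp_input.name
        millideg = int(temp_input.read_text().strip())
        print(f"{hwmon.name} {label}: {millideg / 1000:.1f} C")
```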

PLP: More problematic if this is part of your use case, as NVMe drives with PLP will typically lead you straight into enterprise pricing. Personally, my use case is more "on-demand large file access" with extremely low-churn data regularly backed up for the long term, and I'm not at a loss if I have an issue and need to roll back to yesterday's data. Others who use things more as an active drive may have different considerations.
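
Not claiming this is my exact setup, but "roll back to yesterday's data" with ZFS basically amounts to a daily snapshot rotation like the sketch below (dataset name and retention are placeholders):

```python
#!/usr/bin/env python3
"""Minimal daily-snapshot rotation sketch for a ZFS dataset.

Dataset and retention are placeholders; run it from cron or a systemd
timer. Uses the standard `zfs snapshot` / `zfs list` / `zfs destroy` CLI.
"""
import subprocess
from datetime import date

DATASET = "tank/data"   # placeholder dataset name
PREFIX = "daily-"
KEEP = 14               # days of daily snapshots to retain

# Take today's snapshot, e.g. tank/data@daily-2024-05-01
subprocess.run(
    ["zfs", "snapshot", f"{DATASET}@{PREFIX}{date.today().isoformat()}"],
    check=True,
)

# List this dataset's snapshots (oldest first) and prune surplus dailies.
names = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-s", "creation", "-r", DATASET],
    check=True, capture_output=True, text=True,
).stdout.splitlines()
dailies = [s for s in names if s.startswith(f"{DATASET}@{PREFIX}")]
for snap in dailies[:-KEEP]:
    subprocess.run(["zfs", "destroy", snap], check=True)

# Restoring is then `zfs rollback tank/data@daily-...`, or just browsing
# the read-only snapshot for the files you need.
```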

The biggest downsides I ran across were:

- Loading up all of the lanes on a modern consumer board works in theory but can be buggy as hell in practice: anything from the boot becoming EXTREMELY long, to sometimes just not working at all, to PCIe errors during operation (see the link-check sketch after this list). Used Epyc in a normal PC case is the way to go instead.

- It costs more, obviously

- Not using a chassis designed for massive numbers of drives with hot-swap access makes installation and troubleshooting quite the pain.
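
For the flaky-lane problems, a quick way to see whether every drive actually trained at the expected PCIe speed and width (a sketch, assuming Linux and the standard sysfs PCI link attributes):

```python
#!/usr/bin/env python3
"""Report negotiated vs. maximum PCIe link for each NVMe controller.

Assumes Linux; reads the standard current/max_link_speed and
current/max_link_width sysfs attributes on each controller's PCI device.
A drive that trained narrower or slower than expected is a good hint
that a riser or bifurcated slot is misbehaving.
"""
from pathlib import Path

def read_attr(pci_dev: Path, attr: str) -> str:
    f = pci_dev / attr
    return f.read_text().strip() if f.exists() else "?"

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_dev = (ctrl / "device").resolve()   # the controller's PCI device node
    print(f"{ctrl.name}: "
          f"speed {read_attr(pci_dev, 'current_link_speed')} "
          f"(max {read_attr(pci_dev, 'max_link_speed')}), "
          f"width x{read_attr(pci_dev, 'current_link_width')} "
          f"(max x{read_attr(pci_dev, 'max_link_width')})")
```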

The biggest upsides (other than the obvious ones) I ran across were:

- No spinup drain on the PSU

- No need to worry about drive power saving/idling, which pairs with the whole solution being quiet enough to sit in my living room without hearing drive whine.

- I don't look like a struggling fool trying to move a full chassis around :)

8cvor6j844qw_d6 yesterday at 4:37 PM

It's also quite difficult to find 2280 M.2 SATA SSDs. I had an old laptop that only takes 2280 M.2 SATA SSDs.

It's always one of the two: M.2 but PCIe/NVMe, or SATA but not M.2.
