While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost effective.
You give up so much by using an all-in-one mini device: no upgrades, no ECC, harder cooling, less I/O.
I have had a Proxmox server with a used Fujitsu D3417 board and 64GB of ECC RAM for roughly 5 years now. I paid 350 bucks for the whole thing and upgraded the storage once from 1TB to 2TB. It draws 12-14W in normal day-to-day use and runs 10 Docker containers and 1 Windows VM.
So I would prefer an mATX board with ECC, IPMI, 4x NVMe, and 2.5GbE over these toy boxes...
However, Jeff's content is awesome as always.
Should a mini-NAS be considered a new type of thing with a new design goal? He seems to be describing about a desktop's worth of storage (6TB), but always available on the network and consuming less power than a desktop.
This seems useful. But it seems quite different from his previous (80TB) NAS.
What is the idle power draw of an SSD anyway? I guess they usually have a volatile RAM cache of some sort built in (is that right?), so it must not be zero…
I love reviews like these. I'm a fan of the N100 series for what they are in bringing low power x86 small PCs to a wide variety of applications.
One curiosity for @geerlingguy, does the Beelink work over USB-C PD? I doubt it, but would like to know for sure.
I've been running one of these quad-NVMe mini-NAS boxes for a while. They're a good compromise if you can live with no ECC. With some DIY shenanigans they can even run fanless.
If you're running on consumer NVMe drives, then mirrors are probably a better idea than raidz, though. Write amplification can easily shred consumer drives.
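For anyone setting this up, a minimal OpenZFS sketch of the mirrored layout (device names are placeholders for whatever your four NVMe drives enumerate as):

```shell
# Two two-way mirrors instead of a single raidz1: raidz parity updates
# multiply write amplification on consumer TLC/QLC drives.
zpool create -o ashift=12 tank \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1

# lz4 is effectively free on these CPUs and reduces total bytes written.
zfs set compression=lz4 tank
zfs set atime=off tank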
These are cute, I'd really like to see the "serious" version.
Something like a Ryzen 7745, 128GB of ECC DDR5-5200, no fewer than two 10GbE ports (though unrealistic given the size; if they were SFP+ that'd be incredible), drives split across two different NVMe RAID controllers. I don't care how expensive or loud it is or how much power it uses; I just want a coffee-cup-sized cube that can handle the kind of shit you'd typically bring a rack along for. It's 2025.
Are there any mini NAS with ECC ram nowadays? I recall that being my personal limiting factor
Related question: does anyone know of a USB-C power bank that can be effectively used as a UPS? That is to say, one that can be charged while maintaining power to the load (obviously with a rate of charge greater by a few watts than the load).
Most models I find reuse the most powerful USB-C port as ... the recharging port, so they're unusable as a DC UPS.
Context: my home server is my old https://frame.work motherboard running Proxmox VE with 64GB RAM and 4TB NVMe, powered by USB-C and drawing ... 2 watts at idle.
Which SSDs do people rely on? Considering PLP (power loss protection), write endurance/DWPD (no QLC), and other bugs that affect ZFS especially? It is hard to find options that do these things well for <$100/TB, with lower-end datacenter options (e.g., Samsung PM9A3) costing maybe double what you see in a lot of builds.
What are the non-Intel mini NAS options for lower idle power?
I know of FriendlyElec CM3588, are there others?
This discussion got me curious: how much data are you all hoarding?
For me, the media library is less than 4TB. I have some datasets that, put together, come to 20TB or so. All this is handled by a microserver with 4 SATA spinning-metal drives (and a RAID-1 NVMe card for the OS).
I would imagine most HN'ers to be closer to the 4TB bracket than the 40TB one. Where do you sit?
I use a 12600H MS-01 with 5x 4TB NVMe. Love the SFP+ ports, since the DAC cable doesn't need Ethernet-to-SFP adapters. Intel vPro is not perfect but works just fine for remote management access. I also plug in a bus-powered dual-SSD enclosure, which is used for MinIO object storage.
It's a file server (when did we start calling these "NAS"?) with Samba and NFS, but also some database stuff. No VMs or Docker. Just a file and database server.
It has full-disk encryption with TPM unlocking using my custom keys, so it can boot unattended. I'm quite happy with it.
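For anyone curious how that kind of unattended TPM unlock is typically wired up, here is a sketch using systemd-cryptenroll (the device path and PCR choice are examples, not necessarily this poster's exact setup):

```shell
# Enroll the LUKS volume against the TPM. Binding to PCR 7 ties the
# unlock to the Secure Boot state, so a tampered boot chain won't
# auto-unlock the disk.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Then reference it in /etc/crypttab so it unlocks at boot:
#   cryptroot  /dev/nvme0n1p2  none  tpm2-device=auto
```

With custom Secure Boot keys enrolled as well, the measurement chain is anchored to keys you control rather than the vendor's.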
That's cool, except that NAND flash is a horrible medium for hoarding data. It has to be powered all the time, as the cells need to be refreshed periodically, and if you exceed a threshold of around 80 percent occupied storage you get a huge performance penalty due to the internal memory organization.
I was about to order the GMKtec G9 and then saw Jeff's video about it on the same day. All those issues, even with the later fixes he showed, are a big no-no for me. Instead, I went with an Odroid H4-Ultra with an Intel N305, 48GB of Crucial DDR5, and 4x 4TB Samsung 990 Evo SSDs (low power usage), plus a 2TB SATA SSD to boot from. Yes, the SSDs are way overkill and pretty expensive at $239 per 990 Evo (got them with a deal at Amazon). It's running TrueNAS. I am somewhat space-limited with this system, didn't want spinning disks (the whole house slightly shakes when pickup or trash trucks pass by), wanted a fun project, and also wanted to go as small as possible.
No issues so far. The system is completely stable, though I did add a separate fan at the bottom of the Odroid case to help cool the NVMe SSDs. Even with a single lane of PCIe per drive, the 2.5Gbit/s networking gets maxed out. Maybe I could try bonding the two network ports, but I don't have any client devices that could use it.
I had an eye on the Beelink ME Mini too, but I don't think the NVMe disks are sufficiently cooled under load, especially on the outer side of the disks.
My main challenge is that we don't have wired ethernet access in our rooms so even if I bought a mini-NAS and attached it to the router over ethernet, all "clients" will be accessing it over wifi.
Not sure if anyone else has dealt with this and/or how this setup works over wifi.
Question regarding these mini PCs: how do you connect them to plain old hard drives? Is Thunderbolt/USB reliable enough these days to run 24/7 without disconnects, like onboard SATA?
There are lots of used ex-corporate mini desktops on eBay. Dell, Lenovo, etc. They're probably the best value.
I've always been puzzled by the strange choice of RAIDing multiple small-capacity M.2 NVMe drives in these tiny low-end Intel boxes with severely limited PCIe lanes, using only one lane per SSD.
Why not a single large-capacity M.2 SSD using 4 full lanes, plus proper backup to a cheaper, larger-capacity, and more reliable spinning disk?
> Testing it out with my disk benchmarking script, I got up to 3 GB/sec in sequential reads.
To be sure... is the data compressible, or repeated? I have encountered an SSD that silently compressed the data I wrote to it (verified by checking its stats on blocks written). I don't know if there are SSDs that silently deduplicate data.
(An obvious solution is to copy data from /dev/urandom. But beware of the CPU cost of /dev/urandom: on a recent machine, it takes 3 seconds to read 1GB from /dev/urandom, so that would be the bottleneck in a write test. But at least for a read test, it doesn't matter how long the data took to write.)
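A sketch of a benchmark that sidesteps both the compression question and the RNG bottleneck: generate the random data once, then reuse it (the paths and sizes here are illustrative):

```shell
# Generate 1 GiB of incompressible data up front; openssl's DRBG is
# typically much faster than reading /dev/urandom.
openssl rand -out /tmp/rand.bin $((1024*1024*1024))

# Write test: O_DIRECT plus fsync so the page cache doesn't inflate numbers.
dd if=/tmp/rand.bin of=/mnt/nas/testfile bs=1M oflag=direct conv=fsync

# Read test: drop caches first so you measure the device, not RAM.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/mnt/nas/testfile of=/dev/null bs=1M iflag=direct
```

If the drive reports suspiciously high numbers even with data like this, checking the controller's "blocks written" SMART stats before and after (as the parent did) is the definitive test.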
NVMe NAS is completely and totally pointless with such crap connectivity.
What in the WORLD is preventing these systems from getting at least 10gbps interfaces? I have been waiting for years and years and years and years and the only thing on the market for small systems with good networking is weird stuff that you have to email Qotom to order direct from China and _ONE_ system from Minisforum.
I'm beginning to think there is some sort of conspiracy to not allow anything smaller than a full size ATX desktop to have anything faster than 2.5gbps NICs. (10gbps nics that plug into NVMe slots are not the solution.)
I still think it's highly underrated to use FS-Cache with NASes (usually configured with cachefilesd) for some local, dynamically scaling, client-side NVMe caching.
It helps a ton with response times on any NAS that's primarily spinning rust, especially when dealing with a decent amount of small files.
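For anyone who hasn't set this up, the moving parts are roughly as follows (package name and paths assume a Debian-flavored client; the mount source is an example):

```shell
# 1. Install the cache daemon and point it at local NVMe storage.
apt install cachefilesd
# In /etc/cachefilesd.conf:
#   dir /var/cache/fscache          # must live on the fast local disk
#   brun 10%  bcull 7%  bstop 3%    # free-space thresholds for culling
systemctl enable --now cachefilesd

# 2. Mount NFS shares with the 'fsc' option so reads go through FS-Cache.
mount -t nfs -o fsc,vers=4.2 nas:/tank/media /mnt/media
```

The cache grows and shrinks on its own between those thresholds, which is what makes it nicer than statically carving out a cache partition.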
I was recently looking for a mini PC to use as a home server with extendable storage. After comparing different options (mostly Intel), I went with the Ryzen 7 5825U (Beelink SER5 Pro) instead. It has an M.2 slot for an SSD and I can install a 2.5" HDD too. The only downside is that the HDD bay is limited to 7 mm height (basically a 2TB storage limit), but I have a 4TB disk connected via USB for "cold" storage. After years of using different models with Celeron or Intel N CPUs, Ryzen is a beast (and the TDP is only 15W). In my case, AMD has now replaced almost all the compute power in my home (with the exception of the smartphone) and I don't see many reasons to go back to Intel.
Is it possible (and easy) to make a NAS with harddrives for storage and an SSD for cache? I don't have any data that I use daily or even weekly, so I don't want the drives spinning needlessly 24/7, and I think an SSD cache would stop having to spin them up most of the time.
For instance, most reads from a media NAS will probably be biased towards both newly written files, and sequentially (next episode). This is a use case CPU cache usually deals with transparently when reading from RAM.
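It is possible, with caveats. A sketch of the two common Linux approaches (device names are placeholders): ZFS's L2ARC is a read-only cache, so writes still wake the disks; bcache in writeback mode absorbs writes too, which is closer to what you'd want for keeping drives spun down.

```shell
# ZFS: add an SSD partition as an L2ARC read cache to an existing pool.
# Helps read latency only; every write still hits the spinning vdevs.
zpool add tank cache /dev/nvme0n1p1

# bcache: put an SSD in front of an HDD; writeback mode buffers writes.
make-bcache -C /dev/nvme0n1p2          # caching device (SSD)
make-bcache -B /dev/sda                # backing device (HDD)
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

Neither will guarantee the disks stay asleep (metadata scans, scrubs, and cache misses still spin them up), but for the "mostly recent files, mostly sequential" media pattern described above, the hit rate should be high.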
So Jeff is really decent guy that doesn’t keep terabytes of Linux ISOs.
I am currently running an 8x 4TB NVMe NAS via OpenZFS on TrueNAS (Linux). It is good, but my box is quite large. I built it with a standard AMD motherboard, using both the built-in NVMe slots and a bunch of PCIe expansion cards. It is very fast.
I was thinking of replacing it with an Asustor FLASHSTOR 12: a much more compact form factor, and it fits up to 12 NVMes. I will miss TrueNAS, though, but it would be so much smaller.
I think the N100 and N150 suffer the same weakness for this type of use case: SSD storage plus 10Gb networking. We need a next-generation chip that can drive more PCIe lanes with roughly the same power efficiency.
I would remove points for the built-in, non-modular, non-standardized power supply. It's not fixable, and it's not comparable to Apple in quality.
Something that Apple should have done with the Time Capsule, but they were too focused on services revenue.
Whenever these things come up, I have to point out that most of these manufacturers don't do BIOS updates. Since Spectre/Meltdown, we've seen new CPU and BIOS vulnerabilities every few months to a year.
I know you can patch microcode at runtime/boot, but I don't think that covers all vulnerabilities.
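You can at least audit what the running kernel believes is mitigated; anything a runtime microcode load can't fix shows up here (Linux-only paths):

```shell
# One line per known CPU vulnerability and its mitigation status,
# e.g. "spectre_v2: Mitigation: Enhanced / Automatic IBRS" or "Vulnerable".
grep . /sys/devices/system/cpu/vulnerabilities/*

# Microcode revision actually loaded, to compare against vendor releases.
grep -m1 microcode /proc/cpuinfo
```

If a mitigation needs a BIOS-level change (new microcode baked into the firmware, or a firmware setting), this is where an abandoned vendor BIOS leaves "Vulnerable" lines you can't clear.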
I've been thinking about moving my NAS from hard drives to solid state. The drives are so loud, all the time; it's very annoying.
My first experience with these cheap mini PCs was with a Beelink, and while it was very positive, it makes me question the longevity of the hardware. For a NAS, that's important to me.
What types of distributed/network filesystem are people running nowadays on Linux?
thanks for the article.
I'm dreaming of this: a mini-NAS connected directly to my TV via HDMI or USB. I think I'd want HDMI and let the NAS handle streaming/decoding. But if my TV can handle enough formats, maybe USB would do.
anyone have experience with this?
I've been using a combination of a media server on my Mac with a client on Apple TV, and I have no end of glitches.
These look compelling, but unfortunately, we know that SSDs are not nearly as reliable as spinning rust hard drives when it comes to data retention: https://www.tomshardware.com/pc-components/storage/unpowered...
(I assume M.2 cards are the same, but have not confirmed.)
If this isn’t running 24/7, I’m not sure I would trust it with my most precious data.
Also, these things are just begging for a 10Gbps Ethernet port, since you're going to lose out on a ton of bandwidth over 2.5Gbps... though I suppose you could probably use the USB-C port for that.
Would be nice to see what those little N100 / N150 (or big brothers N305 / N350) can do with all that NVMe. Raw throughput is pretty whatever, but hypothetically, if the CPU isn't too much of a bottleneck, there's some interesting IOPS potential.
Really hoping we see 25/40GBASE-T start to show up, so lower market segments like this can do 10Gbit. Hopefully we'll see some embedded Ryzens (or other more PCIe-willing contenders) in this space at a value-oriented price. But I'm not holding my breath.
I want a NAS I can put 4TB NVMes in, plus a 12TB HDD running backup every night, with the ability to shove a 50Gbps SFP card in it so I can truly have a detached storage solution.
These need remote management capabilities (IPMI) to not be a huge PITA.
I will wait until they have an efficient AMD chip, for one very simple reason: AMD graciously allows ECC on some* CPUs.
*Well, they allowed it on all CPUs, but after Zen 3 they saw how much money Intel was making and joined in. Now you must get a "PRO" CPU to get ECC support, even on mobile (but good luck finding ECC SO-DIMMs).
Intel N150 is the first consumer Atom [1] CPU (in 15 years!) to include TXT/DRTM for measured system launch with owner-managed keys. At every system boot, this can confirm that immutable components (anything from BIOS+config to the kernel to immutable partitions) have the expected binary hash/tree.
TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with Windows IoT and hopefully future support from other operating systems. It would be a valuable feature addition to Proxmox, FreeNAS and OPNsense.
Some (many?) N150 devices from Topton (China) ship without Boot Guard fused, which _may_ make it possible to port coreboot to those platforms. Hopefully ODROID (Korea) will ship N150 devices. Then we could have fanless N150 devices with coreboot and DRTM for less-insecure [2] routers and storage.
[1] Gracemont (E-core): https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-c... | https://youtu.be/agUwkj1qTCs (Intel Austin architect, 2021)
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, https://news.ycombinator.com/item?id=44426726#44427986