Yeah, that's mostly fair, but it kind of misses the point. This is a professional tool for AI R&D. Not something that strives to be the cheapest possible option for the homelab. It's fine to use them in the lab, but that's not who they built it for.
If I wanted to I could go on ebay, buy a bunch of parts, build my own system, install my own OS, compile a bunch of junk, tinker with config files for days, and then fire up an extra generator to cope with the 2-4x higher power requirements. For all that work I might save a couple of grand and will be able to actually do less with it. Or... I could just buy a GB10 device and turn it on.
It comes preconfigured to run headless and use the NVIDIA ecosystem. Mine has literally never had a monitor attached to it. NVIDIA has guides and playbooks, preconfigured Docker containers, and documentation to get me up and developing in minutes to hours instead of days or weeks. If it breaks I just factory reset it. On top of that it has the added benefit of 200GbE QSFP networking that would cost $1,500 on its own. If I decide I need more oomph and want a cluster, I just buy another one, connect them, and copy/paste the instructions from NVIDIA.
> This is a professional tool for AI R&D.
No, it really isn't, because it's deliberately gimped and doesn't support the same feature set as the datacenter GPUs[1]. So as a professional development box to e.g. write CUDA kernels before you burn valuable B200 time it's completely useless. You're much better off getting an RTX 6000 or two, which is also gimped, but at least is much faster.
[1] -- https://github.com/NVIDIA/dgx-spark-playbooks/issues/22
You could also rent a better machine for a few dollars an hour with similar hassle.