I have a slightly cheaper, similar box, the NVIDIA Thor Dev Kit. The point is exactly to avoid deploying code to servers that cost half a million dollars each. It's quite capable of running or training smart LLMs like Qwen3-Next-80B-A3B-Instruct-NVFP4, so long as you don't tear your hair out first figuring out its peculiarities and fighting with bleeding-edge nightly vLLM builds.
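For anyone wondering what that looks like in practice, here's a rough sketch using vLLM's offline Python API. The Hugging Face repo id is just assumed from the model name above, and the memory/context settings are guesses for a single unified-memory devkit, not tuned values; nightly builds can and do change behavior.

```python
# Rough sketch: loading the NVFP4 checkpoint with vLLM's offline API.
# The repo id below is an assumption based on the model name in this thread;
# point it at whatever checkpoint you actually have.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-Next-80B-A3B-Instruct-NVFP4",  # assumed repo id
    max_model_len=32768,            # keep the KV cache modest on one box
    gpu_memory_utilization=0.90,    # leave headroom for the OS on unified memory
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize what NVFP4 quantization buys you."], params)
print(outputs[0].outputs[0].text)
```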
> training smart LLMs like Qwen3-Next-80B-A3B-Instruct-NVFP4
Sounds interesting; can you suggest any good discussions of this (on the web)?