You're spreading FUD. There's nothing you can run locally that matches the speed or intelligence of a SOTA model.
Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT-5-class model that you can run at full precision on a small DGX Spark or Mac cluster, and it still holds up well after quantization.
You may be right about the level of models you can actually run on consumer hardware, but it's not FUD, and you're being needlessly aggressive here.