Right now, open models that run on hardware costing under $5,000 top out at roughly Sonnet 3.7-level performance. They can do a bit better on specific tasks if you fine-tune them for that task or distill some reasoning ability from Opus, but across a broad range of benchmarks, that's about where they land.
You can get open models that are competitive with Sonnet 4.6 on benchmarks (though some people argue these models are tuned too heavily toward benchmarks, so they may be slightly weaker on real-world tasks than the numbers suggest). However, you need more than 500 GiB of VRAM to run even aggressive quantizations (4 bits per weight or less), and to run them at any reasonable speed you need a multi-GPU setup rather than the now-discontinued 512 GiB Mac Studio.
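The VRAM figure follows from simple arithmetic. A rough sketch; the 1-trillion-parameter count below is an illustrative assumption, not the spec of any particular model:

```python
def weight_memory_gib(n_params: float, bits_per_weight: float) -> float:
    """GiB needed just to hold the quantized weights (no KV cache or activations)."""
    return n_params * bits_per_weight / 8 / 2**30

# A hypothetical 1-trillion-parameter model at 4-bit quantization:
print(round(weight_memory_gib(1e12, 4), 1))  # ~465.7 GiB for the weights alone
```

KV cache, activations, and per-framework overhead come on top of that, which is how a 4-bit quantization of a model in this size class ends up past 500 GiB in practice.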
The big advantage is full control: you're not paying a $200/month subscription while still being throttled on tokens, you're guaranteed your data isn't being used to train models, and you're not financially supporting an industry that many people find questionable. You can also run "abliterated" versions, which strip out the safety training that makes models refuse to answer certain questions, or fine-tunes that adapt a model for other purposes: improving specific coding abilities, making it better for roleplay, and so on.
You don't need that much VRAM to run even the very largest models: these are MoE (mixture-of-experts) models, where only a small fraction of the weights is active for any given token. If you run multiple GPUs and have enough PCIe lanes (e.g. on a proper HEDT platform), CPU-to-GPU transfers become a bit less painful. More importantly, streaming weights from disk becomes feasible, which saves on expensive RAM. The big labs avoid this only because, at scale, it costs more power than keeping weights in DRAM; that aside, it's quite sound.
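A back-of-the-envelope sketch of why streaming works for MoE models: even in the worst case, where every active weight has to cross the interconnect for every token, the link bandwidth sets a floor on usable decode speed. The 37B active-parameter count and 32 GB/s link speed are illustrative assumptions, not the specs of any particular model or machine:

```python
def tokens_per_sec_ceiling(active_params: float, bits_per_weight: float,
                           link_gb_per_s: float) -> float:
    """Decode-speed upper bound if all active weights are streamed every token."""
    bytes_per_token = active_params * bits_per_weight / 8
    return link_gb_per_s * 1e9 / bytes_per_token

# ~37B active params at 4 bits over a ~32 GB/s link (roughly PCIe 4.0 x16):
print(round(tokens_per_sec_ceiling(37e9, 4, 32), 2))  # ~1.73 tokens/s
```

Real setups do better than this pessimistic bound, since hot experts can stay cached in VRAM and consecutive tokens often reuse the same experts, so only a fraction of the active weights actually needs to be streamed each step.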