I don't think anyone can honestly claim that a huge frontier model is actually going to be matched by something running locally in 64 GB.
I have read many comments claiming that various ~30B models (Qwen3.5, Gemma 4, and now Qwen3.6) are "better than Sonnet".
I don't know how large Sonnet and Opus actually are, but the rumored sizes are 1T and 5T parameters respectively.
You don't have to use the most recent bleeding-edge model to succeed, though. A local FOSS coding agent coupled with a reasonably priced LLM could yield the best ROI.