Never; local models are for hobby use and (extreme) privacy concerns.
A less paranoid and far more economical approach is to just lease a server and run the models there.
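FWIW the leased-server route is mostly plumbing. A minimal sketch, assuming you've stood up an OpenAI-compatible server (e.g. vLLM) on the rented box; the hostname, port, and model name below are placeholders:

```python
# Assumes something like this is already running on the leased server:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
# vLLM (and llama.cpp's server) expose an OpenAI-compatible API,
# so the standard openai client works unchanged.
from openai import OpenAI

client = OpenAI(
    base_url="http://my-leased-server:8000/v1",  # placeholder hostname
    api_key="not-needed",  # vLLM ignores the key unless --api-key is set
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello from my own box."}],
)
print(resp.choices[0].message.content)
```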
This.
I've spent quite some time on r/LocalLLaMA and have yet to see a convincing "success story" of productively using local models to replace GPT/Claude etc.