I ran a fairly large production test of this and on _every_ measure except for privacy it was worse than a free tier server hosted LLM.
Not happy about that, as I would like to see more local models, but that's the current state of things.
https://sendcheckit.com/blog/ai-powered-subject-line-alterna...
> on _every_ measure except for privacy it was worse than a free tier server hosted LLM
Would you be able to compare this to other local models in its class and above that would fit on consumer-grade hardware?