You seem like you know what you're talking about... what inference engine should I use? (linux, 4090)
I keep having "I tried it but it sucks" issues, mostly around tool calling, and it's not clear whether it's the model or Ollama. And it's not one model in particular - any of them, really.
I've had really good success with LMStudio and GLM 4.7 Flash and the Zed editor, which has a baked-in integration with LMStudio. I am able to one-shot whole projects this way, and it seems to be constantly improving. Some recent update even allowed the agent to ask me if it can do a "research" phase - so it'll actually reach out to websites and read docs and code from GitHub if you allow it. GLM 4.7 Flash has been the most adept at tool calling I've found, but the Qwen 3 and 3.5 models are also fairly good, though they run into more snags than I've seen with GLM 4.7 Flash.
I don’t know if any of the engines are fully tested yet.
For new LLMs I get in the habit of building llama.cpp from upstream head and checking for updated quantizations right before I start using it. You can also download llama.cpp CI builds from their releases page, but on Linux it’s easy to set up a local build.
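Roughly what I mean by a local build, as a sketch (assumes git, cmake, and the CUDA toolkit are installed; the checkout path is illustrative):

```shell
# Sketch: rebuild llama.cpp from upstream head on Linux with the CUDA
# backend (what you'd want for a 4090). Assumes git, cmake, and the CUDA
# toolkit are installed; REPO_DIR is just an illustrative path.
REPO_DIR="${REPO_DIR:-$HOME/src/llama.cpp}"

update_and_build() {
  if [ ! -d "$REPO_DIR" ]; then
    git clone https://github.com/ggml-org/llama.cpp "$REPO_DIR"
  fi
  cd "$REPO_DIR" || return 1
  git pull --ff-only                       # sync to upstream head
  cmake -B build -DGGML_CUDA=ON            # configure with the CUDA backend
  cmake --build build --config Release -j  # parallel release build
}

# update_and_build   # uncomment to actually clone/pull and build
```

Rebuilding right before trying a new model matters because model-specific fixes (chat templates, tokenizer quirks) often land in llama.cpp days after release.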
If you don’t want to be a guinea pig for untested work, then the safe option would be to wait 2-3 weeks.
Just use OpenRouter or the Google AI playground for the first week till the bugs are ironed out. You still learn the nuances of the model, and then you can switch to local. In addition, you might pick up enough nuance to see whether quantization is having any effect.
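For the OpenRouter route, hitting a model through their OpenAI-compatible endpoint is just a curl away. A sketch (needs a real `OPENROUTER_API_KEY` and network access; the model id below is illustrative, not a recommendation):

```shell
# Sketch: try a new model over OpenRouter's OpenAI-compatible chat
# completions API before committing to a local setup. Requires
# OPENROUTER_API_KEY to be set; the model id used below is illustrative.
openrouter_chat() {
  model="$1"
  prompt="$2"
  curl -sS https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"$model\", \"messages\": [{\"role\": \"user\", \"content\": \"$prompt\"}]}"
}

# openrouter_chat "qwen/qwen3-32b" "hello"   # uncomment with a key set
```

Comparing the hosted model's tool-calling behavior against your local quant of the same model is a quick way to tell whether a failure is the model or the local inference stack.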
For the specific issue the parent is talking about, you really need to try the various tools yourself, and if you're getting really shit results, assume it's the implementation that's wrong, and either find an existing bug-tracker issue or create a new one.
Same thing happened when GPT-OSS launched: a bunch of projects had "day-1" support, but in reality that just meant you could basically load the model - a bunch of them had broken tool calling, some chat prompt templates were broken, and so on. Even llama.cpp, which usually has the most recent support (in my experience), had this issue, and it wasn't until a week or two after launch that GPT-OSS could be fairly evaluated with it. Then Ollama/LM Studio update their bundled llama.cpp some days after that.
So it's a process thing, not "this software is better than that", and it heavily depends on the model.