The article has a few good tips for using Ollama. Perhaps it should note that the Gemma 4 models are not really trained for strong performance with coding agents like OpenCode, Claude Code, pi, etc. The Gemma 4 models are excellent for applications requiring tool use, data extraction to JSON, etc. I asked Gemini Pro about this earlier and Gemini Pro recommended qwen 3.5 models specifically for coding, and backed that up with interesting material on training. This makes sense, and it matches something I do in practice: use strong models to build effective applications on top of small, efficient models.
Oh yeah, absolute genius. I asked GPT-2 about Claude Opus 4.6 and it said “this is not a recommendation. You might get some benefits from Opus… but this is not what you want.” Damn, real wisdom from the OG there. What a legend.
> I asked Gemini Pro about this earlier and Gemini Pro recommended qwen 3.5 models specifically for coding, and backed that up with interesting material on training.
The Gemma models were literally released yesterday. You can’t ask LLMs for advice on topics like this and expect accurate information.
Please don’t repeat LLM-sourced answers as if they were canonical information.