> I asked Gemini Pro about this earlier and Gemini Pro recommended qwen 3.5 models specifically for coding, and backed that up with interesting material on training.
The Gemma models were literally released yesterday. You can’t ask LLMs for advice on these topics and get accurate information.
Please don’t repeat LLM-sourced answers as canonical information.
I spent two hours doing my own research before asking for Gemini’s analysis, which reinforced my own opinion that the Gemini models historically have not been trained and targeted for agentic coding use.
Have you tried using the new Gemma 4 models with agentic coding tools? If you do, you might end up agreeing with me.
LLMs can search the web, although I don’t trust the LLM (or someone repeating its claims) without quotes and URLs showing where it got the information.
It's not just LLM-sourced though; folks have literally tried this after the release with the 26A4B model and it wasn't very good. Maybe the dense ~31B model is worthwhile, though.