Strongly agree. Gemma3:27b and Qwen3-vl:30b-a3b are among my favorite local LLMs and handle the vast majority of translation, classification, and categorization work that I throw at them.
What hardware are you running them on? Are you using Ollama?