Wen gemma4? :)
But on a serious note, I'm happy to see more research going into vSLMs (very small...) My "dream" scenario is to have the "agentic" stuff run locally, and call into the "big guns" as needed. Being able to finetune these small models on consumer cards is awesome, and can open up a lot of niche stuff for local / private use.
Trust me, as a daily at-home Gemma user myself, I'm just as excited for what's upcoming as you are, maybe even more, because I have some hints of what's to come.
>My "dream" scenario is to have the "agentic" stuff run locally, and call into the "big guns" as needed.
FunctionGemma 270m is your starter pack for this: train your own functions to call out to whatever larger models you choose. It's been quite effective in my testing, and the finetuning guides should show you how to add your own capabilities.
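For anyone curious what that routing looks like in practice, here's a minimal sketch of the pattern: a small local model either answers directly or emits a function call that hands the prompt off to a bigger model. Everything here is illustrative, the model functions are stubs and the escalation heuristic is a toy, not FunctionGemma's actual API.

```python
import json

def local_model(prompt: str) -> str:
    """Stand-in for a small on-device model (e.g. a fine-tuned 270M)
    that either answers directly or emits a function call as JSON."""
    if len(prompt.split()) > 8:  # toy heuristic: escalate long prompts
        return json.dumps({"call": "big_model", "args": {"prompt": prompt}})
    return json.dumps({"answer": "handled locally"})

def big_model(prompt: str) -> str:
    """Stand-in for a remote large-model API call (the 'big guns')."""
    return f"big-model answer for: {prompt!r}"

def dispatch(prompt: str) -> str:
    """Run the local model; if it emits a function call, route it out."""
    out = json.loads(local_model(prompt))
    if out.get("call") == "big_model":
        return big_model(**out["args"])
    return out["answer"]

print(dispatch("hi"))  # short prompt stays local
print(dispatch("please summarize this very long and complicated document for me"))
```

In a real setup the escalation decision comes from the finetuned model itself (that's what the function-calling training buys you), not from a word-count check, but the dispatch shape is the same.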
Speaking from the research side, it's incredible how many small models, not just Gemma, are achieving the performance levels of much larger models from just a year or two ago. It's personally why I stay in this space.