Hacker News

vessenes | last Thursday at 8:41 PM

Hey! Love the Gemma series. Question that came to mind reading the announcement post - the proposal there is that you can use this as a local backbone and have it treat a larger model as a 'tool call' when more reasoning is needed.
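Concretely, the flow I'm picturing is something like this sketch (purely illustrative; run_local_model and call_frontier_model are made-up stand-ins here, not actual Gemma or Gemini APIs):

    # Illustrative only: a fast local model that escalates to a larger
    # hosted model via a "tool call" when it decides the task is too hard.
    # Both functions below are hypothetical stand-ins, not real APIs.

    def run_local_model(prompt: str) -> dict:
        # Fast on-device model: either answers directly or asks to escalate.
        if "prove" in prompt or "multi-step" in prompt:
            return {"action": "escalate", "reason": "needs deeper reasoning"}
        return {"action": "answer", "text": f"(local) quick answer to: {prompt}"}

    def call_frontier_model(prompt: str) -> str:
        # Stand-in for the tool call out to a larger frontier model.
        return f"(frontier) careful answer to: {prompt}"

    def respond(prompt: str) -> str:
        result = run_local_model(prompt)
        if result["action"] == "escalate":
            return call_frontier_model(prompt)
        return result["text"]

    print(respond("what's 2 + 2?"))
    print(respond("prove this lemma about convergence"))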

In my mind we want a very smart frontier model as the orchestrating layer, but not slowing everything down by doing every little thing; this seems like the opposite - a very fast layer that can be like "wait a minute, I'm too dumb for this, I need some help".

My question is - does the Gemma team run any evaluations around this particular 'call a (wiser) friend' strategy? How are you thinking about this? Is this architecture flow more an accommodation to the product goal - fast local inference - or do you think it could actually be optimal?


Replies

canyon289 | last Thursday at 8:59 PM

We evaluate many of the things you alluded to, such as speed on device, output correctness, and also "is this something that would be useful", that last one being a bit abstract.

The way we think about it is: what do developers and users need, and is there a way we can fill that gap in a useful way? With this model we had the same hypothesis you had: there are fantastic larger models out there pushing the frontier of AI capabilities, but there's also a niche for a smaller, customizable model that's quick to run and quick to tune.

What is optimal ultimately falls to you and your use cases (which I'm only guessing at here); you now have options between Gemini and Gemma.
