Hacker News

zurfer today at 7:03 PM | 3 replies

It's a cool release, but if someone on the Google team reads this: Flash 2.5 is awesome in terms of latency and total response time without reasoning. In quick tests, this model seems to be about 2x slower, so for certain use cases, like quick one-token classification, Flash 2.5 is still the better model. Please don't stop optimizing for that!


Replies

edvinasbartkus today at 7:50 PM

Did you try setting thinkingLevel to minimal?

thinkingConfig: { thinkingLevel: "low" }

More about it here https://ai.google.dev/gemini-api/docs/gemini-3#new_api_featu...
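As a sketch of what that looks like in a full generateContent request body (the model choice and prompt here are placeholders, not from the thread; the nesting of thinkingConfig under generationConfig follows the linked docs):

```python
import json

# Hypothetical request body for the REST generateContent endpoint.
# The prompt text is a placeholder; thinkingLevel is the field the
# reply above refers to.
payload = {
    "contents": [{"parts": [{"text": "Is this review positive or negative?"}]}],
    "generationConfig": {
        # "low" as in the snippet above; "minimal" where the model supports it
        "thinkingConfig": {"thinkingLevel": "low"}
    },
}

print(json.dumps(payload, indent=2))
```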

Tiberium today at 8:23 PM

You can still set thinking budget to 0 to completely disable reasoning, or set thinking level to minimal or low.
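For the budget-of-0 route, a minimal sketch of the same request body with thinkingBudget instead of thinkingLevel (field names per the Gemini API docs; the prompt is a placeholder):

```python
import json

# Hypothetical request body: a thinkingBudget of 0 disables reasoning,
# which suits the quick one-token classification case from the top comment.
payload = {
    "contents": [{"parts": [{"text": "spam or not spam?"}]}],
    "generationConfig": {"thinkingConfig": {"thinkingBudget": 0}},
}

print(json.dumps(payload))
```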

retropragma today at 7:46 PM

That's more of a flash-lite thing now, I believe