Hacker News

Gemini 3 Flash: Frontier intelligence built for speed

609 points | by meetpateltech today at 4:42 PM | 284 comments

Docs: https://ai.google.dev/gemini-api/docs/gemini-3

Developer Blog: https://blog.google/technology/developers/build-with-gemini-...

Model Card [pdf]: https://deepmind.google/models/model-cards/gemini-3-flash/

Gemini 3 Flash in Search AI mode: https://blog.google/products/search/google-ai-mode-update-ge...

DeepMind Page: https://deepmind.google/models/gemini/flash/


Comments

Tiberium today at 4:52 PM

Yet again Flash receives a notable price hike: from $0.3/$2.5 for 2.5 Flash to $0.5/$3 (+66.7% input, +20% output) for 3 Flash. Also, as a reminder, 2 Flash used to be $0.1/$0.4.
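A quick sanity check on those percentages, as a minimal sketch (prices are the per-1M-token figures quoted above, not independently verified):

```python
# Per-1M-token list prices (USD) quoted in the comment above: (input, output).
prices = {
    "2 Flash":   (0.10, 0.40),
    "2.5 Flash": (0.30, 2.50),
    "3 Flash":   (0.50, 3.00),
}

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new, rounded to one decimal place."""
    return round((new - old) / old * 100, 1)

in_hike = pct_increase(prices["2.5 Flash"][0], prices["3 Flash"][0])
out_hike = pct_increase(prices["2.5 Flash"][1], prices["3 Flash"][1])
print(in_hike, out_hike)  # 66.7 20.0
```

So the quoted +66.7% input / +20% output figures check out against the listed prices.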

walthamstow today at 5:10 PM

I'm sure it's good; I thought the last one was too. But it seems like the backdoor way to increase prices is to release a new model.

tanh today at 4:47 PM

Does this imply we don't need as much compute for models/agents? How can any other AI model compete against that?

timpera today at 6:35 PM

Looks awesome on paper. However, after trying it on my usual tasks, it is still very bad at French, especially for creative writing. The gap between the Gemini 3 family and GPT-5 or Sonnet 4.5 is significant for my use case.

Also, I hate that I cannot put the Google models into a "Thinking" mode like in ChatGPT. When I set GPT-5.1 Thinking on a legal task and tell it to check and cite all sources, it takes 10+ minutes to answer, but it does check everything and cites all its sources in the text. The Gemini models, even 3 Pro, always answer after a few seconds and never cite their sources, making it impossible to click through and verify the answer. That makes them unusable for these tasks. (I have the $20 subscription for both.)

heliophobicdude today at 5:45 PM

Any word on whether this is using their diffusion architecture?

JeremyHerrman today at 5:07 PM

Disappointed to see continued increased pricing for 3 Flash (up from $0.30/$2.50 to $0.50/$3.00 for 1M input/output tokens).

I'm more excited to see 3 Flash Lite. Gemini 2.5 Flash Lite needs a lot more steering than regular 2.5 Flash, but it is a very capable model and combined with the 50% batch mode discount it is CHEAP ($0.05/$0.20).
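For scale, a minimal cost sketch using the figures above (the 2.5 Flash Lite prices and the 50% batch multiplier are taken from the comment; treat them as assumptions, not official pricing):

```python
def batch_cost_usd(input_tokens: int, output_tokens: int,
                   in_price: float, out_price: float,
                   batch_discount: float = 0.5) -> float:
    """Cost in USD for a batch job, with prices given per 1M tokens."""
    full = (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price
    return full * batch_discount

# 10M input + 2M output tokens on 2.5 Flash Lite ($0.10 / $0.40 per 1M)
# in batch mode: (1.00 + 0.80) * 0.5 = 0.90 USD.
print(round(batch_cost_usd(10_000_000, 2_000_000, 0.10, 0.40), 2))  # 0.9
```

At those rates, ten million input tokens processed overnight costs less than a dollar, which is why the batch discount matters so much for bulk workloads.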

nickvec today at 5:37 PM

So is Gemini 3 Fast the same as Gemini 3 Flash?

prompt_god today at 8:32 PM

It's better than Pro in a few evals. For anyone who has used it: how is it for coding?

retinaros today at 8:08 PM

I might have missed the bandwagon on Gemini, but I never found the models to be reliable. Now it seems they rank first on some hallucination benchmarks?

I just always found the taste of the GPT or Claude models more interesting in a professional context, and their end-user chat experience more polished.

Are there obvious enterprise use cases where Gemini models shine?

GaggiX today at 4:46 PM

They went too far; now the Flash model is competing with their Pro version: better SWE-bench and better ARC-AGI-2 than 3.0 Pro. I imagine they will improve 3.0 Pro before it leaves Preview.

Also, I don't see it mentioned in the blog post, but Flash supports more granular reasoning settings: minimal, low, medium, and high (like the OpenAI models), while Pro only offers low and high.
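For reference, the reasoning level is selected per request. Here is a minimal sketch of what the request body could look like; the field names follow the Gemini API's `generationConfig`/`thinkingConfig` convention, and the exact set of accepted level strings is taken from the comment above, so treat both as assumptions:

```python
import json

# Levels listed in the comment above (assumed, not verified against the docs).
LEVELS = {"minimal", "low", "medium", "high"}

def build_request(prompt: str, thinking_level: str) -> str:
    """Build a hypothetical generateContent request body as a JSON string."""
    if thinking_level not in LEVELS:
        raise ValueError(f"unsupported thinking level: {thinking_level}")
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingLevel": thinking_level},
        },
    }
    return json.dumps(body)

print(build_request("Classify this support ticket.", "minimal"))
```

Check the official API docs linked at the top of the thread for the authoritative parameter name and allowed values per model.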

i_love_retros today at 9:56 PM

Oh wow another LLM update!

jijji today at 5:56 PM

I tried Gemini CLI the other day: I typed in two one-line requests, and it then responded that it would not go further because I had run out of tokens. I've heard other people complain that it will rewrite your entire codebase from scratch, and that you should make backups before starting any codebase work with the Gemini CLI. I understand they are trying to compete with Claude Code, but this is not ready for prime time, IMHO.

jdthedisciple today at 7:52 PM

To those saying "OpenAI is toast"

ChatGPT still has 81% market share as of this very moment, vs Gemini's ~2%, and arguably still provides the best UX and branding.

Everyone and their grandma knows "ChatGPT"; who outside the developer bubble has even heard of Gemini Flash?

Yeah, I don't think that dynamic is shifting any time soon.

anonym29 today at 5:28 PM

I never have used, do not use, and conceivably never will use Gemini models, or any other models that require me to perform inference on Alphabet/Google's servers (Gemma models I can run locally or on other providers are fine). But kudos to the team over there for the work here; this does look really impressive. This kind of competition is good for everyone, even people like me who will probably never touch a Gemini model.

andrepd today at 4:59 PM

Is there a way to try this without a Google account?

moralestapia today at 4:57 PM

Not only is it fast, it is also quite cheap. Nice!


imvetri today at 5:02 PM

This is why Samsung is stopping flash production.
