Hacker News

GaggiX yesterday at 4:46 PM

They went too far; now the Flash model is competing with their Pro version. It posts better SWE-bench and ARC-AGI 2 scores than 3.0 Pro. I imagine they are going to improve 3.0 Pro before it leaves Preview.

Also, I don't see it mentioned in the blog post, but Flash supports more granular reasoning settings: minimal, low, medium, and high (like OpenAI models), while Pro only offers low and high.
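
For reference, here is a minimal sketch of how you might pick a thinking level through the google-genai Python SDK. The thinking_level field, its accepted values, and the model id are assumptions based on the current Gemini API docs and may differ for your SDK version, so treat this as illustrative rather than canonical:

    # Sketch: request a specific thinking level from a Gemini Flash model.
    # Assumes ThinkingConfig accepts a thinking_level string such as
    # "minimal", "low", "medium", or "high"; verify against the SDK docs.
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    response = client.models.generate_content(
        model="gemini-3-flash-preview",  # placeholder model id
        contents="Summarize this bug report in two sentences.",
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_level="minimal"),
        ),
    )
    print(response.text)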


Replies

minimaxir yesterday at 4:57 PM

"minimal" is a bit weird.

> Matches the “no thinking” setting for most queries. The model may think very minimally for complex coding tasks. Minimizes latency for chat or high throughput applications.

I'd prefer a hard "no thinking" setting to what this is.

skerit yesterday at 4:51 PM

> They went too far, now the Flash model is competing with their Pro version

Wasn't this the case with the 2.5 Flash models too? I remember being very confused at that time.

jug yesterday at 4:52 PM

I'm not sure how I'm going to live with this!