Hacker News

banjoe | last Wednesday at 5:20 PM | 3 replies

Wow, crushing 2.5 Flash on every benchmark is huge. Time to move all of my LLM workloads to a local GPU rig.


Replies

embedding-shape | last Wednesday at 5:27 PM

Just remember to benchmark it yourself first with your private task collection, so you can actually measure the models against each other. Pretty much any public benchmark is unreliable at the moment, and making model choices based on others' benchmarks is bound to leave you disappointed.
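The private-benchmark idea above can be sketched as a tiny harness: run every candidate model over your own task set and rank them by accuracy. This is a minimal illustration, not a real eval framework; the `tasks` list and the model callables are hypothetical stand-ins you would replace with your actual prompts and local-inference or API calls.

```python
# Minimal private-benchmark harness: score candidate models on your own
# task collection instead of trusting public leaderboards.

def score_model(generate, tasks):
    """Fraction of tasks where the model's output contains the expected answer."""
    hits = sum(1 for prompt, expected in tasks
               if expected.lower() in generate(prompt).lower())
    return hits / len(tasks)

def compare(models, tasks):
    """Rank models (name -> generate function) by accuracy on the task set."""
    return sorted(((name, score_model(fn, tasks)) for name, fn in models.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # Placeholder tasks and models, purely for demonstration.
    tasks = [("What is 2 + 2?", "4"), ("Capital of France?", "Paris")]
    models = {
        "model_a": lambda p: "The answer is 4" if "2 + 2" in p else "Paris",
        "model_b": lambda p: "I don't know",
    }
    for name, acc in compare(models, tasks):
        print(f"{name}: {acc:.0%}")
```

Substring matching is obviously crude; swap in whatever scoring (exact match, an LLM judge, unit tests) fits your workload — the point is only that the tasks stay private to you.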

red2awn | last Wednesday at 8:45 PM

Why would you use an omni model for a text-only workload? There's Qwen3-30B-A3B for that.

skrunch | last Thursday at 10:41 AM

Except the image benchmarks are compared against 2.0 — it seems suspicious that they would casually fall back to an older model for those.