I'm glad we're seeing a shift towards objectively scored tests.
We've been doing this at scale at https://gertlabs.com/rankings, and although the author appears to be running one-off samples, it's not surprising how well Kimi K2.6 performed. In our testing, especially for coding, Kimi is within statistical uncertainty of MiMo V2.5 Pro for the top open-weights model, and it handles tools much better than DeepSeek V4 Pro.
GPT 5.5 has a comfortable lead, but Kimi is on par with or better than Opus 4.6. The catch is that Kimi K2.6 is one of the slower models we've tested.
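Concretely, "within statistical uncertainty" means the confidence interval for the score difference includes zero. A minimal sketch of that check for pass/fail benchmarks (the numbers here are hypothetical, not our actual results):

    import math

    def pass_rate_diff_ci(k_a, n_a, k_b, n_b, z=1.96):
        """95% CI for the difference in pass rates of two models.
        k_a/n_a: passes and total tasks for model A; same for B.
        Normal approximation, fine for a few hundred tasks."""
        p_a, p_b = k_a / n_a, k_b / n_b
        se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        diff = p_a - p_b
        return diff - z * se, diff + z * se

    # Hypothetical scores: if the interval straddles 0, the two models
    # are "within statistical uncertainty" of each other on this suite.
    lo, hi = pass_rate_diff_ci(k_a=412, n_a=500, k_b=398, n_b=500)
    print(f"diff in pass rate: [{lo:+.3f}, {hi:+.3f}]")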
In my experience, benchmarks are pretty meaningless. Performance depends not only on the language and tasks given, but also on the prompts used and the expected results.
In my own internal tests it was really hard to judge whether GPT 5.5 or Opus 4.7 is the better model.
They have different styles, and it's basically down to preference. There were even times where I gave the win to one model, only to think about it more and change my mind.
At the end of the day I think I slightly prefer Opus 4.7.
Any thoughts on using it on Fireworks? It's extremely fast there.
Seems like in agentic workflows the Qwen Flash and DeepSeek Flash models are quite good.
Fits with another comment here yesterday saying the flash models are just better at tool calling.
Planning with GPT 5.5 and implementation with a flash model could be the bang-for-the-buck route.
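Something like this, roughly (call_model is a hypothetical stand-in for whatever provider client you use, and the model names are just the ones from this thread):

    def call_model(model: str, prompt: str) -> str:
        # Wire up your provider's chat API here (OpenAI-compatible,
        # Fireworks, etc.); this stub only shows the shape.
        raise NotImplementedError

    def plan_then_implement(task: str) -> str:
        # One call to the expensive, stronger model to produce the plan.
        plan = call_model(
            "gpt-5.5",
            f"Break this task into concrete implementation steps:\n{task}",
        )
        # The cheap, fast flash model does the actual implementation,
        # which is where most of the tokens (and tool calls) go.
        return call_model(
            "flash-model",
            f"Implement this plan. Output only code.\n{plan}",
        )

You'd pay frontier-model prices only for the short planning call.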