Hacker News

danny_codes · yesterday at 8:34 PM · 6 replies

This exactly. Kimi 2.5 has coding performance hardly discernible from Claude's. The only way to maintain a business edge is to crush open-source clients to force people into a closed ecosystem. Once there, create a context moat where people are not in control of their own context data (they cannot export it to open tooling). Maybe we can call it the Oracle play?

It’ll be interesting to see if companies get tricked. I think it’s inevitable that it goes like MySQL/Postgres, where the open tools get way better.


Replies

manacit · yesterday at 9:53 PM

This is, I'm sorry to say, simply not true. Anthropic and OpenAI are materially ahead of every open-source model out there at this time. The best the open models can hope for is to be Sonnet-adjacent, and even then I have not seen it.

mark_l_watson · yesterday at 10:43 PM

I agree about Kimi 2.5. Also, MiniMax M2.7, which just dropped, is amazing; it is just a 200G MoE model and inference is very fast. I tried using MiniMax M2.7 twice today as the backend for Claude Code and it did very well for both existing Python and Common Lisp projects. I will try MiniMax M2.7 next as the backend for OpenCode.
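For anyone wondering how the backend swap works: Claude Code reads its endpoint and model from environment variables, so pointing it at any Anthropic-compatible API is just a matter of exporting a few values before launching. A minimal sketch; the base URL and model name below are illustrative placeholders, not the real provider values, so check your provider's docs for the actual endpoint:

```shell
# Point Claude Code at a third-party Anthropic-compatible endpoint.
# URL, token, and model name here are placeholders (assumptions), not real values.
export ANTHROPIC_BASE_URL="https://api.example-provider.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-api-key-here"
export ANTHROPIC_MODEL="example-model-name"
claude
```

The provider has to expose an Anthropic-style Messages API for this to work; providers that only offer an OpenAI-style endpoint typically need a translation proxy in between.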

icedchai · yesterday at 10:51 PM

If you believe benchmarks, maybe this is true. But I've done my own experiments and it is absolutely not the case for real-world usage. The quality of output from Claude (Sonnet) was much higher than Kimi K2.5's.

theshrike79 · yesterday at 8:57 PM

Which "Claude"? Sonnet, Opus? With which harness are you comparing the coding performance?

Nowadays the harness matters more than the model itself. For example, pi.dev + GPT5-codex is a lot smarter than the plain codex CLI.

extr · yesterday at 8:56 PM

K2.5 is dog shit compared to leading OAI/Ant models.

tim-star · yesterday at 8:51 PM

That's only because Kimi 2.5 was trained using data stolen from Claude. It wouldn't exist without riding Claude's coattails; none of the so-called "open source" models would.
