logoalt Hacker News

sinuhe69yesterday at 6:25 PM3 repliesview on HN

I hope better and cheaper models will be widely available because competition is good for the business. However, I'm more cautious about benchmark claims. MiniMax 2.1 is decent, but one can really not call it smart. The more critical issue is that MiniMax 2 and 2.1 have the strong tendency to reward hacking, often write nonsensical test report while the tests actually failed. And sometimes it changed the existing code base to make its new code "pass", when it actually should fix its own code instead.

Artificial Analysis put MiniMax 2.1 Coding index on 33, far behind frontier models and I feel it's about right. [1]

[1] https://artificialanalysis.ai/models/minimax-m2-1


Replies

amlutoyesterday at 11:09 PM

> And sometimes it changed the existing code base to make its new code "pass", when it actually should fix its own code instead.

I haven’t tried MiniMax, but GPT-5.2-Codex has this problem. Yesterday I watched it observe a Python type error (variable declared with explicit incorrect type — fix was trivial), and it added a cast. (“cast” is Python speak for “override typing for this expression”.) I told it to fix it for real and not use cast. So it started sprinkling Any around the program (“Any” is awful Python speak for “don’t even try to understand this value and don’t warn either”).

ostiyesterday at 6:35 PM

That's what I found with some of these LLM models as well. For example I still like to test those models with algorithm problems, and sometimes when they can't actually solve the problem, they will start to hardcode the test cases into the algorithm itself.. Even DeepSeek was doing this at some point, and some of the most recent ones still do this.

show 2 replies
XCSmeyesterday at 7:10 PM

MiniMax 2.1 didn't really work for my data-parsing tasks, a lot of errors.

Instead, this one works surprisingly well for the cost: https://openrouter.ai/xiaomi/mimo-v2-flash