
gertlabs today at 3:45 PM

A better benchmark needs to be objectively scored, have multi-disciplinary breadth, and be scalable (no single correct answer).

That's what we designed at https://gertlabs.com. We put a lot of thought into it, and kept it mostly (though not fully) focused on problem solving through coding.


Replies

orangebread today at 4:04 PM

Wow. This benchmark definitely feels more accurate than the other rankings I've seen. My experience with GPT 5.4/5.5 is that they are technically flawless; when technical issues do arise, it's usually because the input didn't provide enough clarity. That's not to say they don't autonomously react to issues during bug fixes or implementations, but they tend to nail their tasks without leaving gaps behind.

Opus, on the other hand, is overrated in terms of its technical ability. It's certainly the better designer/developer for beautiful user experiences, but I'll always lean on GPT 5.5 to check its work.

The biggest surprise in the benchmark is Xiao-Mi. I haven't tried it yet, but I will be after looking at this.

Congrats to your team for putting together something meaningful to make sense of the ongoing AI speedrun! Great work!
