In my experience, benchmarks are pretty meaningless.
Not only is performance dependent on the language and tasks given, but also on the prompts used and the expected results.
In my own internal tests it was really hard to judge whether GPT 5.5 or Opus 4.7 is the better model.
They have different styles and it's basically up to preference. There were even times when I gave the win to one model, only to think about it more and change my mind.
At the end of the day I think I slightly prefer Opus 4.7.
In addition, the harness around these models does a lot of work and changes the outcome significantly.
I just had an issue where Claude CLI with Opus 4.7 High could not figure out why my Blazor Server app was inert (buttons didn't do anything, etc.). After several rounds, I opened the web console and found that it was failing to load blazor.js with a 404. I pasted the error message into Claude CLI, and after several more unproductive rounds I gave up.
I then moved to Codex, with GPT 5.5 High. I gave it the code base, the problem description, and the error codes. Unlike Claude CLI, it spun up the project and used wget/curl to probe for blazor.js, confirming it was indeed not being served. It then did a lot more probing and some web searches, and after a while found that my project file was missing a setting. It added the setting and probed again to verify the fix worked.
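For context, the probing amounted to HEAD requests against the script URL, something like this (the port and path here are placeholders from memory, not exactly what Codex ran):

    # probe whether the app actually serves blazor.js; a 404 confirms the symptom
    curl -I http://localhost:5000/blazor.js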
So Codex fixed it in about 20 minutes without me laying hands on it (other than approving some program executions).
However, I'm not convinced this shows GPT 5.5 being that much better than Opus 4.7. It could very well come down to the harness around it: the system prompts used in the harness and the tools available.
For reference, this was me just trying to see how good the vibecoding experience is now, so I was trying to do this as hands-off as possible.
I think benchmarks are improving and will always have value, but they're the equivalent of someone's college and GPA on an entry-level job application.
It's a strong signal for a job, but soft skills are sometimes going to get Claude Opus 4.6 a job over smarter applicants. That's what we'd really like to measure objectively, and what we're actively working on.