Purely anecdotal, but GPT 5.4 has been better than Opus 4.6 in the week or so since it came out. It's interesting to see it rank fairly low on that table. Opus "talks" better and produces nicer output (or at least renders better Markdown in OpenCode) than 5.4.
Not in my experience. Quoting my tweet:
Gave the same prompt to GPT 5.4 (high) and Opus 4.6 (high).
GPT 5.4 implemented the feature, refactored the code (was not asked to), removed comments that were not added in that session, made the code less readable, and introduced a bug. "Undo All".
Opus 4.6 correctly recognized that the feature was already implemented in the current code (yeah, lol) and proposed adding tests and updating the docs.
Opus 4.6 is still the best coding agent.
So yeah, GPT 5.4 (high) didn't even check if the feature was already implemented.
Tried other tasks, tried "medium" reasoning: still a disappointment.
Chatbot Arena is notoriously unreliable for several reasons. First, it's (at least in theory) based on ordinary human feedback, and judging by current voting trends, ordinary voters are clearly not very good at identifying experts or even remotely correct answers. Second, the leaderboards are gamed hard by the big companies; even ARC-AGI has entered the actively-gamed stage by now. Sure, the current generation of models is certainly better than the last, and if two models sit vastly far apart on a leaderboard there may be something fundamental behind it, but there is hardly any reason to use these kinds of comparison tables for anything useful among the latest models.