Related: Many SWE-bench-Passing PRs would not be merged - https://news.ycombinator.com/item?id=47341645 - March 2026 (149 comments)
I feel that two things are true at the same time:
1) Something happened during 2025 that made the models (or crucially, the wrapping terminal-based apps like Claude Code or Codex) much better. I only type in the terminal anymore.
2) The quality of the code is still quite often terrible. Quadruple-nested control flow abounds. Software architecture in rather small scopes is unsound. People say AI is “good at front end” but I see the worst kind of atrocities there (a few days ago Codex 5.3 tried to inject a massive HTML element with a CSS ::before hack rather than properly refactoring the markup).
Two forces feel true simultaneously but in permanent tension. I still cannot make up my mind and see the synthesis in the dialectic: where this is truly going, whether we're meaningfully moving forward or mostly moving in circles.
There is a decent case for this thesis to hold true, especially if we look at the shift in training regimes and benchmarking over the last 1-2 years. Frontier labs don't seem to really push pure size/capability anymore; it's an all-in focus on agentic AI, which is mainly complex post-training regimes.
There are good reasons why they don't or can't do simple param upscaling anymore, but still, it makes me bearish on AGI since it's a slow, but massive shift in goal setting.
In practice this still doesn't mean 50% of white-collar work can't be automated, though.
I am pretty convinced that for most types of day to day work, any perceived improvements from the latest Claude models for example were total placebo. In blind tests and with normal tasks, people would probably have no idea if they're using Opus 4.5 or 4.6.
I reckon LLM merge rates will go up, but not necessarily due to quality improvements. Instead I think maintainers will just become fatigued. The amount of code I'm expected to review now is way higher than before, and while I'm reviewing, more is being generated. I'm sure I've let through more crap due to this fatigue attack on me.
Interesting article, although with so few data points and such a specific time slice it is difficult to draw serious conclusions about the "improvement" of LLM models.
It's notably lacking newer models (4.5 Opus, 4.6 Sonnet) and models from Gemini.
LLMs appear to naturally progress in short leaps followed by longer plateaus, as breakthroughs are developed such as chain-of-thought, mixture-of-experts, sub-agents, etc.
I feel even if the models are stagnating, the tooling around them and the integrations and harnesses they have are getting significantly more capable (if not always 'better' - the recent vscode update really handicapped them for some reason). Things like the new agent from booking.com or whatever: if it could integrate with all hotels, activities, mapping tools, flight systems, etc., it could be hugely powerful.
Assuming we get no better than opus 4.6, they're very capable. Even if they make up nonsense 5% of the time!
I don't think it's true, but am I alone in wishing it was? My world is disrupted somewhat but so far I don't think we have a thing that upends our way of life completely yet. If it stayed exactly this good I'd be pretty content.
I think what happened with static image generation is happening with LLMs. Basically the tools around them are becoming better, but the core AI improvements stall: the error rate stays the same (but external tools curate the results, so it won't be noticeable if you don't run your own model), and the accuracy is still slightly improving, but slower and slower, never reaching the 'perfect' point. Basically Stable Diffusion in early 2025.
That's an interesting claim, but I don't see it in my own work. They have got better but it's very hard to quantify. I just find myself editing their work much less these days (currently using GPT 5.4).
Controversial opinion from a casual user, but state-of-the-art LLMs now feel to me more intelligent than the average person on the street. That also explains why training on more average-quality data (if there's any left) is not making improvements.
But LLMs are hamstrung by their harnesses. They are doing the equivalent of providing technical support via phone call: little to no context, and limited to a bidirectional stream of words (tokens). The best agent harnesses have the equivalent of vision-impairment accessibility interfaces, and even those are still subpar.
Heck, giving LLMs time to think was once a groundbreaking idea. Yesterday I saw Claude Code editing a file using shell redirects! It's barbaric.
I expect future improvements to come from harness improvements, especially around sub agents/context rollbacks (to work around the non-linear cost of context) and LLM-aligned "accessibility tools". That, or more synthetic training data.
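Roughly what I have in mind by sub agents/context rollbacks, as a made-up sketch (the names `call_llm`, `summarize`, `run_subtask` are placeholders, not any real framework's API): the orchestrator hands each subtask to a fresh agent with a minimal context and keeps only the distilled result, so you never pay the non-linear cost of one giant transcript.

```python
from dataclasses import dataclass, field

def call_llm(system: str, messages: list) -> str:
    """Stand-in for a real completion API call; swap in your provider here."""
    return "<model reply>"

def summarize(history: list) -> str:
    """Stand-in: in practice you'd ask the model for a short summary."""
    return " / ".join(m["content"][:40] for m in history[-3:])

@dataclass
class Agent:
    system: str
    history: list = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        self.history.append({"role": "user", "content": prompt})
        reply = call_llm(self.system, self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

def run_subtask(parent: Agent, task: str) -> str:
    # Fresh context: only a short project summary plus the task itself,
    # not the parent's whole conversation.
    child = Agent(system="You are a focused coding sub-agent.")
    result = child.ask(f"Project summary: {summarize(parent.history)}\nTask: {task}")
    # The parent keeps only the distilled result; the child's context is
    # discarded ("rolled back") once the subtask is done.
    parent.history.append({"role": "assistant", "content": f"[subtask done] {result}"})
    return result
```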
I gave up on trying months ago, you can see the timeline on top of https://fabien.benetou.fr/Content/SelfHostingArtificialIntel...
Truth is I'm probably wrong. I should keep on testing... but at the same time I gave up precisely because I didn't think the trend was moving fast enough to justify checking it so frequently. Now I just read this kind of post and ask around (mainly arguing with comments, asking for genuine examples that should be "surprising", and kept being disappointed), and that seems to be a good enough proxy.
I should though, as I mentioned in another comment, keep track of failed attempts.
PS: I check solely on self-hosted models (even if not on my machine, at least on machines I could set up) because I do NOT trust the scaffolding around proprietary closed-source models. I can't verify that nobody is in the loop.
My experience has been that raw “one-shot intelligence” hasn’t improved as dramatically in the last year, but the workflow around the models has improved massively.
When you combine models with:
- tool use
- planning loops
- agents that break tasks into smaller pieces
- persistent context / repos
the practical capability jump is huge (rough sketch of such a loop below).
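Something like this toy loop, purely illustrative (the tool names and `call_llm` are made up, not any specific product's API): the model picks an action, the harness runs the tool, and the observation goes back into the context until the model says it's done.

```python
import json
import subprocess

def call_llm(messages: list) -> str:
    """Stand-in for a real chat-completion call that returns a JSON action."""
    return json.dumps({"action": "done", "summary": "stub reply"})

# Hypothetical tool set the harness exposes to the model.
TOOLS = {
    "run_tests": lambda _arg: subprocess.run(
        ["pytest", "-q"], capture_output=True, text=True
    ).stdout,
    "read_file": lambda path: open(path).read(),
}

def agent_loop(goal: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))
        if decision["action"] == "done":
            return decision.get("summary", "")
        # Run the requested tool and feed the (truncated) observation back in.
        observation = TOOLS[decision["action"]](decision.get("arg"))
        messages.append({"role": "tool", "content": observation[:2000]})
    return "step budget exhausted"
```

The individual pieces are trivial; the point is that the loop, rather than a smarter single completion, is where most of the practical gain shows up.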
As they become more capable, people's commits will also become more ambitious.
So I'd say fairly flat commit acceptance numbers make sense even in the context of improving LLMs.
How do they know? Not everybody includes the "Co-authored-by: Claude" trailer in their commits. I certainly don't.
Benchmaxxing aside, if you are using those tools for programming on a regular basis it should be self-evident that they are improving. I find it very hard to believe that someone using LLMs today vs what was available one year ago (Claude Code released Feb 2025) would have any difficulty answering this question.
fwiw the merge rate metric itself might be misleading. most real codebases have implicit conventions and architectural patterns that aren't captured in the issue description, so even if the model writes correct code it might not match what the maintainer actually wanted. imo the bigger signal is how much back-and-forth it takes before merging, not whether the first attempt lands cleanly.
You really can't model these 5 data points with a linear regression or a step function. The models are of different sizes / use cases, and from two different labs. I feel like what we've observed generally is that different labs releasing similarly sized models at similar times are generally pretty similar.
I think the only reasonable thing to read into is Sonnet 3.5 -> 3.7 -> 4.5. But yeah, you just can't draw a line through this thing.
I will die on the hill that LLMs are getting better, particularly Anthropic's releases since December. But I can't point at a graph to prove that, I'm just drawing on my personal experience. I do use Claude Code though, so I think a large part of the improvement comes from the harness.
Well, on one hand they lack new data. Lots of new code came out of an LLM, so it feeds back into the training set.
On the other hand, LLMs tend to go for an average by their nature (if you squint enough). Whatever is more common in their training data is more common in the output, so making them better without fundamental changes requires improving the training data on average too, which is hard.
What did improve a lot is the tooling around them. That's gotten way better.
These studies are always really hard to judge the efficacy of. I would say though the most surprising thing to me about LLMs in the past year is how many people got hyped about the Opus 4.5 release. Having used Claude Code at work since it was released I haven't really noticed any step changes in improvement. Maybe that's because I've never tried to use it to one shot things?
Regardless I'm more inclined to believe that 4.5 was the point that people started using it after having given up on copy/pasting output in 2024. If you're going from chat to agentic level of interaction it's going to feel like a leap.
I had this suspicion for a while: I think we just got way better at harnessing, not at the models' actual reasoning.
So we got better at giving them the right context and tools to do the stuff we need, but the actual thinking hasn't improved much.
I've been able to supercharge a hobby project of mine over the last couple months using Opus 4.6 in Claude Code. I had to collaborate and write code still, but Claude did like 75% of the work to add meaningful new features to an iOS/Android native mobile app, including Live Activities, which are so overly complicated I would not have been able to figure them out on my own. I have it running in a folder that contains both my back end API (Express) and my mobile app (NativeScript), so it does back end and front end work simultaneously to support new features. This wasn't possible 8 months ago.
I feel like anyone who used AI coding tools before 11/25 and again after 1/26 (with frontier models) will say there has been a massive jump. There is a difference between whether an LLM can do a specific task or pass some arguably arbitrary checks by maintainers vs. what they are capable of.
We still have tons of gaps in how to build and maintain code with AI, but LLMs themselves are getting better at an unbelievable pace; even with this kind of data analysis I'm surprised anyone can question it.
Data is missing on this chart.
It's my experience that Opus 4, and then particularly 4.5, in Claude Code, are head and shoulders above the competition.
I wrote an agentic coder years ago and it yielded trash. (Tried to make it do then what Kiro does today.)
The models are better. Now, caveat: I don't use anything but Opus for coding; Sonnet doesn't do the trick. My experience with Codex and Gemini is that their top models are about as good as Sonnet for coding...
Given that it is the general consensus that a step function occurred with Opus 4.5/4.6 only 3 months ago - it seems like an insane omission.
I agree completely. I haven't noticed much improvement in coding ability in the last year. I'm using frontier models.
What's been the game changer are tools like Claude Code. Automatic agentic tool loops purpose built for coding. This is what I have seen as the impetus for mainstream adoption rather than noticeable improvements in ability.
From my personal experience, they have gotten better, but they haven’t unlocked any new capabilities. They’ve just improved at what I was already using them for.
At the end of the day they still produce code that I need to manually review and fully understand before merging. Usually with a session of back-and-forth prompting or manual edits by me.
That was true 2 years ago, and it’s true now (except 2 years ago I was copy/pasting from the browser chat window and we have some nicer IDE integration now).
Yeah I'm not buying the last bit about lower MSE with one term in the model vs two (Brier with one outcome category is MSE of the probabilities). That's the sort of thing that would make me go dig to find where I fucked up the calculation.
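For anyone wondering about the parenthetical: with a binary outcome, the Brier score is literally the mean squared error between the predicted probabilities and the 0/1 outcomes. A tiny sketch with made-up numbers:

```python
import numpy as np
from sklearn.metrics import brier_score_loss, mean_squared_error

y = np.array([0, 1, 1, 0, 1])             # hypothetical merged / not-merged outcomes
p = np.array([0.2, 0.7, 0.6, 0.4, 0.9])   # hypothetical predicted merge probabilities

# Brier score and MSE of the probabilities are the same quantity here.
assert np.isclose(brier_score_loss(y, p), mean_squared_error(y, p))
print(brier_score_loss(y, p))  # 0.092
```

And that's exactly why the result smells off: an in-sample least-squares slope fit can never score worse than the best constant, because the constant model is nested inside it. So if the constant genuinely wins on Brier here, the two-parameter fit presumably wasn't chosen by minimizing squared error, or the comparison isn't in-sample.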
If you look at a separate trend for just the smaller Sonnet models, you can see rapid improvement.
Yesterday I asked a frontier model to help generate a report. It said great, it can do that, and output a table. I asked it to evaluate its prompt compliance in the result. It concluded that it had failed on every requirement. I asked why it had expressed such confidence, was it analogous to narcissism or psychopathy? It said no, and then said that if I just had to anthropomorphize it, I should think of it as a brilliant friend with severe frontal lobe brain damage.
That actually helps.
Even if one-shot LLM performance has plateaued (which I'm not convinced this data shows, given the omission of recent models that are widely claimed to be better), that misses the point that I see in my own work. The improved tooling and agent-based approaches that I'm using now make LLM one-shot performance only a small part of the puzzle in terms of how AI tools have accelerated the time from idea to decent code. For instance, the planning dialogs I now have with Claude are an important part of what's speeding things up for me. Also, the iterative use of AI to identify, track, and take care of small coding tasks (none of which are particularly challenging in terms of benchmarks) is simply more effective.
Could this all have been done with the LLM engines of late 2024? Perhaps, but I think the fine-tuning (and conceivably the system prompts) that make current LLMs more effective at agent-centered workflows (including tool use) are a big part of it. One-shot performance at challenging tasks is an interesting, certainly foundational, metric. But I don't think it captures the important advances I see in how LLMs have gotten better over the last year in ways that actually matter to me. I rarely have a well-defined programming challenge and the obligation to solve it in a single shot.
>This means the step function has more predictive power (“fits better”) than the linear slope. For fun, we can also fit a function that is completely constant across the entire timespan. That happens to get the best Brier score.
I mean, sure. But it's obvious in that graph that the single OpenAI model is dragging down the right side. Wouldn't it be better to just stick to analyzing models from only one lab, so that this showed change over time rather than differences between models?
No Gemini. No Opus 4.5. No GPT codex.
As they said, ragebait used to be believable.
>fischer warned us against eyeballing plots
Proceeds to eyeball it with an arbitrary function.
There was a long flat line before the step. Models improve, but PR pass rate without human intervention is inherently a staircase function.
They are getting better, but they are also hitting diminishing returns.
There's only so much data to train on, and we are unlikely to see giant leaps in performance as we did in 2023/2024.
2026-27 will be the years of primarily ecosystem/agentic improvements and reducing costs.
> This means llms have not improved in their programming abilities for over a year. Isn’t that wild? Why is nobody talking about this?
Because it's not true. They have improved tremendously in the last year, but it looks like they've hit a wall in the last 3 months. Still seeing some improvements but mostly in skills and token use optimization.
How does the "constant function" result fit the data points better than a slope that has two parameters instead of one?
In my niche the Opus 4.6 has been a game changer. In comparison all other LLMs look stupid. I am considering cancelling all other subscriptions.
LLMs have 100% gotten better, but it's hard to say if they're "intrinsically better", if that makes sense.
> OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024 [1]
That's evidence against "intrinsically better". They've also trained on the entire internet - we only have 1 internet, so.
However, late 2024 was the introduction of o1 and early 2025 was Deepseek R1 and o3. These were definitely significant reasoning models - the introduction of test time compute and significant RL pipelines were here.
Mid 2025 was when they really started getting integrated with tool calling.
Late 2025 is when they really started to become agentic and integrate with the CLI pretty well (at least for me). For example, codex would at least try and run some smoke tests for itself to test its code.
In early 2026, the trend now appears to be harness engineering - as opposed to "context engineering" in 2025, where we had to preciously babysit 1 model's context, we make it both easier to rebuild context (classic CS trick btw: rebooting is easier than restoring stale state [2]) and really lean into raw cli tool calling, subagents, etc.
[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
[2] https://en.wikipedia.org/wiki/Kernel_panic
FWIW, AI programming has still been as frustrating as it was when it was just TTC in 2025. Maybe it's because I don't have the "full harness", but it still has programming styles embedded such as silent fallback values, overly defensive programming, etc., which are obviously gleaned from the desire to just pass all tests rather than from truly good programming design. I've been able to do more, but I have to review more slop... also the agents are really unpleasant to work with if you're trying to have any reasonable conversation with them and not just delegate to them. It's as if they think the entire world revolves around them, and all information from the operator is BS, if you try to open a proper 2-way channel.
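A made-up example of the "silent fallback" style I mean (not taken from any real agent output): the error gets swallowed and a default quietly takes its place, which keeps tests green but hides the actual bug.

```python
# The pattern I keep getting from agents: swallow the problem, return a default.
def load_timeout(config: dict) -> float:
    try:
        return float(config["timeout"])
    except (KeyError, TypeError, ValueError):
        return 30.0  # silent fallback: a malformed config is never surfaced

# What I'd rather see: fail loudly so the bad config actually gets fixed.
def load_timeout_strict(config: dict) -> float:
    if "timeout" not in config:
        raise KeyError("config is missing 'timeout'")
    return float(config["timeout"])
```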
It seems like 2026 will go full zoom with AI tooling because the goal is to replace devs, but hopefully AI agents become actually nice to work with. Not sycophantic, but not passive-aggressively arrogant either.
From the METR study (https://metr.org/notes/2026-03-10-many-swe-bench-passing-prs...):
>To study how agent success on benchmark tasks relates to real-world usefulness, we had 4 active maintainers from 3 SWE-bench Verified repositories review 296 AI-generated pull requests (PRs). We had maintainers (hypothetically) accept or request changes for patches as well as provide the core reason they were requesting changes: core functionality failure, patch breaks other code or code quality issues.
I would also advise taking a look at the rejection reasons for the PRs. For example, Figure 5 shows two rejections for "code quality" because of (and I quote) "looks like a useless AI slop comment." This is something models still do, but that is also very easily fixable. I think in that case the issue is that the desired level of commenting hasn't been properly formalized in the repo and the model hasn't been able to deduce it from the context it had.
As for the article, I think mixing all models together doesn't make sense. For example, maybe a slope describes the improving Claude Sonnet models better than a step function.
Anecdotally, I haven't seen any real improvement from the AI tools I leverage. They're all good-ish at what they do, but all still lie occasionally, and all need babysitting.
I also wonder how much of the jump in early 2025 comes from cultural acceptance by devs, rather than an improvement in the tools themselves.
> This means llms have not improved in their programming abilities for over a year. Isn’t that wild? Why is nobody talking about this?
Because hype makes money.
I don't find this very compelling. If you look at the actual graph they are referencing but never showing [1] there is a clear improvement from Sonnet 3.7 -> Opus 4.0 -> Sonnet 4.5. This is just hidden in their graph because they are only looking at the number of PRs that are mergable with no human feedback whatsoever (a high standard even for humans).
And even if we were to agree that that's a reasonable standard, GPT 5 shouldn't be included. There is only one data point for all OpenAI models, and that data point is more indicative of the performance of OpenAI models (and the harness used) than of any progression. Once you exclude it, the rest matches what you would expect from a logistic model: improvements have slowed down, but not stopped.
1: https://metr.org/assets/images/many-swe-bench-passing-prs-wo...