I've yet to be convinced by any article, including this one, that attempts to draw boxes around what coding agents are and aren't good at in a way that is robust on a 6 to 12 month horizon.
I agree that the examples listed here are relatable, and I've seen similar things in my use of various coding harnesses, including, to some degree, ones driven by Opus 4.5. But my general experience with using LLMs for development over the last few years has been that they have progressed:
1. From initially being able, at best, to assemble simple procedural or compositional sequences of commands or functions to accomplish a basic goal, perhaps meeting tests or type checking, but with no overall coherence,
2. To being able to structure small functions reasonably,
3. To being able to structure large functions reasonably,
4. To being able to structure medium-sized files reasonably,
5. To being able to structure large files, and small multi-file subsystems, somewhat reasonably.
So the idea that they are now falling down at the multi-module or multi-file or multi-microservice level is neither particularly surprising to me nor particularly indicative of future performance. There is a hierarchy of scales at which abstraction can be applied, and it seems plausible to me that the march of capability improvement is a continuous push upwards in the scale at which agents can reasonably abstract code.
Alternatively, it could be that there is a legitimate discontinuity here, at which anything resembling current approaches will max out, but I don't see strong evidence for it here.
I used to get made-up APIs in functions; now I get them in modules. I used to get confidently incorrect assertions in files; now I get them across codebases.
Hell, I get poorly defined APIs across files and still get them between functions. LLMs aren't good at writing well-defined APIs at any level of the stack. They can attempt it at levels of the stack they couldn't a year ago, but they're still terrible at it unless the problem is well known enough that they can regurgitate well-reviewed code.
LLMs have been bad at creating abstraction boundaries since inception, and people have been calling it out since inception. (Heck, even I have a Twitter post somewhere >12 months old calling that out, and I'm not exactly a leading light of the effort.)
It is in no way size-related. The technology cannot create new concepts/abstractions, and so fails at abstraction. Reliably.
I feel like the main challenge is where to be "loose" and where to be "strict"; Claude often takes too much liberty: assuming things, adding mock data to make it work, using local storage because there is no db. This makes it work well out of the box, and means I can prompt half-assedly and still get great results. But long term it also causes issues. It can be prompted away, but it needs constant reminders. This seems like a hard problem to solve. I feel like it can already do almost everything if you have the correct vision/structure in mind and have the patience to prompt properly.
Its worst feature is debugging hard errors: it will just keep trying everything and can get pretty wild, instead of entering plan mode and really discussing and thinking things through.
Claude is an overrated, premium piece of developer tech. I have produced equally good results from Gemini, and way better ones with GPT medium, which is better than Claude at assembling and debugging stuff. Claude hallucinates when asked why something is correct or should be done. All models fail in some aspect or another, which points to the fact that these models have strengths and weaknesses, and GPT just happens to be a good all-around model. But the dev community is stuck on Claude for no good reason other than shiny tooling ("Claude Code"); beyond that, the models can be just as bad as the competition. The benchmarks don't tell the full story. In general, though, the rule of thumb is: if the model says "You are brilliant," "That's genius," or "Now that's a deep and insightful question," it's time to start a new session.
The article is mostly reporting on the present. (Note the "yet" in the title.)
There's only one sentence where it handwaves about the future. I do think that line should have been cut.
It feels like a lot of people keep falling into the trap of thinking we’ve hit a plateau, and that they can shift from “aggressively explore and learn the thing” mode to “teach people solid facts” mode.
A week ago Scott Hanselman went on the Stack Overflow podcast to talk about AI-assisted coding. I generally respect that guy a lot, so I tuned in and… well it was kind of jarring. The dude kept saying things in this really confident and didactic (teacherly) tone that were months out of date.
In particular I recall him making the “You’re absolutely right!” joke and asserting that LLMs are generally very sycophantic, and I was like “Ah, I guess he’s still on Claude Code and hasn’t tried Codex with GPT 5”. I haven’t heard an LLM say anything like that since October, and in general I find GPT 5.x to actually be a huge breakthrough in terms of asserting itself when I’m wrong and not flattering my every decision. But that news (which would probably be really valuable to many people listening) wasn’t mentioned on the podcast I guess because neither of the guys had tried Codex recently.
And I can’t say I blame them: It’s really tough to keep up with all the changes but also spend enough time in one place to learn anything deeply. But I think a lot of people who are used to “playing the teacher role” may need to eat a slice of humble pie and get used to speaking in uncertain terms until such a time as this all starts to slow down.