Hacker News

samdjstephens yesterday at 5:36 PM

If LLMs stopped improving today I’m sure you would be correct; as it is, I think it’s very hard to predict what the future holds and where the advancements will take us.

I don’t see a particularly good reason why LLMs wouldn’t be able to do most programming tasks, with the limitation being our ability to specify the problem sufficiently well.


Replies

maccard yesterday at 6:02 PM

I feel like we’ve been hearing this for 4 years now. The improvements to programming (IME) haven’t come from improved models; they’ve come from agents, tooling, and environment integrations.
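For what it’s worth, the "agents and tooling" part is mostly a plain loop around the model. A minimal sketch; `call_model` is a hypothetical stand-in for whatever provider API you use, and the tool names here are made up:

```python
import json
import subprocess

def call_model(messages):
    """Hypothetical stand-in for any chat-completion API call.
    Assumed to return either prose (a final answer) or JSON
    naming a tool, e.g. {"tool": "run_tests", "args": {}}."""
    raise NotImplementedError("wire up a real model provider here")

# The "environment integration": local tools the model may request.
TOOLS = {
    "read_file": lambda args: open(args["path"]).read(),
    "run_tests": lambda args: subprocess.run(
        ["pytest", "-q"], capture_output=True, text=True
    ).stdout,
}

def agent_loop(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        try:
            request = json.loads(reply)
        except (TypeError, ValueError):
            request = None
        if not isinstance(request, dict) or "tool" not in request:
            return reply  # prose: treat it as the final answer
        # Run the requested tool and feed the result back to the model.
        result = TOOLS[request["tool"]](request.get("args", {}))
        messages.append({"role": "tool", "content": str(result)})
    return "step budget exhausted"
```

The point being: none of that loop depends on the model getting smarter, only on what it’s wired up to.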

PaulRobinson yesterday at 6:39 PM

LLM capability improvement is hitting a plateau, with recent advancements mostly relying on accessing context locally (RAG) or remotely (MCP), and a lot of extra tokens (read: drinking water and energy) being spent prompting models for "reasoning". Foundation-wise, observed improvements are incremental, not exponential.
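For concreteness, "accessing context locally" just means retrieving relevant text and pasting it into the prompt ahead of the question. A minimal sketch; real systems score with embeddings rather than keyword overlap, and the example docs are invented:

```python
# Minimal RAG sketch: pick the most relevant snippets and
# prepend them to the prompt. Keyword overlap keeps this
# self-contained; production systems use embedding similarity.
def retrieve(query, documents, k=2):
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "MCP servers expose remote tools and data to a model.",
    "RAG retrieves local documents and injects them into the prompt.",
    "Reasoning modes spend extra tokens thinking before answering.",
]
print(build_prompt("How does RAG supply local context?", docs))
```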

> able to do most programming tasks, with the limitation being our ability to specify the problem sufficiently well

We've spent 80 years trying to figure that out. I'm not sure why anyone would think we're going to crack this one anytime in the next few years.

majormajor yesterday at 10:11 PM

> the limitation being our ability to specify the problem sufficiently well

Such has always been the largest issue with software development projects, IMO.