
orphea · today at 4:50 PM

Do you think new LLMs are going to write better and better code when all they'll have to train on is the slop generated by previous, worse models?


Replies

chickensong · today at 7:55 PM

Yes. The models may have started from indiscriminate scraping, but people are undoubtedly working on refining the training data. Combined with improvements in overall model capability, I suspect code quality will continue to go up.
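
To make "refining" concrete, here's a minimal sketch of the kind of quality gate a curation pipeline might apply to scraped code before it reaches a training set. The heuristics here (length bounds, must parse) are hypothetical stand-ins; real pipelines rely on much richer signals like dedup, license checks, and model-based quality scoring.

    import ast

    def keep_sample(source: str, min_len: int = 20, max_len: int = 20_000) -> bool:
        # Hypothetical quality gate for a Python code corpus.
        if not (min_len <= len(source) <= max_len):
            return False  # drop trivial snippets and giant blobs
        try:
            ast.parse(source)  # must at least be syntactically valid
        except SyntaxError:
            return False
        return True

    corpus = ["def add(a, b): return a + b", "def broken(:", "x=1"]
    print([s for s in corpus if keep_sample(s, min_len=5)])
    # -> ['def add(a, b): return a + b']

The point is just that "trained on whatever was scraped" and "trained on a filtered subset" are very different regimes, even with crude filters like these.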

What you're suggesting is a negative flywheel where quality spirals down, but I'm hoping it becomes a positive loop and the quality floor goes up. We had plenty of slop before LLMs, and not all LLM output is slop. Time will tell, but I think LLMs will continue to improve their coding abilities and push overall quality higher.