While I would love for this to be true for financial and ego-related reasons, I have a growing feeling that it won't stay true for long unless progress really starts to stall.
Progress has stalled already; I haven't seen much improvement in the past year for my real-world tasks.
I've actually gone in the other direction. A year ago, I had that feeling, but since then I've gotten more certain that LLMs are never going to be able to handle complexity. And complexity is still the real problem in developing software.
We keep getting cool new features in the tools, but I don't see any indication that the models are getting any better at understanding or managing complexity. They still make dumb mistakes. They still write terrible code if you don't give them lots of guardrails. They still "fix" things by removing functionality or adding an @ts-ignore comment. If they were making progress, I might be convinced that they'll eventually get there, but they're not.
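For anyone who hasn't run into that last one: here's a minimal hypothetical sketch (the `User` and `mail` names are made up for illustration) of what that kind of "fix" looks like. The type error disappears, but the bug the compiler was pointing at does not.

```typescript
interface User {
  id: number;
  email?: string; // optional: not every user has an email on file
}

// Stub mailer so the sketch compiles on its own.
declare function mail(to: string, body: string): void;

// The suppression "fix": the checker flags that user.email may be
// undefined, and @ts-ignore makes the error go away while keeping
// the runtime crash for users without an email.
function sendReminderSuppressed(user: User) {
  // @ts-ignore -- hides "Argument of type 'string | undefined' is
  // not assignable to parameter of type 'string'"
  mail(user.email, "Your invoice is due");
}

// An actual fix: handle the case the type system caught.
function sendReminderChecked(user: User) {
  if (user.email !== undefined) {
    mail(user.email, "Your invoice is due");
  }
}
```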