As the models keep improving, wouldn’t you be able to task a newer AI to “clean up this mess”?
How is a newer AI going to "clean up" dropped databases, compromised computers or leaked personal data?
(None of the above is theoretical.)
Frankly, this is what everyone is counting on, whether they know it or not. The question, though, is not "will the models get good enough?" The question is whether the repo even contains enough accurate information to determine what the system is supposed to be doing in the first place.
Are they improving? I thought they were just getting more expensive.
People are often skeptical when I say this, but there's simply no guarantee that it's possible in principle to clean up a bad architecture. If your system is "overfitted" to 10,000 requirements from 1,000 customers, it may be impossible to satisfy requirements 10,001 through 10,100 without starting over from scratch.
How could anyone answer that with any level of certainty?
Someone responded to a previous comment of mine [0] positing a Peter principle [1] of slopcoding — it will always be easier to tack on a new feature than to understand a whole system and clean it up. The equilibrium will remain at the point of near, but not total, codebase incomprehensibility.
[0] https://news.ycombinator.com/item?id=48037128#48038639
[1] https://en.wikipedia.org/wiki/Peter_principle