Hacker News

orev (yesterday at 10:12 PM)

As the models keep improving, wouldn’t you be able to task a newer AI to “clean up this mess”?


Replies

jcalx (today at 1:04 AM)

Someone responded to a previous comment of mine [0] positing a Peter principle [1] of slopcoding — it will always be easier to tack on a new feature than to understand a whole system and clean it up. The equilibrium will remain at the point of near, but not total, codebase incomprehensibility.

[0] https://news.ycombinator.com/item?id=48037128#48038639

[1] https://en.wikipedia.org/wiki/Peter_principle

fg137 (yesterday at 11:37 PM)

How is a newer AI going to "clean up" dropped databases, compromised computers or leaked personal data?

(None of the above is theoretical.)

jeremyjh (yesterday at 10:22 PM)

Frankly, this is what everyone is counting on, whether they know it or not. But the question is not "will the models get good enough?" The question is whether the repo even contains enough accurate information to determine what the system is supposed to be doing.

malfist (yesterday at 10:41 PM)

Are they improving? I thought they were just getting more expensive

SpicyLemonZest (yesterday at 11:21 PM)

People are often skeptical when I say this, but there's simply no guarantee that it's possible in principle to clean up a bad architecture. If your system is "overfitted" to 10,000 requirements from 1,000 customers, it may be impossible to satisfy requirements 10,001 through 10,100 without starting over from scratch.

aaron_m04 (yesterday at 10:16 PM)

How could anyone answer that with any level of certainty?

hennell (yesterday at 10:22 PM)

AI runs `rm -rf`
