> Purely AI written systems will scale to a point of complexity that no human can ever understand
I think it will be needlessly verbose complexity.
I kind of imagine someone with an unlimited budget of free Amazon stuff shipped to their house.
In theory, they are living a prosperous life of plenty.
In reality, they will be drowning in something that isn't prosperity.
I don't understand this point of view at all. There's a symmetry that is going entirely unappreciated by most of the comments in the thread: just as I can give Claude X,000 words of text to use to describe the code I want it to write, I can also give it some existing code and ask for X,000 words of text explaining what it does. (Call it, oh, I don't know, a "spec," maybe.)
The explanation, in turn, can be fed back to recreate the functionality of the original code.
At that point, why care about the code at all? If it works, it works. If it doesn't, tell the model to fix it. You did ask for tests, right?
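The round trip described above can be sketched in a few lines. This is a hypothetical illustration, not any real API: `ask_model` is a stand-in for an LLM call (here a deterministic stub so the sketch runs), and the test gate is exactly the "you did ask for tests, right?" step.

```python
# Hypothetical sketch of the code -> spec -> code round trip.
# `ask_model` is a stub standing in for an LLM call; a real version would
# call a model API. The stub is deterministic so the loop is runnable.

def ask_model(prompt: str, payload: str) -> str:
    """Placeholder for an LLM call (hypothetical, not a real API)."""
    if prompt.startswith("Describe"):
        # code -> spec: produce a description of the code
        return "SPEC derived from:\n" + payload
    # spec -> code: the stub "implements" the spec by recovering the code
    return payload.split("\n", 1)[1]

def round_trip(code: str, tests) -> str:
    spec = ask_model("Describe this code in detail.", code)       # code -> spec
    candidate = ask_model("Implement this spec.", spec)           # spec -> code
    # The loop is only trustworthy if tests gate the regenerated code.
    assert tests(candidate), "regenerated code failed the tests"
    return candidate

original = "def add(a, b):\n    return a + b"
regenerated = round_trip(original, tests=lambda src: "return a + b" in src)
```

With a real model the loop is lossy, which is why the tests, not the code text, are what the loop has to preserve.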
That is where we're indisputably headed. It's not quite a lossless loop yet, but those who say it won't or can't happen bear a heavy burden of proof.