> If the code has no author ... there's nowhere to go if you realise "oops, I didn't understand that as well as I had thought!"
That's also true if I author the code myself; I can't go to anyone for help with it, so if it doesn't work then I have to figure out why.
> Believing you know how it works and why it works is not the same as that actually being the case.
My series of accidental successes at producing working code is honestly starting to seem like real skill and experience at this point. Not sure what else you'd call it.
> so if it doesn't work then I have to figure out why.
But it's built on top of things that are understood. If it doesn't work, then either:
• You didn't understand the problem fully, so the approach you were using is wrong.
• You didn't understand the language (library, etc.) correctly, so the computer interpreted your code differently from how you meant it.
• The code you wrote isn't the code you intended to write.
This is a much more tractable situation to be in than "nobody knows what the code means, or has a mental model for how it's supposed to operate", which is the norm for a sufficiently large LLM-produced codebase.
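For what it's worth, here's a throwaway sketch of what each of those three failure modes can look like in practice (all names and scenarios invented purely for illustration):

```python
# Hypothetical illustrations of the three failure modes above (all invented).

# 1. Misunderstood the problem: the spec meant "ignore unrated items",
#    but this averages over every entry, treating a missing rating as 0.
def average_rating(ratings):
    return sum(r or 0 for r in ratings) / len(ratings)

# 2. Misunderstood the language: Python evaluates the default list once,
#    so it is shared across calls, which is not what the author expected.
def collect(item, seen=[]):
    seen.append(item)
    return seen

# 3. Wrote code other than what was intended: ">" where ">=" was meant.
def keep_at_least(values, threshold):
    return [v for v in values if v > threshold]
```

None of these is a syntax error; each only surfaces when you check the behaviour against what you actually meant.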
> My series of accidental successes
That somewhat misses the point. To write working code, you must have some understanding of the relationship between your intention and your output. LLMs have a poor-to-nonexistent understanding of this relationship, which they cover up with the ability to regurgitate (permutations of) a large corpus of examples – but this does not grant them the ability to operate outside the domain of those examples.
LLM-generated codebases very much do not lie within that domain: they lack the underlying understanding whose clues and signs human readers (and, to an extent, LLMs) rely on. Worse, LLMs do replicate those signals, but without encoding anything coherent in them. Unless you are very used to critically analysing LLM output, this can be highly misleading. (It reminds me of how chess grandmasters blunder in, and struggle to even remember, unreachable board positions.)
Believing you know how LLM-generated code works, and why it works, is not the same as that actually being the case – in a very real sense, one that differs from the case of code with human authors.