> Hallucinations that lead to code that doesn't work just get fixed
How about hallucinations that lead to code that doesn't work outside of the specific conditions that happen to be true in your dev environment? Or, even more subtly, hallucinations that lead to code which works but has critical security vulnerabilities?
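To make the second failure mode concrete, here's a minimal, hypothetical sketch: a lookup that passes every test you'd casually run in a dev environment yet is trivially SQL-injectable (the table, column, and function names are invented for illustration):

```python
import sqlite3

def get_user(db: sqlite3.Connection, username: str):
    # "Works" for every normal username, so nothing looks wrong in dev.
    # Vulnerable: attacker-controlled input is spliced into the SQL,
    # e.g. username = "x' OR '1'='1" returns every row in the table.
    return db.execute(
        f"SELECT id, username FROM users WHERE username = '{username}'"
    ).fetchall()

def get_user_safe(db: sqlite3.Connection, username: str):
    # Fixed: a parameterized query lets the driver handle escaping.
    return db.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Both functions return identical results on well-formed input, which is exactly why this class of bug survives "it runs, ship it" workflows.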
Replace "hallucination" with "oversight" or "ignorance" and you have the same issue when a human writes the code.
A lot of that comes down to the prompter's own foresight, much like the vigilance expected of a beginner developer who knows they're working on a part of the system that is particularly sensitive to get right.
That said, only a subset of software needs an authentication solution or has zero tolerance for a bug in some codepath. Neither concern applies to most of the apps/TUIs/GUIs I've built over the last few months.
If you have to restrict the domain to those cases for LLMs to be "disastrous", then I'll grant that for this convo.
What about everything else?