> Given that the models will attempt to check their own work with almost the identical verification that a human engineer would
That's not the case at all, though. The LLM doesn't have a mental model of what the expected final result is, so how could it possibly verify against one?
All it has is a textual description of what the engineer thinks they want. That format is inherently limited and lossy, and the engineer is unlikely to express their expectations perfectly in any case.