Systems randomly failing is of significant concern to non-programmers; that's inherent to the non-deterministic nature of LLMs.
I can send specific LLM output to QA, but I can't ask QA to validate that this prompt will always produce bug-free code, even for future versions of the AI.
Huh? No.
The output of the LLM is nondeterministic, meaning the same input can produce different outputs on different runs.
That has nothing to do with whether the code itself is deterministic. If the LLM produces non-deterministic code, that's a bug, which will hopefully be caught by another sub-agent before production. But there's no reason to assume that programs created by LLMs are non-deterministic just because the LLMs themselves are. After all, humans are non-deterministic too, and we routinely write deterministic programs.
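To make the distinction concrete, here's a minimal Python sketch. The "generation" step is simulated with random sampling rather than a real API call, and the prompt and completions are hypothetical, but the point carries over: the generation process is nondeterministic, while the generated function is perfectly deterministic.

    import random

    # Nondeterministic "generation": sampling with temperature > 0 means
    # the same prompt can yield a different completion on each call.
    # (Simulated here with random.choice; a real LLM behaves analogously.)
    def generate(prompt):
        completions = [
            "def add(a, b): return a + b",
            "def add(x, y): return x + y",
        ]
        return random.choice(completions)

    # Same prompt, possibly different output text on each run:
    print(generate("write an add function"))
    print(generate("write an add function"))

    # But the *generated* code is deterministic: either variant always
    # returns the same result for the same input.
    def add(a, b):
        return a + b

    assert add(2, 3) == 5  # holds on every run, regardless of sampling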
> I can send specific LLM output to QA, but I can't ask QA to validate that this prompt will always produce bug-free code, even for future versions of the AI.
This is a crazy scenario that does not correspond to how anyone uses LLMs.