So for safety critical systems, one should not look or check if code has been AI generated?
If you don't review the code your C compiler generates now, why not? Compiler bugs still happen, you know.
> If you don't review the code your C compiler generates now, why not?
That isn't a reason why you should NOT review AI-generated code. And even comparing the two, a C compiler is far more deterministic in the code it generates than an LLM, which is non-deterministic and unpredictable by design.
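To make the non-determinism point concrete: LLMs typically pick each next token by sampling from a temperature-scaled distribution, so the same prompt can yield different outputs, whereas greedy decoding (or a compiler) maps the same input to the same output every time. Here is a toy sketch of that sampling step; the logit values and temperature are made up for illustration:

```python
import math
import random

def softmax(logits, temperature):
    """Convert raw scores into a probability distribution, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature, rng):
    """Pick a token index: argmax if temperature is 0, otherwise random sampling."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token, fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.5, 0.5]  # hypothetical next-token scores
rng = random.Random(0)    # seeded only so this demo is repeatable

greedy = {sample_token(logits, 0, rng) for _ in range(100)}
sampled = {sample_token(logits, 0.8, rng) for _ in range(100)}
print(greedy)   # always {0}: same input, same output, like a compiler
print(sampled)  # multiple distinct tokens: same input, varying output
```

A compiler's output for a fixed input and flags behaves like the `greedy` case; production LLM APIs usually run in the `sampled` regime.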
> Compiler bugs still happen, you know.
The whole point is 'verification', which is extremely important in compiler design; there exists a class of formally-verified compilers (CompCert, for example) that are proven to preserve the semantics of the source program, i.e. not to introduce miscompilation bugs in their verified passes. There is no equivalent for LLMs.
In any case, you still NEED to check that the code's functionality matches the business requirements, AI-generated or not, and especially in safety-critical systems. Otherwise, it is a logic bug in your implementation.
You do understand that LLM output is non-deterministic and tends to have a far higher error rate than compiler output; and compiler bugs, when they do occur, at least don't exhibit this "feature": they are reproducible.
I see in one of your other posts that you were loudly grumbling about being downvoted. You may want to reconsider whether taking a combative, bad-faith approach when replying to other people is really worth it.