LLMs piggyback on human knowledge encoded in the texts they were trained on, without understanding what they're doing.
Humans would execute that code and validate it. What starts as merely plausible becomes "hey, it does this, and this is what I want." LLMs skip that part; they have no understanding beyond the statistical patterns they infer from their training data, and they don't need any for what they are.
LLMs can execute code and validate it too when given a code-execution tool (see the sketch below), so the assertions in your argument are incorrect.
What a shame your human reasoning and "true understanding" led you astray here.
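To make that concrete, here is a minimal sketch of the generate-execute-validate loop that tool-using LLM setups run. Everything here is illustrative: `generate_code` is a hypothetical stand-in for whatever LLM API you use, and comparing stdout to an expected string is just one simple validation criterion among many.

```python
import subprocess
import tempfile
from typing import Optional, Tuple


def generate_code(prompt: str, feedback: Optional[str] = None) -> str:
    """Hypothetical stand-in for an LLM call. Any chat-completion API
    would do here; it should return candidate Python source for the
    prompt, optionally revised in light of earlier failure feedback."""
    raise NotImplementedError


def execute_and_validate(source: str, expected: str) -> Tuple[bool, str]:
    """Run the candidate in a subprocess and compare its stdout to the
    expected output. Returns (passed, observed_output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=10
    )
    observed = result.stdout.strip()
    return observed == expected, observed


def generate_with_validation(prompt: str, expected: str, max_tries: int = 3) -> str:
    """Generate, execute, check; on failure, feed the observed behavior
    back into the next attempt, as agent frameworks commonly do."""
    feedback = None
    for _ in range(max_tries):
        candidate = generate_code(prompt, feedback)
        passed, observed = execute_and_validate(candidate, expected)
        if passed:
            return candidate
        feedback = f"Your code printed {observed!r}; expected {expected!r}."
    raise RuntimeError("no candidate passed validation")
```

The point of the loop is that the model's output is judged by running it, not by how plausible it reads, which is exactly the validation step the parent comment claims LLMs skip.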
Could we stop using vague terms like “understanding” when talking about LLMs and machine learning? You don't know what understanding is. You only know how it feels to understand something.
It's better to describe what you can do that LLMs currently can't.