I think this kind of misses what's actually challenging with LLM code -- auditing it for correctness. LLMs are ~fine at spitting out syntactically valid code; the hard part is that humans still need to be able to read and verify the output.