If you mean "once in a thousand times an LLM will do something absolutely stupid" then I agree, but the exact same applies to human beings. In general, LLMs show excellent understanding of context and actual intent; they're completely different from our stereotype of blind algorithmic intelligence.
Btw, were you using codex by any chance? There was a discussion a few days ago where people reported that it follows instructions in an extremely literal fashion, sometimes with absurd outcomes such as the one you describe.