Computers do what you tell them to, not what you want them to. This is naturally infuriating, because when a computer doesn't do what you want, it means you've failed to express your vague idea in concrete terms.
LLMs, in the most general case, do neither what you tell them nor what you want. This, surprisingly, can be less infuriating, because now it feels like you have another actor to blame - even though an LLM is still mostly deterministic, and you can get a pretty good idea of what quality of response to expect for a given prompt.
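The "mostly deterministic" point comes down to how the next token is sampled: at temperature 0, decoding collapses to a plain argmax over the model's logits, so a fixed prompt produces a fixed output. Here is a minimal sketch of that sampling step (the function name, toy logits, and vocabulary size are mine, for illustration only, not any particular library's API):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float,
                      rng: np.random.Generator) -> int:
    """Pick the next token id from a vector of logits."""
    if temperature == 0.0:
        # Greedy decoding: same prompt -> same token, every time.
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    # Stochastic: the result depends on the rng state.
    return int(rng.choice(len(logits), p=probs))

# Toy logits for a 5-token vocabulary.
logits = np.array([1.0, 3.5, 0.2, 2.9, -1.0])
rng = np.random.default_rng(42)  # seeded for reproducibility

print(sample_next_token(logits, 0.0, rng))  # always 1: deterministic
print(sample_next_token(logits, 1.0, rng))  # depends on the rng state
```

At higher temperatures the output distribution flattens and responses vary between runs, but the distribution itself is fixed by the prompt and the model weights - which is why repeated prompting gives you a reliable feel for the quality you'll get.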