You can steer an LLM into doing what you want. Unfortunately, most people don't have the patience or the skill.
The chat UX, with a fake human lying to you and framing things emotionally, really doesn't help. And it's pretty much impossible to get away from it, or at least I haven't found a way yet.
I would love to see a model trained to behave far more like a tool, instead of auto-completing from Reddit language patterns…
People who have the skill can do the same without LLMs, maybe slightly slower on average, but on a more predictable schedule.