On the contrary: the stubborn refusal to anthropomorphize LLMs is where the frustration comes from. To a first approximation, the models are like little people on a chip; the success and failure modes are the same as when talking to people.
If you look, all the good advice and guidelines for LLMs are effectively the same as for human employees - clarity of communication, sufficient context, not distracting with bullshit, information hygiene, managing trust. There are deep reasons for that, and as a rule of thumb, treating LLMs like naive savants gives reliable intuitions for what works and what doesn't.