On the contrary: the stubborn refusal to anthropomorphize LLMs is where the frustration comes from. To a first approximation, the models are like little people on a chip; the success and failure modes are the same as when talking to people.
If you look closely, all the good advice and guidelines for LLMs are effectively the same as for human employees: clarity of communication, sufficient context, not distracting them with bullshit, information hygiene, managing trust. There are deep reasons for that, and as a rule of thumb, treating LLMs like naive savants gives reliable intuitions for what works and what doesn't.
Exactly this. People treat LLMs like they treat machines and then are surprised that "LLMs are bad".
The right mental model for working with LLMs is much closer to "person" than to "machine".
I treat LLMs as statistics-driven compression of knowledge and problem-solving patterns. If you treat them as such, it becomes understandable where they might fail and where you might have to guide them.
Also treat them as something that during training has been biased to produce immediately impressive results. This is why they bundle everything into single files and write try/catch patterns where the catch returns mock data, all to show an impressive one-shot demo.
You have to actively fight against this to make them prioritise the scalability of the codebase and its solutions.
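The mock-data-in-catch anti-pattern mentioned above might look something like this (a hypothetical sketch; the function name and data are made up for illustration):

```python
# Hypothetical sketch of the "impressive demo" anti-pattern: an exception
# handler that silently swallows the failure and returns plausible-looking
# mock data, so a one-shot demo always appears to work.
import json

def fetch_user(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        # Anti-pattern: hide the error and fake success with canned data.
        return {"id": 1, "name": "Demo User", "status": "active"}

# The demo "works" even though no such file exists:
print(fetch_user("missing.json"))
```

The demo runs cleanly, but the failure is invisible, which is exactly why it has to be caught in review rather than trusted at a glance.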