The "figure out what you want to say" is key. I've started to think of LLMs, at least in a business setting, as misunderstanding amplifiers.
How many times at work have you been talking to someone who's using common words as jargon? Maybe it's something like "the online system" or "the platform". It's perfectly clear to them what they mean, but everyone else in the company either doesn't know what that actually is, or has a distorted idea based on the conventional definitions of the words. Even without LLMs in the mix, this can lead to people coming out of meetings with completely different understandings of what's going on.
My experience is that few people actually provide the relevant context to the LLM to explain what they mean in situations like this. Or they don't have the knowledge themselves and are using the LLM in the hope it'll fill in for their ignorance. LLMs are RLHFed to sound confident, so they won't convey that they don't know what a piece of jargon means. Instead they'll use a combination of the common meaning and the rest of the context to invent something. When that gets copy/pasted and sent around, everyone who isn't familiar gets the wrong idea. Hence "misunderstanding amplifier".
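For what it's worth, the fix is cheap when people bother: put the jargon definitions in the prompt instead of hoping the model guesses right. Here's a minimal sketch, assuming the OpenAI Python SDK; the glossary entries, model name, and prompt wording are all invented for illustration:

    from openai import OpenAI

    # Hypothetical internal glossary; the terms and definitions here are
    # made up purely for illustration.
    GLOSSARY = {
        "the online system": "the customer-facing order portal, not the internal admin UI",
        "the platform": "the shared event bus the backend teams publish to, not the website",
    }

    def ask_with_context(question: str) -> str:
        # Spell the jargon out so the model doesn't silently guess a meaning.
        glossary_text = "\n".join(
            f"- '{term}' means {meaning}" for term, meaning in GLOSSARY.items()
        )
        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Company-specific jargon used below:\n" + glossary_text
                        + "\nIf a term isn't defined here, say you don't know "
                        "rather than guessing."
                    ),
                },
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

None of that is hard; the problem is that the person pasting the question in often doesn't realize the terms are ambiguous in the first place.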
To the point of the article, this is soluble if people take the time to actually figure out what they are trying to convey. But if they did that, they wouldn't need the LLM in the first place.
And it requires that people and the systems actually know the relevant terms.
I was recently dealing with the Amazon robot: after correctly identifying the items in the order, it proceeded to use short terms that were wrong but made sense as what a classifier might have spit out. Instead of understanding being a shared thing, it falls entirely on the user. For a sufficiently adept user, this is fine. But a lot of users aren't sufficiently adept.