I agree with most of the article, but the hallucinations bit puzzles me. If it's genuinely an unchangeable limitation of the product (as hallucinations are with LLMs), it's better to set the right expectation than to make promises you can't deliver on.
You are not allowed to tell the truth about LLMs; it is simply outside the current Overton window. In a year or two, this will be retconned. I guarantee it.
It doesn't matter to the end user whether hallucinations are an unchangeable limitation; the fact that they happen undermines people's confidence in LLMs as a tool.
I've wondered the same thing as the author about why we even call them "hallucinations." They're errors: the LLM generated an erroneous output.