This seems unnecessarily pedantic. We know how the system works; we just use "hallucination" colloquially when it produces wrong output.
Other people do not know how it works, hence the danger, and hence the responsibility not to give them the wrong impression of what they're dealing with.
If the information it gives is wrong but grammatically coherent, then the "AI" has fulfilled its purpose. So it isn't really "wrong output", because producing fluent text is what the system was designed to do. The problem is when people use "AI" and expect it to produce truthful responses - it was never designed to do that.