> You apparently did not read the article.
Please don't say things like this in comments (see https://news.ycombinator.com/newsguidelines.html).
I don't think "LLM" and "hallucinated" are accurate here. Image generation is done by different kinds of models, and my impression is that they generally don't ascribe semantics to words the way LLMs do: when they draw letter shapes, they typically aren't modelling the fact that the letters are supposed to spell a particular word with a particular meaning.
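For a concrete (if simplified) illustration of why: in a typical diffusion pipeline the prompt is reduced to subword-token embeddings by a frozen text encoder before the image model ever sees it, so individual letters aren't explicitly represented. A minimal sketch, assuming a HuggingFace CLIP text encoder (the model name and pipeline here are illustrative, not any particular product's internals):

    # Sketch only: how a CLIP-style encoder conditions a text-to-image model.
    from transformers import CLIPTokenizer, CLIPTextModel
    import torch

    tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    enc = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

    ids = tok('a sign that says "OPEN"', return_tensors="pt")
    print(tok.convert_ids_to_tokens(ids.input_ids[0]))
    # "OPEN" arrives as roughly one subword token, not four letters,
    # so nothing downstream explicitly represents O-P-E-N in order.

    with torch.no_grad():
        cond = enc(**ids).last_hidden_state  # shape (1, seq_len, hidden)
    # The image model is conditioned on `cond`; it only learns letter
    # shapes statistically from pixels, which is why rendered text so
    # often comes out as plausible-looking gibberish.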