I read one characterization of LLMs: they don't give new information (except to the user who is learning from them); they reorganize old information.
That’s only true if you tokenize at the word level rather than the character level. With character-level tokenization, the model can generate strings that never appear anywhere in the training vocabulary.
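A minimal sketch of the vocabulary point, using toy made-up vocabularies and plain random sampling as a stand-in for model sampling (so it illustrates only the composition argument, not how any real LLM or tokenizer works): a word-level unit can only ever emit whole words already seen in training, while character-level units can be composed into strings the training corpus never contained.

```python
import random

# Hypothetical toy corpus: pretend training data contains only these words.
training_words = ["custodian", "knowledge", "model", "token"]

# Word-level vocabulary: each unit is a whole word from the corpus.
word_vocab = list(training_words)

# Character-level vocabulary: each unit is a single character.
char_vocab = sorted({ch for word in training_words for ch in word})

def sample_word_level(length=3, seed=0):
    """Every output is a rearrangement of words already in the corpus."""
    rng = random.Random(seed)
    return " ".join(rng.choice(word_vocab) for _ in range(length))

def sample_char_level(length=8, seed=0):
    """Outputs can be strings that never occur in the corpus."""
    rng = random.Random(seed)
    return "".join(rng.choice(char_vocab) for _ in range(length))

print(sample_word_level())              # only known words, recombined
novel = sample_char_level()
print(novel, novel in training_words)   # typically a never-seen string -> False
```

Whether such a never-seen string counts as "new information" rather than noise is, of course, exactly the question the original characterization is getting at.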
Custodians of human knowledge.