Unfortunately, that kind of truth isn't even encoded in the models. A model doesn't really have ideas, and it can't understand whether worms are actually crawling on your skin, because factual concepts aren't part of the embeddings as far as we can tell. It just knows how words relate to each other, and if you keep telling it there are worms on your skin, it will start extrapolating what someone who really did see worms crawling on their skin would say.
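
Here's a minimal sketch of what that extrapolation looks like mechanically, assuming the Hugging Face transformers library and the small gpt2 checkpoint purely as convenient stand-ins (not whatever model a real chatbot runs on): given the words so far, the model produces a probability distribution over the next word, and nothing in that computation checks whether the worms exist.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# gpt2 is used here only as a small, freely available example model.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "There are worms crawling all over my skin. I can feel them. The worms are"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token only
probs = torch.softmax(logits, dim=-1)

# The top continuations are simply the words that tend to follow sentences
# like this one in the training text -- there is no "is this claim true?" step.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {p:.3f}")
```

Keep feeding it more worm talk and those relationships dominate the context, so the most probable continuations drift toward what a person convinced of the worms would say next.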