I was trying to explain the concept of "token prediction" to my wife, whose eyes glaze over whenever such technical topics come up. (I think she has the brainpower to understand them, but a horrible math teacher gave her a taste aversion to even attempting them, one that hasn't gone away. So she just buys Apple stuff and hopes Tim Apple hasn't shuffled around the UI bits AGAIN.)
I stumbled across a good-enough analogy based on something she loves: refrigerator magnet poetry, which (if it's a good set) consists not just of whole words but also of word fragments like "s", "ed", and "ing", kinda like LLM tokens. I said that ChatGPT is like refrigerator magnet poetry in a magical bag of holding that somehow always hands you the tile bearing the most (or nearly the most) statistically plausible next token given the previous text. E.g., if the magnets already up read "easy come and easy ____", the bag would be likely to produce "go". That planted in her head the idea that these things operate on plausibility ratings drawn from a statistical soup of words, not on anything in the real world, nor on any internal cogitation about facts. Any knowledge or thought apparent in the LLM was actually conducted by the original human authors of the words in the soup.
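For anyone who wants the fridge-magnet bag in code form, here's a toy sketch in Python. It's a bigram counter of my own devising, nothing like the neural network inside a real LLM, but it captures the "most statistically plausible next token" idea: tally which tile tends to follow which, then let the "bag" hand back the most frequent follower.

```python
from collections import Counter, defaultdict

# A tiny "corpus" standing in for all the text the magnets were trained on.
corpus = "easy come and easy go easy come and easy go easy does it".split()

# Count, for each token, which tokens follow it and how often.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def most_plausible_next(token):
    # The "magical bag of holding": return the statistically most
    # common follower of the given token.
    return followers[token].most_common(1)[0][0]

print(most_plausible_next("come"))  # "and" always follows "come" here
print(most_plausible_next("easy"))  # "come" and "go" are tied in this corpus
```

Real models condition on the whole preceding text rather than a single previous word, and sample from a probability distribution instead of always taking the top tile, but the "plausibility from a soup of prior words" mechanism is the same in spirit.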
Did you explain how LLMs can achieve gold-medal performance at math competitions involving original problems, without any original knowledge or thought?
Did she ask whether a "statistical soup of words," given enough words, might somehow encode or represent something a little more profound than just a bunch of words?