If the LLM hasn't learned the letters that comprise input tokens, how do you explain this sort of behaviour?
https://chatgpt.com/share/678e95cf-5668-8011-b261-f96ce5a33a...
It can literally spell out words, one letter per line.
Seems pretty clear to me the training data contained sufficient information for the LLM to figure out which tokens correspond to which letters.
And it's no surprise the training data would contain such content - it'd be pretty easy to synthetically generate misspellings, and being able to deal with typos and OCR mistakes gracefully would be useful in many applications.
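For illustration, here is a minimal sketch of how such noisy pairs could be generated; the corruption rules (swap, drop, double a letter) are just assumptions for the example, not anything OpenAI has described:

    import random

    def misspell(word: str, rng: random.Random) -> str:
        """Corrupt a word with one simple edit operation."""
        if len(word) < 2:
            return word
        i = rng.randrange(len(word) - 1)
        op = rng.choice(["swap", "drop", "double"])
        if op == "swap":   # transpose two adjacent letters
            return word[:i] + word[i + 1] + word[i] + word[i + 2:]
        if op == "drop":   # delete a letter
            return word[:i] + word[i + 1:]
        return word[:i] + word[i] + word[i:]   # duplicate a letter

    rng = random.Random(0)
    for w in ["tree", "spelling", "tokenization"]:
        print(misspell(w, rng), "->", w)   # e.g. pairs like "tere -> tree"

Pair each corrupted form with its clean spelling and you have exactly the kind of supervision that teaches a model to map tokens back to letters.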
Two answers: 1 - ChatGPT isn't an LLM, it's an application using one or more LLMs plus other tools (likely routing a request like that to a split function).
2 - even for a single model 'call':
It can be explained with the following training samples:
"tree is spelled t r e e" and "tree has 2 e's in it"
The problem is, the LLM has seen something like:
8062, 382, 136824, 260, 428, 319, 319
and
19816, 853, 220, 17, 319, 885, 306, 480
For a lot of words, it will have seen data that results in it saying something sensible, but it's fragile. If LLMs used character-level tokenization, you'd see the token for "e" repeated within "tree" in the first example, rather than "tree" getting its own token.
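You can see this directly with OpenAI's tiktoken library. A small sketch, assuming the o200k_base encoding (the one used by the GPT-4o family); exact IDs differ between encodings, but the shape matches the sequences above:

    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")

    for text in ["tree is spelled t r e e", "tree has 2 e's in it"]:
        ids = enc.encode(text)
        pieces = [enc.decode_single_token_bytes(t) for t in ids]
        print(ids)      # integer IDs, roughly the sequences quoted above
        print(pieces)   # "tree" is one opaque token; the spelled-out letters are separate tokens

The model only ever sees those integer IDs, so any link between the token for "tree" and the letters t, r, e, e has to be learned from training examples like the ones above.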
There are all manner of tradeoffs made in a tokenization scheme. One example: OpenAI changed how whitespace is tokenized so that models would produce better Python code.
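One way to see that tradeoff is to compare encodings on indented code. A rough sketch (r50k_base is the original GPT-3 encoding; p50k_base, used for the Codex-era models, is commonly cited as adding dedicated tokens for runs of spaces; exact counts depend on the encoding):

    import tiktoken

    snippet = "def f(x):\n        return x + 1\n"   # Python code with an 8-space indent

    for name in ["r50k_base", "p50k_base", "o200k_base"]:
        enc = tiktoken.get_encoding(name)
        print(name, len(enc.encode(snippet)), "tokens")   # newer encodings tend to pack the indentation into fewer tokens

Cheaper indentation means more code fits in the context window, at the cost of yet another layer between the model and the underlying characters.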