
selfhoster11 · 04/27/2025

Those tokens won't come up during training, but LLMs are capable of in-context learning. If you give the model some examples of how to create new words/characters in this manner as part of the prompt, it will be able to use those tokens at inference time. Show it a few examples of how to compose a Thai or Chinese sentence out of byte tokens, give it a description of the hypothetical Unicode range of a custom alphabet, and a sufficiently strong LLM will be able to just output the bytes, despite those codepoints not technically existing.
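To make the byte-level idea concrete, here is a minimal Python sketch of what "composing characters out of byte tokens" means. The `<0xNN>` token names and the `to_byte_tokens` helper are placeholders for illustration, not any particular tokeniser's vocabulary; the only real fact used is UTF-8 encoding itself, which is why even a Private Use Area codepoint round-trips cleanly:

```python
def to_byte_tokens(text: str) -> list[str]:
    """Represent a string as single-byte tokens (hex-named placeholders)."""
    return [f"<0x{b:02X}>" for b in text.encode("utf-8")]

# A Thai word and a Chinese character, decomposed into byte tokens:
print(to_byte_tokens("ไก่"))   # Thai
print(to_byte_tokens("猫"))    # Chinese

# A codepoint from a Private Use Area - standing in for a "custom alphabet"
# no model has seen - still round-trips through raw bytes:
custom = chr(0xE123)
tokens = to_byte_tokens(custom)
decoded = bytes(int(t[3:5], 16) for t in tokens).decode("utf-8")
assert decoded == custom
print(tokens)
```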

And like I said, single-byte tokens very much are part of "word" tokenisers - or, to be precise, part of the set of tokens they can select. "Word tokeniser" is a misnomer in any case: they are word-piece tokenisers. English is simple enough that word pieces can often be entire words. With languages that pack numerous suffixes, prefixes, and even infixes into one "word" (defined here as "one or more characters preceded or followed by a space", because the truth is more complicated than that), you have not so much a "word tokeniser" as a "subword tokeniser". A character tokeniser is just the special case of a subword tokeniser where the length of every subword is exactly 1.
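A toy greedy longest-match tokeniser makes the "character tokeniser is a special case" point visible. The vocabularies below are made up for illustration; real tokenisers learn theirs from data (e.g. via BPE), but the relationship between the cases is the same:

```python
def tokenise(text: str, vocab: set[str]) -> list[str]:
    pieces, i = [], 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i,
        # falling back to a single character if nothing matches.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab or length == 1:
                pieces.append(piece)
                i += length
                break
    return pieces

subword_vocab = {"un", "break", "able", "token", "iser", "s", " "}
char_vocab = set("breakable ")  # every "subword" is exactly 1 character long

print(tokenise("unbreakable tokenisers", subword_vocab))
# ['un', 'break', 'able', ' ', 'token', 'iser', 's']
print(tokenise("breakable", char_vocab))
# ['b', 'r', 'e', 'a', 'k', 'a', 'b', 'l', 'e']
```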