
matthewolfe · yesterday at 3:02 PM

A lot of model-specific tokenizers have reference implementations ([0], [1]). Underlying them is a core algorithm like SentencePiece or byte-pair encoding (BPE); tiktoken and TokenDagger are BPE implementations. The wrapping "tokenizer" mostly deals with vocabulary quirks and special-token handling.
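To make that split concrete, here's a minimal Python sketch using tiktoken's public API (cl100k_base is just a convenient example encoding): the BPE core only knows mergeable byte ranks, while the wrapper layer decides whether a string like <|endoftext|> is plain text or a single special token.

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    # The BPE core alone treats the marker as ordinary text
    # and merges it into several tokens...
    print(enc.encode("hi <|endoftext|>", disallowed_special=()))

    # ...while the wrapper layer can map it to one special-token id.
    print(enc.encode("hi <|endoftext|>", allowed_special={"<|endoftext|>"}))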

For this project, I think there is value in building some of these model-specific quirks into the library itself: I could see minor performance gains, and it would generally make the library easier to integrate with. Keeping up with newer models is probably not much work either, since tokenizers change far less frequently than the models themselves. A sketch of what that could look like is below.
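As a hypothetical sketch (not TokenDagger's actual API), baking in the quirks could be as small as a registry pairing each model's vocab with its special tokens and pre-tokenization regex. All names here are illustrative; the Llama 3 special-token ids are taken from the reference implementation in [0], while the regex is a simplified stand-in.

    import tiktoken
    from tiktoken.load import load_tiktoken_bpe

    # Illustrative per-model quirks; a real registry would carry the
    # model's full special-token map and exact pre-tokenization regex.
    MODEL_QUIRKS = {
        "llama3": {
            # Simplified stand-in; the real regex lives in the
            # reference implementation ([0]).
            "pat_str": r"\S+|\s+",
            "special_tokens": {
                "<|begin_of_text|>": 128000,
                "<|end_of_text|>": 128001,
            },
        },
    }

    def get_tokenizer(model: str, vocab_path: str) -> tiktoken.Encoding:
        """Build an Encoding with the model's quirks baked in."""
        quirks = MODEL_QUIRKS[model]
        return tiktoken.Encoding(
            name=model,
            pat_str=quirks["pat_str"],
            mergeable_ranks=load_tiktoken_bpe(vocab_path),
            special_tokens=quirks["special_tokens"],
        )

Callers would then get one entry point per model instead of re-implementing each wrapper, which is where the integration win comes from.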

[0] https://github.com/meta-llama/llama-models/blob/01dc8ce46fec...

[1] https://github.com/mistralai/mistral-common/tree/main/src/mi...