
goose0004 · yesterday at 1:23 AM

The collision point is interesting, but I'd argue context disambiguates. If I'm understanding you correctly, I don't think the models are confused about whether they're looking at an email when `@` appears before a route pattern. These symbols are heavily represented in programming contexts (e.g. Python decorators, shell scripts), so LLMs have seen them plenty of times in code. I'd be interested in your findings though! It's definitely an issue I'd like to avoid, or at least mitigate.
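To make the disambiguation point concrete, here's plain Python (not GlyphLang syntax, just an illustration) where `@` already carries three unrelated meanings in one file, and models handle it fine:

    # '@' as a decorator marker, an email address, and matrix multiply -
    # the surrounding context, not the glyph itself, carries the meaning.
    import functools

    @functools.cache                  # decorator syntax
    def support_email() -> str:
        return "help@example.com"     # email inside a string literal

    def matmul(a, b):
        return a @ b                  # PEP 465 matrix multiplication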

That's a fair point about tokenizer variance - vocabularies do differ - but the symbols GlyphLang uses are ASCII characters that tokenize as single tokens across the GPT-4, Claude, and Gemini tokenizers. The optimization isn't model-specific; it targets the common case of "ASCII char = 1 token". I could definitely reword my post though - looking at it more closely, it reads more as "fix-all" than "fix-most".
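For what it's worth, the GPT side of the single-token claim is easy to spot-check with OpenAI's tiktoken (a quick sketch - the symbol list is illustrative, not GlyphLang's actual glyph set, and the Claude/Gemini tokenizers would need their own vendor tools):

    # Verify that standalone ASCII symbols encode as single tokens in
    # GPT-4's cl100k_base encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for sym in ["@", "#", "$", "%", "&", "|", ">", "~"]:
        ids = enc.encode(sym)
        print(f"{sym!r} -> {len(ids)} token(s): {ids}")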

Regardless, I'd genuinely be interested in seeing failure cases. It would be incredibly useful to see whether there are specific patterns where symbol density hurts comprehension.