
danielmarkbruce | 01/23/2025 | 0 replies | view on HN

Conclusion:

""" While current LLMs with BPE vocabularies lack direct access to a token’s characters, they perform well on some tasks requiring this information, but perform poorly on others. The models seem to understand the composition of their tokens in direct probing, but mostly fail to understand the concept of orthographic similarity. Their performance on text manipulation tasks at the character level lags far behind their performance at the word level. LLM developers currently apply no methods which specifically address these issues (to our knowledge), and so we recommend more research to better master orthography. Character-level models are a promising direction. With instruction tuning, they might provide a solution to many of the shortcomings exposed by our CUTE benchmark """

That is "having problems with spelling 'games'" and "probably better to use character level models for such tasks". Maybe you don't understand what "spelling games" are, here: https://chatgpt.com/share/67928128-9064-8002-ba4d-7ebc5edf07...