Hacker News

danielmarkbruce · 01/21/2025

> Once they've learnt the letters predicted by each token, they'll be able to do this for any word (i.e. token sequence).

They often fail at tasks like this (hence the strawberry example) because they can't break down a token and have no concept of its characters. There is a sort of sweet spot where it's really hard (like "strawberry"). The example you give above is so far from a real word that it gets tokenized into lots of tokens, i.e. it's almost character-level tokenization. You also have the fact that none of the mainstream chat apps are blindly shoving input into a model; they are almost certainly routing a request like that to a split function.
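The tokenization point can be sketched with a toy greedy longest-match tokenizer (a sketch only: real BPE vocabularies are learned from data, and the `VOCAB` here is invented for illustration). A common word maps to one token, while a made-up string falls back to near character-level pieces:

```python
# Invented toy vocabulary; any substring not in it falls back to single characters.
VOCAB = {"straw", "berry", "strawberry", "ing", "er"}

def tokenize(word, vocab=VOCAB):
    """Greedy longest-match tokenization: scan left to right, always
    taking the longest substring found in the vocabulary."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest candidate first
            piece = word[i:j]
            if piece in vocab or len(piece) == 1:
                tokens.append(piece)
                i = j
                break
    return tokens

print(tokenize("strawberry"))    # → ['strawberry']  (one opaque token)
print(tokenize("xqzlberryfoo"))  # → ['x', 'q', 'z', 'l', 'berry', 'f', 'o', 'o']
```

The first word is a single token the model cannot see inside; the second is mostly individual characters, which is why spelling out nonsense words is comparatively easy.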


Replies

HarHarVeryFunny · 01/21/2025

You're still not getting it ...

Why would an LLM need to "break down" tokens into letters to do spelling?! That is just not how they work - they work by PREDICTION. If you ask an LLM to break a word into a sequence of letters, it is NOT trying to break it into a sequence of letters - it is trying to do the only thing it was trained to do, which is to predict what tokens (based on the training samples) most likely follow such a request, something that it can easily learn given a few examples in the training set.
