Hacker News

HarHarVeryFunny 01/21/2025

You're the one out of your depth ...

LLMs are taught to predict. Once they've seen enough training samples of words being spelled, they'll have learnt that in a spelling context the tokens comprising the word predict the tokens comprising the spelling.

Once they've learnt the letters predicted by each token, they'll be able to do this for any word (i.e. token sequence).
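
A toy sketch of that claim (the token strings and the split below are hypothetical, not any real tokenizer's vocabulary): if the model has effectively learned a token-to-letters mapping, then spelling an arbitrary token sequence is just concatenating those mappings.

    # Hypothetical token -> letters mapping the model is assumed to have learned.
    # The token strings themselves are made up for illustration.
    learned_spellings = {
        "as": ["a", "s"],
        "dp": ["d", "p"],
        "oty": ["o", "t", "y"],
        "g": ["g"],
    }

    def spell(token_sequence):
        """Spell out any word given as a sequence of known tokens."""
        letters = []
        for token in token_sequence:
            letters.extend(learned_spellings[token])
        return letters

    print(spell(["as", "dp", "oty", "g"]))
    # -> ['a', 's', 'd', 'p', 'o', 't', 'y', 'g']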

Of course, you could just try it for yourself - ask an LLM to break a non-dictionary nonsense word like "asdpotyg" into a letter sequence.


Replies

og_kalu 01/22/2025

Have you seen the Byte-latent Transformer paper?

It does away with sub-word tokenization but is still more or less a transformer (no working memory or internal iteration). Mostly, the performance gains seem modest (not unanimously; on some benchmarks it's a bit worse) ... until you hit anything to do with character-level manipulation, and then it just stomps: 1.1% to 99% on the CUTE spelling task is a particularly striking example.

I'm not sure exactly what the problem is, but clearly something about sub-word tokenization is giving these models a particularly hard time on these sorts of tasks.

https://arxiv.org/abs/2412.09871
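
A rough way to see why byte-level input changes the picture (the sub-word split below is illustrative, not output from BLT or any particular tokenizer): with bytes, the letters are directly present in the input units; with sub-word pieces, the letters inside each piece have to be memorized.

    word = "strawberry"

    # Byte-level view: for ASCII text the input units *are* the characters,
    # so a spelling task is essentially reading the input back out.
    byte_units = list(word.encode("utf-8"))
    print([chr(b) for b in byte_units])
    # -> ['s', 't', 'r', 'a', 'w', 'b', 'e', 'r', 'r', 'y']

    # Sub-word view (illustrative split, not a real tokenizer's output):
    # the model sees opaque piece IDs, and the letters inside each piece
    # are only recoverable if it has memorized piece -> spelling.
    subword_pieces = ["straw", "berry"]
    print(subword_pieces)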

danielmarkbruce 01/21/2025

> Once they've learnt the letters predicted by each token, they'll be able to do this for any word (i.e. token sequence).

They often fail at things like this - hence the strawberry example - because they can't break a token down or have any concept of its internal characters. There's a sort of sweet spot where it's really hard (like "strawberry"). The example you give above is so far from a real word that it gets tokenized into lots of tokens, i.e. it's almost character-level tokenization. You also have the fact that none of the mainstream chat apps are blindly shoving things into a model; they are almost certainly routing a request like that to a split function.
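
For what it's worth, that tokenization claim is easy to check with an off-the-shelf tokenizer (a sketch assuming the tiktoken library and its cl100k_base encoding as a stand-in for whatever the chat models actually use):

    import tiktoken  # pip install tiktoken

    # Stand-in encoding; the actual models may use a different vocabulary.
    enc = tiktoken.get_encoding("cl100k_base")

    for word in ["strawberry", "asdpotyg"]:
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]
        print(f"{word!r}: {len(ids)} token(s) -> {pieces}")

    # Expectation (not hard-coded, since it depends on the encoding): the real
    # word maps to one or two pieces, while the nonsense word breaks into many
    # short, near-character-level fragments.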
