Hacker News

Marazan, today at 8:24 AM

If you remove the auxiliary tools and just leave the core LLM, then "strawberry" still has an undefined number of `r`s in it.


Replies

p-e-w, today at 8:35 AM

That’s false. Larger LLMs learn token decompositions through their training, and in fact modern training pipelines are designed to occasionally produce uncommon tokenizations (including splitting words into individual characters) for this reason. Frontier models have no trouble spelling words even without tools. Even many mid-sized models can do that.
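The disagreement comes down to tokenization: the model consumes token IDs rather than characters, so counting letters requires it to have learned each token's spelling. A minimal sketch, using a hypothetical token split (not taken from any real BPE vocabulary):

```python
# Toy illustration: an LLM sees "strawberry" as opaque token IDs,
# not characters. This split is hypothetical, for illustration only.
tokens = ["str", "aw", "berry"]

# Counting letters is only possible if the model has learned the
# spelling of each token; the characters are not directly visible to it.
word = "".join(tokens)
r_count = word.count("r")

print(word, r_count)  # strawberry 3
```

Whether a given model can do this reliably is exactly what the two comments dispute; the claim above is that training on varied tokenizations teaches larger models these spellings.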
