Hacker News

mrob · today at 9:48 AM

Character-counting errors are a side effect of tokenization, which is a performance optimization. If we scaled the hardware up enough, we could train on raw bytes and avoid it.
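A toy sketch of the point about tokenization (the vocabulary and token IDs here are made up, not any real tokenizer's): once frequent character sequences are merged into single tokens, the model receives opaque IDs, and the individual characters are no longer directly visible in its input.

```python
# Hypothetical BPE-style vocabulary: frequent character sequences
# get merged into single token IDs (IDs are invented for illustration).
vocab = {"straw": 101, "berry": 102}

def tokenize(text, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

print(tokenize("strawberry", vocab))  # [101, 102]
# The model sees two IDs; nothing in [101, 102] directly encodes
# that the underlying string contains three 'r' characters.
# A byte-level model would instead see all 10 bytes individually.
```

This is why character-level questions ("how many r's in strawberry?") force the model to recall facts about its tokens rather than read the characters off its input.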


Replies

teiferer · today at 11:56 AM

No, tokenization is not the only reason. A next-word predictor fundamentally has a hard time executing algorithms, even ones as simple as counting.