Hacker News

HarHarVeryFunny · 01/22/2025 · 0 replies · view on HN

The CUTE benchmark is interesting, but the paper doesn't include enough examples of the actual prompts and model outputs to evaluate the results. Transformers obviously manipulate their input at token-level granularity, so to succeed at character-level manipulation they first need to generate the character-level token sequence, THEN do the manipulation. Prompting them to output a result directly, without letting them first generate the character sequence, would therefore guarantee poor performance, so it'd be important to see the details.
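To see why the intermediate spell-out step matters, here's a toy greedy longest-match tokenizer (a simplification, not the models' actual BPE): a common word collapses into multi-character tokens, so no per-character representation exists until the model writes the letters out one by one.

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization over a toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible piece starting at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No vocabulary match: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

# Toy vocabulary: a couple of multi-character subwords plus single letters.
vocab = {"straw", "berry"} | set("abcdefghijklmnopqrstuvwxyz")

word = "strawberry"
print(tokenize(word, vocab))            # ['straw', 'berry'] — no character-level view
spelled = " ".join(word)                # "s t r a w b e r r y"
print(tokenize(spelled, vocab))         # one token per letter (spaces fall back too)
```

In the first case the model's input is two opaque subword tokens; only after emitting the spelled-out form does a character-granular sequence exist to manipulate, which is why prompts that allow an explicit spelling step should fare much better on tasks like reversal or letter counting.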

https://arxiv.org/pdf/2409.15452