
locknitpicker · today at 7:20 AM · 0 replies

> I like to imagine that the number of consumed tokens before a solution is found is a proxy for how difficult a problem is (...)

The number of tokens required to reach an output is a function of the sequence of inputs/prompts and of how the model was trained.

There are LLMs quite capable of accomplishing complex software engineering work that still struggle to translate valid text from English into some other languages. The translations can be improved with additional prompting, but that doesn't mean translation is the more difficult problem.