Hacker News

QuadrupleA yesterday at 10:07 PM

Claude Code's primarily optimized for burning as many tokens as possible.


Replies

redman25 today at 2:43 AM

It’s mainly the benchmarks that have encouraged that. The more tokens they crank out, the more likely the answer is to be somewhere in the output.

tartoran yesterday at 10:09 PM

Honestly, I don't think it's optimized for that (yet), though it's tempting to keep churning out lots and lots of new features. The issue with LLMs is that they can't act deterministically and are hard to tame; the tendency to burn tokens isn't something done on purpose but a side effect of how LLMs behave given the data they've been trained on.

arcanemachine yesterday at 10:11 PM

That's OpenCode. The model is Claude Opus, which is probably RL'ed pretty heavily to work with Claude Code, so it's a little less surprising to see it bungle the intentions when it's running in another harness. Still laughable, though.

(RL: reinforcement learning)