Hacker News

kogold · today at 11:46 AM

[flagged]


Replies

Chance-Device · today at 1:12 PM

Let’s see, I think these pretty much map out a little chronology of the research:

https://arxiv.org/abs/2112.00114
https://arxiv.org/abs/2406.06467
https://arxiv.org/abs/2404.15758
https://arxiv.org/abs/2512.12777

First that scratchpads matter, then why they matter, then that they don’t even need to be meaningful tokens, then a conceptual framework for the whole thing.

bitexploder · today at 1:58 PM

That "unproven claim" is actually a well-established concept called Chain of Thought (CoT). LLMs literally use intermediate tokens to "think" through problems step by step. They have to generate tokens to talk to themselves, debug, and plan. Forcing them to skip that process by cutting tokens, like making them talk in caveman speak, directly restricts their ability to reason.
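The point about cutting tokens can be illustrated with a toy sketch (not anything from the thread): a chain-of-thought answer simply spends more tokens than a terse "caveman" answer, and those intermediate tokens are where the step-by-step computation happens. The example strings and the crude whitespace token count below are stand-ins, not a real tokenizer or real model output.

```python
# Rough illustration: a chain-of-thought response uses many more tokens
# than a terse answer. The whitespace split is a crude proxy for a real
# BPE tokenizer, but it tracks the right order of magnitude.

def rough_token_count(text: str) -> int:
    """Very rough proxy for tokens; real tokenizers (BPE) differ in detail."""
    return len(text.split())

# Hypothetical responses to the same question:
caveman = "Answer: 42."
cot = (
    "First, restate the problem. "
    "Then break it into steps, checking each intermediate result. "
    "Finally, combine the steps into the answer: 42."
)

print(rough_token_count(caveman))  # few tokens, little room to "think"
print(rough_token_count(cot))      # many more tokens of intermediate work
```

Forcing the short form doesn't just change the style of the output; it removes the intermediate tokens the model would otherwise use as scratch space.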

ShowalkKama · today at 12:09 PM

The fact that more tokens = smarter should be expected, given that CoT / thinking / other techniques increase model accuracy by using more tokens.

Did you test whether "caveman mode" has performance similar to the "normal" model?

ano-ther · today at 1:47 PM

Looking at the skill.md, wouldn’t this actually increase token use, since the model now needs to reformat its output?

Funny idea though. And I’d like to see a more matter-of-fact output from Claude.

collingreen · today at 6:57 PM

I assume you're a human, but wow, this is the type of forum bot I could really get behind.

Take it a step further, kind of like that xkcd: when you try to post, it rewrites your comment like this, and if you want the original version posted you have to write a justification that gets posted too.

Chef's kiss

mynegation · today at 12:12 PM

No, let me rephrase it for you. “tokens used for think. Short makes model dumb”

huflungdung · today at 1:59 PM

[dead]

estearum · today at 1:00 PM

Can't you tell that tokens are units of thinking just by... like... thinking about how models work?
