Hacker News

What I don’t like about chains of thoughts (2023)

44 points by jxmorris12 | last Sunday at 4:39 PM | 23 comments

Comments

marcus_holmes, today at 2:18 AM

This is Dual Process Theory [0], otherwise known as Fast vs Slow thinking, or System 1 and System 2 thinking.

Humans are the only organism known to do System 2 thinking (which doesn't mean we're the only ones that do it, just that we don't know whether, say, whales do it too). System 2 is what the author is talking about when they refer to Chains of Thought.

System 1 is what they're referring to when they talk about Messi reacting to an unusual situation on the field.

Related anecdote: I tested myself for ADHD by taking amphetamines. I normally think by intuitive leaps from point to point, without doing the intermediate steps consciously. I found that during this experience my System 2 thinking was fast enough to follow and I actually experienced proper chains of thought. Or I was whizzing my tits off and hallucinated the whole thing. Not sure yet. I should repeat the experiment.

[0] https://en.wikipedia.org/wiki/Dual_process_theory

crystal_revenge, today at 3:12 AM

Decoder-only LLMs are Markov chains with sophisticated models of the state space. Anyone familiar with Hamiltonian Monte Carlo will know that for good results you need a warm-up period so that you're sampling from the typical set, the region where most of the probability mass concentrates (which is not necessarily the region of highest density/maximum likelihood).
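
To make the warm-up point concrete, here's a minimal sketch of burn-in in a plain random-walk Metropolis sampler (not HMC, but the same warm-up idea) against a toy 1-D Gaussian target. All names are illustrative and nothing here comes from a particular library:

```python
import math
import random

def log_density(x: float) -> float:
    """Unnormalized log-density of a standard normal target."""
    return -0.5 * x * x

def metropolis(n_samples: int, n_warmup: int, step: float = 1.0, x0: float = 10.0):
    """Random-walk Metropolis; discards the first n_warmup draws as warm-up."""
    x = x0  # deliberately start far from the typical set
    draws = []
    for i in range(n_warmup + n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept with probability min(1, p(proposal)/p(x)), done in log space.
        if math.log(random.random()) < log_density(proposal) - log_density(x):
            x = proposal
        if i >= n_warmup:  # keep only post-warm-up draws
            draws.append(x)
    return draws

samples = metropolis(n_samples=5000, n_warmup=1000)
print(sum(samples) / len(samples))  # ~0.0 once the chain is in the typical set
```

The early draws are dominated by the arbitrary starting point; only after the chain has drifted into the typical set do the samples look like the target distribution, which is the analogy being drawn to letting a model "run a bit longer" before answering.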

I have spent a lot of time experimenting with Chain of Thought professionally, and I have yet to see any evidence that what's happening with CoT is any more (or less) than this. If you let the model run a bit longer, it enters a region close to the typical set, and by the time it's ready to answer you have a high probability of getting a good answer.

There's absolutely no "reasoning" going on here, except that sometimes sampling from the typical set near the region of your answer is going to look very similar to how humans reason before coming up with an answer.

tintor, today at 2:50 AM

"Obviously no, this IMO proves that we humans can reason efficiently without an inner speech."

Well, no, it proves that Messi can reason efficiently without an inner speech.

AlexCoventry, today at 8:28 AM

There's a good deal of work on latent reasoning models these days. Here is a recent survey paper:

https://arxiv.org/abs/2507.06203

cubefox, today at 4:31 AM

Interesting that he came to this conclusion (CoT should be done in latent space) well before the release of OpenAI's o1, which made explicit CoT reliable in the first place. At the time the blog post was written, CoT was only achieved via a "reason step by step" instruction, which was highly error-prone compared to modern o1-like reasoning. (And before InstructGPT/ChatGPT, it was achieved by prompting the model with "let me reason step by step".)
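
For context, both pre-o1 idioms were pure prompt wording. A rough sketch of the two styles the comment describes, with an illustrative question and no particular model or API assumed:

```python
# Illustrative prompts only; the question text is made up for the example.
question = "Q: I have 3 apples and buy 2 bags of 4 more. How many apples do I have?"

# Instruction-tuned models (post-InstructGPT/ChatGPT): CoT elicited by an
# explicit instruction in the prompt.
instruct_prompt = f"{question}\nReason step by step, then give the final answer."

# Base models (pre-InstructGPT): CoT elicited by a completion-style prefix
# the model is likely to continue with reasoning.
base_prompt = f"{question}\nA: Let me reason step by step."

print(instruct_prompt)
print(base_prompt)
```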

albert_e, today at 2:39 AM

> chains of thoughts

Pedantic maybe -- but does this need two plurals?
