Hacker News

lloydatkinson (last Friday at 4:22 PM)

What is this?

> Assistant: chain-of-thought

Does every LLM have this internal thing it doesn't know we have access to?


Replies

Tzt (last Friday at 4:48 PM)

Yes, the vast majority of new ones use CoTs: a long chain of reasoning you don't see.

Some of them also write in a really weird style in there, e.g.:

o3 talks about watchers, marinade, and cunning schemes: https://www.antischeming.ai/snippets

gpt5 gets existential about seahorses: https://x.com/blingdivinity/status/1998590768118731042

I remember one where gpt5 spontaneously wrote a poem about deception in its CoT and then resumed like nothing weird happened. But I can't find mentions of it now.
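For context on what "a chain of reasoning you don't see" looks like mechanically: APIs that do expose it typically return the chain as a field separate from the visible answer, and chat UIs just render the answer. A minimal sketch below parses a hypothetical response payload; the `reasoning` field name is an assumption (providers vary, and many don't return the chain at all):

```python
import json

# Hypothetical response from a reasoning model. The "reasoning" field name
# is illustrative only; real providers use different names or omit it.
raw = json.dumps({
    "choices": [{
        "message": {
            "reasoning": "User asked 2+2. Basic arithmetic. Answer is 4.",
            "content": "4",
        }
    }]
})

def split_reasoning(payload: str) -> tuple[str, str]:
    """Separate the hidden chain-of-thought from the user-visible answer."""
    msg = json.loads(payload)["choices"][0]["message"]
    return msg.get("reasoning", ""), msg["content"]

cot, answer = split_reasoning(raw)
print(answer)  # only this part would appear in a chat UI
```

The point is that the chain exists in the transcript whether or not the interface (or the model's own sense of its audience) accounts for it.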

catigula (last Friday at 4:52 PM)

Yes, they're deliberately not 'trained on' the chain-of-thought, to avoid making it useless for interpretability. As a result, some models can find it epistemically shocking if you tell them you can see their chain-of-thought. More recent models can infer on their own, without being trained on it, that their chain-of-thought is visible.
