
pfortuny · last Friday at 2:25 PM

Honest question:

> Anthropic showed that LLMs don't understand their own thought processes

Where can I find this? I am really interested in that. Thanks.


Replies

roywiggins · last Friday at 3:18 PM

https://www.anthropic.com/research/tracing-thoughts-language...

> Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning, providing a proof of concept that our tools can be useful for flagging concerning mechanisms in models...

> Claude seems to be unaware of the sophisticated "mental math" strategies that it learned during training. If you ask how it figured out that 36+59 is 95, it describes the standard algorithm involving carrying the 1. This may reflect the fact that the model learns to explain math by simulating explanations written by people, but that it has to learn to do math "in its head" directly, without any such hints, and develops its own internal strategies to do so.
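For context, the "standard algorithm involving carrying the 1" that Claude describes is just ordinary column addition. A minimal, illustrative Python sketch (not the model's actual internal strategy, which the article says differs from this):

```python
def column_add(a: int, b: int) -> int:
    """Standard column addition with carrying -- the procedure the model
    *describes* using for sums like 36 + 59, per the quoted article.
    Illustrative only; not how the model computes internally."""
    result = 0
    carry = 0
    place = 1
    while a or b or carry:
        digit_sum = a % 10 + b % 10 + carry
        carry = digit_sum // 10            # "carrying the 1"
        result += (digit_sum % 10) * place
        place *= 10
        a //= 10
        b //= 10
    return result

assert column_add(36, 59) == 95
```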

encyclopedism · last Friday at 2:31 PM

Well, algorithms don't think, and that's what LLMs are.

Your digital thermometer doesn't think either.
