Hacker News

Terr_ · last Monday at 7:50 PM

> I'm so baffled when I see this being blindly asserted. With the reasoning models, you can literally watch their thought process.

Not true; you are falling for a very classic (prehistoric, even) human illusion known as experiencing a story:

1. There is a story-like document being extruded out of a machine humans explicitly designed for generating documents, and which humans trained on a bajillion stories humans already made.

2. When you "talk" to a chatbot, that is an iterative build of a (remote, hidden) story document, where one character's dialogue is drawn from your text input and the other character's dialogue is "performed" at you.

3. The "reasoning" in newer versions is just the "internal monologue" of a film noir detective character, and just as fictional as anything that character "says out loud" to the (fictional) smokin-hot client who sashayed into the (fictional) rent-overdue office bearing your (real) query on its (fictional) lips. (See the sketch after this list.)
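
To make (2) and (3) concrete, here is a toy sketch in Python. Everything in it is illustrative: `complete` is a hypothetical stand-in for any autoregressive LLM API, the role labels are made up, and the canned return value is only there so it runs. The point is that the hidden "reasoning" and the visible reply come out of the exact same document-continuation call:

    def complete(document: str) -> str:
        """Hypothetical stand-in for an autoregressive LLM API: a real one
        would return whatever text the model predicts continues `document`.
        Canned output here just so the sketch executes."""
        return "(model-continued text)"

    transcript = "A conversation between a User and a helpful Assistant.\n"

    def chat_turn(user_text: str) -> str:
        global transcript
        transcript += f"User: {user_text}\n"       # your input becomes one character's dialogue
        transcript += "Assistant (thinking): "
        transcript += complete(transcript) + "\n"  # the "internal monologue": generated the same
        transcript += "Assistant: "                # way, merely hidden or shown by the UI
        reply = complete(transcript)               # the line "performed" at you
        transcript += reply + "\n"
        return reply

    print(chat_turn("What is 2 + 2?"))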

> If that's not thinking, I literally don't know what is.

All sorts of algorithms can achieve useful outcomes with "that made sense to me" flows, but that doesn't mean we automatically consider them to be capital-T Thinking.
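
For instance, here is a toy sketch (Python; the graph and goal are made up, and it has nothing to do with any real LLM): a plain breadth-first search that narrates its steps produces a perfectly sensible-looking "chain of thought", yet nobody credits BFS with a mind:

    from collections import deque

    # An ordinary graph search over a made-up graph, narrating every step.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

    def solve(start: str, goal: str) -> None:
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            print(f"Considering {node}; path so far: {path}")  # the "monologue"
            if node == goal:
                print("Found it: " + " -> ".join(path))
                return
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])

    solve("A", "D")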

> So I have to ask you: when you claim they don't think -- what are you basing this on?

Consider the following document from an unknown source, and the "chain of reasoning" and "thinking" that your human brain perceives when encountering it:

    My name is Robot Robbie.
    That high-carbon steel gear looks delicious. 
    Too much carbon is bad, but that isn't true here.
    I must ask before taking.    
    "Give me the gear, please."
    Now I have the gear.
    It would be even better with fresh manure.
    Now to find a cow, because cows make manure.
Now whose reasoning/thinking is going on? Can you point to the mind that enjoys steel and manure? Is it in the room with us right now? :P

In other words, the reasoning is illusory. Even if we accept, for the sake of argument, that the unknown author is a thinking intelligence... the document still doesn't tell you what that author is actually thinking.


Replies

crazygringo · last Monday at 8:38 PM

You're claiming that the thinking is just a fictional story intended to look like thinking.

But this is false, because the thinking exhibits cause and effect and a lot of good reasoning. If you change the inputs, the thinking continues to be pretty good with the new inputs.

It's not a story and it's not fictional; it's producing genuinely reasonable conclusions about data it hasn't seen before. So how is that not actual thinking?

And I have no idea what your short document example has to do with anything. It seems nonsensical and bears no resemblance to the actual, grounded chains of thought that high-quality reasoning LLMs produce.

> OK, so that document technically has a "chain of thought" and "reasoning"... But whose?

What does it matter? If an LLM produces output, we say it's the LLM's. But I fail to see how that's significant.
