Hacker News

The Case That A.I. Is Thinking

278 points | by ascertain | 11/03/2025 | 1011 comments

https://archive.ph/fPLJH


Comments

j45 | 11/03/2025

I like learning from everyone's perspectives.

I also keep in mind that non-tech people often talk about how tech works without an understanding of it.

sesm | 11/04/2025

LLMs, by design, are making plausible guesses.

sonicvroooom | 11/04/2025

vectorized thinking in vectorized context is math.

coding logical abduction into LLMs completely breaks them, while humans can roll with it perfectly, though it's worth emphasizing that some might need a little help from chemistry, or at least to not be caught on the wrong foot.

you're welcome, move on.

gen220 | 11/04/2025

In some realpolitik/moral sense, does it matter whether it is actually "thinking", or "conscious", or has "autonomy" / "agency" of its own?

What seems to matter more is if enough people believe that Claude has those things.

If people credibly think AI may have those qualities, it behooves them to treat the AI like any other person they have a mostly-texting relationship with.

Not in a utility-maximizing Pascal's Wager sense, but in a humanist sense. If you think Claude is human-like, and treat Claude poorly, it makes you more likely to treat the humans around you (and yourself) poorly.

Conversely, if you're able to have a fulfilling, empathetic relationship with Claude, it might help you form fulfilling, mutually empathetic relationships with the humans around you. Put the opposite way, treating a human-like Claude poorly doesn't seem to help the goal of increasing human welfare.

The implications of this idea are kind of interesting: even if you think AI isn't thinking or conscious or whatever, you should probably still be a fan of "AI welfare" if you're merely a fan of that pesky little thing we call "human flourishing".

show 2 replies
iainmerrick | 11/05/2025

I don't have a hot take to add here, but I just wanted to say that this article is terrific. Great insights and detail, great clarity for a general audience without dumbing down the technical content in the slightest. Of course it raises more questions than it answers; that's the point of this kind of thing. It's going to be a really useful reference point on the 2025 state of the art in years to come.

This is some of the best writing on AI since Ted Chiang's "ChatGPT Is a Blurry JPEG of the Web". And that was in the New Yorker too! Might need to get myself a subscription...

brador | 11/04/2025

The other side of the coin is maybe we’re not. And that terrifies all who consider it.

embedding-shape | 11/03/2025

The definitions of all these words have gone back and forth without ever reaching 100% consensus, so what one person understands by "thinking", "conscious", "intelligence", and so on seems to be vastly different from what another does.

I guess this is why any discussion around this ends up in huge conversations: everyone is talking from their own perspective and understanding while others have different ones, and we're all talking past each other.

There is a whole field trying just to nail down what "knowledge" actually is and isn't, and those people haven't agreed with each other for hundreds of years; I'm not confident we'll suddenly get a lot better at this.

I guess ultimately, regardless of what the LLMs do, does it matter? Would we understand them better/worse depending on what the answer would be?

show 1 reply
nickledave | 11/04/2025

I'm not going to read this -- I don't need to. The replies here are embarrassing enough.

This is what happens when our entire culture revolves around the idea that computer programmers are the most special smartest boys.

If you entertain, even for a second, the idea that a computer program that a human wrote is "thinking", then you don't understand basic facts about: (1) computers, (2) humans, and (3) thinking. Our educational system has failed to inoculate you against this laughable idea.

A statistical model of language will always be a statistical model of language, and nothing more.

A computer will never think, because thinking is something that humans do, because it helps them stay alive. Computers will never be alive. Unplug your computer, walk away for ten years, plug it back in. It's fine--the only reason it won't work is planned obsolescence.

No, I don't want to read your reply that one time you wrote a prompt that got ChatGPT to whisper the secrets of the universe into your ear. We've known at least since Joseph Weizenbaum coded up Eliza that humans will think a computer is alive if it talks to them. You are hard-wired to believe that anything that produces language is a human just like you. Seems like it's a bug, not a feature.

Stop commenting on Hacker News, turn off your phone, read this book, and tell all the other sicko freaks in your LessWrong cult to read it too: https://mitpress.mit.edu/9780262551328/a-drive-to-survive/ Then join a Buddhist monastery and spend a lifetime pondering how deeply wrong you were.

show 4 replies
0xdeadbeefbabe | 11/04/2025

> Still, no one expects easy answers.

Ahem (as a would-be investor, I am insulted).

jonplackett | 11/03/2025

No idea if this is true or not but I do very much like the animation

procaryote | 11/03/2025

In all these discussions there seems to be an inverse correlation between how well people understand what an LLM does and how much they believe it thinks.

If you don't understand what an LLM does – that it is a machine generating a statistically probable token given a set of other tokens – you have a black box that often sounds smart, and it's pretty natural to equate that to thinking.
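The mechanism this comment describes can be sketched in a few lines. This is a toy illustration only, assuming a hand-written bigram count table in place of a trained model (the table, function names, and prompt are all invented for the example); a real LLM replaces the table with a transformer that scores every token in a large vocabulary.

```python
import random

# Toy illustration of "generating a statistically probable token given a set
# of other tokens". A real LLM replaces this hand-written bigram count table
# with a transformer that scores every token in a large vocabulary.
BIGRAM_COUNTS = {
    "the": {"cat": 4, "dog": 3, "idea": 1},
    "cat": {"sat": 5, "ran": 2},
    "sat": {"on": 6},
    "on": {"the": 7},
}

def next_token_distribution(context):
    """Normalize raw scores for the last token into a probability distribution."""
    scores = BIGRAM_COUNTS.get(context[-1], {"the": 1})  # crude fallback for unseen tokens
    total = sum(scores.values())
    return {tok: count / total for tok, count in scores.items()}

def generate(prompt, steps, seed=0):
    """Repeatedly append a sampled 'statistically probable' next token."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(steps):
        dist = next_token_distribution(tokens)
        candidates, weights = zip(*sorted(dist.items()))
        tokens.append(rng.choices(candidates, weights=weights)[0])
    return tokens

print(" ".join(generate(["the"], steps=4)))
```

The point of the sketch is that nothing in the loop inspects meaning: each step is a weighted draw from a distribution conditioned on prior tokens, which is the "black box that often sounds smart" the comment refers to, just at a vastly smaller scale.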

show 1 reply
didibus | 11/04/2025

I'd like to remind people not to cargo cult, and the main issue I see with any attempt at saying an LLM is thinking is that we just don't know how human thinking works.

We now understand pretty well how LLMs "think", and I don't know why we want to call it "thinking" when we mean we know how they work. But to say that their architecture and method of generating language amounts to human thinking? When we know very little of how human thinking works?

Like why are we even trying to make such claims? Is it all grift? Is it just because it helps people understand a little how they work in simplistic terms? Is it because it kind of describes the semblance of behavior you can expect from them?

LLMs do exhibit thinking-like behavior, because they were trained to do that, but I think we really need to check ourselves on claims of similarity to human thinking.

jameswhitford | 11/04/2025

This submarine isn’t swimming, it’s us that are submarining!

I think I hear my master’s voice..

Or is that just a fly trapped in a bottle?

spacecadet | 11/04/2025

Come on, people, think about what is actually happening. They are not thinking... Think about what actually goes into the activity of thinking... LLMs, at no point, actually do that. They do a bit of special padding and extra layers, but in most cases it runs every single time... not when needed, not subconsciously, but dumbly.

I'm already drifting off HN, but I swear, if this community gets all woo-ey and anthropomorphic over AI, I'm out.

nxor | 11/04/2025

Does no one care that LLMs have fewer "neurons" than, for example, a cat?

show 1 reply
educasean | 11/03/2025

The debate around whether or not transformer-architecture-based AIs can "think" or not is so exhausting and I'm over it.

What's much more interesting is the question of "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?". Otherwise we go in endless circles about language and meaning of words instead of discussing practical, demonstrable capabilities.

show 17 replies
mehdibl | 11/03/2025

We are still having to read this again in 2025? Some will never get it.

adverbly | 11/03/2025

So happy to see Hofstadter referenced!

He's the GOAT in my opinion for "thinking about thinking".

My own thinking on this is that AI actually IS thinking - but it's like the MVB of thinking (minimum viable brain).

I find thought experiments the best for this sort of thing:

- Imagine you had long-term memory loss and couldn't remember very far back.

You'd still be thinking, right?

- Next, imagine you go to sleep and lose consciousness for long periods.

You'd still be thinking, right?

- Next, imagine that when you're awake, you're in a coma and can't move, but we can still measure your brain waves.

You'd still be thinking, right?

- Next, imagine you can't hear or feel either.

You'd still be thinking, right?

- Next, imagine you were a sociopath who had no emotion.

You'd still be thinking, right?

We're just not used to consciousness without any of the other "baggage" involved.

There are many separate aspects of life and shades of grey when it comes to awareness and thinking, but when you take it down to its core, it becomes very hard to differentiate between what an LLM does and what we call "thinking". You need to do it by recognizing the depths and kinds of thoughts that occur: is the thinking "rote", or is something "special" going on? This is the stuff that Hofstadter gets into (he makes a case for recursion and capability being the "secret" piece - something that LLMs certainly have plumbing in place for!).

BTW, I recommend "Surfaces and Essences" and "I am a strange loop" also by Hofstadter. Good reads!

show 3 replies
shirro | 11/03/2025

Sounds like one of those extraordinary popular delusions to me.

Alex2037 | 11/03/2025

next up: The Case That Skyrim NPCs Are Alive.

chilipepperhott | 11/03/2025

Anyone know how to get past the paywall?

show 4 replies

diamond559 | 11/03/2025

Let's quote all the CEOs benefiting from bubble spending: is their fake "AI" LLM going to blow up the world or take all our jobs?! Find out in this week's episode!

show 1 reply
Xenoamorphous | 11/03/2025

> Meanwhile, the A.I. tools that most people currently interact with on a day-to-day basis are reminiscent of Clippy

Can’t take the article seriously after this.

show 1 reply
bgwalter | 11/03/2025

The New Yorker is owned by Advance Publications, which also owns Conde Nast. "Open" "AI" has struck a deal with Conde Nast to feed SearchGPT and ChatGPT.

This piece is cleverly written and might convince laypeople that "AI" may think in the future. I hope the author is being paid handsomely, directly or indirectly.

standardly | 11/03/2025

I don't see a good argument being made for what the headline claims. Much of the article reads like a general commentary on LLMs, not a case for AI "thinking" in the sense that we understand it.

It would take an absurdly broad definition of the word "think" to even begin to make this case. I'm surprised this is honestly up for debate.