Hacker News

CamperBob2 · last Friday at 6:24 PM · 3 replies

That's where these threads always end up. Someone asserts, almost violently, that AI does not and/or cannot "think." When asked how to falsify their assertion, perhaps by explaining what exactly is unique about the human brain that cannot and/or will not be possible to emulate, that's the last anyone ever hears from them. At least until the next "AI can't think" story gets posted.

The same arguments that appeared in 2015 inevitably get trotted out, almost verbatim, ten years later. It would be amusing on other sites, but it's just pathetic here.


Replies

pegasus · last Friday at 7:33 PM

Consider that you might have become polarized yourself. I often encounter good arguments against the claim that current AI systems emulate all essential aspects of human thinking. For example: they can't learn from a few examples; they can't perform simple mathematical operations without external help (via tool calling); and they have to expend so much more energy to do their magic (and yes, to me they are a bit magical), which makes some wonder whether what these models do is a form of refined brute-force search rather than ideation.

Personally, I'm OK with reusing the word "thinking", but there are dogmatic stances on both sides. For example, lots of people decree that biology must in the end reduce to maths, since "what else could it be". The truth is we don't actually know whether it is possible for any conceivable computational system to emulate all essential aspects of human thought. There are good arguments for this impossibility, like those presented by Roger Penrose in "The Emperor's New Mind" and "Shadows of the Mind".

ablob · last Friday at 7:22 PM

Usually the burden is on the one making a claim to prove it. So if you believe that AI does "think", you are expected to show me that it really does. Claiming "it thinks, prove otherwise" is just bad form, and it also opens the discussion up to moving the goalposts, just as you did with your brain-emulation statement. Or you could simply refuse to accept any argument made, or circumvent it by stating that the one trying to disprove your assertion got the definition wrong. There are countless ways to start a bad-faith argument with this methodology; hence: define the property -> prove the property.

Conversely, if the one asserting something doesn't want to define it, there is no useful conversation to be had (as in: "AI doesn't think, but I won't tell you what I mean by think").

PS: Asking someone to falsify their own assertion doesn't seem a good strategy here.

PPS: Even if everything about the human brain can be emulated, that does not constitute progress for your argument, since you would then have to show that AI emulates the human brain perfectly before your argument is complete. There is no direct connection between "this AI does not think" and "the human brain can be fully emulated". Also, the difference between "does not" and "cannot" is big enough here that conflating them is inappropriate.

Terr_ · last Friday at 6:28 PM

Someone asserts, almost religiously, that LLMs do and/or can "think." When asked how to falsify their assertion, perhaps by explaining what exactly is "thinking" in the human brain that can and/or will be possible to emulate...
