
torginus · last Tuesday at 2:36 PM

If the scientific consensus is that he's wrong, why is he constantly being brought up and defended - am I not right to call them out, then?

Nobody brings up that light travels through the aether, or that diseases are caused by bad humors, etc. - is it not right to call out people for stating a theory that's believed to be false?

>The randomness stuff is very straw man,

And it was a direct response to what armada651 wrote:

>I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether that be through programming language or language training data.

> He’s arguing that biology has mechanisms that binary electronic circuitry doesn’t; not human brains, simply physical chemical and biological processes.

Once again, the argument here changed from 'computers which only manipulate symbols cannot create consciousness' to 'we don't have the algorithm for consciousness yet'.

He might have successfully argued against the expert systems of his time - and true, mechanistic attempts at language translation have largely failed - but that doesn't extend to modern LLMs (and pre-LLM AI) or even statistical methods.


Replies

dahart · last Tuesday at 4:41 PM

You’re making more assumptions. There’s no “scientific consensus” that he’s wrong; there are just opinions. Unlike the straw man examples you bring up, nobody has proven the claims you’re making. If they had, the argument would have gone away like the others you mentioned.

Where did the argument change? Searle’s argument that you quoted is not arguing that we don’t have the algorithm yet. He’s arguing that the algorithm doesn’t run on electrical computers.

I’m not defending his argument, just pointing out that yours isn’t compelling because you don't seem to fully understand his; at the very least, your restatement of it isn’t a good-faith interpretation. Make his argument the strongest possible argument, and then show why it doesn’t work.

IMO modern LLMs don’t prove anything here. They don’t understand anything. LLMs aren’t evidence that computers can successfully think; they only prove that humans are prone either to anthropomorphic hyperbole or to gullibility. That doesn’t mean computers can’t think, but I don’t think we’ve seen it yet, and I’m certainly not alone there.
