Hacker News

dpark · yesterday at 8:10 PM

> I don't know enough about what makes up general intelligence to make this claim. I don't think you do either.

This is the fundamental issue. No one seems capable of defining general intelligence. Ten years ago most scientists would probably have agreed that the Turing test was sufficient, but the goalposts shifted when ChatGPT passed it.

If it’s not clear what AGI even means, it’s hard to say whether an LLM can achieve it; the argument devolves into pointing out that an LLM is not a human.


Replies

mort96 · yesterday at 8:25 PM

> Ten years ago most scientists would probably have agreed that the Turing test was sufficient, but the goalposts shifted when ChatGPT passed it.

The popularity of, and lack of consensus on, the Chinese room thought experiment kind of implies that this is wrong? I don't think many scientists (or, more relevantly, philosophers of mind) would have said, even 10 years ago, "if a computer is able to fool a human into thinking it's a human, then the computer must possess general intelligence".

Even Turing's perspective was, from what I understand, that we must avoid treating something that might be sentient as a machine. He proposed that if a computer is able to act convincingly human, we ought to treat it as if it is a human, not because it must be a conscious being but because it might be.
