Hacker News

jmward01 · today at 4:52 PM · 5 replies

I think we keep changing the goalposts on AGI. If you gave me CC in the 80's I would probably have called it 'alive', since it clearly passes the Turing test as I understood it then (I wouldn't have been able to distinguish it from a person in most conversations). Now, every time it gets better, we push that definition further, widen every crack into a chasm, and declare that it isn't close. At the same time, there are a lot of people I would suspect of being bots based on how they act and respond, and a lot of bots I know are bots mainly because they answer too well.

Maybe we need to stop trying to build tests that definitively declare an LLM to be AGI, and instead decide that AGI is here when we can no longer tell that humans aren't LLMs.


Replies

zug_zug · today at 5:20 PM

I don't think so... I think most of the sci-fi I grew up reading presented AGI that could reason better than humans could, like make a plan and carry it out.

Do people not know what the word "general" means? It means not limited to any subset of capabilities -- so it can teach itself to do anything that can be learned, like starting a business. AI today can't really learn from its experiences at all.

sho_hn · today at 5:00 PM

> I think we keep changing the goalposts on AGI

Isn't that exactly what you would expect to happen as we learn more about the nature and inner workings of intelligence and refine our expectations?

There's no reason to rest our case with the Turing test.

I hear the "shifting goalposts" riposte a lot, but it would be very unexciting to freeze our ambitions.

At least in an academic sense, what LLMs aren't is just as interesting as what they are.

sn0wr8ven · today at 5:00 PM

I don't think the goalpost has shifted for AGI, or for the definition of AGI these corporations use. It was always a model or system that surpasses human capabilities at most tasks -- one able to replace a human worker. The big companies just broke that down into AGI stage 1, stage 2, etc. so they could claim to have achieved AGI.

The Turing Test/Imitation Game is not a good benchmark for AGI. It is a linguistics test only. Many chatbots, even before LLMs, could pass the Turing Test to some degree.

Regardless, the goalpost hasn't shifted. Replacing the human workforce is the ultimate end goal. That's why there are investors: they are not pouring billions in to pass the Turing Test.

Zambyte · today at 5:02 PM

Related: https://en.wikipedia.org/wiki/AI_effect

The truth is, we have had AGI for years now. We even have artificial superintelligence -- software systems that are more intelligent than any human. Some humans might have an extremely narrow subject in which they are more intelligent than any AI system, but the number of such people is vanishingly small.

AI hasn't met sci-fi expectations, and that's a marketing opportunity. That's all it is.

andrepd · today at 5:13 PM

By that measure, Eliza might pass the Turing test too. That just shows it's far from being a thought-terminating argument by itself.