Hacker News

ethin | 11/04/2025 | 0 replies | view on HN

I always find claims that we'll have AGI impossible to believe, on the basis that nobody even knows what AGI is. The definition is so vague and hand-wavy that it might as well never have been defined in the first place. As in: I seriously can't think of a definition that would actually work. I'll explain my thought process, because I may be over-analyzing things.

If we define it as "a machine that can match humans on a wide range of cognitive tasks," that raises the questions: which humans? What range? Which cognitive tasks? I honestly think there is no answer you could give to these three alone that wouldn't cause everything to break down again:

For the first question, if you say "all humans," how do you measure that?

Do we use IQ? If so, then you have just created an AI able to match the average IQ of whatever "all" happens to be. I'm pretty sure (though I have no data to prove it) that the vast super-majority of people have never taken an IQ test, if they've even heard of one. So that limits your set to "all the IQ scores we have." But again... who is "we"? Which testing organization? There are quite a few IQ testing centers/orgs, and they all vary in their metrics, scoring, weights, etc.

If you measure it by some other thing, what's the measurement? What's the thing? And does that risk us spiraling into an infinite debate over what intelligence is? Because if so, the likelihood of us ever getting an AGI is nil. We've been trying to define intelligence for literally thousands of years, and we still can't find a definition that is even halfway universal.

If you say anything other than all, like "the smartest humans" or "the humans we tested it against," well... Do I really need to explain how that breaks?

For the second and third questions, I honestly don't even know what you'd answer. Is there even an answer? Even if we collapse the second and third questions into "what wide range of cognitive tasks?", who creates the range of tasks? Are these tasks any human from, let's say, age 5 onward would be capable of doing? (Even if you answer yes here, what about people with learning disabilities or similar who may not be able to do whatever tasks you set at that age, because it takes them longer to learn?) Or are they tasks a PhD student would be able to do? (If so, then you've just broken the definition again.)

Even if we rewrite the definition to be narrower and less hand-wavy, say, an AI which matches some set of core properties, as was suggested elsewhere in these comments: who defines the properties? How do we measure them? And how do we prove that comparing the AI against these properties doesn't just make us optimize for the lowest common denominator?