Hacker News

Measuring progress toward AGI: A cognitive framework

127 points · by surprisetalk · yesterday at 11:44 AM · 201 comments

Comments

pocketarc · yesterday at 12:41 PM

When people imagined AI/AGI, they imagined something that could reason like we can, except at the speed of a computer, which we always envisioned would lead to the singularity. In a short period of time, AI would be so far ahead of us and our existing ideas that the world would become unrecognizable.

That's not what's happening here, and it's worth remembering: a caveman from 200K years ago would have been just as intelligent as any of us here today, despite not having language, technology, or any knowledge.

In Carolyn Porco's words: "These beings, with soaring imagination, eventually flung themselves and their machines into interplanetary space."

When you think of it that way, it should be obvious that LLMs are not AGI. And that's OK! They're a remarkable piece of technology anyway! It turns out that LLMs are actually good enough for a lot of use cases that would otherwise have required human intelligence.

And I echo ArekDymalski's sentiment that it's good to have benchmarks to structure the discussions around the "intelligence level" of LLMs. That _is_ useful, and the more progress we make, the better. But we're not on the way to AGI.

tyleo · yesterday at 12:15 PM

It still seems like something is missing from all these frameworks.

I feel like an average human wouldn't pass some of these metrics, yet they are "generally intelligent". On the other hand, they also wouldn't pass a lot of the expert questions that AI is good at.

We're measuring something, and I think optimizing it is useful; I'd even say it is "intelligent" in some ways, but it doesn't seem "intelligent" in the same way that humans are.

orangebread · yesterday at 1:06 PM

As an engineer who is also spiritual at the core, I find the missing piece obvious: consciousness.

Hear me out.

I love AI and have been using it since GPT-3.5. The obvious question when I first used it was: does this qualify as sentience? The answer is less obvious. Over the next 3 years we saw EXPONENTIAL intelligence gains, to the point where intelligence has now become a commodity, yet we are still unable to determine what qualifies as "AGI".

My thoughts: As humans, we possess our own internal drive and our own perspective. Think of humans as distilled intelligence: we each have our own specialty and motivations. Einstein was a genius physicist, but you wouldn't ask him for his expertise on medicine.

What people are describing as AGI is essentially a godlike human. What would make more sense is if the AGI spawned a "distilled" version with a focused agenda/motivation to behave autonomously. But even then, there are limitations. What is the solution? A trillion tokens of system prompt to act as the "soul"/consciousness of this AI agent?

This goes back to my original statement: what is missing is a level of consciousness. Unless this AGI can power itself, and somehow the universe recognizes its complexity and existence and bestows it with consciousness, I don't think this is physically attainable.

yellow_lead · yesterday at 12:54 PM

It's kind of funny that Google's idea of evaluating AGI is outsourcing the work to a Kaggle competition.

ArekDymalski · yesterday at 12:25 PM

It's good to have some kind of benchmark at least to structure the ongoing, fruitless discussion around "are we there already?".

However, I must admit that including the last point, which partially hints at emotional or, rather, social intelligence, surprised me. It makes this list go beyond the usual understanding of AGI and moves it toward something like AGI-we-actually-want. But for that purpose this last point is too narrow, too specific. And so is the whole list.

To be actually useful, the AGI-we-actually-want benchmark should include not only positive indicators but also a list of unwanted behaviors, to ensure the thing that used to be called alignment, I guess.

lccerina · yesterday at 4:09 PM

Every week we are 50% closer to shifting the goalpost...

From the paper: "AI systems already possess some capabilities not found in humans, such as LiDAR perception and native image generation." I don't know about them, but I can natively generate images in my mind.

andsoitis · yesterday at 12:02 PM

> Perception: extracting and processing sensory information from the environment

> Generation: producing outputs such as text, speech and actions

> Attention: focusing cognitive resources on what matters

> Learning: acquiring new knowledge through experience and instruction

> Memory: storing and retrieving information over time

> Reasoning: drawing valid conclusions through logical inference

> Metacognition: knowledge and monitoring of one's own cognitive processes

> Executive functions: planning, inhibition and cognitive flexibility

> Problem solving: finding effective solutions to domain-specific problems

> Social cognition: processing and interpreting social information and responding appropriately in social situations

--------------------

I prefer:

a) working memory (hold & manipulate information in mind simultaneously)

b) processing speed (how quickly & efficiently you execute basic cognitive operations, leaving more resources for complex tasks)

c) fluid intelligence (ability to reason through novel problems without relying on prior knowledge)

d) crystallized intelligence (accumulated knowledge and ability to apply learned skills)

e) attentional control / executive function (focus, suppress irrelevant information, switch between tasks, inhibit impulsive responses)

f) long-term memory and retrieval (ability to form strong associations and retrieve them fluently)

g) spatial / visuospatial reasoning (mental rotation, visualization, navigating abstract spatial relationships)

h) pattern recognition & inductive reasoning (the most primitive and universal expression of intelligence across species: the ability to extract regularities from noisy data, to generalize from examples to rules)

mrkstu · yesterday at 2:26 PM

To me, a lot of what makes us sentient is our continuity. I even (briefly) remember my dreams when I wake up, and my dreams are influenced by my state of mind as I enter them.

LLMs 'turn on' when given a question and essentially 'die' immediately after answering a question.

What kind of work is going on with designing an LLM-type AI that is continuously 'conscious', and with giving it will? The 'claws' seem to be running all the time, but I assume they need rebooting occasionally to clear context.
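The lack of continuity described above follows from the stateless chat-completion design: no state survives inside the model between calls, and any appearance of memory comes from the client resending the whole transcript each turn. A minimal sketch of that pattern, assuming a generic chat API (`call_model` here is a hypothetical stand-in, not a real endpoint):

```python
# Sketch of why a chat LLM has no continuity between turns.
# Assumption: a generic stateless chat-completion API; `call_model`
# is hypothetical and just echoes how much context it was given.
def call_model(messages):
    # A real implementation would send `messages` to a model endpoint.
    # The key point: the model sees ONLY what is in `messages`; no
    # hidden state persists between calls.
    return f"(reply based on {len(messages)} messages)"

history = []

def chat(user_text):
    # To simulate "memory", the client resends the entire transcript
    # every turn; the model itself remains stateless.
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because `history` grows every turn while the model's context window is fixed, long-running agents eventually have to truncate or summarize it, which is presumably the "rebooting to clear context" the comment refers to.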

ianrahman · today at 3:47 AM

Altruism would make a good addition to the list. It's clearly not universal, but most humans would help a fellow human in need. Or even (and in some cases more so) an animal in need. Even if it didn't directly benefit the actor.

There are other changes and additions which could be made to this list, but altruism may be the most important.

swagv1 · today at 7:26 AM

AGI? As if all value that humans offer can be compressed and transmitted in binary form?

You'd have a more serious debate about antigravity.

wcgan7 · yesterday at 11:51 AM

Cool that we are at a stage where it is meaningful to start measuring progress toward AGI. Something I am wondering about on the philosophical side: are we ever going to be able to tell if the system really "understands" and "perceives" the world?

lvoudour · yesterday at 12:25 PM

> Social cognition: processing and interpreting social information and responding appropriately in social situations

Is social cognition really a measure of intelligence for non-social entities?

qsort · yesterday at 12:14 PM

Those are crowdsourced benchmarks. We're calling them "cognitive" and "AGI" now, though. It's similar to when they made a benchmark and called it "GDP".

To be clear, I think we've seen very fast progress, certainly faster than I would have expected, and I'm not trying to peddle some "wall" rhetoric here. But I struggle to see how this isn't just the SWE-bench du jour.

baggachipz · yesterday at 1:03 PM

This is a long way to say "let's crowdsource the shifting of our goalposts".

wewewedxfgdf · yesterday at 12:33 PM

AGI feels like a vanity project.

Who cares about AGI? Honestly, what's the gain?

Maybe Google could actually make Gemini good (instead of being about 10 miles behind Claude), rather than trying to make AGI because of, well, some reason: they want to be famous.

Havoc · yesterday at 12:58 PM

Measuring something you can't define or quantify seems somewhat dubious.

gibsonf1 · yesterday at 11:48 PM

It's interesting that they don't even mention concepts, the key to human intelligence, in this list.

1970-01-01 · yesterday at 1:29 PM

Way too much framework. The A in AGI is for artificial. Have it build its own test harness instead of outsourcing it via hackathon. If you cannot trust that output, you're nowhere near AGI.

hbarka · yesterday at 12:12 PM

The two guys from Google get to set the rules?

How will they measure wisdom or common sense (ability to make an exception)?

https://youtu.be/lA-zdh_bQBo

righthand · today at 3:12 AM

Google’s next cognitive framework will be for AGI Pro after we reach whatever productized, socially-accepted definition they cook up for AGI.

zug_zug · yesterday at 12:36 PM

I'm sorry, what even is this? Giving $10k rewards for significant advancements toward "AGI"?

What does "making a framework" even mean? It feels like a nothing post.

When I think of what real AGI would be I think:

- Passes the turing test

- Writes a New York Times Bestseller without revealing it was written by AI

- Writes journal articles that pass peer review

- Wins a Nobel Prize

- Writes a successful comedy routine

- Creates a new invention

And no, nobody is going to make an automated kaggle benchmark to verify these. Which is fine, because an LLM will never be AGI. An LLM can't even learn mid-conversation.

boca_honey · yesterday at 1:08 PM

Friendly reminder:

Scaling LLMs will not lead to AGI.

cess11 · yesterday at 1:47 PM

The belief that there is no fundamental difference between mammals navigating fractal dimensions and imprisoned electrons humming in logic gates has to be considered a religious one.

wslh · yesterday at 1:41 PM

AGI may be a prerequisite for true superintelligence, but we're already seeing superhuman performance in narrow domains. We probably need a broader evaluation framework that captures both.

ottah · yesterday at 2:34 PM

Can we just focus on real problems, like stable and safe application of existing models? I'm just exhausted with the bullshit.

fnoef · yesterday at 1:24 PM

What is it with humans that we tend to speedrun into the extinction of our own race?