Hacker News

mirekrusin · today at 8:52 AM

Again, yes and no.

Humans don't have monopoly on intelligence.

We don't need to mimic every aspect of humans to have intelligence, or intelligence surpassing human abilities.

"General general-intelligence" doesn't exist in nature; it never did.

Humans can't echolocate, can't do fast mental arithmetic reliably, can't hold more than ~7 items in working memory, systematically fail at probabilistic reasoning, and are notoriously bad at long-term planning under uncertainty.

Human intelligence is _specialized_ (for social coordination, language, and tool use in a roughly savanna-like environment).

We call it "general (enough)" because it's the only intelligence we have to compare against — it's a sample size of one, and we wrote down this definition.

The AGI goalposts keep moving, but that's an argument supporting what I'm saying, not the other way around.

When machines beat us at chess, we said "that's just search".

When AlphaFold solved protein folding, we said "that's just pattern matching".

When models write better code than most engineers, manage complex information, and orchestrate multi-step agentic workflows, we say "but can it really understand?"

The question isn't whether AI mimics human cognition or works the same way at a low level.

It's whether it can do the things that matter to us.

The programming, information synthesis, and self-directed task orchestration capabilities that have exploded in recent weeks and months aren't narrow tasks, and they compound.

Systems that can now coherently and recursively search, write, run, evaluate, and revise while holding the equivalent of 3k pages of text in memory are simply better than humans, now, today. I see it myself, and you can hear others saying the same.

The coming weeks and months will be flooded with more and more reports; it takes a bit of time to set everything up, and the tooling is still a bit rough around the edges.

But it's here and it's general enough.