Hacker News

0xbadcafebee yesterday at 11:06 PM

> you can already stuff-in more information than human can remember during whole lifetime

The human eye processes between 100GB and 800GB of data per day. We then continuously learn and adapt from this firehose of information, using short-term and long-term memory that is continuously retrained and reweighted. This isn't "book knowledge", but that same capability is needed to continuously learn and reason at a human-equivalent level. You'd need a supercomputer to attempt it for even a single human's learning and reasoning.

RL is used for SOTA models, but it's a constant game of catch-up with limited data and processing. It's like self-driving cars: how many millions of miles have they already captured? Yet they still fail at some basic driving tasks, because the cars can't learn or form long-term memories, much less process and act on the vast amount of data a human can in real time. The same goes for LLMs: training and tweaking gets you pretty far, but not to matching humans.

> With LoRA and friends it's also already possible to do continuous training that directly affects weights, it's just that economy of it is not that great

And that means we're stuck with non-AGI. Which is fine! We could've had flying cars decades ago, but that was hard, expensive, and unnecessary, so we didn't do it. There's not enough money in the global economy to "spend" our way to AGI in a short timeframe, even if we wanted to spend it all, and even if we could build all the datacenters quickly enough, which we can't (even a huge nation faces many practical limits).
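For concreteness on the LoRA point quoted above: the idea is to leave the full weight matrix W frozen and train only a low-rank update, W_eff = W + (alpha/r) * B @ A, which is why it's cheap relative to full retraining. Here's a minimal illustrative sketch in plain Python; the shapes, names, and toy numbers are my own, not from any library.

```python
# Sketch of the LoRA weight merge: instead of updating a full d_out x d_in
# matrix W, train a small A (r x d_in) and B (d_out x r), then fold in
# W_eff = W + (alpha / r) * (B @ A). Only 2*r*d parameters are trained
# instead of d*d. Pure Python, no ML framework.

def matmul(X, Y):
    # naive matrix multiply over nested lists
    return [[sum(x * Y[k][j] for k, x in enumerate(row))
             for j in range(len(Y[0]))]
            for row in X]

def lora_merge(W, A, B, alpha, r):
    # effective weights after merging the low-rank update into frozen W
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: 2x2 base weights with a rank-1 adapter.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]        # r x d_in  = 1 x 2
B = [[0.5], [0.25]]     # d_out x r = 2 x 1
W_eff = lora_merge(W, A, B, alpha=1.0, r=1)
```

The economics argument in the comment is about exactly this trade: the adapter is tiny (here 4 trained numbers instead of 4 full weights, and the gap grows quadratically with dimension), but you still pay for the forward/backward passes through the frozen base model.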

> For some definitions of AGI

Moving the goalposts is dangerous. A lot of scary real-world decisions hang on whether AGI is here or not. People will keep getting more and more freaked out, and acting out, if we're not clear on what is really happening. We don't have AGI. We have useful LLMs and VLMs.


Replies

mirekrusin today at 8:52 AM

Again, yes and no.

Humans don't have a monopoly on intelligence.

We don't need to mimic every aspect of humans to have intelligence, or intelligence surpassing human abilities.

"General general-intelligence" doesn't exist in nature, it never did.

Humans can't echolocate, can't do fast mental arithmetic reliably, can't hold more than ~7 items in working memory, systematically fail at probabilistic reasoning, and are notoriously bad at long-term planning under uncertainty.

Human intelligence is _specialized_ (for social coordination, language, and tool use in a roughly savanna-like environment).

We call it "general (enough)" because it's the only intelligence we have to compare against — it's a sample size of one, and we wrote down this definition.

The AGI goalposts keep moving, but that's an argument supporting what I'm saying, not the other way around.

When machines beat us at chess, we said "that's just search".

When AlphaFold solved protein folding, we said "that's just pattern matching".

When models write better code than most engineers, manage complex information, and orchestrate multi-step agentic workflows, we say "but can it really understand?"

The question isn't whether AI mimics human cognition or works the same way at a low level.

It's whether it can do the things that matter to us.

The programming, information-synthesis, and self-directed task-orchestration capabilities that have exploded in recent weeks and months aren't narrow tasks, and they do compound.

Systems that can now coherently and recursively search, write, run, evaluate, and revise, while keeping the equivalent of 3k pages of text in memory, are simply better than humans at this, today. I see it myself, and you can hear others saying the same.

The coming weeks and months will be flooded with more and more reports; it takes a bit of time to set everything up, and the tooling is still a bit rough around the edges.

But it's here and it's general enough.