Hacker News

Why I don't think AGI is imminent

15 points by anonymid · yesterday at 11:34 PM · 16 comments

Comments

est31 · today at 12:48 AM

Our brains evolved to hunt prey, find mates, and avoid becoming hunted ourselves. Those three tasks were the main factors for the vast majority of evolutionary history.

We didn't evolve our brains to do math, write code, write letters in the right registers to government institutions, or get an intuition on how to fold proteins. For us, these are hard tasks.

That's why you get AI competing at IMO level but unable to clean toilets or drive cars in all of the settings that humans do.

hi_hi · today at 12:57 AM

How will we know if it's AGI/not AGI? (I don't think a simple app is gonna cut it here, haha.)

What is the benchmark now that the Turing test has been blown out of the water?

parpfish · today at 12:56 AM

I’d love to see one of the AI behemoths put their money where their mouth is and replace their C-suite with their SOTA chatbot.

ed_mercer · today at 12:59 AM

As far as I'm concerned, it's already here.

ryanSrich · today at 12:50 AM

AGI is here. 90%+ of white collar work _can_ be done by an LLM; we are simply missing a tested orchestration layer. Speaking broadly about knowledge work, there is almost nothing that a human is better at than Opus 4.6. If you're a typical office worker whose job is done primarily on a computer, and if that's all AGI is, then yeah, it's here.

tananaev · today at 12:53 AM

I think it's a really poor argument that AGI won't happen because the model doesn't understand the physical world. That can be trained the same way everything else is.

I think the biggest issue we currently have is with proper memory. But even that is because it's not feasible to post-train an individual model on its experiences at scale. It's not a fundamental architectural limitation.

Legend2440 · today at 12:48 AM

I've said it before and I'll say it again, all AI discussion feels like a waste of effort.

"Yes it will," "no it won't" - nobody really knows. It's just a bunch of extremely opinionated people rehashing the same tired arguments across 800 comments per thread.

There’s no point in talking about it anymore, just wait to see how it all turns out.

mikewarot · today at 12:45 AM

I think that AGI has already happened, but it's not well understood, nor well distributed yet.

OpenClaw et al. nudged me a little, but it was Sammy Jankis[1,2] that pushed me over the edge, with force. It's janky as all get out, but it'll learn to build its own memory system on top of an LLM, which definitely forgets.

[1] https://sammyjankis.com/

[2] https://news.ycombinator.com/item?id=47018100

nickjj · today at 12:58 AM

I'm certainly not holding my breath.

In a handful of prompts I got the paid version of ChatGPT to say it's possible for dogs to lay eggs under the right circumstances.

TMWNN · today at 12:53 AM

If AGI can be defined as meeting the general intelligence of a Redditor, we hit ASI a while ago. Highly relevant comment <https://www.reddit.com/r/singularity/comments/1jh9c90/why_do...> by /u/Pyros-SD-Models:

>Imagine you had a frozen [large language] model that is a 1:1 copy of the average person, let's say, an average Redditor. Literally nobody would use that model because it can't do anything. It can't code, can't do math, isn't particularly creative at writing stories. It generalizes when it's wrong and has biases that not even fine-tuning with facts can eliminate. And it hallucinates like crazy, often stating opinions as facts, or thinking it is correct when it isn't.

>The only things it can do are basic tasks nobody needs a model for, because everyone can already do them. If you are lucky you get one that is pretty good in a singular narrow task. But that's the best it can get.

>and somehow this model won't shut up and tell everyone how smart and special it is also it claims consciousness. ridiculous.

simbleau · today at 12:53 AM

AGI is a messy term, so to be concise: we have the models that can do the work. What we lack is the orchestration, management, and workflows to use those models effectively. Give it 5 years and those will be built, and they could be built using the models we have today (Opus 4.6 at the time of this comment).