
advael — yesterday at 2:15 AM

For all the advancement in machine learning that's happened in just the decade I've been doing it, this whole AGI debate's been remarkably stagnant, with the same factions making essentially the same handwavey arguments. "Superintelligence is inherently impossible to predict and control, and might act like a corporation, and therefore will kill us all." "No, intelligence could correlate with value systems we find familiar and palatable, and therefore it'll turn out great."

Meanwhile, people keep predicting this thing they clearly haven't had a meaningfully novel thought about since the early 2000s — and that's generous, given how many of those ideas are essentially distillations of 20th-century sci-fi. What I've learned is that everyone thinking about this idea sucks at predicting the future, and that I'm bored of hearing the pseudointellectual exercise of debating sci-fi outcomes instead of doing the actual work of building useful tools or ethical policy. I'm sure many of the people involved do some of those things, but what gets aired out in public sounds like an incredibly repetitive argument about fanfiction.


Replies

tim333 — yesterday at 8:51 AM

Hinton came out with a new idea recently. He's been somewhat in the doomer camp, but is now talking about a mother-baby relationship in which a superintelligent AI wants to look after us: https://www.forbes.com/sites/ronschmelzer/2025/08/12/geoff-h...

I agree, though, that much of the debate suffers from the same problem as much of philosophy: a group of people just talking about stuff doesn't progress much.

Historically, much progress has come through experimenting with stuff. I'm encouraged that the LLMs so far seem quite easygoing and don't appear to want to kill everyone.

YZF — yesterday at 2:52 AM

It's hard for people to say "we don't know". And you don't get a lot of clicks on that either.