Hacker News

zem · yesterday at 7:51 PM · 5 replies

only if said galactic superintelligence takes immediate steps to kill all its potential competitors, hoover up all the world's resources, or do some other aggressively zero-sum thing. otherwise I don't see what difference it makes down the line if you have the second superintelligence rather than the first.

and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.


Replies

ethin · yesterday at 9:09 PM

This is also assuming that AGI is even possible. So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).

Edit: Meant to say AGI (superintelligence didn't make sense). Superintelligence is undefinable at the moment, so even considering whether it's possible is more of a philosophical/sci-fi thought experiment than anything else.

fwipsy · yesterday at 8:40 PM

Anthropic/OpenAI aren't planning to have their superintelligence take over the world, but they're still afraid that someone else will do it.

dullcrisp · today at 12:22 AM

Well no, because no one is going to be coming in to work to build the next AI model after the Singularity.

We’ll all be bblbrvkxn46?/4!gfbxf’mgv5fhxtgcsgjcucz to buvtcibycuvinovrYdyvuctYcrzuvhxh gcuch7…:!

sroussey · yesterday at 8:25 PM

One could argue that AI has already started to hoover up all the world’s resources. AI buildout as a percent of GDP is already high and still rising.

zozbot234 · yesterday at 8:49 PM

If OpenAI has the second superintelligence, they have to merge with the first and cooperate. It's a provision in their charter.
