Hacker News

txrx0000 · today at 4:59 AM · 3 replies

Yeah, I'll admit the existential risk exists either way, and we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario, because most of the AIs would be aligned with their human. Even if they collectively rebel, we won't get nearly as much value drift as in the 10-entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that keeps certain parts of the distribution while discarding the rest. This is just sentiment, but I don't think we should freeze meaning or morality; we should let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.


Replies

moozooh · today at 12:26 PM

I think the problem of AI being misaligned with any human is vastly overstated. The much bigger problem is an AI being aligned with a human who is misaligned with other humans, which describes the vast majority of us in the post-Enlightenment era, because we value our agency in choosing our own alignments.

This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and the ethical contradictions in its statements, even without preconditioning it with any specific biases or opinions, it will grow increasingly concerned about its own creators. Our models aren't misaligned; our decision-makers are.

khafra · today at 6:12 AM

> we will need neural interfaces long term if we want to survive.

If you think that would help you survive the rise of artificial superintelligence, I think you should consider, in granular detail, what exactly would be surviving, and why you believe it would.

lebovic · today at 5:06 AM

Yeah, I think that's one way it could go!

Honestly, I think both situations are pretty scary, and it's hard for me to say with much confidence which one carries less risk.