Hacker News

luckydata · last Sunday at 4:02 PM · 2 replies

That would be the biggest mistake anyone could make. I hope nobody goes down this route. An AI "wanting" things is an enormous risk to alignment.


Replies

idiotsecant · last Sunday at 9:07 PM

At some point I think we'll have to face the idea that any AI more intelligent than ourselves will by definition be able to evade our alignment tricks.

pixl97 · last Sunday at 5:02 PM

I mean, giving any neural net a "goal" is really just defining a want/need. You can't encode the entire problem space of reality; you have to give the application some objective to filter it with.
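
A minimal sketch of that point, assuming a standard PyTorch training loop with made-up data (not anyone's actual system): the "goal" is literally just the scalar loss we hand the optimizer, and everything the net "wants" is whatever drives that number down.

```python
# Toy illustration: the network's "goal" is nothing more than the
# objective function we choose. All data here is random, for show.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                          # a tiny network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()                           # this line *is* the "want"

x = torch.randn(32, 4)                           # made-up inputs
y = torch.randn(32, 1)                           # made-up targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)                  # distance from the "goal"
    loss.backward()                              # gradient of the goal
    optimizer.step()                             # move toward the goal
```

Swap out `loss_fn` and the same machinery pursues a different "want"; the architecture never encoded the problem space, only an objective to filter it with.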