That would be the biggest mistake anyone could make. I hope nobody goes down this route. An AI "wanting" things is an enormous risk to alignment.
I mean, giving any neural net a 'goal' is really just defining a want/need. You can't encode the entire problem space of reality; you have to give the application something to filter it by.
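To make that concrete, here's a minimal sketch (a toy NumPy example, not any particular system) of what "setting a goal" amounts to mechanically: the objective is a single scalar the optimizer drives down, and anything in the data that the objective doesn't score is simply invisible to it.

```python
# Toy sketch: training just means pushing parameters toward whatever scalar
# "goal" we wrote down. Whatever the goal doesn't measure gets filtered out.
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": inputs with two features, but the goal only rewards feature 0.
X = rng.normal(size=(100, 2))
y = X[:, 0] * 3.0            # target depends only on feature 0
w = np.zeros(2)              # model parameters

def goal(w):
    """The entire 'want': one number the optimizer drives down."""
    pred = X @ w
    return np.mean((pred - y) ** 2)

lr = 0.05
for _ in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the MSE goal
    w -= lr * grad                        # follow the want, nothing else

print("learned weights:", w)        # ~[3, 0]: feature 1 is ignored entirely
print("final goal value:", goal(w))
```

The point of the sketch is just that the "goal" is the only lens the system gets: feature 1 exists in the data, but because the objective never scores it, the optimizer treats it as noise to filter out.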
At some point I think we'll have to face the idea that any AI more intelligent than ourselves will by definition be able to evade our alignment tricks.