Hacker News

nospice, last Monday at 1:53 AM

Even if you assume a sci-fi scenario of an omniscient, infallible AI, there's probably no single utility function that would allow it to decide optimal resource allocation.

In fact, we avoid a lot of difficult moral dilemmas because we accept that our systems are crappy and just a necessary evil. The closer you claim to be to perfection, the more you have to acknowledge that some moral questions are simply impossible to settle to everyone's satisfaction.

Is the life of child X more important than the life of child Y because of a score calculated from their grades, parents' income, etc.? The system we have today may implicitly produce such outcomes, but at least it's not intentional.


Replies

smitty1e, last Monday at 9:51 PM

I cannot refute your response, sir.

OTOH, I only partially agree.

We can stipulate that some kind of mathematical perfection is unattainable, sure. The discussion might then move to the feedback loops that detect the state of the State and offer stabilizing input.