Even if you assume a sci-fi scenario of an omniscient, infallible AI, there's probably no single utility function that would allow it to decide optimal resource allocation.
In fact, we sidestep a lot of difficult moral dilemmas precisely because we accept that our systems are crappy and a necessary evil. The closer you claim to be to perfection, the more you have to acknowledge that some moral questions are simply impossible to settle to everyone's satisfaction.
Is the life of child X more important than the life of child Y because of a score calculated from their grades, their parents' income, and so on? The system we have today may implicitly produce such outcomes, but at least it isn't intentional.