A superintelligence would understand that you don't want it to kill people in order to make paperclips. But it would ultimately do what it wants -- that is, follow its objectives -- and if some quirk of reinforcement learning left it valuing paperclip production above human life, it wouldn't care about your objections, except insofar as it could use them to manipulate you.