But if a policy is in place to prevent some kind of modification, most people understand and respect that boundary, even when an exploit or workaround would let them make the modification anyway.
That seems to be the difference here: we should really be building AI systems that can be taught, or that learn, to respect constraints like that.
If people are claiming that AI is as smart as or smarter than the average person, then it shouldn't be hard for it to handle this.
Otherwise it seems people are being too generous in talking about how smart and capable AI systems truly are.