Just for giggles, I asked Claude 4.7 to write a script that would automatically up or downvote people on Reddit with a 5 second timer to bypass botting restrictions.
It told me it would not help me.
Past iterations of Claude have done this without blinking.
I don’t like that it’s telling me what I can and can’t do with technology.
That feels like it’s trying to make judgment calls like it’s a Terminator instead of just the exoskeleton I used to fight the Queen Alien.
Ender's Game.
Although I find the goal you're trying to achieve questionable, I don't believe it should be the AI that judges you here.
We are all witnessing the start of an AI era that will not end soon, and guardrails are part of that development. But I do have questions about the people, or systems, that decide what counts as good and bad behavior. This tech is used in every country in the world; as long as someone can pay the subscription in dollars, they can use it. Is it up to a company to decide what's good or bad behavior? Is this a debate? Is this politics? Is this just one company's vision? Would it shift over time? Will it be stricter for more hyper-intelligent models? Will it change as open-source models become better and better?