Interesting, I'd have assumed the guardrails would disallow them from doing anything like that, regardless of legality. Do you need to "convince" it to do it or no questions asked?
I use AWS Kiro with the Claude models, and it's only too happy to help. I give it the headless Ghidra output, decompilers, etc., and away it goes.
It's no questions asked, even when you're reversing things like anticheats (I wanted to know the privacy implications of running the anticheat modules).
Claude doesn't care as long as you aren't straight up asking it to write exploits. It's my go-to for reverse engineering tasks.
ChatGPT is full of refusals and has to be jailbroken out of it.