I agree that's why they do it.
I happen to think that some states will want to prosecute people who publish realistic-looking AI-generated images without making it explicit that they're generated. I'm wondering whether watermarking could be an effective tool for that.
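For what it's worth, the detection side is already cheap to prototype. Here's a minimal sketch using the open-source invisible-watermark library (the same one Stable Diffusion uses to tag its outputs); the payload string and file names are just illustrative assumptions, not any standard:

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

PAYLOAD = "AI-GENERATED"  # hypothetical disclosure tag (12 bytes = 96 bits)

# Generator side: embed the tag invisibly in the image's frequency domain.
bgr = cv2.imread("generated.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", PAYLOAD.encode("utf-8"))
cv2.imwrite("generated_wm.png", encoder.encode(bgr, "dwtDct"))

# Verifier side: try to recover the tag from a published image.
decoder = WatermarkDecoder("bytes", len(PAYLOAD) * 8)
recovered = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
print(recovered == PAYLOAD.encode("utf-8"))  # True if the watermark survived
```

Of course, a scheme like this only helps if the watermark survives recompression and cropping, and if publishing a stripped image is itself what gets prosecuted.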
(If I were in a bad mood, I would say that we should also make it explicit when images are too heavily photoshopped; but that's another debate, because tools like Sora make manufacturing lies several orders of magnitude cheaper.)