Actually, no, it just makes me use a different model. My uses are not nefarious at all, although it's fine for you to assume so. There are real, legitimate reasons why SynthID is actively harmful that do not involve deceiving or manipulating people at all. SynthID is just a stain on legitimate AI users. People who want to deceive or manipulate are not using Google models anyway; they are going to use a model without safety rails (which is not what I am advocating for per se, just that SynthID is an awful solution).
It actually reeks of Google, since it's a technical solution to a people problem. Google doesn't seem to understand people.
> There are real, legitimate reasons why SynthID is actively harmful that do not involve deceiving or manipulating people at all
I am legitimately curious: can you name some?
> Actually no it just makes me use a different model
Yes, this is a very good thing when "a different model" means "a worse model."
> People who want to deceive or manipulate are not using Google models anyway. They are going to use a model without safety rails
That's totally invalid logic. There are plenty of deception and manipulation use cases that don't run afoul of model safety rails at all. Trivially: fake dating profiles to scam people; fake product images; fake insurance claims; fabricated blackmail material (e.g., images placing a person with another man or woman at a bar).
Can you explain your use case? I’d be interested to understand.