Is there watermarking, or some other way for normal people to tell if it's fake?
There are ways to tell that an image is real, for example if it's been signed cryptographically by the camera, but increasingly it won't be possible to tell that something is fake. Even if there's some kind of hidden watermark embedded in the pixels, you can run it through img2img in another tool and get rid of the watermark. EXIF data etc. is irrelevant; you can strip it easily or fake it.
Not if you strip the EXIF data. Also, it will strip Gemini's star watermark and SynthID if you paste a Nano Banana pic in and tell it to mirror it.
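For what it's worth, stripping the metadata is only a few lines. A rough sketch with Pillow (it re-saves just the pixel data into a fresh image, so EXIF/XMP/C2PA metadata gets dropped; the filenames are placeholders):

    from PIL import Image

    src = Image.open("chatgpt_image.png")     # placeholder filename
    clean = Image.new(src.mode, src.size)     # fresh image, no metadata attached
    clean.putdata(list(src.getdata()))        # copy pixel values only
    clean.save("stripped.png")                # written without the original tags

Note this only removes metadata; anything embedded in the pixels themselves (SynthID-style) survives a plain re-save, which is why the img2img trick above matters.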
https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...
It doesn't mention the new model, but it's likely the same or similar.
I think society is going to need the opposite: cameras that can embed cryptographic information in the pixels of a video, indicating that the image is real.
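In software terms that's roughly this (a minimal sketch using Ed25519 from the Python cryptography package; it's a detached signature over a hash of the raw capture rather than anything literally embedded in the pixels, and the filename and key handling are made up for illustration):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In a real camera this key would live in a secure element and never leave it.
    camera_key = Ed25519PrivateKey.generate()

    raw = open("capture.raw", "rb").read()      # placeholder: raw sensor dump
    digest = hashlib.sha256(raw).digest()
    signature = camera_key.sign(digest)         # shipped alongside the image

    # Anyone with the vendor's public key can check it; raises InvalidSignature if tampered.
    camera_key.public_key().verify(signature, digest)

The crypto itself is cheap; the hard parts are distributing the public keys and keeping the private key inside the sensor package. It's essentially what C2PA-style signing does, except that signs a whole manifest of provenance claims, not just the pixels.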
I ran exiftool on an image I just generated:
$ exiftool chatgpt_image.png
...
Actions Software Agent Name : GPT-4o
Actions Digital Source Type : http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgori...
Name : jumbf manifest
Alg : sha256
Hash : (Binary data 32 bytes, use -b option to extract)
Pad : (Binary data 8 bytes, use -b option to extract)
Claim Generator Info Name : ChatGPT
...
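If you want to check for that manifest programmatically instead of eyeballing exiftool output, here's a quick sketch (it just shells out to exiftool -json and looks for C2PA/JUMBF-ish tag names, which may vary by exiftool version):

    import json
    import subprocess

    def has_c2pa_manifest(path: str) -> bool:
        # Dump every tag as JSON; exiftool surfaces the C2PA data via JUMBF tags.
        out = subprocess.run(
            ["exiftool", "-json", path],
            capture_output=True, text=True, check=True,
        ).stdout
        tags = json.loads(out)[0]
        # Any tag name hinting at a JUMBF/C2PA claim counts as "manifest present".
        return any("jumbf" in k.lower() or "claim" in k.lower() for k in tags)

    print(has_c2pa_manifest("chatgpt_image.png"))

Of course, presence of a manifest only tells you the file says it's AI-generated; absence tells you nothing, which is the point made below.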
I know OpenAI watermarks their stuff, but I wish they wouldn't. It creates a false sense of trust.
Now whoever has access to uncensored, non-watermarking models can pass their faked images off as real: "Look, there's no watermark, so of course it's not fake!"
Whereas if none of the image models watermarked, people would (or should) understand that nothing can be trusted by default.