Why is Google indexing these harmful images in the first place?
Microsoft, Google, Facebook, and other large tech companies have had image recognition models capable of detecting this kind of content at scale for years, long before large language models became popular. There’s really no excuse for hosting or indexing these images as publicly accessible assets when they clearly have the technical ability to identify and exclude explicit content automatically.
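For concreteness, here's a minimal sketch of what pre-index screening could look like, assuming Google Cloud Vision's SafeSearch feature and the google-cloud-vision Python client; the maybe_index hook and the index object are hypothetical stand-ins for whatever ingestion pipeline a search engine actually runs:

    # Minimal sketch: gate indexing on an off-the-shelf SafeSearch check.
    # Assumes google-cloud-vision (v2+) is installed and credentials are configured.
    from google.cloud import vision

    def is_explicit(image_bytes: bytes) -> bool:
        """Return True if SafeSearch rates the image as likely adult/racy."""
        client = vision.ImageAnnotatorClient()
        image = vision.Image(content=image_bytes)
        response = client.safe_search_detection(image=image)
        annotation = response.safe_search_annotation

        flagged = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}
        return annotation.adult in flagged or annotation.racy in flagged

    def maybe_index(image_bytes: bytes, url: str, index) -> None:
        """Only add the image URL to the search index if it passes the screen."""
        if is_explicit(image_bytes):
            # Hold back for human review instead of publishing it in results.
            return
        index.add(url)  # `index` is a placeholder for the real indexing pipeline

The point isn't this particular API; it's that likelihood scores like these have been available for years and could gate indexing by default rather than after a victim files a report.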
Instead of putting the burden on victims to report these images one by one, companies should be proactively preventing this material from appearing in search results at all. If the technology exists, and it clearly does, then the default approach should be prevention, not reactive cleanup.
How is an image model supposed to detect if there was consent to share the picture?
If you're saying they shouldn't index any explicit images at all, you're talking about something very different from what the article describes.